[jira] [Commented] (FLINK-28693) Codegen failed if the watermark is defined on a columnByExpression

2024-01-07 Thread JJJJude (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17804161#comment-17804161
 ] 

JJJJude commented on FLINK-28693:
---------------------------------

Same problem as [FLINK-34016|https://issues.apache.org/jira/browse/FLINK-34016].
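The failing pattern in the quoted report below boils down to a watermark defined on a column computed by a UDF. The same shape can be written as plain SQL DDL; a hypothetical sketch (function, table, and topic names are assumptions, not the reporter's actual DDL):

{code:sql}
-- Hypothetical repro sketch; all names are assumptions.
CREATE TEMPORARY FUNCTION TEST_UDF AS 'com.example.TestUdf';

CREATE TABLE test (
  time_stamp STRING,
  udf_ts AS TEST_UDF(time_stamp),                       -- derived column via UDF
  WATERMARK FOR udf_ts AS udf_ts - INTERVAL '1' SECOND  -- watermark on the derived column
) WITH (
  'connector' = 'kafka',
  'topic' = 'test_topic',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json',
  'scan.startup.mode' = 'latest-offset'
);
{code}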

> Codegen failed if the watermark is defined on a columnByExpression
> --
>
> Key: FLINK-28693
> URL: https://issues.apache.org/jira/browse/FLINK-28693
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.15.1
>Reporter: Hongbo
>Priority: Major
>
> The following code will throw an exception:
>  
> {code:java}
> Table program cannot be compiled. This is a bug. Please file an issue.
>  ...
> Caused by: org.codehaus.commons.compiler.CompileException: Line 29, Column 54: Cannot determine simple type name "org" {code}
> Code:
> {code:java}
> public class TestUdf extends ScalarFunction {
>     @DataTypeHint("TIMESTAMP(3)")
>     public LocalDateTime eval(String strDate) {
>         return LocalDateTime.now();
>     }
> }
>
> public class FlinkTest {
>     @Test
>     void testUdf() throws Exception {
>         // var env = StreamExecutionEnvironment.createLocalEnvironment();
>         // Run `gradlew shadowJar` first to generate the uber jar.
>         // It contains the Kafka connector and a dummy UDF function.
>         var env = StreamExecutionEnvironment.createRemoteEnvironment("localhost", 8081,
>                 "build/libs/flink-test-all.jar");
>         env.setParallelism(1);
>         var tableEnv = StreamTableEnvironment.create(env);
>         tableEnv.createTemporarySystemFunction("TEST_UDF", TestUdf.class);
>         var testTable = tableEnv.from(TableDescriptor.forConnector("kafka")
>                 .schema(Schema.newBuilder()
>                         .column("time_stamp", DataTypes.STRING())
>                         .columnByExpression("udf_ts", "TEST_UDF(time_stamp)")
>                         .watermark("udf_ts", "udf_ts - INTERVAL '1' second")
>                         .build())
>                 // The Kafka server doesn't need to exist: the job fails in the
>                 // compile stage before fetching data.
>                 .option("properties.bootstrap.servers", "localhost:9092")
>                 .option("topic", "test_topic")
>                 .option("format", "json")
>                 .option("scan.startup.mode", "latest-offset")
>                 .build());
>         testTable.printSchema();
>         tableEnv.createTemporaryView("test", testTable);
>         var query = tableEnv.sqlQuery("select * from test");
>         var tableResult = query.executeInsert(TableDescriptor.forConnector("print").build());
>         tableResult.await();
>     }
> }{code}
> What does the code do?
>  # Read a stream from Kafka.
>  # Create a derived column using a UDF expression.
>  # Define the watermark based on the derived column.
> The full callstack:
>  
> {code:java}
> org.apache.flink.util.FlinkRuntimeException: org.apache.flink.api.common.InvalidProgramException: Table program cannot be compiled. This is a bug. Please file an issue.
>     at org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:94) ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:97) ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:68) ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at org.apache.flink.table.runtime.generated.GeneratedWatermarkGeneratorSupplier.createWatermarkGenerator(GeneratedWatermarkGeneratorSupplier.java:62) ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at org.apache.flink.streaming.api.operators.source.ProgressiveTimestampsAndWatermarks.createMainOutput(ProgressiveTimestampsAndWatermarks.java:104) ~[flink-dist-1.15.1.jar:1.15.1]
>     at org.apache.flink.streaming.api.operators.SourceOperator.initializeMainOutput(SourceOperator.java:426) ~[flink-dist-1.15.1.jar:1.15.1]
>     at org.apache.flink.streaming.api.operators.SourceOperator.emitNextNotReading(SourceOperator.java:402) ~[flink-dist-1.15.1.jar:1.15.1]
>     at org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:387) ~[flink-dist-1.15.1.jar:1.15.1]
>     at org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68) ~[flink-dist-1.15.1.jar:1.15.1]
>     at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65) ~[flink-dist-1.15.1.jar:1.15.1]
>     at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:519) 

[jira] [Updated] (FLINK-34016) Janino compile failed when watermark with column by udf

2024-01-07 Thread JJJJude (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JJJJude updated FLINK-34016:

Description: 
Submitting the following Flink SQL via sql-client.sh throws an exception:
{code:java}
Caused by: java.lang.RuntimeException: Could not instantiate generated class 'WatermarkGenerator$0'
    at org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:74)
    at org.apache.flink.table.runtime.generated.GeneratedWatermarkGeneratorSupplier.createWatermarkGenerator(GeneratedWatermarkGeneratorSupplier.java:69)
    at org.apache.flink.streaming.api.operators.source.ProgressiveTimestampsAndWatermarks.createMainOutput(ProgressiveTimestampsAndWatermarks.java:109)
    at org.apache.flink.streaming.api.operators.SourceOperator.initializeMainOutput(SourceOperator.java:462)
    at org.apache.flink.streaming.api.operators.SourceOperator.emitNextNotReading(SourceOperator.java:438)
    at org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:414)
    at org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68)
    at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:562)
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:858)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:807)
    at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:953)
    at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:932)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:746)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.util.FlinkRuntimeException: org.apache.flink.api.common.InvalidProgramException: Table program cannot be compiled. This is a bug. Please file an issue.
    at org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:94)
    at org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:101)
    at org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:68)
    ... 16 more
Caused by: org.apache.flink.shaded.guava31.com.google.common.util.concurrent.UncheckedExecutionException: org.apache.flink.api.common.InvalidProgramException: Table program cannot be compiled. This is a bug. Please file an issue.
    at org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2055)
    at org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache.get(LocalCache.java:3966)
    at org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4863)
    at org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:92)
    ... 18 more
Caused by: org.apache.flink.api.common.InvalidProgramException: Table program cannot be compiled. This is a bug. Please file an issue.
    at org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:107)
    at org.apache.flink.table.runtime.generated.CompileUtils.lambda$compile$0(CompileUtils.java:92)
    at org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4868)
    at org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3533)
    at org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2282)
    at org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2159)
    at org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2049)
    ... 21 more
Caused by: org.codehaus.commons.compiler.CompileException: Line 29, Column 123: Line 29, Column 123: Cannot determine simple type name "org"
    at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:7007)
    at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6886)
    at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6899)
    at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6899)
    at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6899)
    at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6899)
    at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6899)
    at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6899)
    at 


[jira] [Created] (FLINK-34016) Janino compile failed when watermark with column by udf

2024-01-07 Thread Jude Zhu (Jira)
Jude Zhu created FLINK-34016:


 Summary: Janino compile failed when watermark with column by udf
 Key: FLINK-34016
 URL: https://issues.apache.org/jira/browse/FLINK-34016
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.18.0, 1.15.0
Reporter: Jude Zhu


Submitting the following Flink SQL via sql-client.sh throws an exception:
{code:java}
Caused by: java.lang.RuntimeException: Could not instantiate generated class 'WatermarkGenerator$0'
    at org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:74)
    at org.apache.flink.table.runtime.generated.GeneratedWatermarkGeneratorSupplier.createWatermarkGenerator(GeneratedWatermarkGeneratorSupplier.java:69)
    at org.apache.flink.streaming.api.operators.source.ProgressiveTimestampsAndWatermarks.createMainOutput(ProgressiveTimestampsAndWatermarks.java:109)
    at org.apache.flink.streaming.api.operators.SourceOperator.initializeMainOutput(SourceOperator.java:462)
    at org.apache.flink.streaming.api.operators.SourceOperator.emitNextNotReading(SourceOperator.java:438)
    at org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:414)
    at org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68)
    at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:562)
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:858)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:807)
    at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:953)
    at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:932)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:746)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.util.FlinkRuntimeException: org.apache.flink.api.common.InvalidProgramException: Table program cannot be compiled. This is a bug. Please file an issue.
    at org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:94)
    at org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:101)
    at org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:68)
    ... 16 more
Caused by: org.apache.flink.shaded.guava31.com.google.common.util.concurrent.UncheckedExecutionException: org.apache.flink.api.common.InvalidProgramException: Table program cannot be compiled. This is a bug. Please file an issue.
    at org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2055)
    at org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache.get(LocalCache.java:3966)
    at org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4863)
    at org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:92)
    ... 18 more
Caused by: org.apache.flink.api.common.InvalidProgramException: Table program cannot be compiled. This is a bug. Please file an issue.
    at org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:107)
    at org.apache.flink.table.runtime.generated.CompileUtils.lambda$compile$0(CompileUtils.java:92)
    at org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4868)
    at org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3533)
    at org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2282)
    at org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2159)
    at org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2049)
    ... 21 more
Caused by: org.codehaus.commons.compiler.CompileException: Line 29, Column 123: Line 29, Column 123: Cannot determine simple type name "org"
    at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:7007)
    at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6886)
    at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6899)
    at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6899)
    at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6899)
    at 

[jira] [Closed] (FLINK-34012) Flink python fails with can't read file '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google

2024-01-07 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang closed FLINK-34012.

Resolution: Fixed

Merged into master via 639deeca33757c7380d474d43b8a70bacb84dd20

> Flink python fails with  can't read file 
> '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google
> ---
>
> Key: FLINK-34012
> URL: https://issues.apache.org/jira/browse/FLINK-34012
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: Sergey Nuyanzin
>Assignee: Xingbo Huang
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56073=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901=20755
> {noformat}
> Jan 06 03:02:43 Installing collected packages: types-pytz, 
> types-python-dateutil, types-protobuf
> Jan 06 03:02:43 Successfully installed types-protobuf-4.24.0.20240106 
> types-python-dateutil-2.8.19.20240106 types-pytz-2023.3.1.1
> Jan 06 03:02:44 mypy: can't read file 
> '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google': No 
> such file or directory
> Jan 06 03:02:44 Installing missing stub packages:
> Jan 06 03:02:44 /__w/2/s/flink-python/dev/.conda/bin/python -m pip install 
> types-protobuf types-python-dateutil types-pytz
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34012) Flink python fails with can't read file '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google

2024-01-07 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang updated FLINK-34012:
-
Issue Type: Technical Debt  (was: Bug)

> Flink python fails with  can't read file 
> '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google
> ---
>
> Key: FLINK-34012
> URL: https://issues.apache.org/jira/browse/FLINK-34012
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / Python
>Reporter: Sergey Nuyanzin
>Assignee: Xingbo Huang
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56073=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901=20755
> {noformat}
> Jan 06 03:02:43 Installing collected packages: types-pytz, 
> types-python-dateutil, types-protobuf
> Jan 06 03:02:43 Successfully installed types-protobuf-4.24.0.20240106 
> types-python-dateutil-2.8.19.20240106 types-pytz-2023.3.1.1
> Jan 06 03:02:44 mypy: can't read file 
> '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google': No 
> such file or directory
> Jan 06 03:02:44 Installing missing stub packages:
> Jan 06 03:02:44 /__w/2/s/flink-python/dev/.conda/bin/python -m pip install 
> types-protobuf types-python-dateutil types-pytz
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34012][python] Fix mypy error [flink]

2024-01-07 Thread via GitHub


HuangXingBo merged PR #24043:
URL: https://github.com/apache/flink/pull/24043


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34015) Setting `execution.savepoint.ignore-unclaimed-state` does not take effect when passing this parameter by dynamic properties

2024-01-07 Thread Renxiang Zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17804147#comment-17804147
 ] 

Renxiang Zhou commented on FLINK-34015:
---

[~JunRuiLi] Many thanks for your reply. I have also tested this command without 
the space, and it still does not take effect. After reading the code, I think the 
reason is that the setting gets overwritten by mistake on the client side, since 
we do not use the `-s` option to start the job from a checkpoint/savepoint but 
use `-Dexecution.savepoint.path` instead.

> Setting `execution.savepoint.ignore-unclaimed-state` does not take effect 
> when passing this parameter by dynamic properties
> ---
>
> Key: FLINK-34015
> URL: https://issues.apache.org/jira/browse/FLINK-34015
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.17.0
>Reporter: Renxiang Zhou
>Priority: Critical
>  Labels: ignore-unclaimed-state-invalid
> Attachments: image-2024-01-08-14-22-09-758.png, 
> image-2024-01-08-14-24-30-665.png
>
>
> We set `execution.savepoint.ignore-unclaimed-state` to true and use the -D 
> option to submit the job, but unfortunately the value is still false in the 
> jobmanager log.
> Pic 1: we set `execution.savepoint.ignore-unclaimed-state` to true when 
> submitting the job.
> !image-2024-01-08-14-22-09-758.png|width=1012,height=222!
> Pic 2: The value is still false in the JM log.
> !image-2024-01-08-14-24-30-665.png|width=651,height=51!
>  
> Besides, the parameter `execution.savepoint-restore-mode` has the same 
> problem when we pass it by the -D option.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33738][Scheduler] Make exponential-delay restart-strategy the default restart strategy [flink]

2024-01-07 Thread via GitHub


1996fanrui commented on code in PR #24040:
URL: https://github.com/apache/flink/pull/24040#discussion_r1444232288


##
flink-core/src/main/java/org/apache/flink/configuration/RestartStrategyOptions.java:
##
@@ -89,11 +89,10 @@ public class RestartStrategyOptions {
                                 "here")))
                         .text(
                                 "If checkpointing is disabled, the default value is %s. "
-                                        + "If checkpointing is enabled, the default value is %s with %s restart attempts and '%s' delay.",
+                                        + "If checkpointing is enabled, the default value is %s, and the values of all %s related options have not changed.",

Review Comment:
   Hi @zhuzhurk, do you have any good suggestions about the doc of the default 
restart strategy?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34001][docs][table] Fix ambiguous document description towards the default value for configuring operator-level state TTL [flink]

2024-01-07 Thread via GitHub


LadyForest commented on PR #24039:
URL: https://github.com/apache/flink/pull/24039#issuecomment-1880476143

   The CI failure is caused by FLINK-34012


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33970] Add necessary checks for connector document (#78) [flink-connector-pulsar]

2024-01-07 Thread via GitHub


tisonkun closed pull request #80: [FLINK-33970] Add necessary checks for 
connector document (#78)
URL: https://github.com/apache/flink-connector-pulsar/pull/80


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34012) Flink python fails with can't read file '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google

2024-01-07 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17804141#comment-17804141
 ] 

Jane Chan commented on FLINK-34012:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56091=logs=dd7e7115-b4b1-5414-20ec-97b9411e0cfc=c759a57f-2774-59e9-f882-8e4d5d3fbb9f

> Flink python fails with  can't read file 
> '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google
> ---
>
> Key: FLINK-34012
> URL: https://issues.apache.org/jira/browse/FLINK-34012
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: Sergey Nuyanzin
>Assignee: Xingbo Huang
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56073=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901=20755
> {noformat}
> Jan 06 03:02:43 Installing collected packages: types-pytz, 
> types-python-dateutil, types-protobuf
> Jan 06 03:02:43 Successfully installed types-protobuf-4.24.0.20240106 
> types-python-dateutil-2.8.19.20240106 types-pytz-2023.3.1.1
> Jan 06 03:02:44 mypy: can't read file 
> '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google': No 
> such file or directory
> Jan 06 03:02:44 Installing missing stub packages:
> Jan 06 03:02:44 /__w/2/s/flink-python/dev/.conda/bin/python -m pip install 
> types-protobuf types-python-dateutil types-pytz
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34015) Setting `execution.savepoint.ignore-unclaimed-state` does not take effect when passing this parameter by dynamic properties

2024-01-07 Thread Junrui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17804140#comment-17804140
 ] 

Junrui Li commented on FLINK-34015:
---

If the jobmanager log reflects your actual input and it shows a space after 
{{-D}}, then the usage is incorrect. The correct format should be:

 
{code:java}
-Dexecution.savepoint.ignore-unclaimed-state=true{code}
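The reason the space matters: Java-style {{-D}} dynamic properties are parsed by prefix, so the {{key=value}} pair must be glued to {{-D}}. A minimal Python sketch of that parsing style (a hypothetical illustration of the mechanism, not Flink's actual CLI code):

```python
def parse_dynamic_properties(argv):
    """Collect -Dkey=value tokens the way Java-style launchers do:
    the key=value pair must be glued directly to the -D prefix."""
    props = {}
    leftover = []
    for token in argv:
        if token.startswith("-D") and "=" in token:
            key, _, value = token[2:].partition("=")
            props[key] = value
        else:
            # A detached "key=value" (after a lone "-D ") is not recognized
            # as a dynamic property and ends up here instead.
            leftover.append(token)
    return props, leftover

# Glued form: the property is picked up.
props, rest = parse_dynamic_properties(
    ["-Dexecution.savepoint.ignore-unclaimed-state=true", "job.jar"])
assert props == {"execution.savepoint.ignore-unclaimed-state": "true"}

# Detached form: "-D" alone matches nothing, so the setting is silently lost.
props, rest = parse_dynamic_properties(
    ["-D", "execution.savepoint.ignore-unclaimed-state=true", "job.jar"])
assert props == {}
```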
 

> Setting `execution.savepoint.ignore-unclaimed-state` does not take effect 
> when passing this parameter by dynamic properties
> ---
>
> Key: FLINK-34015
> URL: https://issues.apache.org/jira/browse/FLINK-34015
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.17.0
>Reporter: Renxiang Zhou
>Priority: Critical
>  Labels: ignore-unclaimed-state-invalid
> Attachments: image-2024-01-08-14-22-09-758.png, 
> image-2024-01-08-14-24-30-665.png
>
>
> We set `execution.savepoint.ignore-unclaimed-state` to true and use the -D 
> option to submit the job, but unfortunately the value is still false in the 
> jobmanager log.
> Pic 1: we set `execution.savepoint.ignore-unclaimed-state` to true when 
> submitting the job.
> !image-2024-01-08-14-22-09-758.png|width=1012,height=222!
> Pic 2: The value is still false in the JM log.
> !image-2024-01-08-14-24-30-665.png|width=651,height=51!
>  
> Besides, the parameter `execution.savepoint-restore-mode` has the same 
> problem when we pass it by the -D option.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34015) Setting `execution.savepoint.ignore-unclaimed-state` does not take effect when passing this parameter by dynamic properties

2024-01-07 Thread Renxiang Zhou (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Renxiang Zhou updated FLINK-34015:
--
Summary: Setting `execution.savepoint.ignore-unclaimed-state` does not take 
effect when passing this parameter by dynamic properties  (was: 
execution.savepoint.ignore-unclaimed-state is invalid when passing this 
parameter by dynamic properties)

> Setting `execution.savepoint.ignore-unclaimed-state` does not take effect 
> when passing this parameter by dynamic properties
> ---
>
> Key: FLINK-34015
> URL: https://issues.apache.org/jira/browse/FLINK-34015
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.17.0
>Reporter: Renxiang Zhou
>Priority: Critical
>  Labels: ignore-unclaimed-state-invalid
> Attachments: image-2024-01-08-14-22-09-758.png, 
> image-2024-01-08-14-24-30-665.png
>
>
> We set `execution.savepoint.ignore-unclaimed-state` to true and use the -D 
> option to submit the job, but unfortunately the value is still false in the 
> jobmanager log.
> Pic 1: we set `execution.savepoint.ignore-unclaimed-state` to true when 
> submitting the job.
> !image-2024-01-08-14-22-09-758.png|width=1012,height=222!
> Pic 2: The value is still false in the JM log.
> !image-2024-01-08-14-24-30-665.png|width=651,height=51!
>  
> Besides, the parameter `execution.savepoint-restore-mode` has the same 
> problem when we pass it by the -D option.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33970] Add necessary checks for connector document (#78) [flink-connector-pulsar]

2024-01-07 Thread via GitHub


GOODBOY008 commented on PR #80:
URL: 
https://github.com/apache/flink-connector-pulsar/pull/80#issuecomment-1880464103

   @tisonkun Hi, tison. This check should be added to 
`flink-connector-shared-utils/ci_utils`, and the Hugo build check should also 
be included, as we discussed in the [Jira 
](https://issues.apache.org/jira/browse/FLINK-33970).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix] Set copyright year to 2024 [flink-connector-pulsar]

2024-01-07 Thread via GitHub


tisonkun merged PR #81:
URL: https://github.com/apache/flink-connector-pulsar/pull/81


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33634] Add Conditions to Flink CRD's Status field [flink-kubernetes-operator]

2024-01-07 Thread via GitHub


gyfora commented on code in PR #749:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/749#discussion_r1444205943


##
flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/status/CommonCRStatus.java:
##
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.kubernetes.operator.api.status;
+
+import io.fabric8.kubernetes.api.model.Condition;
+import io.fabric8.kubernetes.api.model.ConditionBuilder;
+
+import java.text.SimpleDateFormat;
+import java.util.Date;
+
+/** Status of CR. */
+public class CommonCRStatus {

Review Comment:
   This is a very confusing class name. We already have `CommonStatus` which is 
actually part of the status (superclass). This class here is just a utility 
class for creating `Condition` objects. So maybe `FlinkConditions` or 
`ConditionUtils` would be a better name.
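The suggested rename could look roughly like the sketch below. The nested `Condition` record is a simplified stand-in for fabric8's `io.fabric8.kubernetes.api.model.Condition` (so the sketch stays self-contained); the type/status/reason strings mirror the ones in the PR diff:

```java
import java.time.Instant;

/**
 * Sketch of the reviewer's suggested rename: a ConditionUtils utility with
 * ready/notReady/error factory methods. Not the actual PR code.
 */
public class ConditionUtils {

    // Simplified stand-in for io.fabric8.kubernetes.api.model.Condition.
    public record Condition(
            String type, String status, String message, String reason, String lastTransitionTime) {}

    public static Condition ready(String message) {
        return condition("Ready", "True", message, "Ready");
    }

    public static Condition notReady(String message) {
        return condition("Ready", "False", message, "Progressing");
    }

    public static Condition error(String message) {
        return condition("Error", "True", message, "UnhandledException");
    }

    private static Condition condition(String type, String status, String message, String reason) {
        // ISO-8601 timestamp instead of the PR's SimpleDateFormat-based formatting.
        return new Condition(type, status, message, reason, Instant.now().toString());
    }
}
```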



##
flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/status/CommonCRStatus.java:
##
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.kubernetes.operator.api.status;
+
+import io.fabric8.kubernetes.api.model.Condition;
+import io.fabric8.kubernetes.api.model.ConditionBuilder;
+
+import java.text.SimpleDateFormat;
+import java.util.Date;
+
+/** Status of CR. */
+public class CommonCRStatus {
+
+public static Condition crReadyTrueCondition(final String message) {
+return crCondition("Ready", "True", message, "Ready");
+}
+
+public static Condition crReadyFalseCondition(final String message) {
+return crCondition("Ready", "False", message, "Progressing");
+}
+
+public static Condition crErrorCondition(final String message) {
+return crCondition("Error", "True", message, "UnhandledException");
+}
+
+public static Condition crCondition(

Review Comment:
   I think if we rename the class, we can also rename these methods simply to 
`ready` / `notReady` / `error`.



##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/controller/FlinkDeploymentController.java:
##
@@ -227,4 +235,28 @@ private boolean 
validateDeployment(FlinkResourceContext ctx) {
 }
 return true;
 }
+
+private void setCRStatus(FlinkDeployment flinkApp) {

Review Comment:
   Instead of having to call `setCRStatus`, can we move this logic into the 
actual `CommonStatus` or `FlinkDeploymentStatus` so that it's computed 
automatically, like the `ResourceLifecycleState`?
   
   Also, I think we could compute this directly from the resource lifecycle 
state instead of adding arbitrary new logic here.



##
flink-kubernetes-operator-api/src/main/java/org/apache/flink/kubernetes/operator/api/status/CommonCRStatus.java:
##
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the 

[jira] [Updated] (FLINK-34015) execution.savepoint.ignore-unclaimed-state is invalid when passing this parameter by dynamic properties

2024-01-07 Thread Renxiang Zhou (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Renxiang Zhou updated FLINK-34015:
--
Attachment: (was: image-2024-01-08-14-29-04-347.png)

> execution.savepoint.ignore-unclaimed-state is invalid when passing this 
> parameter by dynamic properties
> ---
>
> Key: FLINK-34015
> URL: https://issues.apache.org/jira/browse/FLINK-34015
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.17.0
>Reporter: Renxiang Zhou
>Priority: Critical
>  Labels: ignore-unclaimed-state-invalid
> Attachments: image-2024-01-08-14-22-09-758.png, 
> image-2024-01-08-14-24-30-665.png
>
>
> We set `execution.savepoint.ignore-unclaimed-state` to true and use the -D 
> option to submit the job, but unfortunately the value is still false in the 
> jobmanager log.
> Pic 1: we set `execution.savepoint.ignore-unclaimed-state` to true when 
> submitting the job.
> !image-2024-01-08-14-22-09-758.png|width=1012,height=222!
> Pic 2: The value is still false in the JM log.
> !image-2024-01-08-14-24-30-665.png|width=651,height=51!
>  
> Besides, the parameter `execution.savepoint-restore-mode` has the same 
> problem when we pass it by the -D option.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34015) execution.savepoint.ignore-unclaimed-state is invalid when passing this parameter by dynamic properties

2024-01-07 Thread Renxiang Zhou (Jira)
Renxiang Zhou created FLINK-34015:
-

 Summary: execution.savepoint.ignore-unclaimed-state is invalid 
when passing this parameter by dynamic properties
 Key: FLINK-34015
 URL: https://issues.apache.org/jira/browse/FLINK-34015
 Project: Flink
  Issue Type: Bug
  Components: Runtime / State Backends
Affects Versions: 1.17.0
Reporter: Renxiang Zhou
 Attachments: image-2024-01-08-14-22-09-758.png, 
image-2024-01-08-14-24-30-665.png, image-2024-01-08-14-29-04-347.png

We set `execution.savepoint.ignore-unclaimed-state` to true and use the -D 
option to submit the job, but unfortunately the value is still false in the 
jobmanager log.

Pic 1: we set `execution.savepoint.ignore-unclaimed-state` to true when 
submitting the job.
!image-2024-01-08-14-22-09-758.png|width=1012,height=222!

Pic 2: The value is still false in the JM log.

!image-2024-01-08-14-24-30-665.png|width=651,height=51!

 

Besides, the parameter `execution.savepoint-restore-mode` has the same problem 
when we pass it by the -D option.

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33632] Adding custom flink mutator [flink-kubernetes-operator]

2024-01-07 Thread via GitHub


gyfora commented on code in PR #733:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/733#discussion_r1444204137


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/utils/MutatorUtils.java:
##
@@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.kubernetes.operator.utils;
+
+import org.apache.flink.configuration.ConfigConstants;
+import org.apache.flink.core.plugin.PluginUtils;
+import org.apache.flink.kubernetes.operator.config.FlinkConfigManager;
+import org.apache.flink.kubernetes.operator.mutator.DefaultFlinkMutator;
+import org.apache.flink.kubernetes.operator.mutator.FlinkResourceMutator;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.HashSet;
+import java.util.Set;
+
+/** Mutator utilities. */
+public final class MutatorUtils {
+
+private static final Logger LOG = 
LoggerFactory.getLogger(MutatorUtils.class);
+
+/**
+ * discovers mutators.
+ *
+ * @param configManager Flink Config manager
+ * @return Set of FlinkResourceMutator
+ */
+public static Set 
discoverMutators(FlinkConfigManager configManager) {

Review Comment:
   We should have a test for this, similar to `ValidatorUtilsTest`. 
Alternatively, it would be even better to refactor the code to eliminate the 
overlapping logic between discovering the validator and the mutator; then we 
can have a single test.



##
flink-kubernetes-webhook/src/test/java/org/apache/flink/kubernetes/operator/admission/AdmissionHandlerTest.java:
##
@@ -136,6 +140,10 @@ public void testHandleValidateRequestWithAdmissionReview() 
throws IOException {
 public void testMutateHandler() throws Exception {
 final EmbeddedChannel embeddedChannel = new 
EmbeddedChannel(admissionHandler);
 var sessionJob = new FlinkSessionJob();
+ObjectMeta objectMeta = new ObjectMeta();

Review Comment:
   We should also add a test that has a custom mutator and make sure it is 
called.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [hotfix] Set copyright year to 2024 [flink-connector-pulsar]

2024-01-07 Thread via GitHub


GOODBOY008 commented on PR #81:
URL: 
https://github.com/apache/flink-connector-pulsar/pull/81#issuecomment-1880441534

   @tisonkun  PTAL


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33115) AbstractHadoopRecoverableWriterITCase is hanging with timeout on AZP

2024-01-07 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804136#comment-17804136
 ] 

Sergey Nuyanzin commented on FLINK-33115:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56089=logs=93ebd72a-004d-5a68-6295-7ace4ad889cd=35e92294-2840-51f1-1753-ae015c24c41f=13514

> AbstractHadoopRecoverableWriterITCase is hanging with timeout on AZP
> 
>
> Key: FLINK-33115
> URL: https://issues.apache.org/jira/browse/FLINK-33115
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.18.0, 1.19.0, 1.17.3
>Reporter: Sergey Nuyanzin
>Priority: Critical
>  Labels: stale-critical, test-stability
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=53281=logs=4eda0b4a-bd0d-521a-0916-8285b9be9bb5=2ff6d5fa-53a6-53ac-bff7-fa524ea361a9=14239
> is failing as
> {noformat}
> Sep 15 11:33:02 
> ==
> Sep 15 11:33:02 Process produced no output for 900 seconds.
> Sep 15 11:33:02 
> ==
> ...
> Sep 15 11:33:03   at 
> java.io.DataInputStream.read(DataInputStream.java:149)
> Sep 15 11:33:03   at 
> org.apache.flink.runtime.fs.hdfs.HadoopDataInputStream.read(HadoopDataInputStream.java:96)
> Sep 15 11:33:03   at 
> sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
> Sep 15 11:33:03   at 
> sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
> Sep 15 11:33:03   at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
> Sep 15 11:33:03   - locked <0xbfa7a760> (a 
> java.io.InputStreamReader)
> Sep 15 11:33:03   at 
> java.io.InputStreamReader.read(InputStreamReader.java:184)
> Sep 15 11:33:03   at java.io.BufferedReader.fill(BufferedReader.java:161)
> Sep 15 11:33:03   at 
> java.io.BufferedReader.readLine(BufferedReader.java:324)
> Sep 15 11:33:03   - locked <0xbfa7a760> (a 
> java.io.InputStreamReader)
> Sep 15 11:33:03   at 
> java.io.BufferedReader.readLine(BufferedReader.java:389)
> Sep 15 11:33:03   at 
> org.apache.flink.runtime.fs.hdfs.AbstractHadoopRecoverableWriterITCase.getContentsOfFile(AbstractHadoopRecoverableWriterITCase.java:387)
> Sep 15 11:33:03   at 
> org.apache.flink.runtime.fs.hdfs.AbstractHadoopRecoverableWriterITCase.testResumeAfterMultiplePersist(AbstractHadoopRecoverableWriterITCase.java:377)
> Sep 15 11:33:03   at 
> org.apache.flink.runtime.fs.hdfs.AbstractHadoopRecoverableWriterITCase.testResumeAfterMultiplePersistWithMultiPartUploads(AbstractHadoopRecoverableWriterITCase.java:330)
> Sep 15 11:33:03   at 
> org.apache.flink.runtime.fs.hdfs.AbstractHadoopRecoverableWri
> ...
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34012][python] Fix mypy error [flink]

2024-01-07 Thread via GitHub


flinkbot commented on PR #24043:
URL: https://github.com/apache/flink/pull/24043#issuecomment-1880440945

   
   ## CI report:
   
   * 571f01449171798b8e88c8cbf38c875ef885bdc9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-33535) Support autoscaler for session jobs

2024-01-07 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-33535.
--
Resolution: Fixed

merged to main 66e143362174491cef4b1d251b3fa21058fcf1c1

> Support autoscaler for session jobs
> ---
>
> Key: FLINK-33535
> URL: https://issues.apache.org/jira/browse/FLINK-33535
> Project: Flink
>  Issue Type: New Feature
>  Components: Autoscaler, Kubernetes Operator
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>Priority: Major
>  Labels: pull-request-available
>
> Currently the operator autoscaler is not enabled for session jobs. There are 
> some issues around the actual scaling of the jobs, such as: 
> https://issues.apache.org/jira/browse/FLINK-33534
> Furthermore, there is a bug that in any case prevents the submission of these 
> jobs due to how the job ID is generated, causing collisions if we only change 
> the parallelism override without regenerating the metadata.
> We could still consider enabling the autoscaler for limited 1.18 rescale 
> API support.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33535) Support autoscaler for session jobs

2024-01-07 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora reassigned FLINK-33535:
--

Assignee: Gyula Fora

> Support autoscaler for session jobs
> ---
>
> Key: FLINK-33535
> URL: https://issues.apache.org/jira/browse/FLINK-33535
> Project: Flink
>  Issue Type: New Feature
>  Components: Autoscaler, Kubernetes Operator
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>Priority: Major
>  Labels: pull-request-available
>
> Currently the operator autoscaler is not enabled for session jobs. There are 
> some issues around the actual scaling of the jobs, such as: 
> https://issues.apache.org/jira/browse/FLINK-33534
> Furthermore, there is a bug that in any case prevents the submission of these 
> jobs due to how the job ID is generated, causing collisions if we only change 
> the parallelism override without regenerating the metadata.
> We could still consider enabling the autoscaler for limited 1.18 rescale 
> API support.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33535] Support autoscaler for session jobs [flink-kubernetes-operator]

2024-01-07 Thread via GitHub


gyfora merged PR #739:
URL: https://github.com/apache/flink-kubernetes-operator/pull/739


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34012) Flink python fails with can't read file '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google

2024-01-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34012:
---
Labels: pull-request-available test-stability  (was: test-stability)

> Flink python fails with  can't read file 
> '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google
> ---
>
> Key: FLINK-34012
> URL: https://issues.apache.org/jira/browse/FLINK-34012
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: Sergey Nuyanzin
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56073=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901=20755
> {noformat}
> Jan 06 03:02:43 Installing collected packages: types-pytz, 
> types-python-dateutil, types-protobuf
> Jan 06 03:02:43 Successfully installed types-protobuf-4.24.0.20240106 
> types-python-dateutil-2.8.19.20240106 types-pytz-2023.3.1.1
> Jan 06 03:02:44 mypy: can't read file 
> '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google': No 
> such file or directory
> Jan 06 03:02:44 Installing missing stub packages:
> Jan 06 03:02:44 /__w/2/s/flink-python/dev/.conda/bin/python -m pip install 
> types-protobuf types-python-dateutil types-pytz
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34012) Flink python fails with can't read file '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google

2024-01-07 Thread Xingbo Huang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xingbo Huang reassigned FLINK-34012:


Assignee: Xingbo Huang

> Flink python fails with  can't read file 
> '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google
> ---
>
> Key: FLINK-34012
> URL: https://issues.apache.org/jira/browse/FLINK-34012
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: Sergey Nuyanzin
>Assignee: Xingbo Huang
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56073=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901=20755
> {noformat}
> Jan 06 03:02:43 Installing collected packages: types-pytz, 
> types-python-dateutil, types-protobuf
> Jan 06 03:02:43 Successfully installed types-protobuf-4.24.0.20240106 
> types-python-dateutil-2.8.19.20240106 types-pytz-2023.3.1.1
> Jan 06 03:02:44 mypy: can't read file 
> '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google': No 
> such file or directory
> Jan 06 03:02:44 Installing missing stub packages:
> Jan 06 03:02:44 /__w/2/s/flink-python/dev/.conda/bin/python -m pip install 
> types-protobuf types-python-dateutil types-pytz
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-34012][python] Fix mypy error [flink]

2024-01-07 Thread via GitHub


HuangXingBo opened a new pull request, #24043:
URL: https://github.com/apache/flink/pull/24043

   ## What is the purpose of the change
   
   *This pull request will fix mypy error*
   
   
   ## Brief change log
   
 - *ignore google package import error*
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
 - *current mypy tests*
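The "ignore google package import error" change presumably amounts to a mypy override of roughly this shape; the exact config file and section used in Flink's build are assumptions, not taken from the PR:

```ini
# Illustrative mypy override (mypy.ini / setup.cfg style):
# skip "module not found" errors for the google namespace package.
[mypy-google.*]
ignore_missing_imports = True
```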
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33988][configuration] Fix the invalid configuration when using initialized root logger level on yarn deployment mode [flink]

2024-01-07 Thread via GitHub


flinkbot commented on PR #24042:
URL: https://github.com/apache/flink/pull/24042#issuecomment-1880416774

   
   ## CI report:
   
   * 09af1b68e8bf66b0c07c760f2c879c0a364469d4 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34001][docs][table] Fix ambiguous document description towards the default value for configuring operator-level state TTL [flink]

2024-01-07 Thread via GitHub


LadyForest commented on PR #24039:
URL: https://github.com/apache/flink/pull/24039#issuecomment-1880416354

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34001][docs][table] Fix ambiguous document description towards the default value for configuring operator-level state TTL [flink]

2024-01-07 Thread via GitHub


LadyForest commented on PR #24039:
URL: https://github.com/apache/flink/pull/24039#issuecomment-1880416082

   The CI failure is not relevant to the changes


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33988) Invalid configuration when using initialized root logger level on yarn application mode

2024-01-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33988:
---
Labels: pull-request-available  (was: )

> Invalid configuration when using initialized root logger level on yarn 
> application mode
> ---
>
> Key: FLINK-33988
> URL: https://issues.apache.org/jira/browse/FLINK-33988
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration
>Affects Versions: 1.19.0
>Reporter: RocMarshal
>Assignee: RocMarshal
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> related: https://issues.apache.org/jira/browse/FLINK-33166
> When I set env.log.level=DEBUG and start the Flink job in YARN application 
> mode, the logs of the TM and JM are still at INFO level.
> Preliminary inference: the ROOT_LOG_LEVEL value transmission link is not 
> complete.
> So I used the following configuration:
> containerized.taskmanager.env.ROOT_LOG_LEVEL=DEBUG
> containerized.master.env.ROOT_LOG_LEVEL=DEBUG
>  
> When starting the job in YARN application mode, the TM and JM can output 
> DEBUG-level logs.
>  
> Repair idea:
> Fill the value of *env.log.level* into the Flink configuration via 
> *containerized.xxx.env.ROOT_LOG_LEVEL* before obtaining the environment 
> variables for the container.
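The workaround described in the report can be sketched as the following flink-conf.yaml entries (a sketch of the reporter's configuration; `env.log.level` is the option that the proposed fix should make propagate automatically):

```yaml
# Workaround from the report: pass the root log level to the containers explicitly.
containerized.taskmanager.env.ROOT_LOG_LEVEL: DEBUG
containerized.master.env.ROOT_LOG_LEVEL: DEBUG

# The option that should, after the fix, propagate on its own in YARN application mode.
env.log.level: DEBUG
```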



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-33988][configuration] Fix the invalid configuration when using initialized root logger level on yarn deployment mode [flink]

2024-01-07 Thread via GitHub


RocMarshal opened a new pull request, #24042:
URL: https://github.com/apache/flink/pull/24042

   
   
   ## What is the purpose of the change
   
   Fix the invalid configuration when using initialized root logger level on 
yarn deployment mode
   
   
   ## Brief change log
   
   - Fill the containerized env values of the root logger level before 
submitting the YARN application.
   
   
   ## Verifying this change
   
   
   This change added tests and can be verified as follows:
   
 - *Manually verified the change by running a 4-node cluster.*

   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33434][runtime-web] Support invoke async-profiler on TaskManager via REST API [flink]

2024-01-07 Thread via GitHub


flinkbot commented on PR #24041:
URL: https://github.com/apache/flink/pull/24041#issuecomment-1880408069

   
   ## CI report:
   
   * 785c67752eaef9d7a13fdd48f613190c9e6ecb82 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34009) Apache flink: Checkpoint restoration issue on Application Mode of deployment

2024-01-07 Thread Vijay (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804118#comment-17804118
 ] 

Vijay commented on FLINK-34009:
---

As Flink supports multi-job execution in Application Mode (with HA disabled), 
we need more details on how to enable the restoration process via checkpointing 
when the app / Flink is upgraded. Please help us overcome this issue. Thanks.

> Apache flink: Checkpoint restoration issue on Application Mode of deployment
> 
>
> Key: FLINK-34009
> URL: https://issues.apache.org/jira/browse/FLINK-34009
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.18.0
> Environment: Flink version: 1.18
> Zookeeper version: 3.7.2
> Env: Custom flink docker image (with embedded application class) deployed 
> over kubernetes (v1.26.11).
>Reporter: Vijay
>Priority: Major
>
> Hi Team,
> Good Day. Wish you all a happy new year 2024.
> We are using the Flink 1.18 version on our cluster. The Job Manager has been 
> deployed in "Application Mode" and HA is disabled (high-availability.type: 
> NONE); under these configuration parameters we are able to start multiple jobs 
> (using env.executeAsync()) of a single application.
> Note: We have also set up checkpointing on an S3 instance with 
> RETAIN_ON_CANCELLATION mode (plus other required settings).
> Let's say we start two jobs of the same application (e.g., Jobidxxx1, 
> jobidxxx2) and they are currently running in the k8s environment. If we have 
> to perform a Flink minor upgrade, or upgrade our application with minor 
> changes, we stop the Job Manager and Task Manager instances, perform the 
> necessary upgrade, and then start both the Job Manager and Task Manager 
> instances again. On startup we expect the jobs to be restored from the last 
> checkpoint, but the restoration does not happen on Job Manager startup. 
> Please let us know whether this is a bug or the general behavior of Flink 
> under Application Mode deployment.
> Additional information: If we enable HA (using Zookeeper) on Application 
> mode, we are able to startup only one job (i.e., per-job behavior). When we 
> perform Flink minor upgrade (or) upgrade of our application with minor 
> changes, the checkpoint restoration is working properly on Job Manager & Task 
> Managers restart process.
> It seems checkpoint restoration and HA are inter-related, but why doesn't 
> checkpoint restoration work when HA is disabled?
>  
> Please let us know if anyone has experienced similar issues or has any 
> suggestions; it would be highly appreciated. Thanks in advance for your 
> assistance.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33434][runtime-web] Support invoke async-profiler on TaskManager via REST API [flink]

2024-01-07 Thread via GitHub


yuchen-ecnu commented on PR #24041:
URL: https://github.com/apache/flink/pull/24041#issuecomment-1880405288

   Hi @Myasuka , do you have time to help review this PR?





[jira] [Updated] (FLINK-33434) Support invoke async-profiler on Taskmanager through REST API

2024-01-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33434:
---
Labels: pull-request-available  (was: )

> Support invoke async-profiler on Taskmanager through REST API
> -
>
> Key: FLINK-33434
> URL: https://issues.apache.org/jira/browse/FLINK-33434
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / REST
>Affects Versions: 1.19.0
>Reporter: Yu Chen
>Assignee: Yu Chen
>Priority: Major
>  Labels: pull-request-available
>






[PR] [FLINK-33434][runtime-web] Support invoke async-profiler on TaskManager via REST API [flink]

2024-01-07 Thread via GitHub


yuchen-ecnu opened a new pull request, #24041:
URL: https://github.com/apache/flink/pull/24041

   
   ## What is the purpose of the change
   
   This is a subtask of 
[FLIP-375](https://cwiki.apache.org/confluence/x/64lEE), which introduces the 
async-profiler for profiling Jobmanager. 
   
   ## Brief change log
   - Generalized file upload from TaskManager to support different `FileType` 
uploading (different `fileType` could have different `baseDir`)
   - Introduce APIs for Creating Profiling Instances / Downloading Profiling 
Results / Retrieving Profiling List on TaskManager
   - Provide a web page for profiling TaskManager on  Flink WEB
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): **no** 
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: **no**
 - The serializers: **no** 
 - The runtime per-record code paths (performance sensitive): **no** 
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: **no** 
 - The S3 file system connector: **no**
   
   ## Documentation
   
 - Does this pull request introduce a new feature? **yes**
 - If yes, how is the feature documented? not documented, it will be added 
in [FLINK-33436](https://issues.apache.org/jira/browse/FLINK-33436)
   





Re: [PR] [FLINK-33738][Scheduler] Make exponential-delay restart-strategy the default restart strategy [flink]

2024-01-07 Thread via GitHub


flinkbot commented on PR #24040:
URL: https://github.com/apache/flink/pull/24040#issuecomment-1880359291

   
   ## CI report:
   
   * 897dba2811c190461951040a2bd696fca4a18c88 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-33738) Make exponential-delay restart-strategy the default restart strategy

2024-01-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33738:
---
Labels: pull-request-available  (was: )

> Make exponential-delay restart-strategy the default restart strategy
> 
>
> Key: FLINK-33738
> URL: https://issues.apache.org/jira/browse/FLINK-33738
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Commented] (FLINK-33996) Support disabling project rewrite when multiple exprs in the project reference the same sub project field.

2024-01-07 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804102#comment-17804102
 ] 

Jane Chan commented on FLINK-33996:
---

I tend to agree with [~libenchao]. Splitting calc into two layers at the plan 
level may seem beneficial, but considering that multiple operators will be 
generated during the runtime phase, it will also incur additional overhead. 
Therefore, it is uncertain whether there will be actual performance benefits 
end-to-end.

> Support disabling project rewrite when multiple exprs in the project 
> reference the same sub project field.
> --
>
> Key: FLINK-33996
> URL: https://issues.apache.org/jira/browse/FLINK-33996
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Affects Versions: 1.18.0
>Reporter: Feng Jin
>Priority: Major
>  Labels: pull-request-available
>
> When multiple top projects reference the same bottom project, project rewrite 
> rules may result in complex projects being calculated multiple times.
> Take the following SQL as an example:
> {code:sql}
> create table test_source(a varchar) with ('connector'='datagen');
> explain plan for select a || 'a' as a, a || 'b' as b FROM (select 
> REGEXP_REPLACE(a, 'aaa', 'bbb') as a FROM test_source);
> {code}
> The final SQL plan is as follows:
> {code:sql}
> == Abstract Syntax Tree ==
> LogicalProject(a=[||($0, _UTF-16LE'a')], b=[||($0, _UTF-16LE'b')])
> +- LogicalProject(a=[REGEXP_REPLACE($0, _UTF-16LE'aaa', _UTF-16LE'bbb')])
>+- LogicalTableScan(table=[[default_catalog, default_database, 
> test_source]])
> == Optimized Physical Plan ==
> Calc(select=[||(REGEXP_REPLACE(a, _UTF-16LE'aaa', _UTF-16LE'bbb'), 
> _UTF-16LE'a') AS a, ||(REGEXP_REPLACE(a, _UTF-16LE'aaa', _UTF-16LE'bbb'), 
> _UTF-16LE'b') AS b])
> +- TableSourceScan(table=[[default_catalog, default_database, test_source]], 
> fields=[a])
> == Optimized Execution Plan ==
> Calc(select=[||(REGEXP_REPLACE(a, 'aaa', 'bbb'), 'a') AS a, 
> ||(REGEXP_REPLACE(a, 'aaa', 'bbb'), 'b') AS b])
> +- TableSourceScan(table=[[default_catalog, default_database, test_source]], 
> fields=[a])
> {code}
> It can be observed that after project rewrite, REGEXP_REPLACE is calculated twice. 
> Generally speaking, regular expression matching is a time-consuming operation 
> and we usually do not want it to be calculated multiple times. Therefore, for 
> this scenario, we can support disabling project rewrite.
> After disabling some rules, the final plan we obtained is as follows:
> {code:sql}
> == Abstract Syntax Tree ==
> LogicalProject(a=[||($0, _UTF-16LE'a')], b=[||($0, _UTF-16LE'b')])
> +- LogicalProject(a=[REGEXP_REPLACE($0, _UTF-16LE'aaa', _UTF-16LE'bbb')])
>+- LogicalTableScan(table=[[default_catalog, default_database, 
> test_source]])
> == Optimized Physical Plan ==
> Calc(select=[||(a, _UTF-16LE'a') AS a, ||(a, _UTF-16LE'b') AS b])
> +- Calc(select=[REGEXP_REPLACE(a, _UTF-16LE'aaa', _UTF-16LE'bbb') AS a])
>+- TableSourceScan(table=[[default_catalog, default_database, 
> test_source]], fields=[a])
> == Optimized Execution Plan ==
> Calc(select=[||(a, 'a') AS a, ||(a, 'b') AS b])
> +- Calc(select=[REGEXP_REPLACE(a, 'aaa', 'bbb') AS a])
>+- TableSourceScan(table=[[default_catalog, default_database, 
> test_source]], fields=[a])
> {code}
> After testing, we probably need to modify these few rules:
> org.apache.flink.table.planner.plan.rules.logical.FlinkProjectMergeRule
> org.apache.flink.table.planner.plan.rules.logical.FlinkCalcMergeRule
> org.apache.flink.table.planner.plan.rules.logical.FlinkProjectCalcMergeRule





[jira] [Created] (FLINK-34014) Jdbc connector can avoid send empty insert to database when there's no buffer data

2024-01-07 Thread luoyuxia (Jira)
luoyuxia created FLINK-34014:


 Summary: Jdbc connector can avoid send empty insert to database 
when there's no buffer data
 Key: FLINK-34014
 URL: https://issues.apache.org/jira/browse/FLINK-34014
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / JDBC
Reporter: luoyuxia


In the JDBC connector, a background thread flushes buffered data to the 
database, but when no data is buffered we can skip the flush entirely.

We can avoid it in the method JdbcOutputFormat#attemptFlush, or in a
JdbcBatchStatementExecutor like TableBufferedStatementExecutor, which can avoid 
calling {{statementExecutor.executeBatch()}} when the buffer is empty.
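
A minimal sketch of the proposed guard, using a hypothetical simplified 
executor (not Flink's actual TableBufferedStatementExecutor, which talks to a 
real JDBC statement):

{code:java}
import java.util.ArrayList;
import java.util.List;

/** Sketch: skip executeBatch() when the buffer is empty (hypothetical simplified executor). */
class BufferedStatementExecutorSketch {
    private final List<String> buffer = new ArrayList<>();
    int batchesSent = 0; // counts real round-trips to the database

    void addToBatch(String record) {
        buffer.add(record);
    }

    /** The proposed guard: flush only when there is buffered data. */
    void executeBatchIfNotEmpty() {
        if (buffer.isEmpty()) {
            return; // avoid an empty INSERT round-trip
        }
        batchesSent++; // stand-in for statementExecutor.executeBatch()
        buffer.clear();
    }

    public static void main(String[] args) {
        BufferedStatementExecutorSketch exec = new BufferedStatementExecutorSketch();
        exec.executeBatchIfNotEmpty();        // nothing buffered: no DB call
        exec.addToBatch("row1");
        exec.executeBatchIfNotEmpty();        // one real flush
        exec.executeBatchIfNotEmpty();        // buffer drained: no DB call
        System.out.println(exec.batchesSent); // prints 1
    }
}
{code}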





[jira] [Commented] (FLINK-26760) The new CSV source (file system source + CSV format) does not support reading files whose file encoding is not UTF-8

2024-01-07 Thread Yao Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804098#comment-17804098
 ] 

Yao Zhang commented on FLINK-26760:
---

Hi community,

I hit the same issue with Flink 1.15.4 when reading the TPC-DS CSV file named 
customer.dat.

Is there a way to change the default encoding for the file reader?

 

Thanks.

> The new CSV source (file system source + CSV format) does not support reading 
> files whose file encoding is not UTF-8
> 
>
> Key: FLINK-26760
> URL: https://issues.apache.org/jira/browse/FLINK-26760
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem, Formats (JSON, Avro, Parquet, 
> ORC, SequenceFile)
>Affects Versions: 1.13.6, 1.14.4, 1.15.0
>Reporter: Lijie Wang
>Priority: Major
> Fix For: 1.19.0
>
> Attachments: PerformanceTest.java, example.csv
>
>
> The new CSV source (file system source + CSV format) does not support reading 
> files whose file encoding is not UTF-8, but the legacy {{CsvTableSource}} 
> supports it.
> We provide an {{*example.csv*}} whose file encoding is {{{}ISO-8859-1{}}}.
> When reading it with the legacy {{{}CsvTableSource{}}}, it executes correctly:
> {code:java}
> @Test
> public void testLegacyCsvSource() {
> EnvironmentSettings environmentSettings = 
> EnvironmentSettings.inBatchMode();
> TableEnvironment tEnv = TableEnvironment.create(environmentSettings);
> CsvTableSource.Builder builder = CsvTableSource.builder();
> CsvTableSource source =
> builder.path("example.csv")
> .emptyColumnAsNull()
> .lineDelimiter("\n")
> .fieldDelimiter("|")
> .field("name", DataTypes.STRING())
> .build();
> ConnectorCatalogTable catalogTable = 
> ConnectorCatalogTable.source(source, true);
> tEnv.getCatalog(tEnv.getCurrentCatalog())
> .ifPresent(
> catalog -> {
> try {
> catalog.createTable(
> new 
> ObjectPath(tEnv.getCurrentDatabase(), "example"),
> catalogTable,
> false);
> } catch (Exception e) {
> throw new RuntimeException(e);
> }
> });
> tEnv.executeSql("select count(name) from example").print();
> }
> {code}
>  
> When reading it with the new CSV source (file system source + CSV format), it 
> throws the following error:
> {code:java}
> @Test
> public void testNewCsvSource() {
> EnvironmentSettings environmentSettings = 
> EnvironmentSettings.inBatchMode();
> TableEnvironment tEnv = TableEnvironment.create(environmentSettings);
> String ddl =
> "create table example ("
> + "name string"
> + ") with ("
> + "'connector' = 'filesystem',"
> + "'path' = 'example.csv',"
> + "'format' = 'csv',"
> + "'csv.array-element-delimiter' = '\n',"
> + "'csv.field-delimiter' = '|',"
> + "'csv.null-literal' = ''"
> + ")";
> tEnv.executeSql(ddl);
> tEnv.executeSql("select count(name) from example").print();
> }
> {code}
> {code:java}
> Caused by: java.lang.RuntimeException: One or more fetchers have encountered 
> exception
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:225)
>   at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:169)
>   at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:130)
>   at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:385)
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68)
>   at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:519)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:203)
>   at 
> 

Re: [PR] [FLINK-32738][formats] PROTOBUF format supports projection push down [flink]

2024-01-07 Thread via GitHub


ljw-hit commented on code in PR #23323:
URL: https://github.com/apache/flink/pull/23323#discussion_r1444139332


##
flink-formats/flink-protobuf/src/test/java/org/apache/flink/formats/protobuf/BigPbProtoToRowTest.java:
##
@@ -82,9 +82,52 @@ public void testSimple() throws Exception {
 .putMapField33("key2", "value2")
 .build();
 
+String[][] projectedField =

Review Comment:
   Can you add a UT to verify your logic without modifying the original UTs? This 
comment applies to all the UTs. WDYT?







[jira] [Commented] (FLINK-33996) Support disabling project rewrite when multiple exprs in the project reference the same sub project field.

2024-01-07 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804093#comment-17804093
 ] 

Benchao Li commented on FLINK-33996:


Instead of solving this in the optimization phase, I lean toward solving it in 
the codegen phase. Actually, the expressions are already reused in the 
optimization phase (you can see {{RexProgram}}); however, Flink doesn't utilize 
that and will expand all the expressions again before codegen.

There was an issue about expression reuse in codegen; see FLINK-21573.
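
To illustrate the idea of expression reuse at codegen time, here is a toy 
sketch (hypothetical and simplified, not Flink's actual generated code): the 
shared sub-expression is evaluated once into a local and referenced by both 
projections, instead of being expanded twice.

{code:java}
/** Toy sketch of codegen-time common sub-expression reuse (not Flink's codegen). */
class ExprCseSketch {
    static int evaluations = 0;

    // Stand-in for the expensive shared call, e.g. REGEXP_REPLACE(a, 'aaa', 'bbb').
    static String expensive(String a) {
        evaluations++;
        return a.replaceAll("aaa", "bbb");
    }

    // Naive codegen: each top-level projection re-evaluates the shared sub-expression.
    static String[] naive(String a) {
        return new String[] {expensive(a) + "a", expensive(a) + "b"};
    }

    // CSE-style codegen: evaluate the shared sub-expression once, reuse the local.
    static String[] reused(String a) {
        String shared = expensive(a); // emitted once, referenced twice
        return new String[] {shared + "a", shared + "b"};
    }

    public static void main(String[] args) {
        naive("xaaax");   // evaluations += 2
        reused("xaaax");  // evaluations += 1
        System.out.println(evaluations); // prints 3
    }
}
{code}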

> Support disabling project rewrite when multiple exprs in the project 
> reference the same sub project field.
> --
>
> Key: FLINK-33996
> URL: https://issues.apache.org/jira/browse/FLINK-33996
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Affects Versions: 1.18.0
>Reporter: Feng Jin
>Priority: Major
>  Labels: pull-request-available
>
> When multiple top projects reference the same bottom project, project rewrite 
> rules may result in complex projects being calculated multiple times.
> Take the following SQL as an example:
> {code:sql}
> create table test_source(a varchar) with ('connector'='datagen');
> explain plan for select a || 'a' as a, a || 'b' as b FROM (select 
> REGEXP_REPLACE(a, 'aaa', 'bbb') as a FROM test_source);
> {code}
> The final SQL plan is as follows:
> {code:sql}
> == Abstract Syntax Tree ==
> LogicalProject(a=[||($0, _UTF-16LE'a')], b=[||($0, _UTF-16LE'b')])
> +- LogicalProject(a=[REGEXP_REPLACE($0, _UTF-16LE'aaa', _UTF-16LE'bbb')])
>+- LogicalTableScan(table=[[default_catalog, default_database, 
> test_source]])
> == Optimized Physical Plan ==
> Calc(select=[||(REGEXP_REPLACE(a, _UTF-16LE'aaa', _UTF-16LE'bbb'), 
> _UTF-16LE'a') AS a, ||(REGEXP_REPLACE(a, _UTF-16LE'aaa', _UTF-16LE'bbb'), 
> _UTF-16LE'b') AS b])
> +- TableSourceScan(table=[[default_catalog, default_database, test_source]], 
> fields=[a])
> == Optimized Execution Plan ==
> Calc(select=[||(REGEXP_REPLACE(a, 'aaa', 'bbb'), 'a') AS a, 
> ||(REGEXP_REPLACE(a, 'aaa', 'bbb'), 'b') AS b])
> +- TableSourceScan(table=[[default_catalog, default_database, test_source]], 
> fields=[a])
> {code}
> It can be observed that after project rewrite, REGEXP_REPLACE is calculated twice. 
> Generally speaking, regular expression matching is a time-consuming operation 
> and we usually do not want it to be calculated multiple times. Therefore, for 
> this scenario, we can support disabling project rewrite.
> After disabling some rules, the final plan we obtained is as follows:
> {code:sql}
> == Abstract Syntax Tree ==
> LogicalProject(a=[||($0, _UTF-16LE'a')], b=[||($0, _UTF-16LE'b')])
> +- LogicalProject(a=[REGEXP_REPLACE($0, _UTF-16LE'aaa', _UTF-16LE'bbb')])
>+- LogicalTableScan(table=[[default_catalog, default_database, 
> test_source]])
> == Optimized Physical Plan ==
> Calc(select=[||(a, _UTF-16LE'a') AS a, ||(a, _UTF-16LE'b') AS b])
> +- Calc(select=[REGEXP_REPLACE(a, _UTF-16LE'aaa', _UTF-16LE'bbb') AS a])
>+- TableSourceScan(table=[[default_catalog, default_database, 
> test_source]], fields=[a])
> == Optimized Execution Plan ==
> Calc(select=[||(a, 'a') AS a, ||(a, 'b') AS b])
> +- Calc(select=[REGEXP_REPLACE(a, 'aaa', 'bbb') AS a])
>+- TableSourceScan(table=[[default_catalog, default_database, 
> test_source]], fields=[a])
> {code}
> After testing, we probably need to modify these few rules:
> org.apache.flink.table.planner.plan.rules.logical.FlinkProjectMergeRule
> org.apache.flink.table.planner.plan.rules.logical.FlinkCalcMergeRule
> org.apache.flink.table.planner.plan.rules.logical.FlinkProjectCalcMergeRule





[jira] [Commented] (FLINK-34001) Fix ambiguous document description towards the default value for configuring operator-level state TTL

2024-01-07 Thread yong yang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804092#comment-17804092
 ] 

yong yang commented on FLINK-34001:
---

The left table (tb1) has an input rate of 100 records/s, and large ids do not 
exist in tb2.

My lookup.cache config is 100, but with tb2 (MySQL) Flink backpressure is often 
caused by excessive load.

Does lookup.cache have an upper limit? Does a configuration of hundreds of 
millions cause poor cache performance?

> Fix ambiguous document description towards the default value for configuring 
> operator-level state TTL
> -
>
> Key: FLINK-34001
> URL: https://issues.apache.org/jira/browse/FLINK-34001
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.18.0
>Reporter: yong yang
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available
>
> doc:
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/concepts/overview/#idle-state-retention-time
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which 
> means the state retention is not enabled. 
>  
> but in my test I found:
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which 
> means the state is kept permanently!





[jira] [Commented] (FLINK-33565) The concurrentExceptions doesn't work

2024-01-07 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804091#comment-17804091
 ] 

Rui Fan commented on FLINK-33565:
-

Hey [~mapohl], I have submitted the PR [1] for this JIRA. I haven't finished 
detailed testing yet because I first want to check with you whether the 
solution is fine.

Would you mind taking a look at this PR in your free time? It would be great to 
finish it in 1.19, thanks~ :)

 

[1]https://github.com/apache/flink/pull/24003

> The concurrentExceptions doesn't work
> -
>
> Key: FLINK-33565
> URL: https://issues.apache.org/jira/browse/FLINK-33565
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.18.0, 1.17.1
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
> Attachments: screenshot-1.png
>
>
> First of all, thanks to [~mapohl] for helping double-check in advance that 
> this was indeed a bug .
> Displaying exception history in WebUI is supported in FLINK-6042.
> h1. What's the concurrentExceptions?
> When an execution fails due to an exception, other executions in the same 
> region will also restart, and the first Exception is rootException. If other 
> restarted executions also report exceptions at this time, we hope to collect 
> these exceptions and display them to the user as concurrentExceptions.
> h2. What's this bug?
> The concurrentExceptions is always empty in production, even if other 
> executions report exception at very close times.
> h1. Why doesn't it work?
> If one job has all-to-all shuffle, this job only has one region, and this 
> region has a lot of executions. If one execution throw exception:
>  * JobMaster will mark the state as FAILED for this execution.
>  * The rest of executions of this region will be marked to CANCELING.
>  ** This call stack can be found at FLIP-364 
> [part-4.2.3|https://cwiki.apache.org/confluence/display/FLINK/FLIP-364%3A+Improve+the+restart-strategy#FLIP364:Improvetherestartstrategy-4.2.3Detailedcodeforfull-failover]
>  
> When these executions throw exceptions as well, the JobMaster will mark their 
> state from CANCELING to CANCELED instead of FAILED.
> The CANCELED executions won't go through the FAILED logic, so their exceptions 
> are ignored.
> Note: all reports are executed inside of JobMaster RPC thread, it's single 
> thread. So these reports are executed serially. So only one execution is 
> marked to FAILED, and the rest of executions will be marked to CANCELED later.
> h1. How to fix it?
> After discussing offline with [~mapohl], we first need to discuss with the 
> community whether we should keep the concurrentExceptions.
>  * If no, we can remove the related logic directly.
>  * If yes, we will discuss how to fix it later.





Re: [PR] [FLINK-34001][docs][table] Fix ambiguous document description towards the default value for configuring operator-level state TTL [flink]

2024-01-07 Thread via GitHub


flinkbot commented on PR #24039:
URL: https://github.com/apache/flink/pull/24039#issuecomment-1880311788

   
   ## CI report:
   
   * 0025867642132f37810d2abf6679a0cad3de3ea3 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





Re: [PR] [FLINK-34001][docs][table] Fix ambiguous document description towards the default value for configuring operator-level state TTL [flink]

2024-01-07 Thread via GitHub


xuyangzhong commented on PR #24039:
URL: https://github.com/apache/flink/pull/24039#issuecomment-1880311358

   LGTM





[jira] [Updated] (FLINK-34001) Fix ambiguous document description towards the default value for configuring operator-level state TTL

2024-01-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34001:
---
Labels: pull-request-available  (was: )

> Fix ambiguous document description towards the default value for configuring 
> operator-level state TTL
> -
>
> Key: FLINK-34001
> URL: https://issues.apache.org/jira/browse/FLINK-34001
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.18.0
>Reporter: yong yang
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available
>
> doc:
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/concepts/overview/#idle-state-retention-time
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which 
> means the state retention is not enabled. 
>  
> but in my test I found:
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which 
> means the state is kept permanently!





[PR] [FLINK-34001][docs][table] Fix ambiguous document description towards the default value for configuring operator-level state TTL [flink]

2024-01-07 Thread via GitHub


LadyForest opened a new pull request, #24039:
URL: https://github.com/apache/flink/pull/24039

   ## What is the purpose of the change
   
   This is a trivial PR to eliminate the ambiguous description about the 
meaning of 0ms when configuring operator-level state TTL for SQL/TableAPI using 
compiled plan.
   
   
   ## Brief change log
   
   Change "state retention is not enabled" to "state never expires"
   
   
   ## Verifying this change
   
   This change is a trivial rework without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   





[jira] [Commented] (FLINK-34001) Fix ambiguous document description towards the default value for configuring operator-level state TTL

2024-01-07 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804090#comment-17804090
 ] 

xuyang commented on FLINK-34001:


[~luca.yang] I guess what you want is that all (or part) of the data in the 
dimension table is cached in memory, to reduce the pressure of frequent queries 
on the database? You can refer to 
[lookup.cache|https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/jdbc/#lookup-cache-1]
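
For illustration, a hedged sketch of a JDBC dimension table with a partial 
lookup cache (option names follow Flink's JDBC connector docs; the table name 
and URL are made up):

{code:sql}
CREATE TABLE tb2 (
  id BIGINT,
  name STRING,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:mysql://localhost:3306/mydb',
  'table-name' = 'tb2',
  'lookup.cache' = 'PARTIAL',
  -- bound the cache and expire stale entries instead of caching everything
  'lookup.partial-cache.max-rows' = '10000',
  'lookup.partial-cache.expire-after-write' = '1 min'
);
{code}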

> Fix ambiguous document description towards the default value for configuring 
> operator-level state TTL
> -
>
> Key: FLINK-34001
> URL: https://issues.apache.org/jira/browse/FLINK-34001
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.18.0
>Reporter: yong yang
>Assignee: Jane Chan
>Priority: Major
>
> doc:
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/concepts/overview/#idle-state-retention-time
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which 
> means the state retention is not enabled. 
>  
> but in my test I found:
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which 
> means the state is kept permanently!





[jira] [Updated] (FLINK-34001) Fix ambiguous document description towards the default value for configuring operator-level state TTL

2024-01-07 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-34001:
--
Summary: Fix ambiguous document description towards the default value for 
configuring operator-level state TTL  (was: Fix ambiguous document descriptions 
towards the default value for configuring operator-level state TTL)

> Fix ambiguous document description towards the default value for configuring 
> operator-level state TTL
> -
>
> Key: FLINK-34001
> URL: https://issues.apache.org/jira/browse/FLINK-34001
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.18.0
>Reporter: yong yang
>Assignee: Jane Chan
>Priority: Major
>
> doc:
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/concepts/overview/#idle-state-retention-time
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which 
> means the state retention is not enabled. 
>  
> However, my testing shows:
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which
> means the state is kept permanently!





[jira] [Updated] (FLINK-34001) Fix ambiguous document descriptions towards the default value for configuring operator-level state TTL

2024-01-07 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-34001:
--
Summary: Fix ambiguous document descriptions towards the default value for 
configuring operator-level state TTL  (was: doc of "Configure Operator-level 
State TTL" error)

> Fix ambiguous document descriptions towards the default value for configuring 
> operator-level state TTL
> --
>
> Key: FLINK-34001
> URL: https://issues.apache.org/jira/browse/FLINK-34001
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.18.0
>Reporter: yong yang
>Assignee: Jane Chan
>Priority: Major
>
> doc:
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/concepts/overview/#idle-state-retention-time
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which 
> means the state retention is not enabled. 
>  
> However, my testing shows:
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which
> means the state is kept permanently!





[jira] [Commented] (FLINK-34001) doc of "Configure Operator-level State TTL" error

2024-01-07 Thread yong yang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804089#comment-17804089
 ] 

yong yang commented on FLINK-34001:
---

My dimension table has only a few million rows, so it is suitable for being 
stored fully in state.

> doc of "Configure Operator-level State TTL" error
> -
>
> Key: FLINK-34001
> URL: https://issues.apache.org/jira/browse/FLINK-34001
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.18.0
>Reporter: yong yang
>Assignee: Jane Chan
>Priority: Major
>
> doc:
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/concepts/overview/#idle-state-retention-time
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which 
> means the state retention is not enabled. 
>  
> However, my testing shows:
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which
> means the state is kept permanently!





[jira] [Commented] (FLINK-34001) doc of "Configure Operator-level State TTL" error

2024-01-07 Thread yong yang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804088#comment-17804088
 ] 

yong yang commented on FLINK-34001:
---

tb1 join tb2 (dim table);
The amount of data in tb1 is very large, but only a small part of it can be 
matched in tb2.
With a lookup join, you can't avoid caching large amounts of data, nor can you 
avoid the stress that a large number of queries puts on the dimension table;
so I want to do this with a two-stream join, ingesting the dimension table via 
Flink CDC.

> doc of "Configure Operator-level State TTL" error
> -
>
> Key: FLINK-34001
> URL: https://issues.apache.org/jira/browse/FLINK-34001
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.18.0
>Reporter: yong yang
>Assignee: Jane Chan
>Priority: Major
>
> doc:
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/concepts/overview/#idle-state-retention-time
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which 
> means the state retention is not enabled. 
>  
> However, my testing shows:
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which
> means the state is kept permanently!





[jira] [Assigned] (FLINK-34001) doc of "Configure Operator-level State TTL" error

2024-01-07 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan reassigned FLINK-34001:
-

Assignee: Jane Chan

> doc of "Configure Operator-level State TTL" error
> -
>
> Key: FLINK-34001
> URL: https://issues.apache.org/jira/browse/FLINK-34001
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.18.0
>Reporter: yong yang
>Assignee: Jane Chan
>Priority: Major
>
> doc:
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/concepts/overview/#idle-state-retention-time
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which 
> means the state retention is not enabled. 
>  
> However, my testing shows:
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which
> means the state is kept permanently!





[jira] [Updated] (FLINK-34001) doc of "Configure Operator-level State TTL" error

2024-01-07 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-34001:
--
Issue Type: Improvement  (was: Bug)

> doc of "Configure Operator-level State TTL" error
> -
>
> Key: FLINK-34001
> URL: https://issues.apache.org/jira/browse/FLINK-34001
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.18.0
>Reporter: yong yang
>Priority: Major
>
> doc:
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/concepts/overview/#idle-state-retention-time
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which 
> means the state retention is not enabled. 
>  
> However, my testing shows:
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which
> means the state is kept permanently!





[jira] [Comment Edited] (FLINK-34001) doc of "Configure Operator-level State TTL" error

2024-01-07 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804079#comment-17804079
 ] 

Jane Chan edited comment on FLINK-34001 at 1/8/24 2:22 AM:
---

Hi [~luca.yang] and [~xuyangzhong], please correct me if I'm wrong, but since 
never-expiring state is the default behavior, "state retention is disabled" 
is short for "the TTL for state retention is disabled".
You can check the explanation of `table.exec.state.ttl` in 
[config|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/config/]
{quote}Specifies a minimum time interval for how long idle state (i.e. state 
which was not updated), will be retained. State will never be cleared until it 
was idle for less than the minimum time, and will be cleared at some time after 
it was idle. Default is never clean-up the state. NOTE: Cleaning up state 
requires additional overhead for bookkeeping. Default value is 0, which means 
that it will never clean up state.
{quote}
Anyway, any ambiguous statement should be corrected. Thanks for the report, 
I'll fix it.
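
The semantics described above can be modeled in a few lines (a toy 
illustration with invented names, not Flink internals): a TTL of 0 disables 
cleanup, not retention.

```java
import java.util.HashMap;
import java.util.Map;

/** Toy model of idle-state retention (invented names, not Flink code):
 *  ttlMillis == 0 means "never clean up", not "keep nothing". */
public class IdleStateTtlSketch {
    private final long ttlMillis;
    private final Map<String, Long> lastAccess = new HashMap<>();

    public IdleStateTtlSketch(long ttlMillis) { this.ttlMillis = ttlMillis; }

    public void touch(String key, long nowMillis) { lastAccess.put(key, nowMillis); }

    /** Drop entries idle for at least ttlMillis; a TTL of 0 disables cleanup. */
    public void cleanup(long nowMillis) {
        if (ttlMillis == 0) return; // default: state is retained forever
        lastAccess.entrySet().removeIf(e -> nowMillis - e.getValue() >= ttlMillis);
    }

    public int size() { return lastAccess.size(); }

    public static void main(String[] args) {
        IdleStateTtlSketch never = new IdleStateTtlSketch(0);
        never.touch("k", 0);
        never.cleanup(1_000_000);
        System.out.println(never.size()); // 1: still retained

        IdleStateTtlSketch hour = new IdleStateTtlSketch(3_600_000);
        hour.touch("k", 0);
        hour.cleanup(7_200_000);
        System.out.println(hour.size()); // 0: expired after being idle
    }
}
```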


was (Author: qingyue):
Hi [~luca.yang] and [~xuyangzhong], please correct me if I'm wrong, but since 
the state never expires is a by-default behavior, so "state retention is 
disabled" is short for "the TTL for state retention is disabled".
You can check the explanation of `table.exec.state.ttl` in 
[config|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/config/]
{quote}Specifies a minimum time interval for how long idle state (i.e. state 
which was not updated), will be retained. State will never be cleared until it 
was idle for less than the minimum time, and will be cleared at some time after 
it was idle. Default is never clean-up the state. NOTE: Cleaning up state 
requires additional overhead for bookkeeping. Default value is 0, which means 
that it will never clean up state.
{quote}
Anyway, any ambiguous statement should be corrected. I'll fix it.

> doc of "Configure Operator-level State TTL" error
> -
>
> Key: FLINK-34001
> URL: https://issues.apache.org/jira/browse/FLINK-34001
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.18.0
>Reporter: yong yang
>Priority: Major
>
> doc:
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/concepts/overview/#idle-state-retention-time
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which 
> means the state retention is not enabled. 
>  
> However, my testing shows:
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which
> means the state is kept permanently!





[jira] [Comment Edited] (FLINK-34001) doc of "Configure Operator-level State TTL" error

2024-01-07 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804079#comment-17804079
 ] 

Jane Chan edited comment on FLINK-34001 at 1/8/24 2:22 AM:
---

Hi [~luca.yang] and [~xuyangzhong], please correct me if I'm wrong, but since 
never-expiring state is the default behavior, "state retention is disabled" 
is short for "the TTL for state retention is disabled".
You can check the explanation of `table.exec.state.ttl` in 
[config|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/config/]
{quote}Specifies a minimum time interval for how long idle state (i.e. state 
which was not updated), will be retained. State will never be cleared until it 
was idle for less than the minimum time, and will be cleared at some time after 
it was idle. Default is never clean-up the state. NOTE: Cleaning up state 
requires additional overhead for bookkeeping. Default value is 0, which means 
that it will never clean up state.
{quote}
Anyway, any ambiguous statement should be corrected. I'll fix it.


was (Author: qingyue):
Hi [~luca.yang] and [~xuyangzhong]; please correct me if I'm wrong, but state 
retention is disabled stands for "state never expires".
You can check the explanation of `table.exec.state.ttl` in 
[config|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/config/]
{quote}Specifies a minimum time interval for how long idle state (i.e. state 
which was not updated), will be retained. State will never be cleared until it 
was idle for less than the minimum time, and will be cleared at some time after 
it was idle. Default is never clean-up the state. NOTE: Cleaning up state 
requires additional overhead for bookkeeping. Default value is 0, which means 
that it will never clean up state.
{quote}

> doc of "Configure Operator-level State TTL" error
> -
>
> Key: FLINK-34001
> URL: https://issues.apache.org/jira/browse/FLINK-34001
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.18.0
>Reporter: yong yang
>Priority: Major
>
> doc:
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/concepts/overview/#idle-state-retention-time
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which 
> means the state retention is not enabled. 
>  
> However, my testing shows:
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which
> means the state is kept permanently!





[jira] [Commented] (FLINK-34001) doc of "Configure Operator-level State TTL" error

2024-01-07 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804087#comment-17804087
 ] 

xuyang commented on FLINK-34001:


[~luca.yang] No need to close it; just change the issue type to 'Improvement'.

> doc of "Configure Operator-level State TTL" error
> -
>
> Key: FLINK-34001
> URL: https://issues.apache.org/jira/browse/FLINK-34001
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.18.0
>Reporter: yong yang
>Priority: Major
>
> doc:
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/concepts/overview/#idle-state-retention-time
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which 
> means the state retention is not enabled. 
>  
> However, my testing shows:
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which
> means the state is kept permanently!





[jira] [Commented] (FLINK-33737) Merge multiple Exceptions into one attempt for exponential-delay restart-strategy

2024-01-07 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804086#comment-17804086
 ] 

Rui Fan commented on FLINK-33737:
-

Merged to master (1.19) via f9738d63391668396072570454fdc1eb61699098 and 
ef3cefda35428104af354fc3eb563afee58bf639
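
For readers following along, the merged-attempt behavior can be sketched 
roughly as follows (an invented toy class, not Flink's actual strategy 
implementation): failures that arrive inside the current backoff window count 
as one attempt, so the delay grows once per window instead of once per 
exception.

```java
/** Toy sketch of an exponential-delay restart strategy that merges
 *  exceptions: a burst of task failures inside the current backoff window
 *  counts as a single attempt and does not inflate the delay several steps. */
public class ExponentialDelaySketch {
    private final long initialMillis;
    private final long maxMillis;
    private int attempts = 0;
    private long windowEndMillis = -1;

    public ExponentialDelaySketch(long initialMillis, long maxMillis) {
        this.initialMillis = initialMillis;
        this.maxMillis = maxMillis;
    }

    /** Returns the restart delay for a failure observed at nowMillis. */
    public long onFailure(long nowMillis) {
        if (nowMillis > windowEndMillis) {
            attempts++; // a new attempt: outside the previous failure's window
        }               // else: merged into the current attempt, delay unchanged
        long delay = Math.min(maxMillis, initialMillis << Math.min(attempts - 1, 30));
        windowEndMillis = nowMillis + delay;
        return delay;
    }

    public static void main(String[] args) {
        ExponentialDelaySketch s = new ExponentialDelaySketch(1000, 60_000);
        System.out.println(s.onFailure(0));     // 1000: first attempt
        System.out.println(s.onFailure(500));   // 1000: merged, not a new attempt
        System.out.println(s.onFailure(5_000)); // 2000: second attempt, doubled
    }
}
```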

> Merge multiple Exceptions into one attempt for exponential-delay 
> restart-strategy
> -
>
> Key: FLINK-33737
> URL: https://issues.apache.org/jira/browse/FLINK-33737
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Resolved] (FLINK-33737) Merge multiple Exceptions into one attempt for exponential-delay restart-strategy

2024-01-07 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan resolved FLINK-33737.
-
Resolution: Fixed

> Merge multiple Exceptions into one attempt for exponential-delay 
> restart-strategy
> -
>
> Key: FLINK-33737
> URL: https://issues.apache.org/jira/browse/FLINK-33737
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>






[jira] [Commented] (FLINK-34001) doc of "Configure Operator-level State TTL" error

2024-01-07 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804085#comment-17804085
 ] 

xuyang commented on FLINK-34001:


[~luca.yang] Can you give an example scenario of why you want one side not to 
retain state when using a two-stream join (or something else)?

> doc of "Configure Operator-level State TTL" error
> -
>
> Key: FLINK-34001
> URL: https://issues.apache.org/jira/browse/FLINK-34001
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.18.0
>Reporter: yong yang
>Priority: Major
>
> doc:
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/concepts/overview/#idle-state-retention-time
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which 
> means the state retention is not enabled. 
>  
> However, my testing shows:
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which
> means the state is kept permanently!





Re: [PR] [FLINK-33737][Scheduler] Merge multiple Exceptions into one attempt for exponential-delay restart-strategy [flink]

2024-01-07 Thread via GitHub


1996fanrui merged PR #23867:
URL: https://github.com/apache/flink/pull/23867


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34001) doc of "Configure Operator-level State TTL" error

2024-01-07 Thread yong yang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804083#comment-17804083
 ] 

yong yang commented on FLINK-34001:
---

OK, it's not an error; I will close the Jira.

> doc of "Configure Operator-level State TTL" error
> -
>
> Key: FLINK-34001
> URL: https://issues.apache.org/jira/browse/FLINK-34001
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.18.0
>Reporter: yong yang
>Priority: Major
>
> doc:
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/concepts/overview/#idle-state-retention-time
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which 
> means the state retention is not enabled. 
>  
> However, my testing shows:
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which
> means the state is kept permanently!





[jira] [Commented] (FLINK-34001) doc of "Configure Operator-level State TTL" error

2024-01-07 Thread yong yang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804082#comment-17804082
 ] 

yong yang commented on FLINK-34001:
---

With two data streams, how can I make one stream not retain state at all (???), 
while the other stream keeps its state permanently ("ttl" : "0 ms")? Thanks.
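
For reference, the operator-level approach described in the 1.18 doc works by 
serializing the plan to JSON (COMPILE PLAN) and editing each join input's TTL. 
A rough sketch of the relevant fragment (field layout from memory; check the 
linked doc for the exact schema). Note that "0 ms" means "never expire", so a 
side that keeps no state at all is not expressible; a very short TTL is the 
closest approximation:

```json
"state": [
  { "index": 0, "ttl": "1 s",  "name": "leftState"  },
  { "index": 1, "ttl": "0 ms", "name": "rightState" }
]
```

Here the left input's state would expire after one second of idleness, while 
the right (dimension) side is kept forever.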

> doc of "Configure Operator-level State TTL" error
> -
>
> Key: FLINK-34001
> URL: https://issues.apache.org/jira/browse/FLINK-34001
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.18.0
>Reporter: yong yang
>Priority: Major
>
> doc:
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/concepts/overview/#idle-state-retention-time
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which 
> means the state retention is not enabled. 
>  
> However, my testing shows:
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which
> means the state is kept permanently!





[jira] [Comment Edited] (FLINK-34001) doc of "Configure Operator-level State TTL" error

2024-01-07 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804081#comment-17804081
 ] 

xuyang edited comment on FLINK-34001 at 1/8/24 2:12 AM:


[~qingyue] Users may be confused and think that "State retention is disabled" 
means that the state is not retained, when in fact it means that the state is 
retained permanently, without expiration.

BTW, [~luca.yang] I think this PR is an improvement, not a bug, right?


was (Author: xuyangzhong):
[~qingyue] User may be confused, and will think that "State retention is 
disabled" means that the state is not retained, but in fact it means that the 
state is permanently retained without expiration. 

BTW, I think [~luca.yang] this pr is an improvement, not a bug, right? 

> doc of "Configure Operator-level State TTL" error
> -
>
> Key: FLINK-34001
> URL: https://issues.apache.org/jira/browse/FLINK-34001
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.18.0
>Reporter: yong yang
>Priority: Major
>
> doc:
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/concepts/overview/#idle-state-retention-time
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which 
> means the state retention is not enabled. 
>  
> However, my testing shows:
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which
> means the state is kept permanently!





[jira] [Commented] (FLINK-34001) doc of "Configure Operator-level State TTL" error

2024-01-07 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804081#comment-17804081
 ] 

xuyang commented on FLINK-34001:


[~qingyue] Users may be confused and think that "State retention is disabled" 
means that the state is not retained, when in fact it means that the state is 
retained permanently, without expiration.

BTW, I think [~luca.yang] this PR is an improvement, not a bug, right?

> doc of "Configure Operator-level State TTL" error
> -
>
> Key: FLINK-34001
> URL: https://issues.apache.org/jira/browse/FLINK-34001
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.18.0
>Reporter: yong yang
>Priority: Major
>
> doc:
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/concepts/overview/#idle-state-retention-time
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which 
> means the state retention is not enabled. 
>  
> However, my testing shows:
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which
> means the state is kept permanently!





[jira] [Commented] (FLINK-33611) Support Large Protobuf Schemas

2024-01-07 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804080#comment-17804080
 ] 

Benchao Li commented on FLINK-33611:


[~dsaisharath] Thanks for the analysis and the effort to improve the Flink 
Protobuf format.

bq. Apart from that, making the code change to reduce too many split methods 
has the most impact in supporting large schemas as I found that method names 
are always included in the constant pool even when the code size is too large 
from my experiment. In fact, this is the main reason which causes compilation 
errors with "too many constants error"

I'm wondering if there is a real case that will run into this; if so, I think 
it's still worth improving if there is a way.

bq. With that being said, I would still prefer to keep the changes to reuse 
variable names since the change itself is non-intrusive, harmless, and can only 
improve the performance for compilation. Please let me know your thoughts

I'm inclined not to include it for now, since the code does not solve a real 
problem yet, and it might add a small maintenance burden, since other 
contributors need to understand the code and its intention.

If there is a significant improvement in compilation, let's reconsider 
this; what do you think?


> Support Large Protobuf Schemas
> --
>
> Key: FLINK-33611
> URL: https://issues.apache.org/jira/browse/FLINK-33611
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.18.0
>Reporter: Sai Sharath Dandi
>Assignee: Sai Sharath Dandi
>Priority: Major
>  Labels: pull-request-available
>
> h3. Background
> Flink serializes and deserializes protobuf format data by calling the decode 
> or encode method in GeneratedProtoToRow_XXX.java generated by codegen to 
> parse byte[] data into Protobuf Java objects. FLINK-32650 has introduced the 
> ability to split the generated code to improve the performance for large 
> Protobuf schemas. However, this is still not sufficient to support some 
> larger protobuf schemas as the generated code exceeds the java constant pool 
> size [limit|https://en.wikipedia.org/wiki/Java_class_file#The_constant_pool] 
> and we can see errors like "Too many constants" when trying to compile the 
> generated code. 
> *Solution*
> Since we already have the split code functionality already introduced, the 
> main proposal here is to now reuse the variable names across different split 
> method scopes. This will greatly reduce the constant pool size. One more 
> optimization is to only split the last code segment also only when the size 
> exceeds split threshold limit. Currently, the last segment of the generated 
> code is always being split which can lead to too many split methods and thus 
> exceed the constant pool size limit
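
The variable-name reuse described in the solution above can be illustrated 
with a toy splitter (invented names; the real generator lives in 
flink-protobuf): each split method reuses the same local variable name, so 
far fewer distinct identifiers land in the class file's constant pool.

```java
import java.util.ArrayList;
import java.util.List;

/** Toy illustration of the name-reuse idea (invented names, not the
 *  flink-protobuf generator): the decode logic is emitted as numbered split
 *  methods, and every method body reuses the same local variable name "tmp"
 *  instead of a globally unique name per field. */
public class SplitCodegenSketch {
    static final int SPLIT_THRESHOLD = 3; // statements per split method (toy value)

    public static List<String> generate(int fieldCount) {
        List<String> methods = new ArrayList<>();
        StringBuilder body = new StringBuilder();
        int inBody = 0;
        int methodNo = 0;
        for (int i = 0; i < fieldCount; i++) {
            // "tmp" is legal here because each split method is its own scope;
            // a naive generator would emit a unique name per field instead.
            body.append("Object tmp = readField(").append(i).append(");\n");
            if (++inBody == SPLIT_THRESHOLD) {
                methods.add("void split" + (methodNo++) + "() {\n" + body + "}");
                body.setLength(0);
                inBody = 0;
            }
        }
        if (inBody > 0) { // emit the trailing segment only when it is non-empty
            methods.add("void split" + methodNo + "() {\n" + body + "}");
        }
        return methods;
    }

    public static void main(String[] args) {
        System.out.println(generate(7).size()); // 3 split methods for 7 fields
    }
}
```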





[jira] [Commented] (FLINK-34001) doc of "Configure Operator-level State TTL" error

2024-01-07 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804079#comment-17804079
 ] 

Jane Chan commented on FLINK-34001:
---

Hi [~luca.yang] and [~xuyangzhong]; please correct me if I'm wrong, but "state 
retention is disabled" stands for "state never expires".
You can check the explanation of `table.exec.state.ttl` in 
[config|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/config/]
{quote}Specifies a minimum time interval for how long idle state (i.e. state 
which was not updated), will be retained. State will never be cleared until it 
was idle for less than the minimum time, and will be cleared at some time after 
it was idle. Default is never clean-up the state. NOTE: Cleaning up state 
requires additional overhead for bookkeeping. Default value is 0, which means 
that it will never clean up state.
{quote}

> doc of "Configure Operator-level State TTL" error
> -
>
> Key: FLINK-34001
> URL: https://issues.apache.org/jira/browse/FLINK-34001
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.18.0
>Reporter: yong yang
>Priority: Major
>
> doc:
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/concepts/overview/#idle-state-retention-time
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which 
> means the state retention is not enabled. 
>  
> However, my testing shows:
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which
> means the state is kept permanently!





[jira] [Commented] (FLINK-24024) Support session Window TVF

2024-01-07 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804078#comment-17804078
 ] 

xuyang commented on FLINK-24024:


Hi [~Sergey Nuyanzin], would you like to review this PR? It is ready and 
the CI has passed.

> Support session Window TVF 
> ---
>
> Key: FLINK-24024
> URL: https://issues.apache.org/jira/browse/FLINK-24024
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Jing Zhang
>Assignee: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available, stale-assigned
>
>  
>  # Fix calcite syntax  CALCITE-4337
>  # Introduce session window TVF in Flink
>  





[jira] [Commented] (FLINK-34001) doc of "Configure Operator-level State TTL" error

2024-01-07 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804076#comment-17804076
 ] 

xuyang commented on FLINK-34001:


[~luca.yang] Would you like to fix it?

cc [~qingyue] for a double check.

> doc of "Configure Operator-level State TTL" error
> -
>
> Key: FLINK-34001
> URL: https://issues.apache.org/jira/browse/FLINK-34001
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.18.0
>Reporter: yong yang
>Priority: Major
>
> doc:
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/table/concepts/overview/#idle-state-retention-time
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which 
> means the state retention is not enabled. 
>  
> However, my testing shows:
> The current TTL value for both left and right side is {{{}"0 ms"{}}}, which
> means the state is kept permanently!





Re: [PR] [FLINK-25421] Port JDBC Sink to new Unified Sink API (FLIP-143) [flink-connector-jdbc]

2024-01-07 Thread via GitHub


snuyanzin commented on code in PR #2:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/2#discussion_r1444114157


##
flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/sink/writer/ExactlyOnceJdbcWriterTest.java:
##
@@ -0,0 +1,105 @@
+package org.apache.flink.connector.jdbc.sink.writer;
+
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.jdbc.JdbcExactlyOnceOptions;
+import org.apache.flink.connector.jdbc.JdbcExecutionOptions;
+import 
org.apache.flink.connector.jdbc.datasource.connections.JdbcConnectionProvider;
+import 
org.apache.flink.connector.jdbc.datasource.connections.xa.SimpleXaConnectionProvider;
+import org.apache.flink.connector.jdbc.sink.committer.JdbcCommitable;
+import org.apache.flink.connector.jdbc.testutils.tables.templates.BooksTable;
+
+import org.junit.jupiter.api.Test;
+
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+/** */

Review Comment:
   I guess this Javadoc needs to be filled in.






Re: [PR] [FLINK-25421] Port JDBC Sink to new Unified Sink API (FLIP-143) [flink-connector-jdbc]

2024-01-07 Thread via GitHub


snuyanzin commented on code in PR #2:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/2#discussion_r1444114075


##
flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/sink/writer/AlLeastOnceJdbcWriterTest.java:
##
@@ -0,0 +1,88 @@
+package org.apache.flink.connector.jdbc.sink.writer;
+
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.jdbc.JdbcExactlyOnceOptions;
+import org.apache.flink.connector.jdbc.JdbcExecutionOptions;
+import 
org.apache.flink.connector.jdbc.datasource.connections.JdbcConnectionProvider;
+import 
org.apache.flink.connector.jdbc.datasource.connections.SimpleJdbcConnectionProvider;
+import org.apache.flink.connector.jdbc.sink.committer.JdbcCommitable;
+import org.apache.flink.connector.jdbc.testutils.tables.templates.BooksTable;
+
+import org.junit.jupiter.api.Test;
+
+import java.util.Collection;
+import java.util.List;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+/** */

Review Comment:
   I guess this Javadoc needs to be filled in.



##
flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/sink/writer/BaseJdbcWriterTest.java:
##
@@ -0,0 +1,210 @@
+package org.apache.flink.connector.jdbc.sink.writer;
+
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.operators.MailboxExecutor;
+import org.apache.flink.api.common.operators.ProcessingTimeService;
+import org.apache.flink.api.common.serialization.SerializationSchema;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.connector.sink2.Sink;
+import org.apache.flink.api.connector.sink2.SinkWriter;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.jdbc.JdbcExactlyOnceOptions;
+import org.apache.flink.connector.jdbc.JdbcExecutionOptions;
+import org.apache.flink.connector.jdbc.databases.derby.DerbyTestBase;
+import org.apache.flink.connector.jdbc.datasource.connections.JdbcConnectionProvider;
+import org.apache.flink.connector.jdbc.datasource.statements.JdbcQueryStatement;
+import org.apache.flink.connector.jdbc.datasource.statements.SimpleJdbcQueryStatement;
+import org.apache.flink.connector.jdbc.internal.JdbcOutputSerializer;
+import org.apache.flink.connector.jdbc.sink.committer.JdbcCommitable;
+import org.apache.flink.connector.jdbc.testutils.TableManaged;
+import org.apache.flink.connector.jdbc.testutils.tables.templates.BooksTable;
+import org.apache.flink.metrics.groups.SinkWriterMetricGroup;
+import org.apache.flink.util.StringUtils;
+import org.apache.flink.util.UserCodeClassLoader;
+
+import org.junit.jupiter.api.AfterEach;
+import org.junit.jupiter.api.BeforeEach;
+
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.OptionalLong;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.connector.jdbc.JdbcTestFixture.TEST_DATA;
+import static org.assertj.core.api.Assertions.assertThat;
+
+/** */

Review Comment:
   I guess this needs to be filled in.






Re: [PR] [FLINK-25421] Port JDBC Sink to new Unified Sink API (FLIP-143) [flink-connector-jdbc]

2024-01-07 Thread via GitHub


snuyanzin commented on code in PR #2:
URL: https://github.com/apache/flink-connector-jdbc/pull/2#discussion_r1444113959


##
flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/xa/JobSubtask.java:
##
@@ -0,0 +1,50 @@
+package org.apache.flink.connector.jdbc.xa;
+
+import org.apache.flink.api.common.functions.RuntimeContext;
+import org.apache.flink.api.connector.sink2.Sink;
+
+import java.io.Serializable;
+
+/** */

Review Comment:
   I guess this needs to be filled in.






Re: [PR] [FLINK-25421] Port JDBC Sink to new Unified Sink API (FLIP-143) [flink-connector-jdbc]

2024-01-07 Thread via GitHub


snuyanzin commented on code in PR #2:
URL: https://github.com/apache/flink-connector-jdbc/pull/2#discussion_r1444113628


##
flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/sink/committer/JdbcCommitter.java:
##
@@ -0,0 +1,62 @@
+package org.apache.flink.connector.jdbc.sink.committer;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.connector.sink2.Committer;
+import org.apache.flink.connector.base.DeliveryGuarantee;
+import org.apache.flink.connector.jdbc.JdbcExactlyOnceOptions;
+import 
org.apache.flink.connector.jdbc.datasource.connections.JdbcConnectionProvider;
+import 
org.apache.flink.connector.jdbc.datasource.connections.xa.XaConnectionProvider;
+import 
org.apache.flink.connector.jdbc.datasource.transactions.xa.XaTransaction;
+import 
org.apache.flink.connector.jdbc.datasource.transactions.xa.domain.TransactionId;
+import org.apache.flink.connector.jdbc.sink.writer.JdbcWriterState;
+import org.apache.flink.util.FlinkRuntimeException;
+
+import java.io.IOException;
+import java.util.Collection;
+
+/** */

Review Comment:
   I guess this needs to be filled in.






Re: [PR] [FLINK-25421] Port JDBC Sink to new Unified Sink API (FLIP-143) [flink-connector-jdbc]

2024-01-07 Thread via GitHub


snuyanzin commented on code in PR #2:
URL: https://github.com/apache/flink-connector-jdbc/pull/2#discussion_r1444113533


##
flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/sink/committer/JdbcCommitable.java:
##
@@ -0,0 +1,39 @@
+package org.apache.flink.connector.jdbc.sink.committer;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.connector.jdbc.datasource.transactions.xa.XaTransaction;
+
+import javax.annotation.Nullable;
+import javax.transaction.xa.Xid;
+
+import java.io.Serializable;
+import java.util.Optional;
+
+/** */

Review Comment:
   I guess this needs to be filled in.



##
flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/sink/committer/JdbcCommitableSerializer.java:
##
@@ -0,0 +1,37 @@
+package org.apache.flink.connector.jdbc.sink.committer;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.connector.jdbc.xa.XidSerializer;
+import org.apache.flink.core.io.SimpleVersionedSerializer;
+import org.apache.flink.core.memory.DataInputDeserializer;
+import org.apache.flink.core.memory.DataOutputSerializer;
+
+import javax.transaction.xa.Xid;
+
+import java.io.IOException;
+
+/** */

Review Comment:
   I guess this needs to be filled in.
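For context, the `SimpleVersionedSerializer` contract that `JdbcCommitableSerializer` implements boils down to pairing each byte payload with a version number so that older checkpointed state can still be read after an upgrade. Below is a minimal, stdlib-only sketch of that pattern; `StringPayloadSerializer` is a hypothetical stand-in for the real Xid-based serializer, not the PR's code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

/** Version-prefixed serializer sketch mirroring SimpleVersionedSerializer's contract. */
public class StringPayloadSerializer {
    static final int VERSION = 1;

    int getVersion() {
        return VERSION;
    }

    byte[] serialize(String value) throws IOException {
        // Payload only; the surrounding framework stores the version alongside the bytes.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.writeUTF(value);
        }
        return bytes.toByteArray();
    }

    String deserialize(int version, byte[] data) throws IOException {
        // The version recorded at write time drives format selection on read.
        if (version != VERSION) {
            throw new IOException("Unsupported version: " + version);
        }
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(data))) {
            return in.readUTF();
        }
    }

    public static void main(String[] args) throws IOException {
        StringPayloadSerializer s = new StringPayloadSerializer();
        byte[] data = s.serialize("xa-commitable");
        System.out.println(s.deserialize(s.getVersion(), data)); // prints "xa-commitable"
    }
}
```

The design point is that `deserialize` receives the version the data was written with, which is exactly what makes evolving the committable format across releases possible.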






Re: [PR] [FLINK-25421] Port JDBC Sink to new Unified Sink API (FLIP-143) [flink-connector-jdbc]

2024-01-07 Thread via GitHub


snuyanzin commented on code in PR #2:
URL: https://github.com/apache/flink-connector-jdbc/pull/2#discussion_r1444113024


##
flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/sink/writer/JdbcWriterState.java:
##
@@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.jdbc.sink.writer;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.connector.jdbc.datasource.transactions.xa.domain.TransactionId;
+
+import org.apache.commons.lang3.builder.EqualsBuilder;
+import org.apache.commons.lang3.builder.HashCodeBuilder;
+import org.apache.commons.lang3.builder.ToStringBuilder;
+import org.apache.commons.lang3.builder.ToStringStyle;
+
+import javax.annotation.concurrent.ThreadSafe;
+import javax.transaction.xa.Xid;
+
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+
+/** Thread-safe (assuming immutable {@link Xid} implementation). */
+@ThreadSafe
+@Internal
+public class JdbcWriterState implements Serializable {
+    private final Collection<Xid> prepared;
+    private final Collection<Xid> hanging;
+
+    public static JdbcWriterState empty() {
+        return new JdbcWriterState(Collections.emptyList(), Collections.emptyList());
+    }
+
+    public static JdbcWriterState of(
+            Collection<Xid> prepared, Collection<Xid> hanging) {
+        return new JdbcWriterState(
+                Collections.unmodifiableList(new ArrayList<>(prepared)),
+                Collections.unmodifiableList(new ArrayList<>(hanging)));
+    }
+
+    protected JdbcWriterState(
+            Collection<Xid> prepared, Collection<Xid> hanging) {
+        this.prepared = prepared;
+        this.hanging = hanging;
+    }
+
+    /**
+     * @return immutable collection of prepared XA transactions to {@link
+     *     javax.transaction.xa.XAResource#commit commit}.
+     */
+    public Collection<Xid> getPrepared() {
+        return prepared;
+    }
+
+    /**
+     * @return immutable collection of XA transactions to {@link
+     *     javax.transaction.xa.XAResource#rollback rollback} (if they were prepared) or {@link
+     *     javax.transaction.xa.XAResource#end end} (if they were only started).
+     */
+    public Collection<Xid> getHanging() {
+        return hanging;
+    }
+
+    @Override
+    public boolean equals(Object o) {
+        if (this == o) {
+            return true;
+        }
+        if (o == null || getClass() != o.getClass()) {
+            return false;
+        }
+
+        JdbcWriterState that = (JdbcWriterState) o;
+        return new EqualsBuilder()
+                .append(prepared, that.prepared)
+                .append(hanging, that.hanging)
+                .isEquals();

Review Comment:
   Is there no way to just use `equals`?
   These fields seem to be collections only, so plain equality, e.g. `Objects.equals`, should work...
   I'm confused by the creation of a new builder object each time `equals` is called.
   Or did I miss anything?
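The alternative the reviewer is suggesting can be sketched as follows. This is a hypothetical plain-JDK rewrite, not the PR's code; `WriterStateSketch` and its `String`-element lists just stand in for the Xid collections. `Objects.equals`/`Objects.hash` avoid allocating a builder per call and behave identically for fields whose runtime types define element-wise equality:

```java
import java.util.List;
import java.util.Objects;

/** Hypothetical plain-JDK equals/hashCode for a two-collection state holder. */
class WriterStateSketch {
    private final List<String> prepared;
    private final List<String> hanging;

    WriterStateSketch(List<String> prepared, List<String> hanging) {
        this.prepared = prepared;
        this.hanging = hanging;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (o == null || getClass() != o.getClass()) {
            return false;
        }
        WriterStateSketch that = (WriterStateSketch) o;
        // List.equals is element-wise, so no helper/builder object is needed
        return Objects.equals(prepared, that.prepared)
                && Objects.equals(hanging, that.hanging);
    }

    @Override
    public int hashCode() {
        return Objects.hash(prepared, hanging);
    }

    public static void main(String[] args) {
        WriterStateSketch a = new WriterStateSketch(List.of("x1"), List.of());
        WriterStateSketch b = new WriterStateSketch(List.of("x1"), List.of());
        System.out.println(a.equals(b)); // prints "true"
    }
}
```

One caveat: in the actual PR the fields are declared as `Collection`, and the `Collection` interface itself does not mandate element-wise `equals` (only `List` and `Set` do), which may be a reason to keep an explicit comparison strategy.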






Re: [PR] [FLINK-25421] Port JDBC Sink to new Unified Sink API (FLIP-143) [flink-connector-jdbc]

2024-01-07 Thread via GitHub


snuyanzin commented on code in PR #2:
URL: https://github.com/apache/flink-connector-jdbc/pull/2#discussion_r1444112306


##
pom.xml:
##
@@ -47,7 +47,7 @@ under the License.
 
 
 
-<flink.version>1.16.2</flink.version>
+<flink.version>1.18.0</flink.version>

Review Comment:
   Are we going to drop support for Flink 1.16 and 1.17 together with this commit?






[jira] [Commented] (FLINK-34013) ProfilingServiceTest.testRollingDeletion is unstable on AZP

2024-01-07 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804073#comment-17804073
 ] 

Sergey Nuyanzin commented on FLINK-34013:
-

I think it is related to FLINK-33433
[~Yu Chen], [~yunta] could you please have a look?

> ProfilingServiceTest.testRollingDeletion is unstable on AZP
> ---
>
> Key: FLINK-34013
> URL: https://issues.apache.org/jira/browse/FLINK-34013
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.19.0
>Reporter: Sergey Nuyanzin
>Priority: Critical
>  Labels: test-stability
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56073=logs=0e7be18f-84f2-53f0-a32d-4a5e4a174679=7c1d86e3-35bd-5fd5-3b7c-30c126a78702=8258
>  fails as 
> {noformat}
> Jan 06 02:09:28 org.opentest4j.AssertionFailedError: expected: <2> but was: 
> <3>
> Jan 06 02:09:28   at 
> org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
> Jan 06 02:09:28   at 
> org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
> Jan 06 02:09:28   at 
> org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)
> Jan 06 02:09:28   at 
> org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:150)
> Jan 06 02:09:28   at 
> org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:145)
> Jan 06 02:09:28   at 
> org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:531)
> Jan 06 02:09:28   at 
> org.apache.flink.runtime.util.profiler.ProfilingServiceTest.verifyRollingDeletionWorks(ProfilingServiceTest.java:167)
> Jan 06 02:09:28   at 
> org.apache.flink.runtime.util.profiler.ProfilingServiceTest.testRollingDeletion(ProfilingServiceTest.java:117)
> Jan 06 02:09:28   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 06 02:09:28   at 
> java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
> Jan 06 02:09:28   at 
> java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
> Jan 06 02:09:28   at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
> Jan 06 02:09:28   at 
> java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
> Jan 06 02:09:28   at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-22765) ExceptionUtilsITCase.testIsMetaspaceOutOfMemoryError is unstable

2024-01-07 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804071#comment-17804071
 ] 

Sergey Nuyanzin commented on FLINK-22765:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56084=logs=a657ddbf-d986-5381-9649-342d9c92e7fb=dc085d4a-05c8-580e-06ab-21f5624dab16=8776

> ExceptionUtilsITCase.testIsMetaspaceOutOfMemoryError is unstable
> 
>
> Key: FLINK-22765
> URL: https://issues.apache.org/jira/browse/FLINK-22765
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0, 1.13.5, 1.15.0, 1.17.2, 1.19.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available, stale-assigned, test-stability
> Fix For: 1.14.0, 1.16.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18292=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=a99e99c7-21cd-5a1f-7274-585e62b72f56
> {code}
> May 25 00:56:38 java.lang.AssertionError: 
> May 25 00:56:38 
> May 25 00:56:38 Expected: is ""
> May 25 00:56:38  but: was "The system is out of resources.\nConsult the 
> following stack trace for details."
> May 25 00:56:38   at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> May 25 00:56:38   at org.junit.Assert.assertThat(Assert.java:956)
> May 25 00:56:38   at org.junit.Assert.assertThat(Assert.java:923)
> May 25 00:56:38   at 
> org.apache.flink.runtime.util.ExceptionUtilsITCase.run(ExceptionUtilsITCase.java:94)
> May 25 00:56:38   at 
> org.apache.flink.runtime.util.ExceptionUtilsITCase.testIsMetaspaceOutOfMemoryError(ExceptionUtilsITCase.java:70)
> May 25 00:56:38   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> May 25 00:56:38   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 25 00:56:38   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> May 25 00:56:38   at java.lang.reflect.Method.invoke(Method.java:498)
> May 25 00:56:38   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> May 25 00:56:38   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> May 25 00:56:38   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> May 25 00:56:38   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> May 25 00:56:38   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> May 25 00:56:38   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> May 25 00:56:38   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> May 25 00:56:38   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> May 25 00:56:38   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> May 25 00:56:38   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> May 25 00:56:38   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> May 25 00:56:38   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> May 25 00:56:38   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> May 25 00:56:38   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> May 25 00:56:38   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> May 25 00:56:38   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> May 25 00:56:38   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> May 25 00:56:38   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> May 25 00:56:38   at 
> 

[jira] [Commented] (FLINK-34012) Flink python fails with can't read file '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google

2024-01-07 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804067#comment-17804067
 ] 

Sergey Nuyanzin commented on FLINK-34012:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56084=logs=e92ecf6d-e207-5a42-7ff7-528ff0c5b259=40fc352e-9b4c-5fd8-363f-628f24b01ec2=20738

> Flink python fails with  can't read file 
> '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google
> ---
>
> Key: FLINK-34012
> URL: https://issues.apache.org/jira/browse/FLINK-34012
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: Sergey Nuyanzin
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.19.0
>
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56073=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901=20755
> {noformat}
> Jan 06 03:02:43 Installing collected packages: types-pytz, 
> types-python-dateutil, types-protobuf
> Jan 06 03:02:43 Successfully installed types-protobuf-4.24.0.20240106 
> types-python-dateutil-2.8.19.20240106 types-pytz-2023.3.1.1
> Jan 06 03:02:44 mypy: can't read file 
> '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google': No 
> such file or directory
> Jan 06 03:02:44 Installing missing stub packages:
> Jan 06 03:02:44 /__w/2/s/flink-python/dev/.conda/bin/python -m pip install 
> types-protobuf types-python-dateutil types-pytz
> {noformat}





[jira] [Commented] (FLINK-34012) Flink python fails with can't read file '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google

2024-01-07 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804070#comment-17804070
 ] 

Sergey Nuyanzin commented on FLINK-34012:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56084=logs=60960eae-6f09-579e-371e-29814bdd1adc=7a70c083-6a74-5348-5106-30a76c29d8fa=20371



[jira] [Commented] (FLINK-34012) Flink python fails with can't read file '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google

2024-01-07 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804072#comment-17804072
 ] 

Sergey Nuyanzin commented on FLINK-34012:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56084=logs=3e4dd1a2-fe2f-5e5d-a581-48087e718d53=b4612f28-e3b5-5853-8a8b-610ae894217a=20755



[jira] [Commented] (FLINK-31472) AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread

2024-01-07 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804065#comment-17804065
 ] 

Sergey Nuyanzin commented on FLINK-31472:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56084=logs=1c002d28-a73d-5309-26ee-10036d8476b4=d1c117a6-8f13-5466-55f0-d48dbb767fcd=10392

> AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread
> 
>
> Key: FLINK-31472
> URL: https://issues.apache.org/jira/browse/FLINK-31472
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.16.1, 1.19.0
>Reporter: Ran Tao
>Assignee: Ahmed Hamdy
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> when run mvn clean test, this case failed occasionally.
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.955 
> s <<< FAILURE! - in 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest
> [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>   Time elapsed: 0.492 s  <<< ERROR!
> java.lang.IllegalStateException: Illegal thread detected. This method must be 
> called from inside the mailbox thread!
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>         at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>         at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>         at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
>         at 
> 

[jira] [Commented] (FLINK-34012) Flink python fails with can't read file '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google

2024-01-07 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804068#comment-17804068
 ] 

Sergey Nuyanzin commented on FLINK-34012:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56084=logs=b53e1644-5cb4-5a3b-5d48-f523f39bcf06=b68c9f5c-04c9-5c75-3862-a3a27aabbce3=20819



[jira] [Commented] (FLINK-34012) Flink python fails with can't read file '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google

2024-01-07 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804066#comment-17804066
 ] 

Sergey Nuyanzin commented on FLINK-34012:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56084=logs=bf5e383b-9fd3-5f02-ca1c-8f788e2e76d3=85189c57-d8a0-5c9c-b61d-fc05cfac62cf=20560



[jira] [Commented] (FLINK-34012) Flink python fails with can't read file '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google

2024-01-07 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804064#comment-17804064
 ] 

Sergey Nuyanzin commented on FLINK-34012:
-----------------------------------------

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56084=logs=821b528f-1eed-5598-a3b4-7f748b13f261=6bb545dd-772d-5d8c-f258-f5085fba3295=20383

> Flink python fails with  can't read file 
> '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google
> --------------------------------------------------------------------------------
>
> Key: FLINK-34012
> URL: https://issues.apache.org/jira/browse/FLINK-34012
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: Sergey Nuyanzin
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.19.0
>
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56073=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901=20755
> {noformat}
> Jan 06 03:02:43 Installing collected packages: types-pytz, 
> types-python-dateutil, types-protobuf
> Jan 06 03:02:43 Successfully installed types-protobuf-4.24.0.20240106 
> types-python-dateutil-2.8.19.20240106 types-pytz-2023.3.1.1
> Jan 06 03:02:44 mypy: can't read file 
> '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google': No 
> such file or directory
> Jan 06 03:02:44 Installing missing stub packages:
> Jan 06 03:02:44 /__w/2/s/flink-python/dev/.conda/bin/python -m pip install 
> types-protobuf types-python-dateutil types-pytz
> {noformat}





[jira] [Commented] (FLINK-34012) Flink python fails with can't read file '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google

2024-01-07 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804063#comment-17804063
 ] 

Sergey Nuyanzin commented on FLINK-34012:
-----------------------------------------

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56084=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901=20755

> Flink python fails with  can't read file 
> '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google
> --------------------------------------------------------------------------------
>
> Key: FLINK-34012
> URL: https://issues.apache.org/jira/browse/FLINK-34012
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Reporter: Sergey Nuyanzin
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.19.0
>
>
> This build 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56073=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901=20755
> {noformat}
> Jan 06 03:02:43 Installing collected packages: types-pytz, 
> types-python-dateutil, types-protobuf
> Jan 06 03:02:43 Successfully installed types-protobuf-4.24.0.20240106 
> types-python-dateutil-2.8.19.20240106 types-pytz-2023.3.1.1
> Jan 06 03:02:44 mypy: can't read file 
> '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google': No 
> such file or directory
> Jan 06 03:02:44 Installing missing stub packages:
> Jan 06 03:02:44 /__w/2/s/flink-python/dev/.conda/bin/python -m pip install 
> types-protobuf types-python-dateutil types-pytz
> {noformat}





[jira] [Created] (FLINK-34013) ProfilingServiceTest.testRollingDeletion is unstable on AZP

2024-01-07 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-34013:
-----------------------------------

 Summary: ProfilingServiceTest.testRollingDeletion is unstable on 
AZP
 Key: FLINK-34013
 URL: https://issues.apache.org/jira/browse/FLINK-34013
 Project: Flink
  Issue Type: Bug
  Components: API / Core
Affects Versions: 1.19.0
Reporter: Sergey Nuyanzin


This build 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56073=logs=0e7be18f-84f2-53f0-a32d-4a5e4a174679=7c1d86e3-35bd-5fd5-3b7c-30c126a78702=8258
 fails as 
{noformat}
Jan 06 02:09:28 org.opentest4j.AssertionFailedError: expected: <2> but was: <3>
Jan 06 02:09:28 at 
org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
Jan 06 02:09:28 at 
org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
Jan 06 02:09:28 at 
org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)
Jan 06 02:09:28 at 
org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:150)
Jan 06 02:09:28 at 
org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:145)
Jan 06 02:09:28 at 
org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:531)
Jan 06 02:09:28 at 
org.apache.flink.runtime.util.profiler.ProfilingServiceTest.verifyRollingDeletionWorks(ProfilingServiceTest.java:167)
Jan 06 02:09:28 at 
org.apache.flink.runtime.util.profiler.ProfilingServiceTest.testRollingDeletion(ProfilingServiceTest.java:117)
Jan 06 02:09:28 at java.lang.reflect.Method.invoke(Method.java:498)
Jan 06 02:09:28 at 
java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
Jan 06 02:09:28 at 
java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
Jan 06 02:09:28 at 
java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
Jan 06 02:09:28 at 
java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
Jan 06 02:09:28 at 
java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
{noformat}
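The assertion failure above (`expected: <2> but was: <3>`) suggests the profiling service keeps a bounded history of result files and deletes the oldest once the limit is exceeded, and that the test found one file too many. The sketch below is a minimal, hypothetical illustration of such a rolling-deletion policy; the function name and file layout are invented and are not Flink's actual implementation.

```python
import os
import tempfile


def apply_rolling_deletion(directory, max_files):
    """Keep only the `max_files` newest files in `directory`, deleting the rest.

    Hypothetical stand-in for the history-size limit that
    ProfilingServiceTest.testRollingDeletion appears to verify.
    """
    files = sorted(
        (os.path.join(directory, name) for name in os.listdir(directory)),
        key=os.path.getmtime,
    )
    stale = files[:-max_files] if max_files else files
    for path in stale:
        os.remove(path)


with tempfile.TemporaryDirectory() as d:
    for i in range(3):
        path = os.path.join(d, f"profile-{i}")
        with open(path, "w") as fh:
            fh.write("x")
        os.utime(path, (i, i))  # force distinct, increasing mtimes
    apply_rolling_deletion(d, max_files=2)
    remaining = sorted(os.listdir(d))
    print(remaining)  # → ['profile-1', 'profile-2']
```

If the service creates a new file asynchronously while the deletion sweep counts existing ones, the count can transiently exceed the limit, which is one plausible source of the off-by-one flakiness reported here.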





[jira] [Commented] (FLINK-22765) ExceptionUtilsITCase.testIsMetaspaceOutOfMemoryError is unstable

2024-01-07 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804062#comment-17804062
 ] 

Sergey Nuyanzin commented on FLINK-22765:
-----------------------------------------

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56073=logs=a657ddbf-d986-5381-9649-342d9c92e7fb=dc085d4a-05c8-580e-06ab-21f5624dab16=9113

> ExceptionUtilsITCase.testIsMetaspaceOutOfMemoryError is unstable
> ----------------------------------------------------------------
>
> Key: FLINK-22765
> URL: https://issues.apache.org/jira/browse/FLINK-22765
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0, 1.13.5, 1.15.0, 1.17.2, 1.19.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available, stale-assigned, test-stability
> Fix For: 1.14.0, 1.16.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18292=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=a99e99c7-21cd-5a1f-7274-585e62b72f56
> {code}
> May 25 00:56:38 java.lang.AssertionError: 
> May 25 00:56:38 
> May 25 00:56:38 Expected: is ""
> May 25 00:56:38  but: was "The system is out of resources.\nConsult the 
> following stack trace for details."
> May 25 00:56:38   at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> May 25 00:56:38   at org.junit.Assert.assertThat(Assert.java:956)
> May 25 00:56:38   at org.junit.Assert.assertThat(Assert.java:923)
> May 25 00:56:38   at 
> org.apache.flink.runtime.util.ExceptionUtilsITCase.run(ExceptionUtilsITCase.java:94)
> May 25 00:56:38   at 
> org.apache.flink.runtime.util.ExceptionUtilsITCase.testIsMetaspaceOutOfMemoryError(ExceptionUtilsITCase.java:70)
> May 25 00:56:38   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> May 25 00:56:38   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 25 00:56:38   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> May 25 00:56:38   at java.lang.reflect.Method.invoke(Method.java:498)
> May 25 00:56:38   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> May 25 00:56:38   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> May 25 00:56:38   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> May 25 00:56:38   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> May 25 00:56:38   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> May 25 00:56:38   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> May 25 00:56:38   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> May 25 00:56:38   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> May 25 00:56:38   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> May 25 00:56:38   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> May 25 00:56:38   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> May 25 00:56:38   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> May 25 00:56:38   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> May 25 00:56:38   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> May 25 00:56:38   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> May 25 00:56:38   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> May 25 00:56:38   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> May 25 00:56:38   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> May 25 00:56:38   at 
> 

[jira] [Created] (FLINK-34012) Flink python fails with can't read file '/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google

2024-01-07 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-34012:
-----------------------------------

 Summary: Flink python fails with  can't read file 
'/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google
 Key: FLINK-34012
 URL: https://issues.apache.org/jira/browse/FLINK-34012
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Reporter: Sergey Nuyanzin
 Fix For: 1.19.0


This build 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56073=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=c67e71ed-6451-5d26-8920-5a8cf9651901=20755
{noformat}
Jan 06 03:02:43 Installing collected packages: types-pytz, 
types-python-dateutil, types-protobuf
Jan 06 03:02:43 Successfully installed types-protobuf-4.24.0.20240106 
types-python-dateutil-2.8.19.20240106 types-pytz-2023.3.1.1
Jan 06 03:02:44 mypy: can't read file 
'/__w/2/s/flink-python/dev/.conda/lib/python3.10/site-packages//google': No 
such file or directory
Jan 06 03:02:44 Installing missing stub packages:
Jan 06 03:02:44 /__w/2/s/flink-python/dev/.conda/bin/python -m pip install 
types-protobuf types-python-dateutil types-pytz

{noformat}
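The path mypy rejects contains a doubled slash (`site-packages//google`), which typically appears when a directory prefix that already ends in `/` is concatenated with a segment that starts with `/`. The report does not state the root cause; the snippet below only reproduces the shape of the failure with an invented prefix and shows how `os.path.join` plus `os.path.normpath` avoid it.

```python
import os

# Invented example prefix standing in for the conda site-packages directory.
site_packages = "/tmp/conda/lib/python3.10/site-packages/"

# Naive string concatenation keeps both separators, yielding the doubled
# slash seen in the error message.
naive = site_packages + "/google"
print(naive)  # → /tmp/conda/lib/python3.10/site-packages//google

# os.path.join handles the trailing slash on the prefix, and normpath
# collapses any remaining redundant separators.
clean = os.path.normpath(os.path.join(site_packages, "google"))
print(clean)  # → /tmp/conda/lib/python3.10/site-packages/google
```

Note that `os.path.join(a, "/google")` would discard `a` entirely because the second argument is absolute, so the safe pattern is joining a relative segment and normalizing the result.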





[jira] [Commented] (FLINK-31472) AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread

2024-01-07 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804061#comment-17804061
 ] 

Sergey Nuyanzin commented on FLINK-31472:
-----------------------------------------

1.18: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56014=logs=1c002d28-a73d-5309-26ee-10036d8476b4=d1c117a6-8f13-5466-55f0-d48dbb767fcd=10570

> AsyncSinkWriterThrottlingTest failed with Illegal mailbox thread
> ----------------------------------------------------------------
>
> Key: FLINK-31472
> URL: https://issues.apache.org/jira/browse/FLINK-31472
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.17.0, 1.16.1, 1.19.0
>Reporter: Ran Tao
>Assignee: Ahmed Hamdy
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> when run mvn clean test, this case failed occasionally.
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.955 
> s <<< FAILURE! - in 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest
> [ERROR] 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize
>   Time elapsed: 0.492 s  <<< ERROR!
> java.lang.IllegalStateException: Illegal thread detected. This method must be 
> called from inside the mailbox thread!
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.checkIsMailboxThread(TaskMailboxImpl.java:262)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.TaskMailboxImpl.take(TaskMailboxImpl.java:137)
>         at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxExecutorImpl.yield(MailboxExecutorImpl.java:84)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.flush(AsyncSinkWriter.java:367)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriter.lambda$registerCallback$3(AsyncSinkWriter.java:315)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService$CallbackTask.onProcessingTime(TestProcessingTimeService.java:199)
>         at 
> org.apache.flink.streaming.runtime.tasks.TestProcessingTimeService.setCurrentTime(TestProcessingTimeService.java:76)
>         at 
> org.apache.flink.connector.base.sink.writer.AsyncSinkWriterThrottlingTest.testSinkThroughputShouldThrottleToHalfBatchSize(AsyncSinkWriterThrottlingTest.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>         at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>         at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>         at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>         at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>         at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>         at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>         at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>         at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>         at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>         at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>         at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
>         at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
>         at 
> 
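The exception in the stack trace above comes from `TaskMailboxImpl.checkIsMailboxThread`, a guard that rejects mailbox operations invoked from any thread other than the one that owns the mailbox; the test tripped it because `AsyncSinkWriter.flush` called `yield()` from a test timer callback running outside the mailbox thread. The following is a minimal Python analogue of such a single-owner-thread guard, not Flink's code.

```python
import threading


class Mailbox:
    """Toy analogue of a single-threaded mailbox: consuming calls must come
    from the designated owner thread, mirroring TaskMailboxImpl's check."""

    def __init__(self):
        self._owner = threading.current_thread()  # creator owns the mailbox
        self._queue = ["mail"]

    def _check_is_mailbox_thread(self):
        if threading.current_thread() is not self._owner:
            raise RuntimeError(
                "Illegal thread detected. This method must be "
                "called from inside the mailbox thread!"
            )

    def take(self):
        self._check_is_mailbox_thread()
        return self._queue.pop(0) if self._queue else None


mb = Mailbox()
print(mb.take())  # → mail  (owner thread: allowed)

errors = []


def call_from_other_thread():
    try:
        mb.take()
    except RuntimeError as e:
        errors.append(str(e))


t = threading.Thread(target=call_from_other_thread)
t.start()
t.join()
print(errors[0])  # the guard rejects the foreign thread
```

The guard makes threading bugs fail fast and loudly, which is why the test error surfaces as an `IllegalStateException` rather than a silent data race.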

[jira] [Commented] (FLINK-22765) ExceptionUtilsITCase.testIsMetaspaceOutOfMemoryError is unstable

2024-01-07 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804060#comment-17804060
 ] 

Sergey Nuyanzin commented on FLINK-22765:
-----------------------------------------

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56013=logs=a657ddbf-d986-5381-9649-342d9c92e7fb=dc085d4a-05c8-580e-06ab-21f5624dab16=8924

> ExceptionUtilsITCase.testIsMetaspaceOutOfMemoryError is unstable
> ----------------------------------------------------------------
>
> Key: FLINK-22765
> URL: https://issues.apache.org/jira/browse/FLINK-22765
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0, 1.13.5, 1.15.0, 1.17.2, 1.19.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available, stale-assigned, test-stability
> Fix For: 1.14.0, 1.16.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18292=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=a99e99c7-21cd-5a1f-7274-585e62b72f56
> {code}
> May 25 00:56:38 java.lang.AssertionError: 
> May 25 00:56:38 
> May 25 00:56:38 Expected: is ""
> May 25 00:56:38  but: was "The system is out of resources.\nConsult the 
> following stack trace for details."
> May 25 00:56:38   at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> May 25 00:56:38   at org.junit.Assert.assertThat(Assert.java:956)
> May 25 00:56:38   at org.junit.Assert.assertThat(Assert.java:923)
> May 25 00:56:38   at 
> org.apache.flink.runtime.util.ExceptionUtilsITCase.run(ExceptionUtilsITCase.java:94)
> May 25 00:56:38   at 
> org.apache.flink.runtime.util.ExceptionUtilsITCase.testIsMetaspaceOutOfMemoryError(ExceptionUtilsITCase.java:70)
> May 25 00:56:38   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> May 25 00:56:38   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 25 00:56:38   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> May 25 00:56:38   at java.lang.reflect.Method.invoke(Method.java:498)
> May 25 00:56:38   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> May 25 00:56:38   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> May 25 00:56:38   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> May 25 00:56:38   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> May 25 00:56:38   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> May 25 00:56:38   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> May 25 00:56:38   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> May 25 00:56:38   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> May 25 00:56:38   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> May 25 00:56:38   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> May 25 00:56:38   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> May 25 00:56:38   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> May 25 00:56:38   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> May 25 00:56:38   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> May 25 00:56:38   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> May 25 00:56:38   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> May 25 00:56:38   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> May 25 00:56:38   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> May 25 00:56:38   at 
> 

[jira] [Commented] (FLINK-27756) Fix intermittently failing test in AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds

2024-01-07 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804059#comment-17804059
 ] 

Sergey Nuyanzin commented on FLINK-27756:
-----------------------------------------

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55992=logs=2e8cb2f7-b2d3-5c62-9c05-cd756d33a819=2dd510a3-5041-5201-6dc3-54d310f68906=10705

> Fix intermittently failing test in 
> AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds
> --------------------------------------------------------
>
> Key: FLINK-27756
> URL: https://issues.apache.org/jira/browse/FLINK-27756
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.15.0, 1.17.0, 1.19.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> h2. Motivation
>  - One of the integration tests ({{checkLoggedSendTimesAreWithinBounds}}) of 
> {{AsyncSinkWriterTest}} has been reported to fail intermittently on build 
> pipeline causing blocking of new changes.
>  - Reporting build is [linked 
> |https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36009=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-22765) ExceptionUtilsITCase.testIsMetaspaceOutOfMemoryError is unstable

2024-01-07 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804058#comment-17804058
 ] 

Sergey Nuyanzin commented on FLINK-22765:
-----------------------------------------

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55980=logs=a657ddbf-d986-5381-9649-342d9c92e7fb=dc085d4a-05c8-580e-06ab-21f5624dab16=8924

> ExceptionUtilsITCase.testIsMetaspaceOutOfMemoryError is unstable
> ----------------------------------------------------------------
>
> Key: FLINK-22765
> URL: https://issues.apache.org/jira/browse/FLINK-22765
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.14.0, 1.13.5, 1.15.0, 1.17.2, 1.19.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available, stale-assigned, test-stability
> Fix For: 1.14.0, 1.16.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18292=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=a99e99c7-21cd-5a1f-7274-585e62b72f56
> {code}
> May 25 00:56:38 java.lang.AssertionError: 
> May 25 00:56:38 
> May 25 00:56:38 Expected: is ""
> May 25 00:56:38  but: was "The system is out of resources.\nConsult the 
> following stack trace for details."
> May 25 00:56:38   at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> May 25 00:56:38   at org.junit.Assert.assertThat(Assert.java:956)
> May 25 00:56:38   at org.junit.Assert.assertThat(Assert.java:923)
> May 25 00:56:38   at 
> org.apache.flink.runtime.util.ExceptionUtilsITCase.run(ExceptionUtilsITCase.java:94)
> May 25 00:56:38   at 
> org.apache.flink.runtime.util.ExceptionUtilsITCase.testIsMetaspaceOutOfMemoryError(ExceptionUtilsITCase.java:70)
> May 25 00:56:38   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> May 25 00:56:38   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> May 25 00:56:38   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> May 25 00:56:38   at java.lang.reflect.Method.invoke(Method.java:498)
> May 25 00:56:38   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> May 25 00:56:38   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> May 25 00:56:38   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> May 25 00:56:38   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> May 25 00:56:38   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> May 25 00:56:38   at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> May 25 00:56:38   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> May 25 00:56:38   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> May 25 00:56:38   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> May 25 00:56:38   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> May 25 00:56:38   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> May 25 00:56:38   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> May 25 00:56:38   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> May 25 00:56:38   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> May 25 00:56:38   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> May 25 00:56:38   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> May 25 00:56:38   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> May 25 00:56:38   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> May 25 00:56:38   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> May 25 00:56:38   at 
> 

[jira] [Commented] (FLINK-27756) Fix intermittently failing test in AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds

2024-01-07 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17804057#comment-17804057
 ] 

Sergey Nuyanzin commented on FLINK-27756:
-----------------------------------------

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=55980=logs=b6f8a893-8f59-51d5-fe28-fb56a8b0932c=095f1730-efbe-5303-c4a3-b5e3696fc4e2=10795

> Fix intermittently failing test in 
> AsyncSinkWriterTest.checkLoggedSendTimesAreWithinBounds
> --------------------------------------------------------
>
> Key: FLINK-27756
> URL: https://issues.apache.org/jira/browse/FLINK-27756
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.15.0, 1.17.0, 1.19.0
>Reporter: Ahmed Hamdy
>Assignee: Ahmed Hamdy
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0
>
>
> h2. Motivation
>  - One of the integration tests ({{checkLoggedSendTimesAreWithinBounds}}) of 
> {{AsyncSinkWriterTest}} has been reported to fail intermittently on build 
> pipeline causing blocking of new changes.
>  - Reporting build is [linked 
> |https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=36009=logs=aa18c3f6-13b8-5f58-86bb-c1cffb239496=502fb6c0-30a2-5e49-c5c2-a00fa3acb203]




