Re: [PR] [FLINK-34401][docs-zh] Translate "Flame Graphs" page into Chinese [flink]

2024-02-06 Thread via GitHub


lxliyou001 commented on PR #24279:
URL: https://github.com/apache/flink/pull/24279#issuecomment-1931473172

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-28693) Codegen failed if the watermark is defined on a columnByExpression

2024-02-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-28693:
---
Labels: pull-request-available  (was: )

> Codegen failed if the watermark is defined on a columnByExpression
> --
>
> Key: FLINK-28693
> URL: https://issues.apache.org/jira/browse/FLINK-28693
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.15.1
>Reporter: Hongbo
>Priority: Major
>  Labels: pull-request-available
>
> The following code will throw an exception:
>  
> {code:java}
> Table program cannot be compiled. This is a bug. Please file an issue.
>  ...
>  Caused by: org.codehaus.commons.compiler.CompileException: Line 29, Column 
> 54: Cannot determine simple type name "org" {code}
> {color:#00}Code:{color}
> {code:java}
> public class TestUdf extends ScalarFunction {
> @DataTypeHint("TIMESTAMP(3)")
> public LocalDateTime eval(String strDate) {
>return LocalDateTime.now();
> }
> }
> public class FlinkTest {
> @Test
> void testUdf() throws Exception {
> //var env = StreamExecutionEnvironment.createLocalEnvironment();
> // run `gradlew shadowJar` first to generate the uber jar.
> // It contains the kafka connector and a dummy UDF function.
> var env = 
> StreamExecutionEnvironment.createRemoteEnvironment("localhost", 8081,
> "build/libs/flink-test-all.jar");
> env.setParallelism(1);
> var tableEnv = StreamTableEnvironment.create(env);
> tableEnv.createTemporarySystemFunction("TEST_UDF", TestUdf.class);
> var testTable = tableEnv.from(TableDescriptor.forConnector("kafka")
> .schema(Schema.newBuilder()
> .column("time_stamp", DataTypes.STRING())
> .columnByExpression("udf_ts", "TEST_UDF(time_stamp)")
> .watermark("udf_ts", "udf_ts - INTERVAL '1' second")
> .build())
> // the kafka server doesn't need to exist. It fails in the 
> compile stage before fetching data.
> .option("properties.bootstrap.servers", "localhost:9092")
> .option("topic", "test_topic")
> .option("format", "json")
> .option("scan.startup.mode", "latest-offset")
> .build());
> testTable.printSchema();
> tableEnv.createTemporaryView("test", testTable );
> var query = tableEnv.sqlQuery("select * from test");
> var tableResult = 
> query.executeInsert(TableDescriptor.forConnector("print").build());
> tableResult.await();
> }
> }{code}
> What does the code do?
>  # read a stream from Kafka
>  # create a derived column using a UDF expression
>  # define the watermark based on the derived column
> The full callstack:
>  
> {code:java}
> org.apache.flink.util.FlinkRuntimeException: 
> org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue.
>     at 
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:94)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:97)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:68)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.runtime.generated.GeneratedWatermarkGeneratorSupplier.createWatermarkGenerator(GeneratedWatermarkGeneratorSupplier.java:62)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.source.ProgressiveTimestampsAndWatermarks.createMainOutput(ProgressiveTimestampsAndWatermarks.java:104)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.initializeMainOutput(SourceOperator.java:426)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNextNotReading(SourceOperator.java:402)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:387)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:519)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 

Re: [PR] [FLINK-28693][table] Fix janino compile failed because the code generated refers the class in table-planner [flink]

2024-02-06 Thread via GitHub


flinkbot commented on PR #24280:
URL: https://github.com/apache/flink/pull/24280#issuecomment-1931461621

   
   ## CI report:
   
   * f065a4c0a11e1b34f8a3f17eb0c4b4b883a97627 UNKNOWN
   
   
   Bot commands
   The @flinkbot bot supports the following commands:
   
   - `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-34016) Janino compile failed when watermark with column by udf

2024-02-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34016:
---
Labels: pull-request-available  (was: )

> Janino compile failed when watermark with column by udf
> ---
>
> Key: FLINK-34016
> URL: https://issues.apache.org/jira/browse/FLINK-34016
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.15.0, 1.18.0
>Reporter: ude
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2024-01-25-11-53-06-158.png, 
> image-2024-01-25-11-54-54-381.png, image-2024-01-25-12-57-21-318.png, 
> image-2024-01-25-12-57-34-632.png
>
>
> Submitting the following Flink SQL via sql-client.sh throws an exception:
> {code:java}
> Caused by: java.lang.RuntimeException: Could not instantiate generated class 
> 'WatermarkGenerator$0'
>     at 
> org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:74)
>     at 
> org.apache.flink.table.runtime.generated.GeneratedWatermarkGeneratorSupplier.createWatermarkGenerator(GeneratedWatermarkGeneratorSupplier.java:69)
>     at 
> org.apache.flink.streaming.api.operators.source.ProgressiveTimestampsAndWatermarks.createMainOutput(ProgressiveTimestampsAndWatermarks.java:109)
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.initializeMainOutput(SourceOperator.java:462)
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNextNotReading(SourceOperator.java:438)
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:414)
>     at 
> org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68)
>     at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:562)
>     at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:858)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:807)
>     at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:953)
>     at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:932)
>     at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:746)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.util.FlinkRuntimeException: 
> org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue.
>     at 
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:94)
>     at 
> org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:101)
>     at 
> org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:68)
>     ... 16 more
> Caused by: 
> org.apache.flink.shaded.guava31.com.google.common.util.concurrent.UncheckedExecutionException:
>  org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue.
>     at 
> org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2055)
>     at 
> org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache.get(LocalCache.java:3966)
>     at 
> org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4863)
>     at 
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:92)
>     ... 18 more
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program 
> cannot be compiled. This is a bug. Please file an issue.
>     at 
> org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:107)
>     at 
> org.apache.flink.table.runtime.generated.CompileUtils.lambda$compile$0(CompileUtils.java:92)
>     at 
> org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4868)
>     at 
> org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3533)
>     at 
> org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2282)
>     at 
> org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2159)
>     at 
> 

[PR] [FLINK-34016][table] Fix janino compile failed because the code generated by codegen refers the class in table-planner [flink]

2024-02-06 Thread via GitHub


xuyangzhong opened a new pull request, #24280:
URL: https://github.com/apache/flink/pull/24280

   ## What is the purpose of the change
   
   The code generated by codegen references a class in the table-planner 
package, but that class is hidden by table-planner-loader, so the classloader 
cannot find it. 
   
   This PR moves the referenced class 
`WatermarkGeneratorCodeGeneratorFunctionContextWrapper` from table-planner to 
table-runtime. 
   
   ## Brief change log
   
 - *Moves the class `WatermarkGeneratorCodeGeneratorFunctionContextWrapper` 
from table-planner to table-runtime*
   
   
   ## Verifying this change
   
   This PR can't be verified by IT or UT cases because the bug can only be 
reproduced without the `table-planner` jar, so we need to verify it manually 
using the following steps.
   
   1. Run a Kafka instance and prepare some data.
   You can do this by starting Docker and debugging the test in 
flink-connector-kafka:
   
https://github.com/apache/flink-connector-kafka/blob/abf4563e0342abe25dc28bb6b5457bb971381f61/flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/table/KafkaTableITCase.java#L488C17-L488C58
   
   Note: pause the debugger after the data-preparation step:
   
https://github.com/apache/flink-connector-kafka/blob/abf4563e0342abe25dc28bb6b5457bb971381f61/flink-connector-kafka/src/test/java/org/apache/flink/streaming/connectors/kafka/table/KafkaTableITCase.java#L531C9-L531C48
 
   
   2. Run `start-cluster.sh` to start Flink.
   3. Run sql-client and execute a DDL that adds a computed column and a 
watermark column like:
   ```
   ts AS COALESCE(`timestamp` ,CURRENT_TIMESTAMP),
   WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
   ```
   4. Run `select * from kafka_table`.
   
   Before this fix, an exception is thrown. After applying the fix and 
rebuilding the Flink repo, re-run steps 2-4 and you can see the actual data in 
sql-client. A minimal sketch of such a session is shown below.
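   
   For illustration, a minimal Java driver along the lines of steps 3-4 (table 
name, topic, and connection options are hypothetical; only the computed column 
and watermark come from the DDL above):
   
   ```java
   import org.apache.flink.table.api.EnvironmentSettings;
   import org.apache.flink.table.api.TableEnvironment;
   
   public class WatermarkUdfRepro {
       public static void main(String[] args) {
           TableEnvironment tEnv =
                   TableEnvironment.create(EnvironmentSettings.inStreamingMode());
           // Hypothetical table mirroring the DDL from step 3.
           tEnv.executeSql(
                   "CREATE TABLE kafka_table (\n"
                           + "  `timestamp` TIMESTAMP(3),\n"
                           + "  ts AS COALESCE(`timestamp`, CURRENT_TIMESTAMP),\n"
                           + "  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND\n"
                           + ") WITH (\n"
                           + "  'connector' = 'kafka',\n"
                           + "  'topic' = 'test_topic',\n"
                           + "  'properties.bootstrap.servers' = 'localhost:9092',\n"
                           + "  'format' = 'json',\n"
                           + "  'scan.startup.mode' = 'latest-offset'\n"
                           + ")");
           // Step 4: before the fix this throws the codegen compile error.
           tEnv.executeSql("SELECT * FROM kafka_table").print();
       }
   }
   ```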
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? 





[jira] [Commented] (FLINK-34282) Create a release branch

2024-02-06 Thread lincoln lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815143#comment-17815143
 ] 

lincoln lee commented on FLINK-34282:
-

(Recording this for validating the release steps)

After pushing the new release branch release-1.19, I manually triggered a 
release-1.19 doc build according to the Managing Documentation details, but 
the build action doesn't include release-1.19:
 https://github.com/apache/flink/actions/runs/7811065494
 !screenshot-1.png! 

So I was wondering whether an update similar to 
https://github.com/apache/flink/pull/23258 is also necessary for this step? 
(though it seems there were some discussions on when to merge that PR) 
cc [~jingge] [~renqs] [~snuyanzin] since you're the author/reviewers.



> Create a release branch
> ---
>
> Key: FLINK-34282
> URL: https://issues.apache.org/jira/browse/FLINK-34282
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: lincoln lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
> Attachments: screenshot-1.png
>
>
> If you are doing a new minor release, you need to update Flink version in the 
> following repositories and the [AzureCI project 
> configuration|https://dev.azure.com/apache-flink/apache-flink/]:
>  * [apache/flink|https://github.com/apache/flink]
>  * [apache/flink-docker|https://github.com/apache/flink-docker]
>  * [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks]
> Patch releases don't require these repositories to be touched. Simply 
> checkout the already existing branch for that version:
> {code:java}
> $ git checkout release-$SHORT_RELEASE_VERSION
> {code}
> h4. Flink repository
> Create a branch for the new version that we want to release before updating 
> the master branch to the next development version:
> {code:bash}
> $ cd ./tools
> tools $ releasing/create_snapshot_branch.sh
> tools $ git checkout master
> tools $ OLD_VERSION=$CURRENT_SNAPSHOT_VERSION 
> NEW_VERSION=$NEXT_SNAPSHOT_VERSION releasing/update_branch_version.sh
> {code}
> In the {{master}} branch, add a new value (e.g. {{v1_16("1.16")}}) to 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  as the last entry:
> {code:java}
> // ...
> v1_12("1.12"),
> v1_13("1.13"),
> v1_14("1.14"),
> v1_15("1.15"),
> v1_16("1.16");
> {code}
> Additionally in master, update the branch list of the GitHub Actions nightly 
> workflow (see 
> [apache/flink:.github/workflows/nightly-trigger.yml#L31ff|https://github.com/apache/flink/blob/master/.github/workflows/nightly-trigger.yml#L31]):
>  The two most-recent releases and master should be covered.
> The newly created branch and updated {{master}} branch need to be pushed to 
> the official repository.
> h4. Flink Docker Repository
> Afterwards fork off from {{dev-master}} a {{dev-x.y}} branch in the 
> [apache/flink-docker|https://github.com/apache/flink-docker] repository. Make 
> sure that 
> [apache/flink-docker:.github/workflows/ci.yml|https://github.com/apache/flink-docker/blob/dev-master/.github/workflows/ci.yml]
>  points to the correct snapshot version; for {{dev-x.y}} it should point to 
> {{x.y-SNAPSHOT}}, while for {{dev-master}} it should point to the most 
> recent snapshot version ({{$NEXT_SNAPSHOT_VERSION}}).
> After pushing the new minor release branch, as the last step you should also 
> update the documentation workflow to also build the documentation for the new 
> release branch. Check [Managing 
> Documentation|https://cwiki.apache.org/confluence/display/FLINK/Managing+Documentation]
>  on details on how to do that. You may also want to manually trigger a build 
> to make the changes visible as soon as possible.
> h4. Flink Benchmark Repository
> First of all, fork the {{master}} branch into a {{dev-x.y}} branch in 
> [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks], so that 
> we have a branch named {{dev-x.y}} which can be built on top of 
> ({{$CURRENT_SNAPSHOT_VERSION}}).
> Then, inside the repository you need to manually update the {{flink.version}} 
> property inside the parent *pom.xml* file. It should be pointing to the most 
> recent snapshot version ($NEXT_SNAPSHOT_VERSION). For example:
> {code:xml}
> <flink.version>1.18-SNAPSHOT</flink.version>
> {code}
> h4. AzureCI Project Configuration
> The new release branch needs to be configured within AzureCI to make azure 
> aware of the new release branch. This matter can only be handled by Ververica 
> employees since they are owning the AzureCI setup.
>  
> 
> h3. Expectations (Minor Version only if not stated otherwise)
>  * Release branch has been created and pushed
>  * Changes on 

Re: [PR] [FLINK-34401][docs-zh] Translate "Flame Graphs" page into Chinese [flink]

2024-02-06 Thread via GitHub


lxliyou001 commented on PR #24279:
URL: https://github.com/apache/flink/pull/24279#issuecomment-1931443754

   @flinkbot run azure





[jira] [Commented] (FLINK-34402) Class loading conflicts when using PowerMock in ITcase.

2024-02-06 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815142#comment-17815142
 ] 

Xintong Song commented on FLINK-34402:
--

Thanks for the detailed information.

I think this should not be considered a bug. The reported issue doesn't really 
affect any real production use case, and Flink is not designed to be executed 
with a PowerMockRunner or JavassistMockClassLoader.

I'm also not familiar with approaches for checking API call counts and query 
outputs other than implementing them manually. But if the cases are only for 
internal usage in your company, you don't really need to follow the community 
code-style guides.

My suggestion would be to apply the proposed changes only to your internal 
fork. WDYT?
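
(For reference, the "manual" approach mentioned above can be as simple as a 
hand-rolled counting stub; the interface and names below are hypothetical:)
{code:java}
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical client interface standing in for the mocked class.
interface RedisClient {
    String get(String key);
}

// Counts invocations so a test can assert on call counts without PowerMock.
final class CountingRedisClient implements RedisClient {
    final AtomicInteger calls = new AtomicInteger();
    private final RedisClient delegate;

    CountingRedisClient(RedisClient delegate) {
        this.delegate = delegate;
    }

    @Override
    public String get(String key) {
        calls.incrementAndGet();
        return delegate.get(key);
    }
}{code}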

>  Class loading conflicts when using PowerMock in ITcase.
> 
>
> Key: FLINK-34402
> URL: https://issues.apache.org/jira/browse/FLINK-34402
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: yisha zhou
>Assignee: yisha zhou
>Priority: Major
> Fix For: 1.19.0
>
>
> Currently, when no user jars exist, the system classLoader is used to load 
> classes by default. However, if we use PowerMock to create some ITCases, the 
> framework will use JavassistMockClassLoader to load classes. Forcing the use 
> of the system classLoader can then lead to class loading conflicts.
> Therefore we should use Thread.currentThread().getContextClassLoader() 
> instead of 
> ClassLoader.getSystemClassLoader() here.
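
(A minimal, self-contained sketch of the proposed behavior, not the actual 
BlobLibraryCacheManager code:)
{code:java}
import java.net.URL;
import java.net.URLClassLoader;
import java.util.Collection;

// Sketch only: prefer the context classloader over the system classloader
// when no user jars are present, so loaders installed by test frameworks
// (e.g. PowerMock's JavassistMockClassLoader) are respected.
final class ClassLoaderChoice {
    static ClassLoader pick(Collection<URL> userJars) {
        if (userJars.isEmpty()) {
            return Thread.currentThread().getContextClassLoader();
        }
        return new URLClassLoader(
                userJars.toArray(new URL[0]),
                Thread.currentThread().getContextClassLoader());
    }
}{code}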



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-28693) Codegen failed if the watermark is defined on a columnByExpression

2024-02-06 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815141#comment-17815141
 ] 

xuyang commented on FLINK-28693:


This bug is caused by the fact that the code generated by codegen references a 
class in the table-planner package, but that class is hidden by 
table-planner-loader, so the classloader cannot find it. 

I'll try to fix it.
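
(A minimal standalone demo of the failure mode; the package path below is an 
assumption for illustration:)
{code:java}
public class PlannerClassVisibilityDemo {
    public static void main(String[] args) {
        // The generated WatermarkGenerator is compiled against the runtime
        // classloader, which cannot see org.apache.flink.table.planner.**
        // classes when table-planner-loader is used.
        ClassLoader runtimeLoader = Thread.currentThread().getContextClassLoader();
        try {
            Class.forName(
                    "org.apache.flink.table.planner.codegen."
                            + "WatermarkGeneratorCodeGeneratorFunctionContextWrapper",
                    false,
                    runtimeLoader);
        } catch (ClassNotFoundException e) {
            // Janino surfaces this as: Cannot determine simple type name "org"
            System.out.println("planner class not visible: " + e.getMessage());
        }
    }
}{code}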

> Codegen failed if the watermark is defined on a columnByExpression
> --
>
> Key: FLINK-28693
> URL: https://issues.apache.org/jira/browse/FLINK-28693
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.15.1
>Reporter: Hongbo
>Priority: Major
>
> The following code will throw an exception:
>  
> {code:java}
> Table program cannot be compiled. This is a bug. Please file an issue.
>  ...
>  Caused by: org.codehaus.commons.compiler.CompileException: Line 29, Column 
> 54: Cannot determine simple type name "org" {code}
> {color:#00}Code:{color}
> {code:java}
> public class TestUdf extends ScalarFunction {
> @DataTypeHint("TIMESTAMP(3)")
> public LocalDateTime eval(String strDate) {
>return LocalDateTime.now();
> }
> }
> public class FlinkTest {
> @Test
> void testUdf() throws Exception {
> //var env = StreamExecutionEnvironment.createLocalEnvironment();
> // run `gradlew shadowJar` first to generate the uber jar.
> // It contains the kafka connector and a dummy UDF function.
> var env = 
> StreamExecutionEnvironment.createRemoteEnvironment("localhost", 8081,
> "build/libs/flink-test-all.jar");
> env.setParallelism(1);
> var tableEnv = StreamTableEnvironment.create(env);
> tableEnv.createTemporarySystemFunction("TEST_UDF", TestUdf.class);
> var testTable = tableEnv.from(TableDescriptor.forConnector("kafka")
> .schema(Schema.newBuilder()
> .column("time_stamp", DataTypes.STRING())
> .columnByExpression("udf_ts", "TEST_UDF(time_stamp)")
> .watermark("udf_ts", "udf_ts - INTERVAL '1' second")
> .build())
> // the kafka server doesn't need to exist. It fails in the 
> compile stage before fetching data.
> .option("properties.bootstrap.servers", "localhost:9092")
> .option("topic", "test_topic")
> .option("format", "json")
> .option("scan.startup.mode", "latest-offset")
> .build());
> testTable.printSchema();
> tableEnv.createTemporaryView("test", testTable );
> var query = tableEnv.sqlQuery("select * from test");
> var tableResult = 
> query.executeInsert(TableDescriptor.forConnector("print").build());
> tableResult.await();
> }
> }{code}
> What does the code do?
>  # read a stream from Kafka
>  # create a derived column using a UDF expression
>  # define the watermark based on the derived column
> The full callstack:
>  
> {code:java}
> org.apache.flink.util.FlinkRuntimeException: 
> org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue.
>     at 
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:94)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:97)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:68)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.table.runtime.generated.GeneratedWatermarkGeneratorSupplier.createWatermarkGenerator(GeneratedWatermarkGeneratorSupplier.java:62)
>  ~[flink-table-runtime-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.source.ProgressiveTimestampsAndWatermarks.createMainOutput(ProgressiveTimestampsAndWatermarks.java:104)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.initializeMainOutput(SourceOperator.java:426)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNextNotReading(SourceOperator.java:402)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:387)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68)
>  ~[flink-dist-1.15.1.jar:1.15.1]
>     at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)

[jira] [Updated] (FLINK-34282) Create a release branch

2024-02-06 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34282:

Attachment: screenshot-1.png

> Create a release branch
> ---
>
> Key: FLINK-34282
> URL: https://issues.apache.org/jira/browse/FLINK-34282
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: lincoln lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
> Attachments: screenshot-1.png
>
>
> If you are doing a new minor release, you need to update Flink version in the 
> following repositories and the [AzureCI project 
> configuration|https://dev.azure.com/apache-flink/apache-flink/]:
>  * [apache/flink|https://github.com/apache/flink]
>  * [apache/flink-docker|https://github.com/apache/flink-docker]
>  * [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks]
> Patch releases don't require these repositories to be touched. Simply 
> checkout the already existing branch for that version:
> {code:java}
> $ git checkout release-$SHORT_RELEASE_VERSION
> {code}
> h4. Flink repository
> Create a branch for the new version that we want to release before updating 
> the master branch to the next development version:
> {code:bash}
> $ cd ./tools
> tools $ releasing/create_snapshot_branch.sh
> tools $ git checkout master
> tools $ OLD_VERSION=$CURRENT_SNAPSHOT_VERSION 
> NEW_VERSION=$NEXT_SNAPSHOT_VERSION releasing/update_branch_version.sh
> {code}
> In the {{master}} branch, add a new value (e.g. {{v1_16("1.16")}}) to 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  as the last entry:
> {code:java}
> // ...
> v1_12("1.12"),
> v1_13("1.13"),
> v1_14("1.14"),
> v1_15("1.15"),
> v1_16("1.16");
> {code}
> Additionally in master, update the branch list of the GitHub Actions nightly 
> workflow (see 
> [apache/flink:.github/workflows/nightly-trigger.yml#L31ff|https://github.com/apache/flink/blob/master/.github/workflows/nightly-trigger.yml#L31]):
>  The two most-recent releases and master should be covered.
> The newly created branch and updated {{master}} branch need to be pushed to 
> the official repository.
> h4. Flink Docker Repository
> Afterwards fork off from {{dev-master}} a {{dev-x.y}} branch in the 
> [apache/flink-docker|https://github.com/apache/flink-docker] repository. Make 
> sure that 
> [apache/flink-docker:.github/workflows/ci.yml|https://github.com/apache/flink-docker/blob/dev-master/.github/workflows/ci.yml]
>  points to the correct snapshot version; for {{dev-x.y}} it should point to 
> {{x.y-SNAPSHOT}}, while for {{dev-master}} it should point to the most 
> recent snapshot version ({{$NEXT_SNAPSHOT_VERSION}}).
> After pushing the new minor release branch, as the last step you should also 
> update the documentation workflow to also build the documentation for the new 
> release branch. Check [Managing 
> Documentation|https://cwiki.apache.org/confluence/display/FLINK/Managing+Documentation]
>  on details on how to do that. You may also want to manually trigger a build 
> to make the changes visible as soon as possible.
> h4. Flink Benchmark Repository
> First of all, fork the {{master}} branch into a {{dev-x.y}} branch in 
> [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks], so that 
> we have a branch named {{dev-x.y}} which can be built on top of 
> ({{$CURRENT_SNAPSHOT_VERSION}}).
> Then, inside the repository you need to manually update the {{flink.version}} 
> property inside the parent *pom.xml* file. It should be pointing to the most 
> recent snapshot version ($NEXT_SNAPSHOT_VERSION). For example:
> {code:xml}
> <flink.version>1.18-SNAPSHOT</flink.version>
> {code}
> h4. AzureCI Project Configuration
> The new release branch needs to be configured within AzureCI to make azure 
> aware of the new release branch. This matter can only be handled by Ververica 
> employees since they are owning the AzureCI setup.
>  
> 
> h3. Expectations (Minor Version only if not stated otherwise)
>  * Release branch has been created and pushed
>  * Changes on the new release branch are picked up by [Azure 
> CI|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary]
>  * {{master}} branch has the version information updated to the new version 
> (check pom.xml files and 
>  * 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  enum)
>  *  
> 

[jira] [Commented] (FLINK-34016) Janino compile failed when watermark with column by udf

2024-02-06 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815140#comment-17815140
 ] 

xuyang commented on FLINK-34016:


Hi, [~wczhu] I have found the root cause and will try to fix it. As a 
workaround, you can temporarily replace flink-table-planner-loader with 
flink-table-planner in the opt/ folder, as described in 
https://issues.apache.org/jira/browse/FLINK-28693.

> Janino compile failed when watermark with column by udf
> ---
>
> Key: FLINK-34016
> URL: https://issues.apache.org/jira/browse/FLINK-34016
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.15.0, 1.18.0
>Reporter: ude
>Priority: Major
> Attachments: image-2024-01-25-11-53-06-158.png, 
> image-2024-01-25-11-54-54-381.png, image-2024-01-25-12-57-21-318.png, 
> image-2024-01-25-12-57-34-632.png
>
>
> Submitting the following Flink SQL via sql-client.sh throws an exception:
> {code:java}
> Caused by: java.lang.RuntimeException: Could not instantiate generated class 
> 'WatermarkGenerator$0'
>     at 
> org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:74)
>     at 
> org.apache.flink.table.runtime.generated.GeneratedWatermarkGeneratorSupplier.createWatermarkGenerator(GeneratedWatermarkGeneratorSupplier.java:69)
>     at 
> org.apache.flink.streaming.api.operators.source.ProgressiveTimestampsAndWatermarks.createMainOutput(ProgressiveTimestampsAndWatermarks.java:109)
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.initializeMainOutput(SourceOperator.java:462)
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNextNotReading(SourceOperator.java:438)
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:414)
>     at 
> org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68)
>     at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:562)
>     at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:858)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:807)
>     at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:953)
>     at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:932)
>     at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:746)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.util.FlinkRuntimeException: 
> org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue.
>     at 
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:94)
>     at 
> org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:101)
>     at 
> org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:68)
>     ... 16 more
> Caused by: 
> org.apache.flink.shaded.guava31.com.google.common.util.concurrent.UncheckedExecutionException:
>  org.apache.flink.api.common.InvalidProgramException: Table program cannot be 
> compiled. This is a bug. Please file an issue.
>     at 
> org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2055)
>     at 
> org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache.get(LocalCache.java:3966)
>     at 
> org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4863)
>     at 
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:92)
>     ... 18 more
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table program 
> cannot be compiled. This is a bug. Please file an issue.
>     at 
> org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:107)
>     at 
> org.apache.flink.table.runtime.generated.CompileUtils.lambda$compile$0(CompileUtils.java:92)
>     at 
> org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4868)
>     at 
> org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3533)
>     at 
> org.apache.flink.shaded.guava31.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2282)
>     at 
> 

Re: [PR] [FLINK-34386][state] Add RocksDB bloom filter metrics [flink]

2024-02-06 Thread via GitHub


hejufang commented on PR #24274:
URL: https://github.com/apache/flink/pull/24274#issuecomment-1931429706

   @flinkbot run azure





[jira] [Comment Edited] (FLINK-34402) Class loading conflicts when using PowerMock in ITcase.

2024-02-06 Thread yisha zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815132#comment-17815132
 ] 

yisha zhou edited comment on FLINK-34402 at 2/7/24 6:58 AM:


Hi [~xtsong] ,

For the first question, yes, it's an ITCase that I'm going to add. The code is 
only for internal use in our company and not intended to be contributed to the 
community, but I can provide the key steps to reproduce the issue. 

It's an ITCase for the Redis SQL connector. I added @RunWith and 
@PrepareForTest annotations, then created a StreamTableEnvironment and used it 
to execute queries. Those queries exercise the mocked class so it can be 
checked. 
{code:java}
@RunWith(PowerMockRunner.class) 
@PrepareForTest(RedisClientTableUtils.class)

···

EnvironmentSettings envSettings =
EnvironmentSettings.newInstance().inStreamingMode().build();
tEnv = StreamTableEnvironment.create(env, envSettings);{code}
 Then an exception is thrown:
{code:java}
Caused by: java.lang.RuntimeException: 
org.apache.flink.runtime.client.JobInitializationException: Could not start the 
JobMaster.
    at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:321)
    at 
org.apache.flink.util.function.FunctionUtils.lambda$uncheckedFunction$2(FunctionUtils.java:75)
    at 
java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
    at 
java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
    at 
java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:443)
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
    at 
java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
    at 
java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
Caused by: org.apache.flink.runtime.client.JobInitializationException: Could 
not start the JobMaster.
    at 
org.apache.flink.runtime.jobmaster.DefaultJobMasterServiceProcess.lambda$new$0(DefaultJobMasterServiceProcess.java:97)
    at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
    at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
    at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
    at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1595)
    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.CompletionException: 
java.lang.ClassCastException: org.apache.flink.api.common.ExecutionConfig 
cannot be cast to org.apache.flink.api.common.ExecutionConfig
    at 
java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
    at 
java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
    at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1592)
    ... 3 more
Caused by: java.lang.ClassCastException: 
org.apache.flink.api.common.ExecutionConfig cannot be cast to 
org.apache.flink.api.common.ExecutionConfig
    at 
org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:98)
    at 
org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:119)
    at 
org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:403)
    at org.apache.flink.runtime.jobmaster.JobMaster.(JobMaster.java:379)
    at 
org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:123)
    at 
org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:95)
    at 
org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112)
    at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
    ... 3 more {code}
For question 2, I found that if no user jars exist, the system classLoader 
will be used in 
[BlobLibraryCacheManager#242|https://github.com/apache/flink/blob/f1fba33d85a802b896170ff3cdb0107ee082c44a/flink-runtime/src/main/java/org/apache/flink/runtime/execution/librarycache/BlobLibraryCacheManager.java#L242].
 The related issue is FLINK-32265.

The fixed code should be:
{code:java}
private UserCodeClassLoader getOrResolveClassLoader(
        Collection<PermanentBlobKey> libraries, Collection<URL> classPaths)
        throws IOException {
    synchronized (lockObject) {
        verifyIsNotReleased();

        if (resolvedClassLoader == null) {
boolean 

[jira] [Commented] (FLINK-34402) Class loading conflicts when using PowerMock in ITcase.

2024-02-06 Thread yisha zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815132#comment-17815132
 ] 

yisha zhou commented on FLINK-34402:


Hi [~xtsong] ,

For the first question, yes, it's an ITCase that I'm going to add. The code is 
only for internal use in our company and not intended to be contributed to the 
community, but I can provide the key steps to reproduce the issue. 

It's an ITCase for the Redis SQL connector. I added @RunWith and 
@PrepareForTest annotations, then created a StreamTableEnvironment and used it 
to execute queries. Those queries exercise the mocked class so it can be 
checked. 
{code:java}
@RunWith(PowerMockRunner.class) 
@PrepareForTest(RedisClientTableUtils.class)

···

EnvironmentSettings envSettings =
EnvironmentSettings.newInstance().inStreamingMode().build();
tEnv = StreamTableEnvironment.create(env, envSettings);{code}
 Then an exception is thrown:
{code:java}
Caused by: java.lang.RuntimeException: 
org.apache.flink.runtime.client.JobInitializationException: Could not start the 
JobMaster.
    at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:321)
    at 
org.apache.flink.util.function.FunctionUtils.lambda$uncheckedFunction$2(FunctionUtils.java:75)
    at 
java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
    at 
java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
    at 
java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:443)
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
    at 
java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
    at 
java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
Caused by: org.apache.flink.runtime.client.JobInitializationException: Could 
not start the JobMaster.
    at 
org.apache.flink.runtime.jobmaster.DefaultJobMasterServiceProcess.lambda$new$0(DefaultJobMasterServiceProcess.java:97)
    at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
    at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
    at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
    at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1595)
    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.CompletionException: 
java.lang.ClassCastException: org.apache.flink.api.common.ExecutionConfig 
cannot be cast to org.apache.flink.api.common.ExecutionConfig
    at 
java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
    at 
java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
    at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1592)
    ... 3 more
Caused by: java.lang.ClassCastException: 
org.apache.flink.api.common.ExecutionConfig cannot be cast to 
org.apache.flink.api.common.ExecutionConfig
    at 
org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:98)
    at 
org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:119)
    at 
org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:403)
    at org.apache.flink.runtime.jobmaster.JobMaster.(JobMaster.java:379)
    at 
org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:123)
    at 
org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:95)
    at 
org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112)
    at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
    ... 3 more {code}
 

For question 2, I found that if no user jars exist, the system classLoader 
will be used in 
[BlobLibraryCacheManager#242|https://github.com/apache/flink/blob/f1fba33d85a802b896170ff3cdb0107ee082c44a/flink-runtime/src/main/java/org/apache/flink/runtime/execution/librarycache/BlobLibraryCacheManager.java#L242].
 The related issue is 
[FLINK-32265|https://issues.apache.org/jira/browse/FLINK-32265]

The fixed code should be:
{code:java}
private UserCodeClassLoader getOrResolveClassLoader(
        Collection<PermanentBlobKey> libraries, Collection<URL> classPaths)
        throws IOException {
    synchronized (lockObject) {
        verifyIsNotReleased();

        if (resolvedClassLoader == null) {
   

[jira] [Commented] (FLINK-34403) GHA misc test cannot pass due to OOM

2024-02-06 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815124#comment-17815124
 ] 

Benchao Li commented on FLINK-34403:


CC [~mapohl], is it possible to increase the Java heap size for running these 
tests? The newly introduced test indeed needs more memory since it verifies an 
extreme use case.

> GHA misc test cannot pass due to OOM
> 
>
> Key: FLINK-34403
> URL: https://issues.apache.org/jira/browse/FLINK-34403
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / CI
>Affects Versions: 1.20
>Reporter: Benchao Li
>Priority: Major
>
> After FLINK-33611 was merged, the misc test on GHA cannot pass due to an 
> out-of-memory error, throwing the following exception:
> {code:java}
> Error: 05:43:21 05:43:21.768 [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 40.98 s <<< FAILURE! -- in 
> org.apache.flink.formats.protobuf.VeryBigPbRowToProtoTest
> Error: 05:43:21 05:43:21.773 [ERROR] 
> org.apache.flink.formats.protobuf.VeryBigPbRowToProtoTest.testSimple -- Time 
> elapsed: 40.97 s <<< ERROR!
> Feb 07 05:43:21 org.apache.flink.util.FlinkRuntimeException: Error in 
> serialization.
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:327)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:162)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamGraph.getJobGraph(StreamGraph.java:1007)
> Feb 07 05:43:21   at 
> org.apache.flink.client.StreamGraphTranslator.translateToJobGraph(StreamGraphTranslator.java:56)
> Feb 07 05:43:21   at 
> org.apache.flink.client.FlinkPipelineTranslationUtil.getJobGraph(FlinkPipelineTranslationUtil.java:45)
> Feb 07 05:43:21   at 
> org.apache.flink.client.deployment.executors.PipelineExecutorUtils.getJobGraph(PipelineExecutorUtils.java:61)
> Feb 07 05:43:21   at 
> org.apache.flink.client.deployment.executors.LocalExecutor.getJobGraph(LocalExecutor.java:104)
> Feb 07 05:43:21   at 
> org.apache.flink.client.deployment.executors.LocalExecutor.execute(LocalExecutor.java:81)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:2440)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:2421)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.datastream.DataStream.executeAndCollectWithClient(DataStream.java:1495)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.datastream.DataStream.executeAndCollect(DataStream.java:1382)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.datastream.DataStream.executeAndCollect(DataStream.java:1367)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.validateRow(ProtobufTestHelper.java:66)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.rowToPbBytes(ProtobufTestHelper.java:89)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.rowToPbBytes(ProtobufTestHelper.java:76)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.ProtobufTestHelper.rowToPbBytes(ProtobufTestHelper.java:71)
> Feb 07 05:43:21   at 
> org.apache.flink.formats.protobuf.VeryBigPbRowToProtoTest.testSimple(VeryBigPbRowToProtoTest.java:37)
> Feb 07 05:43:21   at java.lang.reflect.Method.invoke(Method.java:498)
> Feb 07 05:43:21 Caused by: java.util.concurrent.ExecutionException: 
> java.lang.IllegalArgumentException: Self-suppression not permitted
> Feb 07 05:43:21   at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> Feb 07 05:43:21   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:323)
> Feb 07 05:43:21   ... 18 more
> Feb 07 05:43:21 Caused by: java.lang.IllegalArgumentException: 
> Self-suppression not permitted
> Feb 07 05:43:21   at 
> java.lang.Throwable.addSuppressed(Throwable.java:1072)
> Feb 07 05:43:21   at 
> org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:556)
> Feb 07 05:43:21   at 
> org.apache.flink.util.InstantiationUtil.writeObjectToConfig(InstantiationUtil.java:486)
> Feb 07 05:43:21   at 
> org.apache.flink.streaming.api.graph.StreamConfig.lambda$triggerSerializationAndReturnFuture$0(StreamConfig.java:182)
> Feb 07 05:43:21   at 
> 

[jira] [Created] (FLINK-34403) GHA misc test cannot pass due to OOM

2024-02-06 Thread Benchao Li (Jira)
Benchao Li created FLINK-34403:
--

 Summary: GHA misc test cannot pass due to OOM
 Key: FLINK-34403
 URL: https://issues.apache.org/jira/browse/FLINK-34403
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Affects Versions: 1.20
Reporter: Benchao Li


After FLINK-33611 was merged, the misc test on GHA cannot pass due to an 
out-of-memory error, throwing the following exception:

{code:java}
Error: 05:43:21 05:43:21.768 [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
Skipped: 0, Time elapsed: 40.98 s <<< FAILURE! -- in 
org.apache.flink.formats.protobuf.VeryBigPbRowToProtoTest
Error: 05:43:21 05:43:21.773 [ERROR] 
org.apache.flink.formats.protobuf.VeryBigPbRowToProtoTest.testSimple -- Time 
elapsed: 40.97 s <<< ERROR!
Feb 07 05:43:21 org.apache.flink.util.FlinkRuntimeException: Error in 
serialization.
Feb 07 05:43:21 at 
org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:327)
Feb 07 05:43:21 at 
org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:162)
Feb 07 05:43:21 at 
org.apache.flink.streaming.api.graph.StreamGraph.getJobGraph(StreamGraph.java:1007)
Feb 07 05:43:21 at 
org.apache.flink.client.StreamGraphTranslator.translateToJobGraph(StreamGraphTranslator.java:56)
Feb 07 05:43:21 at 
org.apache.flink.client.FlinkPipelineTranslationUtil.getJobGraph(FlinkPipelineTranslationUtil.java:45)
Feb 07 05:43:21 at 
org.apache.flink.client.deployment.executors.PipelineExecutorUtils.getJobGraph(PipelineExecutorUtils.java:61)
Feb 07 05:43:21 at 
org.apache.flink.client.deployment.executors.LocalExecutor.getJobGraph(LocalExecutor.java:104)
Feb 07 05:43:21 at 
org.apache.flink.client.deployment.executors.LocalExecutor.execute(LocalExecutor.java:81)
Feb 07 05:43:21 at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:2440)
Feb 07 05:43:21 at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:2421)
Feb 07 05:43:21 at 
org.apache.flink.streaming.api.datastream.DataStream.executeAndCollectWithClient(DataStream.java:1495)
Feb 07 05:43:21 at 
org.apache.flink.streaming.api.datastream.DataStream.executeAndCollect(DataStream.java:1382)
Feb 07 05:43:21 at 
org.apache.flink.streaming.api.datastream.DataStream.executeAndCollect(DataStream.java:1367)
Feb 07 05:43:21 at 
org.apache.flink.formats.protobuf.ProtobufTestHelper.validateRow(ProtobufTestHelper.java:66)
Feb 07 05:43:21 at 
org.apache.flink.formats.protobuf.ProtobufTestHelper.rowToPbBytes(ProtobufTestHelper.java:89)
Feb 07 05:43:21 at 
org.apache.flink.formats.protobuf.ProtobufTestHelper.rowToPbBytes(ProtobufTestHelper.java:76)
Feb 07 05:43:21 at 
org.apache.flink.formats.protobuf.ProtobufTestHelper.rowToPbBytes(ProtobufTestHelper.java:71)
Feb 07 05:43:21 at 
org.apache.flink.formats.protobuf.VeryBigPbRowToProtoTest.testSimple(VeryBigPbRowToProtoTest.java:37)
Feb 07 05:43:21 at java.lang.reflect.Method.invoke(Method.java:498)
Feb 07 05:43:21 Caused by: java.util.concurrent.ExecutionException: 
java.lang.IllegalArgumentException: Self-suppression not permitted
Feb 07 05:43:21 at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
Feb 07 05:43:21 at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
Feb 07 05:43:21 at 
org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:323)
Feb 07 05:43:21 ... 18 more
Feb 07 05:43:21 Caused by: java.lang.IllegalArgumentException: Self-suppression 
not permitted
Feb 07 05:43:21 at 
java.lang.Throwable.addSuppressed(Throwable.java:1072)
Feb 07 05:43:21 at 
org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:556)
Feb 07 05:43:21 at 
org.apache.flink.util.InstantiationUtil.writeObjectToConfig(InstantiationUtil.java:486)
Feb 07 05:43:21 at 
org.apache.flink.streaming.api.graph.StreamConfig.lambda$triggerSerializationAndReturnFuture$0(StreamConfig.java:182)
Feb 07 05:43:21 at 
java.util.concurrent.CompletableFuture.uniAccept(CompletableFuture.java:670)
Feb 07 05:43:21 at 
java.util.concurrent.CompletableFuture$UniAccept.tryFire(CompletableFuture.java:646)
Feb 07 05:43:21 at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
Feb 07 05:43:21 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
Feb 07 05:43:21 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
Feb 07 05:43:21 at 

[jira] [Closed] (FLINK-34302) Release Testing Instructions: Verify FLINK-33644 Make QueryOperations SQL serializable

2024-02-06 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee closed FLINK-34302.
---
Resolution: Fixed

[~dwysakowicz] Thanks for the updates!

> Release Testing Instructions: Verify FLINK-33644 Make QueryOperations SQL 
> serializable
> --
>
> Key: FLINK-34302
> URL: https://issues.apache.org/jira/browse/FLINK-34302
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Dawid Wysakowicz
>Priority: Blocker
> Fix For: 1.19.0
>
> Attachments: screenshot-1.png
>
>






[jira] [Commented] (FLINK-34211) Filtering on Column names with ?s fails for JDBC lookup join.

2024-02-06 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815118#comment-17815118
 ] 

xuyang commented on FLINK-34211:


Hi [~davidradl], this Jira looks like an improvement, not a bug, right?

> Filtering on Column names with ?s fails for JDBC lookup join. 
> --
>
> Key: FLINK-34211
> URL: https://issues.apache.org/jira/browse/FLINK-34211
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / JDBC
>Reporter: david radley
>Priority: Minor
>
> There is a check for the ? character in 
> [FieldNamedPreparedStatementImpl.java|https://github.com/apache/flink-connector-jdbc/blob/e3dd84160cd665ae17672da8b6e742e61a72a32d/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/statement/FieldNamedPreparedStatementImpl.java#L186].
> Removing this check allows column names containing _?_.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-34345) Remove TaskExecutorManager related logic

2024-02-06 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan resolved FLINK-34345.
-
Fix Version/s: 1.20
   Resolution: Fixed

> Remove TaskExecutorManager related logic
> 
>
> Key: FLINK-34345
> URL: https://issues.apache.org/jira/browse/FLINK-34345
> Project: Flink
>  Issue Type: Technical Debt
>Affects Versions: 1.19.0
>Reporter: Caican Cai
>Assignee: Caican Cai
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.20
>
>
> FLINK-31449 removed DeclarativeSlotManager related logic. Some related 
> classes should be removed as well after FLINK-31449, such as:
>  * TaskExecutorManager
>  * TaskExecutorManagerBuilder
>  * TaskExecutorManagerTest
>  * 
> {{org.apache.flink.runtime.resourcemanager.slotmanager.TaskManagerRegistration}}
>  * PendingTaskManagerSlot
>  * TaskManagerSlotId
>  * SlotStatusUpdateListener
>  * TestingTaskManagerSlotInformation



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34345) Remove TaskExecutorManager related logic

2024-02-06 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17815116#comment-17815116
 ] 

Rui Fan commented on FLINK-34345:
-

Merged to master(1.20) via: f1fba33d85a802b896170ff3cdb0107ee082c44a

> Remove TaskExecutorManager related logic
> 
>
> Key: FLINK-34345
> URL: https://issues.apache.org/jira/browse/FLINK-34345
> Project: Flink
>  Issue Type: Technical Debt
>Affects Versions: 1.19.0
>Reporter: Caican Cai
>Assignee: Caican Cai
>Priority: Minor
>  Labels: pull-request-available
>
> FLINK-31449 removed DeclarativeSlotManager related logic. Some related 
> classes should be removed as well after FLINK-31449, such as:
>  * TaskExecutorManager
>  * TaskExecutorManagerBuilder
>  * TaskExecutorManagerTest
>  * 
> {{org.apache.flink.runtime.resourcemanager.slotmanager.TaskManagerRegistration}}
>  * PendingTaskManagerSlot
>  * TaskManagerSlotId
>  * SlotStatusUpdateListener
>  * TestingTaskManagerSlotInformation



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34345][runtime] Remove TaskExecutorManager related logic [flink]

2024-02-06 Thread via GitHub


1996fanrui merged PR #24257:
URL: https://github.com/apache/flink/pull/24257


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34345][runtime] Remove TaskExecutorManager related logic [flink]

2024-02-06 Thread via GitHub


1996fanrui commented on PR #24257:
URL: https://github.com/apache/flink/pull/24257#issuecomment-1931352108

   The master branch has been cut to 1.20, merging~


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34391) Release Testing Instructions: Verify FLINK-15959 Add min number of slots configuration to limit total number of slots

2024-02-06 Thread xiangyu feng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17815115#comment-17815115
 ] 

xiangyu feng commented on FLINK-34391:
--

[~lincoln.86xy] Hi, I think this feature needs cross-team testing because it is 
a user-facing feature.

> Release Testing Instructions: Verify FLINK-15959 Add min number of slots 
> configuration to limit total number of slots
> -
>
> Key: FLINK-34391
> URL: https://issues.apache.org/jira/browse/FLINK-34391
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: xiangyu feng
>Priority: Blocker
> Fix For: 1.19.0
>
> Attachments: screenshot-1.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34384) Release Testing: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-02-06 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan closed FLINK-34384.
---
Resolution: Fixed

> Release Testing: Verify FLINK-33735 Improve the exponential-delay 
> restart-strategy 
> ---
>
> Key: FLINK-34384
> URL: https://issues.apache.org/jira/browse/FLINK-34384
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Cancai Cai
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-15-05-05-331.png, 
> image-2024-02-07-13-49-03-024.png, screenshot-1.png
>
>
> Test suggestion:
>  # Prepare a datastream job that all tasks throw exception directly.
>  ## Set the parallelism to 5 or above
>  # Prepare some configuration options:
>  ** restart-strategy.type : exponential-delay
>  ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
>  # Start the cluster: ./bin/start-cluster.sh
>  # Run the job: ./bin/flink run -c className jarName
>  # Check the result
>  ** Check whether job will be retried 7 times
>  ** Check the exception history, the list has 7 exceptions
>  ** Each retry except the last one shows the 5 subtasks (they are 
> concurrent exceptions).
> !image-2024-02-06-15-05-05-331.png|width=1624,height=797!  
>  
> Note: Set the options mentioned in step 2 at 2 levels separately
>  * Cluster level: set them in the config.yaml (see the cluster level demo below)
>  * Job level: set them in the code (see the job level demo below)
>  
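> Cluster level demo (a minimal config.yaml sketch using the options from step 2):
> {code:yaml}
> restart-strategy.type: exponential-delay
> restart-strategy.exponential-delay.attempts-before-reset-backoff: 7
> {code}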
> Job level demo:
> {code:java}
> public static void main(String[] args) throws Exception {
> Configuration conf = new Configuration();
> conf.setString("restart-strategy", "exponential-delay");
> 
> conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
>  "6");
> StreamExecutionEnvironment env =  
> StreamExecutionEnvironment.getExecutionEnvironment(conf);
> env.setParallelism(5);
> DataGeneratorSource<Long> generatorSource =
> new DataGeneratorSource<>(
> value -> value,
> 300,
> RateLimiterStrategy.perSecond(10),
> Types.LONG);
> env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data 
> Generator")
> .map(new RichMapFunction<Long, Long>() {
> @Override
> public Long map(Long value) {
> throw new RuntimeException(
> "Excepted testing exception, subtaskIndex: " + 
> getRuntimeContext().getIndexOfThisSubtask());
> }
> })
> .print();
> env.execute();
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34382) Release Testing: Verify FLINK-33625 Support System out and err to be redirected to LOG or discarded

2024-02-06 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan reassigned FLINK-34382:
---

Assignee: Caican Cai  (was: Cancai Cai)

> Release Testing: Verify FLINK-33625 Support System out and err to be 
> redirected to LOG or discarded
> ---
>
> Key: FLINK-34382
> URL: https://issues.apache.org/jira/browse/FLINK-34382
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Configuration
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Caican Cai
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: test1.png, test2.png
>
>
> Test suggestion:
>  # Prepare a Flink SQL job and a Flink datastream job
>  ** they can use the print sink or call System.out.println inside the UDF (a 
> UDF sketch follows the SQL demo below)
>  # Add this config to the config.yaml
>  ** taskmanager.system-out.mode : LOG
>  # Run the job
>  # Check whether the print output is redirected to the log file
> SQL demo:
> {code:java}
> ./bin/sql-client.sh
> CREATE TABLE orders (
>   id   INT,
>   app  INT,
>   channel  INT,
>   user_id  STRING,
>   ts   TIMESTAMP(3),
>   WATERMARK FOR ts AS ts
> ) WITH (
>'connector' = 'datagen',
>'rows-per-second'='20',
>'fields.app.min'='1',
>'fields.app.max'='10',
>'fields.channel.min'='21',
>'fields.channel.max'='30',
>'fields.user_id.length'='10'
> );
> create table print_sink ( 
>   id   INT,
>   app  INT,
>   channel  INT,
>   user_id  STRING,
>   ts   TIMESTAMP(3)
> ) with ('connector' = 'print' );
> insert into print_sink
> select id   ,app   ,channel   ,user_id   ,ts   from orders 
> {code}
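> For the datastream/UDF variant, a minimal sketch (hypothetical UDF, not part 
> of the original instructions) that prints via System.out so the redirection 
> can be observed in the TaskManager log:
> {code:java}
> import org.apache.flink.table.functions.ScalarFunction;
> 
> public class PrintingUdf extends ScalarFunction {
>     public String eval(String s) {
>         // With taskmanager.system-out.mode : LOG this line should end up
>         // in the TaskManager log file instead of stdout.
>         System.out.println("udf saw: " + s);
>         return s;
>     }
> }
> {code}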



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34384) Release Testing: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-02-06 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan reassigned FLINK-34384:
---

Assignee: Caican Cai  (was: Cancai Cai)

> Release Testing: Verify FLINK-33735 Improve the exponential-delay 
> restart-strategy 
> ---
>
> Key: FLINK-34384
> URL: https://issues.apache.org/jira/browse/FLINK-34384
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Caican Cai
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-15-05-05-331.png, 
> image-2024-02-07-13-49-03-024.png, screenshot-1.png
>
>
> Test suggestion:
>  # Prepare a datastream job that all tasks throw exception directly.
>  ## Set the parallelism to 5 or above
>  # Prepare some configuration options:
>  ** restart-strategy.type : exponential-delay
>  ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
>  # Start the cluster: ./bin/start-cluster.sh
>  # Run the job: ./bin/flink run -c className jarName
>  # Check the result
>  ** Check whether job will be retried 7 times
>  ** Check the exception history, the list has 7 exceptions
>  ** Each retry except the last one shows the 5 subtasks (they are 
> concurrent exceptions).
> !image-2024-02-06-15-05-05-331.png|width=1624,height=797!  
>  
> Note: Set the options mentioned in step 2 at 2 levels separately
>  * Cluster level: set them in the config.yaml
>  * Job level: Set them in the code
>  
> Job level demo:
> {code:java}
> public static void main(String[] args) throws Exception {
> Configuration conf = new Configuration();
> conf.setString("restart-strategy", "exponential-delay");
> 
> conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
>  "6");
> StreamExecutionEnvironment env =  
> StreamExecutionEnvironment.getExecutionEnvironment(conf);
> env.setParallelism(5);
> DataGeneratorSource<Long> generatorSource =
> new DataGeneratorSource<>(
> value -> value,
> 300,
> RateLimiterStrategy.perSecond(10),
> Types.LONG);
> env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data 
> Generator")
> .map(new RichMapFunction<Long, Long>() {
> @Override
> public Long map(Long value) {
> throw new RuntimeException(
> "Excepted testing exception, subtaskIndex: " + 
> getRuntimeContext().getIndexOfThisSubtask());
> }
> })
> .print();
> env.execute();
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34384) Release Testing: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-02-06 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17815112#comment-17815112
 ] 

Rui Fan commented on FLINK-34384:
-

Thanks a lot [~caicancai] for the testing.

> Release Testing: Verify FLINK-33735 Improve the exponential-delay 
> restart-strategy 
> ---
>
> Key: FLINK-34384
> URL: https://issues.apache.org/jira/browse/FLINK-34384
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Cancai Cai
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-15-05-05-331.png, 
> image-2024-02-07-13-49-03-024.png, screenshot-1.png
>
>
> Test suggestion:
>  # Prepare a datastream job that all tasks throw exception directly.
>  ## Set the parallelism to 5 or above
>  # Prepare some configuration options:
>  ** restart-strategy.type : exponential-delay
>  ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
>  # Start the cluster: ./bin/start-cluster.sh
>  # Run the job: ./bin/flink run -c className jarName
>  # Check the result
>  ** Check whether job will be retried 7 times
>  ** Check the exception history, the list has 7 exceptions
>  ** Each retry except the last one shows the 5 subtasks (they are 
> concurrent exceptions).
> !image-2024-02-06-15-05-05-331.png|width=1624,height=797!  
>  
> Note: Set the options mentioned in step 2 at 2 levels separately
>  * Cluster level: set them in the config.yaml
>  * Job level: Set them in the code
>  
> Job level demo:
> {code:java}
> public static void main(String[] args) throws Exception {
> Configuration conf = new Configuration();
> conf.setString("restart-strategy", "exponential-delay");
> 
> conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
>  "6");
> StreamExecutionEnvironment env =  
> StreamExecutionEnvironment.getExecutionEnvironment(conf);
> env.setParallelism(5);
> DataGeneratorSource<Long> generatorSource =
> new DataGeneratorSource<>(
> value -> value,
> 300,
> RateLimiterStrategy.perSecond(10),
> Types.LONG);
> env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data 
> Generator")
> .map(new RichMapFunction<Long, Long>() {
> @Override
> public Long map(Long value) {
> throw new RuntimeException(
> "Excepted testing exception, subtaskIndex: " + 
> getRuntimeContext().getIndexOfThisSubtask());
> }
> })
> .print();
> env.execute();
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34115) TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate fails

2024-02-06 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan closed FLINK-34115.
-

> TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate fails
> --
>
> Key: FLINK-34115
> URL: https://issues.apache.org/jira/browse/FLINK-34115
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.18.0, 1.19.0
>Reporter: Matthias Pohl
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0, 1.18.2
>
>
> It failed twice in the same pipeline run:
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56348&view=logs&j=de826397-1924-5900-0034-51895f69d4b7&t=f311e913-93a2-5a37-acab-4a63e1328f94&l=11613]
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56348&view=logs&j=a9db68b9-a7e0-54b6-0f98-010e0aff39e2&t=cdd32e0b-6047-565b-c58f-14054472f1be&l=11963]
> {code:java}
>  Jan 14 01:20:01 01:20:01.949 [ERROR] Tests run: 18, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 29.07 s <<< FAILURE! -- in 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase
> Jan 14 01:20:01 01:20:01.949 [ERROR] 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate
>  -- Time elapsed: 0.518 s <<< FAILURE!
> Jan 14 01:20:01 org.opentest4j.AssertionFailedError: 
> Jan 14 01:20:01 
> Jan 14 01:20:01 expected: List((true,6,1), (false,6,1), (true,6,1), 
> (true,3,2), (false,6,1), (false,3,2), (true,6,1), (true,5,2), (false,6,1), 
> (false,5,2), (true,8,1), (true,6,2), (false,8,1), (false,6,2), (true,8,1), 
> (true,6,2))
> Jan 14 01:20:01  but was: List((true,3,1), (false,3,1), (true,5,1), 
> (true,3,2), (false,5,1), (false,3,2), (true,8,1), (true,5,2), (false,8,1), 
> (false,5,2), (true,8,1), (true,5,2), (false,8,1), (false,5,2), (true,8,1), 
> (true,6,2))
> Jan 14 01:20:01   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Jan 14 01:20:01   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Jan 14 01:20:01   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Jan 14 01:20:01   at 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.checkRank$1(TableAggregateITCase.scala:122)
> Jan 14 01:20:01   at 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate(TableAggregateITCase.scala:69)
> Jan 14 01:20:01   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.Iterator.forEachRemaining(Iterator.java:116)
> Jan 14 01:20:01   at 
> scala.collection.convert.Wrappers$IteratorWrapper.forEachRemaining(Wrappers.scala:26)
> Jan 14 01:20:01   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272)
> Jan 14 01:20:01   at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> Jan 14 01:20:01   at 
> 

[jira] [Commented] (FLINK-34115) TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate fails

2024-02-06 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17815111#comment-17815111
 ] 

Jane Chan commented on FLINK-34115:
---

I think this issue can be closed now.

> TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate fails
> --
>
> Key: FLINK-34115
> URL: https://issues.apache.org/jira/browse/FLINK-34115
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.18.0, 1.19.0
>Reporter: Matthias Pohl
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0, 1.18.2
>
>
> It failed twice in the same pipeline run:
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56348&view=logs&j=de826397-1924-5900-0034-51895f69d4b7&t=f311e913-93a2-5a37-acab-4a63e1328f94&l=11613]
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56348&view=logs&j=a9db68b9-a7e0-54b6-0f98-010e0aff39e2&t=cdd32e0b-6047-565b-c58f-14054472f1be&l=11963]
> {code:java}
>  Jan 14 01:20:01 01:20:01.949 [ERROR] Tests run: 18, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 29.07 s <<< FAILURE! -- in 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase
> Jan 14 01:20:01 01:20:01.949 [ERROR] 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate
>  -- Time elapsed: 0.518 s <<< FAILURE!
> Jan 14 01:20:01 org.opentest4j.AssertionFailedError: 
> Jan 14 01:20:01 
> Jan 14 01:20:01 expected: List((true,6,1), (false,6,1), (true,6,1), 
> (true,3,2), (false,6,1), (false,3,2), (true,6,1), (true,5,2), (false,6,1), 
> (false,5,2), (true,8,1), (true,6,2), (false,8,1), (false,6,2), (true,8,1), 
> (true,6,2))
> Jan 14 01:20:01  but was: List((true,3,1), (false,3,1), (true,5,1), 
> (true,3,2), (false,5,1), (false,3,2), (true,8,1), (true,5,2), (false,8,1), 
> (false,5,2), (true,8,1), (true,5,2), (false,8,1), (false,5,2), (true,8,1), 
> (true,6,2))
> Jan 14 01:20:01   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Jan 14 01:20:01   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Jan 14 01:20:01   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Jan 14 01:20:01   at 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.checkRank$1(TableAggregateITCase.scala:122)
> Jan 14 01:20:01   at 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate(TableAggregateITCase.scala:69)
> Jan 14 01:20:01   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.Iterator.forEachRemaining(Iterator.java:116)
> Jan 14 01:20:01   at 
> scala.collection.convert.Wrappers$IteratorWrapper.forEachRemaining(Wrappers.scala:26)
> Jan 14 01:20:01   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272)
> Jan 14 01:20:01   at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> Jan 

[jira] [Resolved] (FLINK-34115) TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate fails

2024-02-06 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan resolved FLINK-34115.
---
Resolution: Fixed

> TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate fails
> --
>
> Key: FLINK-34115
> URL: https://issues.apache.org/jira/browse/FLINK-34115
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.18.0, 1.19.0
>Reporter: Matthias Pohl
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.19.0, 1.18.2
>
>
> It failed twice in the same pipeline run:
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56348&view=logs&j=de826397-1924-5900-0034-51895f69d4b7&t=f311e913-93a2-5a37-acab-4a63e1328f94&l=11613]
>  * 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56348&view=logs&j=a9db68b9-a7e0-54b6-0f98-010e0aff39e2&t=cdd32e0b-6047-565b-c58f-14054472f1be&l=11963]
> {code:java}
>  Jan 14 01:20:01 01:20:01.949 [ERROR] Tests run: 18, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 29.07 s <<< FAILURE! -- in 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase
> Jan 14 01:20:01 01:20:01.949 [ERROR] 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate
>  -- Time elapsed: 0.518 s <<< FAILURE!
> Jan 14 01:20:01 org.opentest4j.AssertionFailedError: 
> Jan 14 01:20:01 
> Jan 14 01:20:01 expected: List((true,6,1), (false,6,1), (true,6,1), 
> (true,3,2), (false,6,1), (false,3,2), (true,6,1), (true,5,2), (false,6,1), 
> (false,5,2), (true,8,1), (true,6,2), (false,8,1), (false,6,2), (true,8,1), 
> (true,6,2))
> Jan 14 01:20:01  but was: List((true,3,1), (false,3,1), (true,5,1), 
> (true,3,2), (false,5,1), (false,3,2), (true,8,1), (true,5,2), (false,8,1), 
> (false,5,2), (true,8,1), (true,5,2), (false,8,1), (false,5,2), (true,8,1), 
> (true,6,2))
> Jan 14 01:20:01   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> Jan 14 01:20:01   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> Jan 14 01:20:01   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> Jan 14 01:20:01   at 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.checkRank$1(TableAggregateITCase.scala:122)
> Jan 14 01:20:01   at 
> org.apache.flink.table.planner.runtime.stream.table.TableAggregateITCase.testFlagAggregateWithOrWithoutIncrementalUpdate(TableAggregateITCase.scala:69)
> Jan 14 01:20:01   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Jan 14 01:20:01   at 
> java.util.Iterator.forEachRemaining(Iterator.java:116)
> Jan 14 01:20:01   at 
> scala.collection.convert.Wrappers$IteratorWrapper.forEachRemaining(Wrappers.scala:26)
> Jan 14 01:20:01   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> Jan 14 01:20:01   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
> Jan 14 01:20:01   at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:272)
> Jan 14 01:20:01   at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384)
> Jan 14 01:20:01   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> Jan 14 01:20:01   at 
> 

[jira] [Commented] (FLINK-34239) Introduce a deep copy method of SerializerConfig for merging with Table configs in org.apache.flink.table.catalog.DataTypeFactoryImpl

2024-02-06 Thread Kumar Mallikarjuna (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17815110#comment-17815110
 ] 

Kumar Mallikarjuna commented on FLINK-34239:


Thanks, [~Zhanghao Chen]! Hey [~zjureel], I would really appreciate it if you 
could assign the task! TIA :)

> Introduce a deep copy method of SerializerConfig for merging with Table 
> configs in org.apache.flink.table.catalog.DataTypeFactoryImpl 
> --
>
> Key: FLINK-34239
> URL: https://issues.apache.org/jira/browse/FLINK-34239
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Core
>Affects Versions: 1.19.0
>Reporter: Zhanghao Chen
>Priority: Major
>
> *Problem*
> Currently, 
> org.apache.flink.table.catalog.DataTypeFactoryImpl#createSerializerExecutionConfig
> will create a deep copy of the SerializerConfig and merge the Table config into 
> it. However, the deep copy is done by manually calling the getter and setter 
> methods of SerializerConfig, and is prone to human error, e.g. failing to 
> copy a newly added field in SerializerConfig.
> *Proposal*
> Introduce a deep copy method for SerializerConfig and replace the current impl 
> in 
> org.apache.flink.table.catalog.DataTypeFactoryImpl#createSerializerExecutionConfig.
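> A self-contained sketch of the proposed pattern (field names are illustrative 
> only; the real SerializerConfig differs):
> {code:java}
> import java.util.LinkedHashMap;
> import java.util.Map;
> 
> final class MiniSerializerConfig {
>     private final Map<String, Class<?>> registeredTypes = new LinkedHashMap<>();
>     private boolean forceKryo;
> 
>     // Deep copy kept next to the fields, so a newly added field is hard to miss.
>     MiniSerializerConfig copy() {
>         MiniSerializerConfig c = new MiniSerializerConfig();
>         c.registeredTypes.putAll(this.registeredTypes); // copy mutable state, don't share it
>         c.forceKryo = this.forceKryo;
>         return c;
>     }
> }
> {code}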



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34384) Release Testing: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-02-06 Thread Caican Cai (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17815106#comment-17815106
 ] 

Caican Cai commented on FLINK-34384:


!image-2024-02-07-13-49-03-024.png!

> Release Testing: Verify FLINK-33735 Improve the exponential-delay 
> restart-strategy 
> ---
>
> Key: FLINK-34384
> URL: https://issues.apache.org/jira/browse/FLINK-34384
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Cancai Cai
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-15-05-05-331.png, 
> image-2024-02-07-13-49-03-024.png, screenshot-1.png
>
>
> Test suggestion:
>  # Prepare a datastream job that all tasks throw exception directly.
>  ## Set the parallelism to 5 or above
>  # Prepare some configuration options:
>  ** restart-strategy.type : exponential-delay
>  ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
>  # Start the cluster: ./bin/start-cluster.sh
>  # Run the job: ./bin/flink run -c className jarName
>  # Check the result
>  ** Check whether job will be retried 7 times
>  ** Check the exception history, the list has 7 exceptions
>  ** Each retry except the last one shows the 5 subtasks (they are 
> concurrent exceptions).
> !image-2024-02-06-15-05-05-331.png|width=1624,height=797!  
>  
> Note: Set the options mentioned in step 2 at 2 levels separately
>  * Cluster level: set them in the config.yaml
>  * Job level: Set them in the code
>  
> Job level demo:
> {code:java}
> public static void main(String[] args) throws Exception {
> Configuration conf = new Configuration();
> conf.setString("restart-strategy", "exponential-delay");
> 
> conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
>  "6");
> StreamExecutionEnvironment env =  
> StreamExecutionEnvironment.getExecutionEnvironment(conf);
> env.setParallelism(5);
> DataGeneratorSource<Long> generatorSource =
> new DataGeneratorSource<>(
> value -> value,
> 300,
> RateLimiterStrategy.perSecond(10),
> Types.LONG);
> env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data 
> Generator")
> .map(new RichMapFunction<Long, Long>() {
> @Override
> public Long map(Long value) {
> throw new RuntimeException(
> "Excepted testing exception, subtaskIndex: " + 
> getRuntimeContext().getIndexOfThisSubtask());
> }
> })
> .print();
> env.execute();
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34384) Release Testing: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-02-06 Thread Caican Cai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caican Cai updated FLINK-34384:
---
Attachment: image-2024-02-07-13-49-03-024.png

> Release Testing: Verify FLINK-33735 Improve the exponential-delay 
> restart-strategy 
> ---
>
> Key: FLINK-34384
> URL: https://issues.apache.org/jira/browse/FLINK-34384
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Cancai Cai
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-15-05-05-331.png, 
> image-2024-02-07-13-49-03-024.png, screenshot-1.png
>
>
> Test suggestion:
>  # Prepare a datastream job that all tasks throw exception directly.
>  ## Set the parallelism to 5 or above
>  # Prepare some configuration options:
>  ** restart-strategy.type : exponential-delay
>  ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
>  # Start the cluster: ./bin/start-cluster.sh
>  # Run the job: ./bin/flink run -c className jarName
>  # Check the result
>  ** Check whether job will be retried 7 times
>  ** Check the exception history, the list has 7 exceptions
>  ** Each retry except the last one shows the 5 subtasks (they are 
> concurrent exceptions).
> !image-2024-02-06-15-05-05-331.png|width=1624,height=797!  
>  
> Note: Set the options mentioned in step 2 at 2 levels separately
>  * Cluster level: set them in the config.yaml
>  * Job level: Set them in the code
>  
> Job level demo:
> {code:java}
> public static void main(String[] args) throws Exception {
> Configuration conf = new Configuration();
> conf.setString("restart-strategy", "exponential-delay");
> 
> conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
>  "6");
> StreamExecutionEnvironment env =  
> StreamExecutionEnvironment.getExecutionEnvironment(conf);
> env.setParallelism(5);
> DataGeneratorSource<Long> generatorSource =
> new DataGeneratorSource<>(
> value -> value,
> 300,
> RateLimiterStrategy.perSecond(10),
> Types.LONG);
> env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data 
> Generator")
> .map(new RichMapFunction<Long, Long>() {
> @Override
> public Long map(Long value) {
> throw new RuntimeException(
> "Excepted testing exception, subtaskIndex: " + 
> getRuntimeContext().getIndexOfThisSubtask());
> }
> })
> .print();
> env.execute();
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34384) Release Testing: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-02-06 Thread Caican Cai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caican Cai updated FLINK-34384:
---
Attachment: 2024-02-06 22-30-32屏幕截图.png

> Release Testing: Verify FLINK-33735 Improve the exponential-delay 
> restart-strategy 
> ---
>
> Key: FLINK-34384
> URL: https://issues.apache.org/jira/browse/FLINK-34384
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Cancai Cai
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-15-05-05-331.png, screenshot-1.png
>
>
> Test suggestion:
>  # Prepare a datastream job that all tasks throw exception directly.
>  ## Set the parallelism to 5 or above
>  # Prepare some configuration options:
>  ** restart-strategy.type : exponential-delay
>  ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
>  # Start the cluster: ./bin/start-cluster.sh
>  # Run the job: ./bin/flink run -c className jarName
>  # Check the result
>  ** Check whether job will be retried 7 times
>  ** Check the exception history, the list has 7 exceptions
>  ** Each retry except the last one shows the 5 subtasks (they are 
> concurrent exceptions).
> !image-2024-02-06-15-05-05-331.png|width=1624,height=797!  
>  
> Note: Set the options mentioned in step 2 at 2 levels separately
>  * Cluster level: set them in the config.yaml
>  * Job level: Set them in the code
>  
> Job level demo:
> {code:java}
> public static void main(String[] args) throws Exception {
> Configuration conf = new Configuration();
> conf.setString("restart-strategy", "exponential-delay");
> 
> conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
>  "6");
> StreamExecutionEnvironment env =  
> StreamExecutionEnvironment.getExecutionEnvironment(conf);
> env.setParallelism(5);
> DataGeneratorSource<Long> generatorSource =
> new DataGeneratorSource<>(
> value -> value,
> 300,
> RateLimiterStrategy.perSecond(10),
> Types.LONG);
> env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data 
> Generator")
> .map(new RichMapFunction<Long, Long>() {
> @Override
> public Long map(Long value) {
> throw new RuntimeException(
> "Excepted testing exception, subtaskIndex: " + 
> getRuntimeContext().getIndexOfThisSubtask());
> }
> })
> .print();
> env.execute();
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34384) Release Testing: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-02-06 Thread Caican Cai (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17815105#comment-17815105
 ] 

Caican Cai commented on FLINK-34384:


[~fanrui] [~lincoln.86xy] The test succeeded.

> Release Testing: Verify FLINK-33735 Improve the exponential-delay 
> restart-strategy 
> ---
>
> Key: FLINK-34384
> URL: https://issues.apache.org/jira/browse/FLINK-34384
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Cancai Cai
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-15-05-05-331.png, screenshot-1.png
>
>
> Test suggestion:
>  # Prepare a datastream job that all tasks throw exception directly.
>  ## Set the parallelism to 5 or above
>  # Prepare some configuration options:
>  ** restart-strategy.type : exponential-delay
>  ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
>  # Start the cluster: ./bin/start-cluster.sh
>  # Run the job: ./bin/flink run -c className jarName
>  # Check the result
>  ** Check whether job will be retried 7 times
>  ** Check the exception history, the list has 7 exceptions
>  ** Each retry except the last one shows the 5 subtasks (they are 
> concurrent exceptions).
> !image-2024-02-06-15-05-05-331.png|width=1624,height=797!  
>  
> Note: Set the options mentioned in step 2 at 2 levels separately
>  * Cluster level: set them in the config.yaml
>  * Job level: Set them in the code
>  
> Job level demo:
> {code:java}
> public static void main(String[] args) throws Exception {
> Configuration conf = new Configuration();
> conf.setString("restart-strategy", "exponential-delay");
> 
> conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
>  "6");
> StreamExecutionEnvironment env =  
> StreamExecutionEnvironment.getExecutionEnvironment(conf);
> env.setParallelism(5);
> DataGeneratorSource<Long> generatorSource =
> new DataGeneratorSource<>(
> value -> value,
> 300,
> RateLimiterStrategy.perSecond(10),
> Types.LONG);
> env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data 
> Generator")
> .map(new RichMapFunction<Long, Long>() {
> @Override
> public Long map(Long value) {
> throw new RuntimeException(
> "Excepted testing exception, subtaskIndex: " + 
> getRuntimeContext().getIndexOfThisSubtask());
> }
> })
> .print();
> env.execute();
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34384) Release Testing: Verify FLINK-33735 Improve the exponential-delay restart-strategy

2024-02-06 Thread Caican Cai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caican Cai updated FLINK-34384:
---
Attachment: (was: 2024-02-06 22-30-32屏幕截图.png)

> Release Testing: Verify FLINK-33735 Improve the exponential-delay 
> restart-strategy 
> ---
>
> Key: FLINK-34384
> URL: https://issues.apache.org/jira/browse/FLINK-34384
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Cancai Cai
>Priority: Blocker
>  Labels: release-testing
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-15-05-05-331.png, screenshot-1.png
>
>
> Test suggestion:
>  # Prepare a datastream job that all tasks throw exception directly.
>  ## Set the parallelism to 5 or above
>  # Prepare some configuration options:
>  ** restart-strategy.type : exponential-delay
>  ** restart-strategy.exponential-delay.attempts-before-reset-backoff : 7
>  # Start the cluster: ./bin/start-cluster.sh
>  # Run the job: ./bin/flink run -c className jarName
>  # Check the result
>  ** Check whether job will be retried 7 times
>  ** Check the exception history, the list has 7 exceptions
>  ** Each retry except the last one shows the 5 subtasks (they are 
> concurrent exceptions).
> !image-2024-02-06-15-05-05-331.png|width=1624,height=797!  
>  
> Note: Set the options mentioned in step 2 at 2 levels separately
>  * Cluster level: set them in the config.yaml
>  * Job level: Set them in the code
>  
> Job level demo:
> {code:java}
> public static void main(String[] args) throws Exception {
> Configuration conf = new Configuration();
> conf.setString("restart-strategy", "exponential-delay");
> 
> conf.setString("restart-strategy.exponential-delay.attempts-before-reset-backoff",
>  "6");
> StreamExecutionEnvironment env =  
> StreamExecutionEnvironment.getExecutionEnvironment(conf);
> env.setParallelism(5);
> DataGeneratorSource<Long> generatorSource =
> new DataGeneratorSource<>(
> value -> value,
> 300,
> RateLimiterStrategy.perSecond(10),
> Types.LONG);
> env.fromSource(generatorSource, WatermarkStrategy.noWatermarks(), "Data 
> Generator")
> .map(new RichMapFunction<Long, Long>() {
> @Override
> public Long map(Long value) {
> throw new RuntimeException(
> "Excepted testing exception, subtaskIndex: " + 
> getRuntimeContext().getIndexOfThisSubtask());
> }
> })
> .print();
> env.execute();
> } {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-33611) Support Large Protobuf Schemas

2024-02-06 Thread Benchao Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benchao Li resolved FLINK-33611.

Fix Version/s: 1.20
   Resolution: Fixed

Fixed via df03ada10e226053780cb2e5e9742add4536289c (master)

[~dsaisharath] Thanks for your contribution!

> Support Large Protobuf Schemas
> --
>
> Key: FLINK-33611
> URL: https://issues.apache.org/jira/browse/FLINK-33611
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.18.0
>Reporter: Sai Sharath Dandi
>Assignee: Sai Sharath Dandi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20
>
>
> h3. Background
> Flink serializes and deserializes protobuf format data by calling the decode 
> or encode method in GeneratedProtoToRow_XXX.java generated by codegen to 
> parse byte[] data into Protobuf Java objects. FLINK-32650 has introduced the 
> ability to split the generated code to improve the performance for large 
> Protobuf schemas. However, this is still not sufficient to support some 
> larger protobuf schemas as the generated code exceeds the java constant pool 
> size [limit|https://en.wikipedia.org/wiki/Java_class_file#The_constant_pool] 
> and we can see errors like "Too many constants" when trying to compile the 
> generated code. 
> *Solution*
> Since the code-splitting functionality is already in place, the main proposal 
> here is to reuse variable names across different split method scopes. This 
> will greatly reduce the constant pool size. One more optimization is to split 
> the last code segment only when its size exceeds the split threshold. 
> Currently, the last segment of the generated code is always split, which can 
> lead to too many split methods and thus exceed the constant pool size limit.
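> An illustrative sketch of the name-reuse idea (not Flink's actual codegen 
> output): identical local variable names in separate split methods collapse 
> into a single UTF-8 entry in the class constant pool, instead of one entry 
> per generated variable.
> {code:java}
> final class SplitDecodeSketch {
>     void decodeSplit0(StringBuilder out) {
>         String field0 = "a"; // the name "field0" ...
>         out.append(field0);
>     }
> 
>     void decodeSplit1(StringBuilder out) {
>         String field0 = "b"; // ... is reused here in a fresh scope
>         out.append(field0);
>     }
> }
> {code}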



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33611] [flink-protobuf] Support Large Protobuf Schemas [flink]

2024-02-06 Thread via GitHub


libenchao closed pull request #23937: [FLINK-33611] [flink-protobuf] Support 
Large Protobuf Schemas
URL: https://github.com/apache/flink/pull/23937


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [FLINK-34282] Update dev-master point to 1.20 [flink-docker]

2024-02-06 Thread via GitHub


lincoln-lil opened a new pull request, #178:
URL: https://github.com/apache/flink-docker/pull/178

   As a step in creating the flink release-1.19 branch 
(https://issues.apache.org/jira/browse/FLINK-34282),
   this PR updates dev-master to point to the most recent snapshot version, 1.20


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34282] Update dev-master point to 1.20 [flink-docker]

2024-02-06 Thread via GitHub


lincoln-lil closed pull request #177: [FLINK-34282] Update dev-master point to 
1.20
URL: https://github.com/apache/flink-docker/pull/177


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34282) Create a release branch

2024-02-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34282:
---
Labels: pull-request-available  (was: )

> Create a release branch
> ---
>
> Key: FLINK-34282
> URL: https://issues.apache.org/jira/browse/FLINK-34282
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: lincoln lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> If you are doing a new minor release, you need to update the Flink version in the 
> following repositories and the [AzureCI project 
> configuration|https://dev.azure.com/apache-flink/apache-flink/]:
>  * [apache/flink|https://github.com/apache/flink]
>  * [apache/flink-docker|https://github.com/apache/flink-docker]
>  * [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks]
> Patch releases don't require these repositories to be touched. Simply 
> check out the already existing branch for that version:
> {code:java}
> $ git checkout release-$SHORT_RELEASE_VERSION
> {code}
> h4. Flink repository
> Create a branch for the new version that we want to release before updating 
> the master branch to the next development version:
> {code:bash}
> $ cd ./tools
> tools $ releasing/create_snapshot_branch.sh
> tools $ git checkout master
> tools $ OLD_VERSION=$CURRENT_SNAPSHOT_VERSION 
> NEW_VERSION=$NEXT_SNAPSHOT_VERSION releasing/update_branch_version.sh
> {code}
> In the {{master}} branch, add a new value (e.g. {{v1_16("1.16")}}) to 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  as the last entry:
> {code:java}
> // ...
> v1_12("1.12"),
> v1_13("1.13"),
> v1_14("1.14"),
> v1_15("1.15"),
> v1_16("1.16");
> {code}
> Additionally in master, update the branch list of the GitHub Actions nightly 
> workflow (see 
> [apache/flink:.github/workflows/nightly-trigger.yml#L31ff|https://github.com/apache/flink/blob/master/.github/workflows/nightly-trigger.yml#L31]):
>  The two most-recent releases and master should be covered.
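> A hypothetical shape of that branch list (check the linked workflow file for 
> the actual syntax):
> {code:yaml}
> branch:
>   - master
>   - release-1.19
>   - release-1.18
> {code}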
> The newly created branch and updated {{master}} branch need to be pushed to 
> the official repository.
> h4. Flink Docker Repository
> Afterwards fork off from {{dev-master}} a {{dev-x.y}} branch in the 
> [apache/flink-docker|https://github.com/apache/flink-docker] repository. Make 
> sure that 
> [apache/flink-docker:.github/workflows/ci.yml|https://github.com/apache/flink-docker/blob/dev-master/.github/workflows/ci.yml]
>  points to the correct snapshot version; for {{dev-x.y}} it should point to 
> {{x.y-SNAPSHOT}}, while for {{dev-master}} it should point to the most 
> recent snapshot version ({{$NEXT_SNAPSHOT_VERSION}}).
> After pushing the new minor release branch, as the last step you should also 
> update the documentation workflow to also build the documentation for the new 
> release branch. Check [Managing 
> Documentation|https://cwiki.apache.org/confluence/display/FLINK/Managing+Documentation]
>  on details on how to do that. You may also want to manually trigger a build 
> to make the changes visible as soon as possible.
> h4. Flink Benchmark Repository
> First of all, checkout the {{master}} branch to {{dev-x.y}} branch in 
> [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks], so that 
> we can have a branch named {{dev-x.y}} which could be built on top of 
> ({{$CURRENT_SNAPSHOT_VERSION}}).
> Then, inside the repository you need to manually update the {{flink.version}} 
> property inside the parent *pom.xml* file. It should be pointing to the most 
> recent snapshot version ($NEXT_SNAPSHOT_VERSION). For example:
> {code:xml}
> <flink.version>1.18-SNAPSHOT</flink.version>
> {code}
> h4. AzureCI Project Configuration
> The new release branch needs to be configured within AzureCI to make azure 
> aware of the new release branch. This matter can only be handled by Ververica 
> employees since they are owning the AzureCI setup.
>  
> 
> h3. Expectations (Minor Version only if not stated otherwise)
>  * Release branch has been created and pushed
>  * Changes on the new release branch are picked up by [Azure 
> CI|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary]
>  * {{master}} branch has the version information updated to the new version 
> (check pom.xml files and 
>  * 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  enum)
>  *  
> [apache/flink:.github/workflows/nightly-trigger.yml#L31ff|https://github.com/apache/flink/blob/master/.github/workflows/nightly-trigger.yml#L31]
>  should have 

[PR] [FLINK-34282] Update dev-master point to 1.20 [flink-docker]

2024-02-06 Thread via GitHub


lincoln-lil opened a new pull request, #177:
URL: https://github.com/apache/flink-docker/pull/177

   As a step in creating the flink release-1.19 branch 
(https://issues.apache.org/jira/browse/FLINK-34282),
   this PR updates dev-master to point to the most recent snapshot version, 1.20
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34402) Class loading conflicts when using PowerMock in ITcase.

2024-02-06 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17815096#comment-17815096
 ] 

Xintong Song commented on FLINK-34402:
--

Hi [~nilerzhou],

The description of this issue is a bit unclear to me. Could you provide a bit 
more information?
- In which ITCase did you run into the problem? If it's an ITCase that does not 
yet exist and that you are planning to add, it would be helpful to also provide 
the code so others can reproduce the issue.
- Where exactly are you suggesting to replace 
`ClassLoader.getSystemClassLoader()` with 
`Thread.currentThread().getContextClassLoader()`?

BTW, it is discouraged to use Mockito for testing. See the Code Style and 
Quality Guide for more details. 
https://flink.apache.org/how-to-contribute/code-style-and-quality-common/#avoid-mockito---use-reusable-test-implementations

>  Class loading conflicts when using PowerMock in ITcase.
> 
>
> Key: FLINK-34402
> URL: https://issues.apache.org/jira/browse/FLINK-34402
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: yisha zhou
>Assignee: yisha zhou
>Priority: Major
> Fix For: 1.19.0
>
>
> Currently, when no user jars exist, the system classLoader is used to load 
> classes by default. However, if we use PowerMock to create some ITCases, the 
> framework uses JavassistMockClassLoader to load classes. Forcing the 
> use of the system classLoader can lead to class loading conflicts.
> Therefore we should use Thread.currentThread().getContextClassLoader() 
> instead of 
> ClassLoader.getSystemClassLoader() here.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34402) Class loading conflicts when using PowerMock in ITcase.

2024-02-06 Thread Fang Yong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fang Yong reassigned FLINK-34402:
-

Assignee: yisha zhou

>  Class loading conflicts when using PowerMock in ITcase.
> 
>
> Key: FLINK-34402
> URL: https://issues.apache.org/jira/browse/FLINK-34402
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: yisha zhou
>Assignee: yisha zhou
>Priority: Major
> Fix For: 1.19.0
>
>
> Currently, when no user jars exist, the system classLoader is used to load 
> classes by default. However, if we use PowerMock to create some ITCases, the 
> framework uses JavassistMockClassLoader to load classes. Forcing the 
> use of the system classLoader can lead to class loading conflicts.
> Therefore we should use Thread.currentThread().getContextClassLoader() 
> instead of 
> ClassLoader.getSystemClassLoader() here.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34402) Class loading conflicts when using PowerMock in ITcase.

2024-02-06 Thread yisha zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17815094#comment-17815094
 ] 

yisha zhou commented on FLINK-34402:


[~zjureel] hi, could you please assign this task to me? I'd like to fix it.

>  Class loading conflicts when using PowerMock in ITcase.
> 
>
> Key: FLINK-34402
> URL: https://issues.apache.org/jira/browse/FLINK-34402
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: yisha zhou
>Priority: Major
> Fix For: 1.19.0
>
>
> Currently, when no user jars exist, the system classLoader is used to load 
> classes by default. However, if we use PowerMock to create some ITCases, the 
> framework will use JavassistMockClassLoader to load classes. Forcing the 
> use of the system classLoader can lead to class loading conflicts.
> Therefore we should use Thread.currentThread().getContextClassLoader() 
> instead of 
> ClassLoader.getSystemClassLoader() here.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34402) Class loading conflicts when using PowerMock in ITcase.

2024-02-06 Thread yisha zhou (Jira)
yisha zhou created FLINK-34402:
--

 Summary:  Class loading conflicts when using PowerMock in ITcase.
 Key: FLINK-34402
 URL: https://issues.apache.org/jira/browse/FLINK-34402
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.19.0
Reporter: yisha zhou
 Fix For: 1.19.0


Currently, when no user jars exist, the system classLoader is used to load 
classes by default. However, if we use PowerMock to create some ITCases, the 
framework will use JavassistMockClassLoader to load classes. Forcing the use 
of the system classLoader can lead to class loading conflicts.

Therefore we should use Thread.currentThread().getContextClassLoader() instead 
of 
ClassLoader.getSystemClassLoader() here.
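A minimal sketch of the suggested lookup (the call site and the class name are 
illustrative, not the actual Flink code):
{code:java}
// Prefer the context classloader so that test frameworks which install their
// own loader (e.g. PowerMock's JavassistMockClassLoader) are respected.
ClassLoader cl = Thread.currentThread().getContextClassLoader();
if (cl == null) {
    // No context classloader is set; fall back to the system classloader.
    cl = ClassLoader.getSystemClassLoader();
}
Class<?> clazz = Class.forName("com.example.SomeUserClass", false, cl);
{code}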



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34392) Release Testing Instructions: Verify FLINK-33146 Unify the Representation of TaskManager Location in REST API and Web UI

2024-02-06 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee closed FLINK-34392.
---
Resolution: Won't Fix

[~Zhanghao Chen] Thanks for confirming! 

> Release Testing Instructions: Verify FLINK-33146 Unify the Representation of 
> TaskManager Location in REST API and Web UI
> 
>
> Key: FLINK-34392
> URL: https://issues.apache.org/jira/browse/FLINK-34392
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Zhanghao Chen
>Priority: Blocker
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34376) FLINK SQL SUM() causes a precision error

2024-02-06 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815091#comment-17815091
 ] 

xuyang commented on FLINK-34376:


This bug was introduced by https://issues.apache.org/jira/browse/FLINK-22586 .

We can simplify the SQL to reproduce this bug: 
{code:java}
select cast(9.11 AS DECIMAL(38,18)) * 10

++--+
| op |                                   EXPR$0 |
++--+
| +I |                               90.000 |
++--+ {code}
This is designed behavior (though there seem to be some problems). For the 
multiplication of DECIMAL types, the following formula is currently used.

 
{code:java}
// = Decimal Precision Deriving ==
// Adopted from "https://docs.microsoft.com/en-us/sql/t-sql/data-types/precision-
// scale-and-length-transact-sql"
//
// Operation  Result Precision                     Result Scale
// e1 + e2    max(s1, s2) + max(p1-s1, p2-s2) + 1  max(s1, s2)
// e1 - e2    max(s1, s2) + max(p1-s1, p2-s2) + 1  max(s1, s2)
// e1 * e2    p1 + p2 + 1                          s1 + s2
// e1 / e2    p1 - s1 + s2 + max(6, s1 + p2 + 1)   max(6, s1 + p2 + 1)
// e1 % e2    min(p1-s1, p2-s2) + max(s1, s2)      max(s1, s2)
//
// Also, if the precision / scale are out of the range, the scale may be
// sacrificed in order to prevent the truncation of the integer part of the
// decimals.
{code}
 

For the INTEGER type, the default precision is 10 and the scale is 0, so the 
derived precision and scale are (49, 18). However, since the precision exceeds 
the maximum precision of 38, the scale is adjusted from 18 down to 7:

 
{code:java}
integer part: 49 - 18 = 31
adjusted scale: 38 - 31 = 7{code}
IMO, the original design choice to preserve the integer part makes sense. But 
in this case the result is wrong and we should fix it (verifying against 
MySQL, the result should be `90.000110`).
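To make the adjustment concrete, a small sketch of the multiplication 
derivation (the minimum retained scale of 6 is an assumption for illustration; 
the exact floor used by the planner is not quoted here):
{code:java}
public class DecimalMultiplyType {
    static final int MAX_PRECISION = 38;
    static final int MIN_SCALE = 6; // assumed floor, for illustration only

    // Returns {precision, scale} for e1 * e2 with max-precision capping.
    static int[] multiplyType(int p1, int s1, int p2, int s2) {
        int precision = p1 + p2 + 1; // per the formula above
        int scale = s1 + s2;
        if (precision > MAX_PRECISION) {
            int integerDigits = precision - scale; // keep the integer part
            scale = Math.max(MIN_SCALE, MAX_PRECISION - integerDigits);
            precision = MAX_PRECISION;
        }
        return new int[] {precision, scale};
    }

    public static void main(String[] args) {
        // DECIMAL(38,18) * INT(10,0): derived (49,18) is capped to (38,7).
        int[] t = multiplyType(38, 18, 10, 0);
        System.out.println(t[0] + "," + t[1]); // prints 38,7
    }
}
{code}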

 

 

> FLINK SQL SUM() causes a precision error
> 
>
> Key: FLINK-34376
> URL: https://issues.apache.org/jira/browse/FLINK-34376
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.14.3, 1.18.1
>Reporter: Fangliang Liu
>Priority: Major
> Attachments: image-2024-02-06-11-15-02-669.png, 
> image-2024-02-06-11-17-03-399.png
>
>
> {code:java}
> select cast(sum(CAST(9.11 AS DECIMAL(38,18)) *10 ) as STRING) 
> {code}
> The precision is wrong in the Flink 1.14.3 and master branch
> !image-2024-02-06-11-15-02-669.png!
>  
> The accuracy is correct in the Flink 1.13.2 
> !image-2024-02-06-11-17-03-399.png!
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34390) Release Testing: Verify FLINK-33325 Built-in cross-platform powerful java profiler

2024-02-06 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan reassigned FLINK-34390:
---

Assignee: junzhong qin  (was: Rui Fan)

> Release Testing: Verify FLINK-33325 Built-in cross-platform powerful java 
> profiler
> --
>
> Key: FLINK-34390
> URL: https://issues.apache.org/jira/browse/FLINK-34390
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Web Frontend
>Affects Versions: 1.19.0
>Reporter: Yun Tang
>Assignee: junzhong qin
>Priority: Major
>  Labels: release-testing
> Fix For: 1.19.0
>
>
> See https://issues.apache.org/jira/browse/FLINK-34310



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34392) Release Testing Instructions: Verify FLINK-33146 Unify the Representation of TaskManager Location in REST API and Web UI

2024-02-06 Thread Zhanghao Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815087#comment-17815087
 ] 

Zhanghao Chen commented on FLINK-34392:
---

[~lincoln.86xy] Hi, I cannot close it. Could you help with that?

> Release Testing Instructions: Verify FLINK-33146 Unify the Representation of 
> TaskManager Location in REST API and Web UI
> 
>
> Key: FLINK-34392
> URL: https://issues.apache.org/jira/browse/FLINK-34392
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Zhanghao Chen
>Priority: Blocker
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34392) Release Testing Instructions: Verify FLINK-33146 Unify the Representation of TaskManager Location in REST API and Web UI

2024-02-06 Thread Zhanghao Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815086#comment-17815086
 ] 

Zhanghao Chen commented on FLINK-34392:
---

Closing it as it does not require cross-team testing.

> Release Testing Instructions: Verify FLINK-33146 Unify the Representation of 
> TaskManager Location in REST API and Web UI
> 
>
> Key: FLINK-34392
> URL: https://issues.apache.org/jira/browse/FLINK-34392
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Zhanghao Chen
>Priority: Blocker
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34390) Release Testing: Verify FLINK-33325 Built-in cross-platform powerful java profiler

2024-02-06 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan updated FLINK-34390:

Description: See https://issues.apache.org/jira/browse/FLINK-34310

> Release Testing: Verify FLINK-33325 Built-in cross-platform powerful java 
> profiler
> --
>
> Key: FLINK-34390
> URL: https://issues.apache.org/jira/browse/FLINK-34390
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Web Frontend
>Affects Versions: 1.19.0
>Reporter: Yun Tang
>Assignee: Rui Fan
>Priority: Major
>  Labels: release-testing
> Fix For: 1.19.0
>
>
> See https://issues.apache.org/jira/browse/FLINK-34310



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33636][autoscaler] Support the JDBCAutoScalerEventHandler [flink-kubernetes-operator]

2024-02-06 Thread via GitHub


1996fanrui commented on code in PR #765:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/765#discussion_r1480045403


##
docs/content/docs/custom-resource/autoscaler.md:
##
@@ -286,16 +286,20 @@ please download JDBC driver and initialize database and 
table first.
 
 ```
 JDBC_DRIVER_JAR=./mysql-connector-java-8.0.30.jar
-# export the password of jdbc state store
+# export the password of jdbc state store & jdbc event handler
 export STATE_STORE_JDBC_PWD=123456
+export EVENT_HANDLER_JDBC_PWD=123456
 
 java -cp flink-autoscaler-standalone-{{< version >}}.jar:${JDBC_DRIVER_JAR} \
 org.apache.flink.autoscaler.standalone.StandaloneAutoscalerEntrypoint \
 --autoscaler.standalone.fetcher.flink-cluster.host localhost \
 --autoscaler.standalone.fetcher.flink-cluster.port 8081 \
 --autoscaler.standalone.state-store.type jdbc \
 --autoscaler.standalone.state-store.jdbc.url 
jdbc:mysql://localhost:3306/flink_autoscaler \
---autoscaler.standalone.state-store.jdbc.username root
+--autoscaler.standalone.state-store.jdbc.username root \
+--autoscaler.standalone.event-handler.type jdbc \
+--autoscaler.standalone.event-handler.jdbc.url 
jdbc:mysql://localhost:3306/flink_autoscaler \
+--autoscaler.standalone.event-handler.jdbc.username root

Review Comment:
   I'm still working on this PR now; I found it's better to include it in 
`1.8.0`, because I believe the jdbcStateStore and jdbcEventHandler always work 
together.
   
   While testing the `jdbcEventHandler`, I found some defects in the 
`jdbcStateStore`, such as FLINK-34389 and this comment: 
https://github.com/apache/flink-kubernetes-operator/pull/765#discussion_r1480012807.
   
   This means that if we include the `jdbcEventHandler` in this version, we can 
ensure they work well together. If we release the jdbcStateStore in this 
version and the jdbcEventHandler in the next, we might need to change the 
jdbcStateStore again in the next version. It's better to release a final 
version for users, one that won't introduce any incompatibility problems.
   
   So I'd like to increase the priority of this, WDYT?
   
   Note: the production code is done (you can help review if you are free); the 
unit tests and ITCase are about 40% complete, and I will finish them asap.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34310) Release Testing Instructions: Verify FLINK-33325 Built-in cross-platform powerful java profiler

2024-02-06 Thread lincoln lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815083#comment-17815083
 ] 

lincoln lee commented on FLINK-34310:
-

[~Yu Chen]  Thanks for the updates! The time is OK, as it falls during the 
vacation, and Happy Chinese New Year! :)

> Release Testing Instructions: Verify FLINK-33325 Built-in cross-platform 
> powerful java profiler
> ---
>
> Key: FLINK-34310
> URL: https://issues.apache.org/jira/browse/FLINK-34310
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / REST, Runtime / Web Frontend
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Yun Tang
>Priority: Blocker
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-14-09-39-874.png, screenshot-1.png, 
> screenshot-2.png, screenshot-3.png, screenshot-4.png, screenshot-5.png
>
>
> Instructions:
> 1. For the default case, it will print a hint telling users how to enable 
> this feature.
>  !screenshot-2.png! 
> 2. After we add {{rest.profiling.enabled: true}} to the configuration, we 
> can use this feature, and the default mode should be {{ITIMER}}.
>  !screenshot-3.png! 
> 3. We cannot create another profiling instance while one is running.
>  !screenshot-4.png! 
> 4. We can get at most 10 profiling snapshots by default, and the older ones 
> will be deleted automatically.
>  !screenshot-5.png! 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34310) Release Testing Instructions: Verify FLINK-33325 Built-in cross-platform powerful java profiler

2024-02-06 Thread Yu Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815079#comment-17815079
 ] 

Yu Chen commented on FLINK-34310:
-

Sorry for the late response.

Thanks [~yunta] for creating the Testing Instructions.  
{quote}When I test other features in my Local, I found Profiler page throw some 
exceptions. I'm not sure whether it's expected.  
{quote}
Hi [~fanrui], so far, the determination of whether the profiler is enabled or 
not is achieved by checking if the interface is registered with
`WebMonitorEndpoint`. Therefore, this behavior is by design.
But I think we can implement this check more elegantly in a later version by 
registering an interface to check the enabled status of the profiler.
 
{quote}[~Yu Chen] Could you estimate when the user doc 
(https://issues.apache.org/jira/browse/FLINK-33436) can be finished?{quote}
Hi [~lincoln.86xy], really sorry for the late response. I was quite busy 
recently; is it OK if I finish working on the documentation within the next 
week (before 02.18)?

> Release Testing Instructions: Verify FLINK-33325 Built-in cross-platform 
> powerful java profiler
> ---
>
> Key: FLINK-34310
> URL: https://issues.apache.org/jira/browse/FLINK-34310
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / REST, Runtime / Web Frontend
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Yun Tang
>Priority: Blocker
> Fix For: 1.19.0
>
> Attachments: image-2024-02-06-14-09-39-874.png, screenshot-1.png, 
> screenshot-2.png, screenshot-3.png, screenshot-4.png, screenshot-5.png
>
>
> Instructions:
> 1. For the default case, it will print a hint telling users how to enable 
> this feature.
>  !screenshot-2.png! 
> 2. After we add {{rest.profiling.enabled: true}} to the configuration, we 
> can use this feature, and the default mode should be {{ITIMER}}.
>  !screenshot-3.png! 
> 3. We cannot create another profiling instance while one is running.
>  !screenshot-4.png! 
> 4. We can get at most 10 profiling snapshots by default, and the older ones 
> will be deleted automatically.
>  !screenshot-5.png! 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34276) Create a new version in JIRA

2024-02-06 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34276:

Fix Version/s: (was: 1.20)

> Create a new version in JIRA
> 
>
> Key: FLINK-34276
> URL: https://issues.apache.org/jira/browse/FLINK-34276
> Project: Flink
>  Issue Type: Sub-task
>Reporter: lincoln lee
>Priority: Major
>
> When contributors resolve an issue in JIRA, they are tagging it with a 
> release that will contain their changes. With the release currently underway, 
> new issues should be resolved against a subsequent future release. Therefore, 
> you should create a release item for this subsequent release, as follows:
>  # In JIRA, navigate to the [Flink > Administration > 
> Versions|https://issues.apache.org/jira/plugins/servlet/project-config/FLINK/versions].
>  # Add a new release: choose the next minor version number compared to the 
> one currently underway, select today’s date as the Start Date, and choose Add.
> (Note: Only PMC members have access to the project administration. If you do 
> not have access, ask on the mailing list for assistance.)
>  
> 
> h3. Expectations
>  * The new version should be listed in the dropdown menu of {{fixVersion}} or 
> {{affectedVersion}} under "unreleased versions" when creating a new Jira 
> issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34276) Create a new version in JIRA

2024-02-06 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee updated FLINK-34276:

Fix Version/s: 1.20

> Create a new version in JIRA
> 
>
> Key: FLINK-34276
> URL: https://issues.apache.org/jira/browse/FLINK-34276
> Project: Flink
>  Issue Type: Sub-task
>Reporter: lincoln lee
>Priority: Major
> Fix For: 1.20
>
>
> When contributors resolve an issue in JIRA, they are tagging it with a 
> release that will contain their changes. With the release currently underway, 
> new issues should be resolved against a subsequent future release. Therefore, 
> you should create a release item for this subsequent release, as follows:
>  # In JIRA, navigate to the [Flink > Administration > 
> Versions|https://issues.apache.org/jira/plugins/servlet/project-config/FLINK/versions].
>  # Add a new release: choose the next minor version number compared to the 
> one currently underway, select today’s date as the Start Date, and choose Add.
> (Note: Only PMC members have access to the project administration. If you do 
> not have access, ask on the mailing list for assistance.)
>  
> 
> h3. Expectations
>  * The new version should be listed in the dropdown menu of {{fixVersion}} or 
> {{affectedVersion}} under "unreleased versions" when creating a new Jira 
> issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34401][docs-zh] Translate "Flame Graphs" page into Chinese [flink]

2024-02-06 Thread via GitHub


flinkbot commented on PR #24279:
URL: https://github.com/apache/flink/pull/24279#issuecomment-1931161930

   
   ## CI report:
   
   * 044000baf9a560be53dfa6e8432af20ef8b07865 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34401) Translate "Flame Graphs" page into Chinese

2024-02-06 Thread li you (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815060#comment-17815060
 ] 

li you commented on FLINK-34401:


I'm done with it.

> Translate "Flame Graphs" page into Chinese
> --
>
> Key: FLINK-34401
> URL: https://issues.apache.org/jira/browse/FLINK-34401
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation
>Reporter: li you
>Priority: Major
>  Labels: pull-request-available
>
> The page is located at _"docs/content.zh/docs/ops/debugging/flame_graphs.md"_



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34401) Translate "Flame Graphs" page into Chinese

2024-02-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34401:
---
Labels: pull-request-available  (was: )

> Translate "Flame Graphs" page into Chinese
> --
>
> Key: FLINK-34401
> URL: https://issues.apache.org/jira/browse/FLINK-34401
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation
>Reporter: li you
>Priority: Major
>  Labels: pull-request-available
>
> The page is located at _"docs/content.zh/docs/ops/debugging/flame_graphs.md"_



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-34401][docs-zh] Translate "Flame Graphs" page into Chinese [flink]

2024-02-06 Thread via GitHub


lxliyou001 opened a new pull request, #24279:
URL: https://github.com/apache/flink/pull/24279

   
   
   ## What is the purpose of the change
   
   Translate Flame Graphs to Chinese
   
   ## Brief change log
   
   The page is located at "docs/content.zh/docs/ops/debugging/flame_graphs.md"
   
   ## Verifying this change
   
   Docs only change.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no) no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no) no
 - The serializers: (yes / no / don't know) no
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know) no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know) 
no
 - The S3 file system connector: (yes / no / don't know) no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no) no
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33549][table-runtime] Enable BatchMultipleInputStreamOperatorFactory to support YieldingOperatorFactory interface [flink]

2024-02-06 Thread via GitHub


xuyangzhong commented on code in PR #24259:
URL: https://github.com/apache/flink/pull/24259#discussion_r1480812721


##
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/multipleinput/BatchMultipleInputStreamOperatorFactory.java:
##
@@ -21,14 +21,15 @@
 import org.apache.flink.streaming.api.operators.AbstractStreamOperatorFactory;
 import org.apache.flink.streaming.api.operators.StreamOperator;
 import org.apache.flink.streaming.api.operators.StreamOperatorParameters;
+import org.apache.flink.streaming.api.operators.YieldingOperatorFactory;
 import org.apache.flink.table.data.RowData;
 import org.apache.flink.table.runtime.operators.multipleinput.input.InputSpec;
 
 import java.util.List;
 
 /** The factory to create {@link BatchMultipleInputStreamOperator}. */
-public class BatchMultipleInputStreamOperatorFactory
-extends AbstractStreamOperatorFactory {
+public class BatchMultipleInputStreamOperatorFactory extends 
AbstractStreamOperatorFactory
+implements YieldingOperatorFactory {

Review Comment:
   Just a little curious: why were no tests added for this change?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-34401) Translate "Flame Graphs" page into Chinese

2024-02-06 Thread li you (Jira)
li you created FLINK-34401:
--

 Summary: Translate "Flame Graphs" page into Chinese
 Key: FLINK-34401
 URL: https://issues.apache.org/jira/browse/FLINK-34401
 Project: Flink
  Issue Type: Improvement
  Components: chinese-translation
Reporter: li you


The page is located at _"docs/content.zh/docs/ops/debugging/flame_graphs.md"_



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34378) Minibatch join disrupted the original order of input records

2024-02-06 Thread xuyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815054#comment-17815054
 ] 

xuyang commented on FLINK-34378:


[~lsy] The situation is that although I set the parallelism to 1, the order of 
the minibatch output is still disrupted.

Hi, [~libenchao]. Thanks for the reminder; I have attached the diff of the 
results with the minibatch join turned on and off.
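For readers wondering how bundling can reorder records, a minimal standalone 
sketch (this is not Flink's actual operator; the HashMap-based buffer is an 
assumption for illustration):
{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BundleOrderDemo {
    public static void main(String[] args) {
        // Records arrive in order 1..8 but are buffered per (hashed) key.
        List<Integer> arrivals = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8);
        Map<Integer, List<Integer>> bundle = new HashMap<>();
        for (int r : arrivals) {
            bundle.computeIfAbsent(r % 3, k -> new ArrayList<>()).add(r);
        }
        // The flush walks the map, so the output follows the map's iteration
        // order rather than the records' arrival order.
        bundle.values().forEach(rows -> rows.forEach(System.out::println));
    }
}
{code}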

> Minibatch join disrupted the original order of input records
> 
>
> Key: FLINK-34378
> URL: https://issues.apache.org/jira/browse/FLINK-34378
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Runtime
>Affects Versions: 1.19.0
>Reporter: xuyang
>Priority: Major
> Fix For: 1.19.0
>
>
> I'm not sure if it's a bug. The following case can reproduce this situation.
> {code:java}
> // add it in CalcITCase
> @Test
> def test(): Unit = {
>   env.setParallelism(1)
>   val rows = Seq(
> row(1, "1"),
> row(2, "2"),
> row(3, "3"),
> row(4, "4"),
> row(5, "5"),
> row(6, "6"),
> row(7, "7"),
> row(8, "8"))
>   val dataId = TestValuesTableFactory.registerData(rows)
>   val ddl =
> s"""
>|CREATE TABLE t1 (
>|  a int,
>|  b string
>|) WITH (
>|  'connector' = 'values',
>|  'data-id' = '$dataId',
>|  'bounded' = 'false'
>|)
>  """.stripMargin
>   tEnv.executeSql(ddl)
>   val ddl2 =
> s"""
>|CREATE TABLE t2 (
>|  a int,
>|  b string
>|) WITH (
>|  'connector' = 'values',
>|  'data-id' = '$dataId',
>|  'bounded' = 'false'
>|)
>  """.stripMargin
>   tEnv.executeSql(ddl2)
>   tEnv.getConfig.getConfiguration
> .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ENABLED, 
> Boolean.box(true))
>   tEnv.getConfig.getConfiguration
> .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ALLOW_LATENCY, 
> Duration.ofSeconds(5))
>   tEnv.getConfig.getConfiguration
> .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_SIZE, Long.box(20L))
>   println(tEnv.sqlQuery("SELECT * from t1 join t2 on t1.a = t2.a").explain())
>   tEnv.executeSql("SELECT * from t1 join t2 on t1.a = t2.a").print()
> }{code}
> Result
> {code:java}
> ++---+---+---+---+ 
> | op | a | b | a0| b0| 
> ++---+---+---+---+ 
> | +I | 3 | 3 | 3 | 3 | 
> | +I | 7 | 7 | 7 | 7 | 
> | +I | 2 | 2 | 2 | 2 | 
> | +I | 5 | 5 | 5 | 5 | 
> | +I | 1 | 1 | 1 | 1 | 
> | +I | 6 | 6 | 6 | 6 | 
> | +I | 4 | 4 | 4 | 4 | 
> | +I | 8 | 8 | 8 | 8 | 
> ++---+---+---+---+
> {code}
> When I do not use minibatch join, the result is :
> {code:java}
> ++---+---+++
> | op | a | b | a0 | b0 |
> ++---+---+++
> | +I | 1 | 1 |  1 |  1 |
> | +I | 2 | 2 |  2 |  2 |
> | +I | 3 | 3 |  3 |  3 |
> | +I | 4 | 4 |  4 |  4 |
> | +I | 5 | 5 |  5 |  5 |
> | +I | 6 | 6 |  6 |  6 |
> | +I | 7 | 7 |  7 |  7 |
> | +I | 8 | 8 |  8 |  8 |
> ++---+---+++
>  {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34378) Minibatch join disrupted the original order of input records

2024-02-06 Thread xuyang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuyang updated FLINK-34378:
---
Description: 
I'm not sure if it's a bug. The following case can reproduce this situation.
{code:java}
// add it in CalcITCase
@Test
def test(): Unit = {
  env.setParallelism(1)
  val rows = Seq(
row(1, "1"),
row(2, "2"),
row(3, "3"),
row(4, "4"),
row(5, "5"),
row(6, "6"),
row(7, "7"),
row(8, "8"))
  val dataId = TestValuesTableFactory.registerData(rows)

  val ddl =
s"""
   |CREATE TABLE t1 (
   |  a int,
   |  b string
   |) WITH (
   |  'connector' = 'values',
   |  'data-id' = '$dataId',
   |  'bounded' = 'false'
   |)
 """.stripMargin
  tEnv.executeSql(ddl)

  val ddl2 =
s"""
   |CREATE TABLE t2 (
   |  a int,
   |  b string
   |) WITH (
   |  'connector' = 'values',
   |  'data-id' = '$dataId',
   |  'bounded' = 'false'
   |)
 """.stripMargin
  tEnv.executeSql(ddl2)

  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ENABLED, Boolean.box(true))
  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ALLOW_LATENCY, 
Duration.ofSeconds(5))
  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_SIZE, Long.box(20L))

  println(tEnv.sqlQuery("SELECT * from t1 join t2 on t1.a = t2.a").explain())

  tEnv.executeSql("SELECT * from t1 join t2 on t1.a = t2.a").print()
}{code}
Result
{code:java}
++---+---+---+---+ 
| op | a | b | a0| b0| 
++---+---+---+---+ 
| +I | 3 | 3 | 3 | 3 | 
| +I | 7 | 7 | 7 | 7 | 
| +I | 2 | 2 | 2 | 2 | 
| +I | 5 | 5 | 5 | 5 | 
| +I | 1 | 1 | 1 | 1 | 
| +I | 6 | 6 | 6 | 6 | 
| +I | 4 | 4 | 4 | 4 | 
| +I | 8 | 8 | 8 | 8 | 
++---+---+---+---+

{code}
When I do not use minibatch join, the result is :
{code:java}
++---+---+++
| op | a | b | a0 | b0 |
++---+---+++
| +I | 1 | 1 |  1 |  1 |
| +I | 2 | 2 |  2 |  2 |
| +I | 3 | 3 |  3 |  3 |
| +I | 4 | 4 |  4 |  4 |
| +I | 5 | 5 |  5 |  5 |
| +I | 6 | 6 |  6 |  6 |
| +I | 7 | 7 |  7 |  7 |
| +I | 8 | 8 |  8 |  8 |
++---+---+++
 {code}
 

  was:
I'm not sure if it's a bug. The following case can reproduce this situation.
{code:java}
// add it in CalcITCase
@Test
def test(): Unit = {
  env.setParallelism(1)
  val rows = Seq(
row(1, "1"),
row(2, "2"),
row(3, "3"),
row(4, "4"),
row(5, "5"),
row(6, "6"),
row(7, "7"),
row(8, "8"))
  val dataId = TestValuesTableFactory.registerData(rows)

  val ddl =
s"""
   |CREATE TABLE t1 (
   |  a int,
   |  b string
   |) WITH (
   |  'connector' = 'values',
   |  'data-id' = '$dataId',
   |  'bounded' = 'false'
   |)
 """.stripMargin
  tEnv.executeSql(ddl)

  val ddl2 =
s"""
   |CREATE TABLE t2 (
   |  a int,
   |  b string
   |) WITH (
   |  'connector' = 'values',
   |  'data-id' = '$dataId',
   |  'bounded' = 'false'
   |)
 """.stripMargin
  tEnv.executeSql(ddl2)

  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ENABLED, Boolean.box(true))
  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ALLOW_LATENCY, 
Duration.ofSeconds(5))
  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_SIZE, Long.box(20L))

  println(tEnv.sqlQuery("SELECT * from t1 join t2 on t1.a = t2.a").explain())

  tEnv.executeSql("SELECT * from t1 join t2 on t1.a = t2.a").print()
}{code}
Result
{code:java}

++---+---+---+---+ 
| op | a | b | a0| b0| 
++---+---+---+---+ 
| +I | 3 | 3 | 3 | 3 | 
| +I | 7 | 7 | 7 | 7 | 
| +I | 2 | 2 | 2 | 2 | 
| +I | 5 | 5 | 5 | 5 | 
| +I | 1 | 1 | 1 | 1 | 
| +I | 6 | 6 | 6 | 6 | 
| +I | 4 | 4 | 4 | 4 | 
| +I | 8 | 8 | 8 | 8 | 
++---+---+---+---+

{code}
 

 


> Minibatch join disrupted the original order of input records
> 
>
> Key: FLINK-34378
> URL: https://issues.apache.org/jira/browse/FLINK-34378
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Runtime
>Affects Versions: 1.19.0
>Reporter: xuyang
>Priority: Major
> Fix For: 1.19.0
>
>
> I'm not sure if it's a bug. The following case can reproduce this situation.
> {code:java}
> // add it in CalcITCase
> @Test
> def test(): Unit = {
>   env.setParallelism(1)
>   val rows = Seq(
> row(1, "1"),
> row(2, "2"),
> row(3, "3"),
> row(4, "4"),
> row(5, "5"),
> row(6, "6"),
> row(7, "7"),
> row(8, "8"))
>   val dataId = TestValuesTableFactory.registerData(rows)
>   val ddl =
> s"""
>|CREATE TABLE t1 (
>|  a int,
>|  b string
>|) WITH (
>|  

[jira] [Updated] (FLINK-34378) Minibatch join disrupted the original order of input records

2024-02-06 Thread xuyang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuyang updated FLINK-34378:
---
Description: 
I'm not sure if it's a bug. The following case can reproduce this situation.
{code:java}
// add it in CalcITCase
@Test
def test(): Unit = {
  env.setParallelism(1)
  val rows = Seq(
row(1, "1"),
row(2, "2"),
row(3, "3"),
row(4, "4"),
row(5, "5"),
row(6, "6"),
row(7, "7"),
row(8, "8"))
  val dataId = TestValuesTableFactory.registerData(rows)

  val ddl =
s"""
   |CREATE TABLE t1 (
   |  a int,
   |  b string
   |) WITH (
   |  'connector' = 'values',
   |  'data-id' = '$dataId',
   |  'bounded' = 'false'
   |)
 """.stripMargin
  tEnv.executeSql(ddl)

  val ddl2 =
s"""
   |CREATE TABLE t2 (
   |  a int,
   |  b string
   |) WITH (
   |  'connector' = 'values',
   |  'data-id' = '$dataId',
   |  'bounded' = 'false'
   |)
 """.stripMargin
  tEnv.executeSql(ddl2)

  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ENABLED, Boolean.box(true))
  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ALLOW_LATENCY, 
Duration.ofSeconds(5))
  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_SIZE, Long.box(20L))

  println(tEnv.sqlQuery("SELECT * from t1 join t2 on t1.a = t2.a").explain())

  tEnv.executeSql("SELECT * from t1 join t2 on t1.a = t2.a").print()
}{code}
Result
{code:java}

++---+---+---+---+ 
| op | a | b | a0| b0| 
++---+---+---+---+ 
| +I | 3 | 3 | 3 | 3 | 
| +I | 7 | 7 | 7 | 7 | 
| +I | 2 | 2 | 2 | 2 | 
| +I | 5 | 5 | 5 | 5 | 
| +I | 1 | 1 | 1 | 1 | 
| +I | 6 | 6 | 6 | 6 | 
| +I | 4 | 4 | 4 | 4 | 
| +I | 8 | 8 | 8 | 8 | 
++---+---+---+---+

{code}
 

 

  was:
I'm not sure if it's a bug. The following case can reproduce this situation.
{code:java}
// add it in CalcITCase
@Test
def test(): Unit = {
  env.setParallelism(1)
  val rows = Seq(
row(1, "1"),
row(2, "2"),
row(3, "3"),
row(4, "4"),
row(5, "5"),
row(6, "6"),
row(7, "7"),
row(8, "8"))
  val dataId = TestValuesTableFactory.registerData(rows)

  val ddl =
s"""
   |CREATE TABLE t1 (
   |  a int,
   |  b string
   |) WITH (
   |  'connector' = 'values',
   |  'data-id' = '$dataId',
   |  'bounded' = 'false'
   |)
 """.stripMargin
  tEnv.executeSql(ddl)

  val ddl2 =
s"""
   |CREATE TABLE t2 (
   |  a int,
   |  b string
   |) WITH (
   |  'connector' = 'values',
   |  'data-id' = '$dataId',
   |  'bounded' = 'false'
   |)
 """.stripMargin
  tEnv.executeSql(ddl2)

  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ENABLED, Boolean.box(true))
  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ALLOW_LATENCY, 
Duration.ofSeconds(5))
  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_SIZE, Long.box(20L))

  println(tEnv.sqlQuery("SELECT * from t1 join t2 on t1.a = t2.a").explain())

  tEnv.executeSql("SELECT * from t1 join t2 on t1.a = t2.a").print()
}{code}
Result
{code:java}
// code placeholder
{code}
 

 


> Minibatch join disrupted the original order of input records
> 
>
> Key: FLINK-34378
> URL: https://issues.apache.org/jira/browse/FLINK-34378
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Runtime
>Affects Versions: 1.19.0
>Reporter: xuyang
>Priority: Major
> Fix For: 1.19.0
>
>
> I'm not sure if it's a bug. The following case can reproduce this situation.
> {code:java}
> // add it in CalcITCase
> @Test
> def test(): Unit = {
>   env.setParallelism(1)
>   val rows = Seq(
> row(1, "1"),
> row(2, "2"),
> row(3, "3"),
> row(4, "4"),
> row(5, "5"),
> row(6, "6"),
> row(7, "7"),
> row(8, "8"))
>   val dataId = TestValuesTableFactory.registerData(rows)
>   val ddl =
> s"""
>|CREATE TABLE t1 (
>|  a int,
>|  b string
>|) WITH (
>|  'connector' = 'values',
>|  'data-id' = '$dataId',
>|  'bounded' = 'false'
>|)
>  """.stripMargin
>   tEnv.executeSql(ddl)
>   val ddl2 =
> s"""
>|CREATE TABLE t2 (
>|  a int,
>|  b string
>|) WITH (
>|  'connector' = 'values',
>|  'data-id' = '$dataId',
>|  'bounded' = 'false'
>|)
>  """.stripMargin
>   tEnv.executeSql(ddl2)
>   tEnv.getConfig.getConfiguration
> .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ENABLED, 
> Boolean.box(true))
>   tEnv.getConfig.getConfiguration
> 

[jira] [Closed] (FLINK-34365) [docs] Delete repeated pages in Chinese Flink website and correct the Paimon url

2024-02-06 Thread Lijie Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lijie Wang closed FLINK-34365.
--
Resolution: Fixed

> [docs] Delete repeated pages in Chinese Flink website and correct the Paimon 
> url
> 
>
> Key: FLINK-34365
> URL: https://issues.apache.org/jira/browse/FLINK-34365
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Waterking
>Assignee: Waterking
>Priority: Major
>  Labels: pull-request-available
> Attachments: 微信截图_20240205214854.png
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The "教程" (Tutorials) column on the [Flink 
> 中文网|https://flink.apache.org/zh/how-to-contribute/contribute-documentation/] 
> currently has two "[With Paimon(incubating) (formerly Flink Table 
> Store)|https://paimon.apache.org/docs/master/engines/flink/%22]" entries.
> Therefore, I deleted one for brevity.
> Also, the current link is wrong, and I corrected it to "[With 
> Paimon(incubating) (formerly Flink Table 
> Store)|https://paimon.apache.org/docs/master/engines/flink]".



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34365) [docs] Delete repeated pages in Chinese Flink website and correct the Paimon url

2024-02-06 Thread Lijie Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815053#comment-17815053
 ] 

Lijie Wang commented on FLINK-34365:


Fixed via branch asf-site (flink-web): ec2e5c2b4a312fe56e44e13c57b84c6f1331b992

> [docs] Delete repeated pages in Chinese Flink website and correct the Paimon 
> url
> 
>
> Key: FLINK-34365
> URL: https://issues.apache.org/jira/browse/FLINK-34365
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Waterking
>Assignee: Waterking
>Priority: Major
>  Labels: pull-request-available
> Attachments: 微信截图_20240205214854.png
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The "教程" (Tutorials) column on the [Flink 
> 中文网|https://flink.apache.org/zh/how-to-contribute/contribute-documentation/] 
> currently has two "[With Paimon(incubating) (formerly Flink Table 
> Store)|https://paimon.apache.org/docs/master/engines/flink/%22]" entries.
> Therefore, I deleted one for brevity.
> Also, the current link is wrong, and I corrected it to "[With 
> Paimon(incubating) (formerly Flink Table 
> Store)|https://paimon.apache.org/docs/master/engines/flink]".



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34378) Minibatch join disrupted the original order of input records

2024-02-06 Thread xuyang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuyang updated FLINK-34378:
---
Description: 
I'm not sure if it's a bug. The following case can reproduce this situation.
{code:java}
// add it in CalcITCase
@Test
def test(): Unit = {
  env.setParallelism(1)
  val rows = Seq(
row(1, "1"),
row(2, "2"),
row(3, "3"),
row(4, "4"),
row(5, "5"),
row(6, "6"),
row(7, "7"),
row(8, "8"))
  val dataId = TestValuesTableFactory.registerData(rows)

  val ddl =
s"""
   |CREATE TABLE t1 (
   |  a int,
   |  b string
   |) WITH (
   |  'connector' = 'values',
   |  'data-id' = '$dataId',
   |  'bounded' = 'false'
   |)
 """.stripMargin
  tEnv.executeSql(ddl)

  val ddl2 =
s"""
   |CREATE TABLE t2 (
   |  a int,
   |  b string
   |) WITH (
   |  'connector' = 'values',
   |  'data-id' = '$dataId',
   |  'bounded' = 'false'
   |)
 """.stripMargin
  tEnv.executeSql(ddl2)

  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ENABLED, Boolean.box(true))
  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ALLOW_LATENCY, 
Duration.ofSeconds(5))
  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_SIZE, Long.box(20L))

  println(tEnv.sqlQuery("SELECT * from t1 join t2 on t1.a = t2.a").explain())

  tEnv.executeSql("SELECT * from t1 join t2 on t1.a = t2.a").print()
}{code}
Result
{code:java}
// code placeholder
{code}
 

 

  was:
I'm not sure if it's a bug. The following case can reproduce this situation.
{code:java}
// add it in CalcITCase
@Test
def test(): Unit = {
  env.setParallelism(1)
  val rows = Seq(
row(1, "1"),
row(2, "2"),
row(3, "3"),
row(4, "4"),
row(5, "5"),
row(6, "6"),
row(7, "7"),
row(8, "8"))
  val dataId = TestValuesTableFactory.registerData(rows)

  val ddl =
s"""
   |CREATE TABLE t1 (
   |  a int,
   |  b string
   |) WITH (
   |  'connector' = 'values',
   |  'data-id' = '$dataId',
   |  'bounded' = 'false'
   |)
 """.stripMargin
  tEnv.executeSql(ddl)

  val ddl2 =
s"""
   |CREATE TABLE t2 (
   |  a int,
   |  b string
   |) WITH (
   |  'connector' = 'values',
   |  'data-id' = '$dataId',
   |  'bounded' = 'false'
   |)
 """.stripMargin
  tEnv.executeSql(ddl2)

  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ENABLED, Boolean.box(true))
  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ALLOW_LATENCY, 
Duration.ofSeconds(5))
  tEnv.getConfig.getConfiguration
.set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_SIZE, Long.box(20L))

  println(tEnv.sqlQuery("SELECT * from t1 join t2 on t1.a = t2.a").explain())

  tEnv.executeSql("SELECT * from t1 join t2 on t1.a = t2.a").print()
}{code}


> Minibatch join disrupted the original order of input records
> 
>
> Key: FLINK-34378
> URL: https://issues.apache.org/jira/browse/FLINK-34378
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Runtime
>Affects Versions: 1.19.0
>Reporter: xuyang
>Priority: Major
> Fix For: 1.19.0
>
>
> I'm not sure if it's a bug. The following case can reproduce this situation.
> {code:java}
> // add it in CalcITCase
> @Test
> def test(): Unit = {
>   env.setParallelism(1)
>   val rows = Seq(
> row(1, "1"),
> row(2, "2"),
> row(3, "3"),
> row(4, "4"),
> row(5, "5"),
> row(6, "6"),
> row(7, "7"),
> row(8, "8"))
>   val dataId = TestValuesTableFactory.registerData(rows)
>   val ddl =
> s"""
>|CREATE TABLE t1 (
>|  a int,
>|  b string
>|) WITH (
>|  'connector' = 'values',
>|  'data-id' = '$dataId',
>|  'bounded' = 'false'
>|)
>  """.stripMargin
>   tEnv.executeSql(ddl)
>   val ddl2 =
> s"""
>|CREATE TABLE t2 (
>|  a int,
>|  b string
>|) WITH (
>|  'connector' = 'values',
>|  'data-id' = '$dataId',
>|  'bounded' = 'false'
>|)
>  """.stripMargin
>   tEnv.executeSql(ddl2)
>   tEnv.getConfig.getConfiguration
> .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ENABLED, 
> Boolean.box(true))
>   tEnv.getConfig.getConfiguration
> .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_ALLOW_LATENCY, 
> Duration.ofSeconds(5))
>   tEnv.getConfig.getConfiguration
> .set(ExecutionConfigOptions.TABLE_EXEC_MINIBATCH_SIZE, Long.box(20L))
>   println(tEnv.sqlQuery("SELECT * from t1 join t2 on t1.a = t2.a").explain())
>   tEnv.executeSql("SELECT * from t1 join t2 on t1.a = t2.a").print()
> }{code}

Re: [PR] [FLINK-34365] Delete repeated pages in Chinese Flink website and correct the Paimon url [flink-web]

2024-02-06 Thread via GitHub


wanglijie95 merged PR #713:
URL: https://github.com/apache/flink-web/pull/713


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34365] Delete repeated pages in Chinese Flink website and correct the Paimon url [flink-web]

2024-02-06 Thread via GitHub


wanglijie95 commented on PR #713:
URL: https://github.com/apache/flink-web/pull/713#issuecomment-1931119435

   Thanks for the update, @Waterkin. Merged.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-34282) Create a release branch

2024-02-06 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee reassigned FLINK-34282:
---

Assignee: lincoln lee

> Create a release branch
> ---
>
> Key: FLINK-34282
> URL: https://issues.apache.org/jira/browse/FLINK-34282
> Project: Flink
>  Issue Type: Sub-task
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: lincoln lee
>Priority: Major
> Fix For: 1.19.0
>
>
> If you are doing a new minor release, you need to update Flink version in the 
> following repositories and the [AzureCI project 
> configuration|https://dev.azure.com/apache-flink/apache-flink/]:
>  * [apache/flink|https://github.com/apache/flink]
>  * [apache/flink-docker|https://github.com/apache/flink-docker]
>  * [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks]
> Patch releases don't require these repositories to be touched. Simply 
> check out the already existing branch for that version:
> {code:java}
> $ git checkout release-$SHORT_RELEASE_VERSION
> {code}
> h4. Flink repository
> Create a branch for the new version that we want to release before updating 
> the master branch to the next development version:
> {code:bash}
> $ cd ./tools
> tools $ releasing/create_snapshot_branch.sh
> tools $ git checkout master
> tools $ OLD_VERSION=$CURRENT_SNAPSHOT_VERSION 
> NEW_VERSION=$NEXT_SNAPSHOT_VERSION releasing/update_branch_version.sh
> {code}
> In the {{master}} branch, add a new value (e.g. {{v1_16("1.16")}}) to 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  as the last entry:
> {code:java}
> // ...
> v1_12("1.12"),
> v1_13("1.13"),
> v1_14("1.14"),
> v1_15("1.15"),
> v1_16("1.16");
> {code}
> Additionally in master, update the branch list of the GitHub Actions nightly 
> workflow (see 
> [apache/flink:.github/workflows/nightly-trigger.yml#L31ff|https://github.com/apache/flink/blob/master/.github/workflows/nightly-trigger.yml#L31]):
>  The two most-recent releases and master should be covered.
> The newly created branch and updated {{master}} branch need to be pushed to 
> the official repository.
> h4. Flink Docker Repository
> Afterwards fork off from {{dev-master}} a {{dev-x.y}} branch in the 
> [apache/flink-docker|https://github.com/apache/flink-docker] repository. Make 
> sure that 
> [apache/flink-docker:.github/workflows/ci.yml|https://github.com/apache/flink-docker/blob/dev-master/.github/workflows/ci.yml]
>  points to the correct snapshot version; for {{dev-x.y}} it should point to 
> {{x.y-SNAPSHOT}}, while for {{dev-master}} it should point to the most 
> recent snapshot version ({{$NEXT_SNAPSHOT_VERSION}}).
> After pushing the new minor release branch, as the last step you should also 
> update the documentation workflow to also build the documentation for the new 
> release branch. Check [Managing 
> Documentation|https://cwiki.apache.org/confluence/display/FLINK/Managing+Documentation]
>  on details on how to do that. You may also want to manually trigger a build 
> to make the changes visible as soon as possible.
> h4. Flink Benchmark Repository
> First of all, checkout the {{master}} branch to {{dev-x.y}} branch in 
> [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks], so that 
> we can have a branch named {{dev-x.y}} which could be built on top of 
> ({{$CURRENT_SNAPSHOT_VERSION}}).
> Then, inside the repository you need to manually update the {{flink.version}} 
> property inside the parent *pom.xml* file. It should be pointing to the most 
> recent snapshot version ($NEXT_SNAPSHOT_VERSION). For example:
> {code:xml}
> <flink.version>1.18-SNAPSHOT</flink.version>
> {code}
> h4. AzureCI Project Configuration
> The new release branch needs to be configured within AzureCI to make azure 
> aware of the new release branch. This matter can only be handled by Ververica 
> employees since they are owning the AzureCI setup.
>  
> 
> h3. Expectations (Minor Version only if not stated otherwise)
>  * Release branch has been created and pushed
>  * Changes on the new release branch are picked up by [Azure 
> CI|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary]
>  * {{master}} branch has the version information updated to the new version 
> (check pom.xml files and 
>  * 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  enum)
>  *  
> [apache/flink:.github/workflows/nightly-trigger.yml#L31ff|https://github.com/apache/flink/blob/master/.github/workflows/nightly-trigger.yml#L31]
>  should have the new release branch included
>  * New version is added to 

Re: [PR] [FLINK-25054][table-planner] Extend the exception message for SHA2 function [flink]

2024-02-06 Thread via GitHub


xuyangzhong commented on code in PR #24099:
URL: https://github.com/apache/flink/pull/24099#discussion_r1480779909


##
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/codegen/CodeGeneratorContext.scala:
##
@@ -1052,7 +1052,7 @@ class CodeGeneratorContext(
  |  "Algorithm for 'SHA-" + $bitLen + "' is not available.", e);
  |  }
  |} else {
- |  throw new RuntimeException("Unsupported algorithm.");
+ |  throw new RuntimeException("Unsupported SHA2 function with 
hashLength of $bitLen.");

Review Comment:
   Nit: should we also list the supported algorithms here?
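   For example, something along these lines, where `bitLen` stands for the 
generated hash-length value and 224/256/384/512 are the standard SHA-2 digest 
lengths (the exact wording is only a suggestion):
   ```java
   throw new RuntimeException(
       "Unsupported hash length " + bitLen
           + " for SHA2; supported lengths are 224, 256, 384 and 512.");
   ```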



##
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/expressions/ScalarFunctionsTest.scala:
##
@@ -2449,6 +2449,13 @@ class ScalarFunctionsTest extends ScalarTypesTestBase {
 // non-constant bit length
 testAllApis("test".sha2('f44), "SHA2('test', f44)", expectedSha256)
 
+testExpectedAllApisException(
+  "test".sha2(128),
+  "SHA2('test', 128)",
+  "Could not instantiate generated class",

Review Comment:
   Nit: assert the expected message in the exception.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-34396) Release Testing Instructions: Verify FLINK-32775 Support yarn.provided.lib.dirs to add parent directory to classpath

2024-02-06 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee closed FLINK-34396.
---
Resolution: Won't Fix

[~argoyal] Thanks for confirming :)

> Release Testing Instructions: Verify FLINK-32775 Support 
> yarn.provided.lib.dirs to add parent directory to classpath
> 
>
> Key: FLINK-34396
> URL: https://issues.apache.org/jira/browse/FLINK-34396
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Archit Goyal
>Priority: Blocker
> Fix For: 1.19.0
>
> Attachments: screenshot-1.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34383][Documentation] Modify the comment with incorrect syntax [flink]

2024-02-06 Thread via GitHub


lxliyou001 commented on PR #24273:
URL: https://github.com/apache/flink/pull/24273#issuecomment-1931048962

   @zhuzhurk, hi, this is my first time participating in an Apache project. 
Could you help review this PR and give me some advice? Thanks a lot.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34396) Release Testing Instructions: Verify FLINK-32775 Support yarn.provided.lib.dirs to add parent directory to classpath

2024-02-06 Thread Archit Goyal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815028#comment-17815028
 ] 

Archit Goyal commented on FLINK-34396:
--

Hi [~lincoln.86xy], this feature does not need cross-team testing.

> Release Testing Instructions: Verify FLINK-32775 Support 
> yarn.provided.lib.dirs to add parent directory to classpath
> 
>
> Key: FLINK-34396
> URL: https://issues.apache.org/jira/browse/FLINK-34396
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Archit Goyal
>Priority: Blocker
> Fix For: 1.19.0
>
> Attachments: screenshot-1.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34396) Release Testing Instructions: Verify FLINK-32775 Support yarn.provided.lib.dirs to add parent directory to classpath

2024-02-06 Thread Archit Goyal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815028#comment-17815028
 ] 

Archit Goyal edited comment on FLINK-34396 at 2/6/24 11:16 PM:
---

Hi [~lincoln.86xy], this feature does not need cross-team testing. Can you 
please help close this ticket?


was (Author: JIRAUSER300152):
Hi [~lincoln.86xy], this feature does not need cross-team testing.

> Release Testing Instructions: Verify FLINK-32775 Support 
> yarn.provided.lib.dirs to add parent directory to classpath
> 
>
> Key: FLINK-34396
> URL: https://issues.apache.org/jira/browse/FLINK-34396
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Archit Goyal
>Priority: Blocker
> Fix For: 1.19.0
>
> Attachments: screenshot-1.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33817][flink-protobuf] Set ReadDefaultValues=False by default in proto3 for performance improvement [flink]

2024-02-06 Thread via GitHub


sharath1709 commented on PR #24035:
URL: https://github.com/apache/flink/pull/24035#issuecomment-1930790818

   @wckdman Thanks for the upvote. I'm looking forward to getting this PR 
merged as well, once it's approved by the reviewers.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-31220) Replace Pod with PodTemplateSpec for the pod template properties

2024-02-06 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-31220.
--
Fix Version/s: kubernetes-operator-1.8.0
   Resolution: Fixed

merged to main 24643c8c6d9d734732ed2cb7e3112c4452675f40

> Replace Pod with PodTemplateSpec for the pod template properties
> 
>
> Key: FLINK-31220
> URL: https://issues.apache.org/jira/browse/FLINK-31220
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.8.0
>
>
> The current pod template fields in the CR use the Pod object for their 
> schema. This doesn't make sense, as status and other runtime fields should 
> never be specified and have no effect.
> We should replace this with PodTemplateSpec and make sure that this is not a 
> breaking change even if users incorrectly specified a status before.
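> A minimal sketch of the schema change (class and field names are 
> illustrative, based on the Fabric8 model types, not the exact operator code):
> {code:java}
> import io.fabric8.kubernetes.api.model.PodTemplateSpec;
>
> public class FlinkDeploymentSpecSketch {
>     // before: io.fabric8.kubernetes.api.model.Pod podTemplate;
>     // (Pod drags in runtime-only fields such as status that a template
>     // never uses)
>     private PodTemplateSpec podTemplate; // after: metadata + spec only
>
>     public PodTemplateSpec getPodTemplate() {
>         return podTemplate;
>     }
>
>     public void setPodTemplate(PodTemplateSpec podTemplate) {
>         this.podTemplate = podTemplate;
>     }
> }
> {code}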



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-31220] Replace Pod with PodTemplateSpec for the pod template properties [flink-kubernetes-operator]

2024-02-06 Thread via GitHub


gyfora merged PR #770:
URL: https://github.com/apache/flink-kubernetes-operator/pull/770


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34400) Kafka sources with watermark alignment sporadically stop consuming

2024-02-06 Thread Alexis Sarda-Espinosa (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17814996#comment-17814996
 ] 

Alexis Sarda-Espinosa commented on FLINK-34400:
---

I'm using Kafka connector 3.0.2-1.18

> Kafka sources with watermark alignment sporadically stop consuming
> --
>
> Key: FLINK-34400
> URL: https://issues.apache.org/jira/browse/FLINK-34400
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.1
>Reporter: Alexis Sarda-Espinosa
>Priority: Major
> Attachments: logs.txt
>
>
> I have 2 Kafka sources that read from different topics. I have assigned them 
> to the same watermark alignment group, and I have _not_ enabled idleness 
> explicitly in their watermark strategies. One topic remains pretty much empty 
> most of the time, while the other receives a few events per second all the 
> time. Parallelism of the active source is 2, for the other one is 1, and 
> checkpoints are once every minute.
> This works correctly for some time (10 - 15 minutes in my case) but then 1 of 
> the active sources stops consuming, which causes lag to increase. Weirdly, 
> after another 15 minutes or so, all the backlog is consumed at once, and then 
> everything stops again.
> I'm attaching some logs from the Task Manager where the issue appears. You 
> will notice that the Kafka network client reports disconnections (a long time 
> after the deserializer stopped reporting that events were being consumed); 
> I'm not sure if this is related.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34400) Kafka sources with watermark alignment sporadically stop consuming

2024-02-06 Thread Alexis Sarda-Espinosa (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexis Sarda-Espinosa updated FLINK-34400:
--
Description: 
I have 2 Kafka sources that read from different topics. I have assigned them to 
the same watermark alignment group, and I have _not_ enabled idleness 
explicitly in their watermark strategies. One topic remains pretty much empty 
most of the time, while the other receives a few events per second all the 
time. Parallelism of the active source is 2, for the other one it's 1, and 
checkpoints are once every minute.

This works correctly for some time (10 - 15 minutes in my case) but then 1 of 
the active sources stops consuming, which causes lag to increase. Weirdly, 
after another 15 minutes or so, all the backlog is consumed at once, and then 
everything stops again.

I'm attaching some logs from the Task Manager where the issue appears. You will 
notice that the Kafka network client reports disconnections (a long time after 
the deserializer stopped reporting that events were being consumed); I'm not 
sure if this is related.

  was:
I have 2 Kafka sources that read from different topics. I have assigned them to 
the same watermark alignment group, and I have _not_ enabled idleness 
explicitly in their watermark strategies. One topic remains pretty much empty 
most of the time, while the other receives a few events per second all the 
time. Parallelism of the active source is 2, for the other one is 1, and 
checkpoints are once every minute.

This works correctly for some time (10 - 15 minutes in my case) but then 1 of 
the active sources stops consuming, which causes lag to increase. Weirdly, 
after another 15 minutes or so, all the backlog is consumed at once, and then 
everything stops again.

I'm attaching some logs from the Task Manager where the issue appears. You will 
notice that the Kafka network client reports disconnections (a long time after 
the deserializer stopped reporting that events were being consumed); I'm not 
sure if this is related.


> Kafka sources with watermark alignment sporadically stop consuming
> --
>
> Key: FLINK-34400
> URL: https://issues.apache.org/jira/browse/FLINK-34400
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.1
>Reporter: Alexis Sarda-Espinosa
>Priority: Major
> Attachments: logs.txt
>
>
> I have 2 Kafka sources that read from different topics. I have assigned them 
> to the same watermark alignment group, and I have _not_ enabled idleness 
> explicitly in their watermark strategies. One topic remains pretty much empty 
> most of the time, while the other receives a few events per second all the 
> time. Parallelism of the active source is 2, for the other one it's 1, and 
> checkpoints are once every minute.
> This works correctly for some time (10 - 15 minutes in my case) but then 1 of 
> the active sources stops consuming, which causes lag to increase. Weirdly, 
> after another 15 minutes or so, all the backlog is consumed at once, and then 
> everything stops again.
> I'm attaching some logs from the Task Manager where the issue appears. You 
> will notice that the Kafka network client reports disconnections (a long time 
> after the deserializer stopped reporting that events were being consumed); 
> I'm not sure if this is related.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34400) Kafka sources with watermark alignment sporadically stop consuming

2024-02-06 Thread Alexis Sarda-Espinosa (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexis Sarda-Espinosa updated FLINK-34400:
--
Description: 
I have 2 Kafka sources that read from different topics. I have assigned them to 
the same watermark alignment group, and I have _not_ enabled idleness 
explicitly in their watermark strategies. One topic remains pretty much empty 
most of the time, while the other receives a few events per second all the 
time. Parallelism of the active source is 2, for the other one is 1, and 
checkpoints are once every minute.

This works correctly for some time (10 - 15 minutes in my case) but then 1 of 
the active sources stops consuming, which causes lag to increase. Weirdly, 
after another 15 minutes or so, all the backlog is consumed at once, and then 
everything stops again.

I'm attaching some logs from the Task Manager where the issue appears. You will 
notice that the Kafka network client reports disconnections (a long time after 
the deserializer stopped reporting that events were being consumed); I'm not 
sure if this is related.

  was:
I have 2 Kafka sources that read from different topics. I have assigned them to 
the same watermark alignment group, and I have _not_ enabled idleness 
explicitly in their watermark strategies. One topic remains pretty much empty 
most of the time, while the other receives a few events per second all the 
time. Parallelism of the active source is 2.

This works correctly for some time (10 - 15 minutes in my case) but then 1 of 
the active sources stops consuming, which causes lag to increase. Weirdly, 
after another 15 minutes or so, all the backlog is consumed at once, and then 
everything stops again.

I'm attaching some logs from the Task Manager where the issue appears. You will 
notice that the Kafka network client reports disconnections (a long time after 
the deserializer stopped reporting that events were being consumed); I'm not 
sure if this is related.


> Kafka sources with watermark alignment sporadically stop consuming
> --
>
> Key: FLINK-34400
> URL: https://issues.apache.org/jira/browse/FLINK-34400
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.1
>Reporter: Alexis Sarda-Espinosa
>Priority: Major
> Attachments: logs.txt
>
>
> I have 2 Kafka sources that read from different topics. I have assigned them 
> to the same watermark alignment group, and I have _not_ enabled idleness 
> explicitly in their watermark strategies. One topic remains pretty much empty 
> most of the time, while the other receives a few events per second all the 
> time. Parallelism of the active source is 2, for the other one is 1, and 
> checkpoints are once every minute.
> This works correctly for some time (10 - 15 minutes in my case) but then 1 of 
> the active sources stops consuming, which causes lag to increase. Weirdly, 
> after another 15 minutes or so, all the backlog is consumed at once, and then 
> everything stops again.
> I'm attaching some logs from the Task Manager where the issue appears. You 
> will notice that the Kafka network client reports disconnections (a long time 
> after the deserializer stopped reporting that events were being consumed); 
> I'm not sure if this is related.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34400) Kafka sources with watermark alignment sporadically stop consuming

2024-02-06 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17814992#comment-17814992
 ] 

Martijn Visser commented on FLINK-34400:


Also CC [~fanrui], who probably has better insights into this feature :)

> Kafka sources with watermark alignment sporadically stop consuming
> --
>
> Key: FLINK-34400
> URL: https://issues.apache.org/jira/browse/FLINK-34400
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.1
>Reporter: Alexis Sarda-Espinosa
>Priority: Major
> Attachments: logs.txt
>
>
> I have 2 Kafka sources that read from different topics. I have assigned them 
> to the same watermark alignment group, and I have _not_ enabled idleness 
> explicitly in their watermark strategies. One topic remains pretty much empty 
> most of the time, while the other receives a few events per second all the 
> time. Parallelism of the active source is 2.
> This works correctly for some time (10 - 15 minutes in my case) but then 1 of 
> the active sources stops consuming, which causes lag to increase. Weirdly, 
> after another 15 minutes or so, all the backlog is consumed at once, and then 
> everything stops again.
> I'm attaching some logs from the Task Manager where the issue appears. You 
> will notice that the Kafka network client reports disconnections (a long time 
> after the deserializer stopped reporting that events were being consumed); 
> I'm not sure if this is related.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34400) Kafka sources with watermark alignment sporadically stop consuming

2024-02-06 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17814991#comment-17814991
 ] 

Martijn Visser commented on FLINK-34400:


[~asardaes] Which version of the Flink Kafka connector did you use?

> Kafka sources with watermark alignment sporadically stop consuming
> --
>
> Key: FLINK-34400
> URL: https://issues.apache.org/jira/browse/FLINK-34400
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.1
>Reporter: Alexis Sarda-Espinosa
>Priority: Major
> Attachments: logs.txt
>
>
> I have 2 Kafka sources that read from different topics. I have assigned them 
> to the same watermark alignment group, and I have _not_ enabled idleness 
> explicitly in their watermark strategies. One topic remains pretty much empty 
> most of the time, while the other receives a few events per second all the 
> time. Parallelism of the active source is 2.
> This works correctly for some time (10 - 15 minutes in my case) but then 1 of 
> the active sources stops consuming, which causes lag to increase. Weirdly, 
> after another 15 minutes or so, all the backlog is consumed at once, and then 
> everything stops again.
> I'm attaching some logs from the Task Manager where the issue appears. You 
> will notice that the Kafka network client reports disconnections (a long time 
> after the deserializer stopped reporting that events were being consumed); 
> I'm not sure if this is related.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34337) Sink.InitContextWrapper should implement metadataConsumer method

2024-02-06 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-34337.
--
Resolution: Fixed

Fixed in apache/flink:master 03b4584422826d2819d571871dfef4efced19f01

> Sink.InitContextWrapper should implement metadataConsumer method
> 
>
> Key: FLINK-34337
> URL: https://issues.apache.org/jira/browse/FLINK-34337
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.19.0
>Reporter: Jiabao Sun
>Assignee: Jiabao Sun
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> Sink.InitContextWrapper should implement metadataConsumer method.
> If the metadataConsumer method is not implemented, the behavior of the 
> wrapped WriterInitContext's metadataConsumer will be lost.
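
For illustration, a simplified sketch of the delegation bug with stand-in interfaces (not the actual Flink classes): a wrapper that forgets to override a default method silently replaces the wrapped context's consumer with the default.

{code:java}
import java.util.Optional;
import java.util.function.Consumer;

public class MetadataConsumerSketch {

    // Simplified stand-in for the real interface, for illustration only.
    interface InitContext {
        default Optional<Consumer<String>> metadataConsumer() {
            return Optional.empty();
        }
    }

    static class InitContextWrapper implements InitContext {
        private final InitContext wrapped;

        InitContextWrapper(InitContext wrapped) {
            this.wrapped = wrapped;
        }

        // Without this override, callers would see the interface default
        // (Optional.empty()) instead of the wrapped context's consumer,
        // which is exactly the lost behavior described above.
        @Override
        public Optional<Consumer<String>> metadataConsumer() {
            return wrapped.metadataConsumer();
        }
    }

    public static void main(String[] args) {
        InitContext original = new InitContext() {
            @Override
            public Optional<Consumer<String>> metadataConsumer() {
                Consumer<String> printer = System.out::println;
                return Optional.of(printer);
            }
        };
        InitContext wrapper = new InitContextWrapper(original);
        // Prints "some-metadata" because the delegation is in place.
        wrapper.metadataConsumer().ifPresent(c -> c.accept("some-metadata"));
    }
}
{code}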



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34337][core] Sink.InitContextWrapper should implement metadataConsumer method [flink]

2024-02-06 Thread via GitHub


MartijnVisser merged PR #24249:
URL: https://github.com/apache/flink/pull/24249


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34337][core] Sink.InitContextWrapper should implement metadataConsumer method [flink]

2024-02-06 Thread via GitHub


MartijnVisser commented on PR #24249:
URL: https://github.com/apache/flink/pull/24249#issuecomment-1930591058

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-34095) Add a restore test for StreamExecAsyncCalc

2024-02-06 Thread Timo Walther (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther closed FLINK-34095.

Fix Version/s: 1.19.0
   Resolution: Fixed

Fixed in master: 4264c30d367c1847d702ea5438da64876559a95d

> Add a restore test for StreamExecAsyncCalc
> --
>
> Key: FLINK-34095
> URL: https://issues.apache.org/jira/browse/FLINK-34095
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Timo Walther
>Assignee: Alan Sheinberg
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> StreamExecAsyncCalc needs at least one restore test to check whether the node 
> can be restored from a CompiledPlan and whether restore from a savepoint 
> works.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34095] Adds restore tests for StreamExecAsyncCalc [flink]

2024-02-06 Thread via GitHub


twalthr merged PR #24220:
URL: https://github.com/apache/flink/pull/24220


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34400) Kafka sources with watermark alignment sporadically stop consuming

2024-02-06 Thread Alexis Sarda-Espinosa (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexis Sarda-Espinosa updated FLINK-34400:
--
Description: 
I have 2 Kafka sources that read from different topics. I have assigned them to 
the same watermark alignment group, and I have _not_ enabled idleness 
explicitly in their watermark strategies. One topic remains pretty much empty 
most of the time, while the other receives a few events per second all the 
time. Parallelism of the active source is 2.

This works correctly for some time (10 - 15 minutes in my case) but then 1 of 
the active sources stops consuming, which causes lag to increase. Weirdly, 
after another 15 minutes or so, all the backlog is consumed at once, and then 
everything stops again.

I'm attaching some logs from the Task Manager where the issue appears. You will 
notice that the Kafka network client reports disconnections (a long time after 
the deserializer stopped reporting that events were being consumed); I'm not 
sure if this is related.

  was:
I have 2 Kafka sources that read from different topics. I have assigned them to 
the same watermark alignment group, and I have _not_ enabled idleness 
explicitly in their watermark strategies. One topic remains pretty much empty 
most of the time, while the other receives a few events per second all the 
time. Parallelism of the active source is 2.

This works correctly for some time (10 - 15 minutes in my case) but then 1 of 
the active sources stops consuming, which causes lag to increase. Weirdly, 
after another 15 minutes or so, all the backlog is consumed at once, and then 
everything stops again.

I'm attaching some logs from the Task Manager where the issue appears. You will 
notice that the Kafka network client reports disconnections; this is because 
my Kafka cluster was indeed restarted (one broker at a time). I'm not sure if 
this is related.


> Kafka sources with watermark alignment sporadically stop consuming
> --
>
> Key: FLINK-34400
> URL: https://issues.apache.org/jira/browse/FLINK-34400
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.18.1
>Reporter: Alexis Sarda-Espinosa
>Priority: Major
> Attachments: logs.txt
>
>
> I have 2 Kafka sources that read from different topics. I have assigned them 
> to the same watermark alignment group, and I have _not_ enabled idleness 
> explicitly in their watermark strategies. One topic remains pretty much empty 
> most of the time, while the other receives a few events per second all the 
> time. Parallelism of the active source is 2.
> This works correctly for some time (10 - 15 minutes in my case) but then 1 of 
> the active sources stops consuming, which causes lag to increase. Weirdly, 
> after another 15 minutes or so, all the backlog is consumed at once, and then 
> everything stops again.
> I'm attaching some logs from the Task Manager where the issue appears. You 
> will notice that the Kafka network client reports disconnections (a long time 
> after the deserializer stopped reporting that events were being consumed); 
> I'm not sure if this is related.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34366) Add support to group rows by column ordinals

2024-02-06 Thread Jeyhun Karimov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17814898#comment-17814898
 ] 

Jeyhun Karimov edited comment on FLINK-34366 at 2/6/24 5:10 PM:


Hi [~martijnvisser], I worked on this issue. Could you please check the PR 
when you have time? Thanks!


was (Author: jeyhunkarimov):
Hi [~martijnvisser], I worked on this issue. Could you please check the PR?

> Add support to group rows by column ordinals
> 
>
> Key: FLINK-34366
> URL: https://issues.apache.org/jira/browse/FLINK-34366
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
>
> Reference: BigQuery 
> https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#group_by_col_ordinals
> The GROUP BY clause can refer to expression names in the SELECT list. The 
> GROUP BY clause also allows ordinal references to expressions in the SELECT 
> list, using integer values. 1 refers to the first value in the SELECT list, 2 
> the second, and so forth. The value list can combine ordinals and value 
> names. The following queries are equivalent:
> {code:sql}
> WITH PlayerStats AS (
>   SELECT 'Adams' as LastName, 'Noam' as FirstName, 3 as PointsScored UNION ALL
>   SELECT 'Buchanan', 'Jie', 0 UNION ALL
>   SELECT 'Coolidge', 'Kiran', 1 UNION ALL
>   SELECT 'Adams', 'Noam', 4 UNION ALL
>   SELECT 'Buchanan', 'Jie', 13)
> SELECT SUM(PointsScored) AS total_points, LastName, FirstName
> FROM PlayerStats
> GROUP BY LastName, FirstName;
> /*--------------+----------+-----------+
>  | total_points | LastName | FirstName |
>  +--------------+----------+-----------+
>  | 7            | Adams    | Noam      |
>  | 13           | Buchanan | Jie       |
>  | 1            | Coolidge | Kiran     |
>  +--------------+----------+-----------*/
> {code}
> {code:sql}
> WITH PlayerStats AS (
>   SELECT 'Adams' as LastName, 'Noam' as FirstName, 3 as PointsScored UNION ALL
>   SELECT 'Buchanan', 'Jie', 0 UNION ALL
>   SELECT 'Coolidge', 'Kiran', 1 UNION ALL
>   SELECT 'Adams', 'Noam', 4 UNION ALL
>   SELECT 'Buchanan', 'Jie', 13)
> SELECT SUM(PointsScored) AS total_points, LastName, FirstName
> FROM PlayerStats
> GROUP BY 2, 3;
> /*--------------+----------+-----------+
>  | total_points | LastName | FirstName |
>  +--------------+----------+-----------+
>  | 7            | Adams    | Noam      |
>  | 13           | Buchanan | Jie       |
>  | 1            | Coolidge | Kiran     |
>  +--------------+----------+-----------*/
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34366) Add support to group rows by column ordinals

2024-02-06 Thread Jeyhun Karimov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17814898#comment-17814898
 ] 

Jeyhun Karimov commented on FLINK-34366:


Hi [~martijnvisser], I worked on this issue. Could you please check the PR?

> Add support to group rows by column ordinals
> 
>
> Key: FLINK-34366
> URL: https://issues.apache.org/jira/browse/FLINK-34366
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
>
> Reference: BigQuery 
> https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#group_by_col_ordinals
> The GROUP BY clause can refer to expression names in the SELECT list. The 
> GROUP BY clause also allows ordinal references to expressions in the SELECT 
> list, using integer values. 1 refers to the first value in the SELECT list, 2 
> the second, and so forth. The value list can combine ordinals and value 
> names. The following queries are equivalent:
> {code:sql}
> WITH PlayerStats AS (
>   SELECT 'Adams' as LastName, 'Noam' as FirstName, 3 as PointsScored UNION ALL
>   SELECT 'Buchanan', 'Jie', 0 UNION ALL
>   SELECT 'Coolidge', 'Kiran', 1 UNION ALL
>   SELECT 'Adams', 'Noam', 4 UNION ALL
>   SELECT 'Buchanan', 'Jie', 13)
> SELECT SUM(PointsScored) AS total_points, LastName, FirstName
> FROM PlayerStats
> GROUP BY LastName, FirstName;
> /*--------------+----------+-----------+
>  | total_points | LastName | FirstName |
>  +--------------+----------+-----------+
>  | 7            | Adams    | Noam      |
>  | 13           | Buchanan | Jie       |
>  | 1            | Coolidge | Kiran     |
>  +--------------+----------+-----------*/
> {code}
> {code:sql}
> WITH PlayerStats AS (
>   SELECT 'Adams' as LastName, 'Noam' as FirstName, 3 as PointsScored UNION ALL
>   SELECT 'Buchanan', 'Jie', 0 UNION ALL
>   SELECT 'Coolidge', 'Kiran', 1 UNION ALL
>   SELECT 'Adams', 'Noam', 4 UNION ALL
>   SELECT 'Buchanan', 'Jie', 13)
> SELECT SUM(PointsScored) AS total_points, LastName, FirstName
> FROM PlayerStats
> GROUP BY 2, 3;
> /*--------------+----------+-----------+
>  | total_points | LastName | FirstName |
>  +--------------+----------+-----------+
>  | 7            | Adams    | Noam      |
>  | 13           | Buchanan | Jie       |
>  | 1            | Coolidge | Kiran     |
>  +--------------+----------+-----------*/
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34400) Kafka sources with watermark alignment sporadically stop consuming

2024-02-06 Thread Alexis Sarda-Espinosa (Jira)
Alexis Sarda-Espinosa created FLINK-34400:
-

 Summary: Kafka sources with watermark alignment sporadically stop 
consuming
 Key: FLINK-34400
 URL: https://issues.apache.org/jira/browse/FLINK-34400
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.18.1
Reporter: Alexis Sarda-Espinosa
 Attachments: logs.txt

I have 2 Kafka sources that read from different topics. I have assigned them to 
the same watermark alignment group, and I have _not_ enabled idleness 
explicitly in their watermark strategies. One topic remains pretty much empty 
most of the time, while the other receives a few events per second all the 
time. Parallelism of the active source is 2.

This works correctly for some time (10 - 15 minutes in my case) but then 1 of 
the active sources stops consuming, which causes lag to increase. Weirdly, 
after another 15 minutes or so, all the backlog is consumed at once, and then 
everything stops again.

I'm attaching some logs from the Task Manager where the issue appears. You will 
notice that the Kafka network client reports disconnections; this is because 
my Kafka cluster was indeed restarted (one broker at a time). I'm not sure if 
this is related.
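
For reference, a minimal sketch of the setup described above (topic names, alignment group, and drift values are illustrative, not the exact job configuration):

{code:java}
import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class WatermarkAlignmentSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000L); // checkpoint once every minute

        // Both sources share one alignment group; idleness is NOT enabled.
        WatermarkStrategy<String> aligned = WatermarkStrategy
                .<String>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                .withWatermarkAlignment("alignment-group", Duration.ofSeconds(30));

        KafkaSource<String> busy = kafkaSource("busy-topic");
        KafkaSource<String> quiet = kafkaSource("quiet-topic");

        env.fromSource(busy, aligned, "busy-source").setParallelism(2).print();
        env.fromSource(quiet, aligned, "quiet-source").setParallelism(1).print();
        env.execute("watermark-alignment-sketch");
    }

    private static KafkaSource<String> kafkaSource(String topic) {
        return KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setTopics(topic)
                .setGroupId("example-" + topic)
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .setStartingOffsets(OffsetsInitializer.latest())
                .build();
    }
}
{code}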



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34399) Release Testing: Verify FLINK-33644 Make QueryOperations SQL serializable

2024-02-06 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-34399:


 Summary: Release Testing: Verify FLINK-33644 Make QueryOperations 
SQL serializable
 Key: FLINK-34399
 URL: https://issues.apache.org/jira/browse/FLINK-34399
 Project: Flink
  Issue Type: Sub-task
Reporter: Dawid Wysakowicz


Test suggestions:
1. Write a few Table API programs.
2. Call Table.getQueryOperation#asSerializableString, manually verify the 
produced SQL query
3. Check the produced SQL query is runnable and produces the same results as 
the Table API program:


Table table = tEnv.from("a") ...

String sqlQuery = table.getQueryOperation().asSerializableString();

//verify the sqlQuery is runnable
tEnv.sqlQuery(sqlQuery).execute().collect();



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34399) Release Testing: Verify FLINK-33644 Make QueryOperations SQL serializable

2024-02-06 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-34399:
-
Description: 
Test suggestions:
1. Write a few Table API programs.
2. Call Table.getQueryOperation#asSerializableString, manually verify the 
produced SQL query
3. Check the produced SQL query is runnable and produces the same results as 
the Table API program:

{code}
Table table = tEnv.from("a") ...

String sqlQuery = table.getQueryOperation().asSerializableString();

//verify the sqlQuery is runnable
tEnv.sqlQuery(sqlQuery).execute().collect();
{code}

  was:
Test suggestions:
1. Write a few Table API programs.
2. Call Table.getQueryOperation#asSerializableString, manually verify the 
produced SQL query
3. Check the produced SQL query is runnable and produces the same results as 
the Table API program:


Table table = tEnv.from("a") ...

String sqlQuery = table.getQueryOperation().asSerializableString();

//verify the sqlQuery is runnable
tEnv.sqlQuery(sqlQuery).execute().collect();


> Release Testing: Verify FLINK-33644 Make QueryOperations SQL serializable
> -
>
> Key: FLINK-34399
> URL: https://issues.apache.org/jira/browse/FLINK-34399
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Dawid Wysakowicz
>Priority: Major
>
> Test suggestions:
> 1. Write a few Table API programs.
> 2. Call Table.getQueryOperation#asSerializableString, manually verify the 
> produced SQL query
> 3. Check the produced SQL query is runnable and produces the same results as 
> the Table API program:
> {code}
> Table table = tEnv.from("a") ...
> String sqlQuery = table.getQueryOperation().asSerializableString();
> //verify the sqlQuery is runnable
> tEnv.sqlQuery(sqlQuery).execute().collect();
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34302) Release Testing Instructions: Verify FLINK-33644 Make QueryOperations SQL serializable

2024-02-06 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17814892#comment-17814892
 ] 

Dawid Wysakowicz commented on FLINK-34302:
--

Test suggestions:
1. Write a few Table API programs.
2. Call {{Table.getQueryOperation#asSerializableString}}, manually verify the 
produced SQL query
3. Check the produced SQL query is runnable and produces the same results as 
the Table API program:

{code}

Table table = tEnv.from("a") ...

String sqlQuery = table.getQueryOperation().asSerializableString();

//verify the sqlQuery is runnable
tEnv.sqlQuery(sqlQuery).execute().collect();
{code}

> Release Testing Instructions: Verify FLINK-33644 Make QueryOperations SQL 
> serializable
> --
>
> Key: FLINK-34302
> URL: https://issues.apache.org/jira/browse/FLINK-34302
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.19.0
>Reporter: lincoln lee
>Assignee: Dawid Wysakowicz
>Priority: Blocker
> Fix For: 1.19.0
>
> Attachments: screenshot-1.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-33771] Extract method to estimate task slots and add dedicated test [flink-kubernetes-operator]

2024-02-06 Thread via GitHub


mxm opened a new pull request, #773:
URL: https://github.com/apache/flink-kubernetes-operator/pull/773

   While working on #762, I realized this change can be extracted and merged 
independently.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34324) s3_setup is called in test_file_sink.sh even if the common_s3.sh is not sourced

2024-02-06 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17814891#comment-17814891
 ] 

Matthias Pohl commented on FLINK-34324:
---

I reverted the fix in release-1.18 with 
[a4dd5854|https://github.com/apache/flink/commit/a4dd58545d59b59089d9321a743d6c98a7c8e855]
 because it introduced [a test 
failure|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57335&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=0f3adb59-eefa-51c6-2858-3654d9e0749d&l=3551].
 I'm going to get back to this one tomorrow.

> s3_setup is called in test_file_sink.sh even if the common_s3.sh is not 
> sourced
> ---
>
> Key: FLINK-34324
> URL: https://issues.apache.org/jira/browse/FLINK-34324
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hadoop Compatibility, Tests
>Affects Versions: 1.17.2, 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> See example CI run from the FLINK-34150 PR:
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56570&view=logs&j=af184cdd-c6d8-5084-0b69-7e9c67b35f7a&t=0f3adb59-eefa-51c6-2858-3654d9e0749d&l=3191
> {code}
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/test_file_sink.sh: 
> line 38: s3_setup: command not found
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-21949][table] Support ARRAY_AGG aggregate function [flink]

2024-02-06 Thread via GitHub


dawidwys commented on PR #23411:
URL: https://github.com/apache/flink/pull/23411#issuecomment-1930284532

   Thanks for the update @Jiabao-Sun The implementation looks good now. I want 
to go through the tests again, but I need a bit more time. I hope this is fine, 
cause anyway we need to wait for a branch cut.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


