[GitHub] [flink] wangzzu commented on a diff in pull request #23233: [FLINK-32847][flink-runtime][JUnit5 Migration] Module: The operators package of flink-runtime

2023-08-22 Thread via GitHub


wangzzu commented on code in PR #23233:
URL: https://github.com/apache/flink/pull/23233#discussion_r1302499459


##
flink-runtime/src/test/java/org/apache/flink/runtime/operators/CrossTaskTest.java:
##
@@ -244,19 +249,13 @@ public void testFailingStreamCrossTask() {
 
 final CrossDriver testTask = new CrossDriver<>();

-try {
-    testDriver(testTask, MockFailingCrossStub.class);
-    Assert.fail("Exception not forwarded.");
-} catch (ExpectedTestException etex) {
-    // good!
-} catch (Exception e) {
-    e.printStackTrace();
-    Assert.fail("Test failed due to an exception.");
-}
+assertThatThrownBy(() -> testDriver(testTask, MockFailingCrossStub.class))
+        .withFailMessage("Exception not forwarded.")

Review Comment:
   same as above, you can check it globally



##
flink-runtime/src/test/java/org/apache/flink/runtime/operators/CrossTaskTest.java:
##
@@ -155,19 +162,13 @@ public void testFailingBlockCrossTask2() {
 
 final CrossDriver testTask = new CrossDriver<>();

-try {
-    testDriver(testTask, MockFailingCrossStub.class);
-    Assert.fail("Exception not forwarded.");
-} catch (ExpectedTestException etex) {
-    // good!
-} catch (Exception e) {
-    e.printStackTrace();
-    Assert.fail("Test failed due to an exception.");
-}
+assertThatThrownBy(() -> testDriver(testTask, MockFailingCrossStub.class))
+        .withFailMessage("Exception not forwarded.")

Review Comment:
   I think this message can be removed.
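
   A sketch of how the migrated assertion might read once the message is dropped; asserting the expected exception type directly lets AssertJ report any unexpected throwable on its own (illustrative only, reusing the test's existing names):

   ```java
   // One chained assertion replaces the old try/catch/Assert.fail pattern.
   assertThatThrownBy(() -> testDriver(testTask, MockFailingCrossStub.class))
           .isInstanceOf(ExpectedTestException.class);
   ```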



##
flink-runtime/src/test/java/org/apache/flink/runtime/operators/JoinTaskTest.java:
##
@@ -414,19 +415,13 @@ public void testFailingMatchTask() {
 addInput(new UniformRecordGenerator(keyCnt1, valCnt1, true));
 addInput(new UniformRecordGenerator(keyCnt2, valCnt2, true));
 
-try {
-    testDriver(testTask, MockFailingMatchStub.class);
-    Assert.fail("Driver did not forward Exception.");
-} catch (ExpectedTestException e) {
-    // good!
-} catch (Exception e) {
-    e.printStackTrace();
-    Assert.fail("The test caused an exception.");
-}
+assertThatThrownBy(() -> testDriver(testTask, MockFailingMatchStub.class))
+        .withFailMessage("Driver did not forward Exception.")

Review Comment:
   same as above



##
flink-runtime/src/test/java/org/apache/flink/runtime/operators/hash/MutableHashTableTestBase.java:
##
@@ -122,14 +122,16 @@ public void testDifferentProbers() {
 AbstractHashTableProber prober2 =
         table.getProber(intPairComparator, pairComparator);

-assertFalse(prober1 == prober2);
+assertThat(prober1).isNotEqualTo(prober2);

 table.close(); // (This also tests calling close without calling open first.)
-assertEquals("Memory lost", NUM_MEM_PAGES, table.getFreeMemory().size());
+assertThat(table.getFreeMemory().size())
+        .withFailMessage("Memory lost")
+        .isEqualTo(NUM_MEM_PAGES);

Review Comment:
   `hasSize()`
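
   For reference, a minimal sketch of the suggested form, asserting on the collection itself so `hasSize()` replaces the manual `size()` comparison:

   ```java
   assertThat(table.getFreeMemory())
           .withFailMessage("Memory lost")
           .hasSize(NUM_MEM_PAGES);
   ```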



##
flink-runtime/src/test/java/org/apache/flink/runtime/operators/util/BitSetTest.java:
##
@@ -19,17 +19,20 @@
 
 import org.apache.flink.core.memory.MemorySegment;
 import org.apache.flink.core.memory.MemorySegmentFactory;
+import org.apache.flink.testutils.junit.extensions.parameterized.ParameterizedTestExtension;
+import org.apache.flink.testutils.junit.extensions.parameterized.Parameters;
 
-import org.junit.Before;
-import org.junit.Test;
-import org.junit.runner.RunWith;
-import org.junit.runners.Parameterized;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.TestTemplate;
+import org.junit.jupiter.api.extension.ExtendWith;
 
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
+import java.util.Arrays;
+import java.util.List;
 
-@RunWith(Parameterized.class)
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
+
+@ExtendWith(ParameterizedTestExtension.class)
 public class BitSetTest {

Review Comment:
   `public` is not necessary
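
   A sketch of the suggested visibility change; JUnit 5 discovers test classes and methods reflectively, so both can be package-private (`testBitSet` is a placeholder method name):

   ```java
   @ExtendWith(ParameterizedTestExtension.class)
   class BitSetTest {

       @TestTemplate
       void testBitSet() {
           // test body unchanged
       }
   }
   ```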



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-32941) Table API Bridge `toDataStream(targetDataType)` function not working correctly for Java List

2023-08-22 Thread Tan Kim (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tan Kim updated FLINK-32941:

Priority: Critical  (was: Major)

> Table API Bridge `toDataStream(targetDataType)` function not working 
> correctly for Java List
> 
>
> Key: FLINK-32941
> URL: https://issues.apache.org/jira/browse/FLINK-32941
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Tan Kim
>Priority: Critical
>  Labels: bridge
>
> When the code below is executed, the first element of the list is repeatedly
> assigned to every position of the List variable in MyPojo.
> {code:java}
> case class Item(
>   name: String
> )
> case class MyPojo(
>   @DataTypeHist("RAW") items: java.util.List[Item]
> )
> ...
> tableEnv
>   .sqlQuery("select items from table")
>   .toDataStream(DataTypes.of(classOf[MyPoJo])) {code}
>  
> For example, if you have the following list coming in as input,
> ["a","b","c"]
> The value actually stored in MyPojo's list variable is
> ["a","a","a"] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] flinkbot commented on pull request #23265: [FLINK-32853][runtime][JUnit5 Migration] The security, taskmanager an…

2023-08-22 Thread via GitHub


flinkbot commented on PR #23265:
URL: https://github.com/apache/flink/pull/23265#issuecomment-1689269938

   
   ## CI report:
   
   * 461f13bcd3383e07cc1f2af915338d76a77a46c0 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-32853) [JUnit5 Migration] The security, taskmanager and source packages of flink-runtime module

2023-08-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-32853:
---
Labels: pull-request-available  (was: )

> [JUnit5 Migration] The security, taskmanager and source packages of 
> flink-runtime module
> 
>
> Key: FLINK-32853
> URL: https://issues.apache.org/jira/browse/FLINK-32853
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Rui Fan
>Assignee: Yangyang ZHANG
>Priority: Minor
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] zhangyy91 opened a new pull request, #23265: [FLINK-32853][runtime][JUnit5 Migration] The security, taskmanager an…

2023-08-22 Thread via GitHub


zhangyy91 opened a new pull request, #23265:
URL: https://github.com/apache/flink/pull/23265

   
   
   ## What is the purpose of the change
   Migrate security, taskmanager and source packages of flink-runtime module to 
JUnit5
   
   ## Brief change log
   Migrate security, taskmanager and source packages of flink-runtime module to 
JUnit5
   
   ## Verifying this change
   This change is already covered by existing tests.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-32804) Release Testing: Verify FLIP-304: Pluggable failure handling for Apache Flink

2023-08-22 Thread Matt Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17757795#comment-17757795
 ] 

Matt Wang commented on FLINK-32804:
---

[~renqs] Can I take this? I'm interested in it.

> Release Testing: Verify FLIP-304: Pluggable failure handling for Apache Flink
> -
>
> Key: FLINK-32804
> URL: https://issues.apache.org/jira/browse/FLINK-32804
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Priority: Major
> Fix For: 1.18.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32803) Release Testing: Verify FLINK-32165 - Improve observability of fine-grained resource management

2023-08-22 Thread Weihua Hu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17757793#comment-17757793
 ] 

Weihua Hu commented on FLINK-32803:
---

I would like to take this issue. [~chesnay] [~renqs] 

> Release Testing: Verify FLINK-32165 - Improve observability of fine-grained 
> resource management
> ---
>
> Key: FLINK-32803
> URL: https://issues.apache.org/jira/browse/FLINK-32803
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Priority: Major
> Fix For: 1.18.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32799) Release Testing: Verify FLINK-26603 -Decouple Hive with Flink planner

2023-08-22 Thread Hang Ruan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17757791#comment-17757791
 ] 

Hang Ruan commented on FLINK-32799:
---

I have also tested one insert and select. It looks good.

> Release Testing: Verify FLINK-26603 -Decouple Hive with Flink planner
> -
>
> Key: FLINK-32799
> URL: https://issues.apache.org/jira/browse/FLINK-32799
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Assignee: Hang Ruan
>Priority: Major
> Fix For: 1.18.0
>
> Attachments: hive.png, lib.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] flinkbot commented on pull request #23264: Refactor @Test(expected) with assertThrows

2023-08-22 Thread via GitHub


flinkbot commented on PR #23264:
URL: https://github.com/apache/flink/pull/23264#issuecomment-1689208523

   
   ## CI report:
   
   * 998a2e1a6bb835bceffc405e82acc815be87b3d9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-32731) SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException

2023-08-22 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17757780#comment-17757780
 ] 

Shengkai Fang commented on FLINK-32731:
---

Thanks for sharing. I think we should add a retry mechanism to restart the 
container when the namenode fails. I will open a PR to fix this soon.
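
A minimal sketch of what such a retry mechanism could look like; every name here (runWithRetry, the recover hook) is a hypothetical placeholder, not the actual fix:
{code:java}
static void runWithRetry(int maxAttempts, Runnable action, Runnable recover) {
    for (int attempt = 1; ; attempt++) {
        try {
            action.run();
            return;
        } catch (RuntimeException e) {
            if (attempt >= maxAttempts) {
                throw e; // give up after the final attempt
            }
            recover.run(); // e.g. restart the namenode container before retrying
        }
    }
}
{code}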

> SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException
> -
>
> Key: FLINK-32731
> URL: https://issues.apache.org/jira/browse/FLINK-32731
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Gateway
>Affects Versions: 1.18.0
>Reporter: Matthias Pohl
>Assignee: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=51891=logs=fb37c667-81b7-5c22-dd91-846535e99a97=011e961e-597c-5c96-04fe-7941c8b83f23=10987
> {code}
> Aug 02 02:14:04 02:14:04.957 [ERROR] Tests run: 3, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 198.658 s <<< FAILURE! - in 
> org.apache.flink.table.gateway.SqlGatewayE2ECase
> Aug 02 02:14:04 02:14:04.966 [ERROR] 
> org.apache.flink.table.gateway.SqlGatewayE2ECase.testHiveServer2ExecuteStatement
>   Time elapsed: 31.437 s  <<< ERROR!
> Aug 02 02:14:04 java.util.concurrent.ExecutionException: 
> Aug 02 02:14:04 java.sql.SQLException: 
> org.apache.flink.table.gateway.service.utils.SqlExecutionException: Failed to 
> execute the operation d440e6e7-0fed-49c9-933e-c7be5bbae50d.
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.processThrowable(OperationManager.java:414)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:267)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Aug 02 02:14:04   at java.lang.Thread.run(Thread.java:750)
> Aug 02 02:14:04 Caused by: org.apache.flink.table.api.TableException: Could 
> not execute CreateTable in path `hive`.`default`.`CsvTable`
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1289)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.createTable(CatalogManager.java:939)
> Aug 02 02:14:04   at 
> org.apache.flink.table.operations.ddl.CreateTableOperation.execute(CreateTableOperation.java:84)
> Aug 02 02:14:04   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1080)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.callOperation(OperationExecutor.java:570)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeOperation(OperationExecutor.java:458)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:210)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:212)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
> Aug 02 02:14:04   ... 7 more
> Aug 02 02:14:04 Caused by: 
> org.apache.flink.table.catalog.exceptions.CatalogException: Failed to create 
> table default.CsvTable
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.hive.HiveCatalog.createTable(HiveCatalog.java:547)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.lambda$createTable$16(CatalogManager.java:950)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1283)
> Aug 02 02:14:04   ... 16 more
> Aug 02 02:14:04 Caused by: MetaException(message:Got exception: 
> java.net.ConnectException Call From 70d5c7217fe8/172.17.0.2 to 
> hadoop-master:9000 failed on connection exception: java.net.ConnectException: 
> Connection refused; For more details see:  
> 

[GitHub] [flink] 1996fanrui commented on a diff in pull request #23219: [FLINK-20681][yarn] Support specifying the hdfs path when ship archives or files

2023-08-22 Thread via GitHub


1996fanrui commented on code in PR #23219:
URL: https://github.com/apache/flink/pull/23219#discussion_r1302400685


##
flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java:
##
@@ -156,9 +158,9 @@ public class YarnClusterDescriptor implements ClusterDescriptor {
 private final boolean sharedYarnClient;

 /** Lazily initialized list of files to ship. */
-private final List<File> shipFiles = new LinkedList<>();
+private final List<Path> shipFiles = new LinkedList<>();

-private final List<File> shipArchives = new LinkedList<>();
+private final List<Path> shipArchives = new LinkedList<>();

Review Comment:
   How about adding a comment describing that we have converted the option path strings to Paths with a scheme and an absolute path? It would be clearer for other developers, and make these fields easier to use.
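
   For example, the comment might read something like this (illustrative wording, assuming the normalization happens in `decodeFilesToShipToCluster` as in this PR):

   ```java
   /**
    * Files to ship to the YARN cluster. Entries have already been normalized to
    * absolute Paths with an explicit scheme (local file:// or HDFS), so they can
    * be used directly.
    */
   private final List<Path> shipFiles = new LinkedList<>();
   ```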



##
flink-yarn/src/main/java/org/apache/flink/yarn/configuration/YarnConfigOptions.java:
##
@@ -272,17 +272,25 @@ public class YarnConfigOptions {
 .noDefaultValue()
 .withDeprecatedKeys("yarn.ship-directories")
 .withDescription(
-"A semicolon-separated list of files and/or 
directories to be shipped to the YARN cluster.");
+"A semicolon-separated list of files and/or 
directories to be shipped to the YARN "
++ "cluster. These files/directories can 
come from the local path of flink client "
++ "or HDFS. For example, "

Review Comment:
   How about updating `from the local client and/or HDFS` to `the local path of 
flink client or HDFS` as well?



##
flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java:
##
@@ -202,16 +204,27 @@ public YarnClusterDescriptor(
 this.nodeLabel = flinkConfiguration.getString(YarnConfigOptions.NODE_LABEL);
 }
 
-private Optional<List<File>> decodeFilesToShipToCluster(
+private Optional<List<Path>> decodeFilesToShipToCluster(
         final Configuration configuration, final ConfigOption<List<String>> configOption) {
     checkNotNull(configuration);
     checkNotNull(configOption);

-    final List<File> files =
-            ConfigUtils.decodeListFromConfig(configuration, configOption, File::new);
+    List<Path> files = ConfigUtils.decodeListFromConfig(configuration, configOption, Path::new);
+    files = files.stream().map(this::enrichPathSchemaIfNeeded).collect(Collectors.toList());
     return files.isEmpty() ? Optional.empty() : Optional.of(files);
 }
 
+private Path enrichPathSchemaIfNeeded(Path path) {
+    if (isWithoutSchema(path)) {
+        return new Path(new File(path.toString()).toURI());

Review Comment:
   This class has a couple of `new Path(new File(pathStr).toURI())` calls to convert a path from a local path string to the hdfs `Path`; could we extract one method to do it?
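
   A sketch of the suggested extraction (the helper name is a placeholder):

   ```java
   /** Converts a local path string into a qualified Path with an explicit scheme. */
   private static Path localPathToQualifiedPath(String pathStr) {
       return new Path(new File(pathStr).toURI());
   }
   ```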



##
flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java:
##
@@ -202,16 +204,27 @@ public YarnClusterDescriptor(
 this.nodeLabel = flinkConfiguration.getString(YarnConfigOptions.NODE_LABEL);
 }
 
-private Optional<List<File>> decodeFilesToShipToCluster(
+private Optional<List<Path>> decodeFilesToShipToCluster(
         final Configuration configuration, final ConfigOption<List<String>> configOption) {
     checkNotNull(configuration);
     checkNotNull(configOption);

-    final List<File> files =
-            ConfigUtils.decodeListFromConfig(configuration, configOption, File::new);
+    List<Path> files = ConfigUtils.decodeListFromConfig(configuration, configOption, Path::new);
+    files = files.stream().map(this::enrichPathSchemaIfNeeded).collect(Collectors.toList());
     return files.isEmpty() ? Optional.empty() : Optional.of(files);
 }
 
+private Path enrichPathSchemaIfNeeded(Path path) {
+    if (isWithoutSchema(path)) {
+        return new Path(new File(path.toString()).toURI());
+    }
+    return path;
+}
+
+private boolean isWithoutSchema(Path path) {
+    return StringUtils.isNullOrWhitespaceOnly(path.toUri().getScheme());
+}

Review Comment:
   How about renaming `Path enrichPathSchemaIfNeeded(Path path)` to `Path createPathWithSchema(String path)`?
   
   If so, the `files = files.stream().map(this::enrichPathSchemaIfNeeded).collect(Collectors.toList());` line can be removed. We would just call `List<Path> files = ConfigUtils.decodeListFromConfig(configuration, configOption, this::createPathWithSchema);`.
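
   A sketch of that proposal, reusing the existing `isWithoutSchema` check (illustrative; the final shape is up to the author):

   ```java
   private Path createPathWithSchema(String path) {
       Path candidate = new Path(path);
       // Qualify scheme-less local paths; pass through paths that already have a scheme.
       return isWithoutSchema(candidate) ? new Path(new File(path).toURI()) : candidate;
   }

   // Call site, with the separate enrichment pass removed:
   List<Path> files =
           ConfigUtils.decodeListFromConfig(configuration, configOption, this::createPathWithSchema);
   ```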



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] masteryhx commented on pull request #17443: [FLINK-9465][Runtime/Checkpointing] Specify a separate savepoint timeout option via CLI and REST API

2023-08-22 Thread via GitHub


masteryhx commented on PR #17443:
URL: https://github.com/apache/flink/pull/17443#issuecomment-1689189718

   Hi, @zoltar9264 
   Just a kind ping: are you still working on this PR? Could you rebase onto the newest master branch and resolve the comments?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-15014) Refactor KeyedStateInputFormat to support multiple types of user functions

2023-08-22 Thread Hangxiang Yu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hangxiang Yu resolved FLINK-15014.
--
Resolution: Fixed

I saw the PR has been merged and the problem should have been resolved, so I 
marked this as resolved.

> Refactor KeyedStateInputFormat to support multiple types of user functions
> --
>
> Key: FLINK-15014
> URL: https://issues.apache.org/jira/browse/FLINK-15014
> Project: Flink
>  Issue Type: Improvement
>  Components: API / State Processor
>Affects Versions: 1.10.0
>Reporter: Seth Wiesman
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-19917) RocksDBInitTest.testTempLibFolderDeletedOnFail fails on Windows

2023-08-22 Thread Hangxiang Yu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hangxiang Yu closed FLINK-19917.

Resolution: Cannot Reproduce

I closed this as it has not been reproduced in more than a year; please reopen 
it if it reproduces.

>  RocksDBInitTest.testTempLibFolderDeletedOnFail fails on Windows
> 
>
> Key: FLINK-19917
> URL: https://issues.apache.org/jira/browse/FLINK-19917
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: Andrey Zagrebin
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> {code:java}
> java.lang.AssertionError: 
> Expected :0
> Actual   :2{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32785) Release Testing: Verify FLIP-292: Enhance COMPILED PLAN to support operator-level state TTL configuration

2023-08-22 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1775#comment-1775
 ] 

Jane Chan commented on FLINK-32785:
---

This ticket aims to verify FLINK-31791: Enhance COMPILED PLAN to support 
operator-level state TTL configuration.

More details about this feature and how to use it can be found in this 
[documentation|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/concepts/overview/#configure-operator-level-state-ttl].
 The verification steps are as follows.
h3. Part I: Functionality Verification

1. Start the standalone session cluster and sql client.

2. Execute the following DDL statements.
{code:sql}
CREATE TABLE `default_catalog`.`default_database`.`Orders` (
  `order_id` INT,
  `line_order_id` INT
) WITH (
  'connector' = 'datagen'
); 

CREATE TABLE `default_catalog`.`default_database`.`LineOrders` (
  `line_order_id` INT,
  `ship_mode` STRING
) WITH (
  'connector' = 'datagen'
);

CREATE TABLE `default_catalog`.`default_database`.`OrdersShipInfo` (
  `order_id` INT,
  `line_order_id` INT,
  `ship_mode` STRING ) WITH (
  'connector' = 'print'
);
{code}
 
3. Generate Compiled Plan
{code:sql}
COMPILE PLAN '/path/to/plan.json' FOR
INSERT INTO OrdersShipInfo 
SELECT a.order_id, a.line_order_id, b.ship_mode 
FROM Orders a JOIN LineOrders b ON a.line_order_id = b.line_order_id;
{code}
 

4. Verify JSON plan content
The generated JSON file should contain the following "state" JSON array for 
StreamJoin ExecNode.
{code:json}
{
"id" : 5,
"type" : "stream-exec-join_1",
"joinSpec" : {
  ...
},
"state" : [ {
  "index" : 0,
  "ttl" : "0 ms",
  "name" : "leftState"
}, {
  "index" : 1,
  "ttl" : "0 ms",
  "name" : "rightState"
} ],
"inputProperties": [...],
"outputType": ...,
"description": ...
}
{code}
h3. Part II: Compatibility Verification

Repeat the previously described steps using the flink-1.17 release, and then 
execute the generated plan using 1.18 via
{code:sql}
EXECUTE PLAN '/path/to/plan-generated-by-old-flink-version.json'
{code}
 

> Release Testing: Verify FLIP-292: Enhance COMPILED PLAN to support 
> operator-level state TTL configuration
> -
>
> Key: FLINK-32785
> URL: https://issues.apache.org/jira/browse/FLINK-32785
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Priority: Major
> Fix For: 1.18.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-20772) RocksDBValueState with TTL occurs NullPointerException when calling update(null) method

2023-08-22 Thread Hangxiang Yu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17757776#comment-17757776
 ] 

Hangxiang Yu commented on FLINK-20772:
--

Hi, [~dorbae] 

Sorry for the late reply.

I think you are right. We should make TtlValueState also follow the contract of 
ValueState#update.
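
A sketch of the fix this implies, assuming the surrounding TtlValueState structure from the linked source (the original delegate and the wrapWithTs helper); the actual patch may differ:
{code:java}
@Override
public void update(T value) throws IOException {
    // Follow the ValueState#update contract: updating with null clears the state
    // instead of wrapping null with a TTL timestamp (which caused the NPE).
    if (value == null) {
        clear();
        return;
    }
    accessCallback.run();
    original.update(wrapWithTs(value));
}
{code}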

> RocksDBValueState with TTL occurs NullPointerException when calling 
> update(null) method 
> 
>
> Key: FLINK-20772
> URL: https://issues.apache.org/jira/browse/FLINK-20772
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.11.2
> Environment: Flink version: 1.11.2
> Flink Cluster: Standalone cluster with 3 Job managers and Task managers on 
> CentOS 7
>Reporter: Seongbae Chang
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> beginner
>
> h2. Problem
>  * I use ValueState for my custom trigger and set TTL for these ValueState in 
> RocksDB backend environment.
>  * I found an error when I used this code. I know that 
> ValueState.update(null) works equally to ValueState.clear() in general. 
> Unfortunately, this error occurs after using TTL
> {code:java}
> // My Code
> ctx.getPartitionedState(batchTotalSizeStateDesc).update(null);
> {code}
>  * I tested this in Flink 1.11.2, but I think it would be a problem in upper 
> versions.
>  * Plus, I'm a beginner. So, if there is any problem in this discussion 
> issue, please give me advice about that. And I'll fix it! 
> {code:java}
> // Error Stacktrace
> Caused by: TimerException{org.apache.flink.util.FlinkRuntimeException: Error 
> while adding data to RocksDB}
>   ... 12 more
> Caused by: org.apache.flink.util.FlinkRuntimeException: Error while adding 
> data to RocksDB
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBValueState.update(RocksDBValueState.java:108)
>   at 
> org.apache.flink.runtime.state.ttl.TtlValueState.update(TtlValueState.java:50)
>   at .onProcessingTime(ActionBatchTimeTrigger.java:102)
>   at .onProcessingTime(ActionBatchTimeTrigger.java:29)
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.onProcessingTime(WindowOperator.java:902)
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.onProcessingTime(WindowOperator.java:498)
>   at 
> org.apache.flink.streaming.api.operators.InternalTimerServiceImpl.onProcessingTime(InternalTimerServiceImpl.java:260)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invokeProcessingTimeCallback(StreamTask.java:1220)
>   ... 11 more
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.api.common.typeutils.base.IntSerializer.serialize(IntSerializer.java:69)
>   at 
> org.apache.flink.api.common.typeutils.base.IntSerializer.serialize(IntSerializer.java:32)
>   at 
> org.apache.flink.api.common.typeutils.CompositeSerializer.serialize(CompositeSerializer.java:142)
>   at 
> org.apache.flink.contrib.streaming.state.AbstractRocksDBState.serializeValueInternal(AbstractRocksDBState.java:158)
>   at 
> org.apache.flink.contrib.streaming.state.AbstractRocksDBState.serializeValue(AbstractRocksDBState.java:178)
>   at 
> org.apache.flink.contrib.streaming.state.AbstractRocksDBState.serializeValue(AbstractRocksDBState.java:167)
>   at 
> org.apache.flink.contrib.streaming.state.RocksDBValueState.update(RocksDBValueState.java:106)
>   ... 18 more
> {code}
>  
> h2. Reason
>  * It relates to RocksDBValueState with TTLValueState
>  * In RocksDBValueState (as well as other types of ValueState), 
> *.update(null)* is supposed to be caught by the null check in the if-clause. 
> However, with TTL it skips the null check and then tries to serialize the null value.
> {code:java}
> // 
> https://github.com/apache/flink/blob/release-1.11/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBValueState.java#L96-L110
> @Override
> public void update(V value) { 
> if (value == null) { 
> clear(); 
> return; 
> }
>  
> try { 
> backend.db.put(columnFamily, writeOptions, 
> serializeCurrentKeyWithGroupAndNamespace(), serializeValue(value)); 
> } catch (Exception e) { 
> throw new FlinkRuntimeException("Error while adding data to RocksDB", 
> e);  
> }
> }{code}
>  * This is because TtlValueState wraps the value (null) with the 
> LastAccessTime and makes a new TtlValue object with the null value.
> {code:java}
> // 
> https://github.com/apache/flink/blob/release-1.11/flink-runtime/src/main/java/org/apache/flink/runtime/state/ttl/TtlValueState.java#L47-L51
> @Override
> public void update(T value) throws 

[GitHub] [flink] wangzzu commented on pull request #23260: [hotfix][docs] Update the parameter types of startRemoteMetricsRpcService in javadocs

2023-08-22 Thread via GitHub


wangzzu commented on PR #23260:
URL: https://github.com/apache/flink/pull/23260#issuecomment-1689181414

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-25814) AdaptiveSchedulerITCase.testStopWithSavepointFailOnFirstSavepointSucceedOnSecond failed due to stop-with-savepoint failed

2023-08-22 Thread Hangxiang Yu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hangxiang Yu closed FLINK-25814.

Resolution: Cannot Reproduce

Closed this as it has not been reproduced in more than a year. Please reopen it 
if it reproduces.

> AdaptiveSchedulerITCase.testStopWithSavepointFailOnFirstSavepointSucceedOnSecond
>  failed due to stop-with-savepoint failed
> -
>
> Key: FLINK-25814
> URL: https://issues.apache.org/jira/browse/FLINK-25814
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.13.5, 1.14.6
>Reporter: Yun Gao
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> test-stability
>
> {code:java}
> 2022-01-25T05:37:28.6339368Z Jan 25 05:37:28 [ERROR] 
> testStopWithSavepointFailOnFirstSavepointSucceedOnSecond(org.apache.flink.test.scheduling.AdaptiveSchedulerITCase)
>   Time elapsed: 300.269 s  <<< ERROR!
> 2022-01-25T05:37:28.6340216Z Jan 25 05:37:28 
> java.util.concurrent.ExecutionException: 
> org.apache.flink.util.FlinkException: Stop with savepoint operation could not 
> be completed.
> 2022-01-25T05:37:28.6342330Z Jan 25 05:37:28  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 2022-01-25T05:37:28.6343776Z Jan 25 05:37:28  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2022-01-25T05:37:28.6344983Z Jan 25 05:37:28  at 
> org.apache.flink.test.scheduling.AdaptiveSchedulerITCase.testStopWithSavepointFailOnFirstSavepointSucceedOnSecond(AdaptiveSchedulerITCase.java:231)
> 2022-01-25T05:37:28.6346165Z Jan 25 05:37:28  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2022-01-25T05:37:28.6347145Z Jan 25 05:37:28  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2022-01-25T05:37:28.6348207Z Jan 25 05:37:28  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2022-01-25T05:37:28.6349147Z Jan 25 05:37:28  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2022-01-25T05:37:28.6350068Z Jan 25 05:37:28  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2022-01-25T05:37:28.6351116Z Jan 25 05:37:28  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2022-01-25T05:37:28.6352132Z Jan 25 05:37:28  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2022-01-25T05:37:28.6353816Z Jan 25 05:37:28  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2022-01-25T05:37:28.6354863Z Jan 25 05:37:28  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2022-01-25T05:37:28.6355983Z Jan 25 05:37:28  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2022-01-25T05:37:28.6356958Z Jan 25 05:37:28  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2022-01-25T05:37:28.6357871Z Jan 25 05:37:28  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2022-01-25T05:37:28.6358799Z Jan 25 05:37:28  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2022-01-25T05:37:28.6359658Z Jan 25 05:37:28  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2022-01-25T05:37:28.6360506Z Jan 25 05:37:28  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2022-01-25T05:37:28.6361425Z Jan 25 05:37:28  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2022-01-25T05:37:28.6362486Z Jan 25 05:37:28  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2022-01-25T05:37:28.6364531Z Jan 25 05:37:28  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2022-01-25T05:37:28.6365709Z Jan 25 05:37:28  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2022-01-25T05:37:28.6366600Z Jan 25 05:37:28  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2022-01-25T05:37:28.6367488Z Jan 25 05:37:28  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2022-01-25T05:37:28.6368333Z Jan 25 05:37:28  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2022-01-25T05:37:28.6369236Z Jan 25 05:37:28  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2022-01-25T05:37:28.6370133Z Jan 25 05:37:28  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2022-01-25T05:37:28.6371056Z Jan 25 05:37:28  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2022-01-25T05:37:28.6371957Z Jan 25 05:37:28  

[jira] [Commented] (FLINK-26490) Adjust the MaxParallelism or remove the MaxParallelism check when unnecessary.

2023-08-22 Thread Hangxiang Yu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17757774#comment-17757774
 ] 

Hangxiang Yu commented on FLINK-26490:
--

Hi, [~liufangqi].

Just a kind ping: are you still working on this?

> Adjust the MaxParallelism or remove the MaxParallelism check when unnecessary.
> --
>
> Key: FLINK-26490
> URL: https://issues.apache.org/jira/browse/FLINK-26490
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: chenfengLiu
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
>
> Since Flink introduced key groups and MaxParallelism, Flink can rescale at 
> less cost.
> But when we want to raise the job parallelism above the MaxParallelism, it is 
> impossible, because there are many MaxParallelism checks that require the new 
> parallelism to not exceed MaxParallelism.
> Actually, for an operator that doesn't contain keyed state, there should be 
> no problem raising the parallelism above the MaxParallelism, because 
> only keyed state needs MaxParallelism and key groups.
> So should we remove this check, or auto-adjust the MaxParallelism, when we 
> restore an operator state that doesn't contain keyed state?
> It can make job restores from checkpoints easier.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-31678) NonHAQueryableStateFsBackendITCase.testAggregatingState: Query did no succeed

2023-08-22 Thread Hangxiang Yu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hangxiang Yu closed FLINK-31678.

Resolution: Cannot Reproduce

Closed this because:
 # it has not been reproduced in more than 5 months
 # Queryable State has been marked as deprecated in 1.18

> NonHAQueryableStateFsBackendITCase.testAggregatingState: Query did no succeed
> -
>
> Key: FLINK-31678
> URL: https://issues.apache.org/jira/browse/FLINK-31678
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Queryable State, Tests
>Affects Versions: 1.18.0
>Reporter: Matthias Pohl
>Assignee: Hangxiang Yu
>Priority: Major
>  Labels: stale-assigned, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=47748=logs=d89de3df-4600-5585-dadc-9bbc9a5e661c=be5a4b15-4b23-56b1-7582-795f58a645a2=40484
> {code}
> ava.lang.AssertionError: Did not succeed query
> Mar 31 01:24:32   at org.junit.Assert.fail(Assert.java:89)
> Mar 31 01:24:32   at org.junit.Assert.assertTrue(Assert.java:42)
> Mar 31 01:24:32   at 
> org.apache.flink.queryablestate.itcases.AbstractQueryableStateTestBase.testAggregatingState(AbstractQueryableStateTestBase.java:1094)
> [...]
> Mar 31 01:24:32   Suppressed: java.util.concurrent.TimeoutException
> Mar 31 01:24:32   at 
> java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1769)
> Mar 31 01:24:32   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
> Mar 31 01:24:32   at 
> org.apache.flink.queryablestate.itcases.AbstractQueryableStateTestBase$AutoCancellableJob.close(AbstractQueryableStateTestBase.java:1351)
> Mar 31 01:24:32   at 
> org.apache.flink.queryablestate.itcases.AbstractQueryableStateTestBase.testAggregatingState(AbstractQueryableStateTestBase.java:1096)
> Mar 31 01:24:32   ... 52 more
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-17755) Support side-output of expiring states with TTL.

2023-08-22 Thread Hangxiang Yu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17757772#comment-17757772
 ] 

Hangxiang Yu commented on FLINK-17755:
--

Hi, [~roeyshemtov] 

Could you also share the specific scenario for this?
Why do you want to get the expired states, and what do you want to do with them?

> Support side-output of expiring states with TTL.
> 
>
> Key: FLINK-17755
> URL: https://issues.apache.org/jira/browse/FLINK-17755
> Project: Flink
>  Issue Type: New Feature
>  Components: API / State Processor
>Reporter: Roey Shem Tov
>Priority: Minor
>  Labels: auto-deprioritized-major
>
> When we set a StateTTLConfig on a StateDescriptor, a record that has expired 
> is deleted from the StateBackend.
> I want to suggest a new feature: being able to get the expiring records as 
> side output, to process them rather than just delete them.
> For example, if we have a ListState that has TTL enabled, we could get the 
> expiring records in the list as a side output.
> What do you think?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32906) Release Testing: Verify FLINK-30025 Unified the max display column width for SqlClient and Table APi in both Streaming and Batch execMode

2023-08-22 Thread Yubin Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17757771#comment-17757771
 ] 

Yubin Li commented on FLINK-32906:
--

[~jingge] Thanks for clarifying that.

I found that both configurations also take effect on fixed-length types (e.g. 
char). Is that expected? I am very glad to contribute :)

!image-2023-08-23-10-39-26-602.png!

!image-2023-08-23-10-39-10-225.png!

in another session,

!image-2023-08-23-10-41-30-913.png!

!image-2023-08-23-10-41-42-513.png!

> Release Testing: Verify FLINK-30025 Unified the max display column width for 
> SqlClient and Table APi in both Streaming and Batch execMode
> -
>
> Key: FLINK-32906
> URL: https://issues.apache.org/jira/browse/FLINK-32906
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Jing Ge
>Assignee: Yubin Li
>Priority: Major
> Fix For: 1.18.0
>
> Attachments: image-2023-08-22-18-54-45-122.png, 
> image-2023-08-22-18-54-57-699.png, image-2023-08-22-18-58-11-241.png, 
> image-2023-08-22-18-58-20-509.png, image-2023-08-22-19-04-49-344.png, 
> image-2023-08-22-19-06-49-335.png, image-2023-08-23-10-39-10-225.png, 
> image-2023-08-23-10-39-26-602.png, image-2023-08-23-10-41-30-913.png, 
> image-2023-08-23-10-41-42-513.png
>
>
> more info could be found at 
> FLIP-279 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-279+Unified+the+max+display+column+width+for+SqlClient+and+Table+APi+in+both+Streaming+and+Batch+execMode]
>  
> Tests:
> Both configs could be set with new value and following behaviours are 
> expected : 
> | | |sql-client.display.max-column-width, default value is 
> 30|table.display.max-column-width, default value is 30|
> |sqlclient|Streaming|text longer than the value will be truncated and 
> replaced with “...”|Text longer than the value will be truncated and replaced 
> with “...”|
> |sqlclient|Batch|text longer than the value will be truncated and replaced 
> with “...”|Text longer than the value will be truncated and replaced with 
> “...”|
> |Table API|Streaming|No effect. 
> table.display.max-column-width with the default value 30 will be used|Text 
> longer than the value will be truncated and replaced with “...”|
> |Table API|Batch|No effect. 
> table.display.max-column-width with the default value 30 will be used|Text 
> longer than the value will be truncated and replaced with “...”|
>  
> Please pay attention that this task offers a backward compatible solution and 
> deprecated sql-client.display.max-column-width, which means once 
> sql-client.display.max-column-width is used, it falls into the old scenario 
> where table.display.max-column-width didn't exist, any changes of 
> table.display.max-column-width won't take effect. You should test it either 
> by only using table.display.max-column-width or by using 
> sql-client.display.max-column-width, but not both of them back and forth.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31970) "Key group 0 is not in KeyGroupRange" when using CheckpointedFunction

2023-08-22 Thread Hangxiang Yu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17757770#comment-17757770
 ] 

Hangxiang Yu commented on FLINK-31970:
--

Hi, [~YordanPavlov] 

Just a kind ping.

I think [~pnowojski]'s analysis is right; has this been resolved after 
updating your code?

> "Key group 0 is not in KeyGroupRange" when using CheckpointedFunction
> -
>
> Key: FLINK-31970
> URL: https://issues.apache.org/jira/browse/FLINK-31970
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.17.0
>Reporter: Yordan Pavlov
>Priority: Major
> Attachments: fill-topic.sh, main.scala
>
>
> I am experiencing a problem where the following exception would be thrown on 
> Flink stop (stop with savepoint):
>  
> {code:java}
> org.apache.flink.util.SerializedThrowable: 
> java.lang.IllegalArgumentException: Key group 0 is not in 
> KeyGroupRange{startKeyGroup=86, endKeyGroup=127}.{code}
>  
> In fact, I do not have a non-deterministic keyBy() operator; I use 
> {code:java}
> .keyBy(_ => 1){code}
> I believe the problem is related to using RocksDB state along with a 
> {code:java}
> CheckpointedFunction{code}
> In my test program I have commented out a reduction of the parallelism which 
> would make the problem go away. I am attaching a standalone program which 
> presents the problem and also a script which generates the input data. For 
> clarity I would paste here the essence of the job:
>  
>  
> {code:scala}
> env.fromSource(kafkaSource, watermarkStrategy, "KafkaSource")
> .setParallelism(3)
> .keyBy(_ => 1)
> .window(TumblingEventTimeWindows.of(Time.of(1, TimeUnit.MILLISECONDS)))
> .apply(new TestWindow())
> /* .setParallelism(1) this would prevent the problem */
> .uid("window tester")
> .name("window tester")
> .print()
> class TestWindow() extends WindowFunction[(Long, Int), Long, Int, TimeWindow] 
> with CheckpointedFunction {
>   var state: ValueState[Long] = _
>   var count = 0
>   override def snapshotState(functionSnapshotContext: 
> FunctionSnapshotContext): Unit = {
> state.update(count)
>   }
>   override def initializeState(context: FunctionInitializationContext): Unit 
> = {
> val storeDescriptor = new 
> ValueStateDescriptor[Long]("state-xrp-dex-pricer", 
> createTypeInformation[Long])
> state = context.getKeyedStateStore.getState(storeDescriptor)
>   }
>   override def apply(key: Int, window: TimeWindow, input: Iterable[(Long, 
> Int)], out: Collector[Long]): Unit = {
> count += input.size
> out.collect(count)
>   }
> }{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-6912) Consider changing the RichFunction#open method signature to take no arguments.

2023-08-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-6912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-6912:
--
Labels: pull-request-available  (was: )

> Consider changing the RichFunction#open method signature to take no arguments.
> --
>
> Key: FLINK-6912
> URL: https://issues.apache.org/jira/browse/FLINK-6912
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataSet, API / DataStream
>Affects Versions: 1.3.0
>Reporter: Mikhail Pryakhin
>Priority: Not a Priority
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>
> RichFunction#open(org.apache.flink.configuration.Configuration) method takes 
> a Configuration instance as an argument which is always [passed as a new 
> instance|https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/AbstractUdfStreamOperator.java#L111]
>  bearing no configuration parameters. As I figured out it is a remnant of the 
> past since that method signature originates from the Record API. Consider 
> changing the RichFunction#open method signature to take no arguments as well 
> as actualizing java docs.
> You can find the complete discussion 
> [here|http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/RichMapFunction-setup-method-td13696.html]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] WencongLiu opened a new pull request, #23058: [FLINK-6912] Remove parameter in RichFunction#open

2023-08-22 Thread via GitHub


WencongLiu opened a new pull request, #23058:
URL: https://github.com/apache/flink/pull/23058

   ## What is the purpose of the change
   
   Remove parameter in RichFunction#open.
   
   
   ## Brief change log
   
 - Add a new class OpenContext
 - Add a new method RichFunction#open(OpenContext openContext) (see the sketch below)
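   
   A rough sketch of the proposed API shape, trimmed to the relevant method (names follow the change log above; the exact interfaces in the PR may differ):
   
   ```java
   /** Placeholder context passed to open(); empty for now, extensible later. */
   public interface OpenContext {}
   
   public interface RichFunction extends Function {
       /** Replaces open(Configuration), whose argument was always an empty instance. */
       void open(OpenContext openContext) throws Exception;
   }
   ```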
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] WencongLiu closed pull request #23058: [FLINK-6912] Remove parameter in RichFunction#open

2023-08-22 Thread via GitHub


WencongLiu closed pull request #23058: [FLINK-6912] Remove parameter in 
RichFunction#open
URL: https://github.com/apache/flink/pull/23058


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-32906) Release Testing: Verify FLINK-30025 Unified the max display column width for SqlClient and Table APi in both Streaming and Batch execMode

2023-08-22 Thread Yubin Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yubin Li updated FLINK-32906:
-
Attachment: image-2023-08-23-10-41-42-513.png

> Release Testing: Verify FLINK-30025 Unified the max display column width for 
> SqlClient and Table APi in both Streaming and Batch execMode
> -
>
> Key: FLINK-32906
> URL: https://issues.apache.org/jira/browse/FLINK-32906
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Jing Ge
>Assignee: Yubin Li
>Priority: Major
> Fix For: 1.18.0
>
> Attachments: image-2023-08-22-18-54-45-122.png, 
> image-2023-08-22-18-54-57-699.png, image-2023-08-22-18-58-11-241.png, 
> image-2023-08-22-18-58-20-509.png, image-2023-08-22-19-04-49-344.png, 
> image-2023-08-22-19-06-49-335.png, image-2023-08-23-10-39-10-225.png, 
> image-2023-08-23-10-39-26-602.png, image-2023-08-23-10-41-30-913.png, 
> image-2023-08-23-10-41-42-513.png
>
>
> more info could be found at 
> FLIP-279 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-279+Unified+the+max+display+column+width+for+SqlClient+and+Table+APi+in+both+Streaming+and+Batch+execMode]
>  
> Tests:
> Both configs could be set with new value and following behaviours are 
> expected : 
> | | |sql-client.display.max-column-width, default value is 
> 30|table.display.max-column-width, default value is 30|
> |sqlclient|Streaming|text longer than the value will be truncated and 
> replaced with “...”|Text longer than the value will be truncated and replaced 
> with “...”|
> |sqlclient|Batch|text longer than the value will be truncated and replaced 
> with “...”|Text longer than the value will be truncated and replaced with 
> “...”|
> |Table API|Streaming|No effect. 
> table.display.max-column-width with the default value 30 will be used|Text 
> longer than the value will be truncated and replaced with “...”|
> |Table API|Batch|No effect. 
> table.display.max-column-width with the default value 30 will be used|Text 
> longer than the value will be truncated and replaced with “...”|
>  
> Please pay attention that this task offers a backward compatible solution and 
> deprecated sql-client.display.max-column-width, which means once 
> sql-client.display.max-column-width is used, it falls into the old scenario 
> where table.display.max-column-width didn't exist, any changes of 
> table.display.max-column-width won't take effect. You should test it either 
> by only using table.display.max-column-width or by using 
> sql-client.display.max-column-width, but not both of them back and forth.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32906) Release Testing: Verify FLINK-30025 Unified the max display column width for SqlClient and Table APi in both Streaming and Batch execMode

2023-08-22 Thread Yubin Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yubin Li updated FLINK-32906:
-
Attachment: image-2023-08-23-10-41-30-913.png

> Release Testing: Verify FLINK-30025 Unified the max display column width for 
> SqlClient and Table APi in both Streaming and Batch execMode
> -
>
> Key: FLINK-32906
> URL: https://issues.apache.org/jira/browse/FLINK-32906
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Jing Ge
>Assignee: Yubin Li
>Priority: Major
> Fix For: 1.18.0
>
> Attachments: image-2023-08-22-18-54-45-122.png, 
> image-2023-08-22-18-54-57-699.png, image-2023-08-22-18-58-11-241.png, 
> image-2023-08-22-18-58-20-509.png, image-2023-08-22-19-04-49-344.png, 
> image-2023-08-22-19-06-49-335.png, image-2023-08-23-10-39-10-225.png, 
> image-2023-08-23-10-39-26-602.png, image-2023-08-23-10-41-30-913.png, 
> image-2023-08-23-10-41-42-513.png
>
>
> more info could be found at 
> FLIP-279 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-279+Unified+the+max+display+column+width+for+SqlClient+and+Table+APi+in+both+Streaming+and+Batch+execMode]
>  
> Tests:
> Both configs can be set to new values, and the following behaviours are 
> expected:
> | | |sql-client.display.max-column-width, default value is 30|table.display.max-column-width, default value is 30|
> |SQL Client|Streaming|Text longer than the value is truncated and replaced with “...”|Text longer than the value is truncated and replaced with “...”|
> |SQL Client|Batch|Text longer than the value is truncated and replaced with “...”|Text longer than the value is truncated and replaced with “...”|
> |Table API|Streaming|No effect; table.display.max-column-width with the default value 30 is used|Text longer than the value is truncated and replaced with “...”|
> |Table API|Batch|No effect; table.display.max-column-width with the default value 30 is used|Text longer than the value is truncated and replaced with “...”|
>  
> Please note that this task provides a backward compatible solution and 
> deprecates sql-client.display.max-column-width: once 
> sql-client.display.max-column-width is set, the client falls back to the old 
> behaviour in which table.display.max-column-width did not exist, so changes 
> to table.display.max-column-width take no effect. Test either with 
> table.display.max-column-width alone or with 
> sql-client.display.max-column-width alone, but do not switch between the two 
> back and forth.
>  
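For testers, a minimal sketch of the Table API scenario above (the class name, 
query and literal value are illustrative only; just the option key matters):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Minimal sketch: only table.display.max-column-width is set, so a printed
// column longer than 40 characters should be truncated and end in "...".
public class MaxColumnWidthDemo {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inBatchMode().build());
        tEnv.getConfig().getConfiguration().setInteger("table.display.max-column-width", 40);
        tEnv.executeSql(
                        "SELECT 'a deliberately long string value used to exceed forty characters' AS c")
                .print();
    }
}
{code}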



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31053) Example Repair the log output format of the CheckpointCoordinator

2023-08-22 Thread Hangxiang Yu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17757768#comment-17757768
 ] 

Hangxiang Yu commented on FLINK-31053:
--

Hi, [~xzw0223] 

I saw you closed your PR. Are you still working on this?

I think it's better to use failure.getMessage() to print. WDYT?
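
For context, the suggestion amounts to something like this (a sketch only; the 
logger, message format and variable names are placeholders, not the actual 
CheckpointCoordinator code):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class CheckpointFailureLogging {
    private static final Logger LOG = LoggerFactory.getLogger(CheckpointFailureLogging.class);

    // Print failure.getMessage() instead of the whole Throwable so the log
    // line keeps its intended single-line format. checkpointId and jobId are
    // placeholders for whatever the surrounding code has in scope.
    static void logFailure(long checkpointId, String jobId, Throwable failure) {
        LOG.warn("Checkpoint {} of job {} failed: {}", checkpointId, jobId, failure.getMessage());
    }
}
{code}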

> Example Repair the log output format of the CheckpointCoordinator
> -
>
> Key: FLINK-31053
> URL: https://issues.apache.org/jira/browse/FLINK-31053
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Affects Versions: 1.16.1
>Reporter: xuzhiwen
>Priority: Minor
>  Labels: pull-request-available
> Attachments: image-2023-02-14-13-38-32-967.png
>
>
> !image-2023-02-14-13-38-32-967.png|width=708,height=146!
> The log output format is incorrect.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32906) Release Testing: Verify FLINK-30025 Unified the max display column width for SqlClient and Table APi in both Streaming and Batch execMode

2023-08-22 Thread Yubin Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yubin Li updated FLINK-32906:
-
Attachment: image-2023-08-23-10-39-10-225.png

> Release Testing: Verify FLINK-30025 Unified the max display column width for 
> SqlClient and Table APi in both Streaming and Batch execMode
> -
>
> Key: FLINK-32906
> URL: https://issues.apache.org/jira/browse/FLINK-32906
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Jing Ge
>Assignee: Yubin Li
>Priority: Major
> Fix For: 1.18.0
>
> Attachments: image-2023-08-22-18-54-45-122.png, 
> image-2023-08-22-18-54-57-699.png, image-2023-08-22-18-58-11-241.png, 
> image-2023-08-22-18-58-20-509.png, image-2023-08-22-19-04-49-344.png, 
> image-2023-08-22-19-06-49-335.png, image-2023-08-23-10-39-10-225.png, 
> image-2023-08-23-10-39-26-602.png
>
>
> more info could be found at 
> FLIP-279 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-279+Unified+the+max+display+column+width+for+SqlClient+and+Table+APi+in+both+Streaming+and+Batch+execMode]
>  
> Tests:
> Both configs can be set to new values, and the following behaviours are 
> expected:
> | | |sql-client.display.max-column-width, default value is 30|table.display.max-column-width, default value is 30|
> |SQL Client|Streaming|Text longer than the value is truncated and replaced with “...”|Text longer than the value is truncated and replaced with “...”|
> |SQL Client|Batch|Text longer than the value is truncated and replaced with “...”|Text longer than the value is truncated and replaced with “...”|
> |Table API|Streaming|No effect; table.display.max-column-width with the default value 30 is used|Text longer than the value is truncated and replaced with “...”|
> |Table API|Batch|No effect; table.display.max-column-width with the default value 30 is used|Text longer than the value is truncated and replaced with “...”|
>  
> Please note that this task provides a backward compatible solution and 
> deprecates sql-client.display.max-column-width: once 
> sql-client.display.max-column-width is set, the client falls back to the old 
> behaviour in which table.display.max-column-width did not exist, so changes 
> to table.display.max-column-width take no effect. Test either with 
> table.display.max-column-width alone or with 
> sql-client.display.max-column-width alone, but do not switch between the two 
> back and forth.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31685) Checkpoint job folder not deleted after job is cancelled

2023-08-22 Thread Hangxiang Yu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17757767#comment-17757767
 ] 

Hangxiang Yu commented on FLINK-31685:
--

I just linked many related tickets.

This is a valid request that many users want resolved.

I think we could introduce an option controlling whether the job-id directory 
is generated, and keep the behaviour compatible; a sketch of such an option 
follows below.

As for the job-id layout itself, I think it is still useful if users want to 
keep some historical checkpoints with NO_CLAIM mode.

[~tangyun]  WDYT?
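
A minimal sketch of what such an option could look like (the key name and 
default below are made up for illustration; nothing is agreed upon yet):

{code:java}
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

// Hypothetical option only -- the key and default are invented for this
// sketch. "true" keeps today's <checkpoint-dir>/<job-id>/ layout; "false"
// would skip creating the job-id subdirectory.
public class CheckpointDirOptions {
    public static final ConfigOption<Boolean> CREATE_JOB_ID_SUBDIR =
            ConfigOptions.key("state.checkpoints.create-job-id-subdir")
                    .booleanType()
                    .defaultValue(true)
                    .withDescription(
                            "Whether to create a job-id subdirectory under the checkpoint directory.");
}
{code}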

> Checkpoint job folder not deleted after job is cancelled
> 
>
> Key: FLINK-31685
> URL: https://issues.apache.org/jira/browse/FLINK-31685
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.16.1
>Reporter: Sergio Sainz
>Priority: Major
>
> When flink job is being checkpointed, and after the job is cancelled, the 
> checkpoint is indeed deleted (as per 
> {{{}execution.checkpointing.externalized-checkpoint-retention: 
> DELETE_ON_CANCELLATION{}}}), but the job-id folder still remains:
>  
> [sergio@flink-cluster-54f7fc7c6-k6km8 JobCheckpoints]$ ls
> 01eff17aa2910484b5aeb644bc531172  3a59309ef018541fc0c20856d0d89855  
> 78ff2344dd7ef89f9fbcc9789fc0cd79  a6fd7cec89c0af78c3353d4a46a7d273  
> dbc957868c08ebeb100d708bbd057593
> 04ff0abb9e860fc85f0e39d722367c3c  3e09166341615b1b4786efd6745a05d6  
> 79efc000aa29522f0a9598661f485f67  a8c42bfe158abd78ebcb4adb135de61f  
> dc8e04b02c9d8a1bc04b21d2c8f21f74
> 05f48019475de40230900230c63cfe89  3f9fb467c9af91ef41d527fe92f9b590  
> 7a6ad7407d7120eda635d71cd843916a  a8db748c1d329407405387ac82040be4  
> dfb2df1c25056e920d41c94b659dcdab
> 09d30bc0ff786994a6a3bb06abd3  455525b76a1c6826b6eaebd5649c5b6b  
> 7b1458424496baaf3d020e9fece525a4  aa2ef9587b2e9c123744e8940a66a287
> All folders in the above list, like {{01eff17aa2910484b5aeb644bc531172}}, 
> are empty.
>  
> *Expected behaviour:*
> The job-id folder should also be deleted.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32906) Release Testing: Verify FLINK-30025 Unified the max display column width for SqlClient and Table APi in both Streaming and Batch execMode

2023-08-22 Thread Yubin Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yubin Li updated FLINK-32906:
-
Attachment: image-2023-08-23-10-39-26-602.png

> Release Testing: Verify FLINK-30025 Unified the max display column width for 
> SqlClient and Table APi in both Streaming and Batch execMode
> -
>
> Key: FLINK-32906
> URL: https://issues.apache.org/jira/browse/FLINK-32906
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Jing Ge
>Assignee: Yubin Li
>Priority: Major
> Fix For: 1.18.0
>
> Attachments: image-2023-08-22-18-54-45-122.png, 
> image-2023-08-22-18-54-57-699.png, image-2023-08-22-18-58-11-241.png, 
> image-2023-08-22-18-58-20-509.png, image-2023-08-22-19-04-49-344.png, 
> image-2023-08-22-19-06-49-335.png, image-2023-08-23-10-39-10-225.png, 
> image-2023-08-23-10-39-26-602.png
>
>
> more info could be found at 
> FLIP-279 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-279+Unified+the+max+display+column+width+for+SqlClient+and+Table+APi+in+both+Streaming+and+Batch+execMode]
>  
> Tests:
> Both configs can be set to new values, and the following behaviours are 
> expected:
> | | |sql-client.display.max-column-width, default value is 30|table.display.max-column-width, default value is 30|
> |SQL Client|Streaming|Text longer than the value is truncated and replaced with “...”|Text longer than the value is truncated and replaced with “...”|
> |SQL Client|Batch|Text longer than the value is truncated and replaced with “...”|Text longer than the value is truncated and replaced with “...”|
> |Table API|Streaming|No effect; table.display.max-column-width with the default value 30 is used|Text longer than the value is truncated and replaced with “...”|
> |Table API|Batch|No effect; table.display.max-column-width with the default value 30 is used|Text longer than the value is truncated and replaced with “...”|
>  
> Please note that this task provides a backward compatible solution and 
> deprecates sql-client.display.max-column-width: once 
> sql-client.display.max-column-width is set, the client falls back to the old 
> behaviour in which table.display.max-column-width did not exist, so changes 
> to table.display.max-column-width take no effect. Test either with 
> table.display.max-column-width alone or with 
> sql-client.display.max-column-width alone, but do not switch between the two 
> back and forth.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31573) Nexmark performance drops in 1.17 compared to 1.13

2023-08-22 Thread Hangxiang Yu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17757765#comment-17757765
 ] 

Hangxiang Yu commented on FLINK-31573:
--

Hi, [~renqs] 

Just a kind ping: what's the status of this ticket?

I saw the Nexmark PR that resolves this has been merged.

Could we close this, or should we wait for the newest test results?

> Nexmark performance drops in 1.17 compared to 1.13
> --
>
> Key: FLINK-31573
> URL: https://issues.apache.org/jira/browse/FLINK-31573
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.17.0
>Reporter: Qingsheng Ren
>Priority: Critical
>
> The case was originally 
> [reported|https://lists.apache.org/thread/n4x2xgcr2kb8vcbby7c6gwb22n0thwhz] 
> in the voting thread of 1.17.0 RC3. 
> Compared to Flink 1.13, the performance of Nexmark in 1.17.0 RC3 drops ~8% in 
> query 18. Some details could be found in the [mailing 
> list|https://lists.apache.org/thread/n4x2xgcr2kb8vcbby7c6gwb22n0thwhz]. 
> A further investigation showed that with configuration 
> {{execution.checkpointing.checkpoints-after-tasks-finish.enabled}} set to 
> false, the performance of 1.17 is better than 1.16. 
> A fully comparison of Nexmark result between 1.16 and 1.17 is ongoing.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32909) The jobmanager.sh pass arguments failed

2023-08-22 Thread Alex Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17757763#comment-17757763
 ] 

Alex Wu commented on FLINK-32909:
-

Yes, I'd be happy to. I should prepare a PR next, right?

> The jobmanager.sh pass arguments failed
> ---
>
> Key: FLINK-32909
> URL: https://issues.apache.org/jira/browse/FLINK-32909
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Scripts
>Affects Versions: 1.16.2, 1.18.0, 1.17.1
>Reporter: Alex Wu
>Priority: Major
>
> I'm trying to use the jobmanager.sh script to create a jobmanager instance 
> manually, and I need to pass arguments to the script dynamically rather than 
> through flink-conf.yaml. But I found that this did not work when I commented 
> out all configurations in flink-conf.yaml. I typed a command like:
>  
> {code:java}
> ./bin/jobmanager.sh start -D jobmanager.memory.flink.size=1024m -D 
> jobmanager.rpc.address=xx.xx.xx.xx -D jobmanager.rpc.port=xxx -D 
> jobmanager.bind-host=0.0.0.0 -D rest.address=xx.xx.xx.xx -D rest.port=xxx -D 
> rest.bind-address=0.0.0.0{code}
> but I got some errors below:
>  
> {code:java}
> [ERROR] The execution result is empty.
> [ERROR] Could not get JVM parameters and dynamic configurations properly.
> [ERROR] Raw output from BashJavaUtils:
> WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will 
> impact performance.
> Exception in thread "main" 
> org.apache.flink.configuration.IllegalConfigurationException: JobManager 
> memory configuration failed: Either required fine-grained memory 
> (jobmanager.memory.heap.size), or Total Flink Memory size (Key: 
> 'jobmanager.memory.flink.size' , default: null (fallback keys: [])), or Total 
> Process Memory size (Key: 'jobmanager.memory.process.size' , default: null 
> (fallback keys: [])) need to be configured explicitly.
>         at 
> org.apache.flink.runtime.jobmanager.JobManagerProcessUtils.processSpecFromConfigWithNewOptionToInterpretLegacyHeap(JobManagerProcessUtils.java:78)
>         at 
> org.apache.flink.runtime.util.bash.BashJavaUtils.getJmResourceParams(BashJavaUtils.java:98)
>         at 
> org.apache.flink.runtime.util.bash.BashJavaUtils.runCommand(BashJavaUtils.java:69)
>         at 
> org.apache.flink.runtime.util.bash.BashJavaUtils.main(BashJavaUtils.java:56)
> Caused by: org.apache.flink.configuration.IllegalConfigurationException: 
> Either required fine-grained memory (jobmanager.memory.heap.size), or Total 
> Flink Memory size (Key: 'jobmanager.memory.flink.size' , default: null 
> (fallback keys: [])), or Total Process Memory size (Key: 
> 'jobmanager.memory.process.size' , default: null (fallback keys: [])) need to 
> be configured explicitly.
>         at 
> org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.failBecauseRequiredOptionsNotConfigured(ProcessMemoryUtils.java:129)
>         at 
> org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.memoryProcessSpecFromConfig(ProcessMemoryUtils.java:86)
>         at 
> org.apache.flink.runtime.jobmanager.JobManagerProcessUtils.processSpecFromConfig(JobManagerProcessUtils.java:83)
>         at 
> org.apache.flink.runtime.jobmanager.JobManagerProcessUtils.processSpecFromConfigWithNewOptionToInterpretLegacyHeap(JobM
>  {code}
> The error asks me to configure memory for the jobmanager instance explicitly, 
> but I had already passed the jobmanager.memory.flink.size parameter. So I 
> debugged the script and found a variable-name error in the jobmanager.sh 
> script at line 54:
>  
> {code:bash}
> parseJmArgsAndExportLogs "${ARGS[@]}"
> {code}
> The uppercase "${ARGS[@]}" is the wrong variable name in this context and 
> causes an empty string to be passed to the function. I changed it to 
> "${args[@]}" and it works fine.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-6485) Use buffering to avoid frequent memtable flushes for short intervals in RockdDB incremental checkpoints

2023-08-22 Thread Hangxiang Yu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-6485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17757764#comment-17757764
 ] 

Hangxiang Yu commented on FLINK-6485:
-

IIUC, ChangelogStateBackend ([Generalized incremental 
checkpoints|https://cwiki.apache.org/confluence/display/FLINK/FLIP-158%3A+Generalized+incremental+checkpoints])
could basically resolve this.

> Use buffering to avoid frequent memtable flushes for short intervals in 
> RockdDB incremental checkpoints
> ---
>
> Key: FLINK-6485
> URL: https://issues.apache.org/jira/browse/FLINK-6485
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Reporter: Stefan Richter
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> The current implementation of incremental checkpoints in RocksDB needs to 
> flush the memtable to disk prior to a checkpoint, and this generates an SST 
> file.
> What is required for fast checkpoint intervals is an alternative mechanism to 
> quickly determine a delta from the previous incremental checkpoint and avoid 
> this frequent flushing. This could be implemented through custom buffering 
> inside the backend, e.g. a changelog buffer that is maintained up to a 
> certain size. 
> The buffer's content becomes part of the private state in the incremental 
> snapshot, and the buffer is dropped (i) after each checkpoint or (ii) after 
> exceeding a certain size that justifies flushing and writing a new SST file.
> This mechanism should not be blocking, which we can achieve in the following 
> way:
> 1) We have a clear upper limit on the buffer size (e.g. 64 MB); once the 
> limit of diffs is reached, we can drop the buffer because we can assume 
> enough work was done to justify a new SST file.
> 2) We write the buffer to a local FS, so we can expect this to be reasonably 
> fast and not to suffer from the kind of blocking that we have in DFS 
> (technically, flushing the SST file can block too). Then, in the async part, 
> we can transfer the locally written buffer file to DFS.
> There might be other mechanisms in RocksDB that we could exploit for this, 
> such as the write-ahead log, but this could already be a good solution. A 
> minimal sketch of the buffering idea follows below.
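
A rough sketch of the buffering idea under the assumptions above (names and 
sizes are illustrative; this is not an actual RocksDB state backend API):

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.IOException;

// Illustrative only: a bounded change buffer that is snapshotted as private
// state while small, and dropped once enough work has accumulated to justify
// flushing a new SST file (point 1 above).
class ChangelogBuffer {
    private static final int MAX_BYTES = 64 * 1024 * 1024; // 64 MB upper limit

    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    /** Returns false once the limit is hit, i.e. the memtable should be flushed instead. */
    boolean append(byte[] change) throws IOException {
        if (buffer.size() + change.length > MAX_BYTES) {
            return false; // enough diffs accumulated; flush and start over
        }
        buffer.write(change);
        return true;
    }

    /** Snapshot the buffered diffs (written to local FS, then uploaded in the async part). */
    byte[] snapshot() {
        return buffer.toByteArray();
    }

    /** Dropped after each checkpoint (case i) or after a flush (case ii). */
    void reset() {
        buffer.reset();
    }
}
{code}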



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] lincoln-lil commented on a diff in pull request #23209: [FLINK-32824] Port Calcite's fix for the sql like operator

2023-08-22 Thread via GitHub


lincoln-lil commented on code in PR #23209:
URL: https://github.com/apache/flink/pull/23209#discussion_r1302375790


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/delegation/ParserImplTest.java:
##
@@ -114,6 +115,18 @@ public void testCompletionTest() {
 verifySqlCompletion("SELECT a fram b", 10, new String[] {"FETCH", 
"FROM"});
 }
 
+@Test
+public void testSqlLike() {

Review Comment:
Add a separate test class `SqlLikeUtilsTest` for this test because it's not 
related to the parser.



##
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/runtime/batch/sql/CalcITCase.scala:
##
@@ -416,6 +416,30 @@ class CalcITCase extends BatchTestBase {
 row(3, 2L, "Hello world"),
 row(4, 3L, "Hello world, how are you?")
   ))
+
+val rows = Seq(row(3, "H.llo"), row(3, "Hello"))
+val dataId = TestValuesTableFactory.registerData(rows)
+
+val ddl =
+  s"""
+ |CREATE TABLE MyTable (
+ |  a int,
+ |  c string
+ |) WITH (
+ |  'connector' = 'values',
+ |  'data-id' = '$dataId',
+ |  'bounded' = 'true'
+ |)
+ |""".stripMargin
+tEnv.executeSql(ddl)
+
+checkResult(
+  s"""
+ |SELECT c FROM MyTable
+ |  WHERE c LIKE 'H.llo'
+ |""".stripMargin,
+  Seq(row("H.llo"))
+)

Review Comment:
   Also add SQL cases to cover the `SIMILAR TO` & `NOT SIMILAR TO` syntax, e.g.:
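
   For illustration, the extra cases could look like this (a sketch only, 
assuming the MyTable source registered above and a TableEnvironment named 
tEnv):

```java
// Sketch: SIMILAR TO / NOT SIMILAR TO coverage against the MyTable source.
// '.' is a literal character in SIMILAR TO patterns, so the first query
// should return only the "H.llo" row and the second only the "Hello" row.
tEnv.executeSql("SELECT c FROM MyTable WHERE c SIMILAR TO 'H.llo'").print();
tEnv.executeSql("SELECT c FROM MyTable WHERE c NOT SIMILAR TO 'H.llo'").print();
```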



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-32794) Release Testing: Verify FLIP-293: Introduce Flink Jdbc Driver For Sql Gateway

2023-08-22 Thread Fang Yong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fang Yong updated FLINK-32794:
--
Description: Document for jdbc driver: 
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/jdbcdriver/

> Release Testing: Verify FLIP-293: Introduce Flink Jdbc Driver For Sql Gateway
> -
>
> Key: FLINK-32794
> URL: https://issues.apache.org/jira/browse/FLINK-32794
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Priority: Major
> Fix For: 1.18.0
>
>
> Document for jdbc driver: 
> https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/jdbcdriver/



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32794) Release Testing: Verify FLIP-293: Introduce Flink Jdbc Driver For Sql Gateway

2023-08-22 Thread Fang Yong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17757760#comment-17757760
 ] 

Fang Yong commented on FLINK-32794:
---

Thanks [~renqs], DONE

> Release Testing: Verify FLIP-293: Introduce Flink Jdbc Driver For Sql Gateway
> -
>
> Key: FLINK-32794
> URL: https://issues.apache.org/jira/browse/FLINK-32794
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Priority: Major
> Fix For: 1.18.0
>
>
> Document for jdbc driver: 
> https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/jdbcdriver/



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32798) Release Testing: Verify FLIP-294: Support Customized Catalog Modification Listener

2023-08-22 Thread Fang Yong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17757759#comment-17757759
 ] 

Fang Yong commented on FLINK-32798:
---

[~renqs] I have added the document link in the "Description".

> Release Testing: Verify FLIP-294: Support Customized Catalog Modification 
> Listener
> --
>
> Key: FLINK-32798
> URL: https://issues.apache.org/jira/browse/FLINK-32798
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Priority: Major
> Fix For: 1.18.0
>
>
> The document about catalog modification listener is: 
> https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/catalogs/#catalog-modification-listener



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-4675) Remove Parameter from WindowAssigner.getDefaultTrigger()

2023-08-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-4675:
--
Labels: pull-request-available  (was: )

> Remove Parameter from WindowAssigner.getDefaultTrigger()
> 
>
> Key: FLINK-4675
> URL: https://issues.apache.org/jira/browse/FLINK-4675
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>
> For legacy reasons the method has {{StreamExecutionEnvironment}} as a 
> parameter. This is not needed anymore.
> [~StephanEwen] do you think we should break this now? {{WindowAssigner}} is 
> {{PublicEvolving}}, but I wanted to play it conservatively for now.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32798) Release Testing: Verify FLIP-294: Support Customized Catalog Modification Listener

2023-08-22 Thread Fang Yong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fang Yong updated FLINK-32798:
--
Description: The document about catalog modification listener is: 
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/catalogs/#catalog-modification-listener

> Release Testing: Verify FLIP-294: Support Customized Catalog Modification 
> Listener
> --
>
> Key: FLINK-32798
> URL: https://issues.apache.org/jira/browse/FLINK-32798
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Priority: Major
> Fix For: 1.18.0
>
>
> The document about catalog modification listener is: 
> https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/catalogs/#catalog-modification-listener



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] WencongLiu opened a new pull request, #23073: [FLINK-4675] Remove parameter in WindowAssigner#getDefaultTrigger()

2023-08-22 Thread via GitHub


WencongLiu opened a new pull request, #23073:
URL: https://github.com/apache/flink/pull/23073

   ## What is the purpose of the change
   
   Remove parameter in WindowAssigner#getDefaultTrigger().
   
   
   ## Brief change log
   
 - Remove parameter in WindowAssigner#getDefaultTrigger()
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] WencongLiu closed pull request #23073: [FLINK-4675] Remove parameter in WindowAssigner#getDefaultTrigger()

2023-08-22 Thread via GitHub


WencongLiu closed pull request #23073: [FLINK-4675] Remove parameter in 
WindowAssigner#getDefaultTrigger()
URL: https://github.com/apache/flink/pull/23073


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-5336) Make Path immutable

2023-08-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-5336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-5336:
--
Labels: pull-request-available  (was: )

> Make Path immutable
> ---
>
> Key: FLINK-5336
> URL: https://issues.apache.org/jira/browse/FLINK-5336
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataSet
>Reporter: Stephan Ewen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>
> The {{Path}} class is currently mutable to support the {{IOReadableWritable}} 
> serialization. Since that serialization is not used any more, I suggest 
> dropping that interface from Path and making the Path's URI final.
> Being immutable, we can store configured paths properly without the chance of 
> them being mutated as side effects.
> Many parts of the code already assume that the Path is immutable and are 
> susceptible to subtle errors.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] WencongLiu opened a new pull request, #23072: [FLINK-5336] Deprecate IOReadableWritable serialization in Path

2023-08-22 Thread via GitHub


WencongLiu opened a new pull request, #23072:
URL: https://github.com/apache/flink/pull/23072

   ## What is the purpose of the change
   
   Deprecate IOReadableWritable serialization in Path.
   
   
   ## Brief change log
   
 - Deprecate IOReadableWritable serialization in Path
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-22937) rocksdb cause jvm to crash

2023-08-22 Thread Hangxiang Yu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hangxiang Yu closed FLINK-22937.

Resolution: Cannot Reproduce

Closing this, as there has been no response for more than two years and there 
is not enough information to reproduce/debug; please reopen it if necessary.

> rocksdb cause jvm to crash
> --
>
> Key: FLINK-22937
> URL: https://issues.apache.org/jira/browse/FLINK-22937
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.13.1
> Environment: deployment: native kubernates
>Reporter: Piers
>Priority: Major
> Attachments: dump.txt, hs_err_pid1.log
>
>
> The JVM crashes when running the job, possibly caused by RocksDB.
> The attached files contain the JVM crash log.
> Thanks.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-30461) Some rocksdb sst files will remain forever

2023-08-22 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17757756#comment-17757756
 ] 

Rui Fan commented on FLINK-30461:
-

Hi [~mason6345], I didn't find out in time that the sst files weren't being 
cleaned up; instead, one of our Flink users reported failing checkpoints. After 
analysis, the root cause was: +_the shared directory of a Flink job had more 
than 1 million files, which exceeded the HDFS upper limit and caused new files 
not to be written._+

 

With `state.checkpoints.num-retained=1`, I deserialized the _metadata file of 
the latest checkpoint: only 50k files were referenced, so the other 950k files 
should have been cleaned up. So analyzing the _metadata file can show that 
some SST files are not being cleaned up; a sketch of reading it follows below.

BTW, please follow FLINK-28984 as well; it also causes sst file leaks.
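
For anyone who wants to repeat that analysis, here is a rough sketch (the 
checkpoint path is illustrative; it assumes flink-runtime on the classpath and 
uses the Checkpoints utility to deserialize the _metadata file):

{code:java}
import java.io.DataInputStream;
import java.io.FileInputStream;

import org.apache.flink.runtime.checkpoint.Checkpoints;
import org.apache.flink.runtime.checkpoint.metadata.CheckpointMetadata;

// Sketch: deserialize a checkpoint's _metadata file and print the operator
// states it references; files under shared/ that no retained checkpoint
// references are leaked. The path below is a placeholder.
public class MetadataInspector {
    public static void main(String[] args) throws Exception {
        String checkpointDir = "/path/to/chk-123";
        try (DataInputStream in =
                new DataInputStream(new FileInputStream(checkpointDir + "/_metadata"))) {
            CheckpointMetadata metadata =
                    Checkpoints.loadCheckpointMetadata(
                            in, MetadataInspector.class.getClassLoader(), checkpointDir);
            metadata.getOperatorStates().forEach(System.out::println);
        }
    }
}
{code}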

 

> Some rocksdb sst files will remain forever
> --
>
> Key: FLINK-30461
> URL: https://issues.apache.org/jira/browse/FLINK-30461
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Affects Versions: 1.16.0, 1.17.0, 1.15.3
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0, 1.15.4, 1.16.2
>
> Attachments: image-2022-12-20-18-45-32-948.png, 
> image-2022-12-20-18-47-42-385.png, screenshot-1.png
>
>
> In RocksDB incremental checkpoint mode, if the checkpoint is canceled due to 
> a checkpoint timeout while some files have been uploaded and others have not, 
> the uploaded files will remain.
>  
> h2. Impact: 
> The shared directory of a flink job has more than 1 million files. It 
> exceeded the hdfs upper limit, causing new files not to be written.
> However only 50k files are available, the other 950k files should be cleaned 
> up.
> !https://user-images.githubusercontent.com/38427477/207588272-dda7ba69-c84c-4372-aeb4-c54657b9b956.png|width=1962,height=364!
> h2. Root cause:
> If an exception is thrown during the checkpoint async phase, Flink will clean 
> up metaStateHandle, miscFiles and sstFiles.
> However, sst files are only added to sstFiles together, after all of them 
> have been uploaded. If the checkpoint is canceled due to a checkpoint timeout 
> while some sst files are uploaded and others are still uploading, none of the 
> sst files are added to sstFiles, and the uploaded ones remain on HDFS.
> [code 
> link|https://github.com/apache/flink/blob/49146cdec41467445de5fc81f100585142728bdf/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/snapshot/RocksIncrementalSnapshotStrategy.java#L328]
> h2. Solution:
> Use a CloseableRegistry as the tmpResourcesRegistry: if the async phase 
> fails, the tmpResourcesRegistry cleans up these temporary resources (see the 
> sketch below).
>  
> POC code:
> [https://github.com/1996fanrui/flink/commit/86a456b2bbdad6c032bf8e0bff71c4824abb3ce1]
>  
>  
> !image-2022-12-20-18-45-32-948.png|width=1114,height=442!
> !image-2022-12-20-18-47-42-385.png|width=1332,height=552!
>  
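
To make the idea concrete, here is a simplified sketch (not the actual 
RocksIncrementalSnapshotStrategy change; see the POC commit above for that; 
the Upload type stands in for a discardable state handle):

{code:java}
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.flink.core.fs.CloseableRegistry;

// Simplified sketch: register a cleanup hook for every uploaded file. On
// success the hooks are unregistered (the checkpoint now owns the files); on
// failure, closing the registry discards the uploads that never became part
// of sstFiles.
public class TmpResourcesSketch {

    interface Upload extends Closeable {} // stand-in for a discardable state handle

    public static void uploadAll(List<Upload> uploads) throws IOException {
        CloseableRegistry tmpResourcesRegistry = new CloseableRegistry();
        List<Closeable> registered = new ArrayList<>();
        try {
            for (Upload upload : uploads) {
                tmpResourcesRegistry.registerCloseable(upload);
                registered.add(upload);
            }
            // Success: ownership passes to the checkpoint, keep the files.
            for (Closeable c : registered) {
                tmpResourcesRegistry.unregisterCloseable(c);
            }
        } catch (IOException e) {
            tmpResourcesRegistry.close(); // failure: discard every uploaded file
            throw e;
        }
    }
}
{code}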



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] WencongLiu commented on a diff in pull request #22341: [FLINK-27204][flink-runtime] Refract FileSystemJobResultStore to execute I/O operations on the ioExecutor

2023-08-22 Thread via GitHub


WencongLiu commented on code in PR #22341:
URL: https://github.com/apache/flink/pull/22341#discussion_r1302368927


##
flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/Dispatcher.java:
##
@@ -513,36 +514,39 @@ private void stopDispatcherServices() throws Exception {
 public CompletableFuture submitJob(JobGraph jobGraph, Time 
timeout) {
 final JobID jobID = jobGraph.getJobID();
 log.info("Received JobGraph submission '{}' ({}).", 
jobGraph.getName(), jobID);
-
-try {
-if (isInGloballyTerminalState(jobID)) {
-log.warn(
-"Ignoring JobGraph submission '{}' ({}) because the 
job already reached a globally-terminal state (i.e. {}) in a previous 
execution.",
-jobGraph.getName(),
-jobID,
-Arrays.stream(JobStatus.values())
-.filter(JobStatus::isGloballyTerminalState)
-.map(JobStatus::name)
-.collect(Collectors.joining(", ")));
-return FutureUtils.completedExceptionally(
-
DuplicateJobSubmissionException.ofGloballyTerminated(jobID));
-} else if (jobManagerRunnerRegistry.isRegistered(jobID)
-|| submittedAndWaitingTerminationJobIDs.contains(jobID)) {
-// job with the given jobID is not terminated, yet
-return FutureUtils.completedExceptionally(
-DuplicateJobSubmissionException.of(jobID));
-} else if (isPartialResourceConfigured(jobGraph)) {
-return FutureUtils.completedExceptionally(
-new JobSubmissionException(
-jobID,
-"Currently jobs is not supported if parts of 
the vertices have "
-+ "resources configured. The 
limitation will be removed in future versions."));
-} else {
-return internalSubmitJob(jobGraph);
-}
-} catch (FlinkException e) {
-return FutureUtils.completedExceptionally(e);
-}
+return isInGloballyTerminalState(jobID)
+.thenCompose(
+isTerminated -> {
+if (isTerminated) {
+log.warn(
+"Ignoring JobGraph submission '{}' 
({}) because the job already "
++ "reached a globally-terminal 
state (i.e. {}) in a "
++ "previous execution.",
+jobGraph.getName(),
+jobID,
+Arrays.stream(JobStatus.values())
+
.filter(JobStatus::isGloballyTerminalState)
+.map(JobStatus::name)
+.collect(Collectors.joining(", 
")));
+return FutureUtils.completedExceptionally(
+
DuplicateJobSubmissionException.ofGloballyTerminated(
+jobID));
+} else if 
(jobManagerRunnerRegistry.isRegistered(jobID)

Review Comment:
   Thanks for your detailed explanation! I've changed `thenCompose` to 
`thenComposeAsync`.
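
   For reference, a minimal illustration of the threading difference (not the 
actual Dispatcher code; the executors are placeholders for the I/O pool and 
the component's main-thread executor):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;

// thenCompose would run the callback on whichever thread completed the
// future (here: the I/O pool); thenComposeAsync(fn, mainThreadExecutor)
// hops back to the single main thread before touching component state.
class ComposeThreading {
    static CompletableFuture<Boolean> check(Executor ioExecutor, Executor mainThreadExecutor) {
        return CompletableFuture
                .supplyAsync(() -> false /* e.g. isInGloballyTerminalState */, ioExecutor)
                .thenComposeAsync(
                        isTerminated -> CompletableFuture.completedFuture(!isTerminated),
                        mainThreadExecutor);
    }
}
```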



##
flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/Dispatcher.java:
##
@@ -513,36 +514,39 @@ private void stopDispatcherServices() throws Exception {
 public CompletableFuture submitJob(JobGraph jobGraph, Time 
timeout) {
 final JobID jobID = jobGraph.getJobID();
 log.info("Received JobGraph submission '{}' ({}).", 
jobGraph.getName(), jobID);
-
-try {
-if (isInGloballyTerminalState(jobID)) {
-log.warn(
-"Ignoring JobGraph submission '{}' ({}) because the 
job already reached a globally-terminal state (i.e. {}) in a previous 
execution.",
-jobGraph.getName(),
-jobID,
-Arrays.stream(JobStatus.values())
-.filter(JobStatus::isGloballyTerminalState)
-.map(JobStatus::name)
-.collect(Collectors.joining(", ")));
-return FutureUtils.completedExceptionally(
-
DuplicateJobSubmissionException.ofGloballyTerminated(jobID));
-} else if (jobManagerRunnerRegistry.isRegistered(jobID)
-|| submittedAndWaitingTerminationJobIDs.contains(jobID)) {
-// job with the given jobID is not terminated, yet
-

[jira] [Updated] (FLINK-32909) The jobmanager.sh pass arguments failed

2023-08-22 Thread Alex Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Wu updated FLINK-32909:

Description: 
I'm trying to use the jobmanager.sh script to create a jobmanager instance 
manually, and I need to pass arguments to the script dynamically rather than 
through flink-conf.yaml. But I found that this did not work when I commented 
out all configurations in flink-conf.yaml. I typed a command like:

 
{code:java}
./bin/jobmanager.sh start -D jobmanager.memory.flink.size=1024m -D 
jobmanager.rpc.address=xx.xx.xx.xx -D jobmanager.rpc.port=xxx -D 
jobmanager.bind-host=0.0.0.0 -D rest.address=xx.xx.xx.xx -D rest.port=xxx -D 
rest.bind-address=0.0.0.0{code}
but I got some errors below:

 
{code:java}
[ERROR] The execution result is empty.
[ERROR] Could not get JVM parameters and dynamic configurations properly.
[ERROR] Raw output from BashJavaUtils:
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will 
impact performance.
Exception in thread "main" 
org.apache.flink.configuration.IllegalConfigurationException: JobManager memory 
configuration failed: Either required fine-grained memory 
(jobmanager.memory.heap.size), or Total Flink Memory size (Key: 
'jobmanager.memory.flink.size' , default: null (fallback keys: [])), or Total 
Process Memory size (Key: 'jobmanager.memory.process.size' , default: null 
(fallback keys: [])) need to be configured explicitly.
        at 
org.apache.flink.runtime.jobmanager.JobManagerProcessUtils.processSpecFromConfigWithNewOptionToInterpretLegacyHeap(JobManagerProcessUtils.java:78)
        at 
org.apache.flink.runtime.util.bash.BashJavaUtils.getJmResourceParams(BashJavaUtils.java:98)
        at 
org.apache.flink.runtime.util.bash.BashJavaUtils.runCommand(BashJavaUtils.java:69)
        at 
org.apache.flink.runtime.util.bash.BashJavaUtils.main(BashJavaUtils.java:56)
Caused by: org.apache.flink.configuration.IllegalConfigurationException: Either 
required fine-grained memory (jobmanager.memory.heap.size), or Total Flink 
Memory size (Key: 'jobmanager.memory.flink.size' , default: null (fallback 
keys: [])), or Total Process Memory size (Key: 'jobmanager.memory.process.size' 
, default: null (fallback keys: [])) need to be configured explicitly.
        at 
org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.failBecauseRequiredOptionsNotConfigured(ProcessMemoryUtils.java:129)
        at 
org.apache.flink.runtime.util.config.memory.ProcessMemoryUtils.memoryProcessSpecFromConfig(ProcessMemoryUtils.java:86)
        at 
org.apache.flink.runtime.jobmanager.JobManagerProcessUtils.processSpecFromConfig(JobManagerProcessUtils.java:83)
        at 
org.apache.flink.runtime.jobmanager.JobManagerProcessUtils.processSpecFromConfigWithNewOptionToInterpretLegacyHeap(JobM
 {code}
The error asks me to configure memory for the jobmanager instance explicitly, 
but I had already passed the jobmanager.memory.flink.size parameter. So I 
debugged the script and found a variable-name error in the jobmanager.sh 
script at line 54:

 
{code:bash}
parseJmArgsAndExportLogs "${ARGS[@]}"
{code}
The uppercase "${ARGS[@]}" is the wrong variable name in this context and 
causes an empty string to be passed to the function. I changed it to 
"${args[@]}" and it works fine.

 

 

 

  was:
I'm trying to use the jobmanager.sh script to create a jobmanager instance 
manually, and I need to pass arguments to the script dynamically rather than 
through flink-conf.yaml. But I found that this did not work when I commented 
out all configurations in flink-conf.yaml. I typed a command like:

 
{code:java}
./bin/jobmanager.sh start -D jobmanager.memory.flink.size=1024m -D 
jobmanager.rpc.address=xx.xx.xx.xx -D jobmanager.rpc.port=xxx -D 
jobmanager.bind-host=0.0.0.0 -Drest.address=xx.xx.xx.xx -Drest.port=xxx 
-Drest.bind-address=0.0.0.0{code}
but I got some errors below:

 
{code:java}
[ERROR] The execution result is empty.
[ERROR] Could not get JVM parameters and dynamic configurations properly.
[ERROR] Raw output from BashJavaUtils:
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will 
impact performance.
Exception in thread "main" 
org.apache.flink.configuration.IllegalConfigurationException: JobManager memory 
configuration failed: Either required fine-grained memory 
(jobmanager.memory.heap.size), or Total Flink Memory size (Key: 
'jobmanager.memory.flink.size' , default: null (fallback keys: [])), or Total 
Process Memory size (Key: 'jobmanager.memory.process.size' , default: null 
(fallback keys: [])) need to be configured explicitly.
        at 
org.apache.flink.runtime.jobmanager.JobManagerProcessUtils.processSpecFromConfigWithNewOptionToInterpretLegacyHeap(JobManagerProcessUtils.java:78)
        at 
org.apache.flink.runtime.util.bash.BashJavaUtils.getJmResourceParams(BashJavaUtils.java:98)
        at 

[GitHub] [flink] flinkbot commented on pull request #23263: [FLINK-32671][docs] Document Externalized Declarative Resource Management

2023-08-22 Thread via GitHub


flinkbot commented on PR #23263:
URL: https://github.com/apache/flink/pull/23263#issuecomment-1689128611

   
   ## CI report:
   
   * 194010966689800b21dee2f182ec5162eccd9a5c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-32671) Document Externalized Declarative Resource Management

2023-08-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-32671:
---
Labels: pull-request-available  (was: )

> Document Externalized Declarative Resource Management
> -
>
> Key: FLINK-32671
> URL: https://issues.apache.org/jira/browse/FLINK-32671
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Konstantin Knauf
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] czy006 opened a new pull request, #23263: [FLINK-32671][docs] Document Externalized Declarative Resource Management

2023-08-22 Thread via GitHub


czy006 opened a new pull request, #23263:
URL: https://github.com/apache/flink/pull/23263

   ## What is the purpose of the change
   
   Add Document Externalized Declarative Resource Management (FLIP-291)
   
   
   ## Brief change log
   
   - Externalized Declarative Resource Management (FLIP-291)docs
   
   
   ## Verifying this change
   
   Its changes only affect the JavaDocs of the Adaptive Scheduler, adding the 
description of Externalized Declarative Resource Management.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-32926) Create a release branch

2023-08-22 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren reassigned FLINK-32926:
-

Assignee: Jing Ge

> Create a release branch
> ---
>
> Key: FLINK-32926
> URL: https://issues.apache.org/jira/browse/FLINK-32926
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Sergey Nuyanzin
>Assignee: Jing Ge
>Priority: Major
>
> If you are doing a new minor release, you need to update Flink version in the 
> following repositories and the [AzureCI project 
> configuration|https://dev.azure.com/apache-flink/apache-flink/]:
>  * [apache/flink|https://github.com/apache/flink]
>  * [apache/flink-docker|https://github.com/apache/flink-docker]
>  * [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks]
> Patch releases don't require these repositories to be touched. Simply 
> check out the already existing branch for that version:
> {code:java}
> $ git checkout release-$SHORT_RELEASE_VERSION
> {code}
> h4. Flink repository
> Create a branch for the new version that we want to release before updating 
> the master branch to the next development version:
> {code:bash}
> $ cd ./tools
> tools $ releasing/create_snapshot_branch.sh
> tools $ git checkout master
> tools $ OLD_VERSION=$CURRENT_SNAPSHOT_VERSION 
> NEW_VERSION=$NEXT_SNAPSHOT_VERSION releasing/update_branch_version.sh
> {code}
> In the {{master}} branch, add a new value (e.g. {{v1_16("1.16")}}) to 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  as the last entry:
> {code:java}
> // ...
> v1_12("1.12"),
> v1_13("1.13"),
> v1_14("1.14"),
> v1_15("1.15"),
> v1_16("1.16");
> {code}
> The newly created branch and updated {{master}} branch need to be pushed to 
> the official repository.
> h4. Flink Docker Repository
> Afterwards fork off from {{dev-master}} a {{dev-x.y}} branch in the 
> [apache/flink-docker|https://github.com/apache/flink-docker] repository. Make 
> sure that 
> [apache/flink-docker:.github/workflows/ci.yml|https://github.com/apache/flink-docker/blob/dev-master/.github/workflows/ci.yml]
>  points to the correct snapshot version; for {{dev-x.y}} it should point to 
> {{x.y-SNAPSHOT}}, while for {{dev-master}} it should point to the most 
> recent snapshot version ({{$NEXT_SNAPSHOT_VERSION}}).
> After pushing the new minor release branch, as the last step you should also 
> update the documentation workflow to also build the documentation for the new 
> release branch. Check [Managing 
> Documentation|https://cwiki.apache.org/confluence/display/FLINK/Managing+Documentation]
>  on details on how to do that. You may also want to manually trigger a build 
> to make the changes visible as soon as possible.
> h4. Flink Benchmark Repository
> First of all, check out the {{master}} branch to a {{dev-x.y}} branch in 
> [apache/flink-benchmarks|https://github.com/apache/flink-benchmarks], so that 
> we can have a branch named {{dev-x.y}} which could be built on top of 
> ({{$CURRENT_SNAPSHOT_VERSION}}).
> Then, inside the repository you need to manually update the 
> {{flink.version}} property inside the parent *pom.xml* file. It should be 
> pointing to the most recent snapshot version ($NEXT_SNAPSHOT_VERSION). For 
> example:
> {code:xml}
> <flink.version>1.18-SNAPSHOT</flink.version>
> {code}
> h4. AzureCI Project Configuration
> The new release branch needs to be configured within AzureCI to make azure 
> aware of the new release branch. This matter can only be handled by Ververica 
> employees since they are owning the AzureCI setup.
>  
> 
> h3. Expectations (Minor Version only if not stated otherwise)
>  * Release branch has been created and pushed
>  * Changes on the new release branch are picked up by [Azure 
> CI|https://dev.azure.com/apache-flink/apache-flink/_build?definitionId=1&_a=summary]
>  * {{master}} branch has the version information updated to the new version 
> (check pom.xml files and the 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  enum)
>  * New version is added to the 
> [apache-flink:flink-annotations/src/main/java/org/apache/flink/FlinkVersion|https://github.com/apache/flink/blob/master/flink-annotations/src/main/java/org/apache/flink/FlinkVersion.java]
>  enum.
>  * Make sure [flink-docker|https://github.com/apache/flink-docker/] has 
> \{{dev-x.y}} branch and docker e2e tests run against this branch in the 
> corresponding Apache Flink release branch (see 
> 

[jira] [Commented] (FLINK-28866) Use DDL instead of legacy method to register the test source in JoinITCase

2023-08-22 Thread lincoln lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-28866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17757748#comment-17757748
 ] 

lincoln lee commented on FLINK-28866:
-

[~xu_shuai_] assigned to you.

> Use DDL instead of legacy method to register the test source in JoinITCase
> --
>
> Key: FLINK-28866
> URL: https://issues.apache.org/jira/browse/FLINK-28866
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Assignee: Shuai Xu
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-28866) Use DDL instead of legacy method to register the test source in JoinITCase

2023-08-22 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee reassigned FLINK-28866:
---

Assignee: Shuai Xu

> Use DDL instead of legacy method to register the test source in JoinITCase
> --
>
> Key: FLINK-28866
> URL: https://issues.apache.org/jira/browse/FLINK-28866
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Assignee: Shuai Xu
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32941) Table API Bridge `toDataStream(targetDataType)` function not working correctly for Java List

2023-08-22 Thread Tan Kim (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tan Kim updated FLINK-32941:

Description: 
When the code below is executed, only the first element of the list is 
repeatedly assigned to the List variable in MyPojo.
{code:java}
case class Item(
  name: String
)
case class MyPojo(
  @DataTypeHint("RAW") items: java.util.List[Item]
)

...

tableEnv
  .sqlQuery("select items from table")
  .toDataStream(DataTypes.of(classOf[MyPojo])) {code}
 

For example, if the following list comes in as input:
["a","b","c"]
the value actually stored in MyPojo's list variable is
["a","a","a"] 

  was:
When the code below is executed, only the first element of the list is assigned 
to the List variable in MyPoJo repeatedly.
{code:java}
case class Item(
  id: String,
  name: String
)
case class MyPojo(
  @DataTypeHist("RAW") items: java.util.List[Item]
)

...

tableEnv
  .sqlQuery("select items from table")
  .toDataStream(DataTypes.of(classOf[MyPoJo])) {code}
 

For example, if you have the following list coming in as input,
["a","b","c"]
The value actually stored in MyPojo's list variable is
["a","a","a"] 


> Table API Bridge `toDataStream(targetDataType)` function not working 
> correctly for Java List
> 
>
> Key: FLINK-32941
> URL: https://issues.apache.org/jira/browse/FLINK-32941
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Tan Kim
>Priority: Major
>  Labels: bridge
>
> When the code below is executed, only the first element of the list is 
> repeatedly assigned to the List variable in MyPojo.
> {code:java}
> case class Item(
>   name: String
> )
> case class MyPojo(
>   @DataTypeHint("RAW") items: java.util.List[Item]
> )
> ...
> tableEnv
>   .sqlQuery("select items from table")
>   .toDataStream(DataTypes.of(classOf[MyPojo])) {code}
>  
> For example, if the following list comes in as input:
> ["a","b","c"]
> the value actually stored in MyPojo's list variable is
> ["a","a","a"] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32941) Table API Bridge `toDataStream(targetDataType)` function not working correctly for Java List

2023-08-22 Thread Tan Kim (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tan Kim updated FLINK-32941:

Description: 
When the code below is executed, only the first element of the list is 
repeatedly assigned to the List variable in MyPojo.
{code:java}
case class Item(
  id: String,
  name: String
)
case class MyPojo(
  @DataTypeHint("RAW") items: java.util.List[Item]
)

...

tableEnv
  .sqlQuery("select items from table")
  .toDataStream(DataTypes.of(classOf[MyPojo])) {code}
 

For example, if the following list comes in as input:
["a","b","c"]
the value actually stored in MyPojo's list variable is
["a","a","a"] 

  was:
 

When the code below is executed, only the first element of the list is 
repeatedly assigned to the List variable in MyPojo.
{code:java}
case class Item(
  id: String,
  name: String
)
case class MyPojo(
  @DataTypeHint("RAW") items: java.util.List[Item]
)

...

tableEnv
  .sqlQuery("select items from table")
  .toDataStream(DataTypes.of(classOf[MyPojo])) {code}
 

For example, if the following list comes in as input:
["a","b","c"]
the value actually stored in MyPojo's list variable is
["a","a","a"] 


> Table API Bridge `toDataStream(targetDataType)` function not working 
> correctly for Java List
> 
>
> Key: FLINK-32941
> URL: https://issues.apache.org/jira/browse/FLINK-32941
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Tan Kim
>Priority: Major
>  Labels: bridge
>
> When the code below is executed, only the first element of the list is 
> repeatedly assigned to the List variable in MyPojo.
> {code:java}
> case class Item(
>   id: String,
>   name: String
> )
> case class MyPojo(
>   @DataTypeHint("RAW") items: java.util.List[Item]
> )
> ...
> tableEnv
>   .sqlQuery("select items from table")
>   .toDataStream(DataTypes.of(classOf[MyPojo])) {code}
>  
> For example, if the following list comes in as input:
> ["a","b","c"]
> the value actually stored in MyPojo's list variable is
> ["a","a","a"] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32941) Table API Bridge `toDataStream(targetDataType)` function not working correctly for Java List

2023-08-22 Thread Tan Kim (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tan Kim updated FLINK-32941:

Labels: bridge  (was: )

> Table API Bridge `toDataStream(targetDataType)` function not working 
> correctly for Java List
> 
>
> Key: FLINK-32941
> URL: https://issues.apache.org/jira/browse/FLINK-32941
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Tan Kim
>Priority: Major
>  Labels: bridge
>
>  
> When the code below is executed, only the first element of the list is 
> repeatedly assigned to the List variable in MyPojo.
> {code:java}
> case class Item(
>   id: String,
>   name: String
> )
> case class MyPojo(
>   @DataTypeHint("RAW") items: java.util.List[Item]
> )
> ...
> tableEnv
>   .sqlQuery("select items from table")
>   .toDataStream(DataTypes.of(classOf[MyPojo])) {code}
>  
> For example, if the following list comes in as input:
> ["a","b","c"]
> the value actually stored in MyPojo's list variable is
> ["a","a","a"] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32941) Table API Bridge `toDataStream(targetDataType)` function not working correctly for Java List

2023-08-22 Thread Tan Kim (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tan Kim updated FLINK-32941:

Component/s: Table SQL / API

> Table API Bridge `toDataStream(targetDataType)` function not working 
> correctly for Java List
> 
>
> Key: FLINK-32941
> URL: https://issues.apache.org/jira/browse/FLINK-32941
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Tan Kim
>Priority: Major
>
>  
> When the code below is executed, only the first element of the list is 
> assigned to the List variable in MyPoJo repeatedly.
> {code:java}
> case class Item(
>   id: String,
>   name: String
> )
> case class MyPojo(
>   @DataTypeHist("RAW") items: java.util.List[Item]
> )
> ...
> tableEnv
>   .sqlQuery("select items from table")
>   .toDataStream(DataTypes.of(classOf[MyPoJo])) {code}
>  
> For example, given the following list as input:
> ["a","b","c"]
> the value actually stored in MyPojo's list field is:
> ["a","a","a"]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32941) Table API Bridge `toDataStream(targetDataType)` function not working correctly for Java List

2023-08-22 Thread Tan Kim (Jira)
Tan Kim created FLINK-32941:
---

 Summary: Table API Bridge `toDataStream(targetDataType)` function 
not working correctly for Java List
 Key: FLINK-32941
 URL: https://issues.apache.org/jira/browse/FLINK-32941
 Project: Flink
  Issue Type: Bug
Reporter: Tan Kim


 

When the code below is executed, the first element of the input list is 
repeatedly assigned to every position of the List field in MyPojo.
{code:java}
case class Item(
  id: String,
  name: String
)
case class MyPojo(
  @DataTypeHint("RAW") items: java.util.List[Item]
)

...

tableEnv
  .sqlQuery("select items from table")
  .toDataStream(DataTypes.of(classOf[MyPojo])) {code}
 


For example, given the following list as input:
["a","b","c"]
the value actually stored in MyPojo's list field is:
["a","a","a"]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-30461) Some rocksdb sst files will remain forever

2023-08-22 Thread Mason Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17757746#comment-17757746
 ] 

Mason Chen commented on FLINK-30461:


[~fanrui] Thanks for fixing this! Just curious, how did you figure out that 
some SST files were not being cleaned up? Are there any tricks to discover the 
issue outside of reading the code? I recently hit this issue too, but all I 
saw was continuous growth of SST sizes in the RocksDB metrics.

> Some rocksdb sst files will remain forever
> --
>
> Key: FLINK-30461
> URL: https://issues.apache.org/jira/browse/FLINK-30461
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Affects Versions: 1.16.0, 1.17.0, 1.15.3
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0, 1.15.4, 1.16.2
>
> Attachments: image-2022-12-20-18-45-32-948.png, 
> image-2022-12-20-18-47-42-385.png, screenshot-1.png
>
>
> In RocksDB incremental checkpoint mode, if the checkpoint is canceled due to 
> a checkpoint timeout while some files have been uploaded and others have 
> not, the already uploaded files will remain.
>  
> h2. Impact: 
> The shared directory of a Flink job accumulated more than 1 million files, 
> exceeding the HDFS upper limit and preventing new files from being written.
> However, only 50k of those files were live; the other 950k should have been 
> cleaned up.
> !https://user-images.githubusercontent.com/38427477/207588272-dda7ba69-c84c-4372-aeb4-c54657b9b956.png|width=1962,height=364!
> h2. Root cause:
> If an exception is thrown during the checkpoint async phase, Flink cleans up 
> metaStateHandle, miscFiles and sstFiles.
> However, sst files are only added to sstFiles once all of them have been 
> uploaded. If the checkpoint is canceled due to a checkpoint timeout while 
> some sst files are uploaded and others are still uploading, none of them are 
> added to sstFiles, and the already uploaded sst files remain on HDFS.
> [code 
> link|https://github.com/apache/flink/blob/49146cdec41467445de5fc81f100585142728bdf/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/snapshot/RocksIncrementalSnapshotStrategy.java#L328]
> h2. Solution:
> Use the CloseableRegistry as the tmpResourcesRegistry. If the async phase 
> fails, the tmpResourcesRegistry cleans up these temporary resources (a 
> sketch of the pattern follows this quote).
>  
> POC code:
> [https://github.com/1996fanrui/flink/commit/86a456b2bbdad6c032bf8e0bff71c4824abb3ce1]
>  
>  
> !image-2022-12-20-18-45-32-948.png|width=1114,height=442!
> !image-2022-12-20-18-47-42-385.png|width=1332,height=552!
>  
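For illustration, a minimal sketch of the tmpResourcesRegistry pattern 
described in the solution above, built on Flink's 
org.apache.flink.core.fs.CloseableRegistry. The uploader interface and helper 
names below are hypothetical stand-ins, not the code from the linked POC:

{code:java}
import org.apache.flink.core.fs.CloseableRegistry;
import org.apache.flink.runtime.state.StreamStateHandle;

import java.io.IOException;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class TmpResourcesRegistrySketch {

    /** Hypothetical stand-in for the RocksDB state uploader. */
    interface FileUploader {
        StreamStateHandle upload(Path file) throws IOException;
    }

    static List<StreamStateHandle> uploadAll(List<Path> files, FileUploader uploader)
            throws Exception {
        CloseableRegistry tmpResourcesRegistry = new CloseableRegistry();
        List<StreamStateHandle> uploaded = new ArrayList<>();
        try {
            for (Path file : files) {
                StreamStateHandle handle = uploader.upload(file);
                // Register each handle immediately after its upload, so a later
                // failure in the async phase can still discard it.
                tmpResourcesRegistry.registerCloseable(() -> discardQuietly(handle));
                uploaded.add(handle);
            }
            return uploaded; // success: ownership passes to the checkpoint
        } catch (Exception e) {
            // failure (e.g. checkpoint canceled): discard everything uploaded so far
            tmpResourcesRegistry.close();
            throw e;
        }
    }

    private static void discardQuietly(StreamStateHandle handle) {
        try {
            handle.discardState();
        } catch (Exception ignored) {
            // best-effort cleanup of a temporary resource
        }
    }
}
{code}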



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32940) Support projection pushdown to table source for column projections through UDTF

2023-08-22 Thread Venkata krishnan Sowrirajan (Jira)
Venkata krishnan Sowrirajan created FLINK-32940:
---

 Summary: Support projection pushdown to table source for column 
projections through UDTF
 Key: FLINK-32940
 URL: https://issues.apache.org/jira/browse/FLINK-32940
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Reporter: Venkata krishnan Sowrirajan


Currently, Flink doesn't push down columns projected through a UDTF like 
_UNNEST_ to the table source.

For example:
{code:java}
select t1.name, t2.ename from DEPT_NESTED as t1, unnest(t1.employees) as 
t2{code}
For the above SQL, Flink projects all the columns of DEPT_NESTED rather than 
only _name_ and _employees_. If the table source supports nested-field column 
projection, ideally it should project only _t1.employees.ename_ from the 
table source.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] flinkbot commented on pull request #23262: [hotfix] Apply spotless

2023-08-22 Thread via GitHub


flinkbot commented on PR #23262:
URL: https://github.com/apache/flink/pull/23262#issuecomment-1689052696

   
   ## CI report:

   * ad08a5f854e31179019dbeaf87dca92f4e44ba7f UNKNOWN

   Bot commands
   The @flinkbot bot supports the following commands:

   - `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] snuyanzin opened a new pull request, #23262: [hotfix] Apply spotless

2023-08-22 Thread via GitHub


snuyanzin opened a new pull request, #23262:
URL: https://github.com/apache/flink/pull/23262

   ## What is the purpose of the change
   
   The PR is aiming to fix the failing build
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-21871) Support watermark for Hive and Filesystem streaming source

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21871:
---
  Labels: auto-deprioritized-major auto-unassigned pull-request-available  
(was: auto-unassigned pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Support watermark for Hive and Filesystem streaming source
> --
>
> Key: FLINK-21871
> URL: https://issues.apache.org/jira/browse/FLINK-21871
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / FileSystem, Connectors / Hive, Table SQL / 
> API
>Reporter: Jark Wu
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
>
> Hive and Filesystem already support streaming sources. However, they don't 
> support watermarks on the source. That means users can't leverage the 
> streaming source to perform Flink's powerful streaming analysis, e.g. 
> window aggregates, interval joins, and so on. 
> In order to let more Hive users leverage Flink for streaming analysis, and 
> to cooperate with the new optimized window-TVF operations (FLIP-145), we 
> need to support watermarks for Hive and Filesystem. 
> h2. How to emit watermark in Hive and Filesystem
> Fact data in Hive is usually partitioned by date and time, e.g. 
> {{pt_day=2021-03-19, pt_hour=10}}. In this case, once the data of partition 
> {{pt_day=2021-03-19, pt_hour=10}} has been emitted, we know that all the 
> data before {{2021-03-19 11:00:00}} has arrived, so we can emit a watermark 
> value of {{2021-03-19 11:00:00}}. We call this a partition watermark. 
> The partition watermark is much better than a record watermark (a watermark 
> extracted from each record, e.g. {{ts - INTERVAL '1' MINUTE}}). In the above 
> example, with a partition watermark, the window of [10:00, 11:00) will be 
> triggered when pt_hour=10 is finished. However, with a record watermark, the 
> window of [10:00, 11:00) will be triggered only when pt_hour=11 arrives, 
> which adds one more partition of delay to the pipeline. 
> Therefore, we first focus on supporting partition watermarks for Hive and 
> Filesystem.
> h2. Example
> In order to support such watermarks, we propose using the following DDL to 
> define a Hive table with watermark defined:
> {code:sql}
> -- using hive dialect
> CREATE TABLE hive_table (
>   x int, 
>   y string,
>   z int,
>   ts timestamp,
>   WATERMARK FOR ts AS SOURCE_WATERMARK
> ) PARTITIONED BY (pt_day string, pt_hour string) 
> TBLPROPERTIES (
>   'streaming-source.enable'='true',
>   'streaming-source.monitor-interval'='1s',
>   'partition.time-extractor.timestamp-pattern'='$pt_day $pt_hour:00:00',
>   'partition.time-interval'='1h'
> );
> -- window aggregate on the hive table
> SELECT window_start, window_end, COUNT(*), MAX(y), SUM(z)
> FROM TABLE(
>TUMBLE(TABLE hive_table, DESCRIPTOR(ts), INTERVAL '1' HOUR))
> GROUP BY window_start, window_end;
> {code}
> For filesystem connector, the DDL can be:
> {code:sql}
> CREATE TABLE fs_table (
> x int,
> y string,
> z int,
> ts TIMESTAMP(3),
> pt_day string,
> pt_hour string,
> WATERMARK FOR ts AS SOURCE_WATERMARK
> ) PARTITIONED BY (pt_day, pt_hour)
>   WITH (
> 'connector' = 'filesystem',
> 'path' = '/path/to/file',
> 'format' = 'parquet',
> 'streaming-source.enable'='true',
> 'streaming-source.monitor-interval'='1s',
> 'partition.time-extractor.timestamp-pattern'='$pt_day $pt_hour:00:00',
> 'partition.time-interval'='1h'
> );
> {code}
> I will explain the new function/configuration. 
> h2. SOURCE_WATERMARK built-in function
> FLIP-66[1] proposed a {{SYSTEM_WATERMARK}} function for watermarks preserved 
> in the underlying source system. 
> However, the SYSTEM prefix sounds like a Flink-system-generated value, while 
> actually this is a SOURCE-system-generated value. 
> So I propose to use {{SOURCE_WATERMARK}} instead; this also keeps the concept 
> aligned with the API of 
> {{org.apache.flink.table.descriptors.Rowtime#watermarksFromSource}}.
> h2. Table Options for Watermark
> - {{partition.time-extractor.timestamp-pattern}}: this option already exists. 
> It is used to extract/convert a partition value to a timestamp value.
> - {{partition.time-interval}}: this is a new option. It indicates the minimal 
> time interval of the partitions. It's used to calculate the correct watermark 
> when a partition is finished. The watermark = partition-timestamp + 
> time-interval.
> h2. How to 
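
To make the computation above concrete, a minimal sketch of how a partition 
watermark would be derived from the example options 
(timestamp-pattern '$pt_day $pt_hour:00:00', partition.time-interval '1h'); 
this is plain illustration, not the proposed implementation:

{code:java}
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class PartitionWatermarkSketch {
    public static void main(String[] args) {
        // Values extracted from a finished partition, e.g. pt_day=2021-03-19, pt_hour=10.
        String ptDay = "2021-03-19";
        String ptHour = "10";

        // partition.time-extractor.timestamp-pattern = '$pt_day $pt_hour:00:00'
        LocalDateTime partitionTimestamp =
                LocalDateTime.parse(
                        ptDay + " " + ptHour + ":00:00",
                        DateTimeFormatter.ofPattern("yyyy-MM-dd H:mm:ss"));

        // partition.time-interval = '1h'
        Duration timeInterval = Duration.ofHours(1);

        // watermark = partition-timestamp + time-interval
        LocalDateTime watermark = partitionTimestamp.plus(timeInterval);
        System.out.println(watermark); // 2021-03-19T11:00
    }
}
{code}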

[jira] [Updated] (FLINK-22366) HiveSinkCompactionITCase fails on azure

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22366:
---
  Labels: auto-deprioritized-critical auto-deprioritized-major 
test-stability  (was: auto-deprioritized-critical stale-major test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> HiveSinkCompactionITCase fails on azure
> ---
>
> Key: FLINK-22366
> URL: https://issues.apache.org/jira/browse/FLINK-22366
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem
>Affects Versions: 1.13.0, 1.12.5
>Reporter: Dawid Wysakowicz
>Priority: Minor
>  Labels: auto-deprioritized-critical, auto-deprioritized-major, 
> test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16818=logs=245e1f2e-ba5b-5570-d689-25ae21e5302f=e7f339b2-a7c3-57d9-00af-3712d4b15354=23420
> {code}
>  [ERROR] testNonPartition[format = 
> sequencefile](org.apache.flink.connectors.hive.HiveSinkCompactionITCase)  
> Time elapsed: 4.999 s  <<< FAILURE!
> Apr 19 22:25:10 java.lang.AssertionError: expected:<[+I[0, 0, 0], +I[0, 0, 
> 0], +I[1, 1, 1], +I[1, 1, 1], +I[2, 2, 2], +I[2, 2, 2], +I[3, 3, 3], +I[3, 3, 
> 3], +I[4, 4, 4], +I[4, 4, 4], +I[5, 5, 5], +I[5, 5, 5], +I[6, 6, 6], +I[6, 6, 
> 6], +I[7, 7, 7], +I[7, 7, 7], +I[8, 8, 8], +I[8, 8, 8], +I[9, 9, 9], +I[9, 9, 
> 9], +I[10, 0, 0], +I[10, 0, 0], +I[11, 1, 1], +I[11, 1, 1], +I[12, 2, 2], 
> +I[12, 2, 2], +I[13, 3, 3], +I[13, 3, 3], +I[14, 4, 4], +I[14, 4, 4], +I[15, 
> 5, 5], +I[15, 5, 5], +I[16, 6, 6], +I[16, 6, 6], +I[17, 7, 7], +I[17, 7, 7], 
> +I[18, 8, 8], +I[18, 8, 8], +I[19, 9, 9], +I[19, 9, 9], +I[20, 0, 0], +I[20, 
> 0, 0], +I[21, 1, 1], +I[21, 1, 1], +I[22, 2, 2], +I[22, 2, 2], +I[23, 3, 3], 
> +I[23, 3, 3], +I[24, 4, 4], +I[24, 4, 4], +I[25, 5, 5], +I[25, 5, 5], +I[26, 
> 6, 6], +I[26, 6, 6], +I[27, 7, 7], +I[27, 7, 7], +I[28, 8, 8], +I[28, 8, 8], 
> +I[29, 9, 9], +I[29, 9, 9], +I[30, 0, 0], +I[30, 0, 0], +I[31, 1, 1], +I[31, 
> 1, 1], +I[32, 2, 2], +I[32, 2, 2], +I[33, 3, 3], +I[33, 3, 3], +I[34, 4, 4], 
> +I[34, 4, 4], +I[35, 5, 5], +I[35, 5, 5], +I[36, 6, 6], +I[36, 6, 6], +I[37, 
> 7, 7], +I[37, 7, 7], +I[38, 8, 8], +I[38, 8, 8], +I[39, 9, 9], +I[39, 9, 9], 
> +I[40, 0, 0], +I[40, 0, 0], +I[41, 1, 1], +I[41, 1, 1], +I[42, 2, 2], +I[42, 
> 2, 2], +I[43, 3, 3], +I[43, 3, 3], +I[44, 4, 4], +I[44, 4, 4], +I[45, 5, 5], 
> +I[45, 5, 5], +I[46, 6, 6], +I[46, 6, 6], +I[47, 7, 7], +I[47, 7, 7], +I[48, 
> 8, 8], +I[48, 8, 8], +I[49, 9, 9], +I[49, 9, 9], +I[50, 0, 0], +I[50, 0, 0], 
> +I[51, 1, 1], +I[51, 1, 1], +I[52, 2, 2], +I[52, 2, 2], +I[53, 3, 3], +I[53, 
> 3, 3], +I[54, 4, 4], +I[54, 4, 4], +I[55, 5, 5], +I[55, 5, 5], +I[56, 6, 6], 
> +I[56, 6, 6], +I[57, 7, 7], +I[57, 7, 7], +I[58, 8, 8], +I[58, 8, 8], +I[59, 
> 9, 9], +I[59, 9, 9], +I[60, 0, 0], +I[60, 0, 0], +I[61, 1, 1], +I[61, 1, 1], 
> +I[62, 2, 2], +I[62, 2, 2], +I[63, 3, 3], +I[63, 3, 3], +I[64, 4, 4], +I[64, 
> 4, 4], +I[65, 5, 5], +I[65, 5, 5], +I[66, 6, 6], +I[66, 6, 6], +I[67, 7, 7], 
> +I[67, 7, 7], +I[68, 8, 8], +I[68, 8, 8], +I[69, 9, 9], +I[69, 9, 9], +I[70, 
> 0, 0], +I[70, 0, 0], +I[71, 1, 1], +I[71, 1, 1], +I[72, 2, 2], +I[72, 2, 2], 
> +I[73, 3, 3], +I[73, 3, 3], +I[74, 4, 4], +I[74, 4, 4], +I[75, 5, 5], +I[75, 
> 5, 5], +I[76, 6, 6], +I[76, 6, 6], +I[77, 7, 7], +I[77, 7, 7], +I[78, 8, 8], 
> +I[78, 8, 8], +I[79, 9, 9], +I[79, 9, 9], +I[80, 0, 0], +I[80, 0, 0], +I[81, 
> 1, 1], +I[81, 1, 1], +I[82, 2, 2], +I[82, 2, 2], +I[83, 3, 3], +I[83, 3, 3], 
> +I[84, 4, 4], +I[84, 4, 4], +I[85, 5, 5], +I[85, 5, 5], +I[86, 6, 6], +I[86, 
> 6, 6], +I[87, 7, 7], +I[87, 7, 7], +I[88, 8, 8], +I[88, 8, 8], +I[89, 9, 9], 
> +I[89, 9, 9], +I[90, 0, 0], +I[90, 0, 0], +I[91, 1, 1], +I[91, 1, 1], +I[92, 
> 2, 2], +I[92, 2, 2], +I[93, 3, 3], +I[93, 3, 3], +I[94, 4, 4], +I[94, 4, 4], 
> +I[95, 5, 5], +I[95, 5, 5], +I[96, 6, 6], +I[96, 6, 6], +I[97, 7, 7], +I[97, 
> 7, 7], +I[98, 8, 8], +I[98, 8, 8], +I[99, 9, 9], +I[99, 9, 9]]> but 
> was:<[+I[0, 0, 0], +I[1, 1, 1], +I[2, 2, 2], +I[3, 3, 3], +I[4, 4, 4], +I[5, 
> 5, 5], +I[6, 6, 6], +I[7, 7, 7], +I[8, 8, 8], +I[9, 9, 9], +I[10, 0, 0], 
> +I[11, 1, 1], +I[12, 2, 2], +I[13, 3, 3], +I[14, 4, 4], +I[15, 5, 5], +I[16, 
> 6, 6], +I[17, 7, 7], +I[18, 8, 8], +I[19, 9, 9], +I[20, 0, 0], +I[21, 1, 1], 
> +I[22, 2, 2], +I[23, 3, 3], +I[24, 4, 4], +I[25, 5, 5], +I[26, 6, 6], +I[27, 
> 7, 7], +I[28, 8, 8], +I[29, 9, 9], +I[30, 0, 0], +I[31, 1, 1], +I[32, 2, 2], 
> +I[33, 3, 3], +I[34, 4, 4], +I[35, 5, 5], +I[36, 6, 6], +I[37, 7, 7], +I[38, 
> 8, 8], +I[39, 9, 9], +I[40, 

[jira] [Updated] (FLINK-24677) JdbcBatchingOutputFormat should not generate circulate chaining of exceptions when flushing fails in timer thread

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-24677:
---
  Labels: auto-deprioritized-major pull-request-available  (was: 
pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> JdbcBatchingOutputFormat should not generate circulate chaining of exceptions 
> when flushing fails in timer thread
> -
>
> Key: FLINK-24677
> URL: https://issues.apache.org/jira/browse/FLINK-24677
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.15.0
>Reporter: Caizhi Weng
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>
> This is reported from the [user mailing 
> list|https://lists.apache.org/thread.html/r3e725f52e4f325b9dcb790635cc642bd6018c4bca39f86c71b8a60f4%40%3Cuser.flink.apache.org%3E].
> In the timer thread created in {{JdbcBatchingOutputFormat#open}}, the 
> {{flushException}} field records any exception thrown by the call to 
> {{flush}}. This exception is used to fail the job in the main thread.
> However, {{JdbcBatchingOutputFormat#flush}} also checks for this exception 
> and wraps it in a new layer of runtime exception. This produces a super long 
> stack trace by the time the main thread finally discovers the exception and 
> fails.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-23238) EventTimeWindowCheckpointingITCase.testTumblingTimeWindowWithKVStateMaxMaxParallelism fails on azure

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-23238:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor test-stability  
(was: auto-deprioritized-major auto-deprioritized-minor stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> EventTimeWindowCheckpointingITCase.testTumblingTimeWindowWithKVStateMaxMaxParallelism
>  fails on azure
> 
>
> Key: FLINK-23238
> URL: https://issues.apache.org/jira/browse/FLINK-23238
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.12.4, 1.14.6, 1.15.3
>Reporter: Xintong Song
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=19873=logs=baf26b34-3c6a-54e8-f93f-cf269b32f802=6dff16b1-bf54-58f3-23c6-76282f49a185=4490
> {code}
> [ERROR] Tests run: 42, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 261.311 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase
> [ERROR] testTumblingTimeWindowWithKVStateMaxMaxParallelism[statebackend type 
> =ROCKSDB_INCREMENTAL](org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase)
>   Time elapsed: 79.062 s  <<< FAILURE!
> java.lang.AssertionError: Job execution failed.
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase.doTestTumblingTimeWindowWithKVState(EventTimeWindowCheckpointingITCase.java:434)
>   at 
> org.apache.flink.test.checkpointing.EventTimeWindowCheckpointingITCase.testTumblingTimeWindowWithKVStateMaxMaxParallelism(EventTimeWindowCheckpointingITCase.java:350)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> 

[jira] [Updated] (FLINK-22068) FlinkKinesisConsumerTest.testPeriodicWatermark fails on azure

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22068:
---
  Labels: auto-deprioritized-major test-stability  (was: stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> FlinkKinesisConsumerTest.testPeriodicWatermark fails on azure
> -
>
> Key: FLINK-22068
> URL: https://issues.apache.org/jira/browse/FLINK-22068
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.13.0, 1.14.0, 1.15.0
>Reporter: Dawid Wysakowicz
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> {code}
> [ERROR] Tests run: 11, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 5.567 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumerTest
> [ERROR] 
> testPeriodicWatermark(org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumerTest)
>   Time elapsed: 0.845 s  <<< FAILURE!
> java.lang.AssertionError: 
> Expected: iterable containing [, ]
>  but: item 0: was 
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.junit.Assert.assertThat(Assert.java:956)
>   at org.junit.Assert.assertThat(Assert.java:923)
>   at 
> org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumerTest.testPeriodicWatermark(FlinkKinesisConsumerTest.java:988)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:326)
>   at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:89)
>   at 
> org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:97)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:310)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:131)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.access$100(PowerMockJUnit47RunnerDelegateImpl.java:59)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner$TestExecutorStatement.evaluate(PowerMockJUnit47RunnerDelegateImpl.java:147)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.evaluateStatement(PowerMockJUnit47RunnerDelegateImpl.java:107)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:298)
>   at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:87)
>   at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:50)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:218)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:160)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:134)
>   at 
> org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34)
>   at 
> org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.run(PowerMockJUnit44RunnerDelegateImpl.java:136)
>   at 

[jira] [Updated] (FLINK-22805) Dynamic configuration of Flink checkpoint interval

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22805:
---
  Labels: auto-deprioritized-critical auto-deprioritized-major  (was: 
auto-deprioritized-critical stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Dynamic configuration of Flink checkpoint interval
> --
>
> Key: FLINK-22805
> URL: https://issues.apache.org/jira/browse/FLINK-22805
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Checkpointing
>Affects Versions: 1.13.1
>Reporter: Fu Kai
>Priority: Minor
>  Labels: auto-deprioritized-critical, auto-deprioritized-major
>
> Flink currently does not support changing the checkpoint interval on the 
> fly. This would be useful for use cases like backfill/cold-start from a 
> stream containing the whole history.
>  
> In the cold-start phase, resources are fully utilized and the back-pressure 
> is high for all upstream operators, causing checkpoints to time out 
> constantly. The real production traffic is far less than that, and the 
> provisioned resources are capable of handling it. 
>  
> With a dynamically configurable checkpoint interval, the cold-start process 
> could be sped up with a less frequent checkpoint interval, or checkpointing 
> could even be turned off. After the process completes, the checkpoint 
> interval could be restored to normal.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-23632) [DOCS]The link to setup-Pyflink-virtual-env.sh is broken for page dev/python/faq

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-23632:
---
  Labels: auto-deprioritized-major pull-request-available  (was: 
pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> [DOCS]The link to setup-Pyflink-virtual-env.sh is broken for page 
> dev/python/faq
> 
>
> Key: FLINK-23632
> URL: https://issues.apache.org/jira/browse/FLINK-23632
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: wuguihu
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
> Attachments: image-20210805021609756.png
>
>
> 1. There is no setup-pyflink-virtual-env.sh file in the current version, and 
> no download link can be found.
> 2. This file can be found in previous versions
> [https://ci.apache.org/projects/flink/flink-docs-release-1.12/downloads/setup-pyflink-virtual-env.sh]
> 3. The file has not been found since version 1.13.
> [https://ci.apache.org/projects/flink/flink-docs-release-1.13/downloads/setup-pyflink-virtual-env.sh]
>  
> 4. The link below does not take effect
> {code:java}
> [convenience script]({% link downloads/setup-pyflink-virtual-env.sh %}) 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-22194) KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers fail due to commit timeout

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22194:
---
  Labels: auto-deprioritized-major test-stability  (was: stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers fail due to 
> commit timeout
> --
>
> Key: FLINK-22194
> URL: https://issues.apache.org/jira/browse/FLINK-22194
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0, 1.14.0, 1.12.4, 1.15.0
>Reporter: Guowei Ma
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16308=logs=b0097207-033c-5d9a-b48c-6d4796fbe60d=e8fcc430-213e-5cce-59d4-6942acf09121=6535
> {code:java}
> [ERROR] 
> testCommitOffsetsWithoutAliveFetchers(org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest)
>   Time elapsed: 60.123 s  <<< ERROR!
> java.util.concurrent.TimeoutException: The offset commit did not finish 
> before timeout.
>   at 
> org.apache.flink.core.testutils.CommonTestUtils.waitUtil(CommonTestUtils.java:210)
>   at 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.pollUntil(KafkaSourceReaderTest.java:285)
>   at 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers(KafkaSourceReaderTest.java:129)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-24111) UnsignedTypeConversionITCase fails locally

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-24111:
---
  Labels: auto-deprioritized-major test-stability  (was: stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> UnsignedTypeConversionITCase fails locally
> --
>
> Key: FLINK-24111
> URL: https://issues.apache.org/jira/browse/FLINK-24111
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.14.0
>Reporter: Nicolaus Weidner
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> Running UnsignedTypeConversionITCase locally (either on its own or as part of 
> e.g. mvn install) fails with
> {code:java}
> java.util.concurrent.ExecutionException: 
> org.apache.flink.table.api.TableException: Failed to wait job finish
> {code}
> and the following root cause:
> {code:java}
> Caused by: com.mysql.cj.exceptions.InvalidConnectionAttributeException: The 
> server time zone value 'CEST' is unrecognized or represents more than one 
> time zone. You must configure either the server or JDBC driver (via the 
> 'serverTimezone' configuration property) to use a more specifc time zone 
> value if you want to utilize time zone support.
>   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>  Method)
>   at 
> java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at 
> java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
>   at 
> com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:61)
>   at 
> com.mysql.cj.exceptions.ExceptionFactory.createException(ExceptionFactory.java:85)
>   at com.mysql.cj.util.TimeUtil.getCanonicalTimezone(TimeUtil.java:132)
>   at 
> com.mysql.cj.protocol.a.NativeProtocol.configureTimezone(NativeProtocol.java:2120)
>   at 
> com.mysql.cj.protocol.a.NativeProtocol.initServerSession(NativeProtocol.java:2143)
>   at 
> com.mysql.cj.jdbc.ConnectionImpl.initializePropsFromServer(ConnectionImpl.java:1310)
>   at 
> com.mysql.cj.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:967)
>   at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:826)
> {code}
> Its behavior seems inconsistent: Two people experienced this failure, while 
> one person did not.
> Full stacktrace:
> {code:java}
> java.util.concurrent.ExecutionException: 
> org.apache.flink.table.api.TableException: Failed to wait job finish
>   at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
>   at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
>   at 
> org.apache.flink.table.api.internal.TableResultImpl.awaitInternal(TableResultImpl.java:129)
>   at 
> org.apache.flink.table.api.internal.TableResultImpl.await(TableResultImpl.java:92)
>   at 
> org.apache.flink.connector.jdbc.table.UnsignedTypeConversionITCase.testUnsignedType(UnsignedTypeConversionITCase.java:138)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at 

[jira] [Updated] (FLINK-31820) Support data source sub-database and sub-table

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-31820:
---
  Labels: auto-deprioritized-major pull-request-available  (was: 
pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Support data source sub-database and sub-table
> --
>
> Key: FLINK-31820
> URL: https://issues.apache.org/jira/browse/FLINK-31820
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Reporter: xingyuan cheng
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>
> At present, apache/flink-connector-jdbc does not support sub-database and 
> sub-table sources. This adds that support for three commonly used databases: 
> MySQL, Postgres and Oracle.
>  
> Taking Oracle as an example, users only need to use the following 
> configuration format:
>  
> {code:java}
> create table oracle_source (
> EMPLOYEE_ID BIGINT,
> START_DATE TIMESTAMP,
> END_DATE TIMESTAMP,
> JOB_ID VARCHAR,
> DEPARTMENT_ID VARCHAR
> ) with (
> type = 'oracle',    
> url = 
> 'jdbc:oracle:thin:@//localhost:3306/order_([0-9]{1,}),jdbc:oracle:thin:@//localhost:3306/order_([0-9]{1,})',
>    userName = 'userName',
> password = 'password',
> dbName = 'hr', 
> table-name = 'order_([0-9]{1,})',
> timeField = 'START_DATE',
> startTime = '2007-1-1 00:00:00'
> ); {code}
> In the above code, the dbName attribute corresponds to the schema-name 
> attribute in Oracle or Postgres; for MySQL, the dbName must be specified 
> manually.
>  
> At the same time, I am also developing the CDAS whole-database 
> synchronization syntax for my company, and data-source support for 
> sub-database and sub-table is part of it. Unit tests will be added. For now, 
> please keep this PR in draft status.
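
To illustrate the {{table-name = 'order_([0-9]{1,})'}} option above, a small 
sketch of how such a regex would select the shard tables; this is illustration 
only, not the connector code:

{code:java}
import java.util.List;
import java.util.regex.Pattern;

public class ShardPatternSketch {
    public static void main(String[] args) {
        // table-name = 'order_([0-9]{1,})' from the example DDL above
        Pattern shardTables = Pattern.compile("order_([0-9]{1,})");
        for (String table : List.of("order_0", "order_42", "orders")) {
            // order_0 and order_42 match the shard pattern; orders does not.
            System.out.println(table + " -> " + shardTables.matcher(table).matches());
        }
    }
}
{code}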



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-24743) New File Sink end-to-end test fails on Azure

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-24743:
---
  Labels: auto-deprioritized-major test-stability  (was: stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> New File Sink end-to-end test fails on Azure
> 
>
> Key: FLINK-24743
> URL: https://issues.apache.org/jira/browse/FLINK-24743
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Runtime / Coordination, Tests
>Affects Versions: 1.15.0
>Reporter: Till Rohrmann
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> The {{New File Sink end-to-end test}} fails on Azure. It seems that the 
> {{TaskExecutor}} cannot connect with the {{ResourceManager}} because 
> {{Discarding inbound message to 
> [Actor[akka://flink/user/rpc/taskmanager_0#1155592017]] in read-only 
> association to [akka.ssl.tcp://flink@localhost:6123]. If this happens often 
> you may consider using akka.remote.use-passive-connections=off or use Artery 
> TCP.}}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=25831=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=070ff179-953e-5bda-71fa-d6599415701c=11674
> cc [~chesnay], this might be related to the recent Akka bump.
> Related: https://github.com/akka/akka/issues/24393
> Maybe we should upgrade to using Artery TCP or set 
> {{akka.remote.use-passive-connections=off}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-25100) RMQSourceITCase failed on azure due to java.io.EOFException

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-25100:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor test-stability  
(was: auto-deprioritized-major stale-minor test-stability)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> RMQSourceITCase failed on azure due to java.io.EOFException
> ---
>
> Key: FLINK-25100
> URL: https://issues.apache.org/jira/browse/FLINK-25100
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors/ RabbitMQ
>Affects Versions: 1.14.1, 1.13.6, 1.14.3, 1.15.0
>Reporter: Yun Gao
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> test-stability
>
> {code:java}
> Nov 29 12:02:05 [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 55.545 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.rabbitmq.RMQSourceITCase
> Nov 29 12:02:05 [ERROR] testStopWithSavepoint  Time elapsed: 15.014 s  <<< 
> ERROR!
> Nov 29 12:02:05 com.rabbitmq.client.PossibleAuthenticationFailureException: 
> Possibly caused by authentication failure
> Nov 29 12:02:05   at 
> com.rabbitmq.client.impl.AMQConnection.start(AMQConnection.java:388)
> Nov 29 12:02:05   at 
> com.rabbitmq.client.impl.recovery.RecoveryAwareAMQConnectionFactory.newConnection(RecoveryAwareAMQConnectionFactory.java:64)
> Nov 29 12:02:05   at 
> com.rabbitmq.client.impl.recovery.AutorecoveringConnection.init(AutorecoveringConnection.java:156)
> Nov 29 12:02:05   at 
> com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:1130)
> Nov 29 12:02:05   at 
> com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:1087)
> Nov 29 12:02:05   at 
> com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:1045)
> Nov 29 12:02:05   at 
> com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:1207)
> Nov 29 12:02:05   at 
> org.apache.flink.streaming.connectors.rabbitmq.RMQSourceITCase.getRMQConnection(RMQSourceITCase.java:201)
> Nov 29 12:02:05   at 
> org.apache.flink.streaming.connectors.rabbitmq.RMQSourceITCase.setUp(RMQSourceITCase.java:96)
> Nov 29 12:02:05   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Nov 29 12:02:05   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Nov 29 12:02:05   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Nov 29 12:02:05   at java.lang.reflect.Method.invoke(Method.java:498)
> Nov 29 12:02:05   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Nov 29 12:02:05   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Nov 29 12:02:05   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Nov 29 12:02:05   at 
> org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
> Nov 29 12:02:05   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> Nov 29 12:02:05   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Nov 29 12:02:05   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
> Nov 29 12:02:05   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Nov 29 12:02:05   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> Nov 29 12:02:05   at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> Nov 29 12:02:05   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> Nov 29 12:02:05   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> Nov 29 12:02:05   at 
> org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> Nov 29 12:02:05   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> Nov 29 12:02:05   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> Nov 29 12:02:05   at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> Nov 29 12:02:05   at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> Nov 29 12:02:05   at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
> Nov 29 

[jira] [Updated] (FLINK-24681) org.apache.kafka.clients.consumer.NoOffsetForPartitionException: Undefined offset with no reset policy for partitions

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-24681:
---
  Labels: auto-deprioritized-minor pull-request-available  (was: 
pull-request-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


>  org.apache.kafka.clients.consumer.NoOffsetForPartitionException: Undefined 
> offset with no reset policy for partitions
> --
>
> Key: FLINK-24681
> URL: https://issues.apache.org/jira/browse/FLINK-24681
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0
>Reporter: qinghuan wang
>Priority: Not a Priority
>  Labels: auto-deprioritized-minor, pull-request-available
>
> When creating a Kafka table
> {code:java}
> CREATE TABLE KafkaTable (
>   ...
> ) WITH (
>      'connector' = 'kafka',
>      'topic' = 'user_behavior',
>      'properties.bootstrap.servers' = '192.168.3.244:9092',
>      'properties.group.id' = 'testGroup',
>       'format' = 'csv'
> ); 
> {code}
> an exception is thrown:
> {code:java}
> Caused by: org.apache.kafka.clients.consumer.NoOffsetForPartitionException: 
> Undefined offset with no reset policy for partitions: [user_behavior-0]Caused 
> by: org.apache.kafka.clients.consumer.NoOffsetForPartitionException: 
> Undefined offset with no reset policy for partitions: 
> [haikang-face-recognition-0] at 
> org.apache.kafka.clients.consumer.internals.SubscriptionState.resetMissingPositions(SubscriptionState.java:631)
>  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateFetchPositions(KafkaConsumer.java:2343)
>  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:1725)
>  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:1684)
>  at 
> org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader.removeEmptySplits(KafkaPartitionSplitReader.java:375)
>  at 
> org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader.handleSplitsChanges(KafkaPartitionSplitReader.java:260)
>  at 
> org.apache.flink.connector.base.source.reader.fetcher.AddSplitsTask.run(AddSplitsTask.java:51)
>  at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:142)
>  ... 7 common frames omitted{code}
>  
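
The default {{'scan.startup.mode' = 'group-offsets'}} needs committed offsets 
for the consumer group; without them there is no reset policy, which matches 
the exception above. A hedged sketch of one way around it, assuming the 
standard Kafka SQL connector option (the column stands in for the elided 
schema):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class KafkaStartupModeSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(
                        EnvironmentSettings.newInstance().inStreamingMode().build());
        tEnv.executeSql(
                "CREATE TABLE KafkaTable ("
                        + "  user_behavior STRING"  // placeholder column
                        + ") WITH ("
                        + "  'connector' = 'kafka',"
                        + "  'topic' = 'user_behavior',"
                        + "  'properties.bootstrap.servers' = '192.168.3.244:9092',"
                        + "  'properties.group.id' = 'testGroup',"
                        // explicit reset policy instead of the default group-offsets
                        + "  'scan.startup.mode' = 'earliest-offset',"
                        + "  'format' = 'csv'"
                        + ")");
    }
}
{code}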



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-23810) Print sql when parse failed , which is convenient to find error sql from multiple executed sql

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-23810:
---
  Labels: auto-deprioritized-major pull-request-available  (was: 
pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Print  sql when parse failed  , which is convenient to find error sql from  
> multiple executed sql 
> --
>
> Key: FLINK-23810
> URL: https://issues.apache.org/jira/browse/FLINK-23810
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: hehuiyuan
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>
> Print the SQL when parsing fails, which makes it convenient to find the 
> offending statement among multiple executed statements.
>  
> {code:java}
> public SqlNode parse(String sql) {
> try {
> SqlParser parser = SqlParser.create(sql, config);
> return parser.parseStmt();
> } catch (SqlParseException e) {
> throw new SqlParserException("SQL parse failed. " + e.getMessage(), 
> e);
> }
> }
> {code}
>  
>  
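
A minimal sketch of the proposed improvement, appending the offending 
statement to the exception message; the wrapping class here mirrors the 
snippet above and is otherwise hypothetical:

{code:java}
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.parser.SqlParseException;
import org.apache.calcite.sql.parser.SqlParser;
import org.apache.flink.table.api.SqlParserException;

public class ParserSketch {

    private final SqlParser.Config config;

    public ParserSketch(SqlParser.Config config) {
        this.config = config;
    }

    public SqlNode parse(String sql) {
        try {
            SqlParser parser = SqlParser.create(sql, config);
            return parser.parseStmt();
        } catch (SqlParseException e) {
            // Include the statement so it can be located among many executed SQLs.
            throw new SqlParserException(
                    "SQL parse failed. " + e.getMessage() + "\nSQL statement:\n" + sql, e);
        }
    }
}
{code}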



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-24031) I am trying to deploy Flink in kubernetes but when I launch the taskManager in other container I get a Exception

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-24031:
---
  Labels: auto-deprioritized-major pull-request-available  (was: 
pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> I am trying to deploy Flink in kubernetes but when I launch the taskManager 
> in other container I get a Exception
> 
>
> Key: FLINK-24031
> URL: https://issues.apache.org/jira/browse/FLINK-24031
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.13.0, 1.13.2
>Reporter: Julio Pérez
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
> Attachments: JM.log, TM.log, flink-map.yml, jobmanager.log, 
> jobmanager.yml, taskmanager.log, taskmanager.yml
>
>
>  I explain here -> [https://github.com/apache/flink/pull/17020]
> I have a problem when I try to run Flink in k8s with the following manifests.
> I get the following exception:
>  # JobManager :
> {quote}2021-08-27 09:16:57,917 ERROR akka.remote.EndpointWriter [] - dropping 
> message [class akka.actor.ActorSelectionMessage] for non-local recipient 
> [Actor[akka.tcp://flink@jobmanager-hs:6123/]] arriving at 
> [akka.tcp://flink@jobmanager-hs:6123] inbound addresses are 
> [akka.tcp://flink@cluster:6123]
>  2021-08-27 09:17:01,255 DEBUG 
> org.apache.flink.runtime.resourcemanager.StandaloneResourceManager [] - 
> Trigger heartbeat request.
>  2021-08-27 09:17:01,284 DEBUG 
> org.apache.flink.runtime.resourcemanager.StandaloneResourceManager [] - 
> Trigger heartbeat request.
>  2021-08-27 09:17:10,008 DEBUG akka.remote.transport.netty.NettyTransport [] 
> - Remote connection to [/172.17.0.1:34827] was disconnected because of [id: 
> 0x13ae1d03, /172.17.0.1:34827 :> /172.17.0.23:6123] DISCONNECTED
>  2021-08-27 09:17:10,008 DEBUG akka.remote.transport.ProtocolStateActor [] - 
> Association between local [tcp://flink@cluster:6123] and remote 
> [tcp://flink@172.17.0.1:34827] was disassociated because the 
> ProtocolStateActor failed: Unknown
>  2021-08-27 09:17:10,009 WARN akka.remote.ReliableDeliverySupervisor [] - 
> Association with remote system [akka.tcp://flink@172.17.0.24:6122] has 
> failed, address is now gated for [50] ms. Reason: [Disassociated]
> {quote}
> TaskManager:
> {quote}INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Could not 
> resolve ResourceManager address 
> akka.tcp://flink@flink-jobmanager:6123/user/rpc/resourcemanager__, retrying 
> in 1 ms: Could not connect to rpc endpoint under address 
> akka.tcp://flink@flink-jobmanager:6123/user/rpc/resourcemanager__.
>  INFO org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Could not 
> resolve ResourceManager address 
> akka.tcp://flink@flink-jobmanager:6123/user/rpc/resourcemanager__, retrying 
> in 1 ms: Could not connect to rpc endpoint under address 
> akka.tcp://flink@flink-jobmanager:6123/user/rpc/resourcemanager__.
> {quote}
> Best regards,
> Julio



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-23982) Compiling error in WindowOperatorTest

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-23982:
---
  Labels: auto-deprioritized-major test-stability  (was: stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Compiling error in WindowOperatorTest
> -
>
> Key: FLINK-23982
> URL: https://issues.apache.org/jira/browse/FLINK-23982
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.14.0, 1.18.0
>Reporter: Xintong Song
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> What I don't understand is that, while the error does not seem to be
> environment-related, the other JDK11 compiling stages in the same build did
> not fail.
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=22855=logs=6caf31d6-847a-526e-9624-468e053467d6=0b23652f-b18b-5b6e-6eb6-a11070364610=3650
> {code}
> [INFO] -
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /home/vsts/work/1/s/flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/WindowOperatorTest.java:[678,43]
>  incompatible types: cannot infer type arguments for 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator<>
> reason: inference variable KEY has incompatible equality constraints 
> ACC,org.apache.flink.api.java.tuple.Tuple2,IN,K,java.lang.String
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-24701) "late events" link in "Timely Stream Processing" document is not working

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-24701:
---
  Labels: Starter auto-deprioritized-minor starter  (was: Starter 
stale-minor starter)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> "late events" link in "Timely Stream Processing" document is not working
> 
>
> Key: FLINK-24701
> URL: https://issues.apache.org/jira/browse/FLINK-24701
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.15.0
>Reporter: Caizhi Weng
>Priority: Not a Priority
>  Labels: Starter, auto-deprioritized-minor, starter
>
> The last but one paragraph of [this 
> document|https://nightlies.apache.org/flink/flink-docs-master/docs/concepts/time/#notions-of-time-event-time-and-processing-time]
>  contains a link with the text "late events". However, clicking that link
> leads nowhere.
> I guess it should point to the 
> [lateness|https://nightlies.apache.org/flink/flink-docs-master/docs/concepts/time/#lateness]
>  section on the same page.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-24095) Elasticsearch7DynamicSinkITCase.testWritingDocuments fails due to socket timeout

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-24095:
---
  Labels: auto-deprioritized-major test-stability  (was: stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Major, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Elasticsearch7DynamicSinkITCase.testWritingDocuments fails due to socket 
> timeout
> 
>
> Key: FLINK-24095
> URL: https://issues.apache.org/jira/browse/FLINK-24095
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.14.0, 1.15.0, 1.16.0, elasticsearch-3.0.0
>Reporter: Xintong Song
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=23250=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d=12781
> {code}
> Aug 31 23:06:22 Caused by: java.net.SocketTimeoutException: 30,000 
> milliseconds timeout on connection http-outgoing-3 [ACTIVE]
> Aug 31 23:06:22   at 
> org.elasticsearch.client.RestClient.extractAndWrapCause(RestClient.java:808)
> Aug 31 23:06:22   at 
> org.elasticsearch.client.RestClient.performRequest(RestClient.java:248)
> Aug 31 23:06:22   at 
> org.elasticsearch.client.RestClient.performRequest(RestClient.java:235)
> Aug 31 23:06:22   at 
> org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1514)
> Aug 31 23:06:22   at 
> org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1499)
> Aug 31 23:06:22   at 
> org.elasticsearch.client.RestHighLevelClient.ping(RestHighLevelClient.java:720)
> Aug 31 23:06:22   at 
> org.apache.flink.streaming.connectors.elasticsearch7.Elasticsearch7ApiCallBridge.verifyClientConnection(Elasticsearch7ApiCallBridge.java:138)
> Aug 31 23:06:22   at 
> org.apache.flink.streaming.connectors.elasticsearch7.Elasticsearch7ApiCallBridge.verifyClientConnection(Elasticsearch7ApiCallBridge.java:46)
> Aug 31 23:06:22   at 
> org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase.open(ElasticsearchSinkBase.java:318)
> Aug 31 23:06:22   at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
> Aug 31 23:06:22   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:100)
> Aug 31 23:06:22   at 
> org.apache.flink.streaming.api.operators.StreamSink.open(StreamSink.java:46)
> Aug 31 23:06:22   at 
> org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:110)
> Aug 31 23:06:22   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:691)
> Aug 31 23:06:22   at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
> Aug 31 23:06:22   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:667)
> Aug 31 23:06:22   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:639)
> Aug 31 23:06:22   at 
> org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958)
> Aug 31 23:06:22   at 
> org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927)
> Aug 31 23:06:22   at 
> org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:766)
> Aug 31 23:06:22   at 
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)
> Aug 31 23:06:22   at java.lang.Thread.run(Thread.java:748)
> Aug 31 23:06:22 Caused by: java.net.SocketTimeoutException: 30,000 
> milliseconds timeout on connection http-outgoing-3 [ACTIVE]
> Aug 31 23:06:22   at 
> org.apache.http.nio.protocol.HttpAsyncRequestExecutor.timeout(HttpAsyncRequestExecutor.java:387)
> Aug 31 23:06:22   at 
> org.apache.http.impl.nio.client.InternalIODispatch.onTimeout(InternalIODispatch.java:92)
> Aug 31 23:06:22   at 
> org.apache.http.impl.nio.client.InternalIODispatch.onTimeout(InternalIODispatch.java:39)
> Aug 31 23:06:22   at 
> org.apache.http.impl.nio.reactor.AbstractIODispatch.timeout(AbstractIODispatch.java:175)
> Aug 31 23:06:22   at 
> org.apache.http.impl.nio.reactor.BaseIOReactor.sessionTimedOut(BaseIOReactor.java:261)
> Aug 31 23:06:22   at 
> org.apache.http.impl.nio.reactor.AbstractIOReactor.timeoutCheck(AbstractIOReactor.java:502)
> Aug 31 23:06:22   at 
> 

[jira] [Updated] (FLINK-24640) CEIL, FLOOR built-in functions for Timestamp should respect DST

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-24640:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor 
pull-request-available  (was: auto-deprioritized-major pull-request-available 
stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> CEIL, FLOOR built-in functions for Timestamp should respect DST
> ---
>
> Key: FLINK-24640
> URL: https://issues.apache.org/jira/browse/FLINK-24640
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.14.0
>Reporter: Sergey Nuyanzin
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
>
> The problem is that if there is a date in DST time then 
> {code:sql}
> select floor(current_timestamp to year);
> {code}
> leads to result
> {noformat}
> 2021-12-31 23:00:00.000
> {noformat}
> while expected is {{2022-01-01 00:00:00.000}}
> same issue is with {{WEEK}}, {{QUARTER}} and {{MONTH}}
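For reference, the expected DST-aware semantics can be sketched with java.time,
where the truncation happens in local time (the time zone and class name below
are illustrative, not Flink's actual implementation):

{code:java}
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.temporal.ChronoUnit;
import java.time.temporal.TemporalAdjusters;

public class DstAwareFloor {
    public static void main(String[] args) {
        // Hypothetical session time zone that observes DST.
        ZoneId zone = ZoneId.of("Europe/Berlin");
        ZonedDateTime now = ZonedDateTime.now(zone);

        // FLOOR(ts TO YEAR) computed in local time: first day of the year at
        // local midnight, i.e. 2022-01-01T00:00, regardless of the UTC offset
        // in effect at that instant.
        ZonedDateTime floorYear =
                now.with(TemporalAdjusters.firstDayOfYear()).truncatedTo(ChronoUnit.DAYS);

        System.out.println(floorYear);
    }
}
{code}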



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-24941) Cannot report backpressure with DatadogReporter

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-24941:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor 
pull-request-available  (was: auto-deprioritized-major pull-request-available 
stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Cannot report backpressure with DatadogReporter
> ---
>
> Key: FLINK-24941
> URL: https://issues.apache.org/jira/browse/FLINK-24941
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Metrics
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Ori Popowski
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
>
> When using {{DatadogHttpReporter}} the log is full of these errors:
>  
> {code:java}
> 2021-11-16 09:51:11,521 [Flink-MetricRegistry-thread-1] INFO  
> org.apache.flink.metrics.datadog.DatadogHttpReporter  - The metric 
> flink.task.isBackPressured will not be reported because only number types are 
> supported by this reporter. {code}
> The code shows that the reason is that {{isBackPressured}} is a Boolean and
> all Gauge values are cast to {{Number}}, which results in a
> {{ClassCastException}} [1].
> I understand the limitation, but:
>  # This bug can be easily fixed
>  # Monitoring backpressure is extremely important. Without backpressure
> monitoring there's no way of seeing backpressure history and no alerts.
> h3. Workaround
> For anyone interested, rewrite the
> {{org.apache.flink.metrics.datadog.DGauge}} class to map Booleans to integers
> (false => 0, true => 1), and use the maven/sbt shade plugin to place your own
> version of this class into the final JAR instead of the existing class from
> the flink-metrics-datadog package.
>  
> [1] 
> https://github.com/apache/flink/blob/release-1.11/flink-metrics/flink-metrics-datadog/src/main/java/org/apache/flink/metrics/datadog/DatadogHttpReporter.java#L184-L188
>  
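A minimal sketch of that workaround, using a hypothetical class name (the real
DGauge differs in its details):

{code:java}
import org.apache.flink.metrics.Gauge;

/** Hypothetical stand-in for DGauge that coerces Boolean gauge values to 0/1. */
public class BooleanSafeGauge {
    private final Gauge<?> gauge;

    public BooleanSafeGauge(Gauge<?> gauge) {
        this.gauge = gauge;
    }

    /** Maps false => 0 and true => 1; numeric values pass through unchanged. */
    public Number getMetricValue() {
        Object value = gauge.getValue();
        if (value instanceof Boolean) {
            return ((Boolean) value) ? 1 : 0;
        }
        return (Number) value;
    }
}
{code}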



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-25835) The task initialization duration is recorded in logs

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-25835:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor 
pull-request-available  (was: auto-deprioritized-major pull-request-available 
stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> The task initialization duration is recorded in logs
> 
>
> Key: FLINK-25835
> URL: https://issues.apache.org/jira/browse/FLINK-25835
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.12.2, 1.15.0
>Reporter: Bo Cui
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
>
> [https://github.com/apache/flink/blob/a543e658acfbc22c1579df0d043654037b9ec4b0/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java#L644]
> We are testing the time of state backend initialization for different data
> volumes. However, neither the task initialization time nor the time taken to
> restore state in the backend can be obtained from the log file.
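The requested logging amounts to something like this sketch around the restore
path (a self-contained illustration; the class name and the timed work are
hypothetical, not Flink's actual code):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class InitTimingSketch {
    private static final Logger LOG = LoggerFactory.getLogger(InitTimingSketch.class);

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread.sleep(100); // stand-in for state backend restore work
        LOG.info(
                "Initialized state backend and restored state in {} ms.",
                System.currentTimeMillis() - start);
    }
}
{code}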



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-11526) Support Chinese Website for Apache Flink

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-11526:
---
Labels: auto-deprioritized-major auto-unassigned pull-request-available 
stale-minor  (was: auto-deprioritized-major auto-unassigned 
pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither it nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> Support Chinese Website for Apache Flink
> 
>
> Key: FLINK-11526
> URL: https://issues.apache.org/jira/browse/FLINK-11526
> Project: Flink
>  Issue Type: New Feature
>  Components: chinese-translation, Project Website
>Reporter: Jark Wu
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available, stale-minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This issue is an umbrella issue for tracking fully support Chinese for Flink 
> website (flink.apache.org).
> A more detailed description can be found in the proposal doc: 
> https://docs.google.com/document/d/1R1-uDq-KawLB8afQYrczfcoQHjjIhq6tvUksxrfhBl0/edit#



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-18822) [umbrella] Improve and complete Change Data Capture formats

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18822:
---
Labels: auto-deprioritized-major auto-unassigned stale-minor  (was: 
auto-deprioritized-major auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Minor but is unassigned and neither it nor its Sub-Tasks have been updated 
for 180 days. I have gone ahead and marked it "stale-minor". If this ticket is 
still Minor, please either assign yourself or give an update. Afterwards, 
please remove the label or in 7 days the issue will be deprioritized.


> [umbrella] Improve and complete Change Data Capture formats
> ---
>
> Key: FLINK-18822
> URL: https://issues.apache.org/jira/browse/FLINK-18822
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Reporter: Jark Wu
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, stale-minor
>
> This is an umbrella issue to collect new features and improvements and bugs 
> for CDC formats. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-24581) compatible uname command with multiple OS on ARM platform

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-24581:
---
  Labels: auto-deprioritized-minor pull-request-available  (was: 
pull-request-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> compatible uname command with multiple OS on ARM platform
> -
>
> Key: FLINK-24581
> URL: https://issues.apache.org/jira/browse/FLINK-24581
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Reporter: zhao bo
>Priority: Not a Priority
>  Labels: auto-deprioritized-minor, pull-request-available
>
> Currently there are multiple ARM chip providers, and the associated OS might
> be Linux or macOS.
> The issue hits on macOS on the ARM platform: we get an "illegal option" error
> when running `uname -i`. On generic Linux on ARM, `uname -i` works fine.
>  
> For the reason above, we need an appropriate way to make both work. From
> trying Linux and macOS on ARM, both return the same value via `uname -m`.
>  
> So we propose a small fix that changes the affected part from `uname -i` to
> `uname -m`.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-25449) KafkaSourceITCase.testRedundantParallelism failed on azure

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-25449:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor test-stability  
(was: auto-deprioritized-major stale-minor test-stability)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> KafkaSourceITCase.testRedundantParallelism failed on azure
> --
>
> Key: FLINK-25449
> URL: https://issues.apache.org/jira/browse/FLINK-25449
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.5
>Reporter: Yun Gao
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> test-stability
>
> {code:java}
> Dec 25 00:51:07 Caused by: java.lang.RuntimeException: One or more fetchers 
> have encountered exception
> Dec 25 00:51:07   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:223)
> Dec 25 00:51:07   at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:154)
> Dec 25 00:51:07   at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:116)
> Dec 25 00:51:07   at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:305)
> Dec 25 00:51:07   at 
> org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:69)
> Dec 25 00:51:07   at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:66)
> Dec 25 00:51:07   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:423)
> Dec 25 00:51:07   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:204)
> Dec 25 00:51:07   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:684)
> Dec 25 00:51:07   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.executeInvoke(StreamTask.java:639)
> Dec 25 00:51:07   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runWithCleanUpOnFail(StreamTask.java:650)
> Dec 25 00:51:07   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:623)
> Dec 25 00:51:07   at 
> org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:779)
> Dec 25 00:51:07   at 
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:566)
> Dec 25 00:51:07   at java.lang.Thread.run(Thread.java:748)
> Dec 25 00:51:07 Caused by: java.lang.RuntimeException: SplitFetcher thread 0 
> received unexpected exception while polling the records
> Dec 25 00:51:07   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:148)
> Dec 25 00:51:07   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:103)
> Dec 25 00:51:07   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Dec 25 00:51:07   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Dec 25 00:51:07   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Dec 25 00:51:07   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Dec 25 00:51:07   ... 1 more
> Dec 25 00:51:07 Caused by: java.lang.IllegalStateException: Consumer is not 
> subscribed to any topics or assigned any partitions
> Dec 25 00:51:07   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1223)
> Dec 25 00:51:07   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1211)
> Dec 25 00:51:07   at 
> org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader.fetch(KafkaPartitionSplitReader.java:108)
> Dec 25 00:51:07   at 
> org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:56)
> Dec 25 00:51:07   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:140)
> Dec 25 00:51:07   ... 6 more
>  {code}
>  
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28589=logs=1fc6e7bf-633c-5081-c32a-9dea24b05730=80a658d1-f7f6-5d93-2758-53ac19fd5b19=6612



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-25306) Flink CLI end-to-end test timeout on azure

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-25306:
---
  Labels: auto-deprioritized-critical auto-deprioritized-major 
auto-deprioritized-minor test-stability  (was: auto-deprioritized-critical 
auto-deprioritized-major stale-minor test-stability)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Flink CLI end-to-end test timeout on azure
> --
>
> Key: FLINK-25306
> URL: https://issues.apache.org/jira/browse/FLINK-25306
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Client / Job Submission
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Not a Priority
>  Labels: auto-deprioritized-critical, auto-deprioritized-major, 
> auto-deprioritized-minor, test-stability
>
> {code:java}
> Dec 14 02:14:48 Waiting for Dispatcher REST endpoint to come up...
> Dec 14 02:16:59 Waiting for Dispatcher REST endpoint to come up...
> Dec 14 02:19:10 Waiting for Dispatcher REST endpoint to come up...
> Dec 14 02:21:21 Waiting for Dispatcher REST endpoint to come up...
> Dec 14 02:23:32 Waiting for Dispatcher REST endpoint to come up...
> Dec 14 02:25:43 Waiting for Dispatcher REST endpoint to come up...
> Dec 14 02:27:54 Waiting for Dispatcher REST endpoint to come up...
> Dec 14 02:30:05 Waiting for Dispatcher REST endpoint to come up...
> Dec 14 02:32:16 Waiting for Dispatcher REST endpoint to come up...
> Dec 14 02:34:27 Waiting for Dispatcher REST endpoint to come up...
> Dec 14 02:36:38 Waiting for Dispatcher REST endpoint to come up...
> Dec 14 02:38:49 Waiting for Dispatcher REST endpoint to come up...
> Dec 14 02:41:01 Waiting for Dispatcher REST endpoint to come up...
> Dec 14 02:43:12 Waiting for Dispatcher REST endpoint to come up...
> Dec 14 02:45:23 Waiting for Dispatcher REST endpoint to come up...
> Dec 14 02:47:34 Waiting for Dispatcher REST endpoint to come up...
> Dec 14 02:49:45 Waiting for Dispatcher REST endpoint to come up...
> Dec 14 02:49:46 Dispatcher REST endpoint has not started within a timeout of 
> 30 sec
> Dec 14 02:49:46 [FAIL] Test script contains errors.
> Dec 14 02:49:46 Checking for errors...
> Dec 14 02:49:46 No errors in log files.
> Dec 14 02:49:46 Checking for exceptions...
> Dec 14 02:49:46 No exceptions in log files.
> Dec 14 02:49:46 Checking for non-empty .out files...
> Dec 14 02:49:46 No non-empty .out files.
> Dec 14 02:49:46 
> Dec 14 02:49:46 [FAIL] 'Flink CLI end-to-end test' failed after 65 minutes 
> and 35 seconds! Test exited with exit code 1
> Dec 14 02:49:46 
> 02:49:46 ##[group]Environment Information
> Dec 14 02:49:46 Searching for .dump, .dumpstream and related files in 
> '/home/vsts/work/1/s'
> dmesg: read kernel buffer failed: Operation not permitted
> Dec 14 02:49:48 Stopping taskexecutor daemon (pid: 93858) on host 
> fv-az231-497.
> Dec 14 02:49:49 Stopping standalonesession daemon (pid: 93605) on host 
> fv-az231-497.
> The STDIO streams did not close within 10 seconds of the exit event from 
> process '/usr/bin/bash'. This may indicate a child process inherited the 
> STDIO streams and has not yet exited.
> ##[error]Bash exited with code '1'.
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28066=logs=2ffee335-fb12-54a6-1ba9-9610c8a56b81=ad628523-4b0b-5f7d-41f5-e8e2e6921343=108



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-25451) KafkaEnumeratorTest.testDiscoverPartitionsPeriodically failed on azure

2023-08-22 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-25451:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor test-stability  
(was: auto-deprioritized-major stale-minor test-stability)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> KafkaEnumeratorTest.testDiscoverPartitionsPeriodically failed on azure
> --
>
> Key: FLINK-25451
> URL: https://issues.apache.org/jira/browse/FLINK-25451
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> test-stability
>
> {code:java}
> Dec 25 04:38:34 [ERROR] Tests run: 10, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 58.393 s <<< FAILURE! - in 
> org.apache.flink.connector.kafka.source.enumerator.KafkaEnumeratorTest
> Dec 25 04:38:34 [ERROR] 
> org.apache.flink.connector.kafka.source.enumerator.KafkaEnumeratorTest.testDiscoverPartitionsPeriodically
>   Time elapsed: 30.01 s  <<< ERROR!
> Dec 25 04:38:34 org.junit.runners.model.TestTimedOutException: test timed out 
> after 3 milliseconds
> Dec 25 04:38:34   at java.lang.Object.wait(Native Method)
> Dec 25 04:38:34   at java.lang.Object.wait(Object.java:502)
> Dec 25 04:38:34   at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:92)
> Dec 25 04:38:34   at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
> Dec 25 04:38:34   at 
> org.apache.flink.connector.kafka.source.enumerator.KafkaEnumeratorTest.testDiscoverPartitionsPeriodically(KafkaEnumeratorTest.java:221)
> Dec 25 04:38:34   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Dec 25 04:38:34   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Dec 25 04:38:34   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Dec 25 04:38:34   at java.lang.reflect.Method.invoke(Method.java:498)
> Dec 25 04:38:34   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Dec 25 04:38:34   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Dec 25 04:38:34   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Dec 25 04:38:34   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Dec 25 04:38:34   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Dec 25 04:38:34   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> Dec 25 04:38:34   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Dec 25 04:38:34   at java.lang.Thread.run(Thread.java:748)
> Dec 25 04:38:34 
> {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28590=logs=b0097207-033c-5d9a-b48c-6d4796fbe60d=8338a7d2-16f7-52e5-f576-4b7b3071eb3d=6549



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32896) Incorrect `Map.computeIfAbsent(..., ...::new)` usage which misinterprets key as initial capacity

2023-08-22 Thread Marcono1234 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17757683#comment-17757683
 ] 

Marcono1234 commented on FLINK-32896:
-

{quote}
Marcono1234 do you have capacity to work on this one?
{quote}
The main problem is that I am not familiar with Flink, neither the source code 
nor the project itself. It would therefore be great if one of you could do 
this, because it will most likely be much easier and way more efficient than me 
trying to get familiar with the project and how to build it. Sorry if this is 
causing trouble for you.
I came across these issues while [writing a custom CodeQL 
query|https://github.com/Marcono1234/codeql-java-queries/blob/master/codeql-custom-queries-java/queries/likely-bugs/method-ref-creating-collection-with-capacity.ql]
 and running it against popular Java projects on GitHub to verify that it works 
correctly. The query results included those listed above for Flink, and I 
thought it might be useful for you if I notified you about these issues.

> Incorrect `Map.computeIfAbsent(..., ...::new)` usage which misinterprets key 
> as initial capacity
> 
>
> Key: FLINK-32896
> URL: https://issues.apache.org/jira/browse/FLINK-32896
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.18.0, 1.17.1
>Reporter: Marcono1234
>Priority: Minor
>  Labels: starter
>
> There are multiple cases in the code which look like this:
> {code}
> map.computeIfAbsent(..., ArrayList::new)
> {code}
> Not only does this create a new collection (here an {{ArrayList}}), but 
> {{computeIfAbsent}} also passes the map key as argument to the mapping 
> function, so instead of calling the no-args constructor such as {{new 
> ArrayList<>()}}, this actually calls the constructor with {{int}} initial 
> capacity parameter, such as {{new ArrayList<>(initialCapacity)}}.
> For some of the places where this occurs in the Flink code I am not sure
> whether it is intended, but in some cases it does not seem to be. Here are
> the cases I found (a self-contained sketch of the pitfall follows the list):
> - 
> [{{HsFileDataIndexImpl:163}}|https://github.com/apache/flink/blob/9546f8243a24e7b45582b6de6702f819f1d73f97/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/HsFileDataIndexImpl.java#L163C70-L163C84]
> - 
> [{{HsSpillingStrategy:128}}|https://github.com/apache/flink/blob/9546f8243a24e7b45582b6de6702f819f1d73f97/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/HsSpillingStrategy.java#L128C68-L128C82]
> - 
> [{{HsSpillingStrategy:134}}|https://github.com/apache/flink/blob/9546f8243a24e7b45582b6de6702f819f1d73f97/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/HsSpillingStrategy.java#L134C63-L134C77]
> - 
> [{{HsSpillingStrategy:140}}|https://github.com/apache/flink/blob/9546f8243a24e7b45582b6de6702f819f1d73f97/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/HsSpillingStrategy.java#L140C63-L140C77]
> - 
> [{{HsSpillingStrategy:145}}|https://github.com/apache/flink/blob/9546f8243a24e7b45582b6de6702f819f1d73f97/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/HsSpillingStrategy.java#L145C70-L145C84]
> - 
> [{{HsSpillingStrategy:151}}|https://github.com/apache/flink/blob/9546f8243a24e7b45582b6de6702f819f1d73f97/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/HsSpillingStrategy.java#L151C65-L151C79]
> - 
> [{{HsSpillingStrategy:157}}|https://github.com/apache/flink/blob/9546f8243a24e7b45582b6de6702f819f1d73f97/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/HsSpillingStrategy.java#L157C65-L157C79]
> - 
> [{{HsSpillingStrategyUtils:76}}|https://github.com/apache/flink/blob/9546f8243a24e7b45582b6de6702f819f1d73f97/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/HsSpillingStrategyUtils.java#L76C74-L76C88]
> - 
> [{{ProducerMergedPartitionFileIndex:171}}|https://github.com/apache/flink/blob/9546f8243a24e7b45582b6de6702f819f1d73f97/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/hybrid/tiered/file/ProducerMergedPartitionFileIndex.java#L171C75-L171C89]
> - 
> [{{TestingSpillingInfoProvider:201}}|https://github.com/apache/flink/blob/9546f8243a24e7b45582b6de6702f819f1d73f97/flink-runtime/src/test/java/org/apache/flink/runtime/io/network/partition/hybrid/TestingSpillingInfoProvider.java#L201C56-L201C70]
> - 
> [{{TestingSpillingInfoProvider:208}}|https://github.com/apache/flink/blob/9546f8243a24e7b45582b6de6702f819f1d73f97/flink-runtime/src/test/java/org/apache/flink/runtime/io/network/partition/hybrid/TestingSpillingInfoProvider.java#L208C54-L208C66]
> - 
> 
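The sketch referenced above, showing the pitfall and the fix (the map contents
and keys are hypothetical):

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ComputeIfAbsentPitfall {
    public static void main(String[] args) {
        Map<Integer, List<String>> map = new HashMap<>();

        // BUG: computeIfAbsent passes the key to the mapping function, so
        // ArrayList::new resolves to ArrayList(int initialCapacity) and the
        // key 1_000_000 silently pre-allocates a million-slot backing array.
        map.computeIfAbsent(1_000_000, ArrayList::new).add("a");

        // Fix: ignore the key and call the no-args constructor explicitly.
        map.computeIfAbsent(2_000_000, k -> new ArrayList<>()).add("b");
    }
}
{code}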

[jira] [Closed] (FLINK-32890) Flink app rolled back with old savepoints (3 hours back in time) while some checkpoints have been taken in between

2023-08-22 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-32890.
--
Fix Version/s: kubernetes-operator-1.7.0
   Resolution: Fixed

merged to main 301b179c1e38904124fbfa22cacac1ee46602dce

> Flink app rolled back with old savepoints (3 hours back in time) while some 
> checkpoints have been taken in between
> --
>
> Key: FLINK-32890
> URL: https://issues.apache.org/jira/browse/FLINK-32890
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Reporter: Nicolas Fraison
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.7.0
>
>
> Here are all details about the issue:
>  * Deployed new release of a flink app with a new operator
>  * Flink Operator set the app as stable
>  * After some time the app failed and stayed in a failed state (due to some
> issue with our kafka clusters)
>  * Finally we decided to roll back the new release just in case it was the
> root cause of the issue on kafka
>  * Operator detect: {{Job is not running but HA metadata is available for 
> last state restore, ready for upgrade, Deleting JobManager deployment while 
> preserving HA metadata.}}  -> rely on last-state (as we do not disable 
> fallback), no savepoint taken
>  * Flink starts the JM and the deployment of the app. It finds the 3 checkpoints
>  * {{Using '/flink-kafka-job-apache-nico/flink-kafka-job-apache-nico' as 
> Zookeeper namespace.}}
>  * {{Initializing job 'flink-kafka-job' (6b24a364c1905e924a69f3dbff0d26a6).}}
>  * {{Recovering checkpoints from 
> ZooKeeperStateHandleStore\{namespace='flink-kafka-job-apache-nico/flink-kafka-job-apache-nico/jobs/6b24a364c1905e924a69f3dbff0d26a6/checkpoints'}.}}
>  * {{Found 3 checkpoints in 
> ZooKeeperStateHandleStore\{namespace='flink-kafka-job-apache-nico/flink-kafka-job-apache-nico/jobs/6b24a364c1905e924a69f3dbff0d26a6/checkpoints'}.}}
>  * {{{}Restoring job 6b24a364c1905e924a69f3dbff0d26a6 from Checkpoint 19 @ 
> 1692268003920 for 6b24a364c1905e924a69f3dbff0d26a6 located at 
> }}\{{{}s3p://.../flink-kafka-job-apache-nico/checkpoints/6b24a364c1905e924a69f3dbff0d26a6/chk-19{}}}{{{}.{}}}
>  * Job failed because of the missing operator
> {code:java}
> Job 6b24a364c1905e924a69f3dbff0d26a6 reached terminal state FAILED.
> org.apache.flink.runtime.client.JobInitializationException: Could not start 
> the JobMaster.
> Caused by: java.util.concurrent.CompletionException: 
> java.lang.IllegalStateException: There is no operator for the state 
> f298e8715b4d85e6f965b60e1c848cbe * Job 6b24a364c1905e924a69f3dbff0d26a6 has 
> been registered for cleanup in the JobResultStore after reaching a terminal 
> state.{code}
>  * {{Clean up the high availability data for job 
> 6b24a364c1905e924a69f3dbff0d26a6.}}
>  * {{Removed job graph 6b24a364c1905e924a69f3dbff0d26a6 from 
> ZooKeeperStateHandleStore\{namespace='flink-kafka-job-apache-nico/flink-kafka-job-apache-nico/jobgraphs'}.}}
>  * The JobManager restarted and tried to resubmit the job, but the job had
> already been submitted and so it finished
>  * {{Received JobGraph submission 'flink-kafka-job' 
> (6b24a364c1905e924a69f3dbff0d26a6).}}
>  * {{Ignoring JobGraph submission 'flink-kafka-job' 
> (6b24a364c1905e924a69f3dbff0d26a6) because the job already reached a 
> globally-terminal state (i.e. FAILED, CANCELED, FINISHED) in a previous 
> execution.}}
>  * {{Application completed SUCCESSFULLY}}
>  * Finally the operator rolled back the deployment and still indicated that
> {{Job is not running but HA metadata is available for last state restore,
> ready for upgrade}}
>  * But the job metadata was no longer there (cleaned previously)
>  
> {code:java}
> (CONNECTED [zookeeper-data-eng-multi-cloud.zookeeper-flink.svc:2181]) /> ls 
> /flink-kafka-job-apache-nico/flink-kafka-job-apache-nico/jobs/6b24a364c1905e924a69f3dbff0d26a6/checkpoints
> Path 
> /flink-kafka-job-apache-nico/flink-kafka-job-apache-nico/jobs/6b24a364c1905e924a69f3dbff0d26a6/checkpoints
>  doesn't exist
> (CONNECTED [zookeeper-data-eng-multi-cloud.zookeeper-flink.svc:2181]) /> ls 
> /flink-kafka-job-apache-nico/flink-kafka-job-apache-nico/jobs
> (CONNECTED [zookeeper-data-eng-multi-cloud.zookeeper-flink.svc:2181]) /> ls 
> /flink-kafka-job-apache-nico/flink-kafka-job-apache-nico
> jobgraphs
> jobs
> leader
> {code}
>  
> The rolled-back app from the Flink operator finally takes the last provided
> savepoint, as no metadata/checkpoints are available. But this last savepoint
> is an old one, since during the upgrade the operator decided to rely on
> last-state (the old savepoint taken was a scheduled one).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-kubernetes-operator] gyfora merged pull request #654: [FLINK-32890] Correct HA patch check for zookeeper metadata store

2023-08-22 Thread via GitHub


gyfora merged PR #654:
URL: https://github.com/apache/flink-kubernetes-operator/pull/654


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-32906) Release Testing: Verify FLINK-30025 Unified the max display column width for SqlClient and Table APi in both Streaming and Batch execMode

2023-08-22 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-32906:

Description: 
more info could be found at 

FLIP-279 
[https://cwiki.apache.org/confluence/display/FLINK/FLIP-279+Unified+the+max+display+column+width+for+SqlClient+and+Table+APi+in+both+Streaming+and+Batch+execMode]

 

Tests:

Both configs could be set with new value and following behaviours are expected 
: 
| | |sql-client.display.max-column-width, default value is 
30|table.display.max-column-width, default value is 30|
|sqlclient|Streaming|text longer than the value will be truncated and replaced 
with “...”|Text longer than the value will be truncated and replaced with “...”|
|sqlclient|Batch|text longer than the value will be truncated and replaced with 
“...”|Text longer than the value will be truncated and replaced with “...”|
|Table API|Streaming|No effect. 
table.display.max-column-width with the default value 30 will be used|Text 
longer than the value will be truncated and replaced with “...”|
|Table API|Batch|No effect. 
table.display.max-column-width with the default value 30 will be used|Text 
longer than the value will be truncated and replaced with “...”|

 

Please pay attention that this task offers a backward compatible solution and 
deprecated sql-client.display.max-column-width, which means once 
sql-client.display.max-column-width is used, it falls into the old scenario 
where table.display.max-column-width didn't exist, any changes of 
table.display.max-column-width won't take effect. You should test it either by 
only using table.display.max-column-width or by using 
sql-client.display.max-column-width, but not both of them back and forth.

 

  was:
more info could be found at 

FLIP-279 
[https://cwiki.apache.org/confluence/display/FLINK/FLIP-279+Unified+the+max+display+column+width+for+SqlClient+and+Table+APi+in+both+Streaming+and+Batch+execMode]

 

Tests:

Both configs could be set with new value and following behaviours are expected 
: 
| | |sql-client.display.max-column-width, default value is 
30|table.display.max-column-width, default value is 30|
|sqlclient|Streaming|text longer than the value will be truncated and replaced 
with “...”|Text longer than the value will be truncated and replaced with “...”|
|sqlclient|Batch|text longer than the value will be truncated and replaced with 
“...”|Text longer than the value will be truncated and replaced with “...”|
|Table API|Streaming|No effect. 
table.display.max-column-width with the default value 30 will be used|Text 
longer than the value will be truncated and replaced with “...”|
|Table API|Batch|No effect. 
table.display.max-column-width with the default value 30 will be used|Text 
longer than the value will be truncated and replaced with “...”|

 

 


> Release Testing: Verify FLINK-30025 Unified the max display column width for 
> SqlClient and Table APi in both Streaming and Batch execMode
> -
>
> Key: FLINK-32906
> URL: https://issues.apache.org/jira/browse/FLINK-32906
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Jing Ge
>Assignee: Yubin Li
>Priority: Major
> Fix For: 1.18.0
>
> Attachments: image-2023-08-22-18-54-45-122.png, 
> image-2023-08-22-18-54-57-699.png, image-2023-08-22-18-58-11-241.png, 
> image-2023-08-22-18-58-20-509.png, image-2023-08-22-19-04-49-344.png, 
> image-2023-08-22-19-06-49-335.png
>
>
> more info could be found at 
> FLIP-279 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-279+Unified+the+max+display+column+width+for+SqlClient+and+Table+APi+in+both+Streaming+and+Batch+execMode]
>  
> Tests:
> Both configs could be set with new value and following behaviours are 
> expected : 
> | | |sql-client.display.max-column-width, default value is 
> 30|table.display.max-column-width, default value is 30|
> |sqlclient|Streaming|text longer than the value will be truncated and 
> replaced with “...”|Text longer than the value will be truncated and replaced 
> with “...”|
> |sqlclient|Batch|text longer than the value will be truncated and replaced 
> with “...”|Text longer than the value will be truncated and replaced with 
> “...”|
> |Table API|Streaming|No effect. 
> table.display.max-column-width with the default value 30 will be used|Text 
> longer than the value will be truncated and replaced with “...”|
> |Table API|Batch|No effect. 
> table.display.max-column-width with the default value 30 will be used|Text 
> longer than the value will be truncated and replaced with “...”|
>  
> Please pay attention that this task offers a backward compatible solution and 
> deprecated sql-client.display.max-column-width, which means once 

[jira] [Commented] (FLINK-32906) Release Testing: Verify FLINK-30025 Unified the max display column width for SqlClient and Table APi in both Streaming and Batch execMode

2023-08-22 Thread Jing Ge (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17757617#comment-17757617
 ] 

Jing Ge commented on FLINK-32906:
-

[~liyubin117] Thanks for testing.

"but use specified sql-client.display.max-column-width, the configuration also 
take effects, it seems conflict with expected `No effect`" - the description 
was wrong. "No effect" was a bug and has been fixed alongside the PR. I have 
updated the description. Please check it again.

 

"use specified table.display.max-column-width,  the configuration take no 
effect, it conflicts with 'Text longer than the value will be truncated and 
replaced with “...”'." - this task offers a backward compatible solution and 
deprecated sql-client.display.max-column-width, which means once 
sql-client.display.max-column-width is used, it falls into the old scenario 
where table.display.max-column-width didn't exist, any changes of 
table.display.max-column-width won't take effect. You should test it either by 
only using table.display.max-column-width or by using 
sql-client.display.max-column-width, but not both of them back and forth.
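For example, the Table API column of the matrix above can be exercised with a
short sketch (assuming a 1.18 TableEnvironment; the query and width value are
arbitrary):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class MaxColumnWidthDemo {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inBatchMode());

        // Only table.display.max-column-width affects Table API printing; the
        // deprecated sql-client.display.max-column-width is ignored here.
        tEnv.getConfig().set("table.display.max-column-width", "10");

        // Values longer than 10 characters are truncated and suffixed with "...".
        tEnv.sqlQuery("SELECT 'abcdefghijklmnop' AS long_text").execute().print();
    }
}
{code}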

 

 

 

> Release Testing: Verify FLINK-30025 Unified the max display column width for 
> SqlClient and Table APi in both Streaming and Batch execMode
> -
>
> Key: FLINK-32906
> URL: https://issues.apache.org/jira/browse/FLINK-32906
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Jing Ge
>Assignee: Yubin Li
>Priority: Major
> Fix For: 1.18.0
>
> Attachments: image-2023-08-22-18-54-45-122.png, 
> image-2023-08-22-18-54-57-699.png, image-2023-08-22-18-58-11-241.png, 
> image-2023-08-22-18-58-20-509.png, image-2023-08-22-19-04-49-344.png, 
> image-2023-08-22-19-06-49-335.png
>
>
> more info could be found at 
> FLIP-279 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-279+Unified+the+max+display+column+width+for+SqlClient+and+Table+APi+in+both+Streaming+and+Batch+execMode]
>  
> Tests:
> Both configs could be set with new value and following behaviours are 
> expected : 
> | | |sql-client.display.max-column-width, default value is 
> 30|table.display.max-column-width, default value is 30|
> |sqlclient|Streaming|text longer than the value will be truncated and 
> replaced with “...”|Text longer than the value will be truncated and replaced 
> with “...”|
> |sqlclient|Batch|text longer than the value will be truncated and replaced 
> with “...”|Text longer than the value will be truncated and replaced with 
> “...”|
> |Table API|Streaming|No effect. 
> table.display.max-column-width with the default value 30 will be used|Text 
> longer than the value will be truncated and replaced with “...”|
> |Table API|Batch|No effect. 
> table.display.max-column-width with the default value 30 will be used|Text 
> longer than the value will be truncated and replaced with “...”|
>  
> Please pay attention that this task offers a backward compatible solution and 
> deprecated sql-client.display.max-column-width, which means once 
> sql-client.display.max-column-width is used, it falls into the old scenario 
> where table.display.max-column-width didn't exist, any changes of 
> table.display.max-column-width won't take effect. You should test it either 
> by only using table.display.max-column-width or by using 
> sql-client.display.max-column-width, but not both of them back and forth.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32906) Release Testing: Verify FLINK-30025 Unified the max display column width for SqlClient and Table APi in both Streaming and Batch execMode

2023-08-22 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-32906:

Description: 
more info could be found at 

FLIP-279 
[https://cwiki.apache.org/confluence/display/FLINK/FLIP-279+Unified+the+max+display+column+width+for+SqlClient+and+Table+APi+in+both+Streaming+and+Batch+execMode]

 

Tests:

Both configs could be set with new value and following behaviours are expected 
: 
| | |sql-client.display.max-column-width, default value is 
30|table.display.max-column-width, default value is 30|
|sqlclient|Streaming|text longer than the value will be truncated and replaced 
with “...”|Text longer than the value will be truncated and replaced with “...”|
|sqlclient|Batch|text longer than the value will be truncated and replaced with 
“...”|Text longer than the value will be truncated and replaced with “...”|
|Table API|Streaming|No effect. 
table.display.max-column-width with the default value 30 will be used|Text 
longer than the value will be truncated and replaced with “...”|
|Table API|Batch|No effect. 
table.display.max-column-width with the default value 30 will be used|Text 
longer than the value will be truncated and replaced with “...”|

 

 

  was:
more info could be found at 

FLIP-279 
[https://cwiki.apache.org/confluence/display/FLINK/FLIP-279+Unified+the+max+display+column+width+for+SqlClient+and+Table+APi+in+both+Streaming+and+Batch+execMode]

 

Tests:

Both configs could be set with new value and following behaviours are expected 
: 
| | |sql-client.display.max-column-width, default value is 
30|table.display.max-column-width, default value is 30|
|sqlclient|Streaming|Changes will be forwarded to 
table.display.max-column-width and text longer than the value will be truncated 
and replaced with “...”|Text longer than the value will be truncated and 
replaced with “...”|
|sqlclient|Batch|No effect. 
table.display.max-column-width with the default value 30 will be used|Text 
longer than the value will be truncated and replaced with “...”|
|Table API|Streaming|No effect. 
table.display.max-column-width with the default value 30 will be used|Text 
longer than the value will be truncated and replaced with “...”|
|Table API|Batch|No effect. 
table.display.max-column-width with the default value 30 will be used|Text 
longer than the value will be truncated and replaced with “...”|

 

 


> Release Testing: Verify FLINK-30025 Unified the max display column width for 
> SqlClient and Table APi in both Streaming and Batch execMode
> -
>
> Key: FLINK-32906
> URL: https://issues.apache.org/jira/browse/FLINK-32906
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Jing Ge
>Assignee: Yubin Li
>Priority: Major
> Fix For: 1.18.0
>
> Attachments: image-2023-08-22-18-54-45-122.png, 
> image-2023-08-22-18-54-57-699.png, image-2023-08-22-18-58-11-241.png, 
> image-2023-08-22-18-58-20-509.png, image-2023-08-22-19-04-49-344.png, 
> image-2023-08-22-19-06-49-335.png
>
>
> more info could be found at 
> FLIP-279 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-279+Unified+the+max+display+column+width+for+SqlClient+and+Table+APi+in+both+Streaming+and+Batch+execMode]
>  
> Tests:
> Both configs could be set with new value and following behaviours are 
> expected : 
> | | |sql-client.display.max-column-width, default value is 
> 30|table.display.max-column-width, default value is 30|
> |sqlclient|Streaming|text longer than the value will be truncated and 
> replaced with “...”|Text longer than the value will be truncated and replaced 
> with “...”|
> |sqlclient|Batch|text longer than the value will be truncated and replaced 
> with “...”|Text longer than the value will be truncated and replaced with 
> “...”|
> |Table API|Streaming|No effect. 
> table.display.max-column-width with the default value 30 will be used|Text 
> longer than the value will be truncated and replaced with “...”|
> |Table API|Batch|No effect. 
> table.display.max-column-width with the default value 30 will be used|Text 
> longer than the value will be truncated and replaced with “...”|
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32939) pyflink 1.17.0 has missing transitive dependency for pyopenssl

2023-08-22 Thread Nathanael England (Jira)
Nathanael England created FLINK-32939:
-

 Summary: pyflink 1.17.0 has missing transitive dependency for 
pyopenssl
 Key: FLINK-32939
 URL: https://issues.apache.org/jira/browse/FLINK-32939
 Project: Flink
  Issue Type: Bug
 Environment: Ubuntu 20.04
Flink 1.17.0
Reporter: Nathanael England


When running a pyflink job recently, we got an error about not being able to 
import something from pyopenssl correctly. Here's the traceback.
{code:bash}
E   Caused by: java.lang.RuntimeException: Failed to create stage bundle factory! Traceback (most recent call last):
E File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
E   return _run_code(code, main_globals, None,
E File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
E   exec(code, run_globals)
E File "/home/buildbot/.cache/pants/named_caches/pex_root/venvs/s/361094c4/venv/lib/python3.8/site-packages/pyflink/fn_execution/beam/beam_boot.py", line 36, in <module>
E   from apache_beam.portability.api.org.apache.beam.model.fn_execution.v1.beam_fn_api_pb2 import \
E File "/home/buildbot/.cache/pants/named_caches/pex_root/venvs/s/361094c4/venv/lib/python3.8/site-packages/apache_beam/__init__.py", line 93, in <module>
E   from apache_beam import io
E File "/home/buildbot/.cache/pants/named_caches/pex_root/venvs/s/361094c4/venv/lib/python3.8/site-packages/apache_beam/io/__init__.py", line 27, in <module>
E   from apache_beam.io.mongodbio import *
E File "/home/buildbot/.cache/pants/named_caches/pex_root/venvs/s/361094c4/venv/lib/python3.8/site-packages/apache_beam/io/mongodbio.py", line 93, in <module>
E   from bson import json_util
E File "/home/buildbot/.cache/pants/named_caches/pex_root/venvs/s/361094c4/venv/lib/python3.8/site-packages/bson/json_util.py", line 130, in <module>
E   from pymongo.errors import ConfigurationError
E File "/home/buildbot/.cache/pants/named_caches/pex_root/venvs/s/361094c4/venv/lib/python3.8/site-packages/pymongo/__init__.py", line 114, in <module>
E   from pymongo.collection import ReturnDocument
E File "/home/buildbot/.cache/pants/named_caches/pex_root/venvs/s/361094c4/venv/lib/python3.8/site-packages/pymongo/collection.py", line 26, in <module>
E   from pymongo import common, helpers, message
E File "/home/buildbot/.cache/pants/named_caches/pex_root/venvs/s/361094c4/venv/lib/python3.8/site-packages/pymongo/common.py", line 38, in <module>
E   from pymongo.ssl_support import validate_allow_invalid_certs, validate_cert_reqs
E File "/home/buildbot/.cache/pants/named_caches/pex_root/venvs/s/361094c4/venv/lib/python3.8/site-packages/pymongo/ssl_support.py", line 27, in <module>
E   import pymongo.pyopenssl_context as _ssl
E File "/home/buildbot/.cache/pants/named_caches/pex_root/venvs/s/361094c4/venv/lib/python3.8/site-packages/pymongo/pyopenssl_context.py", line 27, in <module>
E   from OpenSSL import SSL as _SSL
E File "/usr/local/lib/python3.8/dist-packages/OpenSSL/__init__.py", line 8, in <module>
E   from OpenSSL import crypto, SSL
E File "/usr/local/lib/python3.8/dist-packages/OpenSSL/crypto.py", line 1556, in <module>
E   class X509StoreFlags(object):
E File "/usr/local/lib/python3.8/dist-packages/OpenSSL/crypto.py", line 1577, in X509StoreFlags
E   CB_ISSUER_CHECK = _lib.X509_V_FLAG_CB_ISSUER_CHECK
E   AttributeError: module 'lib' has no attribute 'X509_V_FLAG_CB_ISSUER_CHECK'
{code}
From this traceback, it seems that apache-flink depends on apache-beam, which depends on pymongo, which in turn wants to use pyopenssl. To pull pyopenssl in through pymongo, users need to declare `pymongo[ocsp]` as the dependency instead of just `pymongo`. apache-beam, however, appears to declare plain `pymongo` and then rely on Python-path manipulation to find whatever OpenSSL installation happens to be on the system path. The tool we are using (Pantsbuild) modifies the Python path at startup, so it should not have been possible to find that installation.

I believe this is an Apache Beam problem, but Jira will not let me create an issue there. Since this affects all Flink Python users, it seems appropriate to track it here as well: whatever fix lands in Beam should be worked downstream into Flink.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-32938) flink-connector-pulsar should remove all `PulsarAdmin` calls

2023-08-22 Thread Neng Lu (Jira)
Neng Lu created FLINK-32938:
---

 Summary: flink-connector-pulsar should remove all `PulsarAdmin` 
calls
 Key: FLINK-32938
 URL: https://issues.apache.org/jira/browse/FLINK-32938
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Pulsar
Reporter: Neng Lu


The flink-connector-pulsar should not access or interact with the admin endpoint, since doing so could introduce security issues.

In a production environment, a Pulsar cluster admin will typically not grant a Flink application permission to perform any admin operations.

Currently, the connector makes various admin calls:

{code:java}
PulsarAdmin.topics().getPartitionedTopicMetadata(topic)
PulsarAdmin.namespaces().getTopics(namespace)
PulsarAdmin.topics().getLastMessageId(topic)
PulsarAdmin.topics().getMessageIdByTimestamp(topic, timestamp)
PulsarAdmin.topics().getSubscriptions(topic)
PulsarAdmin.topics().createSubscription(topic, subscription, MessageId.earliest)
PulsarAdmin.topics().resetCursor(topic, subscription, initial, !include)
{code}

We need to replace these calls with consumer or client calls.
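To make that direction concrete, below is a minimal sketch of possible client-side replacements, assuming only `org.apache.pulsar.client.api` (no admin client). The service URL, topic, and subscription names are placeholders. Note that namespace-level topic discovery (`getTopics(namespace)`) has no direct client-API equivalent and would need a separate solution.

{code:java}
import java.util.List;

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.MessageId;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;
import org.apache.pulsar.client.api.SubscriptionInitialPosition;

public class ClientOnlyAccessSketch {

    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650") // placeholder
                .build();

        String topic = "persistent://public/default/my-topic"; // placeholder

        // Instead of PulsarAdmin.topics().getPartitionedTopicMetadata(topic):
        List<String> partitions = client.getPartitionsForTopic(topic).get();
        System.out.println("partitions: " + partitions);

        // Instead of PulsarAdmin.topics().createSubscription(..., MessageId.earliest):
        // subscribing with an initial position creates the subscription if it is absent.
        Consumer<byte[]> consumer = client.newConsumer(Schema.BYTES)
                .topic(topic)
                .subscriptionName("my-subscription") // placeholder
                .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
                .subscribe();

        // Instead of PulsarAdmin.topics().getLastMessageId(topic):
        MessageId lastId = consumer.getLastMessageId();

        // Instead of resetCursor(...) / getMessageIdByTimestamp(...):
        consumer.seek(lastId);                     // seek to a concrete message id
        consumer.seek(System.currentTimeMillis()); // or seek to a timestamp

        consumer.close();
        client.close();
    }
}
{code}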



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] flinkbot commented on pull request #23261: [hotfix][docs] Update the parameter types of StreamGraph#addOperator in javadocs

2023-08-22 Thread via GitHub


flinkbot commented on PR #23261:
URL: https://github.com/apache/flink/pull/23261#issuecomment-1688551797

   
## CI report:

* 3a3c28d0b440eaf88f2be7080edfacc1fbe3cd41 UNKNOWN

Bot commands
The @flinkbot bot supports the following commands:

- `@flinkbot run azure` re-run the last Azure build


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


