Re: [PR] [FLINK-34648] add waitChangeResultRequest and WaitChangeResultResponse to avoid RPC timeout. [flink-cdc]

2024-03-13 Thread via GitHub


ruanhang1993 commented on code in PR #3128:
URL: https://github.com/apache/flink-cdc/pull/3128#discussion_r1524276055


##
flink-cdc-runtime/src/main/java/org/apache/flink/cdc/runtime/operators/schema/coordinator/SchemaRegistryRequestHandler.java:
##
@@ -70,6 +81,8 @@ public SchemaRegistryRequestHandler(
 this.flushedSinkWriters = new HashSet<>();
 this.pendingSchemaChanges = new LinkedList<>();
 this.schemaManager = schemaManager;
+schemaChangeThreadPool = Executors.newSingleThreadExecutor();
+isSchemaChangeApplying = true;

Review Comment:
   isSchemaChangeApplying = false;
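
   A minimal sketch of the corrected initialization (field and method names mirror the diff; the surrounding class is hypothetical). The flag must start as `false`, since no schema change is in flight when the handler is constructed, and should only be raised while the single-threaded executor applies a change:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class SchemaChangeFlagSketch {
    private final ExecutorService schemaChangeThreadPool;
    private volatile boolean isSchemaChangeApplying;

    SchemaChangeFlagSketch() {
        this.schemaChangeThreadPool = Executors.newSingleThreadExecutor();
        this.isSchemaChangeApplying = false; // not true: nothing is applying yet
    }

    void applySchemaChange(Runnable applyTask) {
        isSchemaChangeApplying = true;
        schemaChangeThreadPool.execute(
                () -> {
                    try {
                        applyTask.run();
                    } finally {
                        isSchemaChangeApplying = false;
                    }
                });
    }
}
```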



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-34664) Add .asf.yaml for Flink CDC

2024-03-13 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu resolved FLINK-34664.

Resolution: Resolved

Resolved in flink-cdc(master) via: 88c23b59a016d7b023992d2a8fb1865f46fed00d

> Add .asf.yaml for Flink CDC
> ---
>
> Key: FLINK-34664
> URL: https://issues.apache.org/jira/browse/FLINK-34664
> Project: Flink
>  Issue Type: Sub-task
>  Components: Flink CDC
>Reporter: Qingsheng Ren
>Assignee: Qingsheng Ren
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.1.0
>
>
> We need to add a .asf.yaml file to the Flink CDC repo to get auto-links to 
> Apache Jira and update the project description.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34664) Add .asf.yaml for Flink CDC

2024-03-13 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-34664:
---
Fix Version/s: cdc-3.1.0

> Add .asf.yaml for Flink CDC
> ---
>
> Key: FLINK-34664
> URL: https://issues.apache.org/jira/browse/FLINK-34664
> Project: Flink
>  Issue Type: Sub-task
>  Components: Flink CDC
>Reporter: Qingsheng Ren
>Assignee: Qingsheng Ren
>Priority: Major
>  Labels: pull-request-available
> Fix For: cdc-3.1.0
>
>
> We need to add a .asf.yaml file to the Flink CDC repo to get auto-links to 
> Apache Jira and update the project description.





[jira] [Resolved] (FLINK-32076) Add file pool for concurrent file reusing

2024-03-13 Thread Hangxiang Yu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hangxiang Yu resolved FLINK-32076.
--
  Assignee: Hangxiang Yu  (was: Hangxiang Yu)
Resolution: Fixed

merged 583722e7 and 3b9623e5 into master

> Add file pool for concurrent file reusing
> -
>
> Key: FLINK-32076
> URL: https://issues.apache.org/jira/browse/FLINK-32076
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Affects Versions: 1.18.0
>Reporter: Zakelly Lan
>Assignee: Hangxiang Yu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>






Re: [PR] [FLINK-32076][checkpoint] Introduce file pool to reuse files [flink]

2024-03-13 Thread via GitHub


masteryhx closed pull request #24418: [FLINK-32076][checkpoint] Introduce file 
pool to reuse files
URL: https://github.com/apache/flink/pull/24418





Re: [PR] [FLINK-25544][streaming][JUnit5 Migration] The runtime package of module flink-stream-java [flink]

2024-03-13 Thread via GitHub


Jiabao-Sun commented on code in PR #24483:
URL: https://github.com/apache/flink/pull/24483#discussion_r1524267664


##
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/tasks/OneInputStreamTaskTest.java:
##
@@ -895,39 +890,38 @@ public TaskMetricGroup getMetricGroup() {
 (Gauge)
 
chainedOperatorMetricGroup.get(MetricNames.IO_CURRENT_OUTPUT_WATERMARK);
 
-Assert.assertEquals(
-"A metric was registered multiple times.",
-5,
-new HashSet<>(
+assertThat(

Review Comment:
   > the only reason for collecting the retrieved Gauge instances to a Set is 
to validate that none of them are null
   
   Here, we are not checking for null; rather, we are examining whether there 
are duplicates among the five `Gauge`s to ensure that no metric has been 
registered multiple times.
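
   A minimal, self-contained sketch of the duplicate check under discussion (the gauge variables are illustrative stand-ins for the test's five watermark gauges): collecting the retrieved instances into a Set and comparing its size against 5 fails exactly when some metric was registered twice. Here the simulated duplicate makes the check fire:

```java
import java.util.Arrays;
import java.util.HashSet;

class DuplicateRegistrationSketch {
    public static void main(String[] args) {
        Object inputWatermark = new Object();
        Object headWatermark = new Object();
        Object chainedWatermark = new Object();
        Object outputWatermark = new Object();
        Object duplicate = outputWatermark; // simulates a double registration

        // A Set drops duplicate instances, so its size exposes the problem.
        int distinct = new HashSet<>(Arrays.asList(
                inputWatermark, headWatermark, chainedWatermark,
                outputWatermark, duplicate)).size();

        if (distinct != 5) {
            throw new AssertionError("A metric was registered multiple times.");
        }
    }
}
```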






Re: [PR] [FLINK-25544][streaming][JUnit5 Migration] The runtime package of module flink-stream-java [flink]

2024-03-13 Thread via GitHub


Jiabao-Sun commented on code in PR #24483:
URL: https://github.com/apache/flink/pull/24483#discussion_r1524262573


##
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/TumblingProcessingTimeWindowsTest.java:
##
@@ -151,53 +132,44 @@ public void testTimeUnits() {
 TumblingProcessingTimeWindows.of(Time.seconds(5), 
Time.seconds(1));
 
 when(mockContext.getCurrentProcessingTime()).thenReturn(1000L);
-assertThat(
-assigner.assignWindows("String", Long.MIN_VALUE, mockContext),
-contains(timeWindow(1000, 6000)));
+assertThat(assigner.assignWindows("String", Long.MIN_VALUE, 
mockContext))
+.contains(new TimeWindow(1000, 6000));
 
 when(mockContext.getCurrentProcessingTime()).thenReturn(5999L);
-assertThat(
-assigner.assignWindows("String", Long.MIN_VALUE, mockContext),
-contains(timeWindow(1000, 6000)));
+assertThat(assigner.assignWindows("String", Long.MIN_VALUE, 
mockContext))
+.contains(new TimeWindow(1000, 6000));
 
 when(mockContext.getCurrentProcessingTime()).thenReturn(6000L);
-assertThat(
-assigner.assignWindows("String", Long.MIN_VALUE, mockContext),
-contains(timeWindow(6000, 11000)));
+assertThat(assigner.assignWindows("String", Long.MIN_VALUE, 
mockContext))
+.contains(new TimeWindow(6000, 11000));
 }
 
 @Test
-public void testInvalidParameters() {
-try {
-TumblingProcessingTimeWindows.of(Time.seconds(-1));
-fail("should fail");
-} catch (IllegalArgumentException e) {
-assertThat(e.toString(), containsString("abs(offset) < size"));
-}
-
-try {
-TumblingProcessingTimeWindows.of(Time.seconds(10), 
Time.seconds(20));
-fail("should fail");
-} catch (IllegalArgumentException e) {
-assertThat(e.toString(), containsString("abs(offset) < size"));
-}
-
-try {
-TumblingProcessingTimeWindows.of(Time.seconds(10), 
Time.seconds(-11));
-fail("should fail");
-} catch (IllegalArgumentException e) {
-assertThat(e.toString(), containsString("abs(offset) < size"));
-}
+void testInvalidParameters() {
+
+assertThatThrownBy(() -> 
TumblingProcessingTimeWindows.of(Time.seconds(-1)))
+.isInstanceOf(IllegalArgumentException.class)
+.hasMessageContaining("TumblingProcessingTimeWindows");

Review Comment:
   I found that many test classes contain `Time`; I think it should be 
uniformly removed in Flink 2.0.
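
   As context for the migration in this diff, a hedged sketch of the pattern: the JUnit 4 try/fail/catch idiom collapses into a single AssertJ assertion. The validated method below is illustrative, not Flink code; the message echoes the one in the diff:

```java
import static org.assertj.core.api.Assertions.assertThatThrownBy;

class ThrownByMigrationSketch {
    public static void main(String[] args) {
        // before (JUnit 4): try { checkSize(-1); fail("should fail"); } catch (...) { ... }
        assertThatThrownBy(() -> checkSize(-1))
                .isInstanceOf(IllegalArgumentException.class)
                .hasMessageContaining("abs(offset) < size");
    }

    static void checkSize(long size) {
        if (size <= 0) {
            throw new IllegalArgumentException(
                    "TumblingProcessingTimeWindows parameters must satisfy abs(offset) < size");
        }
    }
}
```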






Re: [PR] [FLINK-25544][streaming][JUnit5 Migration] The runtime package of module flink-stream-java [flink]

2024-03-13 Thread via GitHub


Jiabao-Sun commented on code in PR #24483:
URL: https://github.com/apache/flink/pull/24483#discussion_r1524259645


##
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/TumblingProcessingTimeWindowsTest.java:
##
@@ -151,53 +132,44 @@ public void testTimeUnits() {
 TumblingProcessingTimeWindows.of(Time.seconds(5), 
Time.seconds(1));
 
 when(mockContext.getCurrentProcessingTime()).thenReturn(1000L);
-assertThat(
-assigner.assignWindows("String", Long.MIN_VALUE, mockContext),
-contains(timeWindow(1000, 6000)));
+assertThat(assigner.assignWindows("String", Long.MIN_VALUE, 
mockContext))
+.contains(new TimeWindow(1000, 6000));
 
 when(mockContext.getCurrentProcessingTime()).thenReturn(5999L);
-assertThat(
-assigner.assignWindows("String", Long.MIN_VALUE, mockContext),
-contains(timeWindow(1000, 6000)));
+assertThat(assigner.assignWindows("String", Long.MIN_VALUE, 
mockContext))
+.contains(new TimeWindow(1000, 6000));
 
 when(mockContext.getCurrentProcessingTime()).thenReturn(6000L);
-assertThat(
-assigner.assignWindows("String", Long.MIN_VALUE, mockContext),
-contains(timeWindow(6000, 11000)));
+assertThat(assigner.assignWindows("String", Long.MIN_VALUE, 
mockContext))
+.contains(new TimeWindow(6000, 11000));
 }
 
 @Test
-public void testInvalidParameters() {
-try {
-TumblingProcessingTimeWindows.of(Time.seconds(-1));
-fail("should fail");
-} catch (IllegalArgumentException e) {
-assertThat(e.toString(), containsString("abs(offset) < size"));
-}
-
-try {
-TumblingProcessingTimeWindows.of(Time.seconds(10), 
Time.seconds(20));
-fail("should fail");
-} catch (IllegalArgumentException e) {
-assertThat(e.toString(), containsString("abs(offset) < size"));
-}
-
-try {
-TumblingProcessingTimeWindows.of(Time.seconds(10), 
Time.seconds(-11));
-fail("should fail");
-} catch (IllegalArgumentException e) {
-assertThat(e.toString(), containsString("abs(offset) < size"));
-}
+void testInvalidParameters() {
+
+assertThatThrownBy(() -> 
TumblingProcessingTimeWindows.of(Time.seconds(-1)))
+.isInstanceOf(IllegalArgumentException.class)
+.hasMessageContaining("TumblingProcessingTimeWindows");

Review Comment:
   Thanks.
   I'll change the usage of deprecated `Time` to `Duration` as well.
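
   A hedged sketch of that follow-up, assuming the `Duration`-based `of(...)` overloads available in newer Flink versions; the deprecated `Time` factories map one-to-one onto `java.time.Duration`:

```java
import java.time.Duration;

import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;

class TimeToDurationSketch {
    static TumblingProcessingTimeWindows migrated() {
        // before: TumblingProcessingTimeWindows.of(Time.seconds(5), Time.seconds(1))
        return TumblingProcessingTimeWindows.of(Duration.ofSeconds(5), Duration.ofSeconds(1));
    }
}
```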






Re: [PR] [FLINK-34533][release] Add release note for version 1.19 [flink]

2024-03-13 Thread via GitHub


Myasuka commented on code in PR #24394:
URL: https://github.com/apache/flink/pull/24394#discussion_r1524255947


##
docs/content.zh/release-notes/flink-1.19.md:
##
@@ -0,0 +1,362 @@
+---
+title: "Release Notes - Flink 1.19"
+---
+
+
+# Release notes - Flink 1.19
+
+These release notes discuss important aspects, such as configuration, behavior 
or dependencies,
+that changed between Flink 1.18 and Flink 1.19. Please read these notes 
carefully if you are 
+planning to upgrade your Flink version to 1.19.
+
+## Dependency upgrades
+
+ Drop support for python 3.7
+
+# [FLINK-33029](https://issues.apache.org/jira/browse/FLINK-33029)
+
+ Add support for python 3.11
+
+# [FLINK-33030](https://issues.apache.org/jira/browse/FLINK-33030)
+
+## Build System
+
+ Support Java 21
+
+# [FLINK-33163](https://issues.apache.org/jira/browse/FLINK-33163)
+Apache Flink was made ready to compile and run with Java 21. This feature is 
still in beta mode.
+Issues should be reported in Flink's bug tracker.
+
+## Checkpoints
+
+ Deprecate RestoreMode#LEGACY
+
+# [FLINK-34190](https://issues.apache.org/jira/browse/FLINK-34190)
+
+`RestoreMode#LEGACY` is deprecated. Please use `RestoreMode#CLAIM` or 
`RestoreMode#NO_CLAIM` mode
+instead to get a clear state file ownership when restoring.
+
+ CheckpointsCleaner clean individual checkpoint states in parallel
+
+# [FLINK-33090](https://issues.apache.org/jira/browse/FLINK-33090)
+
+Now, when disposing of no-longer-needed checkpoints, every state handle/state 
file will be disposed
+in parallel by the ioExecutor, vastly improving the disposal speed of a 
single checkpoint (for
+large checkpoints, the disposal time can be improved from 10 minutes to < 1 
minute). The old
+behavior can be restored by setting `state.checkpoint.cleaner.parallel-mode` 
to false.
+
+ Support using larger checkpointing interval when source is processing 
backlog
+
+# [FLINK-32514](https://issues.apache.org/jira/browse/FLINK-32514)
+
+`ProcessingBacklog` is introduced to indicate whether a record should be 
processed with low latency
+or high throughput. `ProcessingBacklog` can be set by source operators and can 
be used to change the
+checkpoint interval of a job during runtime.
+
+ Allow triggering Checkpoints through command line client
+
+# [FLINK-6755](https://issues.apache.org/jira/browse/FLINK-6755)
+
+The command line interface supports triggering a checkpoint manually. Usage:
+```
+./bin/flink checkpoint $JOB_ID [-full]
+```
+By specifying the '-full' option, a full checkpoint is triggered. Otherwise an 
incremental
+checkpoint is triggered if the job is configured to take incremental ones 
periodically.
+
+
+## Runtime & Coordination
+
+ Migrate TypeSerializerSnapshot#resolveSchemaCompatibility
+
+# [FLINK-30613](https://issues.apache.org/jira/browse/FLINK-30613)
+
+In Flink 1.19, the old method of resolving schema compatibility has been 
deprecated and the new one
+is introduced. See 
[FLIP-263](https://cwiki.apache.org/confluence/display/FLINK/FLIP-263%3A+Improve+resolving+schema+compatibility?src=contextnavpagetreemode)
 for more details.
+Please migrate to the new method following 
[link](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/datastream/fault-tolerance/serialization/custom_serialization/#migrating-from-deprecated-typeserializersnapshotresolveschemacompatibilityt).
+
+ Deprecate old serialization config methods and options
+
+# [FLINK-34122](https://issues.apache.org/jira/browse/FLINK-34122)
+
+Configuring serialization behavior in code is deprecated, because 
you need to modify the
+code when upgrading the job version. You should configure this via the options
+`pipeline.serialization-config`, `pipeline.force-avro`, `pipeline.force-kryo`, 
and `pipeline.generic-types`.
+Registration of instance-level serializers is deprecated; use class-level 
serializers instead.
+For more information and code examples, please refer to 
[link](https://cwiki.apache.org/confluence/display/FLINK/FLIP-398:+Improve+Serialization+Configuration+And+Usage+In+Flink).
+
+ Migrate string configuration key to ConfigOption
+
+# [FLINK-34079](https://issues.apache.org/jira/browse/FLINK-34079)
+
+We have deprecated all setXxx and getXxx methods except `getString(String key, 
String defaultValue)`
+and `setString(String key, String value)`, such as `setInteger`, `setLong`, 
`getInteger`, and `getLong`.
+We strongly recommend that users and developers use the ConfigOption-based get 
and set methods directly.
+
+ Support System out and err to be redirected to LOG or discarded
+
+# [FLINK-33625](https://issues.apache.org/jira/browse/FLINK-33625)
+
+`System.out` and `System.err` output the content to the `taskmanager.out` and 
`taskmanager.err` files.
+In a production environment, if Flink users use them to print a lot of 
content, the limits of YARN
+or Kubernetes may be exceeded, 
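
For the "Migrate string configuration key to ConfigOption" note above, a minimal sketch of the recommended ConfigOption-based access (the key name is illustrative):

```java
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;
import org.apache.flink.configuration.Configuration;

class ConfigOptionMigrationSketch {
    static final ConfigOption<Integer> RETRIES =
            ConfigOptions.key("example.retries") // illustrative key
                    .intType()
                    .defaultValue(3);

    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set(RETRIES, 5);            // replaces conf.setInteger("example.retries", 5)
        int retries = conf.get(RETRIES); // replaces conf.getInteger("example.retries", 3)
        System.out.println(retries);
    }
}
```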

Re: [PR] [FLINK-25544][streaming][JUnit5 Migration] The runtime package of module flink-stream-java [flink]

2024-03-13 Thread via GitHub


Jiabao-Sun commented on code in PR #24483:
URL: https://github.com/apache/flink/pull/24483#discussion_r1524256772


##
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/partitioner/StreamPartitionerTest.java:
##
@@ -21,33 +21,32 @@
 import org.apache.flink.runtime.plugable.SerializationDelegate;
 import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
 import org.apache.flink.util.InstantiationUtil;
-import org.apache.flink.util.TestLogger;
 
-import org.junit.Before;
-import org.junit.Test;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
 
 import java.io.IOException;
 
-import static org.junit.Assert.assertEquals;
+import static org.assertj.core.api.Assertions.assertThat;
 
 /** Tests for different {@link StreamPartitioner} implementations. */
-public abstract class StreamPartitionerTest extends TestLogger {
+abstract class StreamPartitionerTest {
 
 protected final StreamPartitioner streamPartitioner = 
createPartitioner();
 protected final StreamRecord streamRecord = new 
StreamRecord<>(null);
 protected final SerializationDelegate> 
serializationDelegate =
 new SerializationDelegate<>(null);
 
-abstract StreamPartitioner createPartitioner();
+protected abstract StreamPartitioner createPartitioner();

Review Comment:
   Good point!






Re: [PR] [FLINK-25544][streaming][JUnit5 Migration] The runtime package of module flink-stream-java [flink]

2024-03-13 Thread via GitHub


ferenc-csaky commented on code in PR #24483:
URL: https://github.com/apache/flink/pull/24483#discussion_r1523230235


##
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/MergingWindowSetTest.java:
##
@@ -227,97 +220,92 @@ public void testLateMerging() throws Exception {
 // add several non-overlapping initial windows
 
 mergeFunction.reset();
-assertEquals(
-new TimeWindow(0, 3), windowSet.addWindow(new TimeWindow(0, 
3), mergeFunction));
-assertFalse(mergeFunction.hasMerged());
-assertEquals(new TimeWindow(0, 3), windowSet.getStateWindow(new 
TimeWindow(0, 3)));
+assertThat(windowSet.addWindow(new TimeWindow(0, 3), mergeFunction))
+.isEqualTo(new TimeWindow(0, 3));
+assertThat(mergeFunction.hasMerged()).isFalse();
+assertThat(windowSet.getStateWindow(new TimeWindow(0, 
3))).isEqualTo(new TimeWindow(0, 3));
 
 mergeFunction.reset();
-assertEquals(
-new TimeWindow(5, 8), windowSet.addWindow(new TimeWindow(5, 
8), mergeFunction));
-assertFalse(mergeFunction.hasMerged());
-assertEquals(new TimeWindow(5, 8), windowSet.getStateWindow(new 
TimeWindow(5, 8)));
+assertThat(windowSet.addWindow(new TimeWindow(5, 8), mergeFunction))
+.isEqualTo(new TimeWindow(5, 8));
+assertThat(mergeFunction.hasMerged()).isFalse();
+assertThat(windowSet.getStateWindow(new TimeWindow(5, 
8))).isEqualTo(new TimeWindow(5, 8));
 
 mergeFunction.reset();
-assertEquals(
-new TimeWindow(10, 13), windowSet.addWindow(new TimeWindow(10, 
13), mergeFunction));
-assertFalse(mergeFunction.hasMerged());
-assertEquals(new TimeWindow(10, 13), windowSet.getStateWindow(new 
TimeWindow(10, 13)));
+assertThat(windowSet.addWindow(new TimeWindow(10, 13), mergeFunction))
+.isEqualTo(new TimeWindow(10, 13));
+assertThat(mergeFunction.hasMerged()).isFalse();
+assertThat(windowSet.getStateWindow(new TimeWindow(10, 13)))
+.isEqualTo(new TimeWindow(10, 13));
 
 // add a window that merges the later two windows
 mergeFunction.reset();
-assertEquals(
-new TimeWindow(5, 13), windowSet.addWindow(new TimeWindow(8, 
10), mergeFunction));
-assertTrue(mergeFunction.hasMerged());
-assertEquals(new TimeWindow(5, 13), mergeFunction.mergeTarget());
-assertThat(
-mergeFunction.stateWindow(),
-anyOf(is(new TimeWindow(5, 8)), is(new TimeWindow(10, 13))));
-assertThat(
-mergeFunction.mergeSources(),
-containsInAnyOrder(new TimeWindow(5, 8), new TimeWindow(10, 
13)));
-assertThat(
-mergeFunction.mergedStateWindows(),
-anyOf(
-containsInAnyOrder(new TimeWindow(10, 13)),
-containsInAnyOrder(new TimeWindow(5, 8))));
-assertThat(mergeFunction.mergedStateWindows(), 
not(hasItem(mergeFunction.mergeTarget())));
+assertThat(windowSet.addWindow(new TimeWindow(8, 10), mergeFunction))
+.isEqualTo(new TimeWindow(5, 13));
+assertThat(mergeFunction.hasMerged()).isTrue();
+assertThat(mergeFunction.mergeTarget()).isEqualTo(new TimeWindow(5, 
13));
+assertThat(mergeFunction.stateWindow())
+.satisfiesAnyOf(
+w -> assertThat(w).isEqualTo(new TimeWindow(5, 8)),
+w -> assertThat(w).isEqualTo(new TimeWindow(10, 13)));
+assertThat(mergeFunction.mergeSources())
+.containsExactlyInAnyOrder(new TimeWindow(5, 8), new 
TimeWindow(10, 13));
+assertThat(mergeFunction.mergedStateWindows())
+.containsAnyOf(new TimeWindow(5, 8), new TimeWindow(10, 13));
 
-assertEquals(new TimeWindow(0, 3), windowSet.getStateWindow(new 
TimeWindow(0, 3)));
+assertThat(mergeFunction.mergedStateWindows().toArray())
+.satisfiesAnyOf(
+o -> assertThat(o).containsExactly(new TimeWindow(10, 
13)),
+o -> assertThat(o).containsExactly(new TimeWindow(5, 
8)));
+
+
assertThat(mergeFunction.mergedStateWindows()).doesNotContain(mergeFunction.mergeTarget());
+
+assertThat(windowSet.getStateWindow(new TimeWindow(0, 
3))).isEqualTo(new TimeWindow(0, 3));
 
 mergeFunction.reset();
-assertEquals(
-new TimeWindow(5, 13), windowSet.addWindow(new TimeWindow(5, 
8), mergeFunction));
-assertFalse(mergeFunction.hasMerged());
+assertThat(windowSet.addWindow(new TimeWindow(5, 8), mergeFunction))
+.isEqualTo(new TimeWindow(5, 13));
+assertThat(mergeFunction.hasMerged()).isFalse();
 
 mergeFunction.reset();
-assertEquals(
-new 
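
A hedged sketch of the Hamcrest-to-AssertJ pattern this diff applies: `anyOf(is(a), is(b))` becomes `satisfiesAnyOf` with one assertion per branch, and `containsInAnyOrder` becomes `containsExactlyInAnyOrder`. The values are illustrative:

```java
import static org.assertj.core.api.Assertions.assertThat;

import java.util.List;

class SatisfiesAnyOfSketch {
    public static void main(String[] args) {
        String stateWindow = "TimeWindow(5, 8)";
        // Passes if any one branch holds, like Hamcrest's anyOf.
        assertThat(stateWindow)
                .satisfiesAnyOf(
                        w -> assertThat(w).isEqualTo("TimeWindow(5, 8)"),
                        w -> assertThat(w).isEqualTo("TimeWindow(10, 13)"));

        List<String> sources = List.of("TimeWindow(5, 8)", "TimeWindow(10, 13)");
        // Exact same elements, order ignored.
        assertThat(sources)
                .containsExactlyInAnyOrder("TimeWindow(10, 13)", "TimeWindow(5, 8)");
    }
}
```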

Re: [PR] [FLINK-25544][streaming][JUnit5 Migration] The runtime package of module flink-stream-java [flink]

2024-03-13 Thread via GitHub


ferenc-csaky commented on code in PR #24483:
URL: https://github.com/apache/flink/pull/24483#discussion_r1521029537


##
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/partitioner/StreamPartitionerTest.java:
##
@@ -21,33 +21,32 @@
 import org.apache.flink.runtime.plugable.SerializationDelegate;
 import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
 import org.apache.flink.util.InstantiationUtil;
-import org.apache.flink.util.TestLogger;
 
-import org.junit.Before;
-import org.junit.Test;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
 
 import java.io.IOException;
 
-import static org.junit.Assert.assertEquals;
+import static org.assertj.core.api.Assertions.assertThat;
 
 /** Tests for different {@link StreamPartitioner} implementations. */
-public abstract class StreamPartitionerTest extends TestLogger {
+abstract class StreamPartitionerTest {
 
 protected final StreamPartitioner streamPartitioner = 
createPartitioner();
 protected final StreamRecord streamRecord = new 
StreamRecord<>(null);
 protected final SerializationDelegate> 
serializationDelegate =
 new SerializationDelegate<>(null);
 
-abstract StreamPartitioner createPartitioner();
+protected abstract StreamPartitioner createPartitioner();

Review Comment:
   If the class itself is package-private (which I agree with), I think we 
should also change every `protected` method and field to package-private. 
The same is true for the child classes that override this 
`abstract` method. WDYT?
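
   A minimal sketch of the suggested visibility tightening (class names are illustrative): once the abstract test class is package-private, its members and the overrides in subclasses can drop `protected` too:

```java
abstract class StreamPartitionerTestBase {
    abstract String partitionerName(); // package-private instead of protected
}

class ForwardPartitionerTest extends StreamPartitionerTestBase {
    @Override
    String partitionerName() { // the override stays package-private as well
        return "FORWARD";
    }
}
```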



##
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/WindowOperatorTest.java:
##
@@ -2009,19 +2005,18 @@ public long getCurrentProcessingTime() {
 testHarness.processElement(new StreamRecord<>(new Tuple2<>("key2", 1), 
timestamp));
 
 // the garbage collection timer would wrap-around
-Assert.assertTrue(window.maxTimestamp() + lateness < 
window.maxTimestamp());
+assertThat(window.maxTimestamp() + 
lateness).isLessThan(window.maxTimestamp());
 
 // and it would prematurely fire with watermark (Long.MAX_VALUE - 1500)
-Assert.assertTrue(window.maxTimestamp() + lateness < Long.MAX_VALUE - 
1500);
+assertThat(window.maxTimestamp() + lateness).isLessThan(Long.MAX_VALUE 
- 1500);
 
 // if we don't correctly prevent wrap-around in the garbage collection
 // timers this watermark will clean our window state for the just-added
 // element/window
 testHarness.processWatermark(new Watermark(Long.MAX_VALUE - 1500));
 
 // this watermark is before the end timestamp of our only window
-Assert.assertTrue(Long.MAX_VALUE - 1500 < window.maxTimestamp());
-Assert.assertTrue(window.maxTimestamp() < Long.MAX_VALUE);
+assertThat(window.maxTimestamp()).isBetween(Long.MAX_VALUE - 1500, 
Long.MAX_VALUE);

Review Comment:
   This should be `isStrictlyBetween`, I think; previously the defined range 
was exclusive.



##
flink-streaming-java/src/test/java/org/apache/flink/streaming/runtime/operators/windowing/TumblingProcessingTimeWindowsTest.java:
##
@@ -151,53 +132,44 @@ public void testTimeUnits() {
 TumblingProcessingTimeWindows.of(Time.seconds(5), 
Time.seconds(1));
 
 when(mockContext.getCurrentProcessingTime()).thenReturn(1000L);
-assertThat(
-assigner.assignWindows("String", Long.MIN_VALUE, mockContext),
-contains(timeWindow(1000, 6000)));
+assertThat(assigner.assignWindows("String", Long.MIN_VALUE, 
mockContext))
+.contains(new TimeWindow(1000, 6000));
 
 when(mockContext.getCurrentProcessingTime()).thenReturn(5999L);
-assertThat(
-assigner.assignWindows("String", Long.MIN_VALUE, mockContext),
-contains(timeWindow(1000, 6000)));
+assertThat(assigner.assignWindows("String", Long.MIN_VALUE, 
mockContext))
+.contains(new TimeWindow(1000, 6000));
 
 when(mockContext.getCurrentProcessingTime()).thenReturn(6000L);
-assertThat(
-assigner.assignWindows("String", Long.MIN_VALUE, mockContext),
-contains(timeWindow(6000, 11000)));
+assertThat(assigner.assignWindows("String", Long.MIN_VALUE, 
mockContext))
+.contains(new TimeWindow(6000, 11000));
 }
 
 @Test
-public void testInvalidParameters() {
-try {
-TumblingProcessingTimeWindows.of(Time.seconds(-1));
-fail("should fail");
-} catch (IllegalArgumentException e) {
-assertThat(e.toString(), containsString("abs(offset) < size"));
-}
-
-try {
-TumblingProcessingTimeWindows.of(Time.seconds(10), 
Time.seconds(20));
-fail("should fail");
-} catch (IllegalArgumentException e) {
-   

Re: [PR] [FLINK-34516] Move CheckpointingMode to flink-core [flink]

2024-03-13 Thread via GitHub


Zakelly commented on PR #24381:
URL: https://github.com/apache/flink/pull/24381#issuecomment-1996356653

   @masteryhx  I have switched to the new CheckpointingMode everywhere and 
rebased onto master. Would you please take a look?





Re: [PR] [FLINK-33798][statebackend/rocksdb] automatically clean up rocksdb logs when the task exited. [flink]

2024-03-13 Thread via GitHub


flinkbot commented on PR #24495:
URL: https://github.com/apache/flink/pull/24495#issuecomment-1996344641

   
   ## CI report:
   
   * d22b1166f0c742eda39ce0bcf214855683642668 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





Re: [PR] [FLINK-33798][statebackend/rocksdb] automatically clean up rocksdb logs when the task exited. [flink]

2024-03-13 Thread via GitHub


flinkbot commented on PR #24493:
URL: https://github.com/apache/flink/pull/24493#issuecomment-1996344418

   
   ## CI report:
   
   * 908a6fb7d7400b0a3fa76a09452b1cbc79e1a972 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





Re: [PR] [FLINK-33798][statebackend/rocksdb] automatically clean up rocksdb logs when the task exited. [flink]

2024-03-13 Thread via GitHub


flinkbot commented on PR #24494:
URL: https://github.com/apache/flink/pull/24494#issuecomment-1996344542

   
   ## CI report:
   
   * d9e9cbec2790dc80e276b04cef7c739410cdad04 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





Re: [PR] [FLINK-33798][statebackend/rocksdb] automatically clean up rocksdb logs when the task exited. [flink]

2024-03-13 Thread via GitHub


liming30 commented on PR #24493:
URL: https://github.com/apache/flink/pull/24493#issuecomment-1996342006

   @flinkbot run azure





[PR] [FLINK-33798][statebackend/rocksdb] automatically clean up rocksdb logs when the task exited. [flink]

2024-03-13 Thread via GitHub


liming30 opened a new pull request, #24494:
URL: https://github.com/apache/flink/pull/24494

   Backporting https://github.com/apache/flink/pull/23922 to release-1.18





[PR] [FLINK-33798][statebackend/rocksdb] automatically clean up rocksdb logs when the task exited. [flink]

2024-03-13 Thread via GitHub


liming30 opened a new pull request, #24493:
URL: https://github.com/apache/flink/pull/24493

   Backporting #23922 to release-1.17





[PR] [FLINK-33798][statebackend/rocksdb] automatically clean up rocksdb logs when the task exited. [flink]

2024-03-13 Thread via GitHub


liming30 opened a new pull request, #24495:
URL: https://github.com/apache/flink/pull/24495

   Backporting https://github.com/apache/flink/pull/23922 to release-1.19





[jira] [Closed] (FLINK-33925) Extended failure handling for bulk requests (elasticsearch back port)

2024-03-13 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo closed FLINK-33925.
--
Resolution: Done

main via 9e161cc097b34d3ea0d32787f337e630250547d3.

> Extended failure handling for bulk requests (elasticsearch back port)
> -
>
> Key: FLINK-33925
> URL: https://issues.apache.org/jira/browse/FLINK-33925
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Opensearch
>Affects Versions: opensearch-1.0.1
>Reporter: Peter Schulz
>Assignee: Peter Schulz
>Priority: Major
>  Labels: pull-request-available
> Fix For: opensearch-1.2.0
>
>
> This is a backport of the implementation from the elasticsearch connector 
> (see FLINK-32028) to achieve consistent APIs.





[jira] [Updated] (FLINK-33925) Extended failure handling for bulk requests (elasticsearch back port)

2024-03-13 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo updated FLINK-33925:
---
Fix Version/s: opensearch-1.2.0

> Extended failure handling for bulk requests (elasticsearch back port)
> -
>
> Key: FLINK-33925
> URL: https://issues.apache.org/jira/browse/FLINK-33925
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Opensearch
>Affects Versions: opensearch-1.0.1
>Reporter: Peter Schulz
>Assignee: Peter Schulz
>Priority: Major
>  Labels: pull-request-available
> Fix For: opensearch-1.2.0
>
>
> This is a backport of the implementation from the elasticsearch connector 
> (see FLINK-32028) to achieve consistent APIs.





Re: [PR] [FLINK-33925][connectors/opensearch] Allow customising bulk failure handling [flink-connector-opensearch]

2024-03-13 Thread via GitHub


reswqa merged PR #39:
URL: https://github.com/apache/flink-connector-opensearch/pull/39





Re: [PR] [FLINK-33925][connectors/opensearch] Allow customising bulk failure handling [flink-connector-opensearch]

2024-03-13 Thread via GitHub


boring-cyborg[bot] commented on PR #39:
URL: 
https://github.com/apache/flink-connector-opensearch/pull/39#issuecomment-1996317097

   Awesome work, congrats on your first merged pull request!
   





[jira] [Closed] (FLINK-34543) Support Full Partition Processing On Non-keyed DataStream

2024-03-13 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo closed FLINK-34543.
--
Resolution: Done

master(1.20) via a021db68a8545b4183e935beccd9b0d62329ee67.

> Support Full Partition Processing On Non-keyed DataStream
> -
>
> Key: FLINK-34543
> URL: https://issues.apache.org/jira/browse/FLINK-34543
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.20.0
>Reporter: Wencong Liu
>Assignee: Wencong Liu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> Introduce the PartitionWindowedStream and provide multiple full window 
> operations in it.
> The related motivation and design can be found in 
> [FLIP-380|https://cwiki.apache.org/confluence/display/FLINK/FLIP-380%3A+Support+Full+Partition+Processing+On+Non-keyed+DataStream].
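
For illustration, a hedged sketch of the API style FLIP-380 describes, assuming the `fullWindowPartition()` entry point from the FLIP; exact names may differ from the merged code:

{code:java}
import org.apache.flink.api.common.functions.MapPartitionFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

class FullPartitionSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<Long> input = env.fromSequence(1, 100);

        // Process each non-keyed partition as one full "window" of records.
        input.fullWindowPartition()
                .mapPartition(
                        new MapPartitionFunction<Long, Long>() {
                            @Override
                            public void mapPartition(Iterable<Long> values, Collector<Long> out) {
                                long sum = 0; // fold the entire partition in one pass
                                for (Long v : values) {
                                    sum += v;
                                }
                                out.collect(sum);
                            }
                        })
                .print();

        env.execute("full-partition-sketch");
    }
}
{code}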





[jira] [Assigned] (FLINK-34543) Support Full Partition Processing On Non-keyed DataStream

2024-03-13 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo reassigned FLINK-34543:
--

Assignee: Wencong Liu

> Support Full Partition Processing On Non-keyed DataStream
> -
>
> Key: FLINK-34543
> URL: https://issues.apache.org/jira/browse/FLINK-34543
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.20.0
>Reporter: Wencong Liu
>Assignee: Wencong Liu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> Introduce the PartitionWindowedStream and provide multiple full window 
> operations in it.
> The related motivation and design can be found in 
> [FLIP-380|https://cwiki.apache.org/confluence/display/FLINK/FLIP-380%3A+Support+Full+Partition+Processing+On+Non-keyed+DataStream].





Re: [PR] [FLINK-34543][datastream]Support Full Partition Processing On Non-keyed DataStream [flink]

2024-03-13 Thread via GitHub


reswqa merged PR #24398:
URL: https://github.com/apache/flink/pull/24398





[jira] [Commented] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-03-13 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17826925#comment-17826925
 ] 

Jane Chan commented on FLINK-29114:
---

Thanks [~mapohl], I've opened a cherry-pick 
[PR|https://github.com/apache/flink/pull/24492], and it would be great if you 
could help review it.

> TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with 
> result mismatch 
> --
>
> Key: FLINK-29114
> URL: https://issues.apache.org/jira/browse/FLINK-29114
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.15.0, 1.19.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Jane Chan
>Priority: Blocker
>  Labels: auto-deprioritized-major, pull-request-available, 
> test-stability
> Fix For: 1.20.0
>
> Attachments: FLINK-29114.log, image-2024-02-27-15-23-49-494.png, 
> image-2024-02-27-15-26-07-657.png, image-2024-02-27-15-32-48-317.png
>
>
> It could be reproduced locally by repeating tests. Usually about 100 
> iterations are enough to have several failed tests
> {noformat}
> [ERROR] Tests run: 13, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 1.664 s <<< FAILURE! - in 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase
> [ERROR] 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableHintWithLogicalTableScanReuse
>   Time elapsed: 0.108 s  <<< FAILURE!
> java.lang.AssertionError: expected: 3,2,Hello world, 3,2,Hello world, 3,2,Hello world)> but was: 2,2,Hello, 2,2,Hello, 3,2,Hello world, 3,2,Hello world)>
>     at org.junit.Assert.fail(Assert.java:89)
>     at org.junit.Assert.failNotEquals(Assert.java:835)
>     at org.junit.Assert.assertEquals(Assert.java:120)
>     at org.junit.Assert.assertEquals(Assert.java:146)
>     at 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableHintWithLogicalTableScanReuse(TableSourceITCase.scala:428)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>     at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>     at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>     at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>     at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>     at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>     at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
>     at 
> 

Re: [PR] [FLINK-29114][connector][filesystem] Fix issue of file overwriting caused by multiple writes to the same sink table and shared staging directory [flink]

2024-03-13 Thread via GitHub


flinkbot commented on PR #24492:
URL: https://github.com/apache/flink/pull/24492#issuecomment-1996302414

   
   ## CI report:
   
   * 63db84c3fb1d069c48e0efe8733e63d4dd912777 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-34655) Autoscaler standalone doesn't work for flink 1.15

2024-03-13 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan updated FLINK-34655:

Summary: Autoscaler standalone doesn't work for flink 1.15  (was: 
Autoscaler doesn't work for flink 1.15)

> Autoscaler standalone doesn't work for flink 1.15
> -
>
> Key: FLINK-34655
> URL: https://issues.apache.org/jira/browse/FLINK-34655
> Project: Flink
>  Issue Type: Bug
>  Components: Autoscaler
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.8.0
>
>
> flink-kubernetes-operator is committed to supporting the latest 4 Flink minor 
> versions, and the autoscaler is a part of flink-kubernetes-operator. Currently, 
> the latest 4 Flink minor versions are 1.15, 1.16, 1.17 and 1.18.
> But the autoscaler doesn't work for Flink 1.15.
> h2. Root cause: 
> * FLINK-28310 added some properties to IOMetricsInfo in flink-1.16
> * IOMetricsInfo is a part of JobDetailsInfo
> * JobDetailsInfo is necessary for the autoscaler [1]
> * Flink's RestClient doesn't allow any property to be missing when 
> deserializing the JSON
> That means that the RestClient after 1.15 cannot fetch JobDetailsInfo for 
> 1.15 jobs.
> h2. How to fix it properly?
> - [[FLINK-34655](https://issues.apache.org/jira/browse/FLINK-34655)] Copy 
> IOMetricsInfo to the flink-autoscaler-standalone module
> - Remove the copy once 1.15 is no longer supported
> [1] 
> https://github.com/apache/flink-kubernetes-operator/blob/ede1a610b3375d31a2e82287eec67ace70c4c8df/flink-autoscaler/src/main/java/org/apache/flink/autoscaler/ScalingMetricCollector.java#L109
> [2] 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-401%3A+REST+API+JSON+response+deserialization+unknown+field+tolerance
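
For illustration, a self-contained reproduction of the root cause described above: a mapper that fails on missing creator properties (the strictness the issue attributes to Flink's RestClient) rejects an old-style payload lacking a field added later. Class and field names are hypothetical stand-ins for IOMetricsInfo:

{code:java}
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

class StrictDeserializationSketch {
    static final class MetricsSketch {
        final long bytesRead;
        final long fieldAddedIn116;

        @JsonCreator
        MetricsSketch(
                @JsonProperty("bytes-read") long bytesRead,
                @JsonProperty("added-in-1-16") long fieldAddedIn116) {
            this.bytesRead = bytesRead;
            this.fieldAddedIn116 = fieldAddedIn116;
        }
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper strict = new ObjectMapper()
                .enable(DeserializationFeature.FAIL_ON_MISSING_CREATOR_PROPERTIES);
        // Throws: "added-in-1-16" is absent from the 1.15-style payload,
        // mirroring the failed JobDetailsInfo fetch for 1.15 jobs.
        strict.readValue("{\"bytes-read\": 42}", MetricsSketch.class);
    }
}
{code}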





[jira] [Commented] (FLINK-34655) Autoscaler doesn't work for flink 1.15

2024-03-13 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17826924#comment-17826924
 ] 

Rui Fan commented on FLINK-34655:
-

Merged to main(1.8.0) via: ab41083f38cbe27c7d0ee3d8ba29b527e13a4fcc

> Autoscaler doesn't work for flink 1.15
> --
>
> Key: FLINK-34655
> URL: https://issues.apache.org/jira/browse/FLINK-34655
> Project: Flink
>  Issue Type: Bug
>  Components: Autoscaler
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.8.0
>
>
> flink-kubernetes-operator is committed to supporting the latest 4 Flink minor 
> versions, and the autoscaler is a part of flink-kubernetes-operator. Currently, 
> the latest 4 Flink minor versions are 1.15, 1.16, 1.17 and 1.18.
> But the autoscaler doesn't work for Flink 1.15.
> h2. Root cause: 
> * FLINK-28310 added some properties to IOMetricsInfo in flink-1.16
> * IOMetricsInfo is a part of JobDetailsInfo
> * JobDetailsInfo is necessary for the autoscaler [1]
> * Flink's RestClient doesn't allow any property to be missing when 
> deserializing the JSON
> That means that the RestClient after 1.15 cannot fetch JobDetailsInfo for 
> 1.15 jobs.
> h2. How to fix it properly?
> - [[FLINK-34655](https://issues.apache.org/jira/browse/FLINK-34655)] Copy 
> IOMetricsInfo to the flink-autoscaler-standalone module
> - Remove the copy once 1.15 is no longer supported
> [1] 
> https://github.com/apache/flink-kubernetes-operator/blob/ede1a610b3375d31a2e82287eec67ace70c4c8df/flink-autoscaler/src/main/java/org/apache/flink/autoscaler/ScalingMetricCollector.java#L109
> [2] 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-401%3A+REST+API+JSON+response+deserialization+unknown+field+tolerance





[jira] [Commented] (FLINK-34655) Autoscaler doesn't work for flink 1.15

2024-03-13 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17826923#comment-17826923
 ] 

Rui Fan commented on FLINK-34655:
-

{quote}I understand the idea behind providing suggestions. However, it is 
difficult to assess the quality of Autoscaling decisions without applying them 
automatically. The reason is that suggestions become stale very quickly if the 
load pattern is not completely static. Even for static load patterns, if the 
user doesn't redeploy in a matter of minutes, the suggestions might already be 
stale again when the number of pending records increased too much. In any case, 
production load patterns are rarely static which means that autoscaling will 
inevitably trigger multiple times a day, but that is where its real power is 
unleashed. It would be great to hear about any concerns your users have for 
turning on automatic scaling. {quote}

Thanks for pointing it out! 

It is indeed difficult to observe the dynamic changes of the load. But users 
don't want to adopt a major feature without being able to observe it first. This 
applies not only to the autoscaler but to all major features: users need to do 
enough research before applying them to a production environment. 

Although the parallelism may change dynamically, based on historical 
experience, users are more concerned about whether the parallelism is 
reasonable during peak periods. Currently, the JDBC event handler records all 
ScalingReports. Each ScalingReport includes its creation time, so users can 
check them conveniently. 

{quote}We've been operating it in production for about a year now.{quote}

It's great to see that your users have been using the autoscaler for a long 
time. I believe it will give the entire community more confidence in using the 
autoscaler.

{quote}Back to the issue here, should we think about a patch release for 1.15 / 
1.16 to add support for overriding vertex parallelism?{quote}

I agree with [~gyfora]: 1.15 and 1.16 won't receive new releases anymore, so the 
community doesn't need to backport them. If some users want to use these 
features, it's better to use a newer version or cherry-pick them into their 
internal Flink version.

> Autoscaler doesn't work for flink 1.15
> --
>
> Key: FLINK-34655
> URL: https://issues.apache.org/jira/browse/FLINK-34655
> Project: Flink
>  Issue Type: Bug
>  Components: Autoscaler
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.8.0
>
>
> flink-kubernetes-operator is committed to supporting the latest 4 Flink minor 
> versions, and the autoscaler is a part of flink-kubernetes-operator. Currently, 
> the latest 4 Flink minor versions are 1.15, 1.16, 1.17 and 1.18.
> But the autoscaler doesn't work for Flink 1.15.
> h2. Root cause: 
> * FLINK-28310 added some properties to IOMetricsInfo in flink-1.16
> * IOMetricsInfo is a part of JobDetailsInfo
> * JobDetailsInfo is necessary for the autoscaler [1]
> * Flink's RestClient doesn't allow any property to be missing when 
> deserializing the JSON
> That means that the RestClient after 1.15 cannot fetch JobDetailsInfo for 
> 1.15 jobs.
> h2. How to fix it properly?
> - [[FLINK-34655](https://issues.apache.org/jira/browse/FLINK-34655)] Copy 
> IOMetricsInfo to the flink-autoscaler-standalone module
> - Remove the copy once 1.15 is no longer supported
> [1] 
> https://github.com/apache/flink-kubernetes-operator/blob/ede1a610b3375d31a2e82287eec67ace70c4c8df/flink-autoscaler/src/main/java/org/apache/flink/autoscaler/ScalingMetricCollector.java#L109
> [2] 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-401%3A+REST+API+JSON+response+deserialization+unknown+field+tolerance





[jira] [Resolved] (FLINK-34655) Autoscaler doesn't work for flink 1.15

2024-03-13 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan resolved FLINK-34655.
-
Resolution: Fixed

> Autoscaler doesn't work for flink 1.15
> --
>
> Key: FLINK-34655
> URL: https://issues.apache.org/jira/browse/FLINK-34655
> Project: Flink
>  Issue Type: Bug
>  Components: Autoscaler
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.8.0
>
>
> flink-kubernetes-operator is committed to supporting the latest 4 Flink minor 
> versions, and the autoscaler is a part of flink-kubernetes-operator. Currently, 
> the latest 4 Flink minor versions are 1.15, 1.16, 1.17 and 1.18.
> But the autoscaler doesn't work for Flink 1.15.
> h2. Root cause: 
> * FLINK-28310 added some properties to IOMetricsInfo in flink-1.16
> * IOMetricsInfo is a part of JobDetailsInfo
> * JobDetailsInfo is necessary for the autoscaler [1]
> * Flink's RestClient doesn't allow any property to be missing when 
> deserializing the JSON
> That means that the RestClient after 1.15 cannot fetch JobDetailsInfo for 
> 1.15 jobs.
> h2. How to fix it properly?
> - [[FLINK-34655](https://issues.apache.org/jira/browse/FLINK-34655)] Copy 
> IOMetricsInfo to the flink-autoscaler-standalone module
> - Remove the copy once 1.15 is no longer supported
> [1] 
> https://github.com/apache/flink-kubernetes-operator/blob/ede1a610b3375d31a2e82287eec67ace70c4c8df/flink-autoscaler/src/main/java/org/apache/flink/autoscaler/ScalingMetricCollector.java#L109
> [2] 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-401%3A+REST+API+JSON+response+deserialization+unknown+field+tolerance





[PR] [FLINK-29114][connector][filesystem] Fix issue of file overwriting caused by multiple writes to the same sink table and shared staging directory [flink]

2024-03-13 Thread via GitHub


LadyForest opened a new pull request, #24492:
URL: https://github.com/apache/flink/pull/24492

   ## What is the purpose of the change
   
   This PR is cherry-picked from #24390
   
   
   ## Brief change log
   
   * Fixes the unstable TableSourceITCase#testTableHintWithLogicalTableScanReuse test
   * Moves the staging dir configuration into the builder for easier testing
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
   - FileSystemOutputFormatTest#testGetUniqueStagingDirectory
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
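
   A hedged sketch of the idea behind the fix (names are illustrative, not the actual implementation): each output format derives its own staging directory, so two jobs writing the same sink table no longer share, and overwrite, one staging path:

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.UUID;

class UniqueStagingDirSketch {
    static Path uniqueStagingDir(Path tableLocation) {
        // A per-writer random suffix keeps concurrent writers from colliding.
        return tableLocation.resolve(".staging_" + UUID.randomUUID());
    }

    public static void main(String[] args) {
        Path table = Paths.get("/warehouse/db/sink_table");
        System.out.println(uniqueStagingDir(table));
    }
}
```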
   





Re: [PR] [FLINK-34655] Copy IOMetricsInfo to flink-autoscaler-standalone module [flink-kubernetes-operator]

2024-03-13 Thread via GitHub


1996fanrui merged PR #797:
URL: https://github.com/apache/flink-kubernetes-operator/pull/797





[jira] [Commented] (FLINK-34529) Projection cannot be pushed down through rank operator.

2024-03-13 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17826918#comment-17826918
 ] 

Benchao Li commented on FLINK-34529:


[~nilerzhou] Thanks for the update. 

For #1, is there a way to preserve 'primary key' fields for {{Rank}}? Usually 
a regression is not acceptable.

For #2, I agree. 

Putting all this together, I'm just thinking: can we improve the current case 
without introducing the general {{ProjectWindowTransposeRule}}, since that rule 
does not take 'primary key' into account and we would end up with a regression? 
Putting more transposing rules into the 'default rewrite' phase might be the 
minimal step to move forward without any regression. What do you think?

> Projection cannot be pushed down through rank operator.
> ---
>
> Key: FLINK-34529
> URL: https://issues.apache.org/jira/browse/FLINK-34529
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.19.0
>Reporter: yisha zhou
>Assignee: yisha zhou
>Priority: Major
>
> When there is a rank/deduplicate operator, the projection based on output of 
> this operator cannot be pushed down to the input of it.
> The following code can help reproducing the issue:
> {code:java}
> val util = streamTestUtil() 
> util.addTableSource[(String, Int, String)]("T1", 'a, 'b, 'c)
> util.addTableSource[(String, Int, String)]("T2", 'd, 'e, 'f)
> val sql =
>   """
> |SELECT a FROM (
> |  SELECT a, f,
> |  ROW_NUMBER() OVER (PARTITION BY f ORDER BY c DESC) as rank_num
> |  FROM  T1, T2
> |  WHERE T1.a = T2.d
> |)
> |WHERE rank_num = 1
>   """.stripMargin
> util.verifyPlan(sql){code}
> The plan is expected to be:
> {code:java}
> Calc(select=[a])
> +- Rank(strategy=[AppendFastStrategy], rankType=[ROW_NUMBER], 
> rankRange=[rankStart=1, rankEnd=1], partitionBy=[f], orderBy=[c DESC], 
> select=[a, c, f])
>+- Exchange(distribution=[hash[f]])
>   +- Calc(select=[a, c, f])
>  +- Join(joinType=[InnerJoin], where=[=(a, d)], select=[a, c, d, f], 
> leftInputSpec=[NoUniqueKey], rightInputSpec=[NoUniqueKey])
> :- Exchange(distribution=[hash[a]])
> :  +- Calc(select=[a, c])
> : +- LegacyTableSourceScan(table=[[default_catalog, 
> default_database, T1, source: [TestTableSource(a, b, c)]]], fields=[a, b, c])
> +- Exchange(distribution=[hash[d]])
>+- Calc(select=[d, f])
>   +- LegacyTableSourceScan(table=[[default_catalog, 
> default_database, T2, source: [TestTableSource(d, e, f)]]], fields=[d, e, f]) 
> {code}
> Notice that the 'select' of Join operator is [a, c, d, f]. However the actual 
> plan is:
> {code:java}
> Calc(select=[a])
> +- Rank(strategy=[AppendFastStrategy], rankType=[ROW_NUMBER], 
> rankRange=[rankStart=1, rankEnd=1], partitionBy=[f], orderBy=[c DESC], 
> select=[a, c, f])
>+- Exchange(distribution=[hash[f]])
>   +- Calc(select=[a, c, f])
>  +- Join(joinType=[InnerJoin], where=[=(a, d)], select=[a, b, c, d, 
> e, f], leftInputSpec=[NoUniqueKey], rightInputSpec=[NoUniqueKey])
> :- Exchange(distribution=[hash[a]])
> :  +- LegacyTableSourceScan(table=[[default_catalog, 
> default_database, T1, source: [TestTableSource(a, b, c)]]], fields=[a, b, c])
> +- Exchange(distribution=[hash[d]])
>+- LegacyTableSourceScan(table=[[default_catalog, 
> default_database, T2, source: [TestTableSource(d, e, f)]]], fields=[d, e, f])
>  {code}
> the 'select' of Join operator is [a, b, c, d, e, f], which means the 
> projection in the final Calc is not passed through the Rank.
> And I think an easy way to fix this issue is to add 
> org.apache.calcite.rel.rules.ProjectWindowTransposeRule into 
> FlinkStreamRuleSets.LOGICAL_OPT_RULES.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34180][docs] Migrate doc website from ververica to flink [flink-web]

2024-03-13 Thread via GitHub


PatrickRen merged PR #722:
URL: https://github.com/apache/flink-web/pull/722


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34664) Add .asf.yaml for Flink CDC

2024-03-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34664:
---
Labels: pull-request-available  (was: )

> Add .asf.yaml for Flink CDC
> ---
>
> Key: FLINK-34664
> URL: https://issues.apache.org/jira/browse/FLINK-34664
> Project: Flink
>  Issue Type: Sub-task
>  Components: Flink CDC
>Reporter: Qingsheng Ren
>Assignee: Qingsheng Ren
>Priority: Major
>  Labels: pull-request-available
>
> We need to add a .asf.yaml file to the Flink CDC repo to get auto-links to
> Apache Jira and update the project description.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34664) Add .asf.yaml for Flink CDC

2024-03-13 Thread Qingsheng Ren (Jira)
Qingsheng Ren created FLINK-34664:
-

 Summary: Add .asf.yaml for Flink CDC
 Key: FLINK-34664
 URL: https://issues.apache.org/jira/browse/FLINK-34664
 Project: Flink
  Issue Type: Sub-task
  Components: Flink CDC
Reporter: Qingsheng Ren
Assignee: Qingsheng Ren


We need to add a .asf.yaml file to the Flink CDC repo to get auto-links to
Apache Jira and update the project description.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34656) Generated code for `ITEM` operator should return null when getting element of a null map/array/row

2024-03-13 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17826917#comment-17826917
 ] 

Benchao Li commented on FLINK-34656:


Although the {{resultTerm}} is a default value instead of {{NULL}}, the
{{nullTerm}} is correct, so the {{GeneratedExpression}} for
{{generateArrayElementAt}} should be correct as well. Could you provide a test
case that reproduces the problem you described?

> Generated code for `ITEM` operator should return null when getting element of 
> a null map/array/row
> --
>
> Key: FLINK-34656
> URL: https://issues.apache.org/jira/browse/FLINK-34656
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.20.0
>Reporter: yisha zhou
>Priority: Major
>
> In FieldAccessFromTableITCase we can find that the expected result of f0[1] 
> is null when f0 is a null array. 
> However, the behavior in the generated code for ITEM is not consistent with
> the case above. The main code is:
>  
> {code:java}
> val arrayAccessCode =
>   s"""
>  |${array.code}
>  |${index.code}
>  |boolean $nullTerm = ${array.nullTerm} || ${index.nullTerm} ||
>  |   $idxStr < 0 || $idxStr >= ${array.resultTerm}.size() || $arrayIsNull;
>  |$resultTypeTerm $resultTerm = $nullTerm ? $defaultTerm : $arrayGet;
>  |""".stripMargin {code}
> If `array.nullTerm` is true, a default value of the element type will be
> returned, for example -1 for a null bigint array.
> The reason FieldAccessFromTableITCase gets the expected result is that
> ReduceExpressionsRule generates expression code for that case like:
> {code:java}
> boolean isNull$0 = true || false ||
>    ((int) 1) - 1 < 0 || ((int) 1) - 1 >= 
> ((org.apache.flink.table.data.ArrayData) null).size() || 
> ((org.apache.flink.table.data.ArrayData) null).isNullAt(((int) 1) - 1);
> long result$0 = isNull$0 ? -1L : ((org.apache.flink.table.data.ArrayData) 
> null).getLong(((int) 1) - 1);
> if (isNull$0) {
>   out.setField(0, null);
> } else {
>   out.setField(0, result$0);
> } {code}
> The reduced expression will be a null literal.
>  
> I think the behaviors for getting an element of a null value should be
> unified.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-34662) Add new issue template for Flink CDC repo

2024-03-13 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren resolved FLINK-34662.
---
Resolution: Fixed

> Add new issue template for Flink CDC repo
> -
>
> Key: FLINK-34662
> URL: https://issues.apache.org/jira/browse/FLINK-34662
> Project: Flink
>  Issue Type: Sub-task
>  Components: Flink CDC
>Reporter: Qingsheng Ren
>Assignee: Qingsheng Ren
>Priority: Major
>
> As we migrated to Apache Jira for managing issues, we need to provide a new
> template to remind users and contributors about the new way of reporting
> issues.
> The reason we don't disable the issue functionality is that some historical
> commits and PRs are linked to issues from before the donation, and disabling
> it would make those untraceable.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34662) Add new issue template for Flink CDC repo

2024-03-13 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17826913#comment-17826913
 ] 

Qingsheng Ren commented on FLINK-34662:
---

flink-cdc master: dba00625b11e77539794346890f3cb76c3b9c0cd

> Add new issue template for Flink CDC repo
> -
>
> Key: FLINK-34662
> URL: https://issues.apache.org/jira/browse/FLINK-34662
> Project: Flink
>  Issue Type: Sub-task
>  Components: Flink CDC
>Reporter: Qingsheng Ren
>Assignee: Qingsheng Ren
>Priority: Major
>
> As we migrated to Apache Jira for managing issues, we need to provide a new
> template to remind users and contributors about the new way of reporting
> issues.
> The reason we don't disable the issue functionality is that some historical
> commits and PRs are linked to issues from before the donation, and disabling
> it would make those untraceable.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-34660) AutoRescalingITCase#testCheckpointRescalingInKeyedState AssertionError

2024-03-13 Thread Hangxiang Yu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hangxiang Yu resolved FLINK-34660.
--
Fix Version/s: 1.20.0
 Assignee: Hangxiang Yu
   Resolution: Fixed

merged aa71589 into master

> AutoRescalingITCase#testCheckpointRescalingInKeyedState AssertionError
> --
>
> Key: FLINK-34660
> URL: https://issues.apache.org/jira/browse/FLINK-34660
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Reporter: Hangxiang Yu
>Assignee: Hangxiang Yu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58249&view=ms.vss-test-web.build-test-results-tab&runId=4036370&resultId=100718&paneView=debug]
>  
> {code:java}
> expected:<[(0,8000), (0,32000), (0,48000), (0,72000), (1,78000), (1,3), 
> (1,54000), (0,2000), (0,1), (0,5), (0,66000), (0,74000), (0,82000), 
> (1,8), (1,0), (1,16000), (1,24000), (1,4), (1,56000), (1,64000), 
> (0,12000), (0,28000), (0,52000), (0,6), (0,68000), (0,76000), (1,18000), 
> (1,26000), (1,34000), (1,42000), (1,58000), (0,6000), (0,14000), (0,22000), 
> (0,38000), (0,46000), (0,62000), (0,7), (1,4000), (1,2), (1,36000), 
> (1,44000)]> but was:<[(0,8000), (0,32000), (0,48000), (0,72000), (1,78000), 
> (1,3), (1,54000), (0,2000), (0,1), (0,5), (0,66000), (0,74000), 
> (0,82000), (0,23000), (0,31000), (1,8), (1,0), (1,16000), (1,24000), 
> (1,4), (1,56000), (1,64000), (0,12000), (0,28000), (0,52000), (0,6), 
> (0,68000), (0,76000), (1,18000), (1,26000), (1,34000), (1,42000), (1,58000), 
> (0,6000), (0,14000), (0,22000), (0,19000), (0,35000), (1,4000), (1,2), 
> (1,36000), (1,44000)]> {code}
>  
> This may be related to FLINK-34624, as we can see from the log:
> {code:java}
> 03:31:02,073 [ main] INFO 
> org.apache.flink.runtime.testutils.PseudoRandomValueSelector [] - Randomly 
> selected true for state.changelog.enabled
> 03:31:02,163 [jobmanager-io-thread-2] INFO 
> org.apache.flink.state.changelog.AbstractChangelogStateBackend [] - 
> ChangelogStateBackend is used, delegating EmbeddedRocksDBStateBackend. {code}
> FLINK-34624 disables changelog since it doesn't support local rescaling 
> currently.
> Even if changelog is disabled manually for AutoRescalingITCase,
> randomization may still be applied to it.
> We should apply randomization only when it's not pre-defined.
>  
>  
>  
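
A minimal Java sketch of the guard the reporter describes above (editorial;
the names are illustrative, not the actual PseudoRandomValueSelector API):

{code:java}
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.Configuration;

final class RandomizationGuard {

    // Apply a randomized value only when the test has not pre-defined the
    // option; an explicitly configured value always wins.
    static <T> void randomizeIfUnset(
            Configuration conf, ConfigOption<T> option, T randomValue) {
        if (!conf.contains(option)) {
            conf.set(option, randomValue);
        }
    }
}
{code}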



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34660][checkpoint] Avoid randomizing changelog-related config when it's pre-defined [flink]

2024-03-13 Thread via GitHub


masteryhx merged PR #24488:
URL: https://github.com/apache/flink/pull/24488


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34651) The HiveTableSink of Flink does not support writing to S3

2024-03-13 Thread shizhengchao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shizhengchao updated FLINK-34651:
-
Priority: Blocker  (was: Major)

> The HiveTableSink of Flink does not support writing to S3
> -
>
> Key: FLINK-34651
> URL: https://issues.apache.org/jira/browse/FLINK-34651
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.13.6, 1.14.6, 1.15.4, 1.16.3, 1.17.2, 1.18.1
>Reporter: shizhengchao
>Priority: Blocker
>
> My Hive table is located on S3. When I try to write to Hive using Flink 
> Streaming SQL, I find that it does not support writing to S3. Furthermore, 
> this issue has not been fixed in the latest version. The error I got is as 
> follows:
> {code:java}
> // code placeholder
> java.io.IOException: No FileSystem for scheme: s3
>     at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2586)
>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2593)
>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
>     at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2632)
>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2614)
>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>     at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>     at 
> org.apache.flink.connectors.hive.HadoopFileSystemFactory.create(HadoopFileSystemFactory.java:44)
>     at 
> org.apache.flink.table.filesystem.stream.StreamingSink.lambda$compactionWriter$8dbc1825$1(StreamingSink.java:95)
>     at 
> org.apache.flink.table.filesystem.stream.compact.CompactCoordinator.initializeState(CompactCoordinator.java:102)
>     at 
> org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:118)
>     at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:290)
>     at 
> org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:441)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:585)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.executeRestore(StreamTask.java:565)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runWithCleanUpOnFail(StreamTask.java:650)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:540)
>     at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:759)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566)
>     at java.lang.Thread.run(Thread.java:750)
>  {code}
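
Editorial note on the stack trace above: {{Path.getFileSystem}} resolves the
scheme through Hadoop's {{fs.<scheme>.impl}} configuration, so "No FileSystem
for scheme: s3" means no implementation is bound for {{s3}}. A minimal sketch
of one common binding, assuming hadoop-aws and its dependencies are on the
classpath (illustrative only, not a verified fix for the Hive sink; the bucket
name is made up):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3SchemeBinding {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Bind the "s3" scheme to the S3A implementation from hadoop-aws.
        conf.set("fs.s3.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem");
        FileSystem fs = new Path("s3://my-bucket/warehouse").getFileSystem(conf);
        System.out.println(fs.getUri());
    }
}
{code}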



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34640][metrics] Replace DummyMetricGroup usage with UnregisteredMetricsGroup [flink]

2024-03-13 Thread via GitHub


JingGe commented on PR #24490:
URL: https://github.com/apache/flink/pull/24490#issuecomment-1996136921

   The PR is fine. Let's wait until the CI has passed.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34640][metrics] Replace DummyMetricGroup usage with UnregisteredMetricsGroup [flink]

2024-03-13 Thread via GitHub


JingGe commented on PR #24490:
URL: https://github.com/apache/flink/pull/24490#issuecomment-1996133377

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (FLINK-34640) Replace DummyMetricGroup usage with UnregisteredMetricsGroup

2024-03-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge reassigned FLINK-34640:
---

Assignee: Jeyhun Karimov

> Replace DummyMetricGroup usage with UnregisteredMetricsGroup
> 
>
> Key: FLINK-34640
> URL: https://issues.apache.org/jira/browse/FLINK-34640
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Metrics, Tests
>Reporter: Chesnay Schepler
>Assignee: Jeyhun Karimov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> The {{DummyMetricGroup}} is terrible because it is decidedly unsafe to use. 
> Use the {{UnregisteredMetricsGroup}} instead.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [Flink 34569][e2e] fail fast if AWS cli container fails to start [flink]

2024-03-13 Thread via GitHub


flinkbot commented on PR #24491:
URL: https://github.com/apache/flink/pull/24491#issuecomment-1996064367

   
   ## CI report:
   
   * 119cd86a12b4d65ed43000f58df4a423f9e2aae7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [Flink 34569][e2e] fail fast if AWS cli container fails to start [flink]

2024-03-13 Thread via GitHub


robobario commented on code in PR #24491:
URL: https://github.com/apache/flink/pull/24491#discussion_r1524015981


##
flink-end-to-end-tests/test-scripts/common_s3_operations.sh:
##
@@ -29,12 +29,18 @@
 #   AWSCLI_CONTAINER_ID
 ###
 function aws_cli_start() {
-  export AWSCLI_CONTAINER_ID=$(docker run -d \

Review Comment:
   see https://www.shellcheck.net/wiki/SC2155



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [Flink 34569][e2e] fail fast if AWS cli container fails to start [flink]

2024-03-13 Thread via GitHub


robobario opened a new pull request, #24491:
URL: https://github.com/apache/flink/pull/24491

   ## What is the purpose of the change
   
   This pull request aims to make end-to-end test scripts that source 
`common_s3_operations.sh` fail fast if the aws cli container fails to start. It 
also adds a single naive retry aiming to recover from a transient network 
failure.
   
   [FLINK-34569](https://issues.apache.org/jira/browse/FLINK-34569) describes 
an issue where an end-to-end test run took 15 minutes to fail after the aws cli 
container failed to start. From the test logs:
   
   ```
   2024-03-02T04:10:55.5496990Z Unable to find image 'banst/awscli:latest' 
locally 2024-03-02T04:10:56.3857380Z docker: Error response from daemon: Head 
"https://registry-1.docker.io/v2/banst/awscli/manifests/latest": read tcp 
10.1.0.97:33016->54.236.113.205:443: read: connection reset by peer. 
2024-03-02T04:10:56.3857877Z See 'docker run --help'. 
2024-03-02T04:10:56.4586492Z Error: No such object:
   ```
   
   This failure isn't handled, so the script later got stuck in a loop running
docker exec commands with an empty container id, like `docker exec -t "" command`.
   
   To test it locally I've been provoking docker run failures by changing the 
image name to something non-existent.
   
   ## Brief change log
   
 - *Fail fast if aws cli container fails to run*
 - *Add naive retry when creating aws cli container*
 - *Add --rm to jq docker run commands to remove them on exit*
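
   For reference, the same fail-fast-plus-one-retry idea rendered as a
   self-contained Java sketch (editorial illustration only; the real change
   lives in `common_s3_operations.sh`):

   ```java
   import java.io.IOException;
   import java.nio.charset.StandardCharsets;

   public class DockerRunRetry {
       static String startContainer(String image)
               throws IOException, InterruptedException {
           IOException failure = null;
           for (int attempt = 1; attempt <= 2; attempt++) { // one naive retry
               Process p = new ProcessBuilder("docker", "run", "-d", image)
                       .redirectErrorStream(true)
                       .start();
               String output = new String(
                       p.getInputStream().readAllBytes(), StandardCharsets.UTF_8).trim();
               if (p.waitFor() == 0) {
                   return output; // the container id printed by `docker run -d`
               }
               failure = new IOException(
                       "docker run failed (attempt " + attempt + "): " + output);
           }
           // Fail fast instead of continuing with an empty container id.
           throw failure;
       }
   }
   ```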
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   I verified that it fails fast by modifying the awscli image to have a
non-existent name, to provoke a `docker run` failure, causing it to fail like:
   
   ```
   
==
   Running 'test-scripts/test_file_sink.sh s3 StreamingFileSink 
skip_check_exceptions'
   
==
   TEST_DATA_DIR: 
/home/roby/development/redhat-managed-kafka/upstream/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-53909550201
   Flink dist directory: 
/home/roby/development/redhat-managed-kafka/upstream/flink/flink-dist/target/flink-1.20-SNAPSHOT-bin/flink-1.20-SNAPSHOT
   Found AWS bucket robeyoun-testing-flink-13-03-2024, running the e2e test.
   Found AWS access key, running the e2e test.
   Found AWS secret key, running the e2e test.
   Unable to find image 'banstz/awscli:latest' locally
   docker: Error response from daemon: pull access denied for banstz/awscli, 
repository does not exist or may require 'docker login': denied: requested 
access to the resource is denied.
   See 'docker run --help'.
   running aws cli container failed
   Unable to find image 'banstz/awscli:latest' locally
   docker: Error response from daemon: pull access denied for banstz/awscli, 
repository does not exist or may require 'docker login': denied: requested 
access to the resource is denied.
   See 'docker run --help'.
   running aws cli container failed
   running the aws cli container failed
   [FAIL] Test script contains errors.
   Checking for errors...
   No errors in log files.
   Checking for exceptions...
   No exceptions in log files.
   Checking for non-empty .out files...
   grep: 
/home/roby/development/redhat-managed-kafka/upstream/flink/build-target/log/*.out:
 No such file or directory
   No non-empty .out files.
   
   [FAIL] 'test-scripts/test_file_sink.sh s3 StreamingFileSink 
skip_check_exceptions' failed after 0 minutes and 6 seconds! Test exited with 
exit code 1
   ```
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-25544][streaming][JUnit5 Migration] The runtime package of module flink-stream-java [flink]

2024-03-13 Thread via GitHub


JingGe commented on PR #24483:
URL: https://github.com/apache/flink/pull/24483#issuecomment-199548

   > Brief change log
   
   @Jiabao-Sun Thanks for the excellent summary! I have added it as the 
reference example in the description of 
[FLINK-25325](https://issues.apache.org/jira/browse/FLINK-25325)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34640][metrics] Replace DummyMetricGroup usage with UnregisteredMetricsGroup [flink]

2024-03-13 Thread via GitHub


jeyhunkarimov commented on PR #24490:
URL: https://github.com/apache/flink/pull/24490#issuecomment-1995999503

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-25325) Migration Flink from Junit4 to Junit5

2024-03-13 Thread Jing Ge (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Ge updated FLINK-25325:

Description: 
Based on the consensus from the mailing list discussion[1][2], we have started
working on the JUnit4 to JUnit5 migration.

This is the umbrella ticket that describes the big picture of the migration
with the following steps:
 * AssertJ integration and guideline
 * Test framework upgrade from JUnit4 to JUnit5
 * JUnit5 migration guideline (document and reference migration)
 * Optimization for issues found while writing new tests in JUnit5
 * [Long-term] Module-based graceful migration of old tests from JUnit4 to JUnit5

All JUnit5 migration related tasks are welcome to be created under this 
umbrella. 

 

[https://github.com/apache/flink/pull/24483] is an excellent PR to use as an
example. Please create a similar brief change log summary for new PRs.

 

[1] [[DISCUSS]Moving to 
JUnit5|https://lists.apache.org/thread/jsjvc2cqb91pyh47d4p6olk3c1vxqm3w]

[2] [[DISCUSS] Conventions on assertions to use in 
tests|https://lists.apache.org/thread/33t7hz8w873p1bc5msppk65792z08rgt]

[3] [JUnit5 migration 
guide|https://docs.google.com/document/d/1514Wa_aNB9bJUen4xm5uiuXOooOJTtXqS_Jqk9KJitU/edit]

  was:
Based on the consensus from the mailing list discussion[1][2], we have been 
starting working on the JUnit4 to JUnit5 migration. 

This is the umbrella ticket which describes the big picture of the migration 
with following steps:
 * AssertJ integration and guideline
 * Test Framework upgrade from JUnit4 to JUnit5
 * JUnit5 migration guideline(document and reference migration)
 * Optimization for issues found while writing new test in JUint5
 * [Long-term]Module based graceful migration of old tests in JUnit4 to JUnit5

 

All JUnit5 migration related tasks are welcome to be created under this 
umbrella. 

 

 

[1] [[DISCUSS]Moving to 
JUnit5|https://lists.apache.org/thread/jsjvc2cqb91pyh47d4p6olk3c1vxqm3w]

[2] [[DISCUSS] Conventions on assertions to use in 
tests|https://lists.apache.org/thread/33t7hz8w873p1bc5msppk65792z08rgt]

[3] [JUnit5 migration 
guide|https://docs.google.com/document/d/1514Wa_aNB9bJUen4xm5uiuXOooOJTtXqS_Jqk9KJitU/edit]



> Migration Flink from Junit4 to Junit5
> -
>
> Key: FLINK-25325
> URL: https://issues.apache.org/jira/browse/FLINK-25325
> Project: Flink
>  Issue Type: New Feature
>  Components: Tests
>Affects Versions: 1.14.0
>Reporter: Jing Ge
>Priority: Major
>  Labels: Umbrella
> Fix For: 1.20.0
>
>
> Based on the consensus from the mailing list discussion[1][2], we have started
> working on the JUnit4 to JUnit5 migration.
> This is the umbrella ticket that describes the big picture of the migration
> with the following steps:
>  * AssertJ integration and guideline
>  * Test framework upgrade from JUnit4 to JUnit5
>  * JUnit5 migration guideline (document and reference migration)
>  * Optimization for issues found while writing new tests in JUnit5
>  * [Long-term] Module-based graceful migration of old tests from JUnit4 to JUnit5
> All JUnit5 migration related tasks are welcome to be created under this 
> umbrella. 
>  
> [https://github.com/apache/flink/pull/24483] is an excellent PR to use as an
> example. Please create a similar brief change log summary for new PRs.
>  
> [1] [[DISCUSS]Moving to 
> JUnit5|https://lists.apache.org/thread/jsjvc2cqb91pyh47d4p6olk3c1vxqm3w]
> [2] [[DISCUSS] Conventions on assertions to use in 
> tests|https://lists.apache.org/thread/33t7hz8w873p1bc5msppk65792z08rgt]
> [3] [JUnit5 migration 
> guide|https://docs.google.com/document/d/1514Wa_aNB9bJUen4xm5uiuXOooOJTtXqS_Jqk9KJitU/edit]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34663) flink-opensearch connector Unable to parse response body for Response

2024-03-13 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17826884#comment-17826884
 ] 

Andriy Redko commented on FLINK-34663:
--

Thanks for trying it out, [~wgendy]. The pull request is not merged yet (I
just mentioned that there is an attempt to have separate OS v1 and v2
support); I will try to look into the possible options shortly.

> flink-opensearch connector Unable to parse response body for Response
> -
>
> Key: FLINK-34663
> URL: https://issues.apache.org/jira/browse/FLINK-34663
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Opensearch
>Affects Versions: 1.18.1
> Environment: Docker-Compose:
> Flink 1.18.1 - Java11
> OpenSearch 2.12.0
> Flink-Sql-Opensearch-connector (flink 1.18.1 → Os 1.3)
>Reporter: wael shehata
>Priority: Major
> Attachments: image-2024-03-14-00-10-40-982.png
>
>
> I'm trying to use the flink-sql-opensearch connector to sink stream data to
> OpenSearch via Flink …
> After submitting the job to the Flink cluster successfully, the job runs
> normally for 30 seconds and creates the index with data … then it fails with
> the following message:
> _*org.apache.flink.util.FlinkRuntimeException: Complete bulk has failed… 
> Caused by: java.io.IOException: Unable to parse response body for Response*_
> _*{requestLine=POST /_bulk?timeout=1m HTTP/1.1, 
> host=[http://172.20.0.6:9200|http://172.20.0.6:9200/], response=HTTP/1.1 200 
> OK}*_
> at 
> org.opensearch.client.RestHighLevelClient$1.onSuccess(RestHighLevelClient.java:1942)
> at 
> org.opensearch.client.RestClient$FailureTrackingResponseListener.onSuccess(RestClient.java:662)
> at org.opensearch.client.RestClient$1.completed(RestClient.java:396)
> at org.opensearch.client.RestClient$1.completed(RestClient.java:390)
> at org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122)
> at 
> org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:182)
> at 
> org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:448)
> at 
> org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:338)
> at 
> org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
> at 
> org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:87)
> at 
> org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:40)
> at 
> org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
> at 
> org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
> at 
> org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
> at 
> org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
> at 
> org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
> at 
> org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
> at 
> org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591)
> … 1 more
> *Caused by: java.lang.NullPointerException*
> *at java.base/java.util.Objects.requireNonNull(Unknown Source)*
> *at org.opensearch.action.DocWriteResponse.<init>(DocWriteResponse.java:140)*
> *at org.opensearch.action.index.IndexResponse.<init>(IndexResponse.java:67) …*
> It seems that this error is common but without any solution …
> although the Flink connector was built for OpenSearch 1.3, it still
> works for sending data to and creating the index in OpenSearch 2.12.0 … but
> this error persists with all OpenSearch versions greater than 1.3 …
> *Opensearch support reply was:*
> *"this is unexpected, could you please create an issue here [1], the issue is 
> caused by _type property that has been removed in 2.x"*



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34663) flink-opensearch connector Unable to parse response body for Response

2024-03-13 Thread wael shehata (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17826883#comment-17826883
 ] 

wael shehata commented on FLINK-34663:
--

Note that after building the codebase ([GitHub -
apache/flink-connector-opensearch: Apache
flink|https://github.com/apache/flink-connector-opensearch]) with Maven
( ./mvn clean package -DskipTests ), 8 jars are created with the
following names:

!image-2024-03-14-00-10-40-982.png|width=380,height=250!

When using these jars (supposedly the ones for "opensearch-v2"), I got the
problem mentioned above. And if I change the Flink version (to 1.18.1) and the
OpenSearch version (to 2.12.0), the project won't build and reports errors
about many missing classes that no longer exist in the new versions...

How, then, can I get or build the "opensearch-v2" Flink connector?

warm regards

 

> flink-opensearch connector Unable to parse response body for Response
> -
>
> Key: FLINK-34663
> URL: https://issues.apache.org/jira/browse/FLINK-34663
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Opensearch
>Affects Versions: 1.18.1
> Environment: Docker-Compose:
> Flink 1.18.1 - Java11
> OpenSearch 2.12.0
> Flink-Sql-Opensearch-connector (flink 1.18.1 → Os 1.3)
>Reporter: wael shehata
>Priority: Major
> Attachments: image-2024-03-14-00-10-40-982.png
>
>
> I'm trying to use the flink-sql-opensearch connector to sink stream data to
> OpenSearch via Flink …
> After submitting the job to the Flink cluster successfully, the job runs
> normally for 30 seconds and creates the index with data … then it fails with
> the following message:
> _*org.apache.flink.util.FlinkRuntimeException: Complete bulk has failed… 
> Caused by: java.io.IOException: Unable to parse response body for Response*_
> _*{requestLine=POST /_bulk?timeout=1m HTTP/1.1, 
> host=[http://172.20.0.6:9200|http://172.20.0.6:9200/], response=HTTP/1.1 200 
> OK}*_
> at 
> org.opensearch.client.RestHighLevelClient$1.onSuccess(RestHighLevelClient.java:1942)
> at 
> org.opensearch.client.RestClient$FailureTrackingResponseListener.onSuccess(RestClient.java:662)
> at org.opensearch.client.RestClient$1.completed(RestClient.java:396)
> at org.opensearch.client.RestClient$1.completed(RestClient.java:390)
> at org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122)
> at 
> org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:182)
> at 
> org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:448)
> at 
> org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:338)
> at 
> org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
> at 
> org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:87)
> at 
> org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:40)
> at 
> org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
> at 
> org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
> at 
> org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
> at 
> org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
> at 
> org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
> at 
> org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
> at 
> org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591)
> … 1 more
> *Caused by: java.lang.NullPointerException*
> *at java.base/java.util.Objects.requireNonNull(Unknown Source)*
> *at org.opensearch.action.DocWriteResponse.<init>(DocWriteResponse.java:140)*
> *at org.opensearch.action.index.IndexResponse.<init>(IndexResponse.java:67) …*
> It seems that this error is common but without any solution …
> although the Flink connector was built for OpenSearch 1.3, it still
> works for sending data to and creating the index in OpenSearch 2.12.0 … but
> this error persists with all OpenSearch versions greater than 1.3 …
> *Opensearch support reply was:*
> *"this is unexpected, could you please create an issue here [1], the issue is 
> caused by _type property that has been removed in 2.x"*



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-34663) flink-opensearch connector Unable to parse response body for Response

2024-03-13 Thread wael shehata (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wael shehata updated FLINK-34663:
-
Attachment: image-2024-03-14-00-10-40-982.png

> flink-opensearch connector Unable to parse response body for Response
> -
>
> Key: FLINK-34663
> URL: https://issues.apache.org/jira/browse/FLINK-34663
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Opensearch
>Affects Versions: 1.18.1
> Environment: Docker-Compose:
> Flink 1.18.1 - Java11
> OpenSearch 2.12.0
> Flink-Sql-Opensearch-connector (flink 1.18.1 → Os 1.3)
>Reporter: wael shehata
>Priority: Major
> Attachments: image-2024-03-14-00-10-40-982.png
>
>
> I'm trying to use the flink-sql-opensearch connector to sink stream data to
> OpenSearch via Flink …
> After submitting the job to the Flink cluster successfully, the job runs
> normally for 30 seconds and creates the index with data … then it fails with
> the following message:
> _*org.apache.flink.util.FlinkRuntimeException: Complete bulk has failed… 
> Caused by: java.io.IOException: Unable to parse response body for Response*_
> _*{requestLine=POST /_bulk?timeout=1m HTTP/1.1, 
> host=[http://172.20.0.6:9200|http://172.20.0.6:9200/], response=HTTP/1.1 200 
> OK}*_
> at 
> org.opensearch.client.RestHighLevelClient$1.onSuccess(RestHighLevelClient.java:1942)
> at 
> org.opensearch.client.RestClient$FailureTrackingResponseListener.onSuccess(RestClient.java:662)
> at org.opensearch.client.RestClient$1.completed(RestClient.java:396)
> at org.opensearch.client.RestClient$1.completed(RestClient.java:390)
> at org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122)
> at 
> org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:182)
> at 
> org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:448)
> at 
> org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:338)
> at 
> org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
> at 
> org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:87)
> at 
> org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:40)
> at 
> org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
> at 
> org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
> at 
> org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
> at 
> org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
> at 
> org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
> at 
> org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
> at 
> org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591)
> … 1 more
> *Caused by: java.lang.NullPointerException*
> *at java.base/java.util.Objects.requireNonNull(Unknown Source)*
> *at org.opensearch.action.DocWriteResponse.<init>(DocWriteResponse.java:140)*
> *at org.opensearch.action.index.IndexResponse.<init>(IndexResponse.java:67) …*
> It seems that this error is common but without any solution …
> although the Flink connector was built for OpenSearch 1.3, it still
> works for sending data to and creating the index in OpenSearch 2.12.0 … but
> this error persists with all OpenSearch versions greater than 1.3 …
> *Opensearch support reply was:*
> *"this is unexpected, could you please create an issue here [1], the issue is 
> caused by _type property that has been removed in 2.x"*



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34663) flink-opensearch connector Unable to parse response body for Response

2024-03-13 Thread wael shehata (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17826880#comment-17826880
 ] 

wael shehata commented on FLINK-34663:
--

Dear Andriy, I sincerely appreciate your valuable comment...

But I've encountered difficulty locating the jar file for the "opensearch-2"
Flink connector. Could you kindly confirm whether it has already been released
or is still under development?

As of now, my search has turned up only the standard flink-opensearch-connector
jar file, which exclusively supports OpenSearch v1.x, not OpenSearch v2.x.

If you happen to have any information regarding the availability of the 
opensearch-2 Flink connector jar file, I would be immensely grateful for your 
assistance in this matter.

Warm regards

> flink-opensearch connector Unable to parse response body for Response
> -
>
> Key: FLINK-34663
> URL: https://issues.apache.org/jira/browse/FLINK-34663
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Opensearch
>Affects Versions: 1.18.1
> Environment: Docker-Compose:
> Flink 1.18.1 - Java11
> OpenSearch 2.12.0
> Flink-Sql-Opensearch-connector (flink 1.18.1 → Os 1.3)
>Reporter: wael shehata
>Priority: Major
>
> I'm trying to use the flink-sql-opensearch connector to sink stream data to
> OpenSearch via Flink …
> After submitting the job to the Flink cluster successfully, the job runs
> normally for 30 seconds and creates the index with data … then it fails with
> the following message:
> _*org.apache.flink.util.FlinkRuntimeException: Complete bulk has failed… 
> Caused by: java.io.IOException: Unable to parse response body for Response*_
> _*{requestLine=POST /_bulk?timeout=1m HTTP/1.1, 
> host=[http://172.20.0.6:9200|http://172.20.0.6:9200/], response=HTTP/1.1 200 
> OK}*_
> at 
> org.opensearch.client.RestHighLevelClient$1.onSuccess(RestHighLevelClient.java:1942)
> at 
> org.opensearch.client.RestClient$FailureTrackingResponseListener.onSuccess(RestClient.java:662)
> at org.opensearch.client.RestClient$1.completed(RestClient.java:396)
> at org.opensearch.client.RestClient$1.completed(RestClient.java:390)
> at org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122)
> at 
> org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:182)
> at 
> org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:448)
> at 
> org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:338)
> at 
> org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
> at 
> org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:87)
> at 
> org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:40)
> at 
> org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
> at 
> org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
> at 
> org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
> at 
> org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
> at 
> org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
> at 
> org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
> at 
> org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591)
> … 1 more
> *Caused by: java.lang.NullPointerException*
> *at java.base/java.util.Objects.requireNonNull(Unknown Source)*
> *at org.opensearch.action.DocWriteResponse.<init>(DocWriteResponse.java:140)*
> *at org.opensearch.action.index.IndexResponse.<init>(IndexResponse.java:67) …*
> It seems that this error is common but without any solution …
> although the Flink connector was built for OpenSearch 1.3, it still
> works for sending data to and creating the index in OpenSearch 2.12.0 … but
> this error persists with all OpenSearch versions greater than 1.3 …
> *Opensearch support reply was:*
> *"this is unexpected, could you please create an issue here [1], the issue is 
> caused by _type property that has been removed in 2.x"*



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34640][metrics] Replace DummyMetricGroup usage with UnregisteredMetricsGroup [flink]

2024-03-13 Thread via GitHub


flinkbot commented on PR #24490:
URL: https://github.com/apache/flink/pull/24490#issuecomment-1995888995

   
   ## CI report:
   
   * 39a6d1928dc2fbaa1de123871b7537c92a03af45 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34640) Replace DummyMetricGroup usage with UnregisteredMetricsGroup

2024-03-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34640:
---
Labels: pull-request-available  (was: )

> Replace DummyMetricGroup usage with UnregisteredMetricsGroup
> 
>
> Key: FLINK-34640
> URL: https://issues.apache.org/jira/browse/FLINK-34640
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Metrics, Tests
>Reporter: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> The {{DummyMetricGroup}} is terrible because it is decidedly unsafe to use. 
> Use the {{UnregisteredMetricsGroup}} instead.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-34640][metrics] Replace DummyMetricGroup usage with UnregisteredMetricsGroup [flink]

2024-03-13 Thread via GitHub


jeyhunkarimov opened a new pull request, #24490:
URL: https://github.com/apache/flink/pull/24490

   ## What is the purpose of the change
   
   Replace DummyMetricGroup usage with UnregisteredMetricsGroup
   
   ## Brief change log
   
 - Remove DummyMetricGroup and replace it with UnregisteredMetricsGroup
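
   For context, `UnregisteredMetricsGroup` is a no-op `MetricGroup`, so test
   code can pass it wherever a metric group is required without touching any
   registry. A tiny editorial usage sketch:

   ```java
   import org.apache.flink.metrics.Counter;
   import org.apache.flink.metrics.groups.UnregisteredMetricsGroup;

   public class NoOpMetricsExample {
       public static void main(String[] args) {
           UnregisteredMetricsGroup group = new UnregisteredMetricsGroup();
           Counter counter = group.counter("records-in"); // registers nothing
           counter.inc(); // still usable as a plain counter
           System.out.println(counter.getCount());
       }
   }
   ```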
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34663) flink-opensearch connector Unable to parse response body for Response

2024-03-13 Thread Andriy Redko (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17826845#comment-17826845
 ] 

Andriy Redko commented on FLINK-34663:
--

[https://github.com/apache/flink-connector-opensearch/pull/38] would solve 
this, I will try to find out if we could make it work with 1.x client as well

> flink-opensearch connector Unable to parse response body for Response
> -
>
> Key: FLINK-34663
> URL: https://issues.apache.org/jira/browse/FLINK-34663
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Opensearch
>Affects Versions: 1.18.1
> Environment: Docker-Compose:
> Flink 1.18.1 - Java11
> OpenSearch 2.12.0
> Flink-Sql-Opensearch-connector (flink 1.18.1 → Os 1.3)
>Reporter: wael shehata
>Priority: Major
>
> I'm trying to use the flink-sql-opensearch connector to sink stream data to
> OpenSearch via Flink …
> After submitting the job to the Flink cluster successfully, the job runs
> normally for 30 seconds and creates the index with data … then it fails with
> the following message:
> _*org.apache.flink.util.FlinkRuntimeException: Complete bulk has failed… 
> Caused by: java.io.IOException: Unable to parse response body for Response*_
> _*{requestLine=POST /_bulk?timeout=1m HTTP/1.1, 
> host=[http://172.20.0.6:9200|http://172.20.0.6:9200/], response=HTTP/1.1 200 
> OK}*_
> at 
> org.opensearch.client.RestHighLevelClient$1.onSuccess(RestHighLevelClient.java:1942)
> at 
> org.opensearch.client.RestClient$FailureTrackingResponseListener.onSuccess(RestClient.java:662)
> at org.opensearch.client.RestClient$1.completed(RestClient.java:396)
> at org.opensearch.client.RestClient$1.completed(RestClient.java:390)
> at org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122)
> at 
> org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:182)
> at 
> org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:448)
> at 
> org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:338)
> at 
> org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
> at 
> org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:87)
> at 
> org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:40)
> at 
> org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
> at 
> org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
> at 
> org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
> at 
> org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
> at 
> org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
> at 
> org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
> at 
> org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591)
> … 1 more
> *Caused by: java.lang.NullPointerException*
> *at java.base/java.util.Objects.requireNonNull(Unknown Source)*
> *at org.opensearch.action.DocWriteResponse.<init>(DocWriteResponse.java:140)*
> *at org.opensearch.action.index.IndexResponse.<init>(IndexResponse.java:67) …*
> It seems that this error is common but without any solution …
> although the Flink connector was built for OpenSearch 1.3, it still
> works for sending data to and creating the index in OpenSearch 2.12.0 … but
> this error persists with all OpenSearch versions greater than 1.3 …
> *Opensearch support reply was:*
> *"this is unexpected, could you please create an issue here [1], the issue is 
> caused by _type property that has been removed in 2.x"*



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-34663) flink-opensearch connector Unable to parse response body for Response

2024-03-13 Thread wael shehata (Jira)
wael shehata created FLINK-34663:


 Summary: flink-opensearch connector Unable to parse response body 
for Response
 Key: FLINK-34663
 URL: https://issues.apache.org/jira/browse/FLINK-34663
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Opensearch
Affects Versions: 1.18.1
 Environment: Docker-Compose:
Flink 1.18.1 - Java11
OpenSearch 2.12.0
Flink-Sql-Opensearch-connector (flink 1.18.1 → Os 1.3)
Reporter: wael shehata


I'm trying to use the flink-sql-opensearch connector to sink stream data to
OpenSearch via Flink …
After submitting the job to the Flink cluster successfully, the job runs
normally for 30 seconds and creates the index with data … then it fails with
the following message:
_*org.apache.flink.util.FlinkRuntimeException: Complete bulk has failed… Caused 
by: java.io.IOException: Unable to parse response body for Response*_

_*{requestLine=POST /_bulk?timeout=1m HTTP/1.1, 
host=[http://172.20.0.6:9200|http://172.20.0.6:9200/], response=HTTP/1.1 200 
OK}*_

at 
org.opensearch.client.RestHighLevelClient$1.onSuccess(RestHighLevelClient.java:1942)
at 
org.opensearch.client.RestClient$FailureTrackingResponseListener.onSuccess(RestClient.java:662)
at org.opensearch.client.RestClient$1.completed(RestClient.java:396)
at org.opensearch.client.RestClient$1.completed(RestClient.java:390)
at org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122)
at 
org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:182)
at 
org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:448)
at 
org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:338)
at 
org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
at 
org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:87)
at 
org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:40)
at 
org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
at 
org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
at 
org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
at 
org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
at 
org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
at 
org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
at 
org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591)
… 1 more
*Caused by: java.lang.NullPointerException*
*at java.base/java.util.Objects.requireNonNull(Unknown Source)*
*at org.opensearch.action.DocWriteResponse.<init>(DocWriteResponse.java:140)*
*at org.opensearch.action.index.IndexResponse.<init>(IndexResponse.java:67) …*

It seems that this error is common but without any solution …
although the Flink connector was built for OpenSearch 1.3, it still
works for sending data to and creating the index in OpenSearch 2.12.0 … but
this error persists with all OpenSearch versions greater than 1.3 …

*Opensearch support reply was:*
*"this is unexpected, could you please create an issue here [1], the issue is 
caused by _type property that has been removed in 2.x"*



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-18476) PythonEnvUtilsTest#testStartPythonProcess fails

2024-03-13 Thread Ryan Skraba (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17826803#comment-17826803
 ] 

Ryan Skraba commented on FLINK-18476:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58254&view=logs&j=59a2b95a-736b-5c46-b3e0-cee6e587fd86&t=c301da75-e699-5c06-735f-778207c16f50&l=21904

> PythonEnvUtilsTest#testStartPythonProcess fails
> ---
>
> Key: FLINK-18476
> URL: https://issues.apache.org/jira/browse/FLINK-18476
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Tests
>Affects Versions: 1.11.0, 1.15.3, 1.18.0, 1.19.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> test-stability
>
> The 
> {{org.apache.flink.client.python.PythonEnvUtilsTest#testStartPythonProcess}} 
> failed in my local environment as it assumes the environment has 
> {{/usr/bin/python}}. 
> I don't know exactly how I got python in Ubuntu 20.04, but I only have an 
> alias for {{python = python3}}. Therefore the test fails.
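
As an aside, a hypothetical way to make such a test robust (this is not the 
actual {{PythonEnvUtilsTest}} code): resolve the interpreter from {{PATH}} and 
skip the test when none is found, instead of hard-coding {{/usr/bin/python}}.

{code:java}
import java.io.File;

final class PythonLocator {
    /** Returns the first executable `python` or `python3` found on PATH, or null. */
    static String findPython() {
        String path = System.getenv("PATH");
        if (path == null) {
            return null;
        }
        for (String dir : path.split(File.pathSeparator)) {
            for (String name : new String[] {"python", "python3"}) {
                File candidate = new File(dir, name);
                if (candidate.canExecute()) {
                    return candidate.getAbsolutePath();
                }
            }
        }
        return null; // callers can skip the test instead of failing
    }
}
{code}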



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34643) JobIDLoggingITCase failed

2024-03-13 Thread Ryan Skraba (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17826800#comment-17826800
 ] 

Ryan Skraba commented on FLINK-34643:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58254=logs=8fd9202e-fd17-5b26-353c-ac1ff76c8f28=ea7cf968-e585-52cb-e0fc-f48de023a7ca=8244

> JobIDLoggingITCase failed
> -
>
> Key: FLINK-34643
> URL: https://issues.apache.org/jira/browse/FLINK-34643
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.20.0
>Reporter: Matthias Pohl
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58187=logs=8fd9202e-fd17-5b26-353c-ac1ff76c8f28=ea7cf968-e585-52cb-e0fc-f48de023a7ca=7897
> {code}
> Mar 09 01:24:23 01:24:23.498 [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 4.209 s <<< FAILURE! -- in 
> org.apache.flink.test.misc.JobIDLoggingITCase
> Mar 09 01:24:23 01:24:23.498 [ERROR] 
> org.apache.flink.test.misc.JobIDLoggingITCase.testJobIDLogging(ClusterClient) 
> -- Time elapsed: 1.459 s <<< ERROR!
> Mar 09 01:24:23 java.lang.IllegalStateException: Too few log events recorded 
> for org.apache.flink.runtime.jobmaster.JobMaster (12) - this must be a bug in 
> the test code
> Mar 09 01:24:23   at 
> org.apache.flink.util.Preconditions.checkState(Preconditions.java:215)
> Mar 09 01:24:23   at 
> org.apache.flink.test.misc.JobIDLoggingITCase.assertJobIDPresent(JobIDLoggingITCase.java:148)
> Mar 09 01:24:23   at 
> org.apache.flink.test.misc.JobIDLoggingITCase.testJobIDLogging(JobIDLoggingITCase.java:132)
> Mar 09 01:24:23   at java.lang.reflect.Method.invoke(Method.java:498)
> Mar 09 01:24:23   at 
> java.util.concurrent.RecursiveAction.exec(RecursiveAction.java:189)
> Mar 09 01:24:23   at 
> java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
> Mar 09 01:24:23   at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
> Mar 09 01:24:23   at 
> java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
> Mar 09 01:24:23   at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
> Mar 09 01:24:23 
> {code}
> The other test failures of this build were also caused by the same test:
> * 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58187=logs=2c3cbe13-dee0-5837-cf47-3053da9a8a78=b78d9d30-509a-5cea-1fef-db7abaa325ae=8349
> * 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58187=logs=a596f69e-60d2-5a4b-7d39-dc69e4cdaed3=712ade8c-ca16-5b76-3acd-14df33bc1cb1=8209



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34659) How to implement global sort in latest flink datastream API

2024-03-13 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17826788#comment-17826788
 ] 

Martijn Visser commented on FLINK-34659:


{quote}If you ever search hive/spark SQL/Dataframe/RDD, you will find that a 
global sort is a basic example on the first page of results.{quote}

Isn't that to be expected, given that all of these systems are batch systems 
first and foremost?

The user documentation that you linked even says so:
{quote}In theory, a streaming pipeline can execute all operators. However, in 
practice, some operations might
not make much sense as they would lead to ever-growing state and are therefore 
not supported. A global
sort would be an example that is only available in batch mode. Simply put: it 
should be possible to
run a working streaming pipeline in batch mode but not necessarily vice 
versa.{quote}

A global sort in a streaming application means that you would have to store all 
state indefinitely. That's not a bug; as explained in the docs, it does not 
make sense to do a global sort in a streaming application. 
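
To make the distinction concrete, here is a minimal sketch (illustrative only, 
using the Table API with a bounded {{datagen}} source) of a global sort in 
batch runtime mode; the same ORDER BY over the full input is rejected in 
streaming mode for exactly the reason above.

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class GlobalSortSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inBatchMode());
        tEnv.executeSql(
                "CREATE TABLE Orders (id BIGINT, price DECIMAL(10, 2)) "
                        + "WITH ('connector' = 'datagen', 'number-of-rows' = '10')");
        // A full ORDER BY needs the complete (bounded) input; in streaming mode
        // this would require ever-growing state and is therefore not supported.
        tEnv.executeSql("SELECT * FROM Orders ORDER BY price").print();
    }
}
{code}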

{quote}Honestly, I am shocked/upset by this reply.{quote}

I'm sorry for that, but you are asking a user question in the Jira ticket 
itself. As documented at 
https://flink.apache.org/how-to-contribute/getting-help/, questions belong on 
the User mailing list, Slack, or Stack Overflow; bugs are to be reported in 
Jira. If it's unclear whether something is a bug, the ask is to first post it 
on the User mailing list. 

> How to implement global sort in latest flink datastream API
> ---
>
> Key: FLINK-34659
> URL: https://issues.apache.org/jira/browse/FLINK-34659
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.18.1
>Reporter: Junyao Huang
>Priority: Major
> Attachments: image-2024-03-13-11-21-57-846.png
>
>
> [https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/datastream/dataset_migration/#%E7%AC%AC%E4%B8%89%E7%B1%BB]
>  
> !image-2024-03-13-11-21-57-846.png!
>  
> {{will this cause OOM in streaming execution mode?}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-26990) BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime failed

2024-03-13 Thread Piotr Nowojski (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17826779#comment-17826779
 ] 

Piotr Nowojski commented on FLINK-26990:


I've just encountered the same issue in an internal build (thus I cannot 
provide a URL, but the error message is exactly the same).


> BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime
>  failed
> ---
>
> Key: FLINK-26990
> URL: https://issues.apache.org/jira/browse/FLINK-26990
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.16.2
>Reporter: Matthias Pohl
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> [This 
> build|https://dev.azure.com/mapohl/flink/_build/results?buildId=914=logs=f3dc9b18-b77a-55c1-591e-264c46fe44d1=2d3cd81e-1c37-5c31-0ee4-f5d5cdb9324d=25899]
>  failed due to unexpected behavior in 
> {{BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime}}:
> {code}
> Apr 01 11:42:06 [ERROR] 
> org.apache.flink.table.runtime.operators.python.aggregate.arrow.batch.BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime
>   Time elapsed: 0.11 s  <<< FAILURE!
> Apr 01 11:42:06 java.lang.AssertionError: 
> Apr 01 11:42:06 
> Apr 01 11:42:06 Expected size: 6 but was: 4 in:
> Apr 01 11:42:06 [Record @ (undef) : 
> +I(c1,0,1969-12-31T23:59:55,1970-01-01T00:00:05),
> Apr 01 11:42:06 Record @ (undef) : 
> +I(c1,0,1970-01-01T00:00,1970-01-01T00:00:10),
> Apr 01 11:42:06 Record @ (undef) : 
> +I(c1,1,1970-01-01T00:00:05,1970-01-01T00:00:15),
> Apr 01 11:42:06 Record @ (undef) : 
> +I(c1,2,1970-01-01T00:00:10,1970-01-01T00:00:20)]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-26990) BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime failed

2024-03-13 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski updated FLINK-26990:
---
Priority: Major  (was: Minor)

> BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime
>  failed
> ---
>
> Key: FLINK-26990
> URL: https://issues.apache.org/jira/browse/FLINK-26990
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.16.2, 1.18.1
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: auto-deprioritized-major, test-stability
>
> [This 
> build|https://dev.azure.com/mapohl/flink/_build/results?buildId=914=logs=f3dc9b18-b77a-55c1-591e-264c46fe44d1=2d3cd81e-1c37-5c31-0ee4-f5d5cdb9324d=25899]
>  failed due to unexpected behavior in 
> {{BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime}}:
> {code}
> Apr 01 11:42:06 [ERROR] 
> org.apache.flink.table.runtime.operators.python.aggregate.arrow.batch.BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime
>   Time elapsed: 0.11 s  <<< FAILURE!
> Apr 01 11:42:06 java.lang.AssertionError: 
> Apr 01 11:42:06 
> Apr 01 11:42:06 Expected size: 6 but was: 4 in:
> Apr 01 11:42:06 [Record @ (undef) : 
> +I(c1,0,1969-12-31T23:59:55,1970-01-01T00:00:05),
> Apr 01 11:42:06 Record @ (undef) : 
> +I(c1,0,1970-01-01T00:00,1970-01-01T00:00:10),
> Apr 01 11:42:06 Record @ (undef) : 
> +I(c1,1,1970-01-01T00:00:05,1970-01-01T00:00:15),
> Apr 01 11:42:06 Record @ (undef) : 
> +I(c1,2,1970-01-01T00:00:10,1970-01-01T00:00:20)]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-26990) BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime failed

2024-03-13 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski updated FLINK-26990:
---
Affects Version/s: 1.18.1

> BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime
>  failed
> ---
>
> Key: FLINK-26990
> URL: https://issues.apache.org/jira/browse/FLINK-26990
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.16.2, 1.18.1
>Reporter: Matthias Pohl
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> [This 
> build|https://dev.azure.com/mapohl/flink/_build/results?buildId=914=logs=f3dc9b18-b77a-55c1-591e-264c46fe44d1=2d3cd81e-1c37-5c31-0ee4-f5d5cdb9324d=25899]
>  failed due to unexpected behavior in 
> {{BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime}}:
> {code}
> Apr 01 11:42:06 [ERROR] 
> org.apache.flink.table.runtime.operators.python.aggregate.arrow.batch.BatchArrowPythonGroupWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByTime
>   Time elapsed: 0.11 s  <<< FAILURE!
> Apr 01 11:42:06 java.lang.AssertionError: 
> Apr 01 11:42:06 
> Apr 01 11:42:06 Expected size: 6 but was: 4 in:
> Apr 01 11:42:06 [Record @ (undef) : 
> +I(c1,0,1969-12-31T23:59:55,1970-01-01T00:00:05),
> Apr 01 11:42:06 Record @ (undef) : 
> +I(c1,0,1970-01-01T00:00,1970-01-01T00:00:10),
> Apr 01 11:42:06 Record @ (undef) : 
> +I(c1,1,1970-01-01T00:00:05,1970-01-01T00:00:15),
> Apr 01 11:42:06 Record @ (undef) : 
> +I(c1,2,1970-01-01T00:00:10,1970-01-01T00:00:20)]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-03-13 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski updated FLINK-29114:
---
Priority: Blocker  (was: Major)

> TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with 
> result mismatch 
> --
>
> Key: FLINK-29114
> URL: https://issues.apache.org/jira/browse/FLINK-29114
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.15.0, 1.19.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Jane Chan
>Priority: Blocker
>  Labels: auto-deprioritized-major, pull-request-available, 
> test-stability
> Fix For: 1.20.0
>
> Attachments: FLINK-29114.log, image-2024-02-27-15-23-49-494.png, 
> image-2024-02-27-15-26-07-657.png, image-2024-02-27-15-32-48-317.png
>
>
> It could be reproduced locally by repeating tests. Usually about 100 
> iterations are enough to have several failed tests
> {noformat}
> [ERROR] Tests run: 13, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 1.664 s <<< FAILURE! - in 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase
> [ERROR] 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableHintWithLogicalTableScanReuse
>   Time elapsed: 0.108 s  <<< FAILURE!
> java.lang.AssertionError: expected: 3,2,Hello world, 3,2,Hello world, 3,2,Hello world)> but was: 2,2,Hello, 2,2,Hello, 3,2,Hello world, 3,2,Hello world)>
>     at org.junit.Assert.fail(Assert.java:89)
>     at org.junit.Assert.failNotEquals(Assert.java:835)
>     at org.junit.Assert.assertEquals(Assert.java:120)
>     at org.junit.Assert.assertEquals(Assert.java:146)
>     at 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableHintWithLogicalTableScanReuse(TableSourceITCase.scala:428)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>     at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>     at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>     at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>     at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>     at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>     at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
>     at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
>     at 
> 

[jira] [Commented] (FLINK-34227) Job doesn't disconnect from ResourceManager

2024-03-13 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17826775#comment-17826775
 ] 

Matthias Pohl commented on FLINK-34227:
---

https://github.com/apache/flink/actions/runs/8258416104/job/22590931762#step:10:17193

> Job doesn't disconnect from ResourceManager
> ---
>
> Key: FLINK-34227
> URL: https://issues.apache.org/jira/browse/FLINK-34227
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Critical
>  Labels: github-actions, pull-request-available, test-stability
> Attachments: FLINK-34227.7e7d69daebb438b8d03b7392c9c55115.log, 
> FLINK-34227.log
>
>
> https://github.com/XComp/flink/actions/runs/7634987973/job/20800205972#step:10:14557
> {code}
> [...]
> "main" #1 prio=5 os_prio=0 tid=0x7f4b7000 nid=0x24ec0 waiting on 
> condition [0x7fccce1eb000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0xbdd52618> (a 
> java.util.concurrent.CompletableFuture$Signaller)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
>   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
>   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:2131)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:2099)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:2077)
>   at 
> org.apache.flink.streaming.api.scala.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.scala:876)
>   at 
> org.apache.flink.table.planner.runtime.stream.sql.WindowDistinctAggregateITCase.testHopWindow_Cube(WindowDistinctAggregateITCase.scala:550)
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34569) 'Streaming File Sink s3 end-to-end test' failed

2024-03-13 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17826766#comment-17826766
 ] 

Matthias Pohl edited comment on FLINK-34569 at 3/13/24 3:10 PM:


Sure, go ahead. Thanks for looking into it. I assigned the ticket to you.


was (Author: mapohl):
Sure, go ahead. I assigned the ticket to you.

> 'Streaming File Sink s3 end-to-end test' failed
> ---
>
> Key: FLINK-34569
> URL: https://issues.apache.org/jira/browse/FLINK-34569
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Assignee: Rob Young
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58026=logs=af184cdd-c6d8-5084-0b69-7e9c67b35f7a=0f3adb59-eefa-51c6-2858-3654d9e0749d=3957
> {code}
> Mar 02 04:12:57 Waiting until all values have been produced
> Unable to find image 'stedolan/jq:latest' locally
> Error: No such container: 
> docker: Error response from daemon: Get "https://registry-1.docker.io/v2/": 
> read tcp 10.1.0.97:42214->54.236.113.205:443: read: connection reset by peer.
> See 'docker run --help'.
> Mar 02 04:12:58 Number of produced values 0/6
> Error: No such container: 
> Unable to find image 'stedolan/jq:latest' locally
> latest: Pulling from stedolan/jq
> [DEPRECATION NOTICE] Docker Image Format v1, and Docker Image manifest 
> version 2, schema 1 support will be removed in an upcoming release. Suggest 
> the author of docker.io/stedolan/jq:latest to upgrade the image to the OCI 
> Format, or Docker Image manifest v2, schema 2. More information at 
> https://docs.docker.com/go/deprecated-image-specs/
> 237d5fcd25cf: Pulling fs layer
> [...]
> 4dae4fd48813: Pull complete
> Digest: 
> sha256:a61ed0bca213081b64be94c5e1b402ea58bc549f457c2682a86704dd55231e09
> Status: Downloaded newer image for stedolan/jq:latest
> parse error: Invalid numeric literal at line 1, column 6
> Error: No such container: 
> parse error: Invalid numeric literal at line 1, column 6
> Error: No such container: 
> parse error: Invalid numeric literal at line 1, column 6
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-34569) 'Streaming File Sink s3 end-to-end test' failed

2024-03-13 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl reassigned FLINK-34569:
-

Assignee: Rob Young

> 'Streaming File Sink s3 end-to-end test' failed
> ---
>
> Key: FLINK-34569
> URL: https://issues.apache.org/jira/browse/FLINK-34569
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Assignee: Rob Young
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58026=logs=af184cdd-c6d8-5084-0b69-7e9c67b35f7a=0f3adb59-eefa-51c6-2858-3654d9e0749d=3957
> {code}
> Mar 02 04:12:57 Waiting until all values have been produced
> Unable to find image 'stedolan/jq:latest' locally
> Error: No such container: 
> docker: Error response from daemon: Get "https://registry-1.docker.io/v2/": 
> read tcp 10.1.0.97:42214->54.236.113.205:443: read: connection reset by peer.
> See 'docker run --help'.
> Mar 02 04:12:58 Number of produced values 0/6
> Error: No such container: 
> Unable to find image 'stedolan/jq:latest' locally
> latest: Pulling from stedolan/jq
> [DEPRECATION NOTICE] Docker Image Format v1, and Docker Image manifest 
> version 2, schema 1 support will be removed in an upcoming release. Suggest 
> the author of docker.io/stedolan/jq:latest to upgrade the image to the OCI 
> Format, or Docker Image manifest v2, schema 2. More information at 
> https://docs.docker.com/go/deprecated-image-specs/
> 237d5fcd25cf: Pulling fs layer
> [...]
> 4dae4fd48813: Pull complete
> Digest: 
> sha256:a61ed0bca213081b64be94c5e1b402ea58bc549f457c2682a86704dd55231e09
> Status: Downloaded newer image for stedolan/jq:latest
> parse error: Invalid numeric literal at line 1, column 6
> Error: No such container: 
> parse error: Invalid numeric literal at line 1, column 6
> Error: No such container: 
> parse error: Invalid numeric literal at line 1, column 6
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34569) 'Streaming File Sink s3 end-to-end test' failed

2024-03-13 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17826766#comment-17826766
 ] 

Matthias Pohl commented on FLINK-34569:
---

Sure, go ahead. I assigned the ticket to you.

> 'Streaming File Sink s3 end-to-end test' failed
> ---
>
> Key: FLINK-34569
> URL: https://issues.apache.org/jira/browse/FLINK-34569
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Assignee: Rob Young
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=58026=logs=af184cdd-c6d8-5084-0b69-7e9c67b35f7a=0f3adb59-eefa-51c6-2858-3654d9e0749d=3957
> {code}
> Mar 02 04:12:57 Waiting until all values have been produced
> Unable to find image 'stedolan/jq:latest' locally
> Error: No such container: 
> docker: Error response from daemon: Get "https://registry-1.docker.io/v2/": 
> read tcp 10.1.0.97:42214->54.236.113.205:443: read: connection reset by peer.
> See 'docker run --help'.
> Mar 02 04:12:58 Number of produced values 0/6
> Error: No such container: 
> Unable to find image 'stedolan/jq:latest' locally
> latest: Pulling from stedolan/jq
> [DEPRECATION NOTICE] Docker Image Format v1, and Docker Image manifest 
> version 2, schema 1 support will be removed in an upcoming release. Suggest 
> the author of docker.io/stedolan/jq:latest to upgrade the image to the OCI 
> Format, or Docker Image manifest v2, schema 2. More information at 
> https://docs.docker.com/go/deprecated-image-specs/
> 237d5fcd25cf: Pulling fs layer
> [...]
> 4dae4fd48813: Pull complete
> Digest: 
> sha256:a61ed0bca213081b64be94c5e1b402ea58bc549f457c2682a86704dd55231e09
> Status: Downloaded newer image for stedolan/jq:latest
> parse error: Invalid numeric literal at line 1, column 6
> Error: No such container: 
> parse error: Invalid numeric literal at line 1, column 6
> Error: No such container: 
> parse error: Invalid numeric literal at line 1, column 6
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-03-13 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17826763#comment-17826763
 ] 

Matthias Pohl commented on FLINK-29114:
---

It shouldn't be a problem as far as I see: if the RC is accepted, it will end 
up in 1.19.1. If it's not accepted it would even make it into 1.19.0 (you could 
mention that in the vote thread, if a new RC would be created). It's up to the 
release managers to accept it in or not. It's still a valid change because it's 
a bugfix (which is allowed to be merged during the feature freeze).

> TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with 
> result mismatch 
> --
>
> Key: FLINK-29114
> URL: https://issues.apache.org/jira/browse/FLINK-29114
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.15.0, 1.19.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Jane Chan
>Priority: Major
>  Labels: auto-deprioritized-major, pull-request-available, 
> test-stability
> Fix For: 1.20.0
>
> Attachments: FLINK-29114.log, image-2024-02-27-15-23-49-494.png, 
> image-2024-02-27-15-26-07-657.png, image-2024-02-27-15-32-48-317.png
>
>
> It could be reproduced locally by repeating tests. Usually about 100 
> iterations are enough to have several failed tests
> {noformat}
> [ERROR] Tests run: 13, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 1.664 s <<< FAILURE! - in 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase
> [ERROR] 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableHintWithLogicalTableScanReuse
>   Time elapsed: 0.108 s  <<< FAILURE!
> java.lang.AssertionError: expected: 3,2,Hello world, 3,2,Hello world, 3,2,Hello world)> but was: 2,2,Hello, 2,2,Hello, 3,2,Hello world, 3,2,Hello world)>
>     at org.junit.Assert.fail(Assert.java:89)
>     at org.junit.Assert.failNotEquals(Assert.java:835)
>     at org.junit.Assert.assertEquals(Assert.java:120)
>     at org.junit.Assert.assertEquals(Assert.java:146)
>     at 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableHintWithLogicalTableScanReuse(TableSourceITCase.scala:428)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>     at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>     at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>     at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>     at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>     at 
> 

[jira] [Resolved] (FLINK-29122) Improve robustness of FileUtils.expandDirectory()

2024-03-13 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger resolved FLINK-29122.

Resolution: Fixed

> Improve robustness of FileUtils.expandDirectory() 
> --
>
> Key: FLINK-29122
> URL: https://issues.apache.org/jira/browse/FLINK-29122
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Robert Metzger
>Assignee: Anupam Aggarwal
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> `FileUtils.expandDirectory()` can potentially write to invalid locations if 
> the zip file is invalid (contains entry names with ../).
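
For background, the class of bug being guarded against here is the well-known 
"zip slip" pattern. A generic sketch of such a guard (illustrative only, not 
the actual {{FileUtils.expandDirectory()}} implementation):

{code:java}
import java.nio.file.Path;

final class ZipSlipGuard {
    /** Resolves an archive entry against targetDir, rejecting escapes via "../". */
    static Path safeResolve(Path targetDir, String entryName) {
        Path resolved = targetDir.resolve(entryName).normalize();
        if (!resolved.startsWith(targetDir.normalize())) {
            throw new IllegalArgumentException(
                    "Archive entry escapes target directory: " + entryName);
        }
        return resolved;
    }
}
{code}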



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29122) Improve robustness of FileUtils.expandDirectory()

2024-03-13 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17826762#comment-17826762
 ] 

Robert Metzger commented on FLINK-29122:


Merged to master in 
https://github.com/apache/flink/commit/0da60ca1a4754f858cf7c52dd4f0c97ae0e1b0cb

> Improve robustness of FileUtils.expandDirectory() 
> --
>
> Key: FLINK-29122
> URL: https://issues.apache.org/jira/browse/FLINK-29122
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Robert Metzger
>Assignee: Anupam Aggarwal
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> `FileUtils.expandDirectory()` can potentially write to invalid locations if 
> the zip file is invalid (contains entry names with ../).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29122) Improve robustness of FileUtils.expandDirectory()

2024-03-13 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-29122:
---
Fix Version/s: 1.20.0

> Improve robustness of FileUtils.expandDirectory() 
> --
>
> Key: FLINK-29122
> URL: https://issues.apache.org/jira/browse/FLINK-29122
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.16.0, 1.17.0
>Reporter: Robert Metzger
>Assignee: Anupam Aggarwal
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.20.0
>
>
> `FileUtils.expandDirectory()` can potentially write to invalid locations if 
> the zip file is invalid (contains entry names with ../).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-29122][core] Improve robustness of FileUtils.expandDirectory() [flink]

2024-03-13 Thread via GitHub


rmetzger merged PR #24307:
URL: https://github.com/apache/flink/pull/24307


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-34659) How to implement global sort in latest flink datastream API

2024-03-13 Thread Junyao Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17826086#comment-17826086
 ] 

Junyao Huang edited comment on FLINK-34659 at 3/13/24 2:51 PM:
---

If you ever search hive/spark SQL/Dataframe/RDD,

you will find that a global sort is a basic example on the first page of results.

But I cannot even find a standard solution in the Flink docs;

it only appears in a Table API comment.

[https://github.com/apache/flink/blob/7d0111dfab640f2f590dd710d76de927c86cf83e/docs/content/docs/dev/table/data_stream_api.md?plain=1#L843]

Would this not be a problem/bug?

Especially since stream/batch unification is currently a major part of the Flink roadmap.

Honestly, I am shocked/upset by this reply.


was (Author: pegasas):
If you ever search on hive/spark SQL/Dataframe/RDD,

you will found that global sort is basic example on first site.

 

But I can not even see a standard solution on flink docs,

only finds that in table api comments.

[https://github.com/apache/flink/blob/7d0111dfab640f2f590dd710d76de927c86cf83e/docs/content/docs/dev/table/data_stream_api.md?plain=1#L843]

 

Would this will not be a problem/bug?

 

Especially Stream/Batch Unify is currently major part on Flink Roadmap.

 

Honestly I am shock on this reply. 

> How to implement global sort in latest flink datastream API
> ---
>
> Key: FLINK-34659
> URL: https://issues.apache.org/jira/browse/FLINK-34659
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.18.1
>Reporter: Junyao Huang
>Priority: Major
> Attachments: image-2024-03-13-11-21-57-846.png
>
>
> [https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/datastream/dataset_migration/#%E7%AC%AC%E4%B8%89%E7%B1%BB]
>  
> !image-2024-03-13-11-21-57-846.png!
>  
> {{will this cause OOM in streaming execution mode?}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34659) How to implement global sort in latest flink datastream API

2024-03-13 Thread Junyao Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17826086#comment-17826086
 ] 

Junyao Huang edited comment on FLINK-34659 at 3/13/24 2:50 PM:
---

If you ever search hive/spark SQL/Dataframe/RDD,

you will find that a global sort is a basic example on the first page of results.

But I cannot even find a standard solution in the Flink docs;

it only appears in a Table API comment.

[https://github.com/apache/flink/blob/7d0111dfab640f2f590dd710d76de927c86cf83e/docs/content/docs/dev/table/data_stream_api.md?plain=1#L843]

Would this not be a problem/bug?

Especially since stream/batch unification is currently a major part of the Flink roadmap.

Honestly, I am shocked by this reply.


was (Author: pegasas):
If you ever search on hive/spark SQL/Dataframe/RDD,

you will found that global sort is basic example on first site.

 

But I can even see a standard solution on flink docs,

only finds that in table api comments.

https://github.com/apache/flink/blob/7d0111dfab640f2f590dd710d76de927c86cf83e/docs/content/docs/dev/table/data_stream_api.md?plain=1#L843

 

Would this will not be a problem/bug?

 

Especially Stream/Batch Unify is currently major part on Flink Roadmap.

 

Honestly I am shock on this reply. 

> How to implement global sort in latest flink datastream API
> ---
>
> Key: FLINK-34659
> URL: https://issues.apache.org/jira/browse/FLINK-34659
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.18.1
>Reporter: Junyao Huang
>Priority: Major
> Attachments: image-2024-03-13-11-21-57-846.png
>
>
> [https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/datastream/dataset_migration/#%E7%AC%AC%E4%B8%89%E7%B1%BB]
>  
> !image-2024-03-13-11-21-57-846.png!
>  
> {{will this cause OOM in streaming execution mode?}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34659) How to implement global sort in latest flink datastream API

2024-03-13 Thread Junyao Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17826086#comment-17826086
 ] 

Junyao Huang commented on FLINK-34659:
--

If you ever search on hive/spark SQL/Dataframe/RDD,

you will found that global sort is basic example on first site.

 

But I can even see a standard solution on flink docs,

only finds that in table api comments.

https://github.com/apache/flink/blob/7d0111dfab640f2f590dd710d76de927c86cf83e/docs/content/docs/dev/table/data_stream_api.md?plain=1#L843

 

Would this will not be a problem/bug?

 

Especially Stream/Batch Unify is currently major part on Flink Roadmap.

 

Honestly I am shock on this reply. 

> How to implement global sort in latest flink datastream API
> ---
>
> Key: FLINK-34659
> URL: https://issues.apache.org/jira/browse/FLINK-34659
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.18.1
>Reporter: Junyao Huang
>Priority: Major
> Attachments: image-2024-03-13-11-21-57-846.png
>
>
> [https://nightlies.apache.org/flink/flink-docs-master/zh/docs/dev/datastream/dataset_migration/#%E7%AC%AC%E4%B8%89%E7%B1%BB]
>  
> !image-2024-03-13-11-21-57-846.png!
>  
> {{will this cause OOM in streaming execution mode?}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34227][runtime] Fixes concurrency issue in JobMaster shutdown [flink]

2024-03-13 Thread via GitHub


flinkbot commented on PR #24489:
URL: https://github.com/apache/flink/pull/24489#issuecomment-1994454761

   
   ## CI report:
   
   * 2cc6490c77495bfb3e46c30e1ff10409abacc770 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34227) Job doesn't disconnect from ResourceManager

2024-03-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34227:
---
Labels: github-actions pull-request-available test-stability  (was: 
github-actions test-stability)

> Job doesn't disconnect from ResourceManager
> ---
>
> Key: FLINK-34227
> URL: https://issues.apache.org/jira/browse/FLINK-34227
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Critical
>  Labels: github-actions, pull-request-available, test-stability
> Attachments: FLINK-34227.7e7d69daebb438b8d03b7392c9c55115.log, 
> FLINK-34227.log
>
>
> https://github.com/XComp/flink/actions/runs/7634987973/job/20800205972#step:10:14557
> {code}
> [...]
> "main" #1 prio=5 os_prio=0 tid=0x7f4b7000 nid=0x24ec0 waiting on 
> condition [0x7fccce1eb000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0xbdd52618> (a 
> java.util.concurrent.CompletableFuture$Signaller)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
>   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
>   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:2131)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:2099)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:2077)
>   at 
> org.apache.flink.streaming.api.scala.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.scala:876)
>   at 
> org.apache.flink.table.planner.runtime.stream.sql.WindowDistinctAggregateITCase.testHopWindow_Cube(WindowDistinctAggregateITCase.scala:550)
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-34227][runtime] Fixes concurrency issue in JobMaster shutdown [flink]

2024-03-13 Thread via GitHub


XComp opened a new pull request, #24489:
URL: https://github.com/apache/flink/pull/24489

   ## What is the purpose of the change
   
   tba
   
   ## Brief change log
   
   tba
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: yes
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34655) Autoscaler doesn't work for flink 1.15

2024-03-13 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17826054#comment-17826054
 ] 

Gyula Fora commented on FLINK-34655:


[~mxm] I would be hesitant to try to backport these changes to 1.15/1.16, the 
community doesn't generally backport new features to older releases and also 
these are already out of the supported version scope of Flink core anyways. 

For 1.15 we would have to backport the aggregated-metrics changes, which are 
not backward compatible with the current 1.15 REST API, so that is not possible to do.

> Autoscaler doesn't work for flink 1.15
> --
>
> Key: FLINK-34655
> URL: https://issues.apache.org/jira/browse/FLINK-34655
> Project: Flink
>  Issue Type: Bug
>  Components: Autoscaler
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.8.0
>
>
> flink-kubernetes-operator is committed to supporting the latest 4 Flink minor 
> versions, and the autoscaler is a part of flink-kubernetes-operator. Currently, 
> the latest 4 Flink minor versions are 1.15, 1.16, 1.17 and 1.18.
> But the autoscaler doesn't work for Flink 1.15.
> h2. Root cause: 
> * FLINK-28310 added some properties in IOMetricsInfo in flink-1.16
> * IOMetricsInfo is a part of JobDetailsInfo
> * JobDetailsInfo is necessary for autoscaler [1]
> * Flink's RestClient doesn't tolerate any missing property when deserializing 
> the JSON
> That means that the RestClient after 1.15 cannot fetch JobDetailsInfo for 
> 1.15 jobs.
> h2. How to fix it properly?
> - [[FLINK-34655](https://issues.apache.org/jira/browse/FLINK-34655)] Copy 
> IOMetricsInfo to flink-autoscaler-standalone module
> - Removing them after 1.15 are not supported
> [1] 
> https://github.com/apache/flink-kubernetes-operator/blob/ede1a610b3375d31a2e82287eec67ace70c4c8df/flink-autoscaler/src/main/java/org/apache/flink/autoscaler/ScalingMetricCollector.java#L109
> [2] 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-401%3A+REST+API+JSON+response+deserialization+unknown+field+tolerance
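
To illustrate the failure mode generically (the class and field names below are 
hypothetical, not Flink's actual {{IOMetricsInfo}}, and the mapper 
configuration is only an analogy for the strict behavior described above): a 
constructor-bound Jackson mapping that requires all creator properties will 
reject a response from an older server that lacks a newer field.

{code:java}
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

public class StrictDeserializationSketch {
    static class Metrics {
        final long bytesRead;
        final long accumulatedBusy; // imagine a field that only exists since 1.16

        @JsonCreator
        Metrics(
                @JsonProperty("bytes-read") long bytesRead,
                @JsonProperty("accumulated-busy") long accumulatedBusy) {
            this.bytesRead = bytesRead;
            this.accumulatedBusy = accumulatedBusy;
        }
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper strict =
                new ObjectMapper()
                        .enable(DeserializationFeature.FAIL_ON_MISSING_CREATOR_PROPERTIES);
        // A 1.15-style payload without the newer field: the strict mapper throws
        // MismatchedInputException instead of defaulting the missing property.
        strict.readValue("{\"bytes-read\": 42}", Metrics.class);
    }
}
{code}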



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34227) Job doesn't disconnect from ResourceManager

2024-03-13 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17826033#comment-17826033
 ] 

Matthias Pohl edited comment on FLINK-34227 at 3/13/24 1:19 PM:


So, we should fix the {{runAfterwards}} issue Chesnay raised. But I continued 
investigating it because I found it strange that it only appears in the 
{{AdaptiveScheduler}} test profile even though we're using the same logic in 
[SchedulerBase#close|https://github.com/apache/flink/blob/d4e0084649c019c536ee1e44bab15c8eca01bf13/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerBase.java#L674]
 as mentioned in the previous comment.

The problem seems to be that the {{JobMaster}} disconnecting from the 
{{ResourceManager}} happens twice. The second disconnect triggers a reconnect 
and re-registration of the {{JobMaster}} in the {{ResourceManager}}. This can 
theoretically happen because the first disconnect will trigger a 
{{JobMaster#disconnectResourceManager}} in 
[ResourceManager#closeJobManagerConnection in line 
1150|https://github.com/apache/flink/blob/d6a4eb966fbc47277e07b79e7c64939a62eb1d54/flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManager.java#L1150].
 If this RPC call makes it to the {{JobMaster}} before its {{RPCEndpoint}} is 
shut down (which can happen due to the concurrency introduced by the 
{{runAfterwards}} issue), it will get processed once more, leading to the 
reconnect because the {{JobMaster#resourceManagerAddress}} is still set (which 
is the condition for {{JobMaster#isConnectingToResourceManager}} which is 
called in 
[JobMaster#disconnectResourceManager|https://github.com/apache/flink/blob/7d0111dfab640f2f590dd710d76de927c86cf83e/flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/JobMaster.java#L847]).
 With the {{resourceManagerAddress}} being set and pointing to the correct 
{{ResourceManager}}, reconnection would be triggered.

But this still doesn't answer the question of why it only happens with the 
{{AdaptiveScheduler}} test profile.
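
For readers following along, a condensed sketch of the guard in question 
(paraphrased from the linked {{JobMaster}} code, with stubbed types; not the 
verbatim Flink implementation):

{code:java}
// Simplified paraphrase of the guard in JobMaster#disconnectResourceManager:
class JobMasterSketch {
    private String resourceManagerAddress; // still set during shutdown -> problem

    void disconnectResourceManager(String address, Exception cause) {
        // True as long as resourceManagerAddress is still set and matches, which
        // is why a late, duplicate disconnect RPC triggers a reconnect.
        if (isConnectingToResourceManager(address)) {
            reconnectToResourceManager(cause);
        }
    }

    private boolean isConnectingToResourceManager(String address) {
        return resourceManagerAddress != null && resourceManagerAddress.equals(address);
    }

    private void reconnectToResourceManager(Exception cause) {
        // re-registration with the ResourceManager would start here
    }
}
{code}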


was (Author: mapohl):
So, we should fix the {{runAfterwards}} issue Chesnay raised. But I continued 
investigating it because I found it strange that it only appears in the 
{{AdaptiveScheduler}} test profile even though we're using the same logic in 
[SchedulerBase#close|https://github.com/apache/flink/blob/d4e0084649c019c536ee1e44bab15c8eca01bf13/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerBase.java#L674]
 as mentioned in the previous comment.

The problem seems to be that the {{JobMaster}} disconnecting from the 
{{ResourceManager}} happens twice. The second disconnect triggers a reconnect 
and re-registration of the {{JobMaster}} in the {{ResourceManager}}. This can 
theoretically happen because the first disconnect will trigger a 
{{JobMaster#disconnectResourceManager}} in 
[ResourceManager#closeJobManagerConnection in line 
1150|https://github.com/apache/flink/blob/d6a4eb966fbc47277e07b79e7c64939a62eb1d54/flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManager.java#L1150].
 If this RPC call makes it to the {{JobMaster}} before its {{RPCEndpoint}} is 
shutdown (which can happen due to the concurrency introduced by the 
{{runAfterwards}} issue), it will get processed once more leading to the 
reconnect because the {{JobMaster#resourceManagerAddress}} is still set (which 
is the condition for {{JobMaster#isConnectingToResourceManager}} which is 
called in 
[JobMaster#disconnectResourceManager|https://github.com/apache/flink/blob/7d0111dfab640f2f590dd710d76de927c86cf83e/flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/JobMaster.java#L847]).
 With the {{resourceManagerAddress}} being set and pointing to the correct 
{{ResourceManager}}, reconnection would be triggered.

> Job doesn't disconnect from ResourceManager
> ---
>
> Key: FLINK-34227
> URL: https://issues.apache.org/jira/browse/FLINK-34227
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Critical
>  Labels: github-actions, test-stability
> Attachments: FLINK-34227.7e7d69daebb438b8d03b7392c9c55115.log, 
> FLINK-34227.log
>
>
> https://github.com/XComp/flink/actions/runs/7634987973/job/20800205972#step:10:14557
> {code}
> [...]
> "main" #1 prio=5 os_prio=0 tid=0x7f4b7000 nid=0x24ec0 waiting on 
> condition [0x7fccce1eb000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0xbdd52618> (a 
> java.util.concurrent.CompletableFuture$Signaller)
>   at 

Re: [PR] Add announcement blog post for Flink 1.19 [flink-web]

2024-03-13 Thread via GitHub


lincoln-lil commented on code in PR #721:
URL: https://github.com/apache/flink-web/pull/721#discussion_r1523241548


##
docs/content/posts/2024-03-xx-release-1.19.0.md:
##
@@ -0,0 +1,501 @@
+---
+authors:
+- LincolnLee:
+  name: "Lincoln Lee"
+  twitter: lincoln_86xy
+
+date: "2024-03-xxT22:00:00Z"
+subtitle: ""
+title: Announcing the Release of Apache Flink 1.19
+aliases:
+- /news/2024/03/xx/release-1.19.0.html
+---
+
+The Apache Flink PMC is pleased to announce the release of Apache Flink 
1.19.0. As usual, we are
+looking at a packed release with a wide variety of improvements and new 
features. Overall, 162
+people contributed to this release completing 33 FLIPs and 600+ issues. Thank 
you!
+
+Let's dive into the highlights.
+
+# Flink SQL Improvements
+
+## Custom Parallelism for Table/SQL Sources
+
+Now in Flink 1.19, you can set a custom parallelism for performance tuning via 
the `scan.parallelism`
+option. The first available connector is DataGen (Kafka connector is on the 
way). Here is an example
+using SQL Client:
+
+```sql
+-- set parallelism within the ddl
+CREATE TABLE Orders (
+order_number BIGINT,
+price        DECIMAL(32,2),
+buyer        ROW<first_name STRING, last_name STRING>,
+order_time   TIMESTAMP(3)
+) WITH (
+'connector' = 'datagen',
+'scan.parallelism' = '4'
+);
+
+-- or set parallelism via dynamic table option
+SELECT * FROM Orders /*+ OPTIONS('scan.parallelism'='4') */;
+```
+
+**More Information**
+* 
[Documentation](https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/dev/table/sourcessinks/#scan-table-source)
+* [FLIP-367: Support Setting Parallelism for Table/SQL 
Sources](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=263429150)
+
+
+## Configurable SQL Gateway Java Options
+
+A new option `env.java.opts.sql-gateway` for specifying the Java options is 
introduced in Flink 1.19,
+so you can fine-tune the memory settings, garbage collection behavior, and 
other relevant Java
+parameters for SQL Gateway.
+
+**More Information**
+* [FLINK-33203](https://issues.apache.org/jira/browse/FLINK-33203)
+
+
+## Configure Different State TTLs Using SQL Hint
+
+Starting from Flink 1.18, Table API and SQL users can set state time-to-live 
(TTL) individually for
+stateful operators via the SQL compiled plan. In Flink 1.19, users have a more 
flexible way to
+specify custom TTL values for regular joins and group aggregations directly 
within their queries by [utilizing the STATE_TTL 
hint](https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/dev/table/sql/queries/hints/#state-ttl-hints).
+This improvement means that you no longer need to alter your compiled plan to 
set specific TTLs for
+these frequently used operators. With the introduction of `STATE_TTL` hints, 
you can streamline your workflow and
+dynamically adjust the TTL based on your operational requirements.
+
+Here is an example:
+```sql
+-- set state ttl for join
+SELECT /*+ STATE_TTL('Orders'= '1d', 'Customers' = '20d') */ *
+FROM Orders LEFT OUTER JOIN Customers
+ON Orders.o_custkey = Customers.c_custkey;
+
+-- set state ttl for aggregation
+SELECT /*+ STATE_TTL('o' = '1d') */ o_orderkey, SUM(o_totalprice) AS revenue
+FROM Orders AS o
+GROUP BY o_orderkey;
+```
+
+**More Information**
+* 
[Documentation](https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/dev/table/sql/queries/hints/#state-ttl-hints)
+* [FLIP-373: Support Configuring Different State TTLs using SQL 
Hint](https://cwiki.apache.org/confluence/display/FLINK/FLIP-373%3A+Support+Configuring+Different+State+TTLs+using+SQL+Hint)
+
+
+## Named Parameters for Functions and Procedures
+
+Named parameters now can be used when calling a function or stored procedure. 
With named parameters,
+users do not need to strictly specify the parameter position, just specify the 
parameter name and its
+corresponding value. At the same time, if non-essential parameters are not 
specified, they will default to being filled with null.
+
+Here's an example of defining a function with one mandatory parameter and two 
optional parameters using named parameters:
+```java
+public static class NamedArgumentsTableFunction extends TableFunction<String> {
+
+   @FunctionHint(
+   output = @DataTypeHint("STRING"),
+   arguments = {
+   @ArgumentHint(name = "in1", isOptional 
= false, type = @DataTypeHint("STRING")),
+   @ArgumentHint(name = "in2", isOptional 
= true, type = @DataTypeHint("STRING")),
+   @ArgumentHint(name = "in3", isOptional 
= true, type = @DataTypeHint("STRING"))})
+   public void eval(String arg1, String arg2, String arg3) {
+   collect(arg1 + ", " + arg2 + "," + arg3);
+   }
+
+}
+```
+When calling the function in SQL, parameters can be specified by name, for 
example:
+```sql
+SELECT * FROM TABLE(myFunction(in1 => 'v1', in3 => 'v3', in2 => 'v2'));

Re: [PR] [FLINK-34533][release] Add release note for version 1.19 [flink]

2024-03-13 Thread via GitHub


lincoln-lil commented on PR #24394:
URL: https://github.com/apache/flink/pull/24394#issuecomment-1994381592

   @PatrickRen Thanks for reviewing this! I've addressed your comments and 
updated the PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34516] Move CheckpointingMode to flink-core [flink]

2024-03-13 Thread via GitHub


Zakelly commented on PR #24381:
URL: https://github.com/apache/flink/pull/24381#issuecomment-1994372709

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-34227) Job doesn't disconnect from ResourceManager

2024-03-13 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17826033#comment-17826033
 ] 

Matthias Pohl edited comment on FLINK-34227 at 3/13/24 12:54 PM:
-

So, we should fix the {{runAfterwards}} issue Chesnay raised. But I continued 
investigating it because I found it strange that it only appears in the 
{{AdaptiveScheduler}} test profile even though we're using the same logic in 
[SchedulerBase#close|https://github.com/apache/flink/blob/d4e0084649c019c536ee1e44bab15c8eca01bf13/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerBase.java#L674]
 as mentioned in the previous comment.

The problem seems to be that the {{JobMaster}} disconnecting from the 
{{ResourceManager}} happens twice. The second disconnect triggers a reconnect 
and a re-registration of the {{JobMaster}} with the {{ResourceManager}}. This can 
theoretically happen because the first disconnect will trigger a 
{{JobMaster#disconnectResourceManager}} in 
[ResourceManager#closeJobManagerConnection in line 
1150|https://github.com/apache/flink/blob/d6a4eb966fbc47277e07b79e7c64939a62eb1d54/flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManager.java#L1150].
 If this RPC call reaches the {{JobMaster}} before its {{RPCEndpoint}} is shut 
down (which can happen due to the concurrency introduced by the 
{{runAfterwards}} issue), it will be processed once more, leading to a reconnect 
because {{JobMaster#resourceManagerAddress}} is still set (the condition checked 
by {{JobMaster#isConnectingToResourceManager}}, which is called from 
[JobMaster#disconnectResourceManager|https://github.com/apache/flink/blob/7d0111dfab640f2f590dd710d76de927c86cf83e/flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/JobMaster.java#L847]).
 With the {{resourceManagerAddress}} still set and pointing to the correct 
{{ResourceManager}}, a reconnection is triggered.
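
For illustration, a minimal self-contained sketch of the guard described above 
(simplified and hypothetical; the method names mirror the {{JobMaster}} methods 
linked here, but this is not the actual implementation):

{code:java}
// Simplified sketch of the suspected race: after the first disconnect,
// resourceManagerAddress has not been cleared yet, so a second disconnect
// RPC that arrives before the endpoint is shut down passes the guard and
// triggers a reconnect.
class JobMasterSketch {

    // Not cleared until the endpoint is fully shut down.
    private String resourceManagerAddress;
    private String resourceManagerId;

    void disconnectResourceManager(String requestingResourceManagerId) {
        if (isConnectingToResourceManager(requestingResourceManagerId)) {
            reconnectToResourceManager(); // the second disconnect RPC ends up here
        }
    }

    private boolean isConnectingToResourceManager(String requestingResourceManagerId) {
        // The guard only checks that an address is still set and matches the caller.
        return resourceManagerAddress != null
                && requestingResourceManagerId.equals(resourceManagerId);
    }

    private void reconnectToResourceManager() {
        // Closes the current connection and re-registers with the
        // ResourceManager, which is why the job re-appears at the RM
        // even though shutdown has already started.
    }
}
{code}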


was (Author: mapohl):
One other theory on that issue:
I agree that we should fix the {{runAfterwards}} issue you raised. But I 
continued investigating it because I found it strange that it only appears in 
the {{AdaptiveScheduler}} test profile even though we're using the same logic 
in 
[SchedulerBase#close|https://github.com/apache/flink/blob/d4e0084649c019c536ee1e44bab15c8eca01bf13/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerBase.java#L674]
 as mentioned in the previous comment.

But now on to another theory: The problem seems to be that the {{JobMaster}} 
disconnecting from the {{ResourceManager}} happens twice. The second disconnect 
triggers a reconnect and a re-registration of the {{JobMaster}} with the 
{{ResourceManager}}. This can theoretically happen because the first disconnect 
will trigger a {{JobMaster#disconnectResourceManager}} in 
[ResourceManager#closeJobManagerConnection in line 
1150|https://github.com/apache/flink/blob/d6a4eb966fbc47277e07b79e7c64939a62eb1d54/flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManager.java#L1150].
 If this RPC call reaches the JobMaster before its RPCEndpoint is shut down, it 
will be processed once more, leading to a reconnect because 
{{JobMaster#resourceManagerAddress}} is still set (the condition checked by 
{{JobMaster#isConnectingToResourceManager}}, which is called from 
[JobMaster#disconnectResourceManager|https://github.com/apache/flink/blob/7d0111dfab640f2f590dd710d76de927c86cf83e/flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/JobMaster.java#L847]).
 With the {{resourceManagerAddress}} still set and pointing to the correct 
{{ResourceManager}}, a reconnection is triggered.

> Job doesn't disconnect from ResourceManager
> ---
>
> Key: FLINK-34227
> URL: https://issues.apache.org/jira/browse/FLINK-34227
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Critical
>  Labels: github-actions, test-stability
> Attachments: FLINK-34227.7e7d69daebb438b8d03b7392c9c55115.log, 
> FLINK-34227.log
>
>
> https://github.com/XComp/flink/actions/runs/7634987973/job/20800205972#step:10:14557
> {code}
> [...]
> "main" #1 prio=5 os_prio=0 tid=0x7f4b7000 nid=0x24ec0 waiting on 
> condition [0x7fccce1eb000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0xbdd52618> (a 
> java.util.concurrent.CompletableFuture$Signaller)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
>   at 
> 

[jira] [Commented] (FLINK-34227) Job doesn't disconnect from ResourceManager

2024-03-13 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17826033#comment-17826033
 ] 

Matthias Pohl commented on FLINK-34227:
---

One other theory on that issue:
I agree that we should fix the {{runAfterwards}} issue you raised. But I 
continued investigating it because I found it strange that it only appears in 
the {{AdaptiveScheduler}} test profile even though we're using the same logic 
in 
[SchedulerBase#close|https://github.com/apache/flink/blob/d4e0084649c019c536ee1e44bab15c8eca01bf13/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerBase.java#L674]
 as mentioned in the previous comment.

But now on to another theory: The problem seems to be that the {{JobMaster}} 
disconnecting from the {{ResourceManager}} happens twice. The second disconnect 
triggers a reconnect and a re-registration of the {{JobMaster}} with the 
{{ResourceManager}}. This can theoretically happen because the first disconnect 
will trigger a {{JobMaster#disconnectResourceManager}} in 
[ResourceManager#closeJobManagerConnection in line 
1150|https://github.com/apache/flink/blob/d6a4eb966fbc47277e07b79e7c64939a62eb1d54/flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManager.java#L1150].
 If this RPC call reaches the JobMaster before its RPCEndpoint is shut down, it 
will be processed once more, leading to a reconnect because 
{{JobMaster#resourceManagerAddress}} is still set (the condition checked by 
{{JobMaster#isConnectingToResourceManager}}, which is called from 
[JobMaster#disconnectResourceManager|https://github.com/apache/flink/blob/7d0111dfab640f2f590dd710d76de927c86cf83e/flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/JobMaster.java#L847]).

> Job doesn't disconnect from ResourceManager
> ---
>
> Key: FLINK-34227
> URL: https://issues.apache.org/jira/browse/FLINK-34227
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Critical
>  Labels: github-actions, test-stability
> Attachments: FLINK-34227.7e7d69daebb438b8d03b7392c9c55115.log, 
> FLINK-34227.log
>
>
> https://github.com/XComp/flink/actions/runs/7634987973/job/20800205972#step:10:14557
> {code}
> [...]
> "main" #1 prio=5 os_prio=0 tid=0x7f4b7000 nid=0x24ec0 waiting on 
> condition [0x7fccce1eb000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0xbdd52618> (a 
> java.util.concurrent.CompletableFuture$Signaller)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
>   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
>   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:2131)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:2099)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:2077)
>   at 
> org.apache.flink.streaming.api.scala.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.scala:876)
>   at 
> org.apache.flink.table.planner.runtime.stream.sql.WindowDistinctAggregateITCase.testHopWindow_Cube(WindowDistinctAggregateITCase.scala:550)
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34227) Job doesn't disconnect from ResourceManager

2024-03-13 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17826033#comment-17826033
 ] 

Matthias Pohl edited comment on FLINK-34227 at 3/13/24 12:28 PM:
-

One other theory on that issue:
I agree that we should fix the {{runAfterwards}} issue you raised. But I 
continued investigating it because I found it strange that it only appears in 
the {{AdaptiveScheduler}} test profile even though we're using the same logic 
in 
[SchedulerBase#close|https://github.com/apache/flink/blob/d4e0084649c019c536ee1e44bab15c8eca01bf13/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerBase.java#L674]
 as mentioned in the previous comment.

But now on to another theory: The problem seems to be that the {{JobMaster}} 
disconnecting from the {{ResourceManager}} happens twice. The second disconnect 
triggers a reconnect and a re-registration of the {{JobMaster}} with the 
{{ResourceManager}}. This can theoretically happen because the first disconnect 
will trigger a {{JobMaster#disconnectResourceManager}} in 
[ResourceManager#closeJobManagerConnection in line 
1150|https://github.com/apache/flink/blob/d6a4eb966fbc47277e07b79e7c64939a62eb1d54/flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManager.java#L1150].
 If this RPC call reaches the JobMaster before its RPCEndpoint is shut down, it 
will be processed once more, leading to a reconnect because 
{{JobMaster#resourceManagerAddress}} is still set (the condition checked by 
{{JobMaster#isConnectingToResourceManager}}, which is called from 
[JobMaster#disconnectResourceManager|https://github.com/apache/flink/blob/7d0111dfab640f2f590dd710d76de927c86cf83e/flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/JobMaster.java#L847]).
 With the {{resourceManagerAddress}} still set and pointing to the correct 
{{ResourceManager}}, a reconnection is triggered.


was (Author: mapohl):
One other theory on that issue:
I agree that we should fix the {{runAfterwards}} issue you raised. But I 
continued investigating it because I found it strange that it only appears in 
the {{AdaptiveScheduler}} test profile even though we're using the same logic 
in 
[SchedulerBase#close|https://github.com/apache/flink/blob/d4e0084649c019c536ee1e44bab15c8eca01bf13/flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerBase.java#L674]
 as mentioned in the previous comment.

But now on to another theory: The problem seems to be that the {{JobMaster}} 
disconnecting from the {{ResourceManager}} happens twice. The second disconnect 
triggers a reconnect and a re-registration of the {{JobMaster}} with the 
{{ResourceManager}}. This can theoretically happen because the first disconnect 
will trigger a {{JobMaster#disconnectResourceManager}} in 
[ResourceManager#closeJobManagerConnection in line 
1150|https://github.com/apache/flink/blob/d6a4eb966fbc47277e07b79e7c64939a62eb1d54/flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManager.java#L1150].
 If this RPC call reaches the JobMaster before its RPCEndpoint is shut down, it 
will be processed once more, leading to a reconnect because 
{{JobMaster#resourceManagerAddress}} is still set (the condition checked by 
{{JobMaster#isConnectingToResourceManager}}, which is called from 
[JobMaster#disconnectResourceManager|https://github.com/apache/flink/blob/7d0111dfab640f2f590dd710d76de927c86cf83e/flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/JobMaster.java#L847]).

> Job doesn't disconnect from ResourceManager
> ---
>
> Key: FLINK-34227
> URL: https://issues.apache.org/jira/browse/FLINK-34227
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0, 1.18.1
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Critical
>  Labels: github-actions, test-stability
> Attachments: FLINK-34227.7e7d69daebb438b8d03b7392c9c55115.log, 
> FLINK-34227.log
>
>
> https://github.com/XComp/flink/actions/runs/7634987973/job/20800205972#step:10:14557
> {code}
> [...]
> "main" #1 prio=5 os_prio=0 tid=0x7f4b7000 nid=0x24ec0 waiting on 
> condition [0x7fccce1eb000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0xbdd52618> (a 
> java.util.concurrent.CompletableFuture$Signaller)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
>   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
>   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
>   

[jira] [Commented] (FLINK-34655) Autoscaler doesn't work for flink 1.15

2024-03-13 Thread Maximilian Michels (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17826015#comment-17826015
 ] 

Maximilian Michels commented on FLINK-34655:


Thanks for raising awareness of the Flink version compatibility, [~fanrui]! 
Although we've been using Flink Autoscaling with 1.16, it is true that only 
Flink 1.17 supports it out of the box.
{quote}In the short term, we only use the autoscaler to give suggestion instead 
of scaling directly. After our users think the parallelism calculation is 
reliable, they will have stronger motivation to upgrade the flink version.
{quote}
I understand the idea behind providing suggestions. However, it is difficult to 
assess the quality of Autoscaling decisions without applying them 
automatically. The reason is that suggestions become stale very quickly if the 
load pattern is not completely static. Even for static load patterns, if the 
user doesn't redeploy within a matter of minutes, the suggestions might already 
be stale again once the number of pending records has grown too much. In any 
case, production load patterns are rarely static, which means that autoscaling 
will inevitably trigger multiple times a day, but that is exactly where its 
real power lies. It would be great to hear about any concerns your users have 
about turning on automatic scaling. We've been operating it in production for 
about a year now.

Back to the issue here, should we think about a patch release for 1.15 / 1.16 
to add support for overriding vertex parallelism?
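
As a side note for readers, here is a minimal sketch of the deserialization 
failure mode described in the issue below (class, field, and payload names are 
invented for illustration; the real types are IOMetricsInfo/JobDetailsInfo, and 
this assumes a strict Jackson mapper comparable to what the RestClient uses):

{code:java}
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

public class StrictRestDeserializationSketch {

    // Client-side model containing a field that older servers never send.
    static class Metrics {
        final long bytesRead;
        final long accumulatedBusyTime; // only present in newer responses

        @JsonCreator
        Metrics(
                @JsonProperty("bytes-read") long bytesRead,
                @JsonProperty("accumulated-busy-time") long accumulatedBusyTime) {
            this.bytesRead = bytesRead;
            this.accumulatedBusyTime = accumulatedBusyTime;
        }
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper strict = new ObjectMapper()
                .enable(DeserializationFeature.FAIL_ON_MISSING_CREATOR_PROPERTIES);

        // An "old-server" payload without the newer field: the strict mapper
        // throws MismatchedInputException instead of filling in a default.
        strict.readValue("{\"bytes-read\": 42}", Metrics.class);
    }
}
{code}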

> Autoscaler doesn't work for flink 1.15
> --
>
> Key: FLINK-34655
> URL: https://issues.apache.org/jira/browse/FLINK-34655
> Project: Flink
>  Issue Type: Bug
>  Components: Autoscaler
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.8.0
>
>
> flink-kubernetes-operator is committed to supporting the latest 4 Flink minor 
> versions, and the autoscaler is a part of flink-kubernetes-operator. Currently, 
> the latest 4 Flink minor versions are 1.15, 1.16, 1.17 and 1.18.
> But the autoscaler doesn't work for Flink 1.15.
> h2. Root cause: 
> * FLINK-28310 added some properties to IOMetricsInfo in flink-1.16
> * IOMetricsInfo is a part of JobDetailsInfo
> * JobDetailsInfo is necessary for the autoscaler [1]
> * Flink's RestClient doesn't allow any property to be missing when 
> deserializing the JSON
> That means that a RestClient from versions after 1.15 cannot fetch 
> JobDetailsInfo for 1.15 jobs.
> h2. How to fix it properly?
> - [[FLINK-34655](https://issues.apache.org/jira/browse/FLINK-34655)] Copy 
> IOMetricsInfo to the flink-autoscaler-standalone module
> - Remove it again once 1.15 is no longer supported
> [1] 
> https://github.com/apache/flink-kubernetes-operator/blob/ede1a610b3375d31a2e82287eec67ace70c4c8df/flink-autoscaler/src/main/java/org/apache/flink/autoscaler/ScalingMetricCollector.java#L109
> [2] 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-401%3A+REST+API+JSON+response+deserialization+unknown+field+tolerance



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-25544][streaming][JUnit5 Migration] The runtime package of module flink-stream-java [flink]

2024-03-13 Thread via GitHub


ferenc-csaky commented on PR #24483:
URL: https://github.com/apache/flink/pull/24483#issuecomment-1994136699

   @Jiabao-Sun  FYI: I think I'll be finished with the review today or tomorrow.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-25544][streaming][JUnit5 Migration] The runtime package of module flink-stream-java [flink]

2024-03-13 Thread via GitHub


Jiabao-Sun commented on PR #24483:
URL: https://github.com/apache/flink/pull/24483#issuecomment-1994108222

   Thanks @JingGe for the suggestions.
   The brief change log has been updated.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-34662) Add new issue template for Flink CDC repo

2024-03-13 Thread Qingsheng Ren (Jira)
Qingsheng Ren created FLINK-34662:
-

 Summary: Add new issue template for Flink CDC repo
 Key: FLINK-34662
 URL: https://issues.apache.org/jira/browse/FLINK-34662
 Project: Flink
  Issue Type: Sub-task
  Components: Flink CDC
Reporter: Qingsheng Ren
Assignee: Qingsheng Ren


As we migrated to Apache Jira for managing issues, we need to provide a new 
template to remind users and contributors of the new way to report issues.

The reason we don't disable the issue functionality entirely is that some 
historical commits and PRs are linked to issues created before the donation, and 
disabling it would make those no longer traceable.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-03-13 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825997#comment-17825997
 ] 

Jane Chan commented on FLINK-29114:
---

Not sure whether it's a good time to pick this into release-1.19, considering 
that it's already being prepared for release.
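
In case anyone wants to reproduce this locally, here is a minimal sketch of the 
repeat-until-failure approach mentioned in the description below (the real test 
is a Scala/JUnit 4 ITCase; the class and assertion here are invented 
placeholders):

{code:java}
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.RepeatedTest;

// Runs the suspect logic 100 times; per the issue description, a flaky
// result mismatch typically surfaces within that many iterations.
class FlakyReproSketch {

    @RepeatedTest(100)
    void runUnderRepetition() {
        // Replace with the actual query / plan-reuse logic under test.
        assertEquals(4, 2 + 2);
    }
}
{code}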

> TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with 
> result mismatch 
> --
>
> Key: FLINK-29114
> URL: https://issues.apache.org/jira/browse/FLINK-29114
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.15.0, 1.19.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Jane Chan
>Priority: Major
>  Labels: auto-deprioritized-major, pull-request-available, 
> test-stability
> Fix For: 1.20.0
>
> Attachments: FLINK-29114.log, image-2024-02-27-15-23-49-494.png, 
> image-2024-02-27-15-26-07-657.png, image-2024-02-27-15-32-48-317.png
>
>
> It can be reproduced locally by repeating the tests. Usually about 100 
> iterations are enough to produce several failed tests
> {noformat}
> [ERROR] Tests run: 13, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 1.664 s <<< FAILURE! - in 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase
> [ERROR] 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableHintWithLogicalTableScanReuse
>   Time elapsed: 0.108 s  <<< FAILURE!
> java.lang.AssertionError: expected: 3,2,Hello world, 3,2,Hello world, 3,2,Hello world)> but was: 2,2,Hello, 2,2,Hello, 3,2,Hello world, 3,2,Hello world)>
>     at org.junit.Assert.fail(Assert.java:89)
>     at org.junit.Assert.failNotEquals(Assert.java:835)
>     at org.junit.Assert.assertEquals(Assert.java:120)
>     at org.junit.Assert.assertEquals(Assert.java:146)
>     at 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableHintWithLogicalTableScanReuse(TableSourceITCase.scala:428)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>     at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>     at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>     at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>     at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>     at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>     at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
>     at 
> 

Re: [PR] [FLINK-34180][docs] Migrate doc website from ververica to flink [flink-web]

2024-03-13 Thread via GitHub


PatrickRen commented on code in PR #722:
URL: https://github.com/apache/flink-web/pull/722#discussion_r1522749858


##
docs/content.zh/documentation/flink-cdc-master.md:
##
@@ -0,0 +1,27 @@
+---
+weight: 12

Review Comment:
   If we can control the order, what about re-sorting all the titles in 
alphabetical order?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-29114) TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with result mismatch

2024-03-13 Thread Jane Chan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jane Chan updated FLINK-29114:
--
Fix Version/s: 1.20.0

> TableSourceITCase#testTableHintWithLogicalTableScanReuse sometimes fails with 
> result mismatch 
> --
>
> Key: FLINK-29114
> URL: https://issues.apache.org/jira/browse/FLINK-29114
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.15.0, 1.19.0, 1.20.0
>Reporter: Sergey Nuyanzin
>Assignee: Jane Chan
>Priority: Major
>  Labels: auto-deprioritized-major, pull-request-available, 
> test-stability
> Fix For: 1.20.0
>
> Attachments: FLINK-29114.log, image-2024-02-27-15-23-49-494.png, 
> image-2024-02-27-15-26-07-657.png, image-2024-02-27-15-32-48-317.png
>
>
> It can be reproduced locally by repeating the tests. Usually about 100 
> iterations are enough to produce several failed tests
> {noformat}
> [ERROR] Tests run: 13, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 1.664 s <<< FAILURE! - in 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase
> [ERROR] 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableHintWithLogicalTableScanReuse
>   Time elapsed: 0.108 s  <<< FAILURE!
> java.lang.AssertionError: expected: 3,2,Hello world, 3,2,Hello world, 3,2,Hello world)> but was: 2,2,Hello, 2,2,Hello, 3,2,Hello world, 3,2,Hello world)>
>     at org.junit.Assert.fail(Assert.java:89)
>     at org.junit.Assert.failNotEquals(Assert.java:835)
>     at org.junit.Assert.assertEquals(Assert.java:120)
>     at org.junit.Assert.assertEquals(Assert.java:146)
>     at 
> org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableHintWithLogicalTableScanReuse(TableSourceITCase.scala:428)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>     at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>     at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>     at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>     at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
>     at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
>     at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
>     at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
>     at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
>     at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:107)
>     at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
>     at 
> org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
