[jira] [Commented] (FLINK-31835) DataTypeHint don't support Row<Array<int>>

2023-04-25 Thread Aitozi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17716538#comment-17716538
 ] 

Aitozi commented on FLINK-31835:


[~jark] The PR has passed CI, and I think the current solution will not cause 
compatibility problems, because it only fixes the conversion class according to 
the nullability when creating the CollectionDataType. Could you help review it?
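For context, here is a minimal sketch of what a nullability-based choice of conversion class could look like. The names are hypothetical and this is not Flink's actual code, only an illustration of the idea in the comment above:

```java
public class ConversionClassSketch {
    // Hypothetical helper: pick the default conversion class for an array of INT
    // depending on whether its elements are nullable.
    static Class<?> arrayConversionClass(boolean elementsNullable) {
        // Nullable elements need a boxed Integer[]; non-null elements can map to int[].
        return elementsNullable ? Integer[].class : int[].class;
    }

    public static void main(String[] args) {
        if (arrayConversionClass(true) != Integer[].class) throw new AssertionError();
        if (arrayConversionClass(false) != int[].class) throw new AssertionError();
    }
}
```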

> DataTypeHint don't support Row<Array<int>>
> 
>
> Key: FLINK-31835
> URL: https://issues.apache.org/jira/browse/FLINK-31835
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.15.4
>Reporter: jeff-zou
>Priority: Major
>  Labels: pull-request-available
>
> Using DataTypeHint("Row<Array<int>>") in a UDF gives the following error:
>  
> {code:java}
> Caused by: java.lang.ClassCastException: class [I cannot be cast to class 
> [Ljava.lang.Object; ([I and [Ljava.lang.Object; are in module java.base of 
> loader 'bootstrap')
> org.apache.flink.table.data.conversion.ArrayObjectArrayConverter.toInternal(ArrayObjectArrayConverter.java:40)
> org.apache.flink.table.data.conversion.DataStructureConverter.toInternalOrNull(DataStructureConverter.java:61)
> org.apache.flink.table.data.conversion.RowRowConverter.toInternal(RowRowConverter.java:75)
> org.apache.flink.table.data.conversion.RowRowConverter.toInternal(RowRowConverter.java:37)
> org.apache.flink.table.data.conversion.DataStructureConverter.toInternalOrNull(DataStructureConverter.java:61)
> StreamExecCalc$251.processElement_split9(Unknown Source)
> StreamExecCalc$251.processElement(Unknown Source)
> org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:82)
>  {code}
>  
> The function is as follows:
> {code:java}
> @DataTypeHint("Row<Array<int>>")
> public Row eval() {
> int[] i = new int[3];
> return Row.of(i);
> } {code}
>  
> This error is not reported when testing other simple types, so it is not an 
> environmental problem.
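The `class [I cannot be cast to class [Ljava.lang.Object;` message above comes from the JVM's distinction between primitive and boxed arrays: `[I` is `int[]`, which is not a subtype of `Object[]`. A self-contained sketch of the failing cast:

```java
public class PrimitiveArrayCast {
    public static void main(String[] args) {
        Object boxed = new Integer[] {1, 2, 3};
        Object primitive = new int[] {1, 2, 3};

        // An Integer[] IS an Object[], so this cast succeeds:
        Object[] ok = (Object[]) boxed;
        if (ok.length != 3) throw new AssertionError();

        // An int[] is NOT an Object[]; this is exactly the ClassCastException
        // thrown by ArrayObjectArrayConverter.toInternal above:
        boolean threw = false;
        try {
            Object[] bad = (Object[]) primitive;
        } catch (ClassCastException e) {
            threw = true;
        }
        if (!threw) throw new AssertionError("expected ClassCastException");
    }
}
```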



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] Aitozi commented on a diff in pull request #22485: [FLINK-31835][planner] Fix the array type can't be converted from ext…

2023-04-25 Thread via GitHub


Aitozi commented on code in PR #22485:
URL: https://github.com/apache/flink/pull/22485#discussion_r1177360955


##
flink-table/flink-table-planner/src/test/java/org/apache/flink/table/planner/runtime/stream/table/ValuesITCase.java:
##
@@ -332,7 +332,23 @@ public void testRegisteringValuesWithComplexTypes() {
 mapData.put(1, 1);
 mapData.put(2, 2);
 
-Row row = Row.of(mapData, Row.of(1, 2, 3), new Integer[] {1, 2});
+Row row = Row.of(mapData, Row.of(1, 2, 3), new int[] {1, 2});

Review Comment:
   The type of this is ROW<.., .., ARRAY<INT NOT NULL>>. So we have to 
provide `new int[] {1, 2}` to pass the `containsExactly`



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-30644) ChangelogCompatibilityITCase.testRestore fails due to CheckpointCoordinator being shutdown

2023-04-25 Thread Yanfei Lei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17716514#comment-17716514
 ] 

Yanfei Lei edited comment on FLINK-30644 at 4/26/23 3:50 AM:
-

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=48321=logs=a549b384-c55a-52c0-c451-00e0477ab6db=eef5922c-08d9-5ba3-7299-8393476594e7=10691]
The stack trace shows that the cause is a FileNotFoundException:
{code:java}
Apr 21 01:35:03 at 
java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
Apr 21 01:35:03 at 
java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
Apr 21 01:35:03 at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1606)
Apr 21 01:35:03 ... 3 more
Apr 21 01:35:03 Caused by: java.lang.RuntimeException: 
java.lang.RuntimeException: java.io.FileNotFoundException: Cannot find 
checkpoint or savepoint file/directory 
'file:/tmp/junit862341719583315537/junit8040524335885911429/e0cfadc575a94b10511f5ef02629fb30/chk-1'
 on file system 'file'.
Apr 21 01:35:03 at 
org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:321)
Apr 21 01:35:03 at 
org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:114)
Apr 21 01:35:03 at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
Apr 21 01:35:03 ... 3 more
Apr 21 01:35:03 Caused by: java.io.FileNotFoundException: 
java.io.FileNotFoundException: Cannot find checkpoint or savepoint 
file/directory 
'file:/tmp/junit862341719583315537/junit8040524335885911429/e0cfadc575a94b10511f5ef02629fb30/chk-1'
 on file system 'file'.
Apr 21 01:35:03 at 
org.apache.flink.runtime.state.filesystem.AbstractFsCheckpointStorageAccess.resolveCheckpointPointer(AbstractFsCheckpointStorageAccess.java:275)
Apr 21 01:35:03 at 
org.apache.flink.runtime.state.filesystem.AbstractFsCheckpointStorageAccess.resolveCheckpoint(AbstractFsCheckpointStorageAccess.java:136)
Apr 21 01:35:03 at 
org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1824)
Apr 21 01:35:03 at 
org.apache.flink.runtime.scheduler.DefaultExecutionGraphFactory.tryRestoreExecutionGraphFromSavepoint(DefaultExecutionGraphFactory.java:223)
Apr 21 01:35:03 at 
org.apache.flink.runtime.scheduler.DefaultExecutionGraphFactory.createAndRestoreExecutionGraph(DefaultExecutionGraphFactory.java:198)
Apr 21 01:35:03 at 
org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:365)
Apr 21 01:35:03 at 
org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:210)
Apr 21 01:35:03 at 
org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:136)
Apr 21 01:35:03 at 
org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:152)
Apr 21 01:35:03 at 
org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:119)
Apr 21 01:35:03 at 
org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:371)
Apr 21 01:35:03 at 
org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:348)
Apr 21 01:35:03 at 
org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:123)
Apr 21 01:35:03 at 
org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:95)
Apr 21 01:35:03 at 
org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112)
Apr 21 01:35:03 ... 4 more
{code}
 

 



[jira] [Commented] (FLINK-30644) ChangelogCompatibilityITCase.testRestore fails due to CheckpointCoordinator being shutdown

2023-04-25 Thread Yanfei Lei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17716514#comment-17716514
 ] 

Yanfei Lei commented on FLINK-30644:


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=48321=logs=a549b384-c55a-52c0-c451-00e0477ab6db=eef5922c-08d9-5ba3-7299-8393476594e7=10691]
The stack trace shows that the cause is a FileNotFoundException:
{code:java}
Apr 21 01:35:03 at 
org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1824)
Apr 21 01:35:03 at 
org.apache.flink.runtime.scheduler.DefaultExecutionGraphFactory.tryRestoreExecutionGraphFromSavepoint(DefaultExecutionGraphFactory.java:223)
Apr 21 01:35:03 at 
org.apache.flink.runtime.scheduler.DefaultExecutionGraphFactory.createAndRestoreExecutionGraph(DefaultExecutionGraphFactory.java:198)
Apr 21 01:35:03 at 
org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:365)
Apr 21 01:35:03 at 
org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:210)
Apr 21 01:35:03 at 
org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:136)
Apr 21 01:35:03 at 
org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:152)
Apr 21 01:35:03 at 
org.apache.flink.runtime.jobmaster.DefaultSlotPoolServiceSchedulerFactory.createScheduler(DefaultSlotPoolServiceSchedulerFactory.java:119)
Apr 21 01:35:03 at 
org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:371)
Apr 21 01:35:03 at 
org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:348)
Apr 21 01:35:03 at 
org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.internalCreateJobMasterService(DefaultJobMasterServiceFactory.java:123)
Apr 21 01:35:03 at 
org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.lambda$createJobMasterService$0(DefaultJobMasterServiceFactory.java:95)
Apr 21 01:35:03 at 
org.apache.flink.util.function.FunctionUtils.lambda$uncheckedSupplier$4(FunctionUtils.java:112)
Apr 21 01:35:03 ... 4 more {code}
 

 

> ChangelogCompatibilityITCase.testRestore fails due to CheckpointCoordinator 
> being shutdown
> --
>
> Key: FLINK-30644
> URL: https://issues.apache.org/jira/browse/FLINK-30644
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination, Runtime / State Backends
>Affects Versions: 1.17.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> We observe a build failure in {{ChangelogCompatibilityITCase.testRestore}} 
> due to the {{CheckpointCoordinator}} being shut down:
> {code:java}
> [...]
> Caused by: org.apache.flink.runtime.checkpoint.CheckpointException: 
> CheckpointCoordinator shutdown.
> Jan 12 02:37:37   at 
> org.apache.flink.runtime.checkpoint.PendingCheckpoint.abort(PendingCheckpoint.java:544)
> Jan 12 02:37:37   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.abortPendingCheckpoint(CheckpointCoordinator.java:2140)
> Jan 12 02:37:37   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.abortPendingCheckpoint(CheckpointCoordinator.java:2127)
> Jan 12 02:37:37   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.abortPendingCheckpoints(CheckpointCoordinator.java:2004)
> Jan 12 02:37:37   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.abortPendingCheckpoints(CheckpointCoordinator.java:1987)
> Jan 12 02:37:37   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.abortPendingAndQueuedCheckpoints(CheckpointCoordinator.java:2183)
> Jan 12 02:37:37   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.shutdown(CheckpointCoordinator.java:426)
> Jan 12 02:37:37   at 
> org.apache.flink.runtime.executiongraph.DefaultExecutionGraph.onTerminalState(DefaultExecutionGraph.java:1329)
> [...]{code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=44731=logs=2c3cbe13-dee0-5837-cf47-3053da9a8a78=b78d9d30-509a-5cea-1fef-db7abaa325ae=9255



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31929) HighAvailabilityServicesUtils.getWebMonitorAddress works not properly in k8s with IPv6 stack

2023-04-25 Thread caiyi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17716511#comment-17716511
 ] 

caiyi commented on FLINK-31929:
---

Ok, I'll make a pull request to resolve it.

> HighAvailabilityServicesUtils.getWebMonitorAddress works not properly in k8s 
> with IPv6 stack
> 
>
> Key: FLINK-31929
> URL: https://issues.apache.org/jira/browse/FLINK-31929
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
> Environment: K8s with IPv6 stack
>Reporter: caiyi
>Priority: Major
> Attachments: 1.jpg
>
>
> As the attachment below shows, String.format does not work properly if the 
> address is IPv6; new URL(protocol, address, port, "").toString() produces the 
> correct result.
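A quick illustration of the difference (a sketch; the address and port values are made up). java.net.URL brackets IPv6 literal hosts, while naive string formatting produces an authority that cannot be parsed:

```java
import java.net.MalformedURLException;
import java.net.URL;

public class Ipv6AddressDemo {
    // Naive formatting: port is indistinguishable from the address's last group.
    static String naive(String address, int port) {
        return String.format("http://%s:%d", address, port);
    }

    // URL constructor wraps IPv6 literals in brackets as RFC 2732 requires.
    static String viaUrl(String address, int port) {
        try {
            return new URL("http", address, port, "").toString();
        } catch (MalformedURLException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String naive = naive("2001:db8::1", 8081);
        if (!naive.equals("http://2001:db8::1:8081")) throw new AssertionError(naive);
        String correct = viaUrl("2001:db8::1", 8081);
        if (!correct.contains("[2001:db8::1]")) throw new AssertionError(correct);
    }
}
```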



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31928) flink-kubernetes works not properly in k8s with IPv6 stack

2023-04-25 Thread caiyi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17716509#comment-17716509
 ] 

caiyi commented on FLINK-31928:
---

Ok, I'll upgrade okhttp3 to latest 4.11.0.

> flink-kubernetes works not properly in k8s with IPv6 stack
> --
>
> Key: FLINK-31928
> URL: https://issues.apache.org/jira/browse/FLINK-31928
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
> Environment: Kubernetes of IPv6 stack.
>Reporter: caiyi
>Priority: Major
>
> As https://github.com/square/okhttp/issues/7368 describes, the okhttp3 shaded 
> into flink-kubernetes does not work properly on an IPv6 stack in k8s. We need 
> to upgrade okhttp3 to version 4.10.0 and shade the
> org.jetbrains.kotlin:kotlin-stdlib dependency of okhttp3:4.10.0 in 
> flink-kubernetes, or just upgrade kubernetes-client to the latest version, and 
> release a new version of flink-kubernetes-operator.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] fredia commented on pull request #22445: [FLINK-31876][QS] Migrate flink-queryable-state-client tests to JUnit5

2023-04-25 Thread via GitHub


fredia commented on PR #22445:
URL: https://github.com/apache/flink/pull/22445#issuecomment-1522731230

   @reswqa Thanks for the detailed review. Fixed as suggested; please take 
another look.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] reswqa commented on a diff in pull request #22445: [FLINK-31876][QS] Migrate flink-queryable-state-client tests to JUnit5

2023-04-25 Thread via GitHub


reswqa commented on code in PR #22445:
URL: https://github.com/apache/flink/pull/22445#discussion_r1177297848


##
flink-queryable-state/flink-queryable-state-client-java/src/test/java/org/apache/flink/queryablestate/client/state/ImmutableListStateTest.java:
##
@@ -56,26 +57,23 @@ public void setUp() throws Exception {
 listState = ImmutableListState.createState(listStateDesc, serInit);
 }
 
-@Test(expected = UnsupportedOperationException.class)
-public void testUpdate() throws Exception {
+@Test
+void testUpdate() throws Exception {
 List<Long> list = getStateContents();
-assertEquals(1L, list.size());
-
-long element = list.get(0);
-assertEquals(42L, element);
-
-listState.add(54L);
+assertThat(list).hasSize(1);
+assertThat(list).containsExactly(42L);
+assertThatThrownBy(() -> listState.add(54L))
+.isInstanceOf(UnsupportedOperationException.class);
 }
 
-@Test(expected = UnsupportedOperationException.class)
-public void testClear() throws Exception {
+@Test
+void testClear() throws Exception {
 List<Long> list = getStateContents();
-assertEquals(1L, list.size());
-
-long element = list.get(0);
-assertEquals(42L, element);
+assertThat(list).hasSize(1);
 
-listState.clear();
+assertThat(list).containsExactly(42L);

Review Comment:
   assertThat(list).containsExactly(42L);
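For readers unfamiliar with this migration: `@Test(expected = ...)` passes if the exception is thrown anywhere in the method, while the `assertThatThrownBy` style pins it to one specific call. The same pattern sketched with plain JDK collections, since AssertJ is assumed unavailable outside the test classpath:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class UnmodifiableListDemo {
    public static void main(String[] args) {
        // Stand-in for the immutable queryable state, holding a single 42L.
        List<Long> list = Collections.unmodifiableList(new ArrayList<>(List.of(42L)));
        if (list.size() != 1 || list.get(0) != 42L) throw new AssertionError(list);

        // Unlike @Test(expected = ...), the assertions above are guaranteed
        // to have run before the one call that is expected to throw:
        boolean threw = false;
        try {
            list.add(54L);
        } catch (UnsupportedOperationException e) {
            threw = true;
        }
        if (!threw) throw new AssertionError("expected UnsupportedOperationException");
    }
}
```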



##
flink-queryable-state/flink-queryable-state-client-java/src/test/java/org/apache/flink/queryablestate/client/state/ImmutableMapStateTest.java:
##
@@ -63,130 +62,147 @@ public void setUp() throws Exception {
 mapState = ImmutableMapState.createState(mapStateDesc, initSer);
 }
 
-@Test(expected = UnsupportedOperationException.class)
-public void testPut() throws Exception {
-assertTrue(mapState.contains(1L));
+@Test
+void testPut() throws Exception {
+assertThat(mapState.contains(1L)).isTrue();
 long value = mapState.get(1L);
-assertEquals(5L, value);
+assertThat(value).isEqualTo(5L);
 
-assertTrue(mapState.contains(2L));
+assertThat(mapState.contains(2L)).isTrue();
 value = mapState.get(2L);
-assertEquals(5L, value);
-
-mapState.put(2L, 54L);
+assertThat(value).isEqualTo(5L);
+assertThatThrownBy(() -> mapState.put(2L, 54L))
+.isInstanceOf(UnsupportedOperationException.class);
 }
 
-@Test(expected = UnsupportedOperationException.class)
-public void testPutAll() throws Exception {
-assertTrue(mapState.contains(1L));
+@Test
+void testPutAll() throws Exception {
+assertThat(mapState.contains(1L)).isTrue();
 long value = mapState.get(1L);
-assertEquals(5L, value);
+assertThat(value).isEqualTo(5L);
 
-assertTrue(mapState.contains(2L));
+assertThat(mapState.contains(2L)).isTrue();
 value = mapState.get(2L);
-assertEquals(5L, value);
+assertThat(value).isEqualTo(5L);
 
 Map<Long, Long> nMap = new HashMap<>();
 nMap.put(1L, 7L);
 nMap.put(2L, 7L);
-
-mapState.putAll(nMap);
+assertThatThrownBy(() -> mapState.putAll(nMap))
+.isInstanceOf(UnsupportedOperationException.class);
 }
 
-@Test(expected = UnsupportedOperationException.class)
-public void testUpdate() throws Exception {
-assertTrue(mapState.contains(1L));
+@Test
+void testUpdate() throws Exception {
+assertThat(mapState.contains(1L)).isTrue();
 long value = mapState.get(1L);
-assertEquals(5L, value);
+assertThat(value).isEqualTo(5L);
 
-assertTrue(mapState.contains(2L));
+assertThat(mapState.contains(2L)).isTrue();
 value = mapState.get(2L);
-assertEquals(5L, value);
-
-mapState.put(2L, 54L);
+assertThat(value).isEqualTo(5L);
+assertThatThrownBy(() -> mapState.put(2L, 54L))
+.isInstanceOf(UnsupportedOperationException.class);
 }
 
-@Test(expected = UnsupportedOperationException.class)
-public void testIterator() throws Exception {
-assertTrue(mapState.contains(1L));
+@Test
+void testIterator() throws Exception {
+assertThat(mapState.contains(1L)).isTrue();
 long value = mapState.get(1L);
-assertEquals(5L, value);
+assertThat(value).isEqualTo(5L);
 
-assertTrue(mapState.contains(2L));
+assertThat(mapState.contains(2L)).isTrue();
 value = mapState.get(2L);
-assertEquals(5L, value);
-
-Iterator<Map.Entry<Long, Long>> iterator = mapState.iterator();
-while (iterator.hasNext()) {
-iterator.remove();
-}
+assertThat(value).isEqualTo(5L);
+assertThatThrownBy(
+() -> {
+Iterator<Map.Entry<Long, Long>> iterator = mapState.iterator();
+ 

[jira] (FLINK-27018) timestamp missing end zero when outputing to kafka

2023-04-25 Thread jeff-zou (Jira)


[ https://issues.apache.org/jira/browse/FLINK-27018 ]


jeff-zou deleted comment on FLINK-27018:
--

was (Author: JIRAUSER282259):
[~paul8263] ok, thx, but I don't know how to assign this issue to you.

[~fpaul]  Can you do this?

> timestamp missing end  zero when outputing to kafka
> ---
>
> Key: FLINK-27018
> URL: https://issues.apache.org/jira/browse/FLINK-27018
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.5
>Reporter: jeff-zou
>Priority: Major
> Attachments: kafka.png
>
>
> the bug is described as follows:
>  
> {code:java}
> data in source:
>  2022-04-02 03:34:21.260
> but after sink by sql, the data in kafka:
>  2022-04-02 03:34:21.26
> {code}
>  
> The data is missing its trailing zero in Kafka.
>  
> sql:
> {code:java}
> create table kafka_table (stime timestamp(3)) with ('connector'='kafka','format' = 
> 'json');
> insert into kafka_table select stime from (values(timestamp '2022-04-02 
> 03:34:21.260')){code}
> The value in Kafka is \{"stime":"2022-04-02 03:34:21.26"}; the trailing zero is missing.
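One plausible mechanism for this class of bug is a variable-width fraction formatter, which trims trailing zeros beyond its minimum width. This sketch is illustrative only and not necessarily the exact formatter Flink's JSON format uses:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.time.temporal.ChronoField;

public class FractionTrimDemo {
    public static void main(String[] args) {
        LocalDateTime ts = LocalDateTime.of(2022, 4, 2, 3, 34, 21, 260_000_000);

        // Variable-width fraction (min 1, max 9 digits): trailing zeros dropped.
        DateTimeFormatter variable = new DateTimeFormatterBuilder()
                .appendPattern("yyyy-MM-dd HH:mm:ss")
                .appendFraction(ChronoField.NANO_OF_SECOND, 1, 9, true)
                .toFormatter();

        // Fixed-width fraction (exactly 3 digits): trailing zero preserved.
        DateTimeFormatter fixed = new DateTimeFormatterBuilder()
                .appendPattern("yyyy-MM-dd HH:mm:ss")
                .appendFraction(ChronoField.NANO_OF_SECOND, 3, 3, true)
                .toFormatter();

        if (!variable.format(ts).equals("2022-04-02 03:34:21.26"))
            throw new AssertionError(variable.format(ts));
        if (!fixed.format(ts).equals("2022-04-02 03:34:21.260"))
            throw new AssertionError(fixed.format(ts));
    }
}
```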



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] pgaref commented on a diff in pull request #22467: [FLINK-31888] Introduce interfaces and utils for loading and executing enrichers

2023-04-25 Thread via GitHub


pgaref commented on code in PR #22467:
URL: https://github.com/apache/flink/pull/22467#discussion_r1177288100


##
flink-tests/src/test/java/org/apache/flink/test/plugin/FailureEnricherPluginTest.java:
##
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.test.plugin;
+
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.configuration.ConfigConstants;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.JobManagerOptions;
+import org.apache.flink.core.failure.FailureEnricher;
+import org.apache.flink.core.failure.FailureEnricherFactory;
+import org.apache.flink.core.plugin.DefaultPluginManager;
+import org.apache.flink.core.plugin.DirectoryBasedPluginFinder;
+import org.apache.flink.core.plugin.PluginDescriptor;
+import org.apache.flink.core.plugin.PluginFinder;
+import org.apache.flink.core.plugin.PluginManager;
+import org.apache.flink.core.testutils.CommonTestUtils;
+import org.apache.flink.runtime.failure.FailureEnricherUtils;
+import org.apache.flink.runtime.metrics.groups.UnregisteredMetricGroups;
+import org.apache.flink.test.plugin.jar.failure.TestFailureEnricher;
+import org.apache.flink.test.plugin.jar.failure.TestFailureEnricherFactory;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.flink.shaded.guava30.com.google.common.collect.ImmutableMap;
+import org.apache.flink.shaded.guava30.com.google.common.collect.Lists;
+
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import java.io.File;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+
+/** Test for {@link FailureEnricherFactory}. */
+public class FailureEnricherPluginTest extends PluginTestBase {

Review Comment:
   PluginManager is already heavily tested, so I am removing these tests as discussed



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] huwh commented on pull request #22438: [FLINK-31632] Add method setMaxParallelism to DataStreamSink.

2023-04-25 Thread via GitHub


huwh commented on PR #22438:
URL: https://github.com/apache/flink/pull/22438#issuecomment-1522698089

   Oh, I noticed that the Jira id in the PR title and commit message is wrong; it 
should be [FLINK-31873](https://issues.apache.org/jira/browse/FLINK-31873)
   
   And I don't think we need separate commits in this PR; we prefer each commit 
to contain the full functionality


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] huwh commented on a diff in pull request #22438: [FLINK-31632] Add method setMaxParallelism to DataStreamSink.

2023-04-25 Thread via GitHub


huwh commented on code in PR #22438:
URL: https://github.com/apache/flink/pull/22438#discussion_r1177277341


##
flink-streaming-java/src/test/java/org/apache/flink/streaming/api/graph/StreamingJobGraphGeneratorTest.java:
##
@@ -283,6 +283,25 @@ public void testTransformationSetParallelism() {
 assertThat(vertices.get(2).isParallelismConfigured()).isTrue();
 }
 
+@Test
+public void testTransformationSetMaxParallelism() {

Review Comment:
   All test methods in junit5 should be package-private.
   ```java
   void testTransformationSetMaxParallelism() 
   ``` 
   
   



##
flink-streaming-java/src/test/java/org/apache/flink/streaming/api/graph/StreamingJobGraphGeneratorTest.java:
##
@@ -283,6 +283,25 @@ public void testTransformationSetParallelism() {
 assertThat(vertices.get(2).isParallelismConfigured()).isTrue();
 }
 
+@Test
+public void testTransformationSetMaxParallelism() {
+StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
+/* The max parallelism of the environment (that is inherited by the 
source)
+and the parallelism of the map operator needs to be different for this 
test */
+env.setMaxParallelism(4);
+env.fromSequence(1L, 3L).map(i -> 
i).setMaxParallelism(10).print().setMaxParallelism(20);

Review Comment:
   It's better to introduce variables for each DataStream/DataStreamSink and 
then get StreamNode by their ID.
   
   ```java
   DataStreamSource<Long> source = env.fromSequence(1L, 3L);
   
assertThat(streamGraph.getStreamNode(source.getId()).getMaxParallelism()).isEqualTo(4);
   ``` 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-31898) Flink k8s autoscaler does not work as expected

2023-04-25 Thread Kyungmin Kim (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17716490#comment-17716490
 ] 

Kyungmin Kim commented on FLINK-31898:
--

[~gyfora] 

After watching more metrics, I found out that the busyMsPerSecond metric 
fluctuates a lot (it records only 1k or zero), and I think it results in an 
incorrect TRUE_PROCESSING_RATE. 

It was because my test job throttles the number of record inputs per second.

I changed my job's behavior to allow all inputs, added some delay inside the map 
operator, and changed the configuration as you suggested. 

Autoscaler now works very well :). It finds the optimal parallelism. 

Sorry for the confusion and I think you can close the issue.

By the way, can you let me know when you are planning to release the 1.5 
version? 

> Flink k8s autoscaler does not work as expected
> --
>
> Key: FLINK-31898
> URL: https://issues.apache.org/jira/browse/FLINK-31898
> Project: Flink
>  Issue Type: Bug
>  Components: Autoscaler, Kubernetes Operator
>Affects Versions: kubernetes-operator-1.4.0
>Reporter: Kyungmin Kim
>Priority: Major
> Attachments: image-2023-04-24-10-54-58-083.png, 
> image-2023-04-24-13-27-17-478.png, image-2023-04-24-13-28-15-462.png, 
> image-2023-04-24-13-31-06-420.png, image-2023-04-24-13-41-43-040.png, 
> image-2023-04-24-13-42-40-124.png, image-2023-04-24-13-43-49-431.png, 
> image-2023-04-24-13-44-17-479.png, image-2023-04-24-14-18-12-450.png, 
> image-2023-04-24-16-47-35-697.png
>
>
> Hi I'm using Flink k8s autoscaler to automatically deploy jobs in proper 
> parallelism.
> I was using version 1.4, but I found that it does not scale down properly 
> because TRUE_PROCESSING_RATE becomes NaN when the tasks are idle.
> In the main branch, I saw the code was fixed to set TRUE_PROCESSING_RATE 
> to positive infinity and make the scaleFactor a very low value, so I'm now 
> experimentally using a docker image built from the main branch of the 
> Flink-k8s-operator repository for my job.
> It now scales down properly but the problem is, it does not converge to the 
> optimal parallelism. It scales down well but it jumps up again to high 
> parallelism. 
>  
> Below is the experimental setup and my figure of parallelism changes result.
>  * about 40 RPS
>  * each task can process 10 TPS (intended throttling)
> !image-2023-04-24-10-54-58-083.png|width=999,height=266!
> Even using default configuration leads to the same result. What can I do 
> more? Thank you.
>  
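A hypothetical reconstruction of why an idle task can yield a NaN rate: if the true processing rate is computed as records processed per second of busy time, an idle task divides zero by zero. The formula below is a sketch of the metric's shape, not the autoscaler's actual code:

```java
public class TrueProcessingRateDemo {
    // Hypothetical: records per second, normalized by the fraction of time busy.
    static double trueProcessingRate(double recordsPerSec, double busyMsPerSec) {
        return recordsPerSec / (busyMsPerSec / 1000.0);
    }

    public static void main(String[] args) {
        // Busy task: 500 rec/s while busy 50% of the time -> 1000 rec/s true rate.
        if (trueProcessingRate(500.0, 500.0) != 1000.0) throw new AssertionError();

        // Idle task: 0 rec/s over 0 busy ms -> 0.0 / 0.0 = NaN, which poisons
        // any scale-down comparison made against it.
        if (!Double.isNaN(trueProcessingRate(0.0, 0.0))) throw new AssertionError();
    }
}
```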



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31943) Multiple t_env cause class loading problem

2023-04-25 Thread Xuannan Su (Jira)
Xuannan Su created FLINK-31943:
--

 Summary: Multiple t_env cause class loading problem
 Key: FLINK-31943
 URL: https://issues.apache.org/jira/browse/FLINK-31943
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Affects Versions: 1.17.0
Reporter: Xuannan Su
 Attachments: flink-sql-connector-kafka-1.17.0.jar, 
pyflink_classloader.py

When a PyFlink process creates multiple StreamTableEnvironments with different 
EnvironmentSettings and sets "pipeline.jars" on the first created t_env, it 
appears that the jar is not added to the classloader of the first created t_env.

 

After digging a little bit, the reason may be that when creating the second 
table environment with a new EnvironmentSettings, the context classloader is 
overwritten by a new classloader; see the `EnvironmentSettings.Builder.build` method.

 

I uploaded the script to reproduce the problem.
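A minimal sketch of the suspected mechanism: a second component that unconditionally installs its own context classloader silently replaces the loader the first environment registered its jars with. This is only an illustration of the classloader swap, not Flink's code:

```java
import java.net.URL;
import java.net.URLClassLoader;

public class ContextClassLoaderDemo {
    public static void main(String[] args) {
        // First "environment" installs its loader (imagine it holds pipeline.jars).
        ClassLoader first = new URLClassLoader(new URL[0],
                ContextClassLoaderDemo.class.getClassLoader());
        Thread.currentThread().setContextClassLoader(first);

        // Second "environment" unconditionally installs a fresh loader.
        ClassLoader second = new URLClassLoader(new URL[0],
                ContextClassLoaderDemo.class.getClassLoader());
        Thread.currentThread().setContextClassLoader(second);

        // Anything resolved through the context classloader now misses jars
        // that were only registered with the first loader.
        if (Thread.currentThread().getContextClassLoader() == first) {
            throw new AssertionError("first loader should have been replaced");
        }
    }
}
```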



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-31135) ConfigMap DataSize went > 1 MB and cluster stopped working

2023-04-25 Thread Zhihao Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17716479#comment-17716479
 ] 

Zhihao Chen commented on FLINK-31135:
-

Hi [~Swathi Chandrashekar], may I ask whether there is any update on this?

> ConfigMap DataSize went > 1 MB and cluster stopped working
> --
>
> Key: FLINK-31135
> URL: https://issues.apache.org/jira/browse/FLINK-31135
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.2.0
>Reporter: Sriram Ganesh
>Priority: Major
> Attachments: dump_cm.yaml, image-2023-04-19-09-48-19-089.png
>
>
> I am using the Flink Operator to manage clusters. Flink version: 1.15.2. Flink 
> jobs failed with the below error. It seems the config map size went beyond 1 MB 
> (the default limit). 
> Since it is managed by the operator and config maps are not updated with any 
> manual intervention, I suspect it could be an operator issue. 
>  
> {code:java}
> Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure 
> executing: PUT at: 
> https:///api/v1/namespaces//configmaps/-config-map. Message: 
> ConfigMap "-config-map" is invalid: []: Too long: must have at most 
> 1048576 bytes. Received status: Status(apiVersion=v1, code=422, 
> details=StatusDetails(causes=[StatusCause(field=[], message=Too long: must 
> have at most 1048576 bytes, reason=FieldValueTooLong, 
> additionalProperties={})], group=null, kind=ConfigMap, name=-config-map, 
> retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, 
> message=ConfigMap "-config-map" is invalid: []: Too long: must have at 
> most 1048576 bytes, metadata=ListMeta(_continue=null, 
> remainingItemCount=null, resourceVersion=null, selfLink=null, 
> additionalProperties={}), reason=Invalid, status=Failure, 
> additionalProperties={}).
> at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:673)
>  ~[flink-dist-1.15.2.jar:1.15.2]
> at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:612)
>  ~[flink-dist-1.15.2.jar:1.15.2]
> at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:560)
>  ~[flink-dist-1.15.2.jar:1.15.2]
> at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:521)
>  ~[flink-dist-1.15.2.jar:1.15.2]
> at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleUpdate(OperationSupport.java:347)
>  ~[flink-dist-1.15.2.jar:1.15.2]
> at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleUpdate(OperationSupport.java:327)
>  ~[flink-dist-1.15.2.jar:1.15.2]
> at 
> io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleUpdate(BaseOperation.java:781)
>  ~[flink-dist-1.15.2.jar:1.15.2]
> at 
> io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation.lambda$replace$1(HasMetadataOperation.java:183)
>  ~[flink-dist-1.15.2.jar:1.15.2]
> at 
> io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation.replace(HasMetadataOperation.java:188)
>  ~[flink-dist-1.15.2.jar:1.15.2]
> at 
> io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation.replace(HasMetadataOperation.java:130)
>  ~[flink-dist-1.15.2.jar:1.15.2]
> at 
> io.fabric8.kubernetes.client.dsl.base.HasMetadataOperation.replace(HasMetadataOperation.java:41)
>  ~[flink-dist-1.15.2.jar:1.15.2]
> at 
> org.apache.flink.kubernetes.kubeclient.Fabric8FlinkKubeClient.lambda$attemptCheckAndUpdateConfigMap$11(Fabric8FlinkKubeClient.java:325)
>  ~[flink-dist-1.15.2.jar:1.15.2]
> at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1700)
>  ~[?:?]
> ... 3 more {code}
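The rejection above comes from Kubernetes enforcing a 1048576-byte (1 MiB) cap on ConfigMap payloads, as stated in the error message itself. A rough stdlib-only sketch of a pre-flight size check (function names and the payload model are hypothetical, not operator code):

```python
# Kubernetes rejects ConfigMaps whose data exceeds 1 MiB, per the error above.
MAX_CONFIGMAP_BYTES = 1048576

def configmap_payload_size(data: dict) -> int:
    """Approximate payload size: sum of UTF-8 encoded keys and values."""
    return sum(len(k.encode("utf-8")) + len(v.encode("utf-8"))
               for k, v in data.items())

def fits_in_configmap(data: dict) -> bool:
    """True if the data would pass the API server's size validation."""
    return configmap_payload_size(data) <= MAX_CONFIGMAP_BYTES

small = {"state": "x" * 1000}
huge = {"state": "x" * 2_000_000}  # would be rejected with FieldValueTooLong
```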





[GitHub] [flink] pgaref commented on pull request #22481: [FLINK-27805] bump orc version to 1.7.8

2023-04-25 Thread via GitHub


pgaref commented on PR #22481:
URL: https://github.com/apache/flink/pull/22481#issuecomment-1522510048

   Also cc @pnowojski / @akalash, who might be interested


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-connector-kafka] mas-chen opened a new pull request, #24: [FLINK-30859] Sync kafka related examples from apache/flink:master

2023-04-25 Thread via GitHub


mas-chen opened a new pull request, #24:
URL: https://github.com/apache/flink-connector-kafka/pull/24

   There is only one example related to Kafka -- StateMachineExample. 





[jira] [Created] (FLINK-31942) Support Conditional Writes in DynamoDB connector

2023-04-25 Thread Hong Liang Teoh (Jira)
Hong Liang Teoh created FLINK-31942:
---

 Summary: Support Conditional Writes in DynamoDB connector
 Key: FLINK-31942
 URL: https://issues.apache.org/jira/browse/FLINK-31942
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / DynamoDB
Reporter: Hong Liang Teoh


Currently, the AWS DynamoDB connector uses the BatchWrite API, which does not 
support conditional writes. This is limiting, because some users may want to 
use conditional writes to implement idempotent writes.

 

We propose to implement support for using `PutItem`, `UpdateItem` and 
`DeleteItem` in the DDB connector.
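To make the idempotency use case concrete, here is a sketch of what a conditional `PutItem` request could look like. `ConditionExpression` and `attribute_not_exists` are standard DynamoDB API concepts; the helper only builds the request parameters and does not call AWS, and its name and shape are illustrative, not the proposed connector API.

```python
# Build (but do not send) a conditional PutItem request: the write succeeds
# only if no item with the given key exists yet, so retries are idempotent.
def build_conditional_put(table: str, item: dict, key_attr: str) -> dict:
    return {
        "TableName": table,
        "Item": item,
        "ConditionExpression": f"attribute_not_exists({key_attr})",
    }

req = build_conditional_put("orders", {"pk": {"S": "order-1"}}, "pk")
```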





[GitHub] [flink] flinkbot commented on pull request #22487: [FLINK-31940] DataStreamCsvITCase#CityPojo serialized as POJO

2023-04-25 Thread via GitHub


flinkbot commented on PR #22487:
URL: https://github.com/apache/flink/pull/22487#issuecomment-1522417242

   
   ## CI report:
   
   * c2135fded05d90ae67526d98903ed25793d6bc57 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-31940) DataStreamCsvITCase#CityPojo should be public

2023-04-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-31940:
---
Labels: pull-request-available  (was: )

> DataStreamCsvITCase#CityPojo should be public
> -
>
> Key: FLINK-31940
> URL: https://issues.apache.org/jira/browse/FLINK-31940
> Project: Flink
>  Issue Type: Sub-task
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Tests
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Since the class is package-private it is serialized via Kryo and not the pojo 
> serializer.





[GitHub] [flink] zentol opened a new pull request, #22487: [FLINK-31940] DataStreamCsvITCase#CityPojo serialized as POJO

2023-04-25 Thread via GitHub


zentol opened a new pull request, #22487:
URL: https://github.com/apache/flink/pull/22487

   POJO classes should be public. This currently causes Kryo to be used for the 
entire pojo, requiring additional --add-opens on Java 17.
   





[jira] [Created] (FLINK-31941) Not backwards compatible naming for some kubernetes resources

2023-04-25 Thread Eric Xiao (Jira)
Eric Xiao created FLINK-31941:
-

 Summary: Not backwards compatible naming for some kubernetes 
resources
 Key: FLINK-31941
 URL: https://issues.apache.org/jira/browse/FLINK-31941
 Project: Flink
  Issue Type: Bug
  Components: Kubernetes Operator
Reporter: Eric Xiao


We are in the process of migrating all our workloads over to the Kubernetes 
operator and noticed that some of the Kubernetes resources in the operator are 
hardcoded and not consistent with how Flink previously defined them. This is 
causing downstream incompatibilities and migration toil in our monitoring and 
dashboards, whose queries depend on the previous naming schema.

I couldn't find exact definitions of a task-manager or job-manager in the Flink 
repo, but this is what I have noticed; I may be wrong in my interpretations.
h3. Deployment Names

Previously:

 
{code:java}
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
trickle-job-manager2/2 22   13d
trickle-task-manager   10/10   10   10  13d {code}
New (Flink Operator):

 

 
{code:java}
NAME  READY   UP-TO-DATE   AVAILABLE   AGE
trickle   2/2 22   6h25m
trickle-taskmanager   4/4 44   6h25m {code}
 

[1] 
[https://github.com/apache/flink-kubernetes-operator/blob/main/flink-kubernetes-standalone/src/main/java/org/apache/flink/kubernetes/operator/utils/StandaloneKubernetesUtils.java#L29-L38]
h3. Pod Names

Previously:
{code:java}
NAME                                    READY   STATUS      RESTARTS   AGE
trickle-job-manager-65d95d4854-lgmsm    1/1     Running     0          13d
trickle-job-manager-65d95d4854-vdzl8    1/1     Running     0          5d
trickle-task-manager-86c85cf647-46nxh   1/1     Running     0          5d
trickle-task-manager-86c85cf647-ct6c5   1/1     Running     0          5d
trickle-task-manager-86c85cf647-h894q   1/1     Running     0          5d
trickle-task-manager-86c85cf647-kpr5x   1/1     Running     0          5d{code}
New (Flink Operator):
{code:java}
NAME                                   READY   STATUS    RESTARTS   AGE
trickle-58f895675f-9m5wm               1/1     Running   0          25h
trickle-58f895675f-n4hhv               1/1     Running   0          25h
trickle-taskmanager-6f9f64b9b9-857lv   1/1     Running   0          25h
trickle-taskmanager-6f9f64b9b9-cnsrx   1/1     Running   0          25h{code}

The pod names stem from the deployment names, so a fix to update the deployment 
names may also fix the pod names.
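The two naming schemes observed above can be contrasted in a small sketch. The suffixes are taken directly from the kubectl output in this report; the helper names themselves are hypothetical, not operator code.

```python
# Legacy naming (pre-operator) vs. the names the operator currently produces,
# as observed in the kubectl output above for cluster "trickle".
def legacy_deployment_names(cluster: str) -> list:
    return [f"{cluster}-job-manager", f"{cluster}-task-manager"]

def operator_deployment_names(cluster: str) -> list:
    # The JM deployment reuses the bare cluster name; the TM suffix drops
    # the hyphen ("taskmanager" instead of "task-manager").
    return [cluster, f"{cluster}-taskmanager"]
```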





[GitHub] [flink-connector-aws] Samrat002 commented on pull request #47: [FLINK-30481][FLIP-277] GlueCatalog Implementation

2023-04-25 Thread via GitHub


Samrat002 commented on PR #47:
URL: 
https://github.com/apache/flink-connector-aws/pull/47#issuecomment-1522231643

   @dannycranmer please review whenever you have time.
   - Added unit tests for the change.
   - Created a separate issue for adding an e2e test for the catalog.
   - Tried to address all review comments.





[GitHub] [flink-connector-aws] Samrat002 commented on a diff in pull request #47: [FLINK-30481][FLIP-277] GlueCatalog Implementation

2023-04-25 Thread via GitHub


Samrat002 commented on code in PR #47:
URL: 
https://github.com/apache/flink-connector-aws/pull/47#discussion_r1176884725


##
flink-catalog-aws-glue/src/main/java/org/apache/flink/table/catalog/glue/util/GlueOperator.java:
##
@@ -0,0 +1,1639 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.catalog.glue.util;
+
+import org.apache.flink.table.api.Schema;
+import org.apache.flink.table.api.TableSchema;
+import org.apache.flink.table.api.ValidationException;
+import org.apache.flink.table.catalog.CatalogBaseTable;
+import org.apache.flink.table.catalog.CatalogDatabase;
+import org.apache.flink.table.catalog.CatalogDatabaseImpl;
+import org.apache.flink.table.catalog.CatalogFunction;
+import org.apache.flink.table.catalog.CatalogFunctionImpl;
+import org.apache.flink.table.catalog.CatalogPartition;
+import org.apache.flink.table.catalog.CatalogPartitionSpec;
+import org.apache.flink.table.catalog.CatalogTable;
+import org.apache.flink.table.catalog.CatalogView;
+import org.apache.flink.table.catalog.FunctionLanguage;
+import org.apache.flink.table.catalog.ObjectPath;
+import org.apache.flink.table.catalog.exceptions.CatalogException;
+import org.apache.flink.table.catalog.exceptions.DatabaseAlreadyExistException;
+import org.apache.flink.table.catalog.exceptions.DatabaseNotExistException;
+import org.apache.flink.table.catalog.exceptions.FunctionAlreadyExistException;
+import org.apache.flink.table.catalog.exceptions.PartitionNotExistException;
+import org.apache.flink.table.catalog.exceptions.PartitionSpecInvalidException;
+import org.apache.flink.table.catalog.exceptions.TableNotExistException;
+import org.apache.flink.table.catalog.exceptions.TableNotPartitionedException;
+import org.apache.flink.table.catalog.glue.GlueCatalogConstants;
+import org.apache.flink.table.catalog.glue.GlueCatalogOptions;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.factories.ManagedTableFactory;
+import org.apache.flink.table.resource.ResourceType;
+import org.apache.flink.table.types.DataType;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import software.amazon.awssdk.services.glue.GlueClient;
+import software.amazon.awssdk.services.glue.model.AlreadyExistsException;
+import software.amazon.awssdk.services.glue.model.BatchDeleteTableRequest;
+import software.amazon.awssdk.services.glue.model.BatchDeleteTableResponse;
+import software.amazon.awssdk.services.glue.model.Column;
+import software.amazon.awssdk.services.glue.model.CreateDatabaseRequest;
+import software.amazon.awssdk.services.glue.model.CreateDatabaseResponse;
+import software.amazon.awssdk.services.glue.model.CreatePartitionRequest;
+import software.amazon.awssdk.services.glue.model.CreatePartitionResponse;
+import software.amazon.awssdk.services.glue.model.CreateTableRequest;
+import software.amazon.awssdk.services.glue.model.CreateTableResponse;
+import 
software.amazon.awssdk.services.glue.model.CreateUserDefinedFunctionRequest;
+import 
software.amazon.awssdk.services.glue.model.CreateUserDefinedFunctionResponse;
+import software.amazon.awssdk.services.glue.model.Database;
+import software.amazon.awssdk.services.glue.model.DatabaseInput;
+import software.amazon.awssdk.services.glue.model.DeleteDatabaseRequest;
+import software.amazon.awssdk.services.glue.model.DeleteDatabaseResponse;
+import software.amazon.awssdk.services.glue.model.DeletePartitionRequest;
+import software.amazon.awssdk.services.glue.model.DeletePartitionResponse;
+import software.amazon.awssdk.services.glue.model.DeleteTableRequest;
+import software.amazon.awssdk.services.glue.model.DeleteTableResponse;
+import 
software.amazon.awssdk.services.glue.model.DeleteUserDefinedFunctionRequest;
+import 
software.amazon.awssdk.services.glue.model.DeleteUserDefinedFunctionResponse;
+import software.amazon.awssdk.services.glue.model.EntityNotFoundException;
+import software.amazon.awssdk.services.glue.model.GetDatabaseRequest;
+import software.amazon.awssdk.services.glue.model.GetDatabaseResponse;
+import software.amazon.awssdk.services.glue.model.GetDatabasesRequest;
+import 

[jira] [Created] (FLINK-31940) DataStreamCsvITCase#CityPojo should be public

2023-04-25 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-31940:


 Summary: DataStreamCsvITCase#CityPojo should be public
 Key: FLINK-31940
 URL: https://issues.apache.org/jira/browse/FLINK-31940
 Project: Flink
  Issue Type: Sub-task
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Tests
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.18.0


Since the class is package-private it is serialized via Kryo and not the pojo 
serializer.
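The visibility rule at issue can be sketched as a simple predicate: Flink's POJO serializer requires, among other things, that the class be public, and otherwise falls back to Kryo. This is a simplified illustration of that rule, not Flink's actual type analysis.

```python
# Simplified illustration: a non-public class (like the package-private
# CityPojo here) cannot use the POJO serializer and falls back to Kryo.
def serializer_for(is_public_class: bool, has_public_noarg_ctor: bool) -> str:
    if is_public_class and has_public_noarg_ctor:
        return "PojoSerializer"
    return "KryoSerializer"
```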





[GitHub] [flink] ericxiao251 commented on pull request #22438: [FLINK-31632] Add method setMaxParallelism to DataStreamSink.

2023-04-25 Thread via GitHub


ericxiao251 commented on PR #22438:
URL: https://github.com/apache/flink/pull/22438#issuecomment-1522095761

   Ah right, done! Thanks for the reminder @MartijnVisser .





[GitHub] [flink] dmvk commented on a diff in pull request #22467: [FLINK-31888] Introduce interfaces and utils for loading and executing enrichers

2023-04-25 Thread via GitHub


dmvk commented on code in PR #22467:
URL: https://github.com/apache/flink/pull/22467#discussion_r1176348699


##
flink-tests/src/test/java/org/apache/flink/test/plugin/FailureEnricherPluginTest.java:
##
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.test.plugin;
+
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.configuration.ConfigConstants;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.JobManagerOptions;
+import org.apache.flink.core.failure.FailureEnricher;
+import org.apache.flink.core.failure.FailureEnricherFactory;
+import org.apache.flink.core.plugin.DefaultPluginManager;
+import org.apache.flink.core.plugin.DirectoryBasedPluginFinder;
+import org.apache.flink.core.plugin.PluginDescriptor;
+import org.apache.flink.core.plugin.PluginFinder;
+import org.apache.flink.core.plugin.PluginManager;
+import org.apache.flink.core.testutils.CommonTestUtils;
+import org.apache.flink.runtime.failure.FailureEnricherUtils;
+import org.apache.flink.runtime.metrics.groups.UnregisteredMetricGroups;
+import org.apache.flink.test.plugin.jar.failure.TestFailureEnricher;
+import org.apache.flink.test.plugin.jar.failure.TestFailureEnricherFactory;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.flink.shaded.guava30.com.google.common.collect.ImmutableMap;
+import org.apache.flink.shaded.guava30.com.google.common.collect.Lists;
+
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import java.io.File;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+
+/** Test for {@link FailureEnricherFactory}. */
+public class FailureEnricherPluginTest extends PluginTestBase {

Review Comment:
   What does this test beyond what DefaultPluginManagerTest already does?



##
flink-core/src/main/java/org/apache/flink/configuration/JobManagerOptions.java:
##
@@ -263,6 +263,30 @@ public class JobManagerOptions {
 .withDescription(
 "The maximum number of historical execution 
attempts kept in history.");
 
+/**
+ * Flag indicating whether JobManager should load available Failure 
Enricher plugins at startup.
+ * An optional list of Failure Enricher names. If empty, NO enrichers will 
be started. If
+ * configured, only enrichers whose name (as returned by class.getName()) 
matches any of the
+ * names in the list will be started.
+ *
+ * Example:
+ *
+ * {@code
+ * jobmanager.failure-enrichers = 
org.apache.flink.test.plugin.jar.failure.TypeFailureEnricher, 
org.apache.flink.runtime.failure.FailureEnricherUtilsTest$TestEnricher
+ *
+ * }
+ */
+@Documentation.Section(Documentation.Sections.ALL_JOB_MANAGER)
+public static final ConfigOption FAILURE_ENRICHERS_LIST =
+key("jobmanager.failure-enrichers")
+.stringType()
+.noDefaultValue()
+.withDescription(
+"An optional list of failure enricher names."
++ " If empty, NO failure enrichers will be 
started."
++ " If configured, only enrichers whose 
name matches"
++ " any of the names in the list will be 
started. ");

Review Comment:
   nit
   ```suggestion
   + " any of the names in the list will be 
started.");
   ```
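The filtering semantics described in the javadoc above can be sketched roughly as follows: an empty configuration starts no enrichers, and a non-empty list starts only enrichers whose class name matches a configured entry. The function name and data shapes here are hypothetical, not the actual `FailureEnricherUtils` code.

```python
# Rough sketch of the jobmanager.failure-enrichers filtering described above:
# no configured names -> start nothing; otherwise start only matching names.
def select_enrichers(configured: list, discovered: list) -> list:
    if not configured:
        return []  # empty config means NO enrichers are started
    wanted = set(configured)
    return [name for name in discovered if name in wanted]

discovered = [
    "org.apache.flink.test.plugin.jar.failure.TypeFailureEnricher",
    "org.example.OtherEnricher",
]
```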



##
flink-runtime/src/test/java/org/apache/flink/runtime/failure/FailureEnricherUtilsTest.java:
##
@@ -0,0 +1,350 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the 

[jira] [Created] (FLINK-31939) ClassNotFoundException on org.apache.beam.runners.flink.translation.wrappers.SourceInputFormat when running Flink 1.14 and up with Beam 2.46.0

2023-04-25 Thread AmitK (Jira)
AmitK created FLINK-31939:
-

 Summary: ClassNotFoundException on 
org.apache.beam.runners.flink.translation.wrappers.SourceInputFormat when 
running Flink 1.14 and up with Beam 2.46.0
 Key: FLINK-31939
 URL: https://issues.apache.org/jira/browse/FLINK-31939
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.15.4, 1.16.0, 1.14.2
Reporter: AmitK


While running some basic Beam code that reads a file and transforms it for 
further processing, I am unable to get past the following error, which prevents 
the job master from starting up when I try to run/deploy a fat jar on Flink. 
I'm using Apache Beam 2.46.0 with Java 11 and the Beam Flink runner 1.14 
against a local Flink instance on 1.14.2.

 

It looks to be some form of class-loading issue; I've tried later versions of 
Flink and all produce the same error. I'd appreciate any help with this.

 

Caused by: java.lang.ClassNotFoundException: 
org.apache.beam.runners.flink.translation.wrappers.SourceInputFormat
        at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:476)
        at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:589)
        at 
org.apache.flink.util.FlinkUserCodeClassLoader.loadClassWithoutExceptionHandling(FlinkUserCodeClassLoader.java:64)
        at 
org.apache.flink.util.ChildFirstClassLoader.loadClassWithoutExceptionHandling(ChildFirstClassLoader.java:74)
        at 
org.apache.flink.util.FlinkUserCodeClassLoader.loadClass(FlinkUserCodeClassLoader.java:48)
        at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
        at 
org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.loadClass(FlinkUserCodeClassLoaders.java:172)
        at java.base/java.lang.Class.forName0(Native Method)
        at java.base/java.lang.Class.forName(Class.java:398)
        at 
org.apache.flink.util.InstantiationUtil$ClassLoaderObjectInputStream.resolveClass(InstantiationUtil.java:78)
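One quick check for this kind of `ClassNotFoundException` is to verify the class file is actually packaged in the fat jar (a jar is just a zip archive). A stdlib-only sketch, shown here against an in-memory fake jar since the real artifact isn't available:

```python
import io
import zipfile

def jar_contains_class(jar_bytes: bytes, class_name: str) -> bool:
    """Check whether a fully-qualified class name is present in a jar."""
    entry = class_name.replace(".", "/") + ".class"
    with zipfile.ZipFile(io.BytesIO(jar_bytes)) as jar:
        return entry in jar.namelist()

# Build a tiny fake "jar" in memory purely for demonstration.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr(
        "org/apache/beam/runners/flink/translation/wrappers/SourceInputFormat.class",
        b"",
    )
fake_jar = buf.getvalue()
```

If the class is missing from the fat jar, the fix is usually in the shading/packaging configuration rather than in Flink's classloading.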

 

 





[GitHub] [flink-connector-aws] darenwkt commented on a diff in pull request #49: [FLINK-24438] Add Kinesis connector using FLIP-27

2023-04-25 Thread via GitHub


darenwkt commented on code in PR #49:
URL: 
https://github.com/apache/flink-connector-aws/pull/49#discussion_r1176766870


##
flink-connector-aws-kinesis-streams/src/test/java/org/apache/flink/connector/kinesis/source/examples/SourceFromKinesis.java:
##
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kinesis.source.examples;
+
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.connector.aws.config.AWSConfigConstants;
+import org.apache.flink.connector.kinesis.source.KinesisStreamsSource;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.functions.sink.DiscardingSink;
+
+import java.util.Properties;
+
+/**
+ * An example application demonstrating how to use the {@link 
KinesisStreamsSource} to read from
+ * KDS.
+ */
+public class SourceFromKinesis {
+
+public static void main(String[] args) throws Exception {
+final StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
+env.enableCheckpointing(10_000);
+
+Properties consumerConfig = new Properties();
+consumerConfig.setProperty(AWSConfigConstants.AWS_REGION, "us-east-1");

Review Comment:
   Question: Do we support `AWS_ENDPOINT` instead of `AWS_REGION` in FLIP-27?






[GitHub] [flink] MartijnVisser commented on pull request #22438: Add method setMaxParallelism to DataStreamSink.

2023-04-25 Thread via GitHub


MartijnVisser commented on PR #22438:
URL: https://github.com/apache/flink/pull/22438#issuecomment-1522085769

   @ericxiao251 Could you change your commit messages to fit with 
https://flink.apache.org/how-to-contribute/code-style-and-quality-pull-requests/#4-commit-naming-conventions
 ? That will also auto-link the Jira ticket and your PR together





[GitHub] [flink-connector-aws] darenwkt commented on a diff in pull request #49: [FLINK-24438] Add Kinesis connector using FLIP-27

2023-04-25 Thread via GitHub


darenwkt commented on code in PR #49:
URL: 
https://github.com/apache/flink-connector-aws/pull/49#discussion_r1176764441


##
flink-connector-aws-kinesis-streams/src/main/java/org/apache/flink/connector/kinesis/source/reader/PollingKinesisShardSplitReader.java:
##


Review Comment:
   We should consider adding a test for this class.






[jira] [Assigned] (FLINK-31873) Add setMaxParallelism to the DataStreamSink Class

2023-04-25 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser reassigned FLINK-31873:
--

Assignee: Eric Xiao

> Add setMaxParallelism to the DataStreamSink Class
> -
>
> Key: FLINK-31873
> URL: https://issues.apache.org/jira/browse/FLINK-31873
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Reporter: Eric Xiao
>Assignee: Eric Xiao
>Priority: Major
> Attachments: Screenshot 2023-04-20 at 4.33.14 PM.png
>
>
> When turning on Flink reactive mode, it is suggested to convert all 
> {{setParallelism}} calls to {{setMaxParallelism}} from [elastic scaling 
> docs|https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/deployment/elastic_scaling/#configuration].
> With the current implementation of the {{DataStreamSink}} class, only the 
> {{[setParallelism|https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/DataStreamSink.java#L172-L181]}}
>  function of the 
> {{[Transformation|https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/dag/Transformation.java#L248-L285]}}
>  class is exposed - {{Transformation}} also has the 
> {{[setMaxParallelism|https://github.com/apache/flink/blob/master/flink-core/src/main/java/org/apache/flink/api/dag/Transformation.java#L277-L285]}}
>  function which is not exposed.
>  
> This means for any sink in the Flink pipeline, we cannot set a max 
> parallelism.





[GitHub] [flink-kubernetes-operator] chachae commented on pull request #351: [FLINK-28646] Handle scaling operation separately in reconciler/service

2023-04-25 Thread via GitHub


chachae commented on PR #351:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/351#issuecomment-1522040383

   > ## What is the purpose of the change
   > Enable reactive scaling for standalone deployments by detecting and 
handling spec changes that do not require full application upgrades. The 
standalone integration for session clusters and application clusters allows for 
more efficient scaling operations when only the parallelism changes. We should 
distinguish this operation in the reconciler/service and implement this for 
standalone mode.
   > 
   > ## Brief change log
   > * Added a new method `FlinkService.reactiveScale()` to handle reactive 
scaling for standalone deployments
   > * Added spec change detection that results a `DiffType`. Values can be:
   >   
   >   * `IGNORE` - the job is not upgraded
   >   * `SCALE` - if reactive mode is enabled the job is rescaled, upgraded 
otherwise
   >   * `UPGRADE` - the job is upgraded
   > * Added `diff` package to the public operator API, classes marked with 
`@Experimental`
   > 
   > ## Verifying this change
   > * New unit tests were added to cover the functionality
   > * Verified manually by submitting `basic-reactive.yaml` and changing the 
`job.parallelism`
   > 
   > ```
   > [INFO ] [default.basic-reactive-example] SCALE change(s) detected, 
starting reconciliation.
   > [INFO ] [default.basic-reactive-example] 
FlinkDeploymentSpec[job.parallelism=4] differs from 
FlinkDeploymentSpec[job.parallelism=2]
   > [INFO ] [default.basic-reactive-example] Scaling TM replicas: actual(1) -> 
desired(2)
   > [INFO ] [default.basic-reactive-example] Reactive scaling succeeded
   > ```
   > 
   > ## Does this pull request potentially affect one of the following parts:
   > * Dependencies (does it add or upgrade a dependency): no
   > * The public API, i.e., is any changes to the `CustomResourceDescriptors`: 
yes
   > * Core observer or reconciler logic that is regularly executed: yes
   > 
   > ## Documentation
   > * Does this pull request introduce a new feature? yes
   > * If yes, how is the feature documented? Generated API doc
   
   How to modify `job.parallelism`?
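
   The `DiffType` classification quoted in the change log above can be sketched 
roughly as follows. This is a hypothetical illustration with invented names 
(`DiffTypeDemo`, `classify`); the operator's actual `diff` package is more 
involved.

```java
// Hypothetical sketch of the DiffType classification described in the quoted
// change log; names are invented, not the operator's actual classes.
public class DiffTypeDemo {

    public enum DiffType { IGNORE, SCALE, UPGRADE }

    /**
     * Classify a spec change: a parallelism-only change can be handled by a
     * rescale, while any other field difference requires a full upgrade.
     */
    public static DiffType classify(boolean parallelismChanged, boolean otherFieldsChanged) {
        if (otherFieldsChanged) {
            return DiffType.UPGRADE; // full application upgrade required
        }
        if (parallelismChanged) {
            return DiffType.SCALE; // rescale if reactive mode is enabled, else upgrade
        }
        return DiffType.IGNORE; // nothing to reconcile
    }

    public static void main(String[] args) {
        System.out.println(classify(true, false)); // prints SCALE
    }
}
```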
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-31937) Failing Unit Test: ClientTest.testClientServerIntegration "Connection leak"

2023-04-25 Thread Kurt Ostfeld (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kurt Ostfeld resolved FLINK-31937.
--
Resolution: Won't Fix

[~martijnvisser] ah, ok, unit tests don't need to run outside of the CI 
environment. thank you.

> Failing Unit Test: ClientTest.testClientServerIntegration "Connection leak"
> ---
>
> Key: FLINK-31937
> URL: https://issues.apache.org/jira/browse/FLINK-31937
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Queryable State
>Reporter: Kurt Ostfeld
>Priority: Minor
>
> {code:java}
> [ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 34.68 
> s <<< FAILURE! - in org.apache.flink.queryablestate.network.ClientTest[ERROR] 
> org.apache.flink.queryablestate.network.ClientTest.testClientServerIntegration
>   Time elapsed: 3.801 s  <<< FAILURE!java.lang.AssertionError: Connection 
> leak (server) at 
> org.apache.flink.queryablestate.network.ClientTest.testClientServerIntegration(ClientTest.java:719)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-31938) Failing Unit Test: FlinkConnectionTest.testCatalogSchema "Failed to get response for the operation"

2023-04-25 Thread Kurt Ostfeld (Jira)
Kurt Ostfeld created FLINK-31938:


 Summary: Failing Unit Test: FlinkConnectionTest.testCatalogSchema 
"Failed to get response for the operation"
 Key: FLINK-31938
 URL: https://issues.apache.org/jira/browse/FLINK-31938
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / JDBC
Reporter: Kurt Ostfeld


{noformat}
[ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.885 s 
<<< FAILURE! - in org.apache.flink.table.jdbc.FlinkConnectionTest
[ERROR] org.apache.flink.table.jdbc.FlinkConnectionTest.testCatalogSchema  Time 
elapsed: 1.513 s  <<< ERROR!
org.apache.flink.table.client.gateway.SqlExecutionException: Failed to get 
response for the operation 733f0d91-e9e8-4487-949f-f3abb13384e8.
at 
org.apache.flink.table.client.gateway.ExecutorImpl.getFetchResultResponse(ExecutorImpl.java:416)
at 
org.apache.flink.table.client.gateway.ExecutorImpl.fetchUtilResultsReady(ExecutorImpl.java:376)
at 
org.apache.flink.table.client.gateway.ExecutorImpl.executeStatement(ExecutorImpl.java:242)
at 
org.apache.flink.table.jdbc.FlinkConnectionTest.testCatalogSchema(FlinkConnectionTest.java:95){noformat}





[jira] [Updated] (FLINK-31912) Upgrade bytebuddy to 14.4.1

2023-04-25 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-31912:
-
Summary: Upgrade bytebuddy to 14.4.1  (was: Upgrade bytebuddy)

> Upgrade bytebuddy to 14.4.1
> ---
>
> Key: FLINK-31912
> URL: https://issues.apache.org/jira/browse/FLINK-31912
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>






[jira] [Closed] (FLINK-31912) Upgrade bytebuddy

2023-04-25 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-31912.

Resolution: Fixed

master: 4d50f78cf99ebdf1c57d225ac26904abd8b4bc64

> Upgrade bytebuddy
> -
>
> Key: FLINK-31912
> URL: https://issues.apache.org/jira/browse/FLINK-31912
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>






[GitHub] [flink] zentol merged pull request #22474: [FLINK-31912][build] Upgrade bytebuddy to 1.14.4

2023-04-25 Thread via GitHub


zentol merged PR #22474:
URL: https://github.com/apache/flink/pull/22474





[jira] [Closed] (FLINK-31793) Remove dependency on flink-shaded for flink-connector-jdbc

2023-04-25 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-31793.
--
Fix Version/s: jdbc-3.2.0
   Resolution: Fixed

Fixed in main: 7f5a8b7310671ae1f2a244e14de6cf233e1cf05b

> Remove dependency on flink-shaded for flink-connector-jdbc
> --
>
> Key: FLINK-31793
> URL: https://issues.apache.org/jira/browse/FLINK-31793
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Connectors / JDBC
>Reporter: Danny Cranmer
>Assignee: Wencong Liu
>Priority: Major
>  Labels: pull-request-available
> Fix For: jdbc-3.2.0
>
>
> The JDBC connector relies on flink-shaded and uses Flink's shaded Guava. With 
> the externalization of connectors, connectors shouldn't rely on flink-shaded 
> but should instead shade dependencies such as this one themselves.





[GitHub] [flink-connector-jdbc] boring-cyborg[bot] commented on pull request #40: [FLINK-31793] remove dependency on flink-shaded guava for flink-connector-jdbc

2023-04-25 Thread via GitHub


boring-cyborg[bot] commented on PR #40:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/40#issuecomment-1521895137

   Awesome work, congrats on your first merged pull request!
   





[GitHub] [flink-connector-jdbc] MartijnVisser merged pull request #40: [FLINK-31793] remove dependency on flink-shaded guava for flink-connector-jdbc

2023-04-25 Thread via GitHub


MartijnVisser merged PR #40:
URL: https://github.com/apache/flink-connector-jdbc/pull/40





[jira] [Commented] (FLINK-31937) Failing Unit Test: ClientTest.testClientServerIntegration "Connection leak"

2023-04-25 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17716290#comment-17716290
 ] 

Martijn Visser commented on FLINK-31937:


[~kurto] My initial thinking is that this is a local problem, since we run CI 
for every PR and every merged commit and this test doesn't fail there... 

> Failing Unit Test: ClientTest.testClientServerIntegration "Connection leak"
> ---
>
> Key: FLINK-31937
> URL: https://issues.apache.org/jira/browse/FLINK-31937
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Queryable State
>Reporter: Kurt Ostfeld
>Priority: Minor
>
> {code:java}
> [ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 34.68 
> s <<< FAILURE! - in org.apache.flink.queryablestate.network.ClientTest[ERROR] 
> org.apache.flink.queryablestate.network.ClientTest.testClientServerIntegration
>   Time elapsed: 3.801 s  <<< FAILURE!java.lang.AssertionError: Connection 
> leak (server) at 
> org.apache.flink.queryablestate.network.ClientTest.testClientServerIntegration(ClientTest.java:719)
>  {code}





[jira] [Created] (FLINK-31937) Failing Unit Test: ClientTest.testClientServerIntegration "Connection leak"

2023-04-25 Thread Kurt Ostfeld (Jira)
Kurt Ostfeld created FLINK-31937:


 Summary: Failing Unit Test: ClientTest.testClientServerIntegration 
"Connection leak"
 Key: FLINK-31937
 URL: https://issues.apache.org/jira/browse/FLINK-31937
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Queryable State
Reporter: Kurt Ostfeld


{code:java}
[ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 34.68 s 
<<< FAILURE! - in org.apache.flink.queryablestate.network.ClientTest[ERROR] 
org.apache.flink.queryablestate.network.ClientTest.testClientServerIntegration  
Time elapsed: 3.801 s  <<< FAILURE!java.lang.AssertionError: Connection leak 
(server) at 
org.apache.flink.queryablestate.network.ClientTest.testClientServerIntegration(ClientTest.java:719)
 {code}





[GitHub] [flink] zentol commented on pull request #22484: [FLINK-31934][rocksdb][tests] Remove mocking

2023-04-25 Thread via GitHub


zentol commented on PR #22484:
URL: https://github.com/apache/flink/pull/22484#issuecomment-1521829398

   IIRC those mocks didn't cause issues on Java 17, so to me they are out of 
scope of this ticket.





[jira] [Assigned] (FLINK-31936) Support setting scale up max factor

2023-04-25 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora reassigned FLINK-31936:
--

Assignee: Zhanghao Chen

> Support setting scale up max factor
> ---
>
> Key: FLINK-31936
> URL: https://issues.apache.org/jira/browse/FLINK-31936
> Project: Flink
>  Issue Type: Improvement
>  Components: Autoscaler
>Reporter: Zhanghao Chen
>Assignee: Zhanghao Chen
>Priority: Major
>
> Currently, only the scale-down max factor can be configured. We should add a 
> config for the scale-up max factor as well. In many cases, a job's 
> performance won't improve after scaling up due to external bottlenecks. 
> Although we can detect an ineffective scale-up and block further scaling, it 
> already hurts if we scale too much in a single step, which may even burn out 
> external services.
> As for the default value, I think 200% would be a good start.
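
A minimal sketch of the clamping this issue proposes, under the assumption that 
the factor caps how much a single scaling step may grow the parallelism. The 
names (`ScaleUpClamp`, `clamp`) are hypothetical, not the autoscaler's actual code.

```java
// Hypothetical sketch of clamping a single scaling step by a configurable
// max scale-up factor; names and exact semantics are assumptions.
public class ScaleUpClamp {

    /**
     * Cap the proposed parallelism so that one step never grows the job by
     * more than (1 + maxScaleUpFactor) times the current parallelism.
     */
    public static int clamp(int current, int proposed, double maxScaleUpFactor) {
        int cap = (int) Math.ceil(current * (1.0 + maxScaleUpFactor));
        return Math.min(proposed, cap);
    }

    public static void main(String[] args) {
        // With the suggested 200% default, 4 -> at most 12 even if 20 was proposed.
        System.out.println(clamp(4, 20, 2.0)); // prints 12
    }
}
```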





[jira] [Commented] (FLINK-31936) Support setting scale up max factor

2023-04-25 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17716271#comment-17716271
 ] 

Gyula Fora commented on FLINK-31936:


Thanks [~Zhanghao Chen], I have assigned this to you :) 

> Support setting scale up max factor
> ---
>
> Key: FLINK-31936
> URL: https://issues.apache.org/jira/browse/FLINK-31936
> Project: Flink
>  Issue Type: Improvement
>  Components: Autoscaler
>Reporter: Zhanghao Chen
>Priority: Major
>
> Currently, only the scale-down max factor can be configured. We should add a 
> config for the scale-up max factor as well. In many cases, a job's 
> performance won't improve after scaling up due to external bottlenecks. 
> Although we can detect an ineffective scale-up and block further scaling, it 
> already hurts if we scale too much in a single step, which may even burn out 
> external services.
> As for the default value, I think 200% would be a good start.





[GitHub] [flink] fredia commented on a diff in pull request #22457: [FLINK-31876][QS] Migrate flink-queryable-state-runtime tests to JUnit5

2023-04-25 Thread via GitHub


fredia commented on code in PR #22457:
URL: https://github.com/apache/flink/pull/22457#discussion_r1176516288


##
flink-queryable-state/flink-queryable-state-runtime/src/test/java/org/apache/flink/queryablestate/network/ClientTest.java:
##
@@ -257,33 +258,34 @@ public void testRequestUnavailableHost() throws Exception 
{
 try {
 client = new Client<>("Test Client", 1, serializer, stats);
 
-InetSocketAddress serverAddress = new 
InetSocketAddress(InetAddress.getLocalHost(), 0);
+InetSocketAddress serverAddress =

Review Comment:
   Yes, I will rebase this pr later.






[GitHub] [flink-connector-aws] darenwkt commented on a diff in pull request #49: [FLINK-24438] Add Kinesis connector using FLIP-27

2023-04-25 Thread via GitHub


darenwkt commented on code in PR #49:
URL: 
https://github.com/apache/flink-connector-aws/pull/49#discussion_r1176498371


##
flink-connector-aws-kinesis-streams/src/main/java/org/apache/flink/connector/kinesis/source/config/SourceConfigConstants.java:
##
@@ -0,0 +1,369 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kinesis.source.config;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.connector.aws.config.AWSConfigConstants;
+import 
org.apache.flink.connector.kinesis.source.reader.PollingKinesisShardSplitReader;
+
+import java.time.Duration;
+
+@PublicEvolving
+public class SourceConfigConstants extends AWSConfigConstants {
+public enum InitialPosition {
+LATEST,
+TRIM_HORIZON,
+AT_TIMESTAMP
+}
+
+/** The record publisher type represents the record-consume style. */
+public enum RecordPublisherType {
+
+/** Consume the Kinesis records using AWS SDK v2 with the enhanced 
fan-out consumer. */
+EFO,
+/** Consume the Kinesis records using AWS SDK v1 with the get-records 
method. */
+POLLING
+}
+
+/** The EFO registration type represents how we are going to de-/register 
efo consumer. */
+public enum EFORegistrationType {
+
+/**
+ * Delay the registration of efo consumer for taskmanager to execute. 
De-register the efo
+ * consumer for taskmanager to execute when task is shut down.
+ */
+LAZY,
+/**
+ * Register the efo consumer eagerly for jobmanager to execute. 
De-register the efo consumer
+ * the same way as lazy does.
+ */
+EAGER,
+/** Do not register efo consumer programmatically. Do not de-register 
either. */
+NONE
+}
+
+/** The RecordPublisher type (EFO|POLLING). */
+public static final String RECORD_PUBLISHER_TYPE = 
"flink.stream.recordpublisher";
+
+public static final String DEFAULT_RECORD_PUBLISHER_TYPE =
+RecordPublisherType.POLLING.toString();
+
+/** Determine how and when consumer de-/registration is performed 
(LAZY|EAGER|NONE). */
+public static final String EFO_REGISTRATION_TYPE = 
"flink.stream.efo.registration";
+
+public static final String DEFAULT_EFO_REGISTRATION_TYPE = 
EFORegistrationType.EAGER.toString();
+
+/** The name of the EFO consumer to register with KDS. */
+public static final String EFO_CONSUMER_NAME = 
"flink.stream.efo.consumername";
+
+/** The prefix of consumer ARN for a given stream. */
+public static final String EFO_CONSUMER_ARN_PREFIX = 
"flink.stream.efo.consumerarn";
+
+/** The initial position to start reading Kinesis streams from. */
+public static final String STREAM_INITIAL_POSITION = 
"flink.stream.initpos";
+
+public static final String DEFAULT_STREAM_INITIAL_POSITION = 
InitialPosition.LATEST.toString();
+
+/**
+ * The initial timestamp to start reading Kinesis stream from (when 
AT_TIMESTAMP is set for
+ * STREAM_INITIAL_POSITION).
+ */
+public static final String STREAM_INITIAL_TIMESTAMP = 
"flink.stream.initpos.timestamp";
+
+/**
+ * The date format of initial timestamp to start reading Kinesis stream 
from (when AT_TIMESTAMP
+ * is set for STREAM_INITIAL_POSITION).
+ */
+public static final String STREAM_TIMESTAMP_DATE_FORMAT =
+"flink.stream.initpos.timestamp.format";
+
+public static final String DEFAULT_STREAM_TIMESTAMP_DATE_FORMAT =
+"yyyy-MM-dd'T'HH:mm:ss.SSSXXX";
+
+/** The maximum number of describeStream attempts if we get a recoverable 
exception. */
+public static final String STREAM_DESCRIBE_RETRIES = 
"flink.stream.describe.maxretries";
+
+public static final int DEFAULT_STREAM_DESCRIBE_RETRIES = 50;
+
+/** The base backoff time between each describeStream attempt. */
+public static final String STREAM_DESCRIBE_BACKOFF_BASE = 
"flink.stream.describe.backoff.base";
+
+public static final long DEFAULT_STREAM_DESCRIBE_BACKOFF_BASE = 2000L;
+
+/** The maximum backoff time between each describeStream attempt. */
+public static final String 

[GitHub] [flink-connector-aws] darenwkt commented on a diff in pull request #49: [FLINK-24438] Add Kinesis connector using FLIP-27

2023-04-25 Thread via GitHub


darenwkt commented on code in PR #49:
URL: 
https://github.com/apache/flink-connector-aws/pull/49#discussion_r1176497702


##
flink-connector-aws-kinesis-streams/src/main/java/org/apache/flink/connector/kinesis/source/config/SourceConfigConstants.java:
##
@@ -0,0 +1,369 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.kinesis.source.config;
+
+import org.apache.flink.annotation.PublicEvolving;
+import org.apache.flink.connector.aws.config.AWSConfigConstants;
+import 
org.apache.flink.connector.kinesis.source.reader.PollingKinesisShardSplitReader;
+
+import java.time.Duration;
+
+@PublicEvolving
+public class SourceConfigConstants extends AWSConfigConstants {
+public enum InitialPosition {
+LATEST,
+TRIM_HORIZON,
+AT_TIMESTAMP
+}
+
+/** The record publisher type represents the record-consume style. */
+public enum RecordPublisherType {
+
+/** Consume the Kinesis records using AWS SDK v2 with the enhanced 
fan-out consumer. */
+EFO,
+/** Consume the Kinesis records using AWS SDK v1 with the get-records 
method. */
+POLLING
+}
+
+/** The EFO registration type represents how we are going to de-/register 
efo consumer. */
+public enum EFORegistrationType {
+
+/**
+ * Delay the registration of efo consumer for taskmanager to execute. 
De-register the efo
+ * consumer for taskmanager to execute when task is shut down.
+ */
+LAZY,
+/**
+ * Register the efo consumer eagerly for jobmanager to execute. 
De-register the efo consumer
+ * the same way as lazy does.
+ */
+EAGER,
+/** Do not register efo consumer programmatically. Do not de-register 
either. */
+NONE
+}
+
+/** The RecordPublisher type (EFO|POLLING). */
+public static final String RECORD_PUBLISHER_TYPE = 
"flink.stream.recordpublisher";
+
+public static final String DEFAULT_RECORD_PUBLISHER_TYPE =
+RecordPublisherType.POLLING.toString();
+
+/** Determine how and when consumer de-/registration is performed 
(LAZY|EAGER|NONE). */
+public static final String EFO_REGISTRATION_TYPE = 
"flink.stream.efo.registration";
+
+public static final String DEFAULT_EFO_REGISTRATION_TYPE = 
EFORegistrationType.EAGER.toString();
+
+/** The name of the EFO consumer to register with KDS. */
+public static final String EFO_CONSUMER_NAME = 
"flink.stream.efo.consumername";
+
+/** The prefix of consumer ARN for a given stream. */
+public static final String EFO_CONSUMER_ARN_PREFIX = 
"flink.stream.efo.consumerarn";
+
+/** The initial position to start reading Kinesis streams from. */
+public static final String STREAM_INITIAL_POSITION = 
"flink.stream.initpos";
+
+public static final String DEFAULT_STREAM_INITIAL_POSITION = 
InitialPosition.LATEST.toString();
+
+/**
+ * The initial timestamp to start reading Kinesis stream from (when 
AT_TIMESTAMP is set for
+ * STREAM_INITIAL_POSITION).
+ */
+public static final String STREAM_INITIAL_TIMESTAMP = 
"flink.stream.initpos.timestamp";
+
+/**
+ * The date format of initial timestamp to start reading Kinesis stream 
from (when AT_TIMESTAMP
+ * is set for STREAM_INITIAL_POSITION).
+ */
+public static final String STREAM_TIMESTAMP_DATE_FORMAT =
+"flink.stream.initpos.timestamp.format";
+
+public static final String DEFAULT_STREAM_TIMESTAMP_DATE_FORMAT =
+"yyyy-MM-dd'T'HH:mm:ss.SSSXXX";
+
+/** The maximum number of describeStream attempts if we get a recoverable 
exception. */
+public static final String STREAM_DESCRIBE_RETRIES = 
"flink.stream.describe.maxretries";
+
+public static final int DEFAULT_STREAM_DESCRIBE_RETRIES = 50;
+
+/** The base backoff time between each describeStream attempt. */
+public static final String STREAM_DESCRIBE_BACKOFF_BASE = 
"flink.stream.describe.backoff.base";
+
+public static final long DEFAULT_STREAM_DESCRIBE_BACKOFF_BASE = 2000L;
+
+/** The maximum backoff time between each describeStream attempt. */
+public static final String 

[GitHub] [flink] masteryhx commented on a diff in pull request #22457: [FLINK-31876][QS] Migrate flink-queryable-state-runtime tests to JUnit5

2023-04-25 Thread via GitHub


masteryhx commented on code in PR #22457:
URL: https://github.com/apache/flink/pull/22457#discussion_r1176184618


##
flink-queryable-state/flink-queryable-state-runtime/src/test/java/org/apache/flink/queryablestate/itcases/AbstractQueryableStateTestBase.java:
##
@@ -128,9 +132,9 @@ public abstract class AbstractQueryableStateTestBase 
extends TestLogger {
 
 protected static int maxParallelism;
 
-@ClassRule public static TemporaryFolder classloaderFolder = new 
TemporaryFolder();
+@TempDir static File classloaderFolder;
 
-@Before
+@BeforeEach
 public void setUp() throws Exception {
 // NOTE: do not use a shared instance for all tests as the tests may 
break
 this.stateBackend = createStateBackend();

Review Comment:
   'Assert.assertNotNull' should be replaced with 'Assertions.assertNotNull'



##
flink-queryable-state/flink-queryable-state-runtime/src/test/java/org/apache/flink/queryablestate/itcases/AbstractQueryableStateTestBase.java:
##
@@ -232,7 +236,7 @@ public Integer getKey(Tuple2 value) {
 try {
 Tuple2 res = response.get();
 counts.set(key, res.f1);
-assertEquals("Key mismatch", key, 
res.f0.intValue());
+assertEquals(key, res.f0.intValue(), "Key 
mismatch");
 } catch (Exception e) {
 Assert.fail(e.getMessage());

Review Comment:
   Assertions.fail



##
flink-queryable-state/flink-queryable-state-runtime/src/test/java/org/apache/flink/queryablestate/network/ClientTest.java:
##
@@ -257,33 +258,34 @@ public void testRequestUnavailableHost() throws Exception 
{
 try {
 client = new Client<>("Test Client", 1, serializer, stats);
 
-InetSocketAddress serverAddress = new 
InetSocketAddress(InetAddress.getLocalHost(), 0);
+InetSocketAddress serverAddress =

Review Comment:
   IIUC, this is being fixed in 
[FLINK-31897](https://issues.apache.org/jira/browse/FLINK-31897)?






[GitHub] [flink] fredia commented on a diff in pull request #22445: [FLINK-31876][QS] Migrate flink-queryable-state-client tests to JUnit5

2023-04-25 Thread via GitHub


fredia commented on code in PR #22445:
URL: https://github.com/apache/flink/pull/22445#discussion_r1176412964


##
flink-queryable-state/flink-queryable-state-client-java/src/test/java/org/apache/flink/queryablestate/client/state/ImmutableAggregatingStateTest.java:
##
@@ -24,23 +24,24 @@
 import org.apache.flink.api.common.state.AggregatingStateDescriptor;
 import org.apache.flink.core.memory.DataOutputViewStreamWrapper;
 
-import org.junit.Before;
-import org.junit.Test;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
 
 import java.io.ByteArrayOutputStream;
 
-import static org.junit.Assert.assertEquals;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
+import static org.junit.jupiter.api.Assertions.assertEquals;

Review Comment:
   Sorry for the misunderstanding, I will replace all JUnit 5 assertions with AssertJ.






[jira] [Commented] (FLINK-31936) Support setting scale up max factor

2023-04-25 Thread Zhanghao Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17716235#comment-17716235
 ] 

Zhanghao Chen commented on FLINK-31936:
---

Hi [~gyfora], if you think this issue is worthwhile, could you assign it to me? 
I'm trying to use autoscaling in production and would like to contribute to the 
autoscaler as well. I hope this could be the starting point of my contribution.

> Support setting scale up max factor
> ---
>
> Key: FLINK-31936
> URL: https://issues.apache.org/jira/browse/FLINK-31936
> Project: Flink
>  Issue Type: Improvement
>  Components: Autoscaler
>Reporter: Zhanghao Chen
>Priority: Major
>
> Currently, only scale down max factor is supported to be configured. We 
> should also add a config for scale up max factor as well. In many cases, a 
> job's performance won't improve after scaling up due to external bottlenecks. 
> Although we can detect ineffective scaling up would block further scaling, 
> but it already hurts if we scale too much in a single step which may even 
> burn out external services.
> As for the default value, I think 200% would be a good start.





[jira] [Updated] (FLINK-31936) Support setting scale up max factor

2023-04-25 Thread Zhanghao Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanghao Chen updated FLINK-31936:
--
Description: 
Currently, only scale down max factor is supported to be configured. We should 
also add a config for scale up max factor as well. In many cases, a job's 
performance won't improve after scaling up due to external bottlenecks. 
Although we can detect ineffective scaling up and would block further scaling, 
but it already hurts if we scale too much in a single step which may even burn 
out external services.

As for the default value, I think 200% would be a good start.

  was:
Currently, only scale down max factor is supported to be configured. We should 
also add a config for scale up max factor as well. In many cases, a job's 
performance won't improve after scaling up due to external bottlenecks. 
Although we can detect ineffective scaling up would block further scaling, but 
it already hurts if we scale too much in a single step which may even burn out 
external services.

As for the default value, I think 200% would be a good start.


> Support setting scale up max factor
> ---
>
> Key: FLINK-31936
> URL: https://issues.apache.org/jira/browse/FLINK-31936
> Project: Flink
>  Issue Type: Improvement
>  Components: Autoscaler
>Reporter: Zhanghao Chen
>Priority: Major
>
> Currently, only scale down max factor is supported to be configured. We 
> should also add a config for scale up max factor as well. In many cases, a 
> job's performance won't improve after scaling up due to external bottlenecks. 
> Although we can detect ineffective scaling up and would block further 
> scaling, but it already hurts if we scale too much in a single step which may 
> even burn out external services.
> As for the default value, I think 200% would be a good start.





[jira] [Updated] (FLINK-31936) Support setting scale up max factor

2023-04-25 Thread Zhanghao Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanghao Chen updated FLINK-31936:
--
Description: 
Currently, only scale down max factor is supported to be configured. We should 
also add a config for scale up max factor as well. In many cases, a job's 
performance won't improve after scaling up due to external bottlenecks. 
Although we can detect ineffective scaling up would block further scaling, but 
it already hurts if we scale too much in a single step which may even burn out 
external services.

As for the default value, I think 200% would be a good start.

  was:Currently, only scale down max factor is supported to be configured. We 
should also add a config for scale up max factor as well. In many cases, a 
job's performance won't improve after scaling up due to external bottlenecks. 
Although we can detect ineffective scaling up would block further scaling, but 
it already hurts if we scale too much in a single step which may even burn out 
external services.


> Support setting scale up max factor
> ---
>
> Key: FLINK-31936
> URL: https://issues.apache.org/jira/browse/FLINK-31936
> Project: Flink
>  Issue Type: Improvement
>  Components: Autoscaler
>Reporter: Zhanghao Chen
>Priority: Major
>
> Currently, only scale down max factor is supported to be configured. We 
> should also add a config for scale up max factor as well. In many cases, a 
> job's performance won't improve after scaling up due to external bottlenecks. 
> Although we can detect ineffective scaling up would block further scaling, 
> but it already hurts if we scale too much in a single step which may even 
> burn out external services.
> As for the default value, I think 200% would be a good start.





[jira] [Created] (FLINK-31936) Support setting scale up max factor

2023-04-25 Thread Zhanghao Chen (Jira)
Zhanghao Chen created FLINK-31936:
-

 Summary: Support setting scale up max factor
 Key: FLINK-31936
 URL: https://issues.apache.org/jira/browse/FLINK-31936
 Project: Flink
  Issue Type: Improvement
  Components: Autoscaler
Reporter: Zhanghao Chen


Currently, only the scale-down max factor can be configured. We should add a 
config for the scale-up max factor as well. In many cases, a job's performance 
won't improve after scaling up due to external bottlenecks. Although detecting 
an ineffective scale-up would block further scaling, scaling too much in a 
single step already hurts and may even burn out external services.





[jira] [Comment Edited] (FLINK-31560) Savepoint failing to complete with ExternallyInducedSources

2023-04-25 Thread Yanfei Lei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17716075#comment-17716075
 ] 

Yanfei Lei edited comment on FLINK-31560 at 4/25/23 10:42 AM:
--

[~fyang86] After reading the 
[code|https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/SourceStreamTask.java#L136]
 of the legacy source, I found that FLINK-25256 only focuses on the FLIP-27 
interface. Under the current architecture, it is difficult to support 
savepoints for the legacy source, because we can't pass CheckpointOptions to 
the Pravega connector.


was (Author: yanfei lei):
[~fyang86] After reading the 
[code|https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/SourceStreamTask.java#L136]
 of the legacy source, I found that  FLINK-25256 only focuses on the interface 
of FLIP-27, I will open a pr to fix this. 

> Savepoint failing to complete with ExternallyInducedSources
> ---
>
> Key: FLINK-31560
> URL: https://issues.apache.org/jira/browse/FLINK-31560
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.16.0
>Reporter: Fan Yang
>Assignee: Yanfei Lei
>Priority: Major
> Attachments: image-2023-03-23-18-03-05-943.png, 
> image-2023-03-23-18-19-24-482.png, jobmanager_log.txt, 
> taskmanager_172.28.17.19_6123-f2dbff_log, 
> tmp_tm_172.28.17.19_6123-f2dbff_tmp_job_83ad4f408d0e7bf30f940ddfa5fe00e3_op_WindowOperator_137df028a798f504a6900a4081c9990c__1_1__uuid_edc681f0-3825-45ce-a123-9ff69ce6d8f1_db_LOG
>
>
> Flink version: 1.16.0
>  
> We are using Flink to run some streaming applications with Pravega as source 
> and use window and reduce transformations. We use RocksDB state backend with 
> incremental checkpointing enabled. We don't enable the latest changelog state 
> backend.
> When we try to stop the job, we encounter issues with the savepoint failing 
> to complete for the job. This happens most of the time. On rare occasions, 
> the job gets canceled suddenly with its savepoint get completed successfully.
> Savepointing shows below error:
>  
> {code:java}
> 2023-03-22 08:55:57,521 [jobmanager-io-thread-1] WARN  org.apache.flink.runtime.checkpoint.CheckpointFailureManager [] - Failed to trigger or complete checkpoint 189 for job 7354442cd6f7c121249360680c04284d. (0 consecutive failed attempts so far)
> org.apache.flink.runtime.checkpoint.CheckpointException: Failure to finalize checkpoint.
>     at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.finalizeCheckpoint(CheckpointCoordinator.java:1375) ~[flink-dist-1.16.0.jar:1.16.0]
>     at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.completePendingCheckpoint(CheckpointCoordinator.java:1265) ~[flink-dist-1.16.0.jar:1.16.0]
>     at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.receiveAcknowledgeMessage(CheckpointCoordinator.java:1157) ~[flink-dist-1.16.0.jar:1.16.0]
>     at org.apache.flink.runtime.scheduler.ExecutionGraphHandler.lambda$acknowledgeCheckpoint$1(ExecutionGraphHandler.java:89) ~[flink-dist-1.16.0.jar:1.16.0]
>     at org.apache.flink.runtime.scheduler.ExecutionGraphHandler.lambda$processCheckpointCoordinatorMessage$3(ExecutionGraphHandler.java:119) ~[flink-dist-1.16.0.jar:1.16.0]
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
>     at java.lang.Thread.run(Thread.java:829) [?:?]
> Caused by: java.io.IOException: Unknown implementation of StreamStateHandle: class org.apache.flink.runtime.state.PlaceholderStreamStateHandle
>     at org.apache.flink.runtime.checkpoint.metadata.MetadataV2V3SerializerBase.serializeStreamStateHandle(MetadataV2V3SerializerBase.java:699) ~[flink-dist-1.16.0.jar:1.16.0]
>     at org.apache.flink.runtime.checkpoint.metadata.MetadataV2V3SerializerBase.serializeStreamStateHandleMap(MetadataV2V3SerializerBase.java:813) ~[flink-dist-1.16.0.jar:1.16.0]
>     at org.apache.flink.runtime.checkpoint.metadata.MetadataV2V3SerializerBase.serializeKeyedStateHandle(MetadataV2V3SerializerBase.java:344) ~[flink-dist-1.16.0.jar:1.16.0]
>     at org.apache.flink.runtime.checkpoint.metadata.MetadataV2V3SerializerBase.serializeKeyedStateCol(MetadataV2V3SerializerBase.java:269) ~[flink-dist-1.16.0.jar:1.16.0]
>     at org.apache.flink.runtime.checkpoint.metadata.MetadataV2V3SerializerBase.serializeSubtaskState(MetadataV2V3SerializerBase.java:262) ~[flink-dist-1.16.0.jar:1.16.0]
>     at 

[GitHub] [flink] reswqa commented on pull request #22432: [FLINK-18808][runtime] Include side outputs in numRecordsOut metric

2023-04-25 Thread via GitHub


reswqa commented on PR #22432:
URL: https://github.com/apache/flink/pull/22432#issuecomment-1521565404

   Thanks @pnowojski for the review! I have addressed your second comment on 
this PR in the fix-up commit. As for the first comment, I'd like to hear your 
feedback. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (FLINK-31924) [Flink operator] Flink Autoscale - Limit the max number of scale ups

2023-04-25 Thread Gyula Fora (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gyula Fora closed FLINK-31924.
--
Resolution: Not A Bug

Based on offline discussion, the problem seems to be related to the job 
itself, not the autoscaler logic.

> [Flink operator] Flink Autoscale - Limit the max number of scale ups
> 
>
> Key: FLINK-31924
> URL: https://issues.apache.org/jira/browse/FLINK-31924
> Project: Flink
>  Issue Type: Bug
>  Components: Autoscaler, Kubernetes Operator
>Affects Versions: kubernetes-operator-1.4.0
>Reporter: Sriram Ganesh
>Priority: Critical
>
> Found that autoscaling keeps happening even after reaching max-parallelism.
> {color:#172b4d}Flink version: 1.17{color}
> Source: Kafka
> Configuration:
>  
> {code:java}
> flinkConfiguration:
>     kubernetes.operator.job.autoscaler.enabled: "true"
>     kubernetes.operator.job.autoscaler.scaling.sources.enabled: "true"
>     kubernetes.operator.job.autoscaler.target.utilization: "0.6"
>     kubernetes.operator.job.autoscaler.target.utilization.boundary: "0.2"
>     kubernetes.operator.job.autoscaler.stabilization.interval: "1m"
>     kubernetes.operator.job.autoscaler.metrics.window: "3m"{code}
> Logs:
> {code:java}
> 2023-04-24 12:29:10,738 o.a.f.k.o.c.FlinkDeploymentController [INFO ][my-namespace/my-pod] Starting reconciliation
> 2023-04-24 12:29:10,740 o.a.f.k.o.s.FlinkResourceContextFactory [INFO ][my-namespace/my-pod] Getting service for my-job
> 2023-04-24 12:29:10,740 o.a.f.k.o.o.JobStatusObserver [INFO ][my-namespace/my-pod] Observing job status
> 2023-04-24 12:29:10,765 o.a.f.k.o.o.JobStatusObserver [INFO ][my-namespace/my-pod] Job status changed from CREATED to RUNNING
> 2023-04-24 12:29:10,870 o.a.f.k.o.l.AuditUtils [INFO ][my-namespace/my-pod] >>> Event | Info | JOBSTATUSCHANGED | Job status changed from CREATED to RUNNING
> 2023-04-24 12:29:10,938 o.a.f.k.o.l.AuditUtils [INFO ][my-namespace/my-pod] >>> Status | Info | STABLE | The resource deployment is considered to be stable and won’t be rolled back
> 2023-04-24 12:29:10,986 o.a.f.k.o.a.ScalingMetricCollector [INFO ][my-namespace/my-pod] Skipping metric collection during stabilization period until 2023-04-24T12:30:10.765Z
> 2023-04-24 12:29:10,986 o.a.f.k.o.r.d.AbstractFlinkResourceReconciler [INFO ][my-namespace/my-pod] Resource fully reconciled, nothing to do...
> 2023-04-24 12:29:10,986 o.a.f.k.o.c.FlinkDeploymentController [INFO ][my-namespace/my-pod] End of reconciliation
> 2023-04-24 12:29:25,991 o.a.f.k.o.c.FlinkDeploymentController [INFO ][my-namespace/my-pod] Starting reconciliation
> 2023-04-24 12:29:25,992 o.a.f.k.o.s.FlinkResourceContextFactory [INFO ][my-namespace/my-pod] Getting service for my-job
> 2023-04-24 12:29:25,992 o.a.f.k.o.o.JobStatusObserver [INFO ][my-namespace/my-pod] Observing job status
> 2023-04-24 12:29:26,005 o.a.f.k.o.o.JobStatusObserver [INFO ][my-namespace/my-pod] Job status (RUNNING) unchanged
> 2023-04-24 12:29:26,053 o.a.f.k.o.a.ScalingMetricCollector [INFO ][my-namespace/my-pod] Skipping metric collection during stabilization period until 2023-04-24T12:30:10.765Z
> 2023-04-24 12:29:26,054 o.a.f.k.o.r.d.AbstractFlinkResourceReconciler [INFO ][my-namespace/my-pod] Resource fully reconciled, nothing to do...
> 2023-04-24 12:29:26,054 o.a.f.k.o.c.FlinkDeploymentController [INFO ][my-namespace/my-pod] End of reconciliation
> 2023-04-24 12:29:41,059 o.a.f.k.o.c.FlinkDeploymentController [INFO ][my-namespace/my-pod] Starting reconciliation
> 2023-04-24 12:29:41,060 o.a.f.k.o.s.FlinkResourceContextFactory [INFO ][my-namespace/my-pod] Getting service for my-job
> 2023-04-24 12:29:41,061 o.a.f.k.o.o.JobStatusObserver [INFO ][my-namespace/my-pod] Observing job status
> 2023-04-24 12:29:41,075 o.a.f.k.o.o.JobStatusObserver [INFO ][my-namespace/my-pod] Job status (RUNNING) unchanged
> 2023-04-24 12:29:41,116 o.a.f.k.o.a.ScalingMetricCollector [INFO ][my-namespace/my-pod] Skipping metric collection during stabilization period until 2023-04-24T12:30:10.765Z
> 2023-04-24 12:29:41,116 o.a.f.k.o.r.d.AbstractFlinkResourceReconciler [INFO ][my-namespace/my-pod] Resource fully reconciled, nothing to do...
> 2023-04-24 12:29:41,116 o.a.f.k.o.c.FlinkDeploymentController [INFO ][my-namespace/my-pod] End of reconciliation
> 2023-04-24 12:29:56,121 o.a.f.k.o.c.FlinkDeploymentController [INFO ][my-namespace/my-pod] Starting reconciliation
> 2023-04-24 12:29:56,122 o.a.f.k.o.s.FlinkResourceContextFactory [INFO ][my-namespace/my-pod] Getting service for my-job
> 2023-04-24 12:29:56,122 o.a.f.k.o.o.JobStatusObserver [INFO ][my-namespace/my-pod] Observing job status
> 2023-04-24 12:29:56,134 o.a.f.k.o.o.JobStatusObserver [INFO

[jira] [Commented] (FLINK-31835) DataTypeHint don't support Row>

2023-04-25 Thread Aitozi (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17716209#comment-17716209
 ] 

Aitozi commented on FLINK-31835:


Yes, the reason is shown above. I have pushed a PR to try to solve this, but 
it needs some discussion to avoid breaking compatibility. 

> DataTypeHint don't support Row>
> 
>
> Key: FLINK-31835
> URL: https://issues.apache.org/jira/browse/FLINK-31835
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.15.4
>Reporter: jeff-zou
>Priority: Major
>  Labels: pull-request-available
>
> Using DataTypeHint("Row>") in a UDF gives the following error:
>  
> {code:java}
> Caused by: java.lang.ClassCastException: class [I cannot be cast to class 
> [Ljava.lang.Object; ([I and [Ljava.lang.Object; are in module java.base of 
> loader 'bootstrap')
> org.apache.flink.table.data.conversion.ArrayObjectArrayConverter.toInternal(ArrayObjectArrayConverter.java:40)
> org.apache.flink.table.data.conversion.DataStructureConverter.toInternalOrNull(DataStructureConverter.java:61)
> org.apache.flink.table.data.conversion.RowRowConverter.toInternal(RowRowConverter.java:75)
> org.apache.flink.table.data.conversion.RowRowConverter.toInternal(RowRowConverter.java:37)
> org.apache.flink.table.data.conversion.DataStructureConverter.toInternalOrNull(DataStructureConverter.java:61)
> StreamExecCalc$251.processElement_split9(Unknown Source)
> StreamExecCalc$251.processElement(Unknown Source)
> org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:82)
>  {code}
>  
> The function is as follows:
> {code:java}
> @DataTypeHint("Row>")
> public Row eval() {
> int[] i = new int[3];
> return Row.of(i);
> } {code}
>  
> This error is not reported when testing other simple types, so it is not an 
> environmental problem.
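The ClassCastException can be reproduced outside Flink with plain Java; a minimal sketch of the root cause (a primitive int[] is not a subtype of Object[], while a boxed Integer[] is, so one workaround is returning Integer[] from the UDF):

```java
public class PrimitiveArrayCast {
    public static void main(String[] args) {
        Object boxed = new Integer[] {1, 2, 3};
        Object primitive = new int[] {1, 2, 3};

        // Integer[] is an Object[] subtype, so this cast succeeds.
        Object[] ok = (Object[]) boxed;
        System.out.println(ok.length); // prints 3

        // int[] is not an Object[]; this is the failing cast from the report:
        // "class [I cannot be cast to class [Ljava.lang.Object;"
        try {
            Object[] broken = (Object[]) primitive;
            System.out.println(broken.length); // never reached
        } catch (ClassCastException e) {
            System.out.println("ClassCastException: " + e.getMessage());
        }
    }
}
```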





[GitHub] [flink] flinkbot commented on pull request #22486: [FLINK-31935] Register handlers for resource requirements endpoints i…

2023-04-25 Thread via GitHub


flinkbot commented on PR #22486:
URL: https://github.com/apache/flink/pull/22486#issuecomment-1521537079

   
   ## CI report:
   
   * 797d6f45f1f0f1827b59711121ade62ec03d78b8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] dmvk commented on pull request #22486: [FLINK-31935] Register handlers for resource requirements endpoints i…

2023-04-25 Thread via GitHub


dmvk commented on PR #22486:
URL: https://github.com/apache/flink/pull/22486#issuecomment-1521530533

   cc @zentol 





[jira] [Updated] (FLINK-31935) The new resource requirements REST API is only available for session clusters

2023-04-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-31935:
---
Labels: pull-request-available  (was: )

> The new resource requirements REST API is only available for session clusters
> -
>
> Key: FLINK-31935
> URL: https://issues.apache.org/jira/browse/FLINK-31935
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.18.0
>Reporter: David Morávek
>Assignee: David Morávek
>Priority: Major
>  Labels: pull-request-available
>
> We need to register both `JobResourceRequirementsHandler` and `
> JobResourceRequirementsUpdateHandler` for application / per-job clusters as 
> well.
>  
> These handlers have been introduced as part of FLINK-31316.





[GitHub] [flink] dmvk opened a new pull request, #22486: [FLINK-31935] Register handlers for resource requirements endpoints i…

2023-04-25 Thread via GitHub


dmvk opened a new pull request, #22486:
URL: https://github.com/apache/flink/pull/22486

   https://issues.apache.org/jira/browse/FLINK-31935
   
   Register handlers for resource requirements endpoints in all deployment 
modes (session, application, per-job).





[GitHub] [flink-connector-jdbc] WencongLiu commented on a diff in pull request #40: [FLINK-31793] remove dependency on flink-shaded guava for flink-connector-jdbc

2023-04-25 Thread via GitHub


WencongLiu commented on code in PR #40:
URL: 
https://github.com/apache/flink-connector-jdbc/pull/40#discussion_r1176297094


##
flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/utils/JdbcTypeUtil.java:
##
@@ -26,7 +26,7 @@
 import org.apache.flink.api.java.typeutils.ObjectArrayTypeInfo;
 import org.apache.flink.table.types.logical.LogicalTypeRoot;
 
-import org.apache.flink.shaded.guava30.com.google.common.collect.ImmutableMap;
+import com.google.common.collect.ImmutableMap;

Review Comment:
   Done. I've removed all guava dependencies in this repo.  @MartijnVisser 









[jira] [Commented] (FLINK-31935) The new resource requirements REST API is only available for session clusters

2023-04-25 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17716203#comment-17716203
 ] 

Chesnay Schepler commented on FLINK-31935:
--

Whoops; just need to move them up into the WebMonitorEndpoint I guess.

> The new resource requirements REST API is only available for session clusters
> -
>
> Key: FLINK-31935
> URL: https://issues.apache.org/jira/browse/FLINK-31935
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.18.0
>Reporter: David Morávek
>Assignee: David Morávek
>Priority: Major
>
> We need to register both `JobResourceRequirementsHandler` and `
> JobResourceRequirementsUpdateHandler` for application / per-job clusters as 
> well.
>  
> These handlers have been introduced as part of FLINK-31316.





[GitHub] [flink] MartijnVisser commented on pull request #22481: [FLINK-27805] bump orc version to 1.7.8

2023-04-25 Thread via GitHub


MartijnVisser commented on PR #22481:
URL: https://github.com/apache/flink/pull/22481#issuecomment-1521507651

   @lirui-apache @JingsongLi Does one of you want to review this PR?





[GitHub] [flink] reswqa commented on a diff in pull request #22445: [FLINK-31876][QS] Migrate flink-queryable-state-client tests to JUnit5

2023-04-25 Thread via GitHub


reswqa commented on code in PR #22445:
URL: https://github.com/apache/flink/pull/22445#discussion_r1176281072


##
flink-queryable-state/flink-queryable-state-client-java/src/test/java/org/apache/flink/queryablestate/client/state/ImmutableAggregatingStateTest.java:
##
@@ -24,23 +24,24 @@
 import org.apache.flink.api.common.state.AggregatingStateDescriptor;
 import org.apache.flink.core.memory.DataOutputViewStreamWrapper;
 
-import org.junit.Before;
-import org.junit.Test;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
 
 import java.io.ByteArrayOutputStream;
 
-import static org.junit.Assert.assertEquals;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
+import static org.junit.jupiter.api.Assertions.assertEquals;

Review Comment:
   Why is this still here? We shouldn't use any non-AssertJ assertions anymore.



##
flink-queryable-state/flink-queryable-state-client-java/src/test/java/org/apache/flink/queryablestate/client/state/ImmutableMapStateTest.java:
##
@@ -24,28 +24,29 @@
 import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
 import 
org.apache.flink.queryablestate.client.state.serialization.KvStateSerializer;
 
-import org.junit.Before;
-import org.junit.Test;
+import org.junit.jupiter.api.BeforeEach;
+import org.junit.jupiter.api.Test;
 
 import java.util.HashMap;
 import java.util.Iterator;
 import java.util.Map;
 
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
+import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.assertFalse;
+import static org.junit.jupiter.api.Assertions.assertTrue;

Review Comment:
   These should all be removed.



##
flink-queryable-state/flink-queryable-state-client-java/src/test/java/org/apache/flink/queryablestate/client/state/ImmutableListStateTest.java:
##
@@ -56,26 +58,23 @@ public void setUp() throws Exception {
 listState = ImmutableListState.createState(listStateDesc, serInit);
 }
 
-@Test(expected = UnsupportedOperationException.class)
-public void testUpdate() throws Exception {
+@Test
+void testUpdate() throws Exception {
 List list = getStateContents();
-assertEquals(1L, list.size());
-
-long element = list.get(0);
-assertEquals(42L, element);
-
-listState.add(54L);
+assertThat(list).hasSize(1);
+assertThat(list).contains(42L, Index.atIndex(0));

Review Comment:
   ```suggestion
   assertThat(list).containsExactly(42L);
   ```






[jira] [Commented] (FLINK-31930) MetricQueryService works not properly in k8s with IPv6 stack

2023-04-25 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17716202#comment-17716202
 ] 

Martijn Visser commented on FLINK-31930:


[~caiyi] Do you want to open a PR to resolve this?

> MetricQueryService works not properly in k8s with IPv6 stack
> 
>
> Key: FLINK-31930
> URL: https://issues.apache.org/jira/browse/FLINK-31930
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / RPC
> Environment: 1. K8s with ipv6 stack
> 2. Deploy flink-kubernetes-operator
> 3. Deploy a standalone cluster with 3 taskmanager using kubernetes 
> high-availability.
>Reporter: caiyi
>Priority: Major
> Attachments: 1.jpg
>
>
> As shown in the attachment below, MetricQueryService does not work properly 
> in k8s with an IPv6 stack.





[jira] [Commented] (FLINK-31928) flink-kubernetes works not properly in k8s with IPv6 stack

2023-04-25 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17716201#comment-17716201
 ] 

Martijn Visser commented on FLINK-31928:


[~caiyi] Do you want to open a PR to update okhttp3? 

> flink-kubernetes works not properly in k8s with IPv6 stack
> --
>
> Key: FLINK-31928
> URL: https://issues.apache.org/jira/browse/FLINK-31928
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Kubernetes Operator
> Environment: Kubernetes of IPv6 stack.
>Reporter: caiyi
>Priority: Major
>
> As [https://github.com/square/okhttp/issues/7368] shows, the okhttp3 shaded 
> into flink-kubernetes does not work properly with an IPv6 stack in k8s. We 
> need to upgrade okhttp3 to version 4.10.0 and shade its dependency 
> org.jetbrains.kotlin:kotlin-stdlib in flink-kubernetes, or just upgrade 
> kubernetes-client to the latest version, and release a new version of 
> flink-kubernetes-operator.





[jira] [Updated] (FLINK-31928) flink-kubernetes works not properly in k8s with IPv6 stack

2023-04-25 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-31928:
---
Component/s: (was: Kubernetes Operator)

> flink-kubernetes works not properly in k8s with IPv6 stack
> --
>
> Key: FLINK-31928
> URL: https://issues.apache.org/jira/browse/FLINK-31928
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
> Environment: Kubernetes of IPv6 stack.
>Reporter: caiyi
>Priority: Major
>
> As [https://github.com/square/okhttp/issues/7368] shows, the okhttp3 shaded 
> into flink-kubernetes does not work properly with an IPv6 stack in k8s. We 
> need to upgrade okhttp3 to version 4.10.0 and shade its dependency 
> org.jetbrains.kotlin:kotlin-stdlib in flink-kubernetes, or just upgrade 
> kubernetes-client to the latest version, and release a new version of 
> flink-kubernetes-operator.





[GitHub] [flink] zoltar9264 commented on pull request #22458: [FLINK-31743][statebackend/rocksdb] disable rocksdb log relocating wh…

2023-04-25 Thread via GitHub


zoltar9264 commented on PR #22458:
URL: https://github.com/apache/flink/pull/22458#issuecomment-1521503517

   Thanks @masteryhx for helping review the PR. I have updated the code per 
your comments.





[jira] [Commented] (FLINK-31929) HighAvailabilityServicesUtils.getWebMonitorAddress works not properly in k8s with IPv6 stack

2023-04-25 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17716199#comment-17716199
 ] 

Martijn Visser commented on FLINK-31929:


It's probably worthwhile to check whether a similar approach to FLINK-30478 
could be used to resolve this.

> HighAvailabilityServicesUtils.getWebMonitorAddress works not properly in k8s 
> with IPv6 stack
> 
>
> Key: FLINK-31929
> URL: https://issues.apache.org/jira/browse/FLINK-31929
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
> Environment: K8s with IPv6 stack
>Reporter: caiyi
>Priority: Major
> Attachments: 1.jpg
>
>
> As shown in the attachment below, String.format does not work properly if 
> the address is IPv6; new URL(protocol, address, port, "").toString() is 
> correct.
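The mismatch described above can be illustrated with plain Java (the fd00::1 host is just a sample literal): String.format leaves the IPv6 colons ambiguous, while java.net.URL brackets a literal IPv6 host, as the report suggests.

```java
import java.net.URL;

public class Ipv6Address {
    public static void main(String[] args) throws Exception {
        String host = "fd00::1"; // sample IPv6 literal
        int port = 8081;

        // Naive formatting: the host's colons are indistinguishable from the
        // port separator, so the result is not a usable address.
        String formatted = String.format("http://%s:%d", host, port);
        System.out.println(formatted); // http://fd00::1:8081 (broken)

        // java.net.URL encloses a literal IPv6 host in square brackets.
        URL url = new URL("http", host, port, "");
        System.out.println(url); // http://[fd00::1]:8081
    }
}
```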





[GitHub] [flink] zoltar9264 commented on a diff in pull request #22458: [FLINK-31743][statebackend/rocksdb] disable rocksdb log relocating wh…

2023-04-25 Thread via GitHub


zoltar9264 commented on code in PR #22458:
URL: https://github.com/apache/flink/pull/22458#discussion_r1176280348


##
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBResourceContainer.java:
##
@@ -59,6 +59,9 @@
 public final class RocksDBResourceContainer implements AutoCloseable {
 private static final Logger LOG = 
LoggerFactory.getLogger(RocksDBResourceContainer.class);
 
+// the filename length limit is 255 on most operating systems
+private static final int INSTANCE_PATH_LENGTH_LIMIT = 255 - 
"_LOG".length();

Review Comment:
   Because when the log dir is set, RocksDB appends the suffix '_LOG' to the 
instance path to form the log file name; you can check it in 
   
https://github.com/ververica/frocksdb/blob/FRocksDB-6.20.3/file/filename.cc#L31
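The constant in the diff above composes simply; a trivial sketch of the arithmetic, assuming the 255-character per-filename limit the comment cites for most operating systems:

```java
public class InstancePathLimit {
    public static void main(String[] args) {
        // RocksDB derives the info-log file name by appending "_LOG" to the
        // instance path, so the path must leave room for that suffix within
        // the common 255-character filename limit.
        int limit = 255 - "_LOG".length();
        System.out.println(limit); // prints 251
    }
}
```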






[jira] [Updated] (FLINK-31928) flink-kubernetes works not properly in k8s with IPv6 stack

2023-04-25 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-31928:
---
Priority: Major  (was: Blocker)

> flink-kubernetes works not properly in k8s with IPv6 stack
> --
>
> Key: FLINK-31928
> URL: https://issues.apache.org/jira/browse/FLINK-31928
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Kubernetes Operator
> Environment: Kubernetes of IPv6 stack.
>Reporter: caiyi
>Priority: Major
>
> As [https://github.com/square/okhttp/issues/7368] shows, the okhttp3 shaded 
> into flink-kubernetes does not work properly with an IPv6 stack in k8s. We 
> need to upgrade okhttp3 to version 4.10.0 and shade its dependency 
> org.jetbrains.kotlin:kotlin-stdlib in flink-kubernetes, or just upgrade 
> kubernetes-client to the latest version, and release a new version of 
> flink-kubernetes-operator.





[jira] [Updated] (FLINK-31929) HighAvailabilityServicesUtils.getWebMonitorAddress works not properly in k8s with IPv6 stack

2023-04-25 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-31929:
---
Priority: Major  (was: Blocker)

> HighAvailabilityServicesUtils.getWebMonitorAddress works not properly in k8s 
> with IPv6 stack
> 
>
> Key: FLINK-31929
> URL: https://issues.apache.org/jira/browse/FLINK-31929
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
> Environment: K8s with IPv6 stack
>Reporter: caiyi
>Priority: Major
> Attachments: 1.jpg
>
>
> As shown in the attachment below, String.format does not work properly if 
> the address is IPv6; new URL(protocol, address, port, "").toString() is 
> correct.





[jira] [Updated] (FLINK-31930) MetricQueryService works not properly in k8s with IPv6 stack

2023-04-25 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-31930:
---
Priority: Major  (was: Blocker)

> MetricQueryService works not properly in k8s with IPv6 stack
> 
>
> Key: FLINK-31930
> URL: https://issues.apache.org/jira/browse/FLINK-31930
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / RPC
> Environment: 1. K8s with ipv6 stack
> 2. Deploy flink-kubernetes-operator
> 3. Deploy a standalone cluster with 3 taskmanager using kubernetes 
> high-availability.
>Reporter: caiyi
>Priority: Major
> Attachments: 1.jpg
>
>
> As the attached screenshot shows, MetricQueryService does not work properly 
> in k8s with an IPv6 stack.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31931) Exception history page should not link to a non-existent TM log page.

2023-04-25 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser updated FLINK-31931:
---
Fix Version/s: (was: 1.18.0)

> Exception history page should not link to a non-existent TM log page.
> -
>
> Key: FLINK-31931
> URL: https://issues.apache.org/jira/browse/FLINK-31931
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Reporter: Junrui Li
>Assignee: Junrui Li
>Priority: Major
>
> In FLINK-30358, we added support for showing the task manager ID on the 
> exception history page, with a link on the ID that jumps to the task manager 
> page. However, if the task manager no longer exists when the link is clicked, 
> the page keeps loading and the following error is continuously printed in the 
> JM log. This confuses users and should be fixed.
> {code:java}
> 2023-04-25 11:40:50,109 [flink-akka.actor.default-dispatcher-35] ERROR 
> org.apache.flink.runtime.rest.handler.taskmanager.TaskManagerDetailsHandler 
> [] - Unhandled exception.
> org.apache.flink.runtime.resourcemanager.exceptions.UnknownTaskExecutorException:
>  No TaskExecutor registered under container_01.
>   at 
> org.apache.flink.runtime.resourcemanager.ResourceManager.requestTaskManagerDetailsInfo(ResourceManager.java:697)
>  ~[flink-dist-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>   at sun.reflect.GeneratedMethodAccessor106.invoke(Unknown Source) ~[?:?]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_362]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_362]
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$handleRpcInvocation$1(AkkaRpcActor.java:309)
>  ~[?:?]
>   at 
> org.apache.flink.runtime.concurrent.akka.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
>  ~[?:?]
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:307)
>  ~[?:?]
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:222)
>  ~[?:?]
>   at 
> org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:84)
>  ~[?:?]
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:168)
>  ~[?:?]
>   at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:24) ~[?:?]
>   at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:20) ~[?:?]
>   at scala.PartialFunction.applyOrElse(PartialFunction.scala:127) 
> ~[flink-scala_2.12-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>   at scala.PartialFunction.applyOrElse$(PartialFunction.scala:126) 
> ~[flink-scala_2.12-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>   at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:20) 
> ~[?:?]
>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:175) 
> ~[flink-scala_2.12-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176) 
> ~[flink-scala_2.12-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>   at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176) 
> ~[flink-scala_2.12-1.17-SNAPSHOT.jar:1.17-SNAPSHOT]
>   at akka.actor.Actor.aroundReceive(Actor.scala:537) ~[?:?]
>   at akka.actor.Actor.aroundReceive$(Actor.scala:535) ~[?:?]
>   at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:220) ~[?:?]
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:579) ~[?:?]
>   at akka.actor.ActorCell.invoke(ActorCell.scala:547) ~[?:?]
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:270) ~[?:?]
>   at akka.dispatch.Mailbox.run(Mailbox.scala:231) ~[?:?]
>   at akka.dispatch.Mailbox.exec(Mailbox.scala:243) ~[?:?]
>   at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289) 
> [?:1.8.0_362]
>   at 
> java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056) 
> [?:1.8.0_362]
>   at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692) 
> [?:1.8.0_362]
>   at 
> java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) 
> [?:1.8.0_362]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] flinkbot commented on pull request #22485: [FLINK-31835][planner] Fix the array type can't be converted from ext…

2023-04-25 Thread via GitHub


flinkbot commented on PR #22485:
URL: https://github.com/apache/flink/pull/22485#issuecomment-1521496544

   
   ## CI report:
   
   * cfa23eca0351e51b98fd315126d9002f72e5e0fc UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-31935) The new resource requirements REST API is only available for session clusters

2023-04-25 Thread Jira
David Morávek created FLINK-31935:
-

 Summary: The new resource requirements REST API is only available 
for session clusters
 Key: FLINK-31935
 URL: https://issues.apache.org/jira/browse/FLINK-31935
 Project: Flink
  Issue Type: Bug
  Components: Runtime / REST
Affects Versions: 1.18.0
Reporter: David Morávek


We need to register both `JobResourceRequirementsHandler` and 
`JobResourceRequirementsUpdateHandler` for application / per-job clusters as 
well.
 
These handlers have been introduced as part of FLINK-31316.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-31935) The new resource requirements REST API is only available for session clusters

2023-04-25 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-31935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Morávek reassigned FLINK-31935:
-

Assignee: David Morávek

> The new resource requirements REST API is only available for session clusters
> -
>
> Key: FLINK-31935
> URL: https://issues.apache.org/jira/browse/FLINK-31935
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.18.0
>Reporter: David Morávek
>Assignee: David Morávek
>Priority: Major
>
> We need to register both `JobResourceRequirementsHandler` and 
> `JobResourceRequirementsUpdateHandler` for application / per-job clusters as 
> well.
>  
> These handlers have been introduced as part of FLINK-31316.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31835) DataTypeHint don't support Row>

2023-04-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-31835:
---
Labels: pull-request-available  (was: )

> DataTypeHint don't support Row>
> 
>
> Key: FLINK-31835
> URL: https://issues.apache.org/jira/browse/FLINK-31835
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.15.4
>Reporter: jeff-zou
>Priority: Major
>  Labels: pull-request-available
>
> Using DataTypeHint("Row>") in a UDF gives the following error:
>  
> {code:java}
> Caused by: java.lang.ClassCastException: class [I cannot be cast to class 
> [Ljava.lang.Object; ([I and [Ljava.lang.Object; are in module java.base of 
> loader 'bootstrap')
> org.apache.flink.table.data.conversion.ArrayObjectArrayConverter.toInternal(ArrayObjectArrayConverter.java:40)
> org.apache.flink.table.data.conversion.DataStructureConverter.toInternalOrNull(DataStructureConverter.java:61)
> org.apache.flink.table.data.conversion.RowRowConverter.toInternal(RowRowConverter.java:75)
> org.apache.flink.table.data.conversion.RowRowConverter.toInternal(RowRowConverter.java:37)
> org.apache.flink.table.data.conversion.DataStructureConverter.toInternalOrNull(DataStructureConverter.java:61)
> StreamExecCalc$251.processElement_split9(Unknown Source)
> StreamExecCalc$251.processElement(Unknown Source)
> org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:82)
>  {code}
>  
> The function is as follows:
> {code:java}
> @DataTypeHint("Row>")
> public Row eval() {
> int[] i = new int[3];
> return Row.of(i);
> } {code}
>  
> This error is not reported when testing other simple types, so it is not an 
> environmental problem.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] Aitozi opened a new pull request, #22485: [FLINK-31835][planner] Fix the array type can't be converted from ext…

2023-04-25 Thread via GitHub


Aitozi opened a new pull request, #22485:
URL: https://github.com/apache/flink/pull/22485

   …ernal primitive array
   
   
   ## What is the purpose of the change
   
   In the UDF below, the return type of the function is a CollectionDataType 
with `Array`, but its conversion class is `Integer[]`. So the external 
int[] array cannot be converted to the internal type, failing with the 
exception below
   
   ```
   public static class RowFunction extends ScalarFunction {
       @DataTypeHint("Row>")
       public Row eval() {
           int[] i = new int[3];
           return Row.of(i);
       }
   }
   ```
   
   ```
   Caused by: java.lang.ClassCastException: class [I cannot be cast to class 
[Ljava.lang.Object; ([I and [Ljava.lang.Object; are in module java.base of 
loader 'bootstrap')
   
org.apache.flink.table.data.conversion.ArrayObjectArrayConverter.toInternal(ArrayObjectArrayConverter.java:40)
   
org.apache.flink.table.data.conversion.DataStructureConverter.toInternalOrNull(DataStructureConverter.java:61)
   
org.apache.flink.table.data.conversion.RowRowConverter.toInternal(RowRowConverter.java:75)
   
org.apache.flink.table.data.conversion.RowRowConverter.toInternal(RowRowConverter.java:37)
   
org.apache.flink.table.data.conversion.DataStructureConverter.toInternalOrNull(DataStructureConverter.java:61)
   StreamExecCalc$251.processElement_split9(Unknown Source)
   StreamExecCalc$251.processElement(Unknown Source)
   
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:82)
 
   ```
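   As an illustrative sketch (the class name is hypothetical, and this is not the fix itself), the cast failure is reproducible in plain Java: a primitive int[] is not a subtype of Object[], so it must be boxed into the Integer[] conversion class that the converter expects:

```java
import java.util.stream.IntStream;

public class PrimitiveArrayBoxing {
    public static void main(String[] args) {
        Object value = new int[] {1, 2, 3};

        // int[] is not a subtype of Object[]; this is exactly the
        // ClassCastException from the stack trace above:
        boolean castFailed = false;
        try {
            Object[] asObjects = (Object[]) value;
        } catch (ClassCastException e) {
            castFailed = true;
        }
        System.out.println("cast failed: " + castFailed); // prints: cast failed: true

        // Boxing to Integer[] (the conversion class the converter expects) works:
        Integer[] boxed = IntStream.of((int[]) value).boxed().toArray(Integer[]::new);
        Object[] ok = boxed; // legal: Integer[] is an Object[]
        System.out.println(ok.length);
    }
}
```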
   
   
   ## Brief change log
   
   - change the default conversion class of primitive types to respect 
nullability
   - change the conversion class when the data type's nullability is 
transformed.
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-31766) Restoring from a retained checkpoint that was generated with changelog backend enabled might fail due to missing files

2023-04-25 Thread Yanfei Lei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yanfei Lei updated FLINK-31766:
---
Component/s: Runtime / Checkpointing

> Restoring from a retained checkpoint that was generated with changelog 
> backend enabled might fail due to missing files
> --
>
> Key: FLINK-31766
> URL: https://issues.apache.org/jira/browse/FLINK-31766
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Runtime / Coordination
>Affects Versions: 1.17.0, 1.16.1, 1.18.0
>Reporter: Matthias Pohl
>Priority: Major
> Attachments: 
> FLINK-31593.StatefulJobSavepointMigrationITCase.create_snapshot.log, 
> FLINK-31593.StatefulJobSavepointMigrationITCase.verify_snapshot.log
>
>
> In FLINK-31593 we discovered an instability when generating the test data for 
> {{StatefulJobSavepointMigrationITCase}} and 
> {{StatefulJobWBroadcastStateMigrationITCase}}. It appears that files are 
> deleted that shouldn't be deleted (see [~Yanfei Lei]'s [comment in 
> FLINK-31593|https://issues.apache.org/jira/browse/FLINK-31593?focusedCommentId=17706679=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17706679]).
> It's quite reproducible when generating the 1.17 test data for 
> {{StatefulJobWBroadcastStateMigrationITCase}} and doing a test run to verify 
> it.
> I'm attaching the debug logs of two such runs, which I generated for 
> FLINK-31593, to this issue as well.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] dmvk closed pull request #22467: FLINK-31888: Introduce interfaces and utils for loading and executing enrichers

2023-04-25 Thread via GitHub


dmvk closed pull request #22467: FLINK-31888: Introduce interfaces and utils 
for loading and executing enrichers
URL: https://github.com/apache/flink/pull/22467


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] dmvk commented on pull request #22467: FLINK-31888: Introduce interfaces and utils for loading and executing enrichers

2023-04-25 Thread via GitHub


dmvk commented on PR #22467:
URL: https://github.com/apache/flink/pull/22467#issuecomment-1521483868

   The commit message should follow standard formatting:
   
   `[FLINK-XXX][component] `
   
   This means that the current message
   
   ```
   FLINK-31888: Introduce interfaces and utils for loading and executing 
enrichers
   ```
   
   Should become
   
   ```
   [FLINK-31888] Introduce interfaces and utils for loading and executing 
enrichers
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] fredia commented on pull request #22445: [FLINK-31876][QS] Migrate flink-queryable-state-client tests to JUnit5

2023-04-25 Thread via GitHub


fredia commented on PR #22445:
URL: https://github.com/apache/flink/pull/22445#issuecomment-1521455180

   @jiexray @reswqa Thanks for the review. I updated the PR according to your 
suggestions; please take another look.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-31924) [Flink operator] Flink Autoscale - Limit the max number of scale ups

2023-04-25 Thread Sriram Ganesh (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17716186#comment-17716186
 ] 

Sriram Ganesh commented on FLINK-31924:
---

The issue remains the same; I tried with the main branch.

> [Flink operator] Flink Autoscale - Limit the max number of scale ups
> 
>
> Key: FLINK-31924
> URL: https://issues.apache.org/jira/browse/FLINK-31924
> Project: Flink
>  Issue Type: Bug
>  Components: Autoscaler, Kubernetes Operator
>Affects Versions: kubernetes-operator-1.4.0
>Reporter: Sriram Ganesh
>Priority: Critical
>
> We found that autoscaling keeps happening even after max-parallelism is reached.
> {color:#172b4d}Flink version: 1.17{color}
> Source: Kafka
> Configuration:
>  
> {code:java}
> flinkConfiguration:
>     kubernetes.operator.job.autoscaler.enabled: "true"
>     kubernetes.operator.job.autoscaler.scaling.sources.enabled: "true"
>     kubernetes.operator.job.autoscaler.target.utilization: "0.6"
>     kubernetes.operator.job.autoscaler.target.utilization.boundary: "0.2"
>     kubernetes.operator.job.autoscaler.stabilization.interval: "1m"
>     kubernetes.operator.job.autoscaler.metrics.window: "3m"{code}
> Logs:
> {code:java}
> 2023-04-24 12:29:10,738 o.a.f.k.o.c.FlinkDeploymentController [INFO 
> ][my-namespace/my-pod] Starting reconciliation2023-04-24 12:29:10,740 
> o.a.f.k.o.s.FlinkResourceContextFactory [INFO ][my-namespace/my-pod] Getting 
> service for my-job2023-04-24 12:29:10,740 o.a.f.k.o.o.JobStatusObserver  
> [INFO ][my-namespace/my-pod] Observing job status2023-04-24 12:29:10,765 
> o.a.f.k.o.o.JobStatusObserver  [INFO ][my-namespace/my-pod] Job status 
> changed from CREATED to RUNNING2023-04-24 12:29:10,870 o.a.f.k.o.l.AuditUtils 
> [INFO ][my-namespace/my-pod] >>> Event  | Info| JOBSTATUSCHANGED 
> | Job status changed from CREATED to RUNNING2023-04-24 12:29:10,938 
> o.a.f.k.o.l.AuditUtils [INFO ][my-namespace/my-pod] >>> Status | Info 
>| STABLE  | The resource deployment is considered to be stable and 
> won’t be rolled back2023-04-24 12:29:10,986 
> o.a.f.k.o.a.ScalingMetricCollector [INFO ][my-namespace/my-pod] Skipping 
> metric collection during stabilization period until 
> 2023-04-24T12:30:10.765Z2023-04-24 12:29:10,986 
> o.a.f.k.o.r.d.AbstractFlinkResourceReconciler [INFO ][my-namespace/my-pod] 
> Resource fully reconciled, nothing to do...2023-04-24 12:29:10,986 
> o.a.f.k.o.c.FlinkDeploymentController [INFO ][my-namespace/my-pod] End of 
> reconciliation2023-04-24 12:29:25,991 o.a.f.k.o.c.FlinkDeploymentController 
> [INFO ][my-namespace/my-pod] Starting reconciliation2023-04-24 12:29:25,992 
> o.a.f.k.o.s.FlinkResourceContextFactory [INFO ][my-namespace/my-pod] Getting 
> service for my-job2023-04-24 12:29:25,992 o.a.f.k.o.o.JobStatusObserver  
> [INFO ][my-namespace/my-pod] Observing job status2023-04-24 12:29:26,005 
> o.a.f.k.o.o.JobStatusObserver  [INFO ][my-namespace/my-pod] Job status 
> (RUNNING) unchanged2023-04-24 12:29:26,053 o.a.f.k.o.a.ScalingMetricCollector 
> [INFO ][my-namespace/my-pod] Skipping metric collection during stabilization 
> period until 2023-04-24T12:30:10.765Z2023-04-24 12:29:26,054 
> o.a.f.k.o.r.d.AbstractFlinkResourceReconciler [INFO ][my-namespace/my-pod] 
> Resource fully reconciled, nothing to do...2023-04-24 12:29:26,054 
> o.a.f.k.o.c.FlinkDeploymentController [INFO ][my-namespace/my-pod] End of 
> reconciliation2023-04-24 12:29:41,059 o.a.f.k.o.c.FlinkDeploymentController 
> [INFO ][my-namespace/my-pod] Starting reconciliation2023-04-24 12:29:41,060 
> o.a.f.k.o.s.FlinkResourceContextFactory [INFO ][my-namespace/my-pod] Getting 
> service for my-job2023-04-24 12:29:41,061 o.a.f.k.o.o.JobStatusObserver  
> [INFO ][my-namespace/my-pod] Observing job status2023-04-24 12:29:41,075 
> o.a.f.k.o.o.JobStatusObserver  [INFO ][my-namespace/my-pod] Job status 
> (RUNNING) unchanged2023-04-24 12:29:41,116 o.a.f.k.o.a.ScalingMetricCollector 
> [INFO ][my-namespace/my-pod] Skipping metric collection during stabilization 
> period until 2023-04-24T12:30:10.765Z2023-04-24 12:29:41,116 
> o.a.f.k.o.r.d.AbstractFlinkResourceReconciler [INFO ][my-namespace/my-pod] 
> Resource fully reconciled, nothing to do...2023-04-24 12:29:41,116 
> o.a.f.k.o.c.FlinkDeploymentController [INFO ][my-namespace/my-pod] End of 
> reconciliation2023-04-24 12:29:56,121 o.a.f.k.o.c.FlinkDeploymentController 
> [INFO ][my-namespace/my-pod] Starting reconciliation2023-04-24 12:29:56,122 
> o.a.f.k.o.s.FlinkResourceContextFactory [INFO ][my-namespace/my-pod] Getting 
> service for my-job2023-04-24 12:29:56,122 o.a.f.k.o.o.JobStatusObserver  
> [INFO ][my-namespace/my-pod] Observing job status2023-04-24 12:29:56,134 
> o.a.f.k.o.o.JobStatusObserver  [INFO ][my-namespace/my-pod] Job 

[jira] [Closed] (FLINK-31897) Failing Unit Test: org.apache.flink.queryablestate.network.ClientTest.testRequestUnavailableHost

2023-04-25 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-31897.

Resolution: Fixed

> Failing Unit Test: 
> org.apache.flink.queryablestate.network.ClientTest.testRequestUnavailableHost
> 
>
> Key: FLINK-31897
> URL: https://issues.apache.org/jira/browse/FLINK-31897
> Project: Flink
>  Issue Type: Bug
>  Components: API / State Processor
>Reporter: Kurt Ostfeld
>Assignee: Yuxin Tan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.16.2, 1.18.0, 1.17.1
>
>
>  
>  
> {code:java}
> [ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.612 
> s <<< FAILURE! - in org.apache.flink.queryablestate.network.ClientTest 
> [ERROR] 
> org.apache.flink.queryablestate.network.ClientTest.testRequestUnavailableHost 
> Time elapsed: 0.006 s <<< FAILURE! java.lang.AssertionError: 
> Expected: A CompletableFuture that will have failed within 360 
> milliseconds with: java.net.ConnectException but: Future completed with 
> different exception: 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AnnotatedSocketException:
>  Can't assign requested address: /:0 Caused by: 
> java.net.BindException: Can't assign requested address  {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] TanYuxin-tyx commented on pull request #22471: [FLINK-31897] Fix the unstable test ClientTest#testRequestUnavailableHost

2023-04-25 Thread via GitHub


TanYuxin-tyx commented on PR #22471:
URL: https://github.com/apache/flink/pull/22471#issuecomment-1521416377

   Thanks @zentol and @reswqa  for the quick reviews.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-31897) Failing Unit Test: org.apache.flink.queryablestate.network.ClientTest.testRequestUnavailableHost

2023-04-25 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-31897:
-
Fix Version/s: 1.16.2
   1.18.0
   1.17.1

> Failing Unit Test: 
> org.apache.flink.queryablestate.network.ClientTest.testRequestUnavailableHost
> 
>
> Key: FLINK-31897
> URL: https://issues.apache.org/jira/browse/FLINK-31897
> Project: Flink
>  Issue Type: Bug
>  Components: API / State Processor
>Reporter: Kurt Ostfeld
>Assignee: Yuxin Tan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.16.2, 1.18.0, 1.17.1
>
>
>  
>  
> {code:java}
> [ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.612 
> s <<< FAILURE! - in org.apache.flink.queryablestate.network.ClientTest 
> [ERROR] 
> org.apache.flink.queryablestate.network.ClientTest.testRequestUnavailableHost 
> Time elapsed: 0.006 s <<< FAILURE! java.lang.AssertionError: 
> Expected: A CompletableFuture that will have failed within 360 
> milliseconds with: java.net.ConnectException but: Future completed with 
> different exception: 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AnnotatedSocketException:
>  Can't assign requested address: /:0 Caused by: 
> java.net.BindException: Can't assign requested address  {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31897) Failing Unit Test: org.apache.flink.queryablestate.network.ClientTest.testRequestUnavailableHost

2023-04-25 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-31897:
-
Component/s: Runtime / Queryable State
 (was: API / State Processor)

> Failing Unit Test: 
> org.apache.flink.queryablestate.network.ClientTest.testRequestUnavailableHost
> 
>
> Key: FLINK-31897
> URL: https://issues.apache.org/jira/browse/FLINK-31897
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Runtime / Queryable State
>Reporter: Kurt Ostfeld
>Assignee: Yuxin Tan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.16.2, 1.18.0, 1.17.1
>
>
>  
>  
> {code:java}
> [ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.612 
> s <<< FAILURE! - in org.apache.flink.queryablestate.network.ClientTest 
> [ERROR] 
> org.apache.flink.queryablestate.network.ClientTest.testRequestUnavailableHost 
> Time elapsed: 0.006 s <<< FAILURE! java.lang.AssertionError: 
> Expected: A CompletableFuture that will have failed within 360 
> milliseconds with: java.net.ConnectException but: Future completed with 
> different exception: 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AnnotatedSocketException:
>  Can't assign requested address: /:0 Caused by: 
> java.net.BindException: Can't assign requested address  {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-31897) Failing Unit Test: org.apache.flink.queryablestate.network.ClientTest.testRequestUnavailableHost

2023-04-25 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-31897:
-
Issue Type: Technical Debt  (was: Bug)

> Failing Unit Test: 
> org.apache.flink.queryablestate.network.ClientTest.testRequestUnavailableHost
> 
>
> Key: FLINK-31897
> URL: https://issues.apache.org/jira/browse/FLINK-31897
> Project: Flink
>  Issue Type: Technical Debt
>  Components: API / State Processor
>Reporter: Kurt Ostfeld
>Assignee: Yuxin Tan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.16.2, 1.18.0, 1.17.1
>
>
>  
>  
> {code:java}
> [ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.612 
> s <<< FAILURE! - in org.apache.flink.queryablestate.network.ClientTest 
> [ERROR] 
> org.apache.flink.queryablestate.network.ClientTest.testRequestUnavailableHost 
> Time elapsed: 0.006 s <<< FAILURE! java.lang.AssertionError: 
> Expected: A CompletableFuture that will have failed within 360 
> milliseconds with: java.net.ConnectException but: Future completed with 
> different exception: 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AnnotatedSocketException:
>  Can't assign requested address: /:0 Caused by: 
> java.net.BindException: Can't assign requested address  {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-31897) Failing Unit Test: org.apache.flink.queryablestate.network.ClientTest.testRequestUnavailableHost

2023-04-25 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17716174#comment-17716174
 ] 

Chesnay Schepler edited comment on FLINK-31897 at 4/25/23 8:50 AM:
---

master: 918b873c0c0654b029140581c0ed49b44f9c5273
1.17: a0c734ffdecdb9fcbc31538c74f8da5762da8207
1.16: b3a52739c994494dd565db2c43963e5d15d79fbc


was (Author: zentol):
master: 918b873c0c0654b029140581c0ed49b44f9c5273
1.17: a0c734ffdecdb9fcbc31538c74f8da5762da8207

> Failing Unit Test: 
> org.apache.flink.queryablestate.network.ClientTest.testRequestUnavailableHost
> 
>
> Key: FLINK-31897
> URL: https://issues.apache.org/jira/browse/FLINK-31897
> Project: Flink
>  Issue Type: Bug
>  Components: API / State Processor
>Reporter: Kurt Ostfeld
>Assignee: Yuxin Tan
>Priority: Minor
>  Labels: pull-request-available
>
>  
>  
> {code:java}
> [ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.612 
> s <<< FAILURE! - in org.apache.flink.queryablestate.network.ClientTest 
> [ERROR] 
> org.apache.flink.queryablestate.network.ClientTest.testRequestUnavailableHost 
> Time elapsed: 0.006 s <<< FAILURE! java.lang.AssertionError: 
> Expected: A CompletableFuture that will have failed within 360 
> milliseconds with: java.net.ConnectException but: Future completed with 
> different exception: 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AnnotatedSocketException:
>  Can't assign requested address: /:0 Caused by: 
> java.net.BindException: Can't assign requested address  {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-31897) Failing Unit Test: org.apache.flink.queryablestate.network.ClientTest.testRequestUnavailableHost

2023-04-25 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17716174#comment-17716174
 ] 

Chesnay Schepler edited comment on FLINK-31897 at 4/25/23 8:49 AM:
---

master: 918b873c0c0654b029140581c0ed49b44f9c5273
1.17: a0c734ffdecdb9fcbc31538c74f8da5762da8207


was (Author: zentol):
master: 918b873c0c0654b029140581c0ed49b44f9c5273

> Failing Unit Test: 
> org.apache.flink.queryablestate.network.ClientTest.testRequestUnavailableHost
> 
>
> Key: FLINK-31897
> URL: https://issues.apache.org/jira/browse/FLINK-31897
> Project: Flink
>  Issue Type: Bug
>  Components: API / State Processor
>Reporter: Kurt Ostfeld
>Assignee: Yuxin Tan
>Priority: Minor
>  Labels: pull-request-available
>
>  
>  
> {code:java}
> [ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.612 
> s <<< FAILURE! - in org.apache.flink.queryablestate.network.ClientTest 
> [ERROR] 
> org.apache.flink.queryablestate.network.ClientTest.testRequestUnavailableHost 
> Time elapsed: 0.006 s <<< FAILURE! java.lang.AssertionError: 
> Expected: A CompletableFuture that will have failed within 360 
> milliseconds with: java.net.ConnectException but: Future completed with 
> different exception: 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AnnotatedSocketException:
>  Can't assign requested address: /:0 Caused by: 
> java.net.BindException: Can't assign requested address  {code}
>  
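
For context on why this test was unstable: the assertion pinned the failure to exactly `java.net.ConnectException`, but connecting to an unassignable address can instead surface a `java.net.BindException` (wrapped by Netty as `AnnotatedSocketException`), as the log above shows. A minimal sketch of the more robust idea, checking the cause chain against the common supertype `java.net.SocketException` (which both exceptions extend) — note this uses a hypothetical helper, not Flink's actual test code:

```java
import java.net.BindException;
import java.net.ConnectException;
import java.net.SocketException;

public class CauseChainAssertion {

    // Walk the cause chain and report whether any link matches the expected type.
    static boolean hasCause(Throwable t, Class<? extends Throwable> expected) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            if (expected.isInstance(cur)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Mimic what the flaky run observed: the future completed with a
        // BindException rather than the expected ConnectException.
        Throwable failure = new RuntimeException(
                new BindException("Can't assign requested address"));

        // Matching only ConnectException is fragile across platforms...
        boolean strict = hasCause(failure, ConnectException.class);
        // ...while matching SocketException covers both observed failures,
        // since ConnectException and BindException both extend it.
        boolean relaxed = hasCause(failure, SocketException.class);

        System.out.println(strict + " " + relaxed); // prints "false true"
    }
}
```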





[GitHub] [flink] zentol merged pull request #22480: [FLINK-31897][Backport 1.16] Fix the unstable test ClientTest#testRequestUnavailableHost

2023-04-25 Thread via GitHub


zentol merged PR #22480:
URL: https://github.com/apache/flink/pull/22480


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] zentol merged pull request #22479: [FLINK-31897][Backport 1.17] Fix the unstable test ClientTest#testRequestUnavailableHost

2023-04-25 Thread via GitHub


zentol merged PR #22479:
URL: https://github.com/apache/flink/pull/22479





[jira] [Commented] (FLINK-31897) Failing Unit Test: org.apache.flink.queryablestate.network.ClientTest.testRequestUnavailableHost

2023-04-25 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-31897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17716174#comment-17716174
 ] 

Chesnay Schepler commented on FLINK-31897:
--

master: 918b873c0c0654b029140581c0ed49b44f9c5273






[GitHub] [flink] zentol merged pull request #22471: [FLINK-31897] Fix the unstable test ClientTest#testRequestUnavailableHost

2023-04-25 Thread via GitHub


zentol merged PR #22471:
URL: https://github.com/apache/flink/pull/22471





[GitHub] [flink] liuyongvs commented on pull request #22483: [FLINK-31118][table] Add ARRAY_UNION function.

2023-04-25 Thread via GitHub


liuyongvs commented on PR #22483:
URL: https://github.com/apache/flink/pull/22483#issuecomment-1521402397

   hi @snuyanzin, I have submitted this new PR and closed 
https://github.com/apache/flink/pull/21958.
   The new implementation accounts for type conversion and follows the Spark 
implementation. 
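
For readers unfamiliar with the function: in Spark, which the comment cites as its reference, `array_union` returns the distinct elements of both input arrays, preserving first-seen order. A minimal sketch of those semantics (a hypothetical standalone helper, not the Flink implementation under review):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class ArrayUnionSketch {

    // Union of two arrays: all distinct elements, first-seen order preserved.
    static <T> List<T> arrayUnion(List<T> left, List<T> right) {
        LinkedHashSet<T> seen = new LinkedHashSet<>(left); // dedupes, keeps order
        seen.addAll(right);                                // appends unseen elements
        return new ArrayList<>(seen);
    }

    public static void main(String[] args) {
        System.out.println(arrayUnion(List.of(1, 2, 3), List.of(3, 4)));
        // prints "[1, 2, 3, 4]"
    }
}
```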





[GitHub] [flink] flinkbot commented on pull request #22484: [FLINK-31934][rocksdb][tests] Remove mocking

2023-04-25 Thread via GitHub


flinkbot commented on PR #22484:
URL: https://github.com/apache/flink/pull/22484#issuecomment-1521402033

   
   ## CI report:
   
   * 54d1e53d06b10a093bcb534820af781f04d8488e UNKNOWN
   
   
   Bot commands:
   The @flinkbot bot supports the following commands:

   - `@flinkbot run azure` re-run the last Azure build
   




