[jira] [Commented] (FLINK-32539) Archunit violations started to fail in test_misc

2023-07-04 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740032#comment-17740032
 ] 

Sergey Nuyanzin commented on FLINK-32539:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=50967=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=23391

> Archunit violations started to fail in test_misc
> 
>
> Key: FLINK-32539
> URL: https://issues.apache.org/jira/browse/FLINK-32539
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Blocker
>  Labels: pull-request-available, test-stability
>
> Blocker since it now fails on every build.
> To reproduce, JDK 8 is required:
> {noformat}
> mvn clean install -DskipTests
> mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
> -Darchunit.freeze.store.default.allowStoreUpdate=false
> {noformat}
> It seems the cause is FLINK-27415,
> where the line
> {code:java}
> checkArgument(fileLength > 0L);
> {code}
> was removed; at the same time this line was still listed in the archunit 
> violations and should now be removed from there as well.
> Example of failure:
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=50946=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=23064



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] (FLINK-32539) Archunit violations started to fail in test_misc

2023-07-04 Thread Hanyu Zheng (Jira)


[ https://issues.apache.org/jira/browse/FLINK-32539 ]


Hanyu Zheng deleted comment on FLINK-32539:
-

was (Author: JIRAUSER300701):
so we need run this two statement on local?
{noformat}
mvn clean install -DskipTests
mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
-Darchunit.freeze.store.default.allowStoreUpdate=false
{noformat}

> Archunit violations started to fail in test_misc
> 
>
> Key: FLINK-32539
> URL: https://issues.apache.org/jira/browse/FLINK-32539
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Blocker
>  Labels: pull-request-available, test-stability
>
> Blocker since it now fails on every build.
> To reproduce, JDK 8 is required:
> {noformat}
> mvn clean install -DskipTests
> mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
> -Darchunit.freeze.store.default.allowStoreUpdate=false
> {noformat}
> It seems the cause is FLINK-27415,
> where the line
> {code:java}
> checkArgument(fileLength > 0L);
> {code}
> was removed; at the same time this line was still listed in the archunit 
> violations and should now be removed from there as well.
> Example of failure:
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=50946=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=23064





[GitHub] [flink] snuyanzin commented on pull request #20006: [FLINK-27415][Connectors / FileSystem] Read empty csv file throws exception in FileSystem table connector

2023-07-04 Thread via GitHub


snuyanzin commented on PR #20006:
URL: https://github.com/apache/flink/pull/20006#issuecomment-1621059659

   Friendly reminder: before merging, please make sure CI passes.
   
   It seems that merging this PR caused the blocker issue 
https://issues.apache.org/jira/browse/FLINK-32539


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-32539) Archunit violations started to fail in test_misc

2023-07-04 Thread Hanyu Zheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740029#comment-17740029
 ] 

Hanyu Zheng commented on FLINK-32539:
-

So we need to run these two statements locally?
{noformat}
mvn clean install -DskipTests
mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
-Darchunit.freeze.store.default.allowStoreUpdate=false
{noformat}

> Archunit violations started to fail in test_misc
> 
>
> Key: FLINK-32539
> URL: https://issues.apache.org/jira/browse/FLINK-32539
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Blocker
>  Labels: pull-request-available, test-stability
>
> Blocker since it now fails on every build.
> To reproduce, JDK 8 is required:
> {noformat}
> mvn clean install -DskipTests
> mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
> -Darchunit.freeze.store.default.allowStoreUpdate=false
> {noformat}
> It seems the cause is FLINK-27415,
> where the line
> {code:java}
> checkArgument(fileLength > 0L);
> {code}
> was removed; at the same time this line was still listed in the archunit 
> violations and should now be removed from there as well.
> Example of failure:
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=50946=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=23064





[GitHub] [flink] flinkbot commented on pull request #22956: [FLINK-32539][tests] Update violations as a result of FLINK-27415 changes

2023-07-04 Thread via GitHub


flinkbot commented on PR #22956:
URL: https://github.com/apache/flink/pull/22956#issuecomment-1621055568

   
   ## CI report:
   
   * 7ae444f8d718b92e8c81627e4473c30e2811f5b8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] hanyuzheng7 commented on a diff in pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on code in PR #22842:
URL: https://github.com/apache/flink/pull/22842#discussion_r1252558140


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/CommonMapInputTypeStrategy.java:
##
@@ -0,0 +1,113 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.types.inference.strategies;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.functions.FunctionDefinition;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.inference.ArgumentCount;
+import org.apache.flink.table.types.inference.CallContext;
+import org.apache.flink.table.types.inference.InputTypeStrategy;
+import org.apache.flink.table.types.inference.Signature;
+import org.apache.flink.table.types.inference.Signature.Argument;
+import org.apache.flink.table.types.logical.LegacyTypeInformationType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
+import org.apache.flink.table.types.logical.utils.LogicalTypeMerging;
+import org.apache.flink.table.types.utils.TypeConversions;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Optional;
+import java.util.stream.Collectors;
+import java.util.stream.IntStream;
+
+/** An {@link InputTypeStrategy} that expects that all arguments have a common map type. */
+@Internal
+public final class CommonMapInputTypeStrategy implements InputTypeStrategy {

Review Comment:
   ok, I will try it. By the way, I have a question about the pyflink function 
definition; the CI test said:
   ```
   @property
   def map_union(self, map2) -> 'Expression':
       """
       Returns a map created by merging two maps, 'map1' and 'map2'. These two
       maps should have a common map type. If there are overlapping keys, the
       value from 'map2' will overwrite the value from 'map1'. If any of the
       maps is null, return null.

       .. seealso:: :py:attr:`~Expression.map_union`
       """
       return _binary_op("mapUnion")(self, map2)
   ```
   ` pyflink/table/expression.py:1562: error: Too many arguments`
   do you know the reason?
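   A minimal sketch of why mypy reports "Too many arguments" here (using a 
hypothetical stand-in class, not pyflink's real `Expression`): a `@property` 
getter is invoked on plain attribute access with only `self`, so it cannot 
accept a second parameter.

   ```python
   # Hypothetical stand-in for pyflink's Expression; not Flink code.
   class Expr:
       @property
       def map_union(self, map2):  # extra parameter is invalid for a property getter
           return map2


   e = Expr()
   try:
       e.map_union  # attribute access calls the getter with `self` only
   except TypeError as exc:
       # map2 is never supplied, so the call fails at runtime too
       print("TypeError:", exc)


   # The usual fix is to drop @property and define map_union as a regular method:
   class FixedExpr:
       def map_union(self, map2):
           return map2
   ```

   In other words, the `@property` decorator likely needs to be removed from 
`map_union` so that it can be called with an argument.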






[GitHub] [flink] snuyanzin commented on pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


snuyanzin commented on PR #22842:
URL: https://github.com/apache/flink/pull/22842#issuecomment-1621047910

   looks like because of this https://issues.apache.org/jira/browse/FLINK-32539





[jira] [Commented] (FLINK-32539) Archunit violations started to fail in test_misc

2023-07-04 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740027#comment-17740027
 ] 

Sergey Nuyanzin commented on FLINK-32539:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=50949=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=23391

> Archunit violations started to fail in test_misc
> 
>
> Key: FLINK-32539
> URL: https://issues.apache.org/jira/browse/FLINK-32539
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Blocker
>  Labels: pull-request-available, test-stability
>
> Blocker since it now fails on every build.
> To reproduce, JDK 8 is required:
> {noformat}
> mvn clean install -DskipTests
> mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
> -Darchunit.freeze.store.default.allowStoreUpdate=false
> {noformat}
> It seems the cause is FLINK-27415,
> where the line
> {code:java}
> checkArgument(fileLength > 0L);
> {code}
> was removed; at the same time this line was still listed in the archunit 
> violations and should now be removed from there as well.
> Example of failure:
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=50946=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=23064





[jira] [Commented] (FLINK-32539) Archunit violations started to fail in test_misc

2023-07-04 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740028#comment-17740028
 ] 

Sergey Nuyanzin commented on FLINK-32539:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=50953=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=23391

> Archunit violations started to fail in test_misc
> 
>
> Key: FLINK-32539
> URL: https://issues.apache.org/jira/browse/FLINK-32539
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Blocker
>  Labels: pull-request-available, test-stability
>
> Blocker since it now fails on every build.
> To reproduce, JDK 8 is required:
> {noformat}
> mvn clean install -DskipTests
> mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
> -Darchunit.freeze.store.default.allowStoreUpdate=false
> {noformat}
> It seems the cause is FLINK-27415,
> where the line
> {code:java}
> checkArgument(fileLength > 0L);
> {code}
> was removed; at the same time this line was still listed in the archunit 
> violations and should now be removed from there as well.
> Example of failure:
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=50946=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=23064





[jira] [Updated] (FLINK-32539) Archunit violations started to fail in test_misc

2023-07-04 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-32539:

Description: 
Blocker since it now fails on every build.
To reproduce, JDK 8 is required:
{noformat}
mvn clean install -DskipTests
mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
-Darchunit.freeze.store.default.allowStoreUpdate=false
{noformat}
It seems the cause is FLINK-27415,
where the line
{code:java}
checkArgument(fileLength > 0L);
{code}
was removed; at the same time this line was still listed in the archunit 
violations and should now be removed from there as well.

Example of failure:
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=50946=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=23064

  was:
blocker since now it fails on every build
to reproduce jdk 8 is required
{noformat}
mvn clean install -DskipTests
mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
-Darchunit.freeze.store.default.allowStoreUpdate=false
{noformat}
It seems the reason is FLINK-27415
where it was removed line 
{code:java}
checkArgument(fileLength > 0L);
{code}
at the same time it was mentioned in achunit violations  and now should be 
removed as well


> Archunit violations started to fail in test_misc
> 
>
> Key: FLINK-32539
> URL: https://issues.apache.org/jira/browse/FLINK-32539
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Blocker
>  Labels: pull-request-available, test-stability
>
> Blocker since it now fails on every build.
> To reproduce, JDK 8 is required:
> {noformat}
> mvn clean install -DskipTests
> mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
> -Darchunit.freeze.store.default.allowStoreUpdate=false
> {noformat}
> It seems the cause is FLINK-27415,
> where the line
> {code:java}
> checkArgument(fileLength > 0L);
> {code}
> was removed; at the same time this line was still listed in the archunit 
> violations and should now be removed from there as well.
> Example of failure:
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=50946=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=23064





[jira] [Updated] (FLINK-32539) Archunit violations started to fail in test_misc

2023-07-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-32539:
---
Labels: pull-request-available test-stability  (was: test-stability)

> Archunit violations started to fail in test_misc
> 
>
> Key: FLINK-32539
> URL: https://issues.apache.org/jira/browse/FLINK-32539
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Blocker
>  Labels: pull-request-available, test-stability
>
> Blocker since it now fails on every build.
> To reproduce, JDK 8 is required:
> {noformat}
> mvn clean install -DskipTests
> mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
> -Darchunit.freeze.store.default.allowStoreUpdate=false
> {noformat}
> It seems the cause is FLINK-27415,
> where the line
> {code:java}
> checkArgument(fileLength > 0L);
> {code}
> was removed; at the same time this line was still listed in the archunit 
> violations and should now be removed from there as well.





[GitHub] [flink] snuyanzin opened a new pull request, #22956: [FLINK-32539][tests] Update violations as a result of FLINK-27415 changes

2023-07-04 Thread via GitHub


snuyanzin opened a new pull request, #22956:
URL: https://github.com/apache/flink/pull/22956

   
   ## What is the purpose of the change
   
   The PR updates violations after changes done at 
https://github.com/apache/flink/pull/20006
   
   
   ## Verifying this change
   
   
   
   This change is a trivial rework 
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   





[jira] [Updated] (FLINK-32539) Archunit violations started to fail in test_misc

2023-07-04 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-32539:

Labels: test-stability  (was: )

> Archunit violations started to fail in test_misc
> 
>
> Key: FLINK-32539
> URL: https://issues.apache.org/jira/browse/FLINK-32539
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Blocker
>  Labels: test-stability
>
> Blocker since it now fails on every build.
> To reproduce, JDK 8 is required:
> {noformat}
> mvn clean install -DskipTests
> mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
> -Darchunit.freeze.store.default.allowStoreUpdate=false
> {noformat}
> It seems the cause is FLINK-27415,
> where the line
> {code:java}
> checkArgument(fileLength > 0L);
> {code}
> was removed; at the same time this line was still listed in the archunit 
> violations and should now be removed from there as well.





[jira] [Created] (FLINK-32539) Archunit violations started to fail in test_misc

2023-07-04 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-32539:
---

 Summary: Archunit violations started to fail in test_misc
 Key: FLINK-32539
 URL: https://issues.apache.org/jira/browse/FLINK-32539
 Project: Flink
  Issue Type: Bug
  Components: Tests
Affects Versions: 1.18.0
Reporter: Sergey Nuyanzin
Assignee: Sergey Nuyanzin


Blocker since it now fails on every build.
To reproduce, JDK 8 is required:
{noformat}
mvn clean install -DskipTests
mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
-Darchunit.freeze.store.default.allowStoreUpdate=false
{code}
It seems the cause is FLINK-27415,
where the line
{code:java}
checkArgument(fileLength > 0L);
{code}
was removed; at the same time this line was still listed in the archunit 
violations and should now be removed from there as well.





[jira] [Updated] (FLINK-32539) Archunit violations started to fail in test_misc

2023-07-04 Thread Sergey Nuyanzin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Nuyanzin updated FLINK-32539:

Description: 
Blocker since it now fails on every build.
To reproduce, JDK 8 is required:
{noformat}
mvn clean install -DskipTests
mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
-Darchunit.freeze.store.default.allowStoreUpdate=false
{noformat}
It seems the cause is FLINK-27415,
where the line
{code:java}
checkArgument(fileLength > 0L);
{code}
was removed; at the same time this line was still listed in the archunit 
violations and should now be removed from there as well.

  was:
blocker since now it fails on every build
to reproduce jdk 8 is required
{noformat}
mvn clean install -DskipTests
mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
-Darchunit.freeze.store.default.allowStoreUpdate=false
{code}
It seems the reason is FLINK-27415
where it was removed line 
{code:java}
checkArgument(fileLength > 0L);
{code}
at the same time it was mentioned in achunit violations  and now should be 
removed as well


> Archunit violations started to fail in test_misc
> 
>
> Key: FLINK-32539
> URL: https://issues.apache.org/jira/browse/FLINK-32539
> Project: Flink
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Sergey Nuyanzin
>Assignee: Sergey Nuyanzin
>Priority: Blocker
>
> Blocker since it now fails on every build.
> To reproduce, JDK 8 is required:
> {noformat}
> mvn clean install -DskipTests
> mvn verify -pl flink-architecture-tests/flink-architecture-tests-production/ 
> -Darchunit.freeze.store.default.allowStoreUpdate=false
> {noformat}
> It seems the cause is FLINK-27415,
> where the line
> {code:java}
> checkArgument(fileLength > 0L);
> {code}
> was removed; at the same time this line was still listed in the archunit 
> violations and should now be removed from there as well.





[GitHub] [flink] liuyongvs commented on pull request #22745: [FLINK-31691][table] Add built-in MAP_FROM_ENTRIES function.

2023-07-04 Thread via GitHub


liuyongvs commented on PR #22745:
URL: https://github.com/apache/flink/pull/22745#issuecomment-1621043422

   hi @snuyanzin will it be merged now?





[GitHub] [flink] hanyuzheng7 commented on pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on PR #22842:
URL: https://github.com/apache/flink/pull/22842#issuecomment-1621039450

   @snuyanzin, do you know why the test_ci misc stage reports an error?





[jira] [Assigned] (FLINK-32402) FLIP-294: Support Customized Catalog Modification Listener

2023-07-04 Thread Benchao Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benchao Li reassigned FLINK-32402:
--

Assignee: Fang Yong

> FLIP-294: Support Customized Catalog Modification Listener
> --
>
> Key: FLINK-32402
> URL: https://issues.apache.org/jira/browse/FLINK-32402
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Ecosystem
>Affects Versions: 1.18.0
>Reporter: Fang Yong
>Assignee: Fang Yong
>Priority: Major
>
> Issue for 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-294%3A+Support+Customized+Catalog+Modification+Listener





[GitHub] [flink] flinkbot commented on pull request #22955: [FLINK-32536][python] Add java 17 add-opens/add-exports JVM arguments relates to DirectByteBuffer

2023-07-04 Thread via GitHub


flinkbot commented on PR #22955:
URL: https://github.com/apache/flink/pull/22955#issuecomment-1621034716

   
   ## CI report:
   
   * 512b8fbc98dbfdb91ee9fdac16514a0cd2ef1f8b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] hanyuzheng7 commented on pull request #22922: [FLINK-32256][table] Add built-in ARRAY_ MIN function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on PR #22922:
URL: https://github.com/apache/flink/pull/22922#issuecomment-1621034276

   @flinkbot run azure





[GitHub] [flink] dianfu commented on pull request #22955: [FLINK-32536][python] Add java 17 add-opens/add-exports JVM arguments relates to DirectByteBuffer

2023-07-04 Thread via GitHub


dianfu commented on PR #22955:
URL: https://github.com/apache/flink/pull/22955#issuecomment-1621034175

   cc @zentol 





[jira] [Commented] (FLINK-32455) Breaking change in TypeSerializerUpgradeTestBase prevents flink-connector-kafka from building against 1.18-SNAPSHOT

2023-07-04 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740024#comment-17740024
 ] 

Yun Gao commented on FLINK-32455:
-

Thanks [~tzulitai] for the quick fix! 

For the formal fix, I think we might use the same approach, namely:
 # Revert `TypeSerializerUpgradeTestBase` so the connector libraries can 
continue to use it.
 # Introduce a new `MigratedTypeSerializerUpgradeTestBase` and make all the 
tests inside the Flink library use it.
 # Then, after 1.18 is published, move `MigratedTypeSerializerUpgradeTestBase` 
back to `TypeSerializerUpgradeTestBase` and also migrate the tests in the 
connector libraries.

What do you think about this?

> Breaking change in TypeSerializerUpgradeTestBase prevents 
> flink-connector-kafka from building against 1.18-SNAPSHOT
> ---
>
> Key: FLINK-32455
> URL: https://issues.apache.org/jira/browse/FLINK-32455
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Kafka, Test Infrastructure
>Affects Versions: 1.18.0
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Blocker
> Fix For: 1.18.0
>
>
> FLINK-27518 introduced a breaking signature change to the abstract class 
> {{TypeSerializerUpgradeTestBase}}, specifically the abstract 
> {{createTestSpecifications}} method signature was changed. This breaks 
> downstream test code in externalized connector repos, e.g. 
> flink-connector-kafka's {{KafkaSerializerUpgradeTest}}
> Moreover, {{fink-migration-test-utils}} needs to be transitively pulled in by 
> downstream test code that depends on flink-core test-jar.





[jira] [Updated] (FLINK-32536) Python tests fail with Arrow DirectBuffer exception

2023-07-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-32536:
---
Labels: pull-request-available  (was: )

> Python tests fail with Arrow DirectBuffer exception
> ---
>
> Key: FLINK-32536
> URL: https://issues.apache.org/jira/browse/FLINK-32536
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python, Tests
>Affects Versions: 1.18.0
>Reporter: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
>
> https://dev.azure.com/chesnay/flink/_build/results?buildId=3674=logs=fba17979-6d2e-591d-72f1-97cf42797c11=727942b6-6137-54f7-1ef9-e66e706ea068
> {code}
> 2023-07-04T12:54:15.5296754Z Jul 04 12:54:15 E   
> py4j.protocol.Py4JJavaError: An error occurred while calling 
> z:org.apache.flink.table.runtime.arrow.ArrowUtils.collectAsPandasDataFrame.
> 2023-07-04T12:54:15.5299579Z Jul 04 12:54:15 E   : 
> java.lang.RuntimeException: Arrow depends on DirectByteBuffer.<init>(long, 
> int) which is not available. Please set the system property 
> 'io.netty.tryReflectionSetAccessible' to 'true'.
> 2023-07-04T12:54:15.5302307Z Jul 04 12:54:15 E   at 
> org.apache.flink.table.runtime.arrow.ArrowUtils.checkArrowUsable(ArrowUtils.java:184)
> 2023-07-04T12:54:15.5302859Z Jul 04 12:54:15 E   at 
> org.apache.flink.table.runtime.arrow.ArrowUtils.collectAsPandasDataFrame(ArrowUtils.java:546)
> 2023-07-04T12:54:15.5303177Z Jul 04 12:54:15 E   at 
> jdk.internal.reflect.GeneratedMethodAccessor287.invoke(Unknown Source)
> 2023-07-04T12:54:15.5303515Z Jul 04 12:54:15 E   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2023-07-04T12:54:15.5303929Z Jul 04 12:54:15 E   at 
> java.base/java.lang.reflect.Method.invoke(Method.java:568)
> 2023-07-04T12:54:15.5307338Z Jul 04 12:54:15 E   at 
> org.apache.flink.api.python.shaded.py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
> 2023-07-04T12:54:15.5309888Z Jul 04 12:54:15 E   at 
> org.apache.flink.api.python.shaded.py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
> 2023-07-04T12:54:15.5310306Z Jul 04 12:54:15 E   at 
> org.apache.flink.api.python.shaded.py4j.Gateway.invoke(Gateway.java:282)
> 2023-07-04T12:54:15.5337220Z Jul 04 12:54:15 E   at 
> org.apache.flink.api.python.shaded.py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
> 2023-07-04T12:54:15.5341859Z Jul 04 12:54:15 E   at 
> org.apache.flink.api.python.shaded.py4j.commands.CallCommand.execute(CallCommand.java:79)
> 2023-07-04T12:54:15.5342363Z Jul 04 12:54:15 E   at 
> org.apache.flink.api.python.shaded.py4j.GatewayConnection.run(GatewayConnection.java:238)
> 2023-07-04T12:54:15.5344866Z Jul 04 12:54:15 E   at 
> java.base/java.lang.Thread.run(Thread.java:833)
> {code}
> {code}
> 2023-07-04T12:54:15.5663559Z Jul 04 12:54:15 FAILED 
> pyflink/table/tests/test_pandas_conversion.py::BatchPandasConversionTests::test_empty_to_pandas
> 2023-07-04T12:54:15.5663891Z Jul 04 12:54:15 FAILED 
> pyflink/table/tests/test_pandas_conversion.py::BatchPandasConversionTests::test_from_pandas
> 2023-07-04T12:54:15.5664299Z Jul 04 12:54:15 FAILED 
> pyflink/table/tests/test_pandas_conversion.py::BatchPandasConversionTests::test_to_pandas
> 2023-07-04T12:54:15.5664655Z Jul 04 12:54:15 FAILED 
> pyflink/table/tests/test_pandas_conversion.py::BatchPandasConversionTests::test_to_pandas_for_retract_table
> 2023-07-04T12:54:15.5665003Z Jul 04 12:54:15 FAILED 
> pyflink/table/tests/test_pandas_conversion.py::StreamPandasConversionTests::test_empty_to_pandas
> 2023-07-04T12:54:15.5665360Z Jul 04 12:54:15 FAILED 
> pyflink/table/tests/test_pandas_conversion.py::StreamPandasConversionTests::test_from_pandas
> 2023-07-04T12:54:15.5665704Z Jul 04 12:54:15 FAILED 
> pyflink/table/tests/test_pandas_conversion.py::StreamPandasConversionTests::test_to_pandas
> 2023-07-04T12:54:15.5666045Z Jul 04 12:54:15 FAILED 
> pyflink/table/tests/test_pandas_conversion.py::StreamPandasConversionTests::test_to_pandas_for_retract_table
> 2023-07-04T12:54:15.5666415Z Jul 04 12:54:15 FAILED 
> pyflink/table/tests/test_pandas_conversion.py::StreamPandasConversionTests::test_to_pandas_with_event_time
> 2023-07-04T12:54:15.5666840Z Jul 04 12:54:15 FAILED 
> pyflink/table/tests/test_pandas_udaf.py::BatchPandasUDAFITTests::test_group_aggregate_function
> 2023-07-04T12:54:15.5667189Z Jul 04 12:54:15 FAILED 
> 

[GitHub] [flink] dianfu opened a new pull request, #22955: [FLINK-32536][python] Add java 17 add-opens/add-exports JVM arguments relates to DirectByteBuffer

2023-07-04 Thread via GitHub


dianfu opened a new pull request, #22955:
URL: https://github.com/apache/flink/pull/22955

   
   ## What is the purpose of the change
   
   *This pull request adds `--add-opens=java.base/java.nio=ALL-UNNAMED 
--add-opens=java.base/sun.nio.ch=ALL-UNNAMED` to java opts to allow 
DirectByteBuffer to be used in JDK 17. *
   
   ## Brief change log
   
 - *Added `--add-opens=java.base/java.nio=ALL-UNNAMED 
--add-opens=java.base/sun.nio.ch=ALL-UNNAMED` to java opts*
   
   
   ## Verifying this change
   
   This change is already covered by existing tests in JDK17.
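The flags quoted above can be collected into a single JVM-options string; a minimal sketch follows. The variable name `JVM_ARGS` is illustrative only — the actual configuration key used by a given Flink deployment may differ.

```shell
# Sketch: JDK 17 JVM arguments that open java.nio internals to unnamed modules,
# as required by Arrow's DirectByteBuffer usage. Variable name is illustrative.
JVM_ARGS="--add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED"
echo "$JVM_ARGS"
```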
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] elkhand commented on a diff in pull request #22816: [FLINK-32373][sql-client] Support passing headers with SQL Client gateway requests

2023-07-04 Thread via GitHub


elkhand commented on code in PR #22816:
URL: https://github.com/apache/flink/pull/22816#discussion_r1252538100


##
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/ExecutorImpl.java:
##
@@ -577,4 +593,29 @@ private void closeSession() throws SqlExecutionException {
 // ignore any throwable to keep the cleanup running
 }
 }
+
+private static Collection<HttpHeader> 
readHeadersFromEnvironmentVariable(String envVarName) {
+List<HttpHeader> headers = new ArrayList<>();
+String rawHeaders = System.getenv(envVarName);
+
+if (rawHeaders != null) {
+String[] lines = rawHeaders.split("\n");

Review Comment:
   nit: instead of `\n`, should we consider the platform-independent 
`System.lineSeparator()`?
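One hedged alternative to both `"\n"` and `System.lineSeparator()` is splitting on the regex `"\r?\n"`, which tolerates Unix and Windows line endings at once. A minimal sketch (class name is illustrative):

```java
import java.util.Arrays;
import java.util.List;

public class HeaderSplitDemo {
    // Split a raw multi-line header string into individual header lines.
    // The regex "\r?\n" matches both Unix ("\n") and Windows ("\r\n") endings,
    // which may be more robust than either "\n" or System.lineSeparator() alone.
    public static List<String> splitHeaderLines(String raw) {
        return Arrays.asList(raw.split("\r?\n"));
    }
}
```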



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] elkhand commented on a diff in pull request #22816: [FLINK-32373][sql-client] Support passing headers with SQL Client gateway requests

2023-07-04 Thread via GitHub


elkhand commented on code in PR #22816:
URL: https://github.com/apache/flink/pull/22816#discussion_r1252537263


##
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/ExecutorImpl.java:
##
@@ -577,4 +593,29 @@ private void closeSession() throws SqlExecutionException {
 // ignore any throwable to keep the cleanup running
 }
 }
+
+private static Collection<HttpHeader> 
readHeadersFromEnvironmentVariable(String envVarName) {
+List<HttpHeader> headers = new ArrayList<>();
+String rawHeaders = System.getenv(envVarName);
+
+if (rawHeaders != null) {
+String[] lines = rawHeaders.split("\n");

Review Comment:
   Do you think it might be hard to specify a single env variable value 
separated by newlines? What about considering a different separator so a list 
of header key-value pairs can be written on a single line?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-32480) Keyed State always returns new value instance

2023-07-04 Thread Viktor Feklin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740022#comment-17740022
 ] 

Viktor Feklin commented on FLINK-32480:
---

[~masteryhx], please read the description carefully - the question is not about 
the way the default instance is created (although I think a supplier pattern 
would be more suitable than object cloning).

The main question is why the newly created instance is not fixed as the current 
state. A second call to value() returns a new instance again.

A simple example for illustration (the assertion fails until I manually fix the 
state with update()):
{code:java}
assert contextState.value() == contextState.value(){code}
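The `computeIfAbsent`-style semantics the reporter expects can be mimicked with a plain map; this is a self-contained sketch of the desired behavior, not the Flink `ValueState` API itself (class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StateDemo {
    // Mimics the behavior the reporter expects from ValueState#value():
    // the first access materializes a default and pins it to the key,
    // so repeated reads return the same instance (like Map#computeIfAbsent).
    private final Map<String, List<String>> state = new HashMap<>();

    public List<String> value(String key) {
        return state.computeIfAbsent(key, k -> new ArrayList<>());
    }
}
```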

> Keyed State always returns new value instance
> -
>
> Key: FLINK-32480
> URL: https://issues.apache.org/jira/browse/FLINK-32480
> Project: Flink
>  Issue Type: Bug
>  Components: API / State Processor
>Affects Versions: 1.14.6
>Reporter: Viktor Feklin
>Priority: Major
>
> I create ValueState with default value. Then i access value in the map 
> function (multiple times with the same partition key).
> Expected behavior:
>  * First call to value() should return new instance
>  * Second call to value should return instance created in first call (just 
> like Map#computeIfAbsent)
> Actual behavior:
>  * every call to value() returns a new instance until we manually set it with 
> the update() function.
> According to the source code, we only need to call update() once to assign a 
> value to the current key. But from the user's point of view, update() ends up 
> being called every time, because I do not know whether the value was already 
> assigned or just created.
> 
> Currently my code looks like:
> {code:java}
> List context = contextState.value();
> contextState.update(context); {code}
> Maybe there is some logic for immutable objects, but for mutable objects it 
> looks awkward
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] elkhand commented on a diff in pull request #22816: [FLINK-32373][sql-client] Support passing headers with SQL Client gateway requests

2023-07-04 Thread via GitHub


elkhand commented on code in PR #22816:
URL: https://github.com/apache/flink/pull/22816#discussion_r1252528630


##
flink-runtime/src/main/java/org/apache/flink/runtime/rest/RestClient.java:
##
@@ -411,30 +413,31 @@ private static Request createRequest(
 String targetUrl,
 HttpMethod httpMethod,
 ByteBuf jsonPayload,
-Collection<FileUpload> fileUploads)
+Collection<FileUpload> fileUploads,
+Collection<HttpHeader> customHeaders)
 throws IOException {
 if (fileUploads.isEmpty()) {
 
 HttpRequest httpRequest =
 new DefaultFullHttpRequest(
 HttpVersion.HTTP_1_1, httpMethod, targetUrl, 
jsonPayload);
 
-httpRequest
-.headers()
-.set(HttpHeaderNames.HOST, targetAddress)
+HttpHeaders headers = httpRequest.headers();
+headers.set(HttpHeaderNames.HOST, targetAddress)
 .set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE)
 .add(HttpHeaderNames.CONTENT_LENGTH, 
jsonPayload.capacity())
 .add(HttpHeaderNames.CONTENT_TYPE, 
RestConstants.REST_CONTENT_TYPE);
+customHeaders.forEach(ch -> headers.add(ch.getName(), 
ch.getValue()));
 
 return new SimpleRequest(httpRequest);
 } else {
 HttpRequest httpRequest =
 new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, 
httpMethod, targetUrl);
 
-httpRequest
-.headers()
-.set(HttpHeaderNames.HOST, targetAddress)
+HttpHeaders headers = httpRequest.headers();
+headers.set(HttpHeaderNames.HOST, targetAddress)
 .set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE);
+customHeaders.forEach(ch -> headers.set(ch.getName(), 
ch.getValue()));

Review Comment:
   nit: any reason to use `headers.set(...)` here, and `headers.add(...)` on 
[line 
430](https://github.com/apache/flink/pull/22816/files#diff-d6cdab55fefade687bd62fb086b6fa85aa4f8c91a745e8ec406fb8ac8982b9e1R430)
 ?
   If no specific reason, using the same method might make it more readable.
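The distinction the reviewer is probing — `set(...)` replaces all existing values for a name while `add(...)` appends another one — can be shown with a toy multimap. This is a plain-JDK sketch of those semantics, not the Netty `HttpHeaders` class itself:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy multimap mirroring the set-vs-add distinction under discussion:
// set(...) replaces any existing values for a name, add(...) appends one more.
public class HeadersDemo {
    private final Map<String, List<String>> headers = new LinkedHashMap<>();

    public void set(String name, String value) {
        List<String> values = new ArrayList<>();
        values.add(value);
        headers.put(name, values); // discard previous values
    }

    public void add(String name, String value) {
        headers.computeIfAbsent(name, k -> new ArrayList<>()).add(value);
    }

    public List<String> getAll(String name) {
        return headers.getOrDefault(name, new ArrayList<>());
    }
}
```

For headers that may legitimately repeat, `add` preserves earlier values; `set` is the safer choice when a single authoritative value is intended.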



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] elkhand commented on a diff in pull request #22816: [FLINK-32373][sql-client] Support passing headers with SQL Client gateway requests

2023-07-04 Thread via GitHub


elkhand commented on code in PR #22816:
URL: https://github.com/apache/flink/pull/22816#discussion_r1252526586


##
flink-runtime/src/main/java/org/apache/flink/runtime/rest/HttpHeader.java:
##
@@ -0,0 +1,83 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest;
+
+import java.util.Objects;
+
+/** Represents an HTTP header with a name and a value. */
+public class HttpHeader {
+
+/** The name of the HTTP header. */
+private final String name;
+
+/** The value of the HTTP header. */
+private final String value;
+
+/**
+ * Constructs an {@code HttpHeader} object with the specified name and 
value.
+ *
+ * @param name the name of the HTTP header
+ * @param value the value of the HTTP header
+ */
+public HttpHeader(String name, String value) {
+this.name = name;
+this.value = value;
+}
+
+/**
+ * Returns the name of this HTTP header.
+ *
+ * @return the name of this HTTP header
+ */
+public String getName() {
+return name;
+}
+
+/**
+ * Returns the value of this HTTP header.
+ *
+ * @return the value of this HTTP header
+ */
+public String getValue() {
+return value;
+}
+
+@Override
+public String toString() {
+return "HttpHeader{" + "name='" + name + '\'' + ", value='" + value + 
'\'' + '}';
+}
+
+@Override
+public boolean equals(Object o) {

Review Comment:
   nit: the letter `o` is easily confusable; what about renaming it to `other`?
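The diff above truncates at the `equals` signature; a conventional value-based implementation with the parameter renamed as suggested might look like the following sketch (the class name is illustrative, not the actual Flink class):

```java
import java.util.Objects;

public class HttpHeaderSketch {
    private final String name;
    private final String value;

    public HttpHeaderSketch(String name, String value) {
        this.name = name;
        this.value = value;
    }

    @Override
    public boolean equals(Object other) { // renamed from 'o' per the review nit
        if (this == other) {
            return true;
        }
        if (other == null || getClass() != other.getClass()) {
            return false;
        }
        HttpHeaderSketch that = (HttpHeaderSketch) other;
        return Objects.equals(name, that.name) && Objects.equals(value, that.value);
    }

    @Override
    public int hashCode() { // must stay consistent with equals
        return Objects.hash(name, value);
    }
}
```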



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-30719) flink-runtime-web failed due to a corrupted

2023-07-04 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740021#comment-17740021
 ] 

Sergey Nuyanzin commented on FLINK-30719:
-

I submitted a PR to frontend-maven-plugin enabling configuration of the number 
of retries for downloading and extraction:
https://github.com/eirslett/frontend-maven-plugin/pull/1098

> flink-runtime-web failed due to a corrupted 
> 
>
> Key: FLINK-30719
> URL: https://issues.apache.org/jira/browse/FLINK-30719
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend, Test Infrastructure, Tests
>Affects Versions: 1.16.0, 1.17.0, 1.18.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=44954=logs=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5=54421a62-0c80-5aad-3319-094ff69180bb=12550
> The build failed due to a corrupted nodejs dependency:
> {code}
> [ERROR] The archive file 
> /__w/1/.m2/repository/com/github/eirslett/node/16.13.2/node-16.13.2-linux-x64.tar.gz
>  is corrupted and will be deleted. Please try the build again.
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-32459) Force set the parallelism of SocketTableSource to 1

2023-07-04 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo closed FLINK-32459.
--
Resolution: Done

master(1.18) via 4bc408b9750fb402523f1a7955a5b23d5c99011e.

> Force set the parallelism of SocketTableSource to 1
> ---
>
> Key: FLINK-32459
> URL: https://issues.apache.org/jira/browse/FLINK-32459
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Common
>Affects Versions: 1.18.0
>Reporter: Weijie Guo
>Assignee: Weijie Guo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> SocketSource can only work with a parallelism of 1; it is best to force-set it 
> when loading it in DynamicTableSource.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] reswqa merged pull request #22933: [FLINK-32459][connector] Force set the parallelism of SocketTableSource to 1

2023-07-04 Thread via GitHub


reswqa merged PR #22933:
URL: https://github.com/apache/flink/pull/22933


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] snuyanzin commented on a diff in pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


snuyanzin commented on code in PR #22842:
URL: https://github.com/apache/flink/pull/22842#discussion_r1252516691


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/CommonMapInputTypeStrategy.java:
##
@@ -0,0 +1,113 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.types.inference.strategies;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.functions.FunctionDefinition;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.inference.ArgumentCount;
+import org.apache.flink.table.types.inference.CallContext;
+import org.apache.flink.table.types.inference.InputTypeStrategy;
+import org.apache.flink.table.types.inference.Signature;
+import org.apache.flink.table.types.inference.Signature.Argument;
+import org.apache.flink.table.types.logical.LegacyTypeInformationType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
+import org.apache.flink.table.types.logical.utils.LogicalTypeMerging;
+import org.apache.flink.table.types.utils.TypeConversions;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Optional;
+import java.util.stream.Collectors;
+import java.util.stream.IntStream;
+
+/** An {@link InputTypeStrategy} that expects that all arguments have a common 
map type. */
+@Internal
+public final class CommonMapInputTypeStrategy implements InputTypeStrategy {

Review Comment:
   It could be implemented in different ways; the main idea is to extract the 
common logic into one place and reuse it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] hackergin commented on pull request #22937: [FLINK-32428] Introduce base interfaces for CatalogStore

2023-07-04 Thread via GitHub


hackergin commented on PR #22937:
URL: https://github.com/apache/flink/pull/22937#issuecomment-1620987697

   @ferenc-csaky Thanks for the great suggestion, I updated the code. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #22954: FLINK-32362] [connectors/common] increase the robustness of announceCombinedWatermark to cover the case task failover

2023-07-04 Thread via GitHub


flinkbot commented on PR #22954:
URL: https://github.com/apache/flink/pull/22954#issuecomment-1620985629

   
   ## CI report:
   
   * 736db13b75cff5ad19a4bd298dcaa2556044b7a9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #22953: FLINK-32362] [connectors/common] increase the robustness of announceCombinedWatermark to cover the case task failover

2023-07-04 Thread via GitHub


flinkbot commented on PR #22953:
URL: https://github.com/apache/flink/pull/22953#issuecomment-1620980189

   
   ## CI report:
   
   * e0e4fb991483868df9693a71194f10028ec823c7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] LoveHeat commented on pull request #22806: [FLINK-32362] [connectors/common] increase the robustness of announceCombinedWatermark to cover the case task failover

2023-07-04 Thread via GitHub


LoveHeat commented on PR #22806:
URL: https://github.com/apache/flink/pull/22806#issuecomment-1620979223

   1.16 : https://github.com/apache/flink/pull/22953 
   1.17: https://github.com/apache/flink/pull/22954
   done @1996fanrui 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] LoveHeat opened a new pull request, #22954: FLINK-32362] [connectors/common] increase the robustness of announceCombinedWatermark to cover the case task failover

2023-07-04 Thread via GitHub


LoveHeat opened a new pull request, #22954:
URL: https://github.com/apache/flink/pull/22954

   backport to 1.17, see 
[Flink-32362](https://issues.apache.org/jira/browse/FLINK-32362)
   
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] LoveHeat opened a new pull request, #22953: FLINK-32362] [connectors/common] increase the robustness of announceCombinedWatermark to cover the case task failover

2023-07-04 Thread via GitHub


LoveHeat opened a new pull request, #22953:
URL: https://github.com/apache/flink/pull/22953

   backport to 1.16, see 
[Flink-32362](https://issues.apache.org/jira/browse/FLINK-32362)
   
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] hanyuzheng7 commented on pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on PR #22842:
URL: https://github.com/apache/flink/pull/22842#issuecomment-1620966114

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] hanyuzheng7 commented on a diff in pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on code in PR #22842:
URL: https://github.com/apache/flink/pull/22842#discussion_r1252368976


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/CommonMapInputTypeStrategy.java:
##
@@ -0,0 +1,113 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.types.inference.strategies;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.functions.FunctionDefinition;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.inference.ArgumentCount;
+import org.apache.flink.table.types.inference.CallContext;
+import org.apache.flink.table.types.inference.InputTypeStrategy;
+import org.apache.flink.table.types.inference.Signature;
+import org.apache.flink.table.types.inference.Signature.Argument;
+import org.apache.flink.table.types.logical.LegacyTypeInformationType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
+import org.apache.flink.table.types.logical.utils.LogicalTypeMerging;
+import org.apache.flink.table.types.utils.TypeConversions;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Optional;
+import java.util.stream.Collectors;
+import java.util.stream.IntStream;
+
+/** An {@link InputTypeStrategy} that expects that all arguments have a common 
map type. */
+@Internal
+public final class CommonMapInputTypeStrategy implements InputTypeStrategy {

Review Comment:
   Do you mean creating a CommonCollectionInputTypeStrategy and making 
CommonMapInputTypeStrategy and CommonArrayInputTypeStrategy static 
classes that we call through CommonCollectionInputTypeStrategy?
   
   But CommonMapInputTypeStrategy would then be a plain class that doesn't 
need to implement InputTypeStrategy, so why not call it 
CommonCollectionInputTypeStrategyUtils?
   
   Also, we would just be moving two classes into a new one; do we really 
need to do this? I think it is fine to maintain the two classes for now.
   @snuyanzin 






[GitHub] [flink] 1996fanrui commented on pull request #22806: [FLINK-32362] [connectors/common] increase the robustness of announceCombinedWatermark to cover the case task failover

2023-07-04 Thread via GitHub


1996fanrui commented on PR #22806:
URL: https://github.com/apache/flink/pull/22806#issuecomment-1620955453

   Hi @LoveHeat , thanks for the fix.
   
   Could you help backport this PR to release-1.16 and release-1.17? It's 
better to wait until all CIs pass before merging.





[jira] [Commented] (FLINK-32513) Job in BATCH mode with a significant number of transformations freezes on method StreamGraphGenerator.existsUnboundedSource()

2023-07-04 Thread Weihua Hu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740011#comment-17740011
 ] 

Weihua Hu commented on FLINK-32513:
---

Thanks [~vladislav.keda] for reporting this. Could you provide the job topology 
to help reproduce it?

> Job in BATCH mode with a significant number of transformations freezes on 
> method StreamGraphGenerator.existsUnboundedSource()
> -
>
> Key: FLINK-32513
> URL: https://issues.apache.org/jira/browse/FLINK-32513
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.15.3, 1.16.1, 1.17.1
> Environment: All modes (local, k8s session, k8s application, ...)
> Flink 1.15.3
> Flink 1.16.1
> Flink 1.17.1
>Reporter: Vladislav Keda
>Priority: Critical
>
> Flink job executed in BATCH mode with a significant number of transformations 
> (more than 30 in my case) takes a very long time to start due to the method 
> StreamGraphGenerator.existsUnboundedSource(). Also, during the execution of 
> the method, a lot of memory is consumed, which causes the GC to fire 
> frequently.
> Thread Dump:
> {code:java}
> "main@1" prio=5 tid=0x1 nid=NA runnable
>   java.lang.Thread.State: RUNNABLE
>       at java.util.ArrayList.addAll(ArrayList.java:702)
>       at 
> org.apache.flink.streaming.api.transformations.TwoInputTransformation.getTransitivePredecessors(TwoInputTransformation.java:224)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.PartitionTransformation.getTransitivePredecessors(PartitionTransformation.java:95)
>       at 
> org.apache.flink.streaming.api.transformations.TwoInputTransformation.getTransitivePredecessors(TwoInputTransformation.java:223)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.PartitionTransformation.getTransitivePredecessors(PartitionTransformation.java:95)
>       at 
> org.apache.flink.streaming.api.transformations.TwoInputTransformation.getTransitivePredecessors(TwoInputTransformation.java:223)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> org.apache.flink.streaming.api.transformations.OneInputTransformation.getTransitivePredecessors(OneInputTransformation.java:174)
>       at 
> 
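The stack trace above, with the same getTransitivePredecessors frames repeating at every level, is consistent with a predecessor walk that revisits shared inputs without deduplication. A minimal, self-contained sketch (a toy graph, not Flink code) of how a visited set turns that exponential walk linear:

```java
import java.util.HashSet;
import java.util.Set;

public class TransitivePredecessors {
    // Toy topology: every node lists its single predecessor twice,
    // mimicking how two-input transformations share upstream inputs.
    static int[][] chain(int n) {
        int[][] preds = new int[n][];
        preds[0] = new int[0];
        for (int i = 1; i < n; i++) {
            preds[i] = new int[] {i - 1, i - 1};
        }
        return preds;
    }

    // Naive walk: re-collects the whole predecessor subtree on every call,
    // so shared predecessors are counted exponentially often.
    static long countNaive(int node, int[][] preds) {
        long visits = 1;
        for (int p : preds[node]) {
            visits += countNaive(p, preds);
        }
        return visits;
    }

    // Deduplicated walk: a visited set makes the same traversal linear.
    static void walkDedup(int node, int[][] preds, Set<Integer> visited) {
        if (!visited.add(node)) {
            return;
        }
        for (int p : preds[node]) {
            walkDedup(p, preds, visited);
        }
    }

    public static void main(String[] args) {
        int[][] preds = chain(20);
        System.out.println("naive visits: " + countNaive(19, preds)); // 1048575
        Set<Integer> visited = new HashSet<>();
        walkDedup(19, preds, visited);
        System.out.println("dedup visits: " + visited.size()); // 20
    }
}
```

With only 20 nodes the naive walk already makes over a million visits, which would also explain the memory pressure and GC churn described above.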

[jira] [Closed] (FLINK-32520) FlinkDeployment recovered states from an obsolete savepoint when performing an upgrade

2023-07-04 Thread Ruibin Xing (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruibin Xing closed FLINK-32520.
---
Resolution: Cannot Reproduce

> FlinkDeployment recovered states from an obsolete savepoint when performing 
> an upgrade
> --
>
> Key: FLINK-32520
> URL: https://issues.apache.org/jira/browse/FLINK-32520
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: 1.13.1
>Reporter: Ruibin Xing
>Priority: Major
> Attachments: flink_kubernetes_operator_0615.csv, 
> logs-06151328-06151332.csv
>
>
> Kubernetes Operator version: 1.5.0
>  
> When upgrading one of our Flink jobs, it recovered from a savepoint created 
> by the previous version of the job. The timeline of the job is as follows:
>  # I upgraded the job for the first time. The job created a savepoint and 
> successfully restored from it.
>  # The job was running fine and created several checkpoints.
>  # Later, I performed the second upgrade. Soon after submission and before 
> the JobManager stopped, I realized I made a mistake in the spec, so I quickly 
> did the third upgrade.
>  # After the job started, I found that it had recovered from the savepoint 
> created during the first upgrade.
>  
> It appears that there was an error when submitting the third upgrade. 
> However, I'm still not quite sure why this would cause Flink to use the 
> obsolete savepoint after investigating the code. The related logs for the 
> operator are attached below.
>  
> Although I haven't found the root cause, I came up with some possible fixes:
>  # Remove the {{lastSavepoint}} after a job has successfully restored from it.
>  # Add options for savepoint, similar to: 
> {{kubernetes.operator.job.upgrade.last-state.max.allowed.checkpoint.age}} The 
> operator should refuse to recover from the savepoint if the max age is 
> exceeded.
>  # Create a flag in the status that records savepoint states. Set the flag to 
> false when the savepoint starts and mark it as true when it successfully 
> ends. The job should report an error if the flag for the last savepoint is 
> false.
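Proposed fix 2 above amounts to an age check before restoring. A minimal sketch of such a guard; the option name comes from the proposal in this issue, and the code is purely illustrative, not Kubernetes Operator code:

```java
import java.time.Duration;
import java.time.Instant;

public class SavepointAgeGuard {
    // Illustrative guard for the proposed option
    // kubernetes.operator.job.upgrade.last-state.max.allowed.checkpoint.age:
    // refuse to restore from a savepoint older than maxAge.
    static boolean isSavepointUsable(Instant savepointTime, Instant now, Duration maxAge) {
        return Duration.between(savepointTime, now).compareTo(maxAge) <= 0;
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2023-07-04T10:30:00Z");
        // 30-minute-old savepoint within a 1-hour limit: usable
        System.out.println(isSavepointUsable(
                Instant.parse("2023-07-04T10:00:00Z"), now, Duration.ofHours(1)));
        // 2.5-hour-old savepoint: rejected
        System.out.println(isSavepointUsable(
                Instant.parse("2023-07-04T08:00:00Z"), now, Duration.ofHours(1)));
    }
}
```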



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32520) FlinkDeployment recovered states from an obsolete savepoint when performing an upgrade

2023-07-04 Thread Ruibin Xing (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740009#comment-17740009
 ] 

Ruibin Xing commented on FLINK-32520:
-

[~gyfora] Sorry, after reviewing, it turns out that the second and third 
upgrades were `last-state` due to a bug in our web UI. Still, the third upgrade 
did use the savepoint from the first upgrade. Since I can't provide more 
information right now, I will close this issue. I'm going to change the logger 
settings and try to reproduce it. Thanks for your time.

> FlinkDeployment recovered states from an obsolete savepoint when performing 
> an upgrade
> --
>
> Key: FLINK-32520
> URL: https://issues.apache.org/jira/browse/FLINK-32520
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: 1.13.1
>Reporter: Ruibin Xing
>Priority: Major
> Attachments: flink_kubernetes_operator_0615.csv, 
> logs-06151328-06151332.csv
>
>
> Kubernetes Operator version: 1.5.0
>  
> When upgrading one of our Flink jobs, it recovered from a savepoint created 
> by the previous version of the job. The timeline of the job is as follows:
>  # I upgraded the job for the first time. The job created a savepoint and 
> successfully restored from it.
>  # The job was running fine and created several checkpoints.
>  # Later, I performed the second upgrade. Soon after submission and before 
> the JobManager stopped, I realized I made a mistake in the spec, so I quickly 
> did the third upgrade.
>  # After the job started, I found that it had recovered from the savepoint 
> created during the first upgrade.
>  
> It appears that there was an error when submitting the third upgrade. 
> However, I'm still not quite sure why this would cause Flink to use the 
> obsolete savepoint after investigating the code. The related logs for the 
> operator are attached below.
>  
> Although I haven't found the root cause, I came up with some possible fixes:
>  # Remove the {{lastSavepoint}} after a job has successfully restored from it.
>  # Add options for savepoint, similar to: 
> {{kubernetes.operator.job.upgrade.last-state.max.allowed.checkpoint.age}} The 
> operator should refuse to recover from the savepoint if the max age is 
> exceeded.
>  # Create a flag in the status that records savepoint states. Set the flag to 
> false when the savepoint starts and mark it as true when it successfully 
> ends. The job should report an error if the flag for the last savepoint is 
> false.





[jira] [Comment Edited] (FLINK-32257) Add ARRAY_MAX support in SQL & Table API

2023-07-04 Thread Jacky Lau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740007#comment-17740007
 ] 

Jacky Lau edited comment on FLINK-32257 at 7/5/23 2:46 AM:
---

hi [~dwysakowicz]  [~hanyuzheng] [~Sergey Nuyanzin], the array_max return type 
fix I submitted has been merged.

And I found another problem: why doesn't array_max support array/row types? 
Spark supports them:
{code:java}
// code placeholder
def isOrderable(dataType: DataType): Boolean = dataType match {
  case NullType => true
  case dt: AtomicType => true
  case struct: StructType => struct.fields.forall(f => isOrderable(f.dataType))
  case array: ArrayType => isOrderable(array.elementType)
  case udt: UserDefinedType[_] => isOrderable(udt.sqlType)
  case _ => false
} {code}
and Flink also supports this in code generation, for example in 
ComparableTypeStrategy; ArrayComparableElementTypeStrategy refers to 
ComparableTypeStrategy, but doesn't apply areTypesOfSameRootComparable.
What do you think?
{code:java}
public static final BuiltInFunctionDefinition GREATEST =
BuiltInFunctionDefinition.newBuilder()
.name("GREATEST")
.kind(SCALAR)
.inputTypeStrategy(
comparable(ConstantArgumentCount.from(1), 
StructuredComparison.FULL))
.outputTypeStrategy(nullableIfArgs(TypeStrategies.COMMON))
.runtimeProvided()
.build(); {code}


was (Author: jackylau):
hi [~dwysakowicz] , the array_max return type has fixed and merged.

and i found another problem. why array_max don't support array/row type. spark 
supports it
{code:java}
// code placeholder
def isOrderable(dataType: DataType): Boolean = dataType match {
  case NullType => true
  case dt: AtomicType => true
  case struct: StructType => struct.fields.forall(f => isOrderable(f.dataType))
  case array: ArrayType => isOrderable(array.elementType)
  case udt: UserDefinedType[_] => isOrderable(udt.sqlType)
  case _ => false
} {code}
and our flink has also supported in code gen for example in the 
ComparableTypeStrategy, the ArrayComparableElementTypeStrategy refers to 
ComparableTypeStrategy, but don't put areTypesOfSameRootComparable
what do you think?

> Add ARRAY_MAX support in SQL & Table API
> 
>
> Key: FLINK-32257
> URL: https://issues.apache.org/jira/browse/FLINK-32257
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> This is an implementation of ARRAY_MAX
> The array_max() function returns the maximum element of the input 
> array.
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Syntax
> array_max(array)
> Arguments
> array: Any ARRAY with elements for which order is supported.
>  
> Returns
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Examples
> SQL
>  
> > SELECT array_max(array(1, 20, NULL, 3)); 20
>  
> {code:java}
> // Flink SQL -> select array_max(array[1, 20, null, 3])
> 20{code}
>  
> See also
> spark 
> [https://spark.apache.org/docs/latest/api/sql/index.html#array_max|https://spark.apache.org/docs/latest/api/sql/index.html#array_min]
> presto [https://prestodb.io/docs/current/functions/array.html]
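The documented semantics above (NULL elements skipped; an empty or all-NULL array returns NULL) can be sketched in a few lines. This is an illustrative model of the described behavior, not Flink's actual implementation:

```java
public class ArrayMaxSketch {
    // Documented semantics: NULL elements are skipped; an empty or
    // all-NULL array yields NULL.
    static Integer arrayMax(Integer[] array) {
        Integer max = null;
        for (Integer e : array) {
            if (e == null) {
                continue; // NULLs are skipped
            }
            if (max == null || e > max) {
                max = e;
            }
        }
        return max; // null when there is no non-null element
    }

    public static void main(String[] args) {
        System.out.println(arrayMax(new Integer[] {1, 20, null, 3})); // 20
        System.out.println(arrayMax(new Integer[] {}));               // null
    }
}
```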





[jira] [Commented] (FLINK-32257) Add ARRAY_MAX support in SQL & Table API

2023-07-04 Thread Jacky Lau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740007#comment-17740007
 ] 

Jacky Lau commented on FLINK-32257:
---

hi [~dwysakowicz], the array_max return type fix has been merged.

And I found another problem: why doesn't array_max support array/row types? 
Spark supports them:
{code:java}
// code placeholder
def isOrderable(dataType: DataType): Boolean = dataType match {
  case NullType => true
  case dt: AtomicType => true
  case struct: StructType => struct.fields.forall(f => isOrderable(f.dataType))
  case array: ArrayType => isOrderable(array.elementType)
  case udt: UserDefinedType[_] => isOrderable(udt.sqlType)
  case _ => false
} {code}
and Flink also supports this in code generation, for example in 
ComparableTypeStrategy; ArrayComparableElementTypeStrategy refers to 
ComparableTypeStrategy, but doesn't apply areTypesOfSameRootComparable.
What do you think?

> Add ARRAY_MAX support in SQL & Table API
> 
>
> Key: FLINK-32257
> URL: https://issues.apache.org/jira/browse/FLINK-32257
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> This is an implementation of ARRAY_MAX
> The array_max() function returns the maximum element of the input 
> array.
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Syntax
> array_max(array)
> Arguments
> array: Any ARRAY with elements for which order is supported.
>  
> Returns
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Examples
> SQL
>  
> > SELECT array_max(array(1, 20, NULL, 3)); 20
>  
> {code:java}
> // Flink SQL -> select array_max(array[1, 20, null, 3])
> 20{code}
>  
> See also
> spark 
> [https://spark.apache.org/docs/latest/api/sql/index.html#array_max|https://spark.apache.org/docs/latest/api/sql/index.html#array_min]
> presto [https://prestodb.io/docs/current/functions/array.html]





[jira] [Commented] (FLINK-32257) Add ARRAY_MAX support in SQL & Table API

2023-07-04 Thread Jacky Lau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740005#comment-17740005
 ] 

Jacky Lau commented on FLINK-32257:
---

hi [~hanyuzheng], I have already added MAP_ENTRIES support; it is merged in master

> Add ARRAY_MAX support in SQL & Table API
> 
>
> Key: FLINK-32257
> URL: https://issues.apache.org/jira/browse/FLINK-32257
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> This is an implementation of ARRAY_MAX
> The array_max() function returns the maximum element of the input 
> array.
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Syntax
> array_max(array)
> Arguments
> array: Any ARRAY with elements for which order is supported.
>  
> Returns
> The result matches the type of the elements. NULL elements are skipped. If 
> array is empty, or contains only NULL elements, NULL is returned.
>  
> Examples
> SQL
>  
> > SELECT array_max(array(1, 20, NULL, 3)); 20
>  
> {code:java}
> // Flink SQL -> select array_max(array[1, 20, null, 3])
> 20{code}
>  
> See also
> spark 
> [https://spark.apache.org/docs/latest/api/sql/index.html#array_max|https://spark.apache.org/docs/latest/api/sql/index.html#array_min]
> presto [https://prestodb.io/docs/current/functions/array.html]





[jira] [Updated] (FLINK-31639) Introduce tiered storage memory manager

2023-07-04 Thread Yuxin Tan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuxin Tan updated FLINK-31639:
--
Summary: Introduce tiered storage memory manager  (was: Introduce tiered 
store memory manager)

> Introduce tiered storage memory manager
> ---
>
> Key: FLINK-31639
> URL: https://issues.apache.org/jira/browse/FLINK-31639
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Network
>Affects Versions: 1.18.0
>Reporter: Yuxin Tan
>Assignee: Yuxin Tan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>






[jira] [Updated] (FLINK-31636) Upstream supports reading buffers from tiered storage

2023-07-04 Thread Yuxin Tan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuxin Tan updated FLINK-31636:
--
Summary: Upstream supports reading buffers from tiered storage  (was: 
Upstream supports reading buffers from tiered store)

> Upstream supports reading buffers from tiered storage
> -
>
> Key: FLINK-31636
> URL: https://issues.apache.org/jira/browse/FLINK-31636
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Network
>Affects Versions: 1.18.0
>Reporter: Yuxin Tan
>Assignee: Wencong Liu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>






[jira] [Updated] (FLINK-31638) Downstream supports reading buffers from tiered storage

2023-07-04 Thread Yuxin Tan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuxin Tan updated FLINK-31638:
--
Summary: Downstream supports reading buffers from tiered storage  (was: 
Downstream supports reading buffers from tiered store)

> Downstream supports reading buffers from tiered storage
> ---
>
> Key: FLINK-31638
> URL: https://issues.apache.org/jira/browse/FLINK-31638
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Network
>Affects Versions: 1.18.0
>Reporter: Yuxin Tan
>Assignee: Wencong Liu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>






[jira] [Commented] (FLINK-32480) Keyed State always returns new value instance

2023-07-04 Thread Hangxiang Yu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17740004#comment-17740004
 ] 

Hangxiang Yu commented on FLINK-32480:
--

Hi, the object cannot be reused by default.
If you are sure that your code will keep it consistent with the value in 
Flink, you can just enable object reuse.
See 
[https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/dev/datastream/execution/execution_configuration/]
 and 
[https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/deployment/config/#pipeline-object-reuse]
 for more details.

BTW, which state backend are you using? If the RocksDB state backend, I think 
it's fine to enable object reuse.
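The copy-on-read behavior under discussion can be modeled in a few lines. This is an illustrative model of why mutations are lost without update(), not Flink's state backend code:

```java
import java.util.ArrayList;
import java.util.List;

public class CopyOnReadState {
    // Minimal model of a copy-on-read state backend (RocksDB-like):
    // value() materializes a fresh object from the stored data each time,
    // so in-place mutations are lost unless update() writes them back.
    private List<String> stored = new ArrayList<>();

    List<String> value() {
        return new ArrayList<>(stored); // fresh instance per call
    }

    void update(List<String> v) {
        stored = new ArrayList<>(v);
    }

    public static void main(String[] args) {
        CopyOnReadState state = new CopyOnReadState();
        List<String> context = state.value();
        context.add("event");
        System.out.println(state.value().size()); // 0: mutation lost without update()
        state.update(context);
        System.out.println(state.value().size()); // 1
    }
}
```

With object reuse enabled (or a heap-based backend holding live objects), value() could hand back the same instance, which is why the reporter's update()-after-every-read pattern is needed in the copy-on-read case.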

 

> Keyed State always returns new value instance
> -
>
> Key: FLINK-32480
> URL: https://issues.apache.org/jira/browse/FLINK-32480
> Project: Flink
>  Issue Type: Bug
>  Components: API / State Processor
>Affects Versions: 1.14.6
>Reporter: Viktor Feklin
>Priority: Major
>
> I create a ValueState with a default value. Then I access the value in the map 
> function (multiple times with the same partition key).
> Expected behavior:
>  * First call to value() should return a new instance
>  * Second call to value() should return the instance created in the first call (just 
> like Map#computeIfAbsent)
> Actual behavior:
>  * every call to value() returns a new instance until we manually set it with the 
> update() function.
> According to the source code, we can call update only once to assign a value to 
> the current key. But from the user's point of view, it is necessary to call update() 
> every time, because I do not know if the value was already assigned or just 
> created.
> 
> Currently my code looks like:
> {code:java}
> List context = contextState.value();
> contextState.update(context); {code}
> Maybe there is some logic for immutable objects, but for mutable objects it 
> looks awkward
>  
>  
>  





[jira] [Updated] (FLINK-31635) Support writing records to the new tiered storage architecture

2023-07-04 Thread Yuxin Tan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuxin Tan updated FLINK-31635:
--
Summary: Support writing records to the new tiered storage architecture  
(was: Support writing records to the new tiered store architecture)

> Support writing records to the new tiered storage architecture
> --
>
> Key: FLINK-31635
> URL: https://issues.apache.org/jira/browse/FLINK-31635
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Network
>Affects Versions: 1.18.0
>Reporter: Yuxin Tan
>Assignee: Yuxin Tan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> Support writing records to the new tiered store architecture.
> To achieve the goal, this mainly includes the following two parts. 
> 1. Introduces the tiered storage architecture.
> 2. The producer-side implementation of the architecture





[GitHub] [flink] iceiceiceCN commented on pull request #22831: [FLINK-32388]Add the ability to pass parameters to CUSTOM PartitionCommitPolicy

2023-07-04 Thread via GitHub


iceiceiceCN commented on PR #22831:
URL: https://github.com/apache/flink/pull/22831#issuecomment-1620932577

   @flinkbot run azure





[GitHub] [flink] TanYuxin-tyx commented on pull request #22851: [FLINK-31646][network] Implement the remote tier producer for the tiered storage

2023-07-04 Thread via GitHub


TanYuxin-tyx commented on PR #22851:
URL: https://github.com/apache/flink/pull/22851#issuecomment-1620926912

   @flinkbot run azure





[jira] [Created] (FLINK-32538) CI build failed because node is corrupted when compiling

2023-07-04 Thread Yuxin Tan (Jira)
Yuxin Tan created FLINK-32538:
-

 Summary: CI build failed because node is corrupted when compiling
 Key: FLINK-32538
 URL: https://issues.apache.org/jira/browse/FLINK-32538
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI, Tests
Affects Versions: 1.18.0
Reporter: Yuxin Tan


[ERROR] The archive file 
/__w/3/.m2/repository/com/github/eirslett/node/16.13.2/node-16.13.2-linux-x64.tar.gz
 is corrupted and will be deleted. Please try the build again. 
 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=50896=logs=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5=54421a62-0c80-5aad-3319-094ff69180bb=10984]

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=50919=logs=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5=54421a62-0c80-5aad-3319-094ff69180bb=10984]



[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=50925=logs=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5=54421a62-0c80-5aad-3319-094ff69180bb=10984]






[jira] [Closed] (FLINK-32506) Add the watermark aggregation benchmark for source coordinator

2023-07-04 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan closed FLINK-32506.
---
Resolution: Fixed

> Add the watermark aggregation benchmark for source coordinator
> --
>
> Key: FLINK-32506
> URL: https://issues.apache.org/jira/browse/FLINK-32506
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Common
>Affects Versions: 1.18.0
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> FLINK-32420 is improving the watermark aggregation performance.
> We want to add a benchmark for it first, and then we can see the official 
> performance change.





[jira] [Comment Edited] (FLINK-32506) Add the watermark aggregation benchmark for source coordinator

2023-07-04 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17739793#comment-17739793
 ] 

Rui Fan edited comment on FLINK-32506 at 7/5/23 2:10 AM:
-

Merged via:
 b402108f9bc468ed5223a5d73ea0bfcfa0085cfe

 902d93c4af148fb264ef134e290c9e059e090d1f

 


was (Author: fanrui):
Merged via:
 b402108f9bc468ed5223a5d73ea0bfcfa0085cfe

 

> Add the watermark aggregation benchmark for source coordinator
> --
>
> Key: FLINK-32506
> URL: https://issues.apache.org/jira/browse/FLINK-32506
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Common
>Affects Versions: 1.18.0
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>
> FLINK-32420 is improving the watermark aggregation performance.
> We want to add a benchmark for it first, and then we can see the official 
> performance change.





[GitHub] [flink-benchmarks] 1996fanrui merged pull request #77: [FLINK-32506][connectors/common] Add the benchmark for watermark aggregation

2023-07-04 Thread via GitHub


1996fanrui merged PR #77:
URL: https://github.com/apache/flink-benchmarks/pull/77





[GitHub] [flink-benchmarks] 1996fanrui commented on pull request #77: [FLINK-32506][connectors/common] Add the benchmark for watermark aggregation

2023-07-04 Thread via GitHub


1996fanrui commented on PR #77:
URL: https://github.com/apache/flink-benchmarks/pull/77#issuecomment-1620916418

   Thanks @pnowojski  and @RocMarshal  for the review, CI passed, merging~
   
   Let's follow the benchmark result at http://codespeed.dak8s.net:8000/


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] hanyuzheng7 commented on a diff in pull request #22922: [FLINK-32256][table] Add built-in ARRAY_ MIN function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on code in PR #22922:
URL: https://github.com/apache/flink/pull/22922#discussion_r1252435731


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/ArrayComparableElementTypeStrategy.java:
##
@@ -67,6 +67,9 @@ public ArgumentCount getArgumentCount() {
     public Optional<List<DataType>> inferInputTypes(
             CallContext callContext, boolean throwOnFailure) {
         final List<DataType> argumentDataTypes = callContext.getArgumentDataTypes();
+        if (argumentDataTypes.size() < 1) {
+            return Optional.empty();
+        }

Review Comment:
   I think we can use
   `if (argumentDataTypes.size() < 1) { return Optional.empty(); }`
   to handle the empty-input situation temporarily, because Calcite's stack
   inference logic and the argument count check work in every case except an
   empty argument input. Once the bug in Calcite's stack inference logic and
   the argument count check is solved, we can remove this if statement. This
   if statement will not cause any bugs.
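   A Flink-free sketch of the early-return guard under discussion (the
   surrounding types are simplified stand-ins; Flink's real `inferInputTypes`
   works on `DataType`s and a `CallContext`):

```java
import java.util.List;
import java.util.Optional;

public class ArgumentCountGuard {
    // Simplified stand-in for the guard added to
    // ArrayComparableElementTypeStrategy#inferInputTypes: bail out with
    // Optional.empty() when the argument list is empty, so the rest of
    // the inference logic never sees zero arguments.
    public static Optional<List<String>> inferInputTypes(List<String> argumentDataTypes) {
        if (argumentDataTypes.size() < 1) {
            return Optional.empty();
        }
        // The real strategy would continue with element-type inference here.
        return Optional.of(argumentDataTypes);
    }
}
```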



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] hanyuzheng7 commented on a diff in pull request #22922: [FLINK-32256][table] Add built-in ARRAY_ MIN function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on code in PR #22922:
URL: https://github.com/apache/flink/pull/22922#discussion_r1252435731


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/ArrayComparableElementTypeStrategy.java:
##
@@ -67,6 +67,9 @@ public ArgumentCount getArgumentCount() {
     public Optional<List<DataType>> inferInputTypes(
             CallContext callContext, boolean throwOnFailure) {
         final List<DataType> argumentDataTypes = callContext.getArgumentDataTypes();
+        if (argumentDataTypes.size() < 1) {
+            return Optional.empty();
+        }

Review Comment:
   I think we can use
   `if (argumentDataTypes.size() < 1) { return Optional.empty(); }`
   to handle the empty-input situation temporarily, because Calcite's stack
   inference logic and the argument count check work in every case except an
   empty argument input. Once the bug in Calcite's stack inference logic and
   the argument count check is solved, we can remove this if statement.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] hanyuzheng7 commented on a diff in pull request #22922: [FLINK-32256][table] Add built-in ARRAY_ MIN function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on code in PR #22922:
URL: https://github.com/apache/flink/pull/22922#discussion_r1252435539


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/ArrayComparableElementTypeStrategy.java:
##
@@ -67,6 +67,9 @@ public ArgumentCount getArgumentCount() {
     public Optional<List<DataType>> inferInputTypes(
             CallContext callContext, boolean throwOnFailure) {
         final List<DataType> argumentDataTypes = callContext.getArgumentDataTypes();
+        if (argumentDataTypes.size() < 1) {
+            return Optional.empty();
+        }

Review Comment:
   @dawidwys Is this OK?
   https://github.com/apache/flink/pull/22922/files#r1252434610



##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/ArrayComparableElementTypeStrategy.java:
##
@@ -67,6 +67,9 @@ public ArgumentCount getArgumentCount() {
     public Optional<List<DataType>> inferInputTypes(
             CallContext callContext, boolean throwOnFailure) {
         final List<DataType> argumentDataTypes = callContext.getArgumentDataTypes();
+        if (argumentDataTypes.size() < 1) {
+            return Optional.empty();
+        }

Review Comment:
   I think we can use
   `if (argumentDataTypes.size() < 1) { return Optional.empty(); }`
   first to handle the empty-input situation, because Calcite's stack
   inference logic and the argument count check work in every case except an
   empty argument input. Once the bug in Calcite's stack inference logic and
   the argument count check is solved, we can remove this if statement.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] hanyuzheng7 commented on a diff in pull request #22922: [FLINK-32256][table] Add built-in ARRAY_ MIN function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on code in PR #22922:
URL: https://github.com/apache/flink/pull/22922#discussion_r1252435539


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/ArrayComparableElementTypeStrategy.java:
##
@@ -67,6 +67,9 @@ public ArgumentCount getArgumentCount() {
     public Optional<List<DataType>> inferInputTypes(
             CallContext callContext, boolean throwOnFailure) {
         final List<DataType> argumentDataTypes = callContext.getArgumentDataTypes();
+        if (argumentDataTypes.size() < 1) {
+            return Optional.empty();
+        }

Review Comment:
   @dawidwys Is this OK?
   I think we can use
   `if (argumentDataTypes.size() < 1) { return Optional.empty(); }`
   first to handle the empty-input situation, because Calcite's stack
   inference logic and the argument count check work in every case except an
   empty argument input. Once the bug in Calcite's stack inference logic and
   the argument count check is solved, we can remove this if statement.
   https://github.com/apache/flink/pull/22922/files#r1252434610



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] hanyuzheng7 commented on a diff in pull request #22922: [FLINK-32256][table] Add built-in ARRAY_ MIN function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on code in PR #22922:
URL: https://github.com/apache/flink/pull/22922#discussion_r1252434610


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/ArrayComparableElementTypeStrategy.java:
##
@@ -67,6 +68,10 @@ public ArgumentCount getArgumentCount() {
     public Optional<List<DataType>> inferInputTypes(
             CallContext callContext, boolean throwOnFailure) {
         final List<DataType> argumentDataTypes = callContext.getArgumentDataTypes();
+        if (!TypeInferenceUtil.checkInputArgumentNumber(
+                argumentCount, argumentDataTypes.size(), throwOnFailure)) {
+            return callContext.fail(throwOnFailure, "the input argument number should be one");
+        }

Review Comment:
   is this ok?
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] hanyuzheng7 commented on a diff in pull request #22922: [FLINK-32256][table] Add built-in ARRAY_ MIN function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on code in PR #22922:
URL: https://github.com/apache/flink/pull/22922#discussion_r1252105421


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/ArrayComparableElementTypeStrategy.java:
##
@@ -67,6 +67,9 @@ public ArgumentCount getArgumentCount() {
     public Optional<List<DataType>> inferInputTypes(
             CallContext callContext, boolean throwOnFailure) {
         final List<DataType> argumentDataTypes = callContext.getArgumentDataTypes();
+        if (argumentDataTypes.size() < 1) {
+            return Optional.empty();
+        }

Review Comment:
   You mean I can add a static method to TypeInferenceUtil to do the argument
   count check?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] hanyuzheng7 commented on pull request #22834: [FLINK-32260][table] Add built-in ARRAY_SLICE function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on PR #22834:
URL: https://github.com/apache/flink/pull/22834#issuecomment-1620780780

   @snuyanzin The CI tests passed; it's ready to merge.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] hanyuzheng7 commented on a diff in pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on code in PR #22842:
URL: https://github.com/apache/flink/pull/22842#discussion_r1252368976


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/CommonMapInputTypeStrategy.java:
##
@@ -0,0 +1,113 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.types.inference.strategies;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.functions.FunctionDefinition;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.inference.ArgumentCount;
+import org.apache.flink.table.types.inference.CallContext;
+import org.apache.flink.table.types.inference.InputTypeStrategy;
+import org.apache.flink.table.types.inference.Signature;
+import org.apache.flink.table.types.inference.Signature.Argument;
+import org.apache.flink.table.types.logical.LegacyTypeInformationType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
+import org.apache.flink.table.types.logical.utils.LogicalTypeMerging;
+import org.apache.flink.table.types.utils.TypeConversions;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Optional;
+import java.util.stream.Collectors;
+import java.util.stream.IntStream;
+
+/** An {@link InputTypeStrategy} that expects that all arguments have a common 
map type. */
+@Internal
+public final class CommonMapInputTypeStrategy implements InputTypeStrategy {

Review Comment:
   You mean create a CommonCollectionInputTypeStrategy and let
   CommonMapInputTypeStrategy and CommonArrayInputTypeStrategy become static
   classes that we call through CommonCollectionInputTypeStrategy?
   
   But CommonMapInputTypeStrategy would then be a normal class that doesn't
   need to implement InputTypeStrategy, so why don't we call it
   CommonCollectionInputTypeStrategyUtils?
   
   And if we just move the two classes into a new class, do we really need to
   do this? I think it is ok to maintain the two classes for now.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] hanyuzheng7 commented on a diff in pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on code in PR #22842:
URL: https://github.com/apache/flink/pull/22842#discussion_r1252374401


##
docs/data/sql_functions_zh.yml:
##
@@ -760,7 +760,9 @@ collection:
   - sql: MAP_FROM_ARRAYS(array_of_keys, array_of_values)
 table: mapFromArrays(array_of_keys, array_of_values)
 description: Returns a map created from an arrays of keys and values. Note 
that the lengths of two arrays should be the same.
-
+  - sql: MAP_UNION(map1, map2)
+table: map1.mapUnion(map2)
+description: 返回一个通过合并两个图 'map1' 和 'map2' 
创建的图。这两个图应该有相同的数据结构。如果有重叠的键,'map2' 的值将覆盖 'map1' 的值。如果任一图为空,则返回 null。

Review Comment:
   ok



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] hanyuzheng7 commented on a diff in pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on code in PR #22842:
URL: https://github.com/apache/flink/pull/22842#discussion_r1252368976


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/CommonMapInputTypeStrategy.java:
##
@@ -0,0 +1,113 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.types.inference.strategies;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.functions.FunctionDefinition;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.inference.ArgumentCount;
+import org.apache.flink.table.types.inference.CallContext;
+import org.apache.flink.table.types.inference.InputTypeStrategy;
+import org.apache.flink.table.types.inference.Signature;
+import org.apache.flink.table.types.inference.Signature.Argument;
+import org.apache.flink.table.types.logical.LegacyTypeInformationType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
+import org.apache.flink.table.types.logical.utils.LogicalTypeMerging;
+import org.apache.flink.table.types.utils.TypeConversions;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Optional;
+import java.util.stream.Collectors;
+import java.util.stream.IntStream;
+
+/** An {@link InputTypeStrategy} that expects that all arguments have a common 
map type. */
+@Internal
+public final class CommonMapInputTypeStrategy implements InputTypeStrategy {

Review Comment:
   You mean create a CommonCollectionInputTypeStrategy and let
   CommonMapInputTypeStrategy and CommonArrayInputTypeStrategy become static
   classes that we call through CommonCollectionInputTypeStrategy?
   
   But CommonMapInputTypeStrategy would then be a normal class that doesn't
   need to implement InputTypeStrategy, so why don't we call it
   CommonCollectionInputTypeStrategyUtils?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] hanyuzheng7 commented on a diff in pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on code in PR #22842:
URL: https://github.com/apache/flink/pull/22842#discussion_r1252368976


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/CommonMapInputTypeStrategy.java:
##
@@ -0,0 +1,113 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.types.inference.strategies;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.functions.FunctionDefinition;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.inference.ArgumentCount;
+import org.apache.flink.table.types.inference.CallContext;
+import org.apache.flink.table.types.inference.InputTypeStrategy;
+import org.apache.flink.table.types.inference.Signature;
+import org.apache.flink.table.types.inference.Signature.Argument;
+import org.apache.flink.table.types.logical.LegacyTypeInformationType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
+import org.apache.flink.table.types.logical.utils.LogicalTypeMerging;
+import org.apache.flink.table.types.utils.TypeConversions;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Optional;
+import java.util.stream.Collectors;
+import java.util.stream.IntStream;
+
+/** An {@link InputTypeStrategy} that expects that all arguments have a common 
map type. */
+@Internal
+public final class CommonMapInputTypeStrategy implements InputTypeStrategy {

Review Comment:
   You mean create a CommonCollectionInputTypeStrategy and let
   CommonMapInputTypeStrategy and CommonArrayInputTypeStrategy become static
   classes that we call through CommonCollectionInputTypeStrategy?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] hanyuzheng7 commented on a diff in pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on code in PR #22842:
URL: https://github.com/apache/flink/pull/22842#discussion_r1252368976


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/CommonMapInputTypeStrategy.java:
##
@@ -0,0 +1,113 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.types.inference.strategies;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.functions.FunctionDefinition;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.inference.ArgumentCount;
+import org.apache.flink.table.types.inference.CallContext;
+import org.apache.flink.table.types.inference.InputTypeStrategy;
+import org.apache.flink.table.types.inference.Signature;
+import org.apache.flink.table.types.inference.Signature.Argument;
+import org.apache.flink.table.types.logical.LegacyTypeInformationType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
+import org.apache.flink.table.types.logical.utils.LogicalTypeMerging;
+import org.apache.flink.table.types.utils.TypeConversions;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Optional;
+import java.util.stream.Collectors;
+import java.util.stream.IntStream;
+
+/** An {@link InputTypeStrategy} that expects that all arguments have a common 
map type. */
+@Internal
+public final class CommonMapInputTypeStrategy implements InputTypeStrategy {

Review Comment:
   Wouldn't removing CommonArrayInputTypeStrategy affect other code?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] hanyuzheng7 commented on a diff in pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on code in PR #22842:
URL: https://github.com/apache/flink/pull/22842#discussion_r1252368137


##
flink-python/pyflink/table/expression.py:
##
@@ -1558,6 +1558,18 @@ def map_keys(self) -> 'Expression':
 """
 return _unary_op("mapKeys")(self)
 
+@property
+def map_union(self, map2) -> 'Expression':
+"""
+Returns a map created by merging two maps, 'map1' and 'map2'.
+These two maps should have same data structure. If there are 
overlapping keys,

Review Comment:
   `Returns a map created by merging two maps, 'map1' and 'map2'. These two
   maps should have a common map type. If there are overlapping keys, the value
   from 'map2' will overwrite the value from 'map1'. If either map is null,
   return null.`
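   The documented merge semantics can be illustrated with plain Java maps;
   this only mirrors the described behavior and is not Flink's implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MapUnionSketch {
    // MAP_UNION semantics as documented: on overlapping keys the value
    // from map2 wins; if either input map is null, the result is null.
    public static <K, V> Map<K, V> mapUnion(Map<K, V> map1, Map<K, V> map2) {
        if (map1 == null || map2 == null) {
            return null;
        }
        Map<K, V> result = new LinkedHashMap<>(map1);
        result.putAll(map2); // overlapping keys take map2's value
        return result;
    }
}
```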



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] hanyuzheng7 commented on a diff in pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on code in PR #22842:
URL: https://github.com/apache/flink/pull/22842#discussion_r1252363975


##
flink-python/pyflink/table/expression.py:
##
@@ -1558,6 +1558,18 @@ def map_keys(self) -> 'Expression':
 """
 return _unary_op("mapKeys")(self)
 
+@property
+def map_union(self, map2) -> 'Expression':
+"""
+Returns a map created by merging two maps, 'map1' and 'map2'.
+These two maps should have same data structure. If there are 
overlapping keys,

Review Comment:
   `Returns a map created by merging two maps, 'map1' and 'map2'. These two 
maps are common map type. If there are overlapping keys, the value from 'map2' 
will overwrite the value from 'map1'. If any of maps is null, return null.`
   
   is this ok?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] snuyanzin commented on a diff in pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


snuyanzin commented on code in PR #22842:
URL: https://github.com/apache/flink/pull/22842#discussion_r1252365645


##
flink-python/pyflink/table/expression.py:
##
@@ -1558,6 +1558,18 @@ def map_keys(self) -> 'Expression':
 """
 return _unary_op("mapKeys")(self)
 
+@property
+def map_union(self, map2) -> 'Expression':
+"""
+Returns a map created by merging two maps, 'map1' and 'map2'.
+These two maps should have same data structure. If there are 
overlapping keys,

Review Comment:
   I would change this sentence ` These two maps are common map type. ` to
   ` These two maps should have a common map type. `



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] hanyuzheng7 commented on a diff in pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on code in PR #22842:
URL: https://github.com/apache/flink/pull/22842#discussion_r1252363975


##
flink-python/pyflink/table/expression.py:
##
@@ -1558,6 +1558,18 @@ def map_keys(self) -> 'Expression':
 """
 return _unary_op("mapKeys")(self)
 
+@property
+def map_union(self, map2) -> 'Expression':
+"""
+Returns a map created by merging two maps, 'map1' and 'map2'.
+These two maps should have same data structure. If there are 
overlapping keys,

Review Comment:
   `Returns a map created by merging two maps, 'map1' and 'map2'. These two
   maps are common map type. If there are overlapping keys, the value from 'map2'
   will overwrite the value from 'map1'. If any of maps is null, return null.`
   
   is this ok?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] hlteoh37 commented on a diff in pull request #22901: [FLINK-32469][REST] Simplify implementation of checkpoint handlers

2023-07-04 Thread via GitHub


hlteoh37 commented on code in PR #22901:
URL: https://github.com/apache/flink/pull/22901#discussion_r1252358904


##
docs/layouts/shortcodes/generated/expert_rest_section.html:
##
@@ -20,6 +20,18 @@
 Long
 The time in ms that the client waits for the leader address, 
e.g., Dispatcher or WebMonitorEndpoint
 
+
+rest.cache.checkpoint-statistics.size
+1000
+Integer
+Maximum number of entries in the checkpoint statistics 
cache.
+
+
+rest.cache.checkpoint-statistics.timeout
+3 s
+Duration
+Duration from write after which cached checkpoints statistics 
are cleaned up. It can be specified using notation: "500 ms", "1 s".
+

Review Comment:
   Ok, made it such that it will fallback to `web.refresh-interval`



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] snuyanzin commented on a diff in pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


snuyanzin commented on code in PR #22842:
URL: https://github.com/apache/flink/pull/22842#discussion_r1252357094


##
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/inference/strategies/CommonMapInputTypeStrategy.java:
##
@@ -0,0 +1,113 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.types.inference.strategies;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.table.functions.FunctionDefinition;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.table.types.inference.ArgumentCount;
+import org.apache.flink.table.types.inference.CallContext;
+import org.apache.flink.table.types.inference.InputTypeStrategy;
+import org.apache.flink.table.types.inference.Signature;
+import org.apache.flink.table.types.inference.Signature.Argument;
+import org.apache.flink.table.types.logical.LegacyTypeInformationType;
+import org.apache.flink.table.types.logical.LogicalType;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
+import org.apache.flink.table.types.logical.utils.LogicalTypeMerging;
+import org.apache.flink.table.types.utils.TypeConversions;
+
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Optional;
+import java.util.stream.Collectors;
+import java.util.stream.IntStream;
+
+/** An {@link InputTypeStrategy} that expects that all arguments have a common 
map type. */
+@Internal
+public final class CommonMapInputTypeStrategy implements InputTypeStrategy {

Review Comment:
   Can we create a `CommonCollectionInputTypeStrategy` class with this logic
   and 2 subclasses `CommonMapInputTypeStrategy` and 
`CommonArrayInputTypeStrategy` which customize it a bit.
   
   I'm asking since `CommonMapInputTypeStrategy` and 
`CommonArrayInputTypeStrategy`  are almost identical
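
The refactoring suggested above can be sketched, in a simplified form outside Flink's actual type-inference interfaces (all names and the tuple-based type encoding below are hypothetical), as a base strategy holding the shared logic with two thin subclasses:

```python
# Sketch of the suggested refactoring: the shared validation logic lives in a
# common base class; the two concrete strategies only customize the
# collection-specific part. Class and method names are hypothetical
# simplifications of Flink's InputTypeStrategy hierarchy, and logical types
# are modeled as plain tuples like ("MAP", key_type, value_type).


class CommonCollectionInputTypeStrategy:
    """Argument-validation logic shared by the map and array variants."""

    expected_kind = None  # subclasses declare which logical type they accept

    def infer(self, argument_types):
        if not argument_types:
            return None
        if any(t[0] != self.expected_kind for t in argument_types):
            return None  # an argument is not the expected collection type
        # "Find a common type": here, simplified to requiring identical types.
        first = argument_types[0]
        return first if all(t == first for t in argument_types) else None


class CommonMapInputTypeStrategy(CommonCollectionInputTypeStrategy):
    expected_kind = "MAP"


class CommonArrayInputTypeStrategy(CommonCollectionInputTypeStrategy):
    expected_kind = "ARRAY"
```

The point of the structure is that the near-identical bodies of the two strategies collapse into one place, and each subclass only states what differs.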



[GitHub] [flink] snuyanzin commented on a diff in pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


snuyanzin commented on code in PR #22842:
URL: https://github.com/apache/flink/pull/22842#discussion_r1252355890


##
docs/data/sql_functions_zh.yml:
##
@@ -760,7 +760,9 @@ collection:
   - sql: MAP_FROM_ARRAYS(array_of_keys, array_of_values)
 table: mapFromArrays(array_of_keys, array_of_values)
 description: Returns a map created from an arrays of keys and values. Note 
that the lengths of two arrays should be the same.
-
+  - sql: MAP_UNION(map1, map2)
+table: map1.mapUnion(map2)
+description: Returns a map created by merging the two maps 'map1' and 'map2'. 
These two maps should have the same data structure. If there are overlapping keys, the value from 'map2' overwrites the value from 'map1'. If either map is null, null is returned.

Review Comment:
   there should be an empty line before next key which is `json`



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] snuyanzin commented on a diff in pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


snuyanzin commented on code in PR #22842:
URL: https://github.com/apache/flink/pull/22842#discussion_r1252354619


##
flink-python/pyflink/table/expression.py:
##
@@ -1558,6 +1558,18 @@ def map_keys(self) -> 'Expression':
 """
 return _unary_op("mapKeys")(self)
 
+@property
+def map_union(self, map2) -> 'Expression':
+"""
+Returns a map created by merging two maps, 'map1' and 'map2'.
+These two maps should have same data structure. If there are 
overlapping keys,

Review Comment:
   you cannot say `same`, since one could be nullable and the other not...
   so it should be a common type






[GitHub] [flink] flinkbot commented on pull request #22952: [FLINK-30984][table] Upgrade Janino to 3.1.10 and Remove explicit cast required by 3.1.9

2023-07-04 Thread via GitHub


flinkbot commented on PR #22952:
URL: https://github.com/apache/flink/pull/22952#issuecomment-1620733780

   
   ## CI report:
   
   * 3ac64d4d4ecd588397f010a4a39c00c803eded9e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] hlteoh37 commented on a diff in pull request #22901: [FLINK-32469][REST] Simplify implementation of checkpoint handlers

2023-07-04 Thread via GitHub


hlteoh37 commented on code in PR #22901:
URL: https://github.com/apache/flink/pull/22901#discussion_r1252350733


##
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/SchedulerNG.java:
##
@@ -93,6 +94,8 @@ ExecutionState requestPartitionState(
 
 ExecutionGraphInfo requestJob();
 
+CheckpointStatsSnapshot requestCheckpointStats();

Review Comment:
   Yep can do.






[jira] [Updated] (FLINK-30984) Remove explicit cast required by 3.1.x janino

2023-07-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-30984:
---
Labels: pull-request-available  (was: )

> Remove explicit cast required by 3.1.x janino 
> --
>
> Key: FLINK-30984
> URL: https://issues.apache.org/jira/browse/FLINK-30984
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Table SQL / Runtime
>Reporter: Sergey Nuyanzin
>Priority: Major
>  Labels: pull-request-available
>
> This is a follow-up task.
> Currently in 3.1.x Janino there is 
> [https://github.com/janino-compiler/janino/issues/188] which causes several 
> Flink tests to fail. Once it is fixed on the Janino side, the workarounds 
> should be removed together with the Janino update.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] snuyanzin opened a new pull request, #22952: [FLINK-30984][table] Upgrade Janino to 3.1.10 and Remove explicit cast required by 3.1.9

2023-07-04 Thread via GitHub


snuyanzin opened a new pull request, #22952:
URL: https://github.com/apache/flink/pull/22952

   
   ## What is the purpose of the change
   
   The PR updates Janino to 3.1.10 and removes the workaround (an explicit cast of 
nulls) which was introduced in https://github.com/apache/flink/pull/21500 to cope 
with Janino issue https://github.com/janino-compiler/janino/issues/188.
   The fix for this issue is in 3.1.10, so the workaround is no longer needed.
   
   ## Verifying this change
   
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes )
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: ( no)
 - The serializers: ( no)
 - The runtime per-record code paths (performance sensitive): ( no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: ( no )
 - The S3 file system connector: ( no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? ( no)
 - If yes, how is the feature documented? (not applicable)
   





[jira] [Resolved] (FLINK-31784) Add multiple-component support to DefaultLeaderElectionService

2023-07-04 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl resolved FLINK-31784.
---
Fix Version/s: 1.18.0
   Resolution: Fixed

master: db4b59430664778b8c4f25b2e0eb0765b4dc10f6

> Add multiple-component support to DefaultLeaderElectionService
> --
>
> Key: FLINK-31784
> URL: https://issues.apache.org/jira/browse/FLINK-31784
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Matthias Pohl
>Assignee: Matthias Pohl
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0
>
>






[GitHub] [flink] XComp merged pull request #22844: [FLINK-31784][runtime] Adds multi-component support to DefaultLeaderElectionService

2023-07-04 Thread via GitHub


XComp merged PR #22844:
URL: https://github.com/apache/flink/pull/22844





[GitHub] [flink] hanyuzheng7 commented on a diff in pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on code in PR #22842:
URL: https://github.com/apache/flink/pull/22842#discussion_r1252343611


##
flink-python/pyflink/table/expression.py:
##
@@ -1558,6 +1558,18 @@ def map_keys(self) -> 'Expression':
 """
 return _unary_op("mapKeys")(self)
 
+@property
+def map_union(self, map2) -> 'Expression':
+"""
+Returns a map created by merging two maps, 'map1' and 'map2'.
+These two maps should have same data structure. If there are 
overlapping keys,

Review Comment:
   `Returns a map created by merging two maps, 'map1' and 'map2'. These two 
maps should have same type for their key and value individually. If there are 
overlapping keys, the value from 'map2' will overwrite the value from 'map1'. 
If any of maps is null, return null.`
   
   how about this one?
   
   What I want to say is that two maps' key and value should have same data 
type.
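
The overwrite semantics being discussed can be illustrated with plain Python dicts; this is only an analogy for MAP_UNION's documented behavior, not Flink code:

```python
def map_union(map1, map2):
    """MAP_UNION-style merge (illustration only, not the Flink implementation):
    values from map2 overwrite values from map1 on overlapping keys,
    and a null (None) input yields null."""
    if map1 is None or map2 is None:
        return None
    merged = dict(map1)   # start from map1 ...
    merged.update(map2)   # ... then map2 wins on overlapping keys
    return merged


print(map_union({"a": 1, "b": 2}, {"b": 20, "c": 3}))  # {'a': 1, 'b': 20, 'c': 3}
```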
   
   






[GitHub] [flink] hanyuzheng7 commented on a diff in pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on code in PR #22842:
URL: https://github.com/apache/flink/pull/22842#discussion_r1252343611


##
flink-python/pyflink/table/expression.py:
##
@@ -1558,6 +1558,18 @@ def map_keys(self) -> 'Expression':
 """
 return _unary_op("mapKeys")(self)
 
+@property
+def map_union(self, map2) -> 'Expression':
+"""
+Returns a map created by merging two maps, 'map1' and 'map2'.
+These two maps should have same data structure. If there are 
overlapping keys,

Review Comment:
   `Returns a map created by merging two maps, 'map1' and 'map2'. These two 
maps should have same type for their key and value individually. If there are 
overlapping keys, the value from 'map2' will overwrite the value from 'map1'. 
If any of maps is null, return null.`
   
   What I want to say is that two maps' key and value should have same data 
type.
   
   






[GitHub] [flink] hanyuzheng7 commented on a diff in pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on code in PR #22842:
URL: https://github.com/apache/flink/pull/22842#discussion_r1252343611


##
flink-python/pyflink/table/expression.py:
##
@@ -1558,6 +1558,18 @@ def map_keys(self) -> 'Expression':
 """
 return _unary_op("mapKeys")(self)
 
+@property
+def map_union(self, map2) -> 'Expression':
+"""
+Returns a map created by merging two maps, 'map1' and 'map2'.
+These two maps should have same data structure. If there are 
overlapping keys,

Review Comment:
   `Returns a map created by merging two maps, 'map1' and 'map2'. These two 
maps should have same type. If there are overlapping keys, the value from 
'map2' will overwrite the value from 'map1'. If any of maps is null, return 
null.`
   
   What I want to say is that two maps' key and value should have same data 
type.






[GitHub] [flink] hanyuzheng7 commented on a diff in pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on code in PR #22842:
URL: https://github.com/apache/flink/pull/22842#discussion_r1252342408


##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/BaseExpressions.java:
##
@@ -1468,6 +1469,16 @@ public OutType mapEntries() {
 return toApiSpecificExpression(unresolvedCall(MAP_ENTRIES, toExpr()));
 }
 
+/**
+ * Returns a map created by merging two maps, 'map1' and 'map2'. These two 
maps should have same
+ * data structure. If there are overlapping keys, the value from 'map2' 
will overwrite the value

Review Comment:
   ok



##
docs/data/sql_functions.yml:
##
@@ -658,6 +658,9 @@ collection:
   - sql: MAP_KEYS(map)
 table: MAP.mapKeys()
 description: Returns the keys of the map as array. No order guaranteed.
+  - sql: MAP_UNION(map1, map2)
+table: map1.mapUnion(map2)
+description: Returns a map created by merging two maps, 'map1' and 'map2'. 
These two maps should have same data structure. If there are overlapping keys, 
the value from 'map2' will overwrite the value from 'map1'. If any of maps is 
null, return null.

Review Comment:
   ok






[GitHub] [flink] hanyuzheng7 commented on a diff in pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


hanyuzheng7 commented on code in PR #22842:
URL: https://github.com/apache/flink/pull/22842#discussion_r1252342119


##
flink-python/pyflink/table/expression.py:
##
@@ -1558,6 +1558,18 @@ def map_keys(self) -> 'Expression':
 """
 return _unary_op("mapKeys")(self)
 
+@property
+def map_union(self, map2) -> 'Expression':
+"""
+Returns a map created by merging two maps, 'map1' and 'map2'.
+These two maps should have same data structure. If there are 
overlapping keys,

Review Comment:
   ok, I will change the description






[GitHub] [flink] hanyuzheng7 closed pull request #22951: [FLINK-26948][table] Add-ARRAY_SORT-function.

2023-07-04 Thread via GitHub


hanyuzheng7 closed pull request #22951: [FLINK-26948][table] 
Add-ARRAY_SORT-function.
URL: https://github.com/apache/flink/pull/22951





[GitHub] [flink] flinkbot commented on pull request #22951: [FLINK-26948][table] Add-ARRAY_SORT-function.

2023-07-04 Thread via GitHub


flinkbot commented on PR #22951:
URL: https://github.com/apache/flink/pull/22951#issuecomment-1620703593

   
   ## CI report:
   
   * 8abf50de7e0aeaa49cda4133fb90448a8c390d91 UNKNOWN
   
   





[GitHub] [flink] snuyanzin commented on a diff in pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


snuyanzin commented on code in PR #22842:
URL: https://github.com/apache/flink/pull/22842#discussion_r1252336736


##
docs/data/sql_functions.yml:
##
@@ -658,6 +658,9 @@ collection:
   - sql: MAP_KEYS(map)
 table: MAP.mapKeys()
 description: Returns the keys of the map as array. No order guaranteed.
+  - sql: MAP_UNION(map1, map2)
+table: map1.mapUnion(map2)
+description: Returns a map created by merging two maps, 'map1' and 'map2'. 
These two maps should have same data structure. If there are overlapping keys, 
the value from 'map2' will overwrite the value from 'map1'. If any of maps is 
null, return null.

Review Comment:
   it's not clear what "same data structure" means in a SQL context...
   I guess the same type (or not the same, but a common type)
   
   



##
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/BaseExpressions.java:
##
@@ -1468,6 +1469,16 @@ public OutType mapEntries() {
 return toApiSpecificExpression(unresolvedCall(MAP_ENTRIES, toExpr()));
 }
 
+/**
+ * Returns a map created by merging two maps, 'map1' and 'map2'. These two 
maps should have same
+ * data structure. If there are overlapping keys, the value from 'map2' 
will overwrite the value

Review Comment:
   it's not clear what "same data structure" means in a SQL context...
   I guess the same type (or not the same, but a common type)
   
   






[GitHub] [flink] snuyanzin commented on a diff in pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


snuyanzin commented on code in PR #22842:
URL: https://github.com/apache/flink/pull/22842#discussion_r1252335930


##
flink-python/pyflink/table/expression.py:
##
@@ -1558,6 +1558,18 @@ def map_keys(self) -> 'Expression':
 """
 return _unary_op("mapKeys")(self)
 
+@property
+def map_union(self, map2) -> 'Expression':
+"""
+Returns a map created by merging two maps, 'map1' and 'map2'.
+These two maps should have same data structure. If there are 
overlapping keys,

Review Comment:
   it's not clear what `same data structure` means in a SQL context...
   I guess the same type (or not the same, but a common type)






[GitHub] [flink] snuyanzin commented on a diff in pull request #22842: [FLINK-32261][table] Add built-in MAP_UNION function.

2023-07-04 Thread via GitHub


snuyanzin commented on code in PR #22842:
URL: https://github.com/apache/flink/pull/22842#discussion_r1252335930


##
flink-python/pyflink/table/expression.py:
##
@@ -1558,6 +1558,18 @@ def map_keys(self) -> 'Expression':
 """
 return _unary_op("mapKeys")(self)
 
+@property
+def map_union(self, map2) -> 'Expression':
+"""
+Returns a map created by merging two maps, 'map1' and 'map2'.
+These two maps should have same data structure. If there are 
overlapping keys,

Review Comment:
   it's not clear what `same data structure` means in a SQL context...
   I guess the same type






[jira] [Updated] (FLINK-26948) Add SORT_ARRAY supported in SQL & Table API

2023-07-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-26948:
---
Labels: pull-request-available  (was: )

> Add SORT_ARRAY supported in SQL & Table API
> ---
>
> Key: FLINK-26948
> URL: https://issues.apache.org/jira/browse/FLINK-26948
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: dalongliu
>Assignee: Jörn Kottmann
>Priority: Major
>  Labels: pull-request-available
>
> Returns the array in {{expr}} in sorted order.
> Syntax:
> {code:java}
> sort_array(expr [, ascendingOrder] ) {code}
> Arguments:
>  * {{{}expr{}}}: An ARRAY expression of sortable elements.
>  * {{{}ascendingOrder{}}}: An optional BOOLEAN expression defaulting to 
> {{{}true{}}}.
> Returns:
> The result type matches {{{}expr{}}}.
> Sorts the input array in ascending or descending order according to the 
> natural ordering of the array elements. {{NULL}} elements are placed at the 
> beginning of the returned array in ascending order or at the end of the 
> returned array in descending order.
> Examples:
> {code:java}
> > SELECT sort_array(array('b', 'd', NULL, 'c', 'a'), true);
>  [NULL,a,b,c,d] {code}
> See more:
>  * 
> [Spark|https://spark.apache.org/docs/latest/sql-ref-functions-builtin.html#date-and-timestamp-functions]
>  * [Hive|https://cwiki.apache.org/confluence/display/hive/languagemanual+udf]





[GitHub] [flink] hanyuzheng7 opened a new pull request, #22951: [FLINK-26948][table] Add-ARRAY_SORT-function.

2023-07-04 Thread via GitHub


hanyuzheng7 opened a new pull request, #22951:
URL: https://github.com/apache/flink/pull/22951

   ## What is the purpose of the change
   
   Implement the array_sort function to return the elements of an array in sorted 
order.
   
   Returns the array in sorted order. Sorts the input array in ascending or 
descending order according to the natural ordering of the array elements. NULL 
elements are placed at the beginning of the returned array in ascending order 
or at the end of the returned array in descending order. If the array itself is 
null, the function will return null. The optional ascendingOrder argument 
defaults to true if not specified.
   

   ## Brief change log
   
   ARRAY_SORT for Table API and SQL
   
   Syntax:
   
   `ARRAY_SORT(array, [ascendingOrder])`
   
   Arguments:
   array: the array to sort.
   ascendingOrder: optional; defaults to true. True sorts in ascending order, 
false in descending order.
   
   
   Returns:  
   If the array or ascendingOrder is null, returns null. If the array is 
empty, returns an empty array.
   
   Examples:
   
   ```
   SELECT ARRAY_SORT([5, 3, 2, 1, 0, null])
   Output: [null, 0, 1, 2, 3, 5]
   ```
   
   ```
   SELECT ARRAY_SORT([5, 3, 2, 1, 0, null], true)
   Output: [null, 0, 1, 2, 3, 5]
   ```
   
   ```
   SELECT ARRAY_SORT([null, 1, 2, 3, 5], false)
   Output: [5, 3, 2, 1, null]
   ```
   
   ```
   SELECT ARRAY_SORT(null)
   null
   ```
   
   ```
   SELECT ARRAY_SORT(['a', 'b', 'c', 'd', 'e'], null)
   null
   
   ```
   
   see also:
   spark: https://spark.apache.org/docs/latest/api/sql/index.html#array_sort
   
databrick:https://docs.databricks.com/sql/language-manual/functions/array_sort.html
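
The null-placement rule described above (nulls first in ascending order, last in descending order) can be mimicked in plain Python; this is just an illustration of the documented semantics, not the Flink implementation:

```python
def array_sort(arr, ascending_order=True):
    """ARRAY_SORT-like semantics (illustration only, not Flink code):
    nulls (None) go first in ascending order and last in descending order;
    a null array or null ascending_order flag yields null."""
    if arr is None or ascending_order is None:
        return None
    nulls = [x for x in arr if x is None]
    rest = sorted(x for x in arr if x is not None)
    if ascending_order:
        return nulls + rest
    return rest[::-1] + nulls


print(array_sort([5, 3, 2, 1, 0, None]))      # [None, 0, 1, 2, 3, 5]
print(array_sort([None, 1, 2, 3, 5], False))  # [5, 3, 2, 1, None]
```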
   
   ## Verifying this change
   
   - This change added tests in CollectionFunctionsITCase.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   





[jira] [Commented] (FLINK-32212) Job restarting indefinitely after an IllegalStateException from BlobLibraryCacheManager

2023-07-04 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17739982#comment-17739982
 ] 

Robert Metzger commented on FLINK-32212:


I also just ran into this situation with the Flink K8s Operator and resolved it 
by regenerating the JobGraph.
To do so, I first scaled the Flink JobManager deployment to 0.
Then, I removed the "jobGraph-yyy" key from the "xxx-cluster-config-map" 
ConfigMap.

Next, I scaled the deployment up to 1 again, and watched the job successfully 
(and with state) recover from the last checkpoint.

> Job restarting indefinitely after an IllegalStateException from 
> BlobLibraryCacheManager
> ---
>
> Key: FLINK-32212
> URL: https://issues.apache.org/jira/browse/FLINK-32212
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.16.1
> Environment: Apache Flink Kubernetes Operator 1.4
>Reporter: Matheus Felisberto
>Priority: Major
>
> After running for a few hours the job starts to throw IllegalStateException 
> and I can't figure out why. To restore the job, I need to manually delete the 
> FlinkDeployment to be recreated and redeploy everything.
> The jar is built-in into the docker image, hence is defined accordingly with 
> the Operator's documentation:
> {code:java}
> // jarURI: local:///opt/flink/usrlib/my-job.jar {code}
> I've tried to move it into /opt/flink/lib/my-job.jar but it didn't work 
> either. 
>  
> {code:java}
> // Source: my-topic (1/2)#30587 
> (b82d2c7f9696449a2d9f4dc298c0a008_bc764cd8ddf7a0cff126f51c16239658_0_30587) 
> switched from DEPLOYING to FAILED with failure cause: 
> java.lang.IllegalStateException: The library registration references a 
> different set of library BLOBs than previous registrations for this job:
> old:[p-5d91888083d38a3ff0b6c350f05a3013632137c6-7237ecbb12b0b021934b0c81aef78396]
> new:[p-5d91888083d38a3ff0b6c350f05a3013632137c6-943737c6790a3ec6870cecd652b956c2]
>     at 
> org.apache.flink.runtime.execution.librarycache.BlobLibraryCacheManager$ResolvedClassLoader.verifyClassLoader(BlobLibraryCacheManager.java:419)
>     at 
> org.apache.flink.runtime.execution.librarycache.BlobLibraryCacheManager$ResolvedClassLoader.access$500(BlobLibraryCacheManager.java:359)
>     at 
> org.apache.flink.runtime.execution.librarycache.BlobLibraryCacheManager$LibraryCacheEntry.getOrResolveClassLoader(BlobLibraryCacheManager.java:235)
>     at 
> org.apache.flink.runtime.execution.librarycache.BlobLibraryCacheManager$LibraryCacheEntry.access$1100(BlobLibraryCacheManager.java:202)
>     at 
> org.apache.flink.runtime.execution.librarycache.BlobLibraryCacheManager$DefaultClassLoaderLease.getOrResolveClassLoader(BlobLibraryCacheManager.java:336)
>     at 
> org.apache.flink.runtime.taskmanager.Task.createUserCodeClassloader(Task.java:1024)
>     at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:612)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:550)
>     at java.base/java.lang.Thread.run(Unknown Source) {code}
> If there is any other information that can help to identify the problem, 
> please let me know.
>  




