Re: [PR] [hotfix] Fix spelling of RowTimeMiniBatchAssignerOperator class [flink]

2024-01-27 Thread via GitHub


flinkbot commented on PR #24210:
URL: https://github.com/apache/flink/pull/24210#issuecomment-1913491004

   
   ## CI report:
   
   * 6d0993d870ff48c28409170332baaf7678981022 UNKNOWN
   
   
   Bot commands:
   The @flinkbot bot supports the following commands:
   
   - `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [hotfix] Fix spelling of RowTimeMiniBatchAssignerOperator class [flink]

2024-01-27 Thread via GitHub


dchristle opened a new pull request, #24210:
URL: https://github.com/apache/flink/pull/24210

   
   
   ## What is the purpose of the change
   
   Fixes the spelling of the table runtime class 
`RowTimeMiniBatchAssginerOperator` -> `RowTimeMiniBatchAssignerOperator`, and 
updates associated references in other files.
   
   ## Brief change log
   
   * Fix spelling of `RowTimeMiniBatchAssignerOperator` class
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34239) Introduce a deep copy method of SerializerConfig for merging with Table configs in org.apache.flink.table.catalog.DataTypeFactoryImpl

2024-01-27 Thread Kumar Mallikarjuna (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17811634#comment-17811634
 ] 

Kumar Mallikarjuna commented on FLINK-34239:


Hi, I'm new to the community. This seems like a good change! If we decide to 
implement it, may I pick it up?

> Introduce a deep copy method of SerializerConfig for merging with Table 
> configs in org.apache.flink.table.catalog.DataTypeFactoryImpl 
> --
>
> Key: FLINK-34239
> URL: https://issues.apache.org/jira/browse/FLINK-34239
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Core
>Affects Versions: 1.19.0
>Reporter: Zhanghao Chen
>Priority: Major
>
> *Problem*
> Currently, 
> org.apache.flink.table.catalog.DataTypeFactoryImpl#createSerializerExecutionConfig
>  creates a deep copy of the SerializerConfig and merges the Table config into 
> it. However, the deep copy is done by manually calling the getter and setter 
> methods of SerializerConfig, which is prone to human error, e.g. forgetting to 
> copy a newly added field of SerializerConfig.
> *Proposal*
> Introduce a deep copy method for SerializerConfig and replace the current 
> implementation in 
> org.apache.flink.table.catalog.DataTypeFactoryImpl#createSerializerExecutionConfig.
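
A deep copy method along the lines of the following sketch would centralize the 
copying next to the fields themselves (illustrative Java only; the field names 
below are assumptions, not the actual SerializerConfig fields):

{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the proposal, not Flink's actual SerializerConfig:
// the class owns its own deep copy, so DataTypeFactoryImpl no longer has to
// copy field by field via getters and setters, and a newly added field only
// needs to be handled in one place.
public class SerializerConfig {
    private boolean forceKryo;                                         // assumed field
    private Map<Class<?>, Class<?>> registeredTypes = new HashMap<>(); // assumed field

    public SerializerConfig copy() {
        SerializerConfig copy = new SerializerConfig();
        copy.forceKryo = this.forceKryo;
        copy.registeredTypes = new HashMap<>(this.registeredTypes); // copy mutable state
        return copy;
    }
}
{code}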



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34058][table] Support optional parameters for named parameters [flink]

2024-01-27 Thread via GitHub


hackergin commented on code in PR #24183:
URL: https://github.com/apache/flink/pull/24183#discussion_r1468766280


##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/inference/TypeInferenceOperandChecker.java:
##
@@ -116,7 +127,17 @@ public Consistency getConsistency() {
 
 @Override
 public boolean isOptional(int i) {
-return false;
+Optional<List<Boolean>> optionalArguments = typeInference.getOptionalArguments();
+if (optionalArguments.isPresent()) {
+return optionalArguments.get().get(i);

Review Comment:
   The argument hint is for optional parameters that have default values, so a 
null value cannot occur here.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34058][table] Support optional parameters for named parameters [flink]

2024-01-27 Thread via GitHub


hackergin commented on code in PR #24183:
URL: https://github.com/apache/flink/pull/24183#discussion_r1468765647


##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/inference/TypeInferenceOperandChecker.java:
##
@@ -116,7 +127,17 @@ public Consistency getConsistency() {
 
 @Override
 public boolean isOptional(int i) {
-return false;
+Optional<List<Boolean>> optionalArguments = typeInference.getOptionalArguments();
+if (optionalArguments.isPresent()) {
+return optionalArguments.get().get(i);
+} else {
+return false;
+}
+}
+
+@Override
+public boolean isFixedParameters() {
+return typeInference.getTypedArguments().isPresent();

Review Comment:
   After testing, modifying this directly may affect the validation of existing 
functions. The original default is false; if most functions return true here, 
the existing type validation during function resolution may be affected. 
Currently there are two such cases: the implicit conversion between 
timestamp(3) and timestamp(6), and the conversion between POJO and RowType. 
Therefore, I changed it to return true only if the argument hint is declared 
and there is an optional argument; otherwise the default remains false, so that 
the parameter validation inside the Calcite framework can be skipped.
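   
   A self-contained sketch of the described check (illustrative only; the class 
shape and field names are assumptions, not the actual 
TypeInferenceOperandChecker):
   
   ```java
   import java.util.List;
   import java.util.Optional;
   
   // Illustrative sketch: report fixed parameters only when typed arguments are
   // present AND at least one argument is declared optional, so that existing
   // functions keep the old default of false.
   class OperandCheckerSketch {
       private final Optional<List<Boolean>> optionalArguments; // per-argument optional flags
       private final boolean typedArgumentsPresent;
   
       OperandCheckerSketch(Optional<List<Boolean>> optionalArguments, boolean typedArgumentsPresent) {
           this.optionalArguments = optionalArguments;
           this.typedArgumentsPresent = typedArgumentsPresent;
       }
   
       boolean isOptional(int i) {
           return optionalArguments.map(flags -> flags.get(i)).orElse(false);
       }
   
       boolean isFixedParameters() {
           boolean hasOptionalArgument =
                   optionalArguments.map(flags -> flags.contains(Boolean.TRUE)).orElse(false);
           return typedArgumentsPresent && hasOptionalArgument;
       }
   }
   ```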
   
   
   Here is the failed CI link:
   
   
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57012&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34200) AutoRescalingITCase#testCheckpointRescalingInKeyedState fails

2024-01-27 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17811633#comment-17811633
 ] 

Zhu Zhu commented on FLINK-34200:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57024&view=logs&j=5c8e7682-d68f-54d1-16a2-a09310218a49&t=86f654fa-ab48-5c1a-25f4-7e7f6afb9bba

> AutoRescalingITCase#testCheckpointRescalingInKeyedState fails
> -
>
> Key: FLINK-34200
> URL: https://issues.apache.org/jira/browse/FLINK-34200
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Major
>  Labels: test-stability
> Attachments: FLINK-34200.failure.log.gz
>
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56601&view=logs&j=8fd9202e-fd17-5b26-353c-ac1ff76c8f28&t=ea7cf968-e585-52cb-e0fc-f48de023a7ca&l=8200]
> {code:java}
> Jan 19 02:31:53 02:31:53.954 [ERROR] Tests run: 32, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 1050 s <<< FAILURE! -- in 
> org.apache.flink.test.checkpointing.AutoRescalingITCase
> Jan 19 02:31:53 02:31:53.954 [ERROR] 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingInKeyedState[backend
>  = rocksdb, buffersPerChannel = 2] -- Time elapsed: 59.10 s <<< FAILURE!
> Jan 19 02:31:53 java.lang.AssertionError: expected:<[(0,8000), (0,32000), 
> (0,48000), (0,72000), (1,78000), (1,3), (1,54000), (0,2000), (0,1), 
> (0,5), (0,66000), (0,74000), (0,82000), (1,8), (1,0), (1,16000), 
> (1,24000), (1,4), (1,56000), (1,64000), (0,12000), (0,28000), (0,52000), 
> (0,6), (0,68000), (0,76000), (1,18000), (1,26000), (1,34000), (1,42000), 
> (1,58000), (0,6000), (0,14000), (0,22000), (0,38000), (0,46000), (0,62000), 
> (0,7), (1,4000), (1,2), (1,36000), (1,44000)]> but was:<[(0,8000), 
> (0,32000), (0,48000), (0,72000), (1,78000), (1,3), (1,54000), (0,2000), 
> (0,1), (0,5), (0,66000), (0,74000), (0,82000), (1,8), (1,0), 
> (1,16000), (1,24000), (1,4), (1,56000), (1,64000), (0,12000), (0,28000), 
> (0,52000), (0,6), (0,68000), (0,76000), (0,1000), (0,25000), (0,33000), 
> (0,41000), (1,18000), (1,26000), (1,34000), (1,42000), (1,58000), (0,6000), 
> (0,14000), (0,22000), (0,38000), (0,46000), (0,62000), (0,7), (1,4000), 
> (1,2), (1,36000), (1,44000)]>
> Jan 19 02:31:53   at org.junit.Assert.fail(Assert.java:89)
> Jan 19 02:31:53   at org.junit.Assert.failNotEquals(Assert.java:835)
> Jan 19 02:31:53   at org.junit.Assert.assertEquals(Assert.java:120)
> Jan 19 02:31:53   at org.junit.Assert.assertEquals(Assert.java:146)
> Jan 19 02:31:53   at 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingKeyedState(AutoRescalingITCase.java:296)
> Jan 19 02:31:53   at 
> org.apache.flink.test.checkpointing.AutoRescalingITCase.testCheckpointRescalingInKeyedState(AutoRescalingITCase.java:196)
> Jan 19 02:31:53   at java.lang.reflect.Method.invoke(Method.java:498)
> Jan 19 02:31:53   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34245][python] Fix config retrieval logic from nested YAML in pyflink_gateway_server with flattened keys. [flink]

2024-01-27 Thread via GitHub


zhuzhurk commented on PR #24209:
URL: https://github.com/apache/flink/pull/24209#issuecomment-1913475259

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34058][table] Support optional parameters for named parameters [flink]

2024-01-27 Thread via GitHub


hackergin commented on code in PR #24183:
URL: https://github.com/apache/flink/pull/24183#discussion_r1468765647


##
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/planner/functions/inference/TypeInferenceOperandChecker.java:
##
@@ -116,7 +127,17 @@ public Consistency getConsistency() {
 
 @Override
 public boolean isOptional(int i) {
-return false;
+Optional<List<Boolean>> optionalArguments = typeInference.getOptionalArguments();
+if (optionalArguments.isPresent()) {
+return optionalArguments.get().get(i);
+} else {
+return false;
+}
+}
+
+@Override
+public boolean isFixedParameters() {
+return typeInference.getTypedArguments().isPresent();

Review Comment:
   After testing, modifying this directly may affect the validation of existing 
functions. The original default is false; if most functions are treated as 
fixed-parameter, the existing type validation during function resolution may be 
affected. Currently there are two such cases: the implicit conversion between 
timestamp(3) and timestamp(6), and the conversion between POJO and RowType. 
Therefore, I changed it to return true only if the argument hint is declared 
and there is an optional argument; otherwise the default remains false, so that 
the parameter validation inside the Calcite framework can be skipped.
   
   
   Here is the failed CI link:
   
   
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=57012&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34058][table] Support optional parameters for named parameters [flink]

2024-01-27 Thread via GitHub


hackergin commented on code in PR #24183:
URL: https://github.com/apache/flink/pull/24183#discussion_r1468763844


##
flink-table/flink-table-common/src/test/java/org/apache/flink/table/types/extraction/TypeInferenceExtractorTest.java:
##
@@ -1810,4 +1844,30 @@ public String eval(String f1, String f2) {
 return "";
 }
 }
+
+    private static class ArgumentHintScalarFunctionNotNullTypeWithOptionals extends ScalarFunction {
+        @FunctionHint(
+                argument = {
+                    @ArgumentHint(
+                            type = @DataTypeHint("STRING NOT NULL"),
+                            name = "f1",
+                            isOptional = true),
+                    @ArgumentHint(type = @DataTypeHint("INTEGER"), name = "f2", isOptional = true)
+                })
+        public String eval(String f1, Integer f2) {

Review Comment:
   After verification, a NullPointerException does occur during execution. I 
have added the relevant detection and validation, but this may not be limited 
to the optional case: if a primitive parameter is declared with a nullable type 
hint, similar problems may occur, and there is currently no detection for that. 
However, to avoid modifying existing functions for now, I have not changed that 
part. A separate JIRA can be created later to track this issue.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34237][connectors/mongodb] Update MongoWriterITCase to be compatible with updated SinkV2 interfaces [flink-connector-mongodb]

2024-01-27 Thread via GitHub


Jiabao-Sun commented on PR #22:
URL: 
https://github.com/apache/flink-connector-mongodb/pull/22#issuecomment-1913457035

   Thanks @leonardBang for the review.
   PR title was updated.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-33996) Support disabling project rewrite when multiple exprs in the project reference the same sub project field.

2024-01-27 Thread Jeyhun Karimov (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17811594#comment-17811594
 ] 

Jeyhun Karimov commented on FLINK-33996:


Hi [~libenchao] Thanks for your comment. I agree that supporting expression 
reuse in the codegen phase would be the better solution. It will take a bit of 
time, but I have already started working on it (FLINK-21573) and will ping you 
once I am finished.

> Support disabling project rewrite when multiple exprs in the project 
> reference the same sub project field.
> --
>
> Key: FLINK-33996
> URL: https://issues.apache.org/jira/browse/FLINK-33996
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Affects Versions: 1.18.0
>Reporter: Feng Jin
>Priority: Major
>  Labels: pull-request-available
>
> When multiple top projects reference the same bottom project, project rewrite 
> rules may result in complex projects being calculated multiple times.
> Take the following SQL as an example:
> {code:sql}
> create table test_source(a varchar) with ('connector'='datagen');
> explain plan for select a || 'a' as a, a || 'b' as b FROM (select 
> REGEXP_REPLACE(a, 'aaa', 'bbb') as a FROM test_source);
> {code}
> The final SQL plan is as follows:
> {code:sql}
> == Abstract Syntax Tree ==
> LogicalProject(a=[||($0, _UTF-16LE'a')], b=[||($0, _UTF-16LE'b')])
> +- LogicalProject(a=[REGEXP_REPLACE($0, _UTF-16LE'aaa', _UTF-16LE'bbb')])
>+- LogicalTableScan(table=[[default_catalog, default_database, 
> test_source]])
> == Optimized Physical Plan ==
> Calc(select=[||(REGEXP_REPLACE(a, _UTF-16LE'aaa', _UTF-16LE'bbb'), 
> _UTF-16LE'a') AS a, ||(REGEXP_REPLACE(a, _UTF-16LE'aaa', _UTF-16LE'bbb'), 
> _UTF-16LE'b') AS b])
> +- TableSourceScan(table=[[default_catalog, default_database, test_source]], 
> fields=[a])
> == Optimized Execution Plan ==
> Calc(select=[||(REGEXP_REPLACE(a, 'aaa', 'bbb'), 'a') AS a, 
> ||(REGEXP_REPLACE(a, 'aaa', 'bbb'), 'b') AS b])
> +- TableSourceScan(table=[[default_catalog, default_database, test_source]], 
> fields=[a])
> {code}
> It can be observed that after project rewrite, REGEXP_REPLACE is calculated 
> twice. Generally speaking, regular expression matching is a time-consuming 
> operation, and we usually do not want it to be calculated multiple times. 
> Therefore, for this scenario, we can support disabling project rewrite.
> After disabling some rules, the final plan we obtain is as follows:
> {code:sql}
> == Abstract Syntax Tree ==
> LogicalProject(a=[||($0, _UTF-16LE'a')], b=[||($0, _UTF-16LE'b')])
> +- LogicalProject(a=[REGEXP_REPLACE($0, _UTF-16LE'aaa', _UTF-16LE'bbb')])
>+- LogicalTableScan(table=[[default_catalog, default_database, 
> test_source]])
> == Optimized Physical Plan ==
> Calc(select=[||(a, _UTF-16LE'a') AS a, ||(a, _UTF-16LE'b') AS b])
> +- Calc(select=[REGEXP_REPLACE(a, _UTF-16LE'aaa', _UTF-16LE'bbb') AS a])
>+- TableSourceScan(table=[[default_catalog, default_database, 
> test_source]], fields=[a])
> == Optimized Execution Plan ==
> Calc(select=[||(a, 'a') AS a, ||(a, 'b') AS b])
> +- Calc(select=[REGEXP_REPLACE(a, 'aaa', 'bbb') AS a])
>+- TableSourceScan(table=[[default_catalog, default_database, 
> test_source]], fields=[a])
> {code}
> After testing, we probably need to modify these few rules:
> org.apache.flink.table.planner.plan.rules.logical.FlinkProjectMergeRule
> org.apache.flink.table.planner.plan.rules.logical.FlinkCalcMergeRule
> org.apache.flink.table.planner.plan.rules.logical.FlinkProjectCalcMergeRule



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34245][python] Fix config retrieval logic from nested YAML in pyflink_gateway_server with flattened keys. [flink]

2024-01-27 Thread via GitHub


flinkbot commented on PR #24209:
URL: https://github.com/apache/flink/pull/24209#issuecomment-1913271805

   
   ## CI report:
   
   * 2a84b534a38f3b94225f987286f57ff39dc61e56 UNKNOWN
   
   
   Bot commands:
   The @flinkbot bot supports the following commands:
   
   - `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34245) CassandraSinkTest.test_cassandra_sink fails under JDK17 and JDK21 due to InaccessibleObjectException

2024-01-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34245:
---
Labels: pull-request-available test-stability  (was: test-stability)

> CassandraSinkTest.test_cassandra_sink fails under JDK17 and JDK21 due to 
> InaccessibleObjectException
> 
>
> Key: FLINK-34245
> URL: https://issues.apache.org/jira/browse/FLINK-34245
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Connectors / Cassandra
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Blocker
>  Labels: pull-request-available, test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56942&view=logs&j=b53e1644-5cb4-5a3b-5d48-f523f39bcf06&t=b68c9f5c-04c9-5c75-3862-a3a27aabbce3]
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56942&view=logs&j=60960eae-6f09-579e-371e-29814bdd1adc&t=7a70c083-6a74-5348-5106-30a76c29d8fa&l=63680]
> {code:java}
> Jan 26 01:29:27 E   py4j.protocol.Py4JJavaError: An error 
> occurred while calling 
> z:org.apache.flink.python.util.PythonConfigUtil.configPythonOperator.
> Jan 26 01:29:27 E   : 
> java.lang.reflect.InaccessibleObjectException: Unable to make field final 
> java.util.Map java.util.Collections$UnmodifiableMap.m accessible: module 
> java.base does not "opens java.util" to unnamed module @17695df3
> Jan 26 01:29:27 E at 
> java.base/java.lang.reflect.AccessibleObject.throwInaccessibleObjectException(AccessibleObject.java:391)
> Jan 26 01:29:27 E at 
> java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:367)
> Jan 26 01:29:27 E at 
> java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:315)
> Jan 26 01:29:27 E at 
> java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:183)
> Jan 26 01:29:27 E at 
> java.base/java.lang.reflect.Field.setAccessible(Field.java:177)
> Jan 26 01:29:27 E at 
> org.apache.flink.python.util.PythonConfigUtil.registerPythonBroadcastTransformationTranslator(PythonConfigUtil.java:357)
> Jan 26 01:29:27 E at 
> org.apache.flink.python.util.PythonConfigUtil.configPythonOperator(PythonConfigUtil.java:101)
> Jan 26 01:29:27 E at 
> java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
> Jan 26 01:29:27 E at 
> java.base/java.lang.reflect.Method.invoke(Method.java:580)
> Jan 26 01:29:27 E at 
> org.apache.flink.api.python.shaded.py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
> Jan 26 01:29:27 E at 
> org.apache.flink.api.python.shaded.py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
> Jan 26 01:29:27 E at 
> org.apache.flink.api.python.shaded.py4j.Gateway.invoke(Gateway.java:282)
> Jan 26 01:29:27 E at 
> org.apache.flink.api.python.shaded.py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
> Jan 26 01:29:27 E at 
> org.apache.flink.api.python.shaded.py4j.commands.CallCommand.execute(CallCommand.java:79)
> Jan 26 01:29:27 E at 
> org.apache.flink.api.python.shaded.py4j.GatewayConnection.run(GatewayConnection.java:238)
> Jan 26 01:29:27 E at 
> java.base/java.lang.Thread.run(Thread.java:1583) {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-34245][python] Fix config retrieval logic from nested YAML in pyflink_gateway_server with flattened keys. [flink]

2024-01-27 Thread via GitHub


JunRuiLee opened a new pull request, #24209:
URL: https://github.com/apache/flink/pull/24209

   
   
   
   
   ## What is the purpose of the change
   
   Fix the wrong logic in pyflink_gateway_server that used a flattened key to 
look up values in a nested standard YAML config.
   
   ## Verifying this change
   
   Locally tested CassandraSinkTest.test_cassandra_sink with JDK17 and 
confirmed it passed.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / **no** / don't 
know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ **not documented**)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34245) CassandraSinkTest.test_cassandra_sink fails under JDK17 and JDK21 due to InaccessibleObjectException

2024-01-27 Thread Junrui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17811562#comment-17811562
 ] 

Junrui Li commented on FLINK-34245:
---

[~Sergey Nuyanzin] Thanks for the kind reminder. It's a bug in how the PyFlink 
gateway handles the standard YAML config file; I'll prepare a fix as soon as 
possible.

> CassandraSinkTest.test_cassandra_sink fails under JDK17 and JDK21 due to 
> InaccessibleObjectException
> 
>
> Key: FLINK-34245
> URL: https://issues.apache.org/jira/browse/FLINK-34245
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Connectors / Cassandra
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Blocker
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56942&view=logs&j=b53e1644-5cb4-5a3b-5d48-f523f39bcf06&t=b68c9f5c-04c9-5c75-3862-a3a27aabbce3]
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56942&view=logs&j=60960eae-6f09-579e-371e-29814bdd1adc&t=7a70c083-6a74-5348-5106-30a76c29d8fa&l=63680]
> {code:java}
> Jan 26 01:29:27 E   py4j.protocol.Py4JJavaError: An error 
> occurred while calling 
> z:org.apache.flink.python.util.PythonConfigUtil.configPythonOperator.
> Jan 26 01:29:27 E   : 
> java.lang.reflect.InaccessibleObjectException: Unable to make field final 
> java.util.Map java.util.Collections$UnmodifiableMap.m accessible: module 
> java.base does not "opens java.util" to unnamed module @17695df3
> Jan 26 01:29:27 E at 
> java.base/java.lang.reflect.AccessibleObject.throwInaccessibleObjectException(AccessibleObject.java:391)
> Jan 26 01:29:27 E at 
> java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:367)
> Jan 26 01:29:27 E at 
> java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:315)
> Jan 26 01:29:27 E at 
> java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:183)
> Jan 26 01:29:27 E at 
> java.base/java.lang.reflect.Field.setAccessible(Field.java:177)
> Jan 26 01:29:27 E at 
> org.apache.flink.python.util.PythonConfigUtil.registerPythonBroadcastTransformationTranslator(PythonConfigUtil.java:357)
> Jan 26 01:29:27 E at 
> org.apache.flink.python.util.PythonConfigUtil.configPythonOperator(PythonConfigUtil.java:101)
> Jan 26 01:29:27 E at 
> java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
> Jan 26 01:29:27 E at 
> java.base/java.lang.reflect.Method.invoke(Method.java:580)
> Jan 26 01:29:27 E at 
> org.apache.flink.api.python.shaded.py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
> Jan 26 01:29:27 E at 
> org.apache.flink.api.python.shaded.py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
> Jan 26 01:29:27 E at 
> org.apache.flink.api.python.shaded.py4j.Gateway.invoke(Gateway.java:282)
> Jan 26 01:29:27 E at 
> org.apache.flink.api.python.shaded.py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
> Jan 26 01:29:27 E at 
> org.apache.flink.api.python.shaded.py4j.commands.CallCommand.execute(CallCommand.java:79)
> Jan 26 01:29:27 E at 
> org.apache.flink.api.python.shaded.py4j.GatewayConnection.run(GatewayConnection.java:238)
> Jan 26 01:29:27 E at 
> java.base/java.lang.Thread.run(Thread.java:1583) {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-21573][table-planner] Support expression reuse in codegen [flink]

2024-01-27 Thread via GitHub


flinkbot commented on PR #24208:
URL: https://github.com/apache/flink/pull/24208#issuecomment-1913261797

   
   ## CI report:
   
   * 6b1bd2ced37baed874d8570a043f0bcab9780afb UNKNOWN
   
   
   Bot commands:
   The @flinkbot bot supports the following commands:
   
   - `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-21573) Support expression reuse in codegen

2024-01-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21573:
---
Labels: auto-deprioritized-major auto-deprioritized-minor 
pull-request-available  (was: auto-deprioritized-major auto-deprioritized-minor)

> Support expression reuse in codegen
> ---
>
> Key: FLINK-21573
> URL: https://issues.apache.org/jira/browse/FLINK-21573
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: Benchao Li
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
>
> Currently there is no expression reuse in codegen, and this may result in 
> more CPU overhead in some cases. E.g.
> {code:java}
> SELECT my_map['key1'] as key1, my_map['key2'] as key2, my_map['key3'] as key3
> FROM (
>   SELECT dump_json_to_map(col1) as my_map
>   FROM T
> )
> {code}
> `dump_json_to_map` will be called 3 times.
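
With expression reuse, the generated code would conceptually evaluate the 
shared call once per record and reference the result three times, along these 
lines (illustrative Java; `dumpJsonToMap` is a hypothetical stand-in for the 
UDF above):

{code:java}
import java.util.Map;

// Illustrative sketch of what reuse buys: without it, the generated projection
// invokes dumpJsonToMap(col1) once per output field (three times here).
class GeneratedProjectionSketch {
    // Hypothetical stand-in for the user's dump_json_to_map UDF.
    static Map<String, String> dumpJsonToMap(String json) {
        return Map.of("key1", "v1", "key2", "v2", "key3", "v3");
    }

    public static void main(String[] args) {
        String col1 = "{...}";
        Map<String, String> myMap = dumpJsonToMap(col1); // evaluated once, then reused
        String key1 = myMap.get("key1");
        String key2 = myMap.get("key2");
        String key3 = myMap.get("key3");
        System.out.println(key1 + " " + key2 + " " + key3);
    }
}
{code}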



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-21573][table-planner] Support expression reuse in codegen [flink]

2024-01-27 Thread via GitHub


jeyhunkarimov opened a new pull request, #24208:
URL: https://github.com/apache/flink/pull/24208

   ## What is the purpose of the change
   
   Support expression reuse in codegen
   
   
   ## Brief change log
   
 - Support RexLocalRef
 - Optimize expression list and eliminate the unnecessary ones
 - Support codegen with RexLocalRef
   
   
   ## Verifying this change
   
   Fixed existing tests
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: yes (query serialization)
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34103][examples] Include flink-connector-datagen for AsyncIO example and bundle it into the dist by default [flink]

2024-01-27 Thread via GitHub


flinkbot commented on PR #24207:
URL: https://github.com/apache/flink/pull/24207#issuecomment-1913257936

   
   ## CI report:
   
   * 36cdc303a1e9686245bc9e28394f60b078fa6d96 UNKNOWN
   
   
   Bot commands:
   The @flinkbot bot supports the following commands:
   
   - `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34103) AsyncIO example failed to run as DataGen Connector is not bundled

2024-01-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34103:
---
Labels: pull-request-available  (was: )

> AsyncIO example failed to run as DataGen Connector is not bundled
> -
>
> Key: FLINK-34103
> URL: https://issues.apache.org/jira/browse/FLINK-34103
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.18.0
>Reporter: Zhanghao Chen
>Assignee: Zhanghao Chen
>Priority: Major
>  Labels: pull-request-available
>
> From the comments of FLINK-32821:
> root@73186f600374:/opt/flink# bin/flink run 
> /volume/flink-examples-streaming-1.18.0-AsyncIO.jar
> WARNING: Unknown module: jdk.compiler specified to --add-exports
> WARNING: Unknown module: jdk.compiler specified to --add-exports
> WARNING: Unknown module: jdk.compiler specified to --add-exports
> WARNING: Unknown module: jdk.compiler specified to --add-exports
> WARNING: Unknown module: jdk.compiler specified to --add-exports
> java.lang.NoClassDefFoundError: 
> org/apache/flink/connector/datagen/source/DataGeneratorSource
> at 
> org.apache.flink.streaming.examples.async.AsyncIOExample.main(AsyncIOExample.java:82)
> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown 
> Source)
> at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown 
> Source)
> at java.base/java.lang.reflect.Method.invoke(Unknown Source)
> at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355)
> at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222)
> at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:105)
> at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:851)
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:245)
> at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1095)
> at 
> org.apache.flink.client.cli.CliFrontend.lambda$mainInternal$9(CliFrontend.java:1189)
> at 
> org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
> at org.apache.flink.client.cli.CliFrontend.mainInternal(CliFrontend.java:1189)
> at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1157)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.flink.connector.datagen.source.DataGeneratorSource
> at java.base/java.net.URLClassLoader.findClass(Unknown Source)
> at java.base/java.lang.ClassLoader.loadClass(Unknown Source)
> at 
> org.apache.flink.util.FlinkUserCodeClassLoader.loadClassWithoutExceptionHandling(FlinkUserCodeClassLoader.java:67)
> at 
> org.apache.flink.util.ChildFirstClassLoader.loadClassWithoutExceptionHandling(ChildFirstClassLoader.java:65)
> at 
> org.apache.flink.util.FlinkUserCodeClassLoader.loadClass(FlinkUserCodeClassLoader.java:51)
> at java.base/java.lang.ClassLoader.loadClass(Unknown Source)
> ... 15 more



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-34103][examples] Include flink-connector-datagen for AsyncIO example and bundle it into the dist by default [flink]

2024-01-27 Thread via GitHub


X-czh opened a new pull request, #24207:
URL: https://github.com/apache/flink/pull/24207

   
   
   ## What is the purpose of the change
   
   Include flink-connector-datagen for the AsyncIO example to fix the missing 
dependency problem, and bundle the example into the dist by default.
   
   ## Brief change log
   
   - Use the maven shade plugin to package the AsyncIO example instead, 
bundling the flink-connector-datagen dependency.
   - It was not bundled into the dist previously because the name rewriting 
rule, which the dist bundling process requires, did not cover it before:
   
   ```xml
   <plugin>
       <groupId>org.apache.maven.plugins</groupId>
       <artifactId>maven-antrun-plugin</artifactId>
       <executions>
           <execution>
               <id>rename</id>
               <!-- ... -->
           </execution>
       </executions>
   </plugin>
   ```
   
   This change also integrates the name rewriting rule in the shade plugin 
config so that it is bundled by default.
   
   ## Verifying this change
   
   This change is covered by extending the existing test: 
`flink-end-to-end-tests/test-scripts/test_streaming_examples.sh`.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no) no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no) no
 - The serializers: (yes / no / don't know) no
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know) no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know) 
no
 - The S3 file system connector: (yes / no / don't know) no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no) no 
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented) not applicable
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34052][examples] Install the shaded streaming example jars in the maven repo [flink]

2024-01-27 Thread via GitHub


flinkbot commented on PR #24206:
URL: https://github.com/apache/flink/pull/24206#issuecomment-1913252250

   
   ## CI report:
   
   * e6cb657f8c0c2a59e73d2353b594684aa7a76c51 UNKNOWN
   
   
   Bot commands:
   The @flinkbot bot supports the following commands:
   
   - `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-34052) Missing TopSpeedWindowing and SessionWindowing JARs in Flink Maven Repository

2024-01-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-34052:
---
Labels: pull-request-available  (was: )

> Missing TopSpeedWindowing and SessionWindowing JARs in Flink Maven Repository
> -
>
> Key: FLINK-34052
> URL: https://issues.apache.org/jira/browse/FLINK-34052
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Examples
>Affects Versions: 1.18.0
>Reporter: Junrui Li
>Assignee: Zhanghao Chen
>Priority: Major
>  Labels: pull-request-available
>
> As a result of the changes implemented in FLINK-32821, the build process no 
> longer produces artifacts with the names 
> flink-examples-streaming-1.x-TopSpeedWindowing.jar and 
> flink-examples-streaming-1.x-SessionWindowing.jar. This has led to the 
> absence of these specific JAR files in the Maven repository 
> (https://repo1.maven.org/maven2/org/apache/flink/flink-examples-streaming/1.18.0/).
> These artifacts were previously available and may still be expected by users 
> as part of their application dependencies. Their removal could potentially 
> break existing build pipelines and applications that depend on these example 
> JARs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] [FLINK-34052][examples] Install the shaded streaming example jars in the maven repo [flink]

2024-01-27 Thread via GitHub


X-czh opened a new pull request, #24206:
URL: https://github.com/apache/flink/pull/24206

   
   
   ## What is the purpose of the change
   
   As a result of the changes implemented in 
[FLINK-32821](https://issues.apache.org/jira/browse/FLINK-32821), the build 
process no longer produces artifacts with the names 
flink-examples-streaming-1.x-yyy.jar. These artifacts were previously available 
and may still be expected by users as part of their application dependencies. 
Their removal could potentially break existing build pipelines and applications 
that depend on these example JARs.
   
   This PR aims to fix it.
   
   ## Brief change log
   
   Just add the following config to each example's maven shade config:
   ```xml
   <shadedArtifactAttached>true</shadedArtifactAttached>
   <shadedClassifierName>$example-name</shadedClassifierName>
   ```
   
   **Explanation**: 
   
   After investigation, it turned out that the issue arose as follows:
   
   - The maven shade plugin by default replaces the original jar with the 
result of shading. So, when a pom.xml includes two shade executions, the second 
execution will (by default) start from the result of the first one 
([ref](https://maven.apache.org/plugins/maven-shade-plugin/faq.html)).
   - We set the 
[finalName](https://maven.apache.org/plugins/maven-shade-plugin/shade-mojo.html#finalName)
 param to prevent this file replacement behavior, as we'd like to build 
multiple independent example jars.
   - However, maven by default only installs the default project artifact into 
the repo, omitting the shaded artifacts that have finalName specified. This 
leads to the problem reported in this issue.
   
   To fix it, we can set the 
[shadedArtifactAttached](https://maven.apache.org/plugins/maven-shade-plugin/examples/attached-artifact.html)
 param, which tells maven to install the shaded artifact alongside the default 
project artifact.
   
   ## Verifying this change
   
   
   Before the change:
   
   https://github.com/apache/flink/assets/22020529/c3eadef8-df77-48a3-bc7a-7bcb2784d233
   
   After the change:
   
   https://github.com/apache/flink/assets/22020529/fee1048e-da7f-4beb-a78a-e9a226520589
   
   And the installed jar includes the datagen connector class files as expected:
   
   https://github.com/apache/flink/assets/22020529/28896c5f-97f2-4782-8b96-c6f5ec689f24
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no) no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no) no
 - The serializers: (yes / no / don't know) no
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know) no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know) 
no
 - The S3 file system connector: (yes / no / don't know) no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no) no
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented) not applicable
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Comment Edited] (FLINK-34052) Missing TopSpeedWindowing and SessionWindowing JARs in Flink Maven Repository

2024-01-27 Thread Zhanghao Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17811556#comment-17811556
 ] 

Zhanghao Chen edited comment on FLINK-34052 at 1/27/24 4:26 PM:


After investigation, it turned out that the issue arose as follows:
 * The maven shade plugin by default replaces the original jar with the result 
of shading. So, when a {{pom.xml}} includes two shade executions, the second 
execution will (by default) start from the result of the first one 
([ref|https://maven.apache.org/plugins/maven-shade-plugin/faq.html]).
 * We set the 
[finalName|https://maven.apache.org/plugins/maven-shade-plugin/shade-mojo.html#finalName]
 param to prevent this file replacement behavior, as we'd like to build 
multiple independent example jars.
 * However, maven by default only installs the default project artifact into 
the repo, omitting the shaded artifacts that have finalName specified. This 
leads to the problem reported in this issue.

To fix it, we can set the 
[shadedArtifactAttached|https://maven.apache.org/plugins/maven-shade-plugin/examples/attached-artifact.html]
 param, which tells maven to install the shaded artifact alongside the default 
project artifact. I've attached a fix PR for it.


was (Author: zhanghao chen):
After investigation, it turned out that the issue arose as follows:
 * The maven shade plugin by default replaces the original jar with the result 
of shading. So, when a {{pom.xml}} includes two shade executions, the second 
execution will (by default) start from the result of the first one 
([ref|https://maven.apache.org/plugins/maven-shade-plugin/faq.html]).
 * We set the [#finalName] param to prevent this file replacement behavior, as 
we'd like to build multiple independent example jars.
 * However, maven by default only installs the default project artifact into 
the repo, omitting the shaded artifacts that have finalName specified. This 
leads to the problem reported in this issue.

To fix it, we can set the 
[shadedArtifactAttached|https://maven.apache.org/plugins/maven-shade-plugin/examples/attached-artifact.html]
 param, which tells maven to install the shaded artifact alongside the default 
project artifact. I've attached a fix PR for it.

> Missing TopSpeedWindowing and SessionWindowing JARs in Flink Maven Repository
> -
>
> Key: FLINK-34052
> URL: https://issues.apache.org/jira/browse/FLINK-34052
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Examples
>Affects Versions: 1.18.0
>Reporter: Junrui Li
>Assignee: Zhanghao Chen
>Priority: Major
>
> As a result of the changes implemented in FLINK-32821, the build process no 
> longer produces artifacts with the names 
> flink-examples-streaming-1.x-TopSpeedWindowing.jar and 
> flink-examples-streaming-1.x-SessionWindowing.jar. This has led to the 
> absence of these specific JAR files in the Maven repository 
> (https://repo1.maven.org/maven2/org/apache/flink/flink-examples-streaming/1.18.0/).
> These artifacts were previously available and may still be expected by users 
> as part of their application dependencies. Their removal could potentially 
> break existing build pipelines and applications that depend on these example 
> JARs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-34052) Missing TopSpeedWindowing and SessionWindowing JARs in Flink Maven Repository

2024-01-27 Thread Zhanghao Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17811556#comment-17811556
 ] 

Zhanghao Chen commented on FLINK-34052:
---

After investigation, it turned out that the issue arose as follows:
 * The maven shade plugin by default replaces the original jar with the result 
of shading. So, when a {{pom.xml}} includes two shade executions, the second 
execution will (by default) start from the result of the first one 
([ref|https://maven.apache.org/plugins/maven-shade-plugin/faq.html]).
 * We set the 
[finalName|https://maven.apache.org/plugins/maven-shade-plugin/shade-mojo.html#finalName]
 param to prevent this file replacement behavior, as we'd like to build 
multiple independent example jars.
 * However, maven by default only installs the default project artifact into 
the repo, omitting the shaded artifacts that have finalName specified. This 
leads to the problem reported in this issue.

To fix it, we can set the 
[shadedArtifactAttached|https://maven.apache.org/plugins/maven-shade-plugin/examples/attached-artifact.html]
 param, which tells maven to install the shaded artifact alongside the default 
project artifact. I've attached a fix PR for it.

> Missing TopSpeedWindowing and SessionWindowing JARs in Flink Maven Repository
> -
>
> Key: FLINK-34052
> URL: https://issues.apache.org/jira/browse/FLINK-34052
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Examples
>Affects Versions: 1.18.0
>Reporter: Junrui Li
>Assignee: Zhanghao Chen
>Priority: Major
>
> As a result of the changes implemented in FLINK-32821, the build process no 
> longer produces artifacts with the names 
> flink-examples-streaming-1.x-TopSpeedWindowing.jar and 
> flink-examples-streaming-1.x-SessionWindowing.jar. This has led to the 
> absence of these specific JAR files in the Maven repository 
> (https://repo1.maven.org/maven2/org/apache/flink/flink-examples-streaming/1.18.0/).
> These artifacts were previously available and may still be expected by users 
> as part of their application dependencies. Their removal could potentially 
> break existing build pipelines and applications that depend on these example 
> JARs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34052) Missing TopSpeedWindowing and SessionWindowing JARs in Flink Maven Repository

2024-01-27 Thread Zhanghao Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17811556#comment-17811556
 ] 

Zhanghao Chen edited comment on FLINK-34052 at 1/27/24 4:25 PM:


After investigation, it turned out that the issue arose as follows:
 * The maven shade plugin by default replaces the original jar with the result 
of shading. So, when a {{pom.xml}} includes two shade executions, the second 
execution will (by default) start from the result of the first one 
([ref|https://maven.apache.org/plugins/maven-shade-plugin/faq.html]).
 * We set the [#finalName] param to prevent this file replacement behavior, as 
we'd like to build multiple independent example jars.
 * However, maven by default only installs the default project artifact into 
the repo, omitting the shaded artifacts that have finalName specified. This 
leads to the problem reported in this issue.

To fix it, we can set the 
[shadedArtifactAttached|https://maven.apache.org/plugins/maven-shade-plugin/examples/attached-artifact.html]
 param, which tells maven to install the shaded artifact alongside the default 
project artifact. I've attached a fix PR for it.


was (Author: zhanghao chen):
After investigation, it turned out that the issue arose as follows:
 * The maven shade plugin by default replaces the original jar with the result 
of shading. So, when a {{pom.xml}} includes two shade executions, the second 
execution will (by default) start from the result of the first one 
([ref|https://maven.apache.org/plugins/maven-shade-plugin/faq.html]).
 * We set the 
[finalName|https://maven.apache.org/plugins/maven-shade-plugin/shade-mojo.html#finalName]
 param to prevent this file replacement behavior, as we'd like to build 
multiple independent example jars.
 * However, maven by default only installs the default project artifact into 
the repo, omitting the shaded artifacts that have finalName specified. This 
leads to the problem reported in this issue.

To fix it, we can set the 
[shadedArtifactAttached|https://maven.apache.org/plugins/maven-shade-plugin/examples/attached-artifact.html]
 param, which tells maven to install the shaded artifact alongside the default 
project artifact. I've attached a fix PR for it.

> Missing TopSpeedWindowing and SessionWindowing JARs in Flink Maven Repository
> -
>
> Key: FLINK-34052
> URL: https://issues.apache.org/jira/browse/FLINK-34052
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Examples
>Affects Versions: 1.18.0
>Reporter: Junrui Li
>Assignee: Zhanghao Chen
>Priority: Major
>
> As a result of the changes implemented in FLINK-32821, the build process no 
> longer produces artifacts with the names 
> flink-examples-streaming-1.x-TopSpeedWindowing.jar and 
> flink-examples-streaming-1.x-SessionWindowing.jar. This has led to the 
> absence of these specific JAR files in the Maven repository 
> (https://repo1.maven.org/maven2/org/apache/flink/flink-examples-streaming/1.18.0/).
> These artifacts were previously available and may still be expected by users 
> as part of their application dependencies. Their removal could potentially 
> break existing build pipelines and applications that depend on these example 
> JARs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34222][table] End-to-end implementation of minibatch join [flink]

2024-01-27 Thread via GitHub


lincoln-lil commented on PR #24161:
URL: https://github.com/apache/flink/pull/24161#issuecomment-1913183889

   @xishuaidelin The failing case is a known issue 
(https://issues.apache.org/jira/browse/FLINK-34206); you can rebase onto the 
latest master and rerun the tests.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34245) CassandraSinkTest.test_cassandra_sink fails under JDK17 and JDK21 due to InaccessibleObjectException

2024-01-27 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17811522#comment-17811522
 ] 

Sergey Nuyanzin commented on FLINK-34245:
-

Bisect shows that the error is related to FLINK-33577, FLINK-34223, FLINK-34232 

> CassandraSinkTest.test_cassandra_sink fails under JDK17 and JDK21 due to 
> InaccessibleObjectException
> 
>
> Key: FLINK-34245
> URL: https://issues.apache.org/jira/browse/FLINK-34245
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Connectors / Cassandra
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Blocker
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56942&view=logs&j=b53e1644-5cb4-5a3b-5d48-f523f39bcf06&t=b68c9f5c-04c9-5c75-3862-a3a27aabbce3]
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56942&view=logs&j=60960eae-6f09-579e-371e-29814bdd1adc&t=7a70c083-6a74-5348-5106-30a76c29d8fa&l=63680]
> {code:java}
> Jan 26 01:29:27 E   py4j.protocol.Py4JJavaError: An error 
> occurred while calling 
> z:org.apache.flink.python.util.PythonConfigUtil.configPythonOperator.
> Jan 26 01:29:27 E   : 
> java.lang.reflect.InaccessibleObjectException: Unable to make field final 
> java.util.Map java.util.Collections$UnmodifiableMap.m accessible: module 
> java.base does not "opens java.util" to unnamed module @17695df3
> Jan 26 01:29:27 E at 
> java.base/java.lang.reflect.AccessibleObject.throwInaccessibleObjectException(AccessibleObject.java:391)
> Jan 26 01:29:27 E at 
> java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:367)
> Jan 26 01:29:27 E at 
> java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:315)
> Jan 26 01:29:27 E at 
> java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:183)
> Jan 26 01:29:27 E at 
> java.base/java.lang.reflect.Field.setAccessible(Field.java:177)
> Jan 26 01:29:27 E at 
> org.apache.flink.python.util.PythonConfigUtil.registerPythonBroadcastTransformationTranslator(PythonConfigUtil.java:357)
> Jan 26 01:29:27 E at 
> org.apache.flink.python.util.PythonConfigUtil.configPythonOperator(PythonConfigUtil.java:101)
> Jan 26 01:29:27 E at 
> java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
> Jan 26 01:29:27 E at 
> java.base/java.lang.reflect.Method.invoke(Method.java:580)
> Jan 26 01:29:27 E at 
> org.apache.flink.api.python.shaded.py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
> Jan 26 01:29:27 E at 
> org.apache.flink.api.python.shaded.py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
> Jan 26 01:29:27 E at 
> org.apache.flink.api.python.shaded.py4j.Gateway.invoke(Gateway.java:282)
> Jan 26 01:29:27 E at 
> org.apache.flink.api.python.shaded.py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
> Jan 26 01:29:27 E at 
> org.apache.flink.api.python.shaded.py4j.commands.CallCommand.execute(CallCommand.java:79)
> Jan 26 01:29:27 E at 
> org.apache.flink.api.python.shaded.py4j.GatewayConnection.run(GatewayConnection.java:238)
> Jan 26 01:29:27 E at 
> java.base/java.lang.Thread.run(Thread.java:1583) {code}
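
For context, the failure can be reproduced outside Flink with a minimal 
sketch (an illustration of the JDK behavior only, not Flink's actual code); 
the usual workaround is to open the package to the unnamed module via a JVM 
flag, as noted in the comments below:

{code:java}
import java.lang.reflect.Field;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Minimal reproduction of the JDK 17+ behavior seen in the stack trace above
// (illustration only, not Flink code). Collections$UnmodifiableMap lives in
// java.base, which does not open java.util to unnamed modules by default.
public class InaccessibleFieldRepro {
    public static void main(String[] args) throws Exception {
        Map<String, String> map = Collections.unmodifiableMap(new HashMap<>());
        Field m = map.getClass().getDeclaredField("m");
        // Throws java.lang.reflect.InaccessibleObjectException on JDK 17+
        // unless the JVM is started with:
        //   --add-opens java.base/java.util=ALL-UNNAMED
        m.setAccessible(true);
        System.out.println("underlying map: " + m.get(map));
    }
}
{code}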



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-34245) CassandraSinkTest.test_cassandra_sink fails under JDK17 and JDK21 due to InaccessibleObjectException

2024-01-27 Thread Sergey Nuyanzin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811522#comment-17811522
 ] 

Sergey Nuyanzin edited comment on FLINK-34245 at 1/27/24 1:44 PM:
--

Bisect shows that the error is related to FLINK-33577, FLINK-34223, and FLINK-34232.
[~JunRuiLi], could you please have a look here?


was (Author: sergey nuyanzin):
Bisect shows that the error is related to FLINK-33577, FLINK-34223, FLINK-34232 

> CassandraSinkTest.test_cassandra_sink fails under JDK17 and JDK21 due to 
> InaccessibleObjectException
> 
>
> Key: FLINK-34245
> URL: https://issues.apache.org/jira/browse/FLINK-34245
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Connectors / Cassandra
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Blocker
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56942=logs=b53e1644-5cb4-5a3b-5d48-f523f39bcf06=b68c9f5c-04c9-5c75-3862-a3a27aabbce3]
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=56942=logs=60960eae-6f09-579e-371e-29814bdd1adc=7a70c083-6a74-5348-5106-30a76c29d8fa=63680]
> {code:java}
> Jan 26 01:29:27 E   py4j.protocol.Py4JJavaError: An error 
> occurred while calling 
> z:org.apache.flink.python.util.PythonConfigUtil.configPythonOperator.
> Jan 26 01:29:27 E   : 
> java.lang.reflect.InaccessibleObjectException: Unable to make field final 
> java.util.Map java.util.Collections$UnmodifiableMap.m accessible: module 
> java.base does not "opens java.util" to unnamed module @17695df3
> Jan 26 01:29:27 E at 
> java.base/java.lang.reflect.AccessibleObject.throwInaccessibleObjectException(AccessibleObject.java:391)
> Jan 26 01:29:27 E at 
> java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:367)
> Jan 26 01:29:27 E at 
> java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:315)
> Jan 26 01:29:27 E at 
> java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:183)
> Jan 26 01:29:27 E at 
> java.base/java.lang.reflect.Field.setAccessible(Field.java:177)
> Jan 26 01:29:27 E at 
> org.apache.flink.python.util.PythonConfigUtil.registerPythonBroadcastTransformationTranslator(PythonConfigUtil.java:357)
> Jan 26 01:29:27 E at 
> org.apache.flink.python.util.PythonConfigUtil.configPythonOperator(PythonConfigUtil.java:101)
> Jan 26 01:29:27 E at 
> java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:103)
> Jan 26 01:29:27 E at 
> java.base/java.lang.reflect.Method.invoke(Method.java:580)
> Jan 26 01:29:27 E at 
> org.apache.flink.api.python.shaded.py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
> Jan 26 01:29:27 E at 
> org.apache.flink.api.python.shaded.py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
> Jan 26 01:29:27 E at 
> org.apache.flink.api.python.shaded.py4j.Gateway.invoke(Gateway.java:282)
> Jan 26 01:29:27 E at 
> org.apache.flink.api.python.shaded.py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
> Jan 26 01:29:27 E at 
> org.apache.flink.api.python.shaded.py4j.commands.CallCommand.execute(CallCommand.java:79)
> Jan 26 01:29:27 E at 
> org.apache.flink.api.python.shaded.py4j.GatewayConnection.run(GatewayConnection.java:238)
> Jan 26 01:29:27 E at 
> java.base/java.lang.Thread.run(Thread.java:1583) {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (FLINK-34122) Deprecate old serialization config methods and options

2024-01-27 Thread Weijie Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-34122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weijie Guo closed FLINK-34122.
--
Resolution: Done

master via 2e56caf11c0ebe4461e6389b7d5c1a9ea2ff621f

> Deprecate old serialization config methods and options
> --
>
> Key: FLINK-34122
> URL: https://issues.apache.org/jira/browse/FLINK-34122
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Type Serialization System, Runtime / Configuration
>Affects Versions: 1.19.0
>Reporter: Zhanghao Chen
>Assignee: Zhanghao Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-34122][core] Deprecate old serialization config methods and options [flink]

2024-01-27 Thread via GitHub


reswqa commented on PR #24193:
URL: https://github.com/apache/flink/pull/24193#issuecomment-1913143127

   Thanks, merged.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-34122][core] Deprecate old serialization config methods and options [flink]

2024-01-27 Thread via GitHub


reswqa merged PR #24193:
URL: https://github.com/apache/flink/pull/24193


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-34246) Allow only archive failed job to history server

2024-01-27 Thread Lim Qing Wei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-34246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17811480#comment-17811480
 ] 

Lim Qing Wei commented on FLINK-34246:
--

Hi [~Wencong Liu], thanks for replying.

Sorry, I wasn't clear: I meant we should have an option to archive only 
failed jobs; it's not specific to batch.

Is this something sensible to add?

> Allow only archive failed job to history server
> ---
>
> Key: FLINK-34246
> URL: https://issues.apache.org/jira/browse/FLINK-34246
> Project: Flink
>  Issue Type: Improvement
>  Components: Client / Job Submission
>Reporter: Lim Qing Wei
>Priority: Minor
>
> Hi, I wonder if we can support archiving only failed jobs to the History 
> Server.
> The History Server is a great tool that lets us check on previous jobs. We 
> run Flink batch jobs many times throughout the week, and we only need to 
> inspect a job on the History Server when it has failed.
> It would be more efficient if we could choose to store only a subset of the 
> data.
>  
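
Nothing like this exists in Flink today, but the requested behavior amounts 
to a small predicate in the archiving path. A purely hypothetical sketch 
(both the class and the flag below are invented to illustrate the proposal):

{code:java}
import org.apache.flink.api.common.JobStatus;

// Purely hypothetical sketch of the requested feature: gate archiving on the
// job's terminal status. Neither this class nor the flag exists in Flink;
// both are invented here for illustration.
public class FailedOnlyArchivePredicate {
    private final boolean archiveOnlyFailedJobs; // hypothetical config flag

    public FailedOnlyArchivePredicate(boolean archiveOnlyFailedJobs) {
        this.archiveOnlyFailedJobs = archiveOnlyFailedJobs;
    }

    /** Returns true if the terminated job should be written to the archive. */
    public boolean shouldArchive(JobStatus terminalStatus) {
        if (!archiveOnlyFailedJobs) {
            return true; // today's behavior: archive every terminated job
        }
        return terminalStatus == JobStatus.FAILED;
    }
}
{code}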



--
This message was sent by Atlassian Jira
(v8.20.10#820010)