[jira] (FLINK-30593) Determine restart time on the fly for Autoscaler
[ https://issues.apache.org/jira/browse/FLINK-30593 ] Nicholas Jiang deleted comment on FLINK-30593: was (Author: nicholasjiang): [~gyfora], does this rely on the stop time of scale operations? JobStatus only records the observed start time of an operation and doesn't record the stop time, which means we cannot get the observed restart times for scale operations. > Determine restart time on the fly for Autoscaler > > > Key: FLINK-30593 > URL: https://issues.apache.org/jira/browse/FLINK-30593 > Project: Flink > Issue Type: New Feature > Components: Autoscaler, Kubernetes Operator > Reporter: Gyula Fora > Priority: Major > > Currently the autoscaler uses a preconfigured restart time for the job. We > should dynamically adjust this based on the observed restart times for scale > operations. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-30593) Determine restart time on the fly for Autoscaler
[ https://issues.apache.org/jira/browse/FLINK-30593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759452#comment-17759452 ] Nicholas Jiang commented on FLINK-30593: [~gyfora], does this rely on the stop time of scale operations? JobStatus only records the observed start time of an operation and doesn't record the stop time, which means we cannot get the observed restart times for scale operations.
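The idea discussed above can be sketched independently of the operator code: record the observed duration of each scale operation (which requires both a start and a stop time, as the comment points out) and derive the restart-time estimate from those observations, falling back to the preconfigured value until samples exist. This is a hypothetical sketch; `RestartTimeTracker` and its method names are illustrative and not part of the Flink autoscaler API.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayDeque;
import java.util.Deque;

/** Hypothetical helper: estimates job restart time from observed scale operations. */
class RestartTimeTracker {
    private final Duration preconfiguredRestartTime; // fallback until we have observations
    private final int maxSamples;                    // keep only the most recent samples
    private final Deque<Duration> observed = new ArrayDeque<>();

    RestartTimeTracker(Duration preconfiguredRestartTime, int maxSamples) {
        this.preconfiguredRestartTime = preconfiguredRestartTime;
        this.maxSamples = maxSamples;
    }

    /** Record one observed restart: needs both the start AND the stop time of the operation. */
    void recordRestart(Instant operationStart, Instant operationStop) {
        observed.addLast(Duration.between(operationStart, operationStop));
        if (observed.size() > maxSamples) {
            observed.removeFirst();
        }
    }

    /** Conservative estimate: the maximum of the recent observations, else the configured default. */
    Duration estimate() {
        return observed.stream().max(Duration::compareTo).orElse(preconfiguredRestartTime);
    }
}
```

Taking the maximum of a bounded window is one conservative choice; an average or percentile would also fit the same interface.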
[GitHub] [flink] snuyanzin commented on pull request #23217: [FLINK-32866][Tests] Enable the TestLoggerExtension for all junit5 tests
snuyanzin commented on PR #23217: URL: https://github.com/apache/flink/pull/23217#issuecomment-1695037306 The thing I don't get is that, on one hand, there are downstream test dependencies which could depend on JUnit 4. After this change the extension will be bundled by default... Before, we intentionally excluded it in the poms. How does this PR solve this issue? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-32940) Support projection pushdown to table source for column projections through UDTF
[ https://issues.apache.org/jira/browse/FLINK-32940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759447#comment-17759447 ] Venkata krishnan Sowrirajan commented on FLINK-32940: - Thanks [~337361...@qq.com] and [~lsy]. [~337361...@qq.com], it makes sense to me. To summarize: # Apply _CoreRules.ProjectCorrelateTransposeRule_ to {_}FlinkBatchRuleSets{_}. But this only pushes the projects that are referenced by the children of _Correlate_ and not the other projects that need to be pushed to the TableScan. # Introduce another rule that pushes the projects to the TableScan, but only if the _Correlate_ is on a known UDTF like {_}UNNEST{_}. Otherwise, for user-defined UDTFs we won't know for sure whether it is safe to push to the _TableScan_ or not. What do you think? I'm still thinking about how to write this rule so that it pushes all the projects to the TableScan operator. # Merge the _LogicalProject_ and _LogicalTableFunctionScan_ into LogicalTableFunctionScan. > Support projection pushdown to table source for column projections through > UDTF > --- > > Key: FLINK-32940 > URL: https://issues.apache.org/jira/browse/FLINK-32940 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner > Reporter: Venkata krishnan Sowrirajan > Priority: Major > > Currently, Flink doesn't push down columns projected through a UDTF like > _UNNEST_ to the table source. > For example: > {code:java} > SELECT t1.deptno, t2.ename FROM db.dept_nested t1, UNNEST(t1.employees) AS > t2{code} > For the above SQL, Flink projects all the columns for DEPT_NESTED rather than > only _name_ and {_}employees{_}. If the table source supports nested fields > column projection, ideally it should project only _t1.employees.ename_ from > the table source. 
> Query plan: > {code:java} > == Abstract Syntax Tree == > LogicalProject(deptno=[$0], ename=[$5]) > +- LogicalCorrelate(correlation=[$cor1], joinType=[inner], > requiredColumns=[{3}]) > :- LogicalTableScan(table=[[hive_catalog, db, dept_nested]]) > +- Uncollect > +- LogicalProject(employees=[$cor1.employees]) > +- LogicalValues(tuples=[[{ 0 }]]){code} > {code:java} > == Optimized Physical Plan == > Calc(select=[deptno, ename]) > +- Correlate(invocation=[$UNNEST_ROWS$1($cor1.employees)], > correlate=[table($UNNEST_ROWS$1($cor1.employees))], > select=[deptno,name,skillrecord,employees,empno,ename,skills], > rowType=[RecordType(BIGINT deptno, VARCHAR(2147483647) name, > RecordType:peek_no_expand(VARCHAR(2147483647) skilltype, VARCHAR(2147483647) > desc, RecordType:peek_no_expand(VARCHAR(2147483647) a, VARCHAR(2147483647) b) > others) skillrecord, RecordType:peek_no_expand(BIGINT empno, > VARCHAR(2147483647) ename, RecordType:peek_no_expand(VARCHAR(2147483647) > type, VARCHAR(2147483647) desc, RecordType:peek_no_expand(VARCHAR(2147483647) > a, VARCHAR(2147483647) b) others) ARRAY skills) ARRAY employees, BIGINT > empno, VARCHAR(2147483647) ename, > RecordType:peek_no_expand(VARCHAR(2147483647) type, VARCHAR(2147483647) desc, > RecordType:peek_no_expand(VARCHAR(2147483647) a, VARCHAR(2147483647) b) > others) ARRAY skills)], joinType=[INNER]) > +- TableSourceScan(table=[[hive_catalog, db, dept_nested]], > fields=[deptno, name, skillrecord, employees]){code} > {code:java} > == Optimized Execution Plan == > Calc(select=[deptno, ename]) > +- Correlate(invocation=[$UNNEST_ROWS$1($cor1.employees)], > correlate=[table($UNNEST_ROWS$1($cor1.employees))], > select=[deptno,name,skillrecord,employees,empno,ename,skills], > rowType=[RecordType(BIGINT deptno, VARCHAR(2147483647) name, > RecordType:peek_no_expand(VARCHAR(2147483647) skilltype, VARCHAR(2147483647) > desc, RecordType:peek_no_expand(VARCHAR(2147483647) a, VARCHAR(2147483647) b) > others) skillrecord, 
RecordType:peek_no_expand(BIGINT empno, > VARCHAR(2147483647) ename, RecordType:peek_no_expand(VARCHAR(2147483647) > type, VARCHAR(2147483647) desc, RecordType:peek_no_expand(VARCHAR(2147483647) > a, VARCHAR(2147483647) b) others) ARRAY skills) ARRAY employees, BIGINT > empno, VARCHAR(2147483647) ename, > RecordType:peek_no_expand(VARCHAR(2147483647) type, VARCHAR(2147483647) desc, > RecordType:peek_no_expand(VARCHAR(2147483647) a, VARCHAR(2147483647) b) > others) ARRAY skills)], joinType=[INNER]) > +- TableSourceScan(table=[[hive_catalog, db, dept_nested]], > fields=[deptno, name, skillrecord, employees]) {code} > -- This message was sent by Atlassian Jira (v8.20.10#820010)
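Independent of the Calcite rules discussed above, the core of the pruning decision is simple set arithmetic: the columns that must survive at the TableScan are the union of the fields the outer projection reads directly from the scan and the fields the Correlate requires as UDTF input. A toy sketch of that logic, using bare field indexes (not planner API; names and types omitted):

```java
import java.util.Set;
import java.util.TreeSet;

/** Illustrative only: which scan columns must be kept when pushing projects past a Correlate. */
class ScanColumnPruner {
    /**
     * Columns to keep on the scan = columns referenced by the outer project
     * plus columns the correlate requires (e.g. the UNNEST input array).
     */
    static Set<Integer> requiredScanColumns(Set<Integer> projectRefs, Set<Integer> correlateRequired) {
        Set<Integer> keep = new TreeSet<>(projectRefs); // sorted for stable output
        keep.addAll(correlateRequired);
        return keep;
    }
}
```

For the example query, the outer project reads field 0 (deptno) of the scan and the correlate's requiredColumns is {3} (employees), so only {0, 3} need to be scanned instead of all four fields shown in the physical plan.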
[jira] [Commented] (FLINK-32817) Supports running jar file names with Spaces
[ https://issues.apache.org/jira/browse/FLINK-32817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759443#comment-17759443 ] Xintong Song commented on FLINK-32817: -- Thanks, [~yesorno]. You are assigned. I notice you already opened a PR, but there are some CI failures. Please let me know when they are resolved and you are ready for a review. > Supports running jar file names with Spaces > --- > > Key: FLINK-32817 > URL: https://issues.apache.org/jira/browse/FLINK-32817 > Project: Flink > Issue Type: Improvement > Components: Deployment / YARN >Affects Versions: 1.14.0 >Reporter: Xianxun Ye >Assignee: Xianxun Ye >Priority: Major > Labels: pull-request-available > > When submitting a flink jar to a yarn cluster, if the jar filename has spaces > in it, the task will not be able to successfully parse the file path in > `YarnLocalResourceDescriptor`, and the following exception will occur in > JobManager. > The Flink jar file name is: StreamSQLExample 2.jar > {code:java} > bin/flink run -d -m yarn-cluster -p 1 -c > org.apache.flink.table.examples.java.basics.StreamSQLExample > StreamSQLExample\ 2.jar {code} > {code:java} > 2023-08-09 18:54:31,787 WARN > org.apache.flink.runtime.extension.resourcemanager.NeActiveResourceManager [] > - Failed requesting worker with resource spec WorkerResourceSpec > {cpuCores=1.0, taskHeapSize=220.160mb (230854450 bytes), taskOffHeapSize=0 > bytes, networkMemSize=158.720mb (166429984 bytes), managedMemSize=952.320mb > (998579934 bytes), numSlots=1}, current pending count: 0 > java.util.concurrent.CompletionException: > org.apache.flink.util.FlinkException: Error to parse > YarnLocalResourceDescriptor from > YarnLocalResourceDescriptor{key=StreamSQLExample 2.jar, > path=hdfs://***/.flink/application_1586413220781_33151/StreamSQLExample > 2.jar, size=7937, modificationTime=1691578403748, visibility=APPLICATION, > type=FILE} > at > 
org.apache.flink.util.concurrent.FutureUtils.lambda$supplyAsync$21(FutureUtils.java:1052) > ~[flink-dist_2.12-1.14.0.jar:1.14.0] > at > java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590) > ~[?:1.8.0_152] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > ~[?:1.8.0_152] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > ~[?:1.8.0_152] > at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_152] > Caused by: org.apache.flink.util.FlinkException: Error to parse > YarnLocalResourceDescriptor from > YarnLocalResourceDescriptor{key=StreamSQLExample 2.jar, > path=hdfs://sloth-jd-pub/user/sloth/.flink/application_1586413220781_33151/StreamSQLExample > 2.jar, size=7937, modificationTime=1691578403748, visibility=APPLICATION, > type=FILE} > at > org.apache.flink.yarn.YarnLocalResourceDescriptor.fromString(YarnLocalResourceDescriptor.java:112) > ~[flink-dist_2.12-1.14.0.jar:1.14.0] > at > org.apache.flink.yarn.Utils.decodeYarnLocalResourceDescriptorListFromString(Utils.java:600) > ~[flink-dist_2.12-1.14.0.jar:1.14.0] > at org.apache.flink.yarn.Utils.createTaskExecutorContext(Utils.java:491) > ~[flink-dist_2.12-1.14.0.jar:1.14.0] > at > org.apache.flink.yarn.YarnResourceManagerDriver.createTaskExecutorLaunchContext(YarnResourceManagerDriver.java:452) > ~[flink-dist_2.12-1.14.0.jar:1.14.0] > at > org.apache.flink.yarn.YarnResourceManagerDriver.lambda$startTaskExecutorInContainerAsync$1(YarnResourceManagerDriver.java:383) > ~[flink-dist_2.12-1.14.0.jar:1.14.0] > at > org.apache.flink.util.concurrent.FutureUtils.lambda$supplyAsync$21(FutureUtils.java:1050) > ~[flink-dist_2.12-1.14.0.jar:1.14.0] > ... 4 more{code} > From what I understand, the HDFS cluster allows for file names with spaces, > as well as S3. 
> > I think we could replace the `LOCAL_RESOURCE_DESC_FORMAT` with > {code:java} > // code placeholder > private static final Pattern LOCAL_RESOURCE_DESC_FORMAT = > Pattern.compile( > "YarnLocalResourceDescriptor\\{" > + "key=([\\S\\x20]+), path=([\\S\\x20]+), > size=([\\d]+), modificationTime=([\\d]+), visibility=(\\S+), type=(\\S+)}"); > {code} > i.e. add '\x20' so that spaces are also matched.
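The proposed pattern can be exercised in isolation. The descriptor string below is adapted from the stack trace above (host and application path shortened), and the `match` helper is a standalone check, not the actual `YarnLocalResourceDescriptor.fromString` implementation:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class DescriptorRegexCheck {
    // Proposed fix: [\S\x20]+ lets the key and path groups contain spaces (\x20) as well.
    static final Pattern LOCAL_RESOURCE_DESC_FORMAT =
            Pattern.compile(
                    "YarnLocalResourceDescriptor\\{"
                            + "key=([\\S\\x20]+), path=([\\S\\x20]+), size=([\\d]+), "
                            + "modificationTime=([\\d]+), visibility=(\\S+), type=(\\S+)}");

    static Matcher match(String descriptor) {
        Matcher m = LOCAL_RESOURCE_DESC_FORMAT.matcher(descriptor);
        if (!m.matches()) {
            throw new IllegalArgumentException(
                    "Error to parse YarnLocalResourceDescriptor from " + descriptor);
        }
        return m;
    }
}
```

With the original `\S`-only groups the same input fails to match at the first space in the key, which is exactly the FlinkException reported in the JobManager log.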
[jira] [Assigned] (FLINK-32817) Supports running jar file names with Spaces
[ https://issues.apache.org/jira/browse/FLINK-32817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xintong Song reassigned FLINK-32817: Assignee: Xianxun Ye
[GitHub] [flink] masteryhx commented on a diff in pull request #23298: [FLINK-32963][Runtime / State Backends] Update StateBackendMigrationTestBase.java
masteryhx commented on code in PR #23298: URL: https://github.com/apache/flink/pull/23298#discussion_r130676 ## flink-runtime/src/test/java/org/apache/flink/runtime/state/StateBackendMigrationTestBase.java: ## @@ -480,6 +482,14 @@ private void testKeyedMapStateUpgrade( // make sure that reading and writing each key state works with the new serializer backend.setCurrentKey(1); Iterator> iterable1 = mapState.iterator(); + +List> actualList = Review Comment: 1. Could you put them into a separate method, like `sortedIterator`? 2. A minor suggestion: use a TreeMap to avoid the manual sort
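The reviewer's suggestion (collect the map-state entries into a TreeMap instead of sorting a list manually) might look like the following standalone sketch. `sortedIterator` is the hypothetical helper name proposed in the review, and plain `Map.Entry` stands in for Flink's state types:

```java
import java.util.Iterator;
import java.util.Map;
import java.util.TreeMap;

/** Illustrative helper for tests that need deterministic iteration over map state. */
class SortedStateIterators {
    /**
     * Drains an iterator of entries into a TreeMap so that iteration order is
     * deterministic (sorted by key), avoiding a manual list-and-sort step.
     */
    static <K extends Comparable<K>, V> Iterator<Map.Entry<K, V>> sortedIterator(
            Iterator<Map.Entry<K, V>> entries) {
        TreeMap<K, V> sorted = new TreeMap<>();
        while (entries.hasNext()) {
            Map.Entry<K, V> e = entries.next();
            sorted.put(e.getKey(), e.getValue());
        }
        return sorted.entrySet().iterator();
    }
}
```

Because TreeMap keeps its entries ordered by key, the test can compare the drained entries against an expected list without calling `sort` itself.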
[GitHub] [flink] masteryhx commented on a diff in pull request #21410: [FLINK-29776][flink-statebackend-changelog][JUnit5 Migration]Module: flink-statebackend-changelog.
masteryhx commented on code in PR #21410: URL: https://github.com/apache/flink/pull/21410#discussion_r1306881957 ## flink-state-backends/flink-statebackend-changelog/src/test/java/org/apache/flink/state/changelog/ChangelogDelegateEmbeddedRocksDBStateBackendTest.java: ## @@ -33,22 +33,21 @@ import org.apache.flink.runtime.state.StateBackend; import org.apache.flink.runtime.state.TestTaskStateManager; -import org.junit.Ignore; -import org.junit.Rule; -import org.junit.Test; -import org.junit.rules.TemporaryFolder; +import org.junit.jupiter.api.Disabled; +import org.junit.jupiter.api.TestTemplate; +import org.junit.jupiter.api.io.TempDir; +import java.io.File; import java.io.IOException; /** Tests for {@link ChangelogStateBackend} delegating {@link EmbeddedRocksDBStateBackend}. */ -public class ChangelogDelegateEmbeddedRocksDBStateBackendTest -extends EmbeddedRocksDBStateBackendTest { +class ChangelogDelegateEmbeddedRocksDBStateBackendTest extends EmbeddedRocksDBStateBackendTest { -@Rule public final TemporaryFolder temp = new TemporaryFolder(); +@TempDir static File tmPath; Review Comment: ```suggestion @TempDir static File tempFile; ``` ## flink-state-backends/flink-statebackend-changelog/src/main/java/org/apache/flink/state/changelog/ChangelogMapState.java: ## @@ -83,6 +84,16 @@ public UV setValue(UV value) { } return oldValue; } + +@Override +public boolean equals(Object o) { Review Comment: This fix should not be related to the JUnit 5 migration, right? It's better to separate them into two commits.
[jira] [Commented] (FLINK-19822) Remove redundant shuffle for streaming
[ https://issues.apache.org/jira/browse/FLINK-19822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759430#comment-17759430 ] yisha zhou commented on FLINK-19822: Hi [~godfreyhe]! We plan to utilize the multiple-input operator in stream mode, and this issue is a prerequisite for it. Could you please follow up on this issue or assign it to someone else? > Remove redundant shuffle for streaming > -- > > Key: FLINK-19822 > URL: https://issues.apache.org/jira/browse/FLINK-19822 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Planner > Reporter: godfrey he > Priority: Not a Priority > Labels: auto-deprioritized-major, auto-deprioritized-minor, > auto-unassigned, pull-request-available > > This is similar to > [FLINK-12575|https://issues.apache.org/jira/browse/FLINK-12575]; we could > implement the {{satisfyTraits}} method for stream nodes to remove redundant > shuffle. This would add more possibilities that more operators can be merged > into a multiple-input operator. > Unlike batch, stream operators require that the shuffle keys and the state keys > be exactly the same, otherwise the state may not be correct. > We only support a few operators in this issue, such as Join and regular > Aggregate. Other operators will be supported in the future.
[GitHub] [flink] Jiabao-Sun commented on pull request #23218: [FLINK-32854][flink-runtime][JUnit5 Migration] The state package of flink-runtime module
Jiabao-Sun commented on PR #23218: URL: https://github.com/apache/flink/pull/23218#issuecomment-1694972823 Hi @1996fanrui. Could you help review this PR when you have time?
[jira] [Comment Edited] (FLINK-32678) Release Testing: Stress-Test to cover multiple low-level changes in Flink
[ https://issues.apache.org/jira/browse/FLINK-32678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759429#comment-17759429 ] Yang Wang edited comment on FLINK-32678 at 8/28/23 4:04 AM: *Functionality Test* 1. [SUCCEED] Build the docker image with the release-1.18 branch 2. [SUCCEED] Use the flink-k8s-operator to start a Flink app with HA enabled; check the logs and UI 3. [SUCCEED] Check the HA ConfigMaps, one for leader election and one for the job checkpoint 4. [SUCCEED] Check the thread dump of the JobManager and verify only one leader elector is running (the value was 4 before 1.15 with the old HA) 5. [SUCCEED] Use the command {{kubectl exec flink-example-statemachine-897cb6d4f-bzdv5 -- /bin/sh -c 'kill 1'}} to kill the JobManager and verify no more TaskManagers are created (Flink should reuse the existing TaskManagers before the idle timeout). 6. [SUCCEED] Verify the Flink job recovers from the latest checkpoint and keeps running 2023-08-28 03:40:29,167 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator [] - Restoring job 596bdc6b7ac5bcefb611c3df08d64520 from Checkpoint 101 @ 1693194000259 for 596bdc6b7ac5bcefb611c3df08d64520 located at oss://flink-test/flink-k8s-ha-stress-test/flink-cp/596bdc6b7ac5bcefb611c3df08d64520/chk-101. Everything works well after the refactoring of leader election, Akka, and flink-shaded. I only found one log that could be improved, by replacing the object id with a more meaningful name: 2023-08-28 03:40:18,258 INFO org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - LeaderContender has been registered under component 'resourcemanager' for org.apache.flink.kubernetes.highavailability.KubernetesLeaderElectionDriver@2a19a0fe. I am still working on the stress test and will share the result later today. 
> Release Testing: Stress-Test to cover multiple low-level changes in Flink > - > > Key: FLINK-32678 > URL: https://issues.apache.org/jira/browse/FLINK-32678 > Project: Flink > Issue Type: Sub-task > Components: Runtime / Coordination >Affects Versions: 1.18.0 >Reporter: Matthias Pohl >Assignee: Yang Wang >Priority: Major > Labels: release-testing > Fix For: 1.18.0 > > > -We decided to do another round of testing for the LeaderElection refactoring > which happened in > [FLIP-285|https://cwiki.apache.org/confluence/display/FLINK/FLIP-285%3A+refactoring+leaderelection+to+make+flink+support+multi-component+leader+election+out-of-the-box].- > This release testing task is about running a bigger amount of jobs in a Flink > environment to look for unusual behavior. This Jira issue shall cover the > following 1.18 efforts: > * Leader Election refactoring > ([FLIP-285|https://cwiki.apache.org/confluence/display/FLINK/FLIP-285%3A+refactoring+leaderelection+to+make+flink+support+multi-component+leader+election+out-of-the-box], > FLINK-26522) > * Akka to Pekko transition (FLINK-32468) > * flink-shaded 17.0 updates (FLINK-32032) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-32678) Release Testing: Stress-Test to cover multiple low-level changes in Flink
[ https://issues.apache.org/jira/browse/FLINK-32678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759429#comment-17759429 ] Yang Wang commented on FLINK-32678: --- # Functionality Test 1. [SUCCEED] Build the docker image with release-1.18 branch 2. [SUCCEED] Use the flink-k8s-operator to start a Flink app with HA enabled, check the logs, UI 3. [SUCCEED] Check HA ConfigMaps, one for leader election and one for the job checkpoint 4. [SUCCEED] Check the thread dump of the JobManager and verify only one leader elector is running(the value is 4 before 1.15 with old HA) 5. [SUCCEED] Use the command {{kubectl exec flink-example-statemachine-897cb6d4f-bzdv5 -- /bin/sh -c 'kill 1'}} to kill the JobManager and verify no more TaskManager is created(Flink should reuse the existing TaskManager before idle timeout). 6. [SUCCEED] Verify the Flink job recover from the latest checkpoint and keep running 2023-08-28 03:40:29,167 INFO org.apache.flink.runtime.checkpoint.CheckpointCoordinator[] - Restoring job 596bdc6b7ac5bcefb611c3df08d64520 from Checkpoint 101 @ 1693194000259 for 596bdc6b7ac5bcefb611c3df08d64520 located at oss://flink-test/flink-k8s-ha-stress-test/flink-cp/596bdc6b7ac5bcefb611c3df08d64520/chk-101. All the things work well after refactoring of leader-election, akka, and flink-shaded. I just find a log that could be improved by replacing the object id with some more meaningful name. 2023-08-28 03:40:18,258 INFO org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - LeaderContender has been registered under component 'resourcemanager' for org.apache.flink.kubernetes.highavailability.KubernetesLeaderElectionDriver@2a19a0fe. I am still working on the stress test and will share the result later today. 
[GitHub] [flink] RanJinh commented on a diff in pull request #23000: [FLINK-32594][runtime] Use blocking ResultPartitionType if operator only outputs records on EOF
RanJinh commented on code in PR #23000: URL: https://github.com/apache/flink/pull/23000#discussion_r1306883153 ## flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/translators/OneInputTransformationTranslator.java: ## @@ -50,24 +50,47 @@ public Collection translateForBatchInternal( keySelector, transformation.getStateKeyType(), context); -boolean isKeyed = keySelector != null; -if (isKeyed) { -BatchExecutionUtils.applyBatchExecutionSettings( -transformation.getId(), context, StreamConfig.InputRequirement.SORTED); -} + +oneInputTransformationOutputEOFSetting(transformation, context); return ids; } @Override public Collection translateForStreamingInternal( final OneInputTransformation transformation, final Context context) { -return translateInternal( -transformation, -transformation.getOperatorFactory(), -transformation.getInputType(), -transformation.getStateKeySelector(), -transformation.getStateKeyType(), -context); +Collection ids = +translateInternal( +transformation, +transformation.getOperatorFactory(), +transformation.getInputType(), +transformation.getStateKeySelector(), +transformation.getStateKeyType(), +context); + +if (transformation.isOutputOnEOF()) { +// Try to apply batch execution settings for streaming mode transformation. +oneInputTransformationOutputEOFSetting(transformation, context); +} + +return ids; +} + +private void oneInputTransformationOutputEOFSetting( Review Comment: I agree, and also modified in `TwoInputTransformationTranslator` and `MultiInputTransformationTranslator`. Thanks for your suggestion. 
[GitHub] [flink] lsyldliu commented on a diff in pull request #23282: [FlINK-32865][table-planner] Add ExecutionOrderEnforcer to exec plan and put it into BatchExecMultipleInput
lsyldliu commented on code in PR #23282:
URL: https://github.com/apache/flink/pull/23282#discussion_r1306878807

## flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/plan/fusion/spec/ExecutionOrderEnforcerFusionCodegenSpec.scala:

@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.table.planner.plan.fusion.spec
+
+import org.apache.flink.table.planner.codegen.{CodeGeneratorContext, GeneratedExpression}
+import org.apache.flink.table.planner.plan.fusion.{OpFusionCodegenSpecBase, OpFusionContext}
+
+import java.util
+
+/** The operator fusion codegen spec for ExecutionOrderEnforcer. */
+class ExecutionOrderEnforcerFusionCodegenSpec(opCodegenCtx: CodeGeneratorContext)
+  extends OpFusionCodegenSpecBase(opCodegenCtx) {
+
+  private lazy val sourceInputId = 2
+
+  private var dynamicFilteringInputContext: OpFusionContext = _
+  private var sourceInputContext: OpFusionContext = _
+
+  override def setup(opFusionContext: OpFusionContext): Unit = {
+    super.setup(opFusionContext)
+    val inputContexts = fusionContext.getInputFusionContexts
+    assert(inputContexts.size == 2)
+    dynamicFilteringInputContext = inputContexts.get(0)
+    sourceInputContext = inputContexts.get(1)
+  }
+
+  override def variablePrefix(): String = "orderEnforcer"
+
+  override def doProcessProduce(codegenCtx: CodeGeneratorContext): Unit = {
+    dynamicFilteringInputContext.processProduce(codegenCtx)
+    sourceInputContext.processProduce(codegenCtx)
+  }
+
+  override def doEndInputProduce(codegenCtx: CodeGeneratorContext): Unit = {
+    dynamicFilteringInputContext.endInputProduce(codegenCtx)
+    sourceInputContext.endInputProduce(codegenCtx)
+  }
+
+  override def doProcessConsume(
+      inputId: Int,
+      inputVars: util.List[GeneratedExpression],
+      row: GeneratedExpression): String = {
+    if (inputId == sourceInputId) {
+      s"""
+         |// call downstream to consume the row
+         |${row.code}
+         |${fusionContext.processConsume(inputVars)}

Review Comment:
```suggestion
         |${fusionContext.processConsume(null, ${row.resultTerm})}
```
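The enforcer's contract in the spec above — records from the dynamic-filtering input (input 1) are consumed but never forwarded, while only source records (input 2, the `inputId == sourceInputId` branch) reach the downstream consumer, and only after input 1 has finished — can be sketched outside Flink's codegen machinery. This is a hedged sketch; the class and method names below are invented for illustration and are not Flink's actual `OpFusionCodegenSpec` API.

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of an execution-order enforcer with two inputs:
// input 1 (dynamic filtering) is consumed for side effects only,
// input 2 (source) is forwarded downstream once input 1 has ended.
public class OrderEnforcerSketch {
    private final List<String> downstream = new ArrayList<>();
    private boolean filteringInputFinished = false;

    void processInput(int inputId, String record) {
        if (inputId == 2) { // source input: forward downstream
            if (!filteringInputFinished) {
                throw new IllegalStateException(
                        "source record arrived before dynamic-filtering input finished");
            }
            downstream.add(record);
        }
        // records from input 1 are not forwarded
    }

    void endInput(int inputId) {
        if (inputId == 1) {
            filteringInputFinished = true;
        }
    }

    List<String> output() {
        return downstream;
    }
}
```

In the real fused operator the same ordering is guaranteed by code generation rather than a runtime flag, but the observable behavior is the same: the source side is the only one that produces output.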
[jira] [Commented] (FLINK-32835) [JUnit5 Migration] The accumulators, blob and blocklist packages of flink-runtime module
[ https://issues.apache.org/jira/browse/FLINK-32835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759428#comment-17759428 ]

Rui Fan commented on FLINK-32835:
---------------------------------

Merged 3ef713271771f9367166bcb31a6c08075f1d9617

> [JUnit5 Migration] The accumulators, blob and blocklist packages of
> flink-runtime module
> -----------------------------------------------------------------------
>
>                 Key: FLINK-32835
>                 URL: https://issues.apache.org/jira/browse/FLINK-32835
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Tests
>            Reporter: Rui Fan
>            Assignee: Ferenc Csaky
>            Priority: Minor
>              Labels: pull-request-available

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (FLINK-32835) [JUnit5 Migration] The accumulators, blob and blocklist packages of flink-runtime module
[ https://issues.apache.org/jira/browse/FLINK-32835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rui Fan resolved FLINK-32835.
-----------------------------
    Fix Version/s: 1.19.0
       Resolution: Fixed

> [JUnit5 Migration] The accumulators, blob and blocklist packages of
> flink-runtime module
> -----------------------------------------------------------------------
>
>                 Key: FLINK-32835
>                 URL: https://issues.apache.org/jira/browse/FLINK-32835
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Tests
>            Reporter: Rui Fan
>            Assignee: Ferenc Csaky
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 1.19.0
[GitHub] [flink] 1996fanrui merged pull request #23211: [FLINK-32835][runtime] Migrate unit tests in "accumulators" and "blob" packages to JUnit5
1996fanrui merged PR #23211: URL: https://github.com/apache/flink/pull/23211
[jira] [Closed] (FLINK-32969) IfCallGen NullPointException Bug
[ https://issues.apache.org/jira/browse/FLINK-32969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lyn Zhang closed FLINK-32969.
-----------------------------
    Fix Version/s: 1.18.0
       Resolution: Fixed

> IfCallGen NullPointException Bug
> --------------------------------
>
>                 Key: FLINK-32969
>                 URL: https://issues.apache.org/jira/browse/FLINK-32969
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Planner
>    Affects Versions: 1.17.1
>            Reporter: Lyn Zhang
>            Priority: Major
>             Fix For: 1.18.0
>
>         Attachments: if_function_codegen.java, image-2023-08-28-10-55-26-228.png, image-2023-08-28-11-02-03-842.png
>
> If Function will cause NullPointException when the third param is a udf.
> h1. Example:
> {code:java}
> CREATE TABLE source (
>     name STRING,
>     score INT
> ) WITH (
>     'connector' = 'socket',
>     'hostname' = 'localhost:',
>     'port' = '',
>     'byte-delimiter' = '10',
>     'format' = 'json'
> );
> CREATE TABLE print(
>     name STRING,
>     score INT
> ) WITH ('connector' = 'print');
> INSERT INTO print
> SELECT IF(name = 'aa', 'null', CONCAT(name,'x')), score
> FROM source;{code}
> -- \{"name":"aa","score":10}
>
> h1. Code Generator Result
> [^if_function_codegen.java]
> !image-2023-08-28-10-55-26-228.png|width=648,height=317!
> h1. Exception Stack
> !image-2023-08-28-11-02-03-842.png|width=707,height=161!
[jira] [Commented] (FLINK-32969) IfCallGen NullPointException Bug
[ https://issues.apache.org/jira/browse/FLINK-32969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759427#comment-17759427 ]

Lyn Zhang commented on FLINK-32969:
-----------------------------------

[~luoyuxia] It looks like this has been fixed in 1.17.2 and 1.18. I will try to run this SQL on those versions; closing this issue for now.

> IfCallGen NullPointException Bug
> --------------------------------
>
>                 Key: FLINK-32969
>                 URL: https://issues.apache.org/jira/browse/FLINK-32969
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Planner
>    Affects Versions: 1.17.1
>            Reporter: Lyn Zhang
>            Priority: Major
>         Attachments: if_function_codegen.java, image-2023-08-28-10-55-26-228.png, image-2023-08-28-11-02-03-842.png
>
> If Function will cause NullPointException when the third param is a udf.
> h1. Example:
> {code:java}
> CREATE TABLE source (
>     name STRING,
>     score INT
> ) WITH (
>     'connector' = 'socket',
>     'hostname' = 'localhost:',
>     'port' = '',
>     'byte-delimiter' = '10',
>     'format' = 'json'
> );
> CREATE TABLE print(
>     name STRING,
>     score INT
> ) WITH ('connector' = 'print');
> INSERT INTO print
> SELECT IF(name = 'aa', 'null', CONCAT(name,'x')), score
> FROM source;{code}
> -- \{"name":"aa","score":10}
>
> h1. Code Generator Result
> [^if_function_codegen.java]
> !image-2023-08-28-10-55-26-228.png|width=648,height=317!
> h1. Exception Stack
> !image-2023-08-28-11-02-03-842.png|width=707,height=161!
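The failure mode reported in this issue can be illustrated with a plain-Java sketch. This is a hypothetical simplification of what generated code for `IF(name = 'aa', 'null', CONCAT(name, 'x'))` might look like, not the actual codegen output: if the false-branch result is only assigned inside the else-block but read unconditionally afterwards, the read dereferences null.

```java
// Hypothetical simplification of IF-call code generation;
// not the actual Flink codegen output.
public class IfCallGenSketch {

    // Buggy shape: falseResult is only assigned in the else-block,
    // yet the code after the if reads it unconditionally.
    static int buggy(String name) {
        boolean cond = "aa".equals(name);
        String falseResult = null;
        if (!cond) {
            falseResult = name.concat("x"); // CONCAT(name, 'x')
        }
        // Dereferencing falseResult without a guard is the reported NPE:
        return falseResult.length(); // throws NullPointerException when cond is true
    }

    // Guarded shape: only touch the branch result that was actually evaluated.
    static String guarded(String name) {
        boolean cond = "aa".equals(name);
        if (cond) {
            return "null"; // the string literal from the query, not SQL NULL
        }
        return name.concat("x");
    }
}
```

For the reported input `{"name":"aa","score":10}`, the condition is true, so the buggy shape dereferences the never-assigned false-branch variable, matching the attached stack trace.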
[GitHub] [flink] leonardBang commented on pull request #23152: [FLINK-32767][doc] Fix en-doc SHOW CREATE TABLE usage
leonardBang commented on PR #23152:
URL: https://github.com/apache/flink/pull/23152#issuecomment-1694950735

> @leonardBang @RocMarshal I found the sqlserver official doc uses this. Although the explanation is very redundant, it is very clear.
> [screenshot: https://user-images.githubusercontent.com/13617900/258964222-84b5c800-0140-41fe-ae04-1c8c532f8506.png]

+1, the sqlserver way is more friendly to users.
[jira] [Commented] (FLINK-32969) IfCallGen NullPointException Bug
[ https://issues.apache.org/jira/browse/FLINK-32969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759426#comment-17759426 ]

luoyuxia commented on FLINK-32969:
----------------------------------

[~zicat] Thanks for reporting. Is it similar to FLINK-30966?

> IfCallGen NullPointException Bug
> --------------------------------
>
>                 Key: FLINK-32969
>                 URL: https://issues.apache.org/jira/browse/FLINK-32969
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Planner
>    Affects Versions: 1.17.1
>            Reporter: Lyn Zhang
>            Priority: Major
>         Attachments: if_function_codegen.java, image-2023-08-28-10-55-26-228.png, image-2023-08-28-11-02-03-842.png
>
> If Function will cause NullPointException when the third param is a udf.
> h1. Example:
> {code:java}
> CREATE TABLE source (
>     name STRING,
>     score INT
> ) WITH (
>     'connector' = 'socket',
>     'hostname' = 'localhost:',
>     'port' = '',
>     'byte-delimiter' = '10',
>     'format' = 'json'
> );
> CREATE TABLE print(
>     name STRING,
>     score INT
> ) WITH ('connector' = 'print');
> INSERT INTO print
> SELECT IF(name = 'aa', 'null', CONCAT(name,'x')), score
> FROM source;{code}
> -- \{"name":"aa","score":10}
>
> h1. Code Generator Result
> [^if_function_codegen.java]
> !image-2023-08-28-10-55-26-228.png|width=648,height=317!
> h1. Exception Stack
> !image-2023-08-28-11-02-03-842.png|width=707,height=161!
[jira] [Commented] (FLINK-32969) IfCallGen NullPointException Bug
[ https://issues.apache.org/jira/browse/FLINK-32969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759423#comment-17759423 ]

Lyn Zhang commented on FLINK-32969:
-----------------------------------

[~jark] [~godfreyhe] Please help to check this issue.

> IfCallGen NullPointException Bug
> --------------------------------
>
>                 Key: FLINK-32969
>                 URL: https://issues.apache.org/jira/browse/FLINK-32969
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Planner
>    Affects Versions: 1.17.1
>            Reporter: Lyn Zhang
>            Priority: Major
>         Attachments: if_function_codegen.java, image-2023-08-28-10-55-26-228.png
>
> If Functions will cause NullPointException when the third param is other function.
> h1. Example:
> {code:java}
> CREATE TABLE source (
>     name STRING,
>     score INT
> ) WITH (
>     'connector' = 'socket',
>     'hostname' = 'localhost:',
>     'port' = '',
>     'byte-delimiter' = '10',
>     'format' = 'json'
> );
> CREATE TABLE print(
>     name STRING,
>     score INT
> ) WITH ('connector' = 'print');
> INSERT INTO print
> SELECT IF(name = 'aa', 'null', CONCAT(name,'x')), score
> FROM source;{code}
> -- \{"name":"aa","score":10}
>
> h1. Code Generator Result
> [^if_function_codegen.java]
> !image-2023-08-28-10-55-26-228.png!
Example: > {code:java} > CREATE TABLE source ( > name STRING , > score INT > ) WITH ( > 'connector' = 'socket', > 'hostname' = 'localhost:', > 'port' = '', > 'byte-delimiter' = '10', > 'format' = 'json' > ); > CREATE TABLE print( > name STRING, > score INT > ) WITH ('connector' = 'print'); > INSERT INTO print > SELECT IF(name = 'aa', 'null', CONCAT(name,'x')) > ,score > FROM source;{code} > – > {"name":"aa","score":10} > > h1. Code Generator Result > [^if_function_codegen.java] > !image-2023-08-28-10-55-26-228.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-32969) IfCallGen NullPointException Bug
[ https://issues.apache.org/jira/browse/FLINK-32969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lyn Zhang updated FLINK-32969: -- Description: If Functions will cause NullPointException when the third param is other function. h1. Example: {code:java} CREATE TABLE source ( name STRING , score INT ) WITH ( 'connector' = 'socket', 'hostname' = 'localhost:', 'port' = '', 'byte-delimiter' = '10', 'format' = 'json' );CREATE TABLE print( name STRING, score INT ) WITH ('connector' = 'print');INSERT INTO print SELECT IF(name = 'aa', 'null', CONCAT(name,'x')) ,score FROM source; -- {"name":"aa","score":10} {code} h1. Code Generator Result [^if_function_codegen.java] !image-2023-08-28-10-55-26-228.png! was: If Functions will cause NullPointException when the third param is other function. h1. Example: {code:java} CREATE TABLE source ( name STRING , score INT ) WITH ( 'connector' = 'socket', 'hostname' = 'host.docker.internal', 'port' = '', 'byte-delimiter' = '10', 'format' = 'json' );CREATE TABLE print( name STRING, score INT ) WITH ('connector' = 'print');INSERT INTO print SELECT IF(name = 'aa', 'null', CONCAT(name,'x')) ,score FROM source; {code} h1. Code Generator Result [^if_function_codegen.java] !image-2023-08-28-10-55-26-228.png! > IfCallGen NullPointException Bug > > > Key: FLINK-32969 > URL: https://issues.apache.org/jira/browse/FLINK-32969 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.17.1 >Reporter: Lyn Zhang >Priority: Major > Attachments: if_function_codegen.java, > image-2023-08-28-10-55-26-228.png > > > If Functions will cause NullPointException when the third param is other > function. > h1. 
Example: > {code:java} > CREATE TABLE source ( > name STRING , > score INT > ) WITH ( > 'connector' = 'socket', > 'hostname' = 'localhost:', > 'port' = '', > 'byte-delimiter' = '10', > 'format' = 'json' > );CREATE TABLE print( > name STRING, > score INT > ) WITH ('connector' = 'print');INSERT INTO print > SELECT IF(name = 'aa', 'null', CONCAT(name,'x')) > ,score > FROM source; > -- {"name":"aa","score":10} {code} > h1. Code Generator Result > [^if_function_codegen.java] > !image-2023-08-28-10-55-26-228.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (FLINK-32969) IfCallGen NullPointException Bug
[ https://issues.apache.org/jira/browse/FLINK-32969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lyn Zhang updated FLINK-32969: -- Attachment: image-2023-08-28-10-55-26-228.png if_function_codegen.java Component/s: Table SQL / Planner Affects Version/s: 1.17.1 Description: If Functions will cause NullPointException when the third param is other function. h1. Example: {code:java} CREATE TABLE source ( name STRING , score INT ) WITH ( 'connector' = 'socket', 'hostname' = 'host.docker.internal', 'port' = '', 'byte-delimiter' = '10', 'format' = 'json' );CREATE TABLE print( name STRING, score INT ) WITH ('connector' = 'print');INSERT INTO print SELECT IF(name = 'aa', 'null', CONCAT(name,'x')) ,score FROM source; {code} h1. Code Generator Result [^if_function_codegen.java] !image-2023-08-28-10-55-26-228.png! Summary: IfCallGen NullPointException Bug (was: IfCallGen Bug) > IfCallGen NullPointException Bug > > > Key: FLINK-32969 > URL: https://issues.apache.org/jira/browse/FLINK-32969 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.17.1 >Reporter: Lyn Zhang >Priority: Major > Attachments: if_function_codegen.java, > image-2023-08-28-10-55-26-228.png > > > If Functions will cause NullPointException when the third param is other > function. > h1. Example: > {code:java} > CREATE TABLE source ( > name STRING , > score INT > ) WITH ( > 'connector' = 'socket', > 'hostname' = 'host.docker.internal', > 'port' = '', > 'byte-delimiter' = '10', > 'format' = 'json' > );CREATE TABLE print( > name STRING, > score INT > ) WITH ('connector' = 'print');INSERT INTO print > SELECT IF(name = 'aa', 'null', CONCAT(name,'x')) > ,score > FROM source; > {code} > h1. Code Generator Result > [^if_function_codegen.java] > !image-2023-08-28-10-55-26-228.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
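The failure mode reported above — IF(...) throwing a NullPointerException when the third argument is another function call — matches a classic Java pitfall that generated code can run into: mixing a primitive and a possibly-null boxed value forces unboxing before any null check. A minimal standalone sketch in plain Java (not the actual generated code, which is in the attached if_function_codegen.java):

```java
public class IfNpeSketch {
    public static void main(String[] args) {
        Integer maybeNull = null; // stands in for a nested call like CONCAT(name, 'x') returning null

        // Mixing a primitive int and a possibly-null Integer in ?: forces
        // unboxing of the boxed operand, which throws NullPointerException --
        // analogous to generated IF code that unboxes the result of the third
        // argument before checking it for null.
        boolean npe = false;
        try {
            int r = false ? 1 : maybeNull; // unboxes maybeNull -> NPE
            System.out.println(r);
        } catch (NullPointerException e) {
            npe = true;
        }
        System.out.println("unboxing NPE: " + npe);

        // Safe pattern: keep both branches boxed and null-check before use.
        Integer boxed = false ? Integer.valueOf(1) : maybeNull;
        System.out.println("boxed result: " + boxed);
    }
}
```

The safe variant keeps the conditional's type boxed (`Integer`), so a null result flows through instead of being unboxed.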
[jira] [Created] (FLINK-32969) IfCallGen Bug
Lyn Zhang created FLINK-32969: - Summary: IfCallGen Bug Key: FLINK-32969 URL: https://issues.apache.org/jira/browse/FLINK-32969 Project: Flink Issue Type: Bug Reporter: Lyn Zhang -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [flink] jiexray commented on pull request #21410: [FLINK-29776][flink-statebackend-changelog][JUnit5 Migration]Module: flink-statebackend-changelog.
jiexray commented on PR #21410: URL: https://github.com/apache/flink/pull/21410#issuecomment-1694921585 @Myasuka @fredia I have resolved all conversations. Could you have a look at this PR again? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (FLINK-32958) Support VIEW as a source table in CREATE TABLE ... Like statement
[ https://issues.apache.org/jira/browse/FLINK-32958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759420#comment-17759420 ] Han commented on FLINK-32958: - [~lsy] Sure # In the above case, we just want to print the table content. It's convenient to use the CREATE TABLE LIKE statement to copy schema and create a print sink table. But now if the source table is a view, we had to give up this idea in frustration; # Other engines, such as hive and spark, support CREATE TABLE LIKE a view. So maybe supporting this syntax is not a strange thing. > Support VIEW as a source table in CREATE TABLE ... Like statement > - > > Key: FLINK-32958 > URL: https://issues.apache.org/jira/browse/FLINK-32958 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Planner >Affects Versions: 1.17.1 >Reporter: Han >Priority: Major > Labels: pull-request-available > > We can't create a table from a view through CREATE TABLE LIKE statement > > case 1: > {code:sql} > create view source_view as select id,val from source; > create table sink with ('connector' = 'print') like source_view (excluding > all); > insert into sink select * from source_view;{code} > case 2 > {code:java} > DataStreamSource source = ...; > tEnv.createTemporaryView("source", source); > tEnv.executeSql("create table sink with ('connector' = 'print') like source > (excluding all)"); > tEnv.executeSql("insert into sink select * from source");{code} > > The above cases will throw an exception: > {code:java} > Source table '`default_catalog`.`default_database`.`source`' of the LIKE > clause can not be a VIEW{code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-32963) Make the test "testKeyedMapStateStateMigration" stable
[ https://issues.apache.org/jira/browse/FLINK-32963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759413#comment-17759413 ] Hangxiang Yu commented on FLINK-32963: -- Thanks for reporting this. You're right. Just assigned to you, please go ahead. > Make the test "testKeyedMapStateStateMigration" stable > -- > > Key: FLINK-32963 > URL: https://issues.apache.org/jira/browse/FLINK-32963 > Project: Flink > Issue Type: Bug > Components: Runtime / State Backends >Affects Versions: 1.17.1 >Reporter: Asha Boyapati >Assignee: Asha Boyapati >Priority: Minor > Labels: pull-request-available > > We are proposing to make the following test stable: > {{org.apache.flink.runtime.state.FileStateBackendMigrationTest.testKeyedMapStateStateMigration}} > The test is currently flaky because the order of elements returned by the > iterator is non-deterministic. > The following PR fixes the flaky test by making it independent of the order > of elements returned by the iterator: > [https://github.com/apache/flink/pull/23298] > We detected this using the NonDex tool using the following command: > {{mvn edu.illinois:nondex-maven-plugin:2.1.1:nondex -pl flink-runtime > -DnondexRuns=10 > -Dtest=org.apache.flink.runtime.state.FileStateBackendMigrationTest#testKeyedMapStateStateMigration}} > Please see the following Continuous Integration log that shows the flakiness: > [https://github.com/asha-boyapati/flink/actions/runs/5909136145/job/16029377793] > Please see the following Continuous Integration log that shows that the > flakiness is fixed by this change: > [https://github.com/asha-boyapati/flink/actions/runs/5909183468/job/16029467973] -- This message was sent by Atlassian Jira (v8.20.10#820010)
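The general fix pattern for this class of flakiness (the exact change in PR #23298 is not shown here) is to assert on the iterator's contents rather than its order. A hedged sketch with a hypothetical helper name:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class OrderIndependentAssert {
    // Multiset comparison: true iff 'actual' yields exactly the expected
    // elements in any order, with duplicates counted.
    static <T> boolean sameElements(Iterable<T> actual, List<T> expected) {
        List<T> remaining = new ArrayList<>(expected);
        for (T t : actual) {
            if (!remaining.remove(t)) {
                return false; // element not expected (or seen too often)
            }
        }
        return remaining.isEmpty(); // every expected element was seen
    }

    public static void main(String[] args) {
        Set<String> state = new HashSet<>(Arrays.asList("a", "b", "c"));
        // Passes no matter how the HashSet happens to iterate:
        System.out.println(sameElements(state, Arrays.asList("c", "a", "b")));
    }
}
```

A test written this way is stable under NonDex, since no assumption about iteration order remains.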
[GitHub] [flink] 1996fanrui commented on pull request #23295: [FLINK-26341][zookeeper] Upgrade all tests related to ZooKeeperTestEnvironment to junit5 and removing the ZooKeeperTestEnvironment
1996fanrui commented on PR #23295: URL: https://github.com/apache/flink/pull/23295#issuecomment-1694902985 Hi @XComp , would you mind helping double check this PR in your free time? thanks~ -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] 1996fanrui commented on pull request #23217: [FLINK-32866][Tests] Enable the TestLoggerExtension for all junit5 tests
1996fanrui commented on PR #23217: URL: https://github.com/apache/flink/pull/23217#issuecomment-1694902697 Hi @XComp @snuyanzin @RocMarshal , would you mind helping review this PR in your free time? thanks~ -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] lindong28 commented on a diff in pull request #23268: [FLINK-32945][runtime] Fix NPE when task reached end-of-data but checkpoint failed
lindong28 commented on code in PR #23268: URL: https://github.com/apache/flink/pull/23268#discussion_r1306838812 ## flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/VertexEndOfDataListener.java: ## @@ -23,17 +23,24 @@ import org.apache.flink.runtime.executiongraph.ExecutionJobVertex; import org.apache.flink.runtime.jobgraph.JobGraph; import org.apache.flink.runtime.jobgraph.JobVertexID; +import org.apache.flink.runtime.scheduler.strategy.ExecutionVertexID; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; import java.util.BitSet; import java.util.HashMap; import java.util.Iterator; import java.util.Map; +import java.util.Set; /** * Records the end of data event of each task, and allows for checking whether all tasks of a {@link * JobGraph} have reached the end of data. */ public class VertexEndOfDataListener { +private static final Logger LOG = LoggerFactory.getLogger(VertexEndOfDataListener.class); Review Comment: Do we still need this variable? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Assigned] (FLINK-32963) Make the test "testKeyedMapStateStateMigration" stable
[ https://issues.apache.org/jira/browse/FLINK-32963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hangxiang Yu reassigned FLINK-32963: Assignee: Asha Boyapati > Make the test "testKeyedMapStateStateMigration" stable > -- > > Key: FLINK-32963 > URL: https://issues.apache.org/jira/browse/FLINK-32963 > Project: Flink > Issue Type: Bug > Components: Runtime / State Backends >Affects Versions: 1.17.1 >Reporter: Asha Boyapati >Assignee: Asha Boyapati >Priority: Minor > Labels: pull-request-available > > We are proposing to make the following test stable: > {{org.apache.flink.runtime.state.FileStateBackendMigrationTest.testKeyedMapStateStateMigration}} > The test is currently flaky because the order of elements returned by the > iterator is non-deterministic. > The following PR fixes the flaky test by making it independent of the order > of elements returned by the iterator: > [https://github.com/apache/flink/pull/23298] > We detected this using the NonDex tool using the following command: > {{mvn edu.illinois:nondex-maven-plugin:2.1.1:nondex -pl flink-runtime > -DnondexRuns=10 > -Dtest=org.apache.flink.runtime.state.FileStateBackendMigrationTest#testKeyedMapStateStateMigration}} > Please see the following Continuous Integration log that shows the flakiness: > [https://github.com/asha-boyapati/flink/actions/runs/5909136145/job/16029377793] > Please see the following Continuous Integration log that shows that the > flakiness is fixed by this change: > [https://github.com/asha-boyapati/flink/actions/runs/5909183468/job/16029467973] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (FLINK-32968) Update doc for customized catalog listener
Fang Yong created FLINK-32968: - Summary: Update doc for customized catalog listener Key: FLINK-32968 URL: https://issues.apache.org/jira/browse/FLINK-32968 Project: Flink Issue Type: Improvement Components: Documentation Affects Versions: 1.18.0, 1.19.0 Reporter: Fang Yong Refer to https://issues.apache.org/jira/browse/FLINK-32798 for more details -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-25242) UDF with primitive int argument does not accept int values even after a not null filter
[ https://issues.apache.org/jira/browse/FLINK-25242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759407#comment-17759407 ] lincoln lee commented on FLINK-25242: - [~337361...@qq.com]could you also verify it both in release-1.18 & release-1.17 branches ? > UDF with primitive int argument does not accept int values even after a not > null filter > --- > > Key: FLINK-25242 > URL: https://issues.apache.org/jira/browse/FLINK-25242 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.15.0 >Reporter: Caizhi Weng >Assignee: Yunhong Zheng >Priority: Major > > Add the following test case to {{TableEnvironmentITCase}} to reproduce this > issue. > {code:scala} > @Test > def myTest(): Unit = { > tEnv.executeSql("CREATE TEMPORARY FUNCTION MyUdf AS > 'org.apache.flink.table.api.MyUdf'") > tEnv.executeSql( > """ > |CREATE TABLE T ( > | a INT > |) WITH ( > | 'connector' = 'values', > | 'bounded' = 'true' > |) > |""".stripMargin) > tEnv.executeSql("SELECT MyUdf(a) FROM T WHERE a IS NOT NULL").print() > } > {code} > UDF code > {code:scala} > class MyUdf extends ScalarFunction { > def eval(a: Int): Int = { > a + 1 > } > } > {code} > Exception stack > {code} > org.apache.flink.table.api.ValidationException: SQL validation failed. 
> Invalid function call: > default_catalog.default_database.MyUdf(INT) > at > org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:168) > at > org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:107) > at > org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:219) > at > org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:101) > at > org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:736) > at > org.apache.flink.table.api.TableEnvironmentITCase.myTest(TableEnvironmentITCase.scala:97) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:258) > at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) > at > org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > at org.junit.runners.ParentRunner.run(ParentRunner.java:413) > at org.junit.runners.Suite.runChild(Suite.java:128) > at org.junit.runners.Suite.runChild(Suite.java:27) > at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) > at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) > at
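The validation failure above stems from the UDF declaring a primitive `int` parameter: a primitive cannot represent SQL NULL, and type derivation happens before the `WHERE a IS NOT NULL` filter narrows the column's nullability. A plain-Java sketch of the distinction (assuming this nullability mismatch is the cause; the actual fix may live in the planner's type inference):

```java
public class BoxedVsPrimitiveParam {
    // Mirrors the UDF signatures: a primitive parameter cannot accept NULL.
    static int evalPrimitive(int a) {
        return a + 1;
    }

    static Integer evalBoxed(Integer a) {
        return a == null ? null : a + 1;
    }

    public static void main(String[] args) {
        Integer nullable = null;
        // evalPrimitive(nullable) would compile via auto-unboxing but throw
        // NullPointerException at runtime, so a planner deriving types from
        // a nullable column must reject the primitive signature.
        System.out.println(evalBoxed(nullable)); // null
        System.out.println(evalBoxed(5));        // 6
        System.out.println(evalPrimitive(5));    // 6
    }
}
```

Declaring the UDF parameter as a boxed type sidesteps the rejection, at the cost of an explicit null check in the body.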
[jira] [Commented] (FLINK-32958) Support VIEW as a source table in CREATE TABLE ... Like statement
[ https://issues.apache.org/jira/browse/FLINK-32958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759406#comment-17759406 ] dalongliu commented on FLINK-32958: --- [~fornaix] Can you explain why we need to support this syntax? > Support VIEW as a source table in CREATE TABLE ... Like statement > - > > Key: FLINK-32958 > URL: https://issues.apache.org/jira/browse/FLINK-32958 > Project: Flink > Issue Type: Improvement > Components: Table SQL / Planner >Affects Versions: 1.17.1 >Reporter: Han >Priority: Major > Labels: pull-request-available > > We can't create a table from a view through CREATE TABLE LIKE statement > > case 1: > {code:sql} > create view source_view as select id,val from source; > create table sink with ('connector' = 'print') like source_view (excluding > all); > insert into sink select * from source_view;{code} > case 2 > {code:java} > DataStreamSource source = ...; > tEnv.createTemporaryView("source", source); > tEnv.executeSql("create table sink with ('connector' = 'print') like source > (excluding all)"); > tEnv.executeSql("insert into sink select * from source");{code} > > The above cases will throw an exception: > {code:java} > Source table '`default_catalog`.`default_database`.`source`' of the LIKE > clause can not be a VIEW{code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Closed] (FLINK-32831) RuntimeFilterProgram should aware join type when looking for the build side
[ https://issues.apache.org/jira/browse/FLINK-32831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lijie Wang closed FLINK-32831. -- Fix Version/s: 1.18.0 1.19.0 Resolution: Fixed > RuntimeFilterProgram should aware join type when looking for the build side > --- > > Key: FLINK-32831 > URL: https://issues.apache.org/jira/browse/FLINK-32831 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.18.0 >Reporter: Lijie Wang >Assignee: Lijie Wang >Priority: Major > Labels: pull-request-available > Fix For: 1.18.0, 1.19.0 > > > Currently, the runtime filter program tries to find an {{Exchange}} as the > build side to avoid affecting {{MultiInput}}, and tries to push down the > runtime filter builder if the original build side is not an {{Exchange}}. > Currently, the builder push-down is not aware of the join type, which may lead > to incorrect results (for example, pushing the builder down to the right input > of a left join). > We should only support the following cases: > 1. Inner join: builder can be pushed to the left or right input > 2. Semi join: builder can be pushed to the left or right input > 3. Left join: builder can only be pushed to the left input > 4. Right join: builder can only be pushed to the right input -- This message was sent by Atlassian Jira (v8.20.10#820010)
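The four push-down rules listed in the issue description can be encoded compactly. A hypothetical sketch (illustrative names, not the planner's actual API):

```java
public class RuntimeFilterPushdownRules {
    enum JoinType { INNER, SEMI, LEFT, RIGHT }

    // Inner and semi joins admit builder push-down on either input;
    // a left join only on its left input, a right join only on its right.
    static boolean canPushToLeftInput(JoinType t) {
        return t != JoinType.RIGHT;
    }

    static boolean canPushToRightInput(JoinType t) {
        return t != JoinType.LEFT;
    }

    public static void main(String[] args) {
        for (JoinType t : JoinType.values()) {
            System.out.println(t + " left=" + canPushToLeftInput(t)
                    + " right=" + canPushToRightInput(t));
        }
    }
}
```

Guarding the push-down with checks like these avoids the correctness hazard, e.g. pushing the builder to the null-producing side of an outer join.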
[jira] [Commented] (FLINK-32831) RuntimeFilterProgram should aware join type when looking for the build side
[ https://issues.apache.org/jira/browse/FLINK-32831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759405#comment-17759405 ] Lijie Wang commented on FLINK-32831: Fixed via master: 43bcf319ff6104e1af829e34ae8a430c6417622d f518f8a5bdfcde4912961f60075e530399160f43 07888af59bbc2d24afa049e8a6aedcd9eb822986 a68dd419718b4304343c2b27dab94394c88c67b5 release-1.18: ac3f3cf4802d0271349636322dbd16772e86453b e48a02628349cdfdecf7a1aedd2a576fa2a7caf3 6938b928e0104f6b222a4281fbdc7bde20260f3b fa2ab5e3bfe401d0c599c8ffd224aafb69d6d503 > RuntimeFilterProgram should aware join type when looking for the build side > --- > > Key: FLINK-32831 > URL: https://issues.apache.org/jira/browse/FLINK-32831 > Project: Flink > Issue Type: Bug > Components: Table SQL / Planner >Affects Versions: 1.18.0 >Reporter: Lijie Wang >Assignee: Lijie Wang >Priority: Major > Labels: pull-request-available > > Currently, runtime filter program will try to look for an {{Exchange}} as > build side to avoid affecting {{MultiInput}}. It will try to push down the > runtime filter builder if the original build side is not {{Exchange}}. > Currenlty, the builder-push-down does not aware the join type, which may lead > to incorrect results(For example, push down the builder to the right input of > left-join). > We should only support following cases: > 1. Inner join: builder can push to left + right input > 2. semi join: builder can push to left + right input > 3. left join: builder can only push to the left input > 4. right join: builder can only push to the right input -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (FLINK-32731) SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException
[ https://issues.apache.org/jira/browse/FLINK-32731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759404#comment-17759404 ] Shengkai Fang commented on FLINK-32731: --- Merged into master: 4b84b6cd5983ae8f058fae731eb0f4af6214b738 Waiting to check whether it works... > SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException > - > > Key: FLINK-32731 > URL: https://issues.apache.org/jira/browse/FLINK-32731 > Project: Flink > Issue Type: Bug > Components: Table SQL / Gateway >Affects Versions: 1.18.0 >Reporter: Matthias Pohl >Assignee: Shengkai Fang >Priority: Major > Labels: pull-request-available, test-stability > > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=51891=logs=fb37c667-81b7-5c22-dd91-846535e99a97=011e961e-597c-5c96-04fe-7941c8b83f23=10987 > {code} > Aug 02 02:14:04 02:14:04.957 [ERROR] Tests run: 3, Failures: 0, Errors: 1, > Skipped: 0, Time elapsed: 198.658 s <<< FAILURE! - in > org.apache.flink.table.gateway.SqlGatewayE2ECase > Aug 02 02:14:04 02:14:04.966 [ERROR] > org.apache.flink.table.gateway.SqlGatewayE2ECase.testHiveServer2ExecuteStatement > Time elapsed: 31.437 s <<< ERROR! > Aug 02 02:14:04 java.util.concurrent.ExecutionException: > Aug 02 02:14:04 java.sql.SQLException: > org.apache.flink.table.gateway.service.utils.SqlExecutionException: Failed to > execute the operation d440e6e7-0fed-49c9-933e-c7be5bbae50d. 
> Aug 02 02:14:04 at > org.apache.flink.table.gateway.service.operation.OperationManager$Operation.processThrowable(OperationManager.java:414) > Aug 02 02:14:04 at > org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:267) > Aug 02 02:14:04 at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > Aug 02 02:14:04 at > java.util.concurrent.FutureTask.run(FutureTask.java:266) > Aug 02 02:14:04 at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > Aug 02 02:14:04 at > java.util.concurrent.FutureTask.run(FutureTask.java:266) > Aug 02 02:14:04 at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > Aug 02 02:14:04 at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > Aug 02 02:14:04 at java.lang.Thread.run(Thread.java:750) > Aug 02 02:14:04 Caused by: org.apache.flink.table.api.TableException: Could > not execute CreateTable in path `hive`.`default`.`CsvTable` > Aug 02 02:14:04 at > org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1289) > Aug 02 02:14:04 at > org.apache.flink.table.catalog.CatalogManager.createTable(CatalogManager.java:939) > Aug 02 02:14:04 at > org.apache.flink.table.operations.ddl.CreateTableOperation.execute(CreateTableOperation.java:84) > Aug 02 02:14:04 at > org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1080) > Aug 02 02:14:04 at > org.apache.flink.table.gateway.service.operation.OperationExecutor.callOperation(OperationExecutor.java:570) > Aug 02 02:14:04 at > org.apache.flink.table.gateway.service.operation.OperationExecutor.executeOperation(OperationExecutor.java:458) > Aug 02 02:14:04 at > org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:210) > Aug 02 02:14:04 at > 
org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:212) > Aug 02 02:14:04 at > org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119) > Aug 02 02:14:04 at > org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258) > Aug 02 02:14:04 ... 7 more > Aug 02 02:14:04 Caused by: > org.apache.flink.table.catalog.exceptions.CatalogException: Failed to create > table default.CsvTable > Aug 02 02:14:04 at > org.apache.flink.table.catalog.hive.HiveCatalog.createTable(HiveCatalog.java:547) > Aug 02 02:14:04 at > org.apache.flink.table.catalog.CatalogManager.lambda$createTable$16(CatalogManager.java:950) > Aug 02 02:14:04 at > org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1283) > Aug 02 02:14:04 ... 16 more > Aug 02 02:14:04 Caused by: MetaException(message:Got exception: > java.net.ConnectException Call From 70d5c7217fe8/172.17.0.2 to > hadoop-master:9000 failed on connection exception: java.net.ConnectException: > Connection refused; For more details see: > http://wiki.apache.org/hadoop/ConnectionRefused) > Aug 02
[GitHub] [flink] wanglijie95 closed pull request #23216: [FLINK-32831][table-planner] RuntimeFilterProgram should aware join type when looking for the build side
wanglijie95 closed pull request #23216: [FLINK-32831][table-planner] RuntimeFilterProgram should aware join type when looking for the build side URL: https://github.com/apache/flink/pull/23216 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (FLINK-32454) deserializeStreamStateHandle of checkpoint read byte
[ https://issues.apache.org/jira/browse/FLINK-32454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-32454: --- Labels: pull-request-available stale-major (was: pull-request-available) I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issue has been marked as Major but is unassigned, and neither it nor its sub-tasks have been updated for 60 days. I have gone ahead and added a "stale-major" label to the issue. If this ticket is a Major, please either assign yourself or give an update. Afterwards, please remove the label, or in 7 days the issue will be deprioritized. > deserializeStreamStateHandle of checkpoint read byte > > > Key: FLINK-32454 > URL: https://issues.apache.org/jira/browse/FLINK-32454 > Project: Flink > Issue Type: Bug >Reporter: Bo Cui >Priority: Major > Labels: pull-request-available, stale-major > > During checkpoint deserialization, deserializeStreamStateHandle should read a > byte instead of an int > https://github.com/apache/flink/blob/c5acd8dd800dfcd2c8873c569d0028fc7d991b1c/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/metadata/MetadataV2V3SerializerBase.java#L712 -- This message was sent by Atlassian Jira (v8.20.10#820010)
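The hazard behind this report is reader/writer asymmetry: if the serializer writes a one-byte type tag but the deserializer reads it back with `readInt()`, four bytes are consumed and the stream desynchronizes. A self-contained sketch of the symmetric pattern, using plain `java.io` streams rather than Flink's actual serializer classes:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class ByteVsIntTag {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeByte(7);       // writer emits a one-byte type tag...
        out.writeUTF("handle"); // ...followed by the handle payload
        out.flush();

        DataInputStream in = new DataInputStream(new ByteArrayInputStream(buf.toByteArray()));
        // Reading the tag back with readInt() here would consume four bytes
        // (the tag plus three bytes of the next field) and corrupt everything
        // after it; readByte() keeps the reader symmetric with the writer.
        int tag = in.readByte();
        String payload = in.readUTF();
        System.out.println(tag + " " + payload); // 7 handle
    }
}
```

The rule of thumb: each `readX()` in the deserializer must mirror the exact `writeX()` the serializer used, byte-for-byte.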
[jira] [Updated] (FLINK-31070) Update jline to 3.22.0
[ https://issues.apache.org/jira/browse/FLINK-31070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-31070: --- Labels: pull-request-available stale-major (was: pull-request-available) I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help the community manage its development. I see this issue has been marked as Major but is unassigned, and neither it nor its Sub-Tasks have been updated for 60 days. I have gone ahead and added a "stale-major" label to the issue. If this ticket is Major, please either assign yourself or give an update. Afterwards, please remove the label, or in 7 days the issue will be deprioritized. > Update jline to 3.22.0 > -- > > Key: FLINK-31070 > URL: https://issues.apache.org/jira/browse/FLINK-31070 > Project: Flink > Issue Type: Technical Debt > Components: Table SQL / Client >Reporter: Sergey Nuyanzin >Priority: Major > Labels: pull-request-available, stale-major > > Among the changes: > https://github.com/jline/jline3/commit/1315fc0bde9325baff8bc4035dbf29184b0b79f7 > which could simplify parsing of comments in the CLI, > and an infinite-loop fix > https://github.com/jline/jline3/commit/4dac9b0ce78a0ac37f580e708267d95553a999eb
[jira] [Updated] (FLINK-32395) EmbeddedFileSourceCsvReaderFormatTests.test_csv_add_columns_from fails on AZP
[ https://issues.apache.org/jira/browse/FLINK-32395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-32395: --- Labels: auto-deprioritized-major test-stability (was: stale-major test-stability) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion. > EmbeddedFileSourceCsvReaderFormatTests.test_csv_add_columns_from fails on AZP > - > > Key: FLINK-32395 > URL: https://issues.apache.org/jira/browse/FLINK-32395 > Project: Flink > Issue Type: Bug > Components: API / Python >Affects Versions: 1.16.3 >Reporter: Sergey Nuyanzin >Priority: Minor > Labels: auto-deprioritized-major, test-stability > > This build fails > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=50218=logs=3e4dd1a2-fe2f-5e5d-a581-48087e718d53=b4612f28-e3b5-5853-8a8b-610ae894217a=39609 > {noformat} > Jun 20 03:09:48 === FAILURES > === > Jun 20 03:09:48 ___ > EmbeddedFileSourceCsvReaderFormatTests.test_csv_add_columns_from ___ > Jun 20 03:09:48 > Jun 20 03:09:48 self = > testMethod=test_csv_add_columns_from> > Jun 20 03:09:48 > Jun 20 03:09:48 def test_csv_add_columns_from(self): > Jun 20 03:09:48 original_schema, lines = > _create_csv_primitive_column_schema_and_lines() > Jun 20 03:09:48 schema = > CsvSchema.builder().add_columns_from(original_schema).build() > Jun 20 03:09:48 self._build_csv_job(schema, lines) > Jun 20 03:09:48 > Jun 20 03:09:48 > self.env.execute('test_csv_schema_copy') > Jun 20 03:09:48 > Jun 20 03:09:48 pyflink/datastream/formats/tests/test_csv.py:56: > ... > {noformat}
[jira] [Updated] (FLINK-28321) HiveDialectQueryITCase fails with error code 137
[ https://issues.apache.org/jira/browse/FLINK-28321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Flink Jira Bot updated FLINK-28321: --- Labels: auto-deprioritized-critical auto-deprioritized-major test-stability (was: auto-deprioritized-critical stale-major test-stability) Priority: Minor (was: Major) This issue was labeled "stale-major" 7 days ago and has not received any updates so it is being deprioritized. If this ticket is actually Major, please raise the priority and ask a committer to assign you the issue or revive the public discussion. > HiveDialectQueryITCase fails with error code 137 > > > Key: FLINK-28321 > URL: https://issues.apache.org/jira/browse/FLINK-28321 > Project: Flink > Issue Type: Bug > Components: Connectors / Hive >Affects Versions: 1.15.0, 1.17.0 >Reporter: Martijn Visser >Priority: Minor > Labels: auto-deprioritized-critical, auto-deprioritized-major, > test-stability > > {code:java} > Moving data to directory > file:/tmp/junit6349996144152770842/warehouse/db1.db/src1/.hive-staging_hive_2022-06-30_03-47-28_878_1781340705558822791-1/-ext-1 > Loading data to table db1.src1 > MapReduce Jobs Launched: > Stage-Stage-1: HDFS Read: 0 HDFS Write: 0 SUCCESS > Total MapReduce CPU Time Spent: 0 msec > OK > OK > OK > OK > OK > OK > OK > OK > OK > OK > OK > OK > ##[error]Exit code 137 returned from process: file name '/bin/docker', > arguments 'exec -i -u 1001 -w /home/agent02_azpcontainer > 8f23cd917ec9d96c13789dabcaafe59398053d00ecf042a5426f9d1588ade349 > /__a/externals/node/bin/node /__w/_temp/containerHandlerInvoker.js'. > {code} > https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=37387=logs=fc5181b0-e452-5c8f-68de-1097947f6483=995c650b-6573-581c-9ce6-7ad4cc038461=24786
[GitHub] [flink] snuyanzin commented on a diff in pull request #23200: [FLINK-32850][flink-runtime][JUnit5 Migration] Module: The io package of flink-runtime
snuyanzin commented on code in PR #23200: URL: https://github.com/apache/flink/pull/23200#discussion_r1306707624

## flink-runtime/src/test/java/org/apache/flink/runtime/io/disk/BatchShuffleReadBufferPoolTest.java:
## @@ -19,115 +19,114 @@
 import org.apache.flink.core.memory.MemorySegment;
-import org.junit.Rule;
-import org.junit.Test;
-import org.junit.rules.Timeout;
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.Timeout;
 import java.util.ArrayList;
 import java.util.List;
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicReference;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertNull;
-import static org.junit.Assert.assertTrue;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThatThrownBy;
 /** Tests for {@link BatchShuffleReadBufferPool}. */
-public class BatchShuffleReadBufferPoolTest {
+@Timeout(60)

Review Comment: it would be better to specify time units
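The reviewer's suggestion maps onto JUnit 5's `@Timeout` attributes like this. A sketch against the test class under review (the test bodies are placeholders): JUnit 5's `@Timeout` defaults to seconds when no unit is given, so spelling the unit out removes the ambiguity the reviewer points at, especially for readers coming from JUnit 4's millisecond-based `Timeout` rule.

```java
import java.util.concurrent.TimeUnit;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.Timeout;

// Class-level timeout applied to every test method; the explicit unit
// avoids relying on JUnit 5's default of TimeUnit.SECONDS.
@Timeout(value = 60, unit = TimeUnit.SECONDS)
class BatchShuffleReadBufferPoolTimeoutSketch {

    @Test
    void requestBuffers() {
        // ... test body ...
    }

    // A method-level @Timeout overrides the class-level one.
    @Test
    @Timeout(value = 500, unit = TimeUnit.MILLISECONDS)
    void fastPath() {
        // ... test body ...
    }
}
```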
[jira] [Comment Edited] (FLINK-32785) Release Testing: Verify FLIP-292: Enhance COMPILED PLAN to support operator-level state TTL configuration
[ https://issues.apache.org/jira/browse/FLINK-32785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759381#comment-17759381 ] Sergey Nuyanzin edited comment on FLINK-32785 at 8/27/23 5:20 PM: -- Checked plan in 1.18: it contains {{TTL}} as mentioned above. Compiled in 1.17 and then executed in 1.18: the job was successfully running in 1.18 However there is an issue that after about 1 min 30 sec the job fails with {noformat} Caused by: java.util.concurrent.TimeoutException: Heartbeat of TaskManager with id localhost:45987-ac10bb timed out. ... 30 more {noformat} I checked it with 1.17.1, 1.16 and it is reproduced there as well, so probably it is not related to current change UPD: probably the reason is not enough resources for tasks in 1.16 it fails as {noformat} Caused by: java.util.concurrent.CompletionException: java.util.concurrent.CompletionException: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Could not acquire the minimum required resources. at org.apache.flink.runtime.scheduler.DefaultExecutionDeployer.lambda$assignResource$4(DefaultExecutionDeployer.java:227) ... 39 more Caused by: java.util.concurrent.CompletionException: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Could not acquire the minimum required resources. at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292) at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308) at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:607) at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591) ... 37 more Caused by: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Could not acquire the minimum required resources. {noformat} was (Author: sergey nuyanzin): Checked plan in 1.18: it contains {{TTL}} as mentioned above. 
Compiled in 1.17 and then executed in 1.18: the job was successfully running in 1.18 However there is an issue that after about 1 min 30 sec the job fails with {noformat} Caused by: java.util.concurrent.TimeoutException: Heartbeat of TaskManager with id localhost:45987-ac10bb timed out. ... 30 more {noformat} I checked it with 1.17.1, 1.16 and it is reproduced there as well, so probably it is not related to current change UPD: probably the reason is not enough resources for tasks > Release Testing: Verify FLIP-292: Enhance COMPILED PLAN to support > operator-level state TTL configuration > - > > Key: FLINK-32785 > URL: https://issues.apache.org/jira/browse/FLINK-32785 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.18.0 >Reporter: Qingsheng Ren >Assignee: Sergey Nuyanzin >Priority: Major > Fix For: 1.18.0 > >
[jira] [Comment Edited] (FLINK-32785) Release Testing: Verify FLIP-292: Enhance COMPILED PLAN to support operator-level state TTL configuration
[ https://issues.apache.org/jira/browse/FLINK-32785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759381#comment-17759381 ] Sergey Nuyanzin edited comment on FLINK-32785 at 8/27/23 5:19 PM: -- Checked plan in 1.18: it contains {{TTL}} as mentioned above. Compiled in 1.17 and then executed in 1.18: the job was successfully running in 1.18 However there is an issue that after about 1 min 30 sec the job fails with {noformat} Caused by: java.util.concurrent.TimeoutException: Heartbeat of TaskManager with id localhost:45987-ac10bb timed out. ... 30 more {noformat} I checked it with 1.17.1, 1.16 and it is reproduced there as well, so probably it is not related to current change UPD: probably the reason is not enough resources for tasks was (Author: sergey nuyanzin): Checked plan in 1.18: it contains {{TTL}} as mentioned above. Compiled in 1.17 and then executed in 1.18: the job was successfully running in 1.18 However there is an issue that after about 1 min 30 sec the job fails with {noformat} Caused by: java.util.concurrent.TimeoutException: Heartbeat of TaskManager with id localhost:45987-ac10bb timed out. ... 30 more {noformat} I checked it with 1.17.1, 1.16 and it is reproduced there as well, so probably it is not related to current change > Release Testing: Verify FLIP-292: Enhance COMPILED PLAN to support > operator-level state TTL configuration > - > > Key: FLINK-32785 > URL: https://issues.apache.org/jira/browse/FLINK-32785 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.18.0 >Reporter: Qingsheng Ren >Assignee: Sergey Nuyanzin >Priority: Major > Fix For: 1.18.0 > >
[jira] [Comment Edited] (FLINK-32785) Release Testing: Verify FLIP-292: Enhance COMPILED PLAN to support operator-level state TTL configuration
[ https://issues.apache.org/jira/browse/FLINK-32785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759381#comment-17759381 ] Sergey Nuyanzin edited comment on FLINK-32785 at 8/27/23 5:18 PM: -- Checked plan in 1.18: it contains {{TTL}} as mentioned above. Compiled in 1.17 and then executed in 1.18: the job was successfully running in 1.18 However there is an issue that after about 1 min 30 sec the job fails with {noformat} Caused by: java.util.concurrent.TimeoutException: Heartbeat of TaskManager with id localhost:45987-ac10bb timed out. ... 30 more {noformat} I checked it with 1.17.1, 1.16 and it is reproduced there as well, so probably it is not related to current change was (Author: sergey nuyanzin): Checked plan in 1.18: it contains {{TTL}} as mentioned above. Compiled in 1.17 and then executed in 1.18: the job was successfully running in 1.18 However there is an issue that after about 1 min 30 sec the job fails with {noformat} Caused by: java.util.concurrent.TimeoutException: Heartbeat of TaskManager with id localhost:45987-ac10bb timed out. ... 30 more {noformat} I checked it with 1.17.1 and it is reproduced there as well, so probably it is not related to current change > Release Testing: Verify FLIP-292: Enhance COMPILED PLAN to support > operator-level state TTL configuration > - > > Key: FLINK-32785 > URL: https://issues.apache.org/jira/browse/FLINK-32785 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.18.0 >Reporter: Qingsheng Ren >Assignee: Sergey Nuyanzin >Priority: Major > Fix For: 1.18.0 > >
[jira] [Resolved] (FLINK-31674) [JUnit5 Migration] Module: flink-table-planner (BatchAbstractTestBase)
[ https://issues.apache.org/jira/browse/FLINK-31674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Nuyanzin resolved FLINK-31674. - Resolution: Duplicate > [JUnit5 Migration] Module: flink-table-planner (BatchAbstractTestBase) > -- > > Key: FLINK-31674 > URL: https://issues.apache.org/jira/browse/FLINK-31674 > Project: Flink > Issue Type: Sub-task >Reporter: Ryan Skraba >Assignee: Ryan Skraba >Priority: Major > > This is one sub-subtask related to the flink-table-planner migration > (FLINK-29541). > While most of the JUnit migrations tasks are done by modules, a number of > abstract test classes in flink-table-planner have large hierarchies that > cross module boundaries. This task is to migrate all of the tests that > depend on {{BatchAbstractTestBase}} to JUnit5.
[jira] [Commented] (FLINK-31674) [JUnit5 Migration] Module: flink-table-planner (BatchAbstractTestBase)
[ https://issues.apache.org/jira/browse/FLINK-31674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759384#comment-17759384 ] Sergey Nuyanzin commented on FLINK-31674: - Yes, probably you are right. Thanks for highlighting it. > [JUnit5 Migration] Module: flink-table-planner (BatchAbstractTestBase) > -- > > Key: FLINK-31674 > URL: https://issues.apache.org/jira/browse/FLINK-31674 > Project: Flink > Issue Type: Sub-task >Reporter: Ryan Skraba >Assignee: Ryan Skraba >Priority: Major > > This is one sub-subtask related to the flink-table-planner migration > (FLINK-29541). > While most of the JUnit migrations tasks are done by modules, a number of > abstract test classes in flink-table-planner have large hierarchies that > cross module boundaries. This task is to migrate all of the tests that > depend on {{BatchAbstractTestBase}} to JUnit5.
[jira] [Commented] (FLINK-32785) Release Testing: Verify FLIP-292: Enhance COMPILED PLAN to support operator-level state TTL configuration
[ https://issues.apache.org/jira/browse/FLINK-32785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759381#comment-17759381 ] Sergey Nuyanzin commented on FLINK-32785: - Checked plan in 1.18: it contains {{TTL}} as mentioned above. Compiled in 1.17 and then executed in 1.18: the job was successfully running in 1.18 However there is an issue that after about 1 min 30 sec the job fails with {noformat} Caused by: java.util.concurrent.TimeoutException: Heartbeat of TaskManager with id localhost:45987-ac10bb timed out. ... 30 more {noformat} I checked it with 1.17.1 and it is reproduced there as well, so probably it is not related to current change > Release Testing: Verify FLIP-292: Enhance COMPILED PLAN to support > operator-level state TTL configuration > - > > Key: FLINK-32785 > URL: https://issues.apache.org/jira/browse/FLINK-32785 > Project: Flink > Issue Type: Sub-task > Components: Tests >Affects Versions: 1.18.0 >Reporter: Qingsheng Ren >Assignee: Sergey Nuyanzin >Priority: Major > Fix For: 1.18.0 > >
[jira] [Commented] (FLINK-29541) [JUnit5 Migration] Module: flink-table-planner
[ https://issues.apache.org/jira/browse/FLINK-29541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759379#comment-17759379 ] Jiabao Sun commented on FLINK-29541: Hey [~jark], I found that this issue has not been updated for a long time. Could you help assign this ticket to me? > [JUnit5 Migration] Module: flink-table-planner > -- > > Key: FLINK-29541 > URL: https://issues.apache.org/jira/browse/FLINK-29541 > Project: Flink > Issue Type: Technical Debt > Components: Table SQL / Planner, Tests >Reporter: Lijie Wang >Assignee: Ryan Skraba >Priority: Major >
[jira] [Commented] (FLINK-31674) [JUnit5 Migration] Module: flink-table-planner (BatchAbstractTestBase)
[ https://issues.apache.org/jira/browse/FLINK-31674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759377#comment-17759377 ] Jiabao Sun commented on FLINK-31674: Hi [~Sergey Nuyanzin], this issue is a duplicate of FLINK-30815. I think we can close this issue now. > [JUnit5 Migration] Module: flink-table-planner (BatchAbstractTestBase) > -- > > Key: FLINK-31674 > URL: https://issues.apache.org/jira/browse/FLINK-31674 > Project: Flink > Issue Type: Sub-task >Reporter: Ryan Skraba >Assignee: Ryan Skraba >Priority: Major > > This is one sub-subtask related to the flink-table-planner migration > (FLINK-29541). > While most of the JUnit migrations tasks are done by modules, a number of > abstract test classes in flink-table-planner have large hierarchies that > cross module boundaries. This task is to migrate all of the tests that > depend on {{BatchAbstractTestBase}} to JUnit5.
[GitHub] [flink-kubernetes-operator] Ocean22 commented on pull request #478: [FLINK-30329] Mount flink-operator-config-volume at /opt/flink/conf without subPath
Ocean22 commented on PR #478: URL: https://github.com/apache/flink-kubernetes-operator/pull/478#issuecomment-1694699075 > Hi everyone, could I ask a question? If I want to add some log files into the FlinkDeployment's /opt/flink/conf, how should I do it? I found that even though I change the Helm config, it is only added to the operator's /opt/flink/conf, not the FlinkDeployment's /opt/flink/conf, and my flink-kubernetes-operator version is 1.5. Here are some pictures. ![企业微信截图_16931497895309](https://user-images.githubusercontent.com/45907917/263538988-507c2189-b924-4e06-8fb9-b1a7c0e9b7b4.png) ![企业微信截图_16931498872716](https://user-images.githubusercontent.com/45907917/263539020-2ba0ca96-3d32-47a9-a762-afd71b1a0076.png) ![企业微信截图_16931499803969](https://user-images.githubusercontent.com/45907917/263539022-e17456ef-147c-4247-9e64-dbe075d6c9dc.png) ![企业微信截图_16931500338352](https://user-images.githubusercontent.com/45907917/263539029-165cb28c-db4c-4d87-bb65-f7de9dc7cadd.png) ![企业微信截图_1693150207756](https://user-images.githubusercontent.com/45907917/263539037-b6d933f3-a917-4d44-ba2d-2648eaed7779.png) ![企业微信截图_16931504206024](https://user-images.githubusercontent.com/45907917/263539038-9f4e03be-c8d5-4d61-96f6-f2d275692895.png)
[GitHub] [flink-kubernetes-operator] Ocean22 commented on pull request #478: [FLINK-30329] Mount flink-operator-config-volume at /opt/flink/conf without subPath
Ocean22 commented on PR #478: URL: https://github.com/apache/flink-kubernetes-operator/pull/478#issuecomment-1694697907 Hi everyone, could I ask a question? If I want to add some log files into the FlinkDeployment's /opt/flink/conf, how should I do it? I found that even though I change the Helm config, it is only added to the operator's /opt/flink/conf, not the FlinkDeployment's /opt/flink/conf. Here are some pictures. ![企业微信截图_16931497895309](https://github.com/apache/flink-kubernetes-operator/assets/45907917/507c2189-b924-4e06-8fb9-b1a7c0e9b7b4) ![企业微信截图_16931498872716](https://github.com/apache/flink-kubernetes-operator/assets/45907917/2ba0ca96-3d32-47a9-a762-afd71b1a0076) ![企业微信截图_16931499803969](https://github.com/apache/flink-kubernetes-operator/assets/45907917/e17456ef-147c-4247-9e64-dbe075d6c9dc) ![企业微信截图_16931500338352](https://github.com/apache/flink-kubernetes-operator/assets/45907917/165cb28c-db4c-4d87-bb65-f7de9dc7cadd) ![企业微信截图_1693150207756](https://github.com/apache/flink-kubernetes-operator/assets/45907917/b6d933f3-a917-4d44-ba2d-2648eaed7779) ![企业微信截图_16931504206024](https://github.com/apache/flink-kubernetes-operator/assets/45907917/9f4e03be-c8d5-4d61-96f6-f2d275692895)
[jira] [Resolved] (FLINK-32967) [JUnit5 Migration] Module: flink-table-planner
[ https://issues.apache.org/jira/browse/FLINK-32967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Nuyanzin resolved FLINK-32967. - Resolution: Duplicate > [JUnit5 Migration] Module: flink-table-planner > -- > > Key: FLINK-32967 > URL: https://issues.apache.org/jira/browse/FLINK-32967 > Project: Flink > Issue Type: Sub-task > Components: Tests >Reporter: Jiabao Sun >Priority: Major >
[jira] [Commented] (FLINK-32967) [JUnit5 Migration] Module: flink-table-planner
[ https://issues.apache.org/jira/browse/FLINK-32967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759374#comment-17759374 ] Sergey Nuyanzin commented on FLINK-32967: - closing since it is a duplicate of https://issues.apache.org/jira/browse/FLINK-29541 > [JUnit5 Migration] Module: flink-table-planner > -- > > Key: FLINK-32967 > URL: https://issues.apache.org/jira/browse/FLINK-32967 > Project: Flink > Issue Type: Sub-task > Components: Tests >Reporter: Jiabao Sun >Priority: Major >
[jira] [Commented] (FLINK-32967) [JUnit5 Migration] Module: flink-table-planner
[ https://issues.apache.org/jira/browse/FLINK-32967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759373#comment-17759373 ] Jiabao Sun commented on FLINK-32967: Hi [~fanrui], could you help assign this ticket to me? > [JUnit5 Migration] Module: flink-table-planner > -- > > Key: FLINK-32967 > URL: https://issues.apache.org/jira/browse/FLINK-32967 > Project: Flink > Issue Type: Sub-task > Components: Tests >Reporter: Jiabao Sun >Priority: Major >
[jira] [Comment Edited] (FLINK-32966) [JUnit5 Migration] Module: flink-table-planner
[ https://issues.apache.org/jira/browse/FLINK-32966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759372#comment-17759372 ] Jiabao Sun edited comment on FLINK-32966 at 8/27/23 2:45 PM: - Duplicate of https://issues.apache.org/jira/browse/FLINK-32967 was (Author: jiabao.sun): Duplicate of https://issues.apache.org/jira/browse/FLINK-32966 > [JUnit5 Migration] Module: flink-table-planner > -- > > Key: FLINK-32966 > URL: https://issues.apache.org/jira/browse/FLINK-32966 > Project: Flink > Issue Type: Improvement > Components: Tests >Affects Versions: 1.17.1 >Reporter: Jiabao Sun >Priority: Major >
[jira] [Closed] (FLINK-32966) [JUnit5 Migration] Module: flink-table-planner
[ https://issues.apache.org/jira/browse/FLINK-32966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jiabao Sun closed FLINK-32966. -- Resolution: Duplicate Duplicate of https://issues.apache.org/jira/browse/FLINK-32966 > [JUnit5 Migration] Module: flink-table-planner > -- > > Key: FLINK-32966 > URL: https://issues.apache.org/jira/browse/FLINK-32966 > Project: Flink > Issue Type: Improvement > Components: Tests >Affects Versions: 1.17.1 >Reporter: Jiabao Sun >Priority: Major >
[jira] [Created] (FLINK-32967) [JUnit5 Migration] Module: flink-table-planner
Jiabao Sun created FLINK-32967: -- Summary: [JUnit5 Migration] Module: flink-table-planner Key: FLINK-32967 URL: https://issues.apache.org/jira/browse/FLINK-32967 Project: Flink Issue Type: Sub-task Components: Tests Reporter: Jiabao Sun
[jira] [Commented] (FLINK-32966) [JUnit5 Migration] Module: flink-table-planner
[ https://issues.apache.org/jira/browse/FLINK-32966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17759369#comment-17759369 ] Jiabao Sun commented on FLINK-32966: Hi [~fanrui], could you help assign this ticket to me? Thanks a lot. > [JUnit5 Migration] Module: flink-table-planner > -- > > Key: FLINK-32966 > URL: https://issues.apache.org/jira/browse/FLINK-32966 > Project: Flink > Issue Type: Improvement > Components: Tests >Affects Versions: 1.17.1 >Reporter: Jiabao Sun >Priority: Major >
[jira] [Created] (FLINK-32966) [JUnit5 Migration] Module: flink-table-planner
Jiabao Sun created FLINK-32966: -- Summary: [JUnit5 Migration] Module: flink-table-planner Key: FLINK-32966 URL: https://issues.apache.org/jira/browse/FLINK-32966 Project: Flink Issue Type: Improvement Components: Tests Affects Versions: 1.17.1 Reporter: Jiabao Sun
[jira] [Closed] (FLINK-32683) Update Pekko from 1.0.0 to 1.0.1
[ https://issues.apache.org/jira/browse/FLINK-32683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matthew de Detrich closed FLINK-32683. -- Resolution: Fixed > Update Pekko from 1.0.0 to 1.0.1 > > > Key: FLINK-32683 > URL: https://issues.apache.org/jira/browse/FLINK-32683 > Project: Flink > Issue Type: Improvement > Components: Runtime / Coordination >Affects Versions: 1.18.0 >Reporter: Matthew de Detrich >Assignee: Matthew de Detrich >Priority: Major > Labels: pull-request-available, stale-assigned > > Updates Pekko dependency to 1.0.1 which contains the following bugfix > [https://github.com/apache/incubator-pekko/pull/492] . See > [https://github.com/apache/incubator-pekko/issues/491] for more info
[GitHub] [flink] ferenc-csaky commented on pull request #23183: [FLINK-32811][runtime] Add port range support for "taskmanager.data.bind-port"
ferenc-csaky commented on PR #23183: URL: https://github.com/apache/flink/pull/23183#issuecomment-1694598625

> I run this locally, it makes sense to add this improvement. It would be great if we could make it explicit in the logs (maybe adding an extra log line) that the following is for the data bind port:
>
> ```
> 2023-08-26 21:30:45,211 INFO org.apache.flink.runtime.io.network.netty.NettyConfig [] - NettyConfig [server address: localhost/127.0.0.1, server port range: 55000-55100, ssl enabled: false, memory segment size (bytes): 32768, transport type: AUTO, number of server threads: 1 (manual), number of client threads: 1 (manual), server connect backlog: 0 (use Netty's default), client connect timeout (sec): 120, send/receive buffer size (bytes): 0 (use Netty's default)]
> 2023-08-26 21:30:45,246 INFO org.apache.flink.runtime.io.network.NettyShuffleServiceFactory [] - Created a new FileChannelManager for storing result partitions of BLOCKING shuffles. Used directories: /var/folders/yh/t9bt8gwj4zsd949jnr55vx98gn/T/flink-netty-shuffle-9c342614-dc5f-467f-9a9d-1736246bdc62
> 2023-08-26 21:30:45,280 INFO org.apache.flink.runtime.io.network.buffer.NetworkBufferPool [] - Allocated 128 MB for network buffer pool (number of memory segments: 4096, bytes per segment: 32768).
> 2023-08-26 21:30:45,289 INFO org.apache.flink.runtime.io.network.NettyShuffleEnvironment [] - Starting the network environment and its components.
> 2023-08-26 21:30:45,309 INFO org.apache.flink.runtime.io.network.netty.NettyClient [] - Transport type 'auto': using NIO.
> 2023-08-26 21:30:45,310 INFO org.apache.flink.runtime.io.network.netty.NettyClient [] - Successful initialization (took 20 ms).
> 2023-08-26 21:30:45,312 INFO org.apache.flink.runtime.io.network.netty.NettyServer [] - Transport type 'auto': using NIO.
> 2023-08-26 21:30:45,341 INFO org.apache.flink.runtime.io.network.netty.NettyServer [] - Successful initialization (took 30 ms). Listening on SocketAddress /127.0.0.1:55000.
> ```

I'm not sure where you think it would be appropriate. I think Netty-specific classes are too general to add that info explicitly; what can be done easily is to highlight the listening port explicitly when the connection is successfully established (the last line is the newly added log):

```
2023-08-27 09:42:16,255 INFO org.apache.flink.runtime.taskexecutor.TaskManagerRunner [] - Starting TaskManager with ResourceID: localhost:59695-945f3f
2023-08-27 09:42:16,286 INFO org.apache.flink.runtime.taskexecutor.TaskManagerServices [] - Temporary file directory '/var/folders/wp/ccy48gw1255bswh9bx9svxjcgn/T': total 460 GB, usable 125 GB (27.17% usable)
2023-08-27 09:42:16,287 INFO org.apache.flink.runtime.io.disk.iomanager.IOManager [] - Created a new FileChannelManager for spilling of task related data to disk (joins, sorting, ...). Used directories: /var/folders/wp/ccy48gw1255bswh9bx9svxjcgn/T/flink-io-6c07c533-5a9f-42e2-abeb-a3b35042bd57
2023-08-27 09:42:16,291 INFO org.apache.flink.runtime.io.network.netty.NettyConfig [] - NettyConfig [server address: localhost/127.0.0.1, server port range: 55000-55100, ssl enabled: false, memory segment size (bytes): 32768, transport type: AUTO, number of server threads: 1 (manual), number of client threads: 1 (manual), server connect backlog: 0 (use Netty's default), client connect timeout (sec): 120, send/receive buffer size (bytes): 0 (use Netty's default)]
2023-08-27 09:42:16,324 INFO org.apache.flink.runtime.io.network.NettyShuffleServiceFactory [] - Created a new FileChannelManager for storing result partitions of BLOCKING shuffles. Used directories: /var/folders/wp/ccy48gw1255bswh9bx9svxjcgn/T/flink-netty-shuffle-2e98bd72-3306-4ddc-b551-3d3c77d95cfa
2023-08-27 09:42:16,345 INFO org.apache.flink.runtime.io.network.buffer.NetworkBufferPool [] - Allocated 128 MB for network buffer pool (number of memory segments: 4096, bytes per segment: 32768).
2023-08-27 09:42:16,352 INFO org.apache.flink.runtime.io.network.NettyShuffleEnvironment [] - Starting the network environment and its components.
2023-08-27 09:42:16,374 INFO org.apache.flink.runtime.io.network.netty.NettyClient [] - Transport type 'auto': using NIO.
2023-08-27 09:42:16,374 INFO org.apache.flink.runtime.io.network.netty.NettyClient [] - Successful initialization (took 21 ms).
2023-08-27 09:42:16,376 INFO org.apache.flink.runtime.io.network.netty.NettyServer [] - Transport type 'auto': using NIO.
2023-08-27 09:42:16,395 INFO org.apache.flink.runtime.io.network.netty.NettyServer [] - Successful initialization (took 19 ms). Listening on SocketAddress /127.0.0.1:55000.
2023-08-27 09:42:16,395 INFO
```
[GitHub] [flink] xuzifu666 commented on pull request #23220: [FLINK-32871][SQL] Support BuiltInMethod TO_TIMESTAMP with timezone options
xuzifu666 commented on PR #23220: URL: https://github.com/apache/flink/pull/23220#issuecomment-1694596699 @JingsongLi could you review this, please, or close the PR? Thanks~ -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [flink] ferenc-csaky commented on a diff in pull request #23183: [FLINK-32811][runtime] Add port range support for "taskmanager.data.bind-port"
ferenc-csaky commented on code in PR #23183: URL: https://github.com/apache/flink/pull/23183#discussion_r1306611189

## flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/NettyServer.java:

```
@@ -145,7 +144,35 @@ int init(
         // Start Server
         // -------------------------------------------------------------------

-        bindFuture = bootstrap.bind().syncUninterruptibly();
+        LOG.debug(
+                "Trying to initialize Netty server on address: {} and port range {}",
+                config.getServerAddress(),
+                config.getServerPortRange());
+
+        Iterator<Integer> portsIterator = config.getServerPortRange().getPortsIterator();
+        while (portsIterator.hasNext() && bindFuture == null) {
+            Integer port = portsIterator.next();
+            LOG.debug("Trying to bind Netty server to port: {}", port);
+
+            bootstrap.localAddress(config.getServerAddress(), port);
+            try {
+                bindFuture = bootstrap.bind().syncUninterruptibly();
+            } catch (Exception e) {
```

Review Comment: Probably yes, but Netty wraps and throws any kind of exception via `FailedChannelFuture`, which is not specified really well, so not without a lot more digging. Furthermore, I copied this "pattern" from the codebase, so I considered it good enough: https://github.com/apache/flink/blob/42f170b192049bc573efddd3f82169bb327b3fe4/flink-runtime/src/main/java/org/apache/flink/runtime/rest/RestServerEndpoint.java#L275
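The "try each port in the range until one binds" loop from the diff above can be sketched outside Netty with plain `java.net.ServerSocket`. This is a minimal illustration of the pattern only — the class and method names here are hypothetical, not the actual Flink code, and the port range is an arbitrary example:

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class PortRangeBind {

    /**
     * Tries each port in [startPort, endPort] in order and returns the first
     * socket that binds successfully, mirroring the retry loop in the PR:
     * a failed bind is swallowed and the next candidate port is attempted.
     */
    static ServerSocket bindFirstFree(InetAddress address, int startPort, int endPort)
            throws IOException {
        for (int port = startPort; port <= endPort; port++) {
            try {
                return new ServerSocket(port, 0, address);
            } catch (IOException e) {
                // Port occupied (or otherwise unbindable): try the next one.
            }
        }
        throw new IOException(
                "Could not bind to any port in range " + startPort + "-" + endPort);
    }

    public static void main(String[] args) throws IOException {
        InetAddress loopback = InetAddress.getLoopbackAddress();
        // Occupy the first free port in the range, then ask the helper again:
        // the second call must skip past the occupied port to a later one.
        try (ServerSocket occupied = bindFirstFree(loopback, 56000, 56010);
                ServerSocket next = bindFirstFree(loopback, 56000, 56010)) {
            System.out.println(next.getLocalPort() > occupied.getLocalPort());
        }
    }
}
```

The actual PR does the same thing with Netty's `ServerBootstrap`, where the failure surfaces from `bind().syncUninterruptibly()` instead of the `ServerSocket` constructor.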
[GitHub] [flink] ferenc-csaky commented on a diff in pull request #23211: [FLINK-32835][runtime] Migrate unit tests in "accumulators" and "blob" packages to JUnit5
ferenc-csaky commented on code in PR #23211: URL: https://github.com/apache/flink/pull/23211#discussion_r1306606274

## flink-runtime/src/test/java/org/apache/flink/runtime/blob/BlobCachePutTest.java:

```
@@ -582,57 +545,47 @@ public void testPutBufferFailsForJobHa() throws IOException {
      */
     private void testPutBufferFails(@Nullable final JobID jobId, BlobKey.BlobType blobType)
             throws IOException {
-        assumeTrue(!OperatingSystem.isWindows()); // setWritable doesn't work on Windows.
+        // setWritable doesn't work on Windows.
+        assumeThat(OperatingSystem.isWindows()).as("setWritable doesn't work on Windows").isFalse();

-        final Configuration config = new Configuration();
-        File tempFileDir = null;
-        try (BlobServer server =
-                        new BlobServer(config, temporaryFolder.newFolder(), new VoidBlobStore());
-                BlobCacheService cache =
-                        new BlobCacheService(
-                                config,
-                                temporaryFolder.newFolder(),
-                                new VoidBlobStore(),
-                                new InetSocketAddress("localhost", server.getPort()))) {
+        Tuple2<BlobServer, BlobCacheService> serverAndCache =
+                TestingBlobUtils.createServerAndCache(tempDir);
+        try (BlobServer server = serverAndCache.f0;
+                BlobCacheService cache = serverAndCache.f1) {
             server.start();

             // make sure the blob server cannot create any files in its storage dir
-            tempFileDir = server.createTemporaryFilename().getParentFile().getParentFile();
-            assertTrue(tempFileDir.setExecutable(true, false));
-            assertTrue(tempFileDir.setReadable(true, false));
-            assertTrue(tempFileDir.setWritable(false, false));
+            File tempFileDir = server.createTemporaryFilename().getParentFile().getParentFile();
+            assertThat(tempFileDir.setExecutable(true, false)).isTrue();
+            assertThat(tempFileDir.setReadable(true, false)).isTrue();
+            assertThat(tempFileDir.setWritable(false, false)).isTrue();

             byte[] data = new byte[200];
             rnd.nextBytes(data);

-            // upload the file to the server via the cache
-            exception.expect(IOException.class);
-            exception.expectMessage("PUT operation failed: ");
-
-            put(cache, jobId, data, blobType);
-
-        } finally {
-            // set writable again to make sure we can remove the directory
-            if (tempFileDir != null) {
-                //noinspection ResultOfMethodCallIgnored
-                tempFileDir.setWritable(true, false);
+            try {
+                assertThatThrownBy(() -> put(cache, jobId, data, blobType))
+                        .isInstanceOf(IOException.class)
+                        .hasMessageStartingWith("PUT operation failed: ");
+            } finally {
+                assertThat(tempFileDir.setWritable(true, false)).isTrue();
             }
         }
```

Review Comment: No, you are correct, thanks for the suggestion, fixed!
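The core of the JUnit 4 → JUnit 5 migration above is replacing the `ExpectedException` rule with a capture-then-assert style (`assertThatThrownBy` in AssertJ). The idea can be approximated without AssertJ in a few lines; this is a minimal sketch of the same pattern with a hypothetical `catchThrowable` helper, not the Flink test code:

```java
public class ThrownByDemo {

    /** Runs the action, returns the exception it throws, or fails if none is thrown. */
    static Throwable catchThrowable(Runnable action) {
        try {
            action.run();
        } catch (Throwable t) {
            return t;
        }
        throw new AssertionError("Expected an exception, but none was thrown");
    }

    public static void main(String[] args) {
        Throwable t = catchThrowable(() -> {
            throw new IllegalStateException("PUT operation failed: disk not writable");
        });
        // Equivalent of .isInstanceOf(...) and .hasMessageStartingWith(...)
        System.out.println(t instanceof IllegalStateException);
        System.out.println(t.getMessage().startsWith("PUT operation failed: "));
    }
}
```

Unlike `ExpectedException.expect(...)`, which silently passes over any code after the throwing call, this style pins the assertion to one specific statement — which is also why the diff can move the `setWritable(true, ...)` cleanup into a plain `finally` block.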