[jira] [Created] (FLINK-21450) Add local recovery support to adaptive scheduler
Robert Metzger created FLINK-21450:
--

         Summary: Add local recovery support to adaptive scheduler
             Key: FLINK-21450
             URL: https://issues.apache.org/jira/browse/FLINK-21450
         Project: Flink
      Issue Type: New Feature
      Components: Runtime / Coordination
        Reporter: Robert Metzger

Local recovery means that, on a failure, we are able to re-use the state already present on a TaskManager instead of loading it again from distributed storage (which means the scheduler needs to know which state is located where, and schedule tasks accordingly).

The adaptive scheduler currently does not respect the location of state, so failures require reloading state from distributed storage. Adding this feature will allow us to re-enable the {{Local recovery and sticky scheduling end-to-end test}} for the adaptive scheduler.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
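For context, local recovery already exists for the non-adaptive schedulers and is switched on via configuration; a minimal flink-conf.yaml sketch (option names as documented for Flink; the directory path is purely illustrative):

```
# Keep a secondary copy of task state on the TaskManager's local disk, so that
# after a failover, tasks rescheduled to the same TaskManager can restore from
# local files instead of re-downloading state from distributed storage.
state.backend.local-recovery: true

# Illustrative local directory for the state copies (defaults to the
# TaskManager's temp dirs if unset).
taskmanager.state.local.root-dirs: /tmp/flink-local-recovery
```

This ticket is about making the adaptive scheduler state-location-aware so that this setting actually pays off there.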
[jira] [Created] (FLINK-21449) Kafka011ProducerExactlyOnceITCase>KafkaProducerTestBase.testExactlyOnceRegularSink fails on azure
Dawid Wysakowicz created FLINK-21449:
--

         Summary: Kafka011ProducerExactlyOnceITCase>KafkaProducerTestBase.testExactlyOnceRegularSink fails on azure
             Key: FLINK-21449
             URL: https://issues.apache.org/jira/browse/FLINK-21449
         Project: Flink
      Issue Type: Bug
      Components: Connectors / Kafka
Affects Versions: 1.11.3
        Reporter: Dawid Wysakowicz

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13612=logs=ae4f8708-9994-57d3-c2d7-b892156e7812=d44f43ce-542c-597d-bf94-b0718c71e5e8

{code}
2021-02-22T22:53:30.4664077Z 22:53:30,464 [Kafka 0.10 Fetcher for Source: Custom Source -> Sink: Unnamed (3/3)] DEBUG org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher [] - Sending async offset commit request to Kafka broker
2021-02-22T22:53:30.5659222Z 22:53:30,565 [Kafka 0.10 Fetcher for Source: Custom Source -> Sink: Unnamed (1/3)] DEBUG org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher [] - Sending async offset commit request to Kafka broker
2021-02-22T22:53:30.5660797Z 22:53:30,565 [Kafka 0.10 Fetcher for Source: Custom Source -> Sink: Unnamed (1/3)] DEBUG org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher [] - Sending async offset commit request to Kafka broker
2021-02-22T22:53:30.5662001Z 22:53:30,565 [Kafka 0.10 Fetcher for Source: Custom Source -> Sink: Unnamed (2/3)] DEBUG org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher [] - Sending async offset commit request to Kafka broker
2021-02-22T22:53:30.5663175Z 22:53:30,565 [Kafka 0.10 Fetcher for Source: Custom Source -> Sink: Unnamed (2/3)] DEBUG org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher [] - Sending async offset commit request to Kafka broker
2021-02-22T22:53:30.5673806Z 22:53:30,566 [Kafka 0.10 Fetcher for Source: Custom Source -> Sink: Unnamed (3/3)] DEBUG org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher [] - Sending async offset commit request to Kafka broker
2021-02-22T22:53:30.5675137Z 22:53:30,566 [Kafka 0.10 Fetcher for Source: Custom Source -> Sink: Unnamed (3/3)] DEBUG org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher [] - Sending async offset commit request to Kafka broker
2021-02-22T22:53:30.7785928Z 22:53:30,767 [program runner thread] ERROR org.apache.flink.streaming.connectors.kafka.KafkaTestBase [] - Job Runner failed with exception
2021-02-22T22:53:30.7787195Z org.apache.flink.client.program.ProgramInvocationException: Job failed (JobID: 5488f1249d451a6f8e5d479342959c8b)
2021-02-22T22:53:30.7789060Z 	at org.apache.flink.client.ClientUtils.submitJobAndWaitForResult(ClientUtils.java:111) ~[flink-clients_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
2021-02-22T22:53:30.7791173Z 	at org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.lambda$runCancelingOnEmptyInputTest$2(KafkaConsumerTestBase.java:1205) ~[flink-connector-kafka-base_2.11-1.11-SNAPSHOT-tests.jar:?]
2021-02-22T22:53:30.7792255Z 	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_275]
2021-02-22T22:53:30.7792963Z Caused by: org.apache.flink.runtime.client.JobCancellationException: Job was cancelled.
2021-02-22T22:53:30.7794413Z 	at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146) ~[flink-runtime_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
2021-02-22T22:53:30.7796019Z 	at org.apache.flink.client.ClientUtils.submitJobAndWaitForResult(ClientUtils.java:109) ~[flink-clients_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
2021-02-22T22:53:30.7796808Z 	... 2 more
{code}
[jira] [Created] (FLINK-21448) Test Changelog State backend
Yuan Mei created FLINK-21448:
--

         Summary: Test Changelog State backend
             Key: FLINK-21448
             URL: https://issues.apache.org/jira/browse/FLINK-21448
         Project: Flink
      Issue Type: Sub-task
        Reporter: Yuan Mei
[jira] [Created] (FLINK-21447) Resuming Savepoint (file, async, scale up) end-to-end test Fail
Guowei Ma created FLINK-21447:
--

         Summary: Resuming Savepoint (file, async, scale up) end-to-end test Fail
             Key: FLINK-21447
             URL: https://issues.apache.org/jira/browse/FLINK-21447
         Project: Flink
      Issue Type: Bug
      Components: Runtime / Checkpointing
Affects Versions: 1.11.3
        Reporter: Guowei Ma

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13612=logs=08866332-78f7-59e4-4f7e-49a56faa3179=3e8647c1-5a28-5917-dd93-bf78594ea994]

{code:java}
The program finished with the following exception:

org.apache.flink.util.FlinkException: Could not stop with a savepoint job "4b80056de9c6ffc5a204a1e972fa62f1".
	at org.apache.flink.client.cli.CliFrontend.lambda$stop$5(CliFrontend.java:534)
	at org.apache.flink.client.cli.CliFrontend.runClusterAction(CliFrontend.java:940)
	at org.apache.flink.client.cli.CliFrontend.stop(CliFrontend.java:522)
	at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1007)
	at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1070)
	at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
	at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1070)
Caused by: java.util.concurrent.TimeoutException
	at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784)
	at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
	at org.apache.flink.client.cli.CliFrontend.lambda$stop$5(CliFrontend.java:532)
	... 6 more
Waiting for job (4b80056de9c6ffc5a204a1e972fa62f1) to reach terminal state FINISHED ...
##[error]The operation was canceled.
{code}
[jira] [Created] (FLINK-21446) KafkaProducerTestBase.testExactlyOnce Fail
Guowei Ma created FLINK-21446:
--

         Summary: KafkaProducerTestBase.testExactlyOnce Fail
             Key: FLINK-21446
             URL: https://issues.apache.org/jira/browse/FLINK-21446
         Project: Flink
      Issue Type: Bug
      Components: Connectors / Kafka
Affects Versions: 1.11.3
        Reporter: Guowei Ma

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13612=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8]

{code:java}
22:52:12,032 [ Checkpoint Timer] INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator [] - Checkpoint 18 of job 84164b7efcb41ca524cc7076030d1f91 expired before completing.
22:52:12,033 [ Checkpoint Timer] INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator [] - Triggering checkpoint 19 (type=CHECKPOINT) @ 1614034332033 for job 84164b7efcb41ca524cc7076030d1f91.
22:52:12,036 [Source: Custom Source -> Map -> Sink: Unnamed (1/1)] INFO  org.apache.flink.streaming.connectors.kafka.internal.FlinkKafkaProducer [] - Flushing new partitions
22:52:12,036 [flink-akka.actor.default-dispatcher-172] INFO  org.apache.flink.runtime.jobmaster.JobMaster [] - Trying to recover from a global failure.
org.apache.flink.util.FlinkRuntimeException: Exceeded checkpoint tolerable failure threshold.
	at org.apache.flink.runtime.checkpoint.CheckpointFailureManager.handleJobLevelCheckpointException(CheckpointFailureManager.java:68) ~[flink-runtime_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
	at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.abortPendingCheckpoint(CheckpointCoordinator.java:1892) ~[flink-runtime_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
	at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.abortPendingCheckpoint(CheckpointCoordinator.java:1869) ~[flink-runtime_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
	at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.access$600(CheckpointCoordinator.java:93) ~[flink-runtime_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
	at org.apache.flink.runtime.checkpoint.CheckpointCoordinator$CheckpointCanceller.run(CheckpointCoordinator.java:2006) ~[flink-runtime_2.11-1.11-SNAPSHOT.jar:1.11-SNAPSHOT]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_275]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_275]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_275]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[?:1.8.0_275]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_275]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_275]
	at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_275]
{code}
[jira] [Created] (FLINK-21445) Application mode does not set the configuration when building PackagedProgram
Yang Wang created FLINK-21445:
--

         Summary: Application mode does not set the configuration when building PackagedProgram
             Key: FLINK-21445
             URL: https://issues.apache.org/jira/browse/FLINK-21445
         Project: Flink
      Issue Type: Bug
      Components: Deployment / Kubernetes, Deployment / Scripts, Deployment / YARN
Affects Versions: 1.12.1, 1.11.3, 1.13.0
        Reporter: Yang Wang

Application mode uses {{ClassPathPackagedProgramRetriever}} to create the {{PackagedProgram}}. However, it does not set the configuration. This causes some client configurations, for example {{classloader.resolve-order}}, to not take effect. I think we simply forgot to do this, since we already do the same thing in {{CliFrontend}}.
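To illustrate the gap: the fix essentially amounts to passing the effective client configuration into the program builder the way {{CliFrontend}} does. A hedged sketch (not the actual patch; variable names and the surrounding wiring are illustrative):

```
// Sketch only: when application mode builds the PackagedProgram, the effective
// client Configuration should be handed to the builder, as CliFrontend already
// does, so options such as classloader.resolve-order are honored.
PackagedProgram program =
        PackagedProgram.newBuilder()
                .setJarFile(userJar)                  // as before
                .setEntryPointClassName(entryClass)   // as before
                .setConfiguration(flinkConfiguration) // the missing call
                .build();
```

Without that call, the program's user-code class loader is built from default settings rather than from the deployment's configuration.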
rocksdb version inquiring
Hi all,

I'd like to ask two questions about the RocksDB state backend:

1. I need to use FIFO compaction, and there is a configuration, compactionOptionsFIFO.setAllowCompaction, which is only available in https://mvnrepository.com/artifact/io.github.myasuka/frocksdbjni version 6.10+. I noticed there is a performance issue: https://github.com/facebook/rocksdb/issues/5774 . If I am not that concerned about performance, can I use it in my own job (Flink 1.11)?

2. I want to understand the logic of allocateRocksDBSharedResources. On a task manager with 14G of managed memory and 10 slots, the RocksDB log shows that the writeBufferManager only gets 400M+, which causes the SST files to be really small. Could someone point me to a document I can read to understand this?

regards
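Regarding question 1, wiring FIFO compaction into the RocksDB state backend in Flink 1.11 is typically done through a custom options factory. A hedged sketch, assuming a frocksdbjni build that actually exposes CompactionOptionsFIFO#setAllowCompaction (6.10+, per the links above):

```
import java.util.Collection;

import org.apache.flink.contrib.streaming.state.RocksDBOptionsFactory;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.CompactionOptionsFIFO;
import org.rocksdb.CompactionStyle;
import org.rocksdb.DBOptions;

// Sketch only: enables FIFO compaction with allow_compaction=true for all
// column families of the RocksDB state backend. Requires a RocksDB/frocksdb
// version that has CompactionOptionsFIFO#setAllowCompaction.
public class FifoCompactionOptionsFactory implements RocksDBOptionsFactory {

    @Override
    public DBOptions createDBOptions(
            DBOptions currentOptions, Collection<AutoCloseable> handlesToClose) {
        return currentOptions; // no DB-level changes in this sketch
    }

    @Override
    public ColumnFamilyOptions createColumnOptions(
            ColumnFamilyOptions currentOptions, Collection<AutoCloseable> handlesToClose) {
        CompactionOptionsFIFO fifo = new CompactionOptionsFIFO().setAllowCompaction(true);
        handlesToClose.add(fifo); // let Flink dispose the native handle
        return currentOptions
                .setCompactionStyle(CompactionStyle.FIFO)
                .setCompactionOptionsFIFO(fifo);
    }
}
```

The factory would then be registered on the backend, e.g. via RocksDBStateBackend#setRocksDBOptions(new FifoCompactionOptionsFactory()). Whether the replacement frocksdbjni artifact is safe to swap in under Flink 1.11 is exactly the open question above.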
Re: Re: [ANNOUNCE] New Apache Flink Committers - Wei Zhong and Xingbo Huang
Congratulations Wei and Xingbo! Best, Zhijiang -- From:Yun Tang Send Time:2021年2月23日(星期二) 10:58 To:Roman Khachatryan ; dev Subject:Re: Re: [ANNOUNCE] New Apache Flink Committers - Wei Zhong and Xingbo Huang Congratulation! Best Yun Tang From: Yun Gao Sent: Tuesday, February 23, 2021 10:56 To: Roman Khachatryan ; dev Subject: Re: Re: [ANNOUNCE] New Apache Flink Committers - Wei Zhong and Xingbo Huang Congratulations Wei and Xingbo! Best, Yun --Original Mail -- Sender:Roman Khachatryan Send Date:Tue Feb 23 00:59:22 2021 Recipients:dev Subject:Re: [ANNOUNCE] New Apache Flink Committers - Wei Zhong and Xingbo Huang Congratulations! Regards, Roman On Mon, Feb 22, 2021 at 12:22 PM Yangze Guo wrote: > Congrats, Well deserved! > > Best, > Yangze Guo > > On Mon, Feb 22, 2021 at 6:47 PM Yang Wang wrote: > > > > Congratulations Wei & Xingbo! > > > > Best, > > Yang > > > > Rui Li 于2021年2月22日周一 下午6:23写道: > > > > > Congrats Wei & Xingbo! > > > > > > On Mon, Feb 22, 2021 at 4:24 PM Yuan Mei > wrote: > > > > > > > Congratulations Wei & Xingbo! > > > > > > > > Best, > > > > Yuan > > > > > > > > On Mon, Feb 22, 2021 at 4:04 PM Yu Li wrote: > > > > > > > > > Congratulations Wei and Xingbo! > > > > > > > > > > Best Regards, > > > > > Yu > > > > > > > > > > > > > > > On Mon, 22 Feb 2021 at 15:56, Till Rohrmann > > > > wrote: > > > > > > > > > > > Congratulations Wei & Xingbo. Great to have you as committers in > the > > > > > > community now. > > > > > > > > > > > > Cheers, > > > > > > Till > > > > > > > > > > > > On Mon, Feb 22, 2021 at 5:08 AM Xintong Song < > tonysong...@gmail.com> > > > > > > wrote: > > > > > > > > > > > > > Congratulations, Wei & Xingbo~! Welcome aboard. 
> > > > > > > > > > > > > > Thank you~ > > > > > > > > > > > > > > Xintong Song > > > > > > > > > > > > > > > > > > > > > > > > > > > > On Mon, Feb 22, 2021 at 11:48 AM Dian Fu > > > wrote: > > > > > > > > > > > > > > > Hi all, > > > > > > > > > > > > > > > > On behalf of the PMC, I’m very happy to announce that Wei > Zhong > > > and > > > > > > > Xingbo > > > > > > > > Huang have accepted the invitation to become Flink > committers. > > > > > > > > > > > > > > > > - Wei Zhong mainly works on PyFlink and has driven several > > > > important > > > > > > > > features in PyFlink, e.g. Python UDF dependency management > > > > (FLIP-78), > > > > > > > > Python UDF support in SQL (FLIP-106, FLIP-114), Python UDAF > > > support > > > > > > > > (FLIP-139), etc. He has contributed the first PR of PyFlink > and > > > > have > > > > > > > > contributed 100+ commits since then. > > > > > > > > > > > > > > > > - Xingbo Huang's contribution is also mainly in PyFlink and > has > > > > > driven > > > > > > > > several important features in PyFlink, e.g. performance > > > optimizing > > > > > for > > > > > > > > Python UDF and Python UDAF (FLIP-121, FLINK-16747, > FLINK-19236), > > > > > Pandas > > > > > > > > UDAF support (FLIP-137), Python UDTF support (FLINK-14500), > > > > row-based > > > > > > > > Operations support in Python Table API (FLINK-20479), etc. > He is > > > > also > > > > > > > > actively helping on answering questions in the user mailing > list, > > > > > > helping > > > > > > > > on the release check, monitoring the status of the azure > > > pipeline, > > > > > etc. > > > > > > > > > > > > > > > > Please join me in congratulating Wei Zhong and Xingbo Huang > for > > > > > > becoming > > > > > > > > Flink committers! > > > > > > > > > > > > > > > > Regards, > > > > > > > > Dian > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > Best regards! > > > Rui Li > > > >
Re: Re: [ANNOUNCE] New Apache Flink Committers - Wei Zhong and Xingbo Huang
Congratulation!

Best
Yun Tang
Re: Re: [ANNOUNCE] New Apache Flink Committers - Wei Zhong and Xingbo Huang
Congratulations Wei and Xingbo!

Best,
Yun
[jira] [Created] (FLINK-21444) RelNodeBlock#isValidBreakPoint fails to deal with lookup tables of new source (FLIP27 source)
Caizhi Weng created FLINK-21444:
--

         Summary: RelNodeBlock#isValidBreakPoint fails to deal with lookup tables of new source (FLIP27 source)
             Key: FLINK-21444
             URL: https://issues.apache.org/jira/browse/FLINK-21444
         Project: Flink
      Issue Type: Bug
      Components: Table SQL / Planner
Affects Versions: 1.12.0, 1.13.0
        Reporter: Caizhi Weng
         Fix For: 1.13.0, 1.12.3

Add the following test case to {{org.apache.flink.table.planner.runtime.stream.sql.LookupJoinITCase}}

{code:scala}
@Test
def myTest(): Unit = {
  tEnv.getConfig.getConfiguration.setBoolean(
    OptimizerConfigOptions.TABLE_OPTIMIZER_REUSE_SOURCE_ENABLED, true)
  tEnv.getConfig.getConfiguration.setBoolean(
    RelNodeBlockPlanBuilder.TABLE_OPTIMIZER_REUSE_OPTIMIZE_BLOCK_WITH_DIGEST_ENABLED, true)

  val ddl1 =
    """
      |CREATE TABLE sink1 (
      |  `id` BIGINT
      |) WITH (
      |  'connector' = 'blackhole'
      |)
      |""".stripMargin
  tEnv.executeSql(ddl1)

  val ddl2 =
    """
      |CREATE TABLE sink2 (
      |  `id` BIGINT
      |) WITH (
      |  'connector' = 'blackhole'
      |)
      |""".stripMargin
  tEnv.executeSql(ddl2)

  val sql1 = "INSERT INTO sink1 SELECT T.id FROM src AS T JOIN user_table for system_time as of T.proctime AS D ON T.id = D.id"
  val sql2 = "INSERT INTO sink2 SELECT T.id FROM src AS T JOIN user_table for system_time as of T.proctime AS D ON T.id + 1 = D.id"

  val stmtSet = tEnv.createStatementSet()
  stmtSet.addInsertSql(sql1)
  stmtSet.addInsertSql(sql2)
  stmtSet.execute().await()
}
{code}

The following exception will occur

{code}
org.apache.flink.table.api.ValidationException: Temporal Table Join requires primary key in versioned table, but no primary key can be found. The physical plan is:
FlinkLogicalJoin(condition=[AND(=($0, $2), __INITIAL_TEMPORAL_JOIN_CONDITION($1, __TEMPORAL_JOIN_LEFT_KEY($0), __TEMPORAL_JOIN_RIGHT_KEY($2)))], joinType=[inner])
  FlinkLogicalCalc(select=[id, proctime])
    FlinkLogicalIntermediateTableScan(table=[[IntermediateRelTable_0]], fields=[id, len, content, proctime])
  FlinkLogicalSnapshot(period=[$cor0.proctime])
    FlinkLogicalCalc(select=[id])
      FlinkLogicalIntermediateTableScan(table=[[IntermediateRelTable_1]], fields=[age, id, name])

	at org.apache.flink.table.planner.plan.rules.logical.TemporalJoinRewriteWithUniqueKeyRule.org$apache$flink$table$planner$plan$rules$logical$TemporalJoinRewriteWithUniqueKeyRule$$validateRightPrimaryKey(TemporalJoinRewriteWithUniqueKeyRule.scala:124)
	at org.apache.flink.table.planner.plan.rules.logical.TemporalJoinRewriteWithUniqueKeyRule$$anon$1.visitCall(TemporalJoinRewriteWithUniqueKeyRule.scala:88)
	at org.apache.flink.table.planner.plan.rules.logical.TemporalJoinRewriteWithUniqueKeyRule$$anon$1.visitCall(TemporalJoinRewriteWithUniqueKeyRule.scala:70)
	at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
	at org.apache.calcite.rex.RexShuttle.visitList(RexShuttle.java:158)
	at org.apache.calcite.rex.RexShuttle.visitCall(RexShuttle.java:110)
	at org.apache.flink.table.planner.plan.rules.logical.TemporalJoinRewriteWithUniqueKeyRule$$anon$1.visitCall(TemporalJoinRewriteWithUniqueKeyRule.scala:109)
	at org.apache.flink.table.planner.plan.rules.logical.TemporalJoinRewriteWithUniqueKeyRule$$anon$1.visitCall(TemporalJoinRewriteWithUniqueKeyRule.scala:70)
	at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
	at org.apache.flink.table.planner.plan.rules.logical.TemporalJoinRewriteWithUniqueKeyRule.onMatch(TemporalJoinRewriteWithUniqueKeyRule.scala:70)
	at org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
	at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
	at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
	at org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
	at org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
	at org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
	at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
	at org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
	at org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
	at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
	at org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
	at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
	at
Re: [ANNOUNCE] New Apache Flink Committers - Wei Zhong and Xingbo Huang
Congratulations!

Regards,
Roman
[jira] [Created] (FLINK-21443) SQLClientHBaseITCase fails on azure
Dawid Wysakowicz created FLINK-21443:
--

         Summary: SQLClientHBaseITCase fails on azure
             Key: FLINK-21443
             URL: https://issues.apache.org/jira/browse/FLINK-21443
         Project: Flink
      Issue Type: Bug
      Components: Connectors / HBase, Table SQL / Client
Affects Versions: 1.13.0
        Reporter: Dawid Wysakowicz

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13578=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529

{code}
Feb 22 15:24:07 [INFO] Running org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
==
=== WARNING: This E2E Run will time out in the next few minutes. Starting to upload the log output ===
==
Feb 22 15:36:21 	at org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:133)
Feb 22 15:36:21 	at org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:108)
Feb 22 15:36:21 	at org.apache.flink.tests.util.AutoClosableProcess.runBlocking(AutoClosableProcess.java:70)
Feb 22 15:36:21 	at org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.setupHBaseDist(LocalStandaloneHBaseResource.java:86)
Feb 22 15:36:21 	at org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.before(LocalStandaloneHBaseResource.java:76)
Feb 22 15:36:21 	at org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:46)
Feb 22 15:36:21 	at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
Feb 22 15:36:21 	at org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
Feb 22 15:36:21 	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
Feb 22 15:36:21 	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
Feb 22 15:36:21 	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
Feb 22 15:36:21 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
Feb 22 15:36:21 	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
Feb 22 15:36:21 	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
Feb 22 15:36:21 	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
Feb 22 15:36:21 	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
Feb 22 15:36:21 	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
Feb 22 15:36:21 	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
Feb 22 15:36:21 	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
Feb 22 15:36:21 	at org.junit.runners.Suite.runChild(Suite.java:128)
Feb 22 15:36:21 	at org.junit.runners.Suite.runChild(Suite.java:27)
Feb 22 15:36:21 	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
Feb 22 15:36:21 	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
Feb 22 15:36:21 	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
Feb 22 15:36:21 	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
Feb 22 15:36:21 	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
Feb 22 15:36:21 	at org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:48)
Feb 22 15:36:21 	at org.junit.rules.RunRules.evaluate(RunRules.java:20)
Feb 22 15:36:21 	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
Feb 22 15:36:21 	at org.junit.runners.Suite.runChild(Suite.java:128)
Feb 22 15:36:21 	at org.junit.runners.Suite.runChild(Suite.java:27)
Feb 22 15:36:21 	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
Feb 22 15:36:21 	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
Feb 22 15:36:21 	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
Feb 22 15:36:21 	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
Feb 22 15:36:21 	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
Feb 22 15:36:21 	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
Feb 22 15:36:21 	at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
Feb 22 15:36:21 	at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
Feb 22 15:36:21 	at org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
Feb 22 15:36:21 	at
[jira] [Created] (FLINK-21442) Jdbc XA sink - restore state serialization error
Maciej Obuchowski created FLINK-21442: - Summary: Jdbc XA sink - restore state serialization error Key: FLINK-21442 URL: https://issues.apache.org/jira/browse/FLINK-21442 Project: Flink Issue Type: Bug Components: Connectors / JDBC Affects Versions: 1.13.0 Reporter: Maciej Obuchowski Fix For: 1.13.0 There are state restoration errors in XaSinkStateSerializer: its SNAPSHOT implementation is an anonymous inner class, which is not restorable because it is not public. -- This message was sent by Atlassian Jira (v8.3.4#803005)
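The failure mode can be illustrated with a minimal, self-contained Java sketch. The `Snapshot` interface and class names below are hypothetical stand-ins, not Flink's actual serializer-snapshot API; the point is that a restore path which re-instantiates the snapshot class reflectively needs a public, named class with a no-arg constructor:

```java
import java.lang.reflect.Modifier;

public class SnapshotRestoreDemo {

    // Hypothetical stand-in for a serializer-snapshot type.
    interface Snapshot {}

    // Anti-pattern: an anonymous inner class. Its generated class is not
    // public, so a restore path requiring a public, reflectively
    // instantiable class cannot re-create it.
    public static final Snapshot ANONYMOUS = new Snapshot() {};

    // Fix: a public named class with a public no-arg constructor,
    // analogous to CheckpointAndXidSimpleTypeSerializerSnapshot.
    public static final class NamedSnapshot implements Snapshot {
        public NamedSnapshot() {}
    }

    // Roughly what a reflective restore requires: the class must be
    // public and instantiable via its no-arg constructor.
    public static boolean restorable(Class<?> c) {
        try {
            c.getDeclaredConstructor().newInstance();
            return Modifier.isPublic(c.getModifiers());
        } catch (ReflectiveOperationException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(restorable(ANONYMOUS.getClass())); // false
        System.out.println(restorable(NamedSnapshot.class));  // true
    }
}
```

Running it prints `false` for the anonymous class and `true` for the public named one, mirroring why switching to a public named snapshot class fixes the restore.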
[jira] [Created] (FLINK-21441) FlinkKinesisProducer does not forward custom endpoint
Ingo Bürk created FLINK-21441: - Summary: FlinkKinesisProducer does not forward custom endpoint Key: FLINK-21441 URL: https://issues.apache.org/jira/browse/FLINK-21441 Project: Flink Issue Type: Bug Components: Connectors / Kinesis Affects Versions: 1.12.1, 1.13.0 Reporter: Ingo Bürk For the Kinesis connector, one of aws.region and aws.endpoint is required. For FlinkKinesisProducer, however, aws.region is always required, and in that case aws.endpoint never ends up being used, since the code only calls KinesisProducerConfiguration#fromProperties. One would have to set a "KinesisEndpoint" property instead, but this is not a valid property. This should be fixed so that a custom endpoint can be used; the same probably goes for the port. Also, this is currently being worked around here: https://github.com/apache/flink/blob/master/flink-end-to-end-tests/flink-streaming-kinesis-test/src/main/java/org/apache/flink/streaming/kinesis/test/KinesisExample.java#L70 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-21440) Translate Real Time Reporting with the Table API doc and correct a spelling mistake
GuotaoLi created FLINK-21440: Summary: Translate Real Time Reporting with the Table API doc and correct a spelling mistake Key: FLINK-21440 URL: https://issues.apache.org/jira/browse/FLINK-21440 Project: Flink Issue Type: Improvement Affects Versions: shaded-13.0 Reporter: GuotaoLi Fix For: shaded-13.0 * Translate the Real Time Reporting with the Table API doc to Chinese * Correct a spelling mistake in the Real Time Reporting with the Table API doc: "allong with" should be "along with" -- This message was sent by Atlassian Jira (v8.3.4#803005)
Re: New Jdbc XA sink - state serialization error.
Hi, Yes, please go ahead. Thanks! Regards, Roman On Mon, Feb 22, 2021 at 12:18 PM Maciej Obuchowski < obuchowski.mac...@gmail.com> wrote: > Hey, while working with the new 1.13 JDBC XA sink I had state restoration > errors connected to XaSinkStateSerializer and its implementation of > SNAPSHOT as an anonymous inner class, which is not restorable because it > is not public. > When changed to an implementation similar to > CheckpointAndXidSimpleTypeSerializerSnapshot, the state restores without > problems. > > Should I open a Jira ticket and provide a patch for this error? Sorry if > that's obvious. > > Thanks, > Maciej >
[jira] [Created] (FLINK-21439) Add support for exception history
Matthias created FLINK-21439: Summary: Add support for exception history Key: FLINK-21439 URL: https://issues.apache.org/jira/browse/FLINK-21439 Project: Flink Issue Type: Sub-task Components: Runtime / Coordination Affects Versions: 1.13.0 Reporter: Matthias Fix For: 1.13.0 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-21438) Broken links in content.zh/docs/concepts/flink-architecture.md
Ting Sun created FLINK-21438: Summary: Broken links in content.zh/docs/concepts/flink-architecture.md Key: FLINK-21438 URL: https://issues.apache.org/jira/browse/FLINK-21438 Project: Flink Issue Type: Bug Components: Documentation Affects Versions: shaded-13.0 Reporter: Ting Sun Fix For: shaded-13.0 When reading the Chinese doc I found that some links are broken and return 404. This is caused by a formatting error: the broken links differ from their corresponding links in the English doc, while the links that are identical to the English doc's work fine. -- This message was sent by Atlassian Jira (v8.3.4#803005)
Re: [ANNOUNCE] New Apache Flink Committers - Wei Zhong and Xingbo Huang
Congrats, Wei and Xingbo! Well deserved! Best, Yangze Guo On Mon, Feb 22, 2021 at 6:47 PM Yang Wang wrote: > > Congratulations Wei & Xingbo! > > Best, > Yang > > Rui Li 于2021年2月22日周一 下午6:23写道: > > > Congrats Wei & Xingbo! > > > > On Mon, Feb 22, 2021 at 4:24 PM Yuan Mei wrote: > > > > > Congratulations Wei & Xingbo! > > > > > > Best, > > > Yuan > > > > > > On Mon, Feb 22, 2021 at 4:04 PM Yu Li wrote: > > > > > > > Congratulations Wei and Xingbo! > > > > > > > > Best Regards, > > > > Yu > > > > > > > > > > > > On Mon, 22 Feb 2021 at 15:56, Till Rohrmann > > > wrote: > > > > > > > > > Congratulations Wei & Xingbo. Great to have you as committers in the > > > > > community now. > > > > > > > > > > Cheers, > > > > > Till > > > > > > > > > > On Mon, Feb 22, 2021 at 5:08 AM Xintong Song > > > > > wrote: > > > > > > > > > > > Congratulations, Wei & Xingbo~! Welcome aboard. > > > > > > > > > > > > Thank you~ > > > > > > > > > > > > Xintong Song > > > > > > > > > > > > > > > > > > > > > > > > On Mon, Feb 22, 2021 at 11:48 AM Dian Fu > > wrote: > > > > > > > > > > > > > Hi all, > > > > > > > > > > > > > > On behalf of the PMC, I’m very happy to announce that Wei Zhong > > and > > > > > > Xingbo > > > > > > > Huang have accepted the invitation to become Flink committers. > > > > > > > > > > > > > > - Wei Zhong mainly works on PyFlink and has driven several > > > important > > > > > > > features in PyFlink, e.g. Python UDF dependency management > > > (FLIP-78), > > > > > > > Python UDF support in SQL (FLIP-106, FLIP-114), Python UDAF > > support > > > > > > > (FLIP-139), etc. He has contributed the first PR of PyFlink and > > > have > > > > > > > contributed 100+ commits since then. > > > > > > > > > > > > > > - Xingbo Huang's contribution is also mainly in PyFlink and has > > > > driven > > > > > > > several important features in PyFlink, e.g. 
performance > > optimizing > > > > for > > > > > > > Python UDF and Python UDAF (FLIP-121, FLINK-16747, FLINK-19236), > > > > Pandas > > > > > > > UDAF support (FLIP-137), Python UDTF support (FLINK-14500), > > > row-based > > > > > > > Operations support in Python Table API (FLINK-20479), etc. He is > > > also > > > > > > > actively helping on answering questions in the user mailing list, > > > > > helping > > > > > > > on the release check, monitoring the status of the azure > > pipeline, > > > > etc. > > > > > > > > > > > > > > Please join me in congratulating Wei Zhong and Xingbo Huang for > > > > > becoming > > > > > > > Flink committers! > > > > > > > > > > > > > > Regards, > > > > > > > Dian > > > > > > > > > > > > > > > > > > > > > > > > -- > > Best regards! > > Rui Li > >
[jira] [Created] (FLINK-21437) Memory leak when using filesystem state backend on Alibaba Cloud OSS
Qian Chao created FLINK-21437: - Summary: Memory leak when using filesystem state backend on Alibaba Cloud OSS Key: FLINK-21437 URL: https://issues.apache.org/jira/browse/FLINK-21437 Project: Flink Issue Type: Bug Components: Runtime / State Backends Reporter: Qian Chao When using the filesystem state backend and storing checkpoints on Alibaba Cloud OSS (flink-conf.yaml):
{code:java}
state.backend: filesystem
state.checkpoints.dir: oss://yourBucket/checkpoints
fs.oss.endpoint: x
fs.oss.accessKeyId: x
fs.oss.accessKeySecret: x{code}
A memory leak (in both the jobmanager and the taskmanager) occurs after a period of time, with objects retained in the JVM heap like:
{code:java}
The class "java.io.DeleteOnExitHook", loaded by "", occupies 1,018,323,960 (96.47%) bytes.
The memory is accumulated in one instance of "java.util.LinkedHashMap", loaded by "", which occupies 1,018,323,832 (96.47%) bytes.
{code}
The root cause appears to be that when flink-oss-fs-hadoop uploads a file to OSS, OSSFileSystem creates a temporary file and registers it with deleteOnExit, so the LinkedHashSet of files in DeleteOnExitHook grows without bound.
{code:java}
org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem::create
  -> org.apache.hadoop.fs.aliyun.oss.AliyunOSSOutputStream::new
  -> dirAlloc.createTmpFileForWrite("output-", -1L, conf)
  -> org.apache.hadoop.fs.LocalDirAllocator::createTmpFileForWrite
  -> result.deleteOnExit()
{code}
-- This message was sent by Atlassian Jira (v8.3.4#803005)
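The leak mechanism can be reproduced in isolation. Below is a minimal sketch (illustrative method names only, not the actual Hadoop/OSS code): deleteOnExit() adds an entry to the JVM-global DeleteOnExitHook set that is only drained at JVM shutdown, so in a long-running JM/TM those entries accumulate forever, whereas eager deletion leaves nothing behind:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class TempFileCleanupDemo {

    // Leaky pattern (what the OSS upload path effectively does via
    // LocalDirAllocator#createTmpFileForWrite): each temp file is
    // registered with deleteOnExit(), adding an entry to the JVM-global
    // DeleteOnExitHook set that is only processed at JVM shutdown. In a
    // long-running process these entries accumulate without bound.
    public static File leakyTempFile() throws IOException {
        File f = File.createTempFile("output-", ".tmp");
        f.deleteOnExit(); // entry retained until the JVM exits
        return f;
    }

    // Safer pattern for long-running processes: delete eagerly as soon as
    // the upload is done, so no per-file shutdown-hook entry is kept.
    public static File uploadAndCleanUp() throws IOException {
        File f = File.createTempFile("output-", ".tmp");
        try {
            // ... buffer data locally and upload it to OSS here ...
        } finally {
            Files.deleteIfExists(f.toPath());
        }
        return f;
    }

    public static void main(String[] args) throws IOException {
        File f = uploadAndCleanUp();
        System.out.println(f.exists()); // false: cleaned up immediately
    }
}
```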
New Jdbc XA sink - state serialization error.
Hey, while working with the new 1.13 JDBC XA sink I had state restoration errors connected to XaSinkStateSerializer and its implementation of SNAPSHOT as an anonymous inner class, which is not restorable because it is not public. When changed to an implementation similar to CheckpointAndXidSimpleTypeSerializerSnapshot, the state restores without problems. Should I open a Jira ticket and provide a patch for this error? Sorry if that's obvious. Thanks, Maciej
[jira] [Created] (FLINK-21436) Speed up the restore of UnionListState
fanrui created FLINK-21436: -- Summary: Speed up the restore of UnionListState Key: FLINK-21436 URL: https://issues.apache.org/jira/browse/FLINK-21436 Project: Flink Issue Type: Improvement Components: Runtime / Checkpointing, Runtime / State Backends Affects Versions: 1.13.0 Reporter: fanrui
h1. 1. Problem introduction and cause analysis
Problem description: Restoring UnionListState under high parallelism takes more than 2 minutes.
h2. The reason
2000 subtasks write 2000 files during a checkpoint, and each subtask needs to read all 2000 files during restore. 2000 * 2000 = 4 million, so 4 million small-file reads hit HDFS during restore. HDFS becomes the bottleneck, making restore particularly time-consuming.
h1. 2. Optimization idea
Under normal circumstances, the UnionListState state is relatively small; a typical usage scenario is Kafka offset information. When restoring, the JM can directly read all 2000 small files, merge the UnionListState into a byte array, and send it to all TMs, avoiding frequent HDFS access by the TMs.
h1. 3. Benefits after optimization
Before optimization: with 2000 parallel subtasks, Kafka offset restore takes 90~130 s. After optimization: with 2000 parallel subtasks, Kafka offset restore takes less than 1 s.
h1. 4. Risk points
A very large UnionListState puts too much pressure on the JM. Solution 1: Add a configuration option that decides whether to enable this feature. The default is false, meaning the old behavior is used; when set to true, the JM performs the merge. Solution 2: No configuration is required, which is equivalent to enabling the feature by default; however, the JM checks the size of the state before merging and skips the merge if it exceeds a threshold. The user can control the threshold size. Note: Most of the scenarios where Flink uses UnionListState are Kafka offsets (small state), so in theory most jobs are risk-free. -- This message was sent by Atlassian Jira (v8.3.4#803005)
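The read amplification behind the slow restore, and the proposal's effect on it, can be stated as simple arithmetic (illustrative sketch only, not scheduler code):

```java
public class UnionListStateRestoreCost {

    // Without the proposed merge: each of p subtasks reads all p
    // checkpoint files, so HDFS serves p * p small-file reads.
    public static long readsWithoutMerge(long parallelism) {
        return parallelism * parallelism;
    }

    // With the proposal: the JM reads each of the p files once, merges
    // the UnionListState into a byte array, and broadcasts it to all TMs.
    public static long readsWithMerge(long parallelism) {
        return parallelism;
    }

    public static void main(String[] args) {
        System.out.println(readsWithoutMerge(2000)); // 4000000, as in the ticket
        System.out.println(readsWithMerge(2000));    // 2000
    }
}
```

This is why the improvement is so dramatic: HDFS load drops from quadratic to linear in the parallelism.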
Re: [ANNOUNCE] New Apache Flink Committers - Wei Zhong and Xingbo Huang
Congratulations Wei & Xingbo! Best, Yang Rui Li 于2021年2月22日周一 下午6:23写道: > Congrats Wei & Xingbo! > > On Mon, Feb 22, 2021 at 4:24 PM Yuan Mei wrote: > > > Congratulations Wei & Xingbo! > > > > Best, > > Yuan > > > > On Mon, Feb 22, 2021 at 4:04 PM Yu Li wrote: > > > > > Congratulations Wei and Xingbo! > > > > > > Best Regards, > > > Yu > > > > > > > > > On Mon, 22 Feb 2021 at 15:56, Till Rohrmann > > wrote: > > > > > > > Congratulations Wei & Xingbo. Great to have you as committers in the > > > > community now. > > > > > > > > Cheers, > > > > Till > > > > > > > > On Mon, Feb 22, 2021 at 5:08 AM Xintong Song > > > > wrote: > > > > > > > > > Congratulations, Wei & Xingbo~! Welcome aboard. > > > > > > > > > > Thank you~ > > > > > > > > > > Xintong Song > > > > > > > > > > > > > > > > > > > > On Mon, Feb 22, 2021 at 11:48 AM Dian Fu > wrote: > > > > > > > > > > > Hi all, > > > > > > > > > > > > On behalf of the PMC, I’m very happy to announce that Wei Zhong > and > > > > > Xingbo > > > > > > Huang have accepted the invitation to become Flink committers. > > > > > > > > > > > > - Wei Zhong mainly works on PyFlink and has driven several > > important > > > > > > features in PyFlink, e.g. Python UDF dependency management > > (FLIP-78), > > > > > > Python UDF support in SQL (FLIP-106, FLIP-114), Python UDAF > support > > > > > > (FLIP-139), etc. He has contributed the first PR of PyFlink and > > have > > > > > > contributed 100+ commits since then. > > > > > > > > > > > > - Xingbo Huang's contribution is also mainly in PyFlink and has > > > driven > > > > > > several important features in PyFlink, e.g. performance > optimizing > > > for > > > > > > Python UDF and Python UDAF (FLIP-121, FLINK-16747, FLINK-19236), > > > Pandas > > > > > > UDAF support (FLIP-137), Python UDTF support (FLINK-14500), > > row-based > > > > > > Operations support in Python Table API (FLINK-20479), etc. 
He is > > also > > > > > > actively helping on answering questions in the user mailing list, > > > > helping > > > > > > on the release check, monitoring the status of the azure > pipeline, > > > etc. > > > > > > > > > > > > Please join me in congratulating Wei Zhong and Xingbo Huang for > > > > becoming > > > > > > Flink committers! > > > > > > > > > > > > Regards, > > > > > > Dian > > > > > > > > > > > > > > > > > -- > Best regards! > Rui Li >
Re: [ANNOUNCE] New Apache Flink Committers - Wei Zhong and Xingbo Huang
Congrats Wei & Xingbo! On Mon, Feb 22, 2021 at 4:24 PM Yuan Mei wrote: > Congratulations Wei & Xingbo! > > Best, > Yuan > > On Mon, Feb 22, 2021 at 4:04 PM Yu Li wrote: > > > Congratulations Wei and Xingbo! > > > > Best Regards, > > Yu > > > > > > On Mon, 22 Feb 2021 at 15:56, Till Rohrmann > wrote: > > > > > Congratulations Wei & Xingbo. Great to have you as committers in the > > > community now. > > > > > > Cheers, > > > Till > > > > > > On Mon, Feb 22, 2021 at 5:08 AM Xintong Song > > > wrote: > > > > > > > Congratulations, Wei & Xingbo~! Welcome aboard. > > > > > > > > Thank you~ > > > > > > > > Xintong Song > > > > > > > > > > > > > > > > On Mon, Feb 22, 2021 at 11:48 AM Dian Fu wrote: > > > > > > > > > Hi all, > > > > > > > > > > On behalf of the PMC, I’m very happy to announce that Wei Zhong and > > > > Xingbo > > > > > Huang have accepted the invitation to become Flink committers. > > > > > > > > > > - Wei Zhong mainly works on PyFlink and has driven several > important > > > > > features in PyFlink, e.g. Python UDF dependency management > (FLIP-78), > > > > > Python UDF support in SQL (FLIP-106, FLIP-114), Python UDAF support > > > > > (FLIP-139), etc. He has contributed the first PR of PyFlink and > have > > > > > contributed 100+ commits since then. > > > > > > > > > > - Xingbo Huang's contribution is also mainly in PyFlink and has > > driven > > > > > several important features in PyFlink, e.g. performance optimizing > > for > > > > > Python UDF and Python UDAF (FLIP-121, FLINK-16747, FLINK-19236), > > Pandas > > > > > UDAF support (FLIP-137), Python UDTF support (FLINK-14500), > row-based > > > > > Operations support in Python Table API (FLINK-20479), etc. He is > also > > > > > actively helping on answering questions in the user mailing list, > > > helping > > > > > on the release check, monitoring the status of the azure pipeline, > > etc. 
> > > > > > > > > > Please join me in congratulating Wei Zhong and Xingbo Huang for > > > becoming > > > > > Flink committers! > > > > > > > > > > Regards, > > > > > Dian > > > > > > > > > > -- Best regards! Rui Li
Re: [VOTE] FLIP-163: SQL Client Improvements
+1 (non-binding) On Mon, Feb 22, 2021 at 4:10 PM Timo Walther wrote: > +1 (binding) > > Thanks, > Timo > > On 22.02.21 04:44, Jark Wu wrote: > > +1 (binding) > > > > Best, > > Jark > > > > On Mon, 22 Feb 2021 at 11:06, Shengkai Fang wrote: > > > >> Hi devs > >> > >> It seems we have reached consensus on FLIP-163[1] in the discussion[2]. > So > >> I'd like to start the vote for this FLIP. > >> > >> Please vote +1 to approve the FLIP, or -1 with a comment. > >> > >> The vote will be open for 72 hours, until Feb. 25 2021 12:00 AM UTC+8, > >> unless there's an objection. > >> > >> Best, > >> Shengkai > >> > > > > -- Best regards! Rui Li
[jira] [Created] (FLINK-21435) Add a SqlExpression in table-common
Timo Walther created FLINK-21435: Summary: Add a SqlExpression in table-common Key: FLINK-21435 URL: https://issues.apache.org/jira/browse/FLINK-21435 Project: Flink Issue Type: Sub-task Components: Table SQL / API Reporter: Timo Walther Assignee: Timo Walther We introduced a dedicated function definition in FLINK-20971 to use SQL expressions in the Table API. However, for usability and a clear Maven module design it is better to add a dedicated `SqlExpression` in `table-common`. This allows catalogs to specify an unresolved SQL expression without a dependency on `table-api`. Furthermore, we can add SQL-specific functions to the new `Schema` class. -- This message was sent by Atlassian Jira (v8.3.4#803005)
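A possible shape for such a class, sketched purely as an assumption (the actual API introduced by FLINK-21435 may well differ): a small immutable holder carrying the unresolved SQL string, so catalogs can store it without any dependency on `table-api`.

```java
import java.util.Objects;

// Hypothetical sketch of an unresolved SQL expression holder for
// table-common; the real FLINK-21435 class may differ.
public final class SqlExpression {

    private final String sqlExpression; // kept unresolved, e.g. "a + 1"

    public SqlExpression(String sqlExpression) {
        this.sqlExpression = Objects.requireNonNull(sqlExpression);
    }

    public String getSqlExpression() {
        return sqlExpression;
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof SqlExpression
                && sqlExpression.equals(((SqlExpression) o).sqlExpression);
    }

    @Override
    public int hashCode() {
        return sqlExpression.hashCode();
    }

    @Override
    public String toString() {
        return "[" + sqlExpression + "]";
    }

    public static void main(String[] args) {
        System.out.println(new SqlExpression("a + 1")); // [a + 1]
    }
}
```

Resolution of the expression would then happen later, inside a module that does depend on the planner/API.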
[jira] [Created] (FLINK-21434) When UDAF return ROW type, and the number of fields is more than 14, the crash happend
awayne created FLINK-21434: -- Summary: When UDAF return ROW type, and the number of fields is more than 14, the crash happend Key: FLINK-21434 URL: https://issues.apache.org/jira/browse/FLINK-21434 Project: Flink Issue Type: Bug Components: API / Python Affects Versions: 1.12.1 Environment: python 3.7.5 pyflink 1.12.1 Reporter: awayne Code (a simple udaf to return a Row containing 15 fields):
{code:python}
from pyflink.common import Row
from pyflink.table.udf import AggregateFunction, udaf
from pyflink.table import DataTypes, EnvironmentSettings, StreamTableEnvironment


class Test(AggregateFunction):

    def create_accumulator(self):
        return Row(0, 0)

    def get_value(self, accumulator):
        return Row(1.23, 1.23, 1.23, 1.23, 1.23, 1.23, 1.23, 1.23, 1.23, 1.23, 1.23, 1.23, 1.23, 1.23, 1.23)

    def accumulate(self, accumulator, a, b):
        pass

    def get_result_type(self):
        return DataTypes.ROW([
            DataTypes.FIELD("f1", DataTypes.FLOAT()),
            DataTypes.FIELD("f2", DataTypes.FLOAT()),
            DataTypes.FIELD("f3", DataTypes.FLOAT()),
            DataTypes.FIELD("f4", DataTypes.FLOAT()),
            DataTypes.FIELD("f5", DataTypes.FLOAT()),
            DataTypes.FIELD("f6", DataTypes.FLOAT()),
            DataTypes.FIELD("f7", DataTypes.FLOAT()),
            DataTypes.FIELD("f8", DataTypes.FLOAT()),
            DataTypes.FIELD("f9", DataTypes.FLOAT()),
            DataTypes.FIELD("f10", DataTypes.FLOAT()),
            DataTypes.FIELD("f11", DataTypes.FLOAT()),
            DataTypes.FIELD("f12", DataTypes.FLOAT()),
            DataTypes.FIELD("f13", DataTypes.FLOAT()),
            DataTypes.FIELD("f14", DataTypes.FLOAT()),
            DataTypes.FIELD("f15", DataTypes.FLOAT())
        ])

    def get_accumulator_type(self):
        return DataTypes.ROW([
            DataTypes.FIELD("f1", DataTypes.BIGINT()),
            DataTypes.FIELD("f2", DataTypes.BIGINT())])


def udaf_test():
    env_settings = EnvironmentSettings.new_instance().in_streaming_mode().use_blink_planner().build()
    table_env = StreamTableEnvironment.create(environment_settings=env_settings)
    test = udaf(Test())
    table_env.execute_sql("""
        CREATE TABLE print_sink (
            `name` STRING,
            `agg` ROW
        ) WITH (
            'connector' = 'print'
        )
    """)
    table = table_env.from_elements([(1, 2, "Lee")], ['value', 'count', 'name'])
    result_table = table.group_by(table.name)\
        .select(table.name, test(table.value, table.count))
    result_table.execute_insert("print_sink").wait()


if __name__ == "__main__":
    udaf_test()
{code}
Exception:
{code:java}
Caused by: java.io.EOFException
    at java.base/java.io.DataInputStream.readInt(DataInputStream.java:397)
    at java.base/java.io.DataInputStream.readFloat(DataInputStream.java:451)
    at org.apache.flink.api.common.typeutils.base.FloatSerializer.deserialize(FloatSerializer.java:72)
    at org.apache.flink.api.common.typeutils.base.FloatSerializer.deserialize(FloatSerializer.java:30)
    at org.apache.flink.table.runtime.typeutils.serializers.python.RowDataSerializer.deserialize(RowDataSerializer.java:106)
    at org.apache.flink.table.runtime.typeutils.serializers.python.RowDataSerializer.deserialize(RowDataSerializer.java:49)
    at org.apache.flink.table.runtime.typeutils.serializers.python.RowDataSerializer.deserialize(RowDataSerializer.java:106
{code}
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (FLINK-21433) Default slot resource is calculated inconsistently for pending / registered TMs
Xintong Song created FLINK-21433: Summary: Default slot resource is calculated inconsistently for pending / registered TMs Key: FLINK-21433 URL: https://issues.apache.org/jira/browse/FLINK-21433 Project: Flink Issue Type: Sub-task Components: Runtime / Coordination Reporter: Xintong Song Assignee: Xintong Song For a {{PendingTaskManager}}, the default slot resource is decided by {{ResourceAllocationStrategy}}. However, this default slot resource is ignored in {{FineGrainedSlotManager#allocateResource}}. On the TM side, the default slot profile is calculated via {{TaskExecutorResourceUtils#generateDefaultSlotResourceProfile}}. This may lead to a registered TM not matching the pending TM. -- This message was sent by Atlassian Jira (v8.3.4#803005)
Re: [ANNOUNCE] New Apache Flink Committers - Wei Zhong and Xingbo Huang
Congratulations Wei & Xingbo! Best, Yuan On Mon, Feb 22, 2021 at 4:04 PM Yu Li wrote: > Congratulations Wei and Xingbo! > > Best Regards, > Yu > > > On Mon, 22 Feb 2021 at 15:56, Till Rohrmann wrote: > > > Congratulations Wei & Xingbo. Great to have you as committers in the > > community now. > > > > Cheers, > > Till > > > > On Mon, Feb 22, 2021 at 5:08 AM Xintong Song > > wrote: > > > > > Congratulations, Wei & Xingbo~! Welcome aboard. > > > > > > Thank you~ > > > > > > Xintong Song > > > > > > > > > > > > On Mon, Feb 22, 2021 at 11:48 AM Dian Fu wrote: > > > > > > > Hi all, > > > > > > > > On behalf of the PMC, I’m very happy to announce that Wei Zhong and > > > Xingbo > > > > Huang have accepted the invitation to become Flink committers. > > > > > > > > - Wei Zhong mainly works on PyFlink and has driven several important > > > > features in PyFlink, e.g. Python UDF dependency management (FLIP-78), > > > > Python UDF support in SQL (FLIP-106, FLIP-114), Python UDAF support > > > > (FLIP-139), etc. He has contributed the first PR of PyFlink and have > > > > contributed 100+ commits since then. > > > > > > > > - Xingbo Huang's contribution is also mainly in PyFlink and has > driven > > > > several important features in PyFlink, e.g. performance optimizing > for > > > > Python UDF and Python UDAF (FLIP-121, FLINK-16747, FLINK-19236), > Pandas > > > > UDAF support (FLIP-137), Python UDTF support (FLINK-14500), row-based > > > > Operations support in Python Table API (FLINK-20479), etc. He is also > > > > actively helping on answering questions in the user mailing list, > > helping > > > > on the release check, monitoring the status of the azure pipeline, > etc. > > > > > > > > Please join me in congratulating Wei Zhong and Xingbo Huang for > > becoming > > > > Flink committers! > > > > > > > > Regards, > > > > Dian > > > > > >
Re: [VOTE] FLIP-163: SQL Client Improvements
+1 (binding) Thanks, Timo On 22.02.21 04:44, Jark Wu wrote: +1 (binding) Best, Jark On Mon, 22 Feb 2021 at 11:06, Shengkai Fang wrote: Hi devs It seems we have reached consensus on FLIP-163[1] in the discussion[2]. So I'd like to start the vote for this FLIP. Please vote +1 to approve the FLIP, or -1 with a comment. The vote will be open for 72 hours, until Feb. 25 2021 12:00 AM UTC+8, unless there's an objection. Best, Shengkai
Re: [ANNOUNCE] New Apache Flink Committers - Wei Zhong and Xingbo Huang
Congratulations Wei and Xingbo! Best Regards, Yu On Mon, 22 Feb 2021 at 15:56, Till Rohrmann wrote: > Congratulations Wei & Xingbo. Great to have you as committers in the > community now. > > Cheers, > Till > > On Mon, Feb 22, 2021 at 5:08 AM Xintong Song > wrote: > > > Congratulations, Wei & Xingbo~! Welcome aboard. > > > > Thank you~ > > > > Xintong Song > > > > > > > > On Mon, Feb 22, 2021 at 11:48 AM Dian Fu wrote: > > > > > Hi all, > > > > > > On behalf of the PMC, I’m very happy to announce that Wei Zhong and > > Xingbo > > > Huang have accepted the invitation to become Flink committers. > > > > > > - Wei Zhong mainly works on PyFlink and has driven several important > > > features in PyFlink, e.g. Python UDF dependency management (FLIP-78), > > > Python UDF support in SQL (FLIP-106, FLIP-114), Python UDAF support > > > (FLIP-139), etc. He has contributed the first PR of PyFlink and have > > > contributed 100+ commits since then. > > > > > > - Xingbo Huang's contribution is also mainly in PyFlink and has driven > > > several important features in PyFlink, e.g. performance optimizing for > > > Python UDF and Python UDAF (FLIP-121, FLINK-16747, FLINK-19236), Pandas > > > UDAF support (FLIP-137), Python UDTF support (FLINK-14500), row-based > > > Operations support in Python Table API (FLINK-20479), etc. He is also > > > actively helping on answering questions in the user mailing list, > helping > > > on the release check, monitoring the status of the azure pipeline, etc. > > > > > > Please join me in congratulating Wei Zhong and Xingbo Huang for > becoming > > > Flink committers! > > > > > > Regards, > > > Dian > > >