Re: [VOTE] FLIP-146: Improve new TableSource and TableSink interfaces
+1. It is an essential feature for users who use DataStream inside of TableSource/TableSink, so that they can do some advanced operations. As one of them, I feel really excited and look forward to the implementation of FLIP-146. Appreciate it!

[1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-146%3A+Improve+new+TableSource+and+TableSink+interfaces
[2] http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-146-Improve-new-TableSource-and-TableSink-interfaces-td45161.html

Best Regards,
Shoi Liu
Dalian University of Technology
[jira] [Created] (FLINK-19702) Avoid using HiveConf::hiveSiteURL
Rui Li created FLINK-19702:
---------------------------
Summary: Avoid using HiveConf::hiveSiteURL
Key: FLINK-19702
URL: https://issues.apache.org/jira/browse/FLINK-19702
Project: Flink
Issue Type: Improvement
Components: Connectors / Hive
Reporter: Rui Li
Fix For: 1.12.0

We're relying on this static field to set the path to hive-site.xml. This can be error-prone if users create multiple HiveCatalog instances in their programs.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
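The failure mode behind this ticket can be sketched in a few lines. The classes and field names below are hypothetical stand-ins (not Hive's or Flink's actual code): a JVM-wide static slot, in the spirit of HiveConf::hiveSiteURL, means the second catalog instance silently overwrites the first one's configuration path, while a per-instance field does not.

```java
// Illustrative sketch only: why a static config-path field is error-prone
// when multiple catalog instances exist. All names here are hypothetical.
public class StaticConfigPitfall {

    // Mimics a HiveConf::hiveSiteURL-style field: one JVM-wide slot.
    static class GlobalConf {
        static String hiveSitePath;
    }

    // A catalog that publishes its config path through the static field.
    static class StaticPathCatalog {
        StaticPathCatalog(String hiveSitePath) {
            // Overwrites whatever an earlier catalog instance set.
            GlobalConf.hiveSitePath = hiveSitePath;
        }
        String resolvePath() {
            return GlobalConf.hiveSitePath;
        }
    }

    // The direction the ticket suggests: keep the path per instance.
    static class InstancePathCatalog {
        private final String hiveSitePath;
        InstancePathCatalog(String hiveSitePath) {
            this.hiveSitePath = hiveSitePath;
        }
        String resolvePath() {
            return hiveSitePath;
        }
    }

    public static void main(String[] args) {
        StaticPathCatalog a = new StaticPathCatalog("/etc/hive-a/hive-site.xml");
        StaticPathCatalog b = new StaticPathCatalog("/etc/hive-b/hive-site.xml");
        // Catalog "a" now silently reads catalog "b"'s configuration:
        System.out.println(a.resolvePath()); // /etc/hive-b/hive-site.xml

        InstancePathCatalog c = new InstancePathCatalog("/etc/hive-a/hive-site.xml");
        InstancePathCatalog d = new InstancePathCatalog("/etc/hive-b/hive-site.xml");
        System.out.println(c.resolvePath()); // /etc/hive-a/hive-site.xml
    }
}
```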
Re: [VOTE] FLIP-146: Improve new TableSource and TableSink interfaces
+1

Jingsong Li wrote on Mon, Oct 19, 2020 at 10:54 AM:
> +1
>
> On Fri, Oct 16, 2020 at 2:33 PM Leonard Xu wrote:
> > +1
> >
> > Best,
> > Leonard
> >
> > > Jark Wu wrote on Oct 16, 2020 at 11:01:
> > > +1
> > >
> > > On Fri, 16 Oct 2020 at 10:27, admin <17626017...@163.com> wrote:
> > > > +1
> > > >
> > > > > Danny Chan wrote on Oct 16, 2020 at 10:05 AM:
> > > > > +1, nice job!
> > > > >
> > > > > Best,
> > > > > Danny Chan
> > > > > Jingsong Li wrote on Oct 15, 2020 at 8:08 PM +0800:
> > > > > > Hi all,
> > > > > >
> > > > > > I would like to start the vote for FLIP-146 [1], which is discussed
> > > > > > and reached consensus in the discussion thread [2]. The vote will be
> > > > > > open until 20th Oct. (72h, exclude weekends), unless there is an
> > > > > > objection or not enough votes.
> > > > > >
> > > > > > [1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-146%3A+Improve+new+TableSource+and+TableSink+interfaces
> > > > > > [2] http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-146-Improve-new-TableSource-and-TableSink-interfaces-td45161.html
> > > > > >
> > > > > > Best,
> > > > > > Jingsong Lee
>
> --
> Best, Jingsong Lee
[jira] [Created] (FLINK-19701) Unaligned Checkpoint might misuse the number of buffers to persist from the previous barrier
Yun Gao created FLINK-19701:
----------------------------
Summary: Unaligned Checkpoint might misuse the number of buffers to persist from the previous barrier
Key: FLINK-19701
URL: https://issues.apache.org/jira/browse/FLINK-19701
Project: Flink
Issue Type: Bug
Components: Runtime / Checkpointing
Affects Versions: 1.12.0
Reporter: Yun Gao

The current CheckpointUnaligner interacts with RemoteInputChannel to persist the input buffers. However, based on the current implementation, a problem seems to arise in the following case:

{code:java}
1. There are 3 input channels.
2. Input channel 0 received barrier 1, and processed barrier 1 to start checkpoint 1.
3. Input channel 1 received barrier 1, and processed barrier 1. Now the state of the input channel persister becomes BARRIER_RECEIVED and numBuffersOvertaken(channel 1) = n_1.
4. However, input channel 2 received nothing, the checkpoint expired, and a new checkpoint is triggered.
5. Input channel 0 received barrier 2; checkpoint 1 is aborted and checkpoint 2 is started. However, in this case the state of the input channels is not cleared. Thus channel 1 is still BARRIER_RECEIVED with numBuffersOvertaken(channel 1) = n_1, so channel 1 would only persist n_1 buffers.
{code}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
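The five-step scenario above boils down to per-channel persister state leaking across checkpoints. The sketch below is a deliberately simplified, hypothetical model (not Flink's actual runtime classes): it shows how a stale `numBuffersOvertaken` recorded for barrier 1 limits what gets persisted for checkpoint 2 unless the state is reset when checkpoint 1 is aborted.

```java
// Illustrative model of the reported bug; class and method names are hypothetical.
public class ChannelPersisterSketch {

    enum State { WAITING_FOR_BARRIER, BARRIER_RECEIVED }

    static class ChannelPersister {
        State state = State.WAITING_FOR_BARRIER;
        long checkpointId = -1;
        int numBuffersOvertaken = 0;

        // Barrier for a checkpoint arrived on this channel: record how many
        // in-flight buffers it overtook.
        void onBarrier(long checkpointId, int bufferedCount) {
            this.checkpointId = checkpointId;
            this.numBuffersOvertaken = bufferedCount;
            this.state = State.BARRIER_RECEIVED;
        }

        // Must run on every channel when a checkpoint is aborted/superseded.
        void reset() {
            state = State.WAITING_FOR_BARRIER;
            checkpointId = -1;
            numBuffersOvertaken = 0;
        }

        // How many in-flight buffers to persist for the given checkpoint.
        int buffersToPersist(long forCheckpointId, int currentlyBuffered) {
            if (state == State.BARRIER_RECEIVED && checkpointId != forCheckpointId) {
                // Stale count left over from the previous barrier: the bug.
                return numBuffersOvertaken;
            }
            return currentlyBuffered;
        }
    }

    public static void main(String[] args) {
        ChannelPersister channel1 = new ChannelPersister();
        channel1.onBarrier(1L, 5);                      // barrier 1 overtook n_1 = 5 buffers
        // Checkpoint 1 expires; checkpoint 2 starts WITHOUT clearing channel state:
        System.out.println(channel1.buffersToPersist(2L, 9)); // 5 -- too few persisted
        channel1.reset();                               // the missing cleanup step
        System.out.println(channel1.buffersToPersist(2L, 9)); // 9 -- correct
    }
}
```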
Re: [VOTE] FLIP-146: Improve new TableSource and TableSink interfaces
+1

On Fri, Oct 16, 2020 at 2:33 PM Leonard Xu wrote:
> +1
>
> Best,
> Leonard
>
> > Jark Wu wrote on Oct 16, 2020 at 11:01:
> > +1
> >
> > On Fri, 16 Oct 2020 at 10:27, admin <17626017...@163.com> wrote:
> > > +1
> > >
> > > > Danny Chan wrote on Oct 16, 2020 at 10:05 AM:
> > > > +1, nice job!
> > > >
> > > > Best,
> > > > Danny Chan
> > > > Jingsong Li wrote on Oct 15, 2020 at 8:08 PM +0800:
> > > > > Hi all,
> > > > >
> > > > > I would like to start the vote for FLIP-146 [1], which is discussed
> > > > > and reached consensus in the discussion thread [2]. The vote will be
> > > > > open until 20th Oct. (72h, exclude weekends), unless there is an
> > > > > objection or not enough votes.
> > > > >
> > > > > [1] https://cwiki.apache.org/confluence/display/FLINK/FLIP-146%3A+Improve+new+TableSource+and+TableSink+interfaces
> > > > > [2] http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-146-Improve-new-TableSource-and-TableSink-interfaces-td45161.html
> > > > >
> > > > > Best,
> > > > > Jingsong Lee

--
Best, Jingsong Lee
[jira] [Created] (FLINK-19700) Make Kubernetes Client in KubernetesResourceManagerDriver use io executor
Yang Wang created FLINK-19700:
------------------------------
Summary: Make Kubernetes Client in KubernetesResourceManagerDriver use io executor
Key: FLINK-19700
URL: https://issues.apache.org/jira/browse/FLINK-19700
Project: Flink
Issue Type: Improvement
Components: Deployment / Kubernetes
Reporter: Yang Wang
Fix For: 1.12.0

Currently, the Kubernetes Client in {{KubernetesResourceManagerDriver}} is using a dedicated thread pool. After FLINK-19037 and FLINK-18722, we could get the io executor in {{KubernetesResourceManagerDriver}} and eliminate the redundant thread pool.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Created] (FLINK-19699) PrometheusReporterEndToEndITCase crashes with exit code 143
Dian Fu created FLINK-19699:
----------------------------
Summary: PrometheusReporterEndToEndITCase crashes with exit code 143
Key: FLINK-19699
URL: https://issues.apache.org/jira/browse/FLINK-19699
Project: Flink
Issue Type: Bug
Components: Runtime / Metrics
Affects Versions: 1.12.0
Reporter: Dian Fu

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7814=logs=68a897ab-3047-5660-245a-cce8f83859f6=16ca2cca-2f63-5cce-12d2-d519b930a729]

{code}
2020-10-18T23:46:04.9667443Z [ERROR] The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
2020-10-18T23:46:04.9669237Z [ERROR] Command was /bin/sh -c cd /home/vsts/work/1/s/flink-end-to-end-tests/flink-metrics-reporter-prometheus-test/target && /usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/jre/bin/java -Xms256m -Xmx2048m -Dmvn.forkNumber=2 -XX:+UseG1GC -jar /home/vsts/work/1/s/flink-end-to-end-tests/flink-metrics-reporter-prometheus-test/target/surefire/surefirebooter6797466627443523305.jar /home/vsts/work/1/s/flink-end-to-end-tests/flink-metrics-reporter-prometheus-test/target/surefire 2020-10-18T23-44-09_467-jvmRun2 surefire930806459376622178tmp surefire_41970585275084524978tmp
2020-10-18T23:46:04.9670440Z [ERROR] Error occurred in starting fork, check output in log
2020-10-18T23:46:04.9671283Z [ERROR] Process Exit Code: 143
2020-10-18T23:46:04.9671614Z [ERROR] Crashed tests:
2020-10-18T23:46:04.9672025Z [ERROR] org.apache.flink.metrics.prometheus.tests.PrometheusReporterEndToEndITCase
2020-10-18T23:46:04.9672649Z [ERROR] org.apache.maven.surefire.booter.SurefireBooterForkException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
2020-10-18T23:46:04.9674834Z [ERROR] Command was /bin/sh -c cd /home/vsts/work/1/s/flink-end-to-end-tests/flink-metrics-reporter-prometheus-test/target && /usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/jre/bin/java -Xms256m -Xmx2048m -Dmvn.forkNumber=2 -XX:+UseG1GC -jar /home/vsts/work/1/s/flink-end-to-end-tests/flink-metrics-reporter-prometheus-test/target/surefire/surefirebooter6797466627443523305.jar /home/vsts/work/1/s/flink-end-to-end-tests/flink-metrics-reporter-prometheus-test/target/surefire 2020-10-18T23-44-09_467-jvmRun2 surefire930806459376622178tmp surefire_41970585275084524978tmp
2020-10-18T23:46:04.9676153Z [ERROR] Error occurred in starting fork, check output in log
2020-10-18T23:46:04.9676556Z [ERROR] Process Exit Code: 143
2020-10-18T23:46:04.9676882Z [ERROR] Crashed tests:
2020-10-18T23:46:04.9677288Z [ERROR] org.apache.flink.metrics.prometheus.tests.PrometheusReporterEndToEndITCase
2020-10-18T23:46:04.9677827Z [ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:669)
2020-10-18T23:46:04.9678408Z [ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:282)
2020-10-18T23:46:04.9678965Z [ERROR] at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:245)
2020-10-18T23:46:04.9679575Z [ERROR] at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1183)
2020-10-18T23:46:04.9680983Z [ERROR] at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1011)
2020-10-18T23:46:04.9681749Z [ERROR] at org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:857)
2020-10-18T23:46:04.9682246Z [ERROR] at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
2020-10-18T23:46:04.9682728Z [ERROR] at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
2020-10-18T23:46:04.9683179Z [ERROR] at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
2020-10-18T23:46:04.9683609Z [ERROR] at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
2020-10-18T23:46:04.9684102Z [ERROR] at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
2020-10-18T23:46:04.9684639Z [ERROR] at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
2020-10-18T23:46:04.9685180Z [ERROR] at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
2020-10-18T23:46:04.9685711Z [ERROR] at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:120)
2020-10-18T23:46:04.9686145Z [ERROR] at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:355)
2020-10-18T23:46:04.9686516Z [ERROR] at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:155)
2020-10-18T23:46:04.9689517Z [ERROR] at org.apache.maven.cli.MavenCli.execute(MavenCli.java:584)
2020-10-18T23:46:04.9689917Z [ERROR] at
{code}
[jira] [Created] (FLINK-19698) Add close() method and onCheckpointComplete() to the Source.
Jiangjie Qin created FLINK-19698:
---------------------------------
Summary: Add close() method and onCheckpointComplete() to the Source.
Key: FLINK-19698
URL: https://issues.apache.org/jira/browse/FLINK-19698
Project: Flink
Issue Type: Improvement
Components: Connectors / Common
Affects Versions: 1.11.2
Reporter: Jiangjie Qin

Right now there are some caveats to the new Source API, observed from the implementation of some connectors. We would like to make the following improvements to the current Source API:

# Add the following method to the {{SplitReader}} API:
{{public void close() throws Exception;}}
This method allows the SplitReader implementations to be closed properly when the split fetcher exits.
# Add the following method to the {{SourceReader}} API:
{{public void checkpointComplete(long checkpointId);}}
This method allows the {{SourceReader}} to take some cleanup / reporting actions when a checkpoint has been successfully taken.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
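The two proposed method signatures can be sketched as simplified stand-in interfaces. Note this is only an illustration of the proposal's shape: the real {{SplitReader}} and {{SourceReader}} interfaces in Flink carry generics and many more methods, and the {{LoggingReader}} class is entirely hypothetical.

```java
// Sketch of the two proposed additions, using simplified stand-in interfaces.
public class SourceApiSketch {

    interface SplitReader {
        // Proposed: lets implementations release resources (connections,
        // file handles) when the split fetcher exits.
        void close() throws Exception;
    }

    interface SourceReader {
        // Proposed: notifies the reader that a checkpoint completed so it
        // can do cleanup / reporting actions (e.g. committing offsets).
        void checkpointComplete(long checkpointId);
    }

    // A toy reader that records the last completed checkpoint and whether
    // it has been closed.
    static class LoggingReader implements SplitReader, SourceReader {
        long lastCompletedCheckpoint = -1;
        boolean closed = false;

        @Override
        public void close() {
            closed = true;
        }

        @Override
        public void checkpointComplete(long checkpointId) {
            lastCompletedCheckpoint = checkpointId;
        }
    }

    public static void main(String[] args) throws Exception {
        LoggingReader r = new LoggingReader();
        r.checkpointComplete(42L);
        r.close();
        System.out.println(r.lastCompletedCheckpoint + " " + r.closed); // 42 true
    }
}
```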
[jira] [Created] (FLINK-19697) Make the streaming committer retry-able
Guowei Ma created FLINK-19697:
------------------------------
Summary: Make the streaming committer retry-able
Key: FLINK-19697
URL: https://issues.apache.org/jira/browse/FLINK-19697
Project: Flink
Issue Type: Sub-task
Components: API / DataStream
Reporter: Guowei Ma

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Created] (FLINK-19695) Writing Table with RowTime Column of type TIMESTAMP(3) to Kafka fails with ClassCastException
Konstantin Knauf created FLINK-19695:
-------------------------------------
Summary: Writing Table with RowTime Column of type TIMESTAMP(3) to Kafka fails with ClassCastException
Key: FLINK-19695
URL: https://issues.apache.org/jira/browse/FLINK-19695
Project: Flink
Issue Type: Bug
Components: Table SQL / Planner
Affects Versions: 1.11.2, 1.12.0
Reporter: Konstantin Knauf

When I try to write a table to Kafka (JSON format) that has a rowtime attribute of type TIMESTAMP(3), the job fails with:

{noformat}
2020-10-18 18:02:08
java.lang.ClassCastException: org.apache.flink.table.data.TimestampData cannot be cast to java.lang.Long
	at org.apache.flink.table.data.GenericRowData.getLong(GenericRowData.java:154)
	at org.apache.flink.table.runtime.operators.sink.SinkOperator$SimpleContext.timestamp(SinkOperator.java:144)
	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.invoke(FlinkKafkaProducer.java:866)
	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.invoke(FlinkKafkaProducer.java:99)
	at org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.invoke(TwoPhaseCommitSinkFunction.java:235)
	at org.apache.flink.table.runtime.operators.sink.SinkOperator.processElement(SinkOperator.java:86)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:717)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:692)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:672)
	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52)
	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30)
	at org.apache.flink.table.runtime.operators.wmassigners.WatermarkAssignerOperator.processElement(WatermarkAssignerOperator.java:123)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:717)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:692)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:672)
	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52)
	at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30)
	at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
	at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:111)
	at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordsWithTimestamps(AbstractFetcher.java:352)
	at org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.partitionConsumerRecordsHandler(KafkaFetcher.java:185)
	at org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.runFetchLoop(KafkaFetcher.java:141)
	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:755)
	at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
	at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
	at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:213)
{noformat}

From looking at the relevant code in SinkOperator$SimpleContext#timestamp, it seems that we can only deal with long-typed timestamps in SinkOperator?!

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
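The top of the stack trace can be reproduced with a minimal self-contained illustration. The classes below are mocks (not Flink's actual {{TimestampData}}/{{GenericRowData}}): a row field that physically holds a timestamp object fails when read through a long-typed accessor, which mirrors the {{GenericRowData.getLong}} call from {{SinkOperator$SimpleContext.timestamp}}.

```java
// Mock illustration of the failure mode; these are NOT Flink's real classes.
public class RowtimeCastSketch {

    // Stand-in for org.apache.flink.table.data.TimestampData.
    static class TimestampData {
        final long millis;
        TimestampData(long millis) { this.millis = millis; }
        long getMillisecond() { return millis; }
    }

    // Stand-in for GenericRowData: fields are stored as plain Objects.
    static class GenericRowData {
        final Object[] fields;
        GenericRowData(Object... fields) { this.fields = fields; }
        long getLong(int pos) {
            // Throws ClassCastException if the field holds a TimestampData.
            return (Long) fields[pos];
        }
    }

    public static void main(String[] args) {
        // Rowtime column materialized as a TimestampData object:
        GenericRowData row = new GenericRowData(new TimestampData(1_600_000_000_000L));
        try {
            row.getLong(0); // mirrors SinkOperator$SimpleContext.timestamp(...)
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as in the report: " + e.getMessage());
        }
        // A fix would have to unwrap the timestamp value explicitly:
        TimestampData ts = (TimestampData) row.fields[0];
        System.out.println(ts.getMillisecond());
    }
}
```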
Re: Re: [VOTE] Add Translation Specification for Stateful Functions
@Shawn Thanks for the notice. That is not related to this thread, but I think we should fix it; I will update it accordingly.

Best,
Congxian

Shawn Huang wrote on Sat, Oct 17, 2020 at 11:27 AM:
> Hi, Congxian.
> Thanks for your contribution.
> I noticed that in the wiki, the link format is still like "{{ site.baseurl }}/.../xxx.html",
> but the "{% link %}" tag is recommended now [1].
> So is it necessary to update this in the wiki? It seems some translators
> haven't noticed this.
>
> [1] http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Reminder-Prefer-link-tag-in-documentation-td42362.html
>
> Best,
> Shawn Huang
>
> Congxian Qiu wrote on Fri, Oct 16, 2020 at 11:10 PM:
> > FYI, I've added the Specification for Stateful Functions to the existing wiki [1].
> >
> > [1] https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications
> >
> > Best,
> > Congxian
> >
> > Congxian Qiu wrote on Thu, Oct 15, 2020 at 8:33 PM:
> > > Hi all,
> > > Thanks everyone for the voting.
> > >
> > > The voting time for "Add Translation Specification for Stateful
> > > Functions" has passed, I'm closing the vote now.
> > >
> > > There were 7 votes, 4 of which are binding:
> > > - Yu Li (binding)
> > > - Jark Wu (binding)
> > > - Xintong Song (binding)
> > > - Smile
> > > - Dian Fu (binding)
> > > - Hailong Wang
> > > - Shawn Huang
> > >
> > > There were no -1 votes.
> > >
> > > Thus, changes have been accepted. I'll update the wiki accordingly.
> > >
> > > Best,
> > > Congxian
> > >
> > > Shawn Huang wrote on Tue, Oct 13, 2020 at 3:19 PM:
> > > > +1
> > > >
> > > > Best,
> > > > Shawn Huang
> > > >
> > > > hailongwang <18868816...@163.com> wrote on Mon, Oct 12, 2020 at 11:21 PM:
> > > > > +1
> > > > > Best,
> > > > > Hailong Wang
> > > > > At 2020-10-12 17:00:34, "Xintong Song" wrote:
> > > > > > +1
> > > > > >
> > > > > > Thank you~
> > > > > > Xintong Song
> > > > > >
> > > > > > On Mon, Oct 12, 2020 at 5:59 PM Jark Wu wrote:
> > > > > > > +1
> > > > > > >
> > > > > > > On Mon, 12 Oct 2020 at 17:14, Yu Li wrote:
> > > > > > > > +1
> > > > > > > >
> > > > > > > > Best Regards,
> > > > > > > > Yu
> > > > > > > >
> > > > > > > > On Mon, 12 Oct 2020 at 14:41, Congxian Qiu <qcx978132...@gmail.com> wrote:
> > > > > > > > > I would like to start a voting thread for adding translation
> > > > > > > > > specification for Stateful Functions, which we've reached consensus in [1].
> > > > > > > > >
> > > > > > > > > This voting will be open for a minimum 3 days till 3:00 pm UTC, Oct 15.
> > > > > > > > >
> > > > > > > > > [1] http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Add-Translation-Specification-for-Stateful-Functions-td45531.html
> > > > > > > > >
> > > > > > > > > Best,
> > > > > > > > > Congxian
[jira] [Created] (FLINK-19693) Scheduler Change for Approximate Local Recovery
Yuan Mei created FLINK-19693:
-----------------------------
Summary: Scheduler Change for Approximate Local Recovery
Key: FLINK-19693
URL: https://issues.apache.org/jira/browse/FLINK-19693
Project: Flink
Issue Type: Sub-task
Reporter: Yuan Mei

--
This message was sent by Atlassian Jira
(v8.3.4#803005)