[jira] [Created] (FLINK-23592) Format of the "Table API & SQL >> Concepts & Common API" Chinese document is a mess

2021-08-02 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-23592:
---

 Summary: Format of the "Table API & SQL >> Concepts & Common API" 
Chinese document is a mess
 Key: FLINK-23592
 URL: https://issues.apache.org/jira/browse/FLINK-23592
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: Caizhi Weng


See 
https://ci.apache.org/projects/flink/flink-docs-release-1.13/zh/docs/dev/table/common/#%e8%a7%a3%e9%87%8a%e8%a1%a8



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23591) [DOCS] The link to the 'deployment/config' page is invalid because several extra characters were written in page 'deployment/memory/mem_tuning/'

2021-08-02 Thread wuguihu (Jira)
wuguihu created FLINK-23591:
---

 Summary: [DOCS] The link to the 'deployment/config' page is invalid 
because several extra characters were written in page 'deployment/memory/mem_tuning/'
 Key: FLINK-23591
 URL: https://issues.apache.org/jira/browse/FLINK-23591
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Reporter: wuguihu
 Attachments: image-20210803011947329.png, image-20210803012556316.png

1. [https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/deployment/memory/mem_tuning/]

2. docs/content.zh/docs/deployment/memory/mem_tuning.md

3. The wrong link is below:

{code:java}
[`jobmanager.memory.flink.size`]({{< ref "docs/deployment/config" 
>}}#jobmanager-memory-flink-size" >}})
{code}

4. It should be modified as follows:

{code:java}
[`jobmanager.memory.flink.size`]({{< ref "docs/deployment/config" 
>}}#jobmanager-memory-flink-size)
{code}


--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23590) StreamTaskTest#testProcessWithUnAvailableInput is flaky

2021-08-02 Thread Jira
David Morávek created FLINK-23590:
-

 Summary: StreamTaskTest#testProcessWithUnAvailableInput is flaky
 Key: FLINK-23590
 URL: https://issues.apache.org/jira/browse/FLINK-23590
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Task
Affects Versions: 1.14.0
Reporter: David Morávek
 Fix For: 1.14.0


[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21218=logs=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5=54421a62-0c80-5aad-3319-094ff69180bb]

 
{code:java}
java.lang.AssertionError: 
Expected: a value equal to or greater than <22L>
 but: <217391L> was less than <22L> at 
org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.junit.Assert.assertThat(Assert.java:964)
at org.junit.Assert.assertThat(Assert.java:930)
at 
org.apache.flink.streaming.runtime.tasks.StreamTaskTest.testProcessWithUnAvailableInput(StreamTaskTest.java:1561)
at jdk.internal.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:829){code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23589) Support Avro Microsecond precision

2021-08-02 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-23589:
--

 Summary: Support Avro Microsecond precision
 Key: FLINK-23589
 URL: https://issues.apache.org/jira/browse/FLINK-23589
 Project: Flink
  Issue Type: Improvement
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Reporter: Robert Metzger
 Fix For: 1.14.0


This was raised by a user: 
https://lists.apache.org/thread.html/r463f748358202d207e4bf9c7fdcb77e609f35bbd670dbc5278dd7615%40%3Cuser.flink.apache.org%3E

Here's the Avro spec: 
https://avro.apache.org/docs/1.8.0/spec.html#Timestamp+%28microsecond+precision%29
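For illustration only (a sketch, not taken from the ticket or the user thread): in Avro's Java API a microsecond-precision timestamp is a long annotated with the timestamp-micros logical type from the spec above, which the Flink Avro format would presumably map to a TIMESTAMP with precision 6.

{code:java}
import org.apache.avro.LogicalTypes;
import org.apache.avro.Schema;

// Avro schema for a field carrying the timestamp-micros logical type
// described in the spec linked above.
Schema timestampMicros =
    LogicalTypes.timestampMicros().addToSchema(Schema.create(Schema.Type.LONG));
{code}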



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Security Vulnerabilities with Flink OpenJDK Docker Image

2021-08-02 Thread Konstantin Knauf
Hi Daniel,

sorry for the late reply and thanks for the report. We'll look into this
and get back to you.

Cheers,

Konstantin

On Tue, Jun 15, 2021 at 4:33 AM Daniel Moore
 wrote:

> Hello All,
>
> We have been implementing a solution using the Flink image from
> https://github.com/apache/flink-docker/blob/master/1.13/scala_2.12-java11-debian/Dockerfile
> and it got flagged by our image repository for 3 major security
> vulnerabilities:
>
> CVE-2017-8804
> CVE-2019-25013
> CVE-2021-33574
>
> All of these stem from the `glibc` packages in the `openjdk:11-jre` image.
>
> We have a working image based on building Flink using the Amazon Corretto
> image -
> https://github.com/corretto/corretto-docker/blob/88df29474df6fc3f3f19daa8c5515d934f706cd0/11/jdk/al2/Dockerfile.
> This works although there are  some issues related to linking
> `libjemalloc`.  Before we fully test this new image we wanted to reach out
> to the community for insight on the following questions:
>
> 1. Are these vulnerabilities captured in an issue yet?
> 2. If so, when could we expect a new official image that contains the
> Debian fixes for these issues?
> 3. If not, how can we help contribute to a solution?
> 4. Are there officially supported non-Debian based Flink images?
>
> We appreciate the insights and look forward to working with the community
> on a solution.
>
>

-- 

Konstantin Knauf

https://twitter.com/snntrable

https://github.com/knaufk


[RESULT] [VOTE] Release 1.11.4, release candidate #1

2021-08-02 Thread godfrey he
Dear all,

I'm happy to announce that we have unanimously approved the release of
1.11.4.

There are 8 approving votes, 3 of which are binding:

  *   Yun Tang
  *   Xingbo Huang
  *   Robert Metzger (binding)
  *   Jingsong Li
  *   JING ZHANG
  *   Leonard Xu
  *   Dian Fu (binding)
  *   Yu Li  (binding)

There are no disapproving votes.

Thanks everyone!

Best,
Godfrey


[jira] [Created] (FLINK-23588) Class not found

2021-08-02 Thread Abhishek Jain (Jira)
Abhishek Jain created FLINK-23588:
-

 Summary: Class not found 
 Key: FLINK-23588
 URL: https://issues.apache.org/jira/browse/FLINK-23588
 Project: Flink
  Issue Type: Technical Debt
  Components: API / Scala
Affects Versions: 1.13.0
 Environment: I have the following jars:

 

[root@ip-10-65-1-98 lib]# ls -lrt /opt/flink-1.13.1/lib/*
-rw-rw-r-- 1 ec2-user ec2-user 276771 Oct 10 2019 
/opt/flink-1.13.1/lib/log4j-api-2.12.1.jar
-rw-rw-r-- 1 ec2-user ec2-user 23518 Oct 10 2019 
/opt/flink-1.13.1/lib/log4j-slf4j-impl-2.12.1.jar
-rw-rw-r-- 1 ec2-user ec2-user 1674433 Oct 10 2019 
/opt/flink-1.13.1/lib/log4j-core-2.12.1.jar
-rw-rw-r-- 1 ec2-user ec2-user 67114 Oct 10 2019 
/opt/flink-1.13.1/lib/log4j-1.2-api-2.12.1.jar
-rw-rw-r-- 1 ec2-user ec2-user 7709740 Apr 8 14:08 
/opt/flink-1.13.1/lib/flink-shaded-zookeeper-3.4.14.jar
-rw-r--r-- 1 root root 442539 Apr 23 15:14 
/opt/flink-1.13.1/lib/flink-java-1.13.0.jar
-rw-r--r-- 1 root root 739107 Apr 23 15:18 
/opt/flink-1.13.1/lib/flink-scala_2.12-1.13.0.jar
-rw-r--r-- 1 root root 1427340 Apr 23 15:21 
/opt/flink-1.13.1/lib/flink-streaming-java_2.12-1.13.0.jar
-rw-r--r-- 1 root root 204067 Apr 23 15:21 
/opt/flink-1.13.1/lib/flink-clients_2.12-1.13.0.jar
-rw-r--r-- 1 root root 209630 Apr 23 15:26 
/opt/flink-1.13.1/lib/flink-streaming-scala_2.12-1.13.0.jar
-rw-r--r-- 1 root root 918478 Apr 23 15:27 
/opt/flink-1.13.1/lib/flink-table-common-1.13.0.jar
-rw-r--r-- 1 root root 54775 Apr 23 15:28 
/opt/flink-1.13.1/lib/flink-table-api-scala-bridge_2.12-1.13.0.jar
-rw-r--r-- 1 root root 32664252 Apr 23 15:30 
/opt/flink-1.13.1/lib/flink-table-planner_2.12-1.13.0.jar
-rw-r--r-- 1 root root 353365 Apr 23 15:39 
/opt/flink-1.13.1/lib/flink-connector-kafka_2.12-1.13.0.jar
-rw-r--r-- 1 root root 3670329 Apr 23 15:43 
/opt/flink-1.13.1/lib/flink-sql-connector-kafka_2.12-1.13.0.jar
-rw-r--r-- 1 ec2-user ec2-user 148131 May 25 12:12 
/opt/flink-1.13.1/lib/flink-json-1.13.1.jar
-rw-r--r-- 1 ec2-user ec2-user 92311 May 25 12:13 
/opt/flink-1.13.1/lib/flink-csv-1.13.1.jar
-rw-r--r-- 1 ec2-user ec2-user 35015429 May 25 12:16 
/opt/flink-1.13.1/lib/flink-table_2.12-1.13.1.jar
-rw-r--r-- 1 ec2-user ec2-user 38533349 May 25 12:16 
/opt/flink-1.13.1/lib/flink-table-blink_2.12-1.13.1.jar
-rw-r--r-- 1 ec2-user ec2-user 106648120 May 25 12:16 
/opt/flink-1.13.1/lib/flink-dist_2.12-1.13.1.jar
[root@ip-10-65-1-98 lib]#

Still getting the error below:

[root@ip-10-65-1-98 tcpreplayInput]# flink run --class 
iot.flink.readKafkaViaFlink ./flink_2.12-0.1.jar
java.lang.NoClassDefFoundError: 
org/apache/kafka/common/serialization/ByteArrayDeserializer
 at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer.setDeserializer(FlinkKafkaConsumer.java:322)
 at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer.<init>(FlinkKafkaConsumer.java:223)
 at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer.<init>(FlinkKafkaConsumer.java:154)
 at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer.<init>(FlinkKafkaConsumer.java:139)
 at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer.<init>(FlinkKafkaConsumer.java:108)
 at iot.flink.readKafkaViaFlink$.main(readKafkaViaFlink.scala:23)
 at iot.flink.readKafkaViaFlink.main(readKafkaViaFlink.scala)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355)
 at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222)
 at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114)
 at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:812)
 at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:246)
 at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1054)
 at 
org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1132)
 at 
org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
 at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1132)
Caused by: java.lang.ClassNotFoundException: 
org.apache.kafka.common.serialization.ByteArrayDeserializer
 at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
 ... 20 more
Reporter: Abhishek Jain


[root@ip-10-65-1-98 tcpreplayInput]# flink run --class 
iot.flink.readKafkaViaFlink ./flink_2.12-0.1.jar
java.lang.NoClassDefFoundError: 

Re: Julia SDK for Stateful Functions

2021-08-02 Thread Igal Shilman
Hi Tom!

I'm very happy to see this and would be very happy to help out!
We can continue the conversation over here, and in addition we can set up a
call where we can go over some of your questions.

Regarding the specific question about the "incomplete context":
you are absolutely right, the remote SDK is expected to look for unknown
states and reply with the specification of these states.

In the current version, it is expected to see a bounded number of these
messages in a row, but after the initialization phase you are not expected
to see them anymore. We will be working on a canary request that will
reduce the number of these messages soon.

All the best,
Igal.


On Sat, Jul 31, 2021 at 5:30 PM Tom Breloff  wrote:

> Hi all. I’ve been working on a Stateful Functions SDK for Julia.  I think
> it’s pretty close to working, except I’m running into a bit of confusion
> around the type expectations of the storage mechanism.
>
> Looking through the python SDK, it seems like my req/rep handler should
> look for missing persisted state objects in the ToFunction and then reply
> with an “incomplete context”. I do this, but then the subsequent
> ToFunctions are still missing the PersistedValues.
>
> Is there an assumption in the statefun backend about what types are allowed
> in the storage? Or anything else that may cause what I’m seeing?
>
> Also if there’s anyone willing to help work on this project with me,
> please let me know! The repo:
>
> https://github.com/tbreloff/StateFun.jl
>
> Best,
> Tom
>


Re: [VOTE] FLIP-179: Expose Standardized Operator Metrics

2021-08-02 Thread Becket Qin
+1 (binding).

Thanks for driving the efforts, Arvid.

Cheers,

Jiangjie (Becket) Qin

On Sat, Jul 31, 2021 at 12:08 PM Steven Wu  wrote:

> +1 (non-binding)
>
> On Fri, Jul 30, 2021 at 3:55 AM Arvid Heise  wrote:
>
> > Dear devs,
> >
> > I'd like to open a vote on FLIP-179: Expose Standardized Operator Metrics
> > [1] which was discussed in this thread [2].
> > The vote will be open for at least 72 hours unless there is an objection
> > or not enough votes.
> >
> > The proposal excludes the implementation for the currentFetchEventTimeLag
> > metric, which caused a bit of discussion without a clear convergence. We
> > will implement that metric in a generic way at a later point and
> encourage
> > sources to implement it themselves in the meantime.
> >
> > Best,
> >
> > Arvid
> >
> > [1]
> >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-179%3A+Expose+Standardized+Operator+Metrics
> > [2]
> >
> >
> https://lists.apache.org/thread.html/r856920cbfe6a262b521109c5bdb9e904e00a9b3f1825901759c24d85%40%3Cdev.flink.apache.org%3E
> >
>


[jira] [Created] (FLINK-23587) Set Deployment's Annotation when using kubernetes

2021-08-02 Thread KarlManong (Jira)
KarlManong created FLINK-23587:
--

 Summary: Set Deployment's Annotation when using kubernetes
 Key: FLINK-23587
 URL: https://issues.apache.org/jira/browse/FLINK-23587
 Project: Flink
  Issue Type: Improvement
  Components: Deployment / Kubernetes
Affects Versions: 1.13.1
Reporter: KarlManong


When deploying natively on Kubernetes, the JobManager pod will set up the 
annotations. The Deployment should also set them.
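For context, a minimal sketch of how the annotations are supplied today (an illustration, assuming the standard kubernetes.jobmanager.annotations option; per this issue they currently end up only on the JobManager pod, not on the owning Deployment):

{code}
# flink-conf.yaml (illustrative values)
kubernetes.cluster-id: my-flink-cluster
kubernetes.jobmanager.annotations: prometheus.io/scrape:true,prometheus.io/port:9249
{code}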



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23586) Further enhancements for pluggable shuffle service framework

2021-08-02 Thread Jin Xing (Jira)
Jin Xing created FLINK-23586:


 Summary: Further enhancements for pluggable shuffle service 
framework
 Key: FLINK-23586
 URL: https://issues.apache.org/jira/browse/FLINK-23586
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Network
Reporter: Jin Xing






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] FLIP-177: Extend Sink API

2021-08-02 Thread Guowei Ma
Hi Arvid

In fact, I think the composition approach is more flexible, especially in
terms of reuse. Once you make changes, you can still reuse the existing logic
well. For example, going from AsyncIOA+AsyncIOB to AsyncIOA+AsyncIOC, you can
easily reuse the previous logic.
For compatibility, I don't think there would be a big problem. For example,
for the old AsyncIOA+AsyncIOB, we can always convert to
AsyncIOA+AsyncIOB+AsyncIOC. When AsyncIOB does not have a state, simply
pass it through.

Hi Steffen
You are right, there are indeed such examples. However, I also mentioned in
my previous email that we could extend the AsyncIO capability with
XXXFuture.fail(Exception) so that AsyncIO can re-try (put the failed
element back into the internal queue again), which would also solve the
problem you mentioned. BTW, if we implement `AsyncSink` separately, there
may also be Order/UnOrder issues, which are already resolved by `AsyncIO`.

In general, I think AsyncIO and AsyncSink are very similar. There might be
some duplicated work.

Best,
Guowei


On Fri, Jul 30, 2021 at 8:16 PM Hausmann, Steffen 
wrote:

> Hey Guowei,
>
> there is one additional aspect I want to highlight that is relevant for
> the types of destinations we had in mind when designing the AsyncSink.
>
> I'll again use Kinesis as an example, but the same argument applies to
> other destinations. We are using the PutRecords API to persist up to 500
> events with a single API call to reduce the overhead compared to using
> individual calls per event. But not all of the 500 events may be persisted
> successfully, eg, a single event fails to be persisted due to server side
> throttling. With the MailboxExecutor based implementation, we can just add
> this event back to the internal queue. The event will then be retried with
> the next batch of 500 events.
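
A rough sketch of this partial-failure handling, assuming the AWS SDK v1
Kinesis client and a simple in-memory buffer (illustrative only, not the
actual AsyncSink code):

    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.List;

    import com.amazonaws.services.kinesis.AmazonKinesis;
    import com.amazonaws.services.kinesis.model.PutRecordsRequest;
    import com.amazonaws.services.kinesis.model.PutRecordsRequestEntry;
    import com.amazonaws.services.kinesis.model.PutRecordsResult;

    // Send up to 500 buffered entries in one PutRecords call and put any
    // rejected entries (e.g. throttled ones) back into the buffer so they
    // are retried with the next batch instead of in a dedicated call.
    static void flush(AmazonKinesis kinesis, String stream, Deque<PutRecordsRequestEntry> buffer) {
        List<PutRecordsRequestEntry> batch = new ArrayList<>();
        while (batch.size() < 500 && !buffer.isEmpty()) {
            batch.add(buffer.poll());
        }
        if (batch.isEmpty()) {
            return;
        }
        PutRecordsResult result = kinesis.putRecords(
                new PutRecordsRequest().withStreamName(stream).withRecords(batch));
        for (int i = 0; i < result.getRecords().size(); i++) {
            // result entries are in the same order as the request entries;
            // a non-null error code marks a record that was not persisted
            if (result.getRecords().get(i).getErrorCode() != null) {
                buffer.add(batch.get(i));
            }
        }
    }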
>
> In my understanding, that's not possible with the AsyncIO based approach.
> During a retry, we can only retry the failed events of the original batch
> of events, which means we would need to send a single event with a separate
> PutRecords call. Depending on how often that happens, this could add up.
>
> Does that make sense?
>
> Cheers, Steffen
>
>
> On 30.07.21, 05:51, "Guowei Ma"  wrote:
>
>
> Hi, Arvid & Piotr
> Sorry for the late reply.
> 1. Thank you all very much for your patience and explanation.
> Recently, I
> have also studied the related code of 'MailBox', which may not be as
> serious as I thought, after all, it is very similar to Java's
> `Executor`;
> 2. Whether to use AsyncIO or MailBox to implement Kinesis connector is
> more
> up to the contributor to decide (after all, `Mailbox` has decided to be
> exposed :-) ). It’s just that I personally prefer to combine some
> simple
> functions to complete a more advanced function.
> Best,
> Guowei
>
>
> On Sat, Jul 24, 2021 at 3:38 PM Arvid Heise  wrote:
>
> > Just to reiterate on Piotr's point: MailboxExecutor is pretty much an
> > Executor [1] with named lambdas, except for the name MailboxExecutor
> > nothing is hinting at a specific threading model.
> >
> > Currently, we expose it on StreamOperator API. Afaik the idea is to
> make
> > the StreamOperator internal and beef up ProcessFunction but for
> several use
> > cases (e.g., AsyncIO), we actually need to expose the executor
> anyways.
> >
> > We could rename MailboxExecutor to avoid exposing the name of the
> threading
> > model. For example, we could rename it to TaskThreadExecutor (but
> that's
> > pretty much the same), to CooperativeExecutor (again implies
> Mailbox), to
> > o.a.f.Executor, to DeferredExecutor... Ideas are welcome.
> >
> > We could also simply use Java's Executor interface, however, when
> working
> > with that interface, I found that the missing context of async
> executed
> > lambdas made debugging much much harder. So that's why I designed
> > MailboxExecutor to force the user to give some debug string to each
> > invocation. In the sink context, we could, however, use an adaptor from
> > MailboxExecutor to Java's Executor and simply bind the sink name to the
> > invocations.
> >
> > [1]
> >
> >
> https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/Executor.html
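
A minimal sketch of that adaptor idea (illustrative only; the package of
MailboxExecutor has moved between Flink versions, so the import is indicative):

    import java.util.concurrent.Executor;

    import org.apache.flink.api.common.operators.MailboxExecutor;

    // Wraps a MailboxExecutor as a plain java.util.concurrent.Executor and
    // binds a fixed debug description (here: the sink name) to every mail.
    public final class SinkExecutorAdapter {
        private SinkExecutorAdapter() {}

        public static Executor forSink(MailboxExecutor mailbox, String sinkName) {
            return command -> mailbox.execute(command::run, "sink '%s'", sinkName);
        }
    }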
> >
> > On Fri, Jul 23, 2021 at 5:36 PM Piotr Nowojski  >
> > wrote:
> >
> > > Hi,
> > >
> > > Regarding the question whether to expose the MailboxExecutor or
> not:
> > > 1. We have plans on exposing it in the ProcessFunction (in short
> we want
> > to
> > > make StreamOperator API private/internal only, and move all of
> it's extra
> > > functionality in one way or another to the ProcessFunction). I
> don't
> > > remember and I'm not sure if 

[jira] [Created] (FLINK-23585) The "stable version" link in the document should lead to the corresponding document page of the stable version, instead of to the front page

2021-08-02 Thread Caizhi Weng (Jira)
Caizhi Weng created FLINK-23585:
---

 Summary: The "stable version" link in the document should lead to 
the corresponding document page of the stable version, instead of to the front 
page
 Key: FLINK-23585
 URL: https://issues.apache.org/jira/browse/FLINK-23585
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.14.0
Reporter: Caizhi Weng


When visiting the documentation website of the master branch (for example 
[this|https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/table/sql/queries/window-agg/])
 there will be a notice on top of the website stating "This documentation is 
for an unreleased version of Apache Flink. We recommend you use the latest 
[stable version|https://ci.apache.org/projects/flink/flink-docs-stable/]."

When clicking the "stable version" link, the website will jump to the 
documentation front page of the stable version, instead of the corresponding 
page. It is actually quite annoying as one needs to search for the page they 
want again.

This issue also applies to the "the latest stable version" link when visiting 
the documentation of previous versions, and the "中文版"(Chinese version) link on 
the left side.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] FLIP-177: Extend Sink API

2021-08-02 Thread Guowei Ma
+1(binding)
Best,
Guowei


On Mon, Aug 2, 2021 at 4:04 PM Danny Cranmer 
wrote:

> +1 (binding)
>
> On Mon, Aug 2, 2021 at 12:42 AM Thomas Weise  wrote:
>
> > +1 (binding)
> >
> >
> > On Fri, Jul 30, 2021 at 5:05 AM Arvid Heise  wrote:
> >
> > > Hi all,
> > >
> > > I'd like to start a vote on FLIP-177: Extend Sink API [1] which
> provides
> > > small extensions to the Sink API introduced through FLIP-143.
> > > The vote will be open for at least 72 hours unless there is an
> objection
> > or
> > > not enough votes.
> > >
> > > Note that the FLIP was larger initially and I cut down all
> > > advanced/breaking changes.
> > >
> > > Best,
> > >
> > > Arvid
> > >
> > > [1]
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-177%3A+Extend+Sink+API
> > >
> >
>


[jira] [Created] (FLINK-23584) Introduce PythonCoProcessOperator and remove PythonCoFlatMapOperator & PythonCoMapOperator

2021-08-02 Thread Dian Fu (Jira)
Dian Fu created FLINK-23584:
---

 Summary: Introduce PythonCoProcessOperator and remove 
PythonCoFlatMapOperator & PythonCoMapOperator
 Key: FLINK-23584
 URL: https://issues.apache.org/jira/browse/FLINK-23584
 Project: Flink
  Issue Type: Sub-task
  Components: API / Python
Reporter: Dian Fu
Assignee: Dian Fu
 Fix For: 1.14.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23583) Easy way to remove metadata before serializing row data for connector implementations

2021-08-02 Thread Jira
Ingo Bürk created FLINK-23583:
-

 Summary: Easy way to remove metadata before serializing row data 
for connector implementations
 Key: FLINK-23583
 URL: https://issues.apache.org/jira/browse/FLINK-23583
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Reporter: Ingo Bürk
Assignee: Ingo Bürk


In FLINK-23537 we made JoinedRowData a public API, which helps when developing 
source connectors with format + metadata.

However, when implementing a sink connector (with format + metadata), we need 
an equivalent. The connector receives a RowData with appended metadata, but 
needs to pass only the row data without metadata to the SerializationSchema.
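As a rough sketch of what such a helper could look like (illustrative only, not an existing Flink utility; names and the exact shape are assumptions), a sink could copy only the leading physical fields into a new row before handing it to the SerializationSchema:

{code:java}
import java.util.List;

import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.types.logical.LogicalType;

// Drops the trailing metadata columns by copying only the physical fields
// (positions 0 .. physicalTypes.size() - 1) into a new row.
static RowData stripMetadata(RowData rowWithMetadata, List<LogicalType> physicalTypes) {
    GenericRowData physicalRow =
            new GenericRowData(rowWithMetadata.getRowKind(), physicalTypes.size());
    for (int i = 0; i < physicalTypes.size(); i++) {
        RowData.FieldGetter getter = RowData.createFieldGetter(physicalTypes.get(i), i);
        physicalRow.setField(i, getter.getFieldOrNull(rowWithMetadata));
    }
    return physicalRow;
}
{code}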



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23582) Add document for session window

2021-08-02 Thread JING ZHANG (Jira)
JING ZHANG created FLINK-23582:
--

 Summary: Add document for session window
 Key: FLINK-23582
 URL: https://issues.apache.org/jira/browse/FLINK-23582
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation
Reporter: JING ZHANG






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23581) The link to the Flink download page is broken

2021-08-02 Thread wuguihu (Jira)
wuguihu created FLINK-23581:
---

 Summary: The link to the Flink download page is broken
 Key: FLINK-23581
 URL: https://issues.apache.org/jira/browse/FLINK-23581
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Reporter: wuguihu
 Attachments: image-20210802152418242.png, image-20210802153010175.png

*1. The URL of the download link is not written correctly, so the link is invalid.*

 

The wrong link information is as follows:
{code:java}
 [download page]({{ site.download_url }}) 
{code}
It should be modified as follows:

 
{code:java}
 [download page](https://flink.apache.org/downloads.html) 
{code}
 

*2. The pages involved are as follows:*

1> 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/deployment/resource-providers/standalone/overview/]


2> 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/deployment/resource-providers/yarn/]


3> 
[https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/resource-providers/standalone/overview/]


4> 
[https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/resource-providers/yarn/]

 

*3. The documents involved are as follows:*
{code:java}
1. docs/content.zh/docs/deployment/resource-providers/standalone/overview.md
2. docs/content.zh/docs/deployment/resource-providers/yarn.md
3. docs/content/docs/deployment/resource-providers/standalone/overview.md
4. docs/content/docs/deployment/resource-providers/yarn.md
{code}
*4. The modifications involved are as follows:*
{code:java}
1.  [download page]({{ site.download_url }})   
Modified as follows: 
[download page](https://flink.apache.org/downloads.html)
 
2. [下载页面]({{ site.zh_download_url }}) 
Modified as follows: 
[下载页面](https://flink.apache.org/zh/downloads.html)
  
3. [Downloads / Additional 
Components]({{site.download_url}}#additional-components)
Modified as follows: 
[Downloads / Additional 
Components](https://flink.apache.org/downloads.html#additional-components)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23580) Cannot handle such jdbc url

2021-08-02 Thread chenpeng (Jira)
chenpeng created FLINK-23580:


 Summary: Cannot handle such jdbc url
 Key: FLINK-23580
 URL: https://issues.apache.org/jira/browse/FLINK-23580
 Project: Flink
  Issue Type: Bug
  Components: Connectors / JDBC
Affects Versions: 1.12.0
Reporter: chenpeng
 Attachments: image-2021-08-02-16-02-21-897.png

 

Caused by: java.lang.IllegalStateException: Cannot handle such jdbc url: 
jdbc:clickhouse://xx:8123/dict
{code:java}
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
org.apache.flink.table.api.ValidationException: Unable to create a source for 
reading table 'default_catalog.default_database.sink_table'.
Table options are:
'connector'='jdbc'
'driver'='ru.yandex.clickhouse.ClickHouseDriver'
'password'=''
'table-name'='tbl3_dict'
'url'='jdbc:clickhouse://xxx:8123/dict'
'username'='default'
 at 
org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:125)
 at 
org.apache.flink.table.planner.plan.schema.CatalogSourceTable.createDynamicTableSource(CatalogSourceTable.java:265)
 at 
org.apache.flink.table.planner.plan.schema.CatalogSourceTable.toRel(CatalogSourceTable.java:100)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:3585) 
at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2507)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2144)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2093)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2050)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:663)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:644)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3438)
 at 
org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:570)
 at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:165)
 at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:157)
 at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:823)
 at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:795)
 at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:250)
 at 
org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:78) 
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:639)
 at FlinkStreamSql.test7(FlinkStreamSql.java:212)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
 at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
 at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
 at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
 at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
 at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
 at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
 at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
 at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:69)
 at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
 at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:221)
 at 

Re: [VOTE] FLIP-177: Extend Sink API

2021-08-02 Thread Danny Cranmer
+1 (binding)

On Mon, Aug 2, 2021 at 12:42 AM Thomas Weise  wrote:

> +1 (binding)
>
>
> On Fri, Jul 30, 2021 at 5:05 AM Arvid Heise  wrote:
>
> > Hi all,
> >
> > I'd like to start a vote on FLIP-177: Extend Sink API [1] which provides
> > small extensions to the Sink API introduced through FLIP-143.
> > The vote will be open for at least 72 hours unless there is an objection
> or
> > not enough votes.
> >
> > Note that the FLIP was larger initially and I cut down all
> > advanced/breaking changes.
> >
> > Best,
> >
> > Arvid
> >
> > [1]
> >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-177%3A+Extend+Sink+API
> >
>


Re: Looking for Maintainers for Flink on YARN

2021-08-02 Thread Gabor Somogyi
Hi All,

Just arrived back from holiday and seen this thread.
I'm working with Marton and you can ping me directly on jira/PR in need.

BR,
G


On Mon, Aug 2, 2021 at 5:02 AM Xintong Song  wrote:

> Thanks Konstantin for bringing this up, and thanks Marton and your team for
> volunteering.
>
> @Marton,
> Our team at Alibaba will keep helping with the Flink on YARN maintenance.
> However, as Konstantin said, the time our team dedicates to this will
> probably be less than it used to be. Your offer of help is significant and
> greatly appreciated. Please feel free to reach out to us if you need any
> help on this.
>
> Thank you~
>
> Xintong Song
>
>
>
> On Thu, Jul 29, 2021 at 5:04 PM Márton Balassi 
> wrote:
>
> > Hi Konstantin,
> >
> > Thank you for raising this topic, our development team at Cloudera would
> be
> > happy to step up to address this responsibility.
> >
> > Best,
> > Marton
> >
> > On Thu, Jul 29, 2021 at 10:15 AM Konstantin Knauf 
> > wrote:
> >
> > > Dear community,
> > >
> > > We are looking for community members, who would like to maintain
> Flink's
> > > YARN support going forward. So far, this has been handled by teams at
> > > Ververica & Alibaba. The focus of these teams has shifted over the past
> > > months so that we only have little time left for this topic. Still, we
> > > think, it is important to maintain high quality support for Flink on
> > YARN.
> > >
> > > What does "Maintaining Flink on YARN" mean? There are no known bigger
> > > efforts outstanding. We are mainly talking about addressing
> > > "test-stability" issues, bugs, version upgrades, community
> contributions
> > &
> > > smaller feature requests. The prioritization of these would be up to
> the
> > > future maintainers, except "test-stability" issues which are important
> to
> > > address for overall productivity.
> > >
> > > If a group of community members forms itself, we are happy to give an
> > > introduction to relevant pieces of the code base, principles,
> > assumptions,
> > > ... and hand over open threads.
> > >
> > > If you would like to take on this responsibility or can join this
> effort
> > in
> > > a supporting role, please reach out!
> > >
> > > Cheers,
> > >
> > > Konstantin
> > > for the Deployment & Coordination Team at Ververica
> > >
> > > --
> > >
> > > Konstantin Knauf
> > >
> > > https://twitter.com/snntrable
> > >
> > > https://github.com/knaufk
> > >
> >
>


[RESULT] [VOTE] Release 1.12.5, release candidate #3

2021-08-02 Thread Jingsong Li
Dear all,

I'm happy to announce that we have unanimously approved the release of
1.12.5.

There are 10 approving votes, 4 of which are binding:

  *   Yun Tang
  *   Zakelly Lan
  *   Xingbo Huang
  *   Robert Metzger (binding)
  *   Godfrey He
  *   Jark Wu (binding)
  *   Leonard Xu
  *   Dian Fu (binding)
  *   Jing Zhang
  *   Yu Li  (binding)

There are no disapproving votes.

Thanks everyone!

Best,
Jingsong Lee


Re: [VOTE] Release 1.12.5, release candidate #3

2021-08-02 Thread Yu Li
+1 (binding)

- Checked the diff between 1.12.4 and 1.12.5-rc3: OK (
https://github.com/apache/flink/compare/release-1.12.4...release-1.12.5-rc3)
  - commons-io version has been bumped to 2.8.0 through FLINK-22747 and all
NOTICE files updated correctly
  - guava version has been bumped to 29.0 for kinesis connector through
FLINK-23009 and all NOTICE files updated correctly
- Checked release notes: OK
  - minor: I've moved FLINK-23577 out of 1.12.5
- Checked sums and signatures: OK
- Maven clean install from source: OK
- Checked the jars in the staging repo: OK
- Checked the website updates: Changes Required
  - Note: release note needs update but is not a blocker for the RC

Best Regards,
Yu


On Mon, 2 Aug 2021 at 12:25, JING ZHANG  wrote:

> +1 (non-binding)
>
> 1. built from source code flink-1.12.5-bin-scala_2.11.tgz
> <
> https://dist.apache.org/repos/dist/dev/flink/flink-1.12.5-rc3/flink-1.12.5-bin-scala_2.11.tgz
> >
> succeeded
> 2. Started a local Flink cluster, ran the WordCount example, WebUI looks
> good,  no suspicious output/log
> 3. started cluster and run some e2e sql queries using SQL Client, query
> result is expected. (Find a minor bug  FLINK-23554
>  which would happen in
> some corner cases, so it is not a blocker)
> 4. Repeat Step 2 and 3 with flink-1.12.5-src.tgz
> <
> https://dist.apache.org/repos/dist/dev/flink/flink-1.12.5-rc3/flink-1.12.5-src.tgz
> >
>
> Best,
> JING ZHANG
>
> On Mon, Aug 2, 2021 at 9:59 AM, Dian Fu  wrote:
>
> > +1 (binding)
> >
> > -  checked the website PR
> > -  verified the signatures and checksum
> > -  Installed the Python wheel package of Python 3.8 on MacOS and runs
> some
> > simple examples
> >
> > Regards,
> > Dian
> >
> > > On Aug 1, 2021, at 3:32 PM, Leonard Xu  wrote:
> > >
> > > +1 (non-binding)
> > >
> > > - verified signatures and hashsums
> > > - checked all dependency artifacts are 1.12.5
> > > - started a cluster, ran a wordcount job, the result is expected, no
> > suspicious log output
> > > - started SQL Client, ran some sql queries in SQL Client, the result is
> > expected, no suspicious log output
> > > - reviewed the web PR
> > >
> > > Best,
> > > Leonard
> > >
> > >> On Aug 1, 2021, at 15:24, Jark Wu  wrote:
> > >>
> > >> +1 (binding)
> > >>
> > >> - checked/verified signatures and hashes
> > >> - started cluster, ran examples, verified web ui and log output,
> nothing
> > >> unexpected
> > >> - started cluster and ran some e2e sql queries using sql-client, looks
> > good:
> > >> - read from kafka source, aggregate, write into mysql
> > >> - read from kafka source with watermark defined in ddl, window
> > aggregate,
> > >> write into mysql
> > >> - read from kafka with computed column defined in ddl, temporal join
> > with
> > >> a mysql table, write into kafka
> > >> - reviewed the release PR
> > >>
> > >> Best,
> > >> Jark
> > >>
> > >> On Sat, 31 Jul 2021 at 10:22, godfrey he  wrote:
> > >>
> > >>> +1 (non-binding)
> > >>>
> > >>> - Checked checksums and signatures: OK
> > >>> - Built from source: OK
> > >>> - Checked the flink-web PR
> > >>>  - find one typo about flink version
> > >>> - Submit some jobs from sql-client to local cluster, checked the
> > web-ui,
> > >>> cp, sp, log, etc: OK
> > >>>
> > >>> Best,
> > >>> Godfrey
> > >>>
> > >>> On Fri, Jul 30, 2021 at 4:33 PM, Robert Metzger  wrote:
> > >>>
> >  Thanks a lot for providing the new staging repository. I dropped the
> > 1440
> >  and 1441 staging repositories, to avoid that other RC reviewers
> >  accidentally look into it, or that we accidentally release it.
> > 
> >  +1 (binding)
> > 
> >  Checks:
> >  - I didn't find any additional issues in the release announcement
> >  - the pgp signatures on the source archive seem fine
> >  - source archive compilation starts successfully (rat check passes
> > etc.)
> >  - standalone mode, job submission and cli cancellation works. logs
> > look
> >  fine
> >  - maven staging repository looks fine
> > 
> >  On Fri, Jul 30, 2021 at 7:30 AM Jingsong Li  >
> >  wrote:
> > 
> > > Hi everyone,
> > >
> > > Thanks Robert, I created a new one.
> > >
> > > all artifacts to be deployed to the Maven Central Repository [4],
> > >
> > > [4]
> > >
> > >>>
> > https://repository.apache.org/content/repositories/orgapacheflink-1444/
> > >
> > > Best,
> > > Jingsong
> > >
> > > On Thu, Jul 29, 2021 at 9:50 PM Robert Metzger <
> rmetz...@apache.org>
> > > wrote:
> > >
> > >> The difference is that the 1440 staging repository contains the
> > Scala
> > > _2.11
> > >> files, the 1441 repo contains scala_2.12. I'm not sure if this
> > works,
> > >> because things like "flink-core:1.11.5" will be released twice?
> > >> I would prefer to have a single staging repository containing all
> > > binaries
> > >> we intend to release to maven central, to avoid complications in
> the
> > >> release process.
> > 

[FLINK-23555] needs your attention

2021-08-02 Thread 卫博文
Hi PMC & committers,


I have opened an issue about common subexpression elimination:
https://issues.apache.org/jira/browse/FLINK-23555
I hacked on the flink-table-planner module to improve common subexpression
elimination by using localRef, and the results show that my changes make a
big difference.
The issue was filed several days ago but has received little attention.
I am not sure how best to share my code, or whether the approach is suitable.
I would appreciate your help and feedback.


Yours sincerely

Re: [VOTE] Release 1.11.4, release candidate #1

2021-08-02 Thread Yu Li
+1 (binding)

- Checked the diff between 1.11.3 and 1.11.4-rc1: OK (
https://github.com/apache/flink/compare/release-1.11.3...release-1.11.4-rc1)
  - jackson version has been bumped to 2.10.5.1 through FLINK-21020 and all
NOTICE files updated correctly
  - beanutils version has been bumped to 1.9.4 through FLINK-21123 and all
NOTICE files updated correctly
  - snappy-java version has been bumped to 1.1.8.3 through FLINK-22208 and
all NOTICE files updated correctly
  - aws sdk version has been bumped to 1.12.7 for kinesis connector through
FLINK-18182 and all NOTICE files updated correctly
  - guava version has been bumped to 29.0 for kinesis connector through
FLINK-18182 and all NOTICE files updated correctly
- Checked release notes: OK
- Checked sums and signatures: OK
- Maven clean install from source: OK
- Checked the jars in the staging repo: OK
- Checked the website updates: Changes Required
  - Note: necessary changes needed before merging, but this is not a
blocker for RC

Best Regards,
Yu


On Mon, 2 Aug 2021 at 10:22, Dian Fu  wrote:

> +1 (binding)
>
> -  checked the website PR
> -  verified the signatures and checksum
> -  installed the Python wheel package of Python 3.7 on MacOS and runs the
> word count example
>
> Regards,
> Dian
>
> > On Aug 1, 2021, at 4:17 PM, Leonard Xu  wrote:
> >
> > +1 (non-binding)
> >
> > - verified signatures and hashsums
> > - built from source code with scala 2.11 succeeded
> > - checked all dependency artifacts are 1.11.4
> > - started a cluster, ran a wordcount job, the result is expected, no
> suspicious log output
> > - started SQL Client, ran some SQL queries, the result is expected
> > - reviewed the web PR
> >
> > Best,
> > Leonard
> >
> >> On Jul 26, 2021, at 23:25, godfrey he  wrote:
> >>
> >> Hi everyone,
> >> Please review and vote on the release candidate #1 for the version
> 1.11.4,
> >> as follows:
> >> [ ] +1, Approve the release
> >> [ ] -1, Do not approve the release (please provide specific comments)
> >>
> >>
> >> The complete staging area is available for your review, which includes:
> >> * JIRA release notes [1],
> >> * the official Apache source release and binary convenience releases to
> be
> >> deployed to dist.apache.org [2], which are signed with the key with
> >> fingerprint 4A978875E56AA2100EB0CF12A244D52CF0A40279 [3],
> >> * all artifacts to be deployed to the Maven Central Repository [4],
> >> * source code tag "release-1.11.4-rc1" [5],
> >> * website pull request listing the new release and adding announcement
> blog
> >> post [6].
> >>
> >> The vote will be open for at least 72 hours. It is adopted by majority
> >> approval, with at least 3 PMC affirmative votes.
> >>
> >> Best,
> >> Godfrey
> >>
> >> [1]
> >>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12349404
> >> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.11.4-rc1/
> >> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> >> [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1438
> >> [5] https://github.com/apache/flink/releases/tag/release-1.11.4-rc1
> >> [6] https://github.com/apache/flink-web/pull/459
> >
>
>


[jira] [Created] (FLINK-23579) SELECT SHA2(NULL, CAST(NULL AS INT)) AS ref0 can't compile

2021-08-02 Thread xiaojin.wy (Jira)
xiaojin.wy created FLINK-23579:
--

 Summary: SELECT SHA2(NULL, CAST(NULL AS INT)) AS ref0 can't compile
 Key: FLINK-23579
 URL: https://issues.apache.org/jira/browse/FLINK-23579
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.14.0
Reporter: xiaojin.wy
 Fix For: 1.14.0


Running the SQL statement SELECT SHA2(NULL, CAST(NULL AS INT)) AS ref0 produces 
the error below:
java.lang.RuntimeException: Could not instantiate generated class 
'ExpressionReducer$5'

at 
org.apache.flink.table.runtime.generated.GeneratedClass.newInstance(GeneratedClass.java:75)
at 
org.apache.flink.table.planner.codegen.ExpressionReducer.reduce(ExpressionReducer.scala:108)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressionsInternal(ReduceExpressionsRule.java:759)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule.reduceExpressions(ReduceExpressionsRule.java:699)
at 
org.apache.calcite.rel.rules.ReduceExpressionsRule$ProjectReduceExpressionsRule.onMatch(ReduceExpressionsRule.java:306)
at 
org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
at 
org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
at 
org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
at 
org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
at 
org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
at 
org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepProgram.optimize(FlinkHepProgram.scala:69)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkHepRuleSetProgram.optimize(FlinkHepRuleSetProgram.scala:87)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:62)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram$$anonfun$optimize$1.apply(FlinkChainedProgram.scala:58)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at 
scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractTraversable.foldLeft(Traversable.scala:104)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChainedProgram.optimize(FlinkChainedProgram.scala:57)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.optimizeTree(BatchCommonSubGraphBasedOptimizer.scala:87)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.org$apache$flink$table$planner$plan$optimize$BatchCommonSubGraphBasedOptimizer$$optimizeBlock(BatchCommonSubGraphBasedOptimizer.scala:58)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer$$anonfun$doOptimize$1.apply(BatchCommonSubGraphBasedOptimizer.scala:46)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at 
org.apache.flink.table.planner.plan.optimize.BatchCommonSubGraphBasedOptimizer.doOptimize(BatchCommonSubGraphBasedOptimizer.scala:46)
at 
org.apache.flink.table.planner.plan.optimize.CommonSubGraphBasedOptimizer.optimize(CommonSubGraphBasedOptimizer.scala:93)
at 
org.apache.flink.table.planner.delegation.PlannerBase.optimize(PlannerBase.scala:307)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:169)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:1718)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:803)
at 
org.apache.flink.table.api.internal.StatementSetImpl.execute(StatementSetImpl.java:125)
at 
org.apache.flink.table.planner.runtime.batch.sql.TableSourceITCase.testTableXiaojin(TableSourceITCase.scala:487)
at 

[jira] [Created] (FLINK-23578) Remove Python operators PythonFlatMapOperator/PythonMapOperator/PythonPartitionCustomOperator and use PythonProcessOperator instead

2021-08-02 Thread Dian Fu (Jira)
Dian Fu created FLINK-23578:
---

 Summary: Remove Python operators 
PythonFlatMapOperator/PythonMapOperator/PythonPartitionCustomOperator and use 
PythonProcessOperator instead
 Key: FLINK-23578
 URL: https://issues.apache.org/jira/browse/FLINK-23578
 Project: Flink
  Issue Type: Sub-task
  Components: API / Python
Reporter: Dian Fu
Assignee: Dian Fu
 Fix For: 1.14.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)