[jira] [Created] (FLINK-23026) OVER WINDOWS function lost data

2021-06-17 Thread MOBIN (Jira)
MOBIN created FLINK-23026:
-

 Summary: OVER WINDOWS function lost data
 Key: FLINK-23026
 URL: https://issues.apache.org/jira/browse/FLINK-23026
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API, Table SQL / Client
Affects Versions: 1.12.1
Reporter: MOBIN
 Attachments: image-2021-06-18-10-54-18-125.png

{code:java}
Flink SQL> CREATE TABLE tmall_item(
>   itemID VARCHAR,
>   itemType VARCHAR,
>   eventtime varchar,
>   onSellTime AS TO_TIMESTAMP(eventtime),
>   price DOUBLE,
>   WATERMARK FOR onSellTime AS onSellTime - INTERVAL '0' SECOND
> ) with (
>   'connector.type' = 'kafka',
>'connector.version' = 'universal',
>'connector.topic' = 'items',
>'format.type' = 'csv',
>'connector.properties.bootstrap.servers' = 'localhost:9092'
> );
>
[INFO] Table has been created.

Flink SQL> SELECT
> itemType,
> COUNT(itemID) OVER (
> PARTITION BY itemType
> ORDER BY onSellTime
> RANGE BETWEEN INTERVAL '1' DAY preceding AND CURRENT ROW) AS cot
> FROM tmall_item;
{code}

When I enter the following data into the topic, the Electronic count value is 
3, but it should be 4. If two records have the same event time and the same 
value in the partition field, data is lost:

ITEM001,Electronic,2017-11-11 10:01:00,20
{color:red}ITEM002{color},Electronic,{color:red}2017-11-11 10:02:00{color},50
{color:red}ITEM002{color},Electronic,{color:red}2017-11-11 10:02:00{color},50
ITEM003,Electronic,2017-11-11 10:03:00,50

!image-2021-06-18-10-54-18-125.png|width=1066,height=177!
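
To narrow this down, a ROWS-based variant of the same aggregation counts physical rows instead of RANGE peers, so it should report 4 for the data above. A minimal sketch for comparison (the ROWS clause is my substitution; the DDL matches the report):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class OverWindowRowsComparison {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Same DDL as in the report above.
        tEnv.executeSql(
                "CREATE TABLE tmall_item ("
                        + "  itemID VARCHAR,"
                        + "  itemType VARCHAR,"
                        + "  eventtime VARCHAR,"
                        + "  onSellTime AS TO_TIMESTAMP(eventtime),"
                        + "  price DOUBLE,"
                        + "  WATERMARK FOR onSellTime AS onSellTime - INTERVAL '0' SECOND"
                        + ") WITH ("
                        + "  'connector.type' = 'kafka',"
                        + "  'connector.version' = 'universal',"
                        + "  'connector.topic' = 'items',"
                        + "  'format.type' = 'csv',"
                        + "  'connector.properties.bootstrap.servers' = 'localhost:9092'"
                        + ")");

        // ROWS counts every physical row, so the two records that share the
        // same event time and partition value are counted separately. If this
        // returns 4 where the RANGE query returns 3, the loss is specific to
        // how RANGE handles peer rows with identical ORDER BY values.
        tEnv.executeSql(
                "SELECT itemType, COUNT(itemID) OVER ("
                        + "  PARTITION BY itemType ORDER BY onSellTime"
                        + "  ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS cot"
                        + " FROM tmall_item")
                .print();
    }
}
{code}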



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Re: [ANNOUNCE] New PMC member: Xintong Song

2021-06-17 Thread Yu Li
Congratulations, Xintong!

Best Regards,
Yu


On Thu, 17 Jun 2021 at 15:23, Yuan Mei  wrote:

> Congratulations, Xintong :-)
>
> On Thu, Jun 17, 2021 at 11:57 AM Xingbo Huang  wrote:
>
> > Congratulations, Xintong!
> >
> > Best,
> > Xingbo
> >
> > On Thu, Jun 17, 2021 at 10:46 AM Yun Gao  wrote:
> >
> > > Congratulations, Xintong!
> > >
> > > Best,
> > > Yun
> > >
> > >
> > > --
> > > Sender:Jingsong Li
> > > Date:2021/06/17 10:41:22
> > > Recipient:dev
> > > Theme:Re: [ANNOUNCE] New PMC member: Xintong Song
> > >
> > > Congratulations, Xintong!
> > >
> > > Best,
> > > Jingsong
> > >
> > > On Thu, Jun 17, 2021 at 10:26 AM Yun Tang  wrote:
> > >
> > > > Congratulations, Xintong!
> > > >
> > > > Best
> > > > Yun Tang
> > > > 
> > > > From: Leonard Xu 
> > > > Sent: Wednesday, June 16, 2021 21:05
> > > > To: dev (dev@flink.apache.org) 
> > > > Subject: Re: [ANNOUNCE] New PMC member: Xintong Song
> > > >
> > > >
> > > > Congratulations, Xintong!
> > > >
> > > >
> > > > Best,
> > > > Leonard
> > > > > On Jun 16, 2021, at 20:07, Till Rohrmann  wrote:
> > > > >
> > > > > Congratulations, Xintong!
> > > > >
> > > > > Cheers,
> > > > > Till
> > > > >
> > > > > On Wed, Jun 16, 2021 at 1:47 PM JING ZHANG 
> > > wrote:
> > > > >
> > > > >> Congratulations, Xintong!
> > > > >>
> > > > >>
> > > > >> On Wed, Jun 16, 2021 at 7:30 PM Jiayi Liao  wrote:
> > > > >>
> > > > 
> > > >  <
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/>
> > > >  Congratulations Xintong!
> > > > 
> > > >  On Wed, Jun 16, 2021 at 7:24 PM Nicholas Jiang <
> > programg...@163.com
> > > >
> > > >  wrote:
> > > > 
> > > > > Congratulations, Xintong!
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Sent from:
> > > > >
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/
> > > > 
> > > > 
> > > > >>>
> > > > >>
> > > >
> > > >
> > >
> > > --
> > > Best, Jingsong Lee
> > >
> > >
> >
>


Re: [DISCUSS] Dashboard/HistoryServer authentication

2021-06-17 Thread Austin Cawley-Edwards
Hi all,

Sorry to be joining the conversation late. I'm also on the side of
Konstantin, generally, in that this seems to not be a core goal of Flink as
a project and adds a maintenance burden.

Would another con of Kerberos be that it is likely a fading project in terms
of network security? (A serious question; please correct me if there is
reason to believe it is gaining adoption.)

The point about Kerberos being independent of infrastructure is a good one
but is something that is also solved by modern sidecar proxies + service
meshes that can run across Kubernetes and bare-metal. These solutions also
handle certificate provisioning, rotation, etc. in addition to higher-level
authorization policies. Some examples of projects with this "universal
infrastructure support" are Kuma[1] (CNCF Sandbox, I'm a maintainer) and
Istio[2] (Google).

Wondering out loud: has anyone tried to run Flink on top of cilium[3],
which also provides zero-trust networking at the kernel level without
needing to instrument applications? This currently only runs on Kubernetes
on Linux, so that's a major limitation, but solves many of the request
forging concerns at all levels.

Thanks,
Austin

[1]: https://kuma.io/docs/1.1.6/quickstart/universal/
[2]: https://istio.io/latest/docs/setup/install/virtual-machine/
[3]: https://cilium.io/

On Thu, Jun 17, 2021 at 1:50 PM Till Rohrmann  wrote:

> I left some comments in the Google document. It would be great if
> someone from the community with security experience could also take a look
> at it. Maybe Eron you have an opinion on the topic.
>
> Cheers,
> Till
>
> On Thu, Jun 17, 2021 at 6:57 PM Till Rohrmann 
> wrote:
>
> > Hi Gabor,
> >
> > I haven't found time to look into the updated FLIP yet. I'll try to do it
> > asap.
> >
> > Cheers,
> > Till
> >
> > On Wed, Jun 16, 2021 at 9:35 PM Konstantin Knauf 
> > wrote:
> >
> >> Hi Gabor,
> >>
> >> > However representing Kerberos as completely new feature is not true
> >> because
> >> it's already in since Flink makes authentication at least with HDFS and
> >> Hbase through Kerberos.
> >>
> >> True, that is one way to look at it, but there are differences, too:
> >> Control Plane vs Data Plane, Core vs Connectors.
> >>
> >> > Adding OIDC or OAuth2 has the exact same concerns what you've guys
> just
> >> raised. Why exactly these? If you think this would be beneficial we can
> >> discuss it in detail
> >>
> >> That's exactly my point. Once we start adding authx support, we will
> >> sooner or later discuss other options besides Kerberos, too. A user who
> >> would like to use OAuth can not easily use Kerberos, right?
> >> That is one of the reasons I am skeptical about adding initial authx
> >> support.
> >>
> >> > Related authorization you've mentioned it can be complicated over
> time.
> >> Can
> >> you show us an example? We've knowledge with couple of open source
> >> components
> >> but authorization was never a horror complex story. I personally have
> the
> >> most experience with Spark which I think is quite simple and stable.
> Users
> >> can be viewers/admins
> >> and jobs started by others can't be modified. If you can share an
> example
> >> over-complication we can discuss on facts.
> >>
> >> Authorization is a new aspect that needs to be considered for every
> >> addition to the REST API. In the future users might ask for additional
> >> roles (e.g. an editor), user-defined roles and you've already mentioned
> >> job-level permissions yourself. And keep in mind that there might also
> be
> >> larger additions in the future like the flink-sql-gateway. Contributions
> >> like this become more expensive the more aspects we need to consider.
> >>
> >> In general, I believe, it is important that the community focuses its
> >> efforts where we can generate the most value to the user and -
> personally -
> >> I don't think there is much to gain by extending Flink's scope in that
> >> direction. Of course, this is not black and white and there are other
> valid
> >> opinions.
> >>
> >> Thanks,
> >>
> >> Konstantin
> >>
> >> On Wed, Jun 16, 2021 at 7:38 PM Gabor Somogyi <
> gabor.g.somo...@gmail.com>
> >> wrote:
> >>
> >>> Hi Konstantin,
> >>>
> >>> Thanks for the response. Related new feature introduction in case of
> >>> Basic
> >>> auth I tend to agree, anything else can be chosen.
> >>>
> >>> However representing Kerberos as completely new feature is not true
> >>> because
> >>> it's already in since Flink makes authentication at least with HDFS and
> >>> Hbase through Kerberos.
> >>> The main problem with the actual Kerberos implementation is that it
> >>> contains several bugs and only partially implemented. Following your
> >>> suggestion can we agree that we
> >>> skip the Basic auth implementation and finish an already started
> Kerberos
> >>> story by adding History Server and Job Dashboard authentication?
> >>>
> >>> Adding OIDC or OAuth2 has the exact same concerns what you've guys just
> >>> raised. Why exactly these? If you think 

Re: [DISCUSS] Dashboard/HistoryServer authentication

2021-06-17 Thread Till Rohrmann
I left some comments in the Google document. It would be great if
someone from the community with security experience could also take a look
at it. Maybe Eron you have an opinion on the topic.

Cheers,
Till

On Thu, Jun 17, 2021 at 6:57 PM Till Rohrmann  wrote:

> Hi Gabor,
>
> I haven't found time to look into the updated FLIP yet. I'll try to do it
> asap.
>
> Cheers,
> Till
>
> On Wed, Jun 16, 2021 at 9:35 PM Konstantin Knauf 
> wrote:
>
>> Hi Gabor,
>>
>> > However representing Kerberos as completely new feature is not true
>> because
>> it's already in since Flink makes authentication at least with HDFS and
>> Hbase through Kerberos.
>>
>> True, that is one way to look at it, but there are differences, too:
>> Control Plane vs Data Plane, Core vs Connectors.
>>
>> > Adding OIDC or OAuth2 has the exact same concerns what you've guys just
>> raised. Why exactly these? If you think this would be beneficial we can
>> discuss it in detail
>>
>> That's exactly my point. Once we start adding authx support, we will
>> sooner or later discuss other options besides Kerberos, too. A user who
>> would like to use OAuth can not easily use Kerberos, right?
>> That is one of the reasons I am skeptical about adding initial authx
>> support.
>>
>> > Related authorization you've mentioned it can be complicated over time.
>> Can
>> you show us an example? We've knowledge with couple of open source
>> components
>> but authorization was never a horror complex story. I personally have the
>> most experience with Spark which I think is quite simple and stable. Users
>> can be viewers/admins
>> and jobs started by others can't be modified. If you can share an example
>> over-complication we can discuss on facts.
>>
>> Authorization is a new aspect that needs to be considered for every
>> addition to the REST API. In the future users might ask for additional
>> roles (e.g. an editor), user-defined roles and you've already mentioned
>> job-level permissions yourself. And keep in mind that there might also be
>> larger additions in the future like the flink-sql-gateway. Contributions
>> like this become more expensive the more aspects we need to consider.
>>
>> In general, I believe, it is important that the community focuses its
>> efforts where we can generate the most value to the user and - personally -
>> I don't think there is much to gain by extending Flink's scope in that
>> direction. Of course, this is not black and white and there are other valid
>> opinions.
>>
>> Thanks,
>>
>> Konstantin
>>
>> On Wed, Jun 16, 2021 at 7:38 PM Gabor Somogyi 
>> wrote:
>>
>>> Hi Konstantin,
>>>
>>> Thanks for the response. Related new feature introduction in case of
>>> Basic
>>> auth I tend to agree, anything else can be chosen.
>>>
>>> However representing Kerberos as completely new feature is not true
>>> because
>>> it's already in since Flink makes authentication at least with HDFS and
>>> Hbase through Kerberos.
>>> The main problem with the actual Kerberos implementation is that it
>>> contains several bugs and only partially implemented. Following your
>>> suggestion can we agree that we
>>> skip the Basic auth implementation and finish an already started Kerberos
>>> story by adding History Server and Job Dashboard authentication?
>>>
>>> Adding OIDC or OAuth2 has the exact same concerns what you've guys just
>>> raised. Why exactly these? If you think this would be beneficial we can
>>> discuss it in detail
>>> but as a side story it would be good to finish a halfway done Kerberos
>>> story.
>>>
>>> Related authorization you've mentioned it can be complicated over time.
>>> Can
>>> you show us an example? We've knowledge with couple of open source
>>> components
>>> but authorization was never a horror complex story. I personally have the
>>> most experience with Spark which I think is quite simple and stable.
>>> Users
>>> can be viewers/admins
>>> and jobs started by others can't be modified. If you can share an example
>>> over-complication we can discuss on facts.
>>>
>>> Thank you in advance!
>>>
>>> BR,
>>> G
>>>
>>>
>>> On Wed, Jun 16, 2021 at 5:42 PM Konstantin Knauf 
>>> wrote:
>>>
>>> > Hi everyone,
>>> >
>>> > sorry for joining late and thanks for the insightful discussion.
>>> >
>>> > In general, I'd personally prefer not to increase the surface area of
>>> > Apache Flink unless there is a good reason. It seems we all agree that
>>> > authx is not part of the core value proposition of Apache Flink, so if
>>> we
>>> > can delegate this problem to a more specialized tool, I am in favor of
>>> > that. Apache Flink is already huge and a lot of work goes into
>>> maintenance,
>>> > so I personally have become more sensitive to this aspect over time.
>>> >
>>> > If we add support for Basic Auth and Kerberos now, users will sooner or
>>> > later ask for OIDC, LDAP, SAML,... I acknowledge that Kerberos is
>>> widely
>>> > used in the corporate, on-premises context, but isn't the focus moving
>>> more
>>> > towards more 

Re: [DISCUSS] Dashboard/HistoryServer authentication

2021-06-17 Thread Till Rohrmann
Hi Gabor,

I haven't found time to look into the updated FLIP yet. I'll try to do it
asap.

Cheers,
Till

On Wed, Jun 16, 2021 at 9:35 PM Konstantin Knauf  wrote:

> Hi Gabor,
>
> > However representing Kerberos as completely new feature is not true
> because
> it's already in since Flink makes authentication at least with HDFS and
> Hbase through Kerberos.
>
> True, that is one way to look at it, but there are differences, too:
> Control Plane vs Data Plane, Core vs Connectors.
>
> > Adding OIDC or OAuth2 has the exact same concerns what you've guys just
> raised. Why exactly these? If you think this would be beneficial we can
> discuss it in detail
>
> That's exactly my point. Once we start adding authx support, we will
> sooner or later discuss other options besides Kerberos, too. A user who
> would like to use OAuth can not easily use Kerberos, right?
> That is one of the reasons I am skeptical about adding initial authx
> support.
>
> > Related authorization you've mentioned it can be complicated over time.
> Can
> you show us an example? We've knowledge with couple of open source
> components
> but authorization was never a horror complex story. I personally have the
> most experience with Spark which I think is quite simple and stable. Users
> can be viewers/admins
> and jobs started by others can't be modified. If you can share an example
> over-complication we can discuss on facts.
>
> Authorization is a new aspect that needs to be considered for every
> addition to the REST API. In the future users might ask for additional
> roles (e.g. an editor), user-defined roles and you've already mentioned
> job-level permissions yourself. And keep in mind that there might also be
> larger additions in the future like the flink-sql-gateway. Contributions
> like this become more expensive the more aspects we need to consider.
>
> In general, I believe, it is important that the community focuses its
> efforts where we can generate the most value to the user and - personally -
> I don't think there is much to gain by extending Flink's scope in that
> direction. Of course, this is not black and white and there are other valid
> opinions.
>
> Thanks,
>
> Konstantin
>
> On Wed, Jun 16, 2021 at 7:38 PM Gabor Somogyi 
> wrote:
>
>> Hi Konstantin,
>>
>> Thanks for the response. Related new feature introduction in case of Basic
>> auth I tend to agree, anything else can be chosen.
>>
>> However representing Kerberos as completely new feature is not true
>> because
>> it's already in since Flink makes authentication at least with HDFS and
>> Hbase through Kerberos.
>> The main problem with the actual Kerberos implementation is that it
>> contains several bugs and only partially implemented. Following your
>> suggestion can we agree that we
>> skip the Basic auth implementation and finish an already started Kerberos
>> story by adding History Server and Job Dashboard authentication?
>>
>> Adding OIDC or OAuth2 has the exact same concerns what you've guys just
>> raised. Why exactly these? If you think this would be beneficial we can
>> discuss it in detail
>> but as a side story it would be good to finish a halfway done Kerberos
>> story.
>>
>> Related authorization you've mentioned it can be complicated over time.
>> Can
>> you show us an example? We've knowledge with couple of open source
>> components
>> but authorization was never a horror complex story. I personally have the
>> most experience with Spark which I think is quite simple and stable. Users
>> can be viewers/admins
>> and jobs started by others can't be modified. If you can share an example
>> over-complication we can discuss on facts.
>>
>> Thank you in advance!
>>
>> BR,
>> G
>>
>>
>> On Wed, Jun 16, 2021 at 5:42 PM Konstantin Knauf 
>> wrote:
>>
>> > Hi everyone,
>> >
>> > sorry for joining late and thanks for the insightful discussion.
>> >
>> > In general, I'd personally prefer not to increase the surface area of
>> > Apache Flink unless there is a good reason. It seems we all agree that
>> > authx is not part of the core value proposition of Apache Flink, so if
>> we
>> > can delegate this problem to a more specialized tool, I am in favor of
>> > that. Apache Flink is already huge and a lot of work goes into
>> maintenance,
>> > so I personally have become more sensitive to this aspect over time.
>> >
>> > If we add support for Basic Auth and Kerberos now, users will sooner or
>> > later ask for OIDC, LDAP, SAML,... I acknowledge that Kerberos is widely
>> > used in the corporate, on-premises context, but isn't the focus moving
>> more
>> > towards more web-friendly standards like OIDC/OAuth 2.0? If we only
>> want to
>> > support a single protocol, there is an argument to be made that it
>> should
>> > be OIDC and Dex [1,2] as a bridge to everything else. Have OIDC or
>> OAuth2
>> > been considered instead of Kerberos? How do you see the market moving?
>> But
>> > as I said before, in my opinion we can generate more value by investing
>> > into 

[jira] [Created] (FLINK-23025) sink-buffer-max-rows and sink-buffer-flush-interval options produce a lot of duplicates

2021-06-17 Thread Johannes Moser (Jira)
Johannes Moser created FLINK-23025:
--

 Summary: sink-buffer-max-rows and sink-buffer-flush-interval 
options produce a lot of duplicates
 Key: FLINK-23025
 URL: https://issues.apache.org/jira/browse/FLINK-23025
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.13.1
Reporter: Johannes Moser


Using the 
[sink-buffer-flush-max-rows|https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/table/upsert-kafka/#sink-buffer-flush-interval]
 and 
[sink-buffer-flush-interval|https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/table/upsert-kafka/#sink-buffer-flush-interval]
 options for a kafka sink produces a lot of duplicate key/values in the target 
kafka topic. Maybe the {{BufferedUpsertSinkFunction}} should clone the buffered 
key/value RowData objects, but it doesn’t. Seems like in [line 
134|https://github.com/apache/flink/blob/60c7d9e77a6e9d82e0feb33f0d8bc263dddf2fd9/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/table/BufferedUpsertSinkFunction.java#L133-L137]
 the condition should be negated or the ternary operator results swapped:
{code:java}
this.valueCopier =
 getRuntimeContext().getExecutionConfig().isObjectReuseEnabled()
 ? Function.identity()
 : typeSerializer::copy;{code}

(the JDBC sink implements the same logic, but with the ternary operator 
results swapped)
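
For clarity, a sketch of the corrected assignment with the ternary results swapped, as described above (only the ternary changes; the surrounding code is from the report):

{code:java}
// When object reuse is enabled, the buffered RowData objects may be mutated
// by the runtime before the flush, so they must be copied; passing them
// through unchanged is only safe when object reuse is disabled.
this.valueCopier =
    getRuntimeContext().getExecutionConfig().isObjectReuseEnabled()
        ? typeSerializer::copy
        : Function.identity();
{code}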

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23024) RPC result TaskManagerInfoWithSlots not serializable

2021-06-17 Thread Arvid Heise (Jira)
Arvid Heise created FLINK-23024:
---

 Summary: RPC result TaskManagerInfoWithSlots not serializable
 Key: FLINK-23024
 URL: https://issues.apache.org/jira/browse/FLINK-23024
 Project: Flink
  Issue Type: Bug
  Components: Runtime / REST
Affects Versions: 1.13.1, 1.14.0
Reporter: Arvid Heise
 Fix For: 1.14.0, 1.13.2


A user reported the following stacktrace while accessing the web UI.


{noformat}
Unhandled exception.
org.apache.flink.runtime.rpc.akka.exceptions.AkkaRpcException: Failed
to serialize the result for RPC call : requestTaskManagerDetailsInfo.
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.serializeRemoteResultAndVerifySize(AkkaRpcActor.java:404)
~[flink-dist_2.11-1.13.1.jar:1.13.1] at
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.lambda$sendAsyncResponse$0(AkkaRpcActor.java:360)
~[flink-dist_2.11-1.13.1.jar:1.13.1] at
java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)
~[?:1.8.0_251] at
java.util.concurrent.CompletableFuture.uniHandleStage(CompletableFuture.java:848)
~[?:1.8.0_251] at
java.util.concurrent.CompletableFuture.handle(CompletableFuture.java:2168)
~[?:1.8.0_251] at
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.sendAsyncResponse(AkkaRpcActor.java:352)
~[flink-dist_2.11-1.13.1.jar:1.13.1] at
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:319)
~[flink-dist_2.11-1.13.1.jar:1.13.1] at
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:212)
~[flink-dist_2.11-1.13.1.jar:1.13.1] at
org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:77)
~[flink-dist_2.11-1.13.1.jar:1.13.1] at
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:158)
~[flink-dist_2.11-1.13.1.jar:1.13.1] at
akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
~[flink-dist_2.11-1.13.1.jar:1.13.1] at
akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
~[flink-dist_2.11-1.13.1.jar:1.13.1] at
scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
~[flink-dist_2.11-1.13.1.jar:1.13.1] at
akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
~[flink-dist_2.11-1.13.1.jar:1.13.1] at
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
~[flink-dist_2.11-1.13.1.jar:1.13.1] at
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
~[flink-dist_2.11-1.13.1.jar:1.13.1] at
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
~[flink-dist_2.11-1.13.1.jar:1.13.1] at
akka.actor.Actor$class.aroundReceive(Actor.scala:517)
~[flink-dist_2.11-1.13.1.jar:1.13.1] at
akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
~[flink-dist_2.11-1.13.1.jar:1.13.1] at
akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
[flink-dist_2.11-1.13.1.jar:1.13.1] at
akka.actor.ActorCell.invoke(ActorCell.scala:561)
[flink-dist_2.11-1.13.1.jar:1.13.1] at
akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
[flink-dist_2.11-1.13.1.jar:1.13.1] at
akka.dispatch.Mailbox.run(Mailbox.scala:225)
[flink-dist_2.11-1.13.1.jar:1.13.1] at
akka.dispatch.Mailbox.exec(Mailbox.scala:235)
[flink-dist_2.11-1.13.1.jar:1.13.1] at
akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
[flink-dist_2.11-1.13.1.jar:1.13.1] at
akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
[flink-dist_2.11-1.13.1.jar:1.13.1] at
akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
[flink-dist_2.11-1.13.1.jar:1.13.1] at
akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[flink-dist_2.11-1.13.1.jar:1.13.1] Caused by:
java.io.NotSerializableException:
org.apache.flink.runtime.resourcemanager.TaskManagerInfoWithSlots at
java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
~[?:1.8.0_251] at
java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
~[?:1.8.0_251] at
org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:624)
~[flink-dist_2.11-1.13.1.jar:1.13.1] at
org.apache.flink.runtime.rpc.akka.AkkaRpcSerializedValue.valueOf(AkkaRpcSerializedValue.java:66)
~[flink-dist_2.11-1.13.1.jar:1.13.1] at
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.serializeRemoteResultAndVerifySize(AkkaRpcActor.java:387)
~[flink-dist_2.11-1.13.1.jar:1.13.1] ... 27 more
{noformat}
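
For context, a minimal standalone illustration of the failure mode (the Payload class is hypothetical, standing in for TaskManagerInfoWithSlots): Java serialization rejects any RPC result type that does not implement Serializable.

{noformat}
import java.io.ByteArrayOutputStream;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;

public class SerializationCheck {
    // Hypothetical RPC result that, like TaskManagerInfoWithSlots, does not
    // implement java.io.Serializable.
    static class Payload {
        final String taskManagerId = "taskmanager-1";
    }

    public static void main(String[] args) throws Exception {
        try (ObjectOutputStream oos = new ObjectOutputStream(new ByteArrayOutputStream())) {
            oos.writeObject(new Payload()); // throws NotSerializableException
        } catch (NotSerializableException e) {
            System.out.println("Not serializable: " + e.getMessage());
        }
    }
}
{noformat}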




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Output from RichAsyncFunction on failure

2021-06-17 Thread Arvid Heise
Hi Satish,

usually you would use side outputs [1] for that, but afaik asyncIO doesn't
support them (yet).
So your option of using some union type works well. You can then chain a
process function that uses side outputs.

[1]
https://ci.apache.org/projects/flink/flink-docs-master/docs/dev/datastream/side_output/
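
A rough sketch of that wiring, assuming a union type with a failure field (all names here are made up for illustration):

import java.util.concurrent.TimeUnit;

import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.streaming.api.functions.async.AsyncFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

// Union type emitted by the async function: failureCause is set only when
// all retries have failed.
class EnrichedEvent {
    String original;
    String enrichment;
    String failureCause;
}

public class FailureSideOutputSketch {
    static DataStream<EnrichedEvent> wire(
            DataStream<String> events, AsyncFunction<String, EnrichedEvent> enricher) {
        final OutputTag<EnrichedEvent> failedTag = new OutputTag<EnrichedEvent>("failed-calls") {};

        SingleOutputStreamOperator<EnrichedEvent> enriched = AsyncDataStream
                .unorderedWait(events, enricher, 30, TimeUnit.SECONDS)
                .process(new ProcessFunction<EnrichedEvent, EnrichedEvent>() {
                    @Override
                    public void processElement(
                            EnrichedEvent value, Context ctx, Collector<EnrichedEvent> out) {
                        if (value.failureCause != null) {
                            ctx.output(failedTag, value); // failed calls go to the side output
                        } else {
                            out.collect(value);
                        }
                    }
                });

        // This stream can then be written to the sink for the failure topic.
        return enriched.getSideOutput(failedTag);
    }
}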

On Fri, Jun 11, 2021 at 7:49 PM Satish Saley  wrote:

> One way I thought to achieve this is -
> 1. For failures, add a special record to collection in RichAsyncFunction
> 2. Filter out those special records from the DataStream and push to
> another Kafka
> Let me know if it makes sense.
>
>
> On Fri, Jun 11, 2021 at 10:40 AM Satish Saley 
> wrote:
>
>> Hi,
>> - I have a kafka consumer to read events.
>> - Then, I have RichAsyncFunction to call a remote service to get
>> more information about that event.
>>
>> If the remote call fails after X number of retries, I don't want flink to
>> fail the job and start processing from the beginning. Instead I would like
>> to push info about failed call to another Kafka topic. Is there a way to
>> achieve this?
>>
>


Assign-evenly kafkaTopicPartitions of multiple topics to flinkKafkaConsumer subtask

2021-06-17 Thread 徐小龙



Hi, all:


  When we consume multiple topics in one flinkKafkaConsumer and each topic's 
partition count is less than the parallelism of the consumer, we run into the 
following problem: currently the partitionAssigner can evenly assign the 
partitions of one topic across subtasks, but the assignment process is 
independent between topics, so in the end some subtasks are in charge of many 
more partitions while other subtasks are totally free! To solve this issue, I 
have reported it at https://issues.apache.org/jira/browse/FLINK-22840.

  I worked on this issue by adding an assignment strategy that handles this 
requirement, and I would like to contribute it to Flink!


  Is there someone who can discuss this issue with me?
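
To illustrate the idea (my sketch of the strategy, not the actual patch): index all (topic, partition) pairs in one global order and distribute them round-robin, so the imbalance cannot exceed one partition per subtask.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

public class GlobalRoundRobinAssignmentSketch {
    /** Returns the "topic-partition" ids that one subtask should consume. */
    static List<String> partitionsForSubtask(
            Map<String, Integer> partitionCountsByTopic, int parallelism, int subtaskIndex) {
        // Flatten all topics into one list and sort it, so every subtask
        // computes the same global order independently.
        List<String> all = new ArrayList<>();
        partitionCountsByTopic.forEach((topic, count) -> {
            for (int p = 0; p < count; p++) {
                all.add(topic + "-" + p);
            }
        });
        all.sort(Comparator.naturalOrder());

        // Round-robin over the global list instead of per topic.
        List<String> mine = new ArrayList<>();
        for (int i = subtaskIndex; i < all.size(); i += parallelism) {
            mine.add(all.get(i));
        }
        return mine;
    }
}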
















--

Best wishes!
Xu Xiaolong
School of Software Engineering, Tongji University
Room 508, Jishi Building, 4800 Cao'an Road, Shanghai 201804
Tel/Fax: 13671633451
E-MAIL:xiaolongsy2...@163.com

join example in batch mode using DataStream API

2021-06-17 Thread Etienne Chauchot

Hi all,

In case it is useful to people:

I was testing the new unified batch/streaming DataStream API, and join did 
not work in batch mode: https://issues.apache.org/jira/browse/FLINK-22587


I had to manually code an inner join using *KeyedCoProcessFunction* and 
*states*. Below is an example of such a manual join (implementing part of 
TPC-DS query3 with Avro GenericRecords). It may not be the best code, but it 
could serve as an example for anyone interested.


Best,

Etienne


// Join1: WHERE date_dim.d_date_sk = store_sales.ss_sold_date_sk
Schema schemaJoinDateSk = AvroUtils.getSchemaMerged(
    dateDimAvroSchema, storeSalesAvroSchema, "recordsJoinDateSk");
final DataStream<GenericRecord> recordsJoinDateSk = dateDim
    .keyBy((KeySelector<GenericRecord, Integer>) value -> (Integer) value.get("d_date_sk"))
    .connect(storeSales.keyBy(
        (KeySelector<GenericRecord, Integer>) value -> (Integer) value.get("ss_sold_date_sk")))
    .process(new JoinRecords(dateDimAvroSchema, storeSalesAvroSchema, schemaJoinDateSk))
    .returns(new GenericRecordAvroTypeInfo(schemaJoinDateSk));


Where *JoinRecords* does a manual inner join with states:


public class JoinRecords
    extends KeyedCoProcessFunction<Integer, GenericRecord, GenericRecord, GenericRecord> {

  // Avro schemas are not serializable: keep their string form and re-parse
  // them after deserialization.
  private transient Schema schemaLhs;
  private transient Schema schemaRhs;
  private transient Schema outputSchema;
  private final String schemaLhsString;
  private final String schemaRhsString;
  private final String schemaString;

  private transient MapState<Integer, GenericRecord> state1;
  private transient MapState<Integer, GenericRecord> state2;

  public JoinRecords(Schema schemaLhs, Schema schemaRhs, Schema outputSchema) {
    this.schemaLhs = schemaLhs;
    this.schemaRhs = schemaRhs;
    this.outputSchema = outputSchema;
    this.schemaLhsString = schemaLhs.toString();
    this.schemaRhsString = schemaRhs.toString();
    this.schemaString = outputSchema.toString();
  }

  @Override
  public void open(Configuration parameters) throws Exception {
    state1 = getRuntimeContext().getMapState(
        new MapStateDescriptor<>("records_dataStream_1", Integer.class, GenericRecord.class));
    state2 = getRuntimeContext().getMapState(
        new MapStateDescriptor<>("records_dataStream_2", Integer.class, GenericRecord.class));
  }

  private GenericRecord joinRecords(GenericRecord first, GenericRecord second) throws Exception {
    // re-parse the schemas after deserialization
    if (schemaLhs == null) {
      schemaLhs = new Schema.Parser().parse(schemaLhsString);
    }
    if (schemaRhs == null) {
      schemaRhs = new Schema.Parser().parse(schemaRhsString);
    }
    if (outputSchema == null) {
      outputSchema = new Schema.Parser().parse(schemaString);
    }

    GenericRecord outputRecord = new GenericRecordBuilder(outputSchema).build();
    for (Schema.Field f : outputSchema.getFields()) {
      if (schemaLhs.getField(f.name()) != null) {
        outputRecord.put(f.name(), first.get(f.name()));
      } else if (schemaRhs.getField(f.name()) != null) {
        outputRecord.put(f.name(), second.get(f.name()));
      }
    }
    return outputRecord;
  }

  private GenericRecord stateJoin(GenericRecord currentRecord, int currentDatastream,
      Context context) throws Exception {
    final Integer currentKey = context.getCurrentKey();
    MapState<Integer, GenericRecord> myState = currentDatastream == 1 ? state1 : state2;
    MapState<Integer, GenericRecord> otherState = currentDatastream == 1 ? state2 : state1;
    // join with the other datastream by looking into its state
    final GenericRecord otherRecord = otherState.get(currentKey);
    if (otherRecord == null) {
      // did not find a record to join with: store this record for a later join
      myState.put(currentKey, currentRecord);
      return null;
    } else {
      // found a record to join with (same key): join, using the correct avro schema
      return currentDatastream == 1
          ? joinRecords(currentRecord, otherRecord)
          : joinRecords(otherRecord, currentRecord);
    }
  }

  @Override
  public void processElement1(GenericRecord currentRecord, Context context,
      Collector<GenericRecord> collector) throws Exception {
    final GenericRecord jointRecord = stateJoin(currentRecord, 1, context);
    if (jointRecord != null) {
      collector.collect(jointRecord);
    }
  }

  @Override
  public void processElement2(GenericRecord currentRecord, Context context,
      Collector<GenericRecord> collector) throws Exception {
    final GenericRecord jointRecord = stateJoin(currentRecord, 2, context);
    if (jointRecord != null) {
      collector.collect(jointRecord);
    }
  }
}



[jira] [Created] (FLINK-23023) Support offset in window TVF

2021-06-17 Thread JING ZHANG (Jira)
JING ZHANG created FLINK-23023:
--

 Summary: Support offset in window TVF
 Key: FLINK-23023
 URL: https://issues.apache.org/jira/browse/FLINK-23023
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: JING ZHANG






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Feedback Collection Jira Bot

2021-06-17 Thread Piotr Nowojski
Hi,

I also think that the bot is a bit too aggressive/too quick with assigning
stale issues/deprioritizing them, but that's not that big of a deal for me.

What bothers me much more is that it's closing minor issues automatically.
Deprioritizing issues makes sense to me. If a wish for an improvement or a bug
report was opened a long time ago and got no attention over time, sure,
deprioritize it. But closing them is IMO a bad idea. A bug might
be minor, but if it's not fixed it's still there - it shouldn't be closed.
Closing with "won't fix" should be done for very good reasons and very
rarely. Same applies to improvements/wishes. Furthermore, very often
descriptions and comments have a lot of value, and if we keep closing minor
issues I'm afraid that we end up with:
- more duplication. I doubt anyone will be looking for prior "closed" bug
reports/improvement requests. I certainly only look at open tickets when
checking whether a ticket for XYZ already exists
- we will be losing knowledge

Piotrek

On Wed, 16 Jun 2021 at 15:12, Robert Metzger  wrote:

> Very sorry for the delayed response.
>
> Regarding tickets with the "test-instability" label (topic 1): when I open
> a test failure ticket, I usually assign a fixVersion for the next release
> of the branch where the failure occurred. Others seem to do that too. Hence
> my comment that it is good that the Flink bot does not check tickets with a
> fixVersion set (because test failures should always stay "Critical"
> until we've understood what's going on).
> I see that it is a bit contradictory that Critical test instabilities
> receive no attention for 14 days, but that seems to be the norm given the
> current number of incoming test instabilities.
>
> On Wed, Jun 16, 2021 at 2:05 PM Till Rohrmann 
> wrote:
>
> > Another example for category 4 would be the ticket where we collect
> > breaking API changes for Flink 2.0 [1]. The idea behind this ticket is to
> > collect things to consider when developing the next major version.
> > Admittedly, we have never seen the benefits of collecting the breaking
> > changes because we haven't started Flink 2.x yet. Also, it is not clear
> how
> > relevant these tickets are right now.
> >
> > [1] https://issues.apache.org/jira/browse/FLINK-3957
> >
> > Cheers,
> > Till
> >
> > On Wed, Jun 16, 2021 at 11:42 AM Konstantin Knauf 
> > wrote:
> >
> > > Hi everyone,
> > >
> > > thank you for all the feedback so far. I believe we have four different
> > > topics by now:
> > >
> > > 1 about *test-instability tickets* raised by Robert. Waiting for
> feedback
> > > by Robert.
> > >
> > > 2 about *aggressiveness of stale-assigned *rule raised by Timo. Waiting
> > > for feedback by Timo and others.
> > >
> > > 3 about *excluding issues with a fixVersion* raised by Konstantin,
> Till.
> > > Waiting for more feedback by the community as it involves general
> changes
> > > to how we deal with fixVersion.
> > >
> > > 4 about *excluding issues with a specific-label* raised by Arvid.
> > >
> > > I've already written something about 1-3. Regarding 4:
> > >
> > > How do we make sure that these don't become stale? I think, there have
> > > been a few "long-term efforts" in the past that never got the attention
> > > that we initially wanted. Is this just about the ability to collect
> > tickets
> > > under an umbrella to document a future effort? Maybe for the example of
> > > DataStream replacing DataSet how would this look like in Jira?
> > >
> > > Cheers,
> > >
> > > Konstantin
> > >
> > >
> > > On Tue, Jun 8, 2021 at 11:31 AM Till Rohrmann 
> > > wrote:
> > >
> > >> I like this idea. It would then be the responsibility of the component
> > >> maintainers to manage the lifecycle explicitly.
> > >>
> > >> Cheers,
> > >> Till
> > >>
> > >> On Mon, Jun 7, 2021 at 1:48 PM Arvid Heise  wrote:
> > >>
> > >> > One more idea for the bot. Could we have a label to exclude certain
> > >> tickets
> > >> > from the life-cycle?
> > >> >
> > >> > I'm thinking about long-term tickets such as improving DataStream to
> > >> > eventually replace DataSet. We would collect ideas over the next
> > couple
> > >> of
> > >> > weeks without any visible progress on the implementation.
> > >> >
> > >> > On Fri, May 21, 2021 at 2:06 PM Konstantin Knauf  >
> > >> > wrote:
> > >> >
> > >> > > Hi Timo,
> > >> > >
> > >> > > Thanks for joining the discussion. All rules except the unassigned
> > >> rule
> > >> > do
> > >> > > not apply to Sub-Tasks actually (like deprioritization, closing).
> > >> > > Additionally, activity on a Sub-Taks counts as activity for the
> > >> parent.
> > >> > So,
> > >> > > the parent ticket would not be touched by the bot as long as there
> > is
> > >> a
> > >> > > single Sub-Task that has a discussion or an update. If you
> > experience
> > >> > > something different, this is a bug.
> > >> > >
> > >> > > Is there a reason why it is important to assign all Sub-Tasks to
> the
> > >> same
> > >> > > person immediately? I am not 

[jira] [Created] (FLINK-23022) Plugin classloader settings do not work as described

2021-06-17 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-23022:


 Summary: Plugin classloader settings do not work as described
 Key: FLINK-23022
 URL: https://issues.apache.org/jira/browse/FLINK-23022
 Project: Flink
  Issue Type: Bug
  Components: API / Core
Reporter: Dawid Wysakowicz


The {{plugin.classloader.parent-first-patterns}} options are documented as 
meaning that all classes matching the configured patterns are loaded from the 
Flink classloader instead of the plugin classloader.

However, the way they actually work is that they define a set of patterns 
that are allowed to be pulled from the parent classloader. The plugin 
classloader takes precedence in all cases, and a class is loaded from the 
parent classloader only if the plugin classloader cannot provide it and the 
class matches the pattern in one of the aforementioned options.
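
A sketch of the observed behavior (not Flink's actual implementation): the plugin's own jars always win, and the parent is consulted only as a fallback, and only for names matching the configured patterns.

{code:java}
import java.net.URL;
import java.net.URLClassLoader;

class PluginFirstClassLoaderSketch extends URLClassLoader {
    private final ClassLoader flinkClassLoader;
    private final String[] allowedFromParent;

    PluginFirstClassLoaderSketch(
            URL[] pluginJars, ClassLoader flinkClassLoader, String[] allowedFromParent) {
        super(pluginJars, null); // no implicit parent delegation beyond bootstrap
        this.flinkClassLoader = flinkClassLoader;
        this.allowedFromParent = allowedFromParent;
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        try {
            // The plugin classloader takes precedence in all cases.
            return super.loadClass(name, resolve);
        } catch (ClassNotFoundException e) {
            // The parent is only a fallback, and only for allow-listed patterns.
            for (String pattern : allowedFromParent) {
                if (name.startsWith(pattern)) {
                    return flinkClassLoader.loadClass(name);
                }
            }
            throw e;
        }
    }
}
{code}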



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23021) Check if the restarted job topology is consistent on restart

2021-06-17 Thread Yun Gao (Jira)
Yun Gao created FLINK-23021:
---

 Summary: Check if the restarted job topology is consistent on 
restart
 Key: FLINK-23021
 URL: https://issues.apache.org/jira/browse/FLINK-23021
 Project: Flink
  Issue Type: Sub-task
Reporter: Yun Gao


Users might modify the job topology before restarting from an external 
checkpoint or savepoint. To handle this, we need to check whether a fully 
finished operator has been added after a non-fully-finished operator. If so, 
we would either throw an exception to disallow this situation or re-mark the 
fully finished operator as alive.
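
A sketch of the described check (types and accessors are illustrative):

{code:java}
// Walk the restarted topology in topological order: once a
// non-fully-finished operator is seen, no fully finished operator may follow.
boolean sawNonFinishedOperator = false;
for (OperatorState operatorState : topologicallyOrderedOperators) {
    if (!operatorState.isFullyFinished()) {
        sawNonFinishedOperator = true;
    } else if (sawNonFinishedOperator) {
        throw new IllegalStateException(
                "Fully finished operator " + operatorState.getOperatorID()
                        + " was added after a non-fully-finished operator");
    }
}
{code}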



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23020) NullPointerException when running collect from Python API

2021-06-17 Thread Jira
Maciej Bryński created FLINK-23020:
--

 Summary: NullPointerException when running collect from Python API
 Key: FLINK-23020
 URL: https://issues.apache.org/jira/browse/FLINK-23020
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Affects Versions: 1.13.1
Reporter: Maciej Bryński


Hi, 

I'm trying to use PyFlink from a Jupyter Notebook and I'm getting an NPE in 
the following scenario.

1. I'm creating datagen table.
{code:java}
from pyflink.table import EnvironmentSettings, TableEnvironment, 
StreamTableEnvironment, DataTypes
from pyflink.table.udf import udf
from pyflink.common import Configuration, Row
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.java_gateway import get_gateway

conf = Configuration()
env_settings = 
EnvironmentSettings.new_instance().in_streaming_mode().use_blink_planner().build()
table_env = StreamTableEnvironment.create(environment_settings=env_settings)
table_env.get_config().get_configuration().set_integer("parallelism.default", 1)

table_env.execute_sql("DROP TABLE IF EXISTS datagen")
table_env.execute_sql("""
CREATE TABLE datagen (
id INT
) WITH (
'connector' = 'datagen'
)
""")
{code}
2. Then I'm running collect
{code:java}
try:
result = table_env.sql_query("select * from datagen limit 1").execute()
for r in result.collect():
print(r)
except KeyboardInterrupt:
result.get_job_client().cancel()
{code}
3. I'm using "interrupt the kernel" button. This is handled by above try/except 
and will cancel the query.

4. I'm running collect from point 2 one more time. Result:
{code:java}
---
Py4JJavaError Traceback (most recent call last)
 in 
  1 try:
> 2 result = table_env.sql_query("select * from datagen limit 
1").execute()
  3 for r in result.collect():
  4 print(r)
  5 except KeyboardInterrupt:

/usr/local/lib/python3.8/dist-packages/pyflink/table/table.py in execute(self)
   1070 """
   1071 self._t_env._before_execute()
-> 1072 return TableResult(self._j_table.execute())
   1073 
   1074 def explain(self, *extra_details: ExplainDetail) -> str:

/usr/local/lib/python3.8/dist-packages/py4j/java_gateway.py in __call__(self, 
*args)
   1283 
   1284 answer = self.gateway_client.send_command(command)
-> 1285 return_value = get_return_value(
   1286 answer, self.gateway_client, self.target_id, self.name)
   1287 

/usr/local/lib/python3.8/dist-packages/pyflink/util/exceptions.py in deco(*a, 
**kw)
144 def deco(*a, **kw):
145 try:
--> 146 return f(*a, **kw)
147 except Py4JJavaError as e:
148 from pyflink.java_gateway import get_gateway

/usr/local/lib/python3.8/dist-packages/py4j/protocol.py in 
get_return_value(answer, gateway_client, target_id, name)
324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
325 if answer[1] == REFERENCE_TYPE:
--> 326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
328 format(target_id, ".", name), value)

Py4JJavaError: An error occurred while calling o69.execute.
: java.lang.NullPointerException
at java.base/java.util.Objects.requireNonNull(Objects.java:221)
at 
org.apache.calcite.rel.metadata.RelMetadataQuery.(RelMetadataQuery.java:144)
at 
org.apache.calcite.rel.metadata.RelMetadataQuery.(RelMetadataQuery.java:108)
at 
org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery.(FlinkRelMetadataQuery.java:73)
at 
org.apache.flink.table.planner.plan.metadata.FlinkRelMetadataQuery.instance(FlinkRelMetadataQuery.java:54)
at 
org.apache.flink.table.planner.calcite.FlinkRelOptClusterFactory$$anon$1.get(FlinkRelOptClusterFactory.scala:39)
at 
org.apache.flink.table.planner.calcite.FlinkRelOptClusterFactory$$anon$1.get(FlinkRelOptClusterFactory.scala:38)
at 
org.apache.calcite.plan.RelOptCluster.getMetadataQuery(RelOptCluster.java:178)
at 
org.apache.calcite.rel.metadata.RelMdUtil.clearCache(RelMdUtil.java:965)
at 
org.apache.calcite.plan.hep.HepPlanner.clearCache(HepPlanner.java:879)
at 
org.apache.calcite.plan.hep.HepPlanner.contractVertices(HepPlanner.java:858)
at 
org.apache.calcite.plan.hep.HepPlanner.applyTransformationResults(HepPlanner.java:745)
at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:545)
at 
org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
at 
org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:271)
at 
org.apache.calcite.plan.hep.HepInstruction$RuleCollection.execute(HepInstruction.java:74)
 

[jira] [Created] (FLINK-23019) Avoid errors when identifiers use reserved keywords

2021-06-17 Thread Timo Walther (Jira)
Timo Walther created FLINK-23019:


 Summary: Avoid errors when identifiers use reserved keywords
 Key: FLINK-23019
 URL: https://issues.apache.org/jira/browse/FLINK-23019
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Reporter: Timo Walther


We add more and more keywords and built-in functions with special meaning to 
SQL. However, this can be quite annoying for users who have columns named 
like a keyword, e.g. {{timestamp}} or {{current_timestamp}}.

We should investigate whether we can do better and avoid forcing escaping 
with backticks. IIRC Calcite also offers functionality for that.
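
A minimal example of the annoyance (table and column names are made up; the datagen connector is just for a self-contained run):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ReservedKeywordExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Both column names are reserved words, so every reference to them
        // must currently be escaped with backticks.
        tEnv.executeSql(
                "CREATE TABLE t (`timestamp` INT, `current_timestamp` INT) "
                        + "WITH ('connector' = 'datagen')");
        tEnv.executeSql("SELECT `timestamp`, `current_timestamp` FROM t").print();
    }
}
{code}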



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-23018) TTL state factory should handle extended state descriptors

2021-06-17 Thread Yun Tang (Jira)
Yun Tang created FLINK-23018:


 Summary: TTL state factory should handle extended state descriptors
 Key: FLINK-23018
 URL: https://issues.apache.org/jira/browse/FLINK-23018
 Project: Flink
  Issue Type: Bug
  Components: Runtime / State Backends
Reporter: Yun Tang
Assignee: Yun Tang
 Fix For: 1.14.0


Currently, {{TtlStateFactory}} can only handle a fixed set of state descriptor 
classes. Since {{ValueStateDescriptor}} is not a final class, a user could 
extend it, but {{TtlStateFactory}} would not recognize the extending class.

{{TtlStateFactory}} should use {{StateDescriptor#Type}} to check what kind of 
state a descriptor describes.
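
A sketch of the proposed dispatch (the factory method names are illustrative; {{StateDescriptor#getType}} is the existing enum accessor):

{code:java}
// Dispatch on the declared state type instead of the descriptor's concrete
// class, so user-defined subclasses of ValueStateDescriptor are recognized.
switch (stateDesc.getType()) {
    case VALUE:
        return createValueState(stateDesc);
    case LIST:
        return createListState(stateDesc);
    case MAP:
        return createMapState(stateDesc);
    default:
        throw new IllegalArgumentException(
                "Unsupported state type: " + stateDesc.getType());
}
{code}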



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Mail from chenxuying

2021-06-17 Thread chenxuying



[jira] [Created] (FLINK-23017) HELP in sql-client still shows the removed SOURCE functionality

2021-06-17 Thread Daniel Lenz (Jira)
Daniel Lenz created FLINK-23017:
---

 Summary: HELP in sql-client still shows the removed SOURCE 
functionality
 Key: FLINK-23017
 URL: https://issues.apache.org/jira/browse/FLINK-23017
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Affects Versions: 1.13.1
 Environment: This affects 1.13.0 and 1.13.1
Reporter: Daniel Lenz


The sql-client still shows the SOURCE command in HELP, even though the command 
itself doesn't exist anymore and using it causes an error.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Re: [ANNOUNCE] New PMC member: Xintong Song

2021-06-17 Thread Yuan Mei
Congratulations, Xintong :-)

On Thu, Jun 17, 2021 at 11:57 AM Xingbo Huang  wrote:

> Congratulations, Xintong!
>
> Best,
> Xingbo
>
> On Thu, Jun 17, 2021 at 10:46 AM Yun Gao  wrote:
>
> > Congratulations, Xintong!
> >
> > Best,
> > Yun
> >
> >
> > --
> > Sender:Jingsong Li
> > Date:2021/06/17 10:41:22
> > Recipient:dev
> > Theme:Re: [ANNOUNCE] New PMC member: Xintong Song
> >
> > Congratulations, Xintong!
> >
> > Best,
> > Jingsong
> >
> > On Thu, Jun 17, 2021 at 10:26 AM Yun Tang  wrote:
> >
> > > Congratulations, Xintong!
> > >
> > > Best
> > > Yun Tang
> > > 
> > > From: Leonard Xu 
> > > Sent: Wednesday, June 16, 2021 21:05
> > > To: dev (dev@flink.apache.org) 
> > > Subject: Re: [ANNOUNCE] New PMC member: Xintong Song
> > >
> > >
> > > Congratulations, Xintong!
> > >
> > >
> > > Best,
> > > Leonard
> > > > On Jun 16, 2021, at 20:07, Till Rohrmann  wrote:
> > > >
> > > > Congratulations, Xintong!
> > > >
> > > > Cheers,
> > > > Till
> > > >
> > > > On Wed, Jun 16, 2021 at 1:47 PM JING ZHANG 
> > wrote:
> > > >
> > > >> Congratulations, Xintong!
> > > >>
> > > >>
> > > >> On Wed, Jun 16, 2021 at 7:30 PM Jiayi Liao  wrote:
> > > >>
> > > 
> > >  
> > >  Congratulations Xintong!
> > > 
> > >  On Wed, Jun 16, 2021 at 7:24 PM Nicholas Jiang <
> programg...@163.com
> > >
> > >  wrote:
> > > 
> > > > Congratulations, Xintong!
> > > >
> > > >
> > > >
> > > > --
> > > > Sent from:
> > > > http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/
> > > 
> > > 
> > > >>>
> > > >>
> > >
> > >
> >
> > --
> > Best, Jingsong Lee
> >
> >
>


Re: Re: [ANNOUNCE] New PMC member: Arvid Heise

2021-06-17 Thread Arvid Heise
Thank you for your trust and support.

Arvid

On Thu, Jun 17, 2021 at 8:39 AM Roman Khachatryan  wrote:

> Congratulations!
>
> Regards,
> Roman
>
> On Thu, Jun 17, 2021 at 5:56 AM Xingbo Huang  wrote:
> >
> > Congratulations, Arvid!
> >
> > Best,
> > Xingbo
> >
> > On Thu, Jun 17, 2021 at 10:49 AM Yun Tang  wrote:
> >
> > > Congratulations, Arvid
> > >
> > > Best
> > > Yun Tang
> > > 
> > > From: Yun Gao 
> > > Sent: Thursday, June 17, 2021 10:46
> > > To: Jingsong Li ; dev 
> > > Subject: Re: Re: [ANNOUNCE] New PMC member: Arvid Heise
> > >
> > > Congratulations, Arvid!
> > >
> > > Best,
> > > Yun
> > >
> > >
> > > --
> > > Sender:Jingsong Li
> > > Date:2021/06/17 10:41:29
> > > Recipient:dev
> > > Theme:Re: [ANNOUNCE] New PMC member: Arvid Heise
> > >
> > > Congratulations, Arvid!
> > >
> > > Best,
> > > Jingsong
> > >
> > > On Thu, Jun 17, 2021 at 6:52 AM Matthias J. Sax 
> wrote:
> > >
> > > > Congrats!
> > > >
> > > > On 6/16/21 6:06 AM, Leonard Xu wrote:
> > > > > Congratulations, Arvid!
> > > > >
> > > > >
> > > > >> On Jun 16, 2021, at 20:08, Till Rohrmann  wrote:
> > > > >>
> > > > >> Congratulations, Arvid!
> > > > >>
> > > > >> Cheers,
> > > > >> Till
> > > > >>
> > > > >> On Wed, Jun 16, 2021 at 1:47 PM JING ZHANG 
> > > > wrote:
> > > > >>
> > > > >>> Congratulations, Arvid!
> > > > >>>
> > > > >>> On Wed, Jun 16, 2021 at 7:25 PM Nicholas Jiang  wrote:
> > > > >>>
> > > >  Congratulations, Arvid!
> > > > 
> > > > 
> > > > 
> > > >  --
> > > >  Sent from:
> > > > >>> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/
> > > > 
> > > > >>>
> > > > >
> > > >
> > >
> > >
> > > --
> > > Best, Jingsong Lee
> > >
> > >
>


Re: Re: [ANNOUNCE] New PMC member: Arvid Heise

2021-06-17 Thread Roman Khachatryan
Congratulations!

Regards,
Roman

On Thu, Jun 17, 2021 at 5:56 AM Xingbo Huang  wrote:
>
> Congratulations, Arvid!
>
> Best,
> Xingbo
>
> On Thu, Jun 17, 2021 at 10:49 AM Yun Tang  wrote:
>
> > Congratulations, Arvid
> >
> > Best
> > Yun Tang
> > 
> > From: Yun Gao 
> > Sent: Thursday, June 17, 2021 10:46
> > To: Jingsong Li ; dev 
> > Subject: Re: Re: [ANNOUNCE] New PMC member: Arvid Heise
> >
> > Congratulations, Arvid!
> >
> > Best,
> > Yun
> >
> >
> > --
> > Sender:Jingsong Li
> > Date:2021/06/17 10:41:29
> > Recipient:dev
> > Theme:Re: [ANNOUNCE] New PMC member: Arvid Heise
> >
> > Congratulations, Arvid!
> >
> > Best,
> > Jingsong
> >
> > On Thu, Jun 17, 2021 at 6:52 AM Matthias J. Sax  wrote:
> >
> > > Congrats!
> > >
> > > On 6/16/21 6:06 AM, Leonard Xu wrote:
> > > > Congratulations, Arvid!
> > > >
> > > >
> > > >> On Jun 16, 2021, at 20:08, Till Rohrmann  wrote:
> > > >>
> > > >> Congratulations, Arvid!
> > > >>
> > > >> Cheers,
> > > >> Till
> > > >>
> > > >> On Wed, Jun 16, 2021 at 1:47 PM JING ZHANG 
> > > wrote:
> > > >>
> > > >>> Congratulations, Arvid!
> > > >>>
> > > >>> On Wed, Jun 16, 2021 at 7:25 PM Nicholas Jiang  wrote:
> > > >>>
> > >  Congratulations, Arvid!
> > > 
> > > 
> > > 
> > >  --
> > >  Sent from:
> > > >>> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/
> > > 
> > > >>>
> > > >
> > >
> >
> >
> > --
> > Best, Jingsong Lee
> >
> >


[jira] [Created] (FLINK-23016) Job client must be a Coordination Request Gateway when submit a job on web ui

2021-06-17 Thread wen qi (Jira)
wen qi created FLINK-23016:
--

 Summary: Job client must be a Coordination Request Gateway when 
submit a job on web ui 
 Key: FLINK-23016
 URL: https://issues.apache.org/jira/browse/FLINK-23016
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Web Frontend
Affects Versions: 1.13.1
 Environment: flink: 1.13.1

flink-cdc: com.alibaba.ververica:flink-connector-postgres-cdc:1.4.0

jdk:1.8
Reporter: wen qi
 Attachments: WechatIMG10.png, WechatIMG11.png, WechatIMG8.png

I used Flink CDC to collect data and the Table API to transform the data and 
write it to another table.

Everything works when I run the code in the IDE and when I submit the job jar 
with the CLI, but it fails when the jar is submitted through the web UI.

When I use StreamTableEnvironment.from('table-path').execute(), it fails!

Please check my attachments; it seems to be a bug in the web UI?
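
For anyone hitting the same error: Table#execute() streams results back to the client through coordination requests, which the job client used for web-UI jar submission does not support (hence the message in the summary). A sketch of a workaround (my suggestion, not from the ticket) is to write to a sink table instead of collecting:

{code:java}
Table source = tableEnv.from("table-path");

// Fails when the jar is submitted through the web UI, because printing
// needs to fetch results back to the client:
source.execute().print();

// Works on all submission paths: insert into a previously declared sink
// table (named "sink_table" here for illustration).
source.executeInsert("sink_table");
{code}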

 

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)