Re: [ANNOUNCE] Yu Li is now part of the Flink PMC

2020-06-16 Thread Jingsong Li
Congratulations Yu, well deserved!

Best,
Jingsong

On Wed, Jun 17, 2020 at 1:42 PM Yuan Mei  wrote:

> Congrats, Yu!
>
> GXGX (congrats) & well deserved!!
>
> Best Regards,
>
> Yuan
>
> On Wed, Jun 17, 2020 at 9:15 AM jincheng sun 
> wrote:
>
>> Hi all,
>>
>> On behalf of the Flink PMC, I'm happy to announce that Yu Li is now
>> part of the Apache Flink Project Management Committee (PMC).
>>
>> Yu Li has been very active on Flink's state backend component, working on
>> various improvements, for example the RocksDB memory management for 1.10.
>> He keeps checking and voting for our releases, and has also successfully
>> produced two releases (1.10.0 & 1.10.1) as RM.
>>
>> Congratulations & Welcome Yu Li!
>>
>> Best,
>> Jincheng (on behalf of the Flink PMC)
>>
>

-- 
Best, Jingsong Lee


Re: [ANNOUNCE] Yu Li is now part of the Flink PMC

2020-06-16 Thread Yuan Mei
Congrats, Yu!

GXGX (congrats) & well deserved!!

Best Regards,

Yuan

On Wed, Jun 17, 2020 at 9:15 AM jincheng sun 
wrote:

> Hi all,
>
> On behalf of the Flink PMC, I'm happy to announce that Yu Li is now
> part of the Apache Flink Project Management Committee (PMC).
>
> Yu Li has been very active on Flink's state backend component, working on
> various improvements, for example the RocksDB memory management for 1.10.
> He keeps checking and voting for our releases, and has also successfully
> produced two releases (1.10.0 & 1.10.1) as RM.
>
> Congratulations & Welcome Yu Li!
>
> Best,
> Jincheng (on behalf of the Flink PMC)
>


Re: [ANNOUNCE] Yu Li is now part of the Flink PMC

2020-06-16 Thread godfrey he
Congratulations Yu!

Rui Li  wrote on Wed, Jun 17, 2020 at 1:15 PM:

> Congratulations!
>
> On Wed, Jun 17, 2020 at 11:58 AM Xingbo Huang  wrote:
>
> > Congratulations Yu! Well deserved
> >
> > Best,
> > Xingbo
> >
> > On Wed, Jun 17, 2020 at 11:51 AM Peter Huang  wrote:
> >
> > > Congratulations Yu!
> > >
> > > On Tue, Jun 16, 2020 at 8:31 PM Dan Zou  wrote:
> > >
> > > > Congratulations Yu!
> > > >
> > > > Best,
> > > > Dan Zou
> > > >
> > > > > On Jun 17, 2020, at 11:28 AM, Jeff Zhang  wrote:
> > > > >
> > > > > Congratulations Yu !
> > > > >
> > > > > On Wed, Jun 17, 2020 at 11:26 AM Yun Gao  wrote:
> > > > >
> > > > >> Congratulations Yu!
> > > > >>
> > > > >> Best,
> > > > >> Yun
> > > > >>
> > > > >> --
> > > > >> Sender:Zhijiang
> > > > >> Date:2020/06/17 11:18:35
> > > > >> Recipient:Dian Fu; dev<
> dev@flink.apache.org>
> > > > >> Cc:Haibo Sun; user;
> > > user-zh<
> > > > >> user...@flink.apache.org>
> > > > >> Theme:Re: [ANNOUNCE] Yu Li is now part of the Flink PMC
> > > > >>
> > > > >> Congratulations Yu! Well deserved!
> > > > >>
> > > > >> Best,
> > > > >> Zhijiang
> > > > >>
> > > > >>
> > > > >> --
> > > > >> From:Dian Fu 
> > > > >> Send Time: Wed, Jun 17, 2020 10:48
> > > > >> To:dev 
> > > > >> Cc:Haibo Sun ; user ;
> > > > user-zh <
> > > > >> user...@flink.apache.org>
> > > > >> Subject:Re: [ANNOUNCE] Yu Li is now part of the Flink PMC
> > > > >>
> > > > >> Congrats Yu!
> > > > >>
> > > > >> Regards,
> > > > >> Dian
> > > > >>
> > > > >>> On Jun 17, 2020, at 10:35 AM, Jark Wu  wrote:
> > > > >>>
> > > > >>> Congratulations Yu! Well deserved!
> > > > >>>
> > > > >>> Best,
> > > > >>> Jark
> > > > >>>
> > > > >>> On Wed, 17 Jun 2020 at 10:18, Haibo Sun 
> > wrote:
> > > > >>>
> > > >  Congratulations Yu!
> > > > 
> > > >  Best,
> > > >  Haibo
> > > > 
> > > > 
> > > >  At 2020-06-17 09:15:02, "jincheng sun" <
> sunjincheng...@gmail.com>
> > > > >> wrote:
> > > > > Hi all,
> > > > >
> > > > > On behalf of the Flink PMC, I'm happy to announce that Yu Li is
> > now
> > > > > part of the Apache Flink Project Management Committee (PMC).
> > > > >
> > > > > Yu Li has been very active on Flink's state backend component,
> > > > > working on various improvements, for example the RocksDB memory
> > > > > management for 1.10. He keeps checking and voting for our releases,
> > > > > and has also successfully produced two releases (1.10.0 & 1.10.1)
> > > > > as RM.
> > > > >
> > > > > Congratulations & Welcome Yu Li!
> > > > >
> > > > > Best,
> > > > > Jincheng (on behalf of the Flink PMC)
> > > > 
> > > > 
> > > > >>
> > > > >>
> > > > >>
> > > > >
> > > > > --
> > > > > Best Regards
> > > > >
> > > > > Jeff Zhang
> > > >
> > > >
> > >
> >
>
>
> --
> Best regards!
> Rui Li
>


Re: [ANNOUNCE] Yu Li is now part of the Flink PMC

2020-06-16 Thread Rui Li
Congratulations!

On Wed, Jun 17, 2020 at 11:58 AM Xingbo Huang  wrote:

> Congratulations Yu! Well deserved
>
> Best,
> Xingbo
>
> On Wed, Jun 17, 2020 at 11:51 AM Peter Huang  wrote:
>
> > Congratulations Yu!
> >
> > On Tue, Jun 16, 2020 at 8:31 PM Dan Zou  wrote:
> >
> > > Congratulations Yu!
> > >
> > > Best,
> > > Dan Zou
> > >
> > > > On Jun 17, 2020, at 11:28 AM, Jeff Zhang  wrote:
> > > >
> > > > Congratulations Yu !
> > > >
> > > > On Wed, Jun 17, 2020 at 11:26 AM Yun Gao  wrote:
> > > >
> > > >> Congratulations Yu!
> > > >>
> > > >> Best,
> > > >> Yun
> > > >>
> > > >> --
> > > >> Sender:Zhijiang
> > > >> Date:2020/06/17 11:18:35
> > > >> Recipient:Dian Fu; dev
> > > >> Cc:Haibo Sun; user;
> > user-zh<
> > > >> user...@flink.apache.org>
> > > >> Theme:Re: [ANNOUNCE] Yu Li is now part of the Flink PMC
> > > >>
> > > >> Congratulations Yu! Well deserved!
> > > >>
> > > >> Best,
> > > >> Zhijiang
> > > >>
> > > >>
> > > >> --
> > > >> From:Dian Fu 
> > > >> Send Time: Wed, Jun 17, 2020 10:48
> > > >> To:dev 
> > > >> Cc:Haibo Sun ; user ;
> > > user-zh <
> > > >> user...@flink.apache.org>
> > > >> Subject:Re: [ANNOUNCE] Yu Li is now part of the Flink PMC
> > > >>
> > > >> Congrats Yu!
> > > >>
> > > >> Regards,
> > > >> Dian
> > > >>
> > > >>> On Jun 17, 2020, at 10:35 AM, Jark Wu  wrote:
> > > >>>
> > > >>> Congratulations Yu! Well deserved!
> > > >>>
> > > >>> Best,
> > > >>> Jark
> > > >>>
> > > >>> On Wed, 17 Jun 2020 at 10:18, Haibo Sun 
> wrote:
> > > >>>
> > >  Congratulations Yu!
> > > 
> > >  Best,
> > >  Haibo
> > > 
> > > 
> > >  At 2020-06-17 09:15:02, "jincheng sun" 
> > > >> wrote:
> > > > Hi all,
> > > >
> > > > On behalf of the Flink PMC, I'm happy to announce that Yu Li is
> now
> > > > part of the Apache Flink Project Management Committee (PMC).
> > > >
> > > > Yu Li has been very active on Flink's state backend component,
> > > > working on various improvements, for example the RocksDB memory
> > > > management for 1.10. He keeps checking and voting for our releases,
> > > > and has also successfully produced two releases (1.10.0 & 1.10.1)
> > > > as RM.
> > > >
> > > > Congratulations & Welcome Yu Li!
> > > >
> > > > Best,
> > > > Jincheng (on behalf of the Flink PMC)
> > > 
> > > 
> > > >>
> > > >>
> > > >>
> > > >
> > > > --
> > > > Best Regards
> > > >
> > > > Jeff Zhang
> > >
> > >
> >
>


-- 
Best regards!
Rui Li


[jira] [Created] (FLINK-18341) Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR

2020-06-16 Thread Piotr Nowojski (Jira)
Piotr Nowojski created FLINK-18341:
--

 Summary: Building Flink Walkthrough Table Java 0.1 COMPILATION 
ERROR
 Key: FLINK-18341
 URL: https://issues.apache.org/jira/browse/FLINK-18341
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API, Tests
Affects Versions: 1.12.0
Reporter: Piotr Nowojski


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3652=logs=08866332-78f7-59e4-4f7e-49a56faa3179=931b3127-d6ee-5f94-e204-48d51cd1c334

{noformat}
[ERROR] COMPILATION ERROR : 
[INFO] -
[ERROR] 
/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-22294375765/flink-walkthrough-table-java/src/main/java/org/apache/flink/walkthrough/SpendReport.java:[23,46]
 cannot access org.apache.flink.table.api.bridge.java.BatchTableEnvironment
  bad class file: 
/home/vsts/work/1/.m2/repository/org/apache/flink/flink-table-api-java-bridge_2.11/1.12-SNAPSHOT/flink-table-api-java-bridge_2.11-1.12-SNAPSHOT.jar(org/apache/flink/table/api/bridge/java/BatchTableEnvironment.class)
class file has wrong version 55.0, should be 52.0
Please remove or make sure it appears in the correct subdirectory of the 
classpath.

(...)

[FAIL] 'Walkthrough Table Java nightly end-to-end test' failed after 0 minutes 
and 4 seconds! Test exited with exit code 1

{noformat}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Flink Table program cannot be compiled when enable checkpoint of StreamExecutionEnvironment

2020-06-16 Thread Jark Wu
Why compile the JobGraph yourself? This is really an internal API and may cause
problems.
Could you try to use `flink run` command [1] to submit your user jar
instead?
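
For example, a submission along these lines (the jar path and the entry
class are placeholders for your job):

./bin/flink run -c com.example.Test /path/to/your-job.jar

That goes through the regular client-side submission path, so there is no
need to build the JobGraph with internal APIs.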

Btw, what's your Flink version? If you are using Flink 1.10.0, could you
try to use 1.10.1?

Best,
Jark

On Wed, 17 Jun 2020 at 12:41, 杜斌  wrote:

> Thanks for the reply,
> Here is a simple Java program that reproduces the problem:
> 1. code for the application:
>
> import org.apache.flink.api.java.tuple.Tuple2;
> import org.apache.flink.streaming.api.datastream.DataStream;
> import
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
> import org.apache.flink.table.api.EnvironmentSettings;
> import org.apache.flink.table.api.Table;
> import org.apache.flink.table.api.java.StreamTableEnvironment;
> import org.apache.flink.types.Row;
>
>
> import java.util.Arrays;
>
> public class Test {
> public static void main(String[] args) throws Exception {
>
> StreamExecutionEnvironment env =
> StreamExecutionEnvironment.getExecutionEnvironment();
> /**
>  * If checkpointing is enabled, the blink planner will fail to compile
>  */
> env.enableCheckpointing(1000);
>
> EnvironmentSettings envSettings = EnvironmentSettings.newInstance()
> //.useBlinkPlanner() // compile fail
> .useOldPlanner() // compile success
> .inStreamingMode()
> .build();
> StreamTableEnvironment tEnv =
> StreamTableEnvironment.create(env, envSettings);
> DataStream<Order> orderA = env.fromCollection(Arrays.asList(
> new Order(1L, "beer", 3),
> new Order(1L, "diaper", 4),
> new Order(3L, "beer", 2)
> ));
>
> //Table table = tEnv.fromDataStream(orderA);
> tEnv.createTemporaryView("orderA", orderA);
> Table res = tEnv.sqlQuery("SELECT * FROM orderA");
> DataStream<Tuple2<Boolean, Row>> ds =
> tEnv.toRetractStream(res, Row.class);
> ds.print();
> env.execute();
>
> }
>
> public static class Order {
> public long user;
> public String product;
> public int amount;
>
> public Order(long user, String product, int amount) {
> this.user = user;
> this.product = product;
> this.amount = amount;
> }
>
> public long getUser() {
> return user;
> }
>
> public void setUser(long user) {
> this.user = user;
> }
>
> public String getProduct() {
> return product;
> }
>
> public void setProduct(String product) {
> this.product = product;
> }
>
> public int getAmount() {
> return amount;
> }
>
> public void setAmount(int amount) {
> this.amount = amount;
> }
> }
> }
>
> 2. Run `mvn clean package` to build a jar file.
> 3. Then we use the following code to produce a JobGraph:
>
> PackagedProgram program =
> PackagedProgram.newBuilder()
> .setJarFile(userJarPath)
> .setUserClassPaths(classpaths)
> .setEntryPointClassName(userMainClass)
> .setConfiguration(configuration)
>
> .setSavepointRestoreSettings((descriptor.isRecoverFromSavepoint() &&
> descriptor.getSavepointPath() != null &&
> !descriptor.getSavepointPath().equals("")) ?
>
> SavepointRestoreSettings.forPath(descriptor.getSavepointPath(),
> descriptor.isAllowNonRestoredState()) :
> SavepointRestoreSettings.none())
> .setArguments(userArgs)
> .build();
>
>
> JobGraph jobGraph = PackagedProgramUtils.createJobGraph(program,
> configuration, 4, true);
>
> 4. If we use the blink planner and enable checkpointing, compilation fails.
> For the other combinations, compilation succeeds.
>
> Thanks,
> Bin
>
> Jark Wu  wrote on Wed, Jun 17, 2020 at 10:42 AM:
>
> > Hi,
> >
> > Which Flink version are you using? Are you using SQL CLI? Could you share
> > your table/sql program?
> > We did fix some classloading problems around SQL CLI, e.g. FLINK-18302
> >
> > Best,
> > Jark
> >
> > On Wed, 17 Jun 2020 at 10:31, 杜斌  wrote:
> >
> > > add the full stack trace here:
> > >
> > >
> > > Caused by:
> > >
> > >
> >
> org.apache.flink.shaded.guava18.com.google.common.util.concurrent.UncheckedExecutionException:
> > > org.apache.flink.api.common.InvalidProgramException: Table program
> cannot
> > > be compiled. This is a bug. Please file an issue.
> > > at
> > >
> > >
> >
> org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2203)
> > > at
> > >
> > >
> >
> org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache.get(LocalCache.java:3937)
> > > at
> > >
> > >
> >
> org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4739)
> > > at
> > >
> > >
> >
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:66)
> > > ... 14 more
> > > 

Re: Any python example with json data from Kafka using flink-statefun

2020-06-16 Thread Tzu-Li (Gordon) Tai
(forwarding this to user@ as it is better suited there)

Hi Sunil,

With remote functions (using the Python SDK), messages sent to / from them
must be Protobuf messages.
This is a requirement since remote functions can be written in any
language, and we use Protobuf as a means for cross-language messaging.
If you are defining Kafka ingresses in a remote module (via textual YAML
module configs), then records in the Kafka ingress will be directly routed
to the remote functions, and therefore they are required to be Protobuf
messages as well.

With embedded functions (using the current Java SDK), what you are
trying to do is possible.
When using the Java SDK, the Kafka ingress allows providing a
`KafkaIngressDeserializer` [1], where you can convert the bytes in Kafka
into any type you intend for messaging within the StateFun application. So
there, you can convert your JSON records.
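
For illustration, here is a minimal sketch of such a deserializer. It
assumes a hypothetical generated Protobuf class `UserEvent` and the
`KafkaIngressDeserializer` interface from the statefun-kafka-io module;
adapt the field names to your actual JSON schema:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.statefun.sdk.kafka.KafkaIngressDeserializer;
import org.apache.kafka.clients.consumer.ConsumerRecord;

public final class JsonToProtobufDeserializer
        implements KafkaIngressDeserializer<UserEvent> {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public UserEvent deserialize(ConsumerRecord<byte[], byte[]> input) {
        try {
            // Parse the raw JSON bytes and keep only the fields we need.
            JsonNode json = MAPPER.readTree(input.value());
            return UserEvent.newBuilder() // hypothetical generated Protobuf class
                    .setUserId(json.get("user_id").asText())
                    .setAmount(json.get("amount").asInt())
                    .build();
        } catch (java.io.IOException e) {
            // A sketch only: real code should decide how to handle bad records.
            throw new IllegalStateException("Malformed JSON record", e);
        }
    }
}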

If you want to still write your main application logic in Python, but the
input and output messages in Kafka are required to be JSON,
what you can currently do is have a mix of remote module [2] containing the
application logic as Python functions,
and a separate embedded module [3] containing the Java Kafka ingress and
egresses.
So, concretely, your 2 modules will contain:

Remote module:
- Your Python functions implementing the main business logic.

Embedded module:
- Java Kafka ingress with deserializer that converts JSON to Protobuf
messages. Here you have the freedom to extract only the fields that you
need.
- A Java router [4] that routes those converted messages to the remote
functions, by their logical address
- A Java Kafka egress with serializer that converts Protobuf messages from
remote functions into JSON Kafka records.
- A Java function that simply forwards input messages to the Kafka
egress. If the remote functions need to write JSON messages to Kafka, they
send a Protobuf message to this function.
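
To make the routing part concrete, a rough sketch of such a router (again
using the hypothetical `UserEvent` type; the namespace/type below must match
the logical address of the Python function declared in your remote module
YAML):

import org.apache.flink.statefun.sdk.FunctionType;
import org.apache.flink.statefun.sdk.io.Router;

public final class UserEventRouter implements Router<UserEvent> {

    // Hypothetical logical address of the remote (Python) function.
    private static final FunctionType PYTHON_FN =
            new FunctionType("example", "user-logic");

    @Override
    public void route(UserEvent event, Downstream<UserEvent> downstream) {
        // Use a field of the message as the per-instance function id.
        downstream.forward(PYTHON_FN, event.getUserId(), event);
    }
}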


Hope this helps.
Note that the egress side of things could definitely be easier (without the
extra forwarding through a Java function) if the Python SDK's
`kafka_egress_record` method allowed supplying arbitrary bytes.
Then you would be able to write JSON messages to Kafka directly from the
Python functions.
This however isn't supported yet, but technically it is quite easy to
achieve. I've just filed an issue for this [5], in case you'd like to follow
that.

Cheers,
Gordon

[1]
https://ci.apache.org/projects/flink/flink-statefun-docs-release-2.1/io-module/apache-kafka.html#kafka-deserializer
[2]
https://ci.apache.org/projects/flink/flink-statefun-docs-release-2.1/sdk/modules.html#remote-module

[3]
https://ci.apache.org/projects/flink/flink-statefun-docs-release-2.1/sdk/modules.html#embedded-module
[4]
https://ci.apache.org/projects/flink/flink-statefun-docs-release-2.1/io-module/index.html#router
[5] https://issues.apache.org/jira/browse/FLINK-18340

On Wed, Jun 17, 2020 at 9:25 AM Sunil  wrote:

> Checking to see if this is possible currently:
> read JSON data from a Kafka topic => process using statefun => write out to
> Kafka in JSON format.
>
> I could have a separate process to read the source JSON data and convert it
> to Protobuf into another Kafka topic, but that sounds inefficient, e.g.:
> read JSON data from a Kafka topic => convert JSON to Protobuf => process
> using statefun => write out to Kafka in Protobuf format => convert Protobuf
> to JSON messages.
>
> Appreciate any advice on how to process JSON messages using statefun.
> Also, if this is not possible in the current Python SDK, can I do that
> using the Java/Scala SDK?
>
> Thanks.
>
> On 2020/06/15 15:34:39, Sunil Sattiraju  wrote:
> > Thanks Igal,
> > I don't have control over the data source inside Kafka (the current Kafka
> > topic contains either JSON or Avro formats only; I am trying to reproduce
> > this scenario using my test data generator).
> >
> > Is it possible to convert the JSON to proto at the receiving end of the
> > statefun application?
> >
> > On 2020/06/15 14:51:01, Igal Shilman  wrote:
> > > Hi,
> > >
> > > The values must be valid encoded Protobuf messages [1], while in your
> > > attached code snippet you are sending utf-8 encoded JSON strings.
> > > You can take a look at this example with a generator that produces
> Protobuf
> > > messages [2][3]
> > >
> > > [1] https://developers.google.com/protocol-buffers/docs/pythontutorial
> > > [2]
> > >
> https://github.com/apache/flink-statefun/blob/8376afa6b064bfa2374eefbda5e149cd490985c0/statefun-examples/statefun-python-greeter-example/generator/event-generator.py#L37
> > > [3]
> > >
> https://github.com/apache/flink-statefun/blob/8376afa6b064bfa2374eefbda5e149cd490985c0/statefun-examples/statefun-python-greeter-example/greeter/messages.proto#L25
> > >
> > > On Mon, Jun 15, 2020 at 4:25 PM Sunil Sattiraju <
> sunilsattir...@gmail.com>
> > > wrote:
> > >
> > > > Hi, Based on the example from
> > > >
> 

[jira] [Created] (FLINK-18340) Support directly providing bytes instead of a Protobuf message when writing to Kafka / Kinesis with the Python SDK

2020-06-16 Thread Tzu-Li (Gordon) Tai (Jira)
Tzu-Li (Gordon) Tai created FLINK-18340:
---

 Summary: Support directly providing bytes instead of a Protobuf
message when writing to Kafka / Kinesis with the Python SDK
 Key: FLINK-18340
 URL: https://issues.apache.org/jira/browse/FLINK-18340
 Project: Flink
  Issue Type: New Feature
  Components: Stateful Functions
 Environment: State
Reporter: Tzu-Li (Gordon) Tai
 Fix For: statefun-2.2.0


This was an insight from this ML thread: 
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Any-python-example-with-json-data-from-Kafka-using-flink-statefun-td42520.html

Currently, the {{kafka_egress_record}} and {{kinesis_egress_record}} methods in 
the Python SDK only support providing a Protobuf message to be written to Kafka 
/ Kinesis.
We should make this more flexible so that a user can directly supply bytes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Flink Table program cannot be compiled when enable checkpoint of StreamExecutionEnvironment

2020-06-16 Thread 杜斌
Thanks for the reply,
Here is a simple Java program that reproduces the problem:
1. code for the application:

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;


import java.util.Arrays;

public class Test {
public static void main(String[] args) throws Exception {

StreamExecutionEnvironment env =
StreamExecutionEnvironment.getExecutionEnvironment();
/**
 * If checkpointing is enabled, the blink planner will fail to compile
 */
env.enableCheckpointing(1000);

EnvironmentSettings envSettings = EnvironmentSettings.newInstance()
//.useBlinkPlanner() // compile fail
.useOldPlanner() // compile success
.inStreamingMode()
.build();
StreamTableEnvironment tEnv =
StreamTableEnvironment.create(env, envSettings);
DataStream<Order> orderA = env.fromCollection(Arrays.asList(
new Order(1L, "beer", 3),
new Order(1L, "diaper", 4),
new Order(3L, "beer", 2)
));

//Table table = tEnv.fromDataStream(orderA);
tEnv.createTemporaryView("orderA", orderA);
Table res = tEnv.sqlQuery("SELECT * FROM orderA");
DataStream<Tuple2<Boolean, Row>> ds =
tEnv.toRetractStream(res, Row.class);
ds.print();
env.execute();

}

public static class Order {
public long user;
public String product;
public int amount;

public Order(long user, String product, int amount) {
this.user = user;
this.product = product;
this.amount = amount;
}

public long getUser() {
return user;
}

public void setUser(long user) {
this.user = user;
}

public String getProduct() {
return product;
}

public void setProduct(String product) {
this.product = product;
}

public int getAmount() {
return amount;
}

public void setAmount(int amount) {
this.amount = amount;
}
}
}

2. Run `mvn clean package` to build a jar file.
3. Then we use the following code to produce a JobGraph:

PackagedProgram program =
PackagedProgram.newBuilder()
.setJarFile(userJarPath)
.setUserClassPaths(classpaths)
.setEntryPointClassName(userMainClass)
.setConfiguration(configuration)

.setSavepointRestoreSettings((descriptor.isRecoverFromSavepoint() &&
descriptor.getSavepointPath() != null &&
!descriptor.getSavepointPath().equals("")) ?

SavepointRestoreSettings.forPath(descriptor.getSavepointPath(),
descriptor.isAllowNonRestoredState()) :
SavepointRestoreSettings.none())
.setArguments(userArgs)
.build();


JobGraph jobGraph = PackagedProgramUtils.createJobGraph(program,
configuration, 4, true);

4. If we use the blink planner and enable checkpointing, compilation fails.
For the other combinations, compilation succeeds.

Thanks,
Bin

Jark Wu  wrote on Wed, Jun 17, 2020 at 10:42 AM:

> Hi,
>
> Which Flink version are you using? Are you using SQL CLI? Could you share
> your table/sql program?
> We did fix some classloading problems around SQL CLI, e.g. FLINK-18302
>
> Best,
> Jark
>
> On Wed, 17 Jun 2020 at 10:31, 杜斌  wrote:
>
> > add the full stack trace here:
> >
> >
> > Caused by:
> >
> >
> org.apache.flink.shaded.guava18.com.google.common.util.concurrent.UncheckedExecutionException:
> > org.apache.flink.api.common.InvalidProgramException: Table program cannot
> > be compiled. This is a bug. Please file an issue.
> > at
> >
> >
> org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2203)
> > at
> >
> >
> org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache.get(LocalCache.java:3937)
> > at
> >
> >
> org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4739)
> > at
> >
> >
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:66)
> > ... 14 more
> > Caused by: org.apache.flink.api.common.InvalidProgramException: Table
> > program cannot be compiled. This is a bug. Please file an issue.
> > at
> >
> >
> org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:81)
> > at
> >
> >
> org.apache.flink.table.runtime.generated.CompileUtils.lambda$compile$1(CompileUtils.java:66)
> > at
> >
> >
> org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4742)
> > at
> >
> >
> 

Re: [Reminder] Prefer {% link %} tag in documentation

2020-06-16 Thread Yangze Guo
I second Jark's comment. We need a CI mechanism to ensure this.

Best,
Yangze Guo

On Wed, Jun 17, 2020 at 11:45 AM Jark Wu  wrote:
>
> Hi everyone,
>
> Before switching to {% link %} and adding a CI check to enforce the link
> tag, I suggest adding a CI profile to check for broken links first.
>
> The background is that, recently, I noticed that many contributors are
> beginning to use link tags, but forget to link to ".zh.md" instead of
> ".md" in the Chinese documentation.
> This has caused the docs build to fail over the last two days [1]. I have
> fixed a couple of broken links. But if we don't have a CI mechanism, this
> will keep the docs build unstable.
>
> Best,
> Jark
>
> [1]: https://ci.apache.org/builders/flink-docs-master
>
> On Mon, 15 Jun 2020 at 12:48, Congxian Qiu  wrote:
>
> > +1 to use {% link %} tag and add a check during CI.
> > For the Chinese docs, I will suggest that translation contributors use
> > the {% link %} tag when reviewing their translation PRs.
> >
> > Best,
> > Congxian
> >
> >
> > Jark Wu  wrote on Wed, Jun 10, 2020 at 10:48 AM:
> >
> > > +1 to use the {% link %} tag and add a check in CI.
> > >
> > > Tip: if you want to link a Chinese page, you should write:
> > > [CLI]({% link ops/cli.zh.md %})
> > >
> > > Best,
> > > Jark
> > >
> > > On Wed, 10 Jun 2020 at 10:30, Yangze Guo  wrote:
> > >
> > > > Thanks for that reminder, Seth!
> > > >
> > > > +1 to add a check during CI if possible.
> > > >
> > > > Best,
> > > > Yangze Guo
> > > >
> > > > On Wed, Jun 10, 2020 at 3:04 AM Kostas Kloudas 
> > > wrote:
> > > > >
> > > > > Thanks for the heads up Seth!
> > > > >
> > > > > Kostas
> > > > >
> > > > > On Tue, Jun 9, 2020 at 7:27 PM Seth Wiesman 
> > > wrote:
> > > > > >
> > > > > > The tag is new to Jekyll 4.0 which we only recently updated to.
> > > > > >
> > > > > > There are a lot of existing tags that would need to be updated
> > first
> > > :)
> > > > > > I opened a ticket to track that work and then yes that would make
> > > > sense.
> > > > > >
> > > > > > Seth
> > > > > >
> > > > > > [1] https://issues.apache.org/jira/browse/FLINK-18193
> > > > > >
> > > > > > On Tue, Jun 9, 2020 at 12:05 PM Robert Metzger <
> > rmetz...@apache.org>
> > > > wrote:
> > > > > >
> > > > > > > Thanks for the reminder. I was also not aware of this tag!
> > > > > > >
> > > > > > > How about enforcing the use of this tag through CI?
> > > > > > > We could for example grep through the added lines of all changes
> > in
> > > > docs/
> > > > > > > and fail the build if we see the wrong pattern.
> > > > > > >
> > > > > > >
> > > > > > > On Tue, Jun 9, 2020 at 4:37 PM Seth Wiesman  > >
> > > > wrote:
> > > > > > >
> > > > > > > > Whoops, responded from the wrong email :)
> > > > > > > >
> > > > > > > > Thank you for noticing that, the guide is out of date. I will
> > fix
> > > > that
> > > > > > > > immediately!
> > > > > > > >
> > > > > > > > On Tue, Jun 9, 2020 at 9:36 AM Seth Wiesman <
> > s...@ververica.com>
> > > > wrote:
> > > > > > > >
> > > > > > > > > Thank you for noticing that, the guide is out of date. I will
> > > > fix that
> > > > > > > > > immediately!
> > > > > > > > >
> > > > > > > > > On Tue, Jun 9, 2020 at 9:34 AM Dawid Wysakowicz <
> > > > > > > dwysakow...@apache.org>
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > >> Hi Seth,
> > > > > > > > >>
> > > > > > > > >> Thanks I did not know that.
> > > > > > > > >>
> > > > > > > > >> I am not entirely sure, but I think our documentation guide
> > is
> > > > > > > slightly
> > > > > > > > >> outdated on that manner (
> > > > > > > > >> https://flink.apache.org/contributing/docs-style.html) Or
> > is
> > > > there a
> > > > > > > > >> mistake in your example? Our guide recommends:
> > > > > > > > >>
> > > > > > > > >> [CLI]({{ site.baseurl }}{% link ops/cli.md %})
> > > > > > > > >>
> > > > > > > > >>
> > > > > > > > >> Best,
> > > > > > > > >>
> > > > > > > > >> Dawid
> > > > > > > > >> On 09/06/2020 16:20, Seth Wiesman wrote:
> > > > > > > > >>
> > > > > > > > >> Hi Everyone!
> > > > > > > > >>
> > > > > > > > >> As we are seeing an influx of documentation PRs in
> > > > > > > > >> anticipation of the 1.11 release, I would like to remind
> > > > > > > > >> everyone to use the {% link %} tag when cross-linking
> > > > > > > > >> pages [1]. This is opposed to creating a link based on
> > > > > > > > >> site.baseurl.
> > > > > > > > >>
> > > > > > > > >> Going forward, a link such as:
> > > > > > > > >>
> > > > > > > > >> [CLI]({{ site.baseurl }}/ops/cli.html)
> > > > > > > > >>
> > > > > > > > >> should be written as:
> > > > > > > > >>
> > > > > > > > >> [CLI]({% link ops/cli.md %})
> > > > > > > > >>
> > > > > > > > >> This tag will fail the build on broken links, which will
> > > > > > > > >> help us prevent 404s on the website.
> > > > > > > > >>
> > > > > > > > >> You can see a good example of the link tag in action
> > > > > > > > >> here [2].
> > > > > > > > >>
> > > > > > > > >> Seth
> > > > > > > > >>
> > > > > > > > >> [1] 

Re: [ANNOUNCE] Yu Li is now part of the Flink PMC

2020-06-16 Thread Xingbo Huang
Congratulations Yu! Well deserved

Best,
Xingbo

Peter Huang  wrote on Wed, Jun 17, 2020 at 11:51 AM:

> Congratulations Yu!
>
> On Tue, Jun 16, 2020 at 8:31 PM Dan Zou  wrote:
>
> > Congratulations Yu!
> >
> > Best,
> > Dan Zou
> >
> > > On Jun 17, 2020, at 11:28 AM, Jeff Zhang  wrote:
> > >
> > > Congratulations Yu !
> > >
> > > On Wed, Jun 17, 2020 at 11:26 AM Yun Gao  wrote:
> > >
> > >> Congratulations Yu!
> > >>
> > >> Best,
> > >> Yun
> > >>
> > >> --
> > >> Sender:Zhijiang
> > >> Date:2020/06/17 11:18:35
> > >> Recipient:Dian Fu; dev
> > >> Cc:Haibo Sun; user;
> user-zh<
> > >> user...@flink.apache.org>
> > >> Theme:Re: [ANNOUNCE] Yu Li is now part of the Flink PMC
> > >>
> > >> Congratulations Yu! Well deserved!
> > >>
> > >> Best,
> > >> Zhijiang
> > >>
> > >>
> > >> --
> > >> From:Dian Fu 
> > >> Send Time: Wed, Jun 17, 2020 10:48
> > >> To:dev 
> > >> Cc:Haibo Sun ; user ;
> > user-zh <
> > >> user...@flink.apache.org>
> > >> Subject:Re: [ANNOUNCE] Yu Li is now part of the Flink PMC
> > >>
> > >> Congrats Yu!
> > >>
> > >> Regards,
> > >> Dian
> > >>
> > >>> On Jun 17, 2020, at 10:35 AM, Jark Wu  wrote:
> > >>>
> > >>> Congratulations Yu! Well deserved!
> > >>>
> > >>> Best,
> > >>> Jark
> > >>>
> > >>> On Wed, 17 Jun 2020 at 10:18, Haibo Sun  wrote:
> > >>>
> >  Congratulations Yu!
> > 
> >  Best,
> >  Haibo
> > 
> > 
> >  At 2020-06-17 09:15:02, "jincheng sun" 
> > >> wrote:
> > > Hi all,
> > >
> > > On behalf of the Flink PMC, I'm happy to announce that Yu Li is now
> > > part of the Apache Flink Project Management Committee (PMC).
> > >
> > > Yu Li has been very active on Flink's state backend component, working
> > > on various improvements, for example the RocksDB memory management for
> > > 1.10. He keeps checking and voting for our releases, and has also
> > > successfully produced two releases (1.10.0 & 1.10.1) as RM.
> > >
> > > Congratulations & Welcome Yu Li!
> > >
> > > Best,
> > > Jincheng (on behalf of the Flink PMC)
> > 
> > 
> > >>
> > >>
> > >>
> > >
> > > --
> > > Best Regards
> > >
> > > Jeff Zhang
> >
> >
>


Re: [ANNOUNCE] Yu Li is now part of the Flink PMC

2020-06-16 Thread Peter Huang
Congratulations Yu!

On Tue, Jun 16, 2020 at 8:31 PM Dan Zou  wrote:

> Congratulations Yu!
>
> Best,
> Dan Zou
>
> > On Jun 17, 2020, at 11:28 AM, Jeff Zhang  wrote:
> >
> > Congratulations Yu !
> >
> > On Wed, Jun 17, 2020 at 11:26 AM Yun Gao  wrote:
> >
> >> Congratulations Yu!
> >>
> >> Best,
> >> Yun
> >>
> >> --
> >> Sender:Zhijiang
> >> Date:2020/06/17 11:18:35
> >> Recipient:Dian Fu; dev
> >> Cc:Haibo Sun; user; user-zh<
> >> user...@flink.apache.org>
> >> Theme:Re: [ANNOUNCE] Yu Li is now part of the Flink PMC
> >>
> >> Congratulations Yu! Well deserved!
> >>
> >> Best,
> >> Zhijiang
> >>
> >>
> >> --
> >> From:Dian Fu 
> >> Send Time: Wed, Jun 17, 2020 10:48
> >> To:dev 
> >> Cc:Haibo Sun ; user ;
> user-zh <
> >> user...@flink.apache.org>
> >> Subject:Re: [ANNOUNCE] Yu Li is now part of the Flink PMC
> >>
> >> Congrats Yu!
> >>
> >> Regards,
> >> Dian
> >>
> >>> On Jun 17, 2020, at 10:35 AM, Jark Wu  wrote:
> >>>
> >>> Congratulations Yu! Well deserved!
> >>>
> >>> Best,
> >>> Jark
> >>>
> >>> On Wed, 17 Jun 2020 at 10:18, Haibo Sun  wrote:
> >>>
>  Congratulations Yu!
> 
>  Best,
>  Haibo
> 
> 
>  At 2020-06-17 09:15:02, "jincheng sun" 
> >> wrote:
> > Hi all,
> >
> > On behalf of the Flink PMC, I'm happy to announce that Yu Li is now
> > part of the Apache Flink Project Management Committee (PMC).
> >
> > Yu Li has been very active on Flink's state backend component, working
> > on various improvements, for example the RocksDB memory management for
> > 1.10. He keeps checking and voting for our releases, and has also
> > successfully produced two releases (1.10.0 & 1.10.1) as RM.
> >
> > Congratulations & Welcome Yu Li!
> >
> > Best,
> > Jincheng (on behalf of the Flink PMC)
> 
> 
> >>
> >>
> >>
> >
> > --
> > Best Regards
> >
> > Jeff Zhang
>
>


Re: [Reminder] Prefer {% link %} tag in documentation

2020-06-16 Thread Jark Wu
Hi everyone,

Before switching to {% link %} and adding a CI check to enforce the link
tag, I suggest adding a CI profile to check for broken links first.

The background is that, recently, I noticed that many contributors are
beginning to use link tags, but forget to link to ".zh.md" instead of
".md" in the Chinese documentation.
This has caused the docs build to fail over the last two days [1]. I have
fixed a couple of broken links. But if we don't have a CI mechanism, this
will keep the docs build unstable.

Best,
Jark

[1]: https://ci.apache.org/builders/flink-docs-master

On Mon, 15 Jun 2020 at 12:48, Congxian Qiu  wrote:

> +1 to use {% link %} tag and add a check during CI.
> For the Chinese docs, I will suggest that translation contributors use
> the {% link %} tag when reviewing their translation PRs.
>
> Best,
> Congxian
>
>
> Jark Wu  wrote on Wed, Jun 10, 2020 at 10:48 AM:
>
> > +1 to use the {% link %} tag and add a check in CI.
> >
> > Tip: if you want to link a Chinese page, you should write:
> > [CLI]({% link ops/cli.zh.md %})
> >
> > Best,
> > Jark
> >
> > On Wed, 10 Jun 2020 at 10:30, Yangze Guo  wrote:
> >
> > > Thanks for that reminder, Seth!
> > >
> > > +1 to add a check during CI if possible.
> > >
> > > Best,
> > > Yangze Guo
> > >
> > > On Wed, Jun 10, 2020 at 3:04 AM Kostas Kloudas 
> > wrote:
> > > >
> > > > Thanks for the heads up Seth!
> > > >
> > > > Kostas
> > > >
> > > > On Tue, Jun 9, 2020 at 7:27 PM Seth Wiesman 
> > wrote:
> > > > >
> > > > > The tag is new to Jekyll 4.0 which we only recently updated to.
> > > > >
> > > > > There are a lot of existing tags that would need to be updated
> first
> > :)
> > > > > I opened a ticket to track that work and then yes that would make
> > > sense.
> > > > >
> > > > > Seth
> > > > >
> > > > > [1] https://issues.apache.org/jira/browse/FLINK-18193
> > > > >
> > > > > On Tue, Jun 9, 2020 at 12:05 PM Robert Metzger <
> rmetz...@apache.org>
> > > wrote:
> > > > >
> > > > > > Thanks for the reminder. I was also not aware of this tag!
> > > > > >
> > > > > > How about enforcing the use of this tag through CI?
> > > > > > We could for example grep through the added lines of all changes
> in
> > > docs/
> > > > > > and fail the build if we see the wrong pattern.
> > > > > >
> > > > > >
> > > > > > On Tue, Jun 9, 2020 at 4:37 PM Seth Wiesman  >
> > > wrote:
> > > > > >
> > > > > > > Whoops, responded from the wrong email :)
> > > > > > >
> > > > > > > Thank you for noticing that, the guide is out of date. I will
> fix
> > > that
> > > > > > > immediately!
> > > > > > >
> > > > > > > On Tue, Jun 9, 2020 at 9:36 AM Seth Wiesman <
> s...@ververica.com>
> > > wrote:
> > > > > > >
> > > > > > > > Thank you for noticing that, the guide is out of date. I will
> > > fix that
> > > > > > > > immediately!
> > > > > > > >
> > > > > > > > On Tue, Jun 9, 2020 at 9:34 AM Dawid Wysakowicz <
> > > > > > dwysakow...@apache.org>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > >> Hi Seth,
> > > > > > > >>
> > > > > > > >> Thanks I did not know that.
> > > > > > > >>
> > > > > > > >> I am not entirely sure, but I think our documentation guide
> is
> > > > > > slightly
> > > > > > > >> outdated on that manner (
> > > > > > > >> https://flink.apache.org/contributing/docs-style.html) Or
> is
> > > there a
> > > > > > > >> mistake in your example? Our guide recommends:
> > > > > > > >>
> > > > > > > >> [CLI]({{ site.baseurl }}{% link ops/cli.md %})
> > > > > > > >>
> > > > > > > >>
> > > > > > > >> Best,
> > > > > > > >>
> > > > > > > >> Dawid
> > > > > > > >> On 09/06/2020 16:20, Seth Wiesman wrote:
> > > > > > > >>
> > > > > > > >> Hi Everyone!
> > > > > > > >>
> > > > > > > >> As we are seeing an influx of documentation PRs in
> > > > > > > >> anticipation of the 1.11 release, I would like to remind
> > > > > > > >> everyone to use the {% link %} tag when cross-linking
> > > > > > > >> pages [1]. This is opposed to creating a link based on
> > > > > > > >> site.baseurl.
> > > > > > > >>
> > > > > > > >> Going forward, a link such as:
> > > > > > > >>
> > > > > > > >> [CLI]({{ site.baseurl }}/ops/cli.html)
> > > > > > > >>
> > > > > > > >> should be written as:
> > > > > > > >>
> > > > > > > >> [CLI]({% link ops/cli.md %})
> > > > > > > >>
> > > > > > > >> This tag will fail the build on broken links, which will
> > > > > > > >> help us prevent 404s on the website.
> > > > > > > >>
> > > > > > > >> You can see a good example of the link tag in action
> > > > > > > >> here [2].
> > > > > > > >>
> > > > > > > >> Seth
> > > > > > > >>
> > > > > > > >> [1] https://jekyllrb.com/docs/liquid/tags/
> > > > > > > >> [2]
> > > > > > >
> > > > > >
> > >
> >
> https://github.com/apache/flink/blame/b6ea96251d101ca25aa6a6b92170cfa4274b4cc3/docs/index.md#L65-L67
> > > > > > > >>
> > > > > > > >>
> > > > > > > >
> > > > > > > > --
> > > > > > > >
> > > > > > > > Seth Wiesman | Solutions Architect
> > > > > > > >
> > > > > > > > +1 314 387 1463
> > > > > > > >
> > > > > > > > 
> > > 

Re: [ANNOUNCE] Yu Li is now part of the Flink PMC

2020-06-16 Thread Dan Zou
Congratulations Yu!

Best,
Dan Zou

> On Jun 17, 2020, at 11:28 AM, Jeff Zhang  wrote:
> 
> Congratulations Yu !
> 
> On Wed, Jun 17, 2020 at 11:26 AM Yun Gao  wrote:
> 
>> Congratulations Yu!
>> 
>> Best,
>> Yun
>> 
>> --
>> Sender:Zhijiang
>> Date:2020/06/17 11:18:35
>> Recipient:Dian Fu; dev
>> Cc:Haibo Sun; user; user-zh<
>> user...@flink.apache.org>
>> Theme:Re: [ANNOUNCE] Yu Li is now part of the Flink PMC
>> 
>> Congratulations Yu! Well deserved!
>> 
>> Best,
>> Zhijiang
>> 
>> 
>> --
>> From:Dian Fu 
>> Send Time: Wed, Jun 17, 2020 10:48
>> To:dev 
>> Cc:Haibo Sun ; user ; user-zh <
>> user...@flink.apache.org>
>> Subject:Re: [ANNOUNCE] Yu Li is now part of the Flink PMC
>> 
>> Congrats Yu!
>> 
>> Regards,
>> Dian
>> 
>>> On Jun 17, 2020, at 10:35 AM, Jark Wu  wrote:
>>> 
>>> Congratulations Yu! Well deserved!
>>> 
>>> Best,
>>> Jark
>>> 
>>> On Wed, 17 Jun 2020 at 10:18, Haibo Sun  wrote:
>>> 
 Congratulations Yu!
 
 Best,
 Haibo
 
 
 At 2020-06-17 09:15:02, "jincheng sun" 
>> wrote:
> Hi all,
> 
> On behalf of the Flink PMC, I'm happy to announce that Yu Li is now
> part of the Apache Flink Project Management Committee (PMC).
> 
> Yu Li has been very active on Flink's state backend component, working
> on various improvements, for example the RocksDB memory management for
> 1.10. He keeps checking and voting for our releases, and has also
> successfully produced two releases (1.10.0 & 1.10.1) as RM.
> 
> Congratulations & Welcome Yu Li!
> 
> Best,
> Jincheng (on behalf of the Flink PMC)
 
 
>> 
>> 
>> 
> 
> -- 
> Best Regards
> 
> Jeff Zhang



Re: Re: [ANNOUNCE] Yu Li is now part of the Flink PMC

2020-06-16 Thread Jeff Zhang
Congratulations Yu !

Yun Gao  wrote on Wed, Jun 17, 2020 at 11:26 AM:

> Congratulations Yu!
>
> Best,
> Yun
>
> --
> Sender:Zhijiang
> Date:2020/06/17 11:18:35
> Recipient:Dian Fu; dev
> Cc:Haibo Sun; user; user-zh<
> user...@flink.apache.org>
> Theme:Re: [ANNOUNCE] Yu Li is now part of the Flink PMC
>
> Congratulations Yu! Well deserved!
>
> Best,
> Zhijiang
>
>
> --
> From:Dian Fu 
> Send Time: Wed, Jun 17, 2020 10:48
> To:dev 
> Cc:Haibo Sun ; user ; user-zh <
> user...@flink.apache.org>
> Subject:Re: [ANNOUNCE] Yu Li is now part of the Flink PMC
>
> Congrats Yu!
>
> Regards,
> Dian
>
> > On Jun 17, 2020, at 10:35 AM, Jark Wu  wrote:
> >
> > Congratulations Yu! Well deserved!
> >
> > Best,
> > Jark
> >
> > On Wed, 17 Jun 2020 at 10:18, Haibo Sun  wrote:
> >
> >> Congratulations Yu!
> >>
> >> Best,
> >> Haibo
> >>
> >>
> >> At 2020-06-17 09:15:02, "jincheng sun" 
> wrote:
> >>> Hi all,
> >>>
> >>> On behalf of the Flink PMC, I'm happy to announce that Yu Li is now
> >>> part of the Apache Flink Project Management Committee (PMC).
> >>>
> >>> Yu Li has been very active on Flink's state backend component, working
> >>> on various improvements, for example the RocksDB memory management for
> >>> 1.10. He keeps checking and voting for our releases, and has also
> >>> successfully produced two releases (1.10.0 & 1.10.1) as RM.
> >>>
> >>> Congratulations & Welcome Yu Li!
> >>>
> >>> Best,
> >>> Jincheng (on behalf of the Flink PMC)
> >>
> >>
>
>
>

-- 
Best Regards

Jeff Zhang


Re: Re: [ANNOUNCE] Yu Li is now part of the Flink PMC

2020-06-16 Thread Yun Gao
Congratulations Yu! 

Best,
Yun

--
Sender:Zhijiang
Date:2020/06/17 11:18:35
Recipient:Dian Fu; dev
Cc:Haibo Sun; user; 
user-zh
Theme:Re: [ANNOUNCE] Yu Li is now part of the Flink PMC

Congratulations Yu! Well deserved!

Best,
Zhijiang


--
From:Dian Fu 
Send Time: Wed, Jun 17, 2020 10:48
To:dev 
Cc:Haibo Sun ; user ; user-zh 

Subject:Re: [ANNOUNCE] Yu Li is now part of the Flink PMC

Congrats Yu!

Regards,
Dian

> On Jun 17, 2020, at 10:35 AM, Jark Wu  wrote:
> 
> Congratulations Yu! Well deserved!
> 
> Best,
> Jark
> 
> On Wed, 17 Jun 2020 at 10:18, Haibo Sun  wrote:
> 
>> Congratulations Yu!
>> 
>> Best,
>> Haibo
>> 
>> 
>> At 2020-06-17 09:15:02, "jincheng sun"  wrote:
>>> Hi all,
>>> 
>>> On behalf of the Flink PMC, I'm happy to announce that Yu Li is now
>>> part of the Apache Flink Project Management Committee (PMC).
>>> 
>>> Yu Li has been very active on Flink's state backend component, working on
>>> various improvements, for example the RocksDB memory management for 1.10.
>>> He keeps checking and voting for our releases, and has also successfully
>>> produced two releases (1.10.0 & 1.10.1) as RM.
>>> 
>>> Congratulations & Welcome Yu Li!
>>> 
>>> Best,
>>> Jincheng (on behalf of the Flink PMC)
>> 
>> 




Re: [ANNOUNCE] Yu Li is now part of the Flink PMC

2020-06-16 Thread Zhijiang
Congratulations Yu! Well deserved!

Best,
Zhijiang


--
From:Dian Fu 
Send Time: Wed, Jun 17, 2020 10:48
To:dev 
Cc:Haibo Sun ; user ; user-zh 

Subject:Re: [ANNOUNCE] Yu Li is now part of the Flink PMC

Congrats Yu!

Regards,
Dian

> On Jun 17, 2020, at 10:35 AM, Jark Wu  wrote:
> 
> Congratulations Yu! Well deserved!
> 
> Best,
> Jark
> 
> On Wed, 17 Jun 2020 at 10:18, Haibo Sun  wrote:
> 
>> Congratulations Yu!
>> 
>> Best,
>> Haibo
>> 
>> 
>> At 2020-06-17 09:15:02, "jincheng sun"  wrote:
>>> Hi all,
>>> 
>>> On behalf of the Flink PMC, I'm happy to announce that Yu Li is now
>>> part of the Apache Flink Project Management Committee (PMC).
>>> 
>>> Yu Li has been very active on Flink's state backend component, working on
>>> various improvements, for example the RocksDB memory management for 1.10.
>>> He keeps checking and voting for our releases, and has also successfully
>>> produced two releases (1.10.0 & 1.10.1) as RM.
>>> 
>>> Congratulations & Welcome Yu Li!
>>> 
>>> Best,
>>> Jincheng (on behalf of the Flink PMC)
>> 
>> 



[jira] [Created] (FLINK-18339) ValidationException: field TypeInformation in TableSchema does not match the TableSource return type (blink planner)

2020-06-16 Thread hehuiyuan (Jira)
hehuiyuan created FLINK-18339:
-

 Summary: ValidationException: field TypeInformation in TableSchema
does not match the TableSource return type (blink planner)
 Key: FLINK-18339
 URL: https://issues.apache.org/jira/browse/FLINK-18339
 Project: Flink
  Issue Type: Bug
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Affects Versions: 1.9.0
Reporter: hehuiyuan
 Attachments: image-2020-06-17-10-37-48-166.png, 
image-2020-06-17-10-53-08-424.png

The type of the `datatime` field is OBJECT_ARRAY.

 

Exception:

 
{code:java}
Exception in thread "main" org.apache.flink.table.api.ValidationException: Type
LEGACY(BasicArrayTypeInfo) of table field 'datatime' does not match
with type BasicArrayTypeInfo of the field 'datatime' of the TableSource
return type. at
org.apache.flink.table.planner.sources.TableSourceUtil$$anonfun$4.apply(TableSourceUtil.scala:121)
 at 
org.apache.flink.table.planner.sources.TableSourceUtil$$anonfun$4.apply(TableSourceUtil.scala:92)
 at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
 at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
 at 
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
 at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186) at 
scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186) at 
org.apache.flink.table.planner.sources.TableSourceUtil$.computeIndexMapping(TableSourceUtil.scala:92)
 at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:100)
 at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:55)
 at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:54)
 at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlan(StreamExecTableSourceScan.scala:55)
 at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlanInternal(StreamExecCalc.scala:86)
 at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlanInternal(StreamExecCalc.scala:46)
 at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:54)
 at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlan(StreamExecCalc.scala:46)
 at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecExchange.translateToPlanInternal(StreamExecExchange.scala:84)
 at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecExchange.translateToPlanInternal(StreamExecExchange.scala:44)
 at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:54)
 at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecExchange.translateToPlan(StreamExecExchange.scala:44)
 at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecGroupWindowAggregate.translateToPlanInternal(StreamExecGroupWindowAggregate.scala:141)
 at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecGroupWindowAggregate.translateToPlanInternal(StreamExecGroupWindowAggregate.scala:55)
 at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:54)
 at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecGroupWindowAggregate.translateToPlan(StreamExecGroupWindowAggregate.scala:55)
 at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlanInternal(StreamExecCalc.scala:86)
 at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlanInternal(StreamExecCalc.scala:46)
 at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:54)
 at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlan(StreamExecCalc.scala:46)
 at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToTransformation(StreamExecSink.scala:185)
 at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:119)
 at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:50)
 at 
org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:54)
 at 

Re: [ANNOUNCE] Yu Li is now part of the Flink PMC

2020-06-16 Thread Dian Fu
Congrats Yu!

Regards,
Dian

> On Jun 17, 2020, at 10:35 AM, Jark Wu  wrote:
> 
> Congratulations Yu! Well deserved!
> 
> Best,
> Jark
> 
> On Wed, 17 Jun 2020 at 10:18, Haibo Sun  wrote:
> 
>> Congratulations Yu!
>> 
>> Best,
>> Haibo
>> 
>> 
>> At 2020-06-17 09:15:02, "jincheng sun"  wrote:
>>> Hi all,
>>> 
>>> On behalf of the Flink PMC, I'm happy to announce that Yu Li is now
>>> part of the Apache Flink Project Management Committee (PMC).
>>> 
>>> Yu Li has been very active on Flink's state backend component, working on
>>> various improvements, for example the RocksDB memory management for 1.10.
>>> He keeps checking and voting for our releases, and has also successfully
>>> produced two releases (1.10.0 & 1.10.1) as RM.
>>> 
>>> Congratulations & Welcome Yu Li!
>>> 
>>> Best,
>>> Jincheng (on behalf of the Flink PMC)
>> 
>> 



Re: [ANNOUNCE] Yu Li is now part of the Flink PMC

2020-06-16 Thread Forward Xu
Congratulations Yu!


Best,

Forward

Jark Wu  wrote on Wed, Jun 17, 2020 at 10:36 AM:

> Congratulations Yu! Well deserved!
>
> Best,
> Jark
>
> On Wed, 17 Jun 2020 at 10:18, Haibo Sun  wrote:
>
> > Congratulations Yu!
> >
> > Best,
> > Haibo
> >
> >
> > At 2020-06-17 09:15:02, "jincheng sun"  wrote:
> > >Hi all,
> > >
> > >On behalf of the Flink PMC, I'm happy to announce that Yu Li is now
> > >part of the Apache Flink Project Management Committee (PMC).
> > >
> > >Yu Li has been very active on Flink's state backend component, working on
> > >various improvements, for example the RocksDB memory management for 1.10.
> > >He keeps checking and voting for our releases, and has also successfully
> > >produced two releases (1.10.0 & 1.10.1) as RM.
> > >
> > >Congratulations & Welcome Yu Li!
> > >
> > >Best,
> > >Jincheng (on behalf of the Flink PMC)
> >
> >
>


Re: [ANNOUNCE] Yu Li is now part of the Flink PMC

2020-06-16 Thread Benchao Li
Congratulations Yu!

Jark Wu  wrote on Wed, Jun 17, 2020 at 10:36 AM:

> Congratulations Yu! Well deserved!
>
> Best,
> Jark
>
> On Wed, 17 Jun 2020 at 10:18, Haibo Sun  wrote:
>
> > Congratulations Yu!
> >
> > Best,
> > Haibo
> >
> >
> > At 2020-06-17 09:15:02, "jincheng sun"  wrote:
> > >Hi all,
> > >
> > >On behalf of the Flink PMC, I'm happy to announce that Yu Li is now
> > >part of the Apache Flink Project Management Committee (PMC).
> > >
> > >Yu Li has been very active on Flink's state backend component, working on
> > >various improvements, for example the RocksDB memory management for 1.10.
> > >He keeps checking and voting for our releases, and has also successfully
> > >produced two releases (1.10.0 & 1.10.1) as RM.
> > >
> > >Congratulations & Welcome Yu Li!
> > >
> > >Best,
> > >Jincheng (on behalf of the Flink PMC)
> >
> >
>


Re: Flink Table program cannot be compiled when enable checkpoint of StreamExecutionEnvironment

2020-06-16 Thread Jark Wu
Hi,

Which Flink version are you using? Are you using SQL CLI? Could you share
your table/sql program?
We did fix some classloading problems around SQL CLI, e.g. FLINK-18302

Best,
Jark

On Wed, 17 Jun 2020 at 10:31, 杜斌  wrote:

> add the full stack trace here:
>
>
> Caused by:
>
> org.apache.flink.shaded.guava18.com.google.common.util.concurrent.UncheckedExecutionException:
> org.apache.flink.api.common.InvalidProgramException: Table program cannot
> be compiled. This is a bug. Please file an issue.
> at
>
> org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2203)
> at
>
> org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache.get(LocalCache.java:3937)
> at
>
> org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4739)
> at
>
> org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:66)
> ... 14 more
> Caused by: org.apache.flink.api.common.InvalidProgramException: Table
> program cannot be compiled. This is a bug. Please file an issue.
> at
>
> org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:81)
> at
>
> org.apache.flink.table.runtime.generated.CompileUtils.lambda$compile$1(CompileUtils.java:66)
> at
>
> org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4742)
> at
>
> org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3527)
> at
>
> org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2319)
> at
>
> org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2282)
> at
>
> org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2197)
> ... 17 more
> Caused by: org.codehaus.commons.compiler.CompileException: Line 2, Column
> 46: Cannot determine simple type name "org"
> at org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:12124)
> at
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6746)
> at
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6507)
> at
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520)
> at
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520)
> at
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520)
> at
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520)
> at
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520)
> at
> org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520)
> at org.codehaus.janino.UnitCompiler.getType2(UnitCompiler.java:6486)
> at org.codehaus.janino.UnitCompiler.access$13800(UnitCompiler.java:215)
> at
>
> org.codehaus.janino.UnitCompiler$21$1.visitReferenceType(UnitCompiler.java:6394)
> at
>
> org.codehaus.janino.UnitCompiler$21$1.visitReferenceType(UnitCompiler.java:6389)
> at org.codehaus.janino.Java$ReferenceType.accept(Java.java:3917)
> at org.codehaus.janino.UnitCompiler$21.visitType(UnitCompiler.java:6389)
> at org.codehaus.janino.UnitCompiler$21.visitType(UnitCompiler.java:6382)
> at org.codehaus.janino.Java$ReferenceType.accept(Java.java:3916)
> at org.codehaus.janino.UnitCompiler.getType(UnitCompiler.java:6382)
> at org.codehaus.janino.UnitCompiler.access$1300(UnitCompiler.java:215)
> at
> org.codehaus.janino.UnitCompiler$33.getSuperclass2(UnitCompiler.java:9886)
> at org.codehaus.janino.IClass.getSuperclass(IClass.java:455)
> at org.codehaus.janino.IClass.getIMethods(IClass.java:260)
> at org.codehaus.janino.IClass.getIMethods(IClass.java:237)
> at org.codehaus.janino.UnitCompiler.compile2(UnitCompiler.java:492)
> at org.codehaus.janino.UnitCompiler.compile2(UnitCompiler.java:432)
> at org.codehaus.janino.UnitCompiler.access$400(UnitCompiler.java:215)
> at
>
> org.codehaus.janino.UnitCompiler$2.visitPackageMemberClassDeclaration(UnitCompiler.java:411)
> at
>
> org.codehaus.janino.UnitCompiler$2.visitPackageMemberClassDeclaration(UnitCompiler.java:406)
> at
>
> org.codehaus.janino.Java$PackageMemberClassDeclaration.accept(Java.java:1414)
> at org.codehaus.janino.UnitCompiler.compile(UnitCompiler.java:406)
> at org.codehaus.janino.UnitCompiler.compileUnit(UnitCompiler.java:378)
> at org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:237)
> at
>
> org.codehaus.janino.SimpleCompiler.compileToClassLoader(SimpleCompiler.java:465)
> at org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:216)
> at org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:207)
> at org.codehaus.commons.compiler.Cookable.cook(Cookable.java:80)
> at org.codehaus.commons.compiler.Cookable.cook(Cookable.java:75)
> at
>
> org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:78)
> ... 23 more
>
> 杜斌  wrote on Wed, Jun 17, 2020 at 10:29 AM:
>
> > Hi,
> > Need 

Re: [ANNOUNCE] Yu Li is now part of the Flink PMC

2020-06-16 Thread Jark Wu
Congratulations Yu! Well deserved!

Best,
Jark

On Wed, 17 Jun 2020 at 10:18, Haibo Sun  wrote:

> Congratulations Yu!
>
> Best,
> Haibo
>
>
> At 2020-06-17 09:15:02, "jincheng sun"  wrote:
> >Hi all,
> >
> >On behalf of the Flink PMC, I'm happy to announce that Yu Li is now
> >part of the Apache Flink Project Management Committee (PMC).
> >
> >Yu Li has been very active on Flink's state backend component, working on
> >various improvements, for example the RocksDB memory management for 1.10.
> >He keeps checking and voting for our releases, and has also successfully
> >produced two releases (1.10.0 & 1.10.1) as RM.
> >
> >Congratulations & Welcome Yu Li!
> >
> >Best,
> >Jincheng (on behalf of the Flink PMC)
>
>


Re: Flink Table program cannot be compiled when enable checkpoint of StreamExecutionEnvironment

2020-06-16 Thread 杜斌
add the full stack trace here:


Caused by:
org.apache.flink.shaded.guava18.com.google.common.util.concurrent.UncheckedExecutionException:
org.apache.flink.api.common.InvalidProgramException: Table program cannot
be compiled. This is a bug. Please file an issue.
at
org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2203)
at
org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache.get(LocalCache.java:3937)
at
org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4739)
at
org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:66)
... 14 more
Caused by: org.apache.flink.api.common.InvalidProgramException: Table
program cannot be compiled. This is a bug. Please file an issue.
at
org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:81)
at
org.apache.flink.table.runtime.generated.CompileUtils.lambda$compile$1(CompileUtils.java:66)
at
org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4742)
at
org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3527)
at
org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2319)
at
org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2282)
at
org.apache.flink.shaded.guava18.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2197)
... 17 more
Caused by: org.codehaus.commons.compiler.CompileException: Line 2, Column
46: Cannot determine simple type name "org"
at org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:12124)
at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6746)
at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6507)
at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520)
at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520)
at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520)
at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520)
at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520)
at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:6520)
at org.codehaus.janino.UnitCompiler.getType2(UnitCompiler.java:6486)
at org.codehaus.janino.UnitCompiler.access$13800(UnitCompiler.java:215)
at
org.codehaus.janino.UnitCompiler$21$1.visitReferenceType(UnitCompiler.java:6394)
at
org.codehaus.janino.UnitCompiler$21$1.visitReferenceType(UnitCompiler.java:6389)
at org.codehaus.janino.Java$ReferenceType.accept(Java.java:3917)
at org.codehaus.janino.UnitCompiler$21.visitType(UnitCompiler.java:6389)
at org.codehaus.janino.UnitCompiler$21.visitType(UnitCompiler.java:6382)
at org.codehaus.janino.Java$ReferenceType.accept(Java.java:3916)
at org.codehaus.janino.UnitCompiler.getType(UnitCompiler.java:6382)
at org.codehaus.janino.UnitCompiler.access$1300(UnitCompiler.java:215)
at
org.codehaus.janino.UnitCompiler$33.getSuperclass2(UnitCompiler.java:9886)
at org.codehaus.janino.IClass.getSuperclass(IClass.java:455)
at org.codehaus.janino.IClass.getIMethods(IClass.java:260)
at org.codehaus.janino.IClass.getIMethods(IClass.java:237)
at org.codehaus.janino.UnitCompiler.compile2(UnitCompiler.java:492)
at org.codehaus.janino.UnitCompiler.compile2(UnitCompiler.java:432)
at org.codehaus.janino.UnitCompiler.access$400(UnitCompiler.java:215)
at
org.codehaus.janino.UnitCompiler$2.visitPackageMemberClassDeclaration(UnitCompiler.java:411)
at
org.codehaus.janino.UnitCompiler$2.visitPackageMemberClassDeclaration(UnitCompiler.java:406)
at
org.codehaus.janino.Java$PackageMemberClassDeclaration.accept(Java.java:1414)
at org.codehaus.janino.UnitCompiler.compile(UnitCompiler.java:406)
at org.codehaus.janino.UnitCompiler.compileUnit(UnitCompiler.java:378)
at org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:237)
at
org.codehaus.janino.SimpleCompiler.compileToClassLoader(SimpleCompiler.java:465)
at org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:216)
at org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:207)
at org.codehaus.commons.compiler.Cookable.cook(Cookable.java:80)
at org.codehaus.commons.compiler.Cookable.cook(Cookable.java:75)
at
org.apache.flink.table.runtime.generated.CompileUtils.doCompile(CompileUtils.java:78)
... 23 more

杜斌  于2020年6月17日周三 上午10:29写道:

> Hi,
> I need help with this issue. Here is what Flink reported when I enabled the
> checkpoint setting of the StreamExecutionEnvironment:
>
> /* 1 */
> /* 2 */  public class SourceConversion$1 extends
> org.apache.flink.table.runtime.operators.AbstractProcessStreamOperator
> /* 3 */  implements
> org.apache.flink.streaming.api.operators.OneInputStreamOperator {
> /* 4 */
> /* 5 */private final Object[] references;
> /* 6 */private 

Flink Table program cannot be compiled when enable checkpoint of StreamExecutionEnvironment

2020-06-16 Thread 杜斌
Hi,
I need help with this issue. Here is what Flink reported when I enabled the
checkpoint setting of the StreamExecutionEnvironment:

/* 1 */
/* 2 */  public class SourceConversion$1 extends
org.apache.flink.table.runtime.operators.AbstractProcessStreamOperator
/* 3 */  implements
org.apache.flink.streaming.api.operators.OneInputStreamOperator {
/* 4 */
/* 5 */private final Object[] references;
/* 6 */private transient
org.apache.flink.table.dataformat.DataFormatConverters.RowConverter
converter$0;
/* 7 */private final
org.apache.flink.streaming.runtime.streamrecord.StreamRecord outElement =
new org.apache.flink.streaming.runtime.streamrecord.StreamRecord(null);
/* 8 */
/* 9 */public SourceConversion$1(
/* 10 */Object[] references,
/* 11 */org.apache.flink.streaming.runtime.tasks.StreamTask
task,
/* 12 */org.apache.flink.streaming.api.graph.StreamConfig
config,
/* 13 */org.apache.flink.streaming.api.operators.Output output)
throws Exception {
/* 14 */  this.references = references;
/* 15 */  converter$0 =
(((org.apache.flink.table.dataformat.DataFormatConverters.RowConverter)
references[0]));
/* 16 */  this.setup(task, config, output);
/* 17 */}
/* 18 */
/* 19 */@Override
/* 20 */public void open() throws Exception {
/* 21 */  super.open();
/* 22 */
/* 23 */}
/* 24 */
/* 25 */@Override
/* 26 */public void
processElement(org.apache.flink.streaming.runtime.streamrecord.StreamRecord
element) throws Exception {
/* 27 */  org.apache.flink.table.dataformat.BaseRow in1 =
(org.apache.flink.table.dataformat.BaseRow)
(org.apache.flink.table.dataformat.BaseRow)
converter$0.toInternal((org.apache.flink.types.Row) element.getValue());
/* 28 */
/* 29 */
/* 30 */
/* 31 */  output.collect(outElement.replace(in1));
/* 32 */}
/* 33 */
/* 34 */
/* 35 */
/* 36 */@Override
/* 37 */public void close() throws Exception {
/* 38 */   super.close();
/* 39 */
/* 40 */}
/* 41 */
/* 42 */
/* 43 */  }
/* 44 */

Exception in thread "main" org.apache.flink.util.FlinkRuntimeException:
org.apache.flink.api.common.InvalidProgramException: Table program cannot
be compiled. This is a bug. Please file an issue.
at
org.apache.flink.table.runtime.generated.CompileUtils.compile(CompileUtils.java:68)
at
org.apache.flink.table.runtime.generated.GeneratedClass.compile(GeneratedClass.java:78)
at
org.apache.flink.table.runtime.generated.GeneratedClass.getClass(GeneratedClass.java:96)
at
org.apache.flink.table.runtime.operators.CodeGenOperatorFactory.getStreamOperatorClass(CodeGenOperatorFactory.java:62)
at
org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.preValidate(StreamingJobGraphGenerator.java:214)
at
org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:149)
at
org.apache.flink.streaming.api.graph.StreamingJobGraphGenerator.createJobGraph(StreamingJobGraphGenerator.java:104)
at
org.apache.flink.streaming.api.graph.StreamGraph.getJobGraph(StreamGraph.java:777)
at
org.apache.flink.streaming.api.graph.StreamGraphTranslator.translateToJobGraph(StreamGraphTranslator.java:52)
at
org.apache.flink.client.FlinkPipelineTranslationUtil.getJobGraph(FlinkPipelineTranslationUtil.java:43)
at
org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:57)
at
org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:85)

Janino compiles successfully when I remove the checkpoint setting
from the env.

Can anyone help on this?

Thanks,
Bin
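
For anyone trying to reproduce this, a minimal sketch of the setup described
above (source, sink, and class names are illustrative; it assumes the blink
planner on Flink 1.10, as suggested by the stack trace):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class CheckpointCompileRepro {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Enabling checkpointing is what reportedly triggers the compile error.
        env.enableCheckpointing(10_000);

        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner().inStreamingMode().build();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);

        // A trivial source conversion, like the generated SourceConversion$1 above.
        DataStream<String> words = env.fromElements("a", "b", "c");
        Table table = tEnv.fromDataStream(words, "word");
        tEnv.toAppendStream(table, Row.class).print();

        env.execute("checkpoint-compile-repro");
    }
}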


Re:[ANNOUNCE] Yu Li is now part of the Flink PMC

2020-06-16 Thread Haibo Sun
Congratulations Yu!


Best,
Haibo




At 2020-06-17 09:15:02, "jincheng sun"  wrote:
>Hi all,
>
>On behalf of the Flink PMC, I'm happy to announce that Yu Li is now
>part of the Apache Flink Project Management Committee (PMC).
>
>Yu Li has been very active on Flink's Statebackend component, working on
>various improvements, for example the RocksDB memory management for 1.10.
>He keeps checking and voting for our releases, and has also successfully
>produced two releases (1.10.0 & 1.10.1) as RM.
>
>Congratulations & Welcome Yu Li!
>
>Best,
>Jincheng (on behalf of the Flink PMC)


Re: [ANNOUNCE] Yu Li is now part of the Flink PMC

2020-06-16 Thread Leonard Xu
Congratulations Yu !

Best,
Leonard Xu

> 在 2020年6月17日,09:50,Yangze Guo  写道:
> 
> Congrats, Yu!
> Best,
> Yangze Guo
> 
> On Wed, Jun 17, 2020 at 9:35 AM Xintong Song  wrote:
>> 
>> Congratulations Yu, well deserved~!
>> 
>> Thank you~
>> 
>> Xintong Song
>> 
>> 
>> 
>> On Wed, Jun 17, 2020 at 9:15 AM jincheng sun  
>> wrote:
>>> 
>>> Hi all,
>>> 
>>> On behalf of the Flink PMC, I'm happy to announce that Yu Li is now
>>> part of the Apache Flink Project Management Committee (PMC).
>>> 
>>> Yu Li has been very active on Flink's Statebackend component, working on
>>> various improvements, for example the RocksDB memory management for 1.10.
>>> He keeps checking and voting for our releases, and has also successfully
>>> produced two releases (1.10.0 & 1.10.1) as RM.
>>> 
>>> Congratulations & Welcome Yu Li!
>>> 
>>> Best,
>>> Jincheng (on behalf of the Flink PMC)



Re: [ANNOUNCE] Yu Li is now part of the Flink PMC

2020-06-16 Thread Yangze Guo
Congrats, Yu!
Best,
Yangze Guo

On Wed, Jun 17, 2020 at 9:35 AM Xintong Song  wrote:
>
> Congratulations Yu, well deserved~!
>
> Thank you~
>
> Xintong Song
>
>
>
> On Wed, Jun 17, 2020 at 9:15 AM jincheng sun  wrote:
>>
>> Hi all,
>>
>> On behalf of the Flink PMC, I'm happy to announce that Yu Li is now
>> part of the Apache Flink Project Management Committee (PMC).
>>
>> Yu Li has been very active on Flink's Statebackend component, working on
>> various improvements, for example the RocksDB memory management for 1.10.
>> He keeps checking and voting for our releases, and has also successfully
>> produced two releases (1.10.0 & 1.10.1) as RM.
>>
>> Congratulations & Welcome Yu Li!
>>
>> Best,
>> Jincheng (on behalf of the Flink PMC)


Re: [ANNOUNCE] Yu Li is now part of the Flink PMC

2020-06-16 Thread Xintong Song
Congratulations Yu, well deserved~!

Thank you~

Xintong Song



On Wed, Jun 17, 2020 at 9:15 AM jincheng sun 
wrote:

> Hi all,
>
> On behalf of the Flink PMC, I'm happy to announce that Yu Li is now
> part of the Apache Flink Project Management Committee (PMC).
>
> Yu Li has been very active on Flink's Statebackend component, working on
> various improvements, for example the RocksDB memory management for 1.10.
> He keeps checking and voting for our releases, and has also successfully
> produced two releases (1.10.0 & 1.10.1) as RM.
>
> Congratulations & Welcome Yu Li!
>
> Best,
> Jincheng (on behalf of the Flink PMC)
>


[ANNOUNCE] Yu Li is now part of the Flink PMC

2020-06-16 Thread jincheng sun
Hi all,

On behalf of the Flink PMC, I'm happy to announce that Yu Li is now
part of the Apache Flink Project Management Committee (PMC).

Yu Li has been very active on Flink's Statebackend component, working on
various improvements, for example the RocksDB memory management for 1.10.
He keeps checking and voting for our releases, and has also successfully
produced two releases (1.10.0 & 1.10.1) as RM.

Congratulations & Welcome Yu Li!

Best,
Jincheng (on behalf of the Flink PMC)


Re: Any python example with json data from Kafka using flink-statefun

2020-06-16 Thread Sunil
Checking to see if this is possible currently:
Read JSON data from a Kafka topic => process using Statefun => write out to Kafka
in JSON format.

I could have a separate process that reads the source JSON data and converts it
to Protobuf into another Kafka topic, but that sounds inefficient.
e.g.
Read JSON data from a Kafka topic => convert JSON to Protobuf => process using
Statefun => write out to Kafka in Protobuf format => convert Protobuf to JSON
message.

Appreciate any advice on how to process JSON messages using Statefun. Also, if
this is not possible in the current Python SDK, can I do that using the
Java/Scala SDK?

Thanks.
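
In case it helps: on the Java side, the JSON-to-Protobuf conversion itself is
straightforward with protobuf-java-util's JsonFormat. A minimal sketch (MyRow is
assumed to be the class generated from the message.proto quoted below; the id
argument is illustrative):

import com.google.protobuf.Struct;
import com.google.protobuf.util.JsonFormat;

public class JsonToMyRow {
    // Parse an incoming JSON string into a Protobuf Struct; unknown fields
    // are tolerated since the full schema is not known up front.
    public static Struct toStruct(String json) throws Exception {
        Struct.Builder builder = Struct.newBuilder();
        JsonFormat.parser().ignoringUnknownFields().merge(json, builder);
        return builder.build();
    }

    // Wrap the Struct into the MyRow message generated from message.proto.
    public static MyRow toMyRow(String id, String json) throws Exception {
        return MyRow.newBuilder()
                .setId(id)
                .setMessage(toStruct(json))
                .build();
    }
}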

On 2020/06/15 15:34:39, Sunil Sattiraju  wrote: 
> Thanks Igal,
> I don't have control over the data source inside Kafka (the current Kafka topic
> contains either JSON or Avro formats only; I am trying to reproduce this
> scenario using my test data generator).
> 
> Is it possible to convert the JSON to Protobuf at the receiving end of the
> Statefun application?
> 
> On 2020/06/15 14:51:01, Igal Shilman  wrote: 
> > Hi,
> > 
> > The values must be valid encoded Protobuf messages [1], while in your
> > attached code snippet you are sending utf-8 encoded JSON strings.
> > You can take a look at this example with a generator that produces Protobuf
> > messages [2][3]
> > 
> > [1] https://developers.google.com/protocol-buffers/docs/pythontutorial
> > [2]
> > https://github.com/apache/flink-statefun/blob/8376afa6b064bfa2374eefbda5e149cd490985c0/statefun-examples/statefun-python-greeter-example/generator/event-generator.py#L37
> > [3]
> > https://github.com/apache/flink-statefun/blob/8376afa6b064bfa2374eefbda5e149cd490985c0/statefun-examples/statefun-python-greeter-example/greeter/messages.proto#L25
> > 
> > On Mon, Jun 15, 2020 at 4:25 PM Sunil Sattiraju 
> > wrote:
> > 
> > > Hi, Based on the example from
> > > https://github.com/apache/flink-statefun/tree/master/statefun-examples/statefun-python-greeter-example
> > > I am trying to ingest JSON data in Kafka, but am unable to achieve this
> > > based on the examples.
> > >
> > > event-generator.py
> > >
> > > def produce():
> > > request = {}
> > > request['id'] = "abc-123"
> > > request['field1'] = "field1-1"
> > > request['field2'] = "field2-2"
> > > request['field3'] = "field3-3"
> > > if len(sys.argv) == 2:
> > > delay_seconds = int(sys.argv[1])
> > > else:
> > > delay_seconds = 1
> > > producer = KafkaProducer(bootstrap_servers=[KAFKA_BROKER])
> > > for request in random_requests_dict():
> > > producer.send(topic='test-topic',
> > >   value=json.dumps(request).encode('utf-8'))
> > > producer.flush()
> > > time.sleep(delay_seconds)
> > >
> > > Below is the proto definition of the JSON data (I don't always know all
> > > the fields, but I know the id field definitely exists).
> > > message.proto
> > >
> > > message MyRow {
> > > string id = 1;
> > > google.protobuf.Struct message = 2;
> > > }
> > >
> > > Below is the greeter that receives the data,
> > > tokenizer.py (same as greeter.py, but saving the state of id instead of
> > > counting).
> > >
> > >
> > > @app.route('/statefun', methods=['POST'])
> > > def handle():
> > > my_row = MyRow()
> > > data = my_row.ParseFromString(request.data)  # Is this the right way
> > > to do it?
> > > response_data = handler(request.data)
> > > response = make_response(response_data)
> > > response.headers.set('Content-Type', 'application/octet-stream')
> > > return response
> > >
> > >
> > > But below is the error message. I am a newbie with Protobuf and appreciate
> > > any help.
> > >
> > > 11:55:17,996 tokenizer ERROR Exception on /statefun [POST]
> > > Traceback (most recent call last):
> > >   File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2447,
> > > in wsgi_app
> > > response = self.full_dispatch_request()
> > >   File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1952,
> > > in full_dispatch_request
> > > rv = self.handle_user_exception(e)
> > >   File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1821,
> > > in handle_user_exception
> > > reraise(exc_type, exc_value, tb)
> > >   File "/usr/local/lib/python3.8/site-packages/flask/_compat.py", line 39,
> > > in reraise
> > > raise value
> > >   File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1950,
> > > in full_dispatch_request
> > > rv = self.dispatch_request()
> > >   File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1936,
> > > in dispatch_request
> > > return self.view_functions[rule.endpoint](**req.view_args)
> > >   File "/app/tokenizer.py", line 101, in handle
> > > response_data = handler(data)
> > >   File "/usr/local/lib/python3.8/site-packages/statefun/request_reply.py",
> > > line 38, in __call__
> > > request.ParseFromString(request_bytes)
> > >   File
> > > "/usr/local/lib/python3.8/site-packages/google/protobuf/message.py", 

[jira] [Created] (FLINK-18338) RocksDB tests crash the JVM on CI

2020-06-16 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-18338:


 Summary: RocksDB tests crash the JVM on CI
 Key: FLINK-18338
 URL: https://issues.apache.org/jira/browse/FLINK-18338
 Project: Flink
  Issue Type: Bug
  Components: Runtime / State Backends
Affects Versions: 1.11.0
Reporter: Chesnay Schepler


Something about {{pure virtual method called}}.

Seen this twice in separate PRs.

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3632=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=05b74a19-4ee4-5036-c46f-ada307df6cf0



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18337) Introduce TableResult#await method to wait data ready

2020-06-16 Thread godfrey he (Jira)
godfrey he created FLINK-18337:
--

 Summary: Introduce TableResult#await method to wait data ready
 Key: FLINK-18337
 URL: https://issues.apache.org/jira/browse/FLINK-18337
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Reporter: godfrey he


Currently, the {{TableEnvironment.executeSql()}} method for an INSERT statement
returns a TableResult once the job is submitted. Users must use
{{tableResult.getJobClient.get()
  .getJobExecutionResult(Thread.currentThread().getContextClassLoader)
  .get()}} to wait for the job to finish. This API looks very ugly.
So this issue aims to introduce a {{TableResult#await}} method; the code snippet
looks like:

{code:java}
val tEnv = ...
// submit the job and wait job finish
tEnv.executeSql("insert into ...").await()
{code}

the suggested new methods are:

{code:java}
/**
 * Wait until the data is ready.
 *
 * For a select operation, this method will wait until the first row
 * can be accessed locally.
 * For an insert operation, this method will wait for the job to finish,
 * because the result contains only one row.
 * For other operations, this method will return immediately, because
 * the result is ready locally.
 *
 * @throws ExecutionException if this future completed exceptionally
 * @throws InterruptedException if the current thread was interrupted
 * while waiting
 */
void await() throws InterruptedException, ExecutionException;

/**
 * Wait until the data is ready.
 *
 * For a select operation, this method will wait until the first row
 * can be accessed locally.
 * For an insert operation, this method will wait for the job to finish,
 * because the result contains only one row.
 * For other operations, this method will return immediately, because
 * the result is ready locally.
 *
 * @param timeout the maximum time to wait
 * @param unit the time unit of the timeout argument
 * @throws ExecutionException if this future completed exceptionally
 * @throws InterruptedException if the current thread was interrupted
 * while waiting
 * @throws TimeoutException if the wait timed out
 */
void await(long timeout, TimeUnit unit)
    throws InterruptedException, ExecutionException, TimeoutException;

{code}
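
A usage sketch of the timed variant against the proposed API (not yet existing;
{{tEnv}} as created in the snippet above):

{code:java}
// Sketch against the proposed API: bound the wait on an INSERT job.
try {
    tEnv.executeSql("insert into MySink select * from MySource")
        .await(1, TimeUnit.MINUTES);
} catch (TimeoutException e) {
    // the job did not finish within one minute
}
{code}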





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18336) CheckpointFailureManager forgets failed checkpoints after a successful one

2020-06-16 Thread Roman Khachatryan (Jira)
Roman Khachatryan created FLINK-18336:
-

 Summary: CheckpointFailureManager forgets failed checkpoints after 
a successful one
 Key: FLINK-18336
 URL: https://issues.apache.org/jira/browse/FLINK-18336
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Reporter: Roman Khachatryan
Assignee: Roman Khachatryan






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18335) NotifyCheckpointAbortedITCase.testNotifyCheckpointAborted time outs

2020-06-16 Thread Piotr Nowojski (Jira)
Piotr Nowojski created FLINK-18335:
--

 Summary:  
NotifyCheckpointAbortedITCase.testNotifyCheckpointAborted time outs
 Key: FLINK-18335
 URL: https://issues.apache.org/jira/browse/FLINK-18335
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Affects Versions: 1.12.0
Reporter: Piotr Nowojski
 Fix For: 1.12.0


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3582=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=f508e270-48d6-5f1e-3138-42a17e0714f0


{noformat}
[ERROR] Errors: 
[ERROR]   
NotifyCheckpointAbortedITCase.testNotifyCheckpointAborted:182->verifyAllOperatorsNotifyAborted:195->Object.wait:502->Object.wait:-2
 » TestTimedOut
{noformat}

CC [~yunta]




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18334) Exclude .dumpstream files that only contain JAVA_TOOL_OPTIONS logging

2020-06-16 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-18334:


 Summary: Exclude .dumpstream files that only contain 
JAVA_TOOL_OPTIONS logging
 Key: FLINK-18334
 URL: https://issues.apache.org/jira/browse/FLINK-18334
 Project: Flink
  Issue Type: Bug
  Components: Build System / CI
Reporter: Chesnay Schepler
 Fix For: 1.11.0


For debugging purposes we tell all JVMs on CI to create a heap dump if they hit 
an OOM.
We additionally collect the .dumpstream files from surefire, which contain the 
raw JVM output.

All JVMs will log something like this:
{code}
# Created at 2020-06-16T14:30:35.624
Picked up JAVA_TOOL_OPTIONS: -XX:+HeapDumpOnOutOfMemoryError
{code}

This is fine, and cannot be avoided, but as a result you have a lot of 
.dumpstream files that only contain this entry, which is a bit annoying when 
looking for that one file that actually contains the important entries.
Maybe we could filter out files that only contain these entries.
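
A sketch of such a filter (illustrative, not an actual part of the CI scripts):
a file counts as noise when every non-empty line is either the timestamp header
or the JAVA_TOOL_OPTIONS echo shown above.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class DumpstreamFilter {
    // True if the .dumpstream file contains nothing beyond the
    // "# Created at ..." header and the JAVA_TOOL_OPTIONS echo.
    static boolean noiseOnly(Path file) throws IOException {
        return Files.readAllLines(file).stream()
                .map(String::trim)
                .filter(line -> !line.isEmpty())
                .allMatch(line -> line.startsWith("# Created at")
                        || line.startsWith("Picked up JAVA_TOOL_OPTIONS"));
    }
}
{code}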



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18333) UnsignedTypeConversionITCase failed caused by MariaDB4j "Asked to waitFor Program"

2020-06-16 Thread Jark Wu (Jira)
Jark Wu created FLINK-18333:
---

 Summary: UnsignedTypeConversionITCase failed caused by MariaDB4j 
"Asked to waitFor Program"
 Key: FLINK-18333
 URL: https://issues.apache.org/jira/browse/FLINK-18333
 Project: Flink
  Issue Type: Bug
  Components: Connectors / JDBC, Table SQL / Ecosystem
Reporter: Jark Wu


https://dev.azure.com/rmetzger/Flink/_build/results?buildId=8173=logs=66592496-52df-56bb-d03e-37509e1d9d0f=ae0269db-6796-5583-2e5f-d84757d711aa
{code}
2020-06-16T08:23:26.3013987Z [INFO] Running 
org.apache.flink.connector.jdbc.table.JdbcDynamicTableSourceITCase
2020-06-16T08:23:30.2252334Z Tue Jun 16 08:23:30 UTC 2020 Thread[main,5,main] 
java.lang.NoSuchFieldException: DEV_NULL
2020-06-16T08:23:31.2907920Z 

2020-06-16T08:23:31.2913806Z Tue Jun 16 08:23:30 UTC 2020:
2020-06-16T08:23:31.2914839Z Booting Derby version The Apache Software 
Foundation - Apache Derby - 10.14.2.0 - (1828579): instance 
a816c00e-0172-bc39-e4b1-0e4ce818 
2020-06-16T08:23:31.2915845Z on database directory 
memory:/__w/1/s/flink-connectors/flink-connector-jdbc/target/test with class 
loader sun.misc.Launcher$AppClassLoader@677327b6 
2020-06-16T08:23:31.2916637Z Loaded from 
file:/__w/1/.m2/repository/org/apache/derby/derby/10.14.2.0/derby-10.14.2.0.jar
2020-06-16T08:23:31.2916968Z java.vendor=Private Build
2020-06-16T08:23:31.2917461Z 
java.runtime.version=1.8.0_242-8u242-b08-0ubuntu3~16.04-b08
2020-06-16T08:23:31.2922200Z 
user.dir=/__w/1/s/flink-connectors/flink-connector-jdbc/target
2020-06-16T08:23:31.2922516Z os.name=Linux
2020-06-16T08:23:31.2922709Z os.arch=amd64
2020-06-16T08:23:31.2923086Z os.version=4.15.0-1083-azure
2020-06-16T08:23:31.2923316Z derby.system.home=null
2020-06-16T08:23:31.2923616Z 
derby.stream.error.field=org.apache.flink.connector.jdbc.JdbcTestBase.DEV_NULL
2020-06-16T08:23:31.2924790Z Database Class Loader started - 
derby.database.classpath=''
2020-06-16T08:23:37.4354243Z [INFO] Tests run: 2, Failures: 0, Errors: 0, 
Skipped: 0, Time elapsed: 11.133 s - in 
org.apache.flink.connector.jdbc.table.JdbcDynamicTableSourceITCase
2020-06-16T08:23:38.1880075Z [INFO] Running 
org.apache.flink.connector.jdbc.table.JdbcTableSourceITCase
2020-06-16T08:23:41.3718038Z Tue Jun 16 08:23:41 UTC 2020 Thread[main,5,main] 
java.lang.NoSuchFieldException: DEV_NULL
2020-06-16T08:23:41.4383244Z 

2020-06-16T08:23:41.4401761Z Tue Jun 16 08:23:41 UTC 2020:
2020-06-16T08:23:41.4402797Z Booting Derby version The Apache Software 
Foundation - Apache Derby - 10.14.2.0 - (1828579): instance 
a816c00e-0172-bc3a-103b-0e4b0610 
2020-06-16T08:23:41.4403758Z on database directory 
memory:/__w/1/s/flink-connectors/flink-connector-jdbc/target/test with class 
loader sun.misc.Launcher$AppClassLoader@677327b6 
2020-06-16T08:23:41.4404581Z Loaded from 
file:/__w/1/.m2/repository/org/apache/derby/derby/10.14.2.0/derby-10.14.2.0.jar
2020-06-16T08:23:41.4404945Z java.vendor=Private Build
2020-06-16T08:23:41.4405497Z 
java.runtime.version=1.8.0_242-8u242-b08-0ubuntu3~16.04-b08
2020-06-16T08:23:41.4406048Z 
user.dir=/__w/1/s/flink-connectors/flink-connector-jdbc/target
2020-06-16T08:23:41.4406303Z os.name=Linux
2020-06-16T08:23:41.4406494Z os.arch=amd64
2020-06-16T08:23:41.4406878Z os.version=4.15.0-1083-azure
2020-06-16T08:23:41.4407097Z derby.system.home=null
2020-06-16T08:23:41.4407415Z 
derby.stream.error.field=org.apache.flink.connector.jdbc.JdbcTestBase.DEV_NULL
2020-06-16T08:23:41.5287219Z Database Class Loader started - 
derby.database.classpath=''
2020-06-16T08:23:46.4567063Z [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
Skipped: 0, Time elapsed: 23.729 s <<< FAILURE! - in 
org.apache.flink.connector.jdbc.table.UnsignedTypeConversionITCase
2020-06-16T08:23:46.4575785Z [ERROR] 
org.apache.flink.connector.jdbc.table.UnsignedTypeConversionITCase  Time 
elapsed: 23.729 s  <<< ERROR!
2020-06-16T08:23:46.4576490Z ch.vorburger.exec.ManagedProcessException: An 
error occurred while running a command: create database if not exists `test`;
2020-06-16T08:23:46.4577193Zat ch.vorburger.mariadb4j.DB.run(DB.java:300)
2020-06-16T08:23:46.4577537Zat ch.vorburger.mariadb4j.DB.run(DB.java:265)
2020-06-16T08:23:46.4577861Zat ch.vorburger.mariadb4j.DB.run(DB.java:269)
2020-06-16T08:23:46.4578212Zat 
ch.vorburger.mariadb4j.DB.createDB(DB.java:308)
2020-06-16T08:23:46.4578611Zat 
ch.vorburger.mariadb4j.junit.MariaDB4jRule.initDB(MariaDB4jRule.java:55)
2020-06-16T08:23:46.4579084Zat 
ch.vorburger.mariadb4j.junit.MariaDB4jRule.before(MariaDB4jRule.java:50)
2020-06-16T08:23:46.4579547Zat 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:46)
2020-06-16T08:23:46.4579987Zat 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)

[jira] [Created] (FLINK-18332) Add error message to precondition in KeyGroupPartitionedPriorityQueue

2020-06-16 Thread tartarus (Jira)
tartarus created FLINK-18332:


 Summary: Add error message to precondition in 
KeyGroupPartitionedPriorityQueue
 Key: FLINK-18332
 URL: https://issues.apache.org/jira/browse/FLINK-18332
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Queryable State
Affects Versions: 1.10.1, 1.10.0
 Environment: CentOS 7.0

Flink 1.10.0

jdk-1.8
Reporter: tartarus


In my case, the user implements a custom KeySelector that uses a static
SimpleDateFormat to format a Unix timestamp. Sometimes the job throws an
ArrayIndexOutOfBoundsException:
{code:java}
java.lang.ArrayIndexOutOfBoundsException: -49
at 
org.apache.flink.runtime.state.heap.KeyGroupPartitionedPriorityQueue.getKeyGroupSubHeapForElement(KeyGroupPartitionedPriorityQueue.java:174)
at 
org.apache.flink.runtime.state.heap.KeyGroupPartitionedPriorityQueue.add(KeyGroupPartitionedPriorityQueue.java:110)
at 
org.apache.flink.streaming.api.operators.InternalTimerServiceImpl.registerProcessingTimeTimer(InternalTimerServiceImpl.java:203)
at 
org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.registerProcessingTimeTimer(WindowOperator.java:901)
at 
org.apache.flink.streaming.api.windowing.triggers.ProcessingTimeTrigger.onElement(ProcessingTimeTrigger.java:36)
at 
org.apache.flink.streaming.api.windowing.triggers.ProcessingTimeTrigger.onElement(ProcessingTimeTrigger.java:28)
at 
org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.onElement(WindowOperator.java:920)
at 
org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.processElement(WindowOperator.java:402)
at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.processRecord(OneInputStreamTask.java:204)
at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:196)
at 
org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:151)
at 
org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:128)
at 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:69)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:311)
at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:187)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:487)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:470)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)
at java.lang.Thread.run(Thread.java:745)
{code}
I reproduced this case.
Because keySelector.getKey() will be called twice on the same record, and
SimpleDateFormat is not thread-safe, under high concurrency and across day
boundaries the results returned by the two calls of keySelector.getKey() may
differ.
So the key group calculated in the second execution differs from the result of
the first calculation, which then throws an ArrayIndexOutOfBoundsException.

I think the error message should be clearer, not just the
ArrayIndexOutOfBoundsException.
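
For illustration, a sketch of a deterministic, thread-safe alternative (names
and types are illustrative; it assumes the key is a formatted date string):

{code:java}
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

import org.apache.flink.api.java.functions.KeySelector;

public class DateKeySelector implements KeySelector<Long, String> {
    // DateTimeFormatter is immutable and thread-safe, so getKey() returns
    // the same key for the same record, no matter how often it is called.
    private static final DateTimeFormatter FORMAT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd").withZone(ZoneOffset.UTC);

    @Override
    public String getKey(Long unixTimestampMillis) {
        return FORMAT.format(Instant.ofEpochMilli(unixTimestampMillis));
    }
}
{code}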



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18331) Hive NOTICE issues

2020-06-16 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-18331:


 Summary: Hive NOTICE issues
 Key: FLINK-18331
 URL: https://issues.apache.org/jira/browse/FLINK-18331
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Affects Versions: 1.11.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
 Fix For: 1.11.0


1.2/2.2 NOTICE entries are not sorted alphabetically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18330) Python NOTICE issues

2020-06-16 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-18330:


 Summary: Python NOTICE issues
 Key: FLINK-18330
 URL: https://issues.apache.org/jira/browse/FLINK-18330
 Project: Flink
  Issue Type: Bug
  Components: API / Python, Build System
Affects Versions: 1.11.0
Reporter: Chesnay Schepler
 Fix For: 1.11.0


beam-runners-core-java / beam-vendor-bytebuddy bundled but not listed
protobuf-java-util listed but not bundled

The NOTICE file additionally lists various dependencies that are bundled by 
beam. While this is fine, the lack of separation makes verification difficult.

These would be the entries for directly bundled dependencies:
{code}
This project bundles the following dependencies under the Apache Software 
License 2.0 (http://www.apache.org/licenses/LICENSE-2.0.txt)

- com.fasterxml.jackson.core:jackson-annotations:2.10.1
- com.fasterxml.jackson.core:jackson-core:2.10.1
- com.fasterxml.jackson.core:jackson-databind:2.10.1
- com.google.flatbuffers:flatbuffers-java:1.9.0
- io.netty:netty-buffer:4.1.44.Final
- io.netty:netty-common:4.1.44.Final
- joda-time:joda-time:2.5
- org.apache.arrow:arrow-format:0.16.0
- org.apache.arrow:arrow-memory:0.16.0
- org.apache.arrow:arrow-vector:0.16.0
- org.apache.beam:beam-model-fn-execution:2.19.0
- org.apache.beam:beam-model-job-management:2.19.0
- org.apache.beam:beam-model-pipeline:2.19.0
- org.apache.beam:beam-runners-core-construction-java:2.19.0
- org.apache.beam:beam-runners-java-fn-execution:2.19.0
- org.apache.beam:beam-sdks-java-core:2.19.0
- org.apache.beam:beam-sdks-java-fn-execution:2.19.0
- org.apache.beam:beam-vendor-grpc-1_21_0:0.1
- org.apache.beam:beam-vendor-guava-26_0-jre:0.1
- org.apache.beam:beam-vendor-sdks-java-extensions-protobuf:2.19.0

This project bundles the following dependencies under the BSD license.
See bundled license files for details

- net.sf.py4j:py4j:0.10.8.1
- com.google.protobuf:protobuf-java:3.7.1

This project bundles the following dependencies under the MIT license. 
(https://opensource.org/licenses/MIT)
See bundled license files for details.

- net.razorvine:pyrolite:4.13
{code}

These are the ones that are (supposedly) bundled by beam, which would need 
additional verification:
{code}
The bundled Apache Beam dependencies bundle the following dependencies under 
the Apache Software License 2.0 (http://www.apache.org/licenses/LICENSE-2.0.txt)

- com.google.api.grpc:proto-google-common-protos:1.12.0
- com.google.code.gson:gson:2.7
- com.google.guava:guava:26.0-jre
- io.grpc:grpc-auth:1.21.0
- io.grpc:grpc-core:1.21.0
- io.grpc:grpc-context:1.21.0
- io.grpc:grpc-netty:1.21.0
- io.grpc:grpc-protobuf:1.21.0
- io.grpc:grpc-stub:1.21.0
- io.grpc:grpc-testing:1.21.0
- io.netty:netty-buffer:4.1.34.Final
- io.netty:netty-codec:4.1.34.Final
- io.netty:netty-codec-http:4.1.34.Final
- io.netty:netty-codec-http2:4.1.34.Final
- io.netty:netty-codec-socks:4.1.34.Final
- io.netty:netty-common:4.1.34.Final
- io.netty:netty-handler:4.1.34.Final
- io.netty:netty-handler-proxy:4.1.34.Final
- io.netty:netty-resolver:4.1.34.Final
- io.netty:netty-transport:4.1.34.Final
- io.netty:netty-transport-native-epoll:4.1.34.Final
- io.netty:netty-transport-native-unix-common:4.1.34.Final
- io.netty:netty-tcnative-boringssl-static:2.0.22.Final
- io.opencensus:opencensus-api:0.21.0
- io.opencensus:opencensus-contrib-grpc-metrics:0.21.0

The bundled Apache Beam dependencies bundle the following dependencies under 
the BSD license.
See bundled license files for details

- com.google.protobuf:protobuf-java-util:3.7.1
- com.google.auth:google-auth-library-credentials:0.13.0
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18329) Runtime NOTICE issues

2020-06-16 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-18329:


 Summary: Runtime NOTICE issues
 Key: FLINK-18329
 URL: https://issues.apache.org/jira/browse/FLINK-18329
 Project: Flink
  Issue Type: Bug
  Components: Build System, Runtime / Coordination
Affects Versions: 1.11.0
Reporter: Chesnay Schepler
 Fix For: 1.11.0


akka-actor version incorrect 2.5.1 -> 2.5.21



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18327) ML NOTICE issues

2020-06-16 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-18327:


 Summary: ML NOTICE issues
 Key: FLINK-18327
 URL: https://issues.apache.org/jira/browse/FLINK-18327
 Project: Flink
  Issue Type: Bug
  Components: Build System, Library / Machine Learning
Affects Versions: 1.11.0
Reporter: Chesnay Schepler
 Fix For: 1.11.0


com.github.fommil.netlib:core:1.1.2 is not actually bundled



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18328) Blink planner NOTICE issues

2020-06-16 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-18328:


 Summary: Blink planner NOTICE issues
 Key: FLINK-18328
 URL: https://issues.apache.org/jira/browse/FLINK-18328
 Project: Flink
  Issue Type: Bug
  Components: Build System, Table SQL / Planner
Affects Versions: 1.11.0
Reporter: Chesnay Schepler
 Fix For: 1.11.0


not actually bundled: asm, json-smart, accessors-smart, joda-time



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18326) Kubernetes NOTICE issues

2020-06-16 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-18326:


 Summary: Kubernetes NOTICE issues
 Key: FLINK-18326
 URL: https://issues.apache.org/jira/browse/FLINK-18326
 Project: Flink
  Issue Type: Bug
  Components: Build System
Affects Versions: 1.11.0
Reporter: Chesnay Schepler
 Fix For: 1.11.0


snakeyaml: 1.23 -> 1.24
logging-interceptor: 3.12.0 -> 3.12.6
okhttp: 3.12.0 -> 3.12.1
generex is actually excluded



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18325) SqlMapTypeNameSpec#unparse may throw NPE

2020-06-16 Thread Jiatao Tao (Jira)
Jiatao Tao created FLINK-18325:
--

 Summary: SqlMapTypeNameSpec#unparse may throw NPE
 Key: FLINK-18325
 URL: https://issues.apache.org/jira/browse/FLINK-18325
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Reporter: Jiatao Tao
 Attachments: image-2020-06-16-18-53-17-462.png

SqlMapTypeNameSpec#unparse calls SqlDataTypeSpec#getNullable, and "getNullable"
may return null, so unboxing it may throw an NPE:

 
{code:java}
if (!keyType.getNullable()) { 
  writer.keyword("NOT NULL"); 
}
{code}
 

 

See in  SqlDataTypeSpec

 
{code:java}
/** Whether data type allows nulls.
 *
 * Nullable is nullable! Null means "not specified". E.g.
 * {@code CAST(x AS INTEGER)} preserves the same nullability as {@code x}.
 */
private Boolean nullable;
{code}
 

This API is from Calcite, and all other callers first check whether it is null:

!image-2020-06-16-18-53-17-462.png|width=488,height=114!
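
A null-safe variant of the check, as a sketch (treating a null Boolean as
"nullability not specified", the way the other callers do):

{code:java}
// Only emit NOT NULL when nullability was explicitly specified as false;
// a null Boolean means "not specified" and is skipped.
Boolean keyNullable = keyType.getNullable();
if (keyNullable != null && !keyNullable) {
    writer.keyword("NOT NULL");
}
{code}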

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18324) Translate updated data type and function page into Chinese

2020-06-16 Thread Timo Walther (Jira)
Timo Walther created FLINK-18324:


 Summary: Translate updated data type and function page into Chinese
 Key: FLINK-18324
 URL: https://issues.apache.org/jira/browse/FLINK-18324
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Table SQL / API
Reporter: Timo Walther


The Chinese translations of the pages updated in FLINK-18248 and FLINK-18065 
need an update.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18323) Implement a Kafka Source based on new Source API

2020-06-16 Thread Jiangjie Qin (Jira)
Jiangjie Qin created FLINK-18323:


 Summary: Implement a Kafka Source based on new Source API
 Key: FLINK-18323
 URL: https://issues.apache.org/jira/browse/FLINK-18323
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / Kafka
Reporter: Jiangjie Qin


This is an umbrella ticket for a new Kafka Source implementation based on the 
new Source API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18322) Fix unstable ExecutorNotifierTest#testExceptionInHandler.

2020-06-16 Thread Jiangjie Qin (Jira)
Jiangjie Qin created FLINK-18322:


 Summary: Fix unstable ExecutorNotifierTest#testExceptionInHandler.
 Key: FLINK-18322
 URL: https://issues.apache.org/jira/browse/FLINK-18322
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Common
Reporter: Jiangjie Qin


The {{ExecutorNotifierTest#testExceptionInHandler()}} fails intermittently
because the {{UncaughtExceptionHandler}} may fire after the {{ExecutorService}}
has shut down. The fix is to ensure the exception is only checked after the
{{UncaughtExceptionHandler}} has fired.
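
One way to realize that ordering, as a self-contained sketch (illustrative
names, not the actual test code):

{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;

public class HandlerOrderingSketch {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch handlerFired = new CountDownLatch(1);
        AtomicReference<Throwable> caught = new AtomicReference<>();

        Thread worker = new Thread(() -> { throw new RuntimeException("boom"); });
        worker.setUncaughtExceptionHandler((thread, error) -> {
            caught.set(error);
            handlerFired.countDown(); // signal that the handler has run
        });
        worker.start();

        handlerFired.await(); // only check the exception after the handler fired
        System.out.println("caught: " + caught.get());
    }
}
{code}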



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18321) AbstractCloseableRegistryTest.testClose unstable

2020-06-16 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-18321:
--

 Summary: AbstractCloseableRegistryTest.testClose unstable
 Key: FLINK-18321
 URL: https://issues.apache.org/jira/browse/FLINK-18321
 Project: Flink
  Issue Type: Bug
  Components: FileSystems, Tests
Affects Versions: 1.12.0
Reporter: Robert Metzger


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3553=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=05b74a19-4ee4-5036-c46f-ada307df6cf0

{code}
java.lang.AssertionError: expected:<0> but was:<-1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.flink.core.fs.AbstractCloseableRegistryTest.testClose(AbstractCloseableRegistryTest.java:93)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18320) flink-sql-connector-hive modules should merge hive-exec dependencies

2020-06-16 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-18320:


 Summary: flink-sql-connector-hive modules should merge hive-exec 
dependencies
 Key: FLINK-18320
 URL: https://issues.apache.org/jira/browse/FLINK-18320
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Reporter: Jingsong Lee
 Fix For: 1.11.0


Since hive-exec is itself a bundle jar, we should merge the dependencies that
hive-exec bundles.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18319) Lack LICENSE.protobuf in flink-sql-orc

2020-06-16 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-18319:


 Summary: Lack LICENSE.protobuf in flink-sql-orc
 Key: FLINK-18319
 URL: https://issues.apache.org/jira/browse/FLINK-18319
 Project: Flink
  Issue Type: Bug
  Components: Connectors / ORC
Reporter: Jingsong Lee
Assignee: Jingsong Lee
 Fix For: 1.11.0


flink-sql-orc bundles protobuf but does not include LICENSE.protobuf.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18318) Flink ML

2020-06-16 Thread Dimitris Vogiatzidakis (Jira)
Dimitris Vogiatzidakis created FLINK-18318:
--

 Summary: Flink ML
 Key: FLINK-18318
 URL: https://issues.apache.org/jira/browse/FLINK-18318
 Project: Flink
  Issue Type: Task
  Components: Library / Machine Learning
Affects Versions: 1.10.0
Reporter: Dimitris Vogiatzidakis


Hello, I'm a CS student currently working on my Bachelor's thesis. I've used
Flink to extract features out of some datasets, and I would like to use them
together with another dataset of 1-0 labels (node exists or doesn't) to perform
a logistic regression. I have found that FLIP-39 has been accepted and is
available in version 1.10.0, which I also currently use, but I'm having trouble
implementing it. Are there any Java examples currently up and running? Also,
what is the current situation with ML in Flink? Thank you very much for your
time, and sorry if I'm posting this in the wrong section.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Rest handler redirect problem

2020-06-16 Thread Till Rohrmann
Thanks for reporting this issue, Lucent. One way to solve it would be to
reintroduce the RedirectHandler and allow the user to choose between
forwarding to the leader, as it is right now in Flink 1.10, and using
redirects, as was the case in the past. If I remember correctly,
redirects also had their problems.

Cheers,
Till

On Mon, Jun 15, 2020 at 8:16 PM Chesnay Schepler  wrote:

> I think this is not unintentional and simply a case we did not consider.
>
> Please file a JIRA.
>
> On 15/06/2020 19:01, Wong Lucent wrote:
> > Hi,
> >
> >
> > Recently, our team upgraded our Flink cluster from version 1.7 to 1.10,
> > and we met some problems when calling the Flink REST API.
> >
> > 1) We deploy our Flink cluster in standalone mode on Kubernetes and use
> > two JobManagers for HA.
> >
> > 2) We deployed a Kubernetes service for the two JobManagers to provide a
> > unified URL.
> >
> > 3) We use the REST API to operate the Flink cluster.
> >
> > After upgrading to 1.10, we found some differences from 1.7 when
> > processing savepoint query requests. For example, if we send a
> > savepoint trigger request to the leader JobManager, in 1.7 we can query the
> > standby JobManager to get the status of the savepoint, while in 1.10 it
> > will return a 404 response.
> >
> > In 1.7 all requests to the standby JobManager were forwarded to the
> > leader in "RedirectHandler", while in 1.10 the requests are forwarded
> > with RPC in "LeaderRetrievalHandler". But there seems to be an issue in
> > "AbstractAsynchronousOperationHandlers": in this handler, there is a local
> > in-memory cache "completedOperationCache" that stores the pending savepoint
> > operation before the request is forwarded to the leader JobManager, and it
> > is not synced between all the JobManagers. This means only the JobManager
> > that receives the savepoint trigger request can look up the status of the
> > savepoint, while the others can only return 404.
> >
> > As this breaks our design for operating the Flink cluster with the REST
> > API, we cannot use a Kubernetes service to hide the standby JobManager any
> > more. We hope to know: is this behavior by design, or is it really a bug?
> >
> >
> > Thanks and Best Regards
> > Lucent Wong
>
>
>


[jira] [Created] (FLINK-18317) Verify dependencies licenses in Flink 1.11

2020-06-16 Thread Piotr Nowojski (Jira)
Piotr Nowojski created FLINK-18317:
--

 Summary: Verify dependencies licenses in Flink 1.11
 Key: FLINK-18317
 URL: https://issues.apache.org/jira/browse/FLINK-18317
 Project: Flink
  Issue Type: Sub-task
  Components: Build System
Affects Versions: 1.11.0
Reporter: Piotr Nowojski
Assignee: Chesnay Schepler
 Fix For: 1.11.0


Licenses of the dependencies should be manually verified to confirm that they
are not violating Flink's license.

CC [~rmetzger]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Interested in contributing to Apache Flink

2020-06-16 Thread Marta Paes Moreira
Hi, Vivin!

Thanks for your interest in participating in Google Season of Docs (GSoD)
and contributing to Apache Flink! We're glad that you enjoyed working with
Flink before, and that it motivated you to now apply to GSoD.

Regarding what you can work on, there are no strict requirements since
there is a lot of work to be done in the Table API / SQL documentation.
Here is a bit more detail about each part of our idea proposal [1]:

*"Restructure the Table API & SQL Documentation*

Right now the Table / SQL documentation is very technical and dry to read.
It is more like a technical specification than documentation that a new
user could read. This project is to take the existing content and
rewrite/reorganize it to make what the community already has more
approachable.

*Extend the Table API & SQL Documentation*

This is to add brand new content. This could be tutorials and walkthroughs
to help onboard new users or expanding an under-documented section such as
Hive interop."

Please remember to follow the GSoD application process [2] and timeline
[3]. Let us know if you have any further questions!

Marta

[1] https://flink.apache.org/news/2020/05/04/season-of-docs.html
[2]
https://developers.google.com/season-of-docs/docs/tech-writer-application-hints
[3] https://developers.google.com/season-of-docs/docs/timeline


On Mon, Jun 15, 2020 at 10:13 AM Vivin Peris  wrote:

> Respected Sir
>
> I am currently a final year undergrad student at National Institute of
> Technology Surathkal, India.
>
> I have extensively worked on Apache Flink in my previous internship where I
> had to create a database of billions of IPs in the world along with data
> associated with it.
> The power of Flink is just unbelievable and helped me do work which would
> take months in just a matter of two days.
>
> With this background, I am deeply interested in applying to your esteemed
> organization under Google Season of Docs. Since I am new to technical
> writing, I am willing to work hard.
>
> I would like to know what I can work on so that I can equip myself for this
> task.
>
> Thank you for your time and consideration
>
> Sincerely
> Vivin
>


Re: Request for assignment: FLINK-18119 Fix unlimitedly growing state for ...

2020-06-16 Thread Benchao Li
Hi Hyeonseop,

I'm sorry to hear that you got no reaction in time.
We can move to the issue you mentioned for further discussion.

Hyeonseop Lee  于2020年6月16日周二 下午1:00写道:

> Hello,
>
>
> I have experienced unlimited state growth in my streaming query in the
> Table API and identified the current implementation of the time-range-bounded
> over aggregate function as the cause. I was able to fix it by
> modifying a couple of functions in flink-table-runtime-blink.
>
>
> I am running several streaming applications in production and would like this
> fix to be merged into official Flink. I have stated detailed issue
> statements and fix plans in [FLINK-18119](
> https://issues.apache.org/jira/browse/FLINK-18119) but didn't get a
> reaction for days. Please consider assigning the ticket.
>
>
> Regards,
>
> Hyeonseop
>
> --
> Hyeonseop Lee
>