Re: Understanding timestamp and watermark assignment errors

2019-03-08 Thread Andrew Roberts
This is with Flink 1.6.4. I was on 1.6.2 and saw Kryo issues in many more circumstances.

> On Mar 8, 2019, at 4:25 PM, Konstantin Knauf wrote:
>
> Hi Andrew,
>
> which Flink version do you use? This sounds a bit like
> https://issues.apache.org/jira/browse/FLINK-8836.
>
> Cheers,

Re: Understanding timestamp and watermark assignment errors

2019-03-08 Thread Konstantin Knauf
Hi Andrew,

which Flink version do you use? This sounds a bit like https://issues.apache.org/jira/browse/FLINK-8836.

Cheers,
Konstantin

On Thu, Mar 7, 2019 at 5:52 PM Andrew Roberts wrote:
> Hello,
>
> I’m trying to convert some of our larger stateful computations into
> something that aligns
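The thread subject concerns timestamp and watermark assignment. As background, the core logic of a bounded-out-of-orderness watermark assigner can be sketched in a few lines. This is a plain-Python illustration, not Flink's API; the class and method names are invented for the sketch.

```python
# Sketch (not Flink API): the bounded-out-of-orderness watermark logic
# that periodic watermark assigners typically implement.

class BoundedOutOfOrdernessWatermark:
    def __init__(self, max_out_of_orderness_ms):
        self.delay = max_out_of_orderness_ms
        self.max_ts = float("-inf")

    def on_event(self, event_ts):
        # Track the highest event timestamp seen so far.
        self.max_ts = max(self.max_ts, event_ts)

    def current_watermark(self):
        # The watermark trails the max timestamp by the allowed lateness,
        # so events up to `delay` ms late are still considered on time.
        return self.max_ts - self.delay

wm = BoundedOutOfOrdernessWatermark(5_000)
for ts in [1_000, 7_000, 4_000]:   # out-of-order event timestamps
    wm.on_event(ts)
watermark = wm.current_watermark() # 7000 - 5000
```

Note that the late event (4,000) does not move the watermark backwards; watermarks are monotonic because they are derived from the running maximum.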

Re: Backoff strategies for async IO functions?

2019-03-08 Thread Konstantin Knauf
Hi William, the AsyncOperator does not have such a setting. It is "merely" a wrapper around an asynchronous call, which provides integration with Flink's state & time management. I think the way to go would be to do the exponential back-off in the user code and set the timeout of the
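A minimal sketch of the suggested approach: compute the exponential back-off schedule in user code and make sure the total retry budget stays below the timeout configured on the async operator. This is illustrative plain Python, not Flink API; the function name and defaults are assumptions.

```python
# Sketch: exponential back-off with a cap, done in user code.
# The async operator's timeout must cover the whole retry budget.

def backoff_delays(base_ms=100, factor=2.0, cap_ms=5_000, max_retries=5):
    """Yield the delay (ms) to wait before each retry attempt."""
    delay = base_ms
    for _ in range(max_retries):
        yield min(delay, cap_ms)
        delay *= factor

delays = list(backoff_delays())       # [100, 200, 400, 800, 1600]
total_budget_ms = sum(delays)         # keep this below the operator timeout
```

In a real job, the asynchronous client callback would sleep (or schedule a delayed retry) for each value in turn before reissuing the request, and give up once the schedule is exhausted.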

Re: What does "Continuous incremental cleanup" mean in Flink 1.8 release notes

2019-03-08 Thread Konstantin Knauf
Hi Tony, before Flink 1.8, expired state is only cleaned up when you try to access it after expiration, i.e. when user code tries to access the expired state, the state value is cleaned and "null" is returned. There was also already the option to clean up expired state during full snapshots (
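The pre-1.8 "cleanup on read" behaviour described above can be sketched as follows. This is plain Python for illustration, not Flink's state backend; the class name and `clock` parameter are invented for the sketch.

```python
import time

# Sketch: TTL state that is only cleaned when user code reads it.
# Entries that expire but are never read again stay in the store,
# which is exactly the gap the Flink 1.8 incremental cleanup closes.

class LazyTtlState:
    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._store = {}  # key -> (value, write_time)

    def put(self, key, value):
        self._store[key] = (value, self.clock())

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, written = entry
        if self.clock() - written > self.ttl:
            # Expired: clean up on access and behave as if unset.
            del self._store[key]
            return None
        return value
```

A key never read after its TTL elapses would linger in `_store` forever; continuous incremental cleanup iterates the state in the background so such entries are removed without an access.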

Re: [DISCUSS] Create a Flink ecosystem website

2019-03-08 Thread Bowen Li
Confluent Hub for Kafka is another good example of this kind. I personally like it over the Spark site. It may be worth checking it out with the Kafka folks.

On Thu, Mar 7, 2019 at 6:06 AM Becket Qin wrote:
> Absolutely! Thanks for the pointer. I'll submit a PR to update

Re: Schema Evolution on Dynamic Schema

2019-03-08 Thread shkob1
Thanks Rong, I made a quick test changing the SQL select (adding a select field in the middle), reran the job from a savepoint, and it worked without any errors. I want to make sure I understand at what point the state is stored and how it works. Let's simplify the scenario and

Re: [EXTERNAL] Flink 1.7.1 KafkaProducer error using Exactly Once semantic

2019-03-08 Thread Slotterback, Chris
Hi Timothy, I recently faced a similar issue that spawned a bug discussion among the devs: https://issues.apache.org/jira/browse/FLINK-11654 As far as I can tell, your understanding is correct. We also renamed the UID using the job name to force uniqueness across identical jobs writing to the
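The workaround described, making the sink UID unique per job by mixing in the job name, can be sketched like this. This is a hedged illustration in plain Python: Flink derives Kafka transactional IDs from the operator, so distinct UIDs avoid the collision, but the function name and hashing scheme here are assumptions, not Flink API.

```python
import hashlib

# Sketch: derive a per-job sink UID so that two otherwise identical
# jobs do not produce colliding Kafka transactional IDs.

def unique_sink_uid(job_name: str, base_uid: str) -> str:
    # Mix the job name into the UID deterministically, so restarts of
    # the SAME job keep the same UID (required for state restore),
    # while different jobs get different UIDs.
    digest = hashlib.sha256(f"{job_name}:{base_uid}".encode()).hexdigest()[:12]
    return f"{base_uid}-{digest}"

uid_a = unique_sink_uid("job-A", "kafka-sink")
uid_b = unique_sink_uid("job-B", "kafka-sink")
```

Determinism matters here: a random suffix would also be unique, but it would change on every submission and break savepoint restore for the sink operator.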

Re: Schema Evolution on Dynamic Schema

2019-03-08 Thread Rong Rong
Hi Shahar, 1. Are you referring to the incoming data source being published as JSON, with a customized POJO source function / table source that converts it? In that case it is you who maintains the schema evolution support, am I correct? For Avro I think you can refer to [1]. 2. If you

Flink 1.7.1 KafkaProducer error using Exactly Once semantic

2019-03-08 Thread Timothy Victor
Yesterday I came across a strange problem when attempting to run 2 nearly identical jobs on a cluster. I was able to solve it (or rather work around it), but am sharing it here so we can consider a potential fix in Flink's KafkaProducer code. My scenario is as follows. I have a Flink program that

Intermittent KryoException

2019-03-08 Thread Scott Sue
Hi, When running our job, we’re seeing sporadic instances of KryoExceptions. I’m new to this area of Flink, so I’m not entirely sure what to look out for. From my understanding, Kryo is the default serializer for generic types, and whilst there is a potential performance

Re: sql-client batch

2019-03-08 Thread yuess_coder
------ Original message ------
From: "Zhenghua Gao"
Date: 2019-03-08 (Fri) 10:57
To: "user-zh"
Subject: Re: sql-client batch
[Body partially lost to encoding; it references DebugBatchCsvTableFactory, BatchCompatibleTableSinkFactory, and BatchTableSinkFactory.]

What does "Continuous incremental cleanup" mean in Flink 1.8 release notes

2019-03-08 Thread Tony Wei
Hi everyone, I read the Flink 1.8 release notes about state [1], and they say: *Continuous incremental cleanup of old Keyed State with TTL*
> We introduced TTL (time-to-live) for Keyed state in Flink 1.6 (FLINK-9510). This feature allowed to