Re: can flink sql handle udf-generated timestamp field

2019-06-06 Thread Yu Yang
Thank you Fabian! We will try the approach that you suggest. On Thu, Jun 6, 2019 at 1:03 AM Fabian Hueske wrote: > Hi Yu, > > When you register a DataStream as a Table, you can create a new attribute > that contains the event timestamp of the DataStream records. > For that, you would need to

NotSerializableException: DropwizardHistogramWrapper inside AggregateFunction

2019-06-06 Thread Vijay Balakrishnan
Hi, I have a class defined: public class MGroupingWindowAggregate implements AggregateFunction.. { private final Map keyHistMap = new TreeMap<>(); } In the constructor, I initialize it. public MGroupingWindowAggregate() { Histogram minHist = new Histogram(new
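The NotSerializableException arises because Flink serializes function instances for distribution to the cluster, and Dropwizard's Histogram is not Serializable. A common workaround is to mark such fields transient and initialize them lazily (in a Flink RichFunction you would do this in open()). A minimal plain-Java sketch of that pattern, with a stand-in Histogram class and no Flink dependency (all names hypothetical):

```java
import java.io.Serializable;

// Stand-in for a non-serializable dependency such as Dropwizard's Histogram.
class Histogram {
    private long count = 0;
    void update(long v) { count++; }
    long getCount() { return count; }
}

// The histogram field is transient, so it is NOT shipped with the function
// instance; it is re-created on first use after deserialization instead.
class MGroupingWindowAggregate implements Serializable {
    private transient Histogram minHist;

    Histogram histogram() {
        if (minHist == null) {          // lazy init; also runs after restore
            minHist = new Histogram();  // in Flink, prefer doing this in open()
        }
        return minHist;
    }
}
```

A round trip through Java serialization now succeeds, and the histogram is simply re-created (empty) on the deserialized copy.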

Flink 1.7.1 flink-s3-fs-hadoop-1.7.1 doesn't delete older chk- directories

2019-06-06 Thread anaray
Hi, I am using 1.7.1, we store checkpoints in Ceph, and we use flink-s3-fs-hadoop-1.7.1 to connect to Ceph. I have only 1 checkpoint retained. The issue I see is that previous/old chk- directories are still around. I verified that those older ones don't contain any checkpoint data. But the directories

Ipv6 supported?

2019-06-06 Thread Siew Wai Yow
Hi guys, may I know whether Flink supports IPv6? Thanks, Yow

Re: Does Flink Kafka connector has max_pending_offsets concept?

2019-06-06 Thread xwang355
Thanks Fabian. This is really helpful. -- Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/

RE: Change sink topology

2019-06-06 Thread Smirnov Sergey Vladimirovich
Great, thanks! From: Fabian Hueske [mailto:fhue...@gmail.com] Sent: Thursday, June 6, 2019 3:07 PM To: Smirnov Sergey Vladimirovich Cc: user@flink.apache.org Subject: Re: Change sink topology Hi Sergey, I would not consider this to be a topology change (the sink operator would still be a

Re: Change sink topology

2019-06-06 Thread Fabian Hueske
Hi Sergey, I would not consider this to be a topology change (the sink operator would still be a Kafka producer). It seems that dynamic topic selection is possible with a KeyedSerializationSchema (see "Advanced Serialization Schema" [1]). Best, Fabian [1]
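Flink's KeyedSerializationSchema exposes a getTargetTopic(T element) method; returning a topic name there routes each record dynamically, which is what Sergey's chosen_topic = function(message.property) idea needs. A minimal sketch with a locally stubbed interface (the Message type and topic naming are hypothetical; in a real job you would implement Flink's KeyedSerializationSchema instead of the stub):

```java
// Local stand-in for the relevant part of Flink's KeyedSerializationSchema.
interface TopicSelector<T> {
    String getTargetTopic(T element);  // null would mean "use the default topic"
}

// Hypothetical message type carrying the property used for routing.
class Message {
    final String property;
    final String payload;
    Message(String property, String payload) {
        this.property = property;
        this.payload = payload;
    }
}

// Route each message to a topic derived from one of its properties,
// i.e. chosen_topic = function(message.property).
class PropertyTopicSelector implements TopicSelector<Message> {
    @Override
    public String getTargetTopic(Message m) {
        return "events-" + m.property;
    }
}
```

No sink is added or removed at runtime; one Kafka producer simply writes each record to the topic the schema returns.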

Change sink topology

2019-06-06 Thread Smirnov Sergey Vladimirovich
Hi Flink, I'm wondering, is it possible to dynamically (while the job is running) change the sink topology, by adding a new sink on the fly? Say we have an input stream, and by analyzing a message property we decide to put the message into some Kafka topic, i.e. choosen_topic = function(message.property).

Re: Does Flink Kafka connector has max_pending_offsets concept?

2019-06-06 Thread Fabian Hueske
Hi Ben, Flink correctly maintains the offsets of all partitions that are read by a Kafka consumer. A checkpoint is only complete when all functions successfully checkpoint their state. For a Kafka consumer, this state is the current reading offset. In case of a failure, the offsets and the state of
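The reason Flink needs no max_pending_offsets knob is that the reading offset itself is part of the consumer's checkpointed state: on failure, the job rewinds to the offsets of the last completed checkpoint and replays from there. A toy single-partition model of that mechanism (plain Java, not Flink code):

```java
// Toy model: a consumer whose only state is its reading offset, checkpointed
// periodically; recovery rewinds to the offset of the last completed checkpoint.
class OffsetCheckpointModel {
    private long offset = 0;             // next record to read
    private long checkpointedOffset = 0; // offset at the last completed checkpoint

    void read(int records)   { offset += records; }
    void checkpoint()        { checkpointedOffset = offset; } // all operators acked
    void failAndRecover()    { offset = checkpointedOffset; }

    long position()          { return offset; }
}
```

Usage: read 5 records, checkpoint, read 3 more, then fail; recovery rewinds the position to 5 and records 5..7 are replayed, which together with restored operator state gives exactly-once semantics inside the job.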

Re: Weird behavior with CoFlatMapFunction

2019-06-06 Thread Fabian Hueske
Hi, There are a few things to point out about your example: 1. The CoFlatMapFunction is probably executed in parallel. The configuration is only applied to one of the parallel function instances. You probably want to broadcast the configuration changes to all function instances. Have a look
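The parallelism pitfall Fabian describes can be made concrete with a toy model: if a config update is routed like a normal record, exactly one parallel instance receives it, while broadcasting (Flink's DataStream.broadcast() before the connect) delivers a copy to every instance. A plain-Java illustration of the difference (conceptual model only, not Flink code):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of one operator with several parallel instances,
// each holding its own copy of the configuration.
class ParallelOperator {
    final List<String> instanceConfigs = new ArrayList<>();

    ParallelOperator(int parallelism, String initialConfig) {
        for (int i = 0; i < parallelism; i++) instanceConfigs.add(initialConfig);
    }

    // Normal record routing: the update reaches exactly one instance.
    void sendToOne(String config, int instance) {
        instanceConfigs.set(instance, config);
    }

    // Broadcast: every parallel instance receives the update.
    void broadcast(String config) {
        for (int i = 0; i < instanceConfigs.size(); i++) {
            instanceConfigs.set(i, config);
        }
    }
}
```

With sendToOne, three of four instances keep filtering against the stale config; with broadcast, all instances agree.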

Weird behavior with CoFlatMapFunction

2019-06-06 Thread Andy Hoang
Hi guys, I want to merge two different streams, one a config stream and the other the JSON value, to check against that config. It seems like CoFlatMapFunction should be used. Here is my sample: val filterStream: ConnectedStreams[ControlEvent,

Re: can flink sql handle udf-generated timestamp field

2019-06-06 Thread Fabian Hueske
Hi Yu, When you register a DataStream as a Table, you can create a new attribute that contains the event timestamp of the DataStream records. For that, you would need to assign timestamps and generate watermarks before registering the stream: FlinkKafkaConsumer kafkaConsumer = new
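Fabian's advice is to call assignTimestampsAndWatermarks on the stream before registering it as a Table; in Flink 1.x a common assigner is BoundedOutOfOrdernessTimestampExtractor, whose core logic (watermark = maximum timestamp seen so far minus the allowed out-of-orderness) can be sketched without any Flink dependency:

```java
// Plain-Java sketch of the logic inside Flink's
// BoundedOutOfOrdernessTimestampExtractor (class and method names here are
// local stand-ins, not the Flink API).
class BoundedOutOfOrdernessModel {
    private final long maxOutOfOrdernessMs;
    private long currentMaxTimestamp = Long.MIN_VALUE;

    BoundedOutOfOrdernessModel(long maxOutOfOrdernessMs) {
        this.maxOutOfOrdernessMs = maxOutOfOrdernessMs;
    }

    // Called per record with its event timestamp (e.g. a field extracted
    // from the deserialized Kafka message).
    long extractTimestamp(long eventTimestamp) {
        currentMaxTimestamp = Math.max(currentMaxTimestamp, eventTimestamp);
        return eventTimestamp;
    }

    // The watermark lags the maximum timestamp by the allowed lateness;
    // late records never move it backwards.
    long currentWatermark() {
        return currentMaxTimestamp - maxOutOfOrdernessMs;
    }
}
```

Once timestamps and watermarks are assigned this way, the event-time field can be exposed as a rowtime attribute when the stream is registered as a Table.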

Re: can flink sql handle udf-generated timestamp field

2019-06-06 Thread Yu Yang
Hi Jingsong, Thanks for the reply! The following is our code snippet for creating the log stream. Our messages are in thrift format. We use a customized serializer for serializing/deserializing messages ( see https://github.com/apache/flink/pull/8067 for the implementation) . Given that, how

Re: can flink sql handle udf-generated timestamp field

2019-06-06 Thread Yu Yang
+flink-user On Wed, Jun 5, 2019 at 9:58 AM Yu Yang wrote: > Thanks for the reply! In flink-table-planner, TimeIndicatorTypeInfo is an > internal class that cannot be referenced from application. I got "cannot > find symbol" error when I tried to use it. I have also tried to use " >

Re: can flink sql handle udf-generated timestamp field

2019-06-06 Thread JingsongLee
Hi @Yu Yang: Time-based operations such as windows in both the Table API and SQL require information about the notion of time and its origin. Therefore, tables can offer logical time attributes for indicating time and accessing corresponding timestamps in table programs. [1] This means window