Thank you Fabian! We will try the approach that you suggest.
On Thu, Jun 6, 2019 at 1:03 AM Fabian Hueske wrote:
> Hi Yu,
>
> When you register a DataStream as a Table, you can create a new attribute
> that contains the event timestamp of the DataStream records.
> For that, you would need to
Hi,
I have a class defined:
public class MGroupingWindowAggregate implements AggregateFunction.. {
    private final Map keyHistMap = new TreeMap<>();
}
In the constructor, I initialize it.
public MGroupingWindowAggregate() {
    Histogram minHist = new Histogram(new
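The snippet above is cut off by the archive, so as a hedged illustration only: the class mirrors the contract of Flink's AggregateFunction (createAccumulator/add/getResult/merge). The sketch below shows that method shape in plain Java, with a TreeMap of per-key counts standing in for the poster's Map of Histograms (the histogram internals are not visible in the snippet, so the accumulator contents here are an assumption; in a real job the class would declare `implements AggregateFunction<IN, ACC, OUT>`).

```java
import java.util.Map;
import java.util.TreeMap;

// Plain-Java sketch mirroring the contract of Flink's
// AggregateFunction<IN, ACC, OUT>. A TreeMap of per-key counts stands in
// for the original Map of Histograms, whose details are not shown in the
// snippet.
public class MGroupingSketch {

    // createAccumulator(): a fresh, empty accumulator.
    public Map<String, Long> createAccumulator() {
        return new TreeMap<>();
    }

    // add(): fold one input record (here just a key) into the accumulator.
    public Map<String, Long> add(String key, Map<String, Long> acc) {
        acc.merge(key, 1L, Long::sum);
        return acc;
    }

    // getResult(): extract the final result from the accumulator.
    public Map<String, Long> getResult(Map<String, Long> acc) {
        return acc;
    }

    // merge(): combine two partial accumulators (required e.g. for
    // merging windows such as session windows).
    public Map<String, Long> merge(Map<String, Long> a, Map<String, Long> b) {
        b.forEach((k, v) -> a.merge(k, v, Long::sum));
        return a;
    }
}
```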
Hi,
I am using 1.7.1 and we store checkpoints in Ceph and we use
flink-s3-fs-hadoop-1.7.1 to connect to Ceph. I have only 1 checkpoint
retained. The issue I see is that previous/old chk- directories are still
around. I verified that those older ones don't contain any checkpoint data,
but the directories
Hi guys,
May I know, does Flink support IPv6?
Thanks
Yow
Thanks Fabian. This is really helpful.
Great, thanks!
From: Fabian Hueske [mailto:fhue...@gmail.com]
Sent: Thursday, June 6, 2019 3:07 PM
To: Smirnov Sergey Vladimirovich
Cc: user@flink.apache.org
Subject: Re: Change sink topology
Hi Sergey,
I would not consider this to be a topology change (the sink operator would
still be a Kafka producer).
It seems that dynamic topic selection is possible with a
KeyedSerializationSchema (see "Advanced Serialization Schema" [1]).
Best,
Fabian
[1]
Hi Flink,
I'm wondering, is it possible to dynamically (while the job is running) change
the sink topology by adding a new sink on the fly?
Say, we have an input stream, and by analyzing a message property we decide to
put the message into some Kafka topic, i.e. chosen_topic =
function(message.property).
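As Fabian notes above, per-record topic choice goes through KeyedSerializationSchema's getTargetTopic method in Flink 1.x. A minimal plain-Java sketch of that routing logic follows; the Message type, its `property` field, and the topic-naming scheme are hypothetical stand-ins, and a real schema would additionally declare `implements KeyedSerializationSchema<Message>` (the Flink dependency is omitted so the sketch stays self-contained).

```java
import java.nio.charset.StandardCharsets;

// Sketch of the three methods a KeyedSerializationSchema<Message> provides.
// Message and its `property` field are hypothetical stand-ins for the
// poster's record type.
public class RoutingSchema {

    public static final class Message {
        public final String property;
        public final String payload;

        public Message(String property, String payload) {
            this.property = property;
            this.payload = payload;
        }
    }

    // No key in this sketch: a null key lets Kafka pick the partition.
    public byte[] serializeKey(Message element) {
        return null;
    }

    public byte[] serializeValue(Message element) {
        return element.payload.getBytes(StandardCharsets.UTF_8);
    }

    // Per-record topic selection: chosen_topic = function(message.property).
    // In Flink, returning null falls back to the producer's default topic.
    public String getTargetTopic(Message element) {
        return "events-" + element.property;
    }
}
```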
Hi Ben,
Flink correctly maintains the offsets of all partitions that are read by a
Kafka consumer.
A checkpoint is only complete when all functions have successfully
checkpointed their state. For a Kafka consumer, this state is the current
reading offset.
In case of a failure the offsets and the state of
Hi,
There are a few things to point out about your example:
1. The CoFlatMapFunction is probably executed in parallel. The
configuration is only applied to one of the parallel function instances.
You probably want to broadcast the configuration changes to all function
instances. Have a look
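In Flink this is typically solved with the broadcast state pattern (broadcast the config stream via a MapStateDescriptor and connect it to the data stream in a BroadcastProcessFunction). The essential idea, that config updates reach every parallel instance while data records are checked against each instance's current config, can be sketched in plain Java (class and field names below are illustrative, not Flink API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Plain-Java sketch of the broadcast-state idea: config updates go to ALL
// parallel instances; data records are evaluated against each instance's
// current view of the config. In a real Flink job this would be a
// BroadcastProcessFunction over stream.connect(configStream.broadcast(...)).
public class BroadcastConfigSketch {

    // One "parallel instance" of the function.
    public static final class Instance {
        public final ConcurrentMap<String, String> config = new ConcurrentHashMap<>();

        // Analogue of processBroadcastElement(): apply a config update.
        public void onConfig(String key, String value) {
            config.put(key, value);
        }

        // Analogue of processElement(): check a record against the config.
        public boolean accepts(String recordKey) {
            return "enabled".equals(config.get(recordKey));
        }
    }

    public final List<Instance> instances = new ArrayList<>();

    public BroadcastConfigSketch(int parallelism) {
        for (int i = 0; i < parallelism; i++) {
            instances.add(new Instance());
        }
    }

    // Broadcasting means the update reaches every instance, not just one.
    public void broadcastConfig(String key, String value) {
        for (Instance inst : instances) {
            inst.onConfig(key, value);
        }
    }
}
```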
Hi guys,
I want to merge 2 different streams: one is a config stream and the other is
the value JSON, to check it against that config. It seems like
CoFlatMapFunction should be used.
Here is my sample:
val filterStream: ConnectedStreams[ControlEvent,
Hi Yu,
When you register a DataStream as a Table, you can create a new attribute
that contains the event timestamp of the DataStream records.
For that, you would need to assign timestamps and generate watermarks
before registering the stream:
FlinkKafkaConsumer kafkaConsumer =
new
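The snippet cuts off before the extractor; in Flink 1.7 this step is usually done with assignTimestampsAndWatermarks and a BoundedOutOfOrdernessTimestampExtractor on the Kafka source. The underlying idea, that the watermark trails the maximum timestamp seen so far by a fixed out-of-orderness bound, can be sketched in plain Java (the class below is an illustration of the strategy, not Flink API):

```java
// Plain-Java sketch of bounded-out-of-orderness watermarking, the strategy
// behind Flink's BoundedOutOfOrdernessTimestampExtractor: the current
// watermark is (max event timestamp seen so far) - (allowed lateness).
public class BoundedOutOfOrdernessSketch {

    private final long maxOutOfOrdernessMs;
    private long maxTimestampSeen = Long.MIN_VALUE;

    public BoundedOutOfOrdernessSketch(long maxOutOfOrdernessMs) {
        this.maxOutOfOrdernessMs = maxOutOfOrdernessMs;
    }

    // Called once per record with its event timestamp
    // (extractTimestamp in Flink).
    public void onEvent(long eventTimestampMs) {
        maxTimestampSeen = Math.max(maxTimestampSeen, eventTimestampMs);
    }

    // getCurrentWatermark in Flink: windows ending at or before this
    // point may fire; the watermark never moves backwards.
    public long currentWatermark() {
        return maxTimestampSeen - maxOutOfOrdernessMs;
    }
}
```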
Hi Jingsong,
Thanks for the reply! The following is our code snippet for creating the
log stream. Our messages are in thrift format. We use a customized
serializer for serializing/deserializing messages (see
https://github.com/apache/flink/pull/8067 for the implementation). Given
that, how
+flink-user
On Wed, Jun 5, 2019 at 9:58 AM Yu Yang wrote:
> Thanks for the reply! In flink-table-planner, TimeIndicatorTypeInfo is an
> internal class that cannot be referenced from the application. I got a
> "cannot find symbol" error when I tried to use it. I have also tried to use "
>
Hi @Yu Yang:
Time-based operations such as windows in both the Table API and SQL require
information about the notion of time and its origin. Therefore, tables can
offer
logical time attributes for indicating time and accessing corresponding
timestamps
in table programs.[1]
This means that window