Is Flink processing really repeatably deterministic when the incoming stream of
elements is out of order? How is this ensured?
I am aware of the principles like event time and watermarking, but I
can't understand how it works when there are late elements in the stream -
that means there are elements
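The question is cut off, but the core rule for late elements can be sketched without the Flink API. The names below are illustrative, not Flink's:

```scala
object LatenessSketch extends App {
  // Illustrative sketch, not the Flink API: a watermark W is the system's
  // assertion that no more elements with timestamp <= W are expected.
  // An element whose timestamp is at or below the current watermark when
  // it arrives is therefore "late".
  final case class Event(id: String, timestamp: Long)

  def isLate(e: Event, currentWatermark: Long): Boolean =
    e.timestamp <= currentWatermark

  val watermark = 100L
  println(isLate(Event("a", 90L), watermark))  // true: the watermark already passed 90
  println(isLate(Event("b", 150L), watermark)) // false: 150 is still ahead of the watermark
}
```

What Flink then does with a late element (drop it, or re-fire a window within an allowed lateness) is a configuration choice, which is exactly where the determinism question gets interesting.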
Hi Max,
thanks for the answer.
I still have to wrap my head around it, but I hope we'll manage to work
it out - maybe when 1.3.x arrives I'll have access to some nice mesos
cluster... or not... we'll see :)
thanks,
maciek
On 25/10/2016 17:49, Maximilian Michels wrote:
Hi Maciek,
Your use
The DataWorks Summit EU 2017 (including Hadoop Summit) will take place in Munich,
April 5-6, 2017. I’ve pasted the text from the CFP below.
Would you like to share your knowledge with the best and brightest in the data
community? If so, we encourage you to submit an abstract for DataWorks
Kostas Kloudas created FLINK-5020:
-------------------------------------
Summary: Make the GenericWriteAheadSink rescalable.
Key: FLINK-5020
URL: https://issues.apache.org/jira/browse/FLINK-5020
Project: Flink
Issue Type:
Hi Thomas,
Flink does not support partial functions as arguments here, because the map
method is overloaded. Instead you can write map { x => x match { case ... => ... } },
or you can import org.apache.flink.api.scala.extensions.acceptPartialFunctions
and then write .zipWithIndex.mapWith { case ... => }.
Cheers,
Till
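A minimal, Flink-free sketch of the first workaround Till describes: wrapping the pattern match in an explicit lambda turns it into an ordinary Function1, which sidesteps the overload ambiguity. It is shown here on a standard List, where map is not overloaded, so both forms would compile; on Flink's overloaded map only this form does:

```scala
object PartialFunctionWorkaround extends App {
  val pairs = List((1, "a"), (2, "b"))

  // Explicit lambda wrapping the pattern match: this is a plain
  // Function1, so it also works where map is overloaded (as in Flink).
  val swapped = pairs.map { x => x match { case (id, label) => (label, id) } }

  println(swapped) // List((a,1), (b,2))
}
```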
On Fri,
Stefan Richter created FLINK-5019:
-------------------------------------
Summary: Proper isRestored result for tasks that did not write
state
Key: FLINK-5019
URL: https://issues.apache.org/jira/browse/FLINK-5019
Project: Flink
Hello,
In the following code, map { case (id, (label, count)) => (label, id) } is
not resolved.
Is it related to the zipWithIndex (org.apache.flink.api.scala) operation?
My input is a DataSet[String] and I'd like to output a
DataSet[(String, Long)]:
val mapping = input
  .map(s => (s, 1))
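The snippet is truncated, but a pure-Scala analogue of the intended pipeline runs fine on standard collections, since their map is not overloaded. Note one assumed difference: standard Scala's zipWithIndex appends the index, ((label, count), id), whereas Flink's zipWithIndex utility prepends it, hence the (id, (label, count)) pattern in the post:

```scala
object ZipWithIndexSketch extends App {
  // Pure-Scala stand-in for the DataSet pipeline. Standard collections'
  // zipWithIndex yields ((label, count), id); Flink's utility puts the
  // index first instead.
  val input = List("a", "b", "a")

  val mapping = input
    .map(s => (s, 1))
    .zipWithIndex
    .map { case ((label, count), id) => (label, id.toLong) }

  println(mapping) // List((a,0), (b,1), (a,2))
}
```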
Tzu-Li (Gordon) Tai created FLINK-5018:
-------------------------------------
Summary: User configurable source idle timeout to work with
WatermarkStatus emitting
Key: FLINK-5018
URL: https://issues.apache.org/jira/browse/FLINK-5018
Tzu-Li (Gordon) Tai created FLINK-5017:
-------------------------------------
Summary: Introduce WatermarkStatus stream element to allow for
temporarily idle streaming sources
Key: FLINK-5017
URL:
Thanks Andrey,
I just cloned the repo you sent below and compiled it. It compiled just fine
without any errors. I then looked at the imports in
RideCleansingToKafka.scala, compared them to the program I'm working on, and
noticed that I was missing a rather important package:
Hi,
I am working on an implementation for SQL stream windows. I wanted to ask
your opinion: is it better to have a WindowRunner, just as we have a
FlatMapRunner and a FlatJoinRunner, or do you think it could potentially be
implemented over the
Ufuk Celebi created FLINK-5016:
-------------------------------------
Summary: EventTimeWindowCheckpointingITCase
testTumblingTimeWindowWithKVStateMaxMaxParallelism with RocksDB hangs
Key: FLINK-5016
URL:
Aljoscha Krettek created FLINK-5015:
-------------------------------------
Summary: Add Tests/ITCase for Kafka Per-Partition Watermarks
Key: FLINK-5015
URL: https://issues.apache.org/jira/browse/FLINK-5015
Project: Flink
Ufuk Celebi created FLINK-5014:
-------------------------------------
Summary: RocksDBStateBackend misses good toString
Key: FLINK-5014
URL: https://issues.apache.org/jira/browse/FLINK-5014
Project: Flink
Issue Type: Bug
Robert Metzger created FLINK-5013:
-------------------------------------
Summary: Flink Kinesis connector doesn't work on old EMR versions
Key: FLINK-5013
URL: https://issues.apache.org/jira/browse/FLINK-5013
Project: Flink
Aljoscha Krettek created FLINK-5012:
-------------------------------------
Summary: Provide Timestamp in TimelyFlatMapFunction
Key: FLINK-5012
URL: https://issues.apache.org/jira/browse/FLINK-5012
Project: Flink
Issue Type:
Greetings,
First, thanks to DataArtisans for putting together the Apache Flink® Training
documentation. It's proving to be a practical way to learn both Flink and Scala.
In compiling our exercise writing to Kafka, I am getting the following error,
"missing parameter type", where the parameter 'r' seems not
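The message is truncated, but "missing parameter type" in Scala usually means the compiler has no expected type for a lambda's parameter, so annotating it explicitly resolves the error. A minimal, Flink-free illustration (the name r is taken from the post; the rest is assumed):

```scala
object MissingParamTypeFix extends App {
  // Without an expected function type from the surrounding context,
  // `r => r.length` alone fails with "missing parameter type";
  // an explicit annotation on the parameter fixes it.
  val f = (r: String) => r.length

  println(f("flink")) // 5
}
```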