Neelesh Srinivas Salian created FLINK-4278:
---
Summary: Unclosed FSDataOutputStream in multiple files in the
project
Key: FLINK-4278
URL: https://issues.apache.org/jira/browse/FLINK-4278
Is it possible to discard events that are out-of-order (in terms of
event time)?
Hi,
in fact, changing it to Iterable would simplify things because then we
would not have to duplicate code for the EvictingWindowOperator any more.
It could be a very thin subclass of WindowOperator.
Cheers,
Aljoscha
On Wed, 27 Jul 2016 at 03:56 Vishnu Viswanath
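A rough sketch of the refactoring Aljoscha describes, with deliberately simplified, hypothetical types (not the actual Flink operator classes): once the firing path consumes an Iterable, the evicting variant only has to shape the Iterable and can stay a thin subclass.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified stand-ins for the real operator classes.
class WindowOperator {
    // Emitting from an Iterable means subclasses only need to
    // shape the Iterable, not duplicate the emission logic.
    List<Integer> fire(Iterable<Integer> contents) {
        List<Integer> out = new ArrayList<>();
        for (int v : contents) {
            out.add(v);
        }
        return out;
    }
}

// The thin subclass: evict elements first, then reuse fire() unchanged.
class EvictingWindowOperator extends WindowOperator {
    private final int evictCount;

    EvictingWindowOperator(int evictCount) {
        this.evictCount = evictCount;
    }

    @Override
    List<Integer> fire(Iterable<Integer> contents) {
        List<Integer> kept = new ArrayList<>();
        int skipped = 0;
        for (int v : contents) {
            if (skipped < evictCount) {
                skipped++;       // drop the first evictCount elements
            } else {
                kept.add(v);
            }
        }
        return super.fire(kept); // shared emission path
    }
}
```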
Hi,
yes, this is also what I hinted at in my earlier email about the
"SimpleTrigger" interface. We should keep the interface we currently have
and maybe extend it a bit while adding a new DSL of simpler/composable
triggers that can be executed inside one of the classic Triggers.
For now, we kept
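A minimal sketch of what such a composable-trigger DSL could look like; `SimpleTrigger`, `and`, and `or` are illustrative names here, not an agreed-upon interface:

```java
// Hypothetical "SimpleTrigger" DSL sketch. The combinators let complex
// triggers be composed from simple ones without subclassing.
interface SimpleTrigger<T> {
    boolean fires(T element);

    default SimpleTrigger<T> and(SimpleTrigger<T> other) {
        return e -> this.fires(e) && other.fires(e);
    }

    default SimpleTrigger<T> or(SimpleTrigger<T> other) {
        return e -> this.fires(e) || other.fires(e);
    }
}
```

Usage would then read declaratively, e.g. `bigValue.and(evenCount)`, while a classic Trigger wraps and executes the composed predicate.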
Hi Kevin,
I don't know what your entire program is doing, but wouldn't a
FlatMapFunction containing state with your biggest value be sufficient?
Your stream goes through your FlatMapper, which compares each element with
the last saved biggest value. You can then emit something if the value has increased.
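The stateful logic being suggested can be sketched in plain Java like this; in an actual Flink job it would sit inside a RichFlatMapFunction backed by a ValueState, but the core idea is just:

```java
import java.util.Optional;

// Plain-Java sketch of the suggested FlatMapFunction logic; the field
// stands in for what would be a ValueState<Long> in Flink.
class RunningMax {
    private Long biggest; // the "state": largest value seen so far

    // Returns the new maximum when the incoming value exceeds it,
    // and nothing otherwise -- i.e. emit only when the value increased.
    Optional<Long> flatMap(long value) {
        if (biggest == null || value > biggest) {
            biggest = value;
            return Optional.of(value);
        }
        return Optional.empty();
    }
}
```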
Hi all,
I am trying to keep track of the biggest value in a stream. I do this by
using the iterative step mechanism of Apache Flink. However, I get an
exception that checkpointing is not supported for iterative jobs. Why
can't this be enabled? My iterative stream is also quite small: only one
Hi,
IMHO we should still maintain user-specific triggers; there will always be
corner cases where a very specific trigger will need to be constructed.
That being said, I think the idea of also supporting some
state machine to be generated for the trigger is very
Another (maybe completely crazy) idea is to regard the triggers really as a
DSL and use compiler techniques to derive a state machine that you use to
do the actual triggering.
With this, the "trigger" objects that make up the tree of triggers would
not contain any logic themselves. A trigger
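A toy illustration of that idea, with invented node names: the trigger tree is pure data (an AST), and a separate evaluator holds all the triggering logic, which is what would make compiling the tree to a state machine possible.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch only: trigger "objects" carry no logic themselves.
abstract class TriggerExpr {
    static TriggerExpr countAtLeast(long n) { return new CountAtLeast(n); }
    static TriggerExpr anyOf(TriggerExpr... children) {
        return new AnyOf(Arrays.asList(children));
    }
}

class CountAtLeast extends TriggerExpr {
    final long threshold;
    CountAtLeast(long threshold) { this.threshold = threshold; }
}

class AnyOf extends TriggerExpr {
    final List<TriggerExpr> children;
    AnyOf(List<TriggerExpr> children) { this.children = children; }
}

// The interpreter: all triggering logic lives here, not in the tree.
// A compiler could instead lower the same tree into a state machine.
class TriggerEvaluator {
    static boolean fires(TriggerExpr expr, long count) {
        if (expr instanceof CountAtLeast) {
            return count >= ((CountAtLeast) expr).threshold;
        }
        if (expr instanceof AnyOf) {
            for (TriggerExpr child : ((AnyOf) expr).children) {
                if (fires(child, count)) return true;
            }
            return false;
        }
        throw new IllegalArgumentException("unknown node");
    }
}
```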
Chesnay Schepler created FLINK-4277:
---
Summary: TaskManagerConfigurationTest fails
Key: FLINK-4277
URL: https://issues.apache.org/jira/browse/FLINK-4277
Project: Flink
Issue Type: Bug
Chesnay Schepler created FLINK-4276:
---
Summary: TextInputFormatTest.testNestedFileRead fails on Windows OS
Key: FLINK-4276
URL: https://issues.apache.org/jira/browse/FLINK-4276
Project: Flink
Hello,
I just created a new FLIP which aims at exposing our metrics to the
WebInterface.
https://cwiki.apache.org/confluence/display/FLINK/FLIP-7%3A+Expose+metrics+to+WebInterface
Looking forward to feedback :)
Regards,
Chesnay Schepler
Gyula Fora created FLINK-4275:
---
Summary: Generic Folding, Reducing and List states behave
differently from other state backends
Key: FLINK-4275
URL: https://issues.apache.org/jira/browse/FLINK-4275
Maximilian Michels created FLINK-4274:
---
Summary: Expose new JobClient in the DataSet/DataStream API
Key: FLINK-4274
URL: https://issues.apache.org/jira/browse/FLINK-4274
Project: Flink
Maximilian Michels created FLINK-4273:
---
Summary: Refactor JobClientActor to watch already submitted jobs
Key: FLINK-4273
URL: https://issues.apache.org/jira/browse/FLINK-4273
Project: Flink
Maximilian Michels created FLINK-4272:
---
Summary: Create a JobClient for job control and monitoring
Key: FLINK-4272
URL: https://issues.apache.org/jira/browse/FLINK-4272
Project: Flink
Hi Stephan,
Thanks for the nice wrap-up of ideas and discussions we had over the
last months (not all on the mailing list though because we were just
getting started with the FLIP process). The document is very
comprehensive and explains the changes in great detail, even up to
the message
Hi Kevin,
Just a re-clarification: for Kafka 0.9 it would be “earliest”, and “smallest”
for the older Kafka 0.8.
I’m wondering whether or not it is reasonable to add a Flink-specific way
to set the consumer’s starting position to “earliest” and “latest”, without
respecting the external Kafka
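Putting the version-specific values from Gordon's clarification together; the broker address and group id below are placeholders:

```java
import java.util.Properties;

// Illustrative consumer properties only. Per the clarification above:
// Kafka 0.9 expects "earliest", the older Kafka 0.8 expects "smallest".
class ConsumerProps {
    static Properties forKafka(boolean isKafka09) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "my-consumer-group");       // placeholder
        props.setProperty("auto.offset.reset",
                isKafka09 ? "earliest" : "smallest");
        return props;
    }
}
```

Note that, as discussed below, this setting is only respected when no committed offset already exists for the group.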
Thank you Gordon and Max,
Thank you Gordon, that explains the behaviour a bit better to me. I am
now adding the timestamp to the group ID and that is a good workaround
for now. The "smallest" option is unfortunately not available in this
version of the FlinkKafkaConsumer class.
Cheers,
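Kevin's timestamp-in-the-group-id workaround can be sketched like this; "collector" is a placeholder base name. A fresh group.id per run means no previously committed offsets apply, so the offset-reset setting takes effect.

```java
import java.util.Properties;

// Sketch of the workaround described above: make each run look like a
// brand-new consumer group by appending a timestamp to the group id.
class FreshGroupId {
    static Properties withFreshGroup(String base) {
        Properties props = new Properties();
        props.setProperty("group.id", base + "-" + System.currentTimeMillis());
        return props;
    }
}
```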
Hi Kevin,
You need to use properties.setProperty("auto.offset.reset",
"smallest") for Kafka 9 to start from the smallest offset. Note, that
in Kafka 8 you need to use properties.setProperty("auto.offset.reset",
"earliest") to achieve the same behavior.
Kafka keeps track of the offsets per group
Hi Kevin,
Was the same “group.id” used before?
What may be happening is that on startup of the consumer (not from failure
restore), any existing committed offset for the groupId in Kafka’s brokers
will be used as the starting point. The “auto.offset.reset” is only
respected when no committed
Hi,
I am currently facing strange behaviour of the FlinkKafkaConsumer09
class. I am using Flink 1.0.3.
These are my properties:
val properties = new Properties()
properties.setProperty("bootstrap.servers", config.urlKafka)
properties.setProperty("group.id", COLLECTOR_NAME)