If I read CompositeTypeSerializerConfigSnapshot ctor correctly:
for (TypeSerializer nestedSerializer : nestedSerializers) {
    TypeSerializerConfigSnapshot configSnapshot =
        nestedSerializer.snapshotConfiguration();
    this.nestedSerializersAndConfigs.add(
The
Your case doesn't seem like FLINK-5462 since there was no CancellationException
in the stack trace you posted.
The exception from TraversableSerializer.snapshotConfiguration() was added
by FLINK-6178
FYI
On Fri, Jun 2, 2017 at 4:04 PM, MAHESH KUMAR wrote:
> Hi
Hi Team,
We have some test cases written using StreamingMultipleProgramsTestBase.
They were working fine in version 1.2; we get the following error in
version 1.3.
It seems like the CheckpointCoordinator fails after this error and
checkpointing no longer occurs.
I came across this bug:
Hello,
I am attempting to use the Flink runner to execute a data pipeline on a Hadoop
cluster. The pipeline is created successfully in Flink, but when it attempts to
execute the tasks, I receive a 'Size of total memory must be positive' error,
despite having plenty of memory.
Here's the log
Never mind, I found it: backpressure caused by an AWS RDS MySQL instance.
--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Checkpoints-very-slow-with-high-backpressure-tp12762p13468.html
Sent from the Apache Flink User Mailing List archive.
I ran a few tests and was able to find the case where there won't be data
loss. Here's how the two tests differ.
*The case where data loss is observed:*
1) Kafka source receives data. (Session window trigger hasn't been fired
yet.)
2) Bring all Kafka brokers down.
3) Flink
Thanks. Upgraded to 1.2.1 - problem goes away
Steve
On Jun 2, 2017, at 10:08 AM, Till Rohrmann wrote:
Hi Steve,
in the past we had some problems with cleaning up old checkpoints. But this was
in 1.1.x. These problems should be fixed by now.
Hi All,
If my states fit in the JVM heap, should I be using the filesystem backend
with HDFS instead of RocksDB?
Could someone comment on the FS vs. RocksDB backend performance?
I’ve also heard that the RocksDB backend is required to support delta
checkpointing. Is this true?
Is delta checkpointing
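For reference, switching to the filesystem state backend with HDFS is a flink-conf.yaml change. This is only a minimal sketch for the Flink 1.2/1.3 config keys; the HDFS host, port, and path below are placeholders you'd replace with your own:

```
# flink-conf.yaml -- minimal sketch; hdfs://namenode:8020/... is a placeholder
state.backend: filesystem
state.backend.fs.checkpointdir: hdfs://namenode:8020/flink/checkpoints
```

The same can be done per-job via StreamExecutionEnvironment#setStateBackend.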
Hi Biplob,
1. The CEPPatternOperator can use either processing time or event time for
its internal processing logic. It only depends on what TimeCharacteristic
you have set for your program. Consequently, with event time, your example
should be detected as an alert.
2. If you don't provide a
Thanks Till. The log files I have attached are the complete logs, at DEBUG
level. There are three files:
jobManger.log, tmOne.log and tmTwo.log.
Hi Steve,
in the past we had some problems with cleaning up old checkpoints. But this
was in 1.1.x. These problems should be fixed by now.
Could you try upgrading to Flink 1.2.1 in order to see whether the problem
persists? If this is the case, then it would be great if you could share
the
Hi Ninad,
After recovery, the job should continue from where the last checkpoint was
taken. Thus, it should output all remaining messages at least once to Kafka.
Could you share the complete JobManager and TaskManager logs with us? Maybe
they contain some information which could be helpful to
Hi,
Configuration:
Flink 1.2.0
I'm using the Rocks DB backend for checkpointing.
The problem I have is that no checkpoints are being deleted, and my disk is
filling up.
Is there configuration for this?
Thanks
Steve
Hi Vinay,
I've pulled my colleague Gordon into the conversation who can probably tell
you more about Flink's security features.
Cheers,
Till
On Fri, Jun 2, 2017 at 2:22 PM, vinay patil wrote:
> Hi,
>
> Currently I am looking into configuring in-transit data encryption
Thanks Gordon.
*2017-06-01 20:22:44,400 WARN
org.apache.kafka.clients.producer.internals.Sender - Got error
produce response with correlation id 4 on topic-partition
topic.http.stream.event.processor-0, retrying (9 attempts left).
Error: NOT_ENOUGH_REPLICAS
, not sure if this may be
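As background on that error: Kafka returns NOT_ENOUGH_REPLICAS when the producer requires acknowledgement from all in-sync replicas (acks=all) but fewer replicas than the topic's min.insync.replicas are currently in sync. A minimal sketch of the settings involved (the values below are illustrative, not a recommendation):

```
# producer side (values illustrative)
acks=all
retries=10

# broker/topic side -- NOT_ENOUGH_REPLICAS fires when the ISR
# shrinks below this value while acks=all is in effect
min.insync.replicas=2
```

If brokers were down during the test, the ISR shrinking below min.insync.replicas would explain the retries in the log.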
Hi,
To answer your first question, yes, elements (can) arrive out-of-order and in
most real-world use cases they will. Making them arrive in order can be
prohibitively expensive because you have to buffer elements and then sort them
when a watermark arrives. It’s possible to do this in custom
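The buffering described above can be sketched in plain Java. This is illustrative only, not Flink's API: Element, the buffer, and the watermark handling are hypothetical names showing the idea of holding elements and releasing them in timestamp order once a watermark says no earlier elements can arrive.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch: buffer out-of-order elements; when a watermark arrives,
// sort and emit everything with a timestamp at or before it.
public class WatermarkBuffer {
    static class Element {
        final long timestamp;
        final String value;
        Element(long timestamp, String value) {
            this.timestamp = timestamp;
            this.value = value;
        }
    }

    private final List<Element> buffer = new ArrayList<>();

    void add(Element e) {
        buffer.add(e);
    }

    // Emit, in timestamp order, all buffered elements not later than the watermark.
    List<Element> onWatermark(long watermark) {
        List<Element> ready = new ArrayList<>();
        for (Element e : buffer) {
            if (e.timestamp <= watermark) {
                ready.add(e);
            }
        }
        buffer.removeAll(ready);
        ready.sort(Comparator.comparingLong((Element e) -> e.timestamp));
        return ready;
    }

    public static void main(String[] args) {
        WatermarkBuffer b = new WatermarkBuffer();
        b.add(new Element(5, "c"));
        b.add(new Element(1, "a"));
        b.add(new Element(3, "b"));
        b.add(new Element(9, "d"));
        for (Element e : b.onWatermark(6)) {
            System.out.println(e.timestamp + " " + e.value); // prints 1 a, 3 b, 5 c
        }
    }
}
```

The cost the reply mentions is visible here: every element is held in memory until a watermark releases it, which is exactly why in-order delivery can get expensive.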
Hi,
Currently I am looking into configuring in-transit data encryption either
using Flink SSL Setup or directly using EMR.
Few Doubts:
1. Will the existing functionality provided by Amazon to configure
in-transit data encryption work for Flink as well? This is explained here:
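For the Flink SSL setup side of the question, enabling internal TLS is done through flink-conf.yaml. A minimal sketch using the security.ssl.* keys from the Flink 1.2/1.3 docs; the keystore/truststore paths and passwords are placeholders:

```
# flink-conf.yaml -- sketch; paths and passwords are placeholders
security.ssl.enabled: true
security.ssl.keystore: /path/to/keystore.jks
security.ssl.keystore-password: changeit
security.ssl.key-password: changeit
security.ssl.truststore: /path/to/truststore.jks
security.ssl.truststore-password: changeit
```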
Hi,
Thanks a lot for the help last time. I have a few more questions and chose
to create a new topic, as the problem in the previous topic was solved
thanks to useful inputs from the Flink Community. The questions are as follows:
*1.* Does the "within" operator work on "Event Time" or
Thanks for the suggestions.
Yes, I believe Ali is also looking for a more straightforward approach to
access the degree of a vertex (i.e. without creating a new dataset).
But Martin's and Vasia's suggestions will work, so thanks again.
On Tue, May 30, 2017 at 10:44 AM, Nico Kruber
Hi Philip,
I don’t think that is possible right now. The main thing is that Flink
currently doesn't store information about whether or not a registered state
is queryable. So, it wouldn’t be queryable until a new StateDescriptor is
provided for that state.
Would you be able to know the
Hi Ninad,
Unfortunately I don’t think the provided logs shed any light here.
It does complain about:
2017-06-01 20:22:44,400 WARN
org.apache.kafka.clients.producer.internals.Sender - Got error
produce response with correlation id 4 on topic-partition
topic.http.stream.event.processor-0,
Thanks for that! I indeed did not receive those emails, and my question
is answered.
Moiz
On Fri, Jun 2, 2017 at 12:46 PM, Nico Kruber wrote:
> Hi Moiz,
> didn't Timo's answer cover your questions?
>
> see here in case you didn't receive it:
>
Hi Moiz,
didn't Timo's answer cover your questions?
see here in case you didn't receive it:
https://lists.apache.org/thread.html/a1a0d04e7707f4b0ac8b8b2f368110b898b2ba11463d32f9bba73968@%3Cuser.flink.apache.org%3E
Nico
On Thursday, 1 June 2017 20:30:59 CEST Moiz S Jinia wrote:
> Bump..
>
>