ct-td30670.html
[14] https://berlin-2019.flink-forward.org/conference-program
[15] https://berlin-2019.flink-forward.org/training-program
[16] https://www.meetup.com/budapest-scala/events/263025323/
Cheers,
Konstantin (@snntrable)
e.com/ANNOUNCE-Rong-Rong-becomes-a-Flink-committer-td30451.html#a30474
Cheers and have a nice evening,
Konstantin (@snntrable)
, Jul 8, 2019 at 6:39 AM John Smith wrote:
>>
>>> So when we say a sink is at least once, it's because internally it's not
>>> checking any kind of state and it sends what it has regardless, correct?
>>> Because I will build a sink that calls stored procedures.
>
suming same
> kinesis multiple times in a single application? Are there any issues that
> can arise from this pattern?
> 2. How does Flink coordinate consumers of same kinesis stream across
> multiple applications ?
>
> Thanks
>
> Mans
>
, 'Flink does not rely on Kafka
> consumer offset to recover, committing offset to Kafka is merely to show
> progress to external monitoring tools'.
>
> I couldn't pinpoint the code that Flink uses to achieve it, maybe
> in-flight async invocations in 'UnorderedStreamElementQueue' are part
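For readers following along, a rough sketch of what this looks like from the application side, assuming the universal Kafka connector; the topic, servers and group id below are placeholders:

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class OffsetCommitSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(10_000);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "monitoring-only-group");   // placeholder

        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("events", new SimpleStringSchema(), props);
        // Offsets committed back to Kafka on checkpoints are informational only;
        // on recovery Flink restores the offsets from its own checkpointed state.
        consumer.setCommitOffsetsOnCheckpoints(true);

        env.addSource(consumer).print();
        env.execute("offset commit sketch");
    }
}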
; Partition 2: Watermark2 (Map / State)
>
> Partition 3: Watermark3 (Map / State)
>
>
>
> For this we are using the operator state (
> https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/state/state.html#using-managed-operator-state)
> with “*Even-split redist
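For reference, a minimal sketch of an operator using list state with even-split redistribution, roughly along the lines of the documented BufferingSink example; the buffered Long values and the sink interface are only placeholders:

import java.util.ArrayList;
import java.util.List;

import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.runtime.state.FunctionInitializationContext;
import org.apache.flink.runtime.state.FunctionSnapshotContext;
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

public class BufferingSink implements SinkFunction<Long>, CheckpointedFunction {

    private transient ListState<Long> checkpointedState;
    private final List<Long> buffer = new ArrayList<>();

    @Override
    public void invoke(Long value, Context context) {
        buffer.add(value);
    }

    @Override
    public void snapshotState(FunctionSnapshotContext context) throws Exception {
        checkpointedState.clear();
        for (Long element : buffer) {
            checkpointedState.add(element);
        }
    }

    @Override
    public void initializeState(FunctionInitializationContext context) throws Exception {
        ListStateDescriptor<Long> descriptor =
                new ListStateDescriptor<>("buffered-elements", Types.LONG);
        // getListState() is the even-split redistribution variant; on rescaling the
        // list entries are split into ranges and handed to the new parallel instances.
        checkpointedState = context.getOperatorStateStore().getListState(descriptor);
        if (context.isRestored()) {
            for (Long element : checkpointedState.get()) {
                buffer.add(element);
            }
        }
    }
}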
That's while I can save it as a text file. Here is the code.
>
> DataSet dataset = env.createInput(InputFormat);
>
> dataset.writeAsCsv("table_data");
>
> Is it a bug?
>
>
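In case it helps to compare against a complete minimal example: writeAsCsv() only supports Tuple DataSets, and it only declares a sink, so nothing is written until env.execute() runs the job. The inline elements below are just a placeholder for the createInput(...) call:

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;

public class WriteCsvSketch {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Placeholder for env.createInput(yourInputFormat); writeAsCsv requires Tuple types.
        DataSet<Tuple2<String, Integer>> dataset =
                env.fromElements(Tuple2.of("a", 1), Tuple2.of("b", 2));

        dataset.writeAsCsv("table_data");

        // Sinks are lazy in the DataSet API: without this call nothing is executed or written.
        env.execute("write csv sketch");
    }
}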
e event that failed be retried?
> - So if we had 5 incoming events and say on the 3rd one it failed, will it
> continue on the 3rd or will the job restart and try those 5 events.
>
>
>
> state of the FlinkKafkaConsumerBase. We think it’s not a big change and we
> are willing to do it if you agree.
>
>
>
> Thank you,
>
> Juan G.
>
Upcoming Meetups
* On the 18th of July, *Christos Hadjinikolis* is speaking at the "Big Data
LDN Meetup" on "How real-time data processing is used for application in
customer experience?" [16]
[15] https://berlin-2019.flink-forward.org/
[16] https://www.meetup.com/big-data-ldn/events
state at a given point
> in time.
>
> Thanks,
> Eduardo
>
E-Jincheng-Sun-is-now-part-of-the-Flink-PMC-td29868i20.html#a29966
[9] https://www.meetup.com/Paris-Apache-Beam-Meetup/events/261775884/
[10] https://www.meetup.com/Apache-Flink-Meetup-Munich/events/261282757/
Cheers,
Konstantin (@snntrable)
www.meetup.com/Paris-Apache-Beam-Meetup/events/261775884/
[22] https://www.meetup.com/Apache-Flink-Meetup-Munich/events/261282757/
Cheers,
Konstantin (@snntrable)
rk.apache.org/docs/2.4.0/streaming-kafka-0-10-integration.html#locationstrategies
> )
>
>
>
> I wonder if there is such thing in Flink as well? I didn’t find anything
> yet.
>
>
>
> Best regards
>
> Theo Diefenthal
>
1008284.n3.nabble.com/DISCUSS-FLIP-44-Support-Local-Aggregation-in-Flink-td29513.html
> [4]
> https://lists.apache.org/thread.html/ce99cba4a10b9dc40eb729d39910f315ae41d80ec74f09a356c73938@%3Cdev.flink.apache.org%3E
> [5]
> https://docs.google.com/document/d/1VavBrYn8vJeZs-Mhu5VzKO6xrWCF
//www.meetup.com/Bay-Area-Apache-Flink-Meetup/events/262216929
[18] https://www.meetup.com/Paris-Apache-Beam-Meetup/events/261775884/
[19] https://www.meetup.com/Apache-Flink-Meetup-Munich/events/261282757/
Any feedback or suggestions for this update thread are very much
appreciated.
Cheers,
Ko
latency of record that processed in my
> whole pipeline.
>
> Is the solution related to LatencyMarker? If yes, how can I reach it in my
> sink operation in order to retrieve it?
>
> Thanks,
> Roey.
>
r Flink job is a streaming one, how can we tear down the Flink job
> instance running in an IDE?
>
>
>
> Regards,
>
>
>
> Min
>
>
>
from topic_1
> where retract="false"
> group by id
>
> But it will also make big state because each id is being grouped.
> I wonder whether using Kafka to connect streaming jobs is applicable, and
> how to build a large-scale realtime system con
m/ghosh
>
> Twttr: @debasishg
> Blog: http://debasishg.blogspot.com
> Code: http://github.com/debasishg
>
e target kafka,
> how can i get these information from flink directly?
>
> best regards
> chong
>
>
>
>
>
sync part of the rocksdb incremental
>> snapshots)
>>
>> It seems to take 60-70 secs in some cases for larger state sizes, and I
>> wonder if there is anything we could tune to reduce this. Maybe it's only a
>> matter of size, I don't know.
>>
>> Any ideas
); and this holds for all the nodes in the
> graph. However, the session windowing doesn't progress, despite the
> low-watermark progressing.
>
> Regards,
> Henrik
>
>
> Any rough idea on the status of this issue?
>
> Thanks!
>
ctor.ActorCell.receiveMessage(ActorCell.scala:526)
>
> at akka.actor.ActorCell.invoke(ActorCell.scala:495)
>
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
>
> at akka.dispatch.Mailbox.run(Mailbox.scala:224)
>
> at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
>
>
ard talks?
>
> Thanks,
> Navneeth
>
in the flow are valid
> POJOs to avoid “falling back” to Kryo.
>
> Are the specific downsides listed anywhere?
>
> Thanks
> kb
>
>
>
>
>
used only for leader selection, or
>> it also stores some data relevant for switching to backup server
>> Boris Lublinsky
>> FDP Architect
>> boris.lublin...@lightbend.com
>> https://www.lightbend.com/
>>
>>
>
ween event and processing time can lead again to premature
>> deletion of late data, and the user cannot delay it.
>>
>> We could also make this behaviour configurable. Another option is to make
>> the time provider pluggable for users. The interface can give users context
>&g
galArgument exception.
>> Can reinterpretAsKeyedStream be used with source operators as well, or
>> should the operator to be used already be partitioned (by keyBy(..))?
>>
>> Thanks,
>> Adrienne
>>
>
o
>> in essence the communication would be within containers of the POD and I
>> could load balance (have to test).
>>
>> The second alternative seems doable, but looks like overkill, and I am not
>> sure how to establish a TM on the standalone QueryableStateClient
ut can't find anything
> similar. I'm hoping that someone here could guide me in the right direction.
>
> Thanks in advance
>
>
through the Avro serializer anyways.
>>
>> There may be a case to use Kryo for POJOs if you don't like the Flink
>> POJO serializer.
>>
>> I would suggest removing the "forceAvro()" option completely.
>> For "forceKryo()", I am torn between removing it co
program. Savepoints did not work when I tried because
> they require that the operator code does not change.
>
> What I want is when I start the modified app, it would start every time
> from offset 1000-1500 in Kafka because these messages have not been written
> to BigQuery.
>
> Is there a
/start-cluster.sh) reports 0 task managers, 0 task slots, and
> 0 available, out of the box. All jobs fail. All ports are open to all traffic.
>
> Can anyone tell me what I missed?
>
rts wrote:
> This is with flink 1.6.4. I was on 1.6.2 and saw Kryo issues in many more
> circumstances.
>
> On Mar 8, 2019, at 4:25 PM, Konstantin Knauf
> wrote:
>
> Hi Andrew,
>
> which Flink version do you use? This sounds a bit like
> https://issues.apache.org/jira/brows
rrent behavior)
>- FIX_INTERVAL_RETRY (retry with configurable fixed interval up to N
>times)
>- EXP_BACKOFF_RETRY (retry with exponential backoff up to N times)
>
> What do you guys think? Thanks a lot.
>
> Shuyi
>
> On Fri, Mar 8, 2019 at 3:17 PM Konstantin K
mentation. Am I right?
>
> Best,
> Tony Wei
>
> On Sat, Mar 9, 2019 at 7:00 AM, Konstantin Knauf wrote:
>
>> Hi Tony,
>>
>> before Flink 1.8, expired state is only cleaned up when you try to access
>> it after expiration, i.e. when user code tries to access the expire
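For readers who have not used it yet, a minimal sketch of how TTL is attached to a state descriptor in the first place; the 60-second TTL and the state name are placeholders:

import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

public class TtlConfigSketch {

    // Descriptor with TTL enabled; typically built in a RichFunction's open() method.
    static ValueStateDescriptor<String> ttlDescriptor() {
        StateTtlConfig ttlConfig = StateTtlConfig
                .newBuilder(Time.seconds(60))   // placeholder time-to-live
                .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
                .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
                .build();

        ValueStateDescriptor<String> descriptor =
                new ValueStateDescriptor<>("last-value", String.class);
        descriptor.enableTimeToLive(ttlConfig);
        return descriptor;
    }
}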
async function that does web requests to a rate-limited API. Can
> you handle that with settings on the async function call?
>
> Thanks,
> William
>
>
>
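A small sketch of the knob that is available on the call itself: the capacity argument of AsyncDataStream caps the number of in-flight requests, which bounds concurrency but is not a true rate limiter. The lookup function below is hypothetical and stands in for the real HTTP client:

import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.async.AsyncFunction;
import org.apache.flink.streaming.api.functions.async.ResultFuture;

public class AsyncCapacitySketch {

    // Hypothetical async lookup; the actual web request is elided.
    static class Lookup implements AsyncFunction<String, String> {
        @Override
        public void asyncInvoke(String key, ResultFuture<String> out) {
            CompletableFuture
                    .supplyAsync(() -> "result-for-" + key)  // stand-in for the HTTP call
                    .thenAccept(r -> out.complete(Collections.singleton(r)));
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> input = env.fromElements("a", "b", "c");

        // The last argument (capacity) caps the number of concurrently pending requests.
        AsyncDataStream
                .unorderedWait(input, new Lookup(), 30, TimeUnit.SECONDS, 20)
                .print();

        env.execute("async capacity sketch");
    }
}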
after reading the
> release notes. Did they change the outcome of the TTL feature, provide new
> TTL features, or just
> change how the TTL mechanism is executed?
>
> Could you give me more references to learn about it? A simple example
> to illustrate it would be much
> appreciated. Tha
ache much data from the faster source in order to wait for the
> slower source.
> The question is: how can I make the difference in the consumers'
> speed small?
>
.currentTimeMillis())
> override def extractTimestamp(rule: Rule, previousElementTimestamp: Long):
> Long = rule.created}
>
>
> But it looks like a hack and maybe someone can give an advice with the
> more convenient approach.
>
> Thx !
>
> Sincerely yours,
> *R
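One commonly used alternative, sketched here under the assumption that Rule exposes its creation time as a long field, is a bounded-out-of-orderness extractor that derives watermarks from the event timestamps themselves rather than from the wall clock:

import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.time.Time;

public class RuleTimestamps {

    // Stand-in for the Rule type from the snippet above; only the creation time matters here.
    public static class Rule {
        public long created;
    }

    public static class RuleTimestampExtractor extends BoundedOutOfOrdernessTimestampExtractor<Rule> {

        public RuleTimestampExtractor() {
            super(Time.seconds(10)); // placeholder for the allowed out-of-orderness
        }

        @Override
        public long extractTimestamp(Rule rule) {
            // Watermarks then trail the maximum seen event time by the configured bound.
            return rule.created;
        }
    }
}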
Can
> someone suggest a better way of doing this?
>
> Exception :
> Caused by: java.lang.IllegalStateException: Consecutive multiple splits
> are not supported. Splits are deprecated. Please use side-outputs
>
>
> Regards,
> Taher Koitawala
> GS Lab Pune
> +91 840797916
taskmanager-0'
> / #
>
> So the name should be postfixed with the service name. How do I force it?
> I suspect I am missing a config parameter
>
>
> Boris Lublinsky
> FDP Architect
> boris.lublin...@lightbend.com
> https://www.lightbend.com/
>
> On Feb 19, 2019, at 4:33 A
ractConfig.getConfiguredInstance(AbstractConfig.java:248)
> at
> org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:327)
> ... 17 more
>
> The closest that I found
> https://stackoverflow.com/questions/37363119/kafka-producer-org-apache-kafka-common-serialization
; then
>exec $(drop_privs_cmd) "$FLINK_HOME/bin/jobmanager.sh"
> start-foreground "$@"
> else
> exec $FLINK_HOME/bin/standalone-job.sh start-foreground "$@"
> fi
> fi
> fi
>
> exec "$@"
>
> It
ound for it.
> Is it possible to provide the complete Helm chart for it?
> Bits and pieces are in the ticket, but it would be nice to see the full
> chart
>
> Boris Lublinsky
> FDP Architect
> boris.lublin...@lightbend.com
> https://www.lightbend.com/
>
>
id it starts from the latest offset (default Kafka
> connector
> behavior).
>
>
>
>
> Thanks
> Sohi
>
>
>
>
only read messages in a given period of time to
> generate a finite DataStream. I am wondering if there is an alternative to
> this approach. Any suggestions will be very much appreciated.
>
> Regards,
> -Yu
>
>
>
her where the data gets lost? To me it would make sense to
> proceed with this, because the problem seems hard to reproduce outside of
> our environment.
>
Let's focus on checking this metric above, to make sure that the
WindowOperator is actually emitting fewer records than the overall number
processing time, hence no watermarks. Will the state still
> be cleared automatically if nothing is done?
>
>
>
> From: Konstantin Knauf
> Date: Thursday, 14 February 2019 at 5:18 PM
> To: Harshith Kumar Bolar
> Cc: "user@flink.apache.org"
> Subject:
>
>
> Is this something that can be achieved in the evictor method
> or apply method after each keyed window is done processing?
>
>
>
> Thanks,
>
> Harshith
>
>
>
>
>
Hi Juho,
you are right, the problem has actually been narrowed down quite a bit over
time. Nevertheless, sharing the code (incl. flink-conf.yaml) might be a
good idea. Maybe something strikes the eye that we have not thought about
so far. If you don't feel comfortable sharing the code on the ML,
>
> On Mon, 11 Feb 2019 at 2:39 PM, Konstantin Knauf
> wrote:
> Hi Chirag, Hi Vadim,
>
> from the top of my head, I see two options here:
>
> * Buffer the "fast" stream inside the KeyedBroadcastProcessFunction until
> relevant (whatever this
the aggregation of smaller
> messages happen
>
>}
>
>
>
> In some cases this list field of LargeMessage can get very large (1000’s
> of messages). Is it ok to create an intermediate stream of these
> LargeMessages? What should I be concerned about while designing the
ntil the slow one is fully read (in case of a
> file) or until a marker is emitted (in case of some other source). Is there
> any way to accomplish that? It doesn't seem to be a rare use case.
>
> Thanks, Vadim.
>
default-scheme: hdfs://
> state.backend: rocksdb
> state.savepoints.dir: hdfs://flink/savepoints
> state.checkpoints.dir: hdfs://flink/checkpoints
> 4) Dance a little jig when it works.
>
> Has anyone set this up? If so, am I missing anything?
>
> -Steve
oint of termination).
>
> Do you have any suggestions on how to improve this process?
>
> Best regards and thanks in advance for any input,
> William
>
>
> Flink Version: 1.6.2
>
>
Hi Ning,
could you replace the Kafka Source with a custom
SourceFunction implementation, which just produces the new events in a loop
as fast as possible? This way we can rule out that the ingestion is
responsible for the performance jump or the limit at 5000 events/s, and can
benchmark the Flink
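A minimal sketch of what such a source could look like; the Long counter is only a stand-in for the real event type and would be mapped to it (or replaced by a generator for it) in the actual test:

import org.apache.flink.streaming.api.functions.source.SourceFunction;

public class LoopSource implements SourceFunction<Long> {

    private volatile boolean running = true;

    @Override
    public void run(SourceContext<Long> ctx) throws Exception {
        long next = 0L;
        while (running) {
            // Emit synthetic events as fast as possible, bypassing ingestion entirely;
            // the checkpoint lock keeps emission consistent with checkpointing.
            synchronized (ctx.getCheckpointLock()) {
                ctx.collect(next++);
            }
        }
    }

    @Override
    public void cancel() {
        running = false;
    }
}

It would then be plugged in via env.addSource(new LoopSource()) in place of the FlinkKafkaConsumer.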
t
> mistaken.
>
> Are you already working with `taskmanager.exit-on-fatal-akka-error` enabled?
>
>
> Nico
>
> On Thursday, 3 August 2017 10:53:00 CEST Konstantin Knauf wrote:
>> Hi everyone,
>>
>> we are running Flink 1.2.1 on YARN 2.4 (I know, way too old
ner is just missing
indefinitely. Is it known that this does not work on YARN 2.4?
If it helps, I can also provide the full job and taskmanager logs...
Cheers & Thanks,
Konstantin
active control operations.
>
> Thank you so much.
>
> Best,
>
>
> Gabriele
>
>>>>>> 7
>>>>>>>>>>> still.
>>>>>>>>>>>>
>>>>>>>>>>>> Whether those are relevant is up for debate.
>>>>>>>>>>>>
>>>>>>>>>>>> Cheers,
>>>>>>>>>>>> Theo
>>>>&g
parallel flink jobs won't hook up to the stream source but get
> triggered based on the global window state and trigger event. Hope it
> explains the scenario. Please excuse me if I am not able to detail the
> nitty-gritty to the most granular unit possible.
>
> Regards,
>
> Vij
Hi Vijay,
can you elaborate a little bit on what you would like to achieve? Right now, I
am not sure what aspect of the window you want to reference
(WindowState, Timers, State in the WindowFunction, ...).
Cheers,
Konstantin
rg Heiler
> <georg.kf.hei...@gmail.com <mailto:georg.kf.hei...@gmail.com>> wrote:
> >
> > Hi,
> >
> > is there something like spark-testing-base for flink as well?
> >
> > Cheers,
> > Georg
>
y out
>> what's in the window while it is filling. I know I have access to onElement
>> in the trigger, and I can set up the state descriptor variables there, but
>> the variables don't seem to have exposure to the runtime environment within
>> the trigger.
>>
>>
both join inputs will be concurrently consumed for sorting.
>
> Best, Fabian
>
>
> 2017-01-04 13:30 GMT+01:00 Konstantin Knauf
> <konstantin.kn...@tngtech.com <mailto:konstantin.kn...@tngtech.com>>:
>
> Hi everyone,
>
> I have a basic question regardin
first?
Cheers and Thanks,
Konstantin
sign
; But I suspect that I need to map my JSON to a POJO?
>
>
> On Mon, Dec 5, 2016 at 12:33 PM, Konstantin Knauf
> <konstantin.kn...@tngtech.com <mailto:konstantin.kn...@tngtech.com>> wrote:
>
> Hi Robert,
>
> you need to actually use Jackson. The p
pache.org/jira/browse/FLINK-5227
> <https://issues.apache.org/jira/browse/FLINK-5227>
> [2] https://issues.apache.org/jira/browse/FLINK-5186
> <https://issues.apache.org/jira/browse/FLINK-5186>
>
> 2016-11-30 17:51 GMT+01:00 Konstantin Kna
Stefan Richter wrote:
> Hi,
>
> could you somehow provide us a heap dump from a TM that run for a while
> (ideally, shortly before an OOME)? This would greatly help us to figure out
> if there is a classloader leak that causes the problem.
>
> Best,
> Stefan
>
>
. Our Jars do
not include any flink dependencies though (compileOnly), but of course many
others.
Any ideas anyone?
Cheers and thank you,
Konstantin
>
> On Thu, 10 Nov 2016 at 11:47 Konstantin Knauf
> <konstantin.kn...@tngtech.com <mailto:konstantin.kn...@tngtech.com>> wrote:
>
> Hi Aljoscha,
>
> unfortunately, I think, FLINK-4994 would not solve our issue. What does
> "on the very end&
ould work, correct?
>
> Cheers,
> Aljoscha
>
> On Wed, 9 Nov 2016 at 19:53 Konstantin Knauf
> <konstantin.kn...@tngtech.com <mailto:konstantin.kn...@tngtech.com>> wrote:
>
> Hi Aljoscha,
>
> as it turns out the "workaround&qu
Sounds good Aljoscha.
Aljoscha Krettek
> Aljoscha
>
> On Tue, 8 Nov 2016 at 18:18 Konstantin Knauf
> <konstantin.kn...@tngtech.com <mailto:konstantin.kn...@tngtech.com>> wrote:
>
> Hi everyone,
>
> I just migrated a streaming Job from 1.0.2 to 1.1.3 and stumbled across
> a problem
processing time timers are deleted every time the window is
PURGEd.
The default implementation of clear() is a no-op.
Just wanted to ask if this is the expected behavior (processing time timers
being deleted on PURGE or FIRE_AND_PURGE) from Flink 1.1 on?
Cheers,
Konstantin
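For context, a minimal sketch of a trigger that manages its own processing-time timer and cleans it up explicitly in clear() instead of relying on any implicit deletion; firing and purging at the end of the window is just an example policy:

import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

public class FireAtWindowEndTrigger extends Trigger<Object, TimeWindow> {

    @Override
    public TriggerResult onElement(Object element, long timestamp, TimeWindow window, TriggerContext ctx) {
        // (Re-)register a processing-time timer for the end of the window.
        ctx.registerProcessingTimeTimer(window.maxTimestamp());
        return TriggerResult.CONTINUE;
    }

    @Override
    public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) {
        return TriggerResult.FIRE_AND_PURGE;
    }

    @Override
    public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx) {
        return TriggerResult.CONTINUE;
    }

    @Override
    public void clear(TimeWindow window, TriggerContext ctx) {
        // Explicitly remove the pending timer when the window is cleaned up.
        ctx.deleteProcessingTimeTimer(window.maxTimestamp());
    }
}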
ake sense? Or is there a better approach?
>
> In general, how does Flink handle readings from the future?
>
> Thanks,
> Dominik
>
Hi Ufuk,
any ideas? Any configuration that could be wrong?
Cheers,
Konstantin
On 30.09.2016 13:13, Konstantin Knauf wrote:
> Hi Ufuk,
>
> thanks for your quick answer.
>
> Setup: 2 Servers, each running a JM as well as TM
>
> 1) Removing all existing blobstores l
wrote:
> On Fri, Sep 30, 2016 at 9:12 AM, Konstantin Knauf
> <konstantin.kn...@tngtech.com> wrote:
>> we are running a Flink (1.1.2) Stand-Alone Cluster with JM HA, and HDFS
>> as checkpoint and recovery storage dir. What we see is that blobStores
>> are stored in HDFS as
>
in
LocalStreamEnvironment programmatically. Side-Info: The job is reading
from a Kafka Cluster, which is programmatically started for each test run.
Cheers,
Konstantin
DFS to store those, and only stores "pointers" to the data in
> the DFS in ZooKeeper.
>
> Are you thinking of another highly available storage for larger data
> (megabytes) that could be used here?
>
> Greetings,
> Stephan
>
>
> On Tue, Aug 23, 2016 at 6:36 PM,
]
https://ci.apache.org/projects/flink/flink-docs-master/setup/jobmanager_high_availability.html
eue and
> writes it
> > into HDFS, maybe this is also important to know.
> >
> > Thank you and best regards
> >
> > Konstantin
erer", 2.20, 1.80>
>
> I would like to take the biggest values from both fields, in this case
> it should be: 2.20 and 2.10, being the final result as:
> <"Rafa Nadal - Roger Federer", 2.20, 2.10>.
>
> I don't know if I am mistaken, but I think I could use the
>
the client when starting a yarn
session "./yarn-session.sh"
log4j.properties: JobManager/TaskManager logs in standalone mode. Not
used when running on YARN.
Cheers,
Konstantin
I want clear, notice that the middle
> table doesn't need to be a table, it is just what I want and I don't
> have enough knowledge of Flink to know how to do it.
>
>
> Thanks for your time!
>
>
>
> 2016-05-26 20:33 GMT+02:00 Konstantin Knauf
> <konstantin.
Event 1 from website "Y" RIGHT NOW?
>
> JavaObjectY.price
>
>
> Compare both attributes
> Get a result depending on that comparison
>
> My Java object doesn't have a timestamp, but I think I should use one, right?
>
>
> Thanks!
>
>
>
>
&g
ect.price, then show a message.
>
>
> What is the (best) way of doing this? I am new to Flink and I am
> quite lost :)
>
>
> Thanks!
>
nt.max-retry-attempts: 5
So I would have expected a timeout of around 120,000ms. 50,000ms is our
configured akka.watch.heartbeat.interval. Is this value used instead here?
Cheers,
Konstantin
?
Cheers,
Konstantin