Hi Dawid -
I’m pretty keen on keeping it alive. Do we have a sense of what it would take
to get it to a “production-ready” state?
Thanks!
Ron
> On Feb 4, 2022, at 5:06 AM, Dawid Wysakowicz wrote:
>
> Hi Karthik,
>
> The reason we deprecated it is because we lacked committers who could
Hi all -
I’m trying to keep some state around for a little while after a window fires,
to use as queryable state. I intend to use something like:
.keyBy()
.timeWindow(Time.minutes(1)).allowedLateness(Time.minutes(90))
.aggregate(…)
.keyBy()
.asQueryableState(...)
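The retention idea behind the pipeline above can be sketched without Flink at all: keep each window’s aggregate in a map for a retention period after the window fires, and evict it once that period elapses. This is a minimal, Flink-free sketch; the class and method names are hypothetical, not Flink API.

```java
import java.util.HashMap;
import java.util.Map;

// Flink-free sketch of the retention idea: keep each window's aggregate
// around for a retention period after the window fires, so it can still
// be queried. All names here are hypothetical.
public class RetainedAggregates {
    private static class Entry {
        final long value;           // the window's aggregate
        final long expiresAtMillis; // when it stops being queryable
        Entry(long value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, Entry> store = new HashMap<>();
    private final long retentionMillis;

    public RetainedAggregates(long retentionMillis) {
        this.retentionMillis = retentionMillis;
    }

    // Called when a window fires for a key.
    public void onWindowFired(String key, long aggregate, long nowMillis) {
        store.put(key, new Entry(aggregate, nowMillis + retentionMillis));
    }

    // Query the retained state; returns null once retention has elapsed.
    public Long query(String key, long nowMillis) {
        Entry e = store.get(key);
        if (e == null || nowMillis > e.expiresAtMillis) {
            store.remove(key);
            return null;
        }
        return e.value;
    }
}
```

In Flink terms, the `allowedLateness(Time.minutes(90))` call in the snippet above keeps the window state alive for 90 minutes after the window ends, which is what makes it visible to the queryable-state layer during that interval.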
My intent is to keep
than it needs
to be, so being comfortable with the deployment details of a Flink cluster is
helpful.
This team is in Portland, Oregon, but you can likely work with us remotely as
well.
You can contact me directly, or share your recommendations with this list.
Thanks in advance
Ron
—
Ron
I just stumbled on this same problem without any associated ZK issues. We had a
Kafka broker fail that caused this issue:
2018-07-18 02:48:13,497 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph - Sink: Produce:
(2/4) (7e7d61b286d90c51bbd20a15796633f2) switched from
I’m joining two streams. One is a “decoration” stream that we have in a
compacted Kafka topic, produced from a view on a MySQL table via Kafka
Connect; the other is the “event data” we want to decorate, coming in over time
via Kafka. These streams are keyed the same way, via an “id”.
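The keyed join described above can be sketched in plain Java, independent of Flink: the compacted topic behaves like a last-write-wins table per id, and each incoming event is enriched with the current decoration for its id. All names below are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Flink-free sketch of the keyed join: a "decoration" table (latest value
// per id, as a compacted topic gives you) joined against incoming events
// by id. Names are hypothetical.
public class DecoratingJoin {
    private final Map<String, String> decorationById = new HashMap<>();

    // Decoration stream: compacted-topic semantics, last write per id wins.
    public void onDecoration(String id, String decoration) {
        decorationById.put(id, decoration);
    }

    // Event stream: emit the event enriched with its decoration (if any).
    public String onEvent(String id, String payload) {
        String d = decorationById.getOrDefault(id, "<none>");
        return id + ":" + payload + ":" + d;
    }
}
```

In Flink itself, the analogous shape is keying both streams by the id, then using `connect` with a `CoProcessFunction` that stores the latest decoration in keyed state and enriches each event as it arrives.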
A question came up from my colleagues about canary deploys and Flink. We had a
hard time understanding how we could do a canary deploy without constructing a
new cluster and deploying the job there. If you have a canary deploy model, how
do you do this?
Thanks for your help!
Ron
t well documented or this is new behavior in 1.4 (side note: I’m trying to
decide if I need to upgrade from 1.3.2 to 1.4).
Thanks!
Ron
—
Ron Crocker
Principal Engineer & Architect
( ( •)) New Relic
rcroc...@newrelic.com
M: +1 630 363 8835
GRR. Hit send too soon.
The thoughts we have right now are Docker Compose based: we run Kafka and
Flink as Docker containers, inject events, and watch the output.
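A Compose setup along those lines might look roughly like this. This is an illustrative sketch only; the image names, tags, ports, and environment variables are assumptions and would need adjusting to the actual environment.

```yaml
# Hypothetical docker-compose sketch: Kafka plus a minimal Flink cluster.
version: "3"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka
    depends_on: [zookeeper]
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
  jobmanager:
    image: flink
    command: jobmanager
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
  taskmanager:
    image: flink
    command: taskmanager
    depends_on: [jobmanager]
    environment:
      - JOB_MANAGER_RPC_ADDRESS=jobmanager
```

The test harness then produces events into the Kafka container, submits the job to the jobmanager, and asserts on the output topic.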
Ron
> On Nov 1
Is there a good way to do integration testing for a Flink job? That is, I want
to inject a set of events and verify the proper behavior.
Ron
Hi Nico -
Thanks for the feedback, and nice catch on the missing volatile.
Ron
> On Nov 3, 2017, at 7:48 AM, Nico Kruber <n...@data-artisans.com> wrote:
>
> Hi Ro
next.f0;
        if (partitionMap.containsKey(myKey)) {
            List partitions = partitionMap.get(myKey);
            myKey = partitions.get(ThreadLocalRandom.current().nextInt(partitions.size()));
        }
        return (int) (myKey % numPartitions);
    }
}
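The truncated fragment above appears to map a record’s key to a partition, spreading “hot” keys across several surrogate keys. A self-contained version of that logic might look like this; the class name, constructor, and field names are assumptions, since the top of the original snippet is missing.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

// Self-contained sketch of the fragment above: map a record's key to a
// partition, but for "hot" keys listed in partitionMap, spread the load
// by picking one of several surrogate keys at random.
public class SpreadingPartitioner {
    private final Map<Long, List<Long>> partitionMap; // hot key -> surrogate keys
    private final int numPartitions;

    public SpreadingPartitioner(Map<Long, List<Long>> partitionMap, int numPartitions) {
        this.partitionMap = partitionMap;
        this.numPartitions = numPartitions;
    }

    public int partition(long key) {
        long myKey = key;
        if (partitionMap.containsKey(myKey)) {
            List<Long> partitions = partitionMap.get(myKey);
            myKey = partitions.get(ThreadLocalRandom.current().nextInt(partitions.size()));
        }
        return (int) (myKey % numPartitions);
    }
}
```

Non-hot keys partition deterministically by `key % numPartitions`; hot keys land on whichever partition their randomly chosen surrogate key maps to.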
Ron
new name.
I’m struggling with this and thought someone here might be able to help. I have
thought about merging two streams (the stream of new things and the stream of
the full set seen so far) but haven’t tried that yet.
I welcome any input.
Thanks!
Ron
What’s crazy is that I just stumbled on the same issue. Thanks for sharing!
Ron
> On Sep 15, 2017, at 7:30 AM, Tony Wei <tony19920...@gmail.com> wrote:
>
> Hi Aljoscha,
by a “shutdown” event into the event stream,
but my understanding is a bit cartoonish, so I’m sure it’s more involved.
Ron
> On Dec 20, 2016, at 12:40 PM, Stephan Ewen <se...@apache.
state are pre-calculated (and stored
in that BackingStore).
Further, I want to checkpoint the entire state (including the coefficients)
to both Flink’s checkpointing system as well as the backing store. The
former is handled here, the latter is handled with another transform in my
graph.
Is there a bet
Thanks Stefano -
That helped, but it just led to different pain. I think I need to reconsider
how I treat these things. Alas, that’s the subject of a different thread.
Ron
> On Mar 12, 2016, a
ccountId();
int getAgentId();
long getWideMetricId();
AggregatableTimesliceStats getTimesliceStats();
}
Ron
tring current, Integer i) { return current +
> String.valueOf(i); }
> }
>
> [1, 2, 3, 4, 5] -> fold("start-") means: ((((("start-" + 1) + 2) + 3) +
> 4) + 5) = "start-12345" (as a String)
>
>
> I hope that example illustrates th
reduce, except that you define a start element (of a
> different type than the input type) and the result type is the type of the
> initial value. In reduce, the result type must be identical to the input
> type.
>
> Best, Fabian
>
> 2015-11-18 18:32 GMT+01:00 Ron Crocker &
Is there a succinct description of the distinction between these transforms?
Ron