Re: Queryable State Deprecation

2022-03-11 Thread Ron Crocker
Hi Dawid - I’m pretty keen on keeping it alive. Do we have a sense of what it would take to get it “to a production ready state?” Thanks! Ron

> On Feb 4, 2022, at 5:06 AM, Dawid Wysakowicz wrote:
>
> Hi Karthik,
>
> The reason we deprecated it is because we lacked committers who could

timeWindow()s and queryable state

2021-03-01 Thread Ron Crocker
Hi all - I’m trying to keep some state around for a little while after a window fires to use as queryable state. I am intending on using something like:

    .keyBy()
    .timeWindow(Time.minutes(1)).allowedLateness(Time.minutes(90))
    .aggregate(…)
    .keyBy()
    .asQueryableState(...)

My intent is to keep
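A minimal sketch of what such a pipeline might look like, assuming events is a DataStream<Tuple2<String, Long>> of (key, count) pairs; sum(1) stands in for the original aggregate(…), and the "window-counts" state name is illustrative:

    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.windowing.time.Time;

    // Per-key 1-minute sums, with results kept around for late data and then
    // re-keyed and exposed as queryable state.
    DataStream<Tuple2<String, Long>> windowed = events
        .keyBy(value -> value.f0)
        .timeWindow(Time.minutes(1))
        .allowedLateness(Time.minutes(90))
        .sum(1);

    windowed
        .keyBy(value -> value.f0)
        .asQueryableState("window-counts",
            new ValueStateDescriptor<>("window-counts",
                Types.TUPLE(Types.STRING, Types.LONG)));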

Technical consulting resources

2019-03-14 Thread Ron Crocker
than it needs to be, so being comfortable with the deployment details of a Flink cluster is helpful. This team is in Portland, Oregon, but you can likely work with us remotely as well. You can contact me directly or share your recommendations with this list. Thanks in advance, Ron

Re: Question about the behavior of TM when it lost the zookeeper client session in HA mode

2018-07-18 Thread Ron Crocker
I just stumbled on this same problem without any associated ZK issues. We had a Kafka broker fail that caused this issue:

    2018-07-18 02:48:13,497 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph  - Sink: Produce: (2/4) (7e7d61b286d90c51bbd20a15796633f2) switched from

Forcing consuming one stream completely prior to another starting

2018-01-19 Thread Ron Crocker
I’m joining two streams - one is a “decoration” stream that we have in a compacted Kafka topic, produced using a view on a MySQL table AND using Kafka Connect; the other is the “event data” we want to decorate, coming in over time via Kafka. These streams are keyed the same way - via an “id”
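One common way to get the effect of consuming the decoration stream "first" is to connect the two keyed streams and buffer events until the decoration for their key has arrived. A hedged sketch of that approach (Decoration, Event and DecoratedEvent are placeholder types, not from the thread):

    import org.apache.flink.api.common.state.ListState;
    import org.apache.flink.api.common.state.ListStateDescriptor;
    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.co.CoProcessFunction;
    import org.apache.flink.util.Collector;

    // Buffers events per key until the decoration for that key shows up.
    public class DecorateWhenReady extends CoProcessFunction<Decoration, Event, DecoratedEvent> {

        private transient ValueState<Decoration> decoration;
        private transient ListState<Event> buffered;

        @Override
        public void open(Configuration parameters) {
            decoration = getRuntimeContext().getState(
                new ValueStateDescriptor<>("decoration", Decoration.class));
            buffered = getRuntimeContext().getListState(
                new ListStateDescriptor<>("buffered-events", Event.class));
        }

        @Override
        public void processElement1(Decoration d, Context ctx, Collector<DecoratedEvent> out) throws Exception {
            decoration.update(d);
            // Flush anything that arrived before the decoration for this key.
            for (Event e : buffered.get()) {
                out.collect(new DecoratedEvent(e, d));
            }
            buffered.clear();
        }

        @Override
        public void processElement2(Event e, Context ctx, Collector<DecoratedEvent> out) throws Exception {
            Decoration d = decoration.value();
            if (d != null) {
                out.collect(new DecoratedEvent(e, d));
            } else {
                buffered.add(e);   // decoration not seen yet; hold the event back
            }
        }
    }

    // Usage, with both streams keyed by the same id:
    // decorations.connect(events)
    //     .keyBy(Decoration::getId, Event::getId)
    //     .process(new DecorateWhenReady());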

Canary deploys and Flink?

2018-01-16 Thread Ron Crocker
A question came up from my colleagues about canary deploys and Flink. We had a hard time understanding how we could do a canary deploy without constructing a new cluster and deploying the job there. If you have a canary deploy model, how do you do this? Thanks for your help! Ron

Consecutive windowed operations

2017-12-14 Thread Ron Crocker
t well documented or this is new behavior in 1.4 (side note: I’m trying to decide if I need to upgrade from 1.3.2 to 1.4). Thanks! Ron

Re: Integration testing my Flink job

2017-11-14 Thread Ron Crocker
GRR. Hit send too soon. The thoughts we have right now are Docker Compose based - we run Kafka and Flink as Docker containers, inject events, and watch the output. Ron

Integration testing my Flink job

2017-11-14 Thread Ron Crocker
Is there a good way to do integration testing for a Flink job - that is, I want to inject a set of events and see the proper behavior? Ron
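For reference, a sketch of the approach later documented in Flink's testing guide, using flink-test-utils' MiniClusterWithClientResource (available in Flink releases newer than the ones discussed in this thread). The map(x -> x + 1) pipeline and the test class name are stand-ins for the real job:

    import static org.junit.Assert.assertTrue;

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    import org.apache.flink.runtime.testutils.MiniClusterResourceConfiguration;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.sink.SinkFunction;
    import org.apache.flink.test.util.MiniClusterWithClientResource;
    import org.junit.ClassRule;
    import org.junit.Test;

    public class PipelineIntegrationTest {

        // Spins up an embedded Flink mini-cluster shared by all tests in the class.
        @ClassRule
        public static final MiniClusterWithClientResource FLINK =
            new MiniClusterWithClientResource(
                new MiniClusterResourceConfiguration.Builder()
                    .setNumberTaskManagers(1)
                    .setNumberSlotsPerTaskManager(2)
                    .build());

        @Test
        public void pipelineIncrementsValues() throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.setParallelism(2);
            CollectSink.VALUES.clear();

            env.fromElements(1L, 21L, 22L)
               .map(x -> x + 1)             // placeholder for the job's real transformation
               .addSink(new CollectSink());
            env.execute("integration-test");

            assertTrue(CollectSink.VALUES.containsAll(Arrays.asList(2L, 22L, 23L)));
        }

        // A tiny test sink that collects results in a static, synchronized list.
        private static class CollectSink implements SinkFunction<Long> {
            static final List<Long> VALUES = Collections.synchronizedList(new ArrayList<>());

            @Override
            public void invoke(Long value, Context context) {
                VALUES.add(value);
            }
        }
    }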

Re: Making external calls from a FlinkKafkaPartitioner

2017-11-03 Thread Ron Crocker
Thanks Nico - Thanks for the feedback, and nice catch on the missing volatile. Ron

Making external calls from a FlinkKafkaPartitioner

2017-11-02 Thread Ron Crocker
    next.f0;
        if (partitionMap.containsKey(myKey)) {
            List partitions = partitionMap.get(myKey);
            myKey = partitions.get(ThreadLocalRandom.current().nextInt(partitions.size()));
        }
        return (int)(myKey % numPartitions);
        }
    }

Ron
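A hedged reconstruction of a complete partitioner around that fragment. The record type Tuple2<Long, byte[]> with the routing key in f0, the class name, and the way partitionMap (key -> candidate partition ids) gets refreshed by an external call are all assumptions; the volatile field reflects the fix discussed later in this thread:

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ThreadLocalRandom;

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner;

    public class KeyedSubsetPartitioner extends FlinkKafkaPartitioner<Tuple2<Long, byte[]>> {

        // Refreshed by whatever makes the external call; volatile so updates are visible here.
        private volatile Map<Long, List<Long>> partitionMap;

        public KeyedSubsetPartitioner(Map<Long, List<Long>> initialMap) {
            this.partitionMap = initialMap;
        }

        public void updatePartitionMap(Map<Long, List<Long>> newMap) {
            this.partitionMap = newMap;
        }

        @Override
        public int partition(Tuple2<Long, byte[]> next, byte[] key, byte[] value,
                             String targetTopic, int[] partitions) {
            long myKey = next.f0;
            Map<Long, List<Long>> map = partitionMap;   // read the volatile field once
            if (map != null && map.containsKey(myKey)) {
                // Remap the key to one of its allowed partitions, picked at random.
                List<Long> candidates = map.get(myKey);
                myKey = candidates.get(ThreadLocalRandom.current().nextInt(candidates.size()));
            }
            return partitions[(int) (myKey % partitions.length)];
        }
    }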

Finding things not seen in the last window

2017-09-29 Thread Ron Crocker
new name. I’m struggling to help and thought someone here might be able to help. I have thought about merging two streams (the stream of new things and the stream of the full set seen so far) but haven’t tried that yet. I welcome any of your inputs. Thanks! Ron

Re: How to user ParameterTool.fromPropertiesFile() to get resource file inside my jar

2017-09-26 Thread Ron Crocker
What’s crazy is that I just stumbled on the same issue. Thanks for sharing! Ron
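For anyone hitting the same thing: ParameterTool.fromPropertiesFile(String) expects a filesystem path, so it can't see a properties file packaged inside the job jar. A small sketch that loads the resource from the classpath instead and feeds it through ParameterTool.fromMap ("/job.properties" and the helper name are placeholders; newer Flink versions also accept an InputStream in fromPropertiesFile directly):

    import org.apache.flink.api.java.utils.ParameterTool;

    import java.io.InputStream;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;

    public final class JarResourceParams {

        // Loads a .properties file that is bundled in the jar (on the classpath),
        // rather than from a filesystem path.
        public static ParameterTool fromClasspath(String resourceName) throws Exception {
            try (InputStream in = JarResourceParams.class.getResourceAsStream(resourceName)) {
                if (in == null) {
                    throw new IllegalArgumentException("Resource not found on classpath: " + resourceName);
                }
                Properties props = new Properties();
                props.load(in);
                Map<String, String> map = new HashMap<>();
                for (String name : props.stringPropertyNames()) {
                    map.put(name, props.getProperty(name));
                }
                return ParameterTool.fromMap(map);
            }
        }
    }

    // Usage: ParameterTool params = JarResourceParams.fromClasspath("/job.properties");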

Can I see the kafka header information in the Flink connector?

2016-12-22 Thread Ron Crocker
— Ron Crocker Principal Engineer & Architect ( ( •)) New Relic rcroc...@newrelic.com M: +1 630 363 8835

Re: Flink rolling upgrade support

2016-12-22 Thread Ron Crocker
by a “shutdown” event into the event stream, but my understanding is a bit cartoonish so I’m sure it’s more involved. Ron

Re: FoldFunction accumulator checkpointing

2016-04-19 Thread Ron Crocker
state are pre-calculated (and stored in that BackingStore). Further, I want to checkpoint the entire state (including the coefficients) to both Flink’s checkpointing system and the backing store. The former is handled here, the latter is handled with another transform in my graph. Is there a bet
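One way to illustrate the "checkpoint to Flink and write through to the store" idea: keep the running accumulator in Flink keyed state (so it is included in every checkpoint) and push each update to the external store from the same operator. Measurement, ModelState, and BackingStore below are placeholder types, not from this thread; this is a sketch, not the author's actual design:

    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.configuration.Configuration;

    public class FoldWithBackingStore extends RichMapFunction<Measurement, ModelState> {

        private transient ValueState<ModelState> state;
        private transient BackingStore store;

        @Override
        public void open(Configuration parameters) {
            state = getRuntimeContext().getState(
                new ValueStateDescriptor<>("model-state", ModelState.class));
            store = BackingStore.connect();   // placeholder external client
        }

        @Override
        public ModelState map(Measurement m) throws Exception {
            ModelState current = state.value();
            if (current == null) {
                current = ModelState.initialFor(m);   // e.g. the pre-calculated coefficients
            }
            ModelState updated = current.fold(m);
            state.update(updated);   // checkpointed by Flink
            store.write(updated);    // write-through to the backing store
            return updated;
        }
    }

    // Usage: apply after keyBy(...) so the ValueState is scoped per key.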

Re: Silly keyBy() error

2016-03-12 Thread Ron Crocker
Thanks Stefano - That helped, but just led to different pain. I think I need to reconsider how I treat these things. Alas, the subject of a different thread. Ron

Silly keyBy() error

2016-03-12 Thread Ron Crocker
    ccountId();
        int getAgentId();
        long getWideMetricId();
        AggregatableTimesliceStats getTimesliceStats();
    }

Ron
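If the keyBy() error came from using field-expression keys against an interface type (an interface is not a POJO as far as Flink's type extraction is concerned), one workaround is an explicit KeySelector. The interface name AggregatableTimeslice and the composite (accountId, agentId) key below are only assumptions based on the getters above:

    import org.apache.flink.api.java.functions.KeySelector;
    import org.apache.flink.api.java.tuple.Tuple2;

    // Keys elements by (accountId, agentId) without relying on field-name extraction.
    KeySelector<AggregatableTimeslice, Tuple2<Integer, Integer>> byAccountAndAgent =
        new KeySelector<AggregatableTimeslice, Tuple2<Integer, Integer>>() {
            @Override
            public Tuple2<Integer, Integer> getKey(AggregatableTimeslice t) {
                return Tuple2.of(t.getAccountId(), t.getAgentId());
            }
        };

    // Usage: stream.keyBy(byAccountAndAgent)...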

Re: Fold vs Reduce in DataStream API

2015-11-19 Thread Ron Crocker
> tring current, Integer i) {
>     return current + String.valueOf(i);
>   }
> }
>
> [1, 2, 3, 4, 5] -> fold("start-") means: ((((("start-" + 1) + 2) + 3) + 4) + 5) = "start-12345" (as a String)
>
> I hope that example illustrates th

Re: Fold vs Reduce in DataStream API

2015-11-19 Thread Ron Crocker
> reduce, except that you define a start element (of a different type than the input type) and the result type is the type of the initial value. In reduce, the result type must be identical to the input type.
>
> Best, Fabian
>
> 2015-11-18 18:32 GMT+01:00 Ron Crocker &
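A small self-contained sketch of the distinction Fabian describes. Note that fold was later deprecated and removed from the DataStream API, so this only compiles against Flink versions of that era; the single artificial key just keeps the demo in one group:

    import org.apache.flink.api.common.functions.FoldFunction;
    import org.apache.flink.api.common.functions.ReduceFunction;
    import org.apache.flink.api.java.functions.KeySelector;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class FoldVsReduce {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            DataStream<Integer> numbers = env.fromElements(1, 2, 3, 4, 5);

            // One artificial key so all elements land in a single group.
            KeySelector<Integer, Integer> oneGroup = new KeySelector<Integer, Integer>() {
                @Override
                public Integer getKey(Integer value) {
                    return 0;
                }
            };

            // reduce: result type must equal the input type (Integer in, Integer out).
            DataStream<Integer> summed = numbers.keyBy(oneGroup)
                .reduce(new ReduceFunction<Integer>() {
                    @Override
                    public Integer reduce(Integer a, Integer b) {
                        return a + b;   // emits running sums, ending in 15
                    }
                });

            // fold: result type is the type of the initial value (String here), which
            // can differ from the Integer input type.
            DataStream<String> folded = numbers.keyBy(oneGroup)
                .fold("start-", new FoldFunction<Integer, String>() {
                    @Override
                    public String fold(String current, Integer i) {
                        return current + String.valueOf(i);   // ends in "start-12345"
                    }
                });

            summed.print();
            folded.print();
            env.execute("fold-vs-reduce");
        }
    }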

Fold vs Reduce in DataStream API

2015-11-18 Thread Ron Crocker
Is there a succinct description of the distinction between these transforms? Ron