Hi,
As far as I understand, the aggregated result for a window is not
included in the next window.
A window stays in the state store until it is deleted based on the retention
setting; however, the aggregated result for that window will include only the
records that occur within the window duration.
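The semantics described above can be illustrated with a small self-contained sketch (plain Java, not Kafka Streams itself): each event is assigned to exactly one tumbling window by its timestamp, so a window's aggregate counts only its own records and earlier windows are never folded into later ones.

```java
import java.util.Map;
import java.util.TreeMap;

public class TumblingWindowSketch {
    // Maps a timestamp to the start of its tumbling window of size windowMs.
    static long windowStart(long ts, long windowMs) {
        return ts - (ts % windowMs);
    }

    public static void main(String[] args) {
        long windowMs = 10_000L; // 10-second tumbling windows
        long[] eventTimestamps = {1_000L, 4_000L, 9_999L, 10_000L, 15_000L, 25_000L};

        // Count events per window; each record lands in exactly one window,
        // so no window's aggregate includes a previous window's records.
        Map<Long, Integer> countsPerWindow = new TreeMap<>();
        for (long ts : eventTimestamps) {
            countsPerWindow.merge(windowStart(ts, windowMs), 1, Integer::sum);
        }
        System.out.println(countsPerWindow); // {0=3, 10000=2, 20000=1}
    }
}
```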
If you
Thanks John.
That partially answers my question.
I'm a little confused about when a window will expire.
As you said, I will receive at most 20 events at T2, but as time goes on,
will the data from the first window always be included in the aggregated
result?
On Mon, Jan 20, 2020 at 7:55 AM John
Hey all,
I meant to do this a while back, so apologies for the delay.
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=145722808
The page above has working instructions on how to start the debugger in Eclipse
for Scala. I haven't proofread the text, but the solution works correctly. If
One note is that 0.11.0.3 is pretty old by now, and no new releases in that
series are planned. I recommend planning an upgrade to the 2.x series
whenever possible.
Ismael
On Mon, Jan 20, 2020 at 12:47 AM Manikumar
wrote:
> Hi,
>
> Your approach is correct. For minor version upgrade (0.11.0.0
Hi folks,
We have recently run into a strange issue with the Kafka consumer.
The consumer thread was hanging indefinitely.
Can someone advise how to approach this?
The Kafka version is 1.0.2.
We found the following error on the broker side.
[2020-01-14 12:17:54,454] DEBUG [GroupMetadataManager brokerId=1002] Offset
Thanks John!
I don't think transformValues will work here, because I need to remove
records which already have manual data.
Either way, it doesn't matter too much, as I just write them straight to
Kafka.
Thanks for your help!
On Mon, Jan 20, 2020 at 4:48 PM John Roesler wrote:
> Hi Yair,
>
>
Hi Yair,
You should be fine!
Merging does preserve copartitioning.
Also processing on that partition is single-threaded, so you don’t have to
worry about races on the same key in your transformer.
Actually, you might want to use transformValues to inform Streams that you
haven’t changed the
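The advice above might look something like this in the Streams DSL (a sketch only, assuming Kafka Streams on the classpath; topic names and the transformer body are hypothetical placeholders, not the poster's actual code):

```java
StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> auto = builder.stream("auto-topic");
KStream<String, String> manual = builder.stream("manual-topic");

// merge() preserves copartitioning: neither the keys nor the partitioning
// of the two copartitioned input streams is changed.
KStream<String, String> merged = auto.merge(manual);

// transformValues() declares that the key is unchanged, so Streams will not
// mark the stream for repartitioning the way transform()/map() would.
merged
    .transformValues(() -> new ValueTransformer<String, String>() {
        @Override
        public void init(ProcessorContext context) { }

        @Override
        public String transform(String value) {
            // Hypothetical per-record logic; return null/filter as needed.
            return value;
        }

        @Override
        public void close() { }
    })
    .to("output-topic");
```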
Hi
I asked this question on Stack Overflow and was wondering if anyone here
could answer it:
https://stackoverflow.com/questions/59820243/does-merging-two-kafka-streams-preserve-co-partitioning
I have 2 co-partitioned Kafka topics. One contains automatically generated
data, and the other manual
Hello.
Can anyone please explain what I'm doing wrong?
I'm trying to add SASL plaintext auth to Kafka 2.2.2.
Configuration steps are below:
1. config/server.properties
sasl.enabled.mechanisms=PLAIN
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
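One thing to note: the properties above are usually not sufficient on their own. For SASL/PLAIN you typically also need SASL listeners and a JAAS configuration for the broker. A sketch of the commonly needed additional pieces (hostnames, usernames, and passwords below are placeholders):

```properties
# config/server.properties — listeners must use the SASL_PLAINTEXT protocol
listeners=SASL_PLAINTEXT://0.0.0.0:9092
advertised.listeners=SASL_PLAINTEXT://broker-host:9092
```

```
// kafka_server_jaas.conf — passed to the broker JVM via
// -Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret";
};
```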
Got some operation questions for MM2.
1. What is the best-practice way to start MM2 after a host reboot?
Add connect-mirror-maker.sh config/connect-mirror-maker.properties to a
systemd unit that runs after Kafka starts?
2. We will use MM2 to mirror the primary to a backup (secondary) cluster.
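For question 1, one possible systemd approach looks like this (a sketch; the unit name, install paths, and user are assumptions to adapt to your layout):

```ini
# /etc/systemd/system/kafka-mirror-maker2.service
[Unit]
Description=Kafka MirrorMaker 2
After=network.target kafka.service
Requires=kafka.service

[Service]
Type=simple
User=kafka
ExecStart=/opt/kafka/bin/connect-mirror-maker.sh /opt/kafka/config/connect-mirror-maker.properties
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enabling it with `systemctl enable kafka-mirror-maker2` then starts MM2 automatically after Kafka on each boot.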
Hi,
Your approach is correct. For minor version upgrade (0.11.0.0 to
0.11.0.3), we can just update the brokers to new version.
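In practice this is usually done as a rolling restart, one broker at a time, along these lines (a sketch assuming the standard Kafka scripts and an unchanged server.properties):

```shell
# On each broker in turn:
bin/kafka-server-stop.sh
# ...swap in the 0.11.0.3 binaries, keeping the existing config...
bin/kafka-server-start.sh -daemon config/server.properties
# Wait for the broker to rejoin and under-replicated partitions to clear
# before moving on to the next broker.
```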
Thanks,
On Mon, Jan 20, 2020 at 1:27 AM Sarath Babu
wrote:
> Hi all,
> Appreciate any help/pointers on how to upgrade. One thought is to start
> the cluster brokers