> the stock ticker apps out there, I have to imagine this is a
> really common use case.
>
> Anyone have any thoughts as to what we are best to do?
>
> Nick
>
--
Bruno Cadonna
Software Engineer at Confluent
via
> >> Topology#addProcessor() to your processor topology
> >> <https://docs.confluent.io/current/streams/concepts.html#streams-concepts-processor-topology>.
> >
> >
> > is it? Doesn't `transform` need a TransformSupplier while `addProcessor`
if I'm using
> the suppress
>
> Anyway, I'll look into your link and try to find out the cause of these
> issues, probably starting from scratch with a simpler example
>
> Thank you for your help!
>
> --
> Alessandro Tagliapietra
>
> On Mon, Apr 15, 2019 at 10:08 PM
Hi Alessandro,
Have you considered using `transform()` (actually in your case you should
use `transformValues()`) instead of `.process()`? `transform()` and
`transformValues()` are stateful operations similar to `.process()` but they
return a `KStream`. On a `KStream` you can then apply a windowed
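For illustration, a minimal sketch of that pattern; the topic name "sensor-values", the serdes, and the doubling logic are made up, not Alessandro's actual code:

    import java.time.Duration;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.Grouped;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.TimeWindows;
    import org.apache.kafka.streams.kstream.ValueTransformerWithKey;
    import org.apache.kafka.streams.processor.ProcessorContext;

    public class TransformValuesExample {
        static void build(StreamsBuilder builder) {
            KStream<String, Double> values = builder.stream(
                "sensor-values", Consumed.with(Serdes.String(), Serdes.Double()));

            // Per-record logic lives here; unlike process(),
            // transformValues() returns a KStream.
            KStream<String, Double> transformed = values.transformValues(
                () -> new ValueTransformerWithKey<String, Double, Double>() {
                    public void init(ProcessorContext context) {}
                    public Double transform(String key, Double value) {
                        return value * 2;  // placeholder transformation
                    }
                    public void close() {}
                });

            // Because a KStream is returned, a windowed aggregation can follow.
            transformed
                .groupByKey(Grouped.with(Serdes.String(), Serdes.Double()))
                .windowedBy(TimeWindows.of(Duration.ofMinutes(1)))
                .reduce(Double::sum);
        }
    }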
Hi,
If you want to know how Kafka is designed and implemented, please see
the documentation under
https://kafka.apache.org/documentation/
Especially sections "Getting Started", "Design", and "Implementation".
Best,
Bruno
On Mon, May 27, 2019 at 6:03 AM V1 wrote:
>
> Hi team Kafka,
> I'm
Hi Landon,
I tried your command on apache/kafka:trunk with HEAD at commit
ba3dc494371145e8ad35d6b85f45b8fe1e44c21f and it worked.
./gradlew -v
Gradle 5.1.1
Build time:
Hi Mohan,
Could you post the log messages that you see but think you should not see?
It is hard to help you without any actual logs.
Best,
Bruno
On Wed, Jun 5, 2019 at 6:52 AM Parthasarathy, Mohan wrote:
>
> Hi,
>
> As mentioned here
>
> https://issues.apache.org/jira/browse/KAFKA-7510
>
> I
bout what we are missing
>
>
> ________
> From: Bruno Cadonna
> Sent: Tuesday, June 4, 2019 11:53 PM
> To: users@kafka.apache.org
> Subject: Re: RecordCollectorImpl: task [1_1] Error sending records
>
> Hi Mohan,
>
> Could you post the log messages you see and you think y
Hi Mohan,
Did you set a grace period on the window?
Best,
Bruno
On Tue, Jun 18, 2019 at 2:04 AM Parthasarathy, Mohan wrote:
>
> On further debugging, what we are seeing is that windows are expiring rather
> randomly as new messages are being processed. We tested with a new key for
> every
d windows expiring?
>
> Thanks
> Mohan
>
> On 6/19/19, 12:41 AM, "Bruno Cadonna" wrote:
>
> Hi Mohan,
>
> Did you set a grace period on the window?
>
> Best,
> Bruno
>
> On Tue, Jun 18, 2019 at 2:04 AM Parthasarathy, M
Hi Mohan,
I realized that my previous statement was not clear. With a grace
period of 12 hours, suppress would wait for late events until stream
time has advanced 12 hours before a result would be emitted.
Best,
Bruno
On Wed, Jun 19, 2019 at 9:21 PM Bruno Cadonna wrote:
>
> Hi Mohan,
>
Hi,
What are duplicate messages in your use case?
1) different messages with the same content
2) the same message that is sent multiple times to the broker due to
retries in the producer
3) something else
What do you mean by "identify those duplicates"? What do you want to do
with them?
For
9 at 7:58 PM Alessandro Tagliapietra <
> tagliapietra.alessan...@gmail.com> wrote:
>
> > Hi Bruno,
> >
> > thank you for your help, glad to hear that those are only bugs and not a
> > problem on my implementation,
> > I'm currently using confluent docker images,
ion": 1}
> S1 with computed metric {"timestamp": 16, "production": 10}
> S1 with filtered metric {"timestamp": 162000, "production": 1}
>
> as you can see, window for timestamp 16 is duplicated
>
> Is this because the window state isn'
Hi Jorg,
transform(), transformValues(), and process() are not stateful if you do
not add any state store to them. You only need to leave the
variable-length argument empty.
Within those methods you can implement your desired filter operation.
Best,
Bruno
On Thu, Jul 4, 2019 at 11:51 AM Jorg
Hi Pawel,
It seems the exception comes from a producer. When a stream task tries
to resume after rebalancing, the producer of the task tries to
initialize the transactions and runs into the timeout. This could
happen if the broker is not reachable before the timeout elapses.
Could the big lag
be possible, for example for my LastValueStore to compact the
> > changelog and keep only the last value stored for each key? Because that's
> > all I would need for my use case
> >
> > Thank you very much for your help
> >
> > On Tue, Jul 9, 2019, 4:00 AM Bruno Cadon
Hi Alessandro,
I am not sure I understand your issue completely. If you start your
streams app in a new container without any existing local state, then
it is expected that the changelog topics are read from the beginning
to restore the local state stores. Am I misunderstanding you?
Best,
Bruno
valueSpecificAvroSerde.configure(serdeConfig, false);
> > >
> > > and then in aggregate()
> > >
> > > Materialized.with(Serdes.String(), valueSpecificAvroSerde)
> > >
> > > fixed the issue.
> > >
> > > Thanks in advance
Hi Gagan,
If you want to read a message, you need to poll the message from the
broker. Brokers have only a very limited notion of message content. They
only know that a message has a key, a value, and some metadata, but they
are not able to interpret the contents of those message components.
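A minimal consumer sketch illustrating this; the bootstrap address, topic, and group names are made up. The broker hands back raw bytes, and interpreting key and value happens client-side through the configured deserializers:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class PollExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "my-group");
            props.put("key.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic"));
                while (true) {
                    // The broker returns raw bytes; key and value are only
                    // interpreted here, by the configured deserializers.
                    ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("key=%s value=%s%n",
                                          record.key(), record.value());
                    }
                }
            }
        }
    }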
when aggregating the window but I think it's an easy
> problem.
> >
> > Thank you again
> > Best
> >
> > --
> > Alessandro Tagliapietra
> >
> > On Sun, Apr 14, 2019 at 11:26 AM Bruno Cadonna
> wrote:
> >
> >> Hi Alessandro,
> >
Hi Tim,
Kafka Streams guarantees at-least-once processing semantics by
default. That means a record is processed (e.g. added to an
aggregate) at least once but might be processed multiple times. The
cause for processing the same record multiple times is a crash, as you
described. Exactly-once
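The guarantee is a single Streams config; a sketch (exactly-once requires brokers on 0.11 or newer):

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    // Default: at-least-once; records may be reprocessed after a crash.
    props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG,
              StreamsConfig.AT_LEAST_ONCE);
    // Opt in to exactly-once processing instead:
    // props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG,
    //           StreamsConfig.EXACTLY_ONCE);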
e, it must roll-back commit(s) to the state store in a failure
> scenario? I haven't dug into the code to see how it works, but given the
> behavior I'm seeing it must..
>
> Tim - I actually saw your related question from last week right after I
> sent mine. :)
>
> Alex
>
Hi Alex,
what you describe about failing before offsets are committed is one
reason why records are processed multiple times under the
at-least-once processing guarantee. That is a reality of life, as you
stated. Kafka Streams in exactly-once mode guarantees that these
duplicate state updates do not
Hi Ghullam,
Apache Kafka is open source. See license under
https://github.com/apache/kafka/blob/trunk/LICENSE
Best,
Bruno
On Thu, Sep 5, 2019 at 10:19 PM Ghullam Mohiyudin
wrote:
>
> Hi ,
> I read the information about kafka. Now i want to create a degree final
> project using kafka. Can you
Hi Alessandro,
If you want to get each update to an aggregate, you need to disable
the cache. Otherwise, an update will only be emitted when the
aggregate is evicted or flushed from the cache.
To disable the cache, you can:
- disable it with the `Materialized` object
- set `cache.max.bytes.buffering` to 0
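A sketch of the `Materialized` variant; the store name, serdes, and count() aggregation are made up:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.common.utils.Bytes;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.state.KeyValueStore;

    // With the cache disabled, every single update to the aggregate is
    // forwarded downstream instead of only cache evictions and flushes.
    stream.groupByKey()
          .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("counts-store")
                             .withKeySerde(Serdes.String())
                             .withValueSerde(Serdes.Long())
                             .withCachingDisabled());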
Hi Muhammed,
RocksDB is not an in-memory store. If you use only
InMemoryKeyValueStore, you are not using any RocksDB.
Best,
Bruno
On Wed, Jul 17, 2019 at 3:26 PM Muhammed Ashik wrote:
>
> Hi I'm trying to log the rocksdb stats with the below code, but not
> observing any logs..
> I'm enabling
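To make the difference concrete, a sketch of both store types; the store names and serdes are made up:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.state.KeyValueStore;
    import org.apache.kafka.streams.state.StoreBuilder;
    import org.apache.kafka.streams.state.Stores;

    // Purely in-memory store: no RocksDB instance is created for it, so no
    // RocksDB statistics will ever be logged.
    StoreBuilder<KeyValueStore<String, Long>> inMemoryBuilder =
        Stores.keyValueStoreBuilder(
            Stores.inMemoryKeyValueStore("my-inmemory-store"),
            Serdes.String(),
            Serdes.Long());

    // A persistent store backed by RocksDB, for comparison.
    StoreBuilder<KeyValueStore<String, Long>> rocksDbBuilder =
        Stores.keyValueStoreBuilder(
            Stores.persistentKeyValueStore("my-rocksdb-store"),
            Serdes.String(),
            Serdes.Long());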
the topology?
>
> thx,
> Chris
>
> On Tue, Oct 29, 2019 at 2:08 PM Bruno Cadonna wrote:
>
> > Hi Chris,
> >
> > What version of Streams are you referring to?
> >
> > On the current trunk the group.id property is removed from the config
> > for th
Thank you for reaching out and filing the ticket.
Best,
Bruno
On Fri, Nov 1, 2019 at 3:19 AM Chris Toomey wrote:
>
> Thanks Bruno, filed https://issues.apache.org/jira/browse/KAFKA-9127 .
>
> On Wed, Oct 30, 2019 at 2:06 AM Bruno Cadonna wrote:
>
> > Hi Chr
Hi,
ClientMetricsTest.shouldAddCommitIdMetric should only fail if executed
from an IDE. The test fails because the test expects a file on the
class path which is not there when the test is executed from the IDE,
but is there when the test is executed from gradle. I will try to fix
the test so
Hi Thilo,
You can influence the rate of updates of aggregations by configuring
the size of the record caches with `cache.max.bytes.buffering`.
Details can be found here:
https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#aggregating
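For example (the 10 MiB value is only an illustration):

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    // A smaller cache means more frequent downstream updates;
    // 0 disables caching entirely.
    props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 10 * 1024 * 1024L);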
Hi Chintan,
You cannot specify time windows based on a calendar object like months.
In the following, I suppose the keys of your records are user IDs. You
could extract the months from the timestamps of the events and add
them to the key of your records. Then you can group the records by key
and
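A sketch of that approach, assuming user-ID keys and epoch-millisecond timestamps as values; the topic name and the "userId@month" key format are made up:

    import java.time.Instant;
    import java.time.YearMonth;
    import java.time.ZoneOffset;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.Grouped;
    import org.apache.kafka.streams.kstream.KStream;

    StreamsBuilder builder = new StreamsBuilder();
    KStream<String, Long> events = builder.stream(
        "events", Consumed.with(Serdes.String(), Serdes.Long()));

    events
        // Put the month of the event into the key next to the user ID ...
        .selectKey((userId, timestampMs) -> userId + "@"
            + YearMonth.from(Instant.ofEpochMilli(timestampMs).atZone(ZoneOffset.UTC)))
        // ... then group by the combined key; this triggers a repartition.
        .groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
        .count();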
Hi Sachin,
I do not completely understand what you mean by one single
operation. Do you mean one call of a method in the DSL, or that the
join is processed on one processor node?
If you mean the latter, the joins in the DSL are also not processed on
one single processor node.
If you mean the
Hi Chris,
What version of Streams are you referring to?
On the current trunk the group.id property is removed from the config
for the global consumer that populates the GlobalKTable.
See the following code line
Hi Miguel,
I built Kafka with Gradle 5.2.1 and at the end of the build I got the
following message:
"Deprecated Gradle features were used in this build, making it
incompatible with Gradle 6.0."
So, maybe you ran into one of those incompatibilities.
Try to compile with a 5.x version of Gradle.
Hello Guozhang and Adam,
Regarding Guozhang's proposal please see recent discussions about
`transformValues()` and returning `null` from the transformer:
unnecessarily cause data re-partitioning. Won't this be
> inefficient?
>
> Thanks
> Sachin
>
>
>
> On Tue, Feb 25, 2020 at 10:52 PM Bruno Cadonna wrote:
>
> > Hello Guozhang and Adam,
> >
> > Regarding Guozhang's proposal please see recent discu
o be
> forwarded downstream*/).filter((k, v) -> v != null)
>
> Thanks
> Sachin
>
>
> On Tue, Feb 25, 2020 at 11:48 PM Bruno Cadonna wrote:
>
> > Hi Sachin,
> >
> > I am afraid I cannot follow your point.
> >
> > You can sti
here that was only surfaced
> > > > > through this warning. That said, maybe the metric is the more
> > > appropriate
> > > > > way to bring attention to this: not sure if it's info or debug level
> > > > > though, or
> > > > > how
Hi Magnus,
with exactly-once, the producer commits the consumer offsets. Thus, if
the producer is not able to successfully commit a transaction, no
consumer offsets will be successfully committed either.
Best,
Bruno
On Wed, Feb 26, 2020 at 1:51 PM Reftel, Magnus
wrote:
>
> Hi,
>
> From my
Hi Francis,
You need to sign-up to the Apache wiki at
https://cwiki.apache.org/confluence/signup.action
Best,
Bruno
On Tue, Feb 11, 2020 at 1:05 PM 萨尔卡 <1026203...@qq.com> wrote:
>
> I don't have an Apache ID. How can I apply for one to create a KIP?
>
>
>
> Have a nice day, Francis Lee
>
>
> QQ :
Hi,
I am pretty sure this was intentional. All skipped records log
messages are on WARN level.
If a lot of your records are skipped on app restart with this log
message on WARN-level, they were also skipped with the log message on
DEBUG-level. You simply did not know about it before. With an
Hi Michelle,
Are you sure you do not pass a null instead of your custom store to
your topology by mistake?
How does the implementation of the `build()` method of your
`MyCustomStoreBuilder` look like?
Best,
Bruno
On Mon, Dec 30, 2019 at 12:06 AM Michelle Francois wrote:
>
> Hello,
> I want to
Thank you, Nicolas!
Bruno
On Thu, Apr 16, 2020 at 2:24 PM Nicolas Carlot
wrote:
>
> I've opened a Jira issue on the subject
> https://issues.apache.org/jira/browse/KAFKA-9880
>
>
> Le jeu. 16 avr. 2020 à 13:14, Bruno Cadonna a écrit :
>
> > Hi Nicolas,
> >
Hi Nicolas,
Thank you for reporting this issue.
As far as I understand, the issue is that bulk loading as done in Kafka
Streams does work as expected if FIFO compaction is used.
I would propose that you open a bug ticket. Please make sure to include
steps to reproduce the issue in the ticket.
used."
>
> You meant "doesn't", right?
>
> Ok, I will open a ticket, but I don't think my "fix" is the correct one.
> Just ignoring the issue doesn't seem to be a correct solution :)
>
> Le jeu. 16 avr. 2020 à 11:49, Bruno Cadonna a écrit :
>
Hi Rapeepat,
1. The parallelism of Kafka Streams does not depend only on the number
of partitions of the input topic. It also depends on the structure of
your topology. Your example topology topicA => transform1 => topicB
=> transform2 => topicC would be subdivided into two sub-topologies:
- sub-topology 1: topicA => transform1 => topicB
- sub-topology 2: topicB => transform2 => topicC
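A sketch of that shape; the mapValues() calls stand in for the arbitrary transform1/transform2 logic:

    import org.apache.kafka.streams.StreamsBuilder;

    StreamsBuilder builder = new StreamsBuilder();

    // Sub-topology 1: topicA => transform1 => topicB
    builder.<String, String>stream("topicA")
           .mapValues(v -> v.toUpperCase())   // stand-in for transform1
           .to("topicB");

    // Sub-topology 2: topicB => transform2 => topicC
    builder.<String, String>stream("topicB")
           .mapValues(v -> v.trim())          // stand-in for transform2
           .to("topicC");

    // Each sub-topology is parallelized independently, so the total number
    // of tasks is partitions(topicA) + partitions(topicB).
    System.out.println(builder.build().describe());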
Hi Georg,
From your description, I do not see why you need to use a global state
instead of a local one. Are there any specific reasons for that? With
a local state store you would have the previous record immediately
available.
Best,
Bruno
On Tue, May 19, 2020 at 10:23 AM Schmidt-Dumont Georg
>
> Georg Schmidt-Dumont
> BCI/ESW17
> Bosch Connected Industry
>
> Tel. +49 711 811-49893
>
> ► Take a look: https://bgn.bosch.com/alias/bci
>
>
>
> -----Original Message-----
> From: Bruno Cadonna
> Sent: Tuesday, 19 May 2020 10:52
> To: Users
> Tel. +49 711 811-49893
>
> ► Take a look: https://bgn.bosch.com/alias/bci
>
>
>
> -----Original Message-----
> From: Bruno Cadonna
> Sent: Tuesday, 19 May 2020 11:42
> To: Users
> Subject: Re: Question regarding Kafka Streams Global State Store
>
> Hi Georg,
>
> > Thanks,
> > One more thing, as I told you I was consuming the repartitioning topic
> > created by group by
> > and I just saw the old and new value, as you are telling me now they are
> > indeed marked as old and new,
> > is this mark visible somehow consuming the
anism should be a
> bit more transparent, but it also may be that I'm plain wrong here :)
>
> Thanks !
>
> On Thu, May 14, 2020 at 9:24 PM Bruno Cadonna wrote:
>
> > Hi Raffaele,
> >
> > Change is an internal class in Streams and also its SerDes are
> > internal
Hi Raffaele,
In your example, Kafka Streams would send the new and the old value
downstream. More specifically, the groupBy() would send (as you also
observed)
London, (old value: London, new value: null)
Berlin, (old value: null, new value: Berlin)
At the count() record London, (old value:
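A sketch of the kind of topology where this happens; the topic and variable names are made up. It is a table of user => city, re-grouped by city and counted, so updating a user's city forwards a subtraction for the old city and an addition for the new one:

    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KTable;

    StreamsBuilder builder = new StreamsBuilder();
    KTable<String, String> userCities = builder.table("user-cities");

    KTable<String, Long> usersPerCity = userCities
        // Re-keying a KTable sends (old value, new value) pairs downstream
        // so that the count can be decremented and incremented correctly.
        .groupBy((user, city) -> KeyValue.pair(city, user))
        .count();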
Hi Deepak,
Do you return DeserializationHandlerResponse.CONTINUE or
DeserializationHandlerResponse.FAIL in your CustomExceptionHandler?
With DeserializationHandlerResponse.CONTINUE, the processing of records
should not stop and after the next offset commit the bad records should
not be read
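A sketch of such a handler; the class name is made up and this is not Deepak's actual code:

    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.streams.errors.DeserializationExceptionHandler;
    import org.apache.kafka.streams.processor.ProcessorContext;

    public class SkipBadRecordsHandler implements DeserializationExceptionHandler {
        @Override
        public DeserializationHandlerResponse handle(
                ProcessorContext context,
                ConsumerRecord<byte[], byte[]> record,
                Exception exception) {
            // CONTINUE skips the bad record; FAIL stops processing, and the
            // record would be re-read after a restart.
            return DeserializationHandlerResponse.CONTINUE;
        }

        @Override
        public void configure(Map<String, ?> configs) {}
    }

The handler is registered via StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG.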
Hi Pirow,
hard to have an idea without seeing the code that is executed in the
processors.
Could you please post a minimal example that reproduces the issue?
Best,
Bruno
On 20.08.20 14:53, Pirow Engelbrecht wrote:
Hello,
I’ve got Kafka Streams up and running with the following
streams internally?
On Mon, Sep 21, 2020 at 9:01 PM Bruno Cadonna wrote:
Hi Pushkar,
If you want to keep the order, you could still use the state store I
suggested in my previous e-mail and implement a queue on top of it. For
that you need to put the events into the store with a key
the other
application starts up and required data becomes available in globalKtable
On Mon, Sep 21, 2020 at 5:42 PM Bruno Cadonna wrote:
Thank you for clarifying! Now, I think I understand.
You could put events for which required data in the global table is not
available into a state store
Hi Pushkar,
This question is better suited for
https://groups.google.com/g/confluent-platform since the Schema Registry
is part of the Confluent Platform but not of Apache Kafka.
Best,
Bruno
On 21.09.20 16:58, Pushkar Deole wrote:
Hi All,
Wanted to understand a bit more on the schema
On Tue, 22 Sep 2020 at 08:12, Bruno Cadonna wrote:
Hi Pushkar,
I think there is a misunderstanding. If a consumer polls from a
partition, it will always poll the next event independently whether the
offset was committed or not. Committed offsets are used for fault
tolerance, i.e., when
Hi Pushkar,
Is the error you are talking about one that is thrown by Kafka Streams
or by your application? If it is thrown by Kafka Streams, could you
please post the error?
I do not completely understand what you are trying to achieve, but maybe
max.task.idle.ms [1] is the configuration
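For reference, a sketch of setting it; the 5-second value is arbitrary:

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    // Wait up to 5 s for data on all input partitions of a task before
    // processing, to improve timestamp synchronization across topics.
    props.put(StreamsConfig.MAX_TASK_IDLE_MS_CONFIG, 5000L);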
level error e.g.
say, some data required for processing an input event is not available in
the GlobalKTable since it is not yet synced with the global topic
On Mon, Sep 21, 2020 at 4:54 PM Bruno Cadonna wrote:
Hi Pushkar,
Is the error you are talking about one that is thrown by Kafka Streams
Hi Charles,
Two transformers that share the same state store should end up into the
same sub-topology. A sub-topology is executed by as many tasks as the
number of partitions of the input topics. Each task processes the
records from one input partition group (i.e. the same partition from
both
Hi Sathya,
MyProcessor does not have access to MySource, because in MySource you
just build the topology that is then executed by Kafka Streams. So you
cannot send anything to MySource, because MyProcessor does not know
anything about MySource.
If you want to stop consumption upon an
Hi Will,
This looks like a bug to me.
Could you please open a Jira with the stacktrace of the exception and a
minimal repro example?
Best,
Bruno
On 08.06.21 16:51, Will Bartlett wrote:
Hi all,
I'm hitting an NPE in a very basic repro. It happens when toString() is
called on the
correct?
Thanks
On Mon, Apr 19, 2021 at 1:57 AM Bruno Cadonna wrote:
Hi Upesh,
The answers to your questions are:
1.
The configs cleanup.policy and retention.ms are topic configs. Hence,
they only affect the changelog of a state store, not the local state
store in a Kafka Streams client
Hi,
I added you to the list of contributors in the Apache Kafka JIRA
project. You can now assign tickets to yourself.
Welcome to Apache Kafka!
Best,
Bruno
On 05.06.21 15:44, 和映泉 wrote:
Please add user heyingquan to the list of contributors.
Hi Navneeth,
I need some clarifications to be able to help you.
First of all it would be useful to know if your topology is stateful,
i.e., if it has to maintain state. Since you have two topics with 72
partitions but only 72 tasks (or partition groups to assign) that need
to be distributed
Hi Dhirendra,
You could use the kafka-configs.sh script or, in Java, the admin client
(see
https://kafka.apache.org/28/javadoc/org/apache/kafka/clients/admin/Admin.html)
Best,
Bruno
On 01.07.21 13:45, Dhirendra Singh wrote:
Hi All,
I want to get the value of a config from broker. I do not
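A sketch of the admin-client variant; the bootstrap address, broker id "0", and the config name are just examples:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.common.config.ConfigResource;

    public class DescribeBrokerConfig {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");

            try (Admin admin = Admin.create(props)) {
                // Broker configs are keyed by broker id.
                ConfigResource broker =
                    new ConfigResource(ConfigResource.Type.BROKER, "0");
                Config config = admin.describeConfigs(Collections.singleton(broker))
                                     .all().get().get(broker);
                System.out.println(config.get("log.retention.hours").value());
            }
        }
    }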
Additionally, with KIP-698
(https://cwiki.apache.org/confluence/x/7CnZCQ), we will add
verifications of changelogs to find misconfigurations and report them
to the users.
Best,
Bruno
On 24.04.21 22:58, Matthias J. Sax wrote:
The topics used by Kafka Streams to backup state stores, are
Hi Marcus,
1. If you change REPLICATION_FACTOR_CONFIG without resetting the
application (or deleting the changelog and repartition topics) and
redeploy the Streams application, the replication factor of the internal
topics will not change. The replication factor will only change for new
Hi Alex,
You are right! There is no "exactly once magic" baked into the RocksDB
store code. The point is local vs remote. When a Kafka Streams client
closes dirty under EOS, the state (i.e., the content of the state store)
needs to be wiped out and to be re-created from scratch from the
Murilo
On Mon, 15 Mar 2021 at 06:21, Bruno Cadonna
wrote:
Hi Murilo,
No, you do not need any special procedure to upgrade from 2.4 to 2.7.
What you see in the logs is not an error but a warning. It should not
block you on startup forever. The warning says that the local states of
task 7_17
Hi Murilo,
No, you do not need any special procedure to upgrade from 2.4 to 2.7.
What you see in the logs is not an error but a warning. It should not
block you on startup forever. The warning says that the local states of
task 7_17 are corrupted because the offset you want to fetch of the
Hi Mickael,
Please have a look at the following bug report:
https://issues.apache.org/jira/browse/KAFKA-12508
I set its priority to blocker since the bug might break at-least-once
and exactly-once processing guarantees.
Feel free to set it back to major, if you think that it is not a
Hi Sophie,
Please have a look at the following bug report:
https://issues.apache.org/jira/browse/KAFKA-12508
I set its priority to blocker since the bug might break at-least-once
and exactly-once processing guarantees.
Feel free to set it back to major, if you think that it is not a
Hi Mickael,
Correction to my last e-mail: The bug does not break eos, but it breaks
at-least-once.
Bruno
On 19.03.21 14:54, Bruno Cadonna wrote:
Hi Mickael,
Please have a look at the following bug report:
https://issues.apache.org/jira/browse/KAFKA-12508
I set its priority to blocker
Hi Sophie,
Correction to my last e-mail: The bug does not break eos, but it breaks
at-least-once.
Bruno
On 19.03.21 14:54, Bruno Cadonna wrote:
Hi Sophie,
Please have a look at the following bug report:
https://issues.apache.org/jira/browse/KAFKA-12508
I set its priority to blocker since
Hi Alex,
I guess wiping out the state directory is easier code-wise, faster,
and/or at the time of development the developers did not design for
remote state stores. But I actually do not know the exact reason.
Off the top of my head, I do not know how to solve this for remote state
stores.
2021 at 10:20, Bruno Cadonna
wrote:
Hi Murilo,
A couple of questions:
1. What do you mean exactly with clean up?
2. Do you have a cleanup policy specified on the changelog topics?
Best,
Bruno
On 15.03.21 15:03, Murilo Tavares wrote:
Hi Bruno
No, I haven't tested resetting the application
up and upgrade to 2.7. No error this time.
Thanks
Murilo
On Mon, 15 Mar 2021 at 09:53, Bruno Cadonna
wrote:
Hi Murilo,
Did you retry to upgrade again after you reset the application? Did it
work?
Best,
Bruno
On 15.03.21 14:26, Murilo Tavares wrote:
Hi Bruno
Thanks for your response.
No, I
Congrats, Tom!
Best,
Bruno
On 15.03.21 18:59, Mickael Maison wrote:
Hi all,
The PMC for Apache Kafka has invited Tom Bentley as a committer, and
we are excited to announce that he accepted!
Tom first contributed to Apache Kafka in June 2017 and has been
actively contributing since February
Congrats Bill! Well deserved!
Best,
Bruno
On 12.04.21 11:19, Satish Duggana wrote:
Congratulations Bill!!
On Thu, 8 Apr 2021 at 13:24, Tom Bentley wrote:
Congratulations Bill!
On Thu, Apr 8, 2021 at 2:36 AM Luke Chen wrote:
Congratulations Bill!
Luke
On Thu, Apr 8, 2021 at 9:17 AM
Thank you all for the kind words!
Best,
Bruno
On 08.04.21 00:34, Guozhang Wang wrote:
Hello all,
I'm happy to announce that Bruno Cadonna has accepted his invitation to
become an Apache Kafka committer.
Bruno has been contributing to Kafka since Jan. 2019 and has made 99
commits and more
Hi Upesh,
The answers to your questions are:
1.
The configs cleanup.policy and retention.ms are topic configs. Hence,
they only affect the changelog of a state store, not the local state
store in a Kafka Streams client.
Locally, window and session stores remove data they do not need
Congrats Randall! Well deserved!
Bruno
On 17.04.21 01:43, Matthias J. Sax wrote:
Hi,
It's my pleasure to announce that Randall Hauch in now a member of the
Kafka PMC.
Randall has been a Kafka committer since Feb 2019. He has remained
active in the community since becoming a committer.
Hi Chris,
your estimation looks correct to me.
I do not know how big M might be. Maybe the following link can help you
with the estimation:
https://github.com/facebook/rocksdb/wiki/Rocksdb-BlockBasedTable-Format
There are also some additional files that RocksDB keeps in its
directory. I
Hi Bruno, thank you for your answer.
I mean that the message that caused the exception was consumed and the replacement
thread will continue from the next message. How then does it handle
uncaught exceptions, if it will fail again?
On Tue, Aug 10, 2021 at 12:33 PM Bruno Cadonna wrote:
Hi Yoda,
What
Hi Günter,
What is the timestamp of the records?
The difference between the system time on the brokers and the record
timestamp is used to decide whether a log segment should be removed
because its retention time is exceeded. So if the retention time of the
topic is set to 1.5 days, the
Hi Yoda,
What do you mean exactly by "skipping that failed message"?
Do you mean a record consumed from a topic that caused an exception that
killed the stream thread?
If the record killed the stream thread due to an exception, for example,
a deserialization exception, it will probably
Hi Bruce,
I do not know the specific root cause of your errors but what I found is
that Spring 2.7.x is compatible with clients 2.7.0 and 2.8.0, not with
3.0.0 and 2.8.1:
https://spring.io/projects/spring-kafka
Best,
Bruno
On 24.09.21 00:25, Chang Liu wrote:
Hi Kafka users,
I start
Hi Murilo,
Have you checked out the following blog post on tuning performance of
RocksDB state stores [1] especially the section on high disk I/O and
write stalls [2]?
Do you manage the off-heap memory used by RocksDB as described in the
Streams docs [3]?
I do not know what may have
, Netflix, Pinterest, Rabobank,
Target, The New York Times, Uber, Yelp, and Zalando, among others.
A big thank you for the following 26 contributors to this release!
A. Sophie Blee-Goldman, Andras Katona, Bruno Cadonna, Chris Egerton,
Cong Ding, David Jacot, dengziming, Edoardo Comar, Ismael Juma
Hi Richard,
The group.instance.id config is orthogonal to the partition assignment
strategy. The group.instance.id is used if you want to have static
membership which is not related to the partition assignment strategy.
If you think you found a bug, could you please open a JIRA ticket with
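To make the distinction concrete, a sketch; the instance id is made up, and the two settings are configured independently of each other:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;

    Properties props = new Properties();
    // Static membership: a fixed instance id lets a restarted consumer
    // rejoin without a rebalance (within session.timeout.ms).
    props.put(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG, "consumer-instance-1");
    // The assignment strategy is chosen separately and is unaffected
    // by the setting above.
    props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
              "org.apache.kafka.clients.consumer.CooperativeStickyAssignor");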
Hi Sandip,
I just merged the PR https://github.com/apache/kafka/pull/11743 that
replaces log4j with reload4j. Reload4j will be part of Apache Kafka
3.2.0 and 3.1.1.
Best,
Bruno
On 30.03.22 04:26, Luke Chen wrote:
Hi Sandip,
We plan to replace log4j with reload4j in v3.2.0 and v3.1.1.
Hi Robin,
since this seems to be a ksql question, you will more likely get an
answer here:
https://forum.confluent.io/c/ksqldb
Best,
Bruno
On 01.02.22 10:03, Robin Helgelin wrote:
Hi,
Working on a small MVP and keep running into a dead end when it comes to
reducing data.
Began using
Hello Kafka users, developers and client-developers,
This is the first candidate for release of Apache Kafka 3.2.0.
* log4j 1.x is replaced with reload4j (KAFKA-9366)
* StandardAuthorizer for KRaft (KIP-801)
* Send a hint to the partition leader to recover the partition (KIP-704)
* Top-level
Hi Igor,
Sorry to hear you have issues with querying standbys!
I have two questions to clarify the situation:
1. Did you enable querying stale stores with
StoreQueryParameters.fromNameAndType(TABLE_NAME,
queryableStoreType).enableStaleStores()
as described in the blog post?
2. Since you
and now it works well! Thanks a lot for your help!
On 9/6/23 16:05, Bruno Cadonna wrote:
Hi Igor,
Sorry to hear you have issues with querying standbys!
I have two questions to clarify the situation:
1. Did you enable querying stale stores with
StoreQueryParameters.fromNameAndType
Hi Mariusz,
How is fooKey de-/serialized?
I ask that because maybe the serializer for fooKey cannot handle the
extended enum.
Best,
Bruno
On 9/20/23 12:22 AM, M M wrote:
Hello,
This is my first time asking a question on a mailing list, so please
forgive me any inaccuracies.
I am having a
, Bruno Cadonna, Calvin Liu, Chaitanya
Mukka, Chase Thomas, Cheryl Simmons, Chia-Ping Tsai, Chris Egerton,
Christo Lolov, Clay Johnson, Colin P. McCabe, Colt McNealy, d00791190,
Damon Xie, Danica Fine, Daniel Scanteianu, Daniel Urban, David Arthur,
David Jacot, David Mao, dengziming, Deqi Hu, Dimitar