Hi there,
I would like to automate some of my tasks using Apache Kafka. Previously I
did the same with Apache Airflow, which worked fine. But I want to explore
the same with Kafka to see whether it works better than Airflow or not.
1) Kafka runs on Server A
2) Kafka searches for a file na
Hi Christian,
> For multiple partitions is it the correct behaviour to simply assign to
partition number:offset or do I have to provide offsets for the other
partitions too?
I'm not sure I get your question here. If you are asking if you should
commit offsets of other partitions that this consume
Hi Luke,
thanks for the hints. This helps a lot already.
We already use assign as we manage offsets on the consumer side. Currently
we only have one partition and simply assign a stored offset on partition 0.
For multiple partitions is it the correct behaviour to simply assign to
partition number
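For what it's worth, a minimal sketch of that pattern with the Java consumer (topic name and the offset lookup are illustrative, not from the thread):

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class AssignWithStoredOffsets {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Assign every partition you want to read in a single call...
            TopicPartition p0 = new TopicPartition("my-topic", 0);
            TopicPartition p1 = new TopicPartition("my-topic", 1);
            consumer.assign(Arrays.asList(p0, p1));
            // ...then seek each partition to its own stored offset; a partition
            // that is assigned but never sought falls back to auto.offset.reset.
            consumer.seek(p0, loadStoredOffset(0));
            consumer.seek(p1, loadStoredOffset(1));
            consumer.poll(Duration.ofMillis(500));
        }
    }

    // Hypothetical lookup into the externally managed offset store.
    static long loadStoredOffset(int partition) { return 0L; }
}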
Hi Christian,
Answering your question below:
> Let's assume we just have one topic with 10 partitions for simplicity.
We can now use the environment id as a key for the messages to make sure
the messages of each environment arrive in order while sharing the load on
the partitions.
> Now we want e
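As an illustration of the keying idea (topic name and environment id are made up):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EnvKeyedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The default partitioner hashes the key, so every message of one
            // environment lands on the same partition (and stays ordered) while
            // different environments spread across the 10 partitions.
            producer.send(new ProducerRecord<>("env-events", "env-42", "some payload"));
        }
    }
}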
We have a single-tenant application that we deploy to a Kubernetes cluster
in many instances.
Every customer has several environments of the application. Each
application lives in a separate namespace and should be isolated from other
applications.
We plan to use Kafka to communicate inside an env
just my 2 cents
the best answer always comes from real-world practice :)
RocksDB (https://rocksdb.org/) is the implementation of the "state store" in
Kafka Streams, and it is an "embedded" KV store (which is different from a
distributed KV store). The "state store" in Kafka Streams is also backed up by "
Hello Gareth,
A common practice for rolling up aggregations with Kafka Streams is to do
the finest granularity at the processor (5 days in your case), and to do the
coarse-grained roll-up upon query serving through the interactive query
API -- i.e. whenever a query is issued for a 30-day aggregate you do
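A sketch of the finest-granularity step (names illustrative; ofSizeWithNoGrace assumes a Kafka Streams 3.x API, older releases used TimeWindows.of):

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.TimeWindows;

public class FiveDayRollup {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("metrics", Consumed.with(Serdes.String(), Serdes.Long()))
               .groupByKey()
               .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofDays(5)))
               .reduce(Long::sum, Materialized.as("metrics-5d"));
        // A 30-day answer is then assembled at query time by fetching and
        // summing the six 5-day windows from "metrics-5d" via interactive queries.
    }
}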
Hi,
We have a requirement to calculate metrics on a huge number of keys (could
be hundreds of millions, perhaps billions of keys - attempting caching on
individual keys in many cases will have almost a 0% cache hit rate). Is
Kafka Streams with RocksDB and compacted topics the right tool for a tas
> ...and read the local store of App1 (IQ) and send requests
> to App2 (through kafka topic, not shown above). Conceptually it looks same
> as your use case. What do people do if a kafka streams application (App1)
> has to offer REST interface also ?
>
> -thanks
> Mohan
>
> On 9/30/20, 5:01 PM, "Guozhang Wang" wrote:
[ASCII architecture sketch, garbled in the archive; it appears to show App1 --> App2 plus a REST path to App3]
REST API to App3 and read the local store of App1 (IQ) and send requests to
App2 (through kafka topic, not shown above). Conceptually it looks same as
your use case. What do
Hello Mohan,
If I understand correctly, your async event trigger process runs outside of
the streams application, and reads the state stores of app2 through the
interactive query interface, right? This is actually a pretty common use
case pattern for IQ :)
Guozhang
On Wed, Sep 30, 2020 at 1:22 PM
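For reference, a minimal lookup as it might appear in App1's REST handler (store name and types illustrative; fromNameAndType assumes Streams 2.5+):

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class IqLookup {
    // Called from the REST endpoint: reads the app's local state store directly.
    static Long lookup(KafkaStreams streams, String key) {
        ReadOnlyKeyValueStore<String, Long> store = streams.store(
                StoreQueryParameters.fromNameAndType("counts-store",
                        QueryableStoreTypes.<String, Long>keyValueStore()));
        // Returns null when this instance doesn't host the key's partition;
        // streams.queryMetadataForKey(...) tells you which instance to call instead.
        return store.get(key);
    }
}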
Hi,
A traditional Kafka Streams application (App1) reads data from a Kafka
topic, doing aggregations that result in some local state. The output of this
application is consumed by a different application (App2) for doing a different
task. Under some conditions, there is an external trigger (asy
> ...before I get to that point I'm
> trying to figure out if/how I can implement this use case specifically in
> Kafka.
>
> The point that I'm stuck on is needing to query for specific messages
> within a topic when the app receives a request. To simplify the example,
> consider
Martin,
Thank you very much for your reply. I appreciate the perspective on securing
communications with Kafka, but before I get to that point I'm trying to figure
out if/how I can implement this use case specifically in Kafka.
The point that I'm stuck on is needing to query for specific messages
within a topic when the app receives a request.
MG>below
From: Simon Calvin
Sent: Friday, June 7, 2019 3:39 PM
To: users@kafka.apache.org
Subject: First time building a streaming app and I need help understanding how
to build out my use case
Hello, everyone. I feel like I have a use case that it is w
Hello, everyone. I feel like I have a use case that is well suited to the
Kafka streaming paradigm, but I'm having a difficult time understanding how
certain aspects will work as I'm prototyping.
So here's my use case: Service 1 assigns a job to a user which is published
I would like to ask you for some guidelines, web pages or comments regarding
my use case.
*Requirements*:
- 2000+ producers
- input rate 600k messages/s
- consumers must write in 3 different databases (so I assume 3 consumer
groups) at 600k messages/s overall (200k messages/s/database)
- latency < 500m
Latency sounds high to me, maybe your JVMs are GC'ing a lot?
Ryanne
On Tue, Jan 8, 2019, 11:45 AM Gioacchino Vino wrote:
> Hi expert,
>
>
> I would like to ask you for some guidelines, web pages or comments
> regarding my use case.
>
>
> *Requirements*:
>
> - 2000+ producers
Hi expert,
I would like to ask you for some guidelines, web pages or comments regarding
my use case.
*Requirements*:
- 2000+ producers
- input rate 600k messages/s
- consumers must write in 3 different databases (so I assume 3 consumer
groups) at 600k messages/s overall (200k messages/s/database
I have 2 systems:
1. System I - A web-based interface based on Oracle DB, with no REST API
support
2. System II - Supports REST APIs and also has a web-based interface
When a record is created or updated in either of the systems, I want to
propagate the data to the other system. Ca
created.
On Wed, Jul 12, 2017 at 5:06 PM, Stephen Powis
wrote:
> Hey! I was hoping I could get some input from people more experienced with
> Kafka Streams to determine if they'd be a good use case/solution for me.
>
> I have multi-tenant clients submitting data to a Kafka to
> ...some input from people more experienced with
> Kafka Streams to determine if they'd be a good use case/solution for me.
>
> I have multi-tenant clients submitting data to a Kafka topic that they want
> ETL'd to a third party service. I'd like to batch and group these
Hey! I was hoping I could get some input from people more experienced with
Kafka Streams to determine if they'd be a good use case/solution for me.
I have multi-tenant clients submitting data to a Kafka topic that they want
ETL'd to a third party service. I'd like to batch and
> ...check if there are any reservations where remaining time before
> pickup is less than x).
>
> Number of reservations we currently fetching is ~5000 and number of
> notification/alerting rules is ~20
>
>
> Based on documentation and some blog posts I have impression that Kafka a
...they are a good choice for this use case, but I would like to
confirm that with someone from the Kafka team or to get some recommendations ...
Thanks,
Vladimir
We are using the latest Kafka and Logstash versions for ingesting several
business apps' logs (now few, but eventually 100+) into ELK. We have a
standardized logging structure for business apps to log data into Kafka
topics and are able to ingest into ELK via the Kafka topics input plugin.
Currently, we are usin
wrote:
> Hello,
>
> I apologize to Matthias since I posted this issue yesterday in the wrong
> place on github :(
>
> I'm trying a simple use case of session windowing. TimeWindows works
> perfectly; however, when I replace it with SessionWindows, this exception is
> thr
Hello,
I apologize to Matthias since I posted this issue yesterday in the wrong
place on github :(
I'm trying a simple use case of session windowing. TimeWindows works
perfectly; however, when I replace it with SessionWindows, this exception is
thrown:
Exception in thread "Stre
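For comparison, a minimal session-window aggregation on a recent release (names illustrative; ofInactivityGapWithNoGrace is the 3.x API, older releases used SessionWindows.with):

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.SessionWindows;

public class SessionCount {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("clicks", Consumed.with(Serdes.String(), Serdes.String()))
               .groupByKey()
               .windowedBy(SessionWindows.ofInactivityGapWithNoGrace(Duration.ofMinutes(5)))
               // Materialized in a SessionStore, not the WindowStore TimeWindows uses.
               .count(Materialized.as("sessions"));
    }
}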
I still think a clean cluster start should not take > 1 hr for balancing
though. Is this expected, or am I doing something different?
I thought this would be a common use case.
Praveen
On Fri, Feb 10, 2017 at 10:26 AM, Onur Karaman <
okara...@linkedin.com.invalid> wrote:
> Prad
Pradeep is right.
close() will try and send out a LeaveGroupRequest while a kill -9 will not.
On Fri, Feb 10, 2017 at 10:19 AM, Pradeep Gollakota
wrote:
> I believe if you're calling the .close() method on shutdown, then the
> LeaveGroupRequest will be made. If you're doing a kill -9, I'm not s
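A common shutdown sketch along these lines (names illustrative): calling wakeup() from a JVM shutdown hook lets close() run, so the LeaveGroupRequest is sent; kill -9 skips the hook entirely.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class GracefulShutdownConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic"));
        // Runs on SIGTERM and normal exit, but not on kill -9.
        Runtime.getRuntime().addShutdownHook(new Thread(consumer::wakeup));
        try {
            while (true) {
                consumer.poll(Duration.ofMillis(500));
            }
        } catch (WakeupException e) {
            // Expected on shutdown.
        } finally {
            consumer.close(); // sends the LeaveGroupRequest
        }
    }
}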
I believe if you're calling the .close() method on shutdown, then the
LeaveGroupRequest will be made. If you're doing a kill -9, I'm not sure if
that request will be made.
On Fri, Feb 10, 2017 at 8:47 AM, Praveen wrote:
> @Pradeep - I just read your thread, the 1hr pause was when all the
> consu
@Pradeep - I just read your thread; the 1hr pause was when all the
consumers were shut down simultaneously. I'm testing out rolling restarts
to get the actual numbers. The initial numbers are promising.
`STOP (1) (1min later kicks off) -> REBALANCE -> START (1) -> REBALANCE
(takes 1min to get a pa
I asked a similar question a while ago. There doesn't appear to be a way to
avoid triggering the rebalance. But I'm not sure why it would be taking > 1hr
in your case. For us it was pretty fast.
https://www.mail-archive.com/users@kafka.apache.org/msg23925.html
On Fri, Feb 10, 2017 at 4:28 AM, Krz
Would be great to get some input on it.
- Krzysztof Lesniewski
On 06.02.2017 08:27, Praveen wrote:
I have a 16 broker kafka cluster. There is a topic with 32 partitions
containing real time data and on the other side, I have 32 boxes w/ 1
consumer reading from these partitions.
Today our deplo
I have a 16 broker kafka cluster. There is a topic with 32 partitions
containing real time data and on the other side, I have 32 boxes w/ 1
consumer reading from these partitions.
Today our deployment strategy is stop, deploy and start the processes on
all the 32 consumers. This triggers re-balanc
> On Sep 29, 2016, at 16:39, Ali Akhtar wrote:
>
> Why did you choose Druid over Postgres / Cassandra / Elasticsearch?
Well, to be clear, we haven’t chosen it yet — we’re evaluating it.
That said, it is looking quite promising for our use case.
The Druid docs say it well:
> Drui
Cassandra? That would be a separate cluster and that in itself could be a
problem…
YMMV so you need to address the pros/cons of each tool specific to your
environment and skill level.
HTH
-Mike
> On Sep 29, 2016, at 8:54 AM, Ali Akhtar wrote:
>
> I have a somewhat tricky use case
Avi,
Why did you choose Druid over Postgres / Cassandra / Elasticsearch?
On Fri, Sep 30, 2016 at 1:09 AM, Avi Flax wrote:
>
> > On Sep 29, 2016, at 09:54, Ali Akhtar wrote:
> >
> > I'd appreciate some thoughts / suggestions on which of these
> alternatives I
> > should go with (e.g., using raw
> On Sep 29, 2016, at 09:54, Ali Akhtar wrote:
>
> I'd appreciate some thoughts / suggestions on which of these alternatives I
> should go with (e.g., using raw Kafka consumers vs Spark for ETL, which
> persistent data store to use, and how to query that data store in the
> backend of the web UI,
>>> ...you want to write your own compaction code? Or use Hive 1.x+?)
>>>
>>> HBase? Depending on your admin… stability could be a problem.
>>> Cassandra? That would be a separate cluster and that in itself could be a
>>> problem…
>>>
>>>
The business use case is to read a user's data from a variety of different
services through their APIs, and then allow the user to query that data,
on a per-service basis, as well as an aggregate across all services.
The way I'm considering doing it is to do some basic ETL (dr
>> ...fault tolerant as
>> possible.
>>
>> What's the advantage of using Spark for reading Kafka instead of direct
>> Kafka consumers?
>>
>> On Thu, Sep 29, 2016 at 8:28 PM, Cody Koeninger
>> wrote:
>>>
>>> I wouldn't give up the flexibility an
Hi Ali,
What is the business use case for this?
Dr Mich Talebzadeh
...as well and from
Flume to HBase.
I would have thought that if one wanted to do real time analytics with SS,
then that would be a good fit with a real time dashboard.
What is not so clear is the business use case for this.
HTH
Dr Mich Talebzadeh
What's the advantage of using Spark for reading Kafka instead of direct
Kafka consumers?
On Thu, Sep 29, 2016 at 8:28 PM, Cody Koeninger wrote:
> I wouldn't give up the flexibility and maturity of a relational
> database, unless you have a very specific use case. I'm not
I wouldn't give up the flexibility and maturity of a relational
database, unless you have a very specific use case. I'm not trashing
cassandra, I've used cassandra, but if all I know is that you're doing
analytics, I wouldn't want to give up the ability to easily do ad-h
Hi Cody,
Spark direct stream is just fine for this use case.
But why Postgres and not Cassandra?
Is there anything specific here that I may not be aware of?
Thanks
Deepak
On Thu, Sep 29, 2016 at 8:41 PM, Cody Koeninger wrote:
> How are you going to handle etl failures? Do you care about l
>>> ...Zeppelin to query data
>>> You will also need Spark Streaming to query data online for the speed
>>> layer. That data could be stored in some transient fabric like Ignite or
>>> even Druid.
recommendations for a tricky use case
The web UI is actually the speed layer; it needs to be able to query the data
online and show the results in real time.
It also needs a custom front-end, so a system like Tableau can't be used; it
must have a custom backend + front-end.
Thanks for the recommend
29 September 2016 at 15:01, Ali Akhtar wrote:
>
>> It needs to be able to scale to a very large amount of data, yes.
>>
>> On Thu, Sep 29, 2016 at 7:00 PM, Deepak Sharma
>> wrote:
>>
>>> What is the message inflow ?
>>> If it's really hig
What is the message inflow?
If it's really high, definitely Spark will be of great use.
Thanks
Deepak
On Sep 29, 2016 19:24, "Ali Akhtar" wrote:
> I have a somewhat tricky use case, and I'm looking for ideas.
>
> I have 5-6 Kafka producers, reading various APIs,
Ali Akhtar" wrote:
>
>> I have a somewhat tricky use case, and I'm looking for ideas.
>>
>> I have 5-6 Kafka producers, reading various APIs, and writing their raw
>> data into Kafka.
>>
>> I need to:
>>
>> - Do ETL on the data, and
I have a somewhat tricky use case, and I'm looking for ideas.
I have 5-6 Kafka producers, reading various APIs, and writing their raw
data into Kafka.
I need to:
- Do ETL on the data, and standardize it.
- Store the standardized data somewhere (HBase / Cassandra / Raw HDFS /
ElasticS
Thanks Guozhang Wang.
Hamza
From: Guozhang Wang
Sent: Thursday, August 4, 2016 06:58:22
To: users@kafka.apache.org
Subject: Re: Re: A specific use case
Yeah, if you can buffer yourself in the process() function and then rely on
punctuate() for generating the
ang Wang"
> Pour : "users@kafka.apache.org"
> Objet : A specific use case
> Date : mer., août 3, 2016 23:38
>
> Hello Hamza,
>
> By saying "broker" I think you are actually referring to a Kafka Streams
> instance?
>
>
> Guozhang
>
> On Mon, Au
Hi,
Yes, in fact.
And I found a solution.
It was by editing the punctuate method in the Kafka Streams processor.
- Reply message -
From: "Guozhang Wang"
To: "users@kafka.apache.org"
Subject: A specific use case
Date: Wed., August 3, 2016 23:38
Hello Hamza,
By say
Hello Hamza,
By saying "broker" I think you are actually referring to a Kafka Streams
instance?
Guozhang
On Mon, Aug 1, 2016 at 1:01 AM, Hamza HACHANI
wrote:
> Good morning,
>
> I'm working on a specific use case. In fact I'm receiving messages from an
> o
Good morning,
I'm working on a specific use case. In fact I'm receiving messages from an
operator network and trying to do statistics on their number per
minute, per hour, per day ...
I would like to create a broker that receives the messages and generates a
message every minute. These
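In current Kafka Streams the punctuate() method has been replaced by punctuators registered via context.schedule(); a rough sketch of the per-minute counter (recent processor API, names illustrative):

import java.time.Duration;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;

// Counts incoming messages and emits one aggregate record per minute.
public class PerMinuteCounter implements Processor<String, String, String, Long> {
    private long count = 0;

    @Override
    public void init(ProcessorContext<String, Long> context) {
        // WALL_CLOCK_TIME fires every minute even when no messages arrive;
        // STREAM_TIME would advance only as new records come in.
        context.schedule(Duration.ofMinutes(1), PunctuationType.WALL_CLOCK_TIME, ts -> {
            context.forward(new Record<>("messages-per-minute", count, ts));
            count = 0;
        });
    }

    @Override
    public void process(Record<String, String> record) {
        count++;
    }
}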
I want to use Kafka for notifications of changes to data in a
dataservice/database. For each object that changes, a Kafka message will be
sent. This is easy and we've got that working, no problem.
Here is my use case: I want to be able to fire up a process that will
1) determine the curr
>> ...relatively soon after
>> they were written to Kafka, but there's no mechanism for deleting records on
>> consumption.
>>
>> Ian.
>>
>>
>> ---
>> Ian Wrigley
>> Director, Education Services
>> Confluent, Inc
>>
>>> On Jul 2, 2
Hi All
My use case is that I want to delete the records instantly after consuming
them. I am using Kafka 0.90.
Thanks and Regards,
Navneet Kumar
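Worth noting: Kafka has no delete-on-consume. The closest thing is AdminClient#deleteRecords (KIP-107, added well after 0.9), which truncates a partition below a given offset; otherwise a short retention.ms and letting the broker expire records is the usual answer. A sketch with illustrative names:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

public class TruncateAfterConsume {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Deletes everything on partition 0 below offset 42.
            TopicPartition tp = new TopicPartition("my-topic", 0);
            admin.deleteRecords(Collections.singletonMap(tp, RecordsToDelete.beforeOffset(42L)))
                 .all().get();
        }
    }
}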
> ...generic question, I'll try to explain with some pseudocode.
>
> I have two KTables with a join:
>
> ktable1.join(ktable2, (data1, data2) => new ResultUnion(data1, data2))
>
> I send the result to a topic: result.to("resultTopic").
> ...Corral <guillermo.lammers.cor...@tecsisa.com> wrote:
>
>> Hi Guozhang,
>>
>> Thank you very much for your reply, and sorr
> - The streaming is up & running without data in the topics
>
> - I send data to "topic2", for example a key/value like ("uniqueKey1",
> "hello
>
> ...I see null values in topic "resultTopic", i.e. ("uniqueKey1", null)
>
> - If I send data to "topic1", for example a key/value like
> ("uniqueKey1", "wor
> ...of the KTable that does not have the corresponding data by key in the
> other one, is obtaining null values in the final result topic the expected
> behavior?
>
> My next step would be to use Kafka Conn
> Q: On the other hand, just to try, I have a KTable that reads messages in
> "resultTopic" and prints them. If the stream is a KTable, I am wondering
> why it is getting all the values from the topic, even those with the same
> key?
>
> Thanks in advance! Great job answering, community!
>
> 2016-04-14 20:00 GMT+02:00 Guozhang Wang:
>
> > Hi Guillermo,
> >
> > 1) Yes in your case, the streams a
> ...the streams are really a "changelog" stream, hence you
> should create the stream as a KTable, and do a KTable-KTable join.
>
> 2) Could you elaborate on "achieving this"? What behavior do you require in
> the application logic?
>
> Guozhang
>
> On Thu, Apr 14, 2016 at 1:30 AM, Guillermo Lammers Corral <
> guillermo.lammers.cor...@tecsisa.com> wrote:
> Hi,
>
> I am a newbie to Kafka Streams and I am using it to try to solve a
> particular use case. Let me explain.
>
> I have two sources of data, both like this:
> Key (string)
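For reference, the pseudocode above translated into a runnable DSL sketch (serdes and names illustrative); in current releases an inner KTable-KTable join emits a result only once both sides have a value for the key:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;

public class TwoTableJoin {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KTable<String, String> ktable1 =
                builder.table("topic1", Consumed.with(Serdes.String(), Serdes.String()));
        KTable<String, String> ktable2 =
                builder.table("topic2", Consumed.with(Serdes.String(), Serdes.String()));
        // Inner join: "uniqueKey1" produces output only after both topics have a
        // value for it; leftJoin/outerJoin would emit earlier, with nulls.
        ktable1.join(ktable2, (data1, data2) -> data1 + "|" + data2)
               .toStream()
               .to("resultTopic");
    }
}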