in ZooKeeper. Is this the
preferred approach?
I have read that in the latest Kafka version, consumer offsets can be
maintained in the Kafka cluster itself. Is there any Storm spout example for
this?
Regards
Pradeep S
Hi,
I am trying to call Kafka Streams close() based on the presence of a value in
the output of a ValueTransformer. The ValueTransformer produces a
List.
Is there a way to avoid the foreach on the KStream and get the
first value alone (like the Java Stream API's findFirst)?
private void
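A minimal sketch of one way to do this (the topic name, application id, and the "STOP" sentinel are assumptions, not from the thread): KStream has no findFirst because the stream is unbounded, but inside a foreach the Java Stream API's findFirst can pick out the first matching list element, and a latch lets close() be called from outside the processing threads.

import java.util.List;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class CloseOnValueSketch {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "close-on-value");     // assumption
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumption

        CountDownLatch stopSignal = new CountDownLatch(1);

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, List<String>> transformed = builder.stream("item-input"); // hypothetical topic

        // Instead of iterating the whole list, use findFirst() on the list's stream
        // and trip the latch when the sentinel value shows up.
        transformed.foreach((key, values) ->
                values.stream()
                      .filter("STOP"::equals)            // hypothetical sentinel
                      .findFirst()
                      .ifPresent(v -> stopSignal.countDown()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        // close() should not be called from a StreamThread, so wait on the main thread.
        stopSignal.await();
        streams.close();
    }
}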
Hi,
In Kafka Streams, when we use the *to* method for sending values to a topic, is
there a way to specify the message key?
.to(outputTopic, Produced.with(byteArraySerde, itemEnvelopeSerde));
In the Produced class, I can't find a way to set the key.
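Produced only configures serdes (and optionally a partitioner); the key travels on the record itself, so it has to be set upstream, for example with selectKey() or map() before calling to(). A minimal sketch (topic names and the key-extraction helper are assumptions):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class KeyedOutput {
    static void buildTopology(StreamsBuilder builder) {
        KStream<byte[], String> items = builder.stream("item-envelopes"); // hypothetical topic

        // Re-key the records before writing; Produced.with() then only supplies
        // the serdes that match the new key/value types.
        items.selectKey((oldKey, value) -> extractItemId(value))
             .to("item-output", Produced.with(Serdes.String(), Serdes.String()));
    }

    // Hypothetical helper that derives the message key from the value.
    static String extractItemId(String value) {
        return value.split(",")[0];
    }
}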
Hi,
I am running a poll loop for a Kafka consumer and the app is deployed in
Kubernetes. I am using manual commits. I have a couple of questions on exception
handling in the poll loop:
1) Do I need to handle the consumer rebalance scenario (when any of the consumer
pods dies) by adding a listener, or will the
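One common pattern with manual commits is to register a ConsumerRebalanceListener and commit the processed-but-uncommitted offsets in onPartitionsRevoked, so another pod taking over a partition does not reprocess them. A sketch (bootstrap servers, group id, and topic are assumptions, not from the thread):

import java.time.Duration;
import java.util.*;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.TopicPartition;

public class RebalanceAwareConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "item-consumers");          // assumption
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        Map<TopicPartition, OffsetAndMetadata> pending = new HashMap<>();

        consumer.subscribe(Collections.singletonList("item-topic"), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                // Commit whatever has been processed before the partitions move away.
                consumer.commitSync(pending);
                pending.clear();
            }
            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) { }
        });

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            for (ConsumerRecord<String, String> record : records) {
                // process(record) ...
                pending.put(new TopicPartition(record.topic(), record.partition()),
                            new OffsetAndMetadata(record.offset() + 1));
            }
            consumer.commitSync(pending);
            pending.clear();
        }
    }
}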
mber is that when you are sending data, as of 1.0.0 API
> you can have a "Txn-like" finer control to determine when you have
> successfully committed a transaction. You can check beginTransaction(),
> commitTransaction(), abortTransaction() methods to see how they can be
> uti
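For reference, the transactional methods mentioned above follow the pattern below; this is only a sketch (bootstrap servers, transactional id, and topic are assumptions), and a commit is the point at which the sent records become visible to read_committed consumers.

import java.util.Properties;
import org.apache.kafka.clients.producer.*;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.AuthorizationException;
import org.apache.kafka.common.errors.OutOfOrderSequenceException;
import org.apache.kafka.common.errors.ProducerFencedException;

public class TxnProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "item-producer-1"); // assumption
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.initTransactions();
        try {
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("item-topic", "key", "value")); // hypothetical topic
            producer.commitTransaction();   // records become visible only here
        } catch (ProducerFencedException | OutOfOrderSequenceException | AuthorizationException e) {
            producer.close();               // fatal errors: do not call abortTransaction()
        } catch (KafkaException e) {
            producer.abortTransaction();    // recoverable: discard everything sent in this txn
        }
        producer.close();
    }
}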
Hi All,
I am running a Kafka consumer (single threaded) on Kubernetes. The application
is polling the records and accumulating them in memory. There is a scheduled
write of these records to S3, and only after that do I commit the offsets
back to Kafka.
There are 4 partitions and 4 consumers (4 Kubernetes
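A sketch of the accumulate-then-commit flow described above (topic, group id, buffer threshold, and the S3 writer are assumptions): track the next offset per partition while buffering, and commit those offsets only after the external write succeeds.

import java.time.Duration;
import java.util.*;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.TopicPartition;

public class BufferThenCommit {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "s3-writer");               // assumption
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("item-topic"));      // assumption
            List<String> buffer = new ArrayList<>();
            Map<TopicPartition, OffsetAndMetadata> toCommit = new HashMap<>();

            while (true) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofMillis(500))) {
                    buffer.add(rec.value());
                    // Remember the next offset to consume for each partition.
                    toCommit.put(new TopicPartition(rec.topic(), rec.partition()),
                                 new OffsetAndMetadata(rec.offset() + 1));
                }
                if (buffer.size() >= 1000) {          // stand-in for the scheduled flush
                    writeToS3(buffer);                 // hypothetical external write
                    consumer.commitSync(toCommit);     // commit only after the flush succeeds
                    buffer.clear();
                    toCommit.clear();
                }
            }
        }
    }

    static void writeToS3(List<String> records) { /* hypothetical */ }
}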
Hi,
I would like to add an integration test for the Kafka rebalance scenario and
make sure retries are working as expected if enough replicas are not available
in my Kafka Streams application.
Do you have any info on how I can achieve this?
I was checking the TestUtils class in org.apache.kafka.test, but
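One option, not mentioned in the thread and only a sketch: start a throwaway broker with Testcontainers and create the output topic with min.insync.replicas higher than its replication factor, which makes every acks=all produce fail with NOT_ENOUGH_REPLICAS so the application's retry path can be exercised without killing brokers. Topic name and image tag are assumptions.

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.utility.DockerImageName;

public class NotEnoughReplicasTestSetup {
    public static void main(String[] args) throws Exception {
        // Single throwaway broker for the test.
        KafkaContainer kafka = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0"));
        kafka.start();

        Properties adminProps = new Properties();
        adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, kafka.getBootstrapServers());
        try (Admin admin = Admin.create(adminProps)) {
            // min.insync.replicas > replication factor forces NOT_ENOUGH_REPLICAS
            // on every acks=all write to this topic.
            NewTopic sink = new NewTopic("items-output", 1, (short) 1)              // hypothetical topic
                    .configs(Map.of("min.insync.replicas", "2"));
            admin.createTopics(Collections.singleton(sink)).all().get();
        }

        // Point the Kafka Streams application under test at kafka.getBootstrapServers()
        // and assert on its retry/recovery behaviour from the test.
    }
}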
Hi,
I am using a Kafka Streams app connecting to a Confluent Kafka cluster (10.2.1).
The application is reading messages from a topic, performing a transformation,
and pushing to an output topic. There is no count or aggregation performed.
I have the following clarifications regarding the state directory.
*1)* Will
re should be no issue
> with running on Kubernetes.
>
> Also, if there is no store (independent of disk based or in-memory)
> there will be no changelog topic.
>
>
> -Matthias
>
> On 4/21/18 8:34 AM, pradeep s wrote:
> > Hi,
> > I am using kafka streams app conn
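As the reply above notes, a stateless map-and-forward topology creates no store and therefore no changelog topic; state.dir then only holds a small per-application directory. A minimal config sketch (application id, bootstrap servers, and the mount path are assumptions):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class StateDirConfig {
    static Properties streamsProps() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "item-transformer");   // assumption
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");      // assumption
        // With no store there is no RocksDB data to lose on pod restart; on
        // Kubernetes, pointing state.dir at a mounted volume is still a safe
        // default in case stores are added later.
        props.put(StreamsConfig.STATE_DIR_CONFIG, "/var/lib/kafka-streams");  // assumed mount path
        return props;
    }
}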
Hi,
I have a use case to stream messages from Kafka, buffer them in memory until
a message count is reached, and then write these to an output file. I am using
manual commit. I have a question on what the maximum time is that I can wait
after consuming a message before committing back to Kafka. Is there
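There is no separate consume-to-commit deadline; the relevant limit is that poll() must keep being called within max.poll.interval.ms, otherwise the consumer is removed from the group and the uncommitted records are reprocessed after the rebalance. A small config sketch (values shown are the defaults; servers and group id are assumptions):

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

public class BufferingConsumerConfig {
    static Properties props() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumption
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "file-writer");              // assumption
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        // Ceiling on the gap between two poll() calls before a rebalance is triggered;
        // the commit itself can wait as long as polling continues.
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "300000");       // default: 5 minutes
        // Smaller batches make it easier to stay under that ceiling per iteration.
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "500");
        return props;
    }
}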
used for cases like this).
>
> -Matthias
>
>
> On 10/16/18 2:19 PM, pradeep s wrote:
> > Hi,
> > I have a usecase to stream messages from Kafka and buffer it in memory
> till
> > a message count is reached and then write these to output file . I am
>
Pause and resume are required since I am running a pod in Kubernetes and I
am not shutting down the app.
On Tue, Oct 23, 2018 at 10:33 PM pradeep s
wrote:
> Hi,
> I have a requirement to have kafka streaming start at scheduled time and
> then pause the stream when the consumer poll retu
not
> hit `max.poll.interval.ms` timeout.
>
>
> -Matthias
>
> On 10/24/18 10:25 AM, pradeep s wrote:
> > Pause and resume is required since i am running a pod in kubernetes and i
> > am not shutting down the app
> >
> > On Tue, Oct 23, 2018 at 10:33 PM pradeep s
t;
> > In your scheduled method, you set the variable to true.
> >
> > In your main consumer, each time before you call poll(), you check if
> > the variable is set to true. If yes, you resume() and reset the variable
> > to false.
> >
> > Hope this helps.
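A sketch of the flag-based approach described above, assuming a single consumer thread and a hypothetical daily schedule: the scheduled task only flips an AtomicBoolean (the consumer is not thread-safe), and the poll loop resumes the paused partitions when it sees the flag.

import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;

public class ScheduledResumeLoop {
    private final AtomicBoolean resumeRequested = new AtomicBoolean(false);

    // Scheduled method: only sets the flag, never touches the consumer.
    void scheduleDailyResume() {
        Executors.newSingleThreadScheduledExecutor()
                 .scheduleAtFixedRate(() -> resumeRequested.set(true), 0, 24, TimeUnit.HOURS);
    }

    // Main poll loop running on the single consumer thread.
    void run(Consumer<String, String> consumer) {
        while (true) {
            if (resumeRequested.compareAndSet(true, false)) {
                consumer.resume(consumer.paused());   // pick the paused partitions back up
            }
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            // process(records) ...; poll() keeps running even while paused, so the
            // consumer stays in the group and max.poll.interval.ms is never hit.
        }
    }
}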
true;
}
}
On Thu, Oct 25, 2018 at 6:11 PM pradeep s
wrote:
> Hi Manoj/Matthias,
> My requirement is to run the consumer once daily, stream the
> messages, and pause when I encounter a few empty fetches.
> I am planning to run two consumers and pause the consumption
Hi,
I have a requirement to have Kafka streaming start at a scheduled time and
then pause the stream when the consumer poll returns empty fetches for 3 or
more polls.
I am starting a consumer poll loop during application startup using a
single-thread executor and then pausing the consumer when the
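Complementing the resume sketch above, the pause side could count empty fetches and pause the assigned partitions while continuing to call poll(); a sketch (the threshold of 3 comes from the question, everything else is assumed):

import java.time.Duration;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;

public class PauseAfterEmptyFetches {
    private static final int EMPTY_POLLS_BEFORE_PAUSE = 3;

    void run(Consumer<String, String> consumer) {
        int emptyPolls = 0;
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            if (records.isEmpty()) {
                if (++emptyPolls >= EMPTY_POLLS_BEFORE_PAUSE) {
                    // Stop fetching but keep calling poll() so the consumer stays in the group.
                    consumer.pause(consumer.assignment());
                }
            } else {
                emptyPolls = 0;
                // process(records) and commit ...
            }
        }
    }
}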
Hi,
I have a requirement to stream item details to specific destinations.
There are three different Kafka streams: one for item info, a second for
item price and promotions, and a third for item availability.
I want to join all this info and produce a single message containing
item, price, and
> We did all of this in one application. You may be able to accomplish the
> same thing using aggregate a different way or you may be able to use left
> join methods to accomplish the same thing. I can't share the code. Sorry.
>
> On Tue, Jan 11, 2022 at 10:46 PM pradeep s
> wr
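One way to apply the join approach mentioned in the reply, as a sketch only (topic names, value types, and keying by item id are assumptions): read the three topics as KTables and chain a join plus a leftJoin so the consolidated record is emitted whenever any piece changes.

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;

public class ItemJoinSketch {
    // Hypothetical value types holding the pieces to be merged.
    record Info(String name) {}
    record Price(double amount) {}
    record Availability(int quantity) {}
    record Item(Info info, Price price, Availability availability) {}

    static void buildTopology(StreamsBuilder builder) {
        // Assumes all three topics are keyed by the same item id with serdes configured.
        KTable<String, Info> info = builder.table("item-info");                 // hypothetical topics
        KTable<String, Price> price = builder.table("item-price");
        KTable<String, Availability> availability = builder.table("item-availability");

        // Inner join info with price, then leftJoin availability so the item is
        // still emitted when availability has not arrived yet.
        KTable<String, Item> items = info
                .join(price, (i, p) -> new Item(i, p, null))
                .leftJoin(availability, (item, a) -> new Item(item.info(), item.price(), a));

        items.toStream().to("item-consolidated");                               // hypothetical output
    }
}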