One more thought you could consider: have two consumer groups, (1) a "db
consumer" that starts every hour and (2) one for near real time. The
second should run all the time and populate your in-memory db, like Redis,
and the TTL could be handled by Redis's expiration mechanism.
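A minimal sketch of that TTL idea, using a plain in-memory Map in place of Redis (the class and method names here are my own, not from the thread; with real Redis you would SET each entry with an EX option and let the server expire it):

```javascript
// Sketch only: a stand-in for Redis with a per-key TTL. The `now`
// parameter makes expiry deterministic for testing; real code would
// just use Date.now().
class TtlCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.entries = new Map(); // key -> { value, expiresAt }
  }
  set(key, value, now = Date.now()) {
    this.entries.set(key, { value, expiresAt: now + this.ttlMs });
  }
  get(key, now = Date.now()) {
    const e = this.entries.get(key);
    if (!e || e.expiresAt <= now) {
      this.entries.delete(key); // lazy expiry, like Redis's passive expiration
      return undefined;
    }
    return e.value;
  }
}
```

With Redis itself the equivalent would be `SET key value EX <seconds>`, so no pruning code is needed at all.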

On Fri, May 28, 2021, 21:44, Ran Lupovich <ranlupov...@gmail.com
> wrote:

> So I think you should write the partition and the offset to your db;
> while initializing the real-time consumer you'd read from the database
> where to set the consumer's starting point, kind of the "exactly once"
> programming approach.
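That could look something like the sketch below. The row shape is hypothetical (adapt it to your schema); the key detail is that the stored offset is the last one processed, so the resume point is one past it. node-rdkafka's `consumer.assign()` accepts `{ topic, partition, offset }` objects:

```javascript
// Sketch: turn DB-checkpoint rows into starting assignments for the
// real-time consumer. The row shape { topic, partition, lastProcessedOffset }
// is an assumption for illustration.
function toAssignments(rows) {
  return rows.map(r => ({
    topic: r.topic,
    partition: r.partition,
    offset: r.lastProcessedOffset + 1, // resume AFTER the last loaded message
  }));
}
// With node-rdkafka you would then call consumer.assign(toAssignments(rows))
// instead of subscribe(), so the consumer starts exactly at those offsets.
```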
>
> On Fri, May 28, 2021, 21:38, Ronald Fenner <
> rfen...@gamecircus.com> wrote:
>
>> That might work if my consumers were in the same process, but the db
>> consumer is a Python job running under Airflow and the realtime consumer
>> would be running as a backend service on another server.
>>
>> Also, how would I seed the realtime consumer at startup if the consumer
>> isn't running, which could be possible if it hit the end of the stream?
>>
>> The db consumer is designed to read until no new message is delivered,
>> then exit until it's next spawned.
>>
>> Ronald Fenner
>> Network Architect
>> Game Circus LLC.
>>
>> rfen...@gamecircus.com
>>
>> > On May 28, 2021, at 12:05 AM, Ran Lupovich <ranlupov...@gmail.com>
>> wrote:
>> >
>> >
>> https://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#seek(org.apache.kafka.common.TopicPartition,%20long)
>> >
>> > On Fri, May 28, 2021, 08:04, Ran Lupovich <
>> ranlupov...@gmail.com
>> >> wrote:
>> >
>> >> While your DB consumer is running you get access to the partition
>> >> ${partition} @ offset ${offset}:
>> >>
>> >> https://github.com/confluentinc/examples/blob/6.1.1-post/clients/cloud/nodejs/consumer.js
>> >>
>> >> When setting up your second consumer for real time, just set it to
>> >> start from that point.
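A sketch of capturing that partition/offset pair inside the DB consumer's message handler. The message fields shown match what node-rdkafka delivers on its 'data' event; persisting the map to the database afterwards is left as a comment:

```javascript
// Sketch: remember the last offset seen per topic/partition while the
// DB consumer drains the topic. After the batch completes, each entry
// in the map would be written to the database as the checkpoint row.
function recordCheckpoint(checkpoints, msg) {
  // msg: { topic, partition, offset, value } as delivered by node-rdkafka
  checkpoints.set(`${msg.topic}/${msg.partition}`, msg.offset);
  return checkpoints;
}
```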
>> >>
>> >>
>> >> On Fri, May 28, 2021, 01:51, Ronald Fenner <
>> >> rfen...@gamecircus.com> wrote:
>> >>
>> >>> I'm trying to figure out how to programmatically read a consumer
>> >>> group's offset for a topic.
>> >>> What I'm trying to do is read the offsets of our DB consumers, which
>> >>> run once an hour and batch-load all new messages. I then would have
>> >>> another consumer that monitors the offsets that have been consumed and
>> >>> consumes the messages not yet loaded, storing them in memory to be
>> >>> able to send them to a viewer. As messages get consumed they then get
>> >>> pruned from the in-memory cache.
>> >>>
>> >>> Basically I'm wanting to create a window on the messages that
>> >>> haven't been loaded into the db.
>> >>>
>> >>> I've seen ways of getting it from the command line, but I'd like to
>> >>> do it from within code.
>> >>>
>> >>> Currently I'm using node-rdkafka.
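With node-rdkafka specifically, the consumer exposes `committed(topicPartitions, timeout, callback)` for fetching a group's committed offsets. A hedged sketch (the wrapper function and the map shape are my own; `consumer` is assumed to be a connected KafkaConsumer configured with the group you want to inspect):

```javascript
// Sketch: fetch committed offsets for the given topic/partitions and
// hand back a Map keyed by "topic/partition". Assumes node-rdkafka's
// committed(toppars, timeout, cb) callback signature.
function fetchCommitted(consumer, topicPartitions, cb) {
  consumer.committed(topicPartitions, 5000, (err, toppars) => {
    if (err) return cb(err);
    // toppars: [{ topic, partition, offset }]
    cb(null, new Map(toppars.map(t => [`${t.topic}/${t.partition}`, t.offset])));
  });
}
```

Note this reads the offsets committed by the consumer's own group; to watch another group's offsets you would need a consumer created with that group's `group.id`.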
>> >>>
>> >>> I guess as a last resort I could shell out to the command line for
>> >>> the offsets, then parse the output and get it that way.
>> >>>
>> >>>
>> >>> Ronald Fenner
>> >>> Network Architect
>> >>> Game Circus LLC.
>> >>>
>> >>> rfen...@gamecircus.com
>> >>>
>> >>>
>>
>>
