Hey there,
I've got a few Kafka Streams services which run smoothly most of the time.
Sometimes, however, some of them get an exception "Abort sending since an error
caught with a previous record" (see below for a full example). The stream
service hitting this exception just stops working.
-Matthias
On 5/15/18 6:30 AM, Claudia Wegmann wrote:
> Hey there,
>
> I've got a few Kafka Streams services which run smoothly most of the time.
> Sometimes, however, some of them get an exception "Abort sending since an
> error caught with a previous record"
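One thing worth checking for this class of failure (a sketch, not a verified fix for this exact stack trace) is how the Streams app's embedded producer reacts to send errors. Assuming a Streams version that has the ProductionExceptionHandler interface (1.1+), properties along these lines let transient sends be retried, or non-fatal production errors be skipped via a custom handler. Note that com.example.ContinueOnErrorHandler is hypothetical — you would implement it yourself; verify the key names against your version's docs:

```properties
# Hedged sketch; confirm these keys for your Kafka Streams version.
# Retry transient send failures in the embedded producer:
producer.retries=10
producer.retry.backoff.ms=100
# Optionally skip non-fatal production errors instead of aborting the task.
# com.example.ContinueOnErrorHandler is hypothetical: an implementation of
# org.apache.kafka.streams.errors.ProductionExceptionHandler returning CONTINUE.
default.production.exception.handler=com.example.ContinueOnErrorHandler
```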
Dear community,
Today I updated my kafka cluster from version 1.1.0 to 2.0.0 according to the
rolling upgrade guide. I encountered a problem after starting the
new broker version. The log is full of
"Found a corrupted index file corresponding to log file
not, unless it was caused by an unclean shutdown, in which case
shutdown cleanly. :)
2) probably not, since the indexes are rebuilt.
FWIW I've done a bunch of 1.1.0 -> 2.0 upgrades and haven't had this issue.
I have definitely seen it in the past, though.
On Tue, Oct 9, 2018 at 2:09 AM Claudia Wegm
Re: Configuration of log compaction
Hi Claudia,
Anything useful in the log cleaner log files?
Cheers,
Liam Clarke
On Tue, 18 Dec. 2018, 3:18 am Claudia Wegmann wrote:
Hi,
>
> thanks for the quick response.
>
> My problem is not that no new segments are created, but that segments
>
Dear kafka users,
I've got a problem on one of my kafka clusters. I use this cluster with kafka
streams applications. Some of these stream apps use a kafka state store.
Therefore a changelog topic is created for those stores with cleanup policy
"compact". One of these topics is running wild for
full to ensure that retention can delete or compact old data. (This is the
segment.ms entry of the topic config docs — type: long, default: 604800000,
valid values: [1,...], server default property: log.roll.ms, importance: medium)
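For a compacted changelog topic that keeps growing, lowering segment.ms forces segments to roll sooner so the cleaner can actually get at the data; lowering min.cleanable.dirty.ratio makes compaction kick in more eagerly than the 0.5 default. A sketch of the config change, assuming the stock kafka-configs.sh from a 2.x install, a ZooKeeper at zk:2181, and a placeholder topic name:

```shell
# Config-change sketch only — substitute your own ZooKeeper address and the
# real changelog topic name before running anything.
bin/kafka-configs.sh --zookeeper zk:2181 --alter \
  --entity-type topics --entity-name my-service-MyStore-changelog \
  --add-config segment.ms=3600000,min.cleanable.dirty.ratio=0.1
```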
On Mon, Dec 17, 2018 at 12:28 PM Claudia Wegmann wrote:
> Dear kafka users,
>
> I've got a problem on one of my kafka clusters. I use this cluster
> with kafka streams applic
We've found that rolling broker restarts
with 0.11 are rather easy and not to be feared.
Kind regards,
Liam Clarke
On Tue, Dec 18, 2018 at 10:43 PM Claudia Wegmann wrote:
> Hi Liam,
>
> thanks for the pointer. I found out that the log cleaner on all kafka
> brokers died with the f
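A dead cleaner thread means the broker keeps appending but never compacts, which fits the symptoms here. A quick way to spot it is to grep the broker's log-cleaner.log for errors. The path and exact messages below are assumptions (Kafka writes cleaner output to a separate log-cleaner.log under the broker's log directory); a throwaway file stands in for a real broker log so the commands are self-contained:

```shell
# Simulate a broker's log-cleaner.log with made-up but representative lines;
# on a real broker, look under e.g. <kafka>/logs/log-cleaner.log instead.
mkdir -p /tmp/kafka-cleaner-demo
cat > /tmp/kafka-cleaner-demo/log-cleaner.log <<'EOF'
INFO Cleaner 0: Beginning cleaning of log __consumer_offsets-16. (kafka.log.LogCleaner)
ERROR [kafka-log-cleaner-thread-0]: Error due to (kafka.log.LogCleaner)
INFO [kafka-log-cleaner-thread-0]: Stopped (kafka.log.LogCleaner)
EOF
# A dead cleaner typically shows an ERROR followed by the thread stopping:
grep -c 'ERROR' /tmp/kafka-cleaner-demo/log-cleaner.log
```

If the count is non-zero, the surrounding stack trace usually names the partition the cleaner choked on.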
Hey there,
I've got the problem that the "__consumer_offsets" topic grows pretty big over
time. After some digging, I found offsets for consumer groups that were deleted
a long time ago still being present in the topic. Many of them are offsets for
console consumers that have been deleted
* compaction only operates on rolled out segments
* deletion of a tombstone only occurs once the delete.retention.ms delay has expired
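Putting numbers on those two conditions (a back-of-envelope sketch using the documented defaults, which you should verify for your broker version): a tombstone written into the active segment cannot disappear before the segment rolls and the retention delay passes:

```shell
# Worst-case delay before a tombstone can be removed, using documented defaults:
segment_ms=604800000           # segment.ms default: 7 days
delete_retention_ms=86400000   # delete.retention.ms default: 1 day
worst_case_ms=$((segment_ms + delete_retention_ms))
echo "$worst_case_ms"          # prints 691200000, i.e. 8 days in milliseconds
```

So with defaults, deleted-group tombstones can legitimately linger for over a week before compaction may drop them.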
Best regards
On Fri, Mar 29, 2019 at 2:16 PM Claudia Wegmann wrote:
> Hey there,
>
> I've got the problem that the "__consumer_offsets" topic grows pretty
> big over ti
n up small segments but many? Up until now I have used the default
configuration for the log segment size. Should I reduce it?
Anyone any other ideas?
Best,
Claudia
-Original Message-
From: Claudia Wegmann
Sent: Wednesday, 19 December 2018 08:50
To: users@kafka.apache.org
B
Dear kafka experts,
I've got a kafka cluster with 3 brokers running in Docker containers on
different hosts, on version 2.1.1. The cluster is serving some kafka
streams apps. The topics are configured with replication.factor 3 and
min.insync.replicas 2. The cluster works fine most of
Hi kafka users,
since upgrading to kafka 2.1.1 version I get the following log message at every
startup of streaming services:
"No checkpoint found for task 0_16 state store TestStore changelog
test-service-TestStore-changelog-16 with EOS turned on. Reinitializing the task
and restore its state from the beginning."
Dear kafka users,
I run a kafka cluster (version 2.1.1) with 6 brokers to process ~100 messages
per second with a number of kafka streams apps. There are currently 53 topics
with 30 partitions each. I have exactly once processing enabled. My problem is
that the __consumer_offsets topic is
On Tue, May 14, 2019 at 4:44 AM Claudia Wegmann wrote:
> Dear kafka users,
>
> I run a kafka cluster (version 2.1.1) with 6 brokers to process ~100
> messages per second with a number of kafka streams apps. There are
> currently 53 topics with 30 partitions each. I have exactly onc
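For a growing __consumer_offsets topic, the broker settings below are the usual suspects to check first. The values shown are the documented defaults for the 2.x line to the best of my knowledge — verify them against your broker version's config reference rather than taking this fragment as authoritative:

```properties
# Broker settings that bound __consumer_offsets growth (2.x defaults, to verify):
offsets.retention.minutes=10080      # expired group offsets become tombstones after 7 days
log.cleaner.enable=true              # compaction must actually be running
offsets.topic.segment.bytes=104857600  # smaller segments roll (and compact) sooner
```

If the cleaner threads are alive and these settings are sane, the next thing to look at is whether exactly-once transaction markers are keeping segments dirty faster than the cleaner can compact them.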