Hello Gérald,
I have the exact same problem. The MirrorMaker 2.0 Javadocs are only
slated for release 2.5.0 (see https://issues.apache.org/jira/browse/KAFKA-8930).
I am also prototyping Mirrormaker 2.0 and I have successfully run the
Mirrormaker 2.0 scripts (connect-mirror-maker.sh)
Hi Liam and George,
Thanks for the support.
Just now I found the same Jolokia module in the Elastic documentation.
I am checking its configuration. The metrics you mentioned are not there:
kafka.producer:type=producer-metrics,client-id=(.+),topic=(.+) record-send-rate
It seems I have to add a few metrics in the
I stand corrected...
Sent from my iPhone
George Leonard
__
george...@gmail.com
+27 82 655 2466
> On 21 Feb 2020, at 07:22, Liam Clarke wrote:
>
> Hi Sunil,
>
> Looks like Metricbeats has a Jolokia module that will capture JMX exposed
> metrics for you:
>
Hi Sunil,
Looks like Metricbeats has a Jolokia module that will capture JMX exposed
metrics for you:
https://www.elastic.co/blog/brewing-in-beats-add-support-for-jolokia-lmx
Kind regards,
Liam Clarke
On Fri, Feb 21, 2020 at 6:16 PM Sunil CHAUDHARI
wrote:
> Hi Liam Clarke,
> Thanks for this
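For what it's worth, a minimal Metricbeat Jolokia module configuration along the lines of the Elastic docs might look like the sketch below; the host, port, namespace, and field names are assumptions, and the MBean pattern is taken from the producer metric discussed in this thread:

```yaml
- module: jolokia
  metricsets: ["jmx"]
  hosts: ["localhost:8778"]        # Jolokia agent endpoint (assumed port)
  namespace: "kafka_producer"      # prefix for the resulting Elasticsearch fields
  jmx.mappings:
    - mbean: "kafka.producer:type=producer-metrics,client-id=*"
      attributes:
        - attr: record-send-rate
          field: record_send_rate
```

The Jolokia agent itself has to be attached to the producer's JVM for Metricbeat to have anything to scrape.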
I’m going to say no...
Metricbeat exposes OS metrics. You are talking about a JVM here exposing values via
JMX.
Have you looked at capturing all of this using Prometheus and dashboarding it
with Grafana as a real-time dashboard?
Sent from my iPhone
George Leonard
__
Hi Liam Clarke,
Thanks for this elaboration.
Surely I will Google it.
One more question about MBeans. If I am capturing system metrics on the Kafka broker
using Metricbeat, is it possible that I will get those MBeans?
I know this is off topic, but in general, if I capture JVM metrics with
any
Hi,
I wanted to understand if in this particular case my solution would work:
Say I have source records [timestamp, (K,V)] in the input topic in the following
order:
.. [1, (KA, AA1)] [2, (KA, AB1)] [3, (KB, B1)] ...
I create multiple streams out of input stream as:
input
.branch(
(k, v) ->
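The branch() call above is cut off in the archive; as a plain-Java illustration (no Kafka dependency, all names mine) of how branch routes each record to the first predicate that matches it:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.BiPredicate;

public class BranchDemo {
    // Mimics KStream#branch semantics: each record lands in the first branch
    // whose predicate matches it; later predicates are not consulted.
    public static List<List<String[]>> branch(List<String[]> records,
                                              List<BiPredicate<String, String>> predicates) {
        List<List<String[]>> branches = new ArrayList<>();
        for (int i = 0; i < predicates.size(); i++) {
            branches.add(new ArrayList<>());
        }
        for (String[] kv : records) {
            for (int i = 0; i < predicates.size(); i++) {
                if (predicates.get(i).test(kv[0], kv[1])) {
                    branches.get(i).add(kv);
                    break; // first match wins
                }
            }
        }
        return branches;
    }

    // Routes the records from the thread: KA-keyed records vs. everything else.
    public static String demo() {
        List<String[]> input = Arrays.asList(
                new String[]{"KA", "AA1"},
                new String[]{"KA", "AB1"},
                new String[]{"KB", "B1"});
        List<BiPredicate<String, String>> predicates = Arrays.asList(
                (k, v) -> k.equals("KA"),
                (k, v) -> true);
        List<List<String[]>> out = branch(input, predicates);
        return out.get(0).size() + "/" + out.get(1).size();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 2/1
    }
}
```

Records that match no predicate are dropped, which is also what the real branch() does.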
Hi Sachin,
The javadoc has a good explanation that you can refer to:
https://kafka.apache.org/24/javadoc/index.html?org/apache/kafka/streams/state/ReadOnlyWindowStore.html
As in our example, both these two would be returned.
Guozhang
On Tue, Feb 18, 2020 at 6:56 AM Sachin Mittal wrote:
>
Hello Sachin,
1) It seems from your source code that in stream2.transform you are
generating a new value and returning a new key-value pair:
mutate value = enrich(value, result)
return new KeyValue(key, value);
---
Anyways, if you do not want to generate a new value object, and
Hello,
I am prototyping Kafka replication with Mirror Maker 2. At the beginning, I
had hard times with
org.apache.kafka.connect.errors.ConnectException: Error while attempting to
create/find topic(s) 'mm2-offsets.dst.internal'...
Caused by: java.util.concurrent.ExecutionException:
Hey Vincent, I think you need to set the configs both for the brokers and
for the individual partitions you are moving.
For an automated system that can make this easier, check out
https://github.com/DataDog/kafka-kit/tree/master/cmd/autothrottle, though
it requires DataDog, you can use a
Hi,
I'm using kafka-consumer-perf-test but I'm getting an error if I add the
--print-metrics option.
Here's a snippet of my output including the error:
consumer-fetch-manager-metrics:fetch-size-max:{client-id=consumer-perf-consumer-99250-1}
:
The specs of your broker machines look fine for your use case. But you'll
need to run at least 3
ZK nodes so that ZK can maintain quorum in the event of a node
failure, network partition, etc. that removes a node. With two ZK nodes, one
failing node would take out your ZooKeeper cluster, as quorum
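The quorum arithmetic behind that advice can be sketched as follows (a strict majority of the ensemble must be reachable):

```java
public class ZkQuorum {
    // Majority quorum: more than half the ensemble must be up.
    public static int quorumSize(int ensemble) {
        return ensemble / 2 + 1;
    }

    // How many nodes can fail while a quorum still exists.
    public static int tolerableFailures(int ensemble) {
        return ensemble - quorumSize(ensemble);
    }

    public static void main(String[] args) {
        for (int n : new int[]{2, 3, 5}) {
            System.out.println(n + " nodes: quorum " + quorumSize(n)
                    + ", tolerates " + tolerableFailures(n) + " failure(s)");
        }
    }
}
```

With 2 nodes the quorum is also 2, so a single failure stops the cluster; 3 nodes tolerate 1 failure, which is why 3 is the practical minimum.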
The metrics are exposed as MBeans in the JVM the producer is running within.
The long string I gave you is the relevant MBean object name. You
can connect to the JVM using JConsole to view the MBeans. There are also
multiple libraries that will scrape a JVM via JMX to extract values from
MBeans.
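As a self-contained sketch of reading an MBean attribute through JMX, the way JConsole or a scraping library would: the object name below is modeled on the producer metric from this thread, but the MBean itself is a stand-in I register just for the demo (a real producer registers its own metrics MBeans inside its JVM).

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxDemo {
    // Stand-in management interface; "SampleMBean" naming makes this a
    // Standard MBean for the implementing class "Sample".
    public interface SampleMBean {
        double getRecordSendRate();
    }

    public static class Sample implements SampleMBean {
        @Override
        public double getRecordSendRate() {
            return 42.0; // fixed value so the demo is deterministic
        }
    }

    // Reads the RecordSendRate attribute the way a JMX scraper would.
    public static double readRate() {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            // Object name modeled on the producer metric discussed above.
            ObjectName name = new ObjectName(
                    "kafka.producer:type=producer-metrics,client-id=demo");
            if (!server.isRegistered(name)) {
                server.registerMBean(new Sample(), name);
            }
            return (double) server.getAttribute(name, "RecordSendRate");
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(readRate()); // 42.0
    }
}
```

Remote tools do the same thing over an RMI or Jolokia connector instead of the in-process platform MBeanServer.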
> Hi Blogspot,
>
> I have accidentally put sensitive information pertaining to the UK Home Office on
> this blog:
>
> https://kafkacommunity.blogspot.com/2020/01/re-kafka-24-anchore-scan-list.html
>
> Can this blog post be removed immediately, please, and can you confirm to me once it is done?
>
> Also
Hi, I want to know which account (or email id) this blog was posted from,
so that I can try removing it.
https://kafkacommunity.blogspot.com/2020/01/re-kafka-24-anchore-scan-list.html
Best Regards
Satya Kotni
On Thu, 20 Feb 2020 at 13:11, Satya Kotni wrote:
> Hi Blogspot,
>
> I have
Aha! Thanks, Renato, that's very clear.
I think there are a couple of ways you can model this, but one thing that
applies to all of them is that you should consider the `max.task.idle.ms`
configuration option. If you set it higher than `max.poll.interval.ms`,
then Streams will be able to ensure
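As an illustrative config fragment (the values are assumptions; it follows the advice above by putting `max.task.idle.ms` above `max.poll.interval.ms`, whose default is 5 minutes):

```properties
# Consumer default: rebalance if poll() is not called within 5 minutes
max.poll.interval.ms=300000
# Per the advice above, set task idling higher than the poll interval
max.task.idle.ms=310000
```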
For 1), MM2 will work with older versions of Kafka. I've gotten it to work
with clusters as old as 0.10.2 but with some features disabled iirc.
Ryanne
On Thu, Feb 20, 2020, 2:49 AM Dean Barlan
wrote:
> Hi everyone,
>
> I have a few small questions for you regarding MirrorMaker.
>
> 1. I know
It seems that one of the brokers somehow had high CPU utilization: five of
the brokers were at around 15%, and one was at 100%.
After I added more CPUs to the broker with 100% utilization, the
issue resolved itself.
Peter
On Thu, 20 Feb 2020 at 10:54, Péter Sinóros-Szabó <
Hello,
We're currently testing Mirrormaker 2.0 functionality for replication between
clusters. I have successfully run the Mirrormaker 2.0 script
(connect-mirror-maker.sh) using this config, replicating between two Kubernetes
Kafka broker instances:
Clusters = MC,DC
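For context, a fuller connect-mirror-maker.sh properties file along those lines might look like the sketch below; the bootstrap addresses are placeholders, and the keys follow the MM2 (KIP-382) config format:

```properties
clusters = MC, DC
MC.bootstrap.servers = mc-kafka:9092
DC.bootstrap.servers = dc-kafka:9092

# Replicate everything from MC to DC
MC->DC.enabled = true
MC->DC.topics = .*

replication.factor = 3
```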
Hello,
We have a cluster of 10 brokers.
Recently we replaced some broken HDDs on a single broker (id 2 for future
reference); all data on this broker was erased.
We have a replication factor of at least 3 on all our topics, so no data was lost.
To add the broker to the cluster again I configured
Hi Liam,
Thanks a lot for your response; it really helps my assessment.
I am also including some additional information about my requirements;
if you have further guidance to offer, kindly let me know.
Number of messages (across all topics) per day = 2
Hi Liam Clarke,
Sorry, but this is a bit unclear to me.
Can you please elaborate on your answer? I am a beginner with Kafka.
" Producers emit metrics via JMX ":
- How do I enable this? I have Kafka Manager. Can I make use of
Kafka Manager? How?
On #2, you can provide an implementation of a MirrorMakerMessageHandler
that will be called for each record; ensure it's on the classpath and
pass the class name to MirrorMaker using --message.handler.
On Thu, 20 Feb. 2020, 9:49 pm Dean Barlan,
wrote:
> Hi everyone,
>
> I have a few small questions
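For context, the handler is wired in on the (legacy) MirrorMaker command line roughly like this; the config file names and handler class are placeholders:

```shell
bin/kafka-mirror-maker.sh \
  --consumer.config consumer.properties \
  --producer.config producer.properties \
  --whitelist ".*" \
  --message.handler com.example.MyMessageHandler
```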
Hi,
we use Kafka 1.1.1; recently I ran into an issue/bug I can't see how to
solve.
We have a service running as two instances, using the same consumer
group id to access some topics. When the service starts and the instances
begin to join the consumer group, the join does not succeed.
The application
Hi everyone,
I have a few small questions for you regarding MirrorMaker.
1. I know that MirrorMaker 2.0 is only available starting with Kafka
version 2.4. Does that mean that if I was mirroring from cluster A to
cluster B, that both clusters need to be running Kafka 2.4?
2. For MirrorMaker
Hi Naveen, a very broad question, but to use Kafka as a backbone of your
infrastructure, the brokers need to be on machines with enough disk to
store the expected data, and with good network interface capacity - we use
10Gbps NICs.
Sizing disks is a case of knowing how long you want to retain data
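As a back-of-envelope sketch of that sizing calculation (all workload numbers below are assumptions, not from the thread):

```java
public class DiskSizing {
    // Rough retention-based estimate: daily volume * retention * replication,
    // spread evenly across the brokers. Ignores compression and overhead.
    public static long bytesPerBroker(long msgsPerDay, long avgMsgBytes,
                                      int retentionDays, int replicationFactor,
                                      int brokers) {
        return msgsPerDay * avgMsgBytes * retentionDays * replicationFactor / brokers;
    }

    public static void main(String[] args) {
        // Assumed workload: 100M msgs/day of ~1 KiB, 7 days retention,
        // replication factor 3, 5 brokers.
        long perBroker = bytesPerBroker(100_000_000L, 1_024L, 7, 3, 5);
        System.out.println(perBroker / (1L << 30) + " GiB per broker"); // 400 GiB per broker
    }
}
```

In practice you would add headroom on top of this for log segments awaiting deletion, OS page cache, and growth.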