auto.create.topics.enable property.
> >
> > Thanks,
> > Megh
> >
> >
> > On Sun, Feb 18, 2024, 15:09 Abhishek Singla >
> > wrote:
> >
> > > We only delete topics which do not have any active producers.
> > >
> > > Nowhe
One correction, I missed a thing:
you have to remove that topic from the producer as well as the consumer
config, both.
On Sun, 18 Feb 2024 at 2:49 PM, sunil chaudhari
wrote:
> You have to remove that topic from consumer config.
> restart consumer.
> then wait for some time.
> Then delete topic.
> th
You have to remove that topic from the consumer config.
Restart the consumer.
Then wait for some time.
Then delete the topic.
This time it won't be created again.
On Sun, 18 Feb 2024 at 1:07 PM, Abhishek Singla
wrote:
> Hi Team,
>
> Kafka version: kafka_2.12-2.6.0
> Zookeeper version: 3.4.14
> Java version: 1.8.0_301
Hi,
is there anybody who has a shell script to delete topics from a cluster?
I have a list of eligible topics to be deleted.
The script should accept the topic list as an input file and
delete the topics one by one.
Please share.
Thanks in advance.
regards,
Sunil.
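In case it helps, here is a minimal sketch of such a script. It assumes `kafka-topics.sh` from the Kafka /bin directory and an input file with one topic name per line; `KAFKA_TOPICS` and `BOOTSTRAP` are placeholders to override for your cluster.

```shell
#!/usr/bin/env bash
# Bulk topic deletion sketch: reads topic names (one per line) from a file
# and deletes them one by one. KAFKA_TOPICS and BOOTSTRAP are assumptions --
# point them at your kafka-topics.sh and your broker.
KAFKA_TOPICS="${KAFKA_TOPICS:-bin/kafka-topics.sh}"
BOOTSTRAP="${BOOTSTRAP:-localhost:9092}"

delete_topics_from_file() {
  local file="$1" topic
  while IFS= read -r topic; do
    [ -z "$topic" ] && continue          # skip blank lines
    "$KAFKA_TOPICS" --bootstrap-server "$BOOTSTRAP" --delete --topic "$topic"
  done < "$file"
}
# Usage: delete_topics_from_file eligible_topics.txt
```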
Hi,
point 1…. If you want to mutate the message, you have this option:
1. Start a KSQL server.
2. Read from parent_topic, mutate, and create child_topic where the mutated
message will be published.
3. Your consumer will read child_topic to consume and process the
message.
regards,
Sunil.
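A hedged sketch of step 2 in KSQL (the stream names, columns, and the uppercase "mutation" are all made up for illustration):

```sql
-- Register the source topic as a stream, then derive a mutated stream
-- whose backing topic is child_topic.
CREATE STREAM parent_stream (id VARCHAR, payload VARCHAR)
  WITH (KAFKA_TOPIC='parent_topic', VALUE_FORMAT='JSON');

CREATE STREAM child_stream
  WITH (KAFKA_TOPIC='child_topic', VALUE_FORMAT='JSON') AS
  SELECT id, UCASE(payload) AS payload   -- the "mutation", e.g. uppercasing
  FROM parent_stream;
```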
Hi Vinod,
you may want to contact Confluent in this case.
You can contact me separately; I can guide you more on this.
thanks and regards,
Sunil.
On Thu, 9 Nov 2023 at 3:07 PM, Vinothkumar S
wrote:
> Hi Team,
>
> We would like to use licensed version of Kafka along with Support.
> Could you please sha
Hi Venkat,
are you planning to use Open source Apache Kafka Or Confluent?
what is your use case apart from streaming?
regards,
Sunil.
On Fri, 29 Sep 2023 at 12:27 PM, ANANTHA VENKATA GANJAM
wrote:
> TCS Confidential
>
> Hi Team,
>
> We are planning to set up Lab environment for Kafka in TCS. Pl
Kudos to Nikhil.
your explanation adds to my knowledge.
🙏
On Mon, 25 Sep 2023 at 7:16 PM, Nikhil Srivastava <
nikhilsrivastava4...@gmail.com> wrote:
> Hi Yeikel,
>
> Sharing my two cents. Would let others chime in to add to this.
>
> Based on my understanding, if connect workers (which are all p
Aug 2023 at 7:10 PM, Robson Hermes
wrote:
> Hello Sunil
>
> I'm not calling a stop, I'm straight deleting the connectors with the
> DELETE. Stopping the connector is done internally during deletion.
>
> Regards
> Robson
>
> On Mon, 21 Aug 2023 at 15:36, sunil
You have to remove the connectors first using the DELETE API
and then stop the connector.
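For reference, a sketch of that DELETE call against the Kafka Connect REST API (the host, port, and connector name are assumptions; adjust for your deployment):

```shell
# Remove a connector via the Connect REST API. DELETE stops the connector's
# tasks and removes its configuration.
CONNECT_URL="${CONNECT_URL:-http://localhost:8083}"

delete_connector() {
  # -f makes curl fail on HTTP errors so the exit code is meaningful.
  curl -fsS -X DELETE "${CONNECT_URL}/connectors/$1"
}
# Usage: delete_connector my-jdbc-sink
```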
On Thu, 17 Aug 2023 at 2:51 AM, Robson Hermes
wrote:
> Hello
>
> I'm using kafka connect 7.4.0 to read data from Postgres views and write to
> another Postgres tables. So using JDBC source and sink connectors.
> All
Hi, I can guess two problems here.
1. Either too many partitions are concentrated on this broker compared to
the other brokers
2. The partitions on this broker might be larger in size compared to the
partitions on the other brokers
Please check if all brokers are evenly balanced in terms of number of
partitions.
Hi Gaurav,
you can make use of log4j.properties for managing log files.
Define a housekeeping policy and get rid of the excess files.
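For example, a hedged log4j.properties sketch (the appender name follows Kafka's stock config; the sizes are arbitrary) that caps how many server.log files pile up:

```properties
# Roll server.log by size and keep a bounded number of backups.
log4j.appender.kafkaAppender=org.apache.log4j.RollingFileAppender
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.MaxFileSize=100MB
log4j.appender.kafkaAppender.MaxBackupIndex=10
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
```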
On Sat, 22 Jul 2023 at 10:58 AM, Gaurav Pande wrote:
> Hi All,
>
> I am running Apache kafka 2.7.0 and I see that presently there are many
> server.logv files but ho
I suggest you practice this in some dev or test environment before
doing it in the prod environment.
And copy the data, not move it.
On Tue, 18 Jul 2023 at 8:22 PM, sunil chaudhari
wrote:
>
> Better you have a rollback plan. In case the new mountpoint has some issue
> then you shoul
o perform this copy activity do we need to stop Kafka broker as well?
>
> Regards,
> GP
>
> On Tue, 18 Jul, 2023, 16:24 sunil chaudhari,
> wrote:
>
> > Actually the broker refers to the base directory.
> > Example:
> > log.dirs=/var/log/mykafka/data/
> >
nts, meta.properties
> in existing file system all of these should also be copied to new disk/file
> system?
>
> Regards,
> GP
>
> On Tue, 18 Jul, 2023, 12:32 sunil chaudhari,
> wrote:
>
> > I think you can copy whole data directory to new one. I dont think there
>
I think you can copy the whole data directory to the new one. I don't think
there will be any loss or corruption.
On Tue, 18 Jul 2023 at 12:20 PM, Gaurav Pande wrote:
> Hi Guys,
>
> Any help on this query please.
>
> Regards,
> GP
>
> On Mon, 17 Jul, 2023, 21:00 Gaurav Pande, wrote:
>
> > Hi Guys,
> >
> > I
I will try to answer.
Rebalancing triggers when one or two consumers (clients) leave the group
for any reason.
The rule of thumb is that the number of partitions should be equal to the
number of consumer threads.
If there are 300 partitions assigned one thread each, it won't rebalance
until some consumer mark
controller and broker, and I totally accept downtime (stop service)
> >
> > So just want to ask for my case, single node, if I want to upgrade to 3.4
> > then start service under KRaft (get rid of ZK), what would be the steps?
> >
> > Thanks~
> >
> > O
How will you achieve zero downtime if you stop ZooKeeper and Kafka?
There must be some standard steps to stop the ZooKeeper nodes one by one and
start KRaft at the same time so that it is migrated gradually.
On Tue, 7 Mar 2023 at 9:26 AM, Zhenyu Wang wrote:
> Hi team,
>
> Here is a question about KRa
a and logs? If this helps,
> could you tell me how to do this?
>
>
>
> On Wed, Feb 15, 2023 at 8:29 PM sunil chaudhari <
> sunilmchaudhar...@gmail.com>
> wrote:
>
> > Remove all data and logs.
> > And start it.
> > Next time when you want to stop then
Remove all data and logs.
And start it.
Next time when you want to stop it, don't kill the process with the kill
command.
Stop it gracefully using kafka-server-stop under /bin.
Kafka needs the stop signal to do some cleanup operations before it stops, so
kill is not the option.
On Thu, 16 Feb 2023 at 6:49
I am just wondering: even if you read the documentation in your language,
ultimately server.config and server.log will be in English only, right?
On Tue, 31 Jan 2023 at 9:00 PM, Nengda Ouyang
wrote:
> I see the Kafka website only has one language version. Why doesn't it
> provide other languages?
>
- ---
>
> --override Optional property that should override values set in
>
>server.properties file
>
> --version    Print version information and exit.
>
>
>
>
> On Sat, Jan 28, 2023 at 12:58 AM sunil chaudhari <
Please try executing:
bin/kafka-server-start.sh --help
On Fri, 27 Jan 2023 at 8:47 PM, xiao cheng wrote:
> Hey all,
>
> I recently tried to run kafka locally on my macos again after a while (it
> used to work). I followed the quickstart guide from
> https://kafka.apache.org/quickstart.
>
> Ho
://cwiki.apache.org/confluence/plugins/servlet/mobile?contentId=73638194#content/view/73638194
Please feel free to contact me if you have any questions or concerns on
this matter.
Cheers,
Sunil Chaudhari.
On Mon, 5 Dec 2022 at 11:33 PM, Colin McCabe wrote:
> Hi,
>
> Sorry, we do no
I like the way you have written it without a full stop…. Lol
On Mon, 28 Nov 2022 at 8:48 PM, Schehrzade
wrote:
> I like the author Kafka and I was so impressed someone had written code or
> whatever I don’t know because I’m not from this country and I don’t know
> stuff about science and all but it was
Hi,
Are you running on Windows?
If yes, please check the documentation once.
There are different executables for Windows under /bin.
Also make sure you are using the correct version of the JDK for Windows.
Regards,
Sunil.
On Mon, 21 Nov 2022 at 2:26 AM, ravi r wrote:
> I downloaded
>
> kafka_2.13-3.3.1.tgz
Hi,
Use Confluent. It has an auto-balancing feature.
You don't need to do these manual things.
On Tue, 15 Nov 2022 at 7:22 PM, Pierre Coquentin
wrote:
> Hello Luke, and thank you for your answer.
> What I would have hoped for is something more automatic, something that
> will spread the load when a
Hi Lehar,
You are right; there is no better way in open source Kafka.
However, Confluent has something called an auto-rebalancing feature.
Can you check if there is a free version with this feature?
It starts balancing brokers automatically when it sees an uneven
distribution of partitions.
You can try two things.
Instead of localhost, can you publish the Kafka service on the hostname?
Since your client.auth is none, can you try removing the keystore from the
producer?
Regards,
Sunil.
On Fri, 7 Oct 2022 at 2:56 PM, Namita Jaokar
wrote:
> Hi All,
>
> I am trying to enable SSL in my kafka br
Hi Namita and Pasi,
Logstash as a middleman is good if and only if:
1. You don't need a buffer in between and you are OK with tight coupling
between source and destination.
2. There is a sufficient number of Logstash servers available in case of
large data volumes.
In case of Logstash, source and destinations sh
Hi Namita,
For moving data from Elasticsearch to Kafka you need an Elasticsearch source
connector. I guess this is not a supported connector. You may have to rely on
some community-developed connector where you may not get instant support.
https://github.com/DarioBalinzo/kafka-connect-elasticsearch-sou
Hi,
I have one topic with 300 partitions.
I have 4 KSQL instances with 8 threads each on 8-core machines.
The topic has a lag of around a million records.
Can I increase the number of threads to equal the number of partitions so
that the lag will be reduced?
Or do I have to reduce partitions to match the total number of threads?
You can try this if you know what Prometheus is and how it is installed and
configured.
https://www.confluent.io/blog/monitor-kafka-clusters-with-prometheus-grafana-and-confluent/
On Wed, 17 Aug 2022 at 2:25 AM, Peter Bukowinski wrote:
> Richard recently answered your query. A kafka cluster does not k
Hi Sowjanya,
I am not technical support from Kafka, but I can help you with this.
Recently I upgraded one of the Confluent versions, so I will try to help you.
On Tue, 16 Aug 2022 at 7:43 PM, sowjanya reddy <
sowjanyabairapured...@gmail.com> wrote:
> I Team,
Hi all,
Can anyone tell me in detail what exactly happens when I stop Kafka using
the kafka-stop script in /bin?
Does it release sessions or files, or what cleanup operations does it do?
Regards,
Sunil.
Hi
Good evening from India 😃
I have been using Kafka (Confluent Platform) for a long time.
It is actually a message broker, in simple terms, which can persist data on
disk for a short time, in days or weeks.
It acts like a buffer between message producer and message consumer.
Kafka reduces loading of consumers b
Rebalancing happens mainly for these reasons:
You restart a consumer
A consumer host is not reachable
You stop a consumer
All the above situations are fine when you have a sufficient number of
consumers (threads) to read from the available partitions and all consumers
are logically distributed across m
me consumer to run twice in a row. Renaming the
> consumer group does not help with anything related to message order.
>
> Thanks!
>
> -R
>
>
>
> On Thu, Jan 6, 2022 at 12:21 AM sunil chaudhari <
> sunilmchaudhar...@gmail.com>
> wrote:
>
> > hi,
> >
hi,
Why don't you provide a new name to the consumer group each time you restart
your consumer?
This new consumer group will not conflict with the earlier one and it will
be treated as a new consumer thread next time, getting all messages again.
Regards,
Sunil.
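A small sketch of the idea (the group-name pattern and console-consumer usage are illustrative; a group id the broker has never seen starts from auto.offset.reset):

```shell
# Derive a fresh consumer group id per restart; the timestamp suffix keeps
# each restart unique so the old group's committed offsets are not reused.
new_group_id() {
  echo "my-consumer-$(date +%s)"
}
# e.g. bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
#        --topic my-topic --group "$(new_group_id)" --from-beginning
```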
On Wed, 5 Jan 2022 at 10:45 PM, Roger Kasinsk
Hi,
You can try reducing min.insync.replicas to 1
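A sketch of that change with kafka-configs.sh (topic name and bootstrap address are placeholders). Note the trade-off: with acks=all, min.insync.replicas=1 keeps producers working while replicas are down, at the cost of durability.

```shell
# Lower min.insync.replicas as a dynamic topic-level config.
KAFKA_CONFIGS="${KAFKA_CONFIGS:-bin/kafka-configs.sh}"

set_min_isr() {
  "$KAFKA_CONFIGS" --bootstrap-server localhost:9092 \
    --entity-type topics --entity-name "$1" \
    --alter --add-config "min.insync.replicas=$2"
}
# Usage: set_min_isr my-topic 1
```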
On Tue, 9 Nov 2021 at 1:56 AM, Kafka Life wrote:
> Dear Kafka experts
>
> i have a 10 broker kafka cluster with all topics having replication factor
> as 3 and partition 50
>
> min.insync.replicas is 2.
>
>
> One broker went down for a hardw
Hi Kunal,
This article may help you.
https://betterprogramming.pub/kafka-acks-explained-c0515b3b707e
Cheers,
Sunil.
On Fri, 20 Aug 2021 at 8:11 PM, Kunal Goyal
wrote:
> Hello,
>
> We are exploring using Kafka for our application. Our requirement is that
> once we write some messages to Kafka,
Honestly, I didn't get this question.
Please elaborate.
On Sun, 4 Jul 2021 at 5:34 PM, M. Manna wrote:
> Hello,
>
> Is it currently possible to use a single endpoint for advertised.listeners,
> which is in front of all my brokers? the flow for example
>
> broker1-->| V
> broker2--
Hi,
There is something called the heartbeat consumer thread.
This thread, running on the consumer, keeps sending heartbeats at a regular
interval as per the heartbeat.interval.ms setting. It keeps telling the
broker that it is very much alive.
https://docs.confluent.io/platform/current/installation/configura
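The settings involved, as a hedged consumer config sketch (the values shown are the long-standing client defaults; check your client version's documentation):

```properties
# Heartbeats are sent from a background thread on the consumer:
heartbeat.interval.ms=3000   # how often the heartbeat is sent
session.timeout.ms=10000     # broker considers the consumer dead after this
                             # long without a heartbeat and rebalances
```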
Hi Marcus,
Your first understanding is correct, provided each “consumer” means a
“consumer thread”.
IMO, the second understanding about message distribution is incorrect because
there is something called max.poll.records for each consumer. It's 500 by
default.
And the time between 2 polls is also ver
I think similar issue is being discussed in other email thread.
On Fri, 25 Jun 2021 at 6:09 PM, meghna murthy wrote:
> Hi Team ,
>
> When ssl.client.auth=required is set, the server is sending a Certificate
> request with DN with junk certificates to client . Server has to send what
> certificates we
Hehehe,
Banging on the wrong door 🤣🤣🤣. I mean the wrong email.
On Sat, 5 Jun 2021 at 8:29 PM, Jayashree Sanyal
wrote:
> Please unsubscribe me . I have tried sending several mails with no success
> .
> =
> Please refer to https://northamerica.altran.com
gt;
> On Tue, 1 Jun 2021 at 20:57, sunil chaudhari <
> sunilmchaudhar...@gmail.com>:
>
> > Hi,
> > Suppose:
> > Maximum Topic size is set to 1 GB
> > Retention hours: 168
> > What happens in case topic size reaches the maximum size before 168
Hi,
Suppose:
Maximum topic size is set to 1 GB
Retention hours: 168
What happens if the topic size reaches the maximum before 168 hours?
Will it delete a few messages before their expiry even though they are
eligible to stay for 168 hours?
Regards,
Sunil.
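For reference, the two topic-level settings behind this question (values match the example above; note that retention.bytes applies per partition, not per topic, and whichever limit is reached first triggers deletion):

```properties
retention.ms=604800000      # 168 hours
retention.bytes=1073741824  # 1 GB -- enforced per partition, not per topic
# Segments past either bound are eligible for deletion, so messages can be
# removed before their 168 hours are up once the size limit is hit.
```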
>
> On Fri, 28 May 2021 at 15:05, sunil chaudhari <
> sunilmchaudhar...@gmail.com>:
>
> > Hello Ran,
> > Whatever link you have provided is the supported SINK connector.
> > It has all settings for SSL.
> >
> > The connector I am talkin
07:03, Ran Lupovich <
> ranlupov...@gmail.com
> >:
>
> >
> https://docs.confluent.io/kafka-connect-elasticsearch/current/security.html
> >
> > On Fri, 28 May 2021 at 07:00, sunil chaudhari <
> > sunilmchaudhar...@gmail.com>:
> &
f
> CA that is signing your certifcates
>
> On Thu, 27 May 2021 at 19:55, sunil chaudhari <
> sunilmchaudhar...@gmail.com>:
>
> > Hi Ran,
> > That problem is solved already.
> > If you read complete thread and see that last problem is about https
> &g
Hi Ran,
That problem is solved already.
If you read the complete thread you will see that the last problem is about
the HTTPS connection.
On Thu, 27 May 2021 at 8:01 PM, Ran Lupovich wrote:
> Try setting es.port = "9200" without quotes?
>
> On Thu, 27 May 2021 at 04:21,
Hello team,
Can anyone help me with this issue?
https://github.com/DarioBalinzo/kafka-connect-elasticsearch-source/issues/44
Regards,
Sunil.
Hi Neeraj,
I don't think there is a relation between the key and the partition in that sense.
On Sat, 8 May 2021 at 3:16 AM, Neeraj Vaidya
wrote:
> Hi all,
> I think I kind of know the answer but wanted to confirm.
> If I have multiple producers sending messages with the same key, will they
> end up in t
Hi,
By the way, why do you want to set it up on Windows?
You can set it up and run it on Linux and access the user interface from
Windows if you have the firewall opened for the running port. Default is 9021.
On Fri, 26 Mar 2021 at 10:39 AM, Satendra Negi
wrote:
> Hello Guys,
>
> Is there any way to run the
> If yes, is it safe to delete them. Do we require them later ?
> > >
> > > the url says,
> > > `make sure all the other servers in your ensemble are up and working.
> Use
> > > "stat" command on the command port to see if they are in good h
2181
>
> Q. Can you please let me know where can I set zookeeper logs for verbose
> mode for debugging the issue?
>
> Thanks,
>
> On 2020/11/22 01:26:52, sunil chaudhari
> wrote:
> > Hi,
> > Please check if it helps:
> >
> http://zookeeper.apache.org/doc/r
Hi,
Please check if it helps:
http://zookeeper.apache.org/doc/r3.4.5/zookeeperAdmin.html#sc_supportedPlatforms
You have similar symptoms in point 6 in your original email.
Try running it manually with below steps:
Clear all logs
Start zookeeper manually, watch logs for any error. Rectify it refer
Hi,
What's your real problem: Kafka/ZooKeeper failing, or you can't list topics?
Unless you start Kafka, you won't be able to list your topics.
On Sun, 22 Nov 2020 at 12:23 AM, prat 007 wrote:
> Hi SasiKumar,
>
> Thanks for your reply. I have pasted telnet output.
>
> Thanks,
>
> On 2020/11/21 17:08:51
I am not very sure about the isolation.level setting.
However, duplicates may be caused by commit failures on the consumer side.
Please do read about the max.poll.interval.ms and max.poll.records settings.
You may get some valuable inputs.
Recently I solved a duplicates issue in my consumer by tuning
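The two settings mentioned, as a hedged consumer config sketch (the values are illustrative, not recommendations):

```properties
# If processing a batch takes longer than max.poll.interval.ms, the consumer
# is kicked from the group and its commit fails, so records get redelivered.
max.poll.records=100          # fewer records per poll() ...
max.poll.interval.ms=300000   # ... so processing fits inside this window
```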
Hi Don,
Kafka is not meant to be a general-purpose database. It's a streaming
platform, so think about retention of Kafka messages rather than taking
backups.
Kafka itself has retention capability, so you can tune it as per your need.
Regards,
Sunil.
On Sun, 9 Aug 2020 at 5:13 PM, Dor Ben Dov wrot
Cheers,
>
> Liam Clarke-Hutchinson
>
> On Mon, Jun 22, 2020 at 4:16 PM sunil chaudhari <
> sunilmchaudhar...@gmail.com>
> wrote:
>
> > Manoj,
> > You mean I have to execute this command manually for all 350 topics which I
> > already have?
> > Is there
host:2181 --topic my-topic
> --partitions 6
>
> Thanks
> Manoj
>
>
>
> On 6/21/20, 7:38 PM, "sunil chaudhari"
> wrote:
>
> [External]
>
>
> Hi,
> I already have 350 topics created. Please guide me how can I do that
> for
>
pic.
>
> Kind regards,
>
> Liam Clarke-Hutchinson
>
> On Mon, Jun 22, 2020 at 3:16 AM sunil chaudhari <
> sunilmchaudhar...@gmail.com>
> wrote:
>
> > Hi,
> > I want to change number of partitions for all topics.
> > How can I change that? Is it se
Hi,
I want to change the number of partitions for all topics.
How can I change that? Is it server.properties which I need to change?
Then, in that case, I have to restart the broker, right?
I checked from Confluent Control Center; there is no option to change
partitions.
Please advise.
Regards,
Sunil
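A sketch of doing this from the CLI instead (no broker restart needed; `KAFKA_TOPICS` and `BOOTSTRAP` are placeholders). Note that Kafka only lets you increase a partition count, never decrease it.

```shell
# Raise the partition count of every topic to a target value by listing
# topics and altering each one in turn.
KAFKA_TOPICS="${KAFKA_TOPICS:-bin/kafka-topics.sh}"
BOOTSTRAP="${BOOTSTRAP:-localhost:9092}"

alter_all_topics() {
  local partitions="$1" topic
  "$KAFKA_TOPICS" --bootstrap-server "$BOOTSTRAP" --list |
  while IFS= read -r topic; do
    "$KAFKA_TOPICS" --bootstrap-server "$BOOTSTRAP" --alter \
      --topic "$topic" --partitions "$partitions"
  done
}
# Usage: alter_all_topics 6
```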
Hi,
I was going through this document:
https://docs.confluent.io/current/kafka/deployment.html
“ does not require setting heap sizes more than 6 GB. This will result in a
file system cache of up to 28-30 GB on a 32 GB machine.”
Can someone please shed some light on the above statement? It's a bit unclear
to me.
This is awesome. Thanks Ricardo.
On Fri, 19 Jun 2020 at 9:31 PM, Ricardo Ferreira
wrote:
> Gérald,
>
> Typically you should set the `num.io.threads` to something greater than
> the # of disks since data hits the page cache and the disk. Using the
> default of 8 when you have a JBOD of 12 attache
pology, how Logstash connect to Kafka, and
> how the code is implemented.
>
> Thanks,
>
> -- Ricardo
> On 6/19/20 7:13 AM, sunil chaudhari wrote:
>
> Hi,
> I am using kafka as a broker in my event data pipeline.
> Filebeat as producer
> Logstash as consumer.
>
>
>
Hi,
I am using Kafka as a broker in my event data pipeline:
Filebeat as producer,
Logstash as consumer.
Filebeat simply pushes to Kafka.
Logstash has 3 instances.
Each instance has a consumer group, say consumer_mytopic, which reads from
mytopic.
mytopic has 3 partitions and 2 replicas.
As per my u
Simple:
topic auto-creation is ON.
As soon as it encounters a producer's or consumer's request for some topic,
it creates that topic automatically.
This is a very common question and problem people come across if they are
new. The same happened with me. 😄
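The broker setting behind this behaviour, as a server.properties fragment (it defaults to true):

```properties
# With this on, any metadata request for an unknown topic creates it.
auto.create.topics.enable=true
# Set to false if deleted topics must stay deleted even while clients
# still reference them.
```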
On Fri, 22 May 2020 at 3:28 PM, Jiamei Xie wrot
Hi,
I don't know whether this question is relevant to this group;
sorry if I posted in the wrong group.
I want to disable the OPTIONS method in Confluent Control Center running on
port 9091.
Can someone guide me on the required configuration?
Regards,
Sunil.
Again:
A consumer can have one or more consumer threads.
The analogy of 12 partitions and 4 consumers is true when each consumer has
3 consumer threads.
Please don't skip the important factor of the “consumer thread” in this matter.
If you run each consumer with threads then you may need at most 3 consumers
Hi Prasad,
Want to correct a bit: it's not one consumer per partition,
it's one consumer thread per partition.
On Thu, 26 Mar 2020 at 4:49 PM, Prasad Suhas Shembekar <
ps00516...@techmahindra.com> wrote:
> Hi,
>
> I am using Apache Kafka as a Message Broker in our application. The
> producers an
If you are not aware of Grafana and Prometheus, then:
You can enable JMX_PORT on the Kafka broker.
Install and run Kafka Manager.
https://github.com/lemastero/kafka-manager
There you can see metrics on the Kafka Manager UI.
But yes, Grafana with Prometheus is the better one.
On Thu, 19 Mar 2020 at 12:25 PM, 张祥
eeper is getting
close/disconnected when shutting down the Broker?
Cheers!
On Thu, Mar 12, 2020 at 10:36 AM Sunil CHAUDHARI
mailto:sunilchaudh...@dbs.com.invalid>> wrote:
> Hi,
> Whenever I try to do rolling restart and start one of my broker, I get
> this error.
>
Hi,
Whenever I try to do a rolling restart and start one of my brokers, I get
this error.
Can anyone help me to get rid of it?
[2020-03-12 18:33:34,661] INFO Logs loading complete in 1553 ms.
(kafka.log.LogManager)
[2020-03-12 18:33:34,677] INFO Starting log cleanup with a period of 30 ms.
(
Hi Peter,
That was a great explanation.
However, I have a question about the last stage, where you mentioned updating
the ZooKeeper server in the services where a single ZooKeeper is used.
Why do I need to do that?
Is it because only a single ZooKeeper is used and you want to make sure of
high availability of
Hello Experts,
Any thoughts on this?
From: Sunil CHAUDHARI
Sent: Tuesday, March 3, 2020 5:46 PM
To: users@kafka.apache.org
Subject: Please help: How to print --reporting-interval in the perf metrics?
Hi,
I want to test consumer perf using kafka-consumer-perf-test.sh
I am running below command
]
From: Sunil CHAUDHARI
Sent: Tuesday, March 3, 2020 2:43 PM
To: users@kafka.apache.org
Subject: Producer-Perf-test
Hi,
I have done performance testing of Kafka cluster using
kafka-producer-perf-test.sh
I created diff type of topics and did perf testing. Example: MB1P1R= MB is my
topic name with
Hi Himanshu,
Sorry, but I pasted from Excel. I don't know how it got messed up.
Will resend it.
On Tue, 3 Mar 2020 at 6:48 PM, Himanshu Shukla
wrote:
> could you please share the result in some proper way? Each field is line by
> line.
>
> On Tue, Mar 3, 2020 at 2:43 PM Sunil CHAUDH
Hi,
I want to test consumer perf using kafka-consumer-perf-test.sh
I am running below command:
./kafka-consumer-perf-test.sh --broker-list localhost:9092 --topic MB3P3R
--messages 65495 --num-fetch-threads 9 --print-metrics --reporting-interval
5000 --show-detailed-stats > MB3P3R-consumer-Perf
Hi,
I have done performance testing of the Kafka cluster using
kafka-producer-perf-test.sh.
I created different types of topics and did perf testing. Example: MB1P1R =
MB is my topic name, with 1 partition and 1 replica.
I have a 3-node cluster.
My observations:
* When I increased partitions and kept re
Hi,
I am now in the process of deciding partitions and replicas for my cluster.
I am making use of the perf test utilities and they really help a lot.
Just measure perf by creating multiple topics with the same number of records
but different partitions and replicas.
Then compare the throughput and also look at
Hi All,
Sorry to bother you all.
It was simple. 😊
Just put one line in the required .sh file:
export JMX_PORT=
and it will run 😊
thanks,
Sunil.
From: Sunil CHAUDHARI
Sent: Friday, February 28, 2020 10:06 AM
To: users@kafka.apache.org
Subject: HELP in Usage of JMX port in Kafka
Hi all,
I have used
Hi all,
I have set JMX_PORT 9099 as an environment variable and started Kafka.
There is no problem till now. I can see metrics on the kafka-manager console.
This is fine.
However, when I run kafka-consumer-perf-test.sh and kafka-producer-perf-test.sh
and similar utilities under /bin, I get an error gi
Hi,
We have one case where we want to send messages from PCF to Kafka endpoints.
Is it possible? How?
Regards,
Sunil.
CONFIDENTIAL NOTE:
The information contained in this email is intended only for the use of the
individual or entity named above and may contain information that is
privileged, c
Hi,
I just configured SSL on 3 brokers
Here is my configuration:
I just replaced hostnames with dummy hostname.
inter.broker.listener.name=CLIENT
listeners=CLIENT://dummyhost.mycom.com:9092,SSL://dummyhost.mycom.com:9093
advertised.listeners=CLIENT://dummyhost.mycom.com:9092
#security.inter.broker
Hello experts,
I am facing errors while starting broker with advertised.listeners.
Can someone send me the sample configuration where all below settings are
mentioned with dummy hosts names alongwith SSL?
Listeners
advertised.listeners
inter.broker.listener.name
thanks
Sunil.
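A hedged sample combining those settings with dummy hostnames (the listener name CLIENT, ports, and keystore paths are placeholders, not a verified working config):

```properties
listeners=CLIENT://dummyhost.mycom.com:9092,SSL://dummyhost.mycom.com:9093
advertised.listeners=CLIENT://dummyhost.mycom.com:9092,SSL://dummyhost.mycom.com:9093
listener.security.protocol.map=CLIENT:PLAINTEXT,SSL:SSL
inter.broker.listener.name=CLIENT
ssl.keystore.location=/etc/kafka/ssl/kafka.keystore.jks
ssl.keystore.password=changeit
ssl.truststore.location=/etc/kafka/ssl/kafka.truststore.jks
ssl.truststore.password=changeit
```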
Hi,
I have run the consumer performance test on my Kafka cluster.
Can you please help me understand the parameters below? Basically, I don't
know the units of those parameters, and I can't assume them blindly.
They are only given in 2 columns, the "Metric Name" and its "Value".
Metric Name
sent
Records per second, avg latency, etc.
But in the case of consumer perf, it's not showing the throughput. How many
records are read per second?
Sunil Chaudhari
When you say change the port, what does that
mean?
The port, as in the Kafka JMX port?
On Fri, 21 Feb 2020 at 5:18 PM, Karolis Pocius
wrote:
> Choose a different port or check what's already listening on 9099 using
> something like: `ss -tunapl | grep 9099`
>
> On Fri, Feb 21, 2020 at 1:08 P
Hi,
I just enabled the JMX port in the Kafka broker.
Since then I am not able to run the utilities under /bin.
For example, when I run
./kafka-topics.sh --create
it throws a BindException: port already in use, 9099.
Before, it was running.
The same thing is happening for the perf test utilities under /bin.
Please
Hi Sunil,
Looks like Metricbeats has a Jolokia module that will capture JMX exposed
metrics for you:
https://www.elastic.co/blog/brewing-in-beats-add-support-for-jolokia-lmx
Kind regards,
Liam Clarke
On Fri, Feb 21, 2020 at 6:16 PM Sunil CHAUDHARI
mailto:sunil
ion on the Internet, have a Google :)
On Thu, 20 Feb. 2020, 11:51 pm Sunil CHAUDHARI,
wrote:
> Hi Liam Clarke,
> Sorry but this is bit unclear for me.
> Can you please elaborate your answer? I am beginner to Kafka.
> " Producers emit metrics via JMX ":
> - How t
l help you, assuming that your producers
are using a round robin partition assignment strategy, you could divide this
metric by your number of partitions,
kafka.producer:type=producer-metrics,client-id=(.+),topic=(.+)record-send-rate
Kind regards,
Liam Clarke
On Thu, 20 Feb. 2020, 5:5
Hi
I was referring to the article by Mr. Jun Rao about partitions in a Kafka
cluster.
https://www.confluent.io/blog/how-choose-number-topics-partitions-kafka-cluster/
"A rough formula for picking the number of partitions is based on throughput.
You measure the throughput that you can achieve on
Hi,
I have enabled SSL on zookeeper nodes using reference:
https://zookeeper.apache.org/doc/r3.5.6/zookeeperAdmin.html#Quorum+TLS
After doing this I started getting logs in zookeeper logs. 10.xx.yy.ss this
is my kafka broker
Now I want to enable SSL communication between Kafka and the zookeepe
Hi all,
Please help me find some user interface for the management and administration
of my Kafka cluster.
There are some open-source ones available, but they either have some
dependencies or need to be built before running.
Is there any pre-built (ready-to-use) package which I can just download an