>
> WatchedEvent state:SyncConnected type:None path:null
> Node does not exist: /brokers/ids/0
>
> On Wed, Aug 9, 2017 at 9:56 AM, M. Manna <manme...@gmail.com> wrote:
>
> > Could you also provide the following outputs (assuming you have only 1
> > broker an
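For reference, a hedged sketch of how broker registration can be checked, assuming the zookeeper-shell.sh script that ships with Kafka and a ZooKeeper on localhost:2181:

bin/zookeeper-shell.sh localhost:2181
ls /brokers/ids
get /brokers/ids/0

ls /brokers/ids lists the ids of all live brokers, and get /brokers/ids/0 prints the registration JSON for broker 0. An empty /brokers/ids, or the "Node does not exist: /brokers/ids/0" output quoted above, means the broker never registered or has since lost its ZooKeeper session.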
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> org.apache.kafka.common.errors.TimeoutException: Expiring 2 record(s) for
> test02-1: 1543 ms has passed since batch creation plus linger time
>
>
>
> By the way, where to set "-Djavax.security.debug=all"
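Regarding the TimeoutException quoted above: a rough sketch of the producer settings that govern this expiry (standard producer config keys; the values below are only illustrative):

linger.ms=5
batch.size=16384
request.timeout.ms=30000
retries=3

Roughly speaking, a batch is expired once request.timeout.ms (plus linger.ms) elapses before the partition leader acknowledges it, so the error usually points at an unreachable or overloaded leader, or at a timeout that is too tight for the environment.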
This is due to the partitions you are consuming from. The documentation section
explains what needs to be done.
On 10 August 2017 at 11:43, Ascot Moss wrote:
> A question:
>
> (input order)
> test1
> test2
> test3
> test 2017-08-10
> |2017-08-10 test1
> 2017-08-10 test2
>
>
>
Is there any chance we can record this email thread in Kafka?
This is Epic
On 7 Jul 2017 7:15 pm, "Kim Chew" wrote:
> Sorry Marcelo, Apache Kafka is a piece of computer software; you are
> confusing it with Franz.
>
> On Fri, Jul 7, 2017 at 10:06 AM, Marcelo Vinicius
will
> impact? Do you mean something related to log files renaming?
>
> In documentation, there is no pointer with respect to I/O.
>
> Kindly advise.
>
> Thanks
> Harish
>
>
>
> On Saturday, July 15, 2017, 1:59:58 PM GMT+5:30, M. Manna <
> manme...@gmail.com>
This is a known issue. You cannot delete shared (or open) files on
Windows, but UNIX/Linux is okay.
Please read the Kafka online documentation on Windows usage. A relevant JIRA
case is KAFKA-1194.
On 17 July 2017 at 11:31, Dharma Raj wrote:
> Hi,
>
> I am using kafka as
roduction until the disk
> I/O issues (for which you have given a workaround) are resolved officially?
>
>
> Thanks
> Harish
>
> On Monday, July 17, 2017, 5:31:11 PM GMT+5:30, M. Manna <
> manme...@gmail.com> wrote:
>
>
> The workaround is to download the s
eanup the logs from a separate utility.
>
> My only concern is that - as the documentation suggests, running Kafka on Windows
> has some issues, hence we are not able to decide on using it on a Windows production
> bed.
>
> Thanks
> Harish
>
>
>
> On Mon, 7/17/17, M. Manna <manme...@gmail.co
Hello,
I recently started using the 0.10.2.1 distro on Ubuntu Linux and am trying
some basic startup for a 3 node cluster (using the setup mentioned in the
documentation).
The following is a custom startup that I have written (I have also tried
this by calling them individually in the same order):
have our own
> consumers and own logic to place the data into SQL.
>
> As you suggested, we will do some basic analysis by running Kafka for
> longer duration continuously along with log cleanup activities based on SLA.
>
>
> Thanks
> Harish
>
> On Mon, 7/17/17, M. Mann
.re...@gmail.com> wrote:
> pl check http://kafka.apache.org/documentation.html#os
>
> On Fri, Jul 7, 2017 at 1:19 PM, M. Manna <manme...@gmail.com> wrote:
>
> > Hello Sir,
> >
> >
> > Kafka is not well tested on Windows platform. There are some issues
>
That depends.
If auto creation of non-existent topics is enabled (check the docs), then it will
simply use the minimum partition and replication settings defined in the
broker config to create the topic. If auto creation is disabled, your
consumer group won't do anything.
With auto creation enabled - it's the
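A minimal sketch of the broker-side settings involved (standard server.properties keys; the values shown are just defaults):

auto.create.topics.enable=true
num.partitions=1
default.replication.factor=1

With these defaults, the first metadata request for a missing topic creates it with one partition and replication factor one; with auto.create.topics.enable=false the client keeps getting unknown-topic errors/warnings until the topic is created explicitly.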
> Will the consumer receive those messages in this scenario?
>
>
>
> On 8 Jul 2017 4:38 a.m., "M. Manna" <manme...@gmail.com> wrote:
>
> That depends.
>
> If auto creation of non-existent topics is enabled (check the docs), then it will
> simply use the minimu
ache.org
Sent: 7/8/2017 4:27:55 PM
Subject: Re: Kafka behavior when consuming a topic which doesn't exist?
Oh gotcha, thanks. So a topic will be created if topic creation is
enabled.
On Sat, Jul 8, 2017 at 8:14 PM, M. Manna <manme...@gmail.com> wrote:
Please check my previous email.
O
Yes sorry, that was the reason. Thanks for pointing that out :)
On 7 July 2017 at 10:03, Hu Xi <huxi...@hotmail.com> wrote:
> “Caused by: java.net.UnknownHostException: localhsot" - localhsot? A
> typo?
>
>
> ________
> From: M. Mann
Hello,
As part of my PoC I wanted to test a setup with two Windows 10 boxes where
1) One box will have the ZK
2) Other box will have Kafka
The idea was to physically separate zookeeper and Kafka to isolate issues.
For trial, I set it up on my Windows 10 machine where I used the
Documentation to
kafka-consumer-groups.sh --bootstrap-server broker:9092 --new-consumer
--group service-group --describe
How many brokers do you have in the cluster? If you have more than one,
list them all as a comma-separated list with --bootstrap-server.
Also, could you paste some results from the console printout?
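For illustration, with three brokers the describe call would look something like this (host names are placeholders):

bin/kafka-consumer-groups.sh \
  --bootstrap-server broker1:9092,broker2:9092,broker3:9092 \
  --group service-group --describe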
Hello,
I have been running some stress tests regarding log retention whilst
writing to kafka continuously every 5 seconds.
I have set the following properties (apart from other mandatory ones):
log.retention.minutes=3
log.retention.hours.
log.retention.bytes=26214400
log.segment.bytes=10485760
Kafka has some issues with I/O-level admin functionality, e.g. file
deletion/renaming doesn't work out of the box as it does on Linux. However, the user
base is growing and there's momentum to support Windows seamlessly,
like Linux.
I have been running some stress tests on Windows and the performance is
Is this from 0.10.2.1? I have been running on both Windows and Linux but
cannot see any issues.
Anyone else?
On Tue, 18 Jul 2017 at 3:31 pm, John Yost wrote:
> I saw this recently as well. This could result from either really long GC
> pauses or slow Zookeeper responses.
Hello,
This might be too obvious for some people, but just thinking out loud here.
So we need the recommended 3 node cluster to tolerate a single point of
failure. I am trying to deploy a 3 node cluster (3 ZKs and 3 brokers) on
Linux (or even Windows, doesn't matter here).
Under the circumstance
Hello,
I suppose I must confirm that I have read the following:
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-WhydoIseeerror%22Shouldnotsetlogendoffsetonpartition%22inthebrokerlog?
I have a 3 node cluster (3 zookeepers and 3 brokers - 3 different physical
servers). my
You might want to read this first
http://www.apache.org/licenses/exports/
On 24 July 2017 at 14:33, DUGAN, TIMOTHY K wrote:
> Hello,
>
>
>
> We are looking for export compliance information related to the following
> ASF products:
>
>
>
>- Apache Maven 3.3.3, 3.1.0, 3.0.5
Hello,
Please forgive me for asking too simple a question (since I haven't done any
Scala development).
I am trying to see if a fix works for Windows OS. I have made some changes
in the core package and am trying to run the unitTest Gradle command. The test already
exists in existing Kafka source code (so i
Hello,
After running an emulated 3 node cluster on my local pc, I am now trying to
deploy a 2-node cluster on two of my remote machines. The following is the
server configuration:
broker.id=1
advertised.listeners=PLAINTEXT://0.0.0.0:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL
> AFAIK you can't deploy a 2 node cluster, or you'll have split brain issues
> "out of the box"
>
>
>
> On 19/07/17 15:11, M. Manna wrote:
>
>> Hello,
>>
>> After running an emulated 3 node cluster on my local pc, I am now trying
>> to
>>
wrote:
> To be honest, my advice would be to consider docker
>
> - install docker
> - create a swarm with both nodes
> - deploy a zookeeper service with 3 instances
> - deploy a kafka service with 3 instances
>
>
>
> On 19/07/17 15:19, M. Manna wrote:
>
>
Just to add (in case the platform is Windows):
For Windows based cluster implementations, log/topic cleanup doesn't work
out of the box. Users are more or less aware of it, and do their own
maintenance as a workaround.
If you have issues on Topic deletion not working properly on Windows (i.e.
Please forgive my autocorrect options :(
On 28 Jun 2017 8:06 pm, "M. Manna" <manme...@gmail.com> wrote:
Hi,
OS is not an issue, I have a 3 broker setup and I have experienced this too.
How are you starting the brokers? Is this a concurrent start or have you
got some startup s
t;..." JMX_PORT= SCALA_VERSION=2.12.2 JAVA_HOME=/usr
> $KAFKA_INSTALL_PATH/bin/kafka-server-start.sh -daemon ..
> ```
>
> On Wed, Jun 28, 2017 at 1:08 PM, M. Manna <manme...@gmail.com> wrote:
>
> > Please forgive my autocorrect options :(
> >
> > On
fka
> again, however it's just occasionally on boot one fails to start with this
> error.
>
> On Wed, Jun 28, 2017 at 1:25 PM, M. Manna <manme...@gmail.com> wrote:
>
> > Aren't you using the same JMX port for all brokers? I don't think it
> will
> > work for more
Hi,
OS is not an issue, I have a 3 broker setup and I have experienced this too.
How are you starting the brokers? Is this a concurrent start or have you
got some startup script to bring up all the brokers?
KR,
On 28 Jun 2017 6:47 pm, "Eric Coan" wrote:
> Hello,
>
>
Hi,
Does anyone know if this will get addressed anytime soon? Any workarounds?
The functionality has been broken for a while.
KR,
Hello,
I had a question regarding the records(TopicPartition partition) method in
ConsumerRecords.
I am trying to use auto.offset.reset=earliest and combine it with manual
offset control.
I saw this sample code under "Manual Offset"
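For context, a sketch along the lines of that docs sample (standard Java consumer API; the topic and group id are placeholders). "earliest" only applies when the group has no committed offset, after which the manually committed positions take over:

import java.util.Collections;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ManualOffsetExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("group.id", "my-group");                   // placeholder group
        props.put("enable.auto.commit", "false");            // manual offset control
        props.put("auto.offset.reset", "earliest");          // used only when no committed offset exists
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));   // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(500);
                for (TopicPartition partition : records.partitions()) {
                    List<ConsumerRecord<String, String>> partitionRecords = records.records(partition);
                    for (ConsumerRecord<String, String> record : partitionRecords) {
                        System.out.println(record.offset() + ": " + record.value());
                    }
                    // commit the offset after the last processed record, per partition
                    long lastOffset = partitionRecords.get(partitionRecords.size() - 1).offset();
                    consumer.commitSync(Collections.singletonMap(
                            partition, new OffsetAndMetadata(lastOffset + 1)));
                }
            }
        }
    }
}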
Hi,
I have sent numerous emails in the past about the same issue, but no
response so far.
I was wondering if we will get a patch. I am working on a synchronisation
PoC which is reliant on Log cleanup to be successful every day. I have got
auto.offset.reset=earliest and the
Please share your server configuration. How are you advertising the
listeners?
On 29 Jun 2017 13:44, "Anton Mushin" wrote:
> Hi everyone,
> I am using kafka_2.11-0.10.1.1, and I'm trying to check that it works. I have
> Zookeeper and Kafka on one host.
> I'm calling console
gs
> num.partitions=1
> num.recovery.threads.per.data.dir=1
> log.retention.hours=168
> log.segment.bytes=1073741824
> log.retention.check.interval.ms=30
> zookeeper.connect=localhost:2181
> zookeeper.connection.timeout.ms=6000
>
> Best,
> Anton
>
> -Ori
This is quite vague.
What commands have you executed?
What do you refer to by open files? Is it the log partition or consumer
offsets?
On 5 Jul 2017 3:21 pm, "Satyavathi Anasuri"
wrote:
> Hi,
>I have created a topic with 500 partitions in 3 node
ested on Windows platform. There are some issues running
> on
> Windows. It is recommended to run on Linux machines.
>
> On Thu, Jul 6, 2017 at 9:49 PM, M. Manna <manme...@gmail.com> wrote:
>
> > Hi,
> >
> > I have sent numerous emails in the past about the same issue, bu
Hi,
Could you post your zookeeper.properties and server.properties file details?
Thanks,
On 9 August 2017 at 07:08, shyla deshpande wrote:
> I have even tried deleting the contents of log.dirs and dataDir before
> starting the zookeeper and kafka server, still no
Mahesh,
Thanks for sharing the info. Is having "Exactly" 8 brokers a "Must" for
you? Because one of them is technically unnecessary, since your cluster can
only tolerate 3 failures (even with 7 brokers).
Could you please try the following:
1) Stop the cluster.
2) Increase the number of
sed it).
Kindest Regards,
M. Manna
pher is ECDHE-RSA-AES256-SHA
>
> PSK identity hint: None
>
> Start Time: 1502309645
>
> Timeout : 7200 (sec)
>
> Verify return code: 19 (self signed certificate in certificate chain)
>
> ---
>
> Regards
>
> On Wed, Aug 9, 2017 at 10:29 PM,
r.bytes=102400
> socket.receive.buffer.bytes=102400
> socket.request.max.bytes=104857600
> log.dirs=/tmp/kafka-logs
> num.partitions=1
> log.retention.hours=168
> log.segment.bytes=1073741824
> log.retention.check.interval.ms=30
> zookeeper.connect=localhost:2181
> zookeeper.connection.time
Hello,
I have my test/development certificates created with X509 request extensions,
and the SAN names cover:
DNS.1 localhost
> DNS.2 *.testsystem.net
To make things more practical, I have set the advertised.listeners and
listeners to ONLY SSL://localhost:9093.
I have verified the certificates and
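For reference, a sketch of the broker-side properties for an SSL-only listener matching that setup (standard server.properties keys; paths and passwords are placeholders):

listeners=SSL://localhost:9093
advertised.listeners=SSL://localhost:9093
security.inter.broker.protocol=SSL
ssl.keystore.location=/path/to/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/path/to/kafka.server.truststore.jks
ssl.truststore.password=changeit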
ontroller.message.queue.size=10
> >>>
> >>> default.replication.factor=3
> >>>
> >>> log.dirs=/usr/log/kafka
> >>>
> >>> kafka.logs.dir=/usr/log/kafka
> >>>
> >>> num.partitions=20
> >>>
> >&
-Djavax.security.debug=all
Please share your producer/broker configs with us.
Kindest Regards,
M. Manna
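On the earlier question of where to set "-Djavax.security.debug=all": one way (a sketch, assuming the stock scripts, which pass KAFKA_OPTS through to the JVM via kafka-run-class) is:

export KAFKA_OPTS="-Djavax.security.debug=all"
bin/kafka-server-start.sh config/server.properties

The same KAFKA_OPTS variable applies to the console producer/consumer scripts as well, since they all go through kafka-run-class.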
On 9 August 2017 at 14:38, Ascot Moss <ascot.m...@gmail.com> wrote:
> Hi,
>
>
> I have setup Kafka 0.10.2.1 with SSL.
>
>
> Check Status:
>
> openssl s_client -deb
Hello,
I have actually formatted the question with sufficient information here:
https://stackoverflow.com/questions/45512063/console-producer-error-after-implementing-with-tls-ssl
In Summary, I have tried to debug this and it looks like the certificates
are being printed and recognised as
I tried this numerous times (regardless of PLAINTEXT/SSL connections). The
setup is a single node (1 ZK, 1 broker) as mentioned in the
startup docs on the Kafka site.
1) Follow kafka documentation to initialise all Zk and Kafka props.
2) Enable automatic topic creation with a minimum
Hello,
I wanted to add TLS/SSL to my kafka setup. To start with, I went through
the kafka SSL documentation on the main website. I have done the following:
1) Imported the signed certificates to keystore
2) Imported the root CA
3) Verified that the keystore and trust store password are correct by
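For reference, a sketch of the keytool commands behind steps 1-3 (aliases and file names follow the examples in the Kafka SSL documentation and are placeholders):

keytool -keystore kafka.server.truststore.jks -alias CARoot -import -file ca-cert
keytool -keystore kafka.server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore kafka.server.keystore.jks -alias localhost -import -file cert-signed
keytool -keystore kafka.server.keystore.jks -list

The final -list command is one simple way to verify the keystore password, since it prompts for it and fails if it is wrong.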
I guess it's kinda late since I am already in transit for work.
Is there any plan to do something in Europe e.g. London or some other place?
On 18 Aug 2017 4:41 pm, "Gwen Shapira" wrote:
> Hi,
>
> I figured everyone in this list kinda cares about Kafka, so just making
> sure
Hello,
This question is more of a request for suggestions, since I am already using
the plain API (Producer/Consumer) and am trying to explore either the Stream or Connect
API to solve my problem.
I need to perform ad-hoc reads from a different server and this is not
event-driven. For example:
1) User Logs in
wrote:
> This is possible. Once you have records in your put method, it's up to your
> logic how you redirect them to multiple JDBC connections for
> insertion.
>
> In my use case I have implemented many-to-many sources and sinks.
>
> Regards,
> Srijith
>
> On 13-
; detailed in terms of implementation and design details.
> >
> > I want to dabble with the code but given the complexity of the code, a
> good
> > starter guide would be helpful.
> >
> > Thanks.
> >
> > On Wed, Sep 20, 2017 at 9:53 AM, M. Ma
There's a video where Jay Kreps talks about how Kafka works - YouTube has
it in the top 5 under "How Kafka Works".
On 20 Sep 2017 5:49 pm, "Raghav" wrote:
> Hi
>
> Just wondering if there is any video/blog that goes over Kafka Internal and
> under the hood design and
pkg is meant to be run on Linux. I guess there is a lot of manual
conversion I need to do here?
On 16 September 2017 at 21:43, Ted Yu <yuzhih...@gmail.com> wrote:
> Have you looked at https://github.com/confluentinc/kafka-connect-jdbc ?
>
> On Sat, Sep 16, 2017 at 1:39 PM,
> give each sink a different connect string. That should do what you want
> instantly
>
> Best Jan
>
>
> On 16.09.2017 22:51, M. Manna wrote:
>
>> Yes I have, I do need to build and run Schema Registry as a pre-requisite
>> isn't that correct? Because the Qui
hidden_ip:55091': Unable to parse PLAINTEXT:hidden_ip:55091 to a
> broker endpoint
>
> > On 11 Sep 2017, at 18:06, M. Manna <manme...@gmail.com> wrote:
> >
> > Could you try with the following:
> >
> > listeners=PLAINTEXT:hi
Could you try with the following:
listeners=PLAINTEXT:hidden_ip:55091
advertised.listeners=PLAINTEXT://localhost:55091
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL
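As a side note, listener entries are expected in URI form, protocol://host:port (hidden_ip below just stands in for the real address):

listeners=PLAINTEXT://hidden_ip:55091
advertised.listeners=PLAINTEXT://hidden_ip:55091

The "Unable to parse PLAINTEXT:hidden_ip:55091 to a broker endpoint" error quoted earlier is typically what gets reported when the "//" is missing.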
"Until some weeks ago everything worked fine" - means you have accidentaly
changed something.
On 11 September 2017 at
Hi All,
We are running a PoC which consists of some DB level synchronisation
between two servers. The idea is that we will synchronise a set of tables
either at a certain interval or just using initial settings from Kafka
Connect API.
Assuming that Connect is the correct route for us, does
Hi Awadhesh,
This seems like your certificate import order (intermediate - root) is
jumbled up. Could you kindly follow the instructions on confluent.io where
Ismael Juma has provided a nice set of steps to follow for SSL setup.
Is mirror maker something you can utilise?
On 21 Aug 2017 4:03 pm, "Nomar Morado" wrote:
Hi
My brokers are currently installed on servers that are end of life.
What is the recommended way of migrating them over to new servers?
Thanks
Are you planning to close these at any point? I'm not seeing any close()
calls.
On 17 Nov 2017 6:09 pm, "Ranjit Kumar" wrote:
Hi All,
I am writing a Java program with a thread pool of maybe 100 threads, and all
threads will have kafka producers
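To illustrate the point (standard Java producer API; the broker address and topic are placeholders), each per-thread producer should be closed when its work is done, e.g. via try-with-resources:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerTask implements Runnable {
    @Override
    public void run() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // try-with-resources guarantees close() runs, flushing buffered records
        // and releasing the producer's I/O thread and sockets
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));   // placeholder topic
        }
    }
}

With 100 threads it is usually better to share one thread-safe producer instance across threads than to create 100 of them, but either way every instance created must eventually be closed.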
site. But I couldn't
connect to port 5005 at all.
On 13 May 2018 at 10:00, M. Manna <manme...@gmail.com> wrote:
> Hi Ted,
>
> I highly appreciate the response over the weekend, and thanks for pointing
> out the JIRAs.
>
> I don't believe the processes are responsible
Hello,
I have followed the graceful shutdown process by using the following (in
addition to the default controlled.shutdown.enable)
controlled.shutdown.max.retries=10
controlled.shutdown.retry.backoff.ms=3000
I am always having issues where not all the brokers are shutting down
gracefully. And it's
Have you verified 1-5 run clean?
> Martin
> __
>
>
>
> --
> *From:* M. Manna <manme...@gmail.com>
> *Sent:* Wednesday, May 9, 2018 9:51 AM
> *To:* Kafka Users
> *Subject:* Log Cleanup Support for Windows [KAFKA-1194]
>
Hi,
Assuming I got your question right - in a 3-node setup, that's a "Cluster
down" scenario if one of your brokers goes down.
The rule of thumb in DComp is that ceil(N/2)-1 total failures are allowed -
where N is your node count.
So what you are testing for will probably require 2 more nodes.
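As a quick worked example of that rule:

N = 3: ceil(3/2) - 1 = 2 - 1 = 1 tolerated failure
N = 5: ceil(5/2) - 1 = 3 - 1 = 2 tolerated failures

So tolerating the loss of two nodes needs a 5-node setup, which is where the "2 more nodes" above comes from.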
Regards,
Hello,
Based on the Quick start on Kafka site, I was trying to use the
kafka-consumer-groups command line script
PS C:\kafka_2.11-1.1.0\bin\windows> .\kafka-consumer-groups.bat
>>> --new-consumer --bootstrap-server localhost:9092 --list
>>> The [new-consumer] option is deprecated and will be
Hello,
This issue has been outstanding for a while and is impacting us both in
development and at deployment time. We have had to manually build the kafka core
jar and use it to work with Windows for over a year. The auto log/index
cleanup feature is very important for us on Windows because it helps us
Hello,
we have recently upgraded to 2.11-1.1.0, but for our publish-subscribe
design we are still using Producer and Consumer API from 0.10.2.1. We
understand there are more enhanced features in new 1.1 API but we aren't
using those specific items.
Upon upgrade, we confirmed that some test
>
> Cheers
>
> On Sat, May 12, 2018 at 1:42 PM, M. Manna <manme...@gmail.com> wrote:
>
> > Hello,
> >
> > We are still stuck with this issue where the 2.11-1.1.0 distro is failing to
> > clean up logs on Windows and brings the entire cluster down one by one.
. Is this expected for
2.11-1.1.0? This means an additional 2s delay on my poll time, which is what I
would like to avoid if possible.
Regards,
On 10 May 2018 at 18:47, M. Manna <manme...@gmail.com> wrote:
> Hello,
>
> we have recently upgraded to 2.11-1.1.0, but for our publish-sub
Hello,
Our cluster is going down one-by-one during log cleanup. This is after we
have done a full upgrade from 2.10-0.10.2.1.
This is the log we receive:
[2018-05-11 17:12:21,652] WARN [ReplicaFetcher replicaId=1, leaderId=2,
fetcherId=0] Error in response for fetch request (type=FetchRequest,
rted in the same way. Is this something you also had to do,
or was it even simpler for you?
Regards,
On 17 May 2018 at 17:17, Ted Yu <yuzhih...@gmail.com> wrote:
> Can you share what you plan to write (on the mailing list) ?
>
> Thanks
>
> On Thu, May 17, 2018 at 9:15 AM, M. Manna <
I have had some success.
Is it okay for us to update the Cwiki with the setup steps which I've got? Or is it
too generic?
On 16 May 2018 at 23:04, M. Manna <manme...@gmail.com> wrote:
> Must have missed that in Kafka-run-class.bat which sets suspend=n by
> default. Thanks for pointing that
es we set:
>
> "key.serializer" = "org.apache.kafka.common.serialization.StringSerializer"
> "value.serializer" =
> "org.apache.kafka.common.serialization.ByteArraySerializer"
> "acks" = "all"
> "retries" = 5
> "buffer.memory"
Manual commit is important where event consumption eventually leads to some
post-processing/database update/state change for your application. Without
doing all those, you cannot truly say that you have "Received" the message.
"Receiving" is interpreted differently and it's up to your target
Have you read about the concepts of Consumer Group, Topic, and Partitions in the Kafka
documentation?
On 18 May 2018 at 13:34, Sandip Mavani wrote:
> how can I distribute events across multiple consumers, so that if one consumer
> receives an event then another does not get it?
>
Hello,
Is there anyone who can provide a few steps for setting up the Eclipse IDE to
debug a Kafka cluster? Has anyone got Eclipse/Scala IDE experience in Kafka
debugging?
I have been trying for days, but couldn’t get the debugger to connect to
port 5005 when the cluster is running. Everything
DEBUG=y; export DEBUG_SUSPEND_FLAG=y;
>
> The above would freeze the process until debugger is attached.
>
> Cheers
>
> On Wed, May 16, 2018 at 2:46 PM, M. Manna <manme...@gmail.com> wrote:
>
> > Hello,
> >
> > Is there anyone who can provide a few steps
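For anyone following along, a sketch of what that looks like with the stock Linux scripts (KAFKA_DEBUG, DEBUG_SUSPEND_FLAG and the default debug port 5005 are handled in kafka-run-class.sh; as noted earlier in the thread, kafka-run-class.bat sets suspend=n by default):

export KAFKA_DEBUG=y
export DEBUG_SUSPEND_FLAG=y
bin/kafka-server-start.sh config/server.properties

The suspend flag makes the JVM wait until a debugger attaches; then attach Eclipse with a "Remote Java Application" debug configuration pointing at localhost:5005.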
Hi,
You said "On all of those setting there was no change in the described
behavior" - this is slightly confusing. Could you please clarify this? If
there is no change, that means everything is working :) ?
From the provided exception stack, it seems as if you are waiting to batch
a lot of
You just spoke my mind sir!
On Fri, 8 Jun 2018, 18:43 Jacob Sheck, wrote:
> What do you mean by "The issue appears when one of the brokers starts
> being impacted
> by environmental issues within the server it's running into (for whatever
> reason)"?
>
> You should consider Kafka to be a first
Hello,
I was trying to understand the effect of consumer liveness and use of
resources at the same time, based on this article (which is great, btw):
https://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0-9-consumer-client/
For my use case, I have only one consumer
Hello,
I was trying to remember from the docs (it's been a while) how the last
committed offsets work, i.e. whether they are tracked on a per-consumer-group
basis or something else. This is specific to when auto.offset.reset
is set to "earliest"/"latest" and the last committed offset is determined.
.
the most recent offset commit per partition
>
> Offsets are maintained per partition per consumer group, so it doesn't
> matter which member of a consumer group is reading a given partition - the
> offset will remain consistent.
>
>
> On Wed, May 30, 2018 at 9:23 AM, M. Manna w
Hello,
We are trying to move from a single-partition to a multi-partition approach for
our topics. The purpose is:
1) Each production/testbed server will have a non-Daemon thread (consumer)
running.
2) It will consume messages, commit offset (manual), and determine next
steps if commit fails, app
A topic and a consumer group have a 1-to-many relationship. Each topic partition
will have its messages guaranteed to be in order. Consumer rebalance issues
can be adjusted based on the backoff and other params. What exactly is your
concern regarding consumer groups and rebalancing?
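For reference, a sketch of the consumer settings usually tuned for rebalance behaviour (standard consumer config keys; the values shown are just the usual defaults):

session.timeout.ms=10000
heartbeat.interval.ms=3000
max.poll.interval.ms=300000
max.poll.records=500
retry.backoff.ms=100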
On 29 May 2018 at
Hi,
It's not possible to answer this based on the description alone. You need to share your
consumer.properties and server.properties files, and also what exactly you
have changed from the default configuration.
On 29 May 2018 at 12:51, Shantanu Deshmukh wrote:
> Hello,
>
> We have 3 broker Kafka 0.10.0.1
ull
> ssl.keystore.password = null
> ssl.keystore.type = JKS
> ssl.protocol = TLS
> ssl.provider = null
> ssl.secure.random.implementation = null
> ssl.trustmanager.algorithm = PKIX
> ssl.truststore.location =
> ssl.truststore.password = [hidden]
> ssl.truststore.
;<<<<<<<
>
> And here are server properties.
>
> broker.id=0
> port=9092
> delete.topic.enable=true
> message.max.bytes=150
> listeners=SSL://x.x.x.x:9092
> advertised.listeners=SSL://x.x.x.x:9092
> num.network.threads=3
> num.io.thread
This can happen for two reasons:
1) Your offsets have expired and been removed. So your consumers don't know where
to start from - earliest means "Start from the beginning"
2) You are actually starting as part of a totally new consumer group - in
which case it's as designed too - start from the
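A sketch of the settings behind both cases (standard broker/consumer keys; values are the typical defaults of that era):

# broker: how long committed offsets of an inactive group are retained
offsets.retention.minutes=1440
# consumer: what to do when no committed offset is found for the group
auto.offset.reset=earliest
# consumer: changing this makes the consumer start as a brand-new group
group.id=my-group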
Regarding graceful shutdown - I have got a response from Jan in the past -
I am simply quoting that below:
"A gracefully shutdown means the broker is only shutting down when it is
not the leader of any partition.
Therefore you should not be able to gracefully shut down your entire
cluster."
That
partition assignment. Then there is no question
of consumer-group management and subsequently rebalancing.
On Thu, May 31, 2018 at 6:00 PM M. Manna wrote:
> Hello,
>
> We are trying to move from single partition to multi-partition approach
for
> our topics. The purpose is:
>
> 1) Ea
er
> dies, will there be duplicate messages sent in this case?
> Since when the new consumer comes up, it will again process from the
> uncommitted offset.
> So do I need transaction semantics in this scenario?
>
>
> On Fri, Jun 1, 2018 at 4:56 AM, M. Manna wrote:
>
> > This is a
This is actually quite nicely explained by Jason Gustafson in this article
-
https://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0-9-consumer-client/
It's technically up to the application how to determine whether a message
is fully received. If you have database txn
This is a good article on the LinkedIn site - I think it's a good item to read
before getting into complicated designs:
https://www.linkedin.com/pulse/exactly-once-delivery-message-distributed-system-arun-dhwaj/
On 29 May 2018 at 14:34, Thakrar, Jayesh
wrote:
> For more details, see
Hello,
I can see that this has been set as "KIP required".
https://issues.apache.org/jira/browse/KAFKA-
I have a use case where I simply want to use the key as some metadata
information (but not really for any messages), but ideally would like
round-robin partition assignment. All I
Could you provide any broker/zk logs? Zookeeper and Kafka log a lot of
info during housekeeping ops such as log retention...there must be
something there.
On 6 Feb 2018 8:24 pm, "Raghav" wrote:
> Hi
>
> While configuring a topic, we are specifying the retention
Is this Windows or Linux?
On 6 Feb 2018 8:24 pm, "Raghav" wrote:
> Hi
>
> While configuring a topic, we are specifying the retention bytes per topic
> as follows. Our retention time in hours is 48.
>
> bin/kafka-topics.sh --zookeeper zk-1:2181,zk-2:2181,zk-3:2181
Just a heads up. Windows doesn’t clean up logs. There’s a pull request pending
for issue #1194.
Regards,
On Mon, 19 Feb 2018 at 09:14, Maria Pilar wrote:
> Now it's working properly, I have changed the server.id in the
> zookeeper.properties and I have created topics into