We plan to, but there is quite a bit of functionality that needs to be
abstracted by requests to the broker. Most of the functionality in the
topics command that interacts directly with ZK will be replaced by KIP-4
protocols (
Hi Ewen
If the trend is to hide ZooKeeper entirely (and most likely restrict its
network connections to Kafka only), would it make sense to update the Kafka
topics tool? Currently it is
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor
> 1 --partitions 1 --topic
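For reference, once the KIP-4 style admin protocols landed on the broker, the
topics tool gained a --bootstrap-server flag (Kafka 2.2+) so the ZooKeeper
connection is no longer needed. A sketch of the newer invocation (broker
address and topic name are illustrative assumptions):

```shell
# Create a topic by talking to a broker instead of ZooKeeper
# (the --bootstrap-server flag is available from Kafka 2.2 on)
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 \
  --replication-factor 1 --partitions 1 --topic my-topic
```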
Hi Ewen
I can't reproduce the bug, but I was just using the standard REST API. I'll
let you know if it happens again once 0.10.2.0 is released or if I find a
reliable way to reproduce it.
Regards
Stephane
On 24 Jan. 2017 3:25 pm, "Ewen Cheslack-Postava" wrote:
> There was this
The broker still accepts that version and the Scala API still includes
support for that timestamp. Note that the way this worked in previous
versions was by looking only at the timestamp for each log segment instead
of using a timestamp index within the segment.
Note that the new consumer now
The only other connections to brokers would be to the bootstrap brokers in
order to collect cluster metadata.
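A minimal client configuration sketch illustrating this (hostnames are
assumptions): only a subset of brokers needs to be listed, since the client
fetches full cluster metadata from whichever bootstrap broker responds and
then connects to the brokers it actually needs.

```properties
# Only a few bootstrap brokers are listed here; the client discovers
# the rest of the cluster from the metadata these brokers return.
bootstrap.servers=broker1:9092,broker2:9092
```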
-Ewen
On Wed, Jan 18, 2017 at 3:48 AM, Paolo Patierno wrote:
> Hello,
>
>
> I'd like to know the number of connections that Kafka clients establish. I
> mean ...
>
Smaller servers/instances work fine for tests, as long as the workload is
scaled down as well. Most memory on a Kafka broker will end up dedicated to
page cache. With, e.g., 1GB of RAM, consider that you probably won't be
leaving much room to cache the data, so performance may suffer a bit.
The new consumer only supports committing offsets to Kafka. (It doesn't
even have connection info to ZooKeeper, which is a general trend in Kafka
clients -- all details of ZooKeeper are being hidden away from clients,
even administrative functions like creating topics.)
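A sketch of a new-consumer configuration illustrating this (group name is an
assumption): note there is no zookeeper.connect property at all, and offsets
are committed to the internal __consumer_offsets topic on the brokers.

```properties
# New-consumer configuration sketch; no ZooKeeper connection info.
# Offsets go to Kafka's internal __consumer_offsets topic.
bootstrap.servers=localhost:9092
group.id=my-group
enable.auto.commit=true
auto.commit.interval.ms=5000
```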
-Ewen
On Thu, Jan 19,
On Wed, Jan 18, 2017 at 4:56 PM, Greenhorn Techie wrote:
> Hi there,
>
> Can anyone please answer my follow-up questions below on Ewen's
> responses.
>
> Thanks
>
>
> On Tue, 17 Jan 2017 at 00:28 Greenhorn Techie
> wrote:
>
>> Thanks
There was this issue: https://issues.apache.org/jira/browse/KAFKA-4527
which was a test failure that had to do with updating the status as soon as
the request to pause the connector was received rather than after it was
processed. The corresponding PR fixed that (and will be released in
0.10.2.0).
Jun,
Thanks for the reply. This makes sense, I think. One follow-up question:
during the failover, when the new leader has a stale HWM, is it possible
for this broker to return an error to consumers who are consuming
between HWM and LEO?
I saw a comment on a document that says the leader could,
Hi,
I am running a Kafka Sink Connector with Kafka 0.9.0.2. I am seeing that my
consumer is periodically throwing an error when saving offsets and then
going into group rebalance state. Please let me know what can be done to
fix this issue
2017-01-20 17:22:21 INFO WorkerSinkTask : 187 -
Version 0.10 and I don’t have the thread dump but have the KafkaServer log
where the error is there.
Thanks
Achintya
-Original Message-
From: Apurva Mehta [mailto:apu...@confluent.io]
Sent: Monday, January 23, 2017 12:49 PM
To: users@kafka.apache.org
Subject: Re: Messages are lost
Guozhang,
Thanks for the reply. I figured it out after a while. Indeed, the global
default time based retention was tripping me. I was using older data for
testing and publishing messages with explicit timestamps. It took me a
while to figure out what was happening because kafka-topics.sh
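For anyone hitting the same thing: the cluster-wide time-based retention can
be overridden per topic, which avoids having records with old explicit
timestamps deleted immediately. A sketch for the 0.10.x tooling (topic name
and retention value are assumptions for illustration):

```shell
# Set a per-topic retention of 7 days (in milliseconds) so records
# carrying older explicit timestamps survive the global default.
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name test-topic \
  --add-config retention.ms=604800000
```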
What version of kafka have you deployed? Can you post a thread dump of the
hung broker?
On Fri, Jan 20, 2017 at 12:14 PM, Ghosh, Achintya (Contractor) <
achintya_gh...@comcast.com> wrote:
> Hi there,
>
> I see the below exception in one of my node's log (cluster with 3 nodes)
> and then the node
Can anyone please update on this?
Thanks
Achintya
-Original Message-
From: Ghosh, Achintya (Contractor) [mailto:achintya_gh...@comcast.com]
Sent: Friday, January 20, 2017 3:15 PM
To: users@kafka.apache.org
Subject: Messages are lost
Hi there,
I see the below exception in one of my
Hi
I was trying to secure communication between ZK and Kafka. We generate the
keytab file with principal
We were following this document -
https://www.confluent.io/blog/apache-kafka-security-authorization-authentication-encryption/
(really detailed doc)
For Kafka -
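A minimal JAAS sketch for the broker's ZooKeeper client login section, which
is the piece that uses the keytab when the broker authenticates to ZooKeeper
via SASL/Kerberos (the principal and keytab path below are illustrative
assumptions, not values from this thread):

```
// kafka_server_jaas.conf -- the Client section is what the broker
// uses when it authenticates to ZooKeeper.
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka.keytab"
    principal="kafka/broker1.example.com@EXAMPLE.COM";
};
```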
Thanks for the reply and explanation, Jun! So yes, I was running the
DumpLogSegment tool on an active timeindex segment. But of the two clusters
I tested with, I did not get the same error on the destination cluster
(which is mirroring data from source cluster) even when I ran the
DumpLogSegment
Hi Ismael,
thank you very much. I created the issue:
https://issues.apache.org/jira/browse/KAFKA-4686.
2017-01-23 11:46 GMT-02:00 Ismael Juma :
> Hi Rodrigo,
>
> Please file a JIRA so that this can be investigated.
>
> Ismael
>
> On Mon, Jan 23, 2017 at 1:32 PM, Rodrigo
Hi Rodrigo,
Please file a JIRA so that this can be investigated.
Ismael
On Mon, Jan 23, 2017 at 1:32 PM, Rodrigo Queiroz Saramago <
rodrigo.saram...@zup.com.br> wrote:
> Hello, I have a test environment with 3 brokers and 1 zookeeper node, in
> which clients connect using two-way ssl
Hello, I have a test environment with 3 brokers and 1 zookeeper node, in
which clients connect using two-way SSL authentication. I use Kafka version
0.10.1.1. The system works as expected for a while, but if a node goes
down and is then restarted, something gets corrupted and it is not possible