[jira] [Commented] (KAFKA-1510) Force offset commits when migrating consumer offsets from zookeeper to kafka
[ https://issues.apache.org/jira/browse/KAFKA-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14079219#comment-14079219 ] nicu marasoiu commented on KAFKA-1510: -- [~jkreps] Hi, can you please help me with feedback on my comment + code, or who can I ask, so that I can go in the right direction? Force offset commits when migrating consumer offsets from zookeeper to kafka Key: KAFKA-1510 URL: https://issues.apache.org/jira/browse/KAFKA-1510 Project: Kafka Issue Type: Bug Affects Versions: 0.8.2 Reporter: Joel Koshy Assignee: Joel Koshy Labels: newbie Fix For: 0.8.2 Attachments: forceCommitOnShutdownWhenDualCommit.patch When migrating consumer offsets from ZooKeeper to kafka, we have to turn on dual-commit (i.e., the consumers will commit offsets to both zookeeper and kafka) in addition to setting offsets.storage to kafka. However, we only commit offsets if they have changed since the last commit. For low-volume topics, or for topics that receive data in bursts, offsets may not move for a long period of time. Therefore we may want to force the commit (even if offsets have not changed) when migrating (i.e., when dual-commit is enabled) - we can add a minimum interval threshold (say, force a commit after every 10 auto-commits) as well as forcing a commit on rebalance and shutdown. Also, I think it is safe to switch the default for offsets.storage from zookeeper to kafka and set the default to dual-commit (for people who have not migrated yet). We have deployed this to the largest consumers at LinkedIn and have not seen any issues so far (except for the migration caveat that this jira will resolve). -- This message was sent by Atlassian JIRA (v6.2#6252)
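The migration setup described above amounts to a consumer configuration like the following (a minimal sketch; offsets.storage, dual.commit.enabled, and auto.commit.enable are the 0.8.x high-level consumer property names):

```java
import java.util.Properties;

public class MigrationConfig {
    // Sketch of consumer properties for migrating offsets from ZooKeeper to
    // Kafka via dual-commit, per the procedure described in this issue.
    public static Properties migrationProps() {
        Properties props = new Properties();
        props.put("offsets.storage", "kafka");     // new commits go to Kafka
        props.put("dual.commit.enabled", "true");  // also keep committing to ZooKeeper
        props.put("auto.commit.enable", "true");   // periodic auto-commit
        return props;
    }
}
```

Once all consumers in the group have migrated (and the caveat above is resolved), dual.commit.enabled can be flipped back to false.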
[jira] [Resolved] (KAFKA-1451) Broker stuck due to leader election race
[ https://issues.apache.org/jira/browse/KAFKA-1451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jun Rao resolved KAFKA-1451. Resolution: Fixed Fix Version/s: 0.8.2 Thanks for the latest patch. +1 and committed to trunk. Broker stuck due to leader election race - Key: KAFKA-1451 URL: https://issues.apache.org/jira/browse/KAFKA-1451 Project: Kafka Issue Type: Bug Components: core Affects Versions: 0.8.1.1 Reporter: Maciek Makowski Assignee: Manikumar Reddy Priority: Minor Labels: newbie Fix For: 0.8.2 Attachments: KAFKA-1451.patch, KAFKA-1451_2014-07-28_20:27:32.patch, KAFKA-1451_2014-07-29_10:13:23.patch h3. Symptoms The broker does not become available because it is stuck in an infinite loop while electing a leader. This can be recognised by the following line being repeatedly written to server.log:
{code}
[2014-05-14 04:35:09,187] INFO I wrote this conflicted ephemeral node [{version:1,brokerid:1,timestamp:1400060079108}] at /controller a while back in a different session, hence I will backoff for this node to be deleted by Zookeeper and retry (kafka.utils.ZkUtils$)
{code}
h3. Steps to Reproduce In a single kafka 0.8.1.1 node, single zookeeper 3.4.6 node setup (behavior is likely the same with the ZK version included in the Kafka distribution):
# start both zookeeper and kafka (in any order)
# stop zookeeper
# stop kafka
# start kafka
# start zookeeper
h3. Likely Cause {{ZookeeperLeaderElector}} subscribes to data changes on startup, and then triggers an election.
If the deletion of the ephemeral {{/controller}} node associated with the broker's previous zookeeper session happens after the subscription to changes in the new session, the election will be invoked twice, once from {{startup}} and once from {{handleDataDeleted}}:
* {{startup}}: acquire {{controllerLock}}
* {{startup}}: subscribe to data changes
* zookeeper: delete {{/controller}} since the session that created it timed out
* {{handleDataDeleted}}: {{/controller}} was deleted
* {{handleDataDeleted}}: wait on {{controllerLock}}
* {{startup}}: elect -- writes {{/controller}}
* {{startup}}: release {{controllerLock}}
* {{handleDataDeleted}}: acquire {{controllerLock}}
* {{handleDataDeleted}}: elect -- attempts to write {{/controller}} and then gets into an infinite loop as a result of the conflict
{{createEphemeralPathExpectConflictHandleZKBug}} assumes that the existing znode was written from a different session, which is not true in this case; it was written from the same session. That adds to the confusion. h3. Suggested Fix In {{ZookeeperLeaderElector.startup}} first run {{elect}} and then subscribe to data changes. -- This message was sent by Atlassian JIRA (v6.2#6252)
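The interleaving above, and the suggested reordering, can be sketched as a toy sequential model (illustrative Java only; the class and method names below are hypothetical, not Kafka's actual code, and the real race involves concurrent listener threads rather than straight-line calls):

```java
// Toy model of the startup ordering. If the elector is already subscribed
// when the stale /controller node from the old session expires,
// handleDataDeleted() fires and triggers a second, conflicting election.
public class ElectorRace {
    int elections = 0;
    boolean subscribed = false;

    void elect() { elections++; }
    // Stands in for handleDataDeleted: only observed once subscribed.
    void onStaleNodeDeleted() { if (subscribed) elect(); }

    // Buggy order: subscribe, then the stale node expires, then elect.
    static int buggyStartup() {
        ElectorRace e = new ElectorRace();
        e.subscribed = true;     // startup: subscribe to data changes
        e.onStaleNodeDeleted();  // zookeeper: old-session node deleted
        e.elect();               // startup: elect
        return e.elections;      // two elections -> conflict on /controller
    }

    // Suggested fix: elect first, then subscribe; a deletion delivered
    // before the subscription is simply not observed by this elector.
    static int fixedStartup() {
        ElectorRace e = new ElectorRace();
        e.elect();               // startup: elect
        e.onStaleNodeDeleted();  // not yet subscribed: ignored
        e.subscribed = true;     // startup: subscribe to data changes
        return e.elections;      // single election
    }
}
```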
[jira] [Commented] (KAFKA-1555) provide strong consistency with reasonable availability
[ https://issues.apache.org/jira/browse/KAFKA-1555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14079373#comment-14079373 ] Jun Rao commented on KAFKA-1555: Instead of introducing a new property min.isr.required, I was thinking of just piggybacking on ack. We can introduce a new semantic when ack = -2. The semantic will be that a message is only acked with no error if, at the time the message is committed, ISR >= |ack|. If a message is committed with ISR < |ack|, we will return an UnderReplicatedError. This way, we don't have to change the wire protocol. Does that match what you expect? provide strong consistency with reasonable availability --- Key: KAFKA-1555 URL: https://issues.apache.org/jira/browse/KAFKA-1555 Project: Kafka Issue Type: Improvement Components: controller Affects Versions: 0.8.1.1 Reporter: Jiang Wu Assignee: Neha Narkhede In a mission-critical application, we expect a kafka cluster with 3 brokers to satisfy two requirements: 1. When 1 broker is down, no message loss or service blocking happens. 2. In worse cases, such as when two brokers are down, service can be blocked, but no message loss happens. We found that the current kafka version (0.8.1.1) cannot achieve these requirements due to three behaviors: 1. when choosing a new leader from 2 followers in ISR, the one with fewer messages may be chosen as the leader. 2. even when replica.lag.max.messages=0, a follower can stay in ISR when it has fewer messages than the leader. 3. ISR can contain only 1 broker, therefore acknowledged messages may be stored in only 1 broker. The following is an analytical proof. We consider a cluster with 3 brokers and a topic with 3 replicas, and assume that at the beginning, all 3 replicas, leader A, followers B and C, are in sync, i.e., they have the same messages and are all in ISR. According to the value of request.required.acks (acks for short), there are the following cases. 1. acks=0, 1, 3.
Obviously these settings do not satisfy the requirement. 2. acks=2. Producer sends a message m. It's acknowledged by A and B. At this time, although C hasn't received m, C is still in ISR. If A is killed, C can be elected as the new leader, and consumers will miss m. 3. acks=-1. B and C restart and are removed from ISR. Producer sends a message m to A, and receives an acknowledgement. Disk failure happens in A before B and C replicate m. Message m is lost. In summary, no existing configuration can satisfy the requirements. -- This message was sent by Atlassian JIRA (v6.2#6252)
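The ack = -2 proposal in the comment above can be sketched as a small decision function (a hedged sketch of the proposed semantics only, not shipped Kafka behavior; the names below are illustrative):

```java
// Proposed semantics: for acks < -1, a committed message is acked without
// error only if the ISR size at commit time is at least |acks|; otherwise
// an under-replicated error is returned. acks >= -1 keeps the existing
// behavior (modeled here as a plain ack, which this sketch does not detail).
public class AckPolicy {
    enum Result { ACK, UNDER_REPLICATED_ERROR }

    static Result respond(int acks, int isrSize) {
        if (acks >= -1) return Result.ACK;  // existing semantics, unchanged here
        int requiredIsr = Math.abs(acks);   // e.g. acks = -2 requires ISR >= 2
        return isrSize >= requiredIsr ? Result.ACK : Result.UNDER_REPLICATED_ERROR;
    }
}
```

With acks = -2 and 3 replicas, a produce succeeds while at least 2 replicas are in sync, and fails fast once the ISR shrinks to 1, which is exactly the "no acknowledged message on only 1 broker" requirement above.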
[jira] [Commented] (KAFKA-1507) Using GetOffsetShell against non-existent topic creates the topic unintentionally
[ https://issues.apache.org/jira/browse/KAFKA-1507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14079439#comment-14079439 ] Sriharsha Chintalapani commented on KAFKA-1507: --- [~junrao] [~jkreps] Thanks for the details above. Based on the comments by Jay, we should drop creation of topics from the TopicMetadata request and add a createTopicRequest to the api, along with topic creation properties such as partitions, replication, etc. And in KafkaProducer.send, if the metadata request comes back empty, we should make a call to createTopic. In this case should we also have a boolean createTopic flag in KafkaProducer? If both producer.createTopic and auto.create.topics.enable on the broker are set to true, we will create a topic with user-supplied config or using the defaults. I think the auto-creation-of-topics config should be on the producer side rather than the broker; having it in two places might be confusing. Please let me know what you think of the above approach. Thanks. Using GetOffsetShell against non-existent topic creates the topic unintentionally - Key: KAFKA-1507 URL: https://issues.apache.org/jira/browse/KAFKA-1507 Project: Kafka Issue Type: Bug Affects Versions: 0.8.1.1 Environment: centos Reporter: Luke Forehand Assignee: Sriharsha Chintalapani Priority: Minor Labels: newbie Attachments: KAFKA-1507.patch, KAFKA-1507_2014-07-22_10:27:45.patch, KAFKA-1507_2014-07-23_17:07:20.patch A typo when using the GetOffsetShell command can cause a topic to be created which cannot be deleted (because deletion is still in progress) ./kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list kafka10:9092,kafka11:9092,kafka12:9092,kafka13:9092 --topic typo --time 1 ./kafka-topics.sh --zookeeper stormqa1/kafka-prod --describe --topic typo Topic:typo PartitionCount:8 ReplicationFactor:1 Configs: Topic: typo Partition: 0 Leader: 10 Replicas: 10 Isr: 10 ... -- This message was sent by Atlassian JIRA (v6.2#6252)
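The flow proposed in the comment above can be reduced to a small predicate (purely illustrative: producer.createTopic is a hypothetical flag from this discussion, while auto.create.topics.enable is an existing broker config; none of this reflects code that exists):

```java
// Sketch of the proposed decision: only issue an explicit create-topic
// request when metadata for the topic came back empty AND both the
// (hypothetical) producer-side flag and the broker-side flag allow it.
public class TopicCreationPolicy {
    static boolean shouldCreateTopic(boolean metadataEmpty,
                                     boolean producerCreateTopic,
                                     boolean brokerAutoCreateEnabled) {
        return metadataEmpty && producerCreateTopic && brokerAutoCreateEnabled;
    }
}
```

Under this sketch a read-only tool like GetOffsetShell would simply pass false for the producer-side flag and could never create a topic as a side effect of a metadata lookup.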
[jira] [Commented] (KAFKA-1507) Using GetOffsetShell against non-existent topic creates the topic unintentionally
[ https://issues.apache.org/jira/browse/KAFKA-1507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14079483#comment-14079483 ] Jay Kreps commented on KAFKA-1507: -- Maybe the best plan would be to retain the option we have for compatibility but default it to off, and have the new producer client make use of the new api. Using GetOffsetShell against non-existent topic creates the topic unintentionally - Key: KAFKA-1507 URL: https://issues.apache.org/jira/browse/KAFKA-1507 Project: Kafka Issue Type: Bug Affects Versions: 0.8.1.1 Environment: centos Reporter: Luke Forehand Assignee: Sriharsha Chintalapani Priority: Minor Labels: newbie Attachments: KAFKA-1507.patch, KAFKA-1507_2014-07-22_10:27:45.patch, KAFKA-1507_2014-07-23_17:07:20.patch A typo in using GetOffsetShell command can cause a topic to be created which cannot be deleted (because deletion is still in progress) ./kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list kafka10:9092,kafka11:9092,kafka12:9092,kafka13:9092 --topic typo --time 1 ./kafka-topics.sh --zookeeper stormqa1/kafka-prod --describe --topic typo Topic:typo PartitionCount:8ReplicationFactor:1 Configs: Topic: typo Partition: 0Leader: 10 Replicas: 10 Isr: 10 ... -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (KAFKA-1563) High packet rate between brokers in kafka cluster.
Fedor Korotkiy created KAFKA-1563: - Summary: High packet rate between brokers in kafka cluster. Key: KAFKA-1563 URL: https://issues.apache.org/jira/browse/KAFKA-1563 Project: Kafka Issue Type: Bug Affects Versions: 0.8.1.1 Reporter: Fedor Korotkiy On our kafka cluster with 3 brokers and 40MB/s of input we see about 100K packets/s of traffic between brokers (not including consumers). The majority of packets are small (about 20 bytes of data). I have found that the kafka server sets the TcpNoDelay option on all sockets. https://github.com/apache/kafka/blob/0.8.1/core/src/main/scala/kafka/network/SocketServer.scala#L202 I think that causes the issue. Can you please explain the current behavior and fix it/make it configurable? -- This message was sent by Atlassian JIRA (v6.2#6252)
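For context on the option being discussed: TCP_NODELAY disables Nagle's algorithm, so each small write is sent as its own packet immediately instead of being coalesced with later writes; that lowers latency at the cost of packet rate, which matches the symptom reported above. A minimal demonstration of setting the option on a plain socket (generic Java, not the Kafka code at the linked line):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class NoDelayDemo {
    // Opens a loopback connection, enables TCP_NODELAY on it (as the linked
    // SocketServer code does for every accepted connection), and reports
    // the resulting option value.
    public static boolean optionAfterEnable() {
        try (ServerSocket server = new ServerSocket(0); // ephemeral port
             Socket client = new Socket("localhost", server.getLocalPort())) {
            client.setTcpNoDelay(true); // disable Nagle's algorithm
            return client.getTcpNoDelay();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Leaving the option off (the default) lets the kernel batch small replication writes into fewer, larger packets, which is presumably why the reporter asks for it to be configurable.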
[jira] [Commented] (KAFKA-1555) provide strong consistency with reasonable availability
[ https://issues.apache.org/jira/browse/KAFKA-1555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14079494#comment-14079494 ] saurabh agarwal commented on KAFKA-1555: Excellent. Thanks. It works for us. We are ok with either introducing a new semantic in the existing ack property (ack=-2) or introducing a new property min.isr.required. They both meet the requirement. Please suggest the next step. provide strong consistency with reasonable availability --- Key: KAFKA-1555 URL: https://issues.apache.org/jira/browse/KAFKA-1555 Project: Kafka Issue Type: Improvement Components: controller Affects Versions: 0.8.1.1 Reporter: Jiang Wu Assignee: Neha Narkhede In a mission-critical application, we expect a kafka cluster with 3 brokers to satisfy two requirements: 1. When 1 broker is down, no message loss or service blocking happens. 2. In worse cases, such as when two brokers are down, service can be blocked, but no message loss happens. We found that the current kafka version (0.8.1.1) cannot achieve these requirements due to three behaviors: 1. when choosing a new leader from 2 followers in ISR, the one with fewer messages may be chosen as the leader. 2. even when replica.lag.max.messages=0, a follower can stay in ISR when it has fewer messages than the leader. 3. ISR can contain only 1 broker, therefore acknowledged messages may be stored in only 1 broker. The following is an analytical proof. We consider a cluster with 3 brokers and a topic with 3 replicas, and assume that at the beginning, all 3 replicas, leader A, followers B and C, are in sync, i.e., they have the same messages and are all in ISR. According to the value of request.required.acks (acks for short), there are the following cases. 1. acks=0, 1, 3. Obviously these settings do not satisfy the requirement. 2. acks=2. Producer sends a message m. It's acknowledged by A and B. At this time, although C hasn't received m, C is still in ISR.
If A is killed, C can be elected as the new leader, and consumers will miss m. 3. acks=-1. B and C restart and are removed from ISR. Producer sends a message m to A, and receives an acknowledgement. Disk failure happens in A before B and C replicate m. Message m is lost. In summary, no existing configuration can satisfy the requirements. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Assigned] (KAFKA-1562) kafka-topics.sh alter add partitions resets cleanup.policy
[ https://issues.apache.org/jira/browse/KAFKA-1562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani reassigned KAFKA-1562: - Assignee: Sriharsha Chintalapani kafka-topics.sh alter add partitions resets cleanup.policy -- Key: KAFKA-1562 URL: https://issues.apache.org/jira/browse/KAFKA-1562 Project: Kafka Issue Type: Bug Affects Versions: 0.8.1.1 Reporter: Kenny Assignee: Sriharsha Chintalapani When partitions are added to an already existing topic the cleanup.policy=compact is not retained.
{code}
./kafka-topics.sh --zookeeper localhost --create --partitions 1 --replication-factor 1 --topic KTEST --config cleanup.policy=compact
./kafka-topics.sh --zookeeper localhost --describe --topic KTEST
Topic:KTEST  PartitionCount:1  ReplicationFactor:1  Configs:cleanup.policy=compact
    Topic: KTEST  Partition: 0  Leader: 0  Replicas: 0  Isr: 0
./kafka-topics.sh --zookeeper localhost --alter --partitions 3 --topic KTEST --config cleanup.policy=compact
./kafka-topics.sh --zookeeper localhost --describe --topic KTEST
Topic:KTEST  PartitionCount:3  ReplicationFactor:1  Configs:
    Topic: KTEST  Partition: 0  Leader: 0  Replicas: 0  Isr: 0
    Topic: KTEST  Partition: 1  Leader: 0  Replicas: 0  Isr: 0
    Topic: KTEST  Partition: 2  Leader: 0  Replicas: 0  Isr: 0
{code}
-- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Review Request 24006: Patch for KAFKA-1420
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/24006/ --- (Updated July 30, 2014, 6:18 p.m.) Review request for kafka. Bugs: KAFKA-1420 https://issues.apache.org/jira/browse/KAFKA-1420 Repository: kafka Description (updated) --- KAFKA-1420 Replace AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK with TestUtils.createTopic in unit tests Diffs (updated) - core/src/test/scala/unit/kafka/admin/AdminTest.scala e28979827110dfbbb92fe5b152e7f1cc973de400 core/src/test/scala/unit/kafka/admin/DeleteTopicTest.scala 29cc01bcef9cacd8dec1f5d662644fc6fe4994bc core/src/test/scala/unit/kafka/integration/UncleanLeaderElectionTest.scala f44568cb25edf25db857415119018fd4c9922f61 core/src/test/scala/unit/kafka/utils/TestUtils.scala c4e13c5240c8303853d08cc3b40088f8c7dae460 Diff: https://reviews.apache.org/r/24006/diff/ Testing --- Automated Thanks, Jonathan Natkins
[jira] [Updated] (KAFKA-1420) Replace AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK with TestUtils.createTopic in unit tests
[ https://issues.apache.org/jira/browse/KAFKA-1420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Natkins updated KAFKA-1420: Attachment: KAFKA-1420_2014-07-30_11:18:26.patch Replace AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK with TestUtils.createTopic in unit tests -- Key: KAFKA-1420 URL: https://issues.apache.org/jira/browse/KAFKA-1420 Project: Kafka Issue Type: Bug Reporter: Guozhang Wang Labels: newbie Fix For: 0.8.2 Attachments: KAFKA-1420.patch, KAFKA-1420_2014-07-30_11:18:26.patch This is a follow-up JIRA from KAFKA-1389. There are a bunch of places in the unit tests where we misuse AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK to create topics, where TestUtils.createTopic needs to be used instead. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (KAFKA-1420) Replace AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK with TestUtils.createTopic in unit tests
[ https://issues.apache.org/jira/browse/KAFKA-1420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14079690#comment-14079690 ] Jonathan Natkins commented on KAFKA-1420: - Updated reviewboard https://reviews.apache.org/r/24006/diff/ against branch origin/trunk Replace AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK with TestUtils.createTopic in unit tests -- Key: KAFKA-1420 URL: https://issues.apache.org/jira/browse/KAFKA-1420 Project: Kafka Issue Type: Bug Reporter: Guozhang Wang Labels: newbie Fix For: 0.8.2 Attachments: KAFKA-1420.patch, KAFKA-1420_2014-07-30_11:18:26.patch This is a follow-up JIRA from KAFKA-1389. There are a bunch of places in the unit tests where we misuse AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK to create topics, where TestUtils.createTopic needs to be used instead. -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Review Request 24006: Patch for KAFKA-1420
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/24006/ --- (Updated July 30, 2014, 6:24 p.m.) Review request for kafka. Bugs: KAFKA-1420 https://issues.apache.org/jira/browse/KAFKA-1420 Repository: kafka Description --- KAFKA-1420 Replace AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK with TestUtils.createTopic in unit tests Diffs (updated) - core/src/test/scala/unit/kafka/admin/AdminTest.scala e28979827110dfbbb92fe5b152e7f1cc973de400 core/src/test/scala/unit/kafka/admin/DeleteTopicTest.scala 29cc01bcef9cacd8dec1f5d662644fc6fe4994bc core/src/test/scala/unit/kafka/integration/UncleanLeaderElectionTest.scala f44568cb25edf25db857415119018fd4c9922f61 core/src/test/scala/unit/kafka/utils/TestUtils.scala c4e13c5240c8303853d08cc3b40088f8c7dae460 Diff: https://reviews.apache.org/r/24006/diff/ Testing --- Automated Thanks, Jonathan Natkins
[jira] [Updated] (KAFKA-1420) Replace AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK with TestUtils.createTopic in unit tests
[ https://issues.apache.org/jira/browse/KAFKA-1420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Natkins updated KAFKA-1420: Attachment: KAFKA-1420_2014-07-30_11:24:55.patch Replace AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK with TestUtils.createTopic in unit tests -- Key: KAFKA-1420 URL: https://issues.apache.org/jira/browse/KAFKA-1420 Project: Kafka Issue Type: Bug Reporter: Guozhang Wang Labels: newbie Fix For: 0.8.2 Attachments: KAFKA-1420.patch, KAFKA-1420_2014-07-30_11:18:26.patch, KAFKA-1420_2014-07-30_11:24:55.patch This is a follow-up JIRA from KAFKA-1389. There are a bunch of places in the unit tests where we misuse AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK to create topics, where TestUtils.createTopic needs to be used instead. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (KAFKA-1420) Replace AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK with TestUtils.createTopic in unit tests
[ https://issues.apache.org/jira/browse/KAFKA-1420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14079699#comment-14079699 ] Jonathan Natkins commented on KAFKA-1420: - Updated reviewboard https://reviews.apache.org/r/24006/diff/ against branch origin/trunk Replace AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK with TestUtils.createTopic in unit tests -- Key: KAFKA-1420 URL: https://issues.apache.org/jira/browse/KAFKA-1420 Project: Kafka Issue Type: Bug Reporter: Guozhang Wang Labels: newbie Fix For: 0.8.2 Attachments: KAFKA-1420.patch, KAFKA-1420_2014-07-30_11:18:26.patch, KAFKA-1420_2014-07-30_11:24:55.patch This is a follow-up JIRA from KAFKA-1389. There are a bunch of places in the unit tests where we misuse AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK to create topics, where TestUtils.createTopic needs to be used instead. -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Review Request 24006: Patch for KAFKA-1420
On July 30, 2014, 12:22 a.m., Guozhang Wang wrote: core/src/test/scala/unit/kafka/admin/AdminTest.scala, line 314 https://reviews.apache.org/r/24006/diff/1/?file=643839#file643839line314 Is there a specific reason we want to use 10 seconds instead of default 5 seconds? Sorry, I'd added this in the midst of debugging, and forgotten to remove it. I've actually changed this call, because I realized that it didn't necessarily assure me that broker 0 had caught up to the ISR yet. The test has been changed to be more reliable. On July 30, 2014, 12:22 a.m., Guozhang Wang wrote: core/src/test/scala/unit/kafka/admin/AdminTest.scala, line 317 https://reviews.apache.org/r/24006/diff/1/?file=643839#file643839line317 Is this println intended? Removed On July 30, 2014, 12:22 a.m., Guozhang Wang wrote: core/src/test/scala/unit/kafka/utils/TestUtils.scala, line 186 https://reviews.apache.org/r/24006/diff/1/?file=643842#file643842line186 Could we just set the default value of configs parameter to null, instead of creating a separate function? The reason I'd done this is that the Scala compiler complained because there's another implementation of createTopic that defines default parameter values. However, I was able to change the calls to this API to the one that uses numPartitions and replicationFactor, and stuck the Properties parameter over there. - Jonathan --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/24006/#review49049 --- On July 30, 2014, 6:18 p.m., Jonathan Natkins wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/24006/ --- (Updated July 30, 2014, 6:18 p.m.) Review request for kafka. 
Bugs: KAFKA-1420 https://issues.apache.org/jira/browse/KAFKA-1420 Repository: kafka Description --- KAFKA-1420 Replace AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK with TestUtils.createTopic in unit tests Diffs - core/src/test/scala/unit/kafka/admin/AdminTest.scala e28979827110dfbbb92fe5b152e7f1cc973de400 core/src/test/scala/unit/kafka/admin/DeleteTopicTest.scala 29cc01bcef9cacd8dec1f5d662644fc6fe4994bc core/src/test/scala/unit/kafka/integration/UncleanLeaderElectionTest.scala f44568cb25edf25db857415119018fd4c9922f61 core/src/test/scala/unit/kafka/utils/TestUtils.scala c4e13c5240c8303853d08cc3b40088f8c7dae460 Diff: https://reviews.apache.org/r/24006/diff/ Testing --- Automated Thanks, Jonathan Natkins
[jira] [Updated] (KAFKA-1333) Add consumer co-ordinator module to the server
[ https://issues.apache.org/jira/browse/KAFKA-1333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Neha Narkhede updated KAFKA-1333: - Assignee: Guozhang Wang Add consumer co-ordinator module to the server -- Key: KAFKA-1333 URL: https://issues.apache.org/jira/browse/KAFKA-1333 Project: Kafka Issue Type: Sub-task Components: consumer Affects Versions: 0.9.0 Reporter: Neha Narkhede Assignee: Guozhang Wang Scope of this JIRA is to just add a consumer co-ordinator module that doesn't do much initially. This will possibly require some refactor of the existing offset management stuff as the consumer co-ordinator and the offset group owner should be the same thing. This refactor and code review will be important as a lot of the consumer co-ordination code will go here. -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Review Request 24006: Patch for KAFKA-1420
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/24006/#review49139 --- core/src/test/scala/unit/kafka/admin/AdminTest.scala https://reviews.apache.org/r/24006/#comment85979 Will fix whitespace core/src/test/scala/unit/kafka/admin/AdminTest.scala https://reviews.apache.org/r/24006/#comment85980 Will fix whitespace here - Jonathan Natkins On July 30, 2014, 6:24 p.m., Jonathan Natkins wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/24006/ --- (Updated July 30, 2014, 6:24 p.m.) Review request for kafka. Bugs: KAFKA-1420 https://issues.apache.org/jira/browse/KAFKA-1420 Repository: kafka Description --- KAFKA-1420 Replace AdminUtils.createOrUpdateTopicPartitionAssignmentPathInZK with TestUtils.createTopic in unit tests Diffs - core/src/test/scala/unit/kafka/admin/AdminTest.scala e28979827110dfbbb92fe5b152e7f1cc973de400 core/src/test/scala/unit/kafka/admin/DeleteTopicTest.scala 29cc01bcef9cacd8dec1f5d662644fc6fe4994bc core/src/test/scala/unit/kafka/integration/UncleanLeaderElectionTest.scala f44568cb25edf25db857415119018fd4c9922f61 core/src/test/scala/unit/kafka/utils/TestUtils.scala c4e13c5240c8303853d08cc3b40088f8c7dae460 Diff: https://reviews.apache.org/r/24006/diff/ Testing --- Automated Thanks, Jonathan Natkins
Build failed in Jenkins: Kafka-trunk #237
See https://builds.apache.org/job/Kafka-trunk/237/changes Changes: [junrao] kafka-1451; Broker stuck due to leader election race; patched by Manikumar Reddy; reviewed by Jun Rao -- [...truncated 780 lines...] kafka.server.KafkaConfigTest testLogRollTimeNoConfigProvided PASSED kafka.server.SimpleFetchTest testNonReplicaSeesHwWhenFetching PASSED kafka.server.SimpleFetchTest testReplicaSeesLeoWhenFetching PASSED kafka.server.ServerShutdownTest testCleanShutdown PASSED kafka.server.ServerShutdownTest testCleanShutdownWithDeleteTopicEnabled PASSED kafka.server.HighwatermarkPersistenceTest testHighWatermarkPersistenceSinglePartition PASSED kafka.server.HighwatermarkPersistenceTest testHighWatermarkPersistenceMultiplePartitions PASSED kafka.consumer.ZookeeperConsumerConnectorTest testBasic PASSED kafka.consumer.ZookeeperConsumerConnectorTest testCompression PASSED kafka.consumer.ZookeeperConsumerConnectorTest testCompressionSetConsumption PASSED kafka.consumer.ZookeeperConsumerConnectorTest testConsumerDecoder PASSED kafka.consumer.ZookeeperConsumerConnectorTest testLeaderSelectionForPartition PASSED kafka.consumer.ConsumerIteratorTest testConsumerIteratorDeduplicationDeepIterator PASSED kafka.consumer.ConsumerIteratorTest testConsumerIteratorDecodingFailure PASSED kafka.consumer.TopicFilterTest testWhitelists PASSED kafka.consumer.TopicFilterTest testBlacklists PASSED kafka.consumer.TopicFilterTest testWildcardTopicCountGetTopicCountMapEscapeJson PASSED kafka.log.LogTest testTimeBasedLogRoll PASSED kafka.log.LogTest testSizeBasedLogRoll PASSED kafka.log.LogTest testLoadEmptyLog PASSED kafka.log.LogTest testAppendAndReadWithSequentialOffsets PASSED kafka.log.LogTest testAppendAndReadWithNonSequentialOffsets PASSED kafka.log.LogTest testReadAtLogGap PASSED kafka.log.LogTest testReadOutOfRange PASSED kafka.log.LogTest testLogRolls PASSED kafka.log.LogTest testCompressedMessages PASSED kafka.log.LogTest testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED 
kafka.log.LogTest testMessageSizeCheck PASSED kafka.log.LogTest testLogRecoversToCorrectOffset PASSED kafka.log.LogTest testIndexRebuild PASSED kafka.log.LogTest testTruncateTo PASSED kafka.log.LogTest testIndexResizingAtTruncation PASSED kafka.log.LogTest testBogusIndexSegmentsAreRemoved PASSED kafka.log.LogTest testReopenThenTruncate PASSED kafka.log.LogTest testAsyncDelete PASSED kafka.log.LogTest testOpenDeletesObsoleteFiles PASSED kafka.log.LogTest testAppendMessageWithNullPayload PASSED kafka.log.LogTest testCorruptLog PASSED kafka.log.LogTest testCleanShutdownFile PASSED kafka.log.OffsetIndexTest truncate PASSED kafka.log.OffsetIndexTest randomLookupTest PASSED kafka.log.OffsetIndexTest lookupExtremeCases PASSED kafka.log.OffsetIndexTest appendTooMany PASSED kafka.log.OffsetIndexTest appendOutOfOrder PASSED kafka.log.OffsetIndexTest testReopen PASSED kafka.log.LogManagerTest testCreateLog PASSED kafka.log.LogManagerTest testGetNonExistentLog PASSED kafka.log.LogManagerTest testCleanupExpiredSegments PASSED kafka.log.LogManagerTest testCleanupSegmentsToMaintainSize PASSED kafka.log.LogManagerTest testTimeBasedFlush PASSED kafka.log.LogManagerTest testLeastLoadedAssignment PASSED kafka.log.LogManagerTest testTwoLogManagersUsingSameDirFails PASSED kafka.log.LogManagerTest testCheckpointRecoveryPoints PASSED kafka.log.LogManagerTest testRecoveryDirectoryMappingWithTrailingSlash PASSED kafka.log.LogManagerTest testRecoveryDirectoryMappingWithRelativeDirectory PASSED kafka.log.CleanerTest testCleanSegments PASSED kafka.log.CleanerTest testCleaningWithDeletes PASSED kafka.log.CleanerTest testCleanSegmentsWithAbort PASSED kafka.log.CleanerTest testSegmentGrouping PASSED kafka.log.CleanerTest testBuildOffsetMap PASSED kafka.log.OffsetMapTest testBasicValidation PASSED kafka.log.OffsetMapTest testClear PASSED kafka.log.FileMessageSetTest testWrittenEqualsRead PASSED kafka.log.FileMessageSetTest testIteratorIsConsistent PASSED kafka.log.FileMessageSetTest 
testSizeInBytes PASSED kafka.log.FileMessageSetTest testWriteTo PASSED kafka.log.FileMessageSetTest testFileSize PASSED kafka.log.FileMessageSetTest testIterationOverPartialAndTruncation PASSED kafka.log.FileMessageSetTest testIterationDoesntChangePosition PASSED kafka.log.FileMessageSetTest testRead PASSED kafka.log.FileMessageSetTest testSearch PASSED kafka.log.FileMessageSetTest testIteratorWithLimits PASSED kafka.log.FileMessageSetTest testTruncate PASSED kafka.log.LogCleanerIntegrationTest cleanerTest PASSED kafka.log.LogSegmentTest testTruncate PASSED kafka.log.LogSegmentTest testReadOnEmptySegment PASSED kafka.log.LogSegmentTest testReadBeforeFirstOffset PASSED kafka.log.LogSegmentTest testMaxOffset PASSED kafka.log.LogSegmentTest testReadAfterLast PASSED kafka.log.LogSegmentTest testReadFromGap
[jira] [Commented] (KAFKA-1562) kafka-topics.sh alter add partitions resets cleanup.policy
[ https://issues.apache.org/jira/browse/KAFKA-1562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14079846#comment-14079846 ] Jonathan Natkins commented on KAFKA-1562: - If it's alright, I was planning on working on this a bit. I think I know where the issue is, and I'm in the process of fixing it at the moment. kafka-topics.sh alter add partitions resets cleanup.policy -- Key: KAFKA-1562 URL: https://issues.apache.org/jira/browse/KAFKA-1562 Project: Kafka Issue Type: Bug Affects Versions: 0.8.1.1 Reporter: Kenny Assignee: Sriharsha Chintalapani When partitions are added to an already existing topic the cleanup.policy=compact is not retained.
{code}
./kafka-topics.sh --zookeeper localhost --create --partitions 1 --replication-factor 1 --topic KTEST --config cleanup.policy=compact
./kafka-topics.sh --zookeeper localhost --describe --topic KTEST
Topic:KTEST  PartitionCount:1  ReplicationFactor:1  Configs:cleanup.policy=compact
    Topic: KTEST  Partition: 0  Leader: 0  Replicas: 0  Isr: 0
./kafka-topics.sh --zookeeper localhost --alter --partitions 3 --topic KTEST --config cleanup.policy=compact
./kafka-topics.sh --zookeeper localhost --describe --topic KTEST
Topic:KTEST  PartitionCount:3  ReplicationFactor:1  Configs:
    Topic: KTEST  Partition: 0  Leader: 0  Replicas: 0  Isr: 0
    Topic: KTEST  Partition: 1  Leader: 0  Replicas: 0  Isr: 0
    Topic: KTEST  Partition: 2  Leader: 0  Replicas: 0  Isr: 0
{code}
-- This message was sent by Atlassian JIRA (v6.2#6252)
Review Request 24113: Patch for KAFKA-1562
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/24113/ ---

Review request for kafka.

Bugs: KAFKA-1562
https://issues.apache.org/jira/browse/KAFKA-1562

Repository: kafka

Description
---
KAFKA-1562 kafka-topics.sh alter add partitions resets cleanup.policy

Diffs
---
core/src/main/scala/kafka/admin/AdminUtils.scala b5d8714e964fee8b29b05db04d79fd6ac84f3e48
core/src/main/scala/kafka/admin/TopicCommand.scala 8d5c2e7088fc6e8bf69e775ea7f5893b94580fdf
core/src/test/scala/unit/kafka/admin/TopicCommandTest.scala PRE-CREATION

Diff: https://reviews.apache.org/r/24113/diff/

Testing
---

Thanks,
Jonathan Natkins
Re: Review Request 24113: Patch for KAFKA-1562
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/24113/ ---

(Updated July 30, 2014, 8:18 p.m.)

Review request for kafka.

Bugs: KAFKA-1562
https://issues.apache.org/jira/browse/KAFKA-1562

Repository: kafka

Description
---
KAFKA-1562 kafka-topics.sh alter add partitions resets cleanup.policy

Diffs (updated)
---
core/src/main/scala/kafka/admin/AdminUtils.scala b5d8714e964fee8b29b05db04d79fd6ac84f3e48
core/src/main/scala/kafka/admin/TopicCommand.scala 8d5c2e7088fc6e8bf69e775ea7f5893b94580fdf
core/src/test/scala/unit/kafka/admin/TopicCommandTest.scala PRE-CREATION

Diff: https://reviews.apache.org/r/24113/diff/

Testing
---

Thanks,
Jonathan Natkins
Re: Review Request 24113: Patch for KAFKA-1562
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/24113/ ---

(Updated July 30, 2014, 8:18 p.m.)

Review request for kafka.

Bugs: KAFKA-1562
https://issues.apache.org/jira/browse/KAFKA-1562

Repository: kafka

Description
---
KAFKA-1562 kafka-topics.sh alter add partitions resets cleanup.policy

Diffs
---
core/src/main/scala/kafka/admin/AdminUtils.scala b5d8714e964fee8b29b05db04d79fd6ac84f3e48
core/src/main/scala/kafka/admin/TopicCommand.scala 8d5c2e7088fc6e8bf69e775ea7f5893b94580fdf
core/src/test/scala/unit/kafka/admin/TopicCommandTest.scala PRE-CREATION

Diff: https://reviews.apache.org/r/24113/diff/

Testing (updated)
---
Automated

Thanks,
Jonathan Natkins
[jira] [Updated] (KAFKA-1562) kafka-topics.sh alter add partitions resets cleanup.policy
[ https://issues.apache.org/jira/browse/KAFKA-1562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Natkins updated KAFKA-1562: Attachment: KAFKA-1562.patch
[jira] [Updated] (KAFKA-1562) kafka-topics.sh alter add partitions resets cleanup.policy
[ https://issues.apache.org/jira/browse/KAFKA-1562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Natkins updated KAFKA-1562: Attachment: KAFKA-1562_2014-07-30_13:18:21.patch
[jira] [Commented] (KAFKA-1562) kafka-topics.sh alter add partitions resets cleanup.policy
[ https://issues.apache.org/jira/browse/KAFKA-1562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14079897#comment-14079897 ] Jonathan Natkins commented on KAFKA-1562: Updated reviewboard https://reviews.apache.org/r/24113/diff/ against branch origin/trunk
[jira] [Commented] (KAFKA-1562) kafka-topics.sh alter add partitions resets cleanup.policy
[ https://issues.apache.org/jira/browse/KAFKA-1562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14079893#comment-14079893 ] Jonathan Natkins commented on KAFKA-1562: Created reviewboard https://reviews.apache.org/r/24113/diff/ against branch origin/trunk
[jira] [Updated] (KAFKA-1562) kafka-topics.sh alter add partitions resets cleanup.policy
[ https://issues.apache.org/jira/browse/KAFKA-1562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Natkins updated KAFKA-1562: Status: Patch Available (was: Open)
Re: Review Request 24113: Patch for KAFKA-1562
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/24113/#review49158 ---

core/src/main/scala/kafka/admin/AdminUtils.scala
https://reviews.apache.org/r/24113/#comment86001
You can remove named params here and use config, true

- Sriharsha Chintalapani
[jira] [Updated] (KAFKA-1562) kafka-topics.sh alter add partitions resets cleanup.policy
[ https://issues.apache.org/jira/browse/KAFKA-1562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sriharsha Chintalapani updated KAFKA-1562: Assignee: (was: Sriharsha Chintalapani)
Re: Review Request 24113: Patch for KAFKA-1562
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/24113/ ---

(Updated July 30, 2014, 8:51 p.m.)

Review request for kafka.

Bugs: KAFKA-1562
https://issues.apache.org/jira/browse/KAFKA-1562

Repository: kafka

Description
---
KAFKA-1562 kafka-topics.sh alter add partitions resets cleanup.policy

Diffs (updated)
---
core/src/main/scala/kafka/admin/AdminUtils.scala b5d8714e964fee8b29b05db04d79fd6ac84f3e48
core/src/main/scala/kafka/admin/TopicCommand.scala 8d5c2e7088fc6e8bf69e775ea7f5893b94580fdf
core/src/test/scala/unit/kafka/admin/TopicCommandTest.scala PRE-CREATION

Diff: https://reviews.apache.org/r/24113/diff/

Testing
---
Automated

Thanks,
Jonathan Natkins
[jira] [Updated] (KAFKA-1562) kafka-topics.sh alter add partitions resets cleanup.policy
[ https://issues.apache.org/jira/browse/KAFKA-1562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Natkins updated KAFKA-1562: Attachment: KAFKA-1562_2014-07-30_13:51:25.patch
[jira] [Commented] (KAFKA-1510) Force offset commits when migrating consumer offsets from zookeeper to kafka
[ https://issues.apache.org/jira/browse/KAFKA-1510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14080164#comment-14080164 ] Joel Koshy commented on KAFKA-1510: [~nmarasoi] - sure thing. Will get back to you on this.
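The forced-commit scheme KAFKA-1510 proposes (while dual-commit migration is on, commit even unchanged offsets after every N auto-commit intervals) can be sketched as follows. This is an illustrative Python model; the class and method names are hypothetical and not Kafka's consumer internals.

```python
# Sketch of the KAFKA-1510 forced-commit idea (hypothetical names).
# Normally an auto-commit is skipped when offsets haven't moved; during
# dual-commit migration we force one every `force_every` ticks anyway,
# so low-volume topics still get their offsets written to Kafka storage.

class OffsetCommitter:
    def __init__(self, dual_commit, force_every=10):
        self.dual_commit = dual_commit    # migration mode: ZK + Kafka storage
        self.force_every = force_every
        self.ticks_since_commit = 0
        self.last_committed = {}
        self.commits = []                 # (backend, offsets) records

    def auto_commit(self, offsets):
        self.ticks_since_commit += 1
        changed = offsets != self.last_committed
        force = self.dual_commit and self.ticks_since_commit >= self.force_every
        if not changed and not force:
            return False                  # skip: nothing new, no force due
        backends = ["zookeeper", "kafka"] if self.dual_commit else ["kafka"]
        for backend in backends:
            self.commits.append((backend, dict(offsets)))
        self.last_committed = dict(offsets)
        self.ticks_since_commit = 0
        return True

c = OffsetCommitter(dual_commit=True, force_every=3)
offsets = {("t", 0): 42}
print([c.auto_commit(offsets) for _ in range(4)])
# [True, False, False, True] -- the last True is the forced commit
```

A shutdown or rebalance hook would simply call the commit path unconditionally, matching the JIRA's suggestion to force on those events as well.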
[jira] [Commented] (KAFKA-1477) add authentication layer and initial JKS x509 implementation for brokers, producers and consumer for network communication
[ https://issues.apache.org/jira/browse/KAFKA-1477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14080174#comment-14080174 ] Joe Stein commented on KAFKA-1477: -- [~jkreps] agreed, next week sounds good add authentication layer and initial JKS x509 implementation for brokers, producers and consumer for network communication -- Key: KAFKA-1477 URL: https://issues.apache.org/jira/browse/KAFKA-1477 Project: Kafka Issue Type: New Feature Reporter: Joe Stein Assignee: Ivan Lyutov Fix For: 0.8.2 Attachments: KAFKA-1477-binary.patch, KAFKA-1477.patch, KAFKA-1477_2014-06-02_16:59:40.patch, KAFKA-1477_2014-06-02_17:24:26.patch, KAFKA-1477_2014-06-03_13:46:17.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (KAFKA-1555) provide strong consistency with reasonable availability
[ https://issues.apache.org/jira/browse/KAFKA-1555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14080210#comment-14080210 ] Joe Stein commented on KAFKA-1555: +1 to introduce a new semantic when ack = -2. The semantic will be that a message is only acked with no error if, at the time the message is committed, ISR >= |ack|. If a message is committed with ISR < |ack|, we will return an UnderReplicatedError.

provide strong consistency with reasonable availability

Key: KAFKA-1555
URL: https://issues.apache.org/jira/browse/KAFKA-1555
Project: Kafka
Issue Type: Improvement
Components: controller
Affects Versions: 0.8.1.1
Reporter: Jiang Wu
Assignee: Neha Narkhede

In a mission-critical application, we expect a Kafka cluster with 3 brokers to satisfy two requirements:
1. When 1 broker is down, no message loss or service blocking happens.
2. In worse cases, such as two brokers being down, service can be blocked, but no message loss happens.

We found that the current Kafka version (0.8.1.1) cannot achieve the requirements due to three behaviors:
1. When choosing a new leader from 2 followers in ISR, the one with fewer messages may be chosen as the leader.
2. Even when replica.lag.max.messages=0, a follower can stay in ISR when it has fewer messages than the leader.
3. ISR can contain only 1 broker, so acknowledged messages may be stored on only 1 broker.

The following is an analytical proof. We consider a cluster with 3 brokers and a topic with 3 replicas, and assume that at the beginning all 3 replicas (leader A, followers B and C) are in sync, i.e., they have the same messages and are all in ISR. According to the value of request.required.acks (acks for short), there are the following cases:
1. acks=0, 1, 3. Obviously these settings do not satisfy the requirement.
2. acks=2. Producer sends a message m. It is acknowledged by A and B. At this time, although C hasn't received m, C is still in ISR. If A is killed, C can be elected as the new leader, and consumers will miss m.
3. acks=-1. B and C restart and are removed from ISR. Producer sends a message m to A, and receives an acknowledgement. Disk failure happens in A before B and C replicate m. Message m is lost.

In summary, no existing configuration can satisfy the requirements.
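The acks=2 case above (a laggard follower that stays in ISR and later becomes leader) can be modeled with a small, purely illustrative Python simulation; none of these names come from Kafka's code:

```python
# Toy model of the acks=2 loss scenario from KAFKA-1555 (illustrative only).
# Three replicas start in sync; a write is acked once `acks` replicas have
# it; the laggard C stays in ISR and is therefore eligible for election.

replicas = {"A": [], "B": [], "C": []}   # per-broker logs
isr = ["A", "B", "C"]                    # all in sync at the start
leader = "A"

def produce(msg, acks):
    # Append to the leader, then to followers, stopping once `acks`
    # replicas hold the message -- so C never replicates it.
    acked_by = []
    for broker in [leader] + [b for b in isr if b != leader]:
        replicas[broker].append(msg)
        acked_by.append(broker)
        if len(acked_by) == acks:
            break
    return acked_by

produce("m", acks=2)                     # acked by A and B only
# A is killed; C, still in ISR but missing m, is elected leader.
isr.remove("A")
leader = "C"
print("m" in replicas[leader])           # False -- consumers miss m
```

The same model makes the proposed ack=-2 semantic easy to state: refuse the ack (UnderReplicatedError) whenever fewer than |ack| replicas are in ISR at commit time, so an acked message can never live only on brokers that a single failure removes.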
[jira] [Commented] (KAFKA-1477) add authentication layer and initial JKS x509 implementation for brokers, producers and consumer for network communication
[ https://issues.apache.org/jira/browse/KAFKA-1477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14080335#comment-14080335 ] Gwen Shapira commented on KAFKA-1477: [~joestein] [~jkreps] So the next steps are fleshing out the exact requirements and what we will support? And my understanding is that this will be done in public, probably in the wiki? I'm interested in adding the feedback that I've been getting from our customers into the mix.
[jira] [Commented] (KAFKA-1477) add authentication layer and initial JKS x509 implementation for brokers, producers and consumer for network communication
[ https://issues.apache.org/jira/browse/KAFKA-1477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14080355#comment-14080355 ] Joe Stein commented on KAFKA-1477: Hey [~gwenshap] take a look where we are so far https://cwiki.apache.org/confluence/display/KAFKA/Security very much welcome feedback. (just granted you access to the wiki).