Re: offset storage as kafka with zookeeper 3.4.6

2015-06-11 Thread Joel Koshy
 Is it mandatory to use the zookeeper that comes with kafka for offset
 storage to be migrated to kafka?
If you want to move offsets from zookeeper to Kafka then yes, you
need to have a phase where all consumers in your group set dual commit
to true. If you are starting a fresh consumer group then you can
turn off dual commit.

 But nothing is being written to this topic, while the consumer offsets
 continue to reside on zookeeper.

The zookeeper offsets won't be removed. However, are they changing?
How are you verifying that nothing is written to this topic? If you
are trying to consume it, then you will need to set
exclude.internal.topics=false in your consumer properties. You can
also check the consumer mbeans that report the KafkaCommitRate, or enable
trace logging in either the consumer or the broker's request logs to
check whether offset commit requests are getting sent out to the cluster.
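A minimal sketch of the dual-commit migration settings described above, in plain Java; the
property keys are the ones discussed in this thread, while the class name and phase split
are only illustrative:

    import java.util.Properties;

    public class OffsetMigrationConfig {
        public static void main(String[] args) {
            // Phase 1 (rolled out to every consumer in the group): commit to both stores.
            Properties phase1 = new Properties();
            phase1.setProperty("offsets.storage", "kafka");
            phase1.setProperty("dual.commit.enabled", "true");

            // Phase 2 (after the whole group runs with phase 1): Kafka-only commits.
            Properties phase2 = new Properties();
            phase2.setProperty("offsets.storage", "kafka");
            phase2.setProperty("dual.commit.enabled", "false");

            // Only needed if you want to consume __consumer_offsets directly:
            phase2.setProperty("exclude.internal.topics", "false");

            System.out.println(phase1 + "\n" + phase2);
        }
    }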

On Thu, Jun 11, 2015 at 01:03:09AM -0700, Kris K wrote:
 I am trying to migrate the offset storage to kafka (3 brokers of version
 0.8.2.1) using the consumer property offsets.storage=kafka.  I noticed that
 a new topic, __consumer_offsets got created.
 But nothing is being written to this topic, while the consumer offsets
 continue to reside on zookeeper.
 
 I am using a 3 node zookeeper ensemble (version 3.4.6) and not using the
 one that comes with kafka.
 
 The current config consumer.properties now contains:
 
 offsets.storage=kafka
 dual.commit.enabled=false
 exclude.internal.topics=false
 
 Is it mandatory to use the zookeeper that comes with kafka for offset
 storage to be migrated to kafka?
 
 I tried both the approaches:
 
 1. As listed on slide 34 of
 http://www.slideshare.net/jjkoshy/offset-management-in-kafka.
 2. By deleting the zookeeper data directories and kafka log directories.
 
 Neither approach worked.
 
 Thanks
 Kris



[jira] [Created] (KAFKA-2263) Update "Is it possible to delete a topic" wiki FAQ answer

2015-06-11 Thread Stevo Slavic (JIRA)
Stevo Slavic created KAFKA-2263:
---

 Summary: Update "Is it possible to delete a topic" wiki FAQ answer
 Key: KAFKA-2263
 URL: https://issues.apache.org/jira/browse/KAFKA-2263
 Project: Kafka
  Issue Type: Task
  Components: website
Affects Versions: 0.8.2.1
Reporter: Stevo Slavic
Priority: Trivial


Answer to the mentioned 
[FAQ|https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Isitpossibletodeleteatopic?]
 hasn't been updated since the topic deletion feature became available in 0.8.2.x.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-23 - Add JSON/CSV output and looping options to ConsumerGroupCommand

2015-06-11 Thread Ashish Singh
Hi Guys,

This has been lying around for quite some time. Should I start a voting
thread on this?

On Thu, May 7, 2015 at 12:20 PM, Ashish Singh asi...@cloudera.com wrote:

 Had to change the title of the page and that surprisingly changed the link
 as well. KIP-23 is now available here:
 https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=56852556

 On Thu, May 7, 2015 at 11:34 AM, Ashish Singh asi...@cloudera.com wrote:

 Hi Guys,

 I just added a KIP, KIP-23 - Add JSON/CSV output and looping options to
 ConsumerGroupCommand
 https://cwiki.apache.org/confluence/display/KAFKA/KIP-23, for KAFKA-313
 https://issues.apache.org/jira/browse/KAFKA-313. The changes made as
 part of the JIRA can be found here https://reviews.apache.org/r/28096/.

 Comments and suggestions are welcome!

 --

 Regards,
 Ashish




 --

 Regards,
 Ashish




-- 

Regards,
Ashish


Re: Review Request 34789: Patch for KAFKA-2168

2015-06-11 Thread Jason Gustafson


 On June 9, 2015, 7:58 p.m., Jun Rao wrote:
  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java, 
  lines 797-798
  https://reviews.apache.org/r/34789/diff/8/?file=980075#file980075line797
 
  Hmm, seekToBeginning() is supposed to be a blocking call. Basically, at 
  the end of the call, we expect the fetch offset to be set to the beginning. 
  This is now changed to async, which doesn't match the intended behavior. We 
  need to think through if this matters or not.
  
  Ditto for seekToEnd().
 
 Jason Gustafson wrote:
 Since we always update fetch positions before a new fetch and in 
 position(), it didn't seem necessary to make it synchronous. I thought this 
 handling might be more consistent with how new subscriptions are handled 
 (which are asynchronous and defer the initial offset fetch until the next 
 poll or position). That being said, I don't have a strong feeling about it, 
 so we could return to the blocking version.
 
 Jun Rao wrote:
 Making this async may be fine. One implication is that if we call position() 
 immediately after seekToBeginning(), we may not be able to get the correct 
 offset.
 
 Jason Gustafson wrote:
 We should be able to get the right offset since we always update offsets 
 before returning the current position, but we might have to block for it. 
 It's similar to if you call subscribe(topic) and then try to get its position 
 immediately.
 
 Jun Rao wrote:
 That may work. However, if one calls seekToBeginning() followed by 
 seekToEnd(), will we guarantee that position() returns the end offset?

Yes, this will work. The latest seek will overwrite any pending ones.
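A small sketch of the seek-then-position interaction being discussed, assuming the consumer
API as it stands in this patch (partition-level subscribe, varargs seekToBeginning/seekToEnd,
and a position() call that may block); topic, partition and connection settings are
placeholders:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class SeekPositionSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("group.id", "example-group");           // placeholder
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);

            TopicPartition tp = new TopicPartition("my-topic", 0);
            consumer.subscribe(tp);            // partition-level subscription
            consumer.seekToBeginning(tp);      // only records the reset; no fetch yet
            consumer.seekToEnd(tp);            // the latest seek wins over the pending one
            long end = consumer.position(tp);  // may block here until the offset is resolved
            System.out.println("position = " + end);
            consumer.close();
        }
    }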


 On June 9, 2015, 7:58 p.m., Jun Rao wrote:
  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java, 
  lines 319-322
  https://reviews.apache.org/r/34789/diff/8/?file=980075#file980075line319
 
  Could we add an example of how to use the new wakeup() call, especially 
  with closing the consumer properly? For example, does the consumer thread 
  just catch the ConsumerWakeupException and then call close()?

I've added an example in the latest patch.
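The usage pattern being discussed looks roughly like the sketch below (this is not the example
from the patch itself; it assumes the ConsumerWakeupException added in this patch, and the
broker address, group id and topic are placeholders):

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerWakeupException;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class WakeupCloseSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("group.id", "example-group");           // placeholder
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            final KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
            consumer.subscribe("my-topic");

            final Thread mainThread = Thread.currentThread();
            Runtime.getRuntime().addShutdownHook(new Thread() {
                public void run() {
                    consumer.wakeup();       // interrupts a long-running poll()
                    try {
                        mainThread.join();   // wait for the consumer to close cleanly
                    } catch (InterruptedException e) {
                        // ignore during shutdown
                    }
                }
            });

            try {
                while (true) {
                    consumer.poll(Long.MAX_VALUE); // process the returned records here
                }
            } catch (ConsumerWakeupException e) {
                // expected on shutdown: wakeup() was called from the shutdown hook
            } finally {
                consumer.close();
            }
        }
    }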


 On June 9, 2015, 7:58 p.m., Jun Rao wrote:
  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java, 
  lines 1039-1040
  https://reviews.apache.org/r/34789/diff/8/?file=980075#file980075line1039
 
  The returned response may be ready already after the offsetBefore call 
  due to needing metadata refresh. Since we don't check the ready state 
  immediately afterward, we may be delaying the processing of metadata 
  refresh by the request timeout.
 
 Jason Gustafson wrote:
 This is a pretty good point. One of the reasons working with 
 NetworkClient is tricky is that you need several polls to complete a request: 
 one to connect, one to send, and one to receive. In this case, the result 
 might not be ready because we are in the middle of connecting to the broker, 
 in which case we need to call poll() to finish the connect. If we don't, then 
 the next request will just fail for the same reason. I'll look to see if 
 there's a way to fix this to avoid unnecessary calls to poll.

I struggled a bit trying to fix this. In the latest patch, I changed the notion 
of remedy to a retryAction and included polling as one of the possible 
actions. Then if the result is finished, we would only call poll if the result 
indicates that it's needed. The only case where I actually use this is when a 
connection has just been initiated.
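Roughly, the pattern described above amounts to the loop below. This is a hypothetical sketch
only: RequestFuture, Poller and await are stand-in names, not the classes in the patch, and it
merely illustrates why a single request can need several polls (connect, send, receive):

    // Hypothetical sketch: driving an async request result to completion by
    // repeatedly polling the network layer until the result is ready or the
    // deadline passes.
    interface RequestFuture<T> {
        boolean isDone();
        T value();
    }

    interface Poller {
        void poll(long timeoutMs);
    }

    final class FutureDriver {
        static <T> T await(RequestFuture<T> future, Poller poller, long timeoutMs) {
            long deadline = System.currentTimeMillis() + timeoutMs;
            // One request may need several polls: one to connect, one to send, one to receive.
            while (!future.isDone() && System.currentTimeMillis() < deadline)
                poller.poll(deadline - System.currentTimeMillis());
            return future.isDone() ? future.value() : null;
        }
    }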


- Jason


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34789/#review87190
---


On June 11, 2015, 9:10 p.m., Jason Gustafson wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/34789/
 ---
 
 (Updated June 11, 2015, 9:10 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2168
 https://issues.apache.org/jira/browse/KAFKA-2168
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-2168; refactored callback handling to prevent unnecessary requests
 
 
 KAFKA-2168; address review comments
 
 
 KAFKA-2168; fix rebase error and checkstyle issue
 
 
 KAFKA-2168; address review comments and add docs
 
 
 KAFKA-2168; handle polling with timeout 0
 
 
 KAFKA-2168; timeout=0 means return immediately
 
 
 KAFKA-2168; address review comments
 
 
 Diffs
 -
 
   clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
 8f587bc0705b65b3ef37c86e0c25bb43ab8803de 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
 1ca75f83d3667f7d01da1ae2fd9488fb79562364 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerWakeupException.java
  PRE-CREATION 
   

Re: [DISCUSS] KIP-4 - Command line and centralized administrative operations (Thread 2)

2015-06-11 Thread Joel Koshy
Discussion aside, was there any significant material change besides
the additions below? If not, then we can avoid the overhead of another
vote unless someone wants to down-vote these changes.

Joel

On Thu, Jun 11, 2015 at 06:36:36PM +, Aditya Auradkar wrote:
 Andrii,
 
 Do we need a new voting thread for this KIP? The last round of votes had 3 
 binding +1's but there's been a fair amount of discussion since then.
 
 Aditya
 
 
 From: Aditya Auradkar
 Sent: Thursday, June 11, 2015 10:32 AM
 To: dev@kafka.apache.org
 Subject: RE: [DISCUSS] KIP-4 - Command line and centralized administrative 
 operations (Thread 2)
 
 I've made two changes to the document:
 - Removed the TMR evolution piece since we agreed to retain this.
 - Added two new API's to the admin client spec. (Alter and Describe config).
 
 Please review.
 
 Aditya
 
 
 From: Ashish Singh [asi...@cloudera.com]
 Sent: Friday, May 29, 2015 8:36 AM
 To: dev@kafka.apache.org
 Subject: Re: [DISCUSS] KIP-4 - Command line and centralized administrative 
 operations (Thread 2)
 
 +1 on discussing this on next KIP hangout. I will update KIP-24 before that.
 
 On Fri, May 29, 2015 at 3:40 AM, Andrii Biletskyi 
 andrii.bilets...@stealth.ly wrote:
 
  Guys,
 
  I won't be able to attend the next meeting. But in the latest patch for KIP-4
  Phase 1 I didn't even evolve TopicMetadataRequest to v1, since we won't be able
  to change config with AlterTopicRequest; hence with this patch TMR will still
  return isr. Taking this into account I think yes - it would be good to fix the
  ISR issue, although I didn't consider it to be a critical one (isr was part of
  TMR from the very beginning and almost no code relies on this piece of the
  request).
 
  Thanks,
  Andrii Biletskyi
 
  On Fri, May 29, 2015 at 8:50 AM, Aditya Auradkar 
  aaurad...@linkedin.com.invalid wrote:
 
   Thanks. Perhaps we should leave TMR unchanged for now. Should we discuss
   this during the next hangout?
  
   Aditya
  
   
   From: Jun Rao [j...@confluent.io]
   Sent: Thursday, May 28, 2015 5:32 PM
   To: dev@kafka.apache.org
   Subject: Re: [DISCUSS] KIP-4 - Command line and centralized
  administrative
   operations (Thread 2)
  
   There is a reasonable use case of ISR in KAFKA-2225. Basically, for
   economical reasons, we may want to let a consumer fetch from a replica in
   ISR that's in the same zone. In order to support that, it will be
   convenient to have TMR return the correct ISR for the consumer to choose.
  
   So, perhaps it's worth fixing the ISR inconsistency issue in KAFKA-1367
   (there is some new discussion there on what it takes to fix this). If we
  do
   that, we can leave TMR unchanged.
  
   Thanks,
  
   Jun
  
   On Tue, May 26, 2015 at 1:13 PM, Aditya Auradkar 
   aaurad...@linkedin.com.invalid wrote:
  
Andryii,
   
I made a few edits to this document as discussed in the KIP-21 thread.
   
   
  
  https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations
   
With these changes. the only difference between
  TopicMetadataResponse_V1
and V0 is the removal of the ISR field. I've altered the KIP with the
assumption that this is a good enough reason by itself to evolve the
request/response protocol. Any concerns there?
   
Thanks,
Aditya
   

From: Mayuresh Gharat [gharatmayures...@gmail.com]
Sent: Thursday, May 21, 2015 8:29 PM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-4 - Command line and centralized
   administrative
operations (Thread 2)
   
Hi Jun,
   
Thanks a lot. I get it now.
 Point 4) will actually enable clients who don't want to create a topic
    with default partitions, if it does not exist, to instead manually create
    the topic with their own configs (#partitions).
   
Thanks,
   
Mayuresh
   
On Thu, May 21, 2015 at 6:16 PM, Jun Rao j...@confluent.io wrote:
   
 Mayuresh,

 The current plan is the following.

 1. Add TMR v1, which still triggers auto topic creation.
  2. Change the consumer client to use TMR v1. Change the producer client to use
  TMR v1 and, on UnknownTopicException, issue TopicCreateRequest to explicitly
  create the topic with the default server-side partitions and replicas.
  3. At some later time, after the new clients are released and deployed,
  disable auto topic creation in TMR v1. This will make sure consumers never
  create new topics.
  4. If needed, we can add a new config in the producer to control whether
  TopicCreateRequest should be issued or not on UnknownTopicException. If
  this is disabled and the topic doesn't exist, send will fail and the user
  is expected to create the topic manually.

 Thanks,

 Jun



Pending review requests

2015-06-11 Thread Ashish Singh
Hey Guys,

I have a few JIRAs that have been in patch-available state for some time. I would really
appreciate it if someone could review them.

1. https://issues.apache.org/jira/browse/KAFKA-2132
2. https://issues.apache.org/jira/browse/KAFKA-2005
3. https://issues.apache.org/jira/browse/KAFKA-1722
4. https://issues.apache.org/jira/browse/KAFKA-313 (KIP-23)

-- 

Regards,
Ashish


Re: Review Request 34789: Patch for KAFKA-2168

2015-06-11 Thread Jason Gustafson


 On June 3, 2015, 6:44 a.m., Ewen Cheslack-Postava wrote:
  clients/src/main/java/org/apache/kafka/clients/consumer/internals/BrokerResponse.java,
   line 15
  https://reviews.apache.org/r/34789/diff/3/?file=976967#file976967line15
 
  The classes named XResponse may be a bit confusing because the protocol 
  responses use that terminology. Future? Result?
 
 Jason Gustafson wrote:
 Agreed. In fact, they were XResult initially. I changed them because 
 BrokerResult and CoordinatorResult didn't seem to suggest as clearly what 
 they were for as BrokerResponse and CoordinatorResponse. I considered Future 
 as well, but its usage is a bit different than traditional Java Futures. 
 Perhaps XReply?
 
 Ewen Cheslack-Postava wrote:
 Even though there's no blocking get(), XFuture might be the clearest. 
 XReply would work, but has a similar issue that it gets confusing whether 
 XResponse or XReply is the actual message received back vs. the processed 
 data that you wanted to extract.

In the latest patch, I changed DelayedResult to RequestFuture. Think that's 
better?


- Jason


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34789/#review86338
---


On June 11, 2015, 9:10 p.m., Jason Gustafson wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/34789/
 ---
 
 (Updated June 11, 2015, 9:10 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2168
 https://issues.apache.org/jira/browse/KAFKA-2168
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-2168; refactored callback handling to prevent unnecessary requests
 
 
 KAFKA-2168; address review comments
 
 
 KAFKA-2168; fix rebase error and checkstyle issue
 
 
 KAFKA-2168; address review comments and add docs
 
 
 KAFKA-2168; handle polling with timeout 0
 
 
 KAFKA-2168; timeout=0 means return immediately
 
 
 KAFKA-2168; address review comments
 
 
 Diffs
 -
 
   clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
 8f587bc0705b65b3ef37c86e0c25bb43ab8803de 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
 1ca75f83d3667f7d01da1ae2fd9488fb79562364 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerWakeupException.java
  PRE-CREATION 
   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
 d1d1ec178f60dc47d408f52a89e52886c1a093a2 
   clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
 f50da825756938c193d7f07bee953e000e2627d9 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/OffsetResetStrategy.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
  c1496a0851526f3c7d3905ce4bdff2129c83a6c1 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java
  56281ee15cc33dfc96ff64d5b1e596047c7132a4 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
  e7cfaaad296fa6e325026a5eee1aaf9b9c0fe1fe 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/RequestFuture.java
  PRE-CREATION 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
  cee75410127dd1b86c1156563003216d93a086b3 
   
 clients/src/test/java/org/apache/kafka/clients/consumer/MockConsumerTest.java 
 677edd385f35d4262342b567262c0b874876d25b 
   
 clients/src/test/java/org/apache/kafka/clients/consumer/internals/CoordinatorTest.java
  b06c4a73e2b4e9472cd772c8bc32bf4a29f431bb 
   
 clients/src/test/java/org/apache/kafka/clients/consumer/internals/FetcherTest.java
  419541011d652becf0cda7a5e62ce813cddb1732 
   
 clients/src/test/java/org/apache/kafka/clients/consumer/internals/HeartbeatTest.java
  ecc78cedf59a994fcf084fa7a458fe9ed5386b00 
   
 clients/src/test/java/org/apache/kafka/clients/consumer/internals/SubscriptionStateTest.java
  e000cf8e10ebfacd6c9ee68d7b88ff8c157f73c6 
 
 Diff: https://reviews.apache.org/r/34789/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Jason Gustafson
 




[jira] [Updated] (KAFKA-2168) New consumer poll() can block other calls like position(), commit(), and close() indefinitely

2015-06-11 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-2168:
---
Attachment: KAFKA-2168_2015-06-11_14:09:59.patch

 New consumer poll() can block other calls like position(), commit(), and 
 close() indefinitely
 -

 Key: KAFKA-2168
 URL: https://issues.apache.org/jira/browse/KAFKA-2168
 Project: Kafka
  Issue Type: Bug
  Components: clients, consumer
Reporter: Ewen Cheslack-Postava
Assignee: Jason Gustafson
 Attachments: KAFKA-2168.patch, KAFKA-2168_2015-06-01_16:03:38.patch, 
 KAFKA-2168_2015-06-02_17:09:37.patch, KAFKA-2168_2015-06-03_18:20:23.patch, 
 KAFKA-2168_2015-06-03_21:06:42.patch, KAFKA-2168_2015-06-04_14:36:04.patch, 
 KAFKA-2168_2015-06-05_12:01:28.patch, KAFKA-2168_2015-06-05_12:44:48.patch, 
 KAFKA-2168_2015-06-11_14:09:59.patch


 The new consumer is currently using very coarse-grained synchronization. For 
 most methods this isn't a problem since they finish quickly once the lock is 
 acquired, but poll() might run for a long time (and commonly will since 
 polling with long timeouts is a normal use case). This means any operations 
 invoked from another thread may block until the poll() call completes.
 Some example use cases where this can be a problem:
 * A shutdown hook is registered to trigger shutdown and invokes close(). It 
 gets invoked from another thread and blocks indefinitely.
 * User wants to manage offset commits themselves in a background thread. If 
 the commit policy is not purely time based, it's not currently possible to 
 make sure the call to commit() will be processed promptly.
 Two possible solutions to this:
 1. Make sure a lock is not held during the actual select call. Since we have 
 multiple layers (KafkaConsumer -> NetworkClient -> Selector -> nio Selector) 
 this is probably hard to make work cleanly since locking is currently only 
 performed at the KafkaConsumer level and we'd want it unlocked around a 
 single line of code in Selector.
 2. Wake up the selector before synchronizing for certain operations. This 
 would require some additional coordination to make sure the caller of 
 wakeup() is able to acquire the lock promptly (instead of, e.g., the poll() 
 thread being woken up and then promptly reacquiring the lock with a 
 subsequent long poll() call).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2168) New consumer poll() can block other calls like position(), commit(), and close() indefinitely

2015-06-11 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14582524#comment-14582524
 ] 

Jason Gustafson commented on KAFKA-2168:


Updated reviewboard https://reviews.apache.org/r/34789/diff/
 against branch upstream/trunk

 New consumer poll() can block other calls like position(), commit(), and 
 close() indefinitely
 -

 Key: KAFKA-2168
 URL: https://issues.apache.org/jira/browse/KAFKA-2168
 Project: Kafka
  Issue Type: Bug
  Components: clients, consumer
Reporter: Ewen Cheslack-Postava
Assignee: Jason Gustafson
 Attachments: KAFKA-2168.patch, KAFKA-2168_2015-06-01_16:03:38.patch, 
 KAFKA-2168_2015-06-02_17:09:37.patch, KAFKA-2168_2015-06-03_18:20:23.patch, 
 KAFKA-2168_2015-06-03_21:06:42.patch, KAFKA-2168_2015-06-04_14:36:04.patch, 
 KAFKA-2168_2015-06-05_12:01:28.patch, KAFKA-2168_2015-06-05_12:44:48.patch, 
 KAFKA-2168_2015-06-11_14:09:59.patch


 The new consumer is currently using very coarse-grained synchronization. For 
 most methods this isn't a problem since they finish quickly once the lock is 
 acquired, but poll() might run for a long time (and commonly will since 
 polling with long timeouts is a normal use case). This means any operations 
 invoked from another thread may block until the poll() call completes.
 Some example use cases where this can be a problem:
 * A shutdown hook is registered to trigger shutdown and invokes close(). It 
 gets invoked from another thread and blocks indefinitely.
 * User wants to manage offset commits themselves in a background thread. If 
 the commit policy is not purely time based, it's not currently possible to 
 make sure the call to commit() will be processed promptly.
 Two possible solutions to this:
 1. Make sure a lock is not held during the actual select call. Since we have 
 multiple layers (KafkaConsumer -> NetworkClient -> Selector -> nio Selector) 
 this is probably hard to make work cleanly since locking is currently only 
 performed at the KafkaConsumer level and we'd want it unlocked around a 
 single line of code in Selector.
 2. Wake up the selector before synchronizing for certain operations. This 
 would require some additional coordination to make sure the caller of 
 wakeup() is able to acquire the lock promptly (instead of, e.g., the poll() 
 thread being woken up and then promptly reacquiring the lock with a 
 subsequent long poll() call).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 34789: Patch for KAFKA-2168

2015-06-11 Thread Jason Gustafson

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34789/
---

(Updated June 11, 2015, 9:10 p.m.)


Review request for kafka.


Bugs: KAFKA-2168
https://issues.apache.org/jira/browse/KAFKA-2168


Repository: kafka


Description (updated)
---

KAFKA-2168; refactored callback handling to prevent unnecessary requests


KAFKA-2168; address review comments


KAFKA-2168; fix rebase error and checkstyle issue


KAFKA-2168; address review comments and add docs


KAFKA-2168; handle polling with timeout 0


KAFKA-2168; timeout=0 means return immediately


KAFKA-2168; address review comments


Diffs (updated)
-

  clients/src/main/java/org/apache/kafka/clients/consumer/Consumer.java 
8f587bc0705b65b3ef37c86e0c25bb43ab8803de 
  clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java 
1ca75f83d3667f7d01da1ae2fd9488fb79562364 
  
clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerWakeupException.java
 PRE-CREATION 
  clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
d1d1ec178f60dc47d408f52a89e52886c1a093a2 
  clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
f50da825756938c193d7f07bee953e000e2627d9 
  
clients/src/main/java/org/apache/kafka/clients/consumer/OffsetResetStrategy.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
 c1496a0851526f3c7d3905ce4bdff2129c83a6c1 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java 
56281ee15cc33dfc96ff64d5b1e596047c7132a4 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/Heartbeat.java
 e7cfaaad296fa6e325026a5eee1aaf9b9c0fe1fe 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/RequestFuture.java
 PRE-CREATION 
  
clients/src/main/java/org/apache/kafka/clients/consumer/internals/SubscriptionState.java
 cee75410127dd1b86c1156563003216d93a086b3 
  clients/src/test/java/org/apache/kafka/clients/consumer/MockConsumerTest.java 
677edd385f35d4262342b567262c0b874876d25b 
  
clients/src/test/java/org/apache/kafka/clients/consumer/internals/CoordinatorTest.java
 b06c4a73e2b4e9472cd772c8bc32bf4a29f431bb 
  
clients/src/test/java/org/apache/kafka/clients/consumer/internals/FetcherTest.java
 419541011d652becf0cda7a5e62ce813cddb1732 
  
clients/src/test/java/org/apache/kafka/clients/consumer/internals/HeartbeatTest.java
 ecc78cedf59a994fcf084fa7a458fe9ed5386b00 
  
clients/src/test/java/org/apache/kafka/clients/consumer/internals/SubscriptionStateTest.java
 e000cf8e10ebfacd6c9ee68d7b88ff8c157f73c6 

Diff: https://reviews.apache.org/r/34789/diff/


Testing
---


Thanks,

Jason Gustafson



[jira] [Commented] (KAFKA-2168) New consumer poll() can block other calls like position(), commit(), and close() indefinitely

2015-06-11 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14582528#comment-14582528
 ] 

Jay Kreps commented on KAFKA-2168:
--

Hey [~guozhang], have you had a chance to look at this? It would be good to get 
your thoughts as it relates somewhat to the refactoring you did...

 New consumer poll() can block other calls like position(), commit(), and 
 close() indefinitely
 -

 Key: KAFKA-2168
 URL: https://issues.apache.org/jira/browse/KAFKA-2168
 Project: Kafka
  Issue Type: Bug
  Components: clients, consumer
Reporter: Ewen Cheslack-Postava
Assignee: Jason Gustafson
 Attachments: KAFKA-2168.patch, KAFKA-2168_2015-06-01_16:03:38.patch, 
 KAFKA-2168_2015-06-02_17:09:37.patch, KAFKA-2168_2015-06-03_18:20:23.patch, 
 KAFKA-2168_2015-06-03_21:06:42.patch, KAFKA-2168_2015-06-04_14:36:04.patch, 
 KAFKA-2168_2015-06-05_12:01:28.patch, KAFKA-2168_2015-06-05_12:44:48.patch, 
 KAFKA-2168_2015-06-11_14:09:59.patch


 The new consumer is currently using very coarse-grained synchronization. For 
 most methods this isn't a problem since they finish quickly once the lock is 
 acquired, but poll() might run for a long time (and commonly will since 
 polling with long timeouts is a normal use case). This means any operations 
 invoked from another thread may block until the poll() call completes.
 Some example use cases where this can be a problem:
 * A shutdown hook is registered to trigger shutdown and invokes close(). It 
 gets invoked from another thread and blocks indefinitely.
 * User wants to manage offset commits themselves in a background thread. If 
 the commit policy is not purely time based, it's not currently possible to 
 make sure the call to commit() will be processed promptly.
 Two possible solutions to this:
 1. Make sure a lock is not held during the actual select call. Since we have 
 multiple layers (KafkaConsumer -> NetworkClient -> Selector -> nio Selector) 
 this is probably hard to make work cleanly since locking is currently only 
 performed at the KafkaConsumer level and we'd want it unlocked around a 
 single line of code in Selector.
 2. Wake up the selector before synchronizing for certain operations. This 
 would require some additional coordination to make sure the caller of 
 wakeup() is able to acquire the lock promptly (instead of, e.g., the poll() 
 thread being woken up and then promptly reacquiring the lock with a 
 subsequent long poll() call).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1367) Broker topic metadata not kept in sync with ZooKeeper

2015-06-11 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14582532#comment-14582532
 ] 

Ashish K Singh commented on KAFKA-1367:
---

[~jjkoshy] thanks for confirming. I will get started on the suggested solution 
for this issue. We will probably need a separate JIRA for KIP-24.

 Broker topic metadata not kept in sync with ZooKeeper
 -

 Key: KAFKA-1367
 URL: https://issues.apache.org/jira/browse/KAFKA-1367
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.0, 0.8.1
Reporter: Ryan Berdeen
Assignee: Ashish K Singh
  Labels: newbie++
 Fix For: 0.8.3

 Attachments: KAFKA-1367.txt


 When a broker is restarted, the topic metadata responses from the brokers 
 will be incorrect (different from ZooKeeper) until a preferred replica leader 
 election.
 In the metadata, it looks like leaders are correctly removed from the ISR 
 when a broker disappears, but followers are not. Then, when a broker 
 reappears, the ISR is never updated.
 I used a variation of the Vagrant setup created by Joe Stein to reproduce 
 this with latest from the 0.8.1 branch: 
 https://github.com/also/kafka/commit/dba36a503a5e22ea039df0f9852560b4fb1e067c



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-23 - Add JSON/CSV output and looping options to ConsumerGroupCommand

2015-06-11 Thread Gwen Shapira
Maybe bring it up at the next KIP call, to make sure everyone is aware?

On Thu, Jun 11, 2015 at 2:17 PM, Ashish Singh asi...@cloudera.com wrote:
 Hi Guys,

 This has been lying around for quite some time. Should I start a voting
 thread on this?

 On Thu, May 7, 2015 at 12:20 PM, Ashish Singh asi...@cloudera.com wrote:

 Had to change the title of the page and that surprisingly changed the link
 as well. KIP-23 is now available here:
 https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=56852556

 On Thu, May 7, 2015 at 11:34 AM, Ashish Singh asi...@cloudera.com wrote:

 Hi Guys,

 I just added a KIP, KIP-23 - Add JSON/CSV output and looping options to
 ConsumerGroupCommand
 https://cwiki.apache.org/confluence/display/KAFKA/KIP-23, for KAFKA-313
 https://issues.apache.org/jira/browse/KAFKA-313. The changes made as
 part of the JIRA can be found here https://reviews.apache.org/r/28096/.

 Comments and suggestions are welcome!

 --

 Regards,
 Ashish




 --

 Regards,
 Ashish




 --

 Regards,
 Ashish


Re: [DISCUSS] KIP-23 - Add JSON/CSV output and looping options to ConsumerGroupCommand

2015-06-11 Thread Ashish Singh
Jun,

Can we add this as part of next KIP's agenda?

On Thu, Jun 11, 2015 at 3:00 PM, Gwen Shapira gshap...@cloudera.com wrote:

 Maybe bring it up at the next KIP call, to make sure everyone is aware?

 On Thu, Jun 11, 2015 at 2:17 PM, Ashish Singh asi...@cloudera.com wrote:
  Hi Guys,
 
  This has been lying around for quite some time. Should I start a voting
  thread on this?
 
  On Thu, May 7, 2015 at 12:20 PM, Ashish Singh asi...@cloudera.com
 wrote:
 
  Had to change the title of the page and that surprisingly changed the
 link
  as well. KIP-23 is now available here:
  https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=56852556
 
  On Thu, May 7, 2015 at 11:34 AM, Ashish Singh asi...@cloudera.com
 wrote:
 
  Hi Guys,
 
  I just added a KIP, KIP-23 - Add JSON/CSV output and looping options to
  ConsumerGroupCommand
  https://cwiki.apache.org/confluence/display/KAFKA/KIP-23, for
 KAFKA-313
  https://issues.apache.org/jira/browse/KAFKA-313. The changes made as
  part of the JIRA can be found here 
 https://reviews.apache.org/r/28096/.
 
  Comments and suggestions are welcome!
 
  --
 
  Regards,
  Ashish
 
 
 
 
  --
 
  Regards,
  Ashish
 
 
 
 
  --
 
  Regards,
  Ashish




-- 

Regards,
Ashish


[jira] [Updated] (KAFKA-2005) Generate html report for system tests

2015-06-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2005:
---
   Resolution: Fixed
Fix Version/s: 0.8.3
   Status: Resolved  (was: Patch Available)

Thanks for the patch. +1 and committed to trunk.

 Generate html report for system tests
 -

 Key: KAFKA-2005
 URL: https://issues.apache.org/jira/browse/KAFKA-2005
 Project: Kafka
  Issue Type: Improvement
  Components: system tests
Reporter: Ashish K Singh
Assignee: Ashish K Singh
 Fix For: 0.8.3

 Attachments: KAFKA-2005.patch


 System test results are kind of huge and painful to read. An HTML report will 
 be very useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2249) KafkaConfig does not preserve original Properties

2015-06-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2249:
---
Status: In Progress  (was: Patch Available)

 KafkaConfig does not preserve original Properties
 -

 Key: KAFKA-2249
 URL: https://issues.apache.org/jira/browse/KAFKA-2249
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-2249.patch


 We typically generate configuration from properties objects (or maps).
 The old KafkaConfig, and the new ProducerConfig and ConsumerConfig all retain 
 the original Properties object, which means that if the user specified 
 properties that are not part of ConfigDef definitions, they are still 
 accessible.
 This is important especially for MetricReporters where we want to allow users 
 to pass arbitrary properties for the reporter.
 One way to support this is by having KafkaConfig implement AbstractConfig, 
 which will give us other nice functionality too.
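To illustrate why the original properties matter: a sketch of a custom metrics reporter that
reads a property not defined in any ConfigDef. The class name and the my.reporter.endpoint key
are made up; the point is that configure() can only see such a key if the original
user-supplied properties are passed through:

    import java.util.List;
    import java.util.Map;
    import org.apache.kafka.common.metrics.KafkaMetric;
    import org.apache.kafka.common.metrics.MetricsReporter;

    // Hypothetical reporter: "my.reporter.endpoint" is not part of any ConfigDef,
    // so it is only visible if the original user-supplied properties are preserved.
    public class EndpointMetricsReporter implements MetricsReporter {
        private String endpoint;

        @Override
        public void configure(Map<String, ?> configs) {
            Object value = configs.get("my.reporter.endpoint");
            this.endpoint = value == null ? "http://localhost:8080" : value.toString();
        }

        @Override
        public void init(List<KafkaMetric> metrics) { /* register existing metrics */ }

        @Override
        public void metricChange(KafkaMetric metric) { /* push the metric to endpoint */ }

        @Override
        public void close() { /* flush and shut down */ }
    }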



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2132) Move Log4J appender to a separate module

2015-06-11 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2132:

Summary: Move Log4J appender to a separate module  (was: Move Log4J 
appender to clients module)

 Move Log4J appender to a separate module
 

 Key: KAFKA-2132
 URL: https://issues.apache.org/jira/browse/KAFKA-2132
 Project: Kafka
  Issue Type: Improvement
Reporter: Gwen Shapira
Assignee: Ashish K Singh
 Attachments: KAFKA-2132.patch, KAFKA-2132_2015-04-27_19:59:46.patch, 
 KAFKA-2132_2015-04-30_12:22:02.patch, KAFKA-2132_2015-04-30_15:53:17.patch


 Log4j appender is just a producer.
 Since we have a new producer in the clients module, no need to keep Log4J 
 appender in core and force people to package all of Kafka with their apps.
 Let's move the Log4jAppender to the clients module.
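The "just a producer" observation boils down to something like the following stripped-down
sketch (not the actual KafkaLog4jAppender; option handling and error handling are omitted,
and the broker address is a placeholder):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.log4j.AppenderSkeleton;
    import org.apache.log4j.spi.LoggingEvent;

    // Minimal sketch: a log4j appender that forwards each log line to a Kafka topic.
    public class SimpleKafkaAppender extends AppenderSkeleton {
        private Producer<String, String> producer;
        private String topic = "logs"; // would normally come from appender configuration

        @Override
        public void activateOptions() {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            producer = new KafkaProducer<>(props);
        }

        @Override
        protected void append(LoggingEvent event) {
            String message = layout != null ? layout.format(event)
                                            : event.getRenderedMessage();
            producer.send(new ProducerRecord<String, String>(topic, message));
        }

        @Override
        public void close() {
            if (producer != null) producer.close();
        }

        @Override
        public boolean requiresLayout() {
            return false;
        }
    }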



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2005) Generate html report for system tests

2015-06-11 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14582646#comment-14582646
 ] 

Ashish K Singh commented on KAFKA-2005:
---

Thanks [~junrao] for reviewing and committing.

 Generate html report for system tests
 -

 Key: KAFKA-2005
 URL: https://issues.apache.org/jira/browse/KAFKA-2005
 Project: Kafka
  Issue Type: Improvement
  Components: system tests
Reporter: Ashish K Singh
Assignee: Ashish K Singh
 Fix For: 0.8.3

 Attachments: KAFKA-2005.patch


 System test results are kind of huge and painful to read. An HTML report will 
 be very useful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 35347: Patch for KAFKA-2249

2015-06-11 Thread Gwen Shapira


 On June 11, 2015, 11:22 p.m., Jun Rao wrote:
  core/src/main/scala/kafka/server/KafkaConfig.scala, line 485
  https://reviews.apache.org/r/35347/diff/1/?file=982452#file982452line485
 
  Is there a particular reason to change this to a long?

It is used as LONG everywhere in the code.


 On June 11, 2015, 11:22 p.m., Jun Rao wrote:
  core/src/main/scala/kafka/server/KafkaConfig.scala, line 492
  https://reviews.apache.org/r/35347/diff/1/?file=982452#file982452line492
 
  Is there a particular reason to change this to a long?

It is used as LONG everywhere in the code.


- Gwen


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35347/#review87643
---


On June 11, 2015, 6:09 a.m., Gwen Shapira wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/35347/
 ---
 
 (Updated June 11, 2015, 6:09 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2249
 https://issues.apache.org/jira/browse/KAFKA-2249
 
 
 Repository: kafka
 
 
 Description
 ---
 
 modified KafkaConfig to implement AbstractConfig. This resulted in somewhat 
 cleaner code, and we preserve the original Properties for use by 
 MetricReporter
 
 
 Diffs
 -
 
   clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java 
 c4fa058692f50abb4f47bd344119d805c60123f5 
   core/src/main/scala/kafka/controller/KafkaController.scala 
 69bba243a9a511cc5292b43da0cc48e421a428b0 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 d63bc18d795a6f0e6994538ca55a9a46f7fb8ffd 
   core/src/main/scala/kafka/server/KafkaConfig.scala 
 2d75186a110075e0c322db4b9f7a8c964a7a3e88 
   core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
 b31b432a226ba79546dd22ef1d2acbb439c2e9a3 
   core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
 ace6321b36d809946554d205bc926c9c76a43bd6 
 
 Diff: https://reviews.apache.org/r/35347/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Gwen Shapira
 




Re: Review Request 33614: Patch for KAFKA-2132

2015-06-11 Thread Aditya Auradkar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33614/#review87638
---


Hey Ashish,

I've left a few minor comments. Thanks!


log4j-appender/src/main/java/org/apache/kafka/log4jappender/KafkaLog4jAppender.java
https://reviews.apache.org/r/33614/#comment140037

Can you add some javadoc?



log4j-appender/src/main/java/org/apache/kafka/log4jappender/KafkaLog4jAppender.java
https://reviews.apache.org/r/33614/#comment140032

can this be logged as info since it's infrequent?



log4j-appender/src/main/java/org/apache/kafka/log4jappender/KafkaLog4jAppender.java
https://reviews.apache.org/r/33614/#comment140027

perhaps wrap this inside an isDebugEnabled check?



log4j-appender/src/main/java/org/apache/kafka/log4jappender/KafkaLog4jAppender.java
https://reviews.apache.org/r/33614/#comment140029

can you change this to this.syncSend?



log4j-appender/src/main/java/org/apache/kafka/log4jappender/KafkaLog4jAppender.java
https://reviews.apache.org/r/33614/#comment140028

perhaps you can use the ternary operator here?


- Aditya Auradkar


On April 30, 2015, 10:53 p.m., Ashish Singh wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/33614/
 ---
 
 (Updated April 30, 2015, 10:53 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2132
 https://issues.apache.org/jira/browse/KAFKA-2132
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-2132: Move Log4J appender to clients module
 
 
 Diffs
 -
 
   build.gradle fef515b3b2276b1f861e7cc2e33e74c3ce5e405b 
   checkstyle/import-control.xml f2e6cec267e67ce8e261341e373718e14a8e8e03 
   core/src/main/scala/kafka/producer/KafkaLog4jAppender.scala 
 5d36a019e3dbfb93737a9cd23404dcd1c5d836d1 
   core/src/test/scala/unit/kafka/log4j/KafkaLog4jAppenderTest.scala 
 41366a14590d318fced0e83d6921d8035fa882da 
   
 log4j-appender/src/main/java/org/apache/kafka/log4jappender/KafkaLog4jAppender.java
  PRE-CREATION 
   
 log4j-appender/src/test/java/org/apache/kafka/log4jappender/KafkaLog4jAppenderTest.java
  PRE-CREATION 
   
 log4j-appender/src/test/java/org/apache/kafka/log4jappender/MockKafkaLog4jAppender.java
  PRE-CREATION 
   settings.gradle 83f764e6a4a15a5fdba232dce74a369870f26b45 
 
 Diff: https://reviews.apache.org/r/33614/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Ashish Singh
 




[jira] [Commented] (KAFKA-2238) KafkaMetricsConfig cannot be configured in broker (KafkaConfig)

2015-06-11 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14582698#comment-14582698
 ] 

Gwen Shapira commented on KAFKA-2238:
-

[~aauradkar] I made significant changes to KafkaConfig in KAFKA-2249, which 
should resolve (or at least help with) some of the Metric issues.

Will KAFKA-2249 resolve your issues as well?

 KafkaMetricsConfig cannot be configured in broker (KafkaConfig)
 ---

 Key: KAFKA-2238
 URL: https://issues.apache.org/jira/browse/KAFKA-2238
 Project: Kafka
  Issue Type: Bug
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
 Attachments: KAFKA-2238.patch


 The metrics config values are not included in KafkaConfig and consequently 
 cannot be configured on the brokers. This is because the 
 KafkaMetricsReporter is passed a properties object generated by calling 
 toProps on KafkaConfig:
 KafkaMetricsReporter.startReporters(new 
 VerifiableProperties(serverConfig.toProps))
 However, KafkaConfig never writes these values into the properties object and 
 hence they aren't configurable; the defaults always apply.
 Add the following metrics configs to KafkaConfig:
 kafka.metrics.reporters, kafka.metrics.polling.interval.secs, 
 kafka.csv.metrics.reporter.enabled, kafka.csv.metrics.dir



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2249) KafkaConfig does not preserve original Properties

2015-06-11 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14581505#comment-14581505
 ] 

Gwen Shapira commented on KAFKA-2249:
-

Created reviewboard https://reviews.apache.org/r/35347/diff/
 against branch trunk

 KafkaConfig does not preserve original Properties
 -

 Key: KAFKA-2249
 URL: https://issues.apache.org/jira/browse/KAFKA-2249
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
 Attachments: KAFKA-2249.patch


 We typically generate configuration from properties objects (or maps).
 The old KafkaConfig, and the new ProducerConfig and ConsumerConfig all retain 
 the original Properties object, which means that if the user specified 
 properties that are not part of ConfigDef definitions, they are still 
 accessible.
 This is important especially for MetricReporters where we want to allow users 
 to pass arbitrary properties for the reporter.
 One way to support this is by having KafkaConfig implement AbstractConfig, 
 which will give us other nice functionality too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2249) KafkaConfig does not preserve original Properties

2015-06-11 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2249:

Assignee: Gwen Shapira
  Status: Patch Available  (was: Open)

 KafkaConfig does not preserve original Properties
 -

 Key: KAFKA-2249
 URL: https://issues.apache.org/jira/browse/KAFKA-2249
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
Assignee: Gwen Shapira
 Attachments: KAFKA-2249.patch


 We typically generate configuration from properties objects (or maps).
 The old KafkaConfig, and the new ProducerConfig and ConsumerConfig all retain 
 the original Properties object, which means that if the user specified 
 properties that are not part of ConfigDef definitions, they are still 
 accessible.
 This is important especially for MetricReporters where we want to allow users 
 to pass arbitrary properties for the reporter.
 One way to support this is by having KafkaConfig implement AbstractConfig, 
 which will give us other nice functionality too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2249) KafkaConfig does not preserve original Properties

2015-06-11 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2249:

Attachment: KAFKA-2249.patch

 KafkaConfig does not preserve original Properties
 -

 Key: KAFKA-2249
 URL: https://issues.apache.org/jira/browse/KAFKA-2249
 Project: Kafka
  Issue Type: Bug
Reporter: Gwen Shapira
 Attachments: KAFKA-2249.patch


 We typically generate configuration from properties objects (or maps).
 The old KafkaConfig, and the new ProducerConfig and ConsumerConfig all retain 
 the original Properties object, which means that if the user specified 
 properties that are not part of ConfigDef definitions, they are still 
 accessible.
 This is important especially for MetricReporters where we want to allow users 
 to pass arbitrary properties for the reporter.
 One way to support this is by having KafkaConfig implement AbstractConfig, 
 which will give us other nice functionality too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 34492: Patch for KAFKA-2210

2015-06-11 Thread Dapeng Sun

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review87532
---



core/src/main/scala/kafka/server/KafkaConfig.scala
https://reviews.apache.org/r/34492/#comment139881

Why add a new config file path? Could authorization-related config options 
be merged into KafkaConfig?


- Dapeng Sun


On June 5, 2015, 7:07 a.m., Parth Brahmbhatt wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/34492/
 ---
 
 (Updated June 5, 2015, 7:07 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2210
 https://issues.apache.org/jira/browse/KAFKA-2210
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Addressing review comments from Jun.
 
 
 Adding CREATE check for offset topic only if the topic does not exist already.
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/api/OffsetRequest.scala 
 3d483bc7518ad76f9548772522751afb4d046b78 
   core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
   core/src/main/scala/kafka/common/ErrorMapping.scala 
 c75c68589681b2c9d6eba2b440ce5e58cddf6370 
   core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/KafkaPrincipal.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Operation.java PRE-CREATION 
   core/src/main/scala/kafka/security/auth/PermissionType.java PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/ResourceType.java PRE-CREATION 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 387e387998fc3a6c9cb585dab02b5f77b0381fbf 
   core/src/main/scala/kafka/server/KafkaConfig.scala 
 6f25afd0e5df98258640252661dee271b1795111 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 e66710d2368334ece66f70d55f57b3f888262620 
   core/src/test/resources/acl.json PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/AclTest.scala PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/KafkaPrincipalTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/ResourceTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
 71f48c07723e334e6489efab500a43fa93a52d0c 
 
 Diff: https://reviews.apache.org/r/34492/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Parth Brahmbhatt
 




Review Request 35347: Patch for KAFKA-2249

2015-06-11 Thread Gwen Shapira

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35347/
---

Review request for kafka.


Bugs: KAFKA-2249
https://issues.apache.org/jira/browse/KAFKA-2249


Repository: kafka


Description
---

modified KafkaConfig to implement AbstractConfig. This resulted in somewhat 
cleaner code, and we preserve the original Properties for use by MetricReporter


Diffs
-

  clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java 
c4fa058692f50abb4f47bd344119d805c60123f5 
  core/src/main/scala/kafka/controller/KafkaController.scala 
69bba243a9a511cc5292b43da0cc48e421a428b0 
  core/src/main/scala/kafka/server/KafkaApis.scala 
d63bc18d795a6f0e6994538ca55a9a46f7fb8ffd 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
2d75186a110075e0c322db4b9f7a8c964a7a3e88 
  core/src/main/scala/kafka/server/ReplicaFetcherThread.scala 
b31b432a226ba79546dd22ef1d2acbb439c2e9a3 
  core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
ace6321b36d809946554d205bc926c9c76a43bd6 

Diff: https://reviews.apache.org/r/35347/diff/


Testing
---


Thanks,

Gwen Shapira



Re: Review Request 35261: Patch for KAFKA-2232

2015-06-11 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35261/#review87661
---


Thanks for the patch. A few comments below.


clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java
https://reviews.apache.org/r/35261/#comment140068

MockProducer and MockConsumer are meant for testing a Kafka application. 
So, it's convenient to include them in the client package instead of the test 
package.



clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java
https://reviews.apache.org/r/35261/#comment140069

The comment is inaccurate. We are passing in an empty cluster, not a null. 
This is an existing problem, but could you fix it in this jira too?



clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java
https://reviews.apache.org/r/35261/#comment140070

Could we add another constructor to pass in the partitioner?



clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java
https://reviews.apache.org/r/35261/#comment140071

We should pass in the key/value object, instead of null to the partitioner.


- Jun Rao


On June 9, 2015, 7 p.m., Alexander Pakulov wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/35261/
 ---
 
 (Updated June 9, 2015, 7 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2232
 https://issues.apache.org/jira/browse/KAFKA-2232
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-2232: Make MockProducer generic
 
 
 Diffs
 -
 
   clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java 
 f50da825756938c193d7f07bee953e000e2627d9 
   clients/src/main/java/org/apache/kafka/clients/producer/MockProducer.java 
 e66491cc82f11641df6516e7d7abb4a808c27368 
   
 clients/src/test/java/org/apache/kafka/clients/consumer/MockConsumerTest.java 
 677edd385f35d4262342b567262c0b874876d25b 
   
 clients/src/test/java/org/apache/kafka/clients/producer/MockProducerTest.java 
 6372f1a7f7f77d96ba7be05eb927c004f7fefb73 
   clients/src/test/java/org/apache/kafka/test/MockSerializer.java 
 e75d2e4e58ae0cdbe276d3a3b652e47795984791 
 
 Diff: https://reviews.apache.org/r/35261/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Alexander Pakulov
 




Re: [Discussion] New Consumer API / Protocol

2015-06-11 Thread Jun Rao
Guozhang,

Perhaps we can discuss this in our KIP hangout next week?

Thanks,

Jun

On Tue, Jun 9, 2015 at 1:12 PM, Guozhang Wang wangg...@gmail.com wrote:

 This email is to kick-off some discussion around the changes we want to
 make on the new consumer APIs as well as their semantics. Here are a
 not-comprehensive list of items in my mind:

 1. Poll(timeout): the current definition of timeout states "The time, in
 milliseconds, spent waiting in poll if data is not available. If 0, waits
 indefinitely." However, the current implementation has different
 semantics than stated, for example:

 a) poll(timeout) can return before timeout elapsed with empty consumed
 data.
 b) poll(timeout) can return after more than timeout elapsed due to
 blocking event like join-group, coordinator discovery, etc.

 We should think a bit more on what semantics we really want to provide and
 how to provide it in implementation.

 2. Thread safety: currently we have a coarse-grained locking mechanism
 that provides thread safety but blocks commit / position / etc. calls
 while poll() is in progress. We are considering removing the
 coarse-grained locking in favor of an additional Consumer.wakeup() call to break
 the polling, and instead suggesting that users have one consumer client per
 thread, which aligns with the design of a single-threaded consumer
 (KAFKA-2123).

 3. Commit(): we want to improve the async commit calls to add a callback
 handler invoked when the commit completes, and to guarantee ordering of commit
 calls with retry policies (KAFKA-2168). In addition, we want to extend the API to
 expose attaching / fetching offset metadata stored in the Kafka offset
 manager (see the sketch after this message).

 4. OffsetFetchRequest: currently for handling OffsetCommitRequest we check
 the generation id and the assigned partitions before accepting the request
 if the group is using Kafka for partition management, but for
 OffsetFetchRequest we cannot do this checking since it does not include
 groupId / consumerId information. Do people think this is OK or we should
 add this as we did in OffsetCommitRequest?

 5. New APIs: there are some other requests to add:

 a) offsetsBeforeTime(timestamp): or would seekToEnd and seekToBeginning
 be sufficient?

 b) listTopics(): or should we just enforce users to use AdminUtils for such
 operations?

 There may be other issues that I have missed here, so folks just bring it
 up if you thought about anything else.

 -- Guozhang
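To make item 3 above a bit more concrete, a purely hypothetical shape for an async commit
callback; no such interface exists in the client at this point, and both the names and the
signature are invented for illustration only:

    import java.util.Map;
    import org.apache.kafka.common.TopicPartition;

    // Hypothetical: invoked when an async commit completes (successfully or not).
    public interface OffsetCommitCallback {
        void onComplete(Map<TopicPartition, Long> committedOffsets, Exception exception);
    }

    // Hypothetical usage, assuming commit() were to gain a callback argument:
    //   consumer.commit(offsets, CommitType.ASYNC, new OffsetCommitCallback() { ... });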



Re: Review Request 33614: Patch for KAFKA-2132

2015-06-11 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/33614/#review87670
---


Thanks for the patch. A few comments below.


build.gradle
https://reviews.apache.org/r/33614/#comment140087

Could you check if this is needed? I think a compile dependency implies a 
testCompile dependency.



log4j-appender/src/main/java/org/apache/kafka/log4jappender/KafkaLog4jAppender.java
https://reviews.apache.org/r/33614/#comment140084

Should we use Logger inside log4j? Should we use LogLog as in the original 
scala code?



log4j-appender/src/main/java/org/apache/kafka/log4jappender/KafkaLog4jAppender.java
https://reviews.apache.org/r/33614/#comment140085

Would it be better to use the built-in StringSerializer?



log4j-appender/src/main/java/org/apache/kafka/log4jappender/KafkaLog4jAppender.java
https://reviews.apache.org/r/33614/#comment140086

We can just use ConfigException.


- Jun Rao


On April 30, 2015, 10:53 p.m., Ashish Singh wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/33614/
 ---
 
 (Updated April 30, 2015, 10:53 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2132
 https://issues.apache.org/jira/browse/KAFKA-2132
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-2132: Move Log4J appender to clients module
 
 
 Diffs
 -
 
   build.gradle fef515b3b2276b1f861e7cc2e33e74c3ce5e405b 
   checkstyle/import-control.xml f2e6cec267e67ce8e261341e373718e14a8e8e03 
   core/src/main/scala/kafka/producer/KafkaLog4jAppender.scala 
 5d36a019e3dbfb93737a9cd23404dcd1c5d836d1 
   core/src/test/scala/unit/kafka/log4j/KafkaLog4jAppenderTest.scala 
 41366a14590d318fced0e83d6921d8035fa882da 
   
 log4j-appender/src/main/java/org/apache/kafka/log4jappender/KafkaLog4jAppender.java
  PRE-CREATION 
   
 log4j-appender/src/test/java/org/apache/kafka/log4jappender/KafkaLog4jAppenderTest.java
  PRE-CREATION 
   
 log4j-appender/src/test/java/org/apache/kafka/log4jappender/MockKafkaLog4jAppender.java
  PRE-CREATION 
   settings.gradle 83f764e6a4a15a5fdba232dce74a369870f26b45 
 
 Diff: https://reviews.apache.org/r/33614/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Ashish Singh
 




[jira] [Updated] (KAFKA-2132) Move Log4J appender to a separate module

2015-06-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2132:
---
Status: In Progress  (was: Patch Available)

 Move Log4J appender to a separate module
 

 Key: KAFKA-2132
 URL: https://issues.apache.org/jira/browse/KAFKA-2132
 Project: Kafka
  Issue Type: Improvement
Reporter: Gwen Shapira
Assignee: Ashish K Singh
 Attachments: KAFKA-2132.patch, KAFKA-2132_2015-04-27_19:59:46.patch, 
 KAFKA-2132_2015-04-30_12:22:02.patch, KAFKA-2132_2015-04-30_15:53:17.patch


 The Log4j appender is just a producer.
 Since we have a new producer in the clients module, there is no need to keep the 
 Log4J appender in core and force people to package all of Kafka with their apps.
 Let's move the Log4jAppender to the clients module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-23 - Add JSON/CSV output and looping options to ConsumerGroupCommand

2015-06-11 Thread Neha Narkhede
Thanks for submitting the KIP, Ashish! Few questions.

1. Can you specify more details around how you expect the csv output to be
used? The same for json.
2. If we add these options, would you still need the old format? If
csv/json offers more convenience, should we have a plan to phase out the
old format?

On Thu, Jun 11, 2015 at 6:05 PM, Ashish Singh asi...@cloudera.com wrote:

 Jun,

 Can we add this as part of next KIP's agenda?

 On Thu, Jun 11, 2015 at 3:00 PM, Gwen Shapira gshap...@cloudera.com
 wrote:

  Maybe bring it up at the next KIP call, to make sure everyone is aware?
 
  On Thu, Jun 11, 2015 at 2:17 PM, Ashish Singh asi...@cloudera.com
 wrote:
   Hi Guys,
  
   This has been lying around for quite some time. Should I start a voting
   thread on this?
  
   On Thu, May 7, 2015 at 12:20 PM, Ashish Singh asi...@cloudera.com
  wrote:
  
   Had to change the title of the page and that surprisingly changed the
  link
   as well. KIP-23 is now available at here
   
 
 https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=56852556
   .
  
   On Thu, May 7, 2015 at 11:34 AM, Ashish Singh asi...@cloudera.com
  wrote:
  
   Hi Guys,
  
   I just added a KIP, KIP-23 - Add JSON/CSV output and looping options
 to
   ConsumerGroupCommand
   https://cwiki.apache.org/confluence/display/KAFKA/KIP-23, for
  KAFKA-313
   https://issues.apache.org/jira/browse/KAFKA-313. The changes made
 as
   part of the JIRA can be found here 
  https://reviews.apache.org/r/28096/.
  
   Comments and suggestions are welcome!
  
   --
  
   Regards,
   Ashish
  
  
  
  
   --
  
   Regards,
   Ashish
  
  
  
  
   --
  
   Regards,
   Ashish
 



 --

 Regards,
 Ashish




-- 
Thanks,
Neha


[jira] [Commented] (KAFKA-2238) KafkaMetricsConfig cannot be configured in broker (KafkaConfig)

2015-06-11 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14582722#comment-14582722
 ] 

Jun Rao commented on KAFKA-2238:


[~auradkar], took a look at your patch. Currently, the Kafka metrics reporter 
is instantiated in Kafka.scala before KafkaConfig is constructed. So, even if 
you put the properties related to reporters into KafkaConfig, they won't be 
used. We will have to move the instantiation of the reporters to KafkaServer 
for this to work. However, since currently we only have a CSV reporter, I am 
not sure if it's worth doing. 

 KafkaMetricsConfig cannot be configured in broker (KafkaConfig)
 ---

 Key: KAFKA-2238
 URL: https://issues.apache.org/jira/browse/KAFKA-2238
 Project: Kafka
  Issue Type: Bug
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
 Attachments: KAFKA-2238.patch


 None of the metrics config values are included in KafkaConfig, and 
 consequently they cannot be configured on the brokers. This is because 
 KafkaMetricsReporter is passed a properties object generated by calling 
 toProps on KafkaConfig:
 KafkaMetricsReporter.startReporters(new 
 VerifiableProperties(serverConfig.toProps))
 However, KafkaConfig never writes these values into the properties object, 
 and hence they aren't configurable; the defaults always apply.
 Add the following metrics configs to KafkaConfig:
 kafka.metrics.reporters, kafka.metrics.polling.interval.secs, 
 kafka.csv.metrics.reporter.enabled, kafka.csv.metrics.dir
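
 For reference, a minimal sketch of a broker config once these keys are honored 
 by KafkaConfig (property names are taken from the list above; the reporter 
 class is the stock CSV reporter and the directory is just an example):
 kafka.metrics.reporters=kafka.metrics.KafkaCSVMetricsReporter
 kafka.metrics.polling.interval.secs=10
 kafka.csv.metrics.reporter.enabled=true
 kafka.csv.metrics.dir=/tmp/kafka_csv_metrics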



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] KIP-25 System test improvements

2015-06-11 Thread Jun Rao
+1

Thanks,

Jun

On Wed, Jun 10, 2015 at 6:10 PM, Geoffrey Anderson ge...@confluent.io
wrote:

 Hi Kafka,

 After a few rounds of discussion on KIP-25, there doesn't seem to be
 opposition, so I'd like to propose a vote.

 Thanks,
 Geoff

 On Mon, Jun 8, 2015 at 10:56 PM, Geoffrey Anderson ge...@confluent.io
 wrote:

  Hi KIP-25 thread,
 
  I consolidated some of the questions from this thread and elsewhere.
 
  Q: Can we see a map of what system-test currently tests, which ones we
  want to replace and JIRAs for replacing?
  A: Initial draft here:
 
 https://cwiki.apache.org/confluence/display/KAFKA/Roadmap+-+port+existing+system+tests
 
  Q: Will ducktape be maintained separately as a github repo?
  A: Yes https://github.com/confluentinc/ducktape
 
  Q: How easy is viewing the test results and logs, how will test output be
  structured?
  A: Hierarchical structure as outlined here:
  https://github.com/confluentinc/ducktape/wiki/Design-overview#output
 
  Q: Does it support code coverage? If not, how easy/ difficult would it be
  to support?
  A: It does not, and we have no immediate plans to support this.
 Difficulty
  unclear.
 
  Q: It would be nice if each Kafka version that we release will also
  have a separate tests artifact that users can download, untar and
 easily
  run against a Kafka cluster of the same version.
  A: This seems reasonable and not too much extra work. Definitely open to
  discussion on this.
 
  Q: Why not share running services across multiple tests?
  A: Prefer to optimize for simplicity and correctness over what might be a
  questionable improvement in run-time.
 
  Q: Are regressions - in the road map?
  A: yes
 
  Q: Are Jepsen style tests involving network failures in the road map?
  A: yes
 
  Thanks much,
  Geoff
 
 
 



[jira] [Resolved] (KAFKA-2263) Update Is it possible to delete a topic wiki FAQ answer

2015-06-11 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-2263.

Resolution: Fixed

Thanks for reporting this. Updated the wiki.

 Update Is it possible to delete a topic wiki FAQ answer
 -

 Key: KAFKA-2263
 URL: https://issues.apache.org/jira/browse/KAFKA-2263
 Project: Kafka
  Issue Type: Task
  Components: website
Affects Versions: 0.8.2.1
Reporter: Stevo Slavic
Priority: Trivial
  Labels: newbie

 Answer to the mentioned 
 [FAQ|https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Isitpossibletodeleteatopic?]
  hasn't been updated since delete feature became available in 0.8.2.x



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2266) Client Selector can drop idle connections without notifying NetworkClient

2015-06-11 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-2266:
--

 Summary: Client Selector can drop idle connections without 
notifying NetworkClient
 Key: KAFKA-2266
 URL: https://issues.apache.org/jira/browse/KAFKA-2266
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson


I've run into this while testing the new consumer. The class 
org.apache.kafka.common.network.Selector has code to drop idle connections, but 
when one is dropped, it is not added to the list of disconnections. This causes 
inconsistency between Selector and NetworkClient, which depends on this list to 
update its internal connection states. When a new request is sent to 
NetworkClient, it still sees the connection as good and forwards it to 
Selector, which results in an IllegalStateException. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 35371: Patch for KAFKA-2266

2015-06-11 Thread Jason Gustafson

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35371/
---

Review request for kafka.


Bugs: KAFKA-2266
https://issues.apache.org/jira/browse/KAFKA-2266


Repository: kafka


Description
---

KAFKA-2266; add dropped idle connections to the list of disconnects in Selector


Diffs
-

  clients/src/main/java/org/apache/kafka/common/network/Selector.java 
effb1e63081ed2c1fff6d08d4ecdf8a3cb43e40a 
  clients/src/test/java/org/apache/kafka/common/network/SelectorTest.java 
d23b4b6a7060eeefa9f47f292fd818c881d789c1 

Diff: https://reviews.apache.org/r/35371/diff/


Testing
---


Thanks,

Jason Gustafson



[jira] [Updated] (KAFKA-2266) Client Selector can drop idle connections without notifying NetworkClient

2015-06-11 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-2266:
---
Attachment: KAFKA-2266.patch

 Client Selector can drop idle connections without notifying NetworkClient
 -

 Key: KAFKA-2266
 URL: https://issues.apache.org/jira/browse/KAFKA-2266
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson
 Attachments: KAFKA-2266.patch


 I've run into this while testing the new consumer. The class 
 org.apache.kafka.common.network.Selector has code to drop idle connections, 
 but when one is dropped, it is not added to the list of disconnections. This 
 causes inconsistency between Selector and NetworkClient, which depends on 
 this list to update its internal connection states. When a new request is 
 sent to NetworkClient, it still sees the connection as good and forwards it 
 to Selector, which results in an IllegalStateException. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2266) Client Selector can drop idle connections without notifying NetworkClient

2015-06-11 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-2266:
---
Status: Patch Available  (was: Open)

 Client Selector can drop idle connections without notifying NetworkClient
 -

 Key: KAFKA-2266
 URL: https://issues.apache.org/jira/browse/KAFKA-2266
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson
 Attachments: KAFKA-2266.patch


 I've run into this while testing the new consumer. The class 
 org.apache.kafka.common.network.Selector has code to drop idle connections, 
 but when one is dropped, it is not added to the list of disconnections. This 
 causes inconsistency between Selector and NetworkClient, which depends on 
 this list to update its internal connection states. When a new request is 
 sent to NetworkClient, it still sees the connection as good and forwards it 
 to Selector, which results in an IllegalStateException. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2266) Client Selector can drop idle connections without notifying NetworkClient

2015-06-11 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14582762#comment-14582762
 ] 

Jason Gustafson commented on KAFKA-2266:


Created reviewboard https://reviews.apache.org/r/35371/diff/
 against branch upstream/trunk

 Client Selector can drop idle connections without notifying NetworkClient
 -

 Key: KAFKA-2266
 URL: https://issues.apache.org/jira/browse/KAFKA-2266
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson
 Attachments: KAFKA-2266.patch


 I've run into this while testing the new consumer. The class 
 org.apache.kafka.common.network.Selector has code to drop idle connections, 
 but when one is dropped, it is not added to the list of disconnections. This 
 causes inconsistency between Selector and NetworkClient, which depends on 
 this list to update its internal connection states. When a new request is 
 sent to NetworkClient, it still sees the connection as good and forwards it 
 to Selector, which results in an IllegalStateException. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 35371: Patch for KAFKA-2266

2015-06-11 Thread Jun Rao

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35371/#review87667
---


Thanks for the patch. Good catch! A minor comment below.


clients/src/test/java/org/apache/kafka/common/network/SelectorTest.java
https://reviews.apache.org/r/35371/#comment140081

Waiting a second on this could be long and is not always reliable. Perhaps 
we can introduce a waitUntil util like the one in kafka.utils.TestUtils and use it here.
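
For illustration, a waitUntil-style helper could look something like this 
(a generic sketch, not the actual kafka.utils.TestUtils code):

    import java.util.function.BooleanSupplier;

    public final class WaitUtil {
        private WaitUtil() {}

        // Poll the condition until it holds or the timeout expires,
        // instead of sleeping for a fixed second.
        public static void waitUntil(BooleanSupplier condition, long timeoutMs, String message)
                throws InterruptedException {
            long deadline = System.currentTimeMillis() + timeoutMs;
            while (!condition.getAsBoolean()) {
                if (System.currentTimeMillis() > deadline)
                    throw new AssertionError(message);
                Thread.sleep(10); // short poll interval so the test finishes quickly once the condition flips
            }
        }
    }

The test could then wait on the selector reporting the connection as 
disconnected, with a generous timeout, rather than sleeping a fixed amount 
of time.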


- Jun Rao


On June 12, 2015, 1:02 a.m., Jason Gustafson wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/35371/
 ---
 
 (Updated June 12, 2015, 1:02 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2266
 https://issues.apache.org/jira/browse/KAFKA-2266
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-2266; add dropped idle connections to the list of disconnects in 
 Selector
 
 
 Diffs
 -
 
   clients/src/main/java/org/apache/kafka/common/network/Selector.java 
 effb1e63081ed2c1fff6d08d4ecdf8a3cb43e40a 
   clients/src/test/java/org/apache/kafka/common/network/SelectorTest.java 
 d23b4b6a7060eeefa9f47f292fd818c881d789c1 
 
 Diff: https://reviews.apache.org/r/35371/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Jason Gustafson
 




offset storage as kafka with zookeeper 3.4.6

2015-06-11 Thread Kris K
I am trying to migrate the offset storage to kafka (3 brokers of version
0.8.2.1) using the consumer property offsets.storage=kafka.  I noticed that
a new topic, __consumer_offsets got created.
But nothing is being written to this topic, while the consumer offsets
continue to reside on zookeeper.

I am using a 3 node zookeeper ensemble (version 3.4.6) and not using the
one that comes with kafka.

The current config consumer.properties now contains:

offsets.storage=kafka
dual.commit.enabled=false
exclude.internal.topics=false

Is it mandatory to use the zookeeper that comes with kafka for offset
storage to be migrated to kafka?

I tried both the approaches:

1. As listed on slide 34 of
http://www.slideshare.net/jjkoshy/offset-management-in-kafka.
2. By deleting the zookeeper data directories and kafka log directories.

None of them worked.

Thanks
Kris


[jira] [Created] (KAFKA-2264) SESSION_TIMEOUT_MS_CONFIG in ConsumerConfig should be int

2015-06-11 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-2264:
--

 Summary: SESSION_TIMEOUT_MS_CONFIG in ConsumerConfig should be int
 Key: KAFKA-2264
 URL: https://issues.apache.org/jira/browse/KAFKA-2264
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.8.2.1
Reporter: Jun Rao
Priority: Trivial


In our wire protocol, session timeout is an int in JoinGroupRequest. However, 
in ConsumerConfig, session timeout is a long. We should make them consistent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: offset storage as kafka with zookeeper 3.4.6

2015-06-11 Thread Kris K
If you want to move offsets from zookeeper to Kafka then yes you
need to have a phase where all consumers in your group set dual commit
to true. If you are starting a fresh consumer group then you can
turn off dual-commit.
I followed these steps to move the offsets from zookeeper to kafka:
1. Set dual commit to true, exclude.internal.topics=false and offset
storage to kafka on all 3 consumer.properties files
2. Rolling restart of all 3 brokers. All the consumers are high-level
consumers with auto.commit.enable set to false
3. Left these settings for about an hour while data kept flowing through
some topics (not all)
4. Used the utility ./kafka-console-consumer.sh --topic __consumer_offsets
--zookeeper xxx:2181,yyy:2181,zzz:2181 --formatter
kafka.server.OffsetManager\$OffsetsMessageFormatter --consumer.config
../config/consumer.properties and found that nothing is written to this
topic
5. Changed dual commit to false followed by a rolling restart. All the
consumers are high level consumers
6. Zookeeper offsets kept changing while nothing gets written to
__consumer_offsets

In order to reproduce the issue:
1. Brought down all 3 brokers and 3 nodes of zookeeper
2. Deleted all the contents of snapshot and transaction log directories of
zookeeper
3. Added myid files in snapshot directories on all zk nodes with node ids
4. Deleted all the contents of kafka log directories on all 3 brokers
5. Set dual commit to false, exclude.internal.topics=false and offset
storage to kafka on all 3 consumer.properties files
6. Brought the environment up. All the consumers are high-level consumers
with auto.commit.enable set to false
7. Consumer offsets still got recorded on zookeeper and kept changing
while __consumer_offsets
was empty

When I did a standalone installation with a single broker and used the
zookeeper that comes with kafka, the offsets got written to __consumer_offsets.
This made me ask the question about using zookeeper 3.4.6 versus the one
that comes with kafka.

You can
also check consumer mbeans that give the KafkaCommitRate or enable
trace logging in either the consumer or the broker's request logs to
check if offset commit request are getting sent out to the cluster.
I will check on this

Thanks,
Kris


On Thu, Jun 11, 2015 at 7:45 AM, Joel Koshy jjkosh...@gmail.com wrote:

  Is it mandatory to use the zookeeper that comes with kafka for offset
  storage to be migrated to kafka?
 If you want to move offsets from zookeeper to Kafka then yes you
 need to have a phase where all consumers in your group set dual commit
 to true. If you are starting a fresh consumer group then you can
 turn off dual-commit.

  But nothing is being written to this topic, while the consumer offsets
  continue to reside on zookeeper.

 The zookeeper offsets won't be removed. However, are they changing?
 How are you verifying that nothing is written to this topic? If you
 are trying to consume it, then you will need to set
 exclude.internal.topics=false in your consumer properties. You can
 also check consumer mbeans that give the KafkaCommitRate or enable
 trace logging in either the consumer or the broker's request logs to
 check if offset commit request are getting sent out to the cluster.

 On Thu, Jun 11, 2015 at 01:03:09AM -0700, Kris K wrote:
  I am trying to migrate the offset storage to kafka (3 brokers of version
  0.8.2.1) using the consumer property offsets.storage=kafka.  I noticed
 that
  a new topic, __consumer_offsets got created.
  But nothing is being written to this topic, while the consumer offsets
  continue to reside on zookeeper.
 
  I am using a 3 node zookeeper ensemble (version 3.4.6) and not using the
  one that comes with kafka.
 
  The current config consumer.properties now contains:
 
  offsets.storage=kafka
  dual.commit.enabled=false
  exclude.internal.topics=false
 
  Is it mandatory to use the zookeeper that comes with kafka for offset
  storage to be migrated to kafka?
 
  I tried both the approaches:
 
  1. As listed on slide 34 of
  http://www.slideshare.net/jjkoshy/offset-management-in-kafka.
  2. By deleting the zookeeper data directories and kafka log directories.
 
  None of them worked.
 
  Thanks
  Kris




Re: Review Request 35201: Fix KAFKA-2253

2015-06-11 Thread Jiangjie Qin


 On June 11, 2015, 1:07 a.m., Jun Rao wrote:
  core/src/main/scala/kafka/server/DelayedOperation.scala, lines 264-266
  https://reviews.apache.org/r/35201/diff/2/?file=980805#file980805line264
 
  Not sure if we need this check. Since all writes to watchersForKey are 
  sync-ed, it's ok to remove a watcher as long as its count is 0.
  
  I am a bit concerned about the overhead on the removeWatchersLock, which 
  is global. For example, if you have 1000 requests/sec and each request has 
  1000 partitions, that lock is going to be accessed 1 million times per second. 
  Could you do some tests/profiling before and after we introduced the global 
  lock to see if this could be an issue?

The lock is only grabbed when a watcher has no operations in it. But I agree 
that could be an issue.
I'm wondering whether there is a reason that we have to remove a watcher immediately 
when its count becomes zero. Can we just let the reaper remove empty watchers?
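
A rough sketch of that idea (hypothetical types, not the actual purgatory 
code): the expiration reaper already runs periodically, so it could also drop 
watcher lists that have become empty instead of removing them inline on the 
request path.

    import java.util.List;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArrayList;

    public class WatcherPurgeSketch {
        // key -> operations watching that key; empty lists linger until the reaper runs
        private final ConcurrentHashMap<Object, List<Object>> watchersForKey =
                new ConcurrentHashMap<>();

        public void watch(Object key, Object operation) {
            watchersForKey.computeIfAbsent(key, k -> new CopyOnWriteArrayList<>()).add(operation);
        }

        // Periodic pass run from the reaper thread.
        public void purgeEmptyWatchers() {
            watchersForKey.entrySet().removeIf(entry -> entry.getValue().isEmpty());
        }
    }

Note that this glosses over the race where an operation is added to a list 
between the isEmpty() check and the removal -- handling that race is exactly 
what the locking in the current patch is about, so a reaper-based cleanup 
still needs a story for it.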


- Jiangjie


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35201/#review87495
---


On June 8, 2015, 6:47 p.m., Guozhang Wang wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/35201/
 ---
 
 (Updated June 8, 2015, 6:47 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2253
 https://issues.apache.org/jira/browse/KAFKA-2253
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Incorporated Jiangjie and Onur's comments
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/server/DelayedOperation.scala 
 123078d97a7bfe2121655c00f3b2c6af21c53015 
 
 Diff: https://reviews.apache.org/r/35201/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Guozhang Wang
 




Re: Review Request 35201: Fix KAFKA-2253

2015-06-11 Thread Onur Karaman


 On June 11, 2015, 1:07 a.m., Jun Rao wrote:
  core/src/main/scala/kafka/server/DelayedOperation.scala, lines 264-266
  https://reviews.apache.org/r/35201/diff/2/?file=980805#file980805line264
 
  Not sure if we need this check. Since all writes to watchersForKey are 
  sync-ed, it's ok to remove a watcher as long as its count is 0.
  
  I am a bit concerned about the overhead on the removeWatchersLock, which 
  is global. For example, if you have 1000 requests/sec and each request has 
  1000 partitions, that lock is going to be accessed 1 million times per second. 
  Could you do some tests/profiling before and after we introduced the global 
  lock to see if this could be an issue?
 
 Jiangjie Qin wrote:
 The lock is only grabbed when a watcher has no operations in it. But I 
 agree that could be an issue.
 I'm wondering whether there is a reason that we have to remove a watcher 
 immediately when its count becomes zero. Can we just let the reaper remove 
 empty watchers?

+1 on letting the reaper do it.


- Onur


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35201/#review87495
---


On June 8, 2015, 6:47 p.m., Guozhang Wang wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/35201/
 ---
 
 (Updated June 8, 2015, 6:47 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2253
 https://issues.apache.org/jira/browse/KAFKA-2253
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Incorporated Jiangjie and Onur's comments
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/server/DelayedOperation.scala 
 123078d97a7bfe2121655c00f3b2c6af21c53015 
 
 Diff: https://reviews.apache.org/r/35201/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Guozhang Wang
 




RE: [DISCUSS] KIP-25 System test improvements

2015-06-11 Thread Aditya Auradkar
Hi Geoffrey,

Thanks for the writeup. Couple of questions:
- Is it possible to configure suites using ducktape? For example: assume all 
the tests in system_tests have been migrated to ducktape. Can I run a subset of 
all tests grouped by functional areas i.e. replication, broker failure etc.

- Ducktape allows us to run tests on a vagrant cluster or on a static cluster 
configured via JSON. Once ported to ducktape, can we very easily run the 
existing system tests in both flavors?

Thanks,
Aditya


From: Geoffrey Anderson [ge...@confluent.io]
Sent: Monday, June 08, 2015 10:56 PM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-25 System test improvements

Hi KIP-25 thread,

I consolidated some of the questions from this thread and elsewhere.

Q: Can we see a map of what system-test currently tests, which ones we want
to replace and JIRAs for replacing?
A: Initial draft here:
https://cwiki.apache.org/confluence/display/KAFKA/Roadmap+-+port+existing+system+tests

Q: Will ducktape be maintained separately as a github repo?
A: Yes https://github.com/confluentinc/ducktape

Q: How easy is viewing the test results and logs, how will test output be
structured?
A: Hierarchical structure as outlined here:
https://github.com/confluentinc/ducktape/wiki/Design-overview#output

Q: Does it support code coverage? If not, how easy/ difficult would it be
to support?
A: It does not, and we have no immediate plans to support this. Difficulty
unclear.

Q: It would be nice if each Kafka version that we release will also
have a separate tests artifact that users can download, untar and easily
run against a Kafka cluster of the same version.
A: This seems reasonable and not too much extra work. Definitely open to
discussion on this.

Q: Why not share running services across multiple tests?
A: Prefer to optimize for simplicity and correctness over what might be a
questionable improvement in run-time.

Q: Are regressions - in the road map?
A: yes

Q: Are Jepsen style tests involving network failures in the road map?
A: yes

Thanks much,
Geoff



On Mon, Jun 8, 2015 at 4:55 PM, Geoffrey Anderson ge...@confluent.io
wrote:

 Hi Gwen,

 I don't see any problem with this as long as we're convinced there's a
 good use case, which seems to be true.

 Cheers,
 Geoff

 On Thu, Jun 4, 2015 at 5:20 PM, Gwen Shapira gshap...@cloudera.com
 wrote:

 Not completely random places :)
 People may use Cloudera / HWX distributions which include Kafka, but want
 to verify that these bits match a specific upstream release.

 I think having the tests separately will be useful for this. In this case,
 finding the tests is not a big issue - we'll add a download link :)

 On Thu, Jun 4, 2015 at 5:00 PM, Jiangjie Qin j...@linkedin.com.invalid
 wrote:

  Hey Gwen,
 
  Currently the test and code are downloaded at the same time. Supposedly
  the tests in the same repository should match the code.
  Are you saying people downloaded a release from some random place and
 want
  to verify it? If that is the case, does that mean people still need to
  find the correct place to download the right test artifact?
 
  Thanks,
 
  Jiangjie (Becket) Qin
 
 
 
  On 6/4/15, 4:29 PM, Gwen Shapira gshap...@cloudera.com wrote:
 
  Hi,
  
  Reviving the discussion a bit :)
  
  I think it will be nice if each Kafka version that we release will also
  have a separate tests artifact that users can download, untar and
 easily
  run against a Kafka cluster of the same version.
  
  The idea is that if someone downloads packages that claim to contain
  something of a specific Kafka version (i.e. Kafka 0.8.2.0 + patches),
  users
  can easily download the tests and verify that it indeed passes the
 tests
  for this version and therefore behaves the way this version is
 expected to
  behave.
  
  Does it make sense?
  
  Gwen
  
  On Thu, May 21, 2015 at 3:26 PM, Geoffrey Anderson ge...@confluent.io
 
  wrote:
  
   Hi Ashish,
  
   Looks like Ewen already hit the main points, but a few additions:
  
   1. ducktape repo is here: https://github.com/confluentinc/ducktape
   ducktape itself will be pip installable in the near future, and Kafka
   system tests will be able to depend on a particular version of
 ducktape.
  
   2.  The reporting is nothing fancy. We're definitely open to
 feedback,
  but
   it consists of:
   - top level summary of the test run (simple PASS/FAIL for each test)
   - top level info and debug logs
   - per-test info and debug logs
   - per-test service logs gathered from each service used in the
 test.
  For
   example, if your test pulls up a Kafka cluster with 5 brokers, the
 end
   result will have the Kafka logs from each of those 5 machines.
  
   Cheers,
   Geoff
  
   On Thu, May 21, 2015 at 3:15 PM, Ewen Cheslack-Postava
  e...@confluent.io
   wrote:
  
Ashish,
   
1. That was the plan. We put some effort into cleanly separating
 the
framework so it would be reusable across many projects.

[jira] [Updated] (KAFKA-2136) Client side protocol changes to return quota delays

2015-06-11 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2136:
---
Labels: quotas  (was: )

 Client side protocol changes to return quota delays
 ---

 Key: KAFKA-2136
 URL: https://issues.apache.org/jira/browse/KAFKA-2136
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
  Labels: quotas
 Attachments: KAFKA-2136.patch, KAFKA-2136_2015-05-06_18:32:48.patch, 
 KAFKA-2136_2015-05-06_18:35:54.patch, KAFKA-2136_2015-05-11_14:50:56.patch, 
 KAFKA-2136_2015-05-12_14:40:44.patch, KAFKA-2136_2015-06-09_10:07:13.patch, 
 KAFKA-2136_2015-06-09_10:10:25.patch


 As described in KIP-13, evolve the protocol to return a throttle_time_ms in 
 the Fetch and the ProduceResponse objects. Add client side metrics on the new 
 producer and consumer to expose the delay time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2171) System Test for Quotas

2015-06-11 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2171:
---
Labels: quotas  (was: )

 System Test for Quotas
 --

 Key: KAFKA-2171
 URL: https://issues.apache.org/jira/browse/KAFKA-2171
 Project: Kafka
  Issue Type: Sub-task
Reporter: Dong Lin
Assignee: Dong Lin
  Labels: quotas
 Attachments: KAFKA-2171.patch, KAFKA-2171.patch

   Original Estimate: 336h
  Remaining Estimate: 336h

 Initial setup and configuration:
 In all scenarios, we create the following entities and topic:
 - 1 kafka broker
 - 1 topic with replication factor = 1, ackNum = -1, and partitions = 6
 - 1 producer performance
 - 2 console consumers
 - 3 jmx tools, one for each of the producer and consumers
 - we consider two rates to be approximately the same if they differ by at most 
 10%.
 Scenario 1: validate the effectiveness of default producer and consumer quota
 1) Let default_producer_quota = default_consumer_quota = 2 Bytes/sec
 2) Produce 2000 messages of 2 bytes each (with clientId = 
 producer_performance)
 3) Wait until producer stops
 4) Two consumers consume from the topic (with clientId = group1 and group2 
 respectively )
 5) verify that actual rate is within 10% of expected rate (quota)
 Scenario 2: validate the effectiveness of producer and consumer quota override
 1) Let default_producer_quota = default_consumer_quota = 2 Bytes/sec
 Override quota of producer_performance and group1 to be 15000 Bytes/sec
 2) Produce 2000 messages of 2 bytes each (with clientId = 
 producer_performance)
 3) Wait until producer stops
 4) Two consumers consume from the topic (with clientId = group1 and group2 
 respectively )
 5) verify that actual rate is within 10% of expected rate (quota)
 Scenario 3: validate the effectiveness of quota sharing
 1) Let default_producer_quota = default_consumer_quota = 2 Bytes/sec
 Override quota of producer_performance and group1 to be 15000 Bytes/sec
 2) Produce 2000 messages of 2 bytes each (with clientId = 
 producer_performance)
 3) Wait until producer stops
 4) Two consumers consume from the topic (with clientId = group1 for both 
 consumers)
 5) verify that actual rate is within 10% of expected rate (quota)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2084) byte rate metrics per client ID (producer and consumer)

2015-06-11 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2084:
---
Labels: quotas  (was: )

 byte rate metrics per client ID (producer and consumer)
 ---

 Key: KAFKA-2084
 URL: https://issues.apache.org/jira/browse/KAFKA-2084
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
  Labels: quotas
 Attachments: KAFKA-2084.patch, KAFKA-2084_2015-04-09_18:10:56.patch, 
 KAFKA-2084_2015-04-10_17:24:34.patch, KAFKA-2084_2015-04-21_12:21:18.patch, 
 KAFKA-2084_2015-04-21_12:28:05.patch, KAFKA-2084_2015-05-05_15:27:35.patch, 
 KAFKA-2084_2015-05-05_17:52:02.patch, KAFKA-2084_2015-05-11_16:16:01.patch, 
 KAFKA-2084_2015-05-26_11:50:50.patch, KAFKA-2084_2015-06-02_17:02:00.patch, 
 KAFKA-2084_2015-06-02_17:09:28.patch, KAFKA-2084_2015-06-02_17:10:52.patch, 
 KAFKA-2084_2015-06-04_16:31:22.patch


 We need to be able to track the bytes-in/bytes-out rate on a per-client ID 
 basis. This is necessary for quotas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2241) AbstractFetcherThread.shutdown() should not block on ReadableByteChannel.read(buffer)

2015-06-11 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2241:
---
Labels: quotas  (was: )

 AbstractFetcherThread.shutdown() should not block on 
 ReadableByteChannel.read(buffer)
 -

 Key: KAFKA-2241
 URL: https://issues.apache.org/jira/browse/KAFKA-2241
 Project: Kafka
  Issue Type: Bug
Reporter: Dong Lin
Assignee: Dong Lin
  Labels: quotas
 Attachments: KAFKA-2241.patch, KAFKA-2241_2015-06-03_15:30:35.patch, 
 client.java, server.java


 This is likely a bug in Java. It affects Kafka, and here is the patch to 
 fix it.
 Here is the description of the bug. According to the description of 
 SocketChannel in the Java 7 documentation, if another thread interrupts the 
 current thread while the read operation is in progress, it should close the 
 channel and throw ClosedByInterruptException. However, we find that 
 interrupting the thread will not unblock the channel immediately. Instead, it 
 waits for a response or socket timeout before throwing an exception.
 This will cause a problem in the following scenario. Suppose 
 console_consumer_1 is reading from a topic, and due to a quota delay or 
 whatever reason, it blocks on channel.read(buffer). At this moment, another 
 console_consumer_2 joins and triggers a rebalance at console_consumer_1. But 
 consumer_1 will block waiting on the channel.read before it can release 
 partition ownership, causing consumer_2 to fail after a number of failed 
 attempts to obtain partition ownership.
 In other words, AbstractFetcherThread.shutdown() is not guaranteed to 
 shut down due to this bug.
 The problem is confirmed with Java 1.7 and Java 1.6. To check it yourself, 
 you can use the attached server.java and client.java -- start the server 
 before the client and see if the client unblocks after interruption.
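
 A self-contained sketch of the same check (an illustration with standard JDK 
 classes only, not the attached server.java / client.java): a thread blocks in 
 SocketChannel.read() against a server that never responds, gets interrupted, 
 and the elapsed time until the read unblocks is printed.

     import java.net.InetSocketAddress;
     import java.nio.ByteBuffer;
     import java.nio.channels.ServerSocketChannel;
     import java.nio.channels.SocketChannel;

     public class InterruptBlockingReadCheck {
         public static void main(String[] args) throws Exception {
             // Server side: accepts the connection but never writes anything.
             ServerSocketChannel server = ServerSocketChannel.open();
             server.bind(new InetSocketAddress("localhost", 0));
             int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

             Thread reader = new Thread(() -> {
                 try (SocketChannel channel = SocketChannel.open(new InetSocketAddress("localhost", port))) {
                     long start = System.currentTimeMillis();
                     try {
                         channel.read(ByteBuffer.allocate(64)); // blocks: no data ever arrives
                     } catch (Exception e) {
                         System.out.println(e.getClass().getSimpleName() + " after "
                                 + (System.currentTimeMillis() - start) + " ms");
                     }
                 } catch (Exception e) {
                     e.printStackTrace();
                 }
             });
             reader.start();
             SocketChannel peer = server.accept(); // complete the connection on the server side
             Thread.sleep(1000);                   // give the reader time to block inside read()
             reader.interrupt();                   // per the javadoc, this should close the channel promptly
             reader.join(30000);                   // if the reported behavior occurs, this can take much longer
             peer.close();
             server.close();
         }
     }

 If the interrupt is honored as documented, the reader prints 
 ClosedByInterruptException after roughly one second; the report above is that 
 in practice the read can stay blocked far longer.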



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2191) Measured rate should not be infinite

2015-06-11 Thread Dong Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin updated KAFKA-2191:

Labels: quotas  (was: )

 Measured rate should not be infinite
 

 Key: KAFKA-2191
 URL: https://issues.apache.org/jira/browse/KAFKA-2191
 Project: Kafka
  Issue Type: Bug
Reporter: Dong Lin
Assignee: Dong Lin
  Labels: quotas
 Attachments: KAFKA-2191.patch, KAFKA-2191.patch, 
 KAFKA-2191_2015-05-13_15:32:15.patch, KAFKA-2191_2015-05-14_00:34:30.patch, 
 KAFKA-2191_2015-05-27_17:28:41.patch


 When Rate.measure() is called, it calculates the elapsed time as now - 
 stat.oldest(now).lastWindowMs. But stat.oldest(now).lastWindowMs may equal now due to 
 the way SampledStat is implemented. As a result, Rate.measure() may return 
 Infinity. 
 This bug needs to be fixed in order for the quota implementation to work properly.
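
 For illustration, the kind of guard being discussed here (a simplified, 
 hypothetical sketch -- not the actual org.apache.kafka.common.metrics code): 
 if the oldest sample window starts at the current time, the elapsed time is 
 zero and value / elapsed becomes infinite, so the measurement can enforce a 
 minimum elapsed time instead.

     public class SafeRateSketch {
         private double total;              // sum of the recorded values
         private final long oldestWindowMs; // start of the oldest retained sample window
         private final long windowSizeMs;

         public SafeRateSketch(long windowSizeMs, long nowMs) {
             this.windowSizeMs = windowSizeMs;
             this.oldestWindowMs = nowMs;
         }

         public void record(double value) {
             total += value;
         }

         public double measure(long nowMs) {
             long elapsedMs = nowMs - oldestWindowMs;
             // Guard against elapsedMs == 0 (and very small windows in general)
             // by never dividing by less than one full window length.
             long safeElapsedMs = Math.max(elapsedMs, windowSizeMs);
             return total / (safeElapsedMs / 1000.0);
         }
     }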



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2205) Generalize TopicConfigManager to handle multiple entity configs

2015-06-11 Thread Aditya Auradkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Auradkar updated KAFKA-2205:
---
Labels: quotas  (was: )

 Generalize TopicConfigManager to handle multiple entity configs
 ---

 Key: KAFKA-2205
 URL: https://issues.apache.org/jira/browse/KAFKA-2205
 Project: Kafka
  Issue Type: Sub-task
Reporter: Aditya Auradkar
Assignee: Aditya Auradkar
  Labels: quotas
 Attachments: KAFKA-2205.patch


 Acceptance Criteria:
 - TopicConfigManager should be generalized to handle Topic and Client configs 
 (and any type of config in the future), as described in KIP-21
 - Add a ConfigCommand tool to change topic and client configuration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


RE: [DISCUSS] KIP-4 - Command line and centralized administrative operations (Thread 2)

2015-06-11 Thread Aditya Auradkar
I've made two changes to the document:
- Removed the TMR evolution piece since we agreed to retain the existing TMR.
- Added two new APIs to the admin client spec (Alter and Describe config).

Please review.

Aditya


From: Ashish Singh [asi...@cloudera.com]
Sent: Friday, May 29, 2015 8:36 AM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-4 - Command line and centralized administrative 
operations (Thread 2)

+1 on discussing this on next KIP hangout. I will update KIP-24 before that.

On Fri, May 29, 2015 at 3:40 AM, Andrii Biletskyi 
andrii.bilets...@stealth.ly wrote:

 Guys,

 I won't be able to attend next meeting. But in the latest patch for KIP-4
 Phase 1
 I didn't even evolve TopicMetadataRequest to v1 since we won't be able
 to change config with AlterTopicRequest, hence with this patch TMR will
 still
 return isr. Taking this into account I think yes - it would be good to fix
 ISR issue,
 although I didn't consider it to be a critical one (isr was part of TMR
 from the very
 beginning and almost no code relies on this piece of request).

 Thanks,
 Andrii Biletskyi

 On Fri, May 29, 2015 at 8:50 AM, Aditya Auradkar 
 aaurad...@linkedin.com.invalid wrote:

  Thanks. Perhaps we should leave TMR unchanged for now. Should we discuss
  this during the next hangout?
 
  Aditya
 
  
  From: Jun Rao [j...@confluent.io]
  Sent: Thursday, May 28, 2015 5:32 PM
  To: dev@kafka.apache.org
  Subject: Re: [DISCUSS] KIP-4 - Command line and centralized
 administrative
  operations (Thread 2)
 
  There is a reasonable use case of ISR in KAFKA-2225. Basically, for
  economical reasons, we may want to let a consumer fetch from a replica in
  ISR that's in the same zone. In order to support that, it will be
  convenient to have TMR return the correct ISR for the consumer to choose.
 
  So, perhaps it's worth fixing the ISR inconsistency issue in KAFKA-1367
  (there is some new discussion there on what it takes to fix this). If we
 do
  that, we can leave TMR unchanged.
 
  Thanks,
 
  Jun
 
  On Tue, May 26, 2015 at 1:13 PM, Aditya Auradkar 
  aaurad...@linkedin.com.invalid wrote:
 
   Andryii,
  
   I made a few edits to this document as discussed in the KIP-21 thread.
  
  
 
 https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations
  
   With these changes, the only difference between
 TopicMetadataResponse_V1
   and V0 is the removal of the ISR field. I've altered the KIP with the
   assumption that this is a good enough reason by itself to evolve the
   request/response protocol. Any concerns there?
  
   Thanks,
   Aditya
  
   
   From: Mayuresh Gharat [gharatmayures...@gmail.com]
   Sent: Thursday, May 21, 2015 8:29 PM
   To: dev@kafka.apache.org
   Subject: Re: [DISCUSS] KIP-4 - Command line and centralized
  administrative
   operations (Thread 2)
  
   Hi Jun,
  
   Thanks a lot. I get it now.
    Point 4) will actually enable clients who don't want to create a topic
    with default partitions, if it does not exist, to manually create
    the topic with their own configs (#partitions).
  
   Thanks,
  
   Mayuresh
  
   On Thu, May 21, 2015 at 6:16 PM, Jun Rao j...@confluent.io wrote:
  
Mayuresh,
   
The current plan is the following.
   
1. Add TMR v1, which still triggers auto topic creation.
2. Change the consumer client to TMR v1. Change the producer client
 to
   use
TMR v1 and on UnknownTopicException, issue TopicCreateRequest to
   explicitly
create the topic with the default server side partitions and
 replicas.
3. At some later time after the new clients are released and
 deployed,
disable auto topic creation in TMR v1. This will make sure consumers
   never
create new topics.
4. If needed, we can add a new config in the producer to control
  whether
TopicCreateRequest should be issued or not on UnknownTopicException.
 If
this is disabled and the topic doesn't exist, send will fail and the
  user
is expected to create the topic manually.
   
Thanks,
   
Jun
   
   
On Thu, May 21, 2015 at 5:27 PM, Mayuresh Gharat 
gharatmayures...@gmail.com
 wrote:
   
 Hi,
 I had a question about TopicMetadata Request.
 Currently the way it works is :

 1) Suppose a topic T1 does not exist.
 2) Client wants to produce data to T1 using producer P1.
 3) Since T1 does not exist, P1 issues a TopicMetadata request to
  kafka.
  This in turn creates the default number of partitions. The number of
  partitions is a cluster-wide config.
 4) Same goes for a consumer. If the topic does not exist and new
  topic
will
 be created when the consumer issues TopicMetadata request.

 Here are 2 use cases where it might not be suited :

 The auto creation flag for topics  is turned  ON.

 a) Some clients might 

Re: Review Request 35201: Fix KAFKA-2253

2015-06-11 Thread Jun Rao


 On June 11, 2015, 1:07 a.m., Jun Rao wrote:
  core/src/main/scala/kafka/server/DelayedOperation.scala, lines 264-266
  https://reviews.apache.org/r/35201/diff/2/?file=980805#file980805line264
 
  Not sure if we need this check. Since all writes to watchersForKey are 
  sync-ed, it's ok to remove a watcher as long as its count is 0.
  
  I am a bit concerned about the overhead on the removeWatchersLock, which 
  is global. For example, if you have 1000 requests/sec and each request has 
  1000 partitions, that lock is going to be accessed 1 million times per second. 
  Could you do some tests/profiling before and after we introduced the global 
  lock to see if this could be an issue?
 
 Jiangjie Qin wrote:
 The lock is only grabbed when a watcher has no operations in it. But I 
 agree that could be an issue.
 I'm wondering whether there is a reason that we have to remove a watcher 
 immediately when its count becomes zero. Can we just let the reaper remove 
 empty watchers?
 
 Onur Karaman wrote:
 +1 on letting the reaper do it.

Both tryCompleteElseWatch() and checkAndComplete() need to grab the global read 
lock. This is my main concern since this means every produce/fetch request pays 
a synchronization overhead on a single semaphore for every partition in the 
request. Some performance testing on the impact of this will be useful.


- Jun


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35201/#review87495
---


On June 8, 2015, 6:47 p.m., Guozhang Wang wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/35201/
 ---
 
 (Updated June 8, 2015, 6:47 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2253
 https://issues.apache.org/jira/browse/KAFKA-2253
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Incorporated Jiangjie and Onur's comments
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/server/DelayedOperation.scala 
 123078d97a7bfe2121655c00f3b2c6af21c53015 
 
 Diff: https://reviews.apache.org/r/35201/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Guozhang Wang
 




Re: Review Request 35231: Fix KAFKA-1740

2015-06-11 Thread Onur Karaman

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35231/#review87583
---



core/src/main/scala/kafka/coordinator/ConsumerCoordinator.scala
https://reviews.apache.org/r/35231/#comment139973

This can happen in two ways:
1. An automatic group management (subscribes to topics) consumer that sends 
an OffsetCommitRequest whose groupId hashes to the coordinator but hasn't first 
done a join group.
2. A manual group management (subscribes to partitions) consumer that sends 
an OffsetCommitRequest whose groupId hashes to the coordinator.

Should these be distinguishable? We can do this with an added flag in 
OffsetCommitRequest.



core/src/main/scala/kafka/coordinator/ConsumerCoordinator.scala
https://reviews.apache.org/r/35231/#comment139977

Same as the earlier comment but for OffsetFetchRequests.

This can happen in two ways:
1. An automatic group management (subscribes to topics) consumer that sends 
an OffsetFetchRequest whose groupId hashes to the coordinator but hasn't first 
done a join group.
2. A manual group management (subscribes to partitions) consumer that sends 
an OffsetFetchRequest whose groupId hashes to the coordinator.

Should these be distinguishable? We can do this with an added flag in 
OffsetFetchRequest.


- Onur Karaman


On June 8, 2015, 11:12 p.m., Guozhang Wang wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/35231/
 ---
 
 (Updated June 8, 2015, 11:12 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-1740
 https://issues.apache.org/jira/browse/KAFKA-1740
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Move offset manager to coordinator, add validation logic for offset commit 
 and fetch
 
 
 Diffs
 -
 
   
 clients/src/main/java/org/apache/kafka/clients/consumer/internals/Coordinator.java
  c1496a0851526f3c7d3905ce4bdff2129c83a6c1 
   clients/src/main/java/org/apache/kafka/common/protocol/Errors.java 
 5b898c8f8ad5d0469f469b600c4b2eb13d1fc662 
   
 clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitResponse.java
  70844d65369f6ff300cbeb513dbb6650050c7eec 
   
 clients/src/main/java/org/apache/kafka/common/requests/OffsetFetchRequest.java
  deec1fa480d5a5c5884a1c007b076aa64e902472 
   
 clients/src/main/java/org/apache/kafka/common/requests/OffsetFetchResponse.java
  512a0ef7e619d54e74122c38119209f5cf9590e3 
   
 clients/src/test/java/org/apache/kafka/clients/consumer/internals/CoordinatorTest.java
  b06c4a73e2b4e9472cd772c8bc32bf4a29f431bb 
   core/src/main/scala/kafka/admin/TopicCommand.scala 
 dacbdd089f96c67b7bbc1c0cd36490d4fe094c1b 
   core/src/main/scala/kafka/cluster/Partition.scala 
 730a232482fdf77be5704cdf5941cfab3828db88 
   core/src/main/scala/kafka/common/OffsetMetadataAndError.scala 
 6b4242c7cd1df9b3465db0fec35a25102c76cd60 
   core/src/main/scala/kafka/common/Topic.scala 
 ad759786d1c22f67c47808c0b8f227eb2b1a9aa8 
   core/src/main/scala/kafka/coordinator/ConsumerCoordinator.scala 
 51e89c87ee2c20fc7f976536f01fa1055fb8e670 
   core/src/main/scala/kafka/coordinator/CoordinatorMetadata.scala 
 c39e6de34ee531c6dfa9107b830752bd7f8fbe59 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 d63bc18d795a6f0e6994538ca55a9a46f7fb8ffd 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 b320ce9f6a12c0ee392e91beb82e8804d167f9f4 
   core/src/main/scala/kafka/server/OffsetManager.scala 
 5cca85cf727975f6d3acb2223fd186753ad761dc 
   core/src/main/scala/kafka/server/ReplicaManager.scala 
 59c9bc3ac3a8afc07a6f8c88c5871304db588d17 
   core/src/test/scala/integration/kafka/api/ConsumerTest.scala 
 17b17b9b1520c7cc2e2ba96cdb1f9ff06e47bcad 
   core/src/test/scala/integration/kafka/api/IntegrationTestHarness.scala 
 07b1ff47bfc3cd3f948c9533c8dc977fa36d996f 
   core/src/test/scala/unit/kafka/admin/TopicCommandTest.scala 
 c7136f20972614ac47aa57ab13e3c94ef775a4b7 
   core/src/test/scala/unit/kafka/consumer/TopicFilterTest.scala 
 4f124af5c3e946045a78ad1519c37372a72c8985 
   core/src/test/scala/unit/kafka/coordinator/CoordinatorMetadataTest.scala 
 08854c5e6ec249368206298b2ac2623df18f266a 
   core/src/test/scala/unit/kafka/server/OffsetCommitTest.scala 
 528525b719ec916e16f8b3ae3715bec4b5dcc47d 
 
 Diff: https://reviews.apache.org/r/35231/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Guozhang Wang
 




Re: [DISCUSS] KIP-25 System test improvements

2015-06-11 Thread Geoffrey Anderson
Hi Aditya,

(1) There are currently a few different ways to target a specific test or
subset of tests.

If for example tests were organized like the current system tests, where
suites are grouped by directory, you could run

cd system_test_dir
ducktape replication_testsuite/

You can also target tests in a particular file (ducktape path_to_file),
all tests in a test class (ducktape path_to_file::test_class), or a
particular test method in a class (ducktape
path_to_file::test_class.test_method)

(2) ducktape runs on a vagrant cluster by parsing the information returned
by the vagrant ssh-config command into JSON configuration used by the
JsonCluster class, so in that sense we are already using the JSON flavor.

I see a few potential challenges, but nothing too bad.
- There may be some work involved in getting ssh configs right
- A couple assumptions about where files are deployed on remote machines
are baked into some of the Service classes, so some minor refactoring may
be needed to make this a little more general. This would be a good thing.

In any case, we're happy to help out/take pull requests on ducktape etc.

Best,
Geoff








On Thu, Jun 11, 2015 at 10:13 AM, Aditya Auradkar 
aaurad...@linkedin.com.invalid wrote:

 Hi Geoffrey,

 Thanks for the writeup. Couple of questions:
 - Is it possible to configure suites using ducktape? For example: assume
 all the tests in system_tests have been migrated to ducktape. Can I run a
 subset of all tests grouped by functional areas i.e. replication, broker
 failure etc.

 - Ducktape allows us to run tests on a vagrant cluster or on a static
 cluster configured via JSON. Once ported to ducktape, can we very easily
 run the existing system tests in both flavors?

 Thanks,
 Aditya

 
 From: Geoffrey Anderson [ge...@confluent.io]
 Sent: Monday, June 08, 2015 10:56 PM
 To: dev@kafka.apache.org
 Subject: Re: [DISCUSS] KIP-25 System test improvements

 Hi KIP-25 thread,

 I consolidated some of the questions from this thread and elsewhere.

 Q: Can we see a map of what system-test currently tests, which ones we want
 to replace and JIRAs for replacing?
 A: Initial draft here:

 https://cwiki.apache.org/confluence/display/KAFKA/Roadmap+-+port+existing+system+tests

 Q: Will ducktape be maintained separately as a github repo?
 A: Yes https://github.com/confluentinc/ducktape

 Q: How easy is viewing the test results and logs, how will test output be
 structured?
 A: Hierarchical structure as outlined here:
 https://github.com/confluentinc/ducktape/wiki/Design-overview#output

 Q: Does it support code coverage? If not, how easy/ difficult would it be
 to support?
 A: It does not, and we have no immediate plans to support this. Difficulty
 unclear.

 Q: It would be nice if each Kafka version that we release will also
 have a separate tests artifact that users can download, untar and easily
 run against a Kafka cluster of the same version.
 A: This seems reasonable and not too much extra work. Definitely open to
 discussion on this.

 Q: Why not share running services across multiple tests?
 A: Prefer to optimize for simplicity and correctness over what might be a
 questionable improvement in run-time.

 Q: Are regressions - in the road map?
 A: yes

 Q: Are Jepsen style tests involving network failures in the road map?
 A: yes

 Thanks much,
 Geoff



 On Mon, Jun 8, 2015 at 4:55 PM, Geoffrey Anderson ge...@confluent.io
 wrote:

  Hi Gwen,
 
  I don't see any problem with this as long as we're convinced there's a
  good use case, which seems to be true.
 
  Cheers,
  Geoff
 
  On Thu, Jun 4, 2015 at 5:20 PM, Gwen Shapira gshap...@cloudera.com
  wrote:
 
  Not completely random places :)
  People may use Cloudera / HWX distributions which include Kafka, but
 want
  to verify that these bits match a specific upstream release.
 
  I think having the tests separately will be useful for this. In this
 case,
  finding the tests is not a big issue - we'll add a download link :)
 
  On Thu, Jun 4, 2015 at 5:00 PM, Jiangjie Qin j...@linkedin.com.invalid
 
  wrote:
 
   Hey Gwen,
  
   Currently the test and code are downloaded at the same time.
 Supposedly
   the tests in the same repository should match the code.
   Are you saying people downloaded a release from some random place and
  want
   to verify it? If that is the case, does that mean people still need to
   find the correct place to download the right test artifact?
  
   Thanks,
  
   Jiangjie (Becket) Qin
  
  
  
   On 6/4/15, 4:29 PM, Gwen Shapira gshap...@cloudera.com wrote:
  
   Hi,
   
   Reviving the discussion a bit :)
   
   I think it will be nice if each Kafka version that we release will
 also
   have a separate tests artifact that users can download, untar and
  easily
   run against a Kafka cluster of the same version.
   
   The idea is that if someone downloads packages that claim to contain
   something of a specific Kafka version (i.e. Kafka 0.8.2.0 + 

RE: [DISCUSS] KIP-4 - Command line and centralized administrative operations (Thread 2)

2015-06-11 Thread Aditya Auradkar
Andrii,

Do we need a new voting thread for this KIP? The last round of votes had 3 
binding +1's but there's been a fair amount of discussion since then.

Aditya


From: Aditya Auradkar
Sent: Thursday, June 11, 2015 10:32 AM
To: dev@kafka.apache.org
Subject: RE: [DISCUSS] KIP-4 - Command line and centralized administrative 
operations (Thread 2)

I've made two changes to the document:
- Removed the TMR evolution piece since we agreed to retain the existing TMR.
- Added two new APIs to the admin client spec (Alter and Describe config).

Please review.

Aditya


From: Ashish Singh [asi...@cloudera.com]
Sent: Friday, May 29, 2015 8:36 AM
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-4 - Command line and centralized administrative 
operations (Thread 2)

+1 on discussing this on next KIP hangout. I will update KIP-24 before that.

On Fri, May 29, 2015 at 3:40 AM, Andrii Biletskyi 
andrii.bilets...@stealth.ly wrote:

 Guys,

 I won't be able to attend next meeting. But in the latest patch for KIP-4
 Phase 1
 I didn't even evolve TopicMetadataRequest to v1 since we won't be able
 to change config with AlterTopicRequest, hence with this patch TMR will
 still
 return isr. Taking this into account I think yes - it would be good to fix
 ISR issue,
 although I didn't consider it to be a critical one (isr was part of TMR
 from the very
 beginning and almost no code relies on this piece of request).

 Thanks,
 Andrii Biletskyi

 On Fri, May 29, 2015 at 8:50 AM, Aditya Auradkar 
 aaurad...@linkedin.com.invalid wrote:

  Thanks. Perhaps we should leave TMR unchanged for now. Should we discuss
  this during the next hangout?
 
  Aditya
 
  
  From: Jun Rao [j...@confluent.io]
  Sent: Thursday, May 28, 2015 5:32 PM
  To: dev@kafka.apache.org
  Subject: Re: [DISCUSS] KIP-4 - Command line and centralized
 administrative
  operations (Thread 2)
 
  There is a reasonable use case of ISR in KAFKA-2225. Basically, for
  economic reasons, we may want to let a consumer fetch from a replica in the
  ISR that's in the same zone. In order to support that, it will be
  convenient to have TMR return the correct ISR for the consumer to choose.
 
  So, perhaps it's worth fixing the ISR inconsistency issue in KAFKA-1367
  (there is some new discussion there on what it takes to fix this). If we
 do
  that, we can leave TMR unchanged.
 
  Thanks,
 
  Jun
 
  On Tue, May 26, 2015 at 1:13 PM, Aditya Auradkar 
  aaurad...@linkedin.com.invalid wrote:
 
   Andryii,
  
   I made a few edits to this document as discussed in the KIP-21 thread.
  
  
 
 https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations
  
    With these changes, the only difference between
 TopicMetadataResponse_V1
   and V0 is the removal of the ISR field. I've altered the KIP with the
   assumption that this is a good enough reason by itself to evolve the
   request/response protocol. Any concerns there?
  
   Thanks,
   Aditya
  
   
   From: Mayuresh Gharat [gharatmayures...@gmail.com]
   Sent: Thursday, May 21, 2015 8:29 PM
   To: dev@kafka.apache.org
   Subject: Re: [DISCUSS] KIP-4 - Command line and centralized
  administrative
   operations (Thread 2)
  
   Hi Jun,
  
   Thanks a lot. I get it now.
    Point 4) will actually help clients who don't want a topic to be created
    with default partitions when it does not exist; they can then manually
    create the topic with their own configs (#partitions).
  
   Thanks,
  
   Mayuresh
  
   On Thu, May 21, 2015 at 6:16 PM, Jun Rao j...@confluent.io wrote:
  
Mayuresh,
   
The current plan is the following.
   
1. Add TMR v1, which still triggers auto topic creation.
2. Change the consumer client to TMR v1. Change the producer client
 to
   use
TMR v1 and on UnknownTopicException, issue TopicCreateRequest to
   explicitly
create the topic with the default server side partitions and
 replicas.
3. At some later time after the new clients are released and
 deployed,
disable auto topic creation in TMR v1. This will make sure consumers
   never
create new topics.
4. If needed, we can add a new config in the producer to control
  whether
TopicCreateRequest should be issued or not on UnknownTopicException.
 If
this is disabled and the topic doesn't exist, send will fail and the
  user
is expected to create the topic manually.
   
Thanks,
   
Jun
   
   
On Thu, May 21, 2015 at 5:27 PM, Mayuresh Gharat 
gharatmayures...@gmail.com
 wrote:
   
 Hi,
 I had a question about TopicMetadata Request.
 Currently the way it works is:

 1) Suppose a topic T1 does not exist.
 2) Client wants to produce data to T1 using producer P1.
 3) Since T1 does not exist, P1 issues a TopicMetadata request to
  kafka.
 This in turn creates the 
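
To make steps 2 and 4 of Jun's plan above concrete, here is a minimal sketch of
the proposed producer-side flow. Everything in it is an illustration of the
proposal rather than existing client API: createTopicWithDefaults() stands in
for the future TopicCreateRequest, the createMissingTopics flag stands in for
the config in step 4, and treating a missing topic as
UnknownTopicOrPartitionException is my assumption about how the error would
surface.

import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;

public class ExplicitTopicCreateSketch {

  // Step 4: client-side switch deciding whether a missing topic is created
  // explicitly or surfaces as a failure for the user to handle.
  private final boolean createMissingTopics;

  public ExplicitTopicCreateSketch(boolean createMissingTopics) {
    this.createMissingTopics = createMissingTopics;
  }

  public void send(KafkaProducer<String, String> producer,
                   ProducerRecord<String, String> record) throws Exception {
    try {
      producer.send(record).get();
    } catch (ExecutionException e) {
      if (!(e.getCause() instanceof UnknownTopicOrPartitionException)) {
        throw e;
      }
      if (!createMissingTopics) {
        // Step 4 disabled: fail and let the user create the topic manually.
        throw e;
      }
      // Step 2: explicitly create the topic with the broker-side default
      // partitions/replicas, then retry the send once.
      createTopicWithDefaults(record.topic());
      producer.send(record).get();
    }
  }

  // Hypothetical stand-in for issuing the proposed TopicCreateRequest; no such
  // client call exists at the time of this thread.
  private void createTopicWithDefaults(String topic) {
    throw new UnsupportedOperationException("wire up to TopicCreateRequest (KIP-4 phase 1)");
  }
}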

[jira] [Created] (KAFKA-2265) creating a topic with large number of partitions takes a long time

2015-06-11 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-2265:
--

 Summary: creating a topic with large number of partitions takes a 
long time
 Key: KAFKA-2265
 URL: https://issues.apache.org/jira/browse/KAFKA-2265
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 0.8.2.1
Reporter: Jun Rao
 Fix For: 0.8.3


Currently, creating a topic with 3K partitions can take 15 mins. We should be 
able to do that much faster. There are perhaps some redundant accesses to ZK 
during topic creation.
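
For anyone who wants to reproduce the numbers, a rough end-to-end timing
harness along these lines should do it. The
AdminUtils.createTopic(ZkClient, String, int, int, Properties) signature, the
ZKStringSerializer idiom, and the partition state path reflect my reading of
the 0.8.x code base, so treat them as assumptions and adjust to the branch
under test.

import java.util.Properties;
import kafka.admin.AdminUtils;
import kafka.utils.ZKStringSerializer$;
import org.I0Itec.zkclient.ZkClient;

public class TopicCreateTiming {
  public static void main(String[] args) throws InterruptedException {
    String zkConnect = args.length > 0 ? args[0] : "localhost:2181";  // placeholder
    String topic = "many-partitions-test";
    int partitions = 3000;

    ZkClient zk = new ZkClient(zkConnect, 30000, 30000, ZKStringSerializer$.MODULE$);
    long start = System.currentTimeMillis();

    // Writes the replica assignment into ZK; the controller then creates the
    // partitions asynchronously, which is the part that takes the time.
    AdminUtils.createTopic(zk, topic, partitions, 1, new Properties());

    // Crude readiness check: wait until every partition has a state znode,
    // i.e. the controller has elected a leader for it.
    int ready = 0;
    while (ready < partitions) {
      ready = 0;
      for (int p = 0; p < partitions; p++) {
        if (zk.exists("/brokers/topics/" + topic + "/partitions/" + p + "/state")) {
          ready++;
        }
      }
      Thread.sleep(1000);
    }
    System.out.println(partitions + " partitions ready after "
        + (System.currentTimeMillis() - start) + " ms");
    zk.close();
  }
}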



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1367) Broker topic metadata not kept in sync with ZooKeeper

2015-06-11 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14582433#comment-14582433
 ] 

Joel Koshy commented on KAFKA-1367:
---

[~singhashish] - yes that is a good summary. BrokerMetadataRequest - probably 
yes, but that is now orthogonal.

 Broker topic metadata not kept in sync with ZooKeeper
 -

 Key: KAFKA-1367
 URL: https://issues.apache.org/jira/browse/KAFKA-1367
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.0, 0.8.1
Reporter: Ryan Berdeen
Assignee: Ashish K Singh
  Labels: newbie++
 Fix For: 0.8.3

 Attachments: KAFKA-1367.txt


 When a broker is restarted, the topic metadata responses from the brokers 
 will be incorrect (different from ZooKeeper) until a preferred replica leader 
 election.
 In the metadata, it looks like leaders are correctly removed from the ISR 
 when a broker disappears, but followers are not. Then, when a broker 
 reappears, the ISR is never updated.
 I used a variation of the Vagrant setup created by Joe Stein to reproduce 
 this with the latest from the 0.8.1 branch: 
 https://github.com/also/kafka/commit/dba36a503a5e22ea039df0f9852560b4fb1e067c
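
One quick way to observe the mismatch the ticket describes is to compare a
broker's TopicMetadataResponse with the partition state znodes in ZooKeeper. A
sketch of such a checker is below; the 0.8.x javaapi and ZkClient calls and the
placeholder host names are my assumptions, not something taken from the ticket
or the attached patch.

import java.util.Collections;
import kafka.cluster.Broker;
import kafka.javaapi.PartitionMetadata;
import kafka.javaapi.TopicMetadata;
import kafka.javaapi.TopicMetadataRequest;
import kafka.javaapi.TopicMetadataResponse;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.utils.ZKStringSerializer$;
import org.I0Itec.zkclient.ZkClient;

public class IsrDivergenceCheck {
  public static void main(String[] args) {
    String topic = args[0];

    // 1. Ask one broker for its cached view of the topic metadata.
    SimpleConsumer consumer =
        new SimpleConsumer("broker1.example.com", 9092, 100000, 64 * 1024, "isr-check");
    TopicMetadataResponse resp =
        consumer.send(new TopicMetadataRequest(Collections.singletonList(topic)));

    // 2. Read the per-partition state the controller wrote into ZooKeeper.
    ZkClient zk = new ZkClient("zk1.example.com:2181", 30000, 30000, ZKStringSerializer$.MODULE$);

    for (TopicMetadata tm : resp.topicsMetadata()) {
      for (PartitionMetadata pm : tm.partitionsMetadata()) {
        StringBuilder brokerIsr = new StringBuilder();
        for (Broker b : pm.isr()) {
          brokerIsr.append(b.id()).append(' ');
        }
        String zkState =
            zk.readData("/brokers/topics/" + topic + "/partitions/" + pm.partitionId() + "/state");
        System.out.println("partition " + pm.partitionId()
            + " broker ISR: [" + brokerIsr.toString().trim() + "]"
            + " zk state: " + zkState);
      }
    }
    consumer.close();
    zk.close();
  }
}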



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)