[jira] [Commented] (KAFKA-251) The ConsumerStats MBean's PartOwnerStats attribute is a string

2015-07-21 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14635215#comment-14635215
 ] 

Ismael Juma commented on KAFKA-251:
---

[~eribeiro], I don't know if it's still relevant as it's a pretty old ticket. 
[~junrao] should know. By the way, you should be able to assign tickets to 
yourself.

 The ConsumerStats MBean's PartOwnerStats  attribute is a string
 ---

 Key: KAFKA-251
 URL: https://issues.apache.org/jira/browse/KAFKA-251
 Project: Kafka
  Issue Type: Bug
Reporter: Pierre-Yves Ritschard
 Attachments: 0001-Incorporate-Jun-Rao-s-comments-on-KAFKA-251.patch, 
 0001-Provide-a-patch-for-KAFKA-251.patch


 The fact that the PartOwnerStats is a string prevents monitoring systems from 
 graphing consumer lag. There should be one mbean per [ topic, partition, 
 groupid ] group.
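 For illustration, a minimal sketch of the kind of numeric, per-[ topic, partition, 
 groupid ] MBean the description asks for. The class names, attribute name and 
 ObjectName layout below are assumptions made for this sketch, not Kafka's actual 
 metric names.

{code}
// --- ConsumerLagGaugeMBean.java: management interface with a single numeric attribute ---
public interface ConsumerLagGaugeMBean {
    long getLag();
}

// --- ConsumerLagGauge.java: one instance registered per [ topic, partition, groupid ] ---
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ConsumerLagGauge implements ConsumerLagGaugeMBean {
    private volatile long lag;

    public long getLag() { return lag; }

    public void update(long newLag) { this.lag = newLag; }

    public static ConsumerLagGauge register(String group, String topic, int partition) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Illustrative ObjectName layout; Kafka's real metric names differ.
        ObjectName name = new ObjectName("kafka.consumer:type=ConsumerLag,groupId=" + group
                + ",topic=" + topic + ",partition=" + partition);
        ConsumerLagGauge gauge = new ConsumerLagGauge();
        server.registerMBean(gauge, name);
        return gauge;
    }
}
{code}

A monitoring system could then graph the numeric Lag attribute directly instead of parsing a string.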



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (KAFKA-2260) Allow specifying expected offset on produce

2015-07-21 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-2260:
---
Comment: was deleted

(was: [~dasch] The KIP is: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-27+-+Conditional+Publish)

 Allow specifying expected offset on produce
 ---

 Key: KAFKA-2260
 URL: https://issues.apache.org/jira/browse/KAFKA-2260
 Project: Kafka
  Issue Type: Improvement
Reporter: Ben Kirwin
Assignee: Ewen Cheslack-Postava
Priority: Minor
 Attachments: expected-offsets.patch


 I'd like to propose a change that adds a simple CAS-like mechanism to the 
 Kafka producer. This update has a small footprint, but enables a bunch of 
 interesting uses in stream processing or as a commit log for process state.
 h4. Proposed Change
 In short:
 - Allow the user to attach a specific offset to each message produced.
 - The server assigns offsets to messages in the usual way. However, if the 
 expected offset doesn't match the actual offset, the server should fail the 
 produce request instead of completing the write.
 This is a form of optimistic concurrency control, like the ubiquitous 
 check-and-set -- but instead of checking the current value of some state, it 
 checks the current offset of the log.
 h4. Motivation
 Much like check-and-set, this feature is only useful when there's very low 
 contention. Happily, when Kafka is used as a commit log or as a 
 stream-processing transport, it's common to have just one producer (or a 
 small number) for a given partition -- and in many of these cases, predicting 
 offsets turns out to be quite useful.
 - We get the same benefits as the 'idempotent producer' proposal: a producer 
 can retry a write indefinitely and be sure that at most one of those attempts 
 will succeed; and if two producers accidentally write to the end of the 
 partition at once, we can be certain that at least one of them will fail.
 - It's possible to 'bulk load' Kafka this way -- you can write a list of n 
 messages consecutively to a partition, even if the list is much larger than 
 the buffer size or the producer has to be restarted.
 - If a process is using Kafka as a commit log -- reading from a partition to 
 bootstrap, then writing any updates to that same partition -- it can be sure 
 that it's seen all of the messages in that partition at the moment it does 
 its first (successful) write.
 There's a bunch of other similar use-cases here, but they all have roughly 
 the same flavour.
 h4. Implementation
 The major advantage of this proposal over other suggested transaction / 
 idempotency mechanisms is its minimality: it gives the 'obvious' meaning to a 
 currently-unused field, adds no new APIs, and requires very little new code 
 or additional work from the server.
 - Produced messages already carry an offset field, which is currently ignored 
 by the server. This field could be used for the 'expected offset', with a 
 sigil value for the current behaviour. (-1 is a natural choice, since it's 
 already used to mean 'next available offset'.)
 - We'd need a new error and error code for a 'CAS failure'.
 - The server assigns offsets to produced messages in 
 {{ByteBufferMessageSet.validateMessagesAndAssignOffsets}}. After this 
 change, this method would assign offsets in the same way -- but if they 
 don't match the offset in the message, we'd return an error instead of 
 completing the write.
 - To avoid breaking existing clients, this behaviour would need to live 
 behind some config flag. (Possibly global, but probably more useful 
 per-topic?)
 I understand all this is unsolicited and possibly strange: happy to answer 
 questions, and if this seems interesting, I'd be glad to flesh this out into 
 a full KIP or patch. (And apologies if this is the wrong venue for this sort 
 of thing!)
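 To make the proposal concrete, here is a toy sketch of the check described above. 
 It is illustrative Java only -- the class, method and error names are invented for 
 the sketch and are not Kafka code; the real change would live in the broker's 
 message-set validation.

{code}
// Toy model of the proposed expected-offset check; all names are hypothetical.
public class ConditionalLog {
    public static final long ANY_OFFSET = -1L; // sigil: keep today's behaviour, no check

    private long nextOffset = 0L;

    // Returns the offset assigned to the message, or fails the append on a mismatch.
    public synchronized long append(byte[] message, long expectedOffset) {
        if (expectedOffset != ANY_OFFSET && expectedOffset != nextOffset) {
            throw new IllegalStateException("Offset mismatch: expected " + expectedOffset
                    + " but the log end offset is " + nextOffset);
        }
        long assigned = nextOffset;
        // ... append 'message' at offset 'assigned' ...
        nextOffset = assigned + 1;
        return assigned;
    }
}
{code}

 A producer that sees the mismatch error knows another writer appended first, and 
 can re-read the log before deciding whether to retry.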



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Contributor request

2015-07-21 Thread Edward Ribeiro
Hello,

I'm interested in being added to the contributor list for Apache Kafka so
that I may assign myself to newbie JIRA tickets, please.

My JIRA handle is eribeiro.

Cheers,
Eddie


Re: Review Request 35734: Patch for KAFKA-2293

2015-07-21 Thread Grant Henke

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/35734/#review92416
---

Ship it!


Ship It!

- Grant Henke


On June 22, 2015, 5:35 p.m., Aditya Auradkar wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/35734/
 ---
 
 (Updated June 22, 2015, 5:35 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2293
 https://issues.apache.org/jira/browse/KAFKA-2293
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Fix for 2293
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/cluster/Partition.scala 
 6cb647711191aee8d36e9ff15bdc2af4f1c95457 
 
 Diff: https://reviews.apache.org/r/35734/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Aditya Auradkar
 




Re: Review Request 36341: Patch for KAFKA-2311

2015-07-21 Thread Grant Henke

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36341/#review92417
---

Ship it!


Ship It!

- Grant Henke


On July 9, 2015, 1:04 a.m., Tim Brooks wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36341/
 ---
 
 (Updated July 9, 2015, 1:04 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2311
 https://issues.apache.org/jira/browse/KAFKA-2311
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Remove unnecessary close check
 
 
 Diffs
 -
 
   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
 1f0e51557c4569f0980b72652846b250d00e05d6 
 
 Diff: https://reviews.apache.org/r/36341/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Tim Brooks
 




[jira] [Created] (KAFKA-2352) Possible memory leak in MirrorMaker and/or new Producer

2015-07-21 Thread Kostya Golikov (JIRA)
Kostya Golikov created KAFKA-2352:
-

 Summary: Possible memory leak in MirrorMaker and/or new Producer
 Key: KAFKA-2352
 URL: https://issues.apache.org/jira/browse/KAFKA-2352
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.1
Reporter: Kostya Golikov


I've been playing around with Mirror Maker (version from trunk, dated July 7th) 
and hit a few problems, the most noticeable of which is that MirrorMaker exhausts 
its memory pool, even though it is set to a relatively huge value of 132 MB and 
individual messages are around 2 KB. Batch size is set to just 2 messages (full 
configs are attached). 

{code}
[2015-07-21 15:19:52,915] FATAL [mirrormaker-thread-1] Mirror maker thread 
failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
org.apache.kafka.clients.producer.BufferExhaustedException: You have exhausted 
the 134217728 bytes of memory you configured for the client and the client is 
configured to error rather than block when memory is exhausted.
at 
org.apache.kafka.clients.producer.internals.BufferPool.allocate(BufferPool.java:124)
at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:172)
at 
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:388)
at 
kafka.tools.MirrorMaker$MirrorMakerProducer.send(MirrorMaker.scala:380)
at 
kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$3.apply(MirrorMaker.scala:311)
at 
kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$3.apply(MirrorMaker.scala:311)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:311)
{code}

Am I doing something wrong? Any help in further diagnosing of this problem 
might be handy.
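
(For anyone hitting the same exception: it is thrown when the producer's accumulator 
pool is full and the producer is configured to error rather than block. A sketch of 
the relevant new-producer settings, assuming the 0.8.2-era property names; the values 
and broker address are examples only.)

{code}
import java.util.Properties;

public class MirrorMakerProducerSettings {
    // Sketch only: the settings that govern BufferExhaustedException behaviour
    // in the 0.8.2-era "new" producer. Values are examples, not recommendations.
    public static Properties sketch() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // example address
        props.put("buffer.memory", "134217728");        // total accumulator memory, in bytes
        props.put("batch.size", "16384");               // per-partition batch size, in bytes
        props.put("block.on.buffer.full", "true");      // block instead of throwing when the pool is empty
        return props;
    }
}
{code}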



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-07-21 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2338:
--
Attachment: KAFKA-2338_2015-07-21_13:21:19.patch

 Warn users if they change max.message.bytes that they also need to update 
 broker and consumer settings
 --

 Key: KAFKA-2338
 URL: https://issues.apache.org/jira/browse/KAFKA-2338
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2.1
Reporter: Ewen Cheslack-Postava
 Fix For: 0.8.3

 Attachments: KAFKA-2338.patch, KAFKA-2338_2015-07-18_00:37:31.patch, 
 KAFKA-2338_2015-07-21_13:21:19.patch


 We already have KAFKA-1756 filed to more completely address this issue, but 
 it is waiting for some other major changes to configs to completely protect 
 users from this problem.
 This JIRA should address the low hanging fruit to at least warn users of the 
 potential problems. Currently the only warning is in our documentation.
 1. Generate a warning in the kafka-topics.sh tool when they change this 
 setting on a topic to be larger than the default. This needs to be very 
 obvious in the output.
 2. Currently, the broker's replica fetcher isn't logging any useful error 
 messages when replication can't succeed because a message size is too large. 
 Logging an error here would allow users that get into a bad state to find out 
 why it is happening more easily. (Consumers should already be logging a 
 useful error message.)
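 For reference, a sketch of the settings that have to move together when 
 max.message.bytes is raised on a topic. The property names assume the 0.8.2-era 
 broker and old high-level consumer; the values are examples only.

{code}
import java.util.Properties;

public class MessageSizeSettings {
    // Sketch: keep these in sync with the topic-level override set via
    // kafka-topics.sh --alter --config max.message.bytes=... (example value: 2097152).
    public static void example() {
        // Broker side: replicas must be able to fetch the larger messages.
        Properties broker = new Properties();
        broker.put("replica.fetch.max.bytes", "2097152");   // >= the topic's max.message.bytes

        // Consumer side (old high-level consumer): fetches must fit the larger messages.
        Properties consumer = new Properties();
        consumer.put("fetch.message.max.bytes", "2097152"); // >= the topic's max.message.bytes
    }
}
{code}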



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 36578: Patch for KAFKA-2338

2015-07-21 Thread Edward Ribeiro

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36578/
---

(Updated July 21, 2015, 4:21 p.m.)


Review request for kafka.


Bugs: KAFKA-2338
https://issues.apache.org/jira/browse/KAFKA-2338


Repository: kafka


Description
---

KAFKA-2338 Warn users if they change max.message.bytes that they also need to 
update broker and consumer settings


Diffs (updated)
-

  core/src/main/scala/kafka/admin/TopicCommand.scala 
4e28bf1c08414e8e96e6ca639b927d51bfeb4616 

Diff: https://reviews.apache.org/r/36578/diff/


Testing
---


Thanks,

Edward Ribeiro



Re: Review Request 36565: Patch for KAFKA-2345

2015-07-21 Thread Grant Henke

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36565/#review92422
---

Ship it!


Ship It!

- Grant Henke


On July 17, 2015, 5:21 p.m., Ashish Singh wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36565/
 ---
 
 (Updated July 17, 2015, 5:21 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2345
 https://issues.apache.org/jira/browse/KAFKA-2345
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-2345: Attempt to delete a topic already marked for deletion throws 
 ZkNodeExistsException
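
 (A toy sketch of the behaviour change, in illustrative Java rather than the actual 
 Scala patch: when the topic is already queued for deletion, surface a clearer 
 exception than the raw ZkNodeExistsException. The helper and parameter names are 
 invented for the sketch.)

{code}
import java.util.Set;

// Toy sketch only; the real change lives in kafka.admin.AdminUtils (Scala).
public class DeleteTopicSketch {
    public static class TopicAlreadyMarkedForDeletionException extends RuntimeException {
        public TopicAlreadyMarkedForDeletionException(String message) {
            super(message);
        }
    }

    // 'pendingDeletions' stands in for the children of ZooKeeper's delete-topics path.
    public static void deleteTopic(Set<String> pendingDeletions, String topic) {
        if (!pendingDeletions.add(topic)) {
            throw new TopicAlreadyMarkedForDeletionException(
                    "Topic " + topic + " is already marked for deletion");
        }
        // ... otherwise, create the deletion marker as before ...
    }
}
{code}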
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/admin/AdminUtils.scala 
 f06edf41c732a7b794e496d0048b0ce6f897e72b 
   
 core/src/main/scala/kafka/common/TopicAlreadyMarkedForDeletionException.scala 
 PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/36565/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Ashish Singh
 




Re: [VOTE] Switch to GitHub pull requests for new contributions

2015-07-21 Thread Sriharsha Chintalapani
+1. I think phasing out is a good idea, but rather than x months we should move 
to GitHub PRs for any new JIRAs that are not already in Review Board.
For the JIRAs that are in Review Board, we can continue to use it until they are 
merged in.

-Harsha


On July 21, 2015 at 8:11:17 AM, Ashish Singh (asi...@cloudera.com) wrote:

+1 non-binding.  

A suggestion, we should try to phase out old system of reviews gradually,  
instead of forcing it over a night. Maybe a time bound switch? We can say  
like in x months from now we will completely move to PRs?  

On Tuesday, July 21, 2015, Ismael Juma ism...@juma.me.uk wrote:  

 Hi all,  
  
 I would like to start a vote on switching to GitHub pull requests for new  
 contributions. To be precise, the vote is on whether we should:  
  
 * Update the documentation to tell users to use pull requests instead of  
 patches and Review Board (i.e. merge KAFKA-2321 and KAFKA-2349)  
 * Use pull requests for new contributions  
  
 In a previous discussion[1], everyone that participated was in favour. It's  
 also worth reading the Contributing Code Changes wiki page[2] (if you  
 haven't already) to understand the flow.  
  
 A number of pull requests have been merged in the last few weeks to test  
 this flow and I believe it's working well enough. As usual, there is always  
 room for improvement and I expect us to tweak things as time goes on.  
  
 The main downside of using GitHub pull requests is that we don't have write  
 access to https://github.com/apache/kafka. That means that we rely on  
 commit hooks to close integrated pull requests (the merge script takes care  
 of formatting the message so that this happens) and the PR creator or  
 Apache Infra to close pull requests that are not integrated.  
  
 Regarding existing contributions, I think it's up to the contributor to  
 decide whether they want to resubmit it as a pull request or not. I expect  
 that there will be a transition period where the old and new way will  
 co-exist. But that can be discussed separately.  
  
 The vote will run for 72 hours.  
  
 +1 (non-binding) from me.  
  
 Best,  
 Ismael  
  
 [1] http://search-hadoop.com/m/uyzND1N6CDH1DUc82  
 [2]  
 https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes  
  


--  
Ashish h  


[jira] [Commented] (KAFKA-2260) Allow specifying expected offset on produce

2015-07-21 Thread Daniel Schierbeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14635281#comment-14635281
 ] 

Daniel Schierbeck commented on KAFKA-2260:
--

Where is the KIP being discussed? I couldn't find any mention of this in the 
dev list archive.

 Allow specifying expected offset on produce
 ---

 Key: KAFKA-2260
 URL: https://issues.apache.org/jira/browse/KAFKA-2260
 Project: Kafka
  Issue Type: Improvement
Reporter: Ben Kirwin
Assignee: Ewen Cheslack-Postava
Priority: Minor
 Attachments: expected-offsets.patch


 I'd like to propose a change that adds a simple CAS-like mechanism to the 
 Kafka producer. This update has a small footprint, but enables a bunch of 
 interesting uses in stream processing or as a commit log for process state.
 h4. Proposed Change
 In short:
 - Allow the user to attach a specific offset to each message produced.
 - The server assigns offsets to messages in the usual way. However, if the 
 expected offset doesn't match the actual offset, the server should fail the 
 produce request instead of completing the write.
 This is a form of optimistic concurrency control, like the ubiquitous 
 check-and-set -- but instead of checking the current value of some state, it 
 checks the current offset of the log.
 h4. Motivation
 Much like check-and-set, this feature is only useful when there's very low 
 contention. Happily, when Kafka is used as a commit log or as a 
 stream-processing transport, it's common to have just one producer (or a 
 small number) for a given partition -- and in many of these cases, predicting 
 offsets turns out to be quite useful.
 - We get the same benefits as the 'idempotent producer' proposal: a producer 
 can retry a write indefinitely and be sure that at most one of those attempts 
 will succeed; and if two producers accidentally write to the end of the 
 partition at once, we can be certain that at least one of them will fail.
 - It's possible to 'bulk load' Kafka this way -- you can write a list of n 
 messages consecutively to a partition, even if the list is much larger than 
 the buffer size or the producer has to be restarted.
 - If a process is using Kafka as a commit log -- reading from a partition to 
 bootstrap, then writing any updates to that same partition -- it can be sure 
 that it's seen all of the messages in that partition at the moment it does 
 its first (successful) write.
 There's a bunch of other similar use-cases here, but they all have roughly 
 the same flavour.
 h4. Implementation
 The major advantage of this proposal over other suggested transaction / 
 idempotency mechanisms is its minimality: it gives the 'obvious' meaning to a 
 currently-unused field, adds no new APIs, and requires very little new code 
 or additional work from the server.
 - Produced messages already carry an offset field, which is currently ignored 
 by the server. This field could be used for the 'expected offset', with a 
 sigil value for the current behaviour. (-1 is a natural choice, since it's 
 already used to mean 'next available offset'.)
 - We'd need a new error and error code for a 'CAS failure'.
 - The server assigns offsets to produced messages in 
 {{ByteBufferMessageSet.validateMessagesAndAssignOffsets}}. After this 
 change, this method would assign offsets in the same way -- but if they 
 don't match the offset in the message, we'd return an error instead of 
 completing the write.
 - To avoid breaking existing clients, this behaviour would need to live 
 behind some config flag. (Possibly global, but probably more useful 
 per-topic?)
 I understand all this is unsolicited and possibly strange: happy to answer 
 questions, and if this seems interesting, I'd be glad to flesh this out into 
 a full KIP or patch. (And apologies if this is the wrong venue for this sort 
 of thing!)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2188) JBOD Support

2015-07-21 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14635283#comment-14635283
 ] 

Flavio Junqueira commented on KAFKA-2188:
-

Hey Tim, I had a look at the proposal and I have some feedback, mostly 
questions at this point. I like this improvement; in general, I've found that 
we can improve exception handling in Kafka quite a bit, and this is clearly 
one such effort. Specifically, here are more concrete points:

# In the exception handler section, I'd say that the best approach is to be 
conservative and remove the drive in the case of an error. Let's not optimize 
too much trying to get the exact partitions that are affected by an error and 
such. If there is an error, then let an operator check it out and reinsert the 
drive when fixed. As part of this comment, I'd say that it'd be a good feature 
to allow drives to be inserted (manually).
# In the notifying-the-controller discussion, could you be more specific about the 
race you're concerned about? I can tell that you're pointing to a potential 
race, but I'm not sure what it is.
# Open question 1: disk availability. It's kind of hard to detect exactly what 
happened with a faulty disk. It could be a full disk, a bad drive, or even just 
some annoying data corruption. I don't think it is worth spending tons of time 
and effort trying to make a great check. If we spot an error, then remove the 
drive and log it. I don't know if there is any typical mechanism in Kafka to 
notify operators.
# Open question 2: log read. I think I know the problem you're referring to, 
and I'll have a look to see if I can suggest some decent alternative, but we 
might need to make it a bit less efficient to be able to handle IO errors 
properly.
# Open question 3: restart partition. This is about the race I asked above. 
# Open question 4: operation retries. What would be a situation in which it is 
worth retrying? 

I was actually wondering whether some users would be interested in leaving a 
fraction of the drives unused so that faulty drives can be replaced over time. The 
advantage is being able to maintain the capacity of a broker despite faulty 
drives, at the cost of leaving some IO capacity in the broker unused. 

 JBOD Support
 

 Key: KAFKA-2188
 URL: https://issues.apache.org/jira/browse/KAFKA-2188
 Project: Kafka
  Issue Type: Bug
Reporter: Andrii Biletskyi
Assignee: Andrii Biletskyi
 Attachments: KAFKA-2188.patch, KAFKA-2188.patch, KAFKA-2188.patch


 https://cwiki.apache.org/confluence/display/KAFKA/KIP-18+-+JBOD+Support



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-388) Add a highly available consumer co-ordinator to a Kafka cluster

2015-07-21 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-388:
--
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

This is now handled in KAFKA-1326.

 Add a highly available consumer co-ordinator to a Kafka cluster
 ---

 Key: KAFKA-388
 URL: https://issues.apache.org/jira/browse/KAFKA-388
 Project: Kafka
  Issue Type: Sub-task
Affects Versions: 0.7
Reporter: Neha Narkhede
Assignee: Neha Narkhede
  Labels: patch
 Attachments: kafka-388_v1.patch


 This JIRA will add a highly available co-ordinator for consumer rebalancing. 
 Detailed design of the co-ordinator leader election and startup procedure is 
 documented here - 
 https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Detailed+Consumer+Coordinator+Design



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Switch to GitHub pull requests for new contributions

2015-07-21 Thread Neha Narkhede
+1 (binding)

Agree with Ismael. We may not want to rush the switch to PRs right away.
Having said that, if it works well with, say, 10 patches, I'd consider that
enough to require new JIRAs to submit patches as PRs instead.

Thanks,
Neha

On Tue, Jul 21, 2015 at 8:19 AM, Sriharsha Chintalapani 
harsh...@fastmail.fm wrote:

 +1. I think phasing out is a good idea, but rather than x months we should
 move to GitHub PRs for any new JIRAs that are not already in Review Board.
 For the JIRAs that are in Review Board, we can continue to use it until
 they are merged in.

 -Harsha


 On July 21, 2015 at 8:11:17 AM, Ashish Singh (asi...@cloudera.com) wrote:

 +1 non-binding.

 A suggestion, we should try to phase out old system of reviews gradually,
 instead of forcing it over a night. Maybe a time bound switch? We can say
 like in x months from now we will completely move to PRs?

 On Tuesday, July 21, 2015, Ismael Juma ism...@juma.me.uk wrote:

  Hi all,
 
  I would like to start a vote on switching to GitHub pull requests for new
  contributions. To be precise, the vote is on whether we should:
 
  * Update the documentation to tell users to use pull requests instead of
  patches and Review Board (i.e. merge KAFKA-2321 and KAFKA-2349)
  * Use pull requests for new contributions
 
  In a previous discussion[1], everyone that participated was in favour.
 It's
  also worth reading the Contributing Code Changes wiki page[2] (if you
  haven't already) to understand the flow.
 
  A number of pull requests have been merged in the last few weeks to test
  this flow and I believe it's working well enough. As usual, there is
 always
  room for improvement and I expect us to tweak things as time goes on.
 
  The main downside of using GitHub pull requests is that we don't have
 write
  access to https://github.com/apache/kafka. That means that we rely on
  commit hooks to close integrated pull requests (the merge script takes
 care
  of formatting the message so that this happens) and the PR creator or
  Apache Infra to close pull requests that are not integrated.
 
  Regarding existing contributions, I think it's up to the contributor to
  decide whether they want to resubmit it as a pull request or not. I
 expect
  that there will be a transition period where the old and new way will
  co-exist. But that can be discussed separately.
 
  The vote will run for 72 hours.
 
  +1 (non-binding) from me.
 
  Best,
  Ismael
 
  [1] http://search-hadoop.com/m/uyzND1N6CDH1DUc82
  [2]
 
 https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes
 


 --
 Ashish h




-- 
Thanks,
Neha


[jira] [Commented] (KAFKA-251) The ConsumerStats MBean's PartOwnerStats attribute is a string

2015-07-21 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14635208#comment-14635208
 ] 

Edward Ribeiro commented on KAFKA-251:
--

[~ijuma] Hi, if it is still relevant and no one is working on it, could you 
assign it to me, please?

 The ConsumerStats MBean's PartOwnerStats  attribute is a string
 ---

 Key: KAFKA-251
 URL: https://issues.apache.org/jira/browse/KAFKA-251
 Project: Kafka
  Issue Type: Bug
Reporter: Pierre-Yves Ritschard
 Attachments: 0001-Incorporate-Jun-Rao-s-comments-on-KAFKA-251.patch, 
 0001-Provide-a-patch-for-KAFKA-251.patch


 The fact that the PartOwnerStats is a string prevents monitoring systems from 
 graphing consumer lag. There should be one mbean per [ topic, partition, 
 groupid ] group.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-251) The ConsumerStats MBean's PartOwnerStats attribute is a string

2015-07-21 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14635224#comment-14635224
 ] 

Edward Ribeiro commented on KAFKA-251:
--

I cannot. AFAIK, it depends on the project, and some (many?) restrict the 
assign operation to committers and core members, which I can understand 
perfectly and comply with.

 The ConsumerStats MBean's PartOwnerStats  attribute is a string
 ---

 Key: KAFKA-251
 URL: https://issues.apache.org/jira/browse/KAFKA-251
 Project: Kafka
  Issue Type: Bug
Reporter: Pierre-Yves Ritschard
 Attachments: 0001-Incorporate-Jun-Rao-s-comments-on-KAFKA-251.patch, 
 0001-Provide-a-patch-for-KAFKA-251.patch


 The fact that the PartOwnerStats is a string prevents monitoring systems from 
 graphing consumer lag. There should be one mbean per [ topic, partition, 
 groupid ] group.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Switch to GitHub pull requests for new contributions

2015-07-21 Thread Ashish Singh
+1 non-binding.

A suggestion: we should try to phase out the old system of reviews gradually,
instead of forcing it overnight. Maybe a time-bound switch? We could say that
in x months from now we will completely move to PRs?

On Tuesday, July 21, 2015, Ismael Juma ism...@juma.me.uk wrote:

 Hi all,

 I would like to start a vote on switching to GitHub pull requests for new
 contributions. To be precise, the vote is on whether we should:

 * Update the documentation to tell users to use pull requests instead of
 patches and Review Board (i.e. merge KAFKA-2321 and KAFKA-2349)
 * Use pull requests for new contributions

 In a previous discussion[1], everyone that participated was in favour. It's
 also worth reading the Contributing Code Changes wiki page[2] (if you
 haven't already) to understand the flow.

 A number of pull requests have been merged in the last few weeks to test
 this flow and I believe it's working well enough. As usual, there is always
 room for improvement and I expect us to tweak things as time goes on.

 The main downside of using GitHub pull requests is that we don't have write
 access to https://github.com/apache/kafka. That means that we rely on
 commit hooks to close integrated pull requests (the merge script takes care
 of formatting the message so that this happens) and the PR creator or
 Apache Infra to close pull requests that are not integrated.

 Regarding existing contributions, I think it's up to the contributor to
 decide whether they want to resubmit it as a pull request or not. I expect
 that there will be a transition period where the old and new way will
 co-exist. But that can be discussed separately.

 The vote will run for 72 hours.

 +1 (non-binding) from me.

 Best,
 Ismael

 [1] http://search-hadoop.com/m/uyzND1N6CDH1DUc82
 [2]
 https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes



-- 
Ashish h


[jira] [Commented] (KAFKA-2188) JBOD Support

2015-07-21 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14635305#comment-14635305
 ] 

Jun Rao commented on KAFKA-2188:


Another thing that's worth mentioning is that currently we don't have 
throttling when bootstrapping new replicas. Bootstrapping too many new replicas 
at the same time can degrade the performance of the cluster. So, if we want to 
do any kind of auto re-replication, it would be good to have replication 
throttling in place first.

 JBOD Support
 

 Key: KAFKA-2188
 URL: https://issues.apache.org/jira/browse/KAFKA-2188
 Project: Kafka
  Issue Type: Bug
Reporter: Andrii Biletskyi
Assignee: Andrii Biletskyi
 Attachments: KAFKA-2188.patch, KAFKA-2188.patch, KAFKA-2188.patch


 https://cwiki.apache.org/confluence/display/KAFKA/KIP-18+-+JBOD+Support



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2352) Possible memory leak in MirrorMaker and/or new Producer

2015-07-21 Thread Kostya Golikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostya Golikov updated KAFKA-2352:
--
Description: 
I've been playing around with Mirror Maker (version from trunk, dated July 7th) 
and got a few problems, most noticeable of which is that MirrorMaker exhausts 
it's memory pool, even though it's size set to relatively huge value of 132 MB, 
and individual messages are around 2 KB. Batch size is set to just 2 messages 
(full configs are attached). 

{code}
[2015-07-21 15:19:52,915] FATAL [mirrormaker-thread-1] Mirror maker thread 
failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
org.apache.kafka.clients.producer.BufferExhaustedException: You have exhausted 
the 134217728 bytes of memory you configured for the client and the client is 
configured to error rather than block when memory is exhausted.
at 
org.apache.kafka.clients.producer.internals.BufferPool.allocate(BufferPool.java:124)
at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:172)
at 
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:388)
at 
kafka.tools.MirrorMaker$MirrorMakerProducer.send(MirrorMaker.scala:380)
at 
kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$3.apply(MirrorMaker.scala:311)
at 
kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$3.apply(MirrorMaker.scala:311)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:311)
{code}

Am I doing something wrong? Any help in further diagnosing of this problem 
might be handy.

  was:
I've been playing around with Mirror Maker (version from trunk, dated July 7th) 
and got a few problems, most noticeable of which is that MirrorMaker exhausts 
it's memory pool, even though it's set to relatively huge value 132 MB, and 
individual messages are around 2 KB. Batch size is set to just 2 messages (full 
configs are attached). 

{code}
[2015-07-21 15:19:52,915] FATAL [mirrormaker-thread-1] Mirror maker thread 
failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
org.apache.kafka.clients.producer.BufferExhaustedException: You have exhausted 
the 134217728 bytes of memory you configured for the client and the client is 
configured to error rather than block when memory is exhausted.
at 
org.apache.kafka.clients.producer.internals.BufferPool.allocate(BufferPool.java:124)
at 
org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:172)
at 
org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:388)
at 
kafka.tools.MirrorMaker$MirrorMakerProducer.send(MirrorMaker.scala:380)
at 
kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$3.apply(MirrorMaker.scala:311)
at 
kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$3.apply(MirrorMaker.scala:311)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:311)
{code}

Am I doing something wrong? Any help in further diagnosing of this problem 
might be handy.


 Possible memory leak in MirrorMaker and/or new Producer
 ---

 Key: KAFKA-2352
 URL: https://issues.apache.org/jira/browse/KAFKA-2352
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.1
Reporter: Kostya Golikov
 Attachments: consumer.conf, output.log, producer.conf


 I've been playing around with Mirror Maker (version from trunk, dated July 
 7th) and hit a few problems, the most noticeable of which is that MirrorMaker 
 exhausts its memory pool, even though its size is set to a relatively huge 
 value of 132 MB, and individual messages are around 2 KB. Batch size is set to 
 just 2 messages (full configs are attached). 
 {code}
 [2015-07-21 15:19:52,915] FATAL [mirrormaker-thread-1] Mirror maker thread 
 failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
 org.apache.kafka.clients.producer.BufferExhaustedException: You have 
 exhausted the 134217728 bytes of memory you configured for the client and the 
 client is configured to error rather than block when memory is exhausted.
   at 
 org.apache.kafka.clients.producer.internals.BufferPool.allocate(BufferPool.java:124)
   at 
 

[jira] [Updated] (KAFKA-2352) Possible memory leak in MirrorMaker and/or new Producer

2015-07-21 Thread Kostya Golikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostya Golikov updated KAFKA-2352:
--
Attachment: producer.conf
consumer.conf
output.log

 Possible memory leak in MirrorMaker and/or new Producer
 ---

 Key: KAFKA-2352
 URL: https://issues.apache.org/jira/browse/KAFKA-2352
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.8.2.1
Reporter: Kostya Golikov
 Attachments: consumer.conf, output.log, producer.conf


 I've been playing around with Mirror Maker (version from trunk, dated July 
 7th) and hit a few problems, the most noticeable of which is that MirrorMaker 
 exhausts its memory pool, even though it is set to a relatively huge value of 
 132 MB, and individual messages are around 2 KB. Batch size is set to just 2 
 messages (full configs are attached). 
 {code}
 [2015-07-21 15:19:52,915] FATAL [mirrormaker-thread-1] Mirror maker thread 
 failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
 org.apache.kafka.clients.producer.BufferExhaustedException: You have 
 exhausted the 134217728 bytes of memory you configured for the client and the 
 client is configured to error rather than block when memory is exhausted.
   at 
 org.apache.kafka.clients.producer.internals.BufferPool.allocate(BufferPool.java:124)
   at 
 org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:172)
   at 
 org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:388)
   at 
 kafka.tools.MirrorMaker$MirrorMakerProducer.send(MirrorMaker.scala:380)
   at 
 kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$3.apply(MirrorMaker.scala:311)
   at 
 kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$3.apply(MirrorMaker.scala:311)
   at scala.collection.Iterator$class.foreach(Iterator.scala:727)
   at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
   at kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:311)
 {code}
 Am I doing something wrong? Any help in further diagnosing of this problem 
 might be handy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: auto.offset.reset docs not in sync with valida...

2015-07-21 Thread sslavic
GitHub user sslavic opened a pull request:

https://github.com/apache/kafka/pull/91

auto.offset.reset docs not in sync with validation

In this commit 
https://github.com/apache/kafka/commit/0699ff2ce60abb466cab5315977a224f1a70a4da#diff-5533ddc72176acd1c32f5abbe94aa672
 among other things auto.offset.reset possible options were changed from 
smallest to earliest and from largest to latest, but not in documentation for 
that configuration property.

This patch fixes documentation for auto.offset.reset consumer configuration 
property so it is in sync with validation logic.
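
For context, a sketch of the configuration the updated docs now describe, using the 
new consumer's property names. The values and broker address are examples; 
earliest/latest replace the old consumer's smallest/largest.

{code}
import java.util.Properties;

public class OffsetResetExample {
    // Sketch: the new consumer validates auto.offset.reset against
    // "earliest"/"latest" rather than the old "smallest"/"largest".
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // example address
        props.put("group.id", "example-group");
        props.put("auto.offset.reset", "earliest");     // was "smallest" for the old consumer
        return props;
    }
}
{code}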

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sslavic/kafka patch-5

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/91.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #91


commit 5ed569c2e2be373eb589f02ea94c53602cf07813
Author: Stevo Slavić ssla...@gmail.com
Date:   2015-07-21T14:45:19Z

auto.offset.reset docs not in sync with validation

In this commit 
https://github.com/apache/kafka/commit/0699ff2ce60abb466cab5315977a224f1a70a4da#diff-5533ddc72176acd1c32f5abbe94aa672
 among other things auto.offset.reset possible options were changed from 
smallest to earliest and from largest to latest, but not in documentation for 
that configuration property.

This patch fixes documentation for auto.offset.reset consumer configuration 
property so it is in sync with validation logic.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Review Request 36548: Patch for KAFKA-2336

2015-07-21 Thread Grant Henke


 On July 21, 2015, 2:43 p.m., Ismael Juma wrote:
  core/src/main/scala/kafka/server/OffsetManager.scala, line 454
  https://reviews.apache.org/r/36548/diff/2/?file=1013441#file1013441line454
 
  Is `topicData` guaranteed to have a key for `topic`? If not, it's 
  better to do `topicData.get(topic)` to avoid the `NoSuchElementException`.

Yes, it is guaranteed. See ZkUtils.getPartitionAssignmentForTopics.


- Grant


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36548/#review92406
---


On July 16, 2015, 6:04 p.m., Grant Henke wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36548/
 ---
 
 (Updated July 16, 2015, 6:04 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2336
 https://issues.apache.org/jira/browse/KAFKA-2336
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Fix Scala style
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/server/OffsetManager.scala 
 47b6ce93da320a565435b4a7916a0c4371143b8a 
 
 Diff: https://reviews.apache.org/r/36548/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Grant Henke
 




[jira] [Commented] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-07-21 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14635216#comment-14635216
 ] 

Edward Ribeiro commented on KAFKA-2338:
---

Oh, ignore the previous message, [~gwenshap], I will be reworking the patch to 
fix it.

 Warn users if they change max.message.bytes that they also need to update 
 broker and consumer settings
 --

 Key: KAFKA-2338
 URL: https://issues.apache.org/jira/browse/KAFKA-2338
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2.1
Reporter: Ewen Cheslack-Postava
 Fix For: 0.8.3

 Attachments: KAFKA-2338.patch, KAFKA-2338_2015-07-18_00:37:31.patch


 We already have KAFKA-1756 filed to more completely address this issue, but 
 it is waiting for some other major changes to configs to completely protect 
 users from this problem.
 This JIRA should address the low hanging fruit to at least warn users of the 
 potential problems. Currently the only warning is in our documentation.
 1. Generate a warning in the kafka-topics.sh tool when they change this 
 setting on a topic to be larger than the default. This needs to be very 
 obvious in the output.
 2. Currently, the broker's replica fetcher isn't logging any useful error 
 messages when replication can't succeed because a message size is too large. 
 Logging an error here would allow users that get into a bad state to find out 
 why it is happening more easily. (Consumers should already be logging a 
 useful error message.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Switch to GitHub pull requests for new contributions

2015-07-21 Thread Ismael Juma
On Tue, Jul 21, 2015 at 4:11 PM, Ashish Singh asi...@cloudera.com wrote:

 +1 non-binding.

 A suggestion, we should try to phase out old system of reviews gradually,
 instead of forcing it over a night.


I agree.

Maybe a time bound switch? We can say
 like in x months from now we will completely move to PRs?


Not sure if we need this to start with. Maybe we just encourage the new way
for new contributions and update the documentation and see how it goes. If
the transition is a bit slow, we can have a discussion about setting a time
bound.

Best,
Ismael


Re: [DISCUSS] KIP-27 - Conditional Publish

2015-07-21 Thread Jun Rao
For 1, yes, when there is a transient leader change, it's guaranteed that a
prefix of the messages in a request will be committed. However, it seems
that the client needs to know which subset of messages was committed in
order to resume sending. The question is then how.

As Flavio indicated, for the use cases that you listed, it would be useful
to figure out the exact logic for using this feature. For example, in the
partitioned K/V store case, when we fail over to a new writer for the
commit log, the zombie writer can still publish new messages to the log after
the new writer takes over but before the new writer publishes any message. We
probably need to outline how this case can be handled properly.

Thanks,

Jun

On Mon, Jul 20, 2015 at 12:05 PM, Ben Kirwin b...@kirw.in wrote:

 Hi Jun,

 Thanks for the close reading! Responses inline.

  Thanks for the write-up. The single producer use case you mentioned makes
  sense. It would be useful to include that in the KIP wiki.

 Great -- I'll make sure that the wiki is clear about this.

  1. What happens when the leader of the partition changes in the middle
 of a
  produce request? In this case, the producer client is not sure whether
 the
  request succeeds or not. If there is only a single message in the
 request,
  the producer can just resend the request. If it sees an OffsetMismatch
  error, it knows that the previous send actually succeeded and can proceed
  with the next write. This is nice since it not only allows the producer
 to
  proceed during transient failures in the broker, it also avoids
 duplicates
  during producer resend. One caveat is when there are multiple messages in
  the same partition in a produce request. The issue is that in our current
  replication protocol, it's possible for some, but not all messages in the
  request to be committed. This makes resend a bit harder to deal with
 since
  on receiving an OffsetMismatch error, it's not clear which messages have
  been committed. One possibility is to expect that compression is enabled,
  in which case multiple messages are compressed into a single message. I
 was
  thinking that another possibility is for the broker to return the current
  high watermark when sending an OffsetMismatch error. Based on this info,
  the producer can resend the subset of messages that have not been
  committed. However, this may not work in a compacted topic since there
 can
  be holes in the offset.

 This is an excellent question. It's my understanding that at least a
 *prefix* of messages will be committed (right?) -- which seems to be
 enough for many cases. I'll try and come up with a more concrete
 answer here.

  2. Is this feature only intended to be used with ack = all? The client
  doesn't get the offset with ack = 0. With ack = 1, it's possible for a
  previously acked message to be lost during leader transition, which will
  make the client logic more complicated.

 It's true that acks = 0 doesn't seem to be particularly useful; in all
 the cases I've come across, the client eventually wants to know about
 the mismatch error. However, it seems like there are some cases where
 acks = 1 would be fine -- eg. in a bulk load of a fixed dataset,
 losing messages during a leader transition just means you need to
 rewind / restart the load, which is not especially catastrophic. For
 many other interesting cases, acks = all is probably preferable.

  3. How does the producer client know the offset to send the first
 message?
  Do we need to expose an API in producer to get the current high
 watermark?

 You're right, it might be irritating to have to go through the
 consumer API just for this. There are some cases where the offsets are
 already available -- like the commit-log-for-KV-store example -- but
 in general, being able to get the offsets from the producer interface
 does sound convenient.
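
 (As an aside, one way to do that today is to go through the consumer, as
 mentioned above. A sketch, with method signatures following recent
 kafka-clients releases and an example broker address:)

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class LogEndOffsetLookup {
    // Sketch: look up the next offset of a partition via the Java consumer, so a
    // conditional produce could use it as the expected offset.
    public static long nextOffset(String topic, int partition) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition(topic, partition);
            consumer.assign(Collections.singletonList(tp));
            consumer.seekToEnd(Collections.singletonList(tp));
            return consumer.position(tp); // the offset the next produced message would receive
        }
    }
}
{code}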

  We plan to have a KIP discussion meeting tomorrow at 11am PST. Perhaps
 you
  can describe this KIP a bit then?

 Sure, happy to join.

  Thanks,
 
  Jun
 
 
 
  On Sat, Jul 18, 2015 at 10:37 AM, Ben Kirwin b...@kirw.in wrote:
 
  Just wanted to flag a little discussion that happened on the ticket:
 
 
  https://issues.apache.org/jira/browse/KAFKA-2260?focusedCommentId=14632259&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14632259
 
  In particular, Yasuhiro Matsuda proposed an interesting variant on
  this that performs the offset check on the message key (instead of
  just the partition), with bounded space requirements, at the cost of
  potentially some spurious failures. (ie. the produce request may fail
  even if that particular key hasn't been updated recently.) This
  addresses a couple of the drawbacks of the per-key approach mentioned
  at the bottom of the KIP.
 
  On Fri, Jul 17, 2015 at 6:47 PM, Ben Kirwin b...@kirw.in wrote:
   Hi all,
  
   So, perhaps it's worth adding a couple specific examples of where this
   feature is useful, to make this a bit more concrete:
  
   

[jira] [Commented] (KAFKA-2338) Warn users if they change max.message.bytes that they also need to update broker and consumer settings

2015-07-21 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14635330#comment-14635330
 ] 

Edward Ribeiro commented on KAFKA-2338:
---

Updated reviewboard https://reviews.apache.org/r/36578/diff/
 against branch origin/trunk

 Warn users if they change max.message.bytes that they also need to update 
 broker and consumer settings
 --

 Key: KAFKA-2338
 URL: https://issues.apache.org/jira/browse/KAFKA-2338
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.8.2.1
Reporter: Ewen Cheslack-Postava
 Fix For: 0.8.3

 Attachments: KAFKA-2338.patch, KAFKA-2338_2015-07-18_00:37:31.patch, 
 KAFKA-2338_2015-07-21_13:21:19.patch


 We already have KAFKA-1756 filed to more completely address this issue, but 
 it is waiting for some other major changes to configs to completely protect 
 users from this problem.
 This JIRA should address the low hanging fruit to at least warn users of the 
 potential problems. Currently the only warning is in our documentation.
 1. Generate a warning in the kafka-topics.sh tool when they change this 
 setting on a topic to be larger than the default. This needs to be very 
 obvious in the output.
 2. Currently, the broker's replica fetcher isn't logging any useful error 
 messages when replication can't succeed because a message size is too large. 
 Logging an error here would allow users that get into a bad state to find out 
 why it is happening more easily. (Consumers should already be logging a 
 useful error message.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2350) Add KafkaConsumer pause capability

2015-07-21 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14636098#comment-14636098
 ] 

Jay Kreps commented on KAFKA-2350:
--

+1 on resume instead of unpause, though it doesn't match subscribe/unsubscribe.

The original motivation for this was to be able to subscribe at the topic level 
but be able to say that while you're still subscribed to a given partition you 
can't take more data for that partition at this particular moment. Generalizing 
that to allow pausing a whole topic makes sense too.

[~becket_qin] I think your idea of having unsubscribe(partition) have the same 
effect as pause(partition) when you are subscribed at the topic level would be 
intuitive, but the logic of how that would work might be a bit complex. If 
someone is smart enough to work out the details, that could be more elegant than 
a new API. The challenge is that partition-level subscribe/unsubscribe is 
currently an error if you are subscribed at the topic level, and the details of 
that also control whether group management etc. is used.

 Add KafkaConsumer pause capability
 --

 Key: KAFKA-2350
 URL: https://issues.apache.org/jira/browse/KAFKA-2350
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson
Assignee: Jason Gustafson

 There are some use cases in stream processing where it is helpful to be able 
 to pause consumption of a topic. For example, when joining two topics, you 
 may need to delay processing of one topic while you wait for the consumer of 
 the other topic to catch up. The new consumer currently doesn't provide a 
 nice way to do this. If you skip poll() or if you unsubscribe, then a 
 rebalance will be triggered and your partitions will be reassigned.
 One way to achieve this would be to add two new methods to KafkaConsumer:
 {code}
 void pause(String... topics);
 void unpause(String... topics);
 {code}
 When a topic is paused, a call to KafkaConsumer.poll will not initiate any 
 new fetches for that topic. After it is unpaused, fetches will begin again.
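 A rough sketch of how the proposed methods could be used for the join case 
 described above. The PausableConsumer interface below is hypothetical -- it only 
 mirrors the methods proposed in this ticket and is not an existing API.

{code}
// Hypothetical interface mirroring the methods proposed above; not an existing API.
interface PausableConsumer {
    void pause(String... topics);
    void unpause(String... topics);
    void poll(long timeoutMs); // stands in for the real poll(); returned records omitted here
}

class TopicJoinSketch {
    // Sketch of a two-topic join: stop fetching the fast topic while the slow one
    // catches up, without unsubscribing and triggering a rebalance.
    static void step(PausableConsumer consumer, String fastTopic, boolean slowSideBehind) {
        if (slowSideBehind) {
            consumer.pause(fastTopic);   // poll() will start no new fetches for this topic
        } else {
            consumer.unpause(fastTopic); // fetching resumes on subsequent polls
        }
        consumer.poll(100);
        // ... process records from whichever topics are not paused ...
    }
}
{code}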



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2355) Creating a unit test to validate the deletion of a partition marked as deleted

2015-07-21 Thread Edward Ribeiro (JIRA)
Edward Ribeiro created KAFKA-2355:
-

 Summary: Creating a unit test to validate the deletion of a 
partition marked as deleted
 Key: KAFKA-2355
 URL: https://issues.apache.org/jira/browse/KAFKA-2355
 Project: Kafka
  Issue Type: Sub-task
Reporter: Edward Ribeiro
Priority: Minor


Trying to delete a partition marked as deleted throws 
{{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test to 
validate this behaviour. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2350) Add KafkaConsumer pause capability

2015-07-21 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14636105#comment-14636105
 ] 

Jason Gustafson commented on KAFKA-2350:


Hey [~becket_qin], thanks for the suggestion. I think my only concern is that 
this would make the API more confusing. It would give two meanings to 
subscribe(partition) which depend on whether automatic assignment is used. I 
agree with you about minimizing the complexity of the consumer API, but I'd 
probably rather have the explicit methods if we think the use case is valid.

 Add KafkaConsumer pause capability
 --

 Key: KAFKA-2350
 URL: https://issues.apache.org/jira/browse/KAFKA-2350
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson
Assignee: Jason Gustafson

 There are some use cases in stream processing where it is helpful to be able 
 to pause consumption of a topic. For example, when joining two topics, you 
 may need to delay processing of one topic while you wait for the consumer of 
 the other topic to catch up. The new consumer currently doesn't provide a 
 nice way to do this. If you skip poll() or if you unsubscribe, then a 
 rebalance will be triggered and your partitions will be reassigned.
 One way to achieve this would be to add two new methods to KafkaConsumer:
 {code}
 void pause(String... topics);
 void unpause(String... topics);
 {code}
 When a topic is paused, a call to KafkaConsumer.poll will not initiate any 
 new fetches for that topic. After it is unpaused, fetches will begin again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2355) Add an unit test to validate the deletion of a partition marked as deleted

2015-07-21 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2355:
--
Summary: Add an unit test to validate the deletion of a partition marked as 
deleted  (was: Creating a unit test to validate the deletion of a partition 
marked as deleted)

 Add an unit test to validate the deletion of a partition marked as deleted
 --

 Key: KAFKA-2355
 URL: https://issues.apache.org/jira/browse/KAFKA-2355
 Project: Kafka
  Issue Type: Sub-task
Reporter: Edward Ribeiro
Priority: Minor
 Fix For: 0.8.3


 Trying to delete a partition marked as deleted throws 
 {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test to 
 validate this behaviour. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2355) Add an unit test to validate the deletion of a partition marked as deleted

2015-07-21 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2355:
--
Issue Type: Test  (was: Sub-task)
Parent: (was: KAFKA-2345)

 Add an unit test to validate the deletion of a partition marked as deleted
 --

 Key: KAFKA-2355
 URL: https://issues.apache.org/jira/browse/KAFKA-2355
 Project: Kafka
  Issue Type: Test
Affects Versions: 0.8.2.1
Reporter: Edward Ribeiro
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2355.patch


 Trying to delete a partition marked as deleted throws 
 {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test to 
 validate this behaviour. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 36670: Patch for KAFKA-2355

2015-07-21 Thread Ashish Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36670/#review92551
---

Ship it!


Thanks for the patch Edward.

- Ashish Singh


On July 22, 2015, 1:46 a.m., Edward Ribeiro wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36670/
 ---
 
 (Updated July 22, 2015, 1:46 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2355
 https://issues.apache.org/jira/browse/KAFKA-2355
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-2355 Add an unit test to validate the deletion of a partition marked as 
 deleted
 
 
 Diffs
 -
 
   core/src/test/scala/unit/kafka/admin/DeleteTopicTest.scala 
 fa8ce259a2832ab86f9dda8c1d409b2c42d43ae9 
 
 Diff: https://reviews.apache.org/r/36670/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Edward Ribeiro
 




Re: [VOTE] Switch to GitHub pull requests for new contributions

2015-07-21 Thread Edward Ribeiro
+1 (non binding)

On Tue, Jul 21, 2015 at 7:36 PM, Jay Kreps j...@confluent.io wrote:

 +1

 -Jay

 On Tue, Jul 21, 2015 at 4:28 AM, Ismael Juma ism...@juma.me.uk wrote:

  Hi all,
 
  I would like to start a vote on switching to GitHub pull requests for new
  contributions. To be precise, the vote is on whether we should:
 
  * Update the documentation to tell users to use pull requests instead of
  patches and Review Board (i.e. merge KAFKA-2321 and KAFKA-2349)
  * Use pull requests for new contributions
 
  In a previous discussion[1], everyone that participated was in favour.
 It's
  also worth reading the Contributing Code Changes wiki page[2] (if you
  haven't already) to understand the flow.
 
  A number of pull requests have been merged in the last few weeks to test
  this flow and I believe it's working well enough. As usual, there is
 always
  room for improvement and I expect us to tweak things as time goes on.
 
  The main downside of using GitHub pull requests is that we don't have
 write
  access to https://github.com/apache/kafka. That means that we rely on
  commit hooks to close integrated pull requests (the merge script takes
 care
  of formatting the message so that this happens) and the PR creator or
  Apache Infra to close pull requests that are not integrated.
 
  Regarding existing contributions, I think it's up to the contributor to
  decide whether they want to resubmit it as a pull request or not. I
 expect
  that there will be a transition period where the old and new way will
  co-exist. But that can be discussed separately.
 
  The vote will run for 72 hours.
 
  +1 (non-binding) from me.
 
  Best,
  Ismael
 
  [1] http://search-hadoop.com/m/uyzND1N6CDH1DUc82
  [2]
 
 https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes
 



[jira] [Commented] (KAFKA-2355) Add an unit test to validate the deletion of a partition marked as deleted

2015-07-21 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14636116#comment-14636116
 ] 

Edward Ribeiro commented on KAFKA-2355:
---

Created reviewboard https://reviews.apache.org/r/36670/diff/
 against branch origin/trunk

 Add an unit test to validate the deletion of a partition marked as deleted
 --

 Key: KAFKA-2355
 URL: https://issues.apache.org/jira/browse/KAFKA-2355
 Project: Kafka
  Issue Type: Sub-task
Reporter: Edward Ribeiro
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2355.patch


 Trying to delete a partition marked as deleted throws 
 {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test to 
 validate this behaviour. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2355) Add an unit test to validate the deletion of a partition marked as deleted

2015-07-21 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2355:
--
Attachment: KAFKA-2355.patch

 Add an unit test to validate the deletion of a partition marked as deleted
 --

 Key: KAFKA-2355
 URL: https://issues.apache.org/jira/browse/KAFKA-2355
 Project: Kafka
  Issue Type: Sub-task
Reporter: Edward Ribeiro
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2355.patch


 Trying to delete a partition marked as deleted throws 
 {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test to 
 validate this behaviour. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 36670: Patch for KAFKA-2355

2015-07-21 Thread Edward Ribeiro

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36670/
---

Review request for kafka.


Bugs: KAFKA-2355
https://issues.apache.org/jira/browse/KAFKA-2355


Repository: kafka


Description
---

KAFKA-2355 Add an unit test to validate the deletion of a partition marked as 
deleted


Diffs
-

  core/src/test/scala/unit/kafka/admin/DeleteTopicTest.scala 
fa8ce259a2832ab86f9dda8c1d409b2c42d43ae9 

Diff: https://reviews.apache.org/r/36670/diff/


Testing
---


Thanks,

Edward Ribeiro



Re: Review Request 36664: Patch for KAFKA-2353

2015-07-21 Thread Jiangjie Qin


 On July 21, 2015, 11:15 p.m., Gwen Shapira wrote:
  Thanks for looking into that. Exception handling was the most challenging 
  part of rewriting SocketServer, so I'm glad to see more eyes on this 
  implementation.
  
  I have a concern regarding the right way to handle unexpected exceptions.

Hi Gwen, thanks for the quick review. I replied to your comments below. Mind 
taking another look?


- Jiangjie


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36664/#review92496
---


On July 22, 2015, 5:02 a.m., Jiangjie Qin wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36664/
 ---
 
 (Updated July 22, 2015, 5:02 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2353
 https://issues.apache.org/jira/browse/KAFKA-2353
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Addressed Gwen's comments
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/network/SocketServer.scala 
 91319fa010b140cca632e5fa8050509bd2295fc9 
 
 Diff: https://reviews.apache.org/r/36664/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Jiangjie Qin
 




[jira] [Commented] (KAFKA-251) The ConsumerStats MBean's PartOwnerStats attribute is a string

2015-07-21 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14635231#comment-14635231
 ] 

Ismael Juma commented on KAFKA-251:
---

[~eribeiro], I am not a committer and I can do it. :) Ask on the mailing list to be 
added as a contributor for JIRA and Confluence (you can see many such messages 
in the archive).

 The ConsumerStats MBean's PartOwnerStats  attribute is a string
 ---

 Key: KAFKA-251
 URL: https://issues.apache.org/jira/browse/KAFKA-251
 Project: Kafka
  Issue Type: Bug
Reporter: Pierre-Yves Ritschard
 Attachments: 0001-Incorporate-Jun-Rao-s-comments-on-KAFKA-251.patch, 
 0001-Provide-a-patch-for-KAFKA-251.patch


 The fact that the PartOwnerStats is a string prevents monitoring systems from 
 graphing consumer lag. There should be one mbean per [ topic, partition, 
 groupid ] group.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2260) Allow specifying expected offset on produce

2015-07-21 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14635289#comment-14635289
 ] 

Ismael Juma commented on KAFKA-2260:


[~dasch] The KIP is: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-27+-+Conditional+Publish

 Allow specifying expected offset on produce
 ---

 Key: KAFKA-2260
 URL: https://issues.apache.org/jira/browse/KAFKA-2260
 Project: Kafka
  Issue Type: Improvement
Reporter: Ben Kirwin
Assignee: Ewen Cheslack-Postava
Priority: Minor
 Attachments: expected-offsets.patch


 I'd like to propose a change that adds a simple CAS-like mechanism to the 
 Kafka producer. This update has a small footprint, but enables a bunch of 
 interesting uses in stream processing or as a commit log for process state.
 h4. Proposed Change
 In short:
 - Allow the user to attach a specific offset to each message produced.
 - The server assigns offsets to messages in the usual way. However, if the 
 expected offset doesn't match the actual offset, the server should fail the 
 produce request instead of completing the write.
 This is a form of optimistic concurrency control, like the ubiquitous 
 check-and-set -- but instead of checking the current value of some state, it 
 checks the current offset of the log.
 h4. Motivation
 Much like check-and-set, this feature is only useful when there's very low 
 contention. Happily, when Kafka is used as a commit log or as a 
 stream-processing transport, it's common to have just one producer (or a 
 small number) for a given partition -- and in many of these cases, predicting 
 offsets turns out to be quite useful.
 - We get the same benefits as the 'idempotent producer' proposal: a producer 
 can retry a write indefinitely and be sure that at most one of those attempts 
 will succeed; and if two producers accidentally write to the end of the 
 partition at once, we can be certain that at least one of them will fail.
 - It's possible to 'bulk load' Kafka this way -- you can write a list of n 
 messages consecutively to a partition, even if the list is much larger than 
 the buffer size or the producer has to be restarted.
 - If a process is using Kafka as a commit log -- reading from a partition to 
 bootstrap, then writing any updates to that same partition -- it can be sure 
 that it's seen all of the messages in that partition at the moment it does 
 its first (successful) write.
 There's a bunch of other similar use-cases here, but they all have roughly 
 the same flavour.
 h4. Implementation
 The major advantage of this proposal over other suggested transaction / 
 idempotency mechanisms is its minimality: it gives the 'obvious' meaning to a 
 currently-unused field, adds no new APIs, and requires very little new code 
 or additional work from the server.
 - Produced messages already carry an offset field, which is currently ignored 
 by the server. This field could be used for the 'expected offset', with a 
 sigil value for the current behaviour. (-1 is a natural choice, since it's 
 already used to mean 'next available offset'.)
 - We'd need a new error and error code for a 'CAS failure'.
 - The server assigns offsets to produced messages in 
 {{ByteBufferMessageSet.validateMessagesAndAssignOffsets}}. After this 
 change, this method would assign offsets in the same way -- but if they 
 don't match the offset in the message, we'd return an error instead of 
 completing the write.
 - To avoid breaking existing clients, this behaviour would need to live 
 behind some config flag. (Possibly global, but probably more useful 
 per-topic?)
 I understand all this is unsolicited and possibly strange: happy to answer 
 questions, and if this seems interesting, I'd be glad to flesh this out into 
 a full KIP or patch. (And apologies if this is the wrong venue for this sort 
 of thing!)
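
 To make the proposed check concrete, a toy sketch of the validation step described above;
 PendingMessage, AppendResult and OFFSET_MISMATCH are stand-ins invented for this example,
 not the actual ByteBufferMessageSet code or a real error code:
 {code}
 import java.util.List;

 final class ConditionalAppendSketch {
     // Stand-in for a produced message whose offset field carries the expected offset,
     // or -1 to keep today's unconditional-append behaviour.
     static final class PendingMessage {
         final long expectedOffset;
         final byte[] payload;
         PendingMessage(long expectedOffset, byte[] payload) {
             this.expectedOffset = expectedOffset;
             this.payload = payload;
         }
     }

     enum AppendResult { OK, OFFSET_MISMATCH }   // OFFSET_MISMATCH models the proposed new error

     // Walk the batch assigning offsets the usual way, but fail the whole request on the
     // first expectation that does not match the offset being assigned.
     static AppendResult validateExpectedOffsets(List<PendingMessage> batch, long logEndOffset) {
         long next = logEndOffset;
         for (PendingMessage m : batch) {
             if (m.expectedOffset != -1L && m.expectedOffset != next) {
                 return AppendResult.OFFSET_MISMATCH;   // reject instead of completing the write
             }
             next++;                                    // this message would get offset (next - 1)
         }
         return AppendResult.OK;                        // caller appends and advances the log end offset
     }
 }
 {code}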



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Switch to GitHub pull requests for new contributions

2015-07-21 Thread Grant Henke
+1 non-binding

On Tue, Jul 21, 2015 at 11:19 AM, Neha Narkhede n...@confluent.io wrote:

 +1 (binding)

 Agree with Ismael. We may not want to rush to push the PR right away.
 Having said that, if it works well with say, 10 patches, I'd consider that
 enough to require the new JIRAs to submit patches using the PRs instead.

 Thanks,
 Neha

 On Tue, Jul 21, 2015 at 8:19 AM, Sriharsha Chintalapani 
 harsh...@fastmail.fm wrote:

  +1. I think phasing out is a good idea, but rather than x months we should
  move to github PRs for any new JIRAs that are not already in review
 board.
  For the JIRA’s that are in review board we can continue to use that until
   they are merged in.
 
  -Harsha
 
 
  On July 21, 2015 at 8:11:17 AM, Ashish Singh (asi...@cloudera.com)
 wrote:
 
  +1 non-binding.
 
  A suggestion: we should try to phase out the old system of reviews gradually,
  instead of forcing it overnight. Maybe a time-bound switch, where we say that
  in x months from now we will completely move to PRs?
 
  On Tuesday, July 21, 2015, Ismael Juma ism...@juma.me.uk wrote:
 
   Hi all,
  
   I would like to start a vote on switching to GitHub pull requests for
 new
   contributions. To be precise, the vote is on whether we should:
  
   * Update the documentation to tell users to use pull requests instead
 of
   patches and Review Board (i.e. merge KAFKA-2321 and KAFKA-2349)
   * Use pull requests for new contributions
  
   In a previous discussion[1], everyone that participated was in favour.
  It's
   also worth reading the Contributing Code Changes wiki page[2] (if you
   haven't already) to understand the flow.
  
   A number of pull requests have been merged in the last few weeks to
 test
   this flow and I believe it's working well enough. As usual, there is
  always
   room for improvement and I expect us to tweak things as time goes on.
  
   The main downside of using GitHub pull requests is that we don't have
  write
   access to https://github.com/apache/kafka. That means that we rely on
   commit hooks to close integrated pull requests (the merge script takes
  care
   of formatting the message so that this happens) and the PR creator or
   Apache Infra to close pull requests that are not integrated.
  
   Regarding existing contributions, I think it's up to the contributor to
   decide whether they want to resubmit it as a pull request or not. I
  expect
   that there will be a transition period where the old and new way will
   co-exist. But that can be discussed separately.
  
   The vote will run for 72 hours.
  
   +1 (non-binding) from me.
  
   Best,
   Ismael
  
   [1] http://search-hadoop.com/m/uyzND1N6CDH1DUc82
   [2]
  
 
 https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes
  
 
 
  --
  Ashish h
 



 --
 Thanks,
 Neha




-- 
Grant Henke
Solutions Consultant | Cloudera
ghe...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke


Re: [VOTE] Switch to GitHub pull requests for new contributions

2015-07-21 Thread Guozhang Wang
+1

On Tue, Jul 21, 2015 at 9:31 AM, Grant Henke ghe...@cloudera.com wrote:

 +1 non-binding

 On Tue, Jul 21, 2015 at 11:19 AM, Neha Narkhede n...@confluent.io wrote:

  +1 (binding)
 
  Agree with Ismael. We may not want to rush to push the PR right away.
  Having said that, if it works well with say, 10 patches, I'd consider
 that
  enough to require the new JIRAs to submit patches using the PRs instead.
 
  Thanks,
  Neha
 
  On Tue, Jul 21, 2015 at 8:19 AM, Sriharsha Chintalapani 
  harsh...@fastmail.fm wrote:
 
   +1. I think phasing out is a good idea, but rather than x months we
 should
   move to github PRs for any new JIRAs that are not already in review
  board.
   For the JIRA’s that are in review board we can continue to use that
 until
    they are merged in.
  
   -Harsha
  
  
   On July 21, 2015 at 8:11:17 AM, Ashish Singh (asi...@cloudera.com)
  wrote:
  
   +1 non-binding.
  
   A suggestion: we should try to phase out the old system of reviews gradually,
   instead of forcing it overnight. Maybe a time-bound switch, where we say that
   in x months from now we will completely move to PRs?
  
   On Tuesday, July 21, 2015, Ismael Juma ism...@juma.me.uk wrote:
  
Hi all,
   
I would like to start a vote on switching to GitHub pull requests for
  new
contributions. To be precise, the vote is on whether we should:
   
* Update the documentation to tell users to use pull requests instead
  of
patches and Review Board (i.e. merge KAFKA-2321 and KAFKA-2349)
* Use pull requests for new contributions
   
In a previous discussion[1], everyone that participated was in
 favour.
   It's
also worth reading the Contributing Code Changes wiki page[2] (if
 you
haven't already) to understand the flow.
   
A number of pull requests have been merged in the last few weeks to
  test
this flow and I believe it's working well enough. As usual, there is
   always
 room for improvement and I expect us to tweak things as time goes on.
   
The main downside of using GitHub pull requests is that we don't have
   write
access to https://github.com/apache/kafka. That means that we rely
 on
commit hooks to close integrated pull requests (the merge script
 takes
   care
of formatting the message so that this happens) and the PR creator or
Apache Infra to close pull requests that are not integrated.
   
Regarding existing contributions, I think it's up to the contributor
 to
decide whether they want to resubmit it as a pull request or not. I
   expect
that there will be a transition period where the old and new way will
co-exist. But that can be discussed separately.
   
The vote will run for 72 hours.
   
+1 (non-binding) from me.
   
Best,
Ismael
   
[1] http://search-hadoop.com/m/uyzND1N6CDH1DUc82
[2]
   
  
 
 https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes
   
  
  
   --
   Ashish h
  
 
 
 
  --
  Thanks,
  Neha
 



 --
 Grant Henke
 Solutions Consultant | Cloudera
 ghe...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke




-- 
-- Guozhang


Re: [DISCUSS] KIP-27 - Conditional Publish

2015-07-21 Thread Yasuhiro Matsuda
In KV-store usage, all instances are writers, aren't they? There is no
leader or master, thus there is no failover. The offset-based CAS ensures
an update is based on the latest value and doesn't care who is writing the
new value.

I think the idea of the offset-based CAS is great, and it works very well
with Event Sourcing. It may be a bit weak for ensuring a single writer,
though.
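
For concreteness, a hedged sketch of the read-modify-write loop implied here; the Log
interface and sendExpecting() are hypothetical stand-ins for the KIP-27 mechanism, not
an existing producer API:
{code}
import java.util.function.UnaryOperator;

final class CasUpdateLoopSketch {
    // Hypothetical view of a partition used as a commit log for a KV store.
    interface Log {
        long endOffset();                              // latest offset observed for the partition
        String readUpTo(long offset, String key);      // state for `key` rebuilt from offsets below `offset`
        boolean sendExpecting(long expectedOffset, String key, String value); // false on offset mismatch
    }

    // Optimistic update: any instance may write; a mismatch just means someone else
    // got there first, so re-read and retry on top of their update.
    static void update(Log log, String key, UnaryOperator<String> fn) {
        while (true) {
            long offset = log.endOffset();
            String current = log.readUpTo(offset, key);
            String next = fn.apply(current);
            if (log.sendExpecting(offset, key, next)) {
                return;                                // our write landed exactly at `offset`
            }
        }
    }
}
{code}
A mismatch simply means another writer appended first, so the loop re-reads the latest
state and retries on top of it, which is the property described above.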


On Tue, Jul 21, 2015 at 8:45 AM, Jun Rao j...@confluent.io wrote:

 For 1, yes, when there is a transient leader change, it's guaranteed that a
 prefix of the messages in a request will be committed. However, it seems
 that the client needs to know what subset of messages are committed in
 order to resume the sending. Then the question is how.

 As Flavio indicated, for the use cases that you listed, it would be useful
 to figure out the exact logic by using this feature. For example, in the
 partition K/V store example, when we fail over to a new writer to the
 commit log, the zombie writer can publish new messages to the log after the
 new writer takes over, but before it publishes any message. We probably
 need to outline how this case can be handled properly.

 Thanks,

 Jun

 On Mon, Jul 20, 2015 at 12:05 PM, Ben Kirwin b...@kirw.in wrote:

  Hi Jun,
 
  Thanks for the close reading! Responses inline.
 
   Thanks for the write-up. The single producer use case you mentioned
 makes
   sense. It would be useful to include that in the KIP wiki.
 
  Great -- I'll make sure that the wiki is clear about this.
 
   1. What happens when the leader of the partition changes in the middle
  of a
   produce request? In this case, the producer client is not sure whether
  the
   request succeeds or not. If there is only a single message in the
  request,
   the producer can just resend the request. If it sees an OffsetMismatch
   error, it knows that the previous send actually succeeded and can
 proceed
   with the next write. This is nice since it not only allows the producer
  to
   proceed during transient failures in the broker, it also avoids
  duplicates
   during producer resend. One caveat is when there are multiple messages
 in
   the same partition in a produce request. The issue is that in our
 current
   replication protocol, it's possible for some, but not all messages in
 the
   request to be committed. This makes resend a bit harder to deal with
  since
   on receiving an OffsetMismatch error, it's not clear which messages
 have
   been committed. One possibility is to expect that compression is
 enabled,
   in which case multiple messages are compressed into a single message. I
  was
   thinking that another possibility is for the broker to return the
 current
   high watermark when sending an OffsetMismatch error. Based on this
 info,
   the producer can resend the subset of messages that have not been
   committed. However, this may not work in a compacted topic since there
  can
   be holes in the offset.
 
  This is an excellent question. It's my understanding that at least a
  *prefix* of messages will be committed (right?) -- which seems to be
  enough for many cases. I'll try and come up with a more concrete
  answer here.
 
   2. Is this feature only intended to be used with ack = all? The client
   doesn't get the offset with ack = 0. With ack = 1, it's possible for a
   previously acked message to be lost during leader transition, which
 will
   make the client logic more complicated.
 
  It's true that acks = 0 doesn't seem to be particularly useful; in all
  the cases I've come across, the client eventually wants to know about
  the mismatch error. However, it seems like there are some cases where
  acks = 1 would be fine -- eg. in a bulk load of a fixed dataset,
  losing messages during a leader transition just means you need to
  rewind / restart the load, which is not especially catastrophic. For
  many other interesting cases, acks = all is probably preferable.
 
   3. How does the producer client know the offset to send the first
  message?
   Do we need to expose an API in producer to get the current high
  watermark?
 
  You're right, it might be irritating to have to go through the
  consumer API just for this. There are some cases where the offsets are
  already available -- like the commit-log-for-KV-store example -- but
  in general, being able to get the offsets from the producer interface
  does sound convenient.
 
   We plan to have a KIP discussion meeting tomorrow at 11am PST. Perhaps
  you
   can describe this KIP a bit then?
 
  Sure, happy to join.
 
   Thanks,
  
   Jun
  
  
  
   On Sat, Jul 18, 2015 at 10:37 AM, Ben Kirwin b...@kirw.in wrote:
  
   Just wanted to flag a little discussion that happened on the ticket:
  
  
 
 https://issues.apache.org/jira/browse/KAFKA-2260?focusedCommentId=14632259page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14632259
  
   In particular, Yasuhiro Matsuda proposed an interesting variant on
   this that performs 

Re: [VOTE] Switch to GitHub pull requests for new contributions

2015-07-21 Thread Gwen Shapira
+1 (binding) on using PRs.

It sounds like we need additional discussion on how the transition
will happen. Maybe move that to a separate thread, to keep the vote
easy to follow.

On Tue, Jul 21, 2015 at 4:28 AM, Ismael Juma ism...@juma.me.uk wrote:
 Hi all,

 I would like to start a vote on switching to GitHub pull requests for new
 contributions. To be precise, the vote is on whether we should:

 * Update the documentation to tell users to use pull requests instead of
 patches and Review Board (i.e. merge KAFKA-2321 and KAFKA-2349)
 * Use pull requests for new contributions

 In a previous discussion[1], everyone that participated was in favour. It's
 also worth reading the Contributing Code Changes wiki page[2] (if you
 haven't already) to understand the flow.

 A number of pull requests have been merged in the last few weeks to test
 this flow and I believe it's working well enough. As usual, there is always
 room for improvement and I expect us to tweak things as time goes on.

 The main downside of using GitHub pull requests is that we don't have write
 access to https://github.com/apache/kafka. That means that we rely on
 commit hooks to close integrated pull requests (the merge script takes care
 of formatting the message so that this happens) and the PR creator or
 Apache Infra to close pull requests that are not integrated.

 Regarding existing contributions, I think it's up to the contributor to
 decide whether they want to resubmit it as a pull request or not. I expect
 that there will be a transition period where the old and new way will
 co-exist. But that can be discussed separately.

 The vote will run for 72 hours.

 +1 (non-binding) from me.

 Best,
 Ismael

 [1] http://search-hadoop.com/m/uyzND1N6CDH1DUc82
 [2]
 https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes


Re: Kafka High level consumer rebalancing

2015-07-21 Thread Pranay Agarwal
Any ideas?

On Mon, Jul 20, 2015 at 2:34 PM, Pranay Agarwal agarwalpran...@gmail.com
wrote:

 Hi all,

  Is there any way I can force ZooKeeper/Kafka to rebalance new consumers
  only for a subset of the total number of partitions? I have a situation where,
  out of 120 partitions, 60 have already been consumed, but ZooKeeper also
  assigns these empty/inactive partitions during rebalancing. I want my
  resources to be used only for the partitions that still have some messages
  left to read.

 Thanks
 -Pranay



[jira] [Updated] (KAFKA-2342) KafkaConsumer rebalance with in-flight fetch can cause invalid position

2015-07-21 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-2342:
---
Summary: KafkaConsumer rebalance with in-flight fetch can cause invalid 
position  (was: transient unit test failure in 
testConsumptionWithBrokerFailures)

 KafkaConsumer rebalance with in-flight fetch can cause invalid position
 ---

 Key: KAFKA-2342
 URL: https://issues.apache.org/jira/browse/KAFKA-2342
 Project: Kafka
  Issue Type: Sub-task
  Components: core
Affects Versions: 0.8.3
Reporter: Jun Rao
Assignee: Jason Gustafson

 If a rebalance occurs with an in-flight fetch, the new KafkaConsumer can end 
 up updating the fetch position of a partition to an offset which is no longer 
 valid. The consequence is that we may end up either returning messages with an 
 unexpected position to the user, or failing to give back the right offset in 
 position(). 
 Additionally, this bug causes transient test failures in 
 ConsumerBounceTest.testConsumptionWithBrokerFailures with the following 
 exception:
 kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures FAILED
 java.lang.NullPointerException
 at 
 org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:949)
 at 
 kafka.api.ConsumerBounceTest.consumeWithBrokerFailures(ConsumerBounceTest.scala:86)
 at 
 kafka.api.ConsumerBounceTest.testConsumptionWithBrokerFailures(ConsumerBounceTest.scala:61)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2342) transient unit test failure in testConsumptionWithBrokerFailures

2015-07-21 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-2342:
---
Description: 
If a rebalance occurs with an in-flight fetch, the new KafkaConsumer can end up 
updating the fetch position of a partition to an offset which is no longer 
valid. The consequence is that we may end up either returning messages with an 
unexpected position to the user, or failing to give back the right offset in 
position(). 

Additionally, this bug causes transient test failures in 
ConsumerBounceTest.testConsumptionWithBrokerFailures with the following 
exception:

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures FAILED
java.lang.NullPointerException
at 
org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:949)
at 
kafka.api.ConsumerBounceTest.consumeWithBrokerFailures(ConsumerBounceTest.scala:86)
at 
kafka.api.ConsumerBounceTest.testConsumptionWithBrokerFailures(ConsumerBounceTest.scala:61)


  was:
Saw the following transient unit test failure.

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures FAILED
java.lang.NullPointerException
at 
org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:949)
at 
kafka.api.ConsumerBounceTest.consumeWithBrokerFailures(ConsumerBounceTest.scala:86)
at 
kafka.api.ConsumerBounceTest.testConsumptionWithBrokerFailures(ConsumerBounceTest.scala:61)



 transient unit test failure in testConsumptionWithBrokerFailures
 

 Key: KAFKA-2342
 URL: https://issues.apache.org/jira/browse/KAFKA-2342
 Project: Kafka
  Issue Type: Sub-task
  Components: core
Affects Versions: 0.8.3
Reporter: Jun Rao
Assignee: Jason Gustafson

 If a rebalance occurs with an in-flight fetch, the new KafkaConsumer can end 
 up updating the fetch position of a partition to an offset which is no longer 
 valid. The consequence is that we may end up either returning messages with an 
 unexpected position to the user, or failing to give back the right offset in 
 position(). 
 Additionally, this bug causes transient test failures in 
 ConsumerBounceTest.testConsumptionWithBrokerFailures with the following 
 exception:
 kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures FAILED
 java.lang.NullPointerException
 at 
 org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:949)
 at 
 kafka.api.ConsumerBounceTest.consumeWithBrokerFailures(ConsumerBounceTest.scala:86)
 at 
 kafka.api.ConsumerBounceTest.testConsumptionWithBrokerFailures(ConsumerBounceTest.scala:61)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (KAFKA-2350) Add KafkaConsumer pause capability

2015-07-21 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14636074#comment-14636074
 ] 

Jiangjie Qin edited comment on KAFKA-2350 at 7/22/15 12:44 AM:
---

I am thinking that currently we keep two collections of topic partitions in 
KafkaConsumer, one for user subscription, the other for coordinator assignment. 
Can we do something to the existing code to let subscribe/unsubscribe support 
pause/unpause as well?

Maybe we can have one subscription set and one assigned partition validation 
set.
{code}
void subscribe(String topic)
void unsubscribe(String topic)
{code}
will affect both the assigned-partition validation set and the subscription set. If 
Kafka-based partition assignment is not used, the assigned-partition validation set 
will be null.

{code}
void subscribe(TopicPartition... partitions)
void unsubscribe(TopicPartition... partitions)
{code}
will only change the subscription set. Calling them won't trigger a rebalance. 
But the topics subscribed to have to be in the assigned partition set if it is null.

Every time we call poll, we only use the subscription set.

In this way, user can simply use
{code}
void subscribe(TopicPartitions... partitions)
void unsubscribe(TopicPartitions... partitons)
{code}
to do the pause and unpause.

Some other benefits might be:
1. We don't add two more interfaces to the already somewhat complicated API.
2. We get validation for manual subscription.


was (Author: becket_qin):
I am thinking that currently we keep two collections of topic partitions in 
KafkaConsumer, one for user subscription, the other for coordinator assignment. 
Can we do something to the existing code to let subscribe/unsubscribe support 
pause/unpause as well?

Maybe we can have one subscription set and one assigned partition validation 
set.
{code}
void subscribe(String topic)
void unsubscribe(String topic)
{code}
will affect both assigned partition set and subscription set. If Kafka based 
partition assignment is not used, assigned partition set will be null.

{code}
void subscribe(TopicPartition... partitions)
void unsubscribe(TopicPartition... partitions)
{code}
will only change the subscription set. Calling them won't trigger rebalance. 
But the topics subscribed to has to be in assigned partition set if it is null.

In this way, user can simply use
{code}
void subscribe(TopicPartitions... partitions)
void unsubscribe(TopicPartitions... partitons)
{code}
to do the pause and unpause.

Some other benefits might be:
1. We don't add two more interface to the already somewhat complicated API.
2. We get validation for manual subscription.

 Add KafkaConsumer pause capability
 --

 Key: KAFKA-2350
 URL: https://issues.apache.org/jira/browse/KAFKA-2350
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson
Assignee: Jason Gustafson

 There are some use cases in stream processing where it is helpful to be able 
 to pause consumption of a topic. For example, when joining two topics, you 
 may need to delay processing of one topic while you wait for the consumer of 
 the other topic to catch up. The new consumer currently doesn't provide a 
 nice way to do this. If you skip poll() or if you unsubscribe, then a 
 rebalance will be triggered and your partitions will be reassigned.
 One way to achieve this would be to add two new methods to KafkaConsumer:
 {code}
 void pause(String... topics);
 void unpause(String... topics);
 {code}
 When a topic is paused, a call to KafkaConsumer.poll will not initiate any 
 new fetches for that topic. After it is unpaused, fetches will begin again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2353) SocketServer.Processor should catch exception and close the socket properly in configureNewConnections.

2015-07-21 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14636235#comment-14636235
 ] 

Jiangjie Qin commented on KAFKA-2353:
-

Updated reviewboard https://reviews.apache.org/r/36664/diff/
 against branch origin/trunk

 SocketServer.Processor should catch exception and close the socket properly 
 in configureNewConnections.
 ---

 Key: KAFKA-2353
 URL: https://issues.apache.org/jira/browse/KAFKA-2353
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
 Attachments: KAFKA-2353.patch, KAFKA-2353_2015-07-21_22:02:24.patch


 We see an increasing number of sockets in CLOSE_WAIT status in our production 
 environment in the last couple of days. From the thread dump it seems one of 
 the Processor threads has died but the acceptor was still putting many new 
 connections into its new connection queue.
 The dead Processor thread was caused by not catching all the exceptions in 
 the Processor thread. For example, in our case it seems to be an exception 
 thrown in configureNewConnections().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 36664: Patch for KAFKA-2353

2015-07-21 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36664/
---

(Updated July 22, 2015, 5:02 a.m.)


Review request for kafka.


Bugs: KAFKA-2353
https://issues.apache.org/jira/browse/KAFKA-2353


Repository: kafka


Description (updated)
---

Addressed Gwen's comments


Diffs (updated)
-

  core/src/main/scala/kafka/network/SocketServer.scala 
91319fa010b140cca632e5fa8050509bd2295fc9 

Diff: https://reviews.apache.org/r/36664/diff/


Testing
---


Thanks,

Jiangjie Qin



[jira] [Updated] (KAFKA-2355) Add an unit test to validate the deletion of a partition marked as deleted

2015-07-21 Thread Edward Ribeiro (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Ribeiro updated KAFKA-2355:
--
Affects Version/s: 0.8.2.1
   Status: Patch Available  (was: Open)

 Add an unit test to validate the deletion of a partition marked as deleted
 --

 Key: KAFKA-2355
 URL: https://issues.apache.org/jira/browse/KAFKA-2355
 Project: Kafka
  Issue Type: Sub-task
Affects Versions: 0.8.2.1
Reporter: Edward Ribeiro
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2355.patch


 Trying to delete a partition marked as deleted throws 
 {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test to 
 validate this behaviour. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 36664: Patch for KAFKA-2353

2015-07-21 Thread Jiangjie Qin


 On July 21, 2015, 11:15 p.m., Gwen Shapira wrote:
  core/src/main/scala/kafka/network/SocketServer.scala, line 465
  https://reviews.apache.org/r/36664/diff/1/?file=1018238#file1018238line465
 
  Turns out that catching Throwable is a really bad idea: 
  https://www.sumologic.com/2014/05/05/why-you-should-never-catch-throwable-in-scala/

Ah... Didn't know that before. I explicitly listed the exceptions.


 On July 21, 2015, 11:15 p.m., Gwen Shapira wrote:
  core/src/main/scala/kafka/network/SocketServer.scala, line 400
  https://reviews.apache.org/r/36664/diff/1/?file=1018238#file1018238line400
 
  So in case of an unexpected exception, we log an error and keep running?
  
  Isn't it better to kill the processor, since we don't know what the 
  state of the system is? If the acceptor keeps placing messages in the queue 
  for a dead processor, isn't it a separate issue?

I'm not quite sure about this part. I am not very experienced with error handling 
in such cases, so please correct me if I missed something.
Here is what I thought.

The way it currently works is that the acceptor will:
1. accept a new connection request and create a new socket channel
2. choose a processor and put the socket channel into the processor's new 
connection queue

The processor will just take the socket channels from the queue and register them 
with the selector.
If the processor gets an uncaught exception while running, there are several 
possibilities.
Case 1: The exception came from one socket channel. 
Case 2: The exception was associated with a bad request. 
In case 1, ideally we should just disconnect that socket channel without 
affecting other socket channels.
In case 2, I think we should log the error and skip the message - assuming the 
client will retry sending data if no response is received within a given period 
of time.

I am not sure if letting the processor exit is a good idea, because that would let 
a badly behaving client take down the entire cluster - it might kill the processors 
one by one. Compared with that, I'm leaning towards keeping the processor running 
and serving other normal TCP connections if possible, but logging the error so a 
monitoring system can detect it and see if human intervention is needed.

Also, I don't know what to do here to prevent the thread from exiting without 
catching all the throwables.
According to this blog post, 
http://www.tzavellas.com/techblog/2010/09/20/catching-throwable-in-scala/
I guess I can rethrow all the ControlThrowables but intercept the rest?
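
For illustration, a rough sketch of the "close only the offending connection, keep the 
processor alive" flow being discussed; it is written in Java with placeholder types 
(the real SocketServer is Scala), so none of these names are actual Kafka code:
{code}
import java.io.IOException;
import java.nio.channels.SocketChannel;
import java.util.concurrent.BlockingQueue;

// Placeholder sketch, not kafka.network.SocketServer: it only illustrates closing the
// offending connection and keeping the processor thread alive on unexpected errors.
abstract class ProcessorSketch implements Runnable {
    private final BlockingQueue<SocketChannel> newConnections;
    private volatile boolean running = true;

    ProcessorSketch(BlockingQueue<SocketChannel> newConnections) {
        this.newConnections = newConnections;
    }

    @Override
    public void run() {
        while (running) {
            SocketChannel channel = null;
            try {
                channel = newConnections.take();    // handed over by the acceptor
                register(channel);                  // may fail for just this one connection
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // shutdown signal: let the loop exit
                running = false;
            } catch (IOException | RuntimeException e) {
                // Case 1/2 above: drop the bad connection or request, log loudly,
                // and keep serving the other connections instead of dying silently.
                if (channel != null) {
                    closeQuietly(channel);
                }
                log("Error while configuring new connection; closed it and moved on", e);
            }
        }
    }

    abstract void register(SocketChannel channel) throws IOException;
    abstract void closeQuietly(SocketChannel channel);
    abstract void log(String message, Throwable cause);
}
{code}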


- Jiangjie


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36664/#review92496
---


On July 21, 2015, 11:03 p.m., Jiangjie Qin wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36664/
 ---
 
 (Updated July 21, 2015, 11:03 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2353
 https://issues.apache.org/jira/browse/KAFKA-2353
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Catch exception in kafka.network.Processor to avoid socket leak and exiting 
 unexpectedly.
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/network/SocketServer.scala 
 91319fa010b140cca632e5fa8050509bd2295fc9 
 
 Diff: https://reviews.apache.org/r/36664/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Jiangjie Qin
 




[jira] [Commented] (KAFKA-2350) Add KafkaConsumer pause capability

2015-07-21 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14636319#comment-14636319
 ] 

Jiangjie Qin commented on KAFKA-2350:
-

[~jkreps][~hachikuji], I actually was not proposing to reuse
{code}
void subscribe(String topic)
void unsubscribe(String topic)
{code}
So I was thinking we follow the current convention, which is:
1. If you are subscribing/unsubscribing to a partition explicitly, you are on 
your own.
2. If you are subscribing/unsubscribing to a topic, we use the consumer coordinator 
for partition assignment.

I assume the only use case we are trying to address is when the user is using 
the consumer coordinator and wants to temporarily stop consuming from a topic 
without triggering a consumer rebalance.

If so, to unsubscribe from a topic we can do something like the following:
{code}
...
for(TopicPartition tp : consumer.assignedTopicPartitions.get(topic)) {
unsubscribe(tp);
}
{code}
To resubscribe, we can do the same but just call subscribe(tp) instead.

This approach might require exposing an assignedTopicPartitions() interface, 
but I can see that being useful in quite a few use cases.

 Add KafkaConsumer pause capability
 --

 Key: KAFKA-2350
 URL: https://issues.apache.org/jira/browse/KAFKA-2350
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson
Assignee: Jason Gustafson

 There are some use cases in stream processing where it is helpful to be able 
 to pause consumption of a topic. For example, when joining two topics, you 
 may need to delay processing of one topic while you wait for the consumer of 
 the other topic to catch up. The new consumer currently doesn't provide a 
 nice way to do this. If you skip poll() or if you unsubscribe, then a 
 rebalance will be triggered and your partitions will be reassigned.
 One way to achieve this would be to add two new methods to KafkaConsumer:
 {code}
 void pause(String... topics);
 void unpause(String... topics);
 {code}
 When a topic is paused, a call to KafkaConsumer.poll will not initiate any 
 new fetches for that topic. After it is unpaused, fetches will begin again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2355) Add an unit test to validate the deletion of a partition marked as deleted

2015-07-21 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14636119#comment-14636119
 ] 

Edward Ribeiro commented on KAFKA-2355:
---

Hi [~singhashish] and [~gwenshap]. I hope you don't get annoyed, but I saw that 
KAFKA-2345 was lacking a unit test to validate the new exception it introduced. 
As that issue was already closed, I created a new ticket and added the unit test. 
Please let me know what you think. Cheers! :)

 Add an unit test to validate the deletion of a partition marked as deleted
 --

 Key: KAFKA-2355
 URL: https://issues.apache.org/jira/browse/KAFKA-2355
 Project: Kafka
  Issue Type: Sub-task
Reporter: Edward Ribeiro
Priority: Minor
 Fix For: 0.8.3

 Attachments: KAFKA-2355.patch


 Trying to delete a partition marked as deleted throws 
 {{TopicAlreadyMarkedForDeletionException}}, so this ticket adds a unit test to 
 validate this behaviour. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 36578: Patch for KAFKA-2338

2015-07-21 Thread Ewen Cheslack-Postava

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36578/#review92526
---



core/src/main/scala/kafka/server/AbstractFetcherThread.scala (line 166)
https://reviews.apache.org/r/36578/#comment146731

Well, 2 things. First, I only took a quick look after I couldn't reproduce 
the issue, so I could have missed an error code that can be returned and which 
*would* allow us to issue a warning here.

Second, the other warning is still useful. If users start to see stalls in 
their consumer, they may start looking at consumer logs, but eventually it will 
make sense to look at the brokers as well. And for the replication failure, 
they're likely to look at all the brokers for the given partition. Providing 
help *anywhere* is much better than the current situation of just silently 
stalling.


- Ewen Cheslack-Postava


On July 21, 2015, 4:21 p.m., Edward Ribeiro wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36578/
 ---
 
 (Updated July 21, 2015, 4:21 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2338
 https://issues.apache.org/jira/browse/KAFKA-2338
 
 
 Repository: kafka
 
 
 Description
 ---
 
 KAFKA-2338 Warn users if they change max.message.bytes that they also need to 
 update broker and consumer settings
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/admin/TopicCommand.scala 
 4e28bf1c08414e8e96e6ca639b927d51bfeb4616 
 
 Diff: https://reviews.apache.org/r/36578/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Edward Ribeiro
 




Re: Kafka High level consumer rebalancing

2015-07-21 Thread Pranay Agarwal
Thanks Mayuresh,

Can I at least control the rebalancing of consumers? Currently consumers die
after a specific partition has no more messages, and a rebalance of consumers
is triggered, which causes more consumers to die when they get assigned to an
empty partition (because ZooKeeper treats empty partitions no differently).

Is there a way I can ensure that there is no rebalancing of consumers
when some consumer dies?

Thanks
-Pranay

On Tue, Jul 21, 2015 at 11:25 AM, Mayuresh Gharat 
gharatmayures...@gmail.com wrote:

 Not sure if you can do that with High level consumer.

 Thanks,

 Mayuresh

 On Tue, Jul 21, 2015 at 10:53 AM, Pranay Agarwal agarwalpran...@gmail.com
 
 wrote:

  Any ideas?
 
  On Mon, Jul 20, 2015 at 2:34 PM, Pranay Agarwal 
 agarwalpran...@gmail.com
  wrote:
 
   Hi all,
  
   Is there any way I can force ZooKeeper/Kafka to rebalance new consumers
   only for a subset of the total number of partitions? I have a situation where,
   out of 120 partitions, 60 have already been consumed, but ZooKeeper also
   assigns these empty/inactive partitions during rebalancing. I want my
   resources to be used only for the partitions that still have some messages
   left to read.
  
   Thanks
   -Pranay
  
 



 --
 -Regards,
 Mayuresh R. Gharat
 (862) 250-7125



[jira] [Resolved] (KAFKA-863) System Test - update 0.7 version of kafka-run-class.sh for Migration Tool test cases

2015-07-21 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-863.
-
   Resolution: Won't Fix
 Assignee: Ewen Cheslack-Postava
Fix Version/s: 0.8.3

Actually, this looks like it was resolved in KAFKA-1645 since the migration 
tool system tests were removed. In any case, those tests have been deprecated 
by KIP-25 and will be removed soon.

 System Test - update 0.7 version of kafka-run-class.sh for Migration Tool 
 test cases
 

 Key: KAFKA-863
 URL: https://issues.apache.org/jira/browse/KAFKA-863
 Project: Kafka
  Issue Type: Task
Reporter: John Fung
Assignee: Ewen Cheslack-Postava
  Labels: kafka-0.8, replication-testing
 Fix For: 0.8.3

 Attachments: kafka-863-v1.patch


 The 0.7 version is located at: 
 system_test/migration_tool_testsuite/0.7/bin/kafka-run-class.sh



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Submitting a patch (Jira errors)

2015-07-21 Thread Mayuresh Gharat
Hi,

I had to clean up existing kafka repo on my linux box and start with a
fresh one.

I followed the instructions here :

https://cwiki.apache.org/confluence/display/KAFKA/Patch+submission+and+review

I am trying to upload a patch and I am getting these errors :

Configuring reviewboard url to https://reviews.apache.org
Updating your remote branches to pull the latest changes
Verifying JIRA connection configurations
/usr/lib/python2.6/site-packages/requests-2.7.0-py2.6.egg/requests/packages/urllib3/util/ssl_.py:90:
InsecurePlatformWarning: A true SSLContext object is not available. This
prevents urllib3 from configuring SSL appropriately and may cause certain
SSL connections to fail. For more information, see
https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning
.
  InsecurePlatformWarning
/usr/lib/python2.6/site-packages/requests-2.7.0-py2.6.egg/requests/packages/urllib3/connection.py:251:
SecurityWarning: Certificate has no `subjectAltName`, falling back to check
for a `commonName` for now. This feature is being removed by major browsers
and deprecated by RFC 2818. (See
https://github.com/shazow/urllib3/issues/497 for details.)
  SecurityWarning
Failed to login to the JIRA instance class 'jira.exceptions.JIRAError'
HTTP 403: CAPTCHA_CHALLENGE; login-url=
https://issues.apache.org/jira/login.jsp;
https://issues.apache.org/jira/rest/auth/1/session



Any help here will be appreciated.

-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


[jira] [Created] (KAFKA-2353) SocketServer.Processor should catch exception and close the socket properly in configureNewConnections.

2015-07-21 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-2353:
---

 Summary: SocketServer.Processor should catch exception and close 
the socket properly in configureNewConnections.
 Key: KAFKA-2353
 URL: https://issues.apache.org/jira/browse/KAFKA-2353
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin


We see an increasing number of sockets in CLOSE_WAIT status in our production 
environment in the last couple of days. From the thread dump it seems one of the 
Processor threads has died but the acceptor was still putting many new 
connections into its new connection queue.

The dead Processor thread was caused by not catching all the exceptions in the 
Processor thread. For example, in our case it seems to be an exception thrown 
in configureNewConnections().





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Submitting a patch (Jira errors)

2015-07-21 Thread Aditya Auradkar
Did you setup your jira.ini?

On Tue, Jul 21, 2015 at 11:52 AM, Mayuresh Gharat 
gharatmayures...@gmail.com wrote:

 Hi,

 I had to clean up existing kafka repo on my linux box and start with a
 fresh one.

 I followed the instructions here :


 https://cwiki.apache.org/confluence/display/KAFKA/Patch+submission+and+review

 I am trying to upload a patch and I am getting these errors :

 Configuring reviewboard url to https://reviews.apache.org
 Updating your remote branches to pull the latest changes
 Verifying JIRA connection configurations

 /usr/lib/python2.6/site-packages/requests-2.7.0-py2.6.egg/requests/packages/urllib3/util/ssl_.py:90:
 InsecurePlatformWarning: A true SSLContext object is not available. This
 prevents urllib3 from configuring SSL appropriately and may cause certain
 SSL connections to fail. For more information, see

 https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning
 .
   InsecurePlatformWarning

 /usr/lib/python2.6/site-packages/requests-2.7.0-py2.6.egg/requests/packages/urllib3/connection.py:251:
 SecurityWarning: Certificate has no `subjectAltName`, falling back to check
 for a `commonName` for now. This feature is being removed by major browsers
 and deprecated by RFC 2818. (See
 https://github.com/shazow/urllib3/issues/497 for details.)
   SecurityWarning
 Failed to login to the JIRA instance class 'jira.exceptions.JIRAError'
 HTTP 403: CAPTCHA_CHALLENGE; login-url=
 https://issues.apache.org/jira/login.jsp;
 https://issues.apache.org/jira/rest/auth/1/session



 Any help here will be appreciated.

 --
 -Regards,
 Mayuresh R. Gharat
 (862) 250-7125



Re: Submitting a patch (Jira errors)

2015-07-21 Thread Mayuresh Gharat
Yes.

Thanks,

Mayuresh

On Tue, Jul 21, 2015 at 12:27 PM, Aditya Auradkar 
aaurad...@linkedin.com.invalid wrote:

 Did you setup your jira.ini?

 On Tue, Jul 21, 2015 at 11:52 AM, Mayuresh Gharat 
 gharatmayures...@gmail.com wrote:

  Hi,
 
  I had to clean up existing kafka repo on my linux box and start with a
  fresh one.
 
  I followed the instructions here :
 
 
 
 https://cwiki.apache.org/confluence/display/KAFKA/Patch+submission+and+review
 
  I am trying to upload a patch and I am getting these errors :
 
  Configuring reviewboard url to https://reviews.apache.org
  Updating your remote branches to pull the latest changes
  Verifying JIRA connection configurations
 
 
 /usr/lib/python2.6/site-packages/requests-2.7.0-py2.6.egg/requests/packages/urllib3/util/ssl_.py:90:
  InsecurePlatformWarning: A true SSLContext object is not available. This
  prevents urllib3 from configuring SSL appropriately and may cause certain
  SSL connections to fail. For more information, see
 
 
 https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning
  .
InsecurePlatformWarning
 
 
 /usr/lib/python2.6/site-packages/requests-2.7.0-py2.6.egg/requests/packages/urllib3/connection.py:251:
  SecurityWarning: Certificate has no `subjectAltName`, falling back to
 check
  for a `commonName` for now. This feature is being removed by major
 browsers
  and deprecated by RFC 2818. (See
  https://github.com/shazow/urllib3/issues/497 for details.)
SecurityWarning
  Failed to login to the JIRA instance class 'jira.exceptions.JIRAError'
  HTTP 403: CAPTCHA_CHALLENGE; login-url=
  https://issues.apache.org/jira/login.jsp;
  https://issues.apache.org/jira/rest/auth/1/session
 
 
 
  Any help here will be appreciated.
 
  --
  -Regards,
  Mayuresh R. Gharat
  (862) 250-7125
 




-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


Re: Kafka High level consumer rebalancing

2015-07-21 Thread Mayuresh Gharat
Not sure if you can do that with High level consumer.

Thanks,

Mayuresh

On Tue, Jul 21, 2015 at 10:53 AM, Pranay Agarwal agarwalpran...@gmail.com
wrote:

 Any ideas?

 On Mon, Jul 20, 2015 at 2:34 PM, Pranay Agarwal agarwalpran...@gmail.com
 wrote:

  Hi all,
 
  Is there any way I can force ZooKeeper/Kafka to rebalance new consumers
  only for a subset of the total number of partitions? I have a situation where,
  out of 120 partitions, 60 have already been consumed, but ZooKeeper also
  assigns these empty/inactive partitions during rebalancing. I want my
  resources to be used only for the partitions that still have some messages
  left to read.
 
  Thanks
  -Pranay
 




-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


[jira] [Updated] (KAFKA-2299) kafka-patch-review tool does not correctly capture testing done

2015-07-21 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-2299:
--
Resolution: Won't Fix
Status: Resolved  (was: Patch Available)

Moving to Github PRs, so this may not be useful anymore.

 kafka-patch-review tool does not correctly capture testing done
 ---

 Key: KAFKA-2299
 URL: https://issues.apache.org/jira/browse/KAFKA-2299
 Project: Kafka
  Issue Type: Bug
Reporter: Ashish K Singh
Assignee: Ashish K Singh
 Attachments: KAFKA-2299.patch


 kafka-patch-review tool does not correctly capture testing done when 
 specified with -t or --testing-done.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (KAFKA-2299) kafka-patch-review tool does not correctly capture testing done

2015-07-21 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh closed KAFKA-2299.
-

 kafka-patch-review tool does not correctly capture testing done
 ---

 Key: KAFKA-2299
 URL: https://issues.apache.org/jira/browse/KAFKA-2299
 Project: Kafka
  Issue Type: Bug
Reporter: Ashish K Singh
Assignee: Ashish K Singh
 Attachments: KAFKA-2299.patch


 kafka-patch-review tool does not correctly capture testing done when 
 specified with -t or --testing-done.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Switch to GitHub pull requests for new contributions

2015-07-21 Thread Parth Brahmbhatt
+1 (non-binding)

Thanks
Parth

On 7/21/15, 10:24 AM, Gwen Shapira gshap...@cloudera.com wrote:

+1 (binding) on using PRs.

It sounds like we need additional discussion on how the transition
will happen. Maybe move that to a separate thread, to keep the vote
easy to follow.

On Tue, Jul 21, 2015 at 4:28 AM, Ismael Juma ism...@juma.me.uk wrote:
 Hi all,

 I would like to start a vote on switching to GitHub pull requests for
new
 contributions. To be precise, the vote is on whether we should:

 * Update the documentation to tell users to use pull requests instead of
 patches and Review Board (i.e. merge KAFKA-2321 and KAFKA-2349)
 * Use pull requests for new contributions

 In a previous discussion[1], everyone that participated was in favour.
It's
 also worth reading the Contributing Code Changes wiki page[2] (if you
 haven't already) to understand the flow.

 A number of pull requests have been merged in the last few weeks to test
 this flow and I believe it's working well enough. As usual, there is
always
 room for improvement and I expect us to tweak things as time goes on.

 The main downside of using GitHub pull requests is that we don't have
write
 access to https://github.com/apache/kafka. That means that we rely on
 commit hooks to close integrated pull requests (the merge script takes
care
 of formatting the message so that this happens) and the PR creator or
 Apache Infra to close pull requests that are not integrated.

 Regarding existing contributions, I think it's up to the contributor to
 decide whether they want to resubmit it as a pull request or not. I
expect
 that there will be a transition period where the old and new way will
 co-exist. But that can be discussed separately.

 The vote will run for 72 hours.

 +1 (non-binding) from me.

 Best,
 Ismael

 [1] http://search-hadoop.com/m/uyzND1N6CDH1DUc82
 [2]
 
https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes




[jira] [Commented] (KAFKA-2354) setting log.dirs property makes tools fail if there is a comma

2015-07-21 Thread Michael Graff (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14635915#comment-14635915
 ] 

Michael Graff commented on KAFKA-2354:
--

Closing as this now appears to be a local error on our wrappers.

 setting log.dirs property makes tools fail if there is a comma
 --

 Key: KAFKA-2354
 URL: https://issues.apache.org/jira/browse/KAFKA-2354
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.8.2.1
 Environment: centos
Reporter: Michael Graff

 If one sets log.dirs=/u1/kafka,/u2/kafka, the tools fail to run:
 kafka-topics --describe --zookeeper localhost/kafka
 Error: Could not find or load main class .u1.kafka,
 The broker will start, however.  If the tools are run from a machine without 
 multiple entries in log.dirs, it works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-2354) setting log.dirs property makes tools fail if there is a comma

2015-07-21 Thread Michael Graff (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Graff resolved KAFKA-2354.
--
Resolution: Not A Problem

 setting log.dirs property makes tools fail if there is a comma
 --

 Key: KAFKA-2354
 URL: https://issues.apache.org/jira/browse/KAFKA-2354
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.8.2.1
 Environment: centos
Reporter: Michael Graff

 If one sets log.dirs=/u1/kafka,/u2/kafka, the tools fail to run:
 kafka-topics --describe --zookeeper localhost/kafka
 Error: Could not find or load main class .u1.kafka,
 The broker will start, however.  If the tools are run from a machine without 
 multiple entries in log.dirs, it works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 36652: Patch for KAFKA-2351

2015-07-21 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36652/#review92488
---

Ship it!


Latest patch looks good to me.

- Jiangjie Qin


On July 21, 2015, 9:58 p.m., Mayuresh Gharat wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36652/
 ---
 
 (Updated July 21, 2015, 9:58 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2351
 https://issues.apache.org/jira/browse/KAFKA-2351
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Added a try-catch to catch any exceptions thrown by the nioSelector
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/network/SocketServer.scala 
 91319fa010b140cca632e5fa8050509bd2295fc9 
 
 Diff: https://reviews.apache.org/r/36652/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Mayuresh Gharat
 




Re: Review Request 36652: Patch for KAFKA-2351

2015-07-21 Thread Mayuresh Gharat


On July 21, 2015, 8:18 p.m., Mayuresh Gharat wrote:
  T

Yes, got it. I thought that we should be catching all exceptions and exit. But 
doing the above will catch the exception and exit when it's shutting down, and 
that's the only thing that this ticket considers.


- Mayuresh


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36652/#review92465
---


On July 21, 2015, 8:11 p.m., Mayuresh Gharat wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36652/
 ---
 
 (Updated July 21, 2015, 8:11 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2351
 https://issues.apache.org/jira/browse/KAFKA-2351
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Added a try-catch to catch any exceptions thrown by the nioSelector
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/network/SocketServer.scala 
 91319fa010b140cca632e5fa8050509bd2295fc9 
 
 Diff: https://reviews.apache.org/r/36652/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Mayuresh Gharat
 




Re: Review Request 36652: Patch for KAFKA-2351

2015-07-21 Thread Grant Henke

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36652/#review92470
---



core/src/main/scala/kafka/network/SocketServer.scala (line 266)
https://reviews.apache.org/r/36652/#comment146669

What errors were seen that should be caught here? Can we catch a more 
specific exception and provide a better message?


- Grant Henke


On July 21, 2015, 8:11 p.m., Mayuresh Gharat wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36652/
 ---
 
 (Updated July 21, 2015, 8:11 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2351
 https://issues.apache.org/jira/browse/KAFKA-2351
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Added a try-catch to catch any exceptions thrown by the nioSelector
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/network/SocketServer.scala 
 91319fa010b140cca632e5fa8050509bd2295fc9 
 
 Diff: https://reviews.apache.org/r/36652/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Mayuresh Gharat
 




[jira] [Created] (KAFKA-2354) setting log.dirs property makes tools fail if there is a comma

2015-07-21 Thread Michael Graff (JIRA)
Michael Graff created KAFKA-2354:


 Summary: setting log.dirs property makes tools fail if there is a 
comma
 Key: KAFKA-2354
 URL: https://issues.apache.org/jira/browse/KAFKA-2354
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.8.2.1
 Environment: centos
Reporter: Michael Graff


If one sets log.dirs=/path1/kafka,/path2/kafka, the tools fail to run:

kafka-topics --describe --zookeeper localhost/kafka
Error: Could not find or load main class .u1.kafka,

The broker will start, however.  If the tools are run from a machine without 
multiple entries in log.dirs, it works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2354) setting log.dirs property makes tools fail if there is a comma

2015-07-21 Thread Edward Ribeiro (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14635864#comment-14635864
 ] 

Edward Ribeiro commented on KAFKA-2354:
---

Hi [~Skandragon], unfortunately I was unable to reproduce this issue on either 
trunk or 0.8.2.1. Could you provide more details of your reproduction steps, 
please? In the meantime, more experienced Kafka devs may be luckier/smarter than 
me at reproducing it.

 setting log.dirs property makes tools fail if there is a comma
 --

 Key: KAFKA-2354
 URL: https://issues.apache.org/jira/browse/KAFKA-2354
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.8.2.1
 Environment: centos
Reporter: Michael Graff

 If one sets log.dirs=/u1/kafka,/u2/kafka, the tools fail to run:
 kafka-topics --describe --zookeeper localhost/kafka
 Error: Could not find or load main class .u1.kafka,
 The broker will start, however.  If the tools are run from a machine without 
 multiple entries in log.dirs, it works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 36652: Patch for KAFKA-2351

2015-07-21 Thread Mayuresh Gharat

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36652/
---

(Updated July 21, 2015, 9:58 p.m.)


Review request for kafka.


Bugs: KAFKA-2351
https://issues.apache.org/jira/browse/KAFKA-2351


Repository: kafka


Description
---

Added a try-catch to catch any exceptions thrown by the nioSelector


Diffs (updated)
-

  core/src/main/scala/kafka/network/SocketServer.scala 
91319fa010b140cca632e5fa8050509bd2295fc9 

Diff: https://reviews.apache.org/r/36652/diff/


Testing
---


Thanks,

Mayuresh Gharat



[jira] [Updated] (KAFKA-2351) Brokers are having a problem shutting down correctly

2015-07-21 Thread Mayuresh Gharat (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayuresh Gharat updated KAFKA-2351:
---
Attachment: KAFKA-2351_2015-07-21_14:58:13.patch

 Brokers are having a problem shutting down correctly
 

 Key: KAFKA-2351
 URL: https://issues.apache.org/jira/browse/KAFKA-2351
 Project: Kafka
  Issue Type: Bug
Reporter: Mayuresh Gharat
Assignee: Mayuresh Gharat
 Attachments: KAFKA-2351.patch, KAFKA-2351_2015-07-21_14:58:13.patch


 The run() in Acceptor might throw an exception during shutdown that is not 
 caught, so it never reaches shutdownComplete; as a result the latch is not 
 counted down and the broker will not be able to shut down.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
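
For illustration, a minimal Java sketch of the pattern being discussed for KAFKA-2351:
keep the try-catch inside the accept loop so a transient selector error does not kill
the thread, and use try/finally so the shutdown latch is always counted down even if
run() exits via an exception. This is an illustrative sketch only, not the actual
Scala Acceptor code in SocketServer; the class and method names are hypothetical.

{code}
import java.io.IOException;
import java.nio.channels.ClosedSelectorException;
import java.util.concurrent.CountDownLatch;

abstract class AcceptorSketch implements Runnable {
    private final CountDownLatch shutdownLatch = new CountDownLatch(1);
    private volatile boolean running = true;

    // One pass over the selector, accepting any pending connections.
    abstract void acceptNewConnections() throws IOException;

    @Override
    public void run() {
        try {
            while (running) {
                try {
                    acceptNewConnections();
                } catch (ClosedSelectorException | IOException e) {
                    // Log and keep going (or break if shutting down); the point is
                    // that an exception here must not skip shutdownComplete().
                    if (!running) break;
                    System.err.println("Error while accepting connections: " + e);
                }
            }
        } finally {
            shutdownComplete();   // always count down, so shutdown() never hangs
        }
    }

    void shutdownComplete() {
        shutdownLatch.countDown();
    }

    void shutdown() throws InterruptedException {
        running = false;
        shutdownLatch.await();    // without the finally above, this could block forever
    }
}
{code}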


[jira] [Commented] (KAFKA-2351) Brokers are having a problem shutting down correctly

2015-07-21 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14635885#comment-14635885
 ] 

Mayuresh Gharat commented on KAFKA-2351:


Updated reviewboard https://reviews.apache.org/r/36652/diff/
 against branch origin/trunk

 Brokers are having a problem shutting down correctly
 

 Key: KAFKA-2351
 URL: https://issues.apache.org/jira/browse/KAFKA-2351
 Project: Kafka
  Issue Type: Bug
Reporter: Mayuresh Gharat
Assignee: Mayuresh Gharat
 Attachments: KAFKA-2351.patch, KAFKA-2351_2015-07-21_14:58:13.patch


 The run() in Acceptor might throw an exception during shutdown that is not 
 caught, so it never reaches shutdownComplete; as a result the latch is not 
 counted down and the broker will not be able to shut down.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2351) Brokers are having a problem shutting down correctly

2015-07-21 Thread Mayuresh Gharat (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayuresh Gharat updated KAFKA-2351:
---
Status: Patch Available  (was: Open)

 Brokers are having a problem shutting down correctly
 

 Key: KAFKA-2351
 URL: https://issues.apache.org/jira/browse/KAFKA-2351
 Project: Kafka
  Issue Type: Bug
Reporter: Mayuresh Gharat
Assignee: Mayuresh Gharat
 Attachments: KAFKA-2351.patch


 The run() in Acceptor might throw an exception during shutdown that is not 
 caught, so it never reaches shutdownComplete; as a result the latch is not 
 counted down and the broker will not be able to shut down.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2351) Brokers are having a problem shutting down correctly

2015-07-21 Thread Mayuresh Gharat (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mayuresh Gharat updated KAFKA-2351:
---
Attachment: KAFKA-2351.patch

 Brokers are having a problem shutting down correctly
 

 Key: KAFKA-2351
 URL: https://issues.apache.org/jira/browse/KAFKA-2351
 Project: Kafka
  Issue Type: Bug
Reporter: Mayuresh Gharat
Assignee: Mayuresh Gharat
 Attachments: KAFKA-2351.patch


 The run() in Acceptor might throw an exception during shutdown that is not 
 caught, so it never reaches shutdownComplete; as a result the latch is not 
 counted down and the broker will not be able to shut down.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Review Request 36652: Patch for KAFKA-2351

2015-07-21 Thread Mayuresh Gharat

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36652/
---

Review request for kafka.


Bugs: KAFKA-2351
https://issues.apache.org/jira/browse/KAFKA-2351


Repository: kafka


Description
---

Added a try-catch to catch any exceptions thrown by the nioSelector


Diffs
-

  core/src/main/scala/kafka/network/SocketServer.scala 
91319fa010b140cca632e5fa8050509bd2295fc9 

Diff: https://reviews.apache.org/r/36652/diff/


Testing
---


Thanks,

Mayuresh Gharat



Re: Review Request 36652: Patch for KAFKA-2351

2015-07-21 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36652/#review92465
---


Thanks for the patch, some comments.


core/src/main/scala/kafka/network/SocketServer.scala (lines 234 - 235)
https://reviews.apache.org/r/36652/#comment146660

We probably want to keep the try-catch inside the while loop.



core/src/main/scala/kafka/network/SocketServer.scala (line 235)
https://reviews.apache.org/r/36652/#comment146661

Open source Kafka convention is to put the bracket on the same line.


T

- Jiangjie Qin


On July 21, 2015, 8:11 p.m., Mayuresh Gharat wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36652/
 ---
 
 (Updated July 21, 2015, 8:11 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2351
 https://issues.apache.org/jira/browse/KAFKA-2351
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Added a try-catch to catch any exceptions thrown by the nioSelector
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/network/SocketServer.scala 
 91319fa010b140cca632e5fa8050509bd2295fc9 
 
 Diff: https://reviews.apache.org/r/36652/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Mayuresh Gharat
 




Re: Review Request 36652: Patch for KAFKA-2351

2015-07-21 Thread Grant Henke


 On July 21, 2015, 8:26 p.m., Grant Henke wrote:
  core/src/main/scala/kafka/network/SocketServer.scala, line 266
  https://reviews.apache.org/r/36652/diff/1/?file=1018073#file1018073line266
 
  What errors were seen that should be caught here? Can we catch a more 
  specific exception and provide a better message?
 
 Mayuresh Gharat wrote:
 The nioSelector can throw different exceptions: IOException, 
 ClosedSelectorException, IllegalArgumentException. We could have a different 
 catch for each of them. But we thought that the log will tell us what 
 exception was thrown when we pass it to error().

I assumed you are adding this because you saw a specific error. I wasn't sure 
what error you saw and if more context could be given to the user. Perhaps the 
error you saw is fairly common during shutdown and should be ignored, and not 
logged at the error level. But all others should be handled as you are here.


- Grant


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36652/#review92470
---


On July 21, 2015, 8:11 p.m., Mayuresh Gharat wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36652/
 ---
 
 (Updated July 21, 2015, 8:11 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2351
 https://issues.apache.org/jira/browse/KAFKA-2351
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Added a try-catch to catch any exceptions thrown by the nioSelector
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/network/SocketServer.scala 
 91319fa010b140cca632e5fa8050509bd2295fc9 
 
 Diff: https://reviews.apache.org/r/36652/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Mayuresh Gharat
 




Re: Submitting a patch (Jira errors)

2015-07-21 Thread Mayuresh Gharat
Resolved this.

Thanks,

Mayuresh

On Tue, Jul 21, 2015 at 12:59 PM, Mayuresh Gharat 
gharatmayures...@gmail.com wrote:

 Yes.

 Thanks,

 Mayuresh

 On Tue, Jul 21, 2015 at 12:27 PM, Aditya Auradkar 
 aaurad...@linkedin.com.invalid wrote:

 Did you setup your jira.ini?

 On Tue, Jul 21, 2015 at 11:52 AM, Mayuresh Gharat 
 gharatmayures...@gmail.com wrote:

  Hi,
 
  I had to clean up existing kafka repo on my linux box and start with a
  fresh one.
 
  I followed the instructions here :
 
 
 
 https://cwiki.apache.org/confluence/display/KAFKA/Patch+submission+and+review
 
  I am trying to upload a patch and I am getting these errors :
 
  Configuring reviewboard url to https://reviews.apache.org
  Updating your remote branches to pull the latest changes
  Verifying JIRA connection configurations
 
 
 /usr/lib/python2.6/site-packages/requests-2.7.0-py2.6.egg/requests/packages/urllib3/util/ssl_.py:90:
  InsecurePlatformWarning: A true SSLContext object is not available. This
  prevents urllib3 from configuring SSL appropriately and may cause
 certain
  SSL connections to fail. For more information, see
 
 
 https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning
  .
InsecurePlatformWarning
 
 
 /usr/lib/python2.6/site-packages/requests-2.7.0-py2.6.egg/requests/packages/urllib3/connection.py:251:
  SecurityWarning: Certificate has no `subjectAltName`, falling back to
 check
  for a `commonName` for now. This feature is being removed by major
 browsers
  and deprecated by RFC 2818. (See
  https://github.com/shazow/urllib3/issues/497 for details.)
SecurityWarning
  Failed to login to the JIRA instance class 'jira.exceptions.JIRAError'
  HTTP 403: CAPTCHA_CHALLENGE; login-url=
  https://issues.apache.org/jira/login.jsp;
  https://issues.apache.org/jira/rest/auth/1/session
 
 
 
  Any help here will be appreciated.
 
  --
  -Regards,
  Mayuresh R. Gharat
  (862) 250-7125
 




 --
 -Regards,
 Mayuresh R. Gharat
 (862) 250-7125




-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


Re: Review Request 36652: Patch for KAFKA-2351

2015-07-21 Thread Mayuresh Gharat


 On July 21, 2015, 8:26 p.m., Grant Henke wrote:
  core/src/main/scala/kafka/network/SocketServer.scala, line 266
  https://reviews.apache.org/r/36652/diff/1/?file=1018073#file1018073line266
 
  What errors were seen that should be caught here? Can we catch a more 
  specific exception and provide a better message?

The nioSelector can throw different exceptions: IOException, 
ClosedSelectorException, IllegalArgumentException. We could have a different catch 
for each of them. But we thought that the log will tell us what exception was 
thrown when we pass it to error().


- Mayuresh


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36652/#review92470
---


On July 21, 2015, 8:11 p.m., Mayuresh Gharat wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36652/
 ---
 
 (Updated July 21, 2015, 8:11 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2351
 https://issues.apache.org/jira/browse/KAFKA-2351
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Added a try-catch to catch any exceptions thrown by the nioSelector
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/network/SocketServer.scala 
 91319fa010b140cca632e5fa8050509bd2295fc9 
 
 Diff: https://reviews.apache.org/r/36652/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Mayuresh Gharat
 




[jira] [Commented] (KAFKA-2351) Brokers are having a problem shutting down correctly

2015-07-21 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14635734#comment-14635734
 ] 

Mayuresh Gharat commented on KAFKA-2351:


Created reviewboard https://reviews.apache.org/r/36652/diff/
 against branch origin/trunk

 Brokers are having a problem shutting down correctly
 

 Key: KAFKA-2351
 URL: https://issues.apache.org/jira/browse/KAFKA-2351
 Project: Kafka
  Issue Type: Bug
Reporter: Mayuresh Gharat
Assignee: Mayuresh Gharat
 Attachments: KAFKA-2351.patch


 The run() in Acceptor might throw an exception during shutdown that is not 
 caught, so it never reaches shutdownComplete; as a result the latch is not 
 counted down and the broker will not be able to shut down.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2354) setting log.dirs property makes tools fail if there is a comma

2015-07-21 Thread Michael Graff (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Graff updated KAFKA-2354:
-
Description: 
If one sets log.dirs=/u1/kafka,/u2/kafka, the tools fail to run:

kafka-topics --describe --zookeeper localhost/kafka
Error: Could not find or load main class .u1.kafka,

The broker will start, however.  If the tools are run from a machine without 
multiple entries in log.dirs, it works.

  was:
If one sets log.dirs=/path1/kafka,/path2/kafka, the tools fail to run:

kafka-topics --describe --zookeeper localhost/kafka
Error: Could not find or load main class .u1.kafka,

The broker will start, however.  If the tools are run from a machine without 
multiple entries in log.dirs, it works.


 setting log.dirs property makes tools fail if there is a comma
 --

 Key: KAFKA-2354
 URL: https://issues.apache.org/jira/browse/KAFKA-2354
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.8.2.1
 Environment: centos
Reporter: Michael Graff

 If one sets log.dirs=/u1/kafka,/u2/kafka, the tools fail to run:
 kafka-topics --describe --zookeeper localhost/kafka
 Error: Could not find or load main class .u1.kafka,
 The broker will start, however.  If the tools are run from a machine without 
 multiple entries in log.dirs, it works.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2350) Add KafkaConsumer pause capability

2015-07-21 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14636074#comment-14636074
 ] 

Jiangjie Qin commented on KAFKA-2350:
-

I am thinking that currently we keep two collections of topic partitions in 
KafkaConsumer, one for user subscription, the other for coordinator assignment. 
Can we do something to the existing code to let subscribe/unsubscribe support 
pause/unpause as well?

Maybe we can have one subscription set and one assigned partition validation 
set.
{code}
void subscribe(String topic)
void unsubscribe(String topic)
{code}
will affect both the assigned partition set and the subscription set. If Kafka-based 
partition assignment is not used, the assigned partition set will be null.

{code}
void subscribe(TopicPartition... partitions)
void unsubscribe(TopicPartition... partitions)
{code}
will only change the subscription set. Calling them won't trigger a rebalance. 
But the topics subscribed to have to be in the assigned partition set if that set is non-null.

In this way, the user can simply use
{code}
void subscribe(TopicPartition... partitions)
void unsubscribe(TopicPartition... partitions)
{code}
to do the pause and unpause.

Some other benefits might be:
1. We don't add two more interfaces to the already somewhat complicated API.
2. We get validation for manual subscription.

 Add KafkaConsumer pause capability
 --

 Key: KAFKA-2350
 URL: https://issues.apache.org/jira/browse/KAFKA-2350
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson
Assignee: Jason Gustafson

 There are some use cases in stream processing where it is helpful to be able 
 to pause consumption of a topic. For example, when joining two topics, you 
 may need to delay processing of one topic while you wait for the consumer of 
 the other topic to catch up. The new consumer currently doesn't provide a 
 nice way to do this. If you skip poll() or if you unsubscribe, then a 
 rebalance will be triggered and your partitions will be reassigned.
 One way to achieve this would be to add two new methods to KafkaConsumer:
 {code}
 void pause(String... topics);
 void unpause(String... topics);
 {code}
 When a topic is paused, a call to KafkaConsumer.poll will not initiate any 
 new fetches for that topic. After it is unpaused, fetches will begin again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
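
For reference, this capability eventually surfaced in the Java consumer as pause/resume
over TopicPartitions rather than the pause/unpause(String...) form proposed above. A
minimal usage sketch against a recent kafka-clients API follows (the method signatures
have varied across client versions, early ones took varargs and later ones a Collection,
so treat this as illustrative); isCaughtUp() is a hypothetical stand-in for the
application's own lag check.

{code}
import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class PauseSketch {
    // Hypothetical helper: returns true once the slower side of the join has caught up.
    static boolean isCaughtUp() { return true; }

    public static void run(KafkaConsumer<String, String> consumer) {
        TopicPartition fastSide = new TopicPartition("left-topic", 0);
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            // ... join/process records here ...
            if (!isCaughtUp()) {
                // Stop fetching the fast partition without leaving the group,
                // so no rebalance is triggered.
                consumer.pause(Collections.singleton(fastSide));
            } else {
                consumer.resume(Collections.singleton(fastSide));
            }
        }
    }
}
{code}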


Re: [VOTE] Switch to GitHub pull requests for new contributions

2015-07-21 Thread Jay Kreps
+1

-Jay

On Tue, Jul 21, 2015 at 4:28 AM, Ismael Juma ism...@juma.me.uk wrote:

 Hi all,

 I would like to start a vote on switching to GitHub pull requests for new
 contributions. To be precise, the vote is on whether we should:

 * Update the documentation to tell users to use pull requests instead of
 patches and Review Board (i.e. merge KAFKA-2321 and KAFKA-2349)
 * Use pull requests for new contributions

 In a previous discussion[1], everyone that participated was in favour. It's
 also worth reading the Contributing Code Changes wiki page[2] (if you
 haven't already) to understand the flow.

 A number of pull requests have been merged in the last few weeks to test
 this flow and I believe it's working well enough. As usual, there is always
 room for improvement and I expect us to tweak things as time goes on.

 The main downside of using GitHub pull requests is that we don't have write
 access to https://github.com/apache/kafka. That means that we rely on
 commit hooks to close integrated pull requests (the merge script takes care
 of formatting the message so that this happens) and the PR creator or
 Apache Infra to close pull requests that are not integrated.

 Regarding existing contributions, I think it's up to the contributor to
 decide whether they want to resubmit it as a pull request or not. I expect
 that there will be a transition period where the old and new way will
 co-exist. But that can be discussed separately.

 The vote will run for 72 hours.

 +1 (non-binding) from me.

 Best,
 Ismael

 [1] http://search-hadoop.com/m/uyzND1N6CDH1DUc82
 [2]
 https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes



Review Request 36664: Patch for KAFKA-2353

2015-07-21 Thread Jiangjie Qin

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36664/
---

Review request for kafka.


Bugs: KAFKA-2353
https://issues.apache.org/jira/browse/KAFKA-2353


Repository: kafka


Description
---

Catch exception in kafka.network.Processor to avoid socket leak and exiting 
unexpectedly.


Diffs
-

  core/src/main/scala/kafka/network/SocketServer.scala 
91319fa010b140cca632e5fa8050509bd2295fc9 

Diff: https://reviews.apache.org/r/36664/diff/


Testing
---


Thanks,

Jiangjie Qin



[jira] [Commented] (KAFKA-2353) SocketServer.Processor should catch exception and close the socket properly in configureNewConnections.

2015-07-21 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14635987#comment-14635987
 ] 

Jiangjie Qin commented on KAFKA-2353:
-

Created reviewboard https://reviews.apache.org/r/36664/diff/
 against branch origin/trunk

 SocketServer.Processor should catch exception and close the socket properly 
 in configureNewConnections.
 ---

 Key: KAFKA-2353
 URL: https://issues.apache.org/jira/browse/KAFKA-2353
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
 Attachments: KAFKA-2353.patch


 We have seen an increasing number of sockets in CLOSE_WAIT status in our production 
 environment over the last couple of days. From the thread dump it seems one of 
 the Processor threads has died but the acceptor was still putting many new 
 connections into its new-connection queue.
 The Processor thread died because we are not catching all the 
 exceptions in the Processor thread. For example, in our case it seems to be 
 an exception thrown in configureNewConnections().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
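
For illustration, a minimal Java sketch of the per-connection handling proposed for
KAFKA-2353: catch exceptions while configuring each new connection and close that
channel, so a single bad connection neither kills the Processor thread nor leaks a
socket in CLOSE_WAIT. This is an illustrative sketch of the idea, not the actual
Scala Processor code; the class and method names are hypothetical.

{code}
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

class ConnectionConfigurer {
    private final Selector selector;
    private final Queue<SocketChannel> newConnections = new ConcurrentLinkedQueue<>();

    ConnectionConfigurer(Selector selector) {
        this.selector = selector;
    }

    // Called by the acceptor thread for each accepted connection.
    void accept(SocketChannel channel) {
        newConnections.add(channel);
        selector.wakeup();
    }

    /** Drain the queue; a failure on one channel must not affect the others. */
    void configureNewConnections() {
        SocketChannel channel;
        while ((channel = newConnections.poll()) != null) {
            try {
                channel.configureBlocking(false);
                channel.register(selector, SelectionKey.OP_READ);
            } catch (Exception e) {
                // Close the offending socket instead of letting the exception
                // propagate and kill the processor thread (which would leave this
                // socket, and every queued one after it, stuck in CLOSE_WAIT).
                System.err.println("Failed to configure connection, closing it: " + e);
                try {
                    channel.close();
                } catch (Exception closeError) {
                    // ignore errors on close
                }
            }
        }
    }
}
{code}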


Re: Review Request 36664: Patch for KAFKA-2353

2015-07-21 Thread Gwen Shapira

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36664/#review92496
---


Thanks for looking into that. Exception handling was the most challenging part 
of rewriting SocketServer, so I'm glad to see more eyes on this implementation.

I have a concern regarding the right way to handle unexpected exceptions.


core/src/main/scala/kafka/network/SocketServer.scala (line 400)
https://reviews.apache.org/r/36664/#comment146707

So in case of an unexpected exception, we log an error and keep running?

Isn't it better to kill the processor, since we don't know what the state 
of the system is? If the acceptor keeps placing messages in the queue for a dead 
processor, isn't that a separate issue?



core/src/main/scala/kafka/network/SocketServer.scala (line 461)
https://reviews.apache.org/r/36664/#comment146708

Turns out that catching Throwable is a really bad idea: 
https://www.sumologic.com/2014/05/05/why-you-should-never-catch-throwable-in-scala/


- Gwen Shapira


On July 21, 2015, 11:03 p.m., Jiangjie Qin wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/36664/
 ---
 
 (Updated July 21, 2015, 11:03 p.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2353
 https://issues.apache.org/jira/browse/KAFKA-2353
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Catch exception in kafka.network.Processor to avoid socket leak and exiting 
 unexpectedly.
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/network/SocketServer.scala 
 91319fa010b140cca632e5fa8050509bd2295fc9 
 
 Diff: https://reviews.apache.org/r/36664/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Jiangjie Qin
 




[jira] [Updated] (KAFKA-2353) SocketServer.Processor should catch exception and close the socket properly in configureNewConnections.

2015-07-21 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-2353:

Attachment: KAFKA-2353.patch

 SocketServer.Processor should catch exception and close the socket properly 
 in configureNewConnections.
 ---

 Key: KAFKA-2353
 URL: https://issues.apache.org/jira/browse/KAFKA-2353
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
 Attachments: KAFKA-2353.patch


 We have seen an increasing number of sockets in CLOSE_WAIT status in our production 
 environment over the last couple of days. From the thread dump it seems one of 
 the Processor threads has died but the acceptor was still putting many new 
 connections into its new-connection queue.
 The Processor thread died because we are not catching all the 
 exceptions in the Processor thread. For example, in our case it seems to be 
 an exception thrown in configureNewConnections().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2353) SocketServer.Processor should catch exception and close the socket properly in configureNewConnections.

2015-07-21 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-2353:

Status: Patch Available  (was: Open)

 SocketServer.Processor should catch exception and close the socket properly 
 in configureNewConnections.
 ---

 Key: KAFKA-2353
 URL: https://issues.apache.org/jira/browse/KAFKA-2353
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
 Attachments: KAFKA-2353.patch


 We have seen an increasing number of sockets in CLOSE_WAIT status in our production 
 environment over the last couple of days. From the thread dump it seems one of 
 the Processor threads has died but the acceptor was still putting many new 
 connections into its new-connection queue.
 The Processor thread died because we are not catching all the 
 exceptions in the Processor thread. For example, in our case it seems to be 
 an exception thrown in configureNewConnections().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2353) SocketServer.Processor should catch exception and close the socket properly in configureNewConnections.

2015-07-21 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14636020#comment-14636020
 ] 

Jiangjie Qin commented on KAFKA-2353:
-

[~gwenshap] Can you help take a look at this patch when you get some time, since you 
made a lot of changes to the socket server in KAFKA-1928 :) Thanks.

 SocketServer.Processor should catch exception and close the socket properly 
 in configureNewConnections.
 ---

 Key: KAFKA-2353
 URL: https://issues.apache.org/jira/browse/KAFKA-2353
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
 Attachments: KAFKA-2353.patch


 We have seen an increasing number of sockets in CLOSE_WAIT status in our production 
 environment over the last couple of days. From the thread dump it seems one of 
 the Processor threads has died but the acceptor was still putting many new 
 connections into its new-connection queue.
 The Processor thread died because we are not catching all the 
 exceptions in the Processor thread. For example, in our case it seems to be 
 an exception thrown in configureNewConnections().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2353) SocketServer.Processor should catch exception and close the socket properly in configureNewConnections.

2015-07-21 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14636040#comment-14636040
 ] 

Gwen Shapira commented on KAFKA-2353:
-

I left comments in RB :)

 SocketServer.Processor should catch exception and close the socket properly 
 in configureNewConnections.
 ---

 Key: KAFKA-2353
 URL: https://issues.apache.org/jira/browse/KAFKA-2353
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin
 Attachments: KAFKA-2353.patch


 We have seen an increasing number of sockets in CLOSE_WAIT status in our production 
 environment over the last couple of days. From the thread dump it seems one of 
 the Processor threads has died but the acceptor was still putting many new 
 connections into its new-connection queue.
 The Processor thread died because we are not catching all the 
 exceptions in the Processor thread. For example, in our case it seems to be 
 an exception thrown in configureNewConnections().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2210) KafkaAuthorizer: Add all public entities, config changes and changes to KafkaAPI and kafkaServer to allow pluggable authorizer implementation.

2015-07-21 Thread Parth Brahmbhatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Parth Brahmbhatt updated KAFKA-2210:

Attachment: KAFKA-2210_2015-07-21_17:08:21.patch

 KafkaAuthorizer: Add all public entities, config changes and changes to 
 KafkaAPI and kafkaServer to allow pluggable authorizer implementation.
 --

 Key: KAFKA-2210
 URL: https://issues.apache.org/jira/browse/KAFKA-2210
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Parth Brahmbhatt
Assignee: Parth Brahmbhatt
 Attachments: KAFKA-2210.patch, KAFKA-2210_2015-06-03_16:36:11.patch, 
 KAFKA-2210_2015-06-04_16:07:39.patch, KAFKA-2210_2015-07-09_18:00:34.patch, 
 KAFKA-2210_2015-07-14_10:02:19.patch, KAFKA-2210_2015-07-14_14:13:19.patch, 
 KAFKA-2210_2015-07-20_16:42:18.patch, KAFKA-2210_2015-07-21_17:08:21.patch


 This is the first subtask for Kafka-1688. As Part of this jira we intend to 
 agree on all the public entities, configs and changes to existing kafka 
 classes to allow pluggable authorizer implementation.
 Please see KIP-11 
 https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface
  for detailed design. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


 On June 1, 2015, 1:11 a.m., Jun Rao wrote:
  Thanks for that patch. A few comments below.
  
  Also, two common types of users are consumers and publishers. Currently, if 
  you want to allow a user to consume from topic t in consumer group g, you 
  have to grant (1) read permission on topic t; (2) read permission on group 
  g; (3) describe permission on topic t; (4) describe permission on group g; 
  (5) create permission on offset topic; (6) describe permission on offset 
  topic. Similarly, if you want to allow a user to publish to a topic t, you 
  need to grant (1) write permission to topic t; (2) create permission on the 
  cluster; (3) describe permission on topic t. These are a quite a few 
  individual permission for the admin to remember and set. I am wondering if 
  we can grant permissions to these two types of users in a simpler way, at 
  least through the cli. For example, for a consumer, based on the topics and 
  the consumer group, it would be nice if the cli can grant the necessary 
  permissions automatically, instead of having to require the admin to set 
  each indivial permission.

I will handle this as part of the CLI PR.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review85791
---


On July 22, 2015, 12:08 a.m., Parth Brahmbhatt wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/34492/
 ---
 
 (Updated July 22, 2015, 12:08 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2210
 https://issues.apache.org/jira/browse/KAFKA-2210
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Addressing review comments from Jun.
 
 
 Adding CREATE check for offset topic only if the topic does not exist already.
 
 
 Addressing some more comments.
 
 
 Removing acl.json file
 
 
 Moving PermissionType to trait instead of enum. Following the convention for 
 defining constants.
 
 
 Adding authorizer.config.path back.
 
 
 Addressing more comments from Jun.
 
 
 Addressing more comments.
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/api/OffsetRequest.scala 
 f418868046f7c99aefdccd9956541a0cb72b1500 
   core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
   core/src/main/scala/kafka/common/ErrorMapping.scala 
 c75c68589681b2c9d6eba2b440ce5e58cddf6370 
   core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/KafkaPrincipal.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 18f5b5b895af1469876b2223841fd90a2dd255e0 
   core/src/main/scala/kafka/server/KafkaConfig.scala 
 dbe170f87331f43e2dc30165080d2cb7dfe5fdbf 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 18917bc4464b9403b16d85d20c3fd4c24893d1d3 
   core/src/test/scala/unit/kafka/security/auth/AclTest.scala PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/KafkaPrincipalTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/OperationTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/PermissionTypeTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/ResourceTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/ResourceTypeTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
 04a02e08a54139ee1a298c5354731bae009efef3 
 
 Diff: https://reviews.apache.org/r/34492/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Parth Brahmbhatt
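
For illustration, a small Java sketch of the kind of convenience expansion Jun describes
above: grouping the individual (resource, operation) grants a consumer or producer needs
so an admin tool could set them in one step. The resource and operation names follow the
permissions listed in the comment; the helper and its string format are hypothetical
stand-ins, not the KIP-11 interfaces or any shipped CLI.

{code}
import java.util.Arrays;
import java.util.List;

/** Hypothetical helper, not the KIP-11 API: expands a high-level consumer or
 *  producer grant into the individual permissions listed in the comment above. */
class AclExpander {

    static List<String> consumerGrants(String topic, String group, String offsetTopic) {
        return Arrays.asList(
            "Read on Topic:" + topic,
            "Describe on Topic:" + topic,
            "Read on Group:" + group,
            "Describe on Group:" + group,
            "Create on Topic:" + offsetTopic,
            "Describe on Topic:" + offsetTopic);
    }

    static List<String> producerGrants(String topic, String cluster) {
        return Arrays.asList(
            "Write on Topic:" + topic,
            "Create on Cluster:" + cluster,
            "Describe on Topic:" + topic);
    }
}
{code}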
 




Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/
---

(Updated July 22, 2015, 12:08 a.m.)


Review request for kafka.


Bugs: KAFKA-2210
https://issues.apache.org/jira/browse/KAFKA-2210


Repository: kafka


Description (updated)
---

Addressing review comments from Jun.


Adding CREATE check for offset topic only if the topic does not exist already.


Addressing some more comments.


Removing acl.json file


Moving PermissionType to trait instead of enum. Following the convention for 
defining constants.


Adding authorizer.config.path back.


Addressing more comments from Jun.


Addressing more comments.


Diffs (updated)
-

  core/src/main/scala/kafka/api/OffsetRequest.scala 
f418868046f7c99aefdccd9956541a0cb72b1500 
  core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
  core/src/main/scala/kafka/common/ErrorMapping.scala 
c75c68589681b2c9d6eba2b440ce5e58cddf6370 
  core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/KafkaPrincipal.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
  core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
  core/src/main/scala/kafka/server/KafkaApis.scala 
18f5b5b895af1469876b2223841fd90a2dd255e0 
  core/src/main/scala/kafka/server/KafkaConfig.scala 
dbe170f87331f43e2dc30165080d2cb7dfe5fdbf 
  core/src/main/scala/kafka/server/KafkaServer.scala 
18917bc4464b9403b16d85d20c3fd4c24893d1d3 
  core/src/test/scala/unit/kafka/security/auth/AclTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/KafkaPrincipalTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/OperationTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/PermissionTypeTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/ResourceTest.scala PRE-CREATION 
  core/src/test/scala/unit/kafka/security/auth/ResourceTypeTest.scala 
PRE-CREATION 
  core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
04a02e08a54139ee1a298c5354731bae009efef3 

Diff: https://reviews.apache.org/r/34492/diff/


Testing
---


Thanks,

Parth Brahmbhatt



[jira] [Commented] (KAFKA-2210) KafkaAuthorizer: Add all public entities, config changes and changes to KafkaAPI and kafkaServer to allow pluggable authorizer implementation.

2015-07-21 Thread Parth Brahmbhatt (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14636051#comment-14636051
 ] 

Parth Brahmbhatt commented on KAFKA-2210:
-

Updated reviewboard https://reviews.apache.org/r/34492/diff/
 against branch origin/trunk

 KafkaAuthorizer: Add all public entities, config changes and changes to 
 KafkaAPI and kafkaServer to allow pluggable authorizer implementation.
 --

 Key: KAFKA-2210
 URL: https://issues.apache.org/jira/browse/KAFKA-2210
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Parth Brahmbhatt
Assignee: Parth Brahmbhatt
 Attachments: KAFKA-2210.patch, KAFKA-2210_2015-06-03_16:36:11.patch, 
 KAFKA-2210_2015-06-04_16:07:39.patch, KAFKA-2210_2015-07-09_18:00:34.patch, 
 KAFKA-2210_2015-07-14_10:02:19.patch, KAFKA-2210_2015-07-14_14:13:19.patch, 
 KAFKA-2210_2015-07-20_16:42:18.patch, KAFKA-2210_2015-07-21_17:08:21.patch


 This is the first subtask for Kafka-1688. As Part of this jira we intend to 
 agree on all the public entities, configs and changes to existing kafka 
 classes to allow pluggable authorizer implementation.
 Please see KIP-11 
 https://cwiki.apache.org/confluence/display/KAFKA/KIP-11+-+Authorization+Interface
  for detailed design. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


 On July 21, 2015, 1:57 a.m., Edward Ribeiro wrote:
  core/src/main/scala/kafka/server/KafkaApis.scala, line 151
  https://reviews.apache.org/r/34492/diff/8/?file=1017303#file1017303line151
 
  Please, put a space between ``if`` and ``(`` here.

Fixed.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review92355
---


On July 22, 2015, 12:08 a.m., Parth Brahmbhatt wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/34492/
 ---
 
 (Updated July 22, 2015, 12:08 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2210
 https://issues.apache.org/jira/browse/KAFKA-2210
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Addressing review comments from Jun.
 
 
 Adding CREATE check for offset topic only if the topic does not exist already.
 
 
 Addressing some more comments.
 
 
 Removing acl.json file
 
 
 Moving PermissionType to trait instead of enum. Following the convention for 
 defining constants.
 
 
 Adding authorizer.config.path back.
 
 
 Addressing more comments from Jun.
 
 
 Addressing more comments.
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/api/OffsetRequest.scala 
 f418868046f7c99aefdccd9956541a0cb72b1500 
   core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
   core/src/main/scala/kafka/common/ErrorMapping.scala 
 c75c68589681b2c9d6eba2b440ce5e58cddf6370 
   core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/KafkaPrincipal.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 18f5b5b895af1469876b2223841fd90a2dd255e0 
   core/src/main/scala/kafka/server/KafkaConfig.scala 
 dbe170f87331f43e2dc30165080d2cb7dfe5fdbf 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 18917bc4464b9403b16d85d20c3fd4c24893d1d3 
   core/src/test/scala/unit/kafka/security/auth/AclTest.scala PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/KafkaPrincipalTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/OperationTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/PermissionTypeTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/ResourceTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/ResourceTypeTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
 04a02e08a54139ee1a298c5354731bae009efef3 
 
 Diff: https://reviews.apache.org/r/34492/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Parth Brahmbhatt
 




Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


 On July 21, 2015, 1:43 a.m., Edward Ribeiro wrote:
  core/src/main/scala/kafka/server/KafkaApis.scala, line 624
  https://reviews.apache.org/r/34492/diff/8/?file=1017303#file1017303line624
 
  Lines L#620 and L#621 could be merged (with a &&) into a single 
  if-condition. No need for nested if-conditions here. ;-)

fixed.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review92350
---


On July 22, 2015, 12:08 a.m., Parth Brahmbhatt wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/34492/
 ---
 
 (Updated July 22, 2015, 12:08 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2210
 https://issues.apache.org/jira/browse/KAFKA-2210
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Addressing review comments from Jun.
 
 
 Adding CREATE check for offset topic only if the topic does not exist already.
 
 
 Addressing some more comments.
 
 
 Removing acl.json file
 
 
 Moving PermissionType to trait instead of enum. Following the convention for 
 defining constants.
 
 
 Adding authorizer.config.path back.
 
 
 Addressing more comments from Jun.
 
 
 Addressing more comments.
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/api/OffsetRequest.scala 
 f418868046f7c99aefdccd9956541a0cb72b1500 
   core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
   core/src/main/scala/kafka/common/ErrorMapping.scala 
 c75c68589681b2c9d6eba2b440ce5e58cddf6370 
   core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/KafkaPrincipal.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 18f5b5b895af1469876b2223841fd90a2dd255e0 
   core/src/main/scala/kafka/server/KafkaConfig.scala 
 dbe170f87331f43e2dc30165080d2cb7dfe5fdbf 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 18917bc4464b9403b16d85d20c3fd4c24893d1d3 
   core/src/test/scala/unit/kafka/security/auth/AclTest.scala PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/KafkaPrincipalTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/OperationTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/PermissionTypeTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/ResourceTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/ResourceTypeTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
 04a02e08a54139ee1a298c5354731bae009efef3 
 
 Diff: https://reviews.apache.org/r/34492/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Parth Brahmbhatt
 




Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


 On July 21, 2015, 1:55 a.m., Edward Ribeiro wrote:
  core/src/main/scala/kafka/security/auth/Operation.scala, line 43
  https://reviews.apache.org/r/34492/diff/8/?file=1017299#file1017299line43
 
  The ``return`` here is redundant.

Fixed.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review92354
---


On July 22, 2015, 12:08 a.m., Parth Brahmbhatt wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/34492/
 ---
 
 (Updated July 22, 2015, 12:08 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2210
 https://issues.apache.org/jira/browse/KAFKA-2210
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Addressing review comments from Jun.
 
 
 Adding CREATE check for offset topic only if the topic does not exist already.
 
 
 Addressing some more comments.
 
 
 Removing acl.json file
 
 
 Moving PermissionType to trait instead of enum. Following the convention for 
 defining constants.
 
 
 Adding authorizer.config.path back.
 
 
 Addressing more comments from Jun.
 
 
 Addressing more comments.
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/api/OffsetRequest.scala 
 f418868046f7c99aefdccd9956541a0cb72b1500 
   core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
   core/src/main/scala/kafka/common/ErrorMapping.scala 
 c75c68589681b2c9d6eba2b440ce5e58cddf6370 
   core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/KafkaPrincipal.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 18f5b5b895af1469876b2223841fd90a2dd255e0 
   core/src/main/scala/kafka/server/KafkaConfig.scala 
 dbe170f87331f43e2dc30165080d2cb7dfe5fdbf 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 18917bc4464b9403b16d85d20c3fd4c24893d1d3 
   core/src/test/scala/unit/kafka/security/auth/AclTest.scala PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/KafkaPrincipalTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/OperationTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/PermissionTypeTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/ResourceTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/ResourceTypeTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
 04a02e08a54139ee1a298c5354731bae009efef3 
 
 Diff: https://reviews.apache.org/r/34492/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Parth Brahmbhatt
 




Re: Review Request 34492: Patch for KAFKA-2210

2015-07-21 Thread Parth Brahmbhatt


 On July 21, 2015, 1:50 a.m., Edward Ribeiro wrote:
  core/src/main/scala/kafka/security/auth/KafkaPrincipal.scala, line 28
  https://reviews.apache.org/r/34492/diff/8/?file=1017298#file1017298line28
 
  Please, put a space between ``if`` and ``(``.

fixed.


- Parth


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34492/#review92353
---


On July 22, 2015, 12:08 a.m., Parth Brahmbhatt wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/34492/
 ---
 
 (Updated July 22, 2015, 12:08 a.m.)
 
 
 Review request for kafka.
 
 
 Bugs: KAFKA-2210
 https://issues.apache.org/jira/browse/KAFKA-2210
 
 
 Repository: kafka
 
 
 Description
 ---
 
 Addressing review comments from Jun.
 
 
 Adding CREATE check for offset topic only if the topic does not exist already.
 
 
 Addressing some more comments.
 
 
 Removing acl.json file
 
 
 Moving PermissionType to trait instead of enum. Following the convention for 
 defining constants.
 
 
 Adding authorizer.config.path back.
 
 
 Addressing more comments from Jun.
 
 
 Addressing more comments.
 
 
 Diffs
 -
 
   core/src/main/scala/kafka/api/OffsetRequest.scala 
 f418868046f7c99aefdccd9956541a0cb72b1500 
   core/src/main/scala/kafka/common/AuthorizationException.scala PRE-CREATION 
   core/src/main/scala/kafka/common/ErrorMapping.scala 
 c75c68589681b2c9d6eba2b440ce5e58cddf6370 
   core/src/main/scala/kafka/security/auth/Acl.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Authorizer.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/KafkaPrincipal.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Operation.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/PermissionType.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/Resource.scala PRE-CREATION 
   core/src/main/scala/kafka/security/auth/ResourceType.scala PRE-CREATION 
   core/src/main/scala/kafka/server/KafkaApis.scala 
 18f5b5b895af1469876b2223841fd90a2dd255e0 
   core/src/main/scala/kafka/server/KafkaConfig.scala 
 dbe170f87331f43e2dc30165080d2cb7dfe5fdbf 
   core/src/main/scala/kafka/server/KafkaServer.scala 
 18917bc4464b9403b16d85d20c3fd4c24893d1d3 
   core/src/test/scala/unit/kafka/security/auth/AclTest.scala PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/KafkaPrincipalTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/OperationTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/PermissionTypeTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/ResourceTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/security/auth/ResourceTypeTest.scala 
 PRE-CREATION 
   core/src/test/scala/unit/kafka/server/KafkaConfigConfigDefTest.scala 
 04a02e08a54139ee1a298c5354731bae009efef3 
 
 Diff: https://reviews.apache.org/r/34492/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Parth Brahmbhatt
 



