[jira] [Commented] (KAFKA-1096) An old controller coming out of long GC could update its epoch to the latest controller's epoch

2014-06-11 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14028822#comment-14028822
 ] 

Sriharsha Chintalapani commented on KAFKA-1096:
---

Created reviewboard https://reviews.apache.org/r/22496/diff/
 against branch origin/trunk

> An old controller coming out of long GC could update its epoch to the latest 
> controller's epoch
> ---
>
> Key: KAFKA-1096
> URL: https://issues.apache.org/jira/browse/KAFKA-1096
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.0
>Reporter: Swapnil Ghike
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.2
>
> Attachments: KAFKA-1096.patch
>
>
> If a controller GCs for too long, we could have two controllers in the 
> cluster. The controller epoch is supposed to minimize the damage in such a 
> situation, as the brokers will reject the requests sent by the controller 
> with an older epoch.
> When the old controller is still in long GC, a new controller could be 
> elected. This will fire ControllerEpochListener on the old controller. When 
> it comes out of GC, its ControllerEpochListener will update its own epoch to 
> the new controller's epoch. So both controllers are now able to send out 
> requests with the same controller epoch until the old controller's 
> handleNewSession() can execute in the controller lock. 
> ControllerEpochListener does not seem necessary, so we can probably delete it.
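The fencing the report describes can be sketched as follows. This is a hedged illustration with hypothetical names, not Kafka's actual controller code: a broker tracks the highest controller epoch it has seen and rejects requests carrying an older epoch. The bug above defeats this check because the zombie controller's listener copies the new epoch before its requests go out.

```java
// Hedged sketch of controller-epoch fencing (hypothetical names, not Kafka's code).
// A broker accepts a controller request only if the request's epoch is at least
// as new as the highest epoch the broker has observed.
class EpochFence {
    private int highestControllerEpoch = 0;

    public synchronized boolean accept(int requestEpoch) {
        if (requestEpoch < highestControllerEpoch) {
            return false; // request from a stale (zombie) controller: reject
        }
        highestControllerEpoch = requestEpoch;
        return true;
    }
}
```

The failure mode in this JIRA is equivalent to the old controller calling accept with the *new* epoch: once ControllerEpochListener has copied the new controller's epoch, the old controller's requests are indistinguishable from legitimate ones.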



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (KAFKA-1096) An old controller coming out of long GC could update its epoch to the latest controller's epoch

2014-06-11 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-1096:
--

Attachment: KAFKA-1096.patch

> An old controller coming out of long GC could update its epoch to the latest 
> controller's epoch
> ---
>
> Key: KAFKA-1096
> URL: https://issues.apache.org/jira/browse/KAFKA-1096
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.8.0
>Reporter: Swapnil Ghike
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.2
>
> Attachments: KAFKA-1096.patch
>
>
> If a controller GCs for too long, we could have two controllers in the 
> cluster. The controller epoch is supposed to minimize the damage in such a 
> situation, as the brokers will reject the requests sent by the controller 
> with an older epoch.
> When the old controller is still in long GC, a new controller could be 
> elected. This will fire ControllerEpochListener on the old controller. When 
> it comes out of GC, its ControllerEpochListener will update its own epoch to 
> the new controller's epoch. So both controllers are now able to send out 
> requests with the same controller epoch until the old controller's 
> handleNewSession() can execute in the controller lock. 
> ControllerEpochListener does not seem necessary, so we can probably delete it.





Review Request 22496: Patch for KAFKA-1096

2014-06-11 Thread Sriharsha Chintalapani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/22496/
---

Review request for kafka.


Bugs: KAFKA-1096
https://issues.apache.org/jira/browse/KAFKA-1096


Repository: kafka


Description
---

KAFKA-1096. An old controller coming out of long GC could update its epoch to 
the latest controller's epoch.


Diffs
-

  core/src/main/scala/kafka/controller/KafkaController.scala 
8af48ab500779d3d851d25050e1308f5e7b588a6 

Diff: https://reviews.apache.org/r/22496/diff/


Testing
---


Thanks,

Sriharsha Chintalapani



Re: question about produce requests and responses

2014-06-11 Thread Jun Rao
The topic/partition ordering in the producer response may not be the same
as that in the request. In each request, there is supposed to be one
instance per topic/partition.
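Given that ordering is not guaranteed, a client should match response entries back to the request by key rather than by position. A minimal sketch, with illustrative types rather than the actual Kafka client API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: index per-partition ACKs by a (topic, partition) key so that the
// ordering of entries in the produce response does not matter.
class ProduceAckIndex {
    private final Map<String, Short> errorByPartition = new HashMap<>();

    public void record(String topic, int partition, short errorCode) {
        errorByPartition.put(topic + "-" + partition, errorCode);
    }

    public Short errorFor(String topic, int partition) {
        return errorByPartition.get(topic + "-" + partition);
    }
}
```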

Thanks,

Jun


On Wed, Jun 11, 2014 at 5:27 PM, Dave Peterson 
wrote:

> Hello, I have a question about produce requests and responses.
> I send a produce request containing multiple topics, and get
> back a response from the broker containing all topics from the
> request.  However, the ordering of the topics in the response
> differs from the ordering of the topics in the request.
>
> Is this expected behavior?  If so, then suppose I send a
> request in which the same topic appears twice.  To avoid
> ambiguity when processing the response, I'm guessing that I
> should either avoid sending multiple instances of the same
> topic in one request, or make sure the instances are
> distinguishable by the partition(s) they contain.  Is this
> correct?
>
> Likewise, suppose I send a produce request containing a single
> instance of some topic, and the instance contains multiple
> partitions (each with a corresponding message set).  In the
> produce response, is it possible that the ordering of the ACKs
> for the partitions will differ from ordering of the partitions
> in the request?
>
>
> Thanks,
> Dave
>


Re: Review Request 22482: Patch for KAFKA-1491

2014-06-11 Thread Guozhang Wang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/22482/#review45456
---

Ship it!


Ship It!

- Guozhang Wang


On June 11, 2014, 11:18 p.m., Joel Koshy wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/22482/
> ---
> 
> (Updated June 11, 2014, 11:18 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-1491
> https://issues.apache.org/jira/browse/KAFKA-1491
> 
> 
> Repository: kafka
> 
> 
> Description
> ---
> 
> Always read coordinator information in consumer metadata response
> 
> 
> Diffs
> -
> 
>   core/src/main/scala/kafka/api/ConsumerMetadataResponse.scala 
> f8cf6c326d7dab001f21caa8bb0f20900b2090b1 
>   core/src/main/scala/kafka/client/ClientUtils.scala 
> ba5fbdcd9e60f953575e529325caf4c41e22f22d 
> 
> Diff: https://reviews.apache.org/r/22482/diff/
> 
> 
> Testing
> ---
> 
> 
> Thanks,
> 
> Joel Koshy
> 
>



question about produce requests and responses

2014-06-11 Thread Dave Peterson
Hello, I have a question about produce requests and responses.
I send a produce request containing multiple topics, and get
back a response from the broker containing all topics from the
request.  However, the ordering of the topics in the response
differs from the ordering of the topics in the request.

Is this expected behavior?  If so, then suppose I send a
request in which the same topic appears twice.  To avoid
ambiguity when processing the response, I'm guessing that I
should either avoid sending multiple instances of the same
topic in one request, or make sure the instances are
distinguishable by the partition(s) they contain.  Is this
correct?

Likewise, suppose I send a produce request containing a single
instance of some topic, and the instance contains multiple
partitions (each with a corresponding message set).  In the
produce response, is it possible that the ordering of the ACKs
for the partitions will differ from ordering of the partitions
in the request?


Thanks,
Dave


[jira] [Commented] (KAFKA-1491) ConsumerMetadataResponse is not read completely

2014-06-11 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14028570#comment-14028570
 ] 

Joel Koshy commented on KAFKA-1491:
---

Created reviewboard  against branch origin/trunk

> ConsumerMetadataResponse is not read completely
> ---
>
> Key: KAFKA-1491
> URL: https://issues.apache.org/jira/browse/KAFKA-1491
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
> Attachments: KAFKA-1491.patch, KAFKA-1491.patch
>
>
> This is a regression after KAFKA-1437
> The broker always populates the coordinator broker field, but the consumer 
> may do a partial read if error code is non-zero. It should always read the 
> field or we will probably end up with a buffer overflow exception of some 
> sort when reading from a response that has a non-zero error code.





[jira] [Commented] (KAFKA-1491) ConsumerMetadataResponse is not read completely

2014-06-11 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14028571#comment-14028571
 ] 

Joel Koshy commented on KAFKA-1491:
---

Created reviewboard https://reviews.apache.org/r/22482/diff/
 against branch origin/trunk

> ConsumerMetadataResponse is not read completely
> ---
>
> Key: KAFKA-1491
> URL: https://issues.apache.org/jira/browse/KAFKA-1491
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
> Attachments: KAFKA-1491.patch, KAFKA-1491.patch
>
>
> This is a regression after KAFKA-1437
> The broker always populates the coordinator broker field, but the consumer 
> may do a partial read if error code is non-zero. It should always read the 
> field or we will probably end up with a buffer overflow exception of some 
> sort when reading from a response that has a non-zero error code.





Review Request 22482: Patch for KAFKA-1491

2014-06-11 Thread Joel Koshy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/22482/
---

Review request for kafka.


Bugs: KAFKA-1491
https://issues.apache.org/jira/browse/KAFKA-1491


Repository: kafka


Description
---

Always read coordinator information in consumer metadata response


Diffs
-

  core/src/main/scala/kafka/api/ConsumerMetadataResponse.scala 
f8cf6c326d7dab001f21caa8bb0f20900b2090b1 
  core/src/main/scala/kafka/client/ClientUtils.scala 
ba5fbdcd9e60f953575e529325caf4c41e22f22d 

Diff: https://reviews.apache.org/r/22482/diff/


Testing
---


Thanks,

Joel Koshy



[jira] [Updated] (KAFKA-1491) ConsumerMetadataResponse is not read completely

2014-06-11 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy updated KAFKA-1491:
--

Attachment: KAFKA-1491.patch

> ConsumerMetadataResponse is not read completely
> ---
>
> Key: KAFKA-1491
> URL: https://issues.apache.org/jira/browse/KAFKA-1491
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
> Attachments: KAFKA-1491.patch, KAFKA-1491.patch
>
>
> This is a regression after KAFKA-1437
> The broker always populates the coordinator broker field, but the consumer 
> may do a partial read if error code is non-zero. It should always read the 
> field or we will probably end up with a buffer overflow exception of some 
> sort when reading from a response that has a non-zero error code.





[jira] [Updated] (KAFKA-1491) ConsumerMetadataResponse is not read completely

2014-06-11 Thread Joel Koshy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Koshy updated KAFKA-1491:
--

Attachment: KAFKA-1491.patch

> ConsumerMetadataResponse is not read completely
> ---
>
> Key: KAFKA-1491
> URL: https://issues.apache.org/jira/browse/KAFKA-1491
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
> Attachments: KAFKA-1491.patch, KAFKA-1491.patch
>
>
> This is a regression after KAFKA-1437
> The broker always populates the coordinator broker field, but the consumer 
> may do a partial read if error code is non-zero. It should always read the 
> field or we will probably end up with a buffer overflow exception of some 
> sort when reading from a response that has a non-zero error code.





[jira] [Commented] (KAFKA-1491) ConsumerMetadataResponse is not read completely

2014-06-11 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14028556#comment-14028556
 ] 

Joel Koshy commented on KAFKA-1491:
---

So it turns out this is not as bad as I thought.

We read ConsumerMetadataResponses through a request channel in ClientUtils.
The entire response is received and we just call readFrom on the underlying
buffer. It's fine if we ignore the remaining bytes, since retries create a new
BoundedByteBufferReceive.

I verified this locally.

That said, we may as well change the code to always read the coordinator.
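The change amounts to consuming every field of the response unconditionally, so the buffer position always ends up past the full message. A minimal sketch; the field layout here is illustrative, not Kafka's actual wire format:

```java
import java.nio.ByteBuffer;

// Sketch: read all response fields even when the error code is non-zero,
// so the buffer is always fully consumed. Layout is illustrative only.
class AlwaysReadCoordinator {
    public static String readFrom(ByteBuffer buf) {
        short errorCode = buf.getShort();
        int coordinatorId = buf.getInt();   // read unconditionally
        int coordinatorPort = buf.getInt(); // read unconditionally
        return errorCode != 0 ? "error:" + errorCode
                              : coordinatorId + ":" + coordinatorPort;
    }
}
```

Reading the coordinator fields before branching on the error code is what prevents a partial read from leaving unconsumed bytes in the buffer.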



> ConsumerMetadataResponse is not read completely
> ---
>
> Key: KAFKA-1491
> URL: https://issues.apache.org/jira/browse/KAFKA-1491
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>
> This is a regression after KAFKA-1437
> The broker always populates the coordinator broker field, but the consumer 
> may do a partial read if error code is non-zero. It should always read the 
> field or we will probably end up with a buffer overflow exception of some 
> sort when reading from a response that has a non-zero error code.





[jira] [Updated] (KAFKA-1291) Make wrapper shell scripts for important tools

2014-06-11 Thread Sebastian Geller (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian Geller updated KAFKA-1291:


Status: Patch Available  (was: Open)

> Make wrapper shell scripts for important tools
> --
>
> Key: KAFKA-1291
> URL: https://issues.apache.org/jira/browse/KAFKA-1291
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.1
>Reporter: Jay Kreps
>  Labels: newbie, usability
> Fix For: 0.8.2
>
> Attachments: KAFKA-1291.patch, KAFKA-1291.patch
>
>
> It is nice to have a proper command for the important tools just to help with 
> discoverability. I noticed that mirror maker doesn't have such a wrapper. 
> Neither does consumer offset checker. It would be good to do an audit and 
> think of any tools that should have a wrapper that don't.





Re: Review Request 22479: KAFKA-1291: Make wrapper shell scripts for important tools

2014-06-11 Thread Sebastian Geller

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/22479/
---

(Updated June 11, 2014, 9:38 p.m.)


Review request for kafka.


Changes
---

Updated description.


Summary (updated)
-

KAFKA-1291: Make wrapper shell scripts for important tools


Bugs: KAFKA-1291
https://issues.apache.org/jira/browse/KAFKA-1291


Repository: kafka


Description (updated)
---

- Added wrapper scripts for most of the tools
- Added and updated windows wrapper scripts

Is this patch missing any important tool?


Diffs
-

  bin/kafka-consumer-offset-checker.sh PRE-CREATION 
  bin/kafka-get-partition-offset-shell.sh PRE-CREATION 
  bin/kafka-mirror-maker.sh PRE-CREATION 
  bin/kafka-replica-verification.sh PRE-CREATION 
  bin/kafka-verify-consumer-rebalance.sh PRE-CREATION 
  bin/windows/kafka-consumer-offset-checker.bat PRE-CREATION 
  bin/windows/kafka-consumer-perf-test.bat PRE-CREATION 
  bin/windows/kafka-get-partition-offset-shell.bat PRE-CREATION 
  bin/windows/kafka-mirror-maker.bat PRE-CREATION 
  bin/windows/kafka-preferred-replica-election.bat PRE-CREATION 
  bin/windows/kafka-producer-perf-test.bat PRE-CREATION 
  bin/windows/kafka-reassign-partitions.bat PRE-CREATION 
  bin/windows/kafka-replay-log-producer.bat PRE-CREATION 
  bin/windows/kafka-replica-verification.bat PRE-CREATION 
  bin/windows/kafka-simple-consumer-perf-test.bat PRE-CREATION 
  bin/windows/kafka-simple-consumer-shell.bat PRE-CREATION 
  bin/windows/kafka-verify-consumer-rebalance.bat PRE-CREATION 
  bin/windows/zookeeper-shell.bat PRE-CREATION 

Diff: https://reviews.apache.org/r/22479/diff/


Testing (updated)
---

- Executed all scripts on unix and windows


Thanks,

Sebastian Geller



[jira] [Updated] (KAFKA-1291) Make wrapper shell scripts for important tools

2014-06-11 Thread Sebastian Geller (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian Geller updated KAFKA-1291:


Attachment: KAFKA-1291.patch

> Make wrapper shell scripts for important tools
> --
>
> Key: KAFKA-1291
> URL: https://issues.apache.org/jira/browse/KAFKA-1291
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.1
>Reporter: Jay Kreps
>  Labels: newbie, usability
> Fix For: 0.8.2
>
> Attachments: KAFKA-1291.patch, KAFKA-1291.patch
>
>
> It is nice to have a proper command for the important tools just to help with 
> discoverability. I noticed that mirror maker doesn't have such a wrapper. 
> Neither does consumer offset checker. It would be good to do an audit and 
> think of any tools that should have a wrapper that don't.





[jira] [Commented] (KAFKA-1291) Make wrapper shell scripts for important tools

2014-06-11 Thread Sebastian Geller (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14028433#comment-14028433
 ] 

Sebastian Geller commented on KAFKA-1291:
-

Created reviewboard https://reviews.apache.org/r/22479/diff/
 against branch origin/trunk

> Make wrapper shell scripts for important tools
> --
>
> Key: KAFKA-1291
> URL: https://issues.apache.org/jira/browse/KAFKA-1291
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.1
>Reporter: Jay Kreps
>  Labels: newbie, usability
> Fix For: 0.8.2
>
> Attachments: KAFKA-1291.patch, KAFKA-1291.patch
>
>
> It is nice to have a proper command for the important tools just to help with 
> discoverability. I noticed that mirror maker doesn't have such a wrapper. 
> Neither does consumer offset checker. It would be good to do an audit and 
> think of any tools that should have a wrapper that don't.





[jira] [Updated] (KAFKA-1291) Make wrapper shell scripts for important tools

2014-06-11 Thread Sebastian Geller (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sebastian Geller updated KAFKA-1291:


Attachment: KAFKA-1291.patch

> Make wrapper shell scripts for important tools
> --
>
> Key: KAFKA-1291
> URL: https://issues.apache.org/jira/browse/KAFKA-1291
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.1
>Reporter: Jay Kreps
>  Labels: newbie, usability
> Fix For: 0.8.2
>
> Attachments: KAFKA-1291.patch, KAFKA-1291.patch
>
>
> It is nice to have a proper command for the important tools just to help with 
> discoverability. I noticed that mirror maker doesn't have such a wrapper. 
> Neither does consumer offset checker. It would be good to do an audit and 
> think of any tools that should have a wrapper that don't.





[jira] [Commented] (KAFKA-1291) Make wrapper shell scripts for important tools

2014-06-11 Thread Sebastian Geller (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14028431#comment-14028431
 ] 

Sebastian Geller commented on KAFKA-1291:
-

Created reviewboard  against branch origin/trunk

> Make wrapper shell scripts for important tools
> --
>
> Key: KAFKA-1291
> URL: https://issues.apache.org/jira/browse/KAFKA-1291
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.1
>Reporter: Jay Kreps
>  Labels: newbie, usability
> Fix For: 0.8.2
>
> Attachments: KAFKA-1291.patch, KAFKA-1291.patch
>
>
> It is nice to have a proper command for the important tools just to help with 
> discoverability. I noticed that mirror maker doesn't have such a wrapper. 
> Neither does consumer offset checker. It would be good to do an audit and 
> think of any tools that should have a wrapper that don't.





Review Request 22479: Patch for KAFKA-1291

2014-06-11 Thread Sebastian Geller

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/22479/
---

Review request for kafka.


Bugs: KAFKA-1291
https://issues.apache.org/jira/browse/KAFKA-1291


Repository: kafka


Description
---

Fixed typo for Windows Java heap options


Diffs
-

  bin/kafka-consumer-offset-checker.sh PRE-CREATION 
  bin/kafka-get-partition-offset-shell.sh PRE-CREATION 
  bin/kafka-mirror-maker.sh PRE-CREATION 
  bin/kafka-replica-verification.sh PRE-CREATION 
  bin/kafka-verify-consumer-rebalance.sh PRE-CREATION 
  bin/windows/kafka-consumer-offset-checker.bat PRE-CREATION 
  bin/windows/kafka-consumer-perf-test.bat PRE-CREATION 
  bin/windows/kafka-get-partition-offset-shell.bat PRE-CREATION 
  bin/windows/kafka-mirror-maker.bat PRE-CREATION 
  bin/windows/kafka-preferred-replica-election.bat PRE-CREATION 
  bin/windows/kafka-producer-perf-test.bat PRE-CREATION 
  bin/windows/kafka-reassign-partitions.bat PRE-CREATION 
  bin/windows/kafka-replay-log-producer.bat PRE-CREATION 
  bin/windows/kafka-replica-verification.bat PRE-CREATION 
  bin/windows/kafka-simple-consumer-perf-test.bat PRE-CREATION 
  bin/windows/kafka-simple-consumer-shell.bat PRE-CREATION 
  bin/windows/kafka-verify-consumer-rebalance.bat PRE-CREATION 
  bin/windows/zookeeper-shell.bat PRE-CREATION 

Diff: https://reviews.apache.org/r/22479/diff/


Testing
---


Thanks,

Sebastian Geller



[jira] [Created] (KAFKA-1491) ConsumerMetadataResponse is not read completely

2014-06-11 Thread Joel Koshy (JIRA)
Joel Koshy created KAFKA-1491:
-

 Summary: ConsumerMetadataResponse is not read completely
 Key: KAFKA-1491
 URL: https://issues.apache.org/jira/browse/KAFKA-1491
 Project: Kafka
  Issue Type: Bug
Reporter: Joel Koshy


This is a regression after KAFKA-1437

The broker always populates the coordinator broker field, but the consumer may 
do a partial read if error code is non-zero. It should always read the field or 
we will probably end up with a buffer overflow exception of some sort when 
reading from a response that has a non-zero error code.





[jira] [Commented] (KAFKA-1477) add authentication layer and initial JKS x509 implementation for brokers, producers and consumer for network communication

2014-06-11 Thread Ivan Lyutov (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14028352#comment-14028352
 ] 

Ivan Lyutov commented on KAFKA-1477:


Rajasekar, the previous patch failed to apply on the latest trunk version
(https://github.com/stealthly/kafka/tree/v0.8.2_KAFKA-1477), so I had to make
the patch manually.
There are still some items pending, such as making security off by default and
additional testing, so you can expect one more patch in the near future.

> add authentication layer and initial JKS x509 implementation for brokers, 
> producers and consumer for network communication
> --
>
> Key: KAFKA-1477
> URL: https://issues.apache.org/jira/browse/KAFKA-1477
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Joe Stein
>Assignee: Ivan Lyutov
> Fix For: 0.8.2
>
> Attachments: KAFKA-1477-binary.patch, KAFKA-1477.patch, 
> KAFKA-1477_2014-06-02_16:59:40.patch, KAFKA-1477_2014-06-02_17:24:26.patch, 
> KAFKA-1477_2014-06-03_13:46:17.patch
>
>






[jira] [Commented] (KAFKA-179) Log files always touched when broker is bounced

2014-06-11 Thread Joel Koshy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14028313#comment-14028313
 ] 

Joel Koshy commented on KAFKA-179:
--

It might have been fixed in KAFKA-615. Maybe you can do a quick test to confirm 
and close this as resolved-implemented.

> Log files always touched when broker is bounced
> ---
>
> Key: KAFKA-179
> URL: https://issues.apache.org/jira/browse/KAFKA-179
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Joel Koshy
>Priority: Minor
>  Labels: newbie
> Fix For: 0.8.2
>
>
> It looks like the latest log segment is always touched when the broker starts 
> up, regardless of whether it has corrupt data or not, which fudges the 
> segment's mtime. Minor issue, but I found it a bit misleading when trying to 
> verify a log cleanup setting in production. I think it should be as simple as 
> adding a guard in FileMessageSet's recover method to skip truncate if 
> validUpTo == the length of the segment. Will test this later.
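The guard suggested above can be sketched as a single predicate; the helper below is hypothetical, not FileMessageSet's actual code. The idea is to truncate (and thereby touch the file and update its mtime) only when recovery validated less data than the segment contains.

```java
// Sketch of the proposed guard (hypothetical names): only truncate the segment,
// and thereby update its mtime, when part of it failed validation.
class RecoveryGuard {
    public static boolean shouldTruncate(long validUpTo, long segmentLength) {
        return validUpTo < segmentLength;
    }
}
```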





[jira] [Commented] (KAFKA-179) Log files always touched when broker is bounced

2014-06-11 Thread Raul Castro Fernandez (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14028219#comment-14028219
 ] 

Raul Castro Fernandez commented on KAFKA-179:
-

The function recoverLog() in Log.scala checks whether the log has the 
CleanShutdownMarker. If it exists, then it returns without touching any 
MessageSet (lines 173-176). Does the description of the JIRA refer to this 
function?

> Log files always touched when broker is bounced
> ---
>
> Key: KAFKA-179
> URL: https://issues.apache.org/jira/browse/KAFKA-179
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Joel Koshy
>Priority: Minor
>  Labels: newbie
> Fix For: 0.8.2
>
>
> It looks like the latest log segment is always touched when the broker starts 
> up, regardless of whether it has corrupt data or not, which fudges the 
> segment's mtime. Minor issue, but I found it a bit misleading when trying to 
> verify a log cleanup setting in production. I think it should be as simple as 
> adding a guard in FileMessageSet's recover method to skip truncate if 
> validUpTo == the length of the segment. Will test this later.





[jira] [Commented] (KAFKA-1489) Global threshold on data retention size

2014-06-11 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14028184#comment-14028184
 ] 

Jay Kreps commented on KAFKA-1489:
--

Go for it!

One slight oddity to consider: different nodes will have different partitions, 
so the amount of data retained for different replicas of the same partition may 
vary quite a lot. A replica on a node with lots of data will retain little, 
while one on an emptier broker will retain lots. The current per-partition 
retention strategies are only approximately the same across nodes as well, but 
this will potentially be much more extreme.

In fact, in steady state, any partition movement will simultaneously cause data 
to get purged to free up space.

I don't think this is necessarily a problem, but we will need to warn people.
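A global threshold could be enforced with a pass like the following sketch. All names here are hypothetical, not an actual Kafka config or API: segments are walked oldest-first across all partitions and marked for deletion until the total retained bytes fit under the broker-wide limit.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a broker-wide retention pass (hypothetical names): delete the
// oldest segments first, regardless of partition, until total size is under
// the global threshold.
class GlobalRetention {
    public static List<Integer> oldestIndicesToDelete(long[] segmentSizesOldestFirst,
                                                      long thresholdBytes) {
        long total = 0;
        for (long s : segmentSizesOldestFirst) total += s;
        List<Integer> doomed = new ArrayList<>();
        for (int i = 0; i < segmentSizesOldestFirst.length && total > thresholdBytes; i++) {
            doomed.add(i);
            total -= segmentSizesOldestFirst[i];
        }
        return doomed;
    }
}
```

Because deletion is oldest-first globally, this is exactly the behavior Jay warns about: which partition loses data depends on what else the broker hosts.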

> Global threshold on data retention size
> ---
>
> Key: KAFKA-1489
> URL: https://issues.apache.org/jira/browse/KAFKA-1489
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.1.1
>Reporter: Andras Sereny
>Assignee: Jay Kreps
>  Labels: newbie
>
> Currently, Kafka has per topic settings to control the size of one single log 
> (log.retention.bytes). With lots of topics of different volume and as they 
> grow in number, it could become tedious to maintain topic level settings 
> applying to a single log. 
> Often, a chunk of disk space is dedicated to Kafka that hosts all logs 
> stored, so it'd make sense to have a configurable threshold to control how 
> much space *all* data in Kafka can take up.
> See also:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201406.mbox/browser
> http://mail-archives.apache.org/mod_mbox/kafka-users/201311.mbox/%3c20131107015125.gc9...@jkoshy-ld.linkedin.biz%3E





[jira] [Commented] (KAFKA-1489) Global threshold on data retention size

2014-06-11 Thread James Oliver (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14028159#comment-14028159
 ] 

James Oliver commented on KAFKA-1489:
-

I'll work this one if there are no objections.

> Global threshold on data retention size
> ---
>
> Key: KAFKA-1489
> URL: https://issues.apache.org/jira/browse/KAFKA-1489
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.1.1
>Reporter: Andras Sereny
>Assignee: Jay Kreps
>  Labels: newbie
>
> Currently, Kafka has per topic settings to control the size of one single log 
> (log.retention.bytes). With lots of topics of different volume and as they 
> grow in number, it could become tedious to maintain topic level settings 
> applying to a single log. 
> Often, a chunk of disk space is dedicated to Kafka that hosts all logs 
> stored, so it'd make sense to have a configurable threshold to control how 
> much space *all* data in Kafka can take up.
> See also:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201406.mbox/browser
> http://mail-archives.apache.org/mod_mbox/kafka-users/201311.mbox/%3c20131107015125.gc9...@jkoshy-ld.linkedin.biz%3E





[jira] [Commented] (KAFKA-1490) remove gradlew initial setup output from source distribution

2014-06-11 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14028125#comment-14028125
 ] 

Jakob Homan commented on KAFKA-1490:


We're also discussing this in SAMZA-283.

> remove gradlew initial setup output from source distribution
> 
>
> Key: KAFKA-1490
> URL: https://issues.apache.org/jira/browse/KAFKA-1490
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>Assignee: Ivan Lyutov
>Priority: Blocker
> Fix For: 0.8.2
>
>
> Our current source releases contains lots of stuff in the gradle folder we do 
> not need





[jira] [Updated] (KAFKA-1489) Global threshold on data retention size

2014-06-11 Thread Neha Narkhede (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neha Narkhede updated KAFKA-1489:
-

Labels: newbie  (was: )

> Global threshold on data retention size
> ---
>
> Key: KAFKA-1489
> URL: https://issues.apache.org/jira/browse/KAFKA-1489
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.1.1
>Reporter: Andras Sereny
>Assignee: Jay Kreps
>  Labels: newbie
>
> Currently, Kafka has per topic settings to control the size of one single log 
> (log.retention.bytes). With lots of topics of different volume and as they 
> grow in number, it could become tedious to maintain topic level settings 
> applying to a single log. 
> Often, a chunk of disk space is dedicated to Kafka that hosts all logs 
> stored, so it'd make sense to have a configurable threshold to control how 
> much space *all* data in Kafka can take up.
> See also:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201406.mbox/browser
> http://mail-archives.apache.org/mod_mbox/kafka-users/201311.mbox/%3c20131107015125.gc9...@jkoshy-ld.linkedin.biz%3E





[jira] [Commented] (KAFKA-1382) Update zkVersion on partition state update failures

2014-06-11 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14027984#comment-14027984
 ] 

Sriharsha Chintalapani commented on KAFKA-1382:
---

Updated reviewboard https://reviews.apache.org/r/21899/diff/
 against branch origin/trunk

> Update zkVersion on partition state update failures
> ---
>
> Key: KAFKA-1382
> URL: https://issues.apache.org/jira/browse/KAFKA-1382
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.2
>
> Attachments: KAFKA-1382.patch, KAFKA-1382_2014-05-30_21:19:21.patch, 
> KAFKA-1382_2014-05-31_15:50:25.patch, KAFKA-1382_2014-06-04_12:30:40.patch, 
> KAFKA-1382_2014-06-07_09:00:56.patch, KAFKA-1382_2014-06-09_18:23:42.patch, 
> KAFKA-1382_2014-06-11_09:37:22.patch
>
>
> Our updateIsr code is currently:
>   private def updateIsr(newIsr: Set[Replica]) {
>     debug("Updated ISR for partition [%s,%d] to %s".format(topic, partitionId, newIsr.mkString(",")))
>     val newLeaderAndIsr = new LeaderAndIsr(localBrokerId, leaderEpoch, newIsr.map(r => r.brokerId).toList, zkVersion)
>     // use the epoch of the controller that made the leadership decision, instead of the current controller epoch
>     val (updateSucceeded, newVersion) = ZkUtils.conditionalUpdatePersistentPath(zkClient,
>       ZkUtils.getTopicPartitionLeaderAndIsrPath(topic, partitionId),
>       ZkUtils.leaderAndIsrZkData(newLeaderAndIsr, controllerEpoch), zkVersion)
>     if (updateSucceeded) {
>       inSyncReplicas = newIsr
>       zkVersion = newVersion
>       trace("ISR updated to [%s] and zkVersion updated to [%d]".format(newIsr.mkString(","), zkVersion))
>     } else {
>       info("Cached zkVersion [%d] not equal to that in zookeeper, skip updating ISR".format(zkVersion))
>     }
>   }
> We encountered an interesting scenario recently when a large producer fully
> saturated the broker's NIC for over an hour. The large volume of data led to
> a number of ISR shrinks (and subsequent expands). The NIC saturation
> affected the zookeeper client heartbeats and led to a session timeout. The
> timeline was roughly as follows:
> - Attempt to expand ISR
> - Expansion written to zookeeper (confirmed in zookeeper transaction logs)
> - Session timeout after around 13 seconds (the configured timeout is 20
>   seconds) so that lines up.
> - zkclient reconnects to zookeeper (with the same session ID) and retries
>   the write - but uses the old zkVersion. This fails because the zkVersion
>   has already been updated (above).
> - The ISR expand keeps failing after that and the only way to get out of it
>   is to bounce the broker.
> In the above code, if the zkVersion is different we should probably update
> the cached version and even retry the expansion until it succeeds.
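The suggested fix above (refresh the cached version on a conditional-update failure and retry) can be sketched as follows. This is a minimal simulation, not Kafka code: VersionedStore is a hypothetical in-memory stand-in for ZooKeeper's conditional setData, and the class and method names are illustrative only.

```java
/**
 * Sketch of retrying a conditional update after refreshing a stale
 * cached version, mirroring the ISR-update fix described above.
 */
public class IsrUpdateSketch {

    /** In-memory stand-in for a znode with a version counter. */
    static class VersionedStore {
        private String data = "";
        private int version = 0;

        /** Succeeds only if expectedVersion matches, like ZooKeeper's setData. */
        synchronized boolean conditionalUpdate(String newData, int expectedVersion) {
            if (expectedVersion != version) return false;
            data = newData;
            version++;
            return true;
        }

        synchronized int currentVersion() { return version; }
    }

    private final VersionedStore store;
    private int cachedVersion = 0;   // analogous to the broker's cached zkVersion

    IsrUpdateSketch(VersionedStore store) { this.store = store; }

    /** Retry the update, refreshing the cached version on each failure. */
    boolean updateIsr(String newIsr, int maxRetries) {
        for (int i = 0; i < maxRetries; i++) {
            if (store.conditionalUpdate(newIsr, cachedVersion)) {
                cachedVersion = store.currentVersion();
                return true;
            }
            // The cached version is stale (e.g. a retried write after a
            // session timeout already bumped it): refresh and try again,
            // instead of failing forever until the broker is bounced.
            cachedVersion = store.currentVersion();
        }
        return false;
    }

    public static void main(String[] args) {
        VersionedStore zk = new VersionedStore();
        IsrUpdateSketch partition = new IsrUpdateSketch(zk);
        // Simulate the failure mode: another write bumps the version first,
        // so the broker's cached version (0) is now stale.
        zk.conditionalUpdate("1,2", 0);
        boolean ok = partition.updateIsr("1,2,3", 3);
        System.out.println(ok);  // prints true: the refresh-and-retry succeeds
    }
}
```

In the real code a bounded retry (or a read-back of the current zkVersion before giving up) would replace the hard failure that currently requires a broker bounce.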





[jira] [Updated] (KAFKA-1382) Update zkVersion on partition state update failures

2014-06-11 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-1382:
--

Attachment: KAFKA-1382_2014-06-11_09:37:22.patch

> Update zkVersion on partition state update failures
> ---
>
> Key: KAFKA-1382
> URL: https://issues.apache.org/jira/browse/KAFKA-1382
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joel Koshy
>Assignee: Sriharsha Chintalapani
> Fix For: 0.8.2
>
> Attachments: KAFKA-1382.patch, KAFKA-1382_2014-05-30_21:19:21.patch, 
> KAFKA-1382_2014-05-31_15:50:25.patch, KAFKA-1382_2014-06-04_12:30:40.patch, 
> KAFKA-1382_2014-06-07_09:00:56.patch, KAFKA-1382_2014-06-09_18:23:42.patch, 
> KAFKA-1382_2014-06-11_09:37:22.patch
>
>
> Our updateIsr code is currently:
>   private def updateIsr(newIsr: Set[Replica]) {
>     debug("Updated ISR for partition [%s,%d] to %s".format(topic, partitionId, newIsr.mkString(",")))
>     val newLeaderAndIsr = new LeaderAndIsr(localBrokerId, leaderEpoch, newIsr.map(r => r.brokerId).toList, zkVersion)
>     // use the epoch of the controller that made the leadership decision, instead of the current controller epoch
>     val (updateSucceeded, newVersion) = ZkUtils.conditionalUpdatePersistentPath(zkClient,
>       ZkUtils.getTopicPartitionLeaderAndIsrPath(topic, partitionId),
>       ZkUtils.leaderAndIsrZkData(newLeaderAndIsr, controllerEpoch), zkVersion)
>     if (updateSucceeded) {
>       inSyncReplicas = newIsr
>       zkVersion = newVersion
>       trace("ISR updated to [%s] and zkVersion updated to [%d]".format(newIsr.mkString(","), zkVersion))
>     } else {
>       info("Cached zkVersion [%d] not equal to that in zookeeper, skip updating ISR".format(zkVersion))
>     }
>   }
> We encountered an interesting scenario recently when a large producer fully
> saturated the broker's NIC for over an hour. The large volume of data led to
> a number of ISR shrinks (and subsequent expands). The NIC saturation
> affected the zookeeper client heartbeats and led to a session timeout. The
> timeline was roughly as follows:
> - Attempt to expand ISR
> - Expansion written to zookeeper (confirmed in zookeeper transaction logs)
> - Session timeout after around 13 seconds (the configured timeout is 20
>   seconds) so that lines up.
> - zkclient reconnects to zookeeper (with the same session ID) and retries
>   the write - but uses the old zkVersion. This fails because the zkVersion
>   has already been updated (above).
> - The ISR expand keeps failing after that and the only way to get out of it
>   is to bounce the broker.
> In the above code, if the zkVersion is different we should probably update
> the cached version and even retry the expansion until it succeeds.





Re: Review Request 21899: Patch for KAFKA-1382

2014-06-11 Thread Sriharsha Chintalapani

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/21899/
---

(Updated June 11, 2014, 4:37 p.m.)


Review request for kafka.


Bugs: KAFKA-1382
https://issues.apache.org/jira/browse/KAFKA-1382


Repository: kafka


Description (updated)
---

KAFKA-1382. Update zkVersion on partition state update failures.


KAFKA-1382. Update zkVersion on partition state update failures.


KAFKA-1382. Update zkVersion on partition state update failures.


KAFKA-1382. Update zkVersion on partition state update failures. added unit 
tests for ReplicationUtils


KAFKA-1382. Update zkVersion on partition state update failures. added unit 
tests for ReplicationUtils


KAFKA-1382. Update zkVersion on partition state update failures. added unit 
tests for ReplicationUtils.


KAFKA-1382. Update zkVersion on partition state update failures.


KAFKA-1382. Update zkVersion on partition state update failures.


KAFKA-1382. Update zkVersion on partition state update failures.


KAFKA-1382. Update zkVersion on partition state update failures.


Diffs (updated)
-

  core/src/main/scala/kafka/cluster/Partition.scala 
518d2df5ae702d8c0937e1f9603fd11a54e24be8 
  core/src/main/scala/kafka/controller/KafkaController.scala 
e776423b8a38da6f08b2262c8141abf2064d37d2 
  core/src/main/scala/kafka/controller/PartitionStateMachine.scala 
6457b56340a1b5440b07612f69dcffe4b051f919 
  core/src/main/scala/kafka/controller/ReplicaStateMachine.scala 
2f0f29d9b76d847700bb64d6d54515b6a926a253 
  core/src/main/scala/kafka/utils/ReplicationUtils.scala PRE-CREATION 
  core/src/main/scala/kafka/utils/ZkUtils.scala 
fcbe269b6057b45793ea95f357890d5d6922e8d4 
  core/src/test/scala/unit/kafka/utils/ReplicationUtilsTest.scala PRE-CREATION 

Diff: https://reviews.apache.org/r/21899/diff/


Testing
---


Thanks,

Sriharsha Chintalapani



Build failed in Jenkins: Kafka-trunk #206

2014-06-11 Thread Apache Jenkins Server
See 

Changes:

[jay.kreps] KAFKA-1326 Refactor Sender to support consumer.

--
[...truncated 949 lines...]
kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testOpenDeletesObsoleteFiles PASSED

kafka.log.LogTest > testAppendMessageWithNullPayload PASSED

kafka.log.LogTest > testCorruptLog PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.server.ReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.RequestPurgatoryTest > testRequestSatisfaction PASSED

kafka.server.RequestPurgatoryTest > testRequestExpiry PASSED

kafka.server.LogOffsetTest > testGetOffsetsForUnknownTopic PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeLatestTime PASSED

kafka.server.LogOffsetTest > testEmptyLogsGetOffsets PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeNow PASSED

kafka.server.LogOffsetTest > testGetOffsetsBeforeEarliestTime PASSED

kafka.server.SimpleFetchTest > testNonReplicaSeesHwWhenFetching PASSED

kafka.server.SimpleFetchTest > testReplicaSeesLeoWhenFetching PASSED

kafka.server.ServerShutdownTest > testCleanShutdown PASSED

kafka.server.ServerShutdownTest > testCleanShutdownWithDeleteTopicEnabled PASSED

kafka.server.LogRecoveryTest > testHWCheckpointNoFailuresSingleLogSegment PASSED

kafka.server.LogRecoveryTest > testHWCheckpointWithFailuresSingleLogSegment 
PASSED

kafka.server.LogRecoveryTest > testHWCheckpointNoFailuresMultipleLogSegments 
PASSED

kafka.server.LogRecoveryTest > testHWCheckpointWithFailuresMultipleLogSegments 
PASSED

kafka.server.OffsetCommitTest > testUpdateOffsets PASSED

kafka.server.OffsetCommitTest > testCommitAndFetchOffsets PASSED

kafka.server.OffsetCommitTest > testLargeMetadataPayload PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeHoursProvided PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeMinutesProvided PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeNoConfigProvided PASSED

kafka.server.KafkaConfigTest > testLogRetentionTimeBothMinutesAndHoursProvided 
PASSED

kafka.server.KafkaConfigTest > testAdvertiseDefaults PASSED

kafka.server.KafkaConfigTest > testAdvertiseConfigured PASSED

kafka.server.KafkaConfigTest > testUncleanLeaderElectionDefault PASSED

kafka.server.KafkaConfigTest > testUncleanElectionDisabled PASSED

kafka.server.KafkaConfigTest > testUncleanElectionEnabled PASSED

kafka.server.KafkaConfigTest > testUncleanElectionInvalid PASSED

kafka.server.AdvertiseBrokerTest > testBrokerAdvertiseToZK PASSED

kafka.server.DynamicConfigChangeTest > testConfigChange PASSED

kafka.server.DynamicConfigChangeTest > testConfigChangeOnNonExistingTopic PASSED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceSinglePartition PASSED

kafka.server.HighwatermarkPersistenceTest > 
testHighWatermarkPersistenceMultiplePartitions PASSED

kafka.server.IsrExpirationTest > testIsrExpirationForStuckFollowers PASSED

kafka.server.IsrExpirationTest > testIsrExpirationForSlowFollowers PASSED

kafka.server.ReplicaManagerTest > testHighWaterMarkDirectoryMapping PASSED

kafka.server.ReplicaManagerTest > testHighwaterMarkRelativeDirectoryMapping 
PASSED

kafka.server.LeaderElectionTest > testLeaderElectionAndEpoch PASSED

kafka.server.LeaderElectionTest > testLeaderElectionWithStaleControllerEpoch 
PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.TopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.TopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.TopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.TopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.RollingBounceTest > testRollingBounce PASSED

kafka.integration.FetcherTest > testFetcher PASSED

[jira] [Commented] (KAFKA-1490) remove gradlew initial setup output from source distribution

2014-06-11 Thread Jakob Homan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14027952#comment-14027952
 ] 

Jakob Homan commented on KAFKA-1490:


This is being discussed for other projects as well; see how Aurora dealt with 
this problem [on the incubator 
list|http://mail-archives.apache.org/mod_mbox/incubator-general/201406.mbox/%3CCADiKvVs%3DtKDbp3TWRnxds5dVepqcX4kWeYbj7xUx%2BZoDNM_Lyg%40mail.gmail.com%3E].

> remove gradlew initial setup output from source distribution
> 
>
> Key: KAFKA-1490
> URL: https://issues.apache.org/jira/browse/KAFKA-1490
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>Assignee: Ivan Lyutov
>Priority: Blocker
> Fix For: 0.8.2
>
>
> Our current source release contains lots of stuff in the gradle folder that 
> we do not need





[jira] [Resolved] (KAFKA-1316) Refactor Sender

2014-06-11 Thread Jay Kreps (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jay Kreps resolved KAFKA-1316.
--

Resolution: Fixed

> Refactor Sender
> ---
>
> Key: KAFKA-1316
> URL: https://issues.apache.org/jira/browse/KAFKA-1316
> Project: Kafka
>  Issue Type: Sub-task
>  Components: producer 
>Reporter: Jay Kreps
>Assignee: Jay Kreps
> Attachments: KAFKA-1316.patch, KAFKA-1316.patch, 
> KAFKA-1316_2014-06-03_11:15:38.patch, KAFKA-1316_2014-06-03_14:33:33.patch, 
> KAFKA-1316_2014-06-07_11:20:38.patch
>
>
> Currently most of the logic of the producer I/O thread is in Sender.java.
> However we will need to do a fair number of similar things in the new 
> consumer. Specifically:
>  - Track in-flight requests
>  - Fetch metadata
>  - Manage connection lifecycle
> It may be possible to refactor some of this into a helper class that can be 
> shared with the consumer. This will require some detailed thought.





[jira] [Resolved] (KAFKA-1456) Add LZ4 and LZ4C as a compression codec

2014-06-11 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein resolved KAFKA-1456.
--

Resolution: Fixed

> Add LZ4 and LZ4C as a compression codec
> ---
>
> Key: KAFKA-1456
> URL: https://issues.apache.org/jira/browse/KAFKA-1456
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Joe Stein
>  Labels: newbie
> Fix For: 0.8.2
>
> Attachments: KAFKA-1456.patch, KAFKA-1456_2014-05-19_15:01:10.patch, 
> KAFKA-1456_2014-05-19_16:39:01.patch, KAFKA-1456_2014-05-19_18:19:32.patch, 
> KAFKA-1456_2014-05-19_23:24:27.patch
>
>






[jira] [Commented] (KAFKA-1477) add authentication layer and initial JKS x509 implementation for brokers, producers and consumer for network communication

2014-06-11 Thread Rajasekar Elango (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14027817#comment-14027817
 ] 

Rajasekar Elango commented on KAFKA-1477:
-

Ivan, it looks like the patch you attached to the Jira yesterday was not 
uploaded to the review. Could you upload it? Can you also provide a short 
summary of the changes in the last patch?

> add authentication layer and initial JKS x509 implementation for brokers, 
> producers and consumer for network communication
> --
>
> Key: KAFKA-1477
> URL: https://issues.apache.org/jira/browse/KAFKA-1477
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Joe Stein
>Assignee: Ivan Lyutov
> Fix For: 0.8.2
>
> Attachments: KAFKA-1477-binary.patch, KAFKA-1477.patch, 
> KAFKA-1477_2014-06-02_16:59:40.patch, KAFKA-1477_2014-06-02_17:24:26.patch, 
> KAFKA-1477_2014-06-03_13:46:17.patch
>
>






[jira] [Created] (KAFKA-1490) remove gradlew initial setup output from source distribution

2014-06-11 Thread Joe Stein (JIRA)
Joe Stein created KAFKA-1490:


 Summary: remove gradlew initial setup output from source 
distribution
 Key: KAFKA-1490
 URL: https://issues.apache.org/jira/browse/KAFKA-1490
 Project: Kafka
  Issue Type: Bug
Reporter: Joe Stein
Assignee: Ivan Lyutov
Priority: Blocker
 Fix For: 0.8.2


Our current source release contains lots of stuff in the gradle folder that we 
do not need





[jira] [Updated] (KAFKA-1487) add test jars to gradle build for packaging and release

2014-06-11 Thread Joe Stein (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joe Stein updated KAFKA-1487:
-

 Priority: Critical  (was: Major)
Affects Version/s: 0.8.1.1
Fix Version/s: (was: 0.8.1.1)
   0.8.2
 Assignee: Ivan Lyutov
  Summary: add test jars to gradle build for packaging and release  
(was: Missing jars in 0.8.1.1 release)

> add test jars to gradle build for packaging and release
> ---
>
> Key: KAFKA-1487
> URL: https://issues.apache.org/jira/browse/KAFKA-1487
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.1.1
>Reporter: Guozhang Wang
>Assignee: Ivan Lyutov
>Priority: Critical
> Fix For: 0.8.2
>
>
> From Jakob Homan:
> "I don't see the 8.1.1 test jar in the ASF repo 
> (https://repository.apache.org/index.html#nexus-search;gav~org.apache.kafka~kafka_2.9.1kw,versionexpand),
>  nor the Scala 2.10 version of any of the jars.  Is this an oversight; if so 
> can we publish them?"





[jira] [Created] (KAFKA-1489) Global threshold on data retention size

2014-06-11 Thread Andras Sereny (JIRA)
Andras Sereny created KAFKA-1489:


 Summary: Global threshold on data retention size
 Key: KAFKA-1489
 URL: https://issues.apache.org/jira/browse/KAFKA-1489
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 0.8.1.1
Reporter: Andras Sereny
Assignee: Jay Kreps


Currently, Kafka has per-topic settings to control the size of a single log 
(log.retention.bytes). With lots of topics of different volumes, and as they grow 
in number, it could become tedious to maintain topic-level settings that apply to 
a single log. 

Often, a chunk of disk space is dedicated to Kafka that hosts all logs stored, 
so it'd make sense to have a configurable threshold to control how much space 
*all* data in Kafka can take up.

See also:
http://mail-archives.apache.org/mod_mbox/kafka-users/201406.mbox/browser
http://mail-archives.apache.org/mod_mbox/kafka-users/201311.mbox/%3c20131107015125.gc9...@jkoshy-ld.linkedin.biz%3E


