[jira] [Created] (KAFKA-2724) ZooKeeper authentication documentation

2015-11-02 Thread Flavio Junqueira (JIRA)
Flavio Junqueira created KAFKA-2724:
---

 Summary: ZooKeeper authentication documentation 
 Key: KAFKA-2724
 URL: https://issues.apache.org/jira/browse/KAFKA-2724
 Project: Kafka
  Issue Type: Sub-task
Reporter: Flavio Junqueira
Assignee: Flavio Junqueira
Priority: Blocker


Add documentation for ZooKeeper authentication.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2724) Document ZooKeeper authentication

2015-11-02 Thread Flavio Junqueira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flavio Junqueira updated KAFKA-2724:

Summary: Document ZooKeeper authentication   (was: ZooKeeper authentication 
documentation )

> Document ZooKeeper authentication 
> --
>
> Key: KAFKA-2724
> URL: https://issues.apache.org/jira/browse/KAFKA-2724
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
>Priority: Blocker
>
> Add documentation for ZooKeeper authentication.





SSL authorization mechanism

2015-11-02 Thread Lukasz.Debowczyk
Hi,

My company is currently looking at Kafka as a message broker. One of the key 
aspects is security. I'm currently looking at the authentication/authorization 
mechanisms in Kafka 0.9.0.0-SNAPSHOT. We have decided that SSL-based 
authentication/authorization will be sufficient for us at the beginning.
We have managed to get the mechanism working, but I have a couple of questions:


1)  On the page 
https://cwiki.apache.org/confluence/display/KAFKA/Security#Security-Authorization
 the username extraction mechanism is described like this: "When the client 
authenticates using SSL, the user name will be the first element in the Subject 
Alternate Name field of the client certificate." I found that this isn't 
implemented in the current Kafka sources. Will it be implemented in the future?

2)  I found that the username is currently a concatenation of standard 
certificate fields and looks like this: 
"CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown". That's OK 
for us, but it turned out that kafka.admin.AclCommand doesn't accept usernames 
containing commas, as commas are used to separate entries in the list of users. 
To get it working I had to change kafka.admin.AclCommand to accept commas in a 
username. The question is: am I doing something wrong, or is this an unfinished 
feature?
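For question 2, one workaround is to extract a single attribute (e.g. the CN) 
from the distinguished name before handing it to the ACL tooling, so no commas 
reach the user list. A minimal sketch using the standard javax.naming.ldap API 
(the `extractCn` helper is illustrative, not part of Kafka):

```java
import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

public class DnCn {
    // Parse an X.500 DN like the one Kafka derives from the client
    // certificate and return its CN attribute, or null if absent/malformed.
    static String extractCn(String dn) {
        try {
            for (Rdn rdn : new LdapName(dn).getRdns()) {
                if (rdn.getType().equalsIgnoreCase("CN")) {
                    return rdn.getValue().toString();
                }
            }
        } catch (InvalidNameException e) {
            // Treat a malformed DN as having no CN.
        }
        return null;
    }

    public static void main(String[] args) {
        String dn = "CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown";
        System.out.println(extractCn(dn)); // prints: writeuser
    }
}
```

Using LdapName for the parsing (rather than splitting on commas) also handles 
DNs whose attribute values themselves contain escaped commas.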

Kind regards
Łukasz Dębowczyk


unsubscribe

2015-11-02 Thread Sungju Bong



[jira] [Commented] (KAFKA-2724) Document ZooKeeper authentication

2015-11-02 Thread Flavio Junqueira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985606#comment-14985606
 ] 

Flavio Junqueira commented on KAFKA-2724:
-

A security.html file is being introduced in KAFKA-2681, and here I'll add the 
part related to ZooKeeper. 

> Document ZooKeeper authentication 
> --
>
> Key: KAFKA-2724
> URL: https://issues.apache.org/jira/browse/KAFKA-2724
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Add documentation for ZooKeeper authentication.





[jira] [Resolved] (KAFKA-2255) Missing documentation for max.in.flight.requests.per.connection

2015-11-02 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-2255.
-
Resolution: Fixed

> Missing documentation for max.in.flight.requests.per.connection
> ---
>
> Key: KAFKA-2255
> URL: https://issues.apache.org/jira/browse/KAFKA-2255
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Navina Ramesh
>Assignee: Aditya Auradkar
>
> Hi Kafka team,
> Samza team noticed that the documentation for 
> max.in.flight.requests.per.connection property for the java based producer is 
> missing in the 0.8.2 documentation. I checked the code and looks like this 
> config is still enforced. Can you please update the website to reflect the 
> same?
> Thanks!





[jira] [Commented] (KAFKA-2255) Missing documentation for max.in.flight.requests.per.connection

2015-11-02 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985657#comment-14985657
 ] 

Gwen Shapira commented on KAFKA-2255:
-

Actually, for 0.9.0.0 we will automatically generate the docs from the Config 
classes, so the documentation for all parameters will be included 
automatically.

No need to do anything here.

> Missing documentation for max.in.flight.requests.per.connection
> ---
>
> Key: KAFKA-2255
> URL: https://issues.apache.org/jira/browse/KAFKA-2255
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Navina Ramesh
>Assignee: Aditya Auradkar
>
> Hi Kafka team,
> Samza team noticed that the documentation for 
> max.in.flight.requests.per.connection property for the java based producer is 
> missing in the 0.8.2 documentation. I checked the code and looks like this 
> config is still enforced. Can you please update the website to reflect the 
> same?
> Thanks!





[jira] [Commented] (KAFKA-2658) Implement SASL/PLAIN

2015-11-02 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985663#comment-14985663
 ] 

Jun Rao commented on KAFKA-2658:


[~rsivaram], we had a chat with a security consulting firm last week. They 
strongly discourage supporting SASL/PLAIN in Kafka. The main reason is that the 
plain password is not encrypted during the wire transfer and can create a 
security loophole. Instead, it's better to support CRAM-MD5, which is more 
secure. Given that, I don't think we can include this in 0.9.0.0.

> Implement SASL/PLAIN
> 
>
> Key: KAFKA-2658
> URL: https://issues.apache.org/jira/browse/KAFKA-2658
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> KAFKA-1686 supports SASL/Kerberos using GSSAPI. We should enable more SASL 
> mechanisms. SASL/PLAIN would enable a simpler use of SASL, which along with 
> SSL provides a secure Kafka that uses username/password for client 
> authentication.
> SASL/PLAIN protocol and its uses are described in 
> [https://tools.ietf.org/html/rfc4616]. It is supported in Java.
> This should be implemented after KAFKA-1686. This task should also hopefully 
> enable simpler unit testing of the SASL code.





[jira] [Commented] (KAFKA-2441) SSL/TLS in official docs

2015-11-02 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985672#comment-14985672
 ] 

Gwen Shapira commented on KAFKA-2441:
-

[~harsha_ch], if you are busy, mind if I pick this one too?

> SSL/TLS in official docs
> 
>
> Key: KAFKA-2441
> URL: https://issues.apache.org/jira/browse/KAFKA-2441
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to add a section in the official documentation regarding SSL/TLS:
> http://kafka.apache.org/documentation.html
> There is already a wiki page where some of the information is already present:
> https://cwiki.apache.org/confluence/display/KAFKA/Deploying+SSL+for+Kafka





[GitHub] kafka pull request: KAFKA-2722: Improve ISR change propagation.

2015-11-02 Thread becketqin
GitHub user becketqin reopened a pull request:

https://github.com/apache/kafka/pull/402

KAFKA-2722: Improve ISR change propagation.

The patch has two changes:
1. Fixed a bug in the controller that caused it to send an 
UpdateMetadataRequest for all the partitions in the cluster.
2. Uses the following rules to propagate ISR changes: 1) if there are ISR 
changes pending propagation and the last ISR change was more than five seconds 
ago, propagate the changes; 2) if there was an ISR change at time T within the 
last five seconds, delay the propagation until T + 5s; 3) if the last 
propagation was more than one minute ago, ignore rule 2 and propagate any 
pending ISR changes.

This algorithm avoids a fixed configuration of the ISR propagation interval, 
as discussed in KIP-29.
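The three rules amount to a small decision function. A hedged sketch of that 
logic (class, method, and constant names are illustrative, not the patch's 
actual code):

```java
// Sketch of the ISR propagation decision rules described above.
// All times are in milliseconds.
public class IsrPropagationPolicy {
    static final long PROPAGATION_DELAY_MS = 5_000;  // rules 1 and 2: 5 s quiet period
    static final long MAX_DELAY_MS = 60_000;         // rule 3: never wait more than 1 min

    /**
     * @param now               current time
     * @param lastIsrChangeMs   time of the most recent pending ISR change
     * @param lastPropagationMs time ISR changes were last propagated
     * @param hasPendingChanges whether any ISR changes await propagation
     */
    static boolean shouldPropagate(long now, long lastIsrChangeMs,
                                   long lastPropagationMs, boolean hasPendingChanges) {
        if (!hasPendingChanges) return false;
        // Rules 1 and 2: propagate only once no change has arrived for 5 s.
        boolean quietPeriodPassed = now - lastIsrChangeMs >= PROPAGATION_DELAY_MS;
        // Rule 3: if the last propagation is over 1 min old, propagate regardless.
        boolean maxDelayExceeded = now - lastPropagationMs >= MAX_DELAY_MS;
        return quietPeriodPassed || maxDelayExceeded;
    }

    public static void main(String[] args) {
        // Change 2 s ago: rule 2 delays propagation until T + 5 s.
        System.out.println(shouldPropagate(10_000, 8_000, 0, true));  // false
        // Change 6 s ago: quiet period passed, propagate (rule 1).
        System.out.println(shouldPropagate(10_000, 4_000, 0, true));  // true
        // Recent change, but last propagation over 1 min ago (rule 3).
        System.out.println(shouldPropagate(70_000, 69_000, 0, true)); // true
    }
}
```

The effect is to batch bursts of ISR changes (e.g. during a rolling bounce) 
while bounding how long any single change can be deferred.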

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-2722

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/402.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #402


commit 13892856d806183536657f0c3ea2aa63b1f1c4f2
Author: Jiangjie Qin 
Date:   2015-11-02T01:26:27Z

Improve ISR change propagation.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---



[GitHub] kafka pull request: KAFKA-2722: Improve ISR change propagation.

2015-11-02 Thread becketqin
Github user becketqin closed the pull request at:

https://github.com/apache/kafka/pull/402




[jira] [Commented] (KAFKA-2681) SASL authentication in official docs

2015-11-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985669#comment-14985669
 ] 

ASF GitHub Bot commented on KAFKA-2681:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/401


> SASL authentication in official docs
> 
>
> Key: KAFKA-2681
> URL: https://issues.apache.org/jira/browse/KAFKA-2681
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to add a section in the official documentation regarding SASL 
> authentication:
> http://kafka.apache.org/documentation.html





[jira] [Updated] (KAFKA-2681) SASL authentication in official docs

2015-11-02 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2681:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 401
[https://github.com/apache/kafka/pull/401]

> SASL authentication in official docs
> 
>
> Key: KAFKA-2681
> URL: https://issues.apache.org/jira/browse/KAFKA-2681
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to add a section in the official documentation regarding SASL 
> authentication:
> http://kafka.apache.org/documentation.html





Build failed in Jenkins: kafka-trunk-jdk7 #744

2015-11-02 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2681: Added SASL documentation

--
[...truncated 67 lines...]
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:264:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:293:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^
:294:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
:380:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:115:
 value METADATA_FETCH_TIMEOUT_CONFIG in object ProducerConfig is deprecated: 
see corresponding Javadoc for more information.
props.put(ProducerConfig.METADATA_FETCH_TIMEOUT_CONFIG, 
config.metadataFetchTimeoutMs.toString)
 ^
:117:
 value TIMEOUT_CONFIG in object ProducerConfig is deprecated: see corresponding 
Javadoc for more information.
props.put(ProducerConfig.TIMEOUT_CONFIG, config.requestTimeoutMs.toString)
 ^
:121:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
  props.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "false")
   ^
:75:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
producerProps.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "true")
 ^
:194:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
  maybeSetDefaultProperty(producerProps, 
ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "true")
^
:389:
 class BrokerEndPoint in object UpdateMetadataRequest is deprecated: see 
corresponding Javadoc for more information.
  new UpdateMetadataRequest.BrokerEndPoint(brokerEndPoint.id, 
brokerEndPoint.host, brokerEndPoint.port)
^
:391:
 constructor 

[jira] [Updated] (KAFKA-2687) Add support for ListGroups and DescribeGroup APIs

2015-11-02 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2687:
---
Reviewer: Jun Rao

> Add support for ListGroups and DescribeGroup APIs
> -
>
> Key: KAFKA-2687
> URL: https://issues.apache.org/jira/browse/KAFKA-2687
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Since the new consumer currently has no persistence in Zookeeper (pending 
> outcome of KAFKA-2017), there is no way for administrators to investigate 
> group status including getting the list of members in the group and their 
> partition assignments. We therefore propose to modify GroupMetadataRequest 
> (previously known as ConsumerMetadataRequest) to return group metadata when 
> received by the respective group's coordinator. When received by another 
> broker, the request will be handled as before: by only returning coordinator 
> host and port information.
> {code}
> GroupMetadataRequest => GroupId IncludeMetadata
>   GroupId => String
>   IncludeMetadata => Boolean
> GroupMetadataResponse => ErrorCode Coordinator GroupMetadata
>   ErrorCode => int16
>   Coordinator => Id Host Port
> Id => int32
> Host => string
> Port => int32
>   GroupMetadata => State ProtocolType Generation Protocol Leader  Members
> State => String
> ProtocolType => String
> Generation => int32
> Protocol => String
> Leader => String
> Members => [Member MemberMetadata MemberAssignment]
>   Member => MemberIp ClientId
> MemberIp => String
> ClientId => String
>   MemberMetadata => Bytes
>   MemberAssignment => Bytes
> {code}
> The request schema includes a flag to indicate whether metadata is needed, 
> which saves clients from having to read all group metadata when they are just 
> trying to find the coordinator. This is important to reduce group overhead 
> for use cases which involve a large number of topic subscriptions (e.g. 
> mirror maker).
> Tools will use the protocol type to determine how to parse metadata. For 
> example, when the protocolType is "consumer", the tool can use 
> ConsumerProtocol to parse the member metadata as topic subscriptions and 
> partition assignments. 
> The detailed proposal can be found below.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-40%3A+ListGroups+and+DescribeGroup
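The request/response schema quoted above can be modeled on the client side 
roughly as follows. This is a hedged sketch of the shapes only; the class names 
are illustrative and are not Kafka's actual protocol classes:

```java
import java.util.List;

// Illustrative model of the GroupMetadataRequest/Response schema above.
public class GroupMetadataSketch {
    record Coordinator(int id, String host, int port) {}
    record Member(String memberIp, String clientId) {}
    record GroupMetadata(String state, String protocolType, int generation,
                         String protocol, String leader, List<Member> members) {}
    record GroupMetadataRequest(String groupId, boolean includeMetadata) {}
    record GroupMetadataResponse(short errorCode, Coordinator coordinator,
                                 GroupMetadata metadata /* null when not requested */) {}

    public static void main(String[] args) {
        // A client that only needs the coordinator skips the metadata payload,
        // which matters for groups with very large subscriptions (e.g. mirror maker).
        GroupMetadataRequest findCoordinator =
                new GroupMetadataRequest("mm-group", false);
        System.out.println(findCoordinator.includeMetadata()); // false
    }
}
```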





[jira] [Updated] (KAFKA-2722) Improve ISR change propagation

2015-11-02 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-2722:

Priority: Blocker  (was: Major)

> Improve ISR change propagation
> --
>
> Key: KAFKA-2722
> URL: https://issues.apache.org/jira/browse/KAFKA-2722
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Currently the ISR change propagation interval is hard-coded to 5 seconds, 
> which might still create a lot of ISR change propagation for a large cluster 
> in cases such as a rolling bounce. The patch uses a dynamic propagation 
> interval and fixes a performance bug in IsrChangeNotificationListener on the 
> controller.





[GitHub] kafka pull request: Improve ISR change propagation.

2015-11-02 Thread becketqin
Github user becketqin closed the pull request at:

https://github.com/apache/kafka/pull/402




[jira] [Commented] (KAFKA-2722) Improve ISR change propagation

2015-11-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985686#comment-14985686
 ] 

ASF GitHub Bot commented on KAFKA-2722:
---

Github user becketqin closed the pull request at:

https://github.com/apache/kafka/pull/402


> Improve ISR change propagation
> --
>
> Key: KAFKA-2722
> URL: https://issues.apache.org/jira/browse/KAFKA-2722
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Currently the ISR change propagation interval is hard-coded to 5 seconds, 
> which might still create a lot of ISR change propagation for a large cluster 
> in cases such as a rolling bounce. The patch uses a dynamic propagation 
> interval and fixes a performance bug in IsrChangeNotificationListener on the 
> controller.





[jira] [Commented] (KAFKA-2722) Improve ISR change propagation

2015-11-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985687#comment-14985687
 ] 

ASF GitHub Bot commented on KAFKA-2722:
---

GitHub user becketqin reopened a pull request:

https://github.com/apache/kafka/pull/402

KAFKA-2722: Improve ISR change propagation.

The patch has two changes:
1. Fixed a bug in the controller that caused it to send an 
UpdateMetadataRequest for all the partitions in the cluster.
2. Uses the following rules to propagate ISR changes: 1) if there are ISR 
changes pending propagation and the last ISR change was more than five seconds 
ago, propagate the changes; 2) if there was an ISR change at time T within the 
last five seconds, delay the propagation until T + 5s; 3) if the last 
propagation was more than one minute ago, ignore rule 2 and propagate any 
pending ISR changes.

This algorithm avoids a fixed configuration of the ISR propagation interval, 
as discussed in KIP-29.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-2722

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/402.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #402


commit 13892856d806183536657f0c3ea2aa63b1f1c4f2
Author: Jiangjie Qin 
Date:   2015-11-02T01:26:27Z

Improve ISR change propagation.




> Improve ISR change propagation
> --
>
> Key: KAFKA-2722
> URL: https://issues.apache.org/jira/browse/KAFKA-2722
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Currently the ISR change propagation interval is hard-coded to 5 seconds, 
> which might still create a lot of ISR change propagation for a large cluster 
> in cases such as a rolling bounce. The patch uses a dynamic propagation 
> interval and fixes a performance bug in IsrChangeNotificationListener on the 
> controller.





[jira] [Commented] (KAFKA-2580) Kafka Broker keeps file handles open for all log files (even if its not written to/read from)

2015-11-02 Thread Vinoth Chandar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985707#comment-14985707
 ] 

Vinoth Chandar commented on KAFKA-2580:
---

[~jkreps] ah ok. point taken :) 

> Kafka Broker keeps file handles open for all log files (even if its not 
> written to/read from)
> -
>
> Key: KAFKA-2580
> URL: https://issues.apache.org/jira/browse/KAFKA-2580
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Vinoth Chandar
>Assignee: Grant Henke
>
> We noticed this in one of our clusters where we stage logs for a longer 
> period of time. It appears that the Kafka broker keeps file handles open even 
> for non-active (not written to or read from) files. (In fact, there are some 
> threads going back to 2013: 
> http://grokbase.com/t/kafka/users/132p65qwcn/keeping-logs-forever) 
> Needless to say, this is a problem and forces us to either artificially bump 
> up the ulimit (it's already at 100K) or expand the cluster (even if we have 
> sufficient IO and everything). 
> Filing this ticket since I couldn't find anything similar. Very interested to 
> know if there are plans to address this (given how Samza's changelog topic is 
> meant to be a persistent large-state use case).  





[jira] [Commented] (KAFKA-2716) Make Kafka core not depend on log4j-appender

2015-11-02 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985750#comment-14985750
 ] 

Ashish K Singh commented on KAFKA-2716:
---

[~ijuma], [~gwenshap] I agree that not breaking existing usage could have been 
a reason for keeping the Log4jAppender dependency in core. However, do you 
think we could change that while moving to 0.9.0? The prime reason we moved 
Log4jAppender out of core was that users should not have to depend on core just 
to be able to use Log4jAppender. It seems counter-intuitive to me that we want 
people not to depend on core for Log4jAppender and yet still make it possible 
to depend on core for just Log4jAppender. I suggest we remove the Log4jAppender 
dependency from core. If we choose to do so, the next concern would be why the 
log4j-appender system tests were failing when Log4jAppender was removed from 
core's dependencies. I was able to figure out the reason: we need to add 
{{log4j-appender}} to the tools dependencies, as VerifiableLog4jAppender uses 
it. I have verified that we do not need the Log4jAppender dependency in core to 
make the system tests work. I will submit a PR shortly; feel free to try and 
review it.

> Make Kafka core not depend on log4j-appender
> 
>
> Key: KAFKA-2716
> URL: https://issues.apache.org/jira/browse/KAFKA-2716
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Investigate why core needs to depend on log4j-appender. AFAIK, there is no 
> real dependency; however, if the dependency is removed, the tests won't build. 





[jira] [Commented] (KAFKA-2658) Implement SASL/PLAIN

2015-11-02 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985751#comment-14985751
 ] 

Rajini Sivaram commented on KAFKA-2658:
---

[~junrao] As described in the RFC for SASL/PLAIN 
(https://tools.ietf.org/html/rfc4616), the PLAIN mechanism is intended for use 
with a secure transport protocol like TLS. I don't believe CRAM-MD5 is secure 
enough to use without TLS either. With TLS, the unencrypted password in 
SASL/PLAIN shouldn't be a concern.
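RFC 4616's point can be seen directly in the wire format: the PLAIN initial 
client response carries the password verbatim, which is why the mechanism is 
meant to run only inside an encrypted transport. A minimal sketch (the user 
and password values are made up):

```java
import java.nio.charset.StandardCharsets;

public class PlainInitialResponse {
    // RFC 4616: message = [authzid] UTF8NUL authcid UTF8NUL passwd
    static byte[] build(String authzid, String authcid, String passwd) {
        return (authzid + "\0" + authcid + "\0" + passwd)
                .getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] msg = build("", "alice", "secret");
        // The password bytes appear unencrypted in the message; without TLS,
        // anyone on the wire can read them.
        System.out.println(new String(msg, StandardCharsets.UTF_8).replace('\0', '|'));
        // prints: |alice|secret
    }
}
```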

> Implement SASL/PLAIN
> 
>
> Key: KAFKA-2658
> URL: https://issues.apache.org/jira/browse/KAFKA-2658
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> KAFKA-1686 supports SASL/Kerberos using GSSAPI. We should enable more SASL 
> mechanisms. SASL/PLAIN would enable a simpler use of SASL, which along with 
> SSL provides a secure Kafka that uses username/password for client 
> authentication.
> SASL/PLAIN protocol and its uses are described in 
> [https://tools.ietf.org/html/rfc4616]. It is supported in Java.
> This should be implemented after KAFKA-1686. This task should also hopefully 
> enable simpler unit testing of the SASL code.





Re: 0.9.0 release branch

2015-11-02 Thread Jason Gustafson
I added KAFKA-2691 as well, which improves client handling of authorization
errors.

-Jason

On Mon, Nov 2, 2015 at 10:25 AM, Becket Qin  wrote:

> Hi Jun,
>
> I added KAFKA-2722 as a blocker for 0.9. It fixes the ISR propagation
> scalability issue we saw.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Mon, Nov 2, 2015 at 9:16 AM, Jun Rao  wrote:
>
> > Hi, everyone,
> >
> > We are getting close to the 0.9.0 release. The current plan is to have
> the
> > following remaining 0.9.0 blocker issues resolved this week, cut the
> 0.9.0
> > release branch by Nov. 6 (Friday) and start the RC on Nov. 9 (Monday).
> >
> >
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened%2C%20%22Patch%20Available%22)%20AND%20priority%20%3D%20Blocker%20AND%20fixVersion%20%3D%200.9.0.0%20ORDER%20BY%20updated%20DESC
> >
> > Thanks,
> >
> > Jun
> >
>


[GitHub] kafka pull request: KAFKA-2681: Added SASL documentation. It doesn...

2015-11-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/401




Re: 0.9.0 release branch

2015-11-02 Thread Becket Qin
Hi Jun,

I added KAFKA-2722 as a blocker for 0.9. It fixes the ISR propagation
scalability issue we saw.

Thanks,

Jiangjie (Becket) Qin

On Mon, Nov 2, 2015 at 9:16 AM, Jun Rao  wrote:

> Hi, everyone,
>
> We are getting close to the 0.9.0 release. The current plan is to have the
> following remaining 0.9.0 blocker issues resolved this week, cut the 0.9.0
> release branch by Nov. 6 (Friday) and start the RC on Nov. 9 (Monday).
>
>
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened%2C%20%22Patch%20Available%22)%20AND%20priority%20%3D%20Blocker%20AND%20fixVersion%20%3D%200.9.0.0%20ORDER%20BY%20updated%20DESC
>
> Thanks,
>
> Jun
>


[jira] [Commented] (KAFKA-2580) Kafka Broker keeps file handles open for all log files (even if its not written to/read from)

2015-11-02 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985696#comment-14985696
 ] 

Jay Kreps commented on KAFKA-2580:
--

[~vinothchandar] All I'm saying is that you have to kind of do some back of the 
envelope math to see when the bookkeeping overhead of the LRU outweighs the 
additional FDs--for these O(#partitions) structures it's worth being thoughtful 
about memory usage etc.

> Kafka Broker keeps file handles open for all log files (even if its not 
> written to/read from)
> -
>
> Key: KAFKA-2580
> URL: https://issues.apache.org/jira/browse/KAFKA-2580
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Vinoth Chandar
>Assignee: Grant Henke
>
> We noticed this in one of our clusters where we stage logs for a longer 
> period of time. It appears that the Kafka broker keeps file handles open even 
> for non-active (not written to or read from) files. (In fact, there are some 
> threads going back to 2013: 
> http://grokbase.com/t/kafka/users/132p65qwcn/keeping-logs-forever) 
> Needless to say, this is a problem and forces us to either artificially bump 
> up the ulimit (it's already at 100K) or expand the cluster (even if we have 
> sufficient IO and everything). 
> Filing this ticket since I couldn't find anything similar. Very interested to 
> know if there are plans to address this (given how Samza's changelog topic is 
> meant to be a persistent large-state use case).  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
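Jay's point about back-of-the-envelope math can be made concrete with a quick sketch. All constants below are illustrative assumptions, not measured Kafka or kernel values; the point is only that the two O(#partitions) costs should be compared before trading open FDs for an LRU cache:

```python
# Hypothetical back-of-the-envelope comparison: memory attributable to
# keeping one file descriptor open per log segment vs. the bookkeeping
# overhead an LRU cache of open files would add. The per-unit byte
# costs are assumptions for illustration only.

def fd_cost_bytes(num_segments, bytes_per_open_fd=1024):
    """Rough kernel + JVM memory for keeping every segment's FD open."""
    return num_segments * bytes_per_open_fd

def lru_cost_bytes(num_segments, bytes_per_entry=64):
    """Rough per-entry overhead of an LRU map (hash entry + list node)."""
    return num_segments * bytes_per_entry

segments = 100_000  # e.g. a broker already near a 100K ulimit
print(fd_cost_bytes(segments) // 1024, "KiB for open FDs")
print(lru_cost_bytes(segments) // 1024, "KiB for LRU bookkeeping")
```

Under these assumed numbers the LRU bookkeeping is cheap relative to the FDs, but flipping the per-unit assumptions flips the conclusion, which is exactly why the math is worth doing per deployment.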


[jira] [Resolved] (KAFKA-2528) Quota Performance Evaluation

2015-11-02 Thread Dong Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin resolved KAFKA-2528.
-
Resolution: Fixed

> Quota Performance Evaluation
> 
>
> Key: KAFKA-2528
> URL: https://issues.apache.org/jira/browse/KAFKA-2528
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dong Lin
>Assignee: Dong Lin
> Attachments: QuotaPerformanceEvaluationRelease.pdf
>
>
> In this document we present the results of experiments we did at LinkedIn to 
> validate the basic functionality of quotas, as well as the performance 
> benefits of using quotas in a heterogeneous multi-tenant environment.





[GitHub] kafka pull request: KAFKA-2716: Make Kafka core not depend on log4...

2015-11-02 Thread SinghAsDev
GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/405

KAFKA-2716: Make Kafka core not depend on log4j-appender



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-2716

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/405.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #405


commit d4d69ab4fec81efd18eee68303291bb90dc4ca29
Author: Ashish Singh 
Date:   2015-11-02T18:54:28Z

KAFKA-2716: Make Kafka core not depend on log4j-appender




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2691) Improve handling of authorization failure during metadata refresh

2015-11-02 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-2691:
---
Priority: Blocker  (was: Major)

> Improve handling of authorization failure during metadata refresh
> -
>
> Key: KAFKA-2691
> URL: https://issues.apache.org/jira/browse/KAFKA-2691
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> There are two problems, one more severe than the other:
> 1. The consumer blocks indefinitely if there is non-transient authorization 
> failure during metadata refresh due to KAFKA-2391
> 2. We get a TimeoutException instead of an AuthorizationException in the 
> producer for the same case
> If the fix for KAFKA-2391 is to add a timeout, then we will have issue `2` in 
> both producer and consumer.





[jira] [Commented] (KAFKA-2716) Make Kafka core not depend on log4j-appender

2015-11-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985754#comment-14985754
 ] 

ASF GitHub Bot commented on KAFKA-2716:
---

GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/405

KAFKA-2716: Make Kafka core not depend on log4j-appender



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-2716

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/405.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #405


commit d4d69ab4fec81efd18eee68303291bb90dc4ca29
Author: Ashish Singh 
Date:   2015-11-02T18:54:28Z

KAFKA-2716: Make Kafka core not depend on log4j-appender




> Make Kafka core not depend on log4j-appender
> 
>
> Key: KAFKA-2716
> URL: https://issues.apache.org/jira/browse/KAFKA-2716
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Investigate why core needs to depend on log4j-appender. AFAIK, there is no 
> real dependency; however, if the dependency is removed, the tests won't build. 





[jira] [Updated] (KAFKA-2722) Improve ISR change propagation

2015-11-02 Thread Jiangjie Qin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin updated KAFKA-2722:

Fix Version/s: 0.9.0.0

> Improve ISR change propagation
> --
>
> Key: KAFKA-2722
> URL: https://issues.apache.org/jira/browse/KAFKA-2722
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.9.0.0
>
>
> Currently the ISR change propagation interval is hard-coded to 5 seconds; 
> this can still create a lot of ISR change propagation traffic for a large 
> cluster in cases such as a rolling bounce. The patch uses a dynamic 
> propagation interval and fixes a performance bug in 
> IsrChangeNotificationListener on the controller.



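The "dynamic propagation interval" described above can be sketched as: batch ISR changes and propagate only when the set is non-empty and either the changes have quiesced for a short period or a maximum delay has elapsed. The thresholds and names below are illustrative assumptions, not the actual values or code from the KAFKA-2722 patch:

```python
class IsrChangePropagator:
    """Toy sketch of a dynamic ISR-change propagation interval
    (illustrative thresholds; not the KAFKA-2722 implementation)."""

    def __init__(self, quiet_ms=5_000, max_delay_ms=60_000):
        self.quiet_ms = quiet_ms          # wait for a burst of changes to settle
        self.max_delay_ms = max_delay_ms  # but never delay longer than this
        self.pending = set()
        self.last_change_ms = 0
        self.last_propagate_ms = 0

    def record_change(self, partition, now_ms):
        self.pending.add(partition)
        self.last_change_ms = now_ms

    def maybe_propagate(self, now_ms):
        """Return the batch to propagate, or None if we should keep waiting."""
        if not self.pending:
            return None
        settled = now_ms - self.last_change_ms >= self.quiet_ms
        overdue = now_ms - self.last_propagate_ms >= self.max_delay_ms
        if settled or overdue:
            batch, self.pending = self.pending, set()
            self.last_propagate_ms = now_ms
            return batch
        return None
```

During a rolling bounce the continuous stream of ISR changes keeps resetting the quiet timer, so changes coalesce into a few large batches instead of one write every fixed interval.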


[jira] [Commented] (KAFKA-2441) SSL/TLS in official docs

2015-11-02 Thread Sriharsha Chintalapani (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985681#comment-14985681
 ] 

Sriharsha Chintalapani commented on KAFKA-2441:
---

[~gwenshap] Please go ahead. Thanks.

> SSL/TLS in official docs
> 
>
> Key: KAFKA-2441
> URL: https://issues.apache.org/jira/browse/KAFKA-2441
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to add a section in the official documentation regarding SSL/TLS:
> http://kafka.apache.org/documentation.html
> There is already a wiki page where some of the information is already present:
> https://cwiki.apache.org/confluence/display/KAFKA/Deploying+SSL+for+Kafka





[jira] [Commented] (KAFKA-2528) Quota Performance Evaluation

2015-11-02 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985697#comment-14985697
 ] 

Jay Kreps commented on KAFKA-2528:
--

[~lindong] Great!

> Quota Performance Evaluation
> 
>
> Key: KAFKA-2528
> URL: https://issues.apache.org/jira/browse/KAFKA-2528
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dong Lin
>Assignee: Dong Lin
> Attachments: QuotaPerformanceEvaluationRelease.pdf
>
>
> In this document we present the results of experiments we did at LinkedIn to 
> validate the basic functionality of quotas, as well as the performance 
> benefits of using quotas in a heterogeneous multi-tenant environment.





[GitHub] kafka pull request: KAFKA-2017: Persist Group Metadata and Assignm...

2015-11-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/386




Build failed in Jenkins: kafka-trunk-jdk7 #748

2015-11-02 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2017: Persist Group Metadata and Assignment before Responding

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-2 (docker Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 7c33475274cb6e65a8e8d907e7fef6e56bc8c8e6 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 7c33475274cb6e65a8e8d907e7fef6e56bc8c8e6
 > git rev-list e466ccd711ae00c5bb046c18aacf353b1a460dcd # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson8937991392631496031.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 23.926 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson2816441104387804983.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:contrib:clean UP-TO-DATE
:copycat:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:contrib:hadoop-consumer:clean UP-TO-DATE
:contrib:hadoop-producer:clean UP-TO-DATE
:copycat:api:clean UP-TO-DATE
:copycat:file:clean UP-TO-DATE
:copycat:json:clean UP-TO-DATE
:copycat:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 24.058 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Commented] (KAFKA-2017) Persist Coordinator State for Coordinator Failover

2015-11-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986850#comment-14986850
 ] 

ASF GitHub Bot commented on KAFKA-2017:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/386


> Persist Coordinator State for Coordinator Failover
> --
>
> Key: KAFKA-2017
> URL: https://issues.apache.org/jira/browse/KAFKA-2017
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Onur Karaman
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-2017.patch, KAFKA-2017_2015-05-20_09:13:39.patch, 
> KAFKA-2017_2015-05-21_19:02:47.patch
>
>
> When a coordinator fails, the group membership protocol tries to fail over to 
> a new coordinator without forcing all the consumers to rejoin their groups. 
> This is possible if the coordinator persists its state so that the state can 
> be transferred during coordinator failover. This state consists of most of 
> the information in GroupRegistry and ConsumerRegistry.





[jira] [Resolved] (KAFKA-2018) Add metadata to consumer registry info

2015-11-02 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2018.
--
Resolution: Fixed

This is solved as part of KAFKA-2017.

> Add metadata to consumer registry info
> --
>
> Key: KAFKA-2018
> URL: https://issues.apache.org/jira/browse/KAFKA-2018
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.10.0.0
>
>
> While working on the new consumer and the coordinator, I found some consumer 
> metadata that would be better to keep track of. For example, in the new 
> consumer the consumer id is assigned by the coordinator, which is just the 
> group name + index, so we have lost useful information such as the host name 
> in the consumer id. Adding a metadata field to the consumer registry info, as 
> we did in the consumer commit message, would be very useful in this case.
> Since the join group request protocol has not yet been exposed in a new 
> consumer release, I think we can just change its format without bumping the 
> request version. I am also wondering if a KIP is required for this change.





[jira] [Updated] (KAFKA-2017) Persist Coordinator State for Coordinator Failover

2015-11-02 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2017:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 386
[https://github.com/apache/kafka/pull/386]

> Persist Coordinator State for Coordinator Failover
> --
>
> Key: KAFKA-2017
> URL: https://issues.apache.org/jira/browse/KAFKA-2017
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Onur Karaman
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-2017.patch, KAFKA-2017_2015-05-20_09:13:39.patch, 
> KAFKA-2017_2015-05-21_19:02:47.patch
>
>
> When a coordinator fails, the group membership protocol tries to fail over to 
> a new coordinator without forcing all the consumers to rejoin their groups. 
> This is possible if the coordinator persists its state so that the state can 
> be transferred during coordinator failover. This state consists of most of 
> the information in GroupRegistry and ConsumerRegistry.





[jira] [Updated] (KAFKA-2018) Add metadata to consumer registry info

2015-11-02 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2018:
-
Fix Version/s: (was: 0.10.0.0)
   0.9.0.0

> Add metadata to consumer registry info
> --
>
> Key: KAFKA-2018
> URL: https://issues.apache.org/jira/browse/KAFKA-2018
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.9.0.0
>
>
> While working on the new consumer and the coordinator, I found some consumer 
> metadata that would be better to keep track of. For example, in the new 
> consumer the consumer id is assigned by the coordinator, which is just the 
> group name + index, so we have lost useful information such as the host name 
> in the consumer id. Adding a metadata field to the consumer registry info, as 
> we did in the consumer commit message, would be very useful in this case.
> Since the join group request protocol has not yet been exposed in a new 
> consumer release, I think we can just change its format without bumping the 
> request version. I am also wondering if a KIP is required for this change.





[jira] [Commented] (KAFKA-2698) add paused API

2015-11-02 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986776#comment-14986776
 ] 

Jun Rao commented on KAFKA-2698:


[~guozhang], is this an 0.9.0 blocker?

> add paused API
> --
>
> Key: KAFKA-2698
> URL: https://issues.apache.org/jira/browse/KAFKA-2698
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Onur Karaman
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> org.apache.kafka.clients.consumer.Consumer tends to follow a pattern of 
> having an action API paired with a query API:
> subscribe() has subscription()
> assign() has assignment()
> There's no analogous API for pause.
> Should there be a paused() API returning Set<TopicPartition>?



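The action/query pairing described in the ticket can be illustrated with a toy facade. This is a hypothetical Python sketch, not the real KafkaConsumer API; it only shows the symmetry being asked for, with paused() as the query counterpart to pause():

```python
class ConsumerFacade:
    """Toy sketch of the action/query pairing discussed in KAFKA-2698:
    assign()/assignment() and pause()/paused(). Not the real consumer."""

    def __init__(self):
        self._assignment = set()
        self._paused = set()

    def assign(self, partitions):
        self._assignment = set(partitions)

    def assignment(self):
        return set(self._assignment)

    def pause(self, *partitions):
        # Pausing suppresses fetches but does not change the assignment.
        self._paused.update(p for p in partitions if p in self._assignment)

    def resume(self, *partitions):
        self._paused.difference_update(partitions)

    def paused(self):
        # The query API proposed in the ticket: which partitions are paused?
        return set(self._paused)
```

Without paused(), a caller that wants to resume everything it previously paused has to track that set itself; the query API makes the consumer the single source of truth.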


[jira] [Updated] (KAFKA-2518) Update NOTICE file

2015-11-02 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2518:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 404
[https://github.com/apache/kafka/pull/404]

> Update NOTICE file
> --
>
> Key: KAFKA-2518
> URL: https://issues.apache.org/jira/browse/KAFKA-2518
> Project: Kafka
>  Issue Type: Bug
>  Components: packaging
>Reporter: Flavio Junqueira
>Assignee: Gwen Shapira
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> According to this page from ASF legal:
> {noformat}
> http://www.apache.org/legal/src-headers.html
> {noformat}
> the years in the NOTICE header should reflect the product name and years of 
> distribution of the current and past versions of the product. The current 
> NOTICE file says only 2012. 





[jira] [Commented] (KAFKA-2518) Update NOTICE file

2015-11-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985792#comment-14985792
 ] 

ASF GitHub Bot commented on KAFKA-2518:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/404


> Update NOTICE file
> --
>
> Key: KAFKA-2518
> URL: https://issues.apache.org/jira/browse/KAFKA-2518
> Project: Kafka
>  Issue Type: Bug
>  Components: packaging
>Reporter: Flavio Junqueira
>Assignee: Gwen Shapira
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> According to this page from ASF legal:
> {noformat}
> http://www.apache.org/legal/src-headers.html
> {noformat}
> the years in the NOTICE header should reflect the product name and years of 
> distribution of the current and past versions of the product. The current 
> NOTICE file says only 2012. 





[jira] [Updated] (KAFKA-2580) Kafka Broker keeps file handles open for all log files (even if its not written to/read from)

2015-11-02 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2580:
---
Assignee: (was: Grant Henke)

> Kafka Broker keeps file handles open for all log files (even if its not 
> written to/read from)
> -
>
> Key: KAFKA-2580
> URL: https://issues.apache.org/jira/browse/KAFKA-2580
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Vinoth Chandar
>
> We noticed this in one of our clusters where we stage logs for a longer 
> amount of time. It appears that the Kafka broker keeps file handles open even 
> for non-active (not written to or read from) files. (In fact, there are some 
> threads going back to 2013: 
> http://grokbase.com/t/kafka/users/132p65qwcn/keeping-logs-forever) 
> Needless to say, this is a problem and forces us to either artificially bump 
> up ulimit (it's already at 100K) or expand the cluster (even if we have 
> sufficient IO and everything). 
> Filing this ticket, since I couldn't find anything similar. Very interested to 
> know if there are plans to address this (given how Samza's changelog topic is 
> meant to be a persistent large-state use case).





[jira] [Commented] (KAFKA-2658) Implement SASL/PLAIN

2015-11-02 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985939#comment-14985939
 ] 

Jun Rao commented on KAFKA-2658:


[~rsivaram], yes, perhaps enforcing that SASL/PLAIN can only be used with TLS 
will work. Perhaps it's worth discussing that in a separate KIP so that we can 
get feedback from people more familiar with security. In any case, given the 
release timeline, it's probably too late to include this jira in 0.9.0.

> Implement SASL/PLAIN
> 
>
> Key: KAFKA-2658
> URL: https://issues.apache.org/jira/browse/KAFKA-2658
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> KAFKA-1686 supports SASL/Kerberos using GSSAPI. We should enable more SASL 
> mechanisms. SASL/PLAIN would enable a simpler use of SASL, which along with 
> SSL provides a secure Kafka that uses username/password for client 
> authentication.
> SASL/PLAIN protocol and its uses are described in 
> [https://tools.ietf.org/html/rfc4616]. It is supported in Java.
> This should be implemented after KAFKA-1686. This task should also hopefully 
> enable simpler unit testing of the SASL code.





[jira] [Commented] (KAFKA-2441) SSL/TLS in official docs

2015-11-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985779#comment-14985779
 ] 

ASF GitHub Bot commented on KAFKA-2441:
---

GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/406

KAFKA-2441: SSL/TLS in official docs



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-2441

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/406.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #406


commit f5dcf2fb69a44e3083973af362b2c337bcc48ff3
Author: Gwen Shapira 
Date:   2015-11-02T19:01:03Z

KAFKA-2441L: SSL/TLS in official docs




> SSL/TLS in official docs
> 
>
> Key: KAFKA-2441
> URL: https://issues.apache.org/jira/browse/KAFKA-2441
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to add a section in the official documentation regarding SSL/TLS:
> http://kafka.apache.org/documentation.html
> There is already a wiki page where some of the information is already present:
> https://cwiki.apache.org/confluence/display/KAFKA/Deploying+SSL+for+Kafka





Re: 0.9.0 release branch

2015-11-02 Thread Cliff Rhyne
Hi Jun,

I opened KAFKA-2725 based on my experience with duplicate message
processing with auto-commit off.  I think it's a fairly small change,
especially for someone familiar with the Kafka code base, but it makes a big
impact for clients not using auto-commit.  Can this be included in 0.9.0?

Thanks,
Cliff

On Mon, Nov 2, 2015 at 12:57 PM, Jason Gustafson  wrote:

> I added KAFKA-2691 as well, which improves client handling of authorization
> errors.
>
> -Jason
>
> On Mon, Nov 2, 2015 at 10:25 AM, Becket Qin  wrote:
>
> > Hi Jun,
> >
> > I added KAFKA-2722 as a blocker for 0.9. It fixes the ISR propagation
> > scalability issue we saw.
> >
> > Thanks,
> >
> > Jiangjie (Becket) Qin
> >
> > On Mon, Nov 2, 2015 at 9:16 AM, Jun Rao  wrote:
> >
> > > Hi, everyone,
> > >
> > > We are getting close to the 0.9.0 release. The current plan is to have
> > the
> > > following remaining 0.9.0 blocker issues resolved this week, cut the
> > 0.9.0
> > > release branch by Nov. 6 (Friday) and start the RC on Nov. 9 (Monday).
> > >
> > >
> > >
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened%2C%20%22Patch%20Available%22)%20AND%20priority%20%3D%20Blocker%20AND%20fixVersion%20%3D%200.9.0.0%20ORDER%20BY%20updated%20DESC
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> >
>



-- 
Cliff Rhyne
Software Engineering Lead
m: 760-917-7823
e: crh...@signal.co
signal.co


Cut Through the Noise

This e-mail and any files transmitted with it are for the sole use of the
intended recipient(s) and may contain confidential and privileged
information. Any unauthorized use of this email is strictly prohibited.
©2015 Signal. All rights reserved.


[GitHub] kafka pull request: KAFKA-2441: SSL/TLS in official docs

2015-11-02 Thread gwenshap
GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/406

KAFKA-2441: SSL/TLS in official docs



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-2441

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/406.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #406


commit f5dcf2fb69a44e3083973af362b2c337bcc48ff3
Author: Gwen Shapira 
Date:   2015-11-02T19:01:03Z

KAFKA-2441L: SSL/TLS in official docs






[GitHub] kafka pull request: KAFKA-2518: Update NOTICE file

2015-11-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/404




[jira] [Commented] (KAFKA-2719) Kafka classpath has grown too large and breaks some system tests

2015-11-02 Thread Geoff Anderson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985813#comment-14985813
 ] 

Geoff Anderson commented on KAFKA-2719:
---

[~rsivaram] I wonder if this problem is better addressed in the services rather 
than by changing the kafka-run-class script?


> Kafka classpath has grown too large and breaks some system tests
> 
>
> Key: KAFKA-2719
> URL: https://issues.apache.org/jira/browse/KAFKA-2719
> Project: Kafka
>  Issue Type: Bug
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.0.0
>
>
> The jars added under KAFKA-2369 make the Kafka command line used in system 
> tests longer than 4096 characters due to the extra jars in the classpath. 
> Since the ps command used to find processes in system tests truncates the 
> command line, some system tests are failing.



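The failure mode is easy to reproduce outside Kafka: when the process lister truncates the command line, a pattern that falls past the truncation point never matches. A minimal sketch, where the 4096 limit is the figure quoted in the issue and the jar paths are made up for illustration:

```python
PS_LIMIT = 4096  # truncation length quoted in the issue

def ps_style_truncate(cmdline):
    """Simulate a ps that only reports the first PS_LIMIT characters."""
    return cmdline[:PS_LIMIT]

def find_process(cmdlines, pattern):
    """Mimic grepping ps output for a main-class name."""
    return [c for c in cmdlines if pattern in ps_style_truncate(c)]

# A long classpath pushes the main class past the truncation point.
classpath = ":".join(f"/opt/kafka/libs/dep-{i}.jar" for i in range(200))
cmd = f"java -cp {classpath} kafka.Kafka /etc/kafka/server.properties"

print(len(cmd) > PS_LIMIT)                       # command exceeds the limit
print(find_process([cmd], "kafka.Kafka"))        # match fails: class name truncated away
```

This is why the fix can live either in the run script (shorter classpath) or, as Geoff suggests, in the test services (match on something that survives truncation).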


[jira] [Commented] (KAFKA-2723) Standardize new consumer exceptions

2015-11-02 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985921#comment-14985921
 ] 

Guozhang Wang commented on KAFKA-2723:
--

Thanks [~hachikuji], a few clarification questions:

1) Currently OffsetOutOfRangeException is extending RetriableException which 
extends ApiException, while NoOffsetForPartition extends KafkaException 
directly. Hence when we merge them into InvalidOffsetException it will be 
extending directly from KafkaException, right?

2) Currently AuthorizationException extends from ApiException, will it be 
extending directly from KafkaException then?

Otherwise LGTM.

> Standardize new consumer exceptions
> ---
>
> Key: KAFKA-2723
> URL: https://issues.apache.org/jira/browse/KAFKA-2723
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> The purpose of this ticket is to standardize and cleanup the exceptions 
> thrown by the new consumer to ensure 1) that exceptions are only raised when 
> there is no reasonable way of handling them internally,  2) that raised 
> exceptions are documented properly, 3) that exceptions provide enough 
> information for handling.
> For all blocking methods, the following exceptions are possible:
> - AuthorizationException (can only thrown if cluster is configured for 
> authorization)
> - WakeupException (only thrown with an explicit call to wakeup())
> - ApiException (invalid session timeout, invalid groupId, inconsistent 
> assignment strategy, etc.)
> Additionally, the following methods have special exceptions.
> poll():
> - SerializationException (problems deserializing keys/values)
> - InvalidOffsetException (only thrown if no reset policy is defined; includes 
> OffsetOutOfRange and NoOffsetForPartition)
> commit():
> - CommitFailedException (only thrown if group management is enabled and a 
> rebalance completed before the commit could finish)
> position():
> - InvalidOffsetException (same as above)



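Guozhang's two questions can be made concrete with a toy mirror of the proposed hierarchy. This is a hypothetical Python sketch of the Java classes under discussion; the parent choices for InvalidOffsetException and AuthorizationException are exactly the open questions, so they are assumptions here, not the final design:

```python
class KafkaException(Exception): pass
class ApiException(KafkaException): pass
class RetriableException(ApiException): pass

# Assumption (Guozhang's question 1): the merged InvalidOffsetException
# hangs directly off KafkaException rather than RetriableException.
class InvalidOffsetException(KafkaException): pass
class OffsetOutOfRangeException(InvalidOffsetException): pass
class NoOffsetForPartitionException(InvalidOffsetException): pass

# Assumption (question 2): AuthorizationException stays under ApiException.
class AuthorizationException(ApiException): pass

def classify(exc):
    """One handler now covers both offset failures from poll()/position()."""
    if isinstance(exc, InvalidOffsetException):
        return "reset-or-fail"   # apply reset policy or surface to the app
    if isinstance(exc, AuthorizationException):
        return "fatal"           # retrying will not help
    return "other"
```

The merge pays off in the handler: callers catch one InvalidOffsetException instead of two unrelated types, and moving it off RetriableException stops retry loops from swallowing it.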


[jira] [Updated] (KAFKA-2441) SSL/TLS in official docs

2015-11-02 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2441:

Status: Patch Available  (was: Open)

> SSL/TLS in official docs
> 
>
> Key: KAFKA-2441
> URL: https://issues.apache.org/jira/browse/KAFKA-2441
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to add a section in the official documentation regarding SSL/TLS:
> http://kafka.apache.org/documentation.html
> There is already a wiki page where some of the information is already present:
> https://cwiki.apache.org/confluence/display/KAFKA/Deploying+SSL+for+Kafka





Build failed in Jenkins: kafka-trunk-jdk8 #85

2015-11-02 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2681: Added SASL documentation

[wangguoz] KAFKA-2518: Update NOTICE file

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 6383593fed5215f31e050d5a459b161249ed8366 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 6383593fed5215f31e050d5a459b161249ed8366
 > git rev-list 9d8dd9f104aef3a9db9005d85bc55a15f851d258 # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson7659995961465689816.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 16.108 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson3044826868957626619.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean
:contrib:clean UP-TO-DATE
:copycat:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:log4j-appender:clean
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:contrib:hadoop-consumer:clean UP-TO-DATE
:contrib:hadoop-producer:clean UP-TO-DATE
:copycat:api:clean UP-TO-DATE
:copycat:file:clean UP-TO-DATE
:copycat:json:clean UP-TO-DATE
:copycat:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 13.892 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45


[jira] [Commented] (KAFKA-2580) Kafka Broker keeps file handles open for all log files (even if its not written to/read from)

2015-11-02 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985875#comment-14985875
 ] 

Grant Henke commented on KAFKA-2580:


Marking as Unassigned as I need to shift my focus to some other JIRAs.

> Kafka Broker keeps file handles open for all log files (even if its not 
> written to/read from)
> -
>
> Key: KAFKA-2580
> URL: https://issues.apache.org/jira/browse/KAFKA-2580
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Vinoth Chandar
>
> We noticed this in one of our clusters where we stage logs for a longer 
> amount of time. It appears that the Kafka broker keeps file handles open even 
> for non-active (not written to or read from) files. (In fact, there are some 
> threads going back to 2013: 
> http://grokbase.com/t/kafka/users/132p65qwcn/keeping-logs-forever) 
> Needless to say, this is a problem and forces us to either artificially bump 
> up the ulimit (it's already at 100K) or expand the cluster (even if we have 
> sufficient IO and everything). 
> Filing this ticket, since I could not find anything similar. Very interested 
> to know if there are plans to address this (given how Samza's changelog topic 
> is meant to be a persistent large-state use case).  





[jira] [Commented] (KAFKA-2719) Kafka classpath has grown too large and breaks some system tests

2015-11-02 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985881#comment-14985881
 ] 

Rajini Sivaram commented on KAFKA-2719:
---

[~geoffra] If system tests are guaranteed to start only one java process on 
each VM, then services could always grep for _java_ to find processes instead 
of grepping for the classname as they do now (since _java_ is at the start and 
classname is at the end). But in general, it is useful to see what is running 
when you run _ps_, and not just in system tests. Hence the PR. Also, the 
changes to kafka-run-class.sh only change the parts of the script that are used 
to run from the development build, not from a release build. It felt like the 
simplest fix (because it is contained in one file), but I don't have a strong 
opinion either way.

> Kafka classpath has grown too large and breaks some system tests
> 
>
> Key: KAFKA-2719
> URL: https://issues.apache.org/jira/browse/KAFKA-2719
> Project: Kafka
>  Issue Type: Bug
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.0.0
>
>
> The jars added under KAFKA-2369 make the Kafka command line used in system 
> tests much longer than 4096 characters due to the extra jars on the 
> classpath. Since the ps command used to find processes in system tests 
> truncates the command line, some system tests are failing.





Build failed in Jenkins: kafka-trunk-jdk7 #745

2015-11-02 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2518: Update NOTICE file

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-2 (docker Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 6383593fed5215f31e050d5a459b161249ed8366 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 6383593fed5215f31e050d5a459b161249ed8366
 > git rev-list 34775bd3ed7b8eb4ca3532d30a9d8e6bf7c0b738 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson6329595695208049479.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 17.007 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson7093552405301328506.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean
:contrib:clean UP-TO-DATE
:copycat:clean UP-TO-DATE
:core:clean
:examples:clean
:log4j-appender:clean
:streams:clean
:tools:clean
:contrib:hadoop-consumer:clean
:contrib:hadoop-producer:clean
:copycat:api:clean
:copycat:file:clean
:copycat:json:clean
:copycat:runtime:clean
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 17.808 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Created] (KAFKA-2725) high level consumer rebalances with auto-commit disabled should throw an exception

2015-11-02 Thread Cliff Rhyne (JIRA)
Cliff Rhyne created KAFKA-2725:
--

 Summary: high level consumer rebalances with auto-commit disabled 
should throw an exception
 Key: KAFKA-2725
 URL: https://issues.apache.org/jira/browse/KAFKA-2725
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.8.2.1
 Environment: Experienced on Java running in linux
Reporter: Cliff Rhyne


Auto-commit is a very resilient mode.  Drops in ZooKeeper sessions due to JVM 
garbage collection, network issues, rebalances or other interference are 
handled gracefully within the Kafka client.

Sessions can still drop due to unexpected GC or network behavior.  My proposal 
is to handle this drop better when auto-commit is turned off:

- If a rebalance or similar occurs (which causes the offset to get reverted in 
the client), check whether the client was assigned back to the same 
partition or a different one.  If it's the same partition, find the place last 
consumed (it doesn't do this today for us).  This makes for a graceful 
recovery.
- If the partition assignment changes (which can mean duplicate data is getting 
processed), throw an exception back to the application code.  This lets the 
application code handle this exception case with respect to the work it's doing 
(which might be transactional).  Failing "silently" (yes, it's still getting 
logged) is very dangerous in our situation.
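
The decision in the proposal — resume silently when an identical assignment comes back, surface an error when the assignment actually changed — reduces to a set comparison. A self-contained sketch, not Kafka API (partitions are represented as strings for brevity, and the helper and exception names are hypothetical):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class RebalanceCheck {
    // Thrown back to application code when the assignment changed, so the
    // application can react (e.g. roll back a transaction) instead of
    // silently reprocessing duplicates.
    public static class AssignmentChangedException extends RuntimeException {
        public AssignmentChangedException(String msg) { super(msg); }
    }

    // After a rebalance: if we got the same partitions back, recovery can be
    // graceful (seek to the last locally consumed position); otherwise raise.
    public static void afterRebalance(Set<String> before, Set<String> after) {
        if (!before.equals(after)) {
            throw new AssignmentChangedException(
                "assignment changed from " + before + " to " + after);
        }
        // Same partitions: this is where a client would seek back to the
        // last consumed offsets.
    }

    public static void main(String[] args) {
        Set<String> owned = new HashSet<>(Arrays.asList("topic-0", "topic-1"));
        afterRebalance(owned, new HashSet<>(owned)); // same assignment: no exception
        try {
            afterRebalance(owned, new HashSet<>(Arrays.asList("topic-2")));
            throw new AssertionError("expected AssignmentChangedException");
        } catch (AssignmentChangedException expected) {
            System.out.println("caught: " + expected.getMessage());
        }
    }
}
```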





[jira] [Commented] (KAFKA-2726) ntpdate causes vagrant provision to fail if ntp running

2015-11-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985998#comment-14985998
 ] 

ASF GitHub Bot commented on KAFKA-2726:
---

GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/407

KAFKA-2726: Fix port collision between ntpdate and ntp daemon

@gwenshap Can you take a quick look? I have verified the change allows 
successful `vagrant provision` even with ntp daemon already running on the vm.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka 
KAFKA-2726-ntp-port-collision

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/407.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #407


commit 4600ea1fa4a6dfe43b639210e31dae325413907d
Author: Geoff Anderson 
Date:   2015-11-02T20:42:26Z

Fix port collision between ntpdate and ntp daemon




> ntpdate causes vagrant provision to fail if ntp running
> ---
>
> Key: KAFKA-2726
> URL: https://issues.apache.org/jira/browse/KAFKA-2726
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>
> If ntp daemon is already running, vagrant provision can fail because of port 
> collision





[jira] [Commented] (KAFKA-2698) add paused API

2015-11-02 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986779#comment-14986779
 ] 

Guozhang Wang commented on KAFKA-2698:
--

I think it is OK to not include this ticket; on the other hand, the patch 
should be pretty simple, so it may not be a huge burden to include it.

> add paused API
> --
>
> Key: KAFKA-2698
> URL: https://issues.apache.org/jira/browse/KAFKA-2698
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Onur Karaman
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> org.apache.kafka.clients.consumer.Consumer tends to follow a pattern of 
> having an action API paired with a query API:
> subscribe() has subscription()
> assign() has assignment()
> There's no analogous API for pause.
> Should there be a paused() API returning Set?





[jira] [Commented] (KAFKA-2702) ConfigDef toHtmlTable() sorts in a way that is a bit confusing

2015-11-02 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986744#comment-14986744
 ] 

Jun Rao commented on KAFKA-2702:


[~granthenke], we can probably do the following.

1. Remove the required field.
2. Change all instances of non-required field to default to null.
3. Allow null as a default value.
4. Print the null default value properly in html output.
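
A toy sketch of those four steps under stated assumptions — this is not Kafka's actual ConfigDef, and the class and method names are made up; the point is only that a null default means "no default" and the doc output renders it explicitly instead of marking the key "required":

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MiniConfigDef {
    private final Map<String, Object> defaults = new LinkedHashMap<>();

    // Steps 2 and 3: every key gets a default, and null is a legal
    // default meaning "no default value" (replacing the "required" flag).
    public MiniConfigDef define(String name, Object defaultValue) {
        defaults.put(name, defaultValue);
        return this;
    }

    // Step 4: render the default for documentation output, printing null
    // explicitly rather than omitting the key or calling it required.
    public String renderDefault(String name) {
        Object d = defaults.get(name);
        return d == null ? "(none)" : d.toString();
    }

    public static void main(String[] args) {
        MiniConfigDef def = new MiniConfigDef()
            .define("group.id", null)
            .define("session.timeout.ms", 30000);
        System.out.println(def.renderDefault("group.id"));            // (none)
        System.out.println(def.renderDefault("session.timeout.ms"));  // 30000
    }
}
```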

> ConfigDef toHtmlTable() sorts in a way that is a bit confusing
> --
>
> Key: KAFKA-2702
> URL: https://issues.apache.org/jira/browse/KAFKA-2702
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
> Attachments: ConsumerConfig-After.html, ConsumerConfig-Before.html
>
>
> Because we put everything without a default first (without prioritizing), 
> critical parameters get placed below low-priority ones when they both have 
> no defaults. Some parameters have no default and are optional (the SASL 
> server settings in ConsumerConfig, for instance).
> Try printing ConsumerConfig parameters and see the mandatory group.id show up 
> as #15.
> I suggest sorting the no-default parameters by priority as well, or perhaps 
> adding a "REQUIRED" category that gets printed first no matter what.





Re: SSL authorization mechanizm

2015-11-02 Thread Jun Rao
Yes, by default we take the full SSL certificate attributes as the user
name. This may not be suitable for ACLs. We do allow the SSL user name to be
customized through a PrincipalBuilder. You can define a
customized PrincipalBuilder and pass it in
through "principal.builder.class". The customized PrincipalBuilder can
extract just the user attribute from the SSL certificate.
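
For illustration, the extraction step — pulling just the CN attribute out of a full distinguished name like the one in your question — can be done with the JDK's LdapName. This is only a sketch of what such a customized PrincipalBuilder could do internally; the surrounding Kafka interface is omitted, and the class name here is made up:

```java
import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

public class CnExtractor {
    // Extract the CN attribute from an X.500 distinguished name such as
    // "CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown".
    public static String extractCn(String dn) {
        try {
            for (Rdn rdn : new LdapName(dn).getRdns()) {
                if (rdn.getType().equalsIgnoreCase("CN")) {
                    return rdn.getValue().toString();
                }
            }
            return dn; // no CN attribute: fall back to the full DN
        } catch (InvalidNameException e) {
            return dn; // not a parseable DN: use it verbatim
        }
    }

    public static void main(String[] args) {
        System.out.println(extractCn(
            "CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown"));
        // prints: writeuser
    }
}
```

A CN-only principal also sidesteps the comma problem with the ACL tooling, since the commas live in the concatenated DN, not in the CN value itself.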

Thanks,

Jun

On Mon, Nov 2, 2015 at 1:19 AM,  wrote:

> Hi,
>
> My company is currently looking at Kafka as a message broker. One of the key
> aspects is security.  I'm currently looking at authentication/authorization
> mechanisms in Kafka 0.9.0.0-SNAPSHOT. We have decided that SSL-based
> authentication/authorization will be sufficient for us at the beginning.
> We have managed to get the mechanism working, but I have a couple of questions:
>
>
> 1)  In page
> https://cwiki.apache.org/confluence/display/KAFKA/Security#Security-Authorization
> you are describing username extraction mechanism like this: "When the
> client authenticates using SSL, the user name will be the first element in
> the Subject Alternate Name field of the client certificate.". I found it
> isn't implemented in the current Kafka sources. Will it be implemented in the
> future?
>
> 2)  I found that currently the username is a concatenation of standard
> certificate fields, and it looks like this:
> "CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown". It's ok
> for us, but it turned out that kafka.admin.AclCommand doesn't accept usernames
> containing commas, as they are used in the list of users. To get it working I
> had to change kafka.admin.AclCommand to accept commas in a username. The
> question is: am I doing something wrong or is it an unfinished feature?
>
> Kind regards
> Łukasz Dębowczyk
>


[jira] [Commented] (KAFKA-2518) Update NOTICE file

2015-11-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985584#comment-14985584
 ] 

ASF GitHub Bot commented on KAFKA-2518:
---

GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/404

KAFKA-2518: Update NOTICE file



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-2518

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/404.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #404


commit bc576c869c1627dacd6ea5341f9238cc9e61ccc8
Author: Gwen Shapira 
Date:   2015-11-02T17:32:08Z

KAFKA-2518: Update NOTICE file




> Update NOTICE file
> --
>
> Key: KAFKA-2518
> URL: https://issues.apache.org/jira/browse/KAFKA-2518
> Project: Kafka
>  Issue Type: Bug
>  Components: packaging
>Reporter: Flavio Junqueira
>Assignee: Gwen Shapira
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> According to this page from ASF legal:
> {noformat}
> http://www.apache.org/legal/src-headers.html
> {noformat}
> the years in the NOTICE header should reflect the product name and years of 
> distribution of the current and past versions of the product. The current 
> NOTICE file says only 2012. 





[jira] [Updated] (KAFKA-2724) Document ZooKeeper authentication

2015-11-02 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2724:
---
Affects Version/s: 0.9.0.0
Fix Version/s: 0.9.0.0

> Document ZooKeeper authentication 
> --
>
> Key: KAFKA-2724
> URL: https://issues.apache.org/jira/browse/KAFKA-2724
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Add documentation for ZooKeeper authentication.





[jira] [Assigned] (KAFKA-2518) Update NOTICE file

2015-11-02 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira reassigned KAFKA-2518:
---

Assignee: Gwen Shapira

> Update NOTICE file
> --
>
> Key: KAFKA-2518
> URL: https://issues.apache.org/jira/browse/KAFKA-2518
> Project: Kafka
>  Issue Type: Bug
>  Components: packaging
>Reporter: Flavio Junqueira
>Assignee: Gwen Shapira
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> According to this page from ASF legal:
> {noformat}
> http://www.apache.org/legal/src-headers.html
> {noformat}
> the years in the NOTICE header should reflect the product name and years of 
> distribution of the current and past versions of the product. The current 
> NOTICE file says only 2012. 





[jira] [Updated] (KAFKA-2518) Update NOTICE file

2015-11-02 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2518:

Status: Patch Available  (was: Open)

> Update NOTICE file
> --
>
> Key: KAFKA-2518
> URL: https://issues.apache.org/jira/browse/KAFKA-2518
> Project: Kafka
>  Issue Type: Bug
>  Components: packaging
>Reporter: Flavio Junqueira
>Assignee: Gwen Shapira
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> According to this page from ASF legal:
> {noformat}
> http://www.apache.org/legal/src-headers.html
> {noformat}
> the years in the NOTICE header should reflect the product name and years of 
> distribution of the current and past versions of the product. The current 
> NOTICE file says only 2012. 





0.9.0 release branch

2015-11-02 Thread Jun Rao
Hi, everyone,

We are getting close to the 0.9.0 release. The current plan is to have the
following remaining 0.9.0 blocker issues resolved this week, cut the 0.9.0
release branch by Nov. 6 (Friday) and start the RC on Nov. 9 (Monday).

https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened%2C%20%22Patch%20Available%22)%20AND%20priority%20%3D%20Blocker%20AND%20fixVersion%20%3D%200.9.0.0%20ORDER%20BY%20updated%20DESC

Thanks,

Jun


[jira] [Updated] (KAFKA-2681) SASL authentication in official docs

2015-11-02 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2681:

Status: Patch Available  (was: Open)

> SASL authentication in official docs
> 
>
> Key: KAFKA-2681
> URL: https://issues.apache.org/jira/browse/KAFKA-2681
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>Assignee: Sriharsha Chintalapani
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> We need to add a section in the official documentation regarding SASL 
> authentication:
> http://kafka.apache.org/documentation.html





[GitHub] kafka pull request: KAFKA-2518: Update NOTICE file

2015-11-02 Thread gwenshap
GitHub user gwenshap opened a pull request:

https://github.com/apache/kafka/pull/404

KAFKA-2518: Update NOTICE file



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gwenshap/kafka KAFKA-2518

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/404.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #404


commit bc576c869c1627dacd6ea5341f9238cc9e61ccc8
Author: Gwen Shapira 
Date:   2015-11-02T17:32:08Z

KAFKA-2518: Update NOTICE file




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-2706: make state stores first class citi...

2015-11-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/387




[jira] [Commented] (KAFKA-2726) ntpdate causes vagrant provision to fail if ntp running

2015-11-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986155#comment-14986155
 ] 

ASF GitHub Bot commented on KAFKA-2726:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/407


> ntpdate causes vagrant provision to fail if ntp running
> ---
>
> Key: KAFKA-2726
> URL: https://issues.apache.org/jira/browse/KAFKA-2726
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
> Fix For: 0.9.0.0
>
>
> If ntp daemon is already running, vagrant provision can fail because of port 
> collision





[jira] [Commented] (KAFKA-2719) Kafka classpath has grown too large and breaks some system tests

2015-11-02 Thread Geoff Anderson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986056#comment-14986056
 ] 

Geoff Anderson commented on KAFKA-2719:
---

[~rsivaram] Unfortunately we don't have this guarantee, for example with the 
addition of JMX. 
As for your change, my take is that this is probably good enough for now, and 
does make the classpath more legible.

> Kafka classpath has grown too large and breaks some system tests
> 
>
> Key: KAFKA-2719
> URL: https://issues.apache.org/jira/browse/KAFKA-2719
> Project: Kafka
>  Issue Type: Bug
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.9.0.0
>
>
> The jars added under KAFKA-2369 make the Kafka command line used in system 
> tests much longer than 4096 characters due to the extra jars on the 
> classpath. Since the ps command used to find processes in system tests 
> truncates the command line, some system tests are failing.





[GitHub] kafka pull request: KAFKA-2707: make KStream processor names deter...

2015-11-02 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/408

KAFKA-2707: make KStream processor names deterministic

@guozhangwang 


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka kstream_processor_name

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/408.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #408


commit 4d26184b26fcf50633f26081d10748ce6db94f65
Author: Yasuhiro Matsuda 
Date:   2015-10-30T16:55:52Z

kstream processor name

commit 9265d24ec7538052bc60fd51d7aa18f6cb7d6b66
Author: Yasuhiro Matsuda 
Date:   2015-11-02T21:31:39Z

Merge branch 'trunk' of github.com:apache/kafka into kstream_processor_name






[jira] [Resolved] (KAFKA-2726) ntpdate causes vagrant provision to fail if ntp running

2015-11-02 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2726.
--
   Resolution: Fixed
Fix Version/s: 0.9.0.0

Issue resolved by pull request 407
[https://github.com/apache/kafka/pull/407]

> ntpdate causes vagrant provision to fail if ntp running
> ---
>
> Key: KAFKA-2726
> URL: https://issues.apache.org/jira/browse/KAFKA-2726
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
> Fix For: 0.9.0.0
>
>
> If ntp daemon is already running, vagrant provision can fail because of port 
> collision





[GitHub] kafka pull request: KAFKA-2726: Fix port collision between ntpdate...

2015-11-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/407




[jira] [Resolved] (KAFKA-2706) Make state stores first class citizens in the processor DAG

2015-11-02 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2706.
--
   Resolution: Fixed
Fix Version/s: 0.9.0.0

Issue resolved by pull request 387
[https://github.com/apache/kafka/pull/387]

> Make state stores first class citizens in the processor DAG
> ---
>
> Key: KAFKA-2706
> URL: https://issues.apache.org/jira/browse/KAFKA-2706
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.0.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.0.0
>
>






[jira] [Commented] (KAFKA-2706) Make state stores first class citizens in the processor DAG

2015-11-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986076#comment-14986076
 ] 

ASF GitHub Bot commented on KAFKA-2706:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/387


> Make state stores first class citizens in the processor DAG
> ---
>
> Key: KAFKA-2706
> URL: https://issues.apache.org/jira/browse/KAFKA-2706
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.0.0
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.0.0
>
>






[jira] [Commented] (KAFKA-2707) Make KStream processor names deterministic

2015-11-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986098#comment-14986098
 ] 

ASF GitHub Bot commented on KAFKA-2707:
---

GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/408

KAFKA-2707: make KStream processor names deterministic

@guozhangwang 


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka kstream_processor_name

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/408.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #408


commit 4d26184b26fcf50633f26081d10748ce6db94f65
Author: Yasuhiro Matsuda 
Date:   2015-10-30T16:55:52Z

kstream processor name

commit 9265d24ec7538052bc60fd51d7aa18f6cb7d6b66
Author: Yasuhiro Matsuda 
Date:   2015-11-02T21:31:39Z

Merge branch 'trunk' of github.com:apache/kafka into kstream_processor_name




> Make KStream processor names deterministic
> --
>
> Key: KAFKA-2707
> URL: https://issues.apache.org/jira/browse/KAFKA-2707
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Yasuhiro Matsuda
>
> Currently KStream processor names are generated from a static AtomicInteger 
> member of KStreamImpl, which is incremented every time a new processor is 
> created. As a result, a processor's name depends on the creation history 
> within the same JVM, so the corresponding processors may have different names 
> in different processes, which makes debugging difficult. 
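The nondeterminism described above can be sketched as follows. This is an illustrative toy, not Kafka's actual code: the class and method names (GlobalNames, TopologyBuilder, newName) are invented for the example. A JVM-wide static counter makes names depend on everything created earlier in the process, while a counter scoped to the topology builder makes names reproducible across processes:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ProcessorNames {
    // Problematic pattern (sketch of the issue): a JVM-wide static counter
    // means a processor's name depends on how many processors were created
    // earlier in the same JVM, in any topology.
    static final AtomicInteger GLOBAL_INDEX = new AtomicInteger(0);

    static String globalName(String prefix) {
        return prefix + "-" + GLOBAL_INDEX.getAndIncrement();
    }

    // Deterministic alternative: scope the counter to the topology builder,
    // so the same build sequence yields the same names in every process.
    static class TopologyBuilder {
        private int index = 0;

        String newName(String prefix) {
            return prefix + "-" + String.format("%010d", index++);
        }
    }

    public static void main(String[] args) {
        TopologyBuilder a = new TopologyBuilder();
        TopologyBuilder b = new TopologyBuilder();
        // Two independent builders produce identical names for the same
        // build order, regardless of JVM history.
        String n1 = a.newName("KSTREAM-MAP");
        String n2 = b.newName("KSTREAM-MAP");
        if (!n1.equals(n2))
            throw new AssertionError("names differ: " + n1 + " vs " + n2);
        System.out.println(n1);
    }
}
```

Because each builder starts its counter at zero, two processes that build the same topology in the same order assign identical names, which is what makes cross-process debugging feasible.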



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1694) kafka command line and centralized operations

2015-11-02 Thread Andrii Biletskyi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14984933#comment-14984933
 ] 

Andrii Biletskyi commented on KAFKA-1694:
-

[~granthenke], it was decided to split the work into 3 phases: API, admin 
client, CLI. Phase 1 was implemented under KAFKA-2229, and the patch was moved 
to GitHub (https://github.com/apache/kafka/pull/223). There were some minor 
comments under this pull request; they were fixed, though not rebased. IMO it 
lacks some deeper review and maybe testing. In short, you can pick up this 
ticket. I'm happy to help close this issue ASAP too. If anything is needed 
(e.g. a rebase), let me know.

> kafka command line and centralized operations
> -
>
> Key: KAFKA-1694
> URL: https://issues.apache.org/jira/browse/KAFKA-1694
> Project: Kafka
>  Issue Type: Bug
>Reporter: Joe Stein
>Assignee: Andrii Biletskyi
>Priority: Critical
> Attachments: KAFKA-1694.patch, KAFKA-1694_2014-12-24_21:21:51.patch, 
> KAFKA-1694_2015-01-12_15:28:41.patch, KAFKA-1694_2015-01-12_18:54:48.patch, 
> KAFKA-1694_2015-01-13_19:30:11.patch, KAFKA-1694_2015-01-14_15:42:12.patch, 
> KAFKA-1694_2015-01-14_18:07:39.patch, KAFKA-1694_2015-03-12_13:04:37.patch, 
> KAFKA-1772_1802_1775_1774_v2.patch
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Command+Line+and+Related+Improvements



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2724: ZK Auth documentation.

2015-11-02 Thread fpj
GitHub user fpj opened a pull request:

https://github.com/apache/kafka/pull/409

KAFKA-2724: ZK Auth documentation.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fpj/kafka KAFKA-2724

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/409.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #409


commit cf2211b0f37335dbe79eb0f7bc09418cd776a129
Author: Flavio Junqueira 
Date:   2015-11-02T22:54:01Z

KAFKA-2724: ZK Auth documentation.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2724) Document ZooKeeper authentication

2015-11-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986214#comment-14986214
 ] 

ASF GitHub Bot commented on KAFKA-2724:
---

GitHub user fpj opened a pull request:

https://github.com/apache/kafka/pull/409

KAFKA-2724: ZK Auth documentation.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fpj/kafka KAFKA-2724

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/409.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #409


commit cf2211b0f37335dbe79eb0f7bc09418cd776a129
Author: Flavio Junqueira 
Date:   2015-11-02T22:54:01Z

KAFKA-2724: ZK Auth documentation.




> Document ZooKeeper authentication 
> --
>
> Key: KAFKA-2724
> URL: https://issues.apache.org/jira/browse/KAFKA-2724
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Add documentation for ZooKeeper authentication.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #747

2015-11-02 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2726: Fix port collision between ntpdate and ntp daemon

[wangguoz] KAFKA-2707: make KStream processor names deterministic

--
[...truncated 37 lines...]
:examples:clean
:log4j-appender:clean
:streams:clean
:tools:clean
:contrib:hadoop-consumer:clean
:contrib:hadoop-producer:clean
:copycat:api:clean UP-TO-DATE
:copycat:file:clean UP-TO-DATE
:copycat:json:clean UP-TO-DATE
:copycat:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk7:clients:compileJavaNote: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:kafka-trunk-jdk7:clients:processResources UP-TO-DATE
:kafka-trunk-jdk7:clients:classes
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar
:kafka-trunk-jdk7:log4j-appender:compileJava
:kafka-trunk-jdk7:log4j-appender:processResources UP-TO-DATE
:kafka-trunk-jdk7:log4j-appender:classes
:kafka-trunk-jdk7:log4j-appender:jar
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala
:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:264:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:293:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^
:294:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
:380:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:115:
 value METADATA_FETCH_TIMEOUT_CONFIG in object ProducerConfig is deprecated: 
see corresponding Javadoc for more information.
props.put(ProducerConfig.METADATA_FETCH_TIMEOUT_CONFIG, 
config.metadataFetchTimeoutMs.toString)
 ^
:117:
 value TIMEOUT_CONFIG in object ProducerConfig is deprecated: see corresponding 
Javadoc for more information.
props.put(ProducerConfig.TIMEOUT_CONFIG, config.requestTimeoutMs.toString)
 ^

Build failed in Jenkins: kafka-trunk-jdk8 #86

2015-11-02 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2706: make state stores first class citizens in the processor

[wangguoz] KAFKA-2726: Fix port collision between ntpdate and ntp daemon

--
[...truncated 3773 lines...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > 

[jira] [Created] (KAFKA-2727) initialize only the part of the topology relevant to the task

2015-11-02 Thread Yasuhiro Matsuda (JIRA)
Yasuhiro Matsuda created KAFKA-2727:
---

 Summary: initialize only the part of the topology relevant to the 
task
 Key: KAFKA-2727
 URL: https://issues.apache.org/jira/browse/KAFKA-2727
 Project: Kafka
  Issue Type: Sub-task
  Components: kafka streams
Affects Versions: 0.9.0.0
Reporter: Yasuhiro Matsuda
Assignee: Yasuhiro Matsuda






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2724) Document ZooKeeper authentication

2015-11-02 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2724:

Status: Patch Available  (was: Open)

> Document ZooKeeper authentication 
> --
>
> Key: KAFKA-2724
> URL: https://issues.apache.org/jira/browse/KAFKA-2724
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Add documentation for ZooKeeper authentication.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2722) Improve ISR change propagation

2015-11-02 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2722:

Status: Patch Available  (was: Open)

> Improve ISR change propagation
> --
>
> Key: KAFKA-2722
> URL: https://issues.apache.org/jira/browse/KAFKA-2722
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> Currently the ISR change propagation interval is hard-coded to 5 seconds, 
> which might still create a lot of ISR change propagation traffic for a large 
> cluster in cases such as a rolling bounce. The patch uses a dynamic 
> propagation interval and fixes a performance bug in 
> IsrChangeNotificationListener on the controller.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2707) Make KStream processor names deterministic

2015-11-02 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2707:
-
Assignee: Yasuhiro Matsuda

> Make KStream processor names deterministic
> --
>
> Key: KAFKA-2707
> URL: https://issues.apache.org/jira/browse/KAFKA-2707
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.0.0
>
>
> Currently KStream processor names are generated from a static AtomicInteger 
> member of KStreamImpl, which is incremented every time a new processor is 
> created. As a result, a processor's name depends on the creation history 
> within the same JVM, so the corresponding processors may have different names 
> in different processes, which makes debugging difficult. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2707) Make KStream processor names deterministic

2015-11-02 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986183#comment-14986183
 ] 

ASF GitHub Bot commented on KAFKA-2707:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/408


> Make KStream processor names deterministic
> --
>
> Key: KAFKA-2707
> URL: https://issues.apache.org/jira/browse/KAFKA-2707
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Yasuhiro Matsuda
> Fix For: 0.9.0.0
>
>
> Currently KStream processor names are generated from a static AtomicInteger 
> member of KStreamImpl, which is incremented every time a new processor is 
> created. As a result, a processor's name depends on the creation history 
> within the same JVM, so the corresponding processors may have different names 
> in different processes, which makes debugging difficult. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2697) add leave group logic to the consumer

2015-11-02 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986229#comment-14986229
 ] 

Guozhang Wang commented on KAFKA-2697:
--

[~onurkaraman], I'm assigning the ticket to you now; please let me know if you 
are busy so I can help.

> add leave group logic to the consumer
> -
>
> Key: KAFKA-2697
> URL: https://issues.apache.org/jira/browse/KAFKA-2697
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Onur Karaman
>Assignee: Onur Karaman
> Fix For: 0.9.0.0
>
>
> KAFKA-2397 added logic on the coordinator to handle LeaveGroupRequests. We 
> need to add logic to KafkaConsumer to send out a LeaveGroupRequest on close.
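The close() behavior described here can be sketched as follows; the Coordinator interface and the string request representation are stand-ins invented for the example, not the real Kafka internals. The point is simply that close() should notify the coordinator before shutting down, so a rebalance can start immediately rather than after the session timeout expires:

```java
public class LeaveGroupSketch {
    // Stand-in for the group coordinator channel; not a real Kafka type.
    interface Coordinator {
        void send(String request);
    }

    static class Consumer implements AutoCloseable {
        private final Coordinator coordinator;
        private boolean closed = false;

        Consumer(Coordinator coordinator) {
            this.coordinator = coordinator;
        }

        @Override
        public void close() {
            if (closed)
                return;
            closed = true;
            // New logic per KAFKA-2697: proactively tell the coordinator
            // we are leaving, instead of waiting for a session timeout.
            coordinator.send("LeaveGroupRequest");
        }
    }

    public static void main(String[] args) {
        StringBuilder sent = new StringBuilder();
        try (Consumer c = new Consumer(sent::append)) {
            // consume...
        }
        System.out.println(sent);
    }
}
```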



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2697) add leave group logic to the consumer

2015-11-02 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2697:
-
Assignee: Onur Karaman

> add leave group logic to the consumer
> -
>
> Key: KAFKA-2697
> URL: https://issues.apache.org/jira/browse/KAFKA-2697
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Onur Karaman
>Assignee: Onur Karaman
> Fix For: 0.9.0.0
>
>
> KAFKA-2397 added logic on the coordinator to handle LeaveGroupRequests. We 
> need to add logic to KafkaConsumer to send out a LeaveGroupRequest on close.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #87

2015-11-02 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2707: make KStream processor names deterministic

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision e466ccd711ae00c5bb046c18aacf353b1a460dcd 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f e466ccd711ae00c5bb046c18aacf353b1a460dcd
 > git rev-list 1f5d05fe718b7db7ee07c727b7a736fab09322d6 # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson6343247814249060294.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 9.708 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson4744266352683589430.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:contrib:clean UP-TO-DATE
:copycat:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:contrib:hadoop-consumer:clean UP-TO-DATE
:contrib:hadoop-producer:clean UP-TO-DATE
:copycat:api:clean UP-TO-DATE
:copycat:file:clean UP-TO-DATE
:copycat:json:clean UP-TO-DATE
:copycat:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 10.123 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45


Re: 0.9.0 release branch

2015-11-02 Thread Onur Karaman
I added KAFKA-2698 as an
0.9.0.0 blocker. It adds an API to query the currently paused partitions.
Here's the PR: https://github.com/apache/kafka/pull/403

On Mon, Nov 2, 2015 at 11:16 AM, Cliff Rhyne  wrote:

> Hi Jun,
>
> I opened KAFKA-2725 based on my experience with duplicate message
> processing with auto-commit off.  I think it's a fairly small change,
> especially for someone familiar with the kafka code-base but it makes a big
> impact for clients not using auto-commit.  Can this be included in 0.9.0?
>
> Thanks,
> Cliff
>
> On Mon, Nov 2, 2015 at 12:57 PM, Jason Gustafson 
> wrote:
>
> > I added KAFKA-2691 as well, which improves client handling of
> authorization
> > errors.
> >
> > -Jason
> >
> > On Mon, Nov 2, 2015 at 10:25 AM, Becket Qin 
> wrote:
> >
> > > Hi Jun,
> > >
> > > I added KAFKA-2722 as a blocker for 0.9. It fixes the ISR propagation
> > > scalability issue we saw.
> > >
> > > Thanks,
> > >
> > > Jiangjie (Becket) Qin
> > >
> > > On Mon, Nov 2, 2015 at 9:16 AM, Jun Rao  wrote:
> > >
> > > > Hi, everyone,
> > > >
> > > > We are getting close to the 0.9.0 release. The current plan is to
> have
> > > the
> > > > following remaining 0.9.0 blocker issues resolved this week, cut the
> > > 0.9.0
> > > > release branch by Nov. 6 (Friday) and start the RC on Nov. 9
> (Monday).
> > > >
> > > >
> > > >
> > >
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened%2C%20%22Patch%20Available%22)%20AND%20priority%20%3D%20Blocker%20AND%20fixVersion%20%3D%200.9.0.0%20ORDER%20BY%20updated%20DESC
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > >
> >
>
>
>
> --
> Cliff Rhyne
> Software Engineering Lead
> m: 760-917-7823
> e: crh...@signal.co
> signal.co
> 
>
> Cut Through the Noise
>
> This e-mail and any files transmitted with it are for the sole use of the
> intended recipient(s) and may contain confidential and privileged
> information. Any unauthorized use of this email is strictly prohibited.
> ©2015 Signal. All rights reserved.
>


[GitHub] kafka pull request: Modified the async producer so it re-queues fa...

2015-11-02 Thread hiloboy0119
Github user hiloboy0119 closed the pull request at:

https://github.com/apache/kafka/pull/7


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-2707: make KStream processor names deter...

2015-11-02 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/408


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-2707) Make KStream processor names deterministic

2015-11-02 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2707.
--
   Resolution: Fixed
Fix Version/s: 0.9.0.0

Issue resolved by pull request 408
[https://github.com/apache/kafka/pull/408]

> Make KStream processor names deterministic
> --
>
> Key: KAFKA-2707
> URL: https://issues.apache.org/jira/browse/KAFKA-2707
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Yasuhiro Matsuda
> Fix For: 0.9.0.0
>
>
> Currently KStream processor names are generated from a static AtomicInteger 
> member of KStreamImpl, which is incremented every time a new processor is 
> created. As a result, a processor's name depends on the creation history 
> within the same JVM, so the corresponding processors may have different names 
> in different processes, which makes debugging difficult. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : kafka-trunk-jdk7 #746

2015-11-02 Thread Apache Jenkins Server
See 



[jira] [Updated] (KAFKA-2698) add paused API

2015-11-02 Thread Onur Karaman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Onur Karaman updated KAFKA-2698:

Priority: Blocker  (was: Major)

> add paused API
> --
>
> Key: KAFKA-2698
> URL: https://issues.apache.org/jira/browse/KAFKA-2698
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Onur Karaman
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> org.apache.kafka.clients.consumer.Consumer tends to follow a pattern of 
> having an action API paired with a query API:
> subscribe() has subscription()
> assign() has assignment()
> There's no analogous API for pause.
> Should there be a paused() API returning Set<TopicPartition>?
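A minimal sketch of the proposed pairing, under the assumption that paused() mirrors subscription() and assignment() as a query API for the pause()/resume() actions. The class here is hypothetical and models a topic-partition as a plain String to stay self-contained; the real API would use org.apache.kafka.common.TopicPartition:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;

public class PausedApiSketch {
    private final Set<String> paused = new HashSet<>();

    // Existing action APIs: mark partitions to stop/resume fetching.
    public void pause(Collection<String> partitions) {
        paused.addAll(partitions);
    }

    public void resume(Collection<String> partitions) {
        paused.removeAll(partitions);
    }

    // The query API proposed in KAFKA-2698: a defensive copy of the
    // currently paused set, mirroring subscription() and assignment().
    public Set<String> paused() {
        return new HashSet<>(paused);
    }

    public static void main(String[] args) {
        PausedApiSketch consumer = new PausedApiSketch();
        consumer.pause(Arrays.asList("testA-0", "testB-1"));
        consumer.resume(Arrays.asList("testB-1"));
        System.out.println(consumer.paused());
    }
}
```

Returning a copy rather than the internal set keeps the query side-effect free, matching the behavior of the other query methods in the pattern.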



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: 0.9.0 release branch

2015-11-02 Thread Jun Rao
Cliff,

We try not to patch the old consumer too much since we are adding the new
java consumer in 0.9. The new consumer supports callbacks during rebalances
and can address the problem in KAFKA-2725 better.

Thanks,

Jun

On Mon, Nov 2, 2015 at 11:16 AM, Cliff Rhyne  wrote:

> Hi Jun,
>
> I opened KAFKA-2725 based on my experience with duplicate message
> processing with auto-commit off.  I think it's a fairly small change,
> especially for someone familiar with the kafka code-base but it makes a big
> impact for clients not using auto-commit.  Can this be included in 0.9.0?
>
> Thanks,
> Cliff
>
> On Mon, Nov 2, 2015 at 12:57 PM, Jason Gustafson 
> wrote:
>
> > I added KAFKA-2691 as well, which improves client handling of
> authorization
> > errors.
> >
> > -Jason
> >
> > On Mon, Nov 2, 2015 at 10:25 AM, Becket Qin 
> wrote:
> >
> > > Hi Jun,
> > >
> > > I added KAFKA-2722 as a blocker for 0.9. It fixes the ISR propagation
> > > scalability issue we saw.
> > >
> > > Thanks,
> > >
> > > Jiangjie (Becket) Qin
> > >
> > > On Mon, Nov 2, 2015 at 9:16 AM, Jun Rao  wrote:
> > >
> > > > Hi, everyone,
> > > >
> > > > We are getting close to the 0.9.0 release. The current plan is to
> have
> > > the
> > > > following remaining 0.9.0 blocker issues resolved this week, cut the
> > > 0.9.0
> > > > release branch by Nov. 6 (Friday) and start the RC on Nov. 9
> (Monday).
> > > >
> > > >
> > > >
> > >
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20KAFKA%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened%2C%20%22Patch%20Available%22)%20AND%20priority%20%3D%20Blocker%20AND%20fixVersion%20%3D%200.9.0.0%20ORDER%20BY%20updated%20DESC
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > >
> >
>
>
>
> --
> Cliff Rhyne
> Software Engineering Lead
> m: 760-917-7823
> e: crh...@signal.co
> signal.co
> 
>
> Cut Through the Noise
>
> This e-mail and any files transmitted with it are for the sole use of the
> intended recipient(s) and may contain confidential and privileged
> information. Any unauthorized use of this email is strictly prohibited.
> ©2015 Signal. All rights reserved.
>


[jira] [Commented] (KAFKA-2674) ConsumerRebalanceListener.onPartitionsRevoked() is not called on consumer close

2015-11-02 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986470#comment-14986470
 ] 

Jason Gustafson commented on KAFKA-2674:


[~guozhang] [~becket_qin] Since none of the alternatives seem clearly better, 
maybe we should just keep the current names. I can add a patch to try and 
clarify the behavior.

> ConsumerRebalanceListener.onPartitionsRevoked() is not called on consumer 
> close
> ---
>
> Key: KAFKA-2674
> URL: https://issues.apache.org/jira/browse/KAFKA-2674
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Michal Turek
>Assignee: Jason Gustafson
>
> Hi, I'm investigating and testing behavior of new consumer from the planned 
> release 0.9 and found an inconsistency in calling of rebalance callbacks.
> I noticed that ConsumerRebalanceListener.onPartitionsRevoked() is NOT called 
> during consumer close and application shutdown. Its JavaDoc contract says:
> - "This method will be called before a rebalance operation starts and after 
> the consumer stops fetching data."
> - "It is recommended that offsets should be committed in this callback to 
> either Kafka or a custom offset store to prevent duplicate data."
> I believe calling consumer.close() is the start of a rebalance operation, and 
> even the local consumer that is actually closing should be notified so it can 
> process any rebalance logic, including an offsets commit (e.g. if auto-commit 
> is disabled).
> There are commented logs of current and expected behaviors.
> {noformat}
> // Application start
> 2015-10-20 15:14:02.208 INFO  o.a.k.common.utils.AppInfoParser
> [TestConsumer-worker-0]: Kafka version : 0.9.0.0-SNAPSHOT 
> (AppInfoParser.java:82)
> 2015-10-20 15:14:02.208 INFO  o.a.k.common.utils.AppInfoParser
> [TestConsumer-worker-0]: Kafka commitId : 241b9ab58dcbde0c 
> (AppInfoParser.java:83)
> // Consumer started (the first one in group), rebalance callbacks are called 
> including empty onPartitionsRevoked()
> 2015-10-20 15:14:02.333 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, revoked: [] 
> (TestConsumer.java:95)
> 2015-10-20 15:14:02.343 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, assigned: [testB-1, testA-0, 
> testB-0, testB-3, testA-2, testB-2, testA-1, testA-4, testB-4, testA-3] 
> (TestConsumer.java:100)
> // Another consumer joined the group, rebalancing
> 2015-10-20 15:14:17.345 INFO  o.a.k.c.c.internals.Coordinator 
> [TestConsumer-worker-0]: Attempt to heart beat failed since the group is 
> rebalancing, try to re-join group. (Coordinator.java:714)
> 2015-10-20 15:14:17.346 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, revoked: [testB-1, testA-0, 
> testB-0, testB-3, testA-2, testB-2, testA-1, testA-4, testB-4, testA-3] 
> (TestConsumer.java:95)
> 2015-10-20 15:14:17.349 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, assigned: [testB-3, testA-4, 
> testB-4, testA-3] (TestConsumer.java:100)
> // Consumer started closing, there SHOULD be onPartitionsRevoked() callback 
> to commit offsets like during standard rebalance, but it is missing
> 2015-10-20 15:14:39.280 INFO  c.a.e.kafka.newapi.TestConsumer [main]: 
> Closing instance (TestConsumer.java:42)
> 2015-10-20 15:14:40.264 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Worker thread stopped (TestConsumer.java:89)
> {noformat}
> The workaround is to call onPartitionsRevoked() explicitly and manually just 
> before calling consumer.close(), but it seems dirty and error-prone to me. It 
> can be simply forgotten by someone without such experience.
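
The manual workaround described in the issue can be sketched as follows. This is a self-contained illustration of the pattern only: the `RebalanceListener` interface below is a hypothetical stand-in for Kafka's `ConsumerRebalanceListener`, and the partition names are placeholders, so the sketch runs without the kafka-clients jar.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

public class RevokeOnCloseSketch {
    // Hypothetical stand-in for ConsumerRebalanceListener so the sketch
    // compiles and runs standalone, without the kafka-clients dependency.
    interface RebalanceListener {
        void onPartitionsRevoked(Collection<String> partitions);
    }

    // Records which partitions the callback saw, standing in for an offset commit.
    public static List<String> committed = null;

    public static void main(String[] args) {
        List<String> assignment = Arrays.asList("testA-0", "testB-0");
        RebalanceListener listener = partitions -> {
            // In real code: commit offsets for the revoked partitions here.
            committed = new ArrayList<>(partitions);
        };

        // Workaround: fire the revoke callback manually before close(),
        // since close() does not trigger it in the consumer under discussion.
        listener.onPartitionsRevoked(assignment);
        // consumer.close(); // a real KafkaConsumer would be closed here

        System.out.println("committed: " + committed);
    }
}
```

As the reporter notes, this pattern is fragile because nothing enforces that the manual call happens; fixing it inside close() keeps the contract in one place.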



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2702) ConfigDef toHtmlTable() sorts in a way that is a bit confusing

2015-11-02 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14986371#comment-14986371
 ] 

Grant Henke commented on KAFKA-2702:


[~ijuma] I agree. This would be fairly simple in Scala, as Option is a commonly 
used concept and transparently handles null/empty-string/empty-list well. 
However, in Java it's not so common. There are going to be a lot of null 
defaults with this change (it's not new, it was just less obvious before with 
the required parameter), and that's not great either.

A default of null is not currently allowed. See ConfigDefTest.testNullDefault. 
At first look that seems like the best approach, especially to avoid NPEs. But 
without null or some sort of Option type, how do I represent a default of 
"unset" for types like Integer or Long? I suspect this is why required was 
added. Should I allow null defaults with this change? 
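
One way to represent an "unset" default for boxed types without resorting to null is Java 8's Optional. The sketch below is not ConfigDef's actual API; the `ConfigKey` class and key names are hypothetical, purely to illustrate the idea of distinguishing "no default" from any real value.

```java
import java.util.Optional;

public class UnsetDefaultSketch {
    // Hypothetical config-key holder: Optional.empty() means "no default",
    // avoiding both null and magic sentinel values like -1 or "".
    static class ConfigKey<T> {
        final String name;
        final Optional<T> defaultValue;

        ConfigKey(String name, Optional<T> defaultValue) {
            this.name = name;
            this.defaultValue = defaultValue;
        }

        boolean hasDefault() {
            return defaultValue.isPresent();
        }
    }

    public static void main(String[] args) {
        ConfigKey<Integer> retries = new ConfigKey<>("retries", Optional.of(0));
        ConfigKey<Integer> timeout = new ConfigKey<>("timeout.ms", Optional.empty());
        System.out.println(retries.hasDefault()); // true
        System.out.println(timeout.hasDefault()); // false
    }
}
```

The trade-off is the one raised in the comment: Optional is idiomatic in Scala but, at the time of this thread, still unfamiliar in much Java code.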

> ConfigDef toHtmlTable() sorts in a way that is a bit confusing
> --
>
> Key: KAFKA-2702
> URL: https://issues.apache.org/jira/browse/KAFKA-2702
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Grant Henke
> Attachments: ConsumerConfig-After.html, ConsumerConfig-Before.html
>
>
> Because we put everything without default first (without prioritizing), 
> critical parameters get placed below low-priority ones when they both have 
> no defaults. Some parameters have no default and are optional (the SASL server 
> in ConsumerConfig, for instance).
> Try printing ConsumerConfig parameters and see the mandatory group.id show up 
> as #15.
> I suggest sorting the no-default parameters by priority as well, or perhaps 
> adding a "REQUIRED" category that gets printed first no matter what.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2255) Missing documentation for max.in.flight.requests.per.connection

2015-11-02 Thread Stevo Slavic (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14985117#comment-14985117
 ] 

Stevo Slavic commented on KAFKA-2255:
-

Docs in code for this config property state:
{quote}
The maximum number of unacknowledged requests the client will send on a single 
connection before blocking.
{quote}
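
For reference, the property is set on the producer like any other config key. A minimal sketch using java.util.Properties follows; the broker address is a placeholder, and the value 1 is just one choice (it preserves message ordering even when retries are enabled, at the cost of pipelining):

```java
import java.util.Properties;

public class MaxInFlightSketch {
    public static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        // Cap unacknowledged requests per connection at 1: the sender blocks
        // until the in-flight request is acknowledged before sending the next.
        props.put("max.in.flight.requests.per.connection", "1");
        return props;
    }

    public static void main(String[] args) {
        Properties props = producerProps();
        System.out.println(props.getProperty("max.in.flight.requests.per.connection")); // 1
    }
}
```

These Properties would then be passed to the KafkaProducer constructor as usual.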

> Missing documentation for max.in.flight.requests.per.connection
> ---
>
> Key: KAFKA-2255
> URL: https://issues.apache.org/jira/browse/KAFKA-2255
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Navina Ramesh
>Assignee: Aditya Auradkar
>
> Hi Kafka team,
> The Samza team noticed that the documentation for the 
> max.in.flight.requests.per.connection property for the Java-based producer is 
> missing in the 0.8.2 documentation. I checked the code and it looks like this 
> config is still enforced. Can you please update the website to reflect the 
> same?
> Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


  1   2   >