ProducerRecord/Consumer MetaData/Headers

2016-09-16 Thread Michael Pearce
Hi All,

First of all, apologies if this has been previously discussed; I have just 
joined the mailing list (I cannot find a related JIRA or KIP, nor anything via 
a good old Google search).

In our company we are looking to replace some of our more traditional message 
flows with Kafka.

One thing we have found lacking, though, compared with most messaging systems 
is the ability to set headers/metadata separately from the payload. We did 
consider the key, but as it is used for compaction we cannot put changing 
values there, which metadata/header values will obviously be.

These headers/metadata are useful for audit data or platform data that is not 
related to the business payload, e.g. storing the clientId that generated the 
message, the correlation id of a request/response, the cluster id where the 
message was generated (a case for MirrorMaker), a message uuid, etc.; the list 
is endless.

We would like to propose extending the record classes from 
ProducerRecord<K, V>/ConsumerRecord<K, V> to 
ProducerRecord<K, V, M>/ConsumerRecord<K, V, M>, where M is the 
metadata/header field: like the key and value, a simple byte[], so that it is 
completely up to the end users how to serialize/deserialize it.
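For illustration, a minimal sketch of what such an extended record might look like (all names here are hypothetical, not actual Kafka API; the metadata is just an opaque byte[] whose encoding is left to the application):

```java
import java.nio.charset.StandardCharsets;

public class Main {
    // Hypothetical extended record: key and value plus an opaque byte[] of
    // headers/metadata; (de)serialization of the bytes is application-defined.
    static final class HeaderedRecord<K, V> {
        final String topic;
        final K key;
        final V value;
        final byte[] metadata;

        HeaderedRecord(String topic, K key, V value, byte[] metadata) {
            this.topic = topic;
            this.key = key;
            this.value = value;
            this.metadata = metadata;
        }
    }

    public static void main(String[] args) {
        // e.g. audit fields that would otherwise have to pollute the payload
        byte[] meta = "clientId=svc-a;correlationId=42".getBytes(StandardCharsets.UTF_8);
        HeaderedRecord<String, String> record =
                new HeaderedRecord<>("orders", "order-1", "payload-bytes", meta);
        System.out.println(new String(record.metadata, StandardCharsets.UTF_8));
    }
}
```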

What are people's thoughts?
Any other ideas on how to add headers/metadata?

How can I progress this?

Cheers
Mike


Build failed in Jenkins: kafka-trunk-jdk7 #1546

2016-09-16 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4173; SchemaProjector should successfully project missing Struct

--
[...truncated 5517 lines...]
kafka.api.PlaintextProducerSendTest, kafka.api.SslConsumerTest, 
kafka.api.SaslPlainPlaintextConsumerTest, and kafka.api.AuthorizerIntegrationTest: 
all listed tests PASSED ([repetitive per-test STARTED/PASSED lines elided]; the 
log is cut off after kafka.api.AuthorizerIntegrationTest > 
testCommitWithNoAccess STARTED).

Build failed in Jenkins: kafka-trunk-jdk8 #889

2016-09-16 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4173; SchemaProjector should successfully project missing Struct

--
[...truncated 11003 lines...]
org.apache.kafka.common.record.RecordTest > testChecksum/testEquality/testFields, 
parameterized runs [172] through [183]: all PASSED ([repetitive per-test 
STARTED/PASSED lines elided]; the log is cut off at testChecksum[184] STARTED).

[jira] [Commented] (KAFKA-3396) Unauthorized topics are returned to the user

2016-09-16 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15497797#comment-15497797
 ] 

Ismael Juma commented on KAFKA-3396:


Marking this as "Critical" so that we can make sure we get to it for 0.10.1.0.

> Unauthorized topics are returned to the user
> 
>
> Key: KAFKA-3396
> URL: https://issues.apache.org/jira/browse/KAFKA-3396
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.9.0.0, 0.10.0.0
>Reporter: Grant Henke
>Assignee: Edoardo Comar
>Priority: Critical
> Fix For: 0.10.1.0
>
>
> Kafka's clients and protocol expose unauthorized topics to the end user. 
> This is often considered a security hole. To some, the topic name itself is 
> sensitive information. Even those who do not consider the name sensitive 
> regard it as extra information that helps a user try to circumvent security. 
> Instead, if a user does not have access to the topic, the servers should act 
> as if the topic does not exist. 
> To solve this, some of the changes could include:
>   - The broker should not return a TOPIC_AUTHORIZATION(29) error for 
> requests (metadata, produce, fetch, etc) that include a topic that the user 
> does not have DESCRIBE access to.
>   - A user should not receive a TopicAuthorizationException when they do 
> not have DESCRIBE access to a topic or the cluster.
>  - The client should not maintain and expose a list of unauthorized 
> topics in org.apache.kafka.common.Cluster. 
> Other changes may be required that are not listed here. Further analysis is 
> needed. 
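The intended behavior can be sketched as follows (purely an illustration, not broker code; `errorFor` and the permission sets are hypothetical): a missing DESCRIBE permission maps to exactly the same error an absent topic would produce, so existence is never leaked.

```java
import java.util.Set;

public class Main {
    enum ErrorCode { NONE, UNKNOWN_TOPIC_OR_PARTITION, TOPIC_AUTHORIZATION_FAILED }

    // Hypothetical check: without DESCRIBE access, answer exactly as if the
    // topic did not exist, rather than returning an authorization error.
    static ErrorCode errorFor(String topic, Set<String> existingTopics,
                              Set<String> describeAuthorized) {
        if (!describeAuthorized.contains(topic)) {
            return ErrorCode.UNKNOWN_TOPIC_OR_PARTITION;
        }
        return existingTopics.contains(topic)
                ? ErrorCode.NONE
                : ErrorCode.UNKNOWN_TOPIC_OR_PARTITION;
    }

    public static void main(String[] args) {
        Set<String> existing = Set.of("payments");
        Set<String> authorized = Set.of(); // caller has no DESCRIBE rights
        // The topic exists, but the caller cannot tell that apart from a
        // genuinely missing topic.
        System.out.println(errorFor("payments", existing, authorized));
    }
}
```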



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3396) Unauthorized topics are returned to the user

2016-09-16 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3396:
---
Priority: Critical  (was: Major)

> Unauthorized topics are returned to the user
> 
>
> Key: KAFKA-3396
> URL: https://issues.apache.org/jira/browse/KAFKA-3396
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.9.0.0, 0.10.0.0
>Reporter: Grant Henke
>Assignee: Edoardo Comar
>Priority: Critical
> Fix For: 0.10.1.0
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1545

2016-09-16 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4183; Corrected Kafka Connect's JSON Converter to properly convert

--
[...truncated 1940 lines...]

kafka.api.SslConsumerTest, kafka.api.SaslPlainPlaintextConsumerTest, and 
kafka.api.AuthorizerIntegrationTest: all listed tests PASSED ([repetitive 
per-test STARTED/PASSED lines elided]).


[UPDATE] 0.10.1 Release Progress

2016-09-16 Thread Jason Gustafson
Hi All,

Thanks everyone for the hard work! Here's an update on the remaining KIPs
that we are hoping to include:

KIP-78 (clusterId): Review is basically complete. Assuming no problems
emerge, Ismael is planning to merge today.
KIP-74 (max fetch size): Review is nearing completion, just a few minor
issues remain. This will probably be merged tomorrow or Sunday.
KIP-55 (secure quotas): The patch has been rebased and probably needs one
more review pass before merging. Jun is confident it can get in before
Monday.

As for KIP-79, I've made one review pass, but to make it in, we'll need 1)
some more votes on the vote thread, and 2) a few review iterations. It's
looking a bit doubtful, but let's see how it goes!

Since we are nearing the feature freeze, it would be helpful if people
begin setting some priorities on the bugs that need to get in before the
code freeze. I am going to make an effort to prune the list early next
week, so if there are any critical issues you know about, make sure they
are marked as such.

Thanks,
Jason


[jira] [Commented] (KAFKA-4183) Logical converters in JsonConverter don't properly handle null values

2016-09-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15497716#comment-15497716
 ] 

ASF GitHub Bot commented on KAFKA-4183:
---

GitHub user shikhar opened a pull request:

https://github.com/apache/kafka/pull/1872

KAFKA-4183: centralize checking for optional and default values to avoid 
bugs

Cleaner to just check once for optional & default value from the 
`convertToConnect()` function.

It also helps address an issue with conversions for logical type schemas 
that have default values and null as the included value. That test case is 
_probably_ not an issue in practice, since when using the `JsonConverter` to 
serialize a missing field with a default value, it will serialize the default 
value for the field. But in the face of JSON data streaming in from a topic 
being [generous on input, strict on 
output](http://tedwise.com/2009/05/27/generous-on-input-strict-on-output) seems 
best.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shikhar/kafka kafka-4183

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1872.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1872


commit 1e09c6431f11361e7f3a5af4c09a8174c3547669
Author: Shikhar Bhushan 
Date:   2016-09-16T23:17:40Z

KAFKA-4183: centralize checking for optional and default values to avoid 
bugs




> Logical converters in JsonConverter don't properly handle null values
> -
>
> Key: KAFKA-4183
> URL: https://issues.apache.org/jira/browse/KAFKA-4183
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.1
>Reporter: Randall Hauch
>Assignee: Shikhar Bhushan
> Fix For: 0.10.1.0
>
>
> The {{JsonConverter.TO_CONNECT_LOGICAL_CONVERTERS}} map contains 
> {{LogicalTypeConverter}} implementations to convert from the raw value into 
> the corresponding logical type value, and they are used during 
> deserialization of message keys and/or values. However, these implementations 
> do not handle the case when the input raw value is null, which can happen 
> when a key or value has a schema that is or contains a field that is 
> _optional_.
> Consider a Kafka Connect schema of type STRUCT that contains a field "date" 
> with an optional schema of type {{org.apache.kafka.connect.data.Date}}. When 
> the key or value with this schema contains a null "date" field and is 
> serialized, the logical serializer will properly serialize the null value. 
> However, upon _deserialization_, the 
> {{JsonConverter.TO_CONNECT_LOGICAL_CONVERTERS}} are used to convert the 
> literal value (which is null) to a logical value. All of the 
> {{JsonConverter.TO_CONNECT_LOGICAL_CONVERTERS}} implementations will throw a 
> NullPointerException when the input value is null. 
> For example:
> {code:java}
> java.lang.NullPointerException
>   at 
> org.apache.kafka.connect.json.JsonConverter$14.convert(JsonConverter.java:224)
>   at 
> org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:731)
>   at 
> org.apache.kafka.connect.json.JsonConverter.access$100(JsonConverter.java:53)
>   at 
> org.apache.kafka.connect.json.JsonConverter$12.convert(JsonConverter.java:200)
>   at 
> org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:727)
>   at 
> org.apache.kafka.connect.json.JsonConverter.jsonToConnect(JsonConverter.java:354)
>   at 
> org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:343)
> {code}
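The direction of the fix, centralizing the null/optional/default check before any logical converter runs, can be sketched as follows (a simplified illustration; the names only loosely mirror the Connect API and this is not the actual patch):

```java
public class Main {
    interface LogicalConverter {
        Object convert(Object raw);
    }

    // Centralized null handling: resolve null against the schema's default and
    // optionality *before* dispatching, so logical converters never see null.
    static Object convertToConnect(Object raw, boolean optional, Object defaultValue,
                                   LogicalConverter converter) {
        if (raw == null) {
            if (defaultValue != null) return defaultValue;
            if (optional) return null;
            throw new IllegalArgumentException(
                    "null value for non-optional schema without default");
        }
        return converter.convert(raw);
    }

    public static void main(String[] args) {
        // A converter like the Timestamp one: would NPE if handed null directly.
        LogicalConverter toDate = raw -> new java.util.Date((Long) raw);
        // Null payload with a schema default: the default is returned and the
        // converter is never invoked with null.
        Object v = convertToConnect(null, true, new java.util.Date(42L), toDate);
        System.out.println(((java.util.Date) v).getTime());
    }
}
```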



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1872: KAFKA-4183: centralize checking for optional and d...

2016-09-16 Thread shikhar
GitHub user shikhar opened a pull request:

https://github.com/apache/kafka/pull/1872

KAFKA-4183: centralize checking for optional and default values to avoid 
bugs

Cleaner to just check once for optional & default value from the 
`convertToConnect()` function.

It also helps address an issue with conversions for logical type schemas 
that have default values and null as the included value. That test case is 
_probably_ not an issue in practice, since when using the `JsonConverter` to 
serialize a missing field with a default value, it will serialize the default 
value for the field. But in the face of JSON data streaming in from a topic 
being [generous on input, strict on 
output](http://tedwise.com/2009/05/27/generous-on-input-strict-on-output) seems 
best.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shikhar/kafka kafka-4183

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1872.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1872


commit 1e09c6431f11361e7f3a5af4c09a8174c3547669
Author: Shikhar Bhushan 
Date:   2016-09-16T23:17:40Z

KAFKA-4183: centralize checking for optional and default values to avoid 
bugs




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4183) Logical converters in JsonConverter don't properly handle null values

2016-09-16 Thread Randall Hauch (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15497694#comment-15497694
 ] 

Randall Hauch commented on KAFKA-4183:
--

[~shikhar], that'd be great. Note that I created two PRs: one for {{trunk}} and 
one for {{0.10.0}}.

> Logical converters in JsonConverter don't properly handle null values
> -
>
> Key: KAFKA-4183
> URL: https://issues.apache.org/jira/browse/KAFKA-4183
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.1
>Reporter: Randall Hauch
>Assignee: Shikhar Bhushan
> Fix For: 0.10.1.0
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4173) SchemaProjector should successfully project when source schema field is missing and target schema field is optional

2016-09-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15497652#comment-15497652
 ] 

ASF GitHub Bot commented on KAFKA-4173:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1865


> SchemaProjector should successfully project when source schema field is 
> missing and target schema field is optional
> ---
>
> Key: KAFKA-4173
> URL: https://issues.apache.org/jira/browse/KAFKA-4173
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Shikhar Bhushan
>Assignee: Shikhar Bhushan
> Fix For: 0.10.1.0
>
>
> As reported in https://github.com/confluentinc/kafka-connect-hdfs/issues/115



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-4173) SchemaProjector should successfully project when source schema field is missing and target schema field is optional

2016-09-16 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-4173.

   Resolution: Fixed
Fix Version/s: 0.10.1.0

Issue resolved by pull request 1865
[https://github.com/apache/kafka/pull/1865]

> SchemaProjector should successfully project when source schema field is 
> missing and target schema field is optional
> ---
>
> Key: KAFKA-4173
> URL: https://issues.apache.org/jira/browse/KAFKA-4173
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Shikhar Bhushan
>Assignee: Shikhar Bhushan
> Fix For: 0.10.1.0
>
>
> As reported in https://github.com/confluentinc/kafka-connect-hdfs/issues/115





[GitHub] kafka pull request #1865: KAFKA-4173: SchemaProjector should successfully pr...

2016-09-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1865


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Reopened] (KAFKA-4183) Logical converters in JsonConverter don't properly handle null values

2016-09-16 Thread Shikhar Bhushan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shikhar Bhushan reopened KAFKA-4183:

  Assignee: Shikhar Bhushan  (was: Ewen Cheslack-Postava)

[~rhauch] Reopening this, I noticed an issue with handling default values. E.g. 
this test

{noformat}
@Test
public void timestampToConnectDefval() {
    Schema schema = Timestamp.builder().defaultValue(new java.util.Date(42)).schema();
    String msg = "{ \"schema\": { \"type\": \"int64\", \"name\": \"org.apache.kafka.connect.data.Timestamp\", \"version\": 1, \"default\": 42 }, \"payload\": null }";
    SchemaAndValue schemaAndValue = converter.toConnectData(TOPIC, msg.getBytes());
    assertEquals(schema, schemaAndValue.schema());
    assertEquals(new java.util.Date(42), schemaAndValue.value());
}
{noformat}

Happy to create a follow-up PR since I'm poking around in it anyway.
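The default-value fallback that the test above exercises amounts to a small decision rule on deserialization. A minimal sketch of that rule, using a toy `Schema` stand-in (this is illustrative only, not the real Connect `Schema` class or `JsonConverter` code):

```java
import java.util.Date;

public class DefaultValueFallback {
    // Toy stand-in for a Connect schema: only the pieces this rule needs.
    static class Schema {
        final boolean optional;
        final Object defaultValue; // may be null
        Schema(boolean optional, Object defaultValue) {
            this.optional = optional;
            this.defaultValue = defaultValue;
        }
    }

    // The rule under discussion: on a null payload, fall back to the
    // schema's default value if one exists; otherwise a null is only
    // legal when the schema is optional.
    static Object toConnect(Schema schema, Object payload) {
        if (payload != null) return payload;
        if (schema.defaultValue != null) return schema.defaultValue;
        if (schema.optional) return null;
        throw new IllegalArgumentException("null payload for non-optional schema without a default");
    }

    public static void main(String[] args) {
        Schema withDefault = new Schema(false, new Date(42));
        // Null payload plus a schema default yields the default value.
        System.out.println(toConnect(withDefault, null).equals(new Date(42)));
    }
}
```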

> Logical converters in JsonConverter don't properly handle null values
> -
>
> Key: KAFKA-4183
> URL: https://issues.apache.org/jira/browse/KAFKA-4183
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.1
>Reporter: Randall Hauch
>Assignee: Shikhar Bhushan
> Fix For: 0.10.1.0
>
>
> The {{JsonConverter.TO_CONNECT_LOGICAL_CONVERTERS}} map contains 
> {{LogicalTypeConverter}} implementations to convert from the raw value into 
> the corresponding logical type value, and they are used during 
> deserialization of message keys and/or values. However, these implementations 
> do not handle the case when the input raw value is null, which can happen 
> when a key or value has a schema that is or contains a field that is 
> _optional_.
> Consider a Kafka Connect schema of type STRUCT that contains a field "date" 
> with an optional schema of type {{org.apache.kafka.connect.data.Date}}. When 
> the key or value with this schema contains a null "date" field and is 
> serialized, the logical serializer properly will serialize the null value. 
> However, upon _deserialization_, the 
> {{JsonConverter.TO_CONNECT_LOGICAL_CONVERTERS}} are used to convert the 
> literal value (which is null) to a logical value. All of the 
> {{JsonConverter.TO_CONNECT_LOGICAL_CONVERTERS}} implementations will throw a 
> NullPointerException when the input value is null. 
> For example:
> {code:java}
> java.lang.NullPointerException
>   at org.apache.kafka.connect.json.JsonConverter$14.convert(JsonConverter.java:224)
>   at org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:731)
>   at org.apache.kafka.connect.json.JsonConverter.access$100(JsonConverter.java:53)
>   at org.apache.kafka.connect.json.JsonConverter$12.convert(JsonConverter.java:200)
>   at org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:727)
>   at org.apache.kafka.connect.json.JsonConverter.jsonToConnect(JsonConverter.java:354)
>   at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:343)
> {code}
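The failure mode and its fix boil down to a one-line null guard in each logical-type converter. A minimal, self-contained sketch of the pattern (the interface and constants below are illustrative stand-ins, not Kafka Connect's actual API):

```java
import java.util.Date;

public class NullSafeConverterSketch {
    // Hypothetical stand-in for Connect's per-logical-type converter; the
    // real interface and registration map live inside JsonConverter.
    interface LogicalTypeConverter {
        Object toConnect(Object rawValue);
    }

    // Buggy shape: unboxing a null Long throws NullPointerException,
    // which is the failure mode described above for optional fields.
    static final LogicalTypeConverter NAIVE = raw -> new Date((Long) raw);

    // Fixed shape: propagate the null so an optional logical field
    // survives deserialization instead of blowing up.
    static final LogicalTypeConverter NULL_SAFE =
        raw -> raw == null ? null : new Date((Long) raw);

    public static void main(String[] args) {
        System.out.println(NULL_SAFE.toConnect(null));               // null, no NPE
        System.out.println(NULL_SAFE.toConnect(42L) instanceof Date);
    }
}
```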





Build failed in Jenkins: kafka-trunk-jdk8 #888

2016-09-16 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4183; Corrected Kafka Connect's JSON Converter to properly convert

--
[...truncated 3392 lines...]
kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.PlaintextTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh PASSED

kafka.integration.SslTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.SslTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopicWithCollision STARTED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.SslTopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.SslTopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SslTopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.SslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.SslTopicMetadataTest > testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SslTopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.SslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopicWithInvalidReplication

[jira] [Commented] (KAFKA-4183) Logical converters in JsonConverter don't properly handle null values

2016-09-16 Thread Randall Hauch (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15497626#comment-15497626
 ] 

Randall Hauch commented on KAFKA-4183:
--

[~hachikuji], thanks. I've created a [pull 
request|https://github.com/apache/kafka/pull/1871] for the {{0.10.0}} branch.






[jira] [Commented] (KAFKA-4133) Provide a configuration to control consumer max in-flight fetches

2016-09-16 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15497628#comment-15497628
 ] 

Jason Gustafson commented on KAFKA-4133:


[~mimaison] Thanks for writing the KIP! I'll take a look early next week once 
the feature freeze has passed.

> Provide a configuration to control consumer max in-flight fetches
> -
>
> Key: KAFKA-4133
> URL: https://issues.apache.org/jira/browse/KAFKA-4133
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Mickael Maison
>
> With KIP-74, we now have a good way to limit the size of fetch responses, but 
> it may still be difficult for users to control overall memory since the 
> consumer will send fetches in parallel to all the brokers which own 
> partitions that it is subscribed to. To give users finer control, it might 
> make sense to add a `max.in.flight.fetches` setting to limit the total number 
> of concurrent fetches at any time. This would require a KIP since it's a new 
> configuration.
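One plausible way such a `max.in.flight.fetches` limit could be enforced internally is a counting semaphore gating fetch dispatch. The sketch below illustrates that idea only; it is not the consumer's actual implementation, and the class and field names are ours:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedFetches {
    final Semaphore permits;                               // one permit per allowed in-flight fetch
    final AtomicInteger current = new AtomicInteger();     // fetches in flight right now
    final AtomicInteger maxObserved = new AtomicInteger(); // high-water mark, for demonstration

    BoundedFetches(int maxInFlightFetches) {
        this.permits = new Semaphore(maxInFlightFetches);
    }

    // Simulate one fetch: block until a permit is free, "do the I/O",
    // then return the permit. The semaphore caps total concurrency.
    void fetch(String broker) throws InterruptedException {
        permits.acquire();
        try {
            int now = current.incrementAndGet();
            maxObserved.accumulateAndGet(now, Math::max);
            Thread.sleep(10); // stand-in for the network round trip
        } finally {
            current.decrementAndGet();
            permits.release();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BoundedFetches f = new BoundedFetches(2); // hypothetical max.in.flight.fetches = 2
        Thread[] workers = new Thread[5];         // five brokers' worth of parallel fetches
        for (int i = 0; i < 5; i++) {
            workers[i] = new Thread(() -> {
                try { f.fetch("broker"); } catch (InterruptedException ignored) { }
            });
            workers[i].start();
        }
        for (Thread w : workers) w.join();
        // The high-water mark never exceeds the configured limit.
        System.out.println("high-water mark of in-flight fetches: " + f.maxObserved.get());
    }
}
```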





[jira] [Commented] (KAFKA-4183) Logical converters in JsonConverter don't properly handle null values

2016-09-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15497623#comment-15497623
 ] 

ASF GitHub Bot commented on KAFKA-4183:
---

GitHub user rhauch opened a pull request:

https://github.com/apache/kafka/pull/1871

KAFKA-4183 Corrected Kafka Connect's JSON Converter to properly convert 
from null to logical values

The `JsonConverter` class has `LogicalTypeConverter` implementations for 
Date, Time, Timestamp, and Decimal, but these implementations fail when the 
input literal value (deserialized from the message) is null. 

Test cases were added to check for these cases, and these failed before the 
`LogicalTypeConverter` implementations were fixed to consider whether the 
schema has a default value or is optional, similarly to how the 
`JsonToConnectTypeConverter` implementations do this. Once the fixes were made, 
the new tests pass.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rhauch/kafka kafka-4183-0.10.0

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1871.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1871


commit 4ffb9409f5a75345cf53aed0d799e6c694f636ca
Author: Randall Hauch 
Date:   2016-09-16T19:05:06Z

KAFKA-4183 Corrected Kafka Connect's JSON Converter to properly convert 
from deserialized null values.









[GitHub] kafka pull request #1871: KAFKA-4183 Corrected Kafka Connect's JSON Converte...

2016-09-16 Thread rhauch
GitHub user rhauch opened a pull request:

https://github.com/apache/kafka/pull/1871

KAFKA-4183 Corrected Kafka Connect's JSON Converter to properly convert 
from null to logical values

The `JsonConverter` class has `LogicalTypeConverter` implementations for 
Date, Time, Timestamp, and Decimal, but these implementations fail when the 
input literal value (deserialized from the message) is null. 

Test cases were added to check for these cases, and these failed before the 
`LogicalTypeConverter` implementations were fixed to consider whether the 
schema has a default value or is optional, similarly to how the 
`JsonToConnectTypeConverter` implementations do this. Once the fixes were made, 
the new tests pass.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rhauch/kafka kafka-4183-0.10.0

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1871.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1871


commit 4ffb9409f5a75345cf53aed0d799e6c694f636ca
Author: Randall Hauch 
Date:   2016-09-16T19:05:06Z

KAFKA-4183 Corrected Kafka Connect's JSON Converter to properly convert 
from deserialized null values.






[jira] [Commented] (KAFKA-4183) Logical converters in JsonConverter don't properly handle null values

2016-09-16 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15497578#comment-15497578
 ] 

Jason Gustafson commented on KAFKA-4183:


[~rhauch] Good question. I'm not sure I have the answer, but if you open a PR 
into 0.10.0, I'll merge it there.






[jira] [Created] (KAFKA-4186) Transient failure in KStreamAggregationIntegrationTest.shouldGroupByKey

2016-09-16 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-4186:
--

 Summary: Transient failure in 
KStreamAggregationIntegrationTest.shouldGroupByKey
 Key: KAFKA-4186
 URL: https://issues.apache.org/jira/browse/KAFKA-4186
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jason Gustafson


Saw this running locally off of trunk:

{code}
org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldGroupByKey[1] FAILED
java.lang.AssertionError: Condition not met within timeout 6. Did not receive 10 number of records
    at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:268)
    at org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:211)
    at org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest.receiveMessages(KStreamAggregationIntegrationTest.java:480)
    at org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest.shouldGroupByKey(KStreamAggregationIntegrationTest.java:407)
{code}
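`TestUtils.waitForCondition` in the trace above is a poll-until-deadline helper; when the condition never becomes true, it fails with exactly the kind of "Condition not met within timeout" error shown. A minimal re-implementation of the idea (the name and signature here are ours, not Kafka's actual test utility):

```java
import java.util.function.BooleanSupplier;

public class WaitFor {
    // Poll `condition` every `intervalMs` until it returns true or the
    // deadline passes; on timeout, fail the same way the trace above does.
    static void waitForCondition(BooleanSupplier condition, long timeoutMs,
                                 long intervalMs, String message)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return; // condition satisfied before the deadline
            }
            Thread.sleep(intervalMs);
        }
        throw new AssertionError("Condition not met within timeout " + timeoutMs + ". " + message);
    }

    public static void main(String[] args) throws InterruptedException {
        int[] polls = {0};
        // Becomes true on the third poll, well inside the timeout.
        waitForCondition(() -> ++polls[0] >= 3, 1000, 1, "expected three polls");
        System.out.println("condition met after " + polls[0] + " polls");
    }
}
```

A transient failure like the one reported usually means the condition was legitimate but the timeout was too tight for a loaded machine.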





[jira] [Updated] (KAFKA-4186) Transient failure in KStreamAggregationIntegrationTest.shouldGroupByKey

2016-09-16 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-4186:
---
Component/s: streams

> Transient failure in KStreamAggregationIntegrationTest.shouldGroupByKey
> ---
>
> Key: KAFKA-4186
> URL: https://issues.apache.org/jira/browse/KAFKA-4186
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Jason Gustafson
>





[jira] [Commented] (KAFKA-3906) Connect logical types do not support nulls.

2016-09-16 Thread Shikhar Bhushan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15497533#comment-15497533
 ] 

Shikhar Bhushan commented on KAFKA-3906:


[~jcustenborder] did this come up in the context of {{JsonConverter}}, and if 
so can it be closed since KAFKA-4183 patched that?

> Connect logical types do not support nulls.
> ---
>
> Key: KAFKA-3906
> URL: https://issues.apache.org/jira/browse/KAFKA-3906
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0
>Reporter: Jeremy Custenborder
>Assignee: Ewen Cheslack-Postava
>
> The logical types for Kafka Connect do not support null data values. Date, 
> Decimal, Time, and Timestamp all will throw null reference exceptions if a 
> null is passed in to their fromLogical and toLogical methods. Date, Time, and 
> Timestamp require signature changes for these methods to support nullable 
> types.  





[jira] [Commented] (KAFKA-4183) Logical converters in JsonConverter don't properly handle null values

2016-09-16 Thread Randall Hauch (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15497530#comment-15497530
 ] 

Randall Hauch commented on KAFKA-4183:
--

[~hachikuji], will there be a 0.10.0.2? If so, any chance this might be 
included?






[jira] [Updated] (KAFKA-4183) Logical converters in JsonConverter don't properly handle null values

2016-09-16 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-4183:
---
   Resolution: Fixed
Fix Version/s: 0.10.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1867
[https://github.com/apache/kafka/pull/1867]






[jira] [Commented] (KAFKA-4183) Logical converters in JsonConverter don't properly handle null values

2016-09-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15497511#comment-15497511
 ] 

ASF GitHub Bot commented on KAFKA-4183:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1867





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1867: KAFKA-4183 Corrected Kafka Connect's JSON Converte...

2016-09-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1867


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #1868: MINOR: Improve output format of `kafka_reassign_pa...

2016-09-16 Thread vahidhashemian
GitHub user vahidhashemian reopened a pull request:

https://github.com/apache/kafka/pull/1868

MINOR: Improve output format of `kafka_reassign_partitions.sh` tool

The current output for the `--generate` option looks like this
```
Current partition replica assignment

{"version":1,"partitions":[{"topic":"t1","partition":0,"replicas":[0]}]}
Proposed partition reassignment configuration

{"version":1,"partitions":[{"topic":"t1","partition":0,"replicas":[1]}]}
```

This PR simply changes it to
```
Current partition replica assignment
{"version":1,"partitions":[{"topic":"t1","partition":0,"replicas":[0]}]}

Proposed partition reassignment configuration
{"version":1,"partitions":[{"topic":"t1","partition":0,"replicas":[1]}]}
```

to make it more readable.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka 
minor/improve_output_format_of_reassign_partitions

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1868.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1868


commit 8816df4b7d48a6fae41563f6c0195eb54713f2b0
Author: Vahid Hashemian 
Date:   2016-09-16T20:20:33Z

MINOR: Improve output format of `kafka_reassign_partitions.sh` tool

The current output for the --generate option looks like this
```
Current partition replica assignment

{"version":1,"partitions":[{"topic":"t1","partition":0,"replicas":[0]}]}
Proposed partition reassignment configuration

{"version":1,"partitions":[{"topic":"t1","partition":0,"replicas":[1]}]}
```

This PR simply changes it to
```
Current partition replica assignment
{"version":1,"partitions":[{"topic":"t1","partition":0,"replicas":[0]}]}

Proposed partition reassignment configuration
{"version":1,"partitions":[{"topic":"t1","partition":0,"replicas":[1]}]}
```

to make it more readable.






[GitHub] kafka pull request #1868: MINOR: Improve output format of `kafka_reassign_pa...

2016-09-16 Thread vahidhashemian
Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/1868




[jira] [Commented] (KAFKA-4185) Abstract out password verifier in SaslServer as an injectable dependency

2016-09-16 Thread Piyush Vijay (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497494#comment-15497494
 ] 

Piyush Vijay commented on KAFKA-4185:
-

[~junrao], [~gwenshap], [~ijuma] Any thoughts?

> Abstract out password verifier in SaslServer as an injectable dependency
> 
>
> Key: KAFKA-4185
> URL: https://issues.apache.org/jira/browse/KAFKA-4185
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.0.1
>Reporter: Piyush Vijay
> Fix For: 0.10.0.2
>
>
> Kafka comes with a default SASL/PLAIN implementation which assumes that 
> username and password are present in a JAAS
> config file. People often want to use some other way to provide username and 
> password to SaslServer. Their best bet,
> currently, is to have their own implementation of SaslServer (which would be, 
> in most cases, a copied version of PlainSaslServer
> minus the logic where password verification happens). This is not ideal.
> We believe that there exists a better way to structure the current 
> PlainSaslServer implementation which makes it very
> easy for people to plug in their custom password verifier without having to 
> rewrite SaslServer or copy any code.
> The idea is to have an injectable dependency interface PasswordVerifier which 
> can be re-implemented based on the
> requirements. There would be no need to re-implement or extend 
> PlainSaslServer class.
> Note that this is a commonly requested feature, and there have been some 
> attempts in the past to solve this problem:
> https://github.com/apache/kafka/pull/1350
> https://github.com/apache/kafka/pull/1770
> https://issues.apache.org/jira/browse/KAFKA-2629
> https://issues.apache.org/jira/browse/KAFKA-3679
> We believe that this proposed solution does not have the demerits for which 
> the previous proposals were rejected.
> I would be happy to discuss this more.
> Please find the link to the PR in the comments.





No-deps Kafka Java client?

2016-09-16 Thread Gian Merlino
Hey Kafkas,

Would it make sense to offer a no-deps Kafka client (with third-party
packages like com.example.xxx relocated to a package like
org.apache.kafka.relocated.com.example.xxx)? cglib is one example of a
project that does this, in the form of cglib-nodep:

  http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22cglib%22

I'm asking because in Druid, we are running into conflicts with Kafka's lz4
and Druid's lz4:

  https://github.com/druid-io/druid/issues/3266

We do have a classloader isolation mechanism for extensions like our Kafka
plugin, but unfortunately it doesn't help here. lz4 is used by Druid's base
classloader, which is a parent of the extension classloader, so Kafka's
won't get loaded. We could work around the problem by upgrading to 0.10,
but (a) we're hesitant since AFAIK this would mean dropping support for 0.9
brokers, and (b) this time it was lz4 but there may be other issues in the
future.

This may just be our problem to solve with a more creative classloader
setup, but I also wanted to float the idea of an easier-to-embed no-deps
Kafka library.

Gian


[jira] [Commented] (KAFKA-4185) Abstract out password verifier in SaslServer as an injectable dependency

2016-09-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15497405#comment-15497405
 ] 

ASF GitHub Bot commented on KAFKA-4185:
---

GitHub user piyushvijay opened a pull request:

https://github.com/apache/kafka/pull/1870

[KAFKA-4185] Abstract out password verifier in SaslServer as an injec…

…table dependency

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/piyushvijay/kafka passwordVerifier

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1870.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1870


commit cf5fc56d159a475329654fb277140d7c106d32ef
Author: Piyush Vijay 
Date:   2016-09-16T21:16:59Z

[KAFKA-4185] Abstract out password verifier in SaslServer as an injectable 
dependency




> Abstract out password verifier in SaslServer as an injectable dependency
> 
>
> Key: KAFKA-4185
> URL: https://issues.apache.org/jira/browse/KAFKA-4185
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.10.0.1
>Reporter: Piyush Vijay
> Fix For: 0.10.0.2
>
>
> Kafka comes with a default SASL/PLAIN implementation which assumes that 
> username and password are present in a JAAS
> config file. People often want to use some other way to provide username and 
> password to SaslServer. Their best bet,
> currently, is to have their own implementation of SaslServer (which would be, 
> in most cases, a copied version of PlainSaslServer
> minus the logic where password verification happens). This is not ideal.
> We believe that there exists a better way to structure the current 
> PlainSaslServer implementation which makes it very
> easy for people to plug in their custom password verifier without having to 
> rewrite SaslServer or copy any code.
> The idea is to have an injectable dependency interface PasswordVerifier which 
> can be re-implemented based on the
> requirements. There would be no need to re-implement or extend 
> PlainSaslServer class.
> Note that this is a commonly requested feature, and there have been some 
> attempts in the past to solve this problem:
> https://github.com/apache/kafka/pull/1350
> https://github.com/apache/kafka/pull/1770
> https://issues.apache.org/jira/browse/KAFKA-2629
> https://issues.apache.org/jira/browse/KAFKA-3679
> We believe that this proposed solution does not have the demerits for which 
> the previous proposals were rejected.
> I would be happy to discuss this more.
> Please find the link to the PR in the comments.





[GitHub] kafka pull request #1870: [KAFKA-4185] Abstract out password verifier in Sas...

2016-09-16 Thread piyushvijay
GitHub user piyushvijay opened a pull request:

https://github.com/apache/kafka/pull/1870

[KAFKA-4185] Abstract out password verifier in SaslServer as an injec…

…table dependency

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/piyushvijay/kafka passwordVerifier

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1870.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1870


commit cf5fc56d159a475329654fb277140d7c106d32ef
Author: Piyush Vijay 
Date:   2016-09-16T21:16:59Z

[KAFKA-4185] Abstract out password verifier in SaslServer as an injectable 
dependency






Re: [VOTE] 0.10.1 Release Plan

2016-09-16 Thread Jason Gustafson
Hi All,

Looks like this vote has passed with 8 binding and 4 non-binding votes.
I'll send a progress update this afternoon as we race for the Monday
feature freeze.

Thanks,
Jason

On Wed, Sep 14, 2016 at 11:08 PM, Neha Narkhede  wrote:

> +1 (binding)
>
> On Tue, Sep 13, 2016 at 6:58 PM Becket Qin  wrote:
>
> > +1 (non-binding)
> >
> > On Tue, Sep 13, 2016 at 5:33 PM, Dana Powers 
> > wrote:
> >
> > > +1
> > >
> >
> --
> Thanks,
> Neha
>


[jira] [Created] (KAFKA-4185) Abstract out password verifier in SaslServer as an injectable dependency

2016-09-16 Thread Piyush Vijay (JIRA)
Piyush Vijay created KAFKA-4185:
---

 Summary: Abstract out password verifier in SaslServer as an 
injectable dependency
 Key: KAFKA-4185
 URL: https://issues.apache.org/jira/browse/KAFKA-4185
 Project: Kafka
  Issue Type: Improvement
  Components: security
Affects Versions: 0.10.0.1
Reporter: Piyush Vijay
 Fix For: 0.10.0.2


Kafka comes with a default SASL/PLAIN implementation which assumes that 
username and password are present in a JAAS
config file. People often want to use some other way to provide username and 
password to SaslServer. Their best bet,
currently, is to have their own implementation of SaslServer (which would be, 
in most cases, a copied version of PlainSaslServer
minus the logic where password verification happens). This is not ideal.

We believe that there exists a better way to structure the current 
PlainSaslServer implementation which makes it very
easy for people to plug in their custom password verifier without having to 
rewrite SaslServer or copy any code.

The idea is to have an injectable dependency interface PasswordVerifier which 
can be re-implemented based on the
requirements. There would be no need to re-implement or extend PlainSaslServer 
class.

Note that this is a commonly requested feature, and there have been some 
attempts in the past to solve this problem:
https://github.com/apache/kafka/pull/1350
https://github.com/apache/kafka/pull/1770
https://issues.apache.org/jira/browse/KAFKA-2629
https://issues.apache.org/jira/browse/KAFKA-3679

We believe that this proposed solution does not have the demerits for which 
the previous proposals were rejected.
I would be happy to discuss this more.

Please find the link to the PR in the comments.
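The proposed shape might look something like the following. This is a sketch only: the interface name `PasswordVerifier` comes from the ticket, but the method signature and the wiring around it are illustrative assumptions, not the code in the PR.

```java
import java.util.Collections;
import java.util.Map;

// Sketch of the injectable-dependency idea. The interface name
// "PasswordVerifier" comes from the ticket; the method signature and the
// wiring shown here are illustrative assumptions, not the actual PR.
public class PasswordVerifierSketch {

    interface PasswordVerifier {
        boolean verify(String username, char[] password);
    }

    // Default behavior: credentials from an in-memory map, standing in for
    // the JAAS-config-backed lookup PlainSaslServer performs today.
    static PasswordVerifier mapBacked(Map<String, String> credentials) {
        return (user, pass) -> {
            String expected = credentials.get(user);
            return expected != null && expected.equals(new String(pass));
        };
    }

    // A SaslServer implementation would delegate here instead of hard-coding
    // the lookup, letting users plug in an LDAP- or database-backed verifier
    // without copying PlainSaslServer.
    static boolean authenticate(PasswordVerifier verifier, String user, char[] pass) {
        return verifier.verify(user, pass);
    }

    public static void main(String[] args) {
        PasswordVerifier v = mapBacked(Collections.singletonMap("alice", "secret"));
        System.out.println(authenticate(v, "alice", "secret".toCharArray())); // true
        System.out.println(authenticate(v, "alice", "wrong".toCharArray()));  // false
    }
}
```

The point of the shape is that only `verify` needs reimplementing; everything else in the SASL server stays untouched.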





[jira] [Resolved] (KAFKA-4069) Forward records in context of cache flushing/eviction

2016-09-16 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska resolved KAFKA-4069.
-
Resolution: Fixed

> Forward records in context of cache flushing/eviction
> -
>
> Key: KAFKA-4069
> URL: https://issues.apache.org/jira/browse/KAFKA-4069
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Eno Thereska
>Assignee: Damian Guy
> Fix For: 0.10.1.0
>
>
> When the cache is in place, records should be forwarded downstream when they 
> are evicted or flushed from the cache. 
> This is a major structural change to the internals of the code, moving from 
> having a single record outstanding inside a task to potentially having 
> several records outstanding. 





[jira] [Resolved] (KAFKA-3778) Avoiding using range queries of RocksDBWindowStore on KStream windowed aggregations

2016-09-16 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska resolved KAFKA-3778.
-
Resolution: Fixed

> Avoiding using range queries of RocksDBWindowStore on KStream windowed 
> aggregations
> ---
>
> Key: KAFKA-3778
> URL: https://issues.apache.org/jira/browse/KAFKA-3778
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
> Fix For: 0.10.1.0
>
>
> RocksDbWindowStore currently does not use caches, but its window segments 
> implemented as RocksDbStore does. However, its range query {{fetch(key, 
> fromTime, toTime)}} will cause all its touched segments' cache to be flushed. 
> After KAFKA-3777, we should change its implementation for 
> KStreamWindowAggregation / KStreamWindowReduce to not use {{fetch}}, but just 
> as multiple {{get}} calls on the underlying segments, one for each affected 
> window range.
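The shift described above can be sketched as follows. Note the `Segment` type and the key encoding are invented for illustration and do not match the real RocksDbWindowStore internals:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// Illustration of the change described above: instead of one range fetch
// across segments (which flushes every touched segment's cache), issue one
// point lookup per affected window. The Segment type and the key encoding
// are invented for illustration and do not match the real store internals.
public class WindowedLookupSketch {

    static class Segment {
        final Map<String, Long> data = new HashMap<>();
        Long get(String key) { return data.get(key); }
    }

    // One get() per window start, directed at the segment owning that window.
    static List<Long> pointLookups(NavigableMap<Long, Segment> segments, String key,
                                   long[] windowStarts, long segmentInterval) {
        List<Long> results = new ArrayList<>();
        for (long start : windowStarts) {
            Segment seg = segments.get((start / segmentInterval) * segmentInterval);
            results.add(seg == null ? null : seg.get(key + "@" + start));
        }
        return results;
    }

    public static void main(String[] args) {
        NavigableMap<Long, Segment> segments = new TreeMap<>();
        Segment first = new Segment();
        first.data.put("k@0", 42L);
        segments.put(0L, first);
        // Window at t=0 exists; the window at t=60000 has no segment yet.
        System.out.println(pointLookups(segments, "k", new long[]{0L, 60_000L}, 60_000L));
        // prints [42, null]
    }
}
```

Each lookup touches exactly one segment, so no unrelated segment cache is flushed.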





[jira] [Resolved] (KAFKA-3780) Add new config cache.max.bytes.buffering to the streams configuration

2016-09-16 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska resolved KAFKA-3780.
-
Resolution: Fixed

> Add new config cache.max.bytes.buffering to the streams configuration
> -
>
> Key: KAFKA-3780
> URL: https://issues.apache.org/jira/browse/KAFKA-3780
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
> Fix For: 0.10.1.0
>
>
> Add a new configuration cache.max.bytes.buffering to the streams 
> configuration options as described in KIP-63 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-63%3A+Unify+store+and+downstream+caching+in+streams
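For reference, the option would be supplied like any other streams config. Whether StreamsConfig ends up exposing a named constant for it is an assumption here, so the raw string key from the ticket is used:

```java
import java.util.Properties;

// Sketch of supplying the new option. Whether StreamsConfig ends up exposing
// a named constant for it is an assumption, so the raw string key from the
// ticket is used here.
public class CacheConfigSketch {

    static Properties streamsProps(long cacheBytes) {
        Properties props = new Properties();
        props.put("application.id", "wordcount");         // illustrative values
        props.put("bootstrap.servers", "localhost:9092");
        // Total bytes the record caches may buffer before flushing/forwarding
        // downstream (per KIP-63); 0 would disable caching entirely.
        props.put("cache.max.bytes.buffering", String.valueOf(cacheBytes));
        return props;
    }

    public static void main(String[] args) {
        System.out.println(
            streamsProps(10 * 1024 * 1024L).getProperty("cache.max.bytes.buffering"));
        // prints 10485760
    }
}
```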





[jira] [Resolved] (KAFKA-4167) Add cache metrics

2016-09-16 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska resolved KAFKA-4167.
-
Resolution: Fixed

> Add cache metrics
> -
>
> Key: KAFKA-4167
> URL: https://issues.apache.org/jira/browse/KAFKA-4167
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
> Fix For: 0.10.1.0
>
>
> Would be good to report things like hits and misses, overwrites, number of 
> puts and gets, number of flushes etc.





[jira] [Resolved] (KAFKA-3974) LRU cache should store bytes/object and not records

2016-09-16 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska resolved KAFKA-3974.
-
Resolution: Fixed

> LRU cache should store bytes/object and not records
> ---
>
> Key: KAFKA-3974
> URL: https://issues.apache.org/jira/browse/KAFKA-3974
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Eno Thereska
>Assignee: Eno Thereska
> Fix For: 0.10.1.0
>
>
> After the investigation in KAFKA-3973, if the outcome is either bytes or 
> objects, the actual LRU cache needs to be modified to store bytes or objects 
> (instead of records). The cache will have a total byte size as an input 
> parameter.





[jira] [Resolved] (KAFKA-3777) Extract the existing LRU cache out of RocksDBStore

2016-09-16 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska resolved KAFKA-3777.
-
Resolution: Fixed

> Extract the existing LRU cache out of RocksDBStore
> --
>
> Key: KAFKA-3777
> URL: https://issues.apache.org/jira/browse/KAFKA-3777
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Eno Thereska
>Assignee: Damian Guy
> Fix For: 0.10.1.0
>
>
> The LRU cache currently lives inside the RocksDbStore class. As part of 
> KAFKA-3776 it needs to be moved out of RocksDbStore and become a separate 
> component used in:
> 1. KGroupedStream.aggregate() / reduce(), 
> 2. KStream.aggregateByKey() / reduceByKey(),
> 3. KTable.to() (this will be done in KAFKA-3779).
> All of the above operators can have a cache on top to deduplicate the 
> materialized state store in RocksDB.
> The scope of this JIRA is to extract the cache out of RocksDBStore and use it 
> for items 1) and 2) above; it should be done together with / after 
> KAFKA-3780.
> Note it is NOT in the scope of this JIRA to re-write the cache, so this will 
> basically stay the same record-based cache we currently have.





[jira] [Assigned] (KAFKA-3438) Rack Aware Replica Reassignment should warn of overloaded brokers

2016-09-16 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian reassigned KAFKA-3438:
--

Assignee: Vahid Hashemian

> Rack Aware Replica Reassignment should warn of overloaded brokers
> -
>
> Key: KAFKA-3438
> URL: https://issues.apache.org/jira/browse/KAFKA-3438
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.0.0
>Reporter: Ben Stopford
>Assignee: Vahid Hashemian
> Fix For: 0.10.1.0
>
>
> We've changed the replica reassignment code to be rack aware.
> One problem that might catch users out would be that they rebalance the 
> cluster using kafka-reassign-partitions.sh but their rack configuration means 
> that some high proportion of replicas are pushed onto a single, or small 
> number of, brokers. 
> This should be an easy problem to avoid, by changing the rack assignment 
> information, but we should probably warn users if they are going to create 
> something that is unbalanced. 
> So imagine I have a Kafka cluster of 12 nodes spread over two racks with rack 
> awareness enabled. If I add a 13th machine, on a new rack, and run the 
> rebalance tool, that new machine will get ~6x as many replicas as the least 
> loaded broker. 
> Suggest a warning be added to the tool output when --generate is called: 
> "The most loaded broker has 2.3x as many replicas as the least loaded 
> broker. This is likely due to an uneven distribution of brokers across racks. 
> You're advised to alter the rack config so there are approximately the same 
> number of brokers per rack" and displays the individual rack→#brokers and 
> broker→#replicas data for the proposed move.  
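The suggested warning boils down to a max/min ratio over the broker→#replicas counts. A rough illustration (the real tool would derive the counts from the proposed assignment; the 1.5x threshold here is arbitrary):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// The computation behind the suggested warning: the ratio between the most
// and least loaded brokers' replica counts after a proposed reassignment.
// Illustrative only; the threshold and counts are made up.
public class ImbalanceCheckSketch {

    static double imbalanceRatio(Map<Integer, Integer> replicasPerBroker) {
        int max = Collections.max(replicasPerBroker.values());
        int min = Collections.min(replicasPerBroker.values());
        return (double) max / min;
    }

    public static void main(String[] args) {
        // Scenario from the ticket: 12 brokers over two racks with ~20
        // replicas each, plus a 13th broker alone on a new rack absorbing
        // a disproportionate share of the replicas.
        Map<Integer, Integer> counts = new HashMap<>();
        for (int broker = 0; broker < 12; broker++)
            counts.put(broker, 20);
        counts.put(12, 120);
        double ratio = imbalanceRatio(counts);
        if (ratio > 1.5)
            System.out.printf(
                "Warning: the most loaded broker has %.1fx as many replicas as the least loaded broker%n",
                ratio);
    }
}
```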





[jira] [Created] (KAFKA-4184) Test failure in ReplicationQuotasTest.shouldBootstrapTwoBrokersWithFollowerThrottle

2016-09-16 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-4184:
--

 Summary: Test failure in 
ReplicationQuotasTest.shouldBootstrapTwoBrokersWithFollowerThrottle
 Key: KAFKA-4184
 URL: https://issues.apache.org/jira/browse/KAFKA-4184
 Project: Kafka
  Issue Type: Bug
  Components: unit tests
Reporter: Jason Gustafson
Assignee: Ben Stopford


Seen here: https://builds.apache.org/job/kafka-trunk-jdk7/1544/.

{code}
unit.kafka.server.ReplicationQuotasTest > 
shouldBootstrapTwoBrokersWithFollowerThrottle FAILED
java.lang.AssertionError: expected:<3.0E7> but was:<2.060725619683527E7>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:553)
at org.junit.Assert.assertEquals(Assert.java:683)
at 
unit.kafka.server.ReplicationQuotasTest$$anonfun$shouldMatchQuotaReplicatingThroughAnAsymmetricTopology$14.apply(ReplicationQuotasTest.scala:172)
at 
unit.kafka.server.ReplicationQuotasTest$$anonfun$shouldMatchQuotaReplicatingThroughAnAsymmetricTopology$14.apply(ReplicationQuotasTest.scala:168)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
unit.kafka.server.ReplicationQuotasTest.shouldMatchQuotaReplicatingThroughAnAsymmetricTopology(ReplicationQuotasTest.scala:168)
at 
unit.kafka.server.ReplicationQuotasTest.shouldBootstrapTwoBrokersWithFollowerThrottle(ReplicationQuotasTest.scala:74)
{code}
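The failing assertion compares a measured replication rate to an exact expected value; for a throttle metric, a tolerance (or retry-until-true) check is the usual remedy. A plain-Java sketch of the tolerance form, standing in for JUnit's `assertEquals(expected, actual, delta)` overload:

```java
// Sketch of an exact-vs-tolerant rate assertion. Plain Java, standing in
// for JUnit's assertEquals(expected, actual, delta); the 10% tolerance is
// an arbitrary illustrative choice.
public class RateAssertionSketch {

    static void assertWithinTolerance(double expected, double actual, double fraction) {
        double allowed = expected * fraction;
        if (Math.abs(expected - actual) > allowed)
            throw new AssertionError(
                "expected " + expected + " +/- " + allowed + " but was " + actual);
    }

    public static void main(String[] args) {
        assertWithinTolerance(3.0e7, 2.9e7, 0.10);      // within 10%: passes
        try {
            // The value from the reported failure is roughly 31% below target,
            // so even a tolerant check would (correctly) still flag it.
            assertWithinTolerance(3.0e7, 2.060725619683527e7, 0.10);
            throw new IllegalStateException("tolerance check should have failed");
        } catch (AssertionError expectedFailure) {
            System.out.println("tolerance check correctly rejects the reported value");
        }
    }
}
```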





[GitHub] kafka pull request #1869: HOTFIX: changed quickstart download from 0.10.0.0 ...

2016-09-16 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/1869

HOTFIX: changed quickstart download from 0.10.0.0 to 0.10.0.1



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka hotfix-doc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1869.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1869


commit 78f7b608027b864cdf845f769b276e8e5b544291
Author: Matthias J. Sax 
Date:   2016-09-16T20:22:44Z

HOTFIX: changed quickstart download from 0.10.0.0 to 0.10.0.1






[GitHub] kafka pull request #1868: MINOR: Improve output format of `kafka_reassign_pa...

2016-09-16 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/1868

MINOR: Improve output format of `kafka_reassign_partitions.sh` tool

The current output for the `--generate` option looks like this
```
Current partition replica assignment

{"version":1,"partitions":[{"topic":"t1","partition":0,"replicas":[0]}]}
Proposed partition reassignment configuration

{"version":1,"partitions":[{"topic":"t1","partition":0,"replicas":[1]}]}
```

This PR simply changes it to
```
Current partition replica assignment
{"version":1,"partitions":[{"topic":"t1","partition":0,"replicas":[0]}]}

Proposed partition reassignment configuration
{"version":1,"partitions":[{"topic":"t1","partition":0,"replicas":[1]}]}
```

to make it more readable.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka 
minor/improve_output_format_of_reassign_partitions

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1868.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1868


commit 8816df4b7d48a6fae41563f6c0195eb54713f2b0
Author: Vahid Hashemian 
Date:   2016-09-16T20:20:33Z

MINOR: Improve output format of `kafka_reassign_partitions.sh` tool

The current output for the --generate option looks like this
```
Current partition replica assignment

{"version":1,"partitions":[{"topic":"t1","partition":0,"replicas":[0]}]}
Proposed partition reassignment configuration

{"version":1,"partitions":[{"topic":"t1","partition":0,"replicas":[1]}]}
```

This PR simply changes it to
```
Current partition replica assignment
{"version":1,"partitions":[{"topic":"t1","partition":0,"replicas":[0]}]}

Proposed partition reassignment configuration
{"version":1,"partitions":[{"topic":"t1","partition":0,"replicas":[1]}]}
```

to make it more readable.






Build failed in Jenkins: kafka-trunk-jdk8 #887

2016-09-16 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3776: Unify store and downstream caching in streams

--
[...truncated 12991 lines...]
org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldEncodeDecodeWithUserEndPoint PASSED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldBeBackwardCompatible STARTED

org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfoTest > 
shouldBeBackwardCompatible PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testInjectClients STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testInjectClients PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeCommit 
STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeCommit 
PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testPartitionAssignmentChange STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testPartitionAssignmentChange PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeClean 
STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeClean 
PASSED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval STARTED

org.apache.kafka.streams.processor.internals.PunctuationQueueTest > 
testPunctuationInterval PASSED

org.apache.kafka.streams.processor.internals.RecordQueueTest > testTimeTracking 
STARTED

org.apache.kafka.streams.processor.internals.RecordQueueTest > testTimeTracking 
PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterNonPersistentStore STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterNonPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testGetStore STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testGetStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testClose STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testClose PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testChangeLogOffsets STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testChangeLogOffsets PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterPersistentStore STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testRegisterPersistentStore PASSED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testNoTopic STARTED

org.apache.kafka.streams.processor.internals.ProcessorStateManagerTest > 
testNoTopic PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testProcessOrder 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume 
STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > testPauseResume 
PASSED

org.apache.kafka.streams.processor.internals.StreamTaskTest > 
testMaybePunctuate STARTED

org.apache.kafka.streams.processor.internals.StreamTaskTest > 
testMaybePunctuate PASSED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUnite STARTED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUnite PASSED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUniteMany 
STARTED

org.apache.kafka.streams.processor.internals.QuickUnionTest > testUniteMany 
PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics 
STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSink STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testNamedTopicMatchesAlreadyProvidedPattern STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testNamedTopicMatchesAlreadyProvidedPattern PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactAndDeleteSetForWindowStores STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 

[jira] [Updated] (KAFKA-4183) Logical converters in JsonConverter don't properly handle null values

2016-09-16 Thread Randall Hauch (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch updated KAFKA-4183:
-
Status: Patch Available  (was: Open)

Added a [pull request|https://github.com/apache/kafka/pull/1867] with a fix.

The {{JsonConverter}} class has {{LogicalTypeConverter}} implementations for 
{{Date}}, {{Time}}, {{Timestamp}}, and {{Decimal}}, but these implementations 
fail when the input literal value (deserialized from the message) is null.

Test cases were added to check for these cases, and these failed before the 
{{LogicalTypeConverter}} implementations were fixed to consider whether the 
schema has a default value or is optional, similarly to how the 
{{JsonToConnectTypeConverter}} implementations do this. Once the fixes were 
made to the {{LogicalTypeConverter}} implementations, the new tests pass.
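
An illustrative sketch of the null-guarding approach described above. The class and the Schema stand-in below are hypothetical, not the actual Kafka Connect JsonConverter internals; they only show the shape of the fix: check the schema's default value and optional flag before converting.

```java
// Hypothetical sketch of a null-safe logical date converter; not the real
// Kafka Connect API. A Connect schema is reduced here to two properties.
import java.util.Date;

public class NullSafeDateConverter {

    // Minimal stand-in for a Connect schema: an optional flag plus a default value.
    public static final class Schema {
        public final boolean optional;
        public final Object defaultValue;
        public Schema(boolean optional, Object defaultValue) {
            this.optional = optional;
            this.defaultValue = defaultValue;
        }
    }

    // Convert a deserialized epoch-day value to a Date. A null input falls back
    // to the schema default, or to null when the schema is optional, instead of
    // throwing a NullPointerException.
    public static Date convert(Schema schema, Integer epochDays) {
        if (epochDays == null) {
            if (schema.defaultValue != null)
                return (Date) schema.defaultValue;
            if (schema.optional)
                return null;
            throw new IllegalArgumentException(
                "null value for non-optional schema without a default");
        }
        return new Date(epochDays * 86_400_000L); // days -> milliseconds
    }

    public static void main(String[] args) {
        Schema optionalDate = new Schema(true, null);
        System.out.println(convert(optionalDate, null)); // prints "null" rather than throwing
        System.out.println(convert(optionalDate, 0).getTime()); // prints "0"
    }
}
```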

> Logical converters in JsonConverter don't properly handle null values
> -
>
> Key: KAFKA-4183
> URL: https://issues.apache.org/jira/browse/KAFKA-4183
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.1
>Reporter: Randall Hauch
>Assignee: Ewen Cheslack-Postava
>
> The {{JsonConverter.TO_CONNECT_LOGICAL_CONVERTERS}} map contains 
> {{LogicalTypeConverter}} implementations to convert from the raw value into 
> the corresponding logical type value, and they are used during 
> deserialization of message keys and/or values. However, these implementations 
> do not handle the case when the input raw value is null, which can happen 
> when a key or value has a schema that is or contains a field that is 
> _optional_.
> Consider a Kafka Connect schema of type STRUCT that contains a field "date" 
> with an optional schema of type {{org.apache.kafka.connect.data.Date}}. When 
> the key or value with this schema contains a null "date" field and is 
> serialized, the logical serializer will properly serialize the null value. 
> However, upon _deserialization_, the 
> {{JsonConverter.TO_CONNECT_LOGICAL_CONVERTERS}} are used to convert the 
> literal value (which is null) to a logical value. All of the 
> {{JsonConverter.TO_CONNECT_LOGICAL_CONVERTERS}} implementations will throw a 
> NullPointerException when the input value is null. 
> For example:
> {code:java}
> java.lang.NullPointerException
>   at 
> org.apache.kafka.connect.json.JsonConverter$14.convert(JsonConverter.java:224)
>   at 
> org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:731)
>   at 
> org.apache.kafka.connect.json.JsonConverter.access$100(JsonConverter.java:53)
>   at 
> org.apache.kafka.connect.json.JsonConverter$12.convert(JsonConverter.java:200)
>   at 
> org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:727)
>   at 
> org.apache.kafka.connect.json.JsonConverter.jsonToConnect(JsonConverter.java:354)
>   at 
> org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:343)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4183) Logical converters in JsonConverter don't properly handle null values

2016-09-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15497133#comment-15497133
 ] 

ASF GitHub Bot commented on KAFKA-4183:
---

GitHub user rhauch opened a pull request:

https://github.com/apache/kafka/pull/1867

KAFKA-4183 Corrected Kafka Connect's JSON Converter to properly convert 
from null to logical values

The `JsonConverter` class has `LogicalTypeConverter` implementations for 
Date, Time, Timestamp, and Decimal, but these implementations fail when the 
input literal value (deserialized from the message) is null. 

Test cases were added to check for these cases, and these failed before the 
`LogicalTypeConverter` implementations were fixed to consider whether the 
schema has a default value or is optional, similarly to how the 
`JsonToConnectTypeConverter` implementations do this. Once the fixes were made, 
the new tests pass.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rhauch/kafka kafka-4183

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1867.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1867


commit c21db21e7b56f7c8ea31fab9852a6852dc038015
Author: Randall Hauch 
Date:   2016-09-16T19:05:06Z

KAFKA-4183 Corrected Kafka Connect's JSON Converter to properly convert 
from deserialized null values.




> Logical converters in JsonConverter don't properly handle null values
> -
>
> Key: KAFKA-4183
> URL: https://issues.apache.org/jira/browse/KAFKA-4183
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.1
>Reporter: Randall Hauch
>Assignee: Ewen Cheslack-Postava
>
> The {{JsonConverter.TO_CONNECT_LOGICAL_CONVERTERS}} map contains 
> {{LogicalTypeConverter}} implementations to convert from the raw value into 
> the corresponding logical type value, and they are used during 
> deserialization of message keys and/or values. However, these implementations 
> do not handle the case when the input raw value is null, which can happen 
> when a key or value has a schema that is or contains a field that is 
> _optional_.
> Consider a Kafka Connect schema of type STRUCT that contains a field "date" 
> with an optional schema of type {{org.apache.kafka.connect.data.Date}}. When 
> the key or value with this schema contains a null "date" field and is 
> serialized, the logical serializer will properly serialize the null value. 
> However, upon _deserialization_, the 
> {{JsonConverter.TO_CONNECT_LOGICAL_CONVERTERS}} are used to convert the 
> literal value (which is null) to a logical value. All of the 
> {{JsonConverter.TO_CONNECT_LOGICAL_CONVERTERS}} implementations will throw a 
> NullPointerException when the input value is null. 
> For example:
> {code:java}
> java.lang.NullPointerException
>   at 
> org.apache.kafka.connect.json.JsonConverter$14.convert(JsonConverter.java:224)
>   at 
> org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:731)
>   at 
> org.apache.kafka.connect.json.JsonConverter.access$100(JsonConverter.java:53)
>   at 
> org.apache.kafka.connect.json.JsonConverter$12.convert(JsonConverter.java:200)
>   at 
> org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:727)
>   at 
> org.apache.kafka.connect.json.JsonConverter.jsonToConnect(JsonConverter.java:354)
>   at 
> org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:343)
> {code}





[GitHub] kafka pull request #1867: KAFKA-4183 Corrected Kafka Connect's JSON Converte...

2016-09-16 Thread rhauch
GitHub user rhauch opened a pull request:

https://github.com/apache/kafka/pull/1867

KAFKA-4183 Corrected Kafka Connect's JSON Converter to properly convert 
from null to logical values

The `JsonConverter` class has `LogicalTypeConverter` implementations for 
Date, Time, Timestamp, and Decimal, but these implementations fail when the 
input literal value (deserialized from the message) is null. 

Test cases were added to check for these cases, and these failed before the 
`LogicalTypeConverter` implementations were fixed to consider whether the 
schema has a default value or is optional, similarly to how the 
`JsonToConnectTypeConverter` implementations do this. Once the fixes were made, 
the new tests pass.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rhauch/kafka kafka-4183

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1867.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1867


commit c21db21e7b56f7c8ea31fab9852a6852dc038015
Author: Randall Hauch 
Date:   2016-09-16T19:05:06Z

KAFKA-4183 Corrected Kafka Connect's JSON Converter to properly convert 
from deserialized null values.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #1866: MINOR: Add test cases for delays in consumer rebal...

2016-09-16 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1866

MINOR: Add test cases for delays in consumer rebalance listener



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka rebalance-delay-test-cases

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1866.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1866


commit f4e759f0b5f10d76f117feeac613fee3d2da1c2f
Author: Jason Gustafson 
Date:   2016-09-16T19:08:19Z

MINOR: Add test cases for delays in consumer rebalance listener






[jira] [Commented] (KAFKA-4183) Logical converters in JsonConverter don't properly handle null values

2016-09-16 Thread Randall Hauch (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15497073#comment-15497073
 ] 

Randall Hauch commented on KAFKA-4183:
--

I should be able to provide a pull request to fix this. I guess I'll do it 
against "trunk" (for 0.10.1.0?), even though I'd love for this fix to be 
backported to any upcoming 0.10.0.x release.

> Logical converters in JsonConverter don't properly handle null values
> -
>
> Key: KAFKA-4183
> URL: https://issues.apache.org/jira/browse/KAFKA-4183
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.1
>Reporter: Randall Hauch
>Assignee: Ewen Cheslack-Postava
>
> The {{JsonConverter.TO_CONNECT_LOGICAL_CONVERTERS}} map contains 
> {{LogicalTypeConverter}} implementations to convert from the raw value into 
> the corresponding logical type value, and they are used during 
> deserialization of message keys and/or values. However, these implementations 
> do not handle the case when the input raw value is null, which can happen 
> when a key or value has a schema that is or contains a field that is 
> _optional_.
> Consider a Kafka Connect schema of type STRUCT that contains a field "date" 
> with an optional schema of type {{org.apache.kafka.connect.data.Date}}. When 
> the key or value with this schema contains a null "date" field and is 
> serialized, the logical serializer will properly serialize the null value. 
> However, upon _deserialization_, the 
> {{JsonConverter.TO_CONNECT_LOGICAL_CONVERTERS}} are used to convert the 
> literal value (which is null) to a logical value. All of the 
> {{JsonConverter.TO_CONNECT_LOGICAL_CONVERTERS}} implementations will throw a 
> NullPointerException when the input value is null. 
> For example:
> {code:java}
> java.lang.NullPointerException
>   at 
> org.apache.kafka.connect.json.JsonConverter$14.convert(JsonConverter.java:224)
>   at 
> org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:731)
>   at 
> org.apache.kafka.connect.json.JsonConverter.access$100(JsonConverter.java:53)
>   at 
> org.apache.kafka.connect.json.JsonConverter$12.convert(JsonConverter.java:200)
>   at 
> org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:727)
>   at 
> org.apache.kafka.connect.json.JsonConverter.jsonToConnect(JsonConverter.java:354)
>   at 
> org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:343)
> {code}





[jira] [Created] (KAFKA-4183) Logical converters in JsonConverter don't properly handle null values

2016-09-16 Thread Randall Hauch (JIRA)
Randall Hauch created KAFKA-4183:


 Summary: Logical converters in JsonConverter don't properly handle 
null values
 Key: KAFKA-4183
 URL: https://issues.apache.org/jira/browse/KAFKA-4183
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 0.10.0.1
Reporter: Randall Hauch
Assignee: Ewen Cheslack-Postava


The {{JsonConverter.TO_CONNECT_LOGICAL_CONVERTERS}} map contains 
{{LogicalTypeConverter}} implementations to convert from the raw value into the 
corresponding logical type value, and they are used during deserialization of 
message keys and/or values. However, these implementations do not handle the 
case when the input raw value is null, which can happen when a key or value has 
a schema that is or contains a field that is _optional_.

Consider a Kafka Connect schema of type STRUCT that contains a field "date" 
with an optional schema of type {{org.apache.kafka.connect.data.Date}}. When 
the key or value with this schema contains a null "date" field and is 
serialized, the logical serializer will properly serialize the null value. 
However, upon _deserialization_, the 
{{JsonConverter.TO_CONNECT_LOGICAL_CONVERTERS}} are used to convert the literal 
value (which is null) to a logical value. All of the 
{{JsonConverter.TO_CONNECT_LOGICAL_CONVERTERS}} implementations will throw a 
NullPointerException when the input value is null. 

For example:

{code:java}
java.lang.NullPointerException
at 
org.apache.kafka.connect.json.JsonConverter$14.convert(JsonConverter.java:224)
at 
org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:731)
at 
org.apache.kafka.connect.json.JsonConverter.access$100(JsonConverter.java:53)
at 
org.apache.kafka.connect.json.JsonConverter$12.convert(JsonConverter.java:200)
at 
org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:727)
at 
org.apache.kafka.connect.json.JsonConverter.jsonToConnect(JsonConverter.java:354)
at 
org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:343)
{code}





Build failed in Jenkins: kafka-trunk-jdk7 #1544

2016-09-16 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3776: Unify store and downstream caching in streams

--
[...truncated 3385 lines...]
kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
STARTED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.SslTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
STARTED

kafka.integration.SslTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopicWithCollision 
STARTED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.SslTopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.SslTopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SslTopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.SslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SslTopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.SslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED


[jira] [Commented] (KAFKA-4173) SchemaProjector should successfully project when source schema field is missing and target schema field is optional

2016-09-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15496995#comment-15496995
 ] 

ASF GitHub Bot commented on KAFKA-4173:
---

GitHub user shikhar opened a pull request:

https://github.com/apache/kafka/pull/1865

KAFKA-4173: SchemaProjector should successfully project missing Struct 
field when target field is optional



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shikhar/kafka kafka-4173

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1865.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1865


commit 085e30ee5ab49981dbb9b1f353b3ea02fdadec7e
Author: Shikhar Bhushan 
Date:   2016-09-16T18:16:18Z

KAFKA-4173: SchemaProjector should successfully project missing Struct 
field when target field is optional




> SchemaProjector should successfully project when source schema field is 
> missing and target schema field is optional
> ---
>
> Key: KAFKA-4173
> URL: https://issues.apache.org/jira/browse/KAFKA-4173
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Shikhar Bhushan
>Assignee: Shikhar Bhushan
>
> As reported in https://github.com/confluentinc/kafka-connect-hdfs/issues/115





[GitHub] kafka pull request #1865: KAFKA-4173: SchemaProjector should successfully pr...

2016-09-16 Thread shikhar
GitHub user shikhar opened a pull request:

https://github.com/apache/kafka/pull/1865

KAFKA-4173: SchemaProjector should successfully project missing Struct 
field when target field is optional



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shikhar/kafka kafka-4173

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1865.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1865


commit 085e30ee5ab49981dbb9b1f353b3ea02fdadec7e
Author: Shikhar Bhushan 
Date:   2016-09-16T18:16:18Z

KAFKA-4173: SchemaProjector should successfully project missing Struct 
field when target field is optional






[jira] [Created] (KAFKA-4182) Move the change logger out of RocksDB stores

2016-09-16 Thread Damian Guy (JIRA)
Damian Guy created KAFKA-4182:
-

 Summary: Move the change logger out of RocksDB stores
 Key: KAFKA-4182
 URL: https://issues.apache.org/jira/browse/KAFKA-4182
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 0.10.1.0
Reporter: Damian Guy
Assignee: Guozhang Wang
 Fix For: 0.10.1.1


We currently have the change logger embedded within the RocksDB store 
implementations; however, this results in multiple implementations of the same 
thing and poor separation of concerns. We should create a new LoggedStore that 
wraps the outermost store when logging is enabled, for example:

loggedStore -> cachingStore -> meteredStore -> innerStore
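
A minimal sketch of the wrapping idea above, using the decorator pattern: logging lives in one wrapper rather than inside every store implementation. The Store interface and class names here are illustrative, not the actual Kafka Streams API.

```java
// Hypothetical decorator-style layering: LoggedStore wraps an inner store and
// records every write, mirroring loggedStore -> ... -> innerStore.
import java.util.HashMap;
import java.util.Map;

interface Store {
    void put(String key, String value);
    String get(String key);
}

// Innermost store: a plain in-memory map standing in for RocksDB.
class InMemoryStore implements Store {
    private final Map<String, String> map = new HashMap<>();
    public void put(String key, String value) { map.put(key, value); }
    public String get(String key) { return map.get(key); }
}

// Decorator that captures each write for the changelog before delegating,
// so individual store implementations need no logging code of their own.
class LoggedStore implements Store {
    private final Store inner;
    final Map<String, String> changeLog = new HashMap<>();
    LoggedStore(Store inner) { this.inner = inner; }
    public void put(String key, String value) {
        changeLog.put(key, value); // record the change, then delegate
        inner.put(key, value);
    }
    public String get(String key) { return inner.get(key); }
}

public class LoggedStoreSketch {
    public static void main(String[] args) {
        LoggedStore store = new LoggedStore(new InMemoryStore());
        store.put("a", "1");
        System.out.println(store.get("a"));           // prints "1"
        System.out.println(store.changeLog.get("a")); // prints "1": write was logged
    }
}
```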





[jira] [Updated] (KAFKA-4161) Decouple flush and offset commits

2016-09-16 Thread Shikhar Bhushan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shikhar Bhushan updated KAFKA-4161:
---
Summary: Decouple flush and offset commits  (was: Allow connectors to 
request flush via the context)

> Decouple flush and offset commits
> -
>
> Key: KAFKA-4161
> URL: https://issues.apache.org/jira/browse/KAFKA-4161
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Reporter: Shikhar Bhushan
>Assignee: Ewen Cheslack-Postava
>  Labels: needs-kip
>
> It is desirable to have, in addition to the time-based flush interval, volume 
> or size-based commits. E.g. a sink connector which is buffering in terms of 
> number of records may want to request a flush when the buffer is full, or 
> when sufficient amount of data has been buffered in a file.
> Having a method like say {{requestFlush()}} on the {{SinkTaskContext}} would 
> allow for connectors to have flexible policies around flushes. This would be 
> in addition to the time interval based flushes that are controlled with 
> {{offset.flush.interval.ms}}, for which the clock should be reset when any 
> kind of flush happens.
> We should probably also support requesting flushes via the 
> {{SourceTaskContext}} for consistency though a use-case doesn't come to mind 
> off the bat.
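
A toy sketch of the proposal quoted above: a buffering sink task that requests a flush when its buffer fills, in addition to time-based flushes. The {{requestFlush()}} method is a proposal here, not the current Kafka Connect API, and the class names are hypothetical.

```java
// Hypothetical volume-based flush trigger for a buffering sink connector.
// SinkTaskContext.requestFlush() is the proposed (not existing) method.
import java.util.ArrayList;
import java.util.List;

interface SinkTaskContext {
    void requestFlush(); // proposed: ask the framework to flush and commit offsets now
}

class BufferingSinkTask {
    private final SinkTaskContext context;
    private final int maxBuffered;
    final List<String> buffer = new ArrayList<>();

    BufferingSinkTask(SinkTaskContext context, int maxBuffered) {
        this.context = context;
        this.maxBuffered = maxBuffered;
    }

    void put(String record) {
        buffer.add(record);
        if (buffer.size() >= maxBuffered)
            context.requestFlush(); // volume-based flush, not just offset.flush.interval.ms
    }
}

public class RequestFlushSketch {
    public static void main(String[] args) {
        final boolean[] flushRequested = {false};
        BufferingSinkTask task =
            new BufferingSinkTask(() -> flushRequested[0] = true, 2);
        task.put("r1");
        System.out.println(flushRequested[0]); // prints "false": buffer not full yet
        task.put("r2");
        System.out.println(flushRequested[0]); // prints "true": buffer full, flush requested
    }
}
```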





[jira] [Commented] (KAFKA-4161) Allow connectors to request flush via the context

2016-09-16 Thread Shikhar Bhushan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15496820#comment-15496820
 ] 

Shikhar Bhushan commented on KAFKA-4161:


We could also implement KAFKA-3462 here by having the semantics that connectors 
that want to disable offset tracking by Connect can return an empty map from 
{{flushedOffsets()}}. Maybe {{flushedOffsets()}} isn't the best name - we really 
want a name implying {{commitableOffsets()}}.

> Allow connectors to request flush via the context
> -
>
> Key: KAFKA-4161
> URL: https://issues.apache.org/jira/browse/KAFKA-4161
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Reporter: Shikhar Bhushan
>Assignee: Ewen Cheslack-Postava
>  Labels: needs-kip
>
> It is desirable to have, in addition to the time-based flush interval, volume 
> or size-based commits. E.g. a sink connector which is buffering in terms of 
> number of records may want to request a flush when the buffer is full, or 
> when sufficient amount of data has been buffered in a file.
> Having a method like say {{requestFlush()}} on the {{SinkTaskContext}} would 
> allow for connectors to have flexible policies around flushes. This would be 
> in addition to the time interval based flushes that are controlled with 
> {{offset.flush.interval.ms}}, for which the clock should be reset when any 
> kind of flush happens.
> We should probably also support requesting flushes via the 
> {{SourceTaskContext}} for consistency though a use-case doesn't come to mind 
> off the bat.





[jira] [Commented] (KAFKA-3776) Unify store and downstream caching in streams

2016-09-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15496814#comment-15496814
 ] 

ASF GitHub Bot commented on KAFKA-3776:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1752


> Unify store and downstream caching in streams
> -
>
> Key: KAFKA-3776
> URL: https://issues.apache.org/jira/browse/KAFKA-3776
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
> Fix For: 0.10.1.0
>
>
> This is an umbrella story for capturing changes to processor caching in 
> Streams as first described in KIP-63. 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-63%3A+Unify+store+and+downstream+caching+in+streams





[GitHub] kafka pull request #1752: KAFKA-3776: Unify store and downstream caching in ...

2016-09-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1752




[jira] [Resolved] (KAFKA-3776) Unify store and downstream caching in streams

2016-09-16 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3776.
--
Resolution: Fixed

Issue resolved by pull request 1752
[https://github.com/apache/kafka/pull/1752]

> Unify store and downstream caching in streams
> -
>
> Key: KAFKA-3776
> URL: https://issues.apache.org/jira/browse/KAFKA-3776
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
> Fix For: 0.10.1.0
>
>
> This is an umbrella story for capturing changes to processor caching in 
> Streams as first described in KIP-63. 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-63%3A+Unify+store+and+downstream+caching+in+streams





[jira] [Commented] (KAFKA-4133) Provide a configuration to control consumer max in-flight fetches

2016-09-16 Thread Mickael Maison (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15496789#comment-15496789
 ] 

Mickael Maison commented on KAFKA-4133:
---

[~hachikuji] I've created KIP-80: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-80%3A+Provide+a+configuration+to+control+consumer+max+in-flight+fetches

Can you have a quick look before I start a discussion thread? It feels a bit 
short, but at the same time it's a pretty straightforward change.
Thanks

> Provide a configuration to control consumer max in-flight fetches
> -
>
> Key: KAFKA-4133
> URL: https://issues.apache.org/jira/browse/KAFKA-4133
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Mickael Maison
>
> With KIP-74, we now have a good way to limit the size of fetch responses, but 
> it may still be difficult for users to control overall memory since the 
> consumer will send fetches in parallel to all the brokers which own 
> partitions that it is subscribed to. To give users finer control, it might 
> make sense to add a `max.in.flight.fetches` setting to limit the total number 
> of concurrent fetches at any time. This would require a KIP since it's a new 
> configuration.
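
One way to sketch the proposed cap on concurrent fetches is with a counting semaphore: acquire a permit before sending a fetch, release it when the response arrives. This is an assumption about the mechanism, not the consumer's actual implementation, and the class name is hypothetical.

```java
// Hypothetical fetch limiter in the spirit of the proposed
// max.in.flight.fetches setting; not the real consumer internals.
import java.util.concurrent.Semaphore;

class FetchLimiter {
    private final Semaphore inFlight;

    FetchLimiter(int maxInFlightFetches) {
        this.inFlight = new Semaphore(maxInFlightFetches);
    }

    // Reserve a slot before sending a fetch to a broker; when the cap is
    // reached, the caller skips that broker for this round.
    boolean trySendFetch(String broker) {
        return inFlight.tryAcquire();
    }

    // Release the slot once the broker's fetch response arrives.
    void onFetchResponse() {
        inFlight.release();
    }
}

public class FetchLimiterSketch {
    public static void main(String[] args) {
        FetchLimiter limiter = new FetchLimiter(2);
        System.out.println(limiter.trySendFetch("broker-0")); // prints "true"
        System.out.println(limiter.trySendFetch("broker-1")); // prints "true"
        System.out.println(limiter.trySendFetch("broker-2")); // prints "false": cap of 2 reached
        limiter.onFetchResponse();                            // one response came back
        System.out.println(limiter.trySendFetch("broker-2")); // prints "true" again
    }
}
```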





[jira] [Commented] (KAFKA-4181) Processors punctuate() methods called multiple times

2016-09-16 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15496608#comment-15496608
 ] 

Matthias J. Sax commented on KAFKA-4181:


See also http://stackoverflow.com/a/39258140/4953079 and 
https://github.com/apache/kafka/pull/1689

> Processors punctuate() methods called multiple times
> --
>
> Key: KAFKA-4181
> URL: https://issues.apache.org/jira/browse/KAFKA-4181
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.0, 0.10.0.1
>Reporter: James Clinton
>Assignee: Guozhang Wang
>
> I'm seeing odd behaviour whereby a Processor's punctuate(..) method is not 
> invoked based on the schedule, but builds up a backlog of pending punctuate() 
> calls that get flushed through when the Processor's process() method is next 
> called.
> So for example:
> 1. Schedule the punctuate() methods to be called every second.  
> 2. A minute passes
> 3. Processor.process() receives a new message
> 4. Processor.punctuate(..) is invoked 60 times





[jira] [Commented] (KAFKA-4181) Processors punctuate() methods called multiple times

2016-09-16 Thread Eno Thereska (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15496575#comment-15496575
 ] 

Eno Thereska commented on KAFKA-4181:
-

Thanks for reporting this. Currently, this is indeed how Kafka Streams 
operates, since the system is "clocked" based on the arrival of events. There 
is discussion around clocking by system wallclock time, but this is not 
implemented yet. 

> Processors punctuate() methods call multiple times
> --
>
> Key: KAFKA-4181
> URL: https://issues.apache.org/jira/browse/KAFKA-4181
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.0, 0.10.0.1
>Reporter: James Clinton
>Assignee: Guozhang Wang
>
> I'm seeing odd behaviour whereby a Processor's punctuate(..) method is not 
> invoked based on the schedule, but builds up a backlog of pending punctuate() 
> calls that get flushed through when the Processors process() method is next 
> called.
> So for example:
> 1. Schedule the punctuate() methods to be called every second.  
> 2. A minute passes
> 3. Processor.process() receives a new message
> 4. Processor.punctuate(..) is invoked 60 times





[GitHub] kafka pull request #1864: Kafka-4177: Remove ThrottledReplicationRateLimit f...

2016-09-16 Thread benstopford
GitHub user benstopford opened a pull request:

https://github.com/apache/kafka/pull/1864

Kafka-4177: Remove ThrottledReplicationRateLimit from Server Config 

This small PR pulls ThrottledReplicationRateLimit out of KafkaConfig and 
puts it in a class that defines Dynamic Configs. Client configs are also placed 
in this class, and validation is added. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/benstopford/kafka KAFKA-4177

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1864.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1864


commit 63fbfe2f464b92aba7513c90086feed5d3afca33
Author: Ben Stopford 
Date:   2016-09-16T14:26:01Z

KAFKA-4177: Minor refactor to pull the dynamic broker property 
replication.quota.throttled.rate out of KafkaConfig and into its own class 
collocated with the ConfigCommand/AdminUtils

commit 72ddadd42088127d2d4bdfb812079145e534bb1e
Author: Ben Stopford 
Date:   2016-09-16T14:43:26Z

KAFKA-4177:  Refactored to include dynamic client configs also

commit d51be065112ed398b82c966a38ad8f2ef8fc0f04
Author: Ben Stopford 
Date:   2016-09-16T15:08:16Z

KAFKA-4177:  Move to kafka.server package

commit 495c17c53a0c64e8ea539dad52508a20ca7b4c5c
Author: Ben Stopford 
Date:   2016-09-16T15:13:14Z

KAFKA-4177:  Moved tests to its own class

commit 3a609d9f52c0be49c4b37055f3e73f6002b2e31d
Author: Ben Stopford 
Date:   2016-09-16T15:16:02Z

KAFKA-4177:  Whitespace only




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2985) Consumer group stuck in rebalancing state

2016-09-16 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15496554#comment-15496554
 ] 

Vahid Hashemian commented on KAFKA-2985:


Perhaps you are running into [this 
issue|https://issues.apache.org/jira/browse/KAFKA-3859]?

> Consumer group stuck in rebalancing state
> -
>
> Key: KAFKA-2985
> URL: https://issues.apache.org/jira/browse/KAFKA-2985
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
> Environment: Kafka 0.9.0.0.
> Kafka Java consumer 0.9.0.0
> 2 Java producers.
> 3 Java consumers using the new consumer API.
> 2 kafka brokers.
>Reporter: Jens Rantil
>Assignee: Jason Gustafson
>
> We've been doing some load testing on Kafka. _After_ the load test, we have 
> now twice seen Kafka become stuck in consumer group rebalancing. This is 
> after all our consumers are done consuming and are essentially polling 
> periodically without getting any records.
> The brokers list the consumer group (named "default"), but I can't query the 
> offsets:
> {noformat}
> jrantil@queue-0:/srv/kafka/kafka$ ./bin/kafka-consumer-groups.sh 
> --new-consumer --bootstrap-server localhost:9092 --list
> default
> jrantil@queue-0:/srv/kafka/kafka$ ./bin/kafka-consumer-groups.sh 
> --new-consumer --bootstrap-server localhost:9092 --describe --group 
> default|sort
> Consumer group `default` does not exist or is rebalancing.
> {noformat}
> Retrying to query the offsets for 15 minutes or so still said it was 
> rebalancing. After restarting our first broker, the group immediately started 
> rebalancing. That broker was logging this before restart:
> {noformat}
> [2015-12-12 13:09:48,517] INFO [Group Metadata Manager on Broker 0]: Removed 
> 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
> [2015-12-12 13:10:16,139] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 16 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:10:16,141] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 16 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:10:16,575] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 16 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:11:15,141] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 17 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:11:15,143] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 17 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:11:15,314] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 17 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:12:14,144] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 18 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:12:14,145] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 18 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:12:14,340] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 18 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:13:13,146] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 19 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:13:13,148] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 19 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:13:13,238] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 19 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:14:12,148] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 20 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:14:12,149] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 20 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:14:12,360] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 20 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:15:11,150] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 21 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:15:11,152] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 21 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:15:11,217] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 21 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:16:10,152] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 22 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:16:10,154] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group 

[jira] [Assigned] (KAFKA-4177) Replication Throttling: Remove ThrottledReplicationRateLimit from Server Config

2016-09-16 Thread Ben Stopford (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Stopford reassigned KAFKA-4177:
---

Assignee: Ben Stopford

> Replication Throttling: Remove ThrottledReplicationRateLimit from Server 
> Config
> ---
>
> Key: KAFKA-4177
> URL: https://issues.apache.org/jira/browse/KAFKA-4177
> Project: Kafka
>  Issue Type: Improvement
>  Components: replication
>Affects Versions: 0.10.1.0
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>
> Replication throttling included the concept of a dynamic broker config 
> (currently for just one property: ThrottledReplicationRateLimit). 
> On reflection it seems best to not include this in KafkaConfig, but rather 
> validate only in AdminUtils. Remove the property 
> ThrottledReplicationRateLimit and related config from KafkaConfig and add 
> validation in AdminUtils where the value can be applied/changed. 





[jira] [Work started] (KAFKA-4177) Replication Throttling: Remove ThrottledReplicationRateLimit from Server Config

2016-09-16 Thread Ben Stopford (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4177 started by Ben Stopford.
---
> Replication Throttling: Remove ThrottledReplicationRateLimit from Server 
> Config
> ---
>
> Key: KAFKA-4177
> URL: https://issues.apache.org/jira/browse/KAFKA-4177
> Project: Kafka
>  Issue Type: Improvement
>  Components: replication
>Affects Versions: 0.10.1.0
>Reporter: Ben Stopford
>Assignee: Ben Stopford
>
> Replication throttling included the concept of a dynamic broker config 
> (currently for just one property: ThrottledReplicationRateLimit). 
> On reflection it seems best to not include this in KafkaConfig, but rather 
> validate only in AdminUtils. Remove the property 
> ThrottledReplicationRateLimit and related config from KafkaConfig and add 
> validation in AdminUtils where the value can be applied/changed. 





[jira] [Updated] (KAFKA-4181) Processors punctuate() methods call multiple times

2016-09-16 Thread James Clinton (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clinton updated KAFKA-4181:
-
Description: 
I'm seeing odd behaviour whereby a Processor's punctuate(..) method is not 
invoked based on the schedule, but builds up a backlog of pending punctuate() 
calls that get flushed through when the Processors process() method is next 
called.

So for example:
1. Schedule the punctuate() methods to be called every second.  
2. A minute passes
3. Processor.process() receives a new message
4. Processor.punctuate(..) is invoked 60 times




  was:
I'm seeing odd behaviour whereby a Processor's punctuate(..) method is not 
invoked based on the schedule, but builds up a backlog of pending punctuate() 
calls that get flushed through when the Processors process() method is next 
called.

So for example:
1. Schedule the punctuate() methods to be called every second.  
2. A minute passes
3. Processor.process() receives a new message
4. Processor.punctuate(..) is invoked 60 times

Update
My properties had the following set, which seemed to cause the issue. Without 
the following line, the punctuate() method is called only once given the steps 
above.

props.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG, 
WallclockTimestampExtractor.class);





> Processors punctuate() methods call multiple times
> --
>
> Key: KAFKA-4181
> URL: https://issues.apache.org/jira/browse/KAFKA-4181
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.0, 0.10.0.1
>Reporter: James Clinton
>Assignee: Guozhang Wang
>
> I'm seeing odd behaviour whereby a Processor's punctuate(..) method is not 
> invoked based on the schedule, but builds up a backlog of pending punctuate() 
> calls that get flushed through when the Processors process() method is next 
> called.
> So for example:
> 1. Schedule the punctuate() methods to be called every second.  
> 2. A minute passes
> 3. Processor.process() receives a new message
> 4. Processor.punctuate(..) is invoked 60 times





[jira] [Updated] (KAFKA-4181) Processors punctuate() methods call multiple times

2016-09-16 Thread James Clinton (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Clinton updated KAFKA-4181:
-
Description: 
I'm seeing odd behaviour whereby a Processor's punctuate(..) method is not 
invoked based on the schedule, but builds up a backlog of pending punctuate() 
calls that get flushed through when the Processors process() method is next 
called.

So for example:
1. Schedule the punctuate() methods to be called every second.  
2. A minute passes
3. Processor.process() receives a new message
4. Processor.punctuate(..) is invoked 60 times

Update
My properties had the following set, which seemed to cause the issue. Without 
the following line, the punctuate() method is called only once given the steps 
above.

props.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG, 
WallclockTimestampExtractor.class);




  was:
I'm seeing odd behaviour whereby a Processor's punctuate(..) method is not 
invoked based on the schedule, but builds up a backlog of pending punctuate() 
calls that get flushed through when the Processors process() method is next 
called.

So for example:
1. Schedule the punctuate() methods to be called every second.  
2. A minute passes
3. Processor.process() receives a new message
4. Processor.punctuate(..) is invoked 60 times





> Processors punctuate() methods call multiple times
> --
>
> Key: KAFKA-4181
> URL: https://issues.apache.org/jira/browse/KAFKA-4181
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.0, 0.10.0.1
>Reporter: James Clinton
>Assignee: Guozhang Wang
>
> I'm seeing odd behaviour whereby a Processor's punctuate(..) method is not 
> invoked based on the schedule, but builds up a backlog of pending punctuate() 
> calls that get flushed through when the Processors process() method is next 
> called.
> So for example:
> 1. Schedule the punctuate() methods to be called every second.  
> 2. A minute passes
> 3. Processor.process() receives a new message
> 4. Processor.punctuate(..) is invoked 60 times
> Update
> My properties had the following set, which seemed to cause the issue. Without 
> the following line, the punctuate() method is called only once given the 
> steps above.
> props.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG, 
> WallclockTimestampExtractor.class);





[jira] [Created] (KAFKA-4181) Processors punctuate() methods call multiple times

2016-09-16 Thread James Clinton (JIRA)
James Clinton created KAFKA-4181:


 Summary: Processors punctuate() methods call multiple times
 Key: KAFKA-4181
 URL: https://issues.apache.org/jira/browse/KAFKA-4181
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.10.0.1, 0.10.0.0
Reporter: James Clinton
Assignee: Guozhang Wang


I'm seeing odd behaviour whereby a Processor's punctuate(..) method is not 
invoked based on the schedule, but builds up a backlog of pending punctuate() 
calls that get flushed through when the Processors process() method is next 
called.

So for example:
1. Schedule the punctuate() methods to be called every second.  
2. A minute passes
3. Processor.process() receives a new message
4. Processor.punctuate(..) is invoked 60 times








[jira] [Commented] (KAFKA-4180) Shared authentification with multiple actives Kafka producers/consumers

2016-09-16 Thread Mickael Maison (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15496095#comment-15496095
 ] 

Mickael Maison commented on KAFKA-4180:
---

Hi Guillaume,

You can achieve that with a custom login module. 
Create a class that extends PlainLoginModule and override initialize() with 
your credentials logic. For example, you can set your client properties in a 
ThreadLocal, retrieve the client.id in initialize(), and load different 
credentials based on it. You can set the username and password by calling 
getPublicCredentials().add() and getPrivateCredentials().add() respectively on 
the subject. Then update your JAAS file to point to your class instead of the 
default PlainLoginModule class. 
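A minimal sketch of the idea using the JDK's LoginModule SPI. The class name, the `clientCreds` ThreadLocal, and the credential lookup are illustrative assumptions; for Kafka you would extend PlainLoginModule as described above:

```java
import java.util.Map;
import javax.security.auth.Subject;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.spi.LoginModule;

// Picks credentials per client based on a ThreadLocal set just before the
// producer/consumer is constructed on the same thread.
public class PerClientLoginModule implements LoginModule {
    static final ThreadLocal<String[]> clientCreds = new ThreadLocal<>();

    @Override
    public void initialize(Subject subject, CallbackHandler handler,
                           Map<String, ?> sharedState, Map<String, ?> options) {
        String[] creds = clientCreds.get();   // e.g. {"guillaume", "secret"}
        if (creds != null) {
            subject.getPublicCredentials().add(creds[0]);   // username
            subject.getPrivateCredentials().add(creds[1]);  // password
        }
    }

    @Override public boolean login()  { return true; }
    @Override public boolean commit() { return true; }
    @Override public boolean abort()  { return false; }
    @Override public boolean logout() { return true; }

    public static void main(String[] args) {
        clientCreds.set(new String[]{"guillaume", "secret"});
        PerClientLoginModule m = new PerClientLoginModule();
        Subject s = new Subject();
        m.initialize(s, null, Map.of(), Map.of());
        System.out.println(s.getPublicCredentials()); // username now on subject
    }
}
```

The JAAS file would then reference this class in place of the default PlainLoginModule.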

> Shared authentification with multiple actives Kafka producers/consumers
> ---
>
> Key: KAFKA-4180
> URL: https://issues.apache.org/jira/browse/KAFKA-4180
> Project: Kafka
>  Issue Type: Bug
>  Components: producer , security
>Affects Versions: 0.10.0.1
>Reporter: Guillaume Grossetie
>  Labels: authentication, jaas, loginmodule, plain, producer, 
> sasl, user
>
> I'm using Kafka 0.10.0.1 with an SASL authentication on the client:
> {code:title=kafka_client_jaas.conf|borderStyle=solid}
> KafkaClient {
> org.apache.kafka.common.security.plain.PlainLoginModule required
> username="guillaume"
> password="secret";
> };
> {code}
> When using multiple Kafka producers the authentication is shared [1]. In 
> other words, it's not currently possible to have multiple Kafka producers 
> with different credentials in a single JVM process.
> Am I missing something? How can I have multiple active Kafka producers with 
> different credentials?
> My use case is that I have an application that sends messages to multiple 
> clusters (one cluster for logs, one for metrics, one for business data).
> [1] 
> https://github.com/apache/kafka/blob/69ebf6f7be2fc0e471ebd5b7a166468017ff2651/clients/src/main/java/org/apache/kafka/common/security/authenticator/LoginManager.java#L35





Build failed in Jenkins: kafka-trunk-jdk8 #886

2016-09-16 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-1464; Add a throttling option to the Kafka replication

--
[...truncated 11024 lines...]
org.apache.kafka.common.record.RecordTest > testFields[172] PASSED

[... repetitive parameterized output elided: RecordTest testChecksum, 
testEquality, and testFields for parameters [173]-[184], all STARTED and 
PASSED ...]

[jira] [Created] (KAFKA-4180) Shared authentification with multiple actives Kafka producers/consumers

2016-09-16 Thread Guillaume Grossetie (JIRA)
Guillaume Grossetie created KAFKA-4180:
--

 Summary: Shared authentification with multiple actives Kafka 
producers/consumers
 Key: KAFKA-4180
 URL: https://issues.apache.org/jira/browse/KAFKA-4180
 Project: Kafka
  Issue Type: Bug
  Components: producer , security
Affects Versions: 0.10.0.1
Reporter: Guillaume Grossetie


I'm using Kafka 0.10.0.1 with an SASL authentication on the client:

{code:title=kafka_client_jaas.conf|borderStyle=solid}
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="guillaume"
password="secret";
};
{code}

When using multiple Kafka producers the authentication is shared [1]. In 
other words, it's not currently possible to have multiple Kafka producers with 
different credentials in a single JVM process.

Am I missing something? How can I have multiple active Kafka producers with 
different credentials?

My use case is that I have an application that sends messages to multiple 
clusters (one cluster for logs, one for metrics, one for business data).

[1] 
https://github.com/apache/kafka/blob/69ebf6f7be2fc0e471ebd5b7a166468017ff2651/clients/src/main/java/org/apache/kafka/common/security/authenticator/LoginManager.java#L35





[jira] [Updated] (KAFKA-4179) Replication Throttling: Add Usability Metrics PartitionBytesInRate & SumReplicaLag

2016-09-16 Thread Ben Stopford (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ben Stopford updated KAFKA-4179:

Affects Version/s: 0.10.1.0
  Description: 
Add two new metrics to Kafka 

PartitionBytesInRate: Equivalent to BytesInPerSec, but at a partition level 
(i.e. total traffic - throttled and not throttled). This is required for 
estimating how long a rebalance will take to complete. B/s. See usability 
section below.

SumReplicaLag: This is the sum of all replica lag values on the broker. This 
metric is used to monitor progress of a rebalance and is particularly useful 
for determining if the rebalance has become stuck due to an overly harsh 
throttle value (as the metric will stop decreasing).

As covered in KIP-73 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-73+Replication+Quotas

These make it possible for an administrator to calculate how long a rebalance 
will take. 
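As a back-of-envelope illustration of that calculation (the method name and formula below are an assumption for illustration, not part of the KIP): the remaining lag shrinks at roughly the throttle rate minus the inbound write rate.

```java
// Estimate how long a throttled rebalance needs to catch up.
public class RebalanceEstimate {
    // sumReplicaLagBytes:     current SumReplicaLag reading
    // throttleBytesPerSec:    configured replication throttle
    // partitionBytesInPerSec: PartitionBytesInRate for the moving partitions
    public static double secondsToCatchUp(double sumReplicaLagBytes,
                                          double throttleBytesPerSec,
                                          double partitionBytesInPerSec) {
        double net = throttleBytesPerSec - partitionBytesInPerSec;
        // Throttle at or below the write rate: lag never drains (stuck).
        if (net <= 0) return Double.POSITIVE_INFINITY;
        return sumReplicaLagBytes / net;
    }

    public static void main(String[] args) {
        // 10 GB of lag, 50 MB/s throttle, 10 MB/s of new writes -> 250 s
        System.out.println(secondsToCatchUp(10e9, 50e6, 10e6));
    }
}
```

This is also why a stagnating SumReplicaLag signals an overly harsh throttle: the net drain rate has gone to zero or below.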

> Replication Throttling: Add Usability Metrics PartitionBytesInRate & 
> SumReplicaLag
> --
>
> Key: KAFKA-4179
> URL: https://issues.apache.org/jira/browse/KAFKA-4179
> Project: Kafka
>  Issue Type: Improvement
>  Components: replication
>Affects Versions: 0.10.1.0
>Reporter: Ben Stopford
>
> Add two new metrics to Kafka 
> PartitionBytesInRate: Equivalent to BytesInPerSec, but at a partition level 
> (i.e. total traffic - throttled and not throttled). This is required for 
> estimating how long a rebalance will take to complete. B/s. See usability 
> section below.
> SumReplicaLag: This is the sum of all replica lag values on the broker. This 
> metric is used to monitor progress of a rebalance and is particularly useful 
> for determining if the rebalance has become stuck due to an overly harsh 
> throttle value (as the metric will stop decreasing).
> As covered in KIP-73 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-73+Replication+Quotas
> These make it possible for an administrator to calculate how long a rebalance 
> will take. 





[jira] [Created] (KAFKA-4179) Replication Throttling: Add Usability Metrics PartitionBytesInRate & SumReplicaLag

2016-09-16 Thread Ben Stopford (JIRA)
Ben Stopford created KAFKA-4179:
---

 Summary: Replication Throttling: Add Usability Metrics 
PartitionBytesInRate & SumReplicaLag
 Key: KAFKA-4179
 URL: https://issues.apache.org/jira/browse/KAFKA-4179
 Project: Kafka
  Issue Type: Improvement
  Components: replication
Reporter: Ben Stopford








[jira] [Updated] (KAFKA-4176) Node stopped receiving heartbeat responses once another node started within the same group

2016-09-16 Thread Marek Svitok (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marek Svitok updated KAFKA-4176:

Description: 
I have 3 nodes working in the same group. I started them one after the other. 
As I can see from the log, a node, once started, receives heartbeat responses 
for the group it is part of. However, once I start another node, the former 
one stops receiving these responses while the new one keeps receiving them. 
Moreover, it stops consuming any messages from its previously assigned 
partitions:

Node0
03:14:36.224 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received 
successful heartbeat response for group test_streams_id
03:14:39.223 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received 
successful heartbeat response for group test_streams_id
03:14:39.224 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received 
successful heartbeat response for group test_streams_id
03:14:39.429 [main-SendThread(mujsignal-03:2182)] DEBUG 
org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 
0x256bc1ce8c30170 after 0ms
03:14:39.462 [main-SendThread(mujsignal-03:2182)] DEBUG 
org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 
0x256bc1ce8c30171 after 0ms
03:14:42.224 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received 
successful heartbeat response for group test_streams_id
03:14:42.224 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received 
successful heartbeat response for group test_streams_id
03:14:45.224 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received 
successful heartbeat response for group test_streams_id
03:14:45.224 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received 
successful heartbeat response for group test_streams_id
03:14:48.224 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - Attempt 
to heart beat failed for group test_streams_id since it is rebalancing.
03:14:48.224 [StreamThread-2] INFO  o.a.k.c.c.i.ConsumerCoordinator - Revoking 
previously assigned partitions [StreamTopic-2] for group test_streams_id
03:14:48.224 [StreamThread-2] INFO  o.a.k.s.p.internals.StreamThread - Removing 
a task 0_2

Node1
[jira] [Created] (KAFKA-4178) Replication Throttling: Consolidate Rate Classes

2016-09-16 Thread Ben Stopford (JIRA)
Ben Stopford created KAFKA-4178:
---

 Summary: Replication Throttling: Consolidate Rate Classes
 Key: KAFKA-4178
 URL: https://issues.apache.org/jira/browse/KAFKA-4178
 Project: Kafka
  Issue Type: Improvement
  Components: replication
Affects Versions: 0.10.1.0
Reporter: Ben Stopford


Replication throttling uses a different implementation of Rate than client 
throttling does (Rate vs. SimpleRate). These should be consolidated so that both 
use the same approach. 
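For illustration only, a consolidated rate measurement could look roughly like 
the following trailing-window counter. This is a hypothetical sketch, not the 
actual Kafka Rate or SimpleRate code:

```java
import java.util.ArrayDeque;

// Hypothetical sketch of a windowed rate measurement, illustrating the kind
// of logic the two Rate implementations might share. NOT the Kafka code.
class SimpleWindowedRate {
    private final long windowMs;
    // Each sample is [timestampMs, amount]; oldest samples at the head.
    private final ArrayDeque<long[]> samples = new ArrayDeque<>();
    private long total = 0;

    SimpleWindowedRate(long windowMs) {
        this.windowMs = windowMs;
    }

    /** Record 'amount' bytes (or records) observed at time 'nowMs'. */
    void record(long amount, long nowMs) {
        samples.addLast(new long[]{nowMs, amount});
        total += amount;
        prune(nowMs);
    }

    /** Rate per second over the trailing window ending at 'nowMs'. */
    double measure(long nowMs) {
        prune(nowMs);
        return total * 1000.0 / windowMs;
    }

    // Drop samples that have fallen out of the trailing window.
    private void prune(long nowMs) {
        while (!samples.isEmpty() && samples.peekFirst()[0] <= nowMs - windowMs) {
            total -= samples.pollFirst()[1];
        }
    }
}
```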



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4177) Replication Throttling: Remove ThrottledReplicationRateLimit from Server Config

2016-09-16 Thread Ben Stopford (JIRA)
Ben Stopford created KAFKA-4177:
---

 Summary: Replication Throttling: Remove 
ThrottledReplicationRateLimit from Server Config
 Key: KAFKA-4177
 URL: https://issues.apache.org/jira/browse/KAFKA-4177
 Project: Kafka
  Issue Type: Improvement
  Components: replication
Affects Versions: 0.10.1.0
Reporter: Ben Stopford


Replication throttling included the concept of a dynamic broker config 
(currently for just one property: ThrottledReplicationRateLimit). 

On reflection it seems best not to include this in KafkaConfig, but rather to 
validate it only in AdminUtils. Remove the ThrottledReplicationRateLimit property 
and related config from KafkaConfig, and add validation in AdminUtils, where the 
value can be applied/changed. 
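As an illustration of the proposed direction, validating the dynamic property at 
the point where it is applied could look roughly like this. Class, method, and 
property names here are hypothetical, not the actual AdminUtils code:

```java
import java.util.Map;

// Hypothetical validation for a dynamic throttle property, of the kind that
// would live in AdminUtils rather than KafkaConfig. Names are illustrative.
class ThrottleConfigValidator {
    static final String THROTTLE_PROP = "throttled.replication.rate.limit";

    /** Validate the dynamic config map before it is applied or changed. */
    static void validate(Map<String, String> dynamicConfig) {
        String value = dynamicConfig.get(THROTTLE_PROP);
        if (value == null)
            return; // property absent: nothing to validate
        long limit;
        try {
            limit = Long.parseLong(value);
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException(
                THROTTLE_PROP + " must be a long, got: " + value);
        }
        if (limit < 0)
            throw new IllegalArgumentException(
                THROTTLE_PROP + " must be >= 0, got: " + limit);
    }
}
```

Keeping this check out of KafkaConfig means brokers never reject startup over a 
purely dynamic property; a bad value fails fast at the admin call instead.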





[jira] [Created] (KAFKA-4176) Node stopped receiving heartbeat responses once another node started within the same group

2016-09-16 Thread Marek Svitok (JIRA)
Marek Svitok created KAFKA-4176:
---

 Summary: Node stopped receiving heartbeat responses once another 
node started within the same group
 Key: KAFKA-4176
 URL: https://issues.apache.org/jira/browse/KAFKA-4176
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.10.0.1
 Environment: Centos 7: 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 11:36:42 
UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Java: java version "1.8.0_101"
Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)

Reporter: Marek Svitok


I have 3 nodes working in the same group, started one after the other. As I can 
see from the log, a node, once started, receives heartbeat responses for the 
group it is part of. However, once I start another node, the former one stops 
receiving these responses while the new one keeps receiving them:

Node0
03:14:36.224 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received 
successful heartbeat response for group test_streams_id
03:14:39.223 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received 
successful heartbeat response for group test_streams_id
03:14:39.224 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received 
successful heartbeat response for group test_streams_id
03:14:39.429 [main-SendThread(mujsignal-03:2182)] DEBUG 
org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 
0x256bc1ce8c30170 after 0ms
03:14:39.462 [main-SendThread(mujsignal-03:2182)] DEBUG 
org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 
0x256bc1ce8c30171 after 0ms
03:14:42.224 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received 
successful heartbeat response for group test_streams_id
03:14:42.224 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received 
successful heartbeat response for group test_streams_id
03:14:45.224 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received 
successful heartbeat response for group test_streams_id
03:14:45.224 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received 
successful heartbeat response for group test_streams_id
03:14:48.224 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - Attempt 
to heart beat failed for group test_streams_id since it is rebalancing.
03:14:48.224 [StreamThread-2] INFO  o.a.k.c.c.i.ConsumerCoordinator - Revoking 
previously assigned partitions [StreamTopic-2] for group test_streams_id
03:14:48.224 [StreamThread-2] INFO  o.a.k.s.p.internals.StreamThread - Removing 
a task 0_2

Node1
03:22:18.710 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received 
successful heartbeat response for group test_streams_id
03:22:18.716 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received 
successful heartbeat response for group test_streams_id
03:22:21.709 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received 
successful heartbeat response for group test_streams_id
03:22:21.716 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received 
successful heartbeat response for group test_streams_id
03:22:24.710 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received 
successful heartbeat response for group test_streams_id
03:22:24.717 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received 
successful heartbeat response for group test_streams_id
03:22:24.872 [main-SendThread(mujsignal-03:2182)] DEBUG 
org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 
0x256bc1ce8c30172 after 0ms
03:22:24.992 [main-SendThread(mujsignal-03:2182)] DEBUG 
org.apache.zookeeper.ClientCnxn - Got ping response for sessionid: 
0x256bc1ce8c30173 after 0ms
03:22:27.710 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received 
successful heartbeat response for group test_streams_id
03:22:27.717 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received 
successful heartbeat response for group test_streams_id
03:22:30.710 [StreamThread-2] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received 
successful heartbeat response for group test_streams_id
03:22:30.716 [StreamThread-1] DEBUG o.a.k.c.c.i.AbstractCoordinator - Received 
successful heartbeat response for group test_streams_id

Configuration used:

03:14:24.520 [main] INFO  o.a.k.c.producer.ProducerConfig - ProducerConfig 
values: 
metric.reporters = []
metadata.max.age.ms = 30
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [mujsignal-03:9092, mujsignal-09:9093]
ssl.keystore.type = JKS
sasl.mechanism = GSSAPI
max.block.ms = 6
interceptor.classes = null
ssl.truststore.password = null
client.id = Test-Streams-Processor-StreamThread-2-producer
ssl.endpoint.identification.algorithm = null
request.timeout.ms = 
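The eviction-and-rebalance behavior visible in the Node0 log (heartbeat failing 
"since it is rebalancing", then partitions revoked) is governed by the 
consumer's session/heartbeat settings. A minimal sketch of the relevant 
properties follows; the values are illustrative defaults, not the reporter's 
actual configuration:

```java
import java.util.Properties;

// Illustrative consumer properties relevant to heartbeat/rebalance behavior.
// Values are examples only, not taken from the reported configuration.
class HeartbeatConfigSketch {
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "test_streams_id");
        // The heartbeat interval should be well below the session timeout
        // (commonly no more than a third of it), so that several heartbeats
        // can be missed before the coordinator evicts the member and
        // triggers a group rebalance.
        props.put("session.timeout.ms", "30000");
        props.put("heartbeat.interval.ms", "3000");
        return props;
    }
}
```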

[jira] [Commented] (KAFKA-2985) Consumer group stuck in rebalancing state

2016-09-16 Thread Rajneesh Mitharwal (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15495651#comment-15495651
 ] 

Rajneesh Mitharwal commented on KAFKA-2985:
---

Has anyone found a solution to this issue?

> Consumer group stuck in rebalancing state
> -
>
> Key: KAFKA-2985
> URL: https://issues.apache.org/jira/browse/KAFKA-2985
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
> Environment: Kafka 0.9.0.0.
> Kafka Java consumer 0.9.0.0
> 2 Java producers.
> 3 Java consumers using the new consumer API.
> 2 kafka brokers.
>Reporter: Jens Rantil
>Assignee: Jason Gustafson
>
> We've been doing some load testing on Kafka. _After_ the load test we have 
> two times now seen Kafka become stuck in consumer group rebalancing. This is 
> after all our consumers are done consuming and are essentially polling 
> periodically without getting any records.
> The brokers list the consumer group (named "default"), but I can't query the 
> offsets:
> {noformat}
> jrantil@queue-0:/srv/kafka/kafka$ ./bin/kafka-consumer-groups.sh 
> --new-consumer --bootstrap-server localhost:9092 --list
> default
> jrantil@queue-0:/srv/kafka/kafka$ ./bin/kafka-consumer-groups.sh 
> --new-consumer --bootstrap-server localhost:9092 --describe --group 
> default|sort
> Consumer group `default` does not exist or is rebalancing.
> {noformat}
> Retrying the offsets query for 15 minutes or so still reported that the group 
> was rebalancing. After restarting our first broker, the group immediately 
> started rebalancing. That broker was logging this before the restart:
> {noformat}
> [2015-12-12 13:09:48,517] INFO [Group Metadata Manager on Broker 0]: Removed 
> 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
> [2015-12-12 13:10:16,139] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 16 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:10:16,141] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 16 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:10:16,575] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 16 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:11:15,141] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 17 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:11:15,143] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 17 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:11:15,314] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 17 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:12:14,144] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 18 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:12:14,145] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 18 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:12:14,340] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 18 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:13:13,146] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 19 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:13:13,148] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 19 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:13:13,238] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 19 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:14:12,148] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 20 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:14:12,149] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 20 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:14:12,360] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 20 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:15:11,150] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 21 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:15:11,152] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 21 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:15:11,217] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 21 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:16:10,152] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 22 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:16:10,154] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 22 
>