[jira] [Commented] (KAFKA-4160) Consumer onPartitionsRevoked should not be invoked while holding the coordinator lock

2016-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15492437#comment-15492437
 ] 

ASF GitHub Bot commented on KAFKA-4160:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1855


> Consumer onPartitionsRevoked should not be invoked while holding the 
> coordinator lock
> -
>
> Key: KAFKA-4160
> URL: https://issues.apache.org/jira/browse/KAFKA-4160
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.10.1.0
>
>
> We have a single lock which is used for protecting access to shared 
> coordinator state between the foreground thread and the background heartbeat 
> thread. Currently, the onPartitionsRevoked callback is invoked while holding 
> this lock, which prevents the heartbeat thread from sending any heartbeats. 
> If the heartbeat thread is blocked for longer than the session timeout, then 
> the consumer is kicked out of the group. Typically this is not a problem 
> because onPartitionsRevoked might only commit offsets, but for Kafka Streams, 
> there is some expensive cleanup logic which can take longer than the session 
> timeout.
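The fix described above can be sketched in plain Java. This is an illustrative, simplified stand-in for the real AbstractCoordinator code (the method and callback names here only mirror the real ones): shared coordinator state is mutated under the lock, but the lock is released before the user's potentially slow rebalance callback runs, so the heartbeat thread is never blocked past the session timeout.

```java
import java.util.concurrent.locks.ReentrantLock;

// Simplified sketch, not the actual Kafka consumer code: update shared
// coordinator state while holding the lock, then release it BEFORE invoking
// the user's onPartitionsRevoked callback so heartbeats can still be sent.
public class RevocationOutsideLock {
    private final ReentrantLock coordinatorLock = new ReentrantLock();

    // Returns whether the lock was still held when the callback ran
    // (false after the fix, which is the point of the change).
    public boolean onJoinPrepare(Runnable onPartitionsRevoked) {
        coordinatorLock.lock();
        try {
            // ... mutate shared coordinator state here ...
        } finally {
            coordinatorLock.unlock();
        }
        boolean lockedDuringCallback = coordinatorLock.isHeldByCurrentThread();
        onPartitionsRevoked.run();  // slow user code runs without the lock
        return lockedDuringCallback;
    }
}
```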



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1855: KAFKA-4160: Ensure rebalance listener not called w...

2016-09-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1855


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-4160) Consumer onPartitionsRevoked should not be invoked while holding the coordinator lock

2016-09-14 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-4160.
--
Resolution: Fixed

Issue resolved by pull request 1855
[https://github.com/apache/kafka/pull/1855]

> Consumer onPartitionsRevoked should not be invoked while holding the 
> coordinator lock
> -
>
> Key: KAFKA-4160
> URL: https://issues.apache.org/jira/browse/KAFKA-4160
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.10.1.0
>
>
> We have a single lock which is used for protecting access to shared 
> coordinator state between the foreground thread and the background heartbeat 
> thread. Currently, the onPartitionsRevoked callback is invoked while holding 
> this lock, which prevents the heartbeat thread from sending any heartbeats. 
> If the heartbeat thread is blocked for longer than the session timeout, then 
> the consumer is kicked out of the group. Typically this is not a problem 
> because onPartitionsRevoked might only commit offsets, but for Kafka Streams, 
> there is some expensive cleanup logic which can take longer than the session 
> timeout.





[jira] [Updated] (KAFKA-4174) Delete a Config that does not exist in ConsumerConfig

2016-09-14 Thread shunichi ishii (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shunichi ishii updated KAFKA-4174:
--
Description: 
$ ./bin/kafka-run-class.sh org.apache.kafka.streams.examples.pipe.PipeDemo
[2016-09-15 13:10:49,789] WARN The configuration 'replication.factor' was 
supplied but isn't a known config. 
(org.apache.kafka.clients.consumer.ConsumerConfig)
[2016-09-15 13:10:49,790] WARN The configuration 
'windowstore.changelog.additional.retention.ms' was supplied but isn't a known 
config. (org.apache.kafka.clients.consumer.ConsumerConfig)

  was:
```bash
$ ./bin/kafka-run-class.sh org.apache.kafka.streams.examples.pipe.PipeDemo
[2016-09-15 13:10:49,789] WARN The configuration 'replication.factor' was 
supplied but isn't a known config. 
(org.apache.kafka.clients.consumer.ConsumerConfig)
[2016-09-15 13:10:49,790] WARN The configuration 
'windowstore.changelog.additional.retention.ms' was supplied but isn't a known 
config. (org.apache.kafka.clients.consumer.ConsumerConfig)
```


> Delete a Config that does not exist in ConsumerConfig
> -
>
> Key: KAFKA-4174
> URL: https://issues.apache.org/jira/browse/KAFKA-4174
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.0, 0.10.0.1
>Reporter: shunichi ishii
>Assignee: Guozhang Wang
>Priority: Trivial
>  Labels: easyfix
>
> $ ./bin/kafka-run-class.sh org.apache.kafka.streams.examples.pipe.PipeDemo
> [2016-09-15 13:10:49,789] WARN The configuration 'replication.factor' was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.consumer.ConsumerConfig)
> [2016-09-15 13:10:49,790] WARN The configuration 
> 'windowstore.changelog.additional.retention.ms' was supplied but isn't a 
> known config. (org.apache.kafka.clients.consumer.ConsumerConfig)
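The warning above arises because Streams forwards its whole configuration map to the consumer, which then flags keys it does not recognize. One way to avoid that, sketched below under the assumption that the set of known consumer keys is available (in the real client it comes from ConsumerConfig.configNames()), is to retain only recognized keys before constructing the consumer:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Illustrative sketch: filter out keys the consumer does not recognize
// (e.g. 'replication.factor') so they never reach ConsumerConfig and
// never trigger the "supplied but isn't a known config" WARN.
public class ConfigFilter {
    public static Map<String, Object> retainKnown(Map<String, Object> props,
                                                  Set<String> knownKeys) {
        Map<String, Object> filtered = new HashMap<>();
        for (Map.Entry<String, Object> e : props.entrySet()) {
            if (knownKeys.contains(e.getKey()))
                filtered.put(e.getKey(), e.getValue());
        }
        return filtered;
    }
}
```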





[jira] [Created] (KAFKA-4174) Delete a Config that does not exist in ConsumerConfig

2016-09-14 Thread shunichi ishii (JIRA)
shunichi ishii created KAFKA-4174:
-

 Summary: Delete a Config that does not exist in ConsumerConfig
 Key: KAFKA-4174
 URL: https://issues.apache.org/jira/browse/KAFKA-4174
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.10.0.1, 0.10.0.0
Reporter: shunichi ishii
Assignee: Guozhang Wang
Priority: Trivial


```bash
$ ./bin/kafka-run-class.sh org.apache.kafka.streams.examples.pipe.PipeDemo
[2016-09-15 13:10:49,789] WARN The configuration 'replication.factor' was 
supplied but isn't a known config. 
(org.apache.kafka.clients.consumer.ConsumerConfig)
[2016-09-15 13:10:49,790] WARN The configuration 
'windowstore.changelog.additional.retention.ms' was supplied but isn't a known 
config. (org.apache.kafka.clients.consumer.ConsumerConfig)
```





Build failed in Jenkins: kafka-trunk-jdk8 #882

2016-09-14 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4172; Ensure fetch responses contain the requested partitions

--
[...truncated 5268 lines...]
kafka.api.PlaintextProducerSendTest > testSendToPartition STARTED

kafka.api.PlaintextProducerSendTest > testSendToPartition PASSED

kafka.api.PlaintextProducerSendTest > testSendOffset STARTED

kafka.api.PlaintextProducerSendTest > testSendOffset PASSED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithCreateTime 
STARTED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithCreateTime 
PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromCallerThread 
STARTED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromCallerThread 
PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromSenderThread 
STARTED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromSenderThread 
PASSED

kafka.api.SslConsumerTest > testCoordinatorFailover STARTED

kafka.api.SslConsumerTest > testCoordinatorFailover PASSED

kafka.api.SslConsumerTest > testSimpleConsumption STARTED

kafka.api.SslConsumerTest > testSimpleConsumption PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededToReadFromNonExistentTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededToReadFromNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > testDeleteWithWildCardAuth STARTED

kafka.api.AuthorizerIntegrationTest > testDeleteWithWildCardAuth PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testListOfsetsWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testListOfsetsWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicRead STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicRead PASSED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededForWritingToNonExistentTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededForWritingToNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithOffsetLookupAndNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithOffsetLookupAndNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testAuthorization STARTED

kafka.api.AuthorizerIntegrationTest > testAuthorization PASSED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithoutDescribe 
STARTED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithoutDescribe 
PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > 

Jenkins build is back to normal : kafka-trunk-jdk7 #1539

2016-09-14 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-3590) KafkaConsumer fails with "Messages are rejected since there are fewer in-sync replicas than required." when polling

2016-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15492184#comment-15492184
 ] 

ASF GitHub Bot commented on KAFKA-3590:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1859

KAFKA-3590: Handle not-enough-replicas errors when writing to offsets topic



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3590

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1859.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1859


commit 64cd85590c9f6a2e064b15b51f62eddcb7a804f2
Author: Jason Gustafson 
Date:   2016-09-15T03:12:04Z

KAFKA-3590: Handle not-enough-replicas errors when writing to offsets topic




> KafkaConsumer fails with "Messages are rejected since there are fewer in-sync 
> replicas than required." when polling
> ---
>
> Key: KAFKA-3590
> URL: https://issues.apache.org/jira/browse/KAFKA-3590
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1
> Environment: JDK1.8 Ubuntu 14.04
>Reporter: Sergey Alaev
>Assignee: Jason Gustafson
> Fix For: 0.10.1.0
>
>
> KafkaConsumer.poll() fails with "Messages are rejected since there are fewer 
> in-sync replicas than required.". Isn't this message about minimum number of 
> ISR's when *sending* messages?
> Stacktrace:
> org.apache.kafka.common.KafkaException: Unexpected error from SyncGroup: 
> Messages are rejected since there are fewer in-sync replicas than required.
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupRequestHandler.handle(AbstractCoordinator.java:444)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupRequestHandler.handle(AbstractCoordinator.java:411)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274) 
> ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:222)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:311)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:890)
>  ~[kafka-clients-0.9.0.1.jar:na]
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853) 
> ~[kafka-clients-0.9.0.1.jar:na]
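The stack trace shows a storage-side error (not enough in-sync replicas on the internal offsets topic) leaking to the consumer as an unexpected SyncGroup failure. The idea behind the fix can be sketched as follows; the enum constants here are simplified stand-ins for Kafka's real Errors enum, not its actual values:

```java
// Illustrative sketch: errors from writing to the internal offsets topic
// (e.g. NOT_ENOUGH_REPLICAS) should surface to the client as a retriable
// coordinator-level error rather than an unexpected KafkaException.
public class OffsetsTopicErrorMapper {
    enum Error { NONE, NOT_ENOUGH_REPLICAS, NOT_ENOUGH_REPLICAS_AFTER_APPEND,
                 COORDINATOR_NOT_AVAILABLE }

    static Error mapForClient(Error storageError) {
        switch (storageError) {
            case NOT_ENOUGH_REPLICAS:
            case NOT_ENOUGH_REPLICAS_AFTER_APPEND:
                // Retriable for the client: it can rediscover the coordinator
                // and retry instead of failing the poll() call outright.
                return Error.COORDINATOR_NOT_AVAILABLE;
            default:
                return storageError;
        }
    }
}
```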





[GitHub] kafka pull request #1859: KAFKA-3590: Handle not-enough-replicas errors when...

2016-09-14 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1859

KAFKA-3590: Handle not-enough-replicas errors when writing to offsets topic



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3590

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1859.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1859


commit 64cd85590c9f6a2e064b15b51f62eddcb7a804f2
Author: Jason Gustafson 
Date:   2016-09-15T03:12:04Z

KAFKA-3590: Handle not-enough-replicas errors when writing to offsets topic






[jira] [Commented] (KAFKA-4172) Fix masked test error in KafkaConsumerTest.testSubscriptionChangesWithAutoCommitEnabled

2016-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491964#comment-15491964
 ] 

ASF GitHub Bot commented on KAFKA-4172:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1857


> Fix masked test error in 
> KafkaConsumerTest.testSubscriptionChangesWithAutoCommitEnabled
> ---
>
> Key: KAFKA-4172
> URL: https://issues.apache.org/jira/browse/KAFKA-4172
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, unit tests
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.1.0
>
>
> This test case has an incorrectly matched mock fetch response, which was 
> silently raising an NPE, which was caught in NetworkClient. In general, aside 
> from fixing the test, we are probably missing a null check in the 
> FetchResponse to verify that the partitions included in the fetch response 
> were actually requested. This is usually not a problem because the broker 
> doesn't send us invalid fetch responses, but it's probably better to be a 
> little more defensive when handling responses.
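The defensive check suggested above can be sketched with simplified types (plain strings and maps stand in for TopicPartition and the real FetchResponse structure): reject any partition in the response that was never requested, instead of hitting an NPE later.

```java
import java.util.Map;
import java.util.Set;

// Illustrative sketch of the defensive check: fail fast with a clear error
// if a fetch response contains a partition that was not requested, rather
// than letting a silent NullPointerException surface in NetworkClient.
public class FetchResponseCheck {
    public static void validate(Set<String> requestedPartitions,
                                Map<String, byte[]> responseData) {
        for (String partition : responseData.keySet()) {
            if (!requestedPartitions.contains(partition))
                throw new IllegalStateException(
                    "Fetch response contained partition " + partition
                        + " which was not requested");
        }
    }
}
```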





[GitHub] kafka pull request #1857: KAFKA-4172: Ensure fetch responses contain the req...

2016-09-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1857




[jira] [Resolved] (KAFKA-4172) Fix masked test error in KafkaConsumerTest.testSubscriptionChangesWithAutoCommitEnabled

2016-09-14 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-4172.

Resolution: Fixed

Issue resolved by pull request 1857
[https://github.com/apache/kafka/pull/1857]

> Fix masked test error in 
> KafkaConsumerTest.testSubscriptionChangesWithAutoCommitEnabled
> ---
>
> Key: KAFKA-4172
> URL: https://issues.apache.org/jira/browse/KAFKA-4172
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, unit tests
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.1.0
>
>
> This test case has an incorrectly matched mock fetch response, which was 
> silently raising an NPE, which was caught in NetworkClient. In general, aside 
> from fixing the test, we are probably missing a null check in the 
> FetchResponse to verify that the partitions included in the fetch response 
> were actually requested. This is usually not a problem because the broker 
> doesn't send us invalid fetch responses, but it's probably better to be a 
> little more defensive when handling responses.





[GitHub] kafka pull request #1858: HOTFIX: set sourceNodes to null for selectKey

2016-09-14 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/1858

HOTFIX: set sourceNodes to null for selectKey

To indicate its source topic is no longer guaranteed to be partitioned on 
key.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka 
KHotfix-set-null-sourceNodes-selectKey

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1858.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1858


commit 7609ca35e313013621f4382a4bb54660864d5054
Author: Guozhang Wang 
Date:   2016-09-15T00:36:27Z

set sourceNodes to null for selectKey to indicate its source topic is no 
longer guaranteed to be partitioned on key






[jira] [Created] (KAFKA-4173) SchemaProjector should successfully project when source schema field is missing and target schema field is optional

2016-09-14 Thread Shikhar Bhushan (JIRA)
Shikhar Bhushan created KAFKA-4173:
--

 Summary: SchemaProjector should successfully project when source 
schema field is missing and target schema field is optional
 Key: KAFKA-4173
 URL: https://issues.apache.org/jira/browse/KAFKA-4173
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Shikhar Bhushan
Assignee: Shikhar Bhushan


As reported in https://github.com/confluentinc/kafka-connect-hdfs/issues/115





[jira] [Commented] (KAFKA-4172) Fix masked test error in KafkaConsumerTest.testSubscriptionChangesWithAutoCommitEnabled

2016-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491880#comment-15491880
 ] 

ASF GitHub Bot commented on KAFKA-4172:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1857

KAFKA-4172: Ensure fetch responses contain the requested partitions



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-4172

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1857.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1857


commit 02aa611e3aba76bb73cee15b2706f7cf24cb6279
Author: Jason Gustafson 
Date:   2016-09-15T00:16:56Z

KAFKA-4172: Ensure fetch responses contain the requested partitions




> Fix masked test error in 
> KafkaConsumerTest.testSubscriptionChangesWithAutoCommitEnabled
> ---
>
> Key: KAFKA-4172
> URL: https://issues.apache.org/jira/browse/KAFKA-4172
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, unit tests
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.1.0
>
>
> This test case has an incorrectly matched mock fetch response, which was 
> silently raising an NPE, which was caught in NetworkClient. In general, aside 
> from fixing the test, we are probably missing a null check in the 
> FetchResponse to verify that the partitions included in the fetch response 
> were actually requested. This is usually not a problem because the broker 
> doesn't send us invalid fetch responses, but it's probably better to be a 
> little more defensive when handling responses.





[GitHub] kafka pull request #1857: KAFKA-4172: Ensure fetches responses contain the r...

2016-09-14 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1857

KAFKA-4172: Ensure fetch responses contain the requested partitions



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-4172

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1857.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1857


commit 02aa611e3aba76bb73cee15b2706f7cf24cb6279
Author: Jason Gustafson 
Date:   2016-09-15T00:16:56Z

KAFKA-4172: Ensure fetch responses contain the requested partitions






[jira] [Created] (KAFKA-4172) Fix masked test error in KafkaConsumerTest.testSubscriptionChangesWithAutoCommitEnabled

2016-09-14 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-4172:
--

 Summary: Fix masked test error in 
KafkaConsumerTest.testSubscriptionChangesWithAutoCommitEnabled
 Key: KAFKA-4172
 URL: https://issues.apache.org/jira/browse/KAFKA-4172
 Project: Kafka
  Issue Type: Bug
  Components: consumer, unit tests
Reporter: Jason Gustafson
Assignee: Jason Gustafson
 Fix For: 0.10.1.0


This test case has an incorrectly matched mock fetch response, which was 
silently raising an NPE, which was caught in NetworkClient. In general, aside 
from fixing the test, we are probably missing a null check in the FetchResponse 
to verify that the partitions included in the fetch response were actually 
requested. This is usually not a problem because the broker doesn't send us 
invalid fetch responses, but it's probably better to be a little more defensive 
when handling responses.





[jira] [Work started] (KAFKA-4114) Allow for different "auto.offset.reset" strategies for different input streams

2016-09-14 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4114 started by Bill Bejeck.
--
> Allow for different "auto.offset.reset" strategies for different input streams
> --
>
> Key: KAFKA-4114
> URL: https://issues.apache.org/jira/browse/KAFKA-4114
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Bill Bejeck
>
> Today we only have one consumer config "auto.offset.reset" to control that 
> behavior, which means all streams are read either from "earliest" or "latest".
> However, it would be useful to improve this setting to give users finer 
> control over different input streams. For example, with two input streams, 
> one of them could always read from offset 0 upon (re)starting, while the 
> other reads from the log end offset.
> This JIRA requires extending the {{KStreamBuilder}} API methods 
> {{.stream(...)}} and {{.table(...)}} with a new parameter that indicates the 
> initial offset to be used.
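The per-stream control proposed above amounts to resolving a reset policy per topic with the consumer-wide setting as a fallback. A minimal sketch, with hypothetical names (the real API shape would be decided in the KStreamBuilder changes this JIRA describes):

```java
import java.util.Map;

// Illustrative sketch: resolve a per-topic offset reset policy, falling back
// to the global "auto.offset.reset" default when no override is registered.
public class OffsetResetResolver {
    enum Policy { EARLIEST, LATEST }

    private final Map<String, Policy> perTopic;
    private final Policy globalDefault;

    public OffsetResetResolver(Map<String, Policy> perTopic, Policy globalDefault) {
        this.perTopic = perTopic;
        this.globalDefault = globalDefault;
    }

    public Policy policyFor(String topic) {
        return perTopic.getOrDefault(topic, globalDefault);
    }
}
```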





[jira] [Commented] (KAFKA-4153) Incorrect KStream-KStream join behavior with asymmetric time window

2016-09-14 Thread Elias Levy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491553#comment-15491553
 ] 

Elias Levy commented on KAFKA-4153:
---

I've updated the PR to reverse the before & after semantics as you've pointed 
out.

> Incorrect KStream-KStream join behavior with asymmetric time window
> ---
>
> Key: KAFKA-4153
> URL: https://issues.apache.org/jira/browse/KAFKA-4153
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Elias Levy
>Assignee: Guozhang Wang
>
> Using Kafka 0.10.0.1, if joining records in two streams separated by some 
> time, but only when records from one stream are newer than records from the 
> other, i.e. doing:
> {{stream1.join(stream2, valueJoiner, JoinWindows.of("X").after(1))}}
> One would expect that the following would be equivalent:
> {{stream2.join(stream1, valueJoiner, JoinWindows.of("X").before(1))}}
> Alas, this is not the case. Instead, this generates the same output as 
> the first example:
> {{stream2.join(stream1, valueJoiner, JoinWindows.of("X").after(1))}}
> The problem is that the 
> [{{DefaultJoin}}|https://github.com/apache/kafka/blob/caa9bd0fcd2fab4758791408e2b145532153910e/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamImpl.java#L692-L697]
> implementation in {{KStreamImpl}} fails to reverse the {{before}} and 
> {{after}} values when creating the {{KStreamKStreamJoin}} for the other 
> stream, even though it calls {{reverseJoiner}} to reverse the joiner.
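The missing reversal can be sketched with a simplified window type (the real fix lives in KStreamImpl's DefaultJoin and uses JoinWindows): when the join is flipped to the other stream, the before/after bounds must be swapped along with the joiner.

```java
// Illustrative sketch: a time window with asymmetric before/after bounds.
// Flipping the join sides must swap the bounds, which is the step the
// DefaultJoin implementation described above omits.
public class JoinWindowSketch {
    final long beforeMs, afterMs;

    JoinWindowSketch(long beforeMs, long afterMs) {
        this.beforeMs = beforeMs;
        this.afterMs = afterMs;
    }

    // Window seen from the other stream's perspective: bounds swapped.
    JoinWindowSketch reversed() {
        return new JoinWindowSketch(afterMs, beforeMs);
    }
}
```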





[jira] [Commented] (KAFKA-4171) Kafka-connect prints out keystore and truststore password in log

2016-09-14 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491532#comment-15491532
 ] 

Ewen Cheslack-Postava commented on KAFKA-4171:
--

Yeah, that's almost definitely what's happening [~criccomini]. I'm not sure 
that actually stripping them out will solve this problem, though, because the 
consumer.*-prefixed versions will probably also be printed by the worker config. 
We can try to remove the settings in each case, although this is also just a 
more general problem with the way Kafka handles configs (e.g. a sensitive 
serializer setting, such as SSL settings for a schema registry, 
can't possibly be known by the framework and so would always be logged). So I 
think we can fix this immediate issue, but the way configs are logged might 
still be a problem.
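The immediate mitigation amounts to masking sensitive values before they reach the log. A minimal sketch; note that Kafka's real mechanism is ConfigDef's Password type, which this does not reproduce, and the key-suffix heuristic below is a simplifying assumption:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: mask values whose key looks sensitive before logging,
// so unknown-but-sensitive settings like 'consumer.ssl.truststore.password'
// are never printed in plaintext. The suffix check is a stand-in for the
// framework knowing which settings are secrets.
public class ConfigMasker {
    public static Map<String, String> masked(Map<String, String> config) {
        Map<String, String> out = new HashMap<>();
        for (Map.Entry<String, String> e : config.entrySet()) {
            boolean sensitive = e.getKey().endsWith("password");
            out.put(e.getKey(), sensitive ? "[hidden]" : e.getValue());
        }
        return out;
    }
}
```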

> Kafka-connect prints out keystore and truststore password in log
> --
>
> Key: KAFKA-4171
> URL: https://issues.apache.org/jira/browse/KAFKA-4171
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0
>Reporter: Akshath Patkar
>Assignee: Ewen Cheslack-Postava
>
> Kafka-connect prints out keystore and truststore password in log
> [2016-09-14 16:30:33,971] WARN The configuration 
> consumer.ssl.truststore.password = X was supplied but isn't a known 
> config. (org.apache.kafka.clients.consumer.ConsumerConfig:186)





Build failed in Jenkins: kafka-trunk-jdk7 #1538

2016-09-14 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-4162: Fixed typo "rebalance"

--
[...truncated 4880 lines...]
kafka.api.SaslPlainPlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededToReadFromNonExistentTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededToReadFromNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > testDeleteWithWildCardAuth STARTED

kafka.api.AuthorizerIntegrationTest > testDeleteWithWildCardAuth PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testListOfsetsWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testListOfsetsWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicRead STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicRead PASSED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededForWritingToNonExistentTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededForWritingToNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithOffsetLookupAndNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithOffsetLookupAndNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicDescribe STARTED
ERROR: Could not install GRADLE_2_4_RC_2_HOME
java.lang.NullPointerException
at 
hudson.plugins.toolenv.ToolEnvBuildWrapper$1.buildEnvVars(ToolEnvBuildWrapper.java:46)
at hudson.model.AbstractBuild.getEnvironment(AbstractBuild.java:931)
at hudson.plugins.git.GitSCM.getParamExpandedRepos(GitSCM.java:404)
at 
hudson.plugins.git.GitSCM.compareRemoteRevisionWithImpl(GitSCM.java:609)
at hudson.plugins.git.GitSCM.compareRemoteRevisionWith(GitSCM.java:574)
at hudson.scm.SCM.compareRemoteRevisionWith(SCM.java:381)
at hudson.scm.SCM.poll(SCM.java:398)
at hudson.model.AbstractProject._poll(AbstractProject.java:1446)
at hudson.model.AbstractProject.poll(AbstractProject.java:1349)
at hudson.triggers.SCMTrigger$Runner.runPolling(SCMTrigger.java:528)
at hudson.triggers.SCMTrigger$Runner.run(SCMTrigger.java:557)
at 
hudson.util.SequentialExecutionQueue$QueueEntry.run(SequentialExecutionQueue.java:119)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testAuthorization STARTED

kafka.api.AuthorizerIntegrationTest > testAuthorization PASSED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithoutDescribe 
STARTED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithoutDescribe 
PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite PASSED


Build failed in Jenkins: kafka-0.10.0-jdk7 #199

2016-09-14 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-4162: Fixed typo "rebalance"

--
[...truncated 5756 lines...]
org.apache.kafka.streams.kstream.internals.KTableMapValuesTest > 
testValueGetter PASSED

org.apache.kafka.streams.kstream.internals.KStreamSelectKeyTest > testSelectKey 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testNotSendingOldValues PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testSendingOldValues PASSED

org.apache.kafka.streams.kstream.internals.WindowedStreamPartitionerTest > 
testCopartitioning PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testOuterJoin PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testWindowing PASSED

org.apache.kafka.streams.kstream.internals.KeyValuePrinterProcessorTest > 
testPrintKeyValueWithProvidedSerde PASSED

org.apache.kafka.streams.kstream.internals.KeyValuePrinterProcessorTest > 
testPrintKeyValueDefaultSerde PASSED

org.apache.kafka.streams.kstream.internals.KStreamTransformValuesTest > 
testTransform PASSED

org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
testGroupedCountOccurences PASSED

org.apache.kafka.streams.kstream.internals.KStreamKTableLeftJoinTest > 
testNotJoinable PASSED

org.apache.kafka.streams.kstream.internals.KStreamKTableLeftJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > 
testSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > 
testNotSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableLeftJoinTest > 
testSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableLeftJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableLeftJoinTest > 
testNotSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > 
testSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > 
testNotSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > 
testSkipNullOnMaterialization PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > testKTable PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > testValueGetter 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
testToWithNullValueSerdeDoesntNPE PASSED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > testNumProcesses 
PASSED

org.apache.kafka.streams.kstream.internals.KTableForeachTest > testForeach 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamLeftJoinTest > 
testLeftJoin PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamLeftJoinTest > 
testWindowing PASSED

org.apache.kafka.streams.kstream.internals.KTableMapKeysTest > 
testMapKeysConvertingToStream PASSED

org.apache.kafka.streams.kstream.internals.KStreamBranchTest > 
testKStreamBranch PASSED

org.apache.kafka.streams.kstream.internals.KStreamMapValuesTest > 
testFlatMapValues PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testNotSedingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testSedingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testKTable PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testValueGetter 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapTest > testFlatMap 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapValuesTest > 
testFlatMapValues PASSED

org.apache.kafka.streams.kstream.internals.KStreamFilterTest > testFilterNot 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamFilterTest > testFilter PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testAggBasic 
PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggRepartition PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > testStateStore 
PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > testRepartition 
PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > 
testStateStoreLazyEval PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > testKTable PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > testValueGetter 
PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testMerge PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testFrom PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testNewName PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 

[jira] [Commented] (KAFKA-4074) Deleting a topic can make it unavailable even if delete.topic.enable is false

2016-09-14 Thread Mayuresh Gharat (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491390#comment-15491390
 ] 

Mayuresh Gharat commented on KAFKA-4074:


[~omkreddy] merging your PR with that in 
https://issues.apache.org/jira/browse/KAFKA-3175 would be great. 


> Deleting a topic can make it unavailable even if delete.topic.enable is false
> -
>
> Key: KAFKA-4074
> URL: https://issues.apache.org/jira/browse/KAFKA-4074
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Reporter: Joel Koshy
>Assignee: Manikumar Reddy
> Fix For: 0.10.1.0
>
>
> The {{delete.topic.enable}} configuration does not completely block the 
> effects of delete topic since the controller may (indirectly) query the list 
> of topics under the delete-topic znode.
> To reproduce:
> * Delete topic X
> * Force a controller move (either by bouncing or removing the controller 
> znode)
> * The new controller will send out UpdateMetadataRequests with leader=-2 for 
> the partitions of X
> * Producers eventually stop producing to that topic
> The reason for this is that when ControllerChannelManager adds 
> UpdateMetadataRequests for brokers, we directly use the partitionsToBeDeleted 
> field of the DeleteTopicManager (which is set to the partitions of the topics 
> under the delete-topic znode on controller startup).
> In order to get out of the situation you have to remove X from the znode and 
> then force another controller move.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4153) Incorrect KStream-KStream join behavior with asymmetric time window

2016-09-14 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491329#comment-15491329
 ] 

Guozhang Wang commented on KAFKA-4153:
--

According to the stated Javadoc, the {{JoinWindows}} definition should be 
reversed: the {{before}} function, for example, states that it matches {{if the 
timestamp of a record from the secondary stream is earlier than or equal to the 
timestamp of a record from the first stream}}. It is indeed a bug that should 
be fixed; we can review the PR and merge it.

As for defining asymmetric {{JoinWindows}}, I think it is better to complement 
the current {{before}} and {{after}} functions with static constructors, since 
they define the "length" of the window as final values that should not be 
overridden later. 
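
To illustrate the symmetry at issue, here is a minimal self-contained sketch (simplified types, not the actual Streams API) of how the before/after bounds must be swapped when the two sides of a join are reversed:

```java
public class JoinWindowSketch {
    public final long beforeMs;
    public final long afterMs;

    public JoinWindowSketch(long beforeMs, long afterMs) {
        this.beforeMs = beforeMs;
        this.afterMs = afterMs;
    }

    // Swapping the bounds mirrors what reverseJoiner does for the value joiner:
    // stream2.join(stream1, w) should behave like stream1.join(stream2, w.reversed()).
    public JoinWindowSketch reversed() {
        return new JoinWindowSketch(afterMs, beforeMs);
    }

    // A record from the other stream joins if its timestamp lies within
    // [thisTs - beforeMs, thisTs + afterMs].
    public boolean accepts(long thisTs, long otherTs) {
        return otherTs >= thisTs - beforeMs && otherTs <= thisTs + afterMs;
    }
}
```

The reported bug corresponds to using the same (un-reversed) bounds for both orderings of the streams.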

> Incorrect KStream-KStream join behavior with asymmetric time window
> ---
>
> Key: KAFKA-4153
> URL: https://issues.apache.org/jira/browse/KAFKA-4153
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.1
>Reporter: Elias Levy
>Assignee: Guozhang Wang
>
> Using Kafka 0.10.0.1, if joining records in two streams separated by some 
> time, but only when records from one stream are newer than records from the 
> other, i.e. doing:
> {{stream1.join(stream2, valueJoiner, JoinWindows.of("X").after(1))}}
> One would expect that the following would be equivalent:
> {{stream2.join(stream1, valueJoiner, JoinWindows.of("X").before(1))}}
> Alas, this is not the case.  Instead, this generates the same output as 
> the first example:
> {{stream2.join(stream1, valueJoiner, JoinWindows.of("X").after(1))}}
> The problem is that the 
> [{{DefaultJoin}}|https://github.com/apache/kafka/blob/caa9bd0fcd2fab4758791408e2b145532153910e/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KStreamImpl.java#L692-L697]
>  implementation in {{KStreamImpl}} fails to reverse the {{before}} and 
> {{after}} values when it creates the {{KStreamKStreamJoin}} for the other 
> stream, even though it calls {{reverseJoiner}} to reverse the joiner.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1856: Allow underscores in hostname

2016-09-14 Thread rnpridgeon
GitHub user rnpridgeon opened a pull request:

https://github.com/apache/kafka/pull/1856

Allow underscores in hostname

Technically this does not strictly adhere to RFC-952; however, underscores are 
valid in domain names, URLs, and URIs, so we should loosen the requirements a tad. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rnpridgeon/kafka KAFKA-3719

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1856.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1856


commit 75a4441bdca0b603f1590150040cad82e0dd11f0
Author: rnpridgeon 
Date:   2016-09-14T19:51:25Z

Allow underscores for increased flexibility despite compliance issues with 
RFC-952




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4161) Allow connectors to request flush via the context

2016-09-14 Thread Shikhar Bhushan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491296#comment-15491296
 ] 

Shikhar Bhushan commented on KAFKA-4161:


bq. Probably worth clarifying whether we're really talking about just flush 
here or offset commit as well. Flush really only exists in order to support 
offset commit (from the framework's perspective), but since you mention full 
buffers I think you might be getting at a slightly different use case for 
connectors.

Sorry I wasn't clear, flushing data & offset commit are currently coupled as 
you pointed out. If we want to avoid unnecessary redelivery of records it is 
best to commit offsets with the 'most current' knowledge of them, which we 
currently have after calling {{flush()}}.

bq. In general, I think it'd actually be even better to just get rid of the 
idea of having to flush as a common operation as it hurts throughput to have to 
flush entirely to commit offsets (we are flushing the pipeline, which is never 
good). Ideally we could do what the framework does with source connectors and 
just track which data has been successfully delivered and use that for the 
majority of offset commits. We'd still need it for cases like shutdown where we 
want to make sure all data has been sent, but since the framework controls 
delivery of data, maybe it's even better just to wait for that data to be 
written. 

Good points, I agree it would be better to make it so {{flush()}} is not 
routine since it can hurt throughput. I think we can deprecate it altogether. 
As a proposal:
{noformat}
abstract class SinkTask {
    ..
    // New method
    public Map<TopicPartition, OffsetAndMetadata> flushedOffsets() {
        throw new NotImplementedException();
    }

    @Deprecated
    public void flush(Map<TopicPartition, OffsetAndMetadata> offsets) { }
    ..
}
{noformat}

Then the periodic offset committing logic would use {{flushedOffsets()}}, and 
if that is not implemented, call {{flush()}} as it does currently so it can 
commit the offset state as of the last {{put()}} call.
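
That fallback could look roughly like this (a minimal sketch with simplified stand-in types — String keys and Long offsets instead of TopicPartition/OffsetAndMetadata — not the actual Connect API):

```java
import java.util.Map;

// Sketch of the proposed framework-side commit path: prefer the new
// flushedOffsets() method, and fall back to the deprecated flush() only
// for tasks that have not implemented it.
abstract class SinkTaskSketch {
    // New method: report offsets that are already safe to commit, without
    // forcing a pipeline flush.
    public Map<String, Long> flushedOffsets() {
        throw new UnsupportedOperationException();
    }

    // Deprecated legacy path: flush so that the current offsets become safe.
    @Deprecated
    public void flush(Map<String, Long> currentOffsets) { }

    // What the periodic offset commit might do.
    static Map<String, Long> offsetsToCommit(SinkTaskSketch task,
                                             Map<String, Long> current) {
        try {
            return task.flushedOffsets();   // new path: no flush required
        } catch (UnsupportedOperationException e) {
            task.flush(current);            // legacy path: flush, then commit
            return current;
        }
    }
}
```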

I don't think {{flush()}} is needed even at shutdown. Tasks are already being 
advised via {{close()}} and can choose to flush any buffered data from there. 
We can do a final offset commit based on the {{flushedOffsets()}} after 
{{close()}} (though this does imply a quirk that even after a 
{{TopicPartition}} is closed we expect tasks to keep offset state around in the 
map returned by {{flushedOffsets()}}).

Additionally, it would be good to have a {{context.requestCommit()}} in the 
spirit of {{context.requestFlush()}} as I was originally proposing. The 
motivation is that connectors can optimize for avoiding unnecessary redelivery 
when recovering from failures. Connectors can choose whatever policies are best 
like number-of-records or size-based batching/buffering for writing to the 
destination system as part of the normal flow of calls to {{put()}}, and 
request a commit when they have actually written data to the destination 
system. There need not be a strong guarantee about whether an offset commit 
actually happens after such a request, so that we don't commit offsets too 
often and can choose to only do it after some minimum interval, e.g. in case a 
connector always requests a commit after every put.

bq. The main reason I think we even need the explicit flush() is that some 
connectors may have very long delays between flushes (e.g. any object stores 
like S3) such that they need to be told directly that they need to write all 
their data (or discard it).

I don't believe it is currently possible for a connector to communicate that it 
wants to discard data rather than write it out when {{flush()}} is called 
(aside from I guess throwing an exception...). With the above proposal the 
decision of when and whether to write data would be completely up to 
connectors.

bq. Was there a specific connector & scenario you were thinking about here?

This came up in a thread on the user list ('Sink Connector feature request: 
SinkTask.putAndReport()')

> Allow connectors to request flush via the context
> -
>
> Key: KAFKA-4161
> URL: https://issues.apache.org/jira/browse/KAFKA-4161
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Reporter: Shikhar Bhushan
>Assignee: Ewen Cheslack-Postava
>  Labels: needs-kip
>
> It is desirable to have, in addition to the time-based flush interval, volume 
> or size-based commits. E.g. a sink connector which is buffering in terms of 
> number of records may want to request a flush when the buffer is full, or 
> when sufficient amount of data has been buffered in a file.
> Having a method like say {{requestFlush()}} on the {{SinkTaskContext}} would 
> allow for connectors to have flexible policies around flushes. This would be 
> 

[jira] [Reopened] (KAFKA-4058) Failure in org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset

2016-09-14 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reopened KAFKA-4058:
--

Re-opening this issue since this transient error is still observable. Sharing 
[~enothereska]'s stack trace here:

{code}
org.apache.kafka.streams.integration.ResetIntegrationTest > 
testReprocessingFromScratchAfterReset FAILED
java.lang.AssertionError: Condition not met within timeout 3. Did not 
receive 10 number of records
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:268)
at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:211)
at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:180)
at 
org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset(ResetIntegrationTest.java:120)
{code}

> Failure in 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset
> --
>
> Key: KAFKA-4058
> URL: https://issues.apache.org/jira/browse/KAFKA-4058
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>  Labels: test
> Fix For: 0.10.1.0
>
>
> {code}
> java.lang.AssertionError: expected:<0> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.cleanGlobal(ResetIntegrationTest.java:225)
>   at 
> org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterReset(ResetIntegrationTest.java:103)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at 

[jira] [Updated] (KAFKA-3719) Pattern regex org.apache.kafka.common.utils.Utils.HOST_PORT_PATTERN is too narrow

2016-09-14 Thread Ryan P (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan P updated KAFKA-3719:
--
Assignee: Ryan P

> Pattern regex org.apache.kafka.common.utils.Utils.HOST_PORT_PATTERN is too 
> narrow
> -
>
> Key: KAFKA-3719
> URL: https://issues.apache.org/jira/browse/KAFKA-3719
> Project: Kafka
>  Issue Type: Bug
>Reporter: Balazs Kossovics
>Assignee: Ryan P
>Priority: Trivial
>
> In our continuous integration environment the Kafka brokers run on hosts 
> containing underscores in their names. The current regex splits incorrectly 
> these names into host and port parts.
> I could submit a pull request if someone confirms that this is indeed a bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (KAFKA-3719) Pattern regex org.apache.kafka.common.utils.Utils.HOST_PORT_PATTERN is too narrow

2016-09-14 Thread Ryan P (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan P reopened KAFKA-3719:
---

I would like to re-open this topic for discussion. [~gwenshap] is correct that 
underscores are not valid in hostnames per RFC-952 and RFC-1123. 

However, they are still valid domain name characters per RFC-2181.

"The DNS itself places only one restriction on the particular labels
   that can be used to identify resource records.  That one restriction
   relates to the length of the label and the full name.  The length of
   any one label is limited to between 1 and 63 octets.  A full domain
   name is limited to 255 octets (including the separators).  The zero
   length full name is defined as representing the root of the DNS tree,
   and is typically written and displayed as ".".  Those restrictions
   aside, any binary string whatever can be used as the label of any
   resource record."

So while these underscores do technically violate the standard put in place for 
hostnames, we could stand to be a tad more lenient given the relaxed set of 
standards for domain names. I personally feel that the flexibility in 
naming conventions outweighs the cost of strict standards compliance. 

Source:
https://tools.ietf.org/html/rfc2181#section-11
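
A minimal sketch of a relaxed host:port matcher that admits underscores (an illustrative pattern, not the actual expression in org.apache.kafka.common.utils.Utils.HOST_PORT_PATTERN):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HostPort {
    // Illustrative relaxed pattern: '_' added to the allowed host characters,
    // with optional brackets so IPv6 literals like [::1]:9092 still parse.
    private static final Pattern HOST_PORT_PATTERN =
            Pattern.compile("^\\[?([0-9a-zA-Z\\-_.%:]*)\\]?:([0-9]+)$");

    // Returns the host part of "host:port", or null if the address
    // does not match.
    public static String getHost(String address) {
        Matcher m = HOST_PORT_PATTERN.matcher(address);
        return m.matches() ? m.group(1) : null;
    }

    // Returns the port part of "host:port", or null if the address
    // does not match.
    public static Integer getPort(String address) {
        Matcher m = HOST_PORT_PATTERN.matcher(address);
        return m.matches() ? Integer.parseInt(m.group(2)) : null;
    }
}
```

With the stricter character class, addresses like {{my_host:9092}} fail to split into host and port, which is the behavior reported in this issue.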



> Pattern regex org.apache.kafka.common.utils.Utils.HOST_PORT_PATTERN is too 
> narrow
> -
>
> Key: KAFKA-3719
> URL: https://issues.apache.org/jira/browse/KAFKA-3719
> Project: Kafka
>  Issue Type: Bug
>Reporter: Balazs Kossovics
>Priority: Trivial
>
> In our continuous integration environment the Kafka brokers run on hosts 
> containing underscores in their names. The current regex splits incorrectly 
> these names into host and port parts.
> I could submit a pull request if someone confirms that this is indeed a bug.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4171) Kafka-connect prints outs keystone and truststore password in log2

2016-09-14 Thread Chris Riccomini (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491164#comment-15491164
 ] 

Chris Riccomini commented on KAFKA-4171:


Wondering if KC is just copying the consumer.*-prefixed configuration into the 
config with the prefix removed, but leaving the consumer.-prefixed entries 
floating around as well? Seems like the prefix should be stripped when 
forwarding the config on to the underlying Kafka consumer.

> Kafka-connect prints outs keystone and truststore password in log2
> --
>
> Key: KAFKA-4171
> URL: https://issues.apache.org/jira/browse/KAFKA-4171
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0
>Reporter: Akshath Patkar
>Assignee: Ewen Cheslack-Postava
>
> Kafka-connect prints out keystore and truststore password in log
> [2016-09-14 16:30:33,971] WARN The configuration 
> consumer.ssl.truststore.password = X was supplied but isn't a known 
> config. (org.apache.kafka.clients.consumer.ConsumerConfig:186)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4171) Kafka-connect prints outs keystone and truststore password in log2

2016-09-14 Thread Akshath Patkar (JIRA)
Akshath Patkar created KAFKA-4171:
-

 Summary: Kafka-connect prints outs keystone and truststore 
password in log2
 Key: KAFKA-4171
 URL: https://issues.apache.org/jira/browse/KAFKA-4171
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Affects Versions: 0.10.0.0
Reporter: Akshath Patkar
Assignee: Ewen Cheslack-Postava


Kafka-connect prints out keystore and truststore password in log

[2016-09-14 16:30:33,971] WARN The configuration 
consumer.ssl.truststore.password = X was supplied but isn't a known config. 
(org.apache.kafka.clients.consumer.ConsumerConfig:186)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4160) Consumer onPartitionsRevoked should not be invoked while holding the coordinator lock

2016-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491143#comment-15491143
 ] 

ASF GitHub Bot commented on KAFKA-4160:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1855

KAFKA-4160: Ensure rebalance listener not called with coordinator lock



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-4160

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1855.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1855


commit 956fe2a546d5ef9b6c6beb485060e510a7829520
Author: Jason Gustafson 
Date:   2016-09-13T23:02:03Z

KAFKA-4160: Ensure rebalance listener not called with coordinator lock




> Consumer onPartitionsRevoked should not be invoked while holding the 
> coordinator lock
> -
>
> Key: KAFKA-4160
> URL: https://issues.apache.org/jira/browse/KAFKA-4160
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.10.1.0
>
>
> We have a single lock which is used for protecting access to shared 
> coordinator state between the foreground thread and the background heartbeat 
> thread. Currently, the onPartitionsRevoked callback is invoked while holding 
> this lock, which prevents the heartbeat thread from sending any heartbeats. 
> If the heartbeat thread is blocked for longer than the session timeout, then 
> the consumer is kicked out of the group. Typically this is not a problem 
> because onPartitionsRevoked might only commit offsets, but for Kafka Streams, 
> there is some expensive cleanup logic which can take longer than the session 
> timeout.
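
The fix described in the issue can be sketched like this (illustrative only, with simplified types rather than the actual consumer internals): snapshot the shared state while holding the coordinator lock, then invoke the user callback after releasing it, so a slow callback no longer starves heartbeats.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.locks.ReentrantLock;

public class RebalanceSketch {
    // Single lock shared by the foreground thread and the heartbeat thread.
    private final ReentrantLock coordinatorLock = new ReentrantLock();

    public interface Listener {
        void onPartitionsRevoked(Set<String> partitions);
    }

    public boolean lockHeldByCurrentThread() {
        return coordinatorLock.isHeldByCurrentThread();
    }

    public void revoke(Set<String> assigned, Listener listener) {
        Set<String> snapshot;
        coordinatorLock.lock();
        try {
            // Read shared coordinator state only while holding the lock.
            snapshot = new HashSet<>(assigned);
        } finally {
            coordinatorLock.unlock();
        }
        // Lock released: the heartbeat thread can proceed even if the
        // user's callback (e.g. Streams cleanup) runs for a long time.
        listener.onPartitionsRevoked(Collections.unmodifiableSet(snapshot));
    }
}
```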



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1855: KAFKA-4160: Ensure rebalance listener not called w...

2016-09-14 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1855

KAFKA-4160: Ensure rebalance listener not called with coordinator lock



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-4160

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1855.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1855


commit 956fe2a546d5ef9b6c6beb485060e510a7829520
Author: Jason Gustafson 
Date:   2016-09-13T23:02:03Z

KAFKA-4160: Ensure rebalance listener not called with coordinator lock




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4108) Improve DumpLogSegments offsets-decoder output format

2016-09-14 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491082#comment-15491082
 ] 

Vahid Hashemian commented on KAFKA-4108:


[~hachikuji] Would changing the output format from
{code}
key: metadata::cgroup1 payload: 
consumer:range:1:{consumer-1-040134f0-4d2a-4665-b996-a59322cc82cf=[t1-0]}
{code}
to
{code}
key: {metadata:cgroup1} payload: {protocolType: consumer, 
groupMetadata.protocol:range, groupMetadata.generationId:1, 
assignment:{consumer-1-040134f0-4d2a-4665-b996-a59322cc82cf=[t1-0]}}
{code}
be sufficient for this?
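A sketch of the decoding step (hypothetical helper, not the DumpLogSegments code): split the legacy colon-delimited metadata string into named fields and emit JSON:

```python
import json

def decode_group_metadata(raw):
    """Split the legacy colon-delimited group metadata string
    (protocolType:protocol:generationId:assignment) into named fields.
    The assignment may itself contain colons, so split at most 3 times."""
    protocol_type, protocol, generation, assignment = raw.split(":", 3)
    return {
        "protocolType": protocol_type,
        "groupMetadata.protocol": protocol,
        "groupMetadata.generationId": int(generation),
        "assignment": assignment,
    }

raw = "consumer:range:1:{consumer-1-040134f0-4d2a-4665-b996-a59322cc82cf=[t1-0]}"
print(json.dumps(decode_group_metadata(raw)))
```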

> Improve DumpLogSegments offsets-decoder output format
> -
>
> Key: KAFKA-4108
> URL: https://issues.apache.org/jira/browse/KAFKA-4108
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>
> When using the DumpLogSegments with the "--offsets-decoder" option (for 
> consuming __consumer_offsets), the encoding of group metadata makes it a 
> little difficult to identify individual fields. In particular, we use the 
> following formatted string for group metadata: 
> {code}
> ${protocolType}:${groupMetadata.protocol}:${groupMetadata.generationId}:${assignment}
> {code}
> Keys have a similar formatting. Most users are probably not going to know 
> which field is which based only on the output, so it would be helpful to 
> include field names. Maybe we could just output a JSON object?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-4162) Typo in Kafka Connect document

2016-09-14 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-4162.
--
   Resolution: Fixed
Fix Version/s: 0.10.1.0
   0.10.0.2

Issue resolved by pull request 1853
[https://github.com/apache/kafka/pull/1853]

> Typo in Kafka Connect document
> --
>
> Key: KAFKA-4162
> URL: https://issues.apache.org/jira/browse/KAFKA-4162
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.1
>Reporter: David Chen
>Assignee: Ewen Cheslack-Postava
>Priority: Trivial
>  Labels: documentation, easyfix
> Fix For: 0.10.0.2, 0.10.1.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Fixed typo "reblaance" -> "rebalance".





[GitHub] kafka pull request #1853: KAFKA-4162: Fixed typo "rebalance"

2016-09-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1853




[jira] [Commented] (KAFKA-4162) Typo in Kafka Connect document

2016-09-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15491017#comment-15491017
 ] 

ASF GitHub Bot commented on KAFKA-4162:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1853


> Typo in Kafka Connect document
> --
>
> Key: KAFKA-4162
> URL: https://issues.apache.org/jira/browse/KAFKA-4162
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.1
>Reporter: David Chen
>Assignee: Ewen Cheslack-Postava
>Priority: Trivial
>  Labels: documentation, easyfix
> Fix For: 0.10.1.0, 0.10.0.2
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Fixed typo "reblaance" -> "rebalance".





Re: [VOTE] KIP-79 - ListOffsetRequest v1 and search by timestamp methods in new consumer.

2016-09-14 Thread Bill Bejeck
+1

On Tue, Sep 13, 2016 at 11:08 PM, Jason Gustafson 
wrote:

> +1 and thanks for the great proposal!
>
> On Fri, Sep 9, 2016 at 4:38 PM, Becket Qin  wrote:
>
> > Hi all,
> >
> > I'd like to start the voting for KIP-79
> >
> > In short we propose to :
> > 1. add a ListOffsetRequest/ListOffsetResponse v1, and
> > 2. add earliestOffsets(), latestOffsets() and offsetForTime() methods in
> the
> > new consumer.
> >
> > The KIP wiki is the following:
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65868090
> >
> > Thanks,
> >
> > Jiangjie (Becket) Qin
> >
>
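The core of the proposed offsetForTime() semantics (the earliest offset whose timestamp is at or after the target) can be sketched as a binary search over a sorted timestamp index; names here are illustrative, not the actual consumer API:

```python
import bisect

def offset_for_time(index, target_ts):
    """index: list of (timestamp, offset) pairs sorted by timestamp.
    Return the offset of the first message with timestamp >= target_ts,
    or None if no such message exists."""
    timestamps = [ts for ts, _ in index]
    pos = bisect.bisect_left(timestamps, target_ts)
    return index[pos][1] if pos < len(index) else None

# A tiny in-memory "partition": (timestamp, offset) pairs.
index = [(100, 0), (150, 1), (200, 2), (300, 3)]
```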


[GitHub] kafka pull request #1854: MINOR: Give a name to the coordinator heartbeat th...

2016-09-14 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1854




[GitHub] kafka pull request #1854: MINOR: Give a name to the coordinator heartbeat th...

2016-09-14 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/1854

MINOR: Give a name to the coordinator heartbeat thread

Followed the same naming pattern as the producer sender thread.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka heartbeat-thread-name

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1854.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1854


commit a6334518dc5c7ae904f479599d65d826efaf9373
Author: Ismael Juma 
Date:   2016-09-14T15:57:41Z

Give a name to the coordinator heartbeat thread

Followed the same naming pattern as the producer sender thread.






[jira] [Work started] (KAFKA-4167) Add cache metrics

2016-09-14 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4167 started by Eno Thereska.
---
> Add cache metrics
> -
>
> Key: KAFKA-4167
> URL: https://issues.apache.org/jira/browse/KAFKA-4167
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
> Fix For: 0.10.1.0
>
>
> Would be good to report things like hits and misses, overwrites, number of 
> puts and gets, number of flushes etc.





[jira] [Commented] (KAFKA-4162) Typo in Kafka Connect document

2016-09-14 Thread David Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490664#comment-15490664
 ] 

David Chen commented on KAFKA-4162:
---

PR submitted: https://github.com/apache/kafka/pull/1853

> Typo in Kafka Connect document
> --
>
> Key: KAFKA-4162
> URL: https://issues.apache.org/jira/browse/KAFKA-4162
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.1
>Reporter: David Chen
>Assignee: Ewen Cheslack-Postava
>Priority: Trivial
>  Labels: documentation, easyfix
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Fixed typo "reblaance" -> "rebalance".





[jira] [Created] (KAFKA-4170) required() method not available in joptsimple.OptionSpec

2016-09-14 Thread Martin Gainty (JIRA)
Martin Gainty created KAFKA-4170:


 Summary: required() method not available in 
joptsimple.OptionSpec
 Key: KAFKA-4170
 URL: https://issues.apache.org/jira/browse/KAFKA-4170
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.10.0.1
 Environment: java 1.
Reporter: Martin Gainty
Priority: Critical


kafka.tools.StreamResetter.java:
{code}
private static joptsimple.OptionSpec<String> applicationIdOption;

private void parseArguments(final String[] args) throws java.io.IOException {
    final joptsimple.OptionParser optionParser = new joptsimple.OptionParser();
    applicationIdOption = optionParser.accepts("application-id",
            "The Kafka Streams application ID (application.id)")
            .withRequiredArg()
            .ofType(String.class)
            .describedAs("id")
            .required();
}
{code}

// required() method is not available in jopt-simple 4.9 (joptsimple.OptionSpec.java)

Maven dependency in use:
{code}
<dependency>
    <groupId>net.sf.jopt-simple</groupId>
    <artifactId>jopt-simple</artifactId>
    <version>4.9</version>
</dependency>
{code}





[jira] [Assigned] (KAFKA-4167) Add cache metrics

2016-09-14 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska reassigned KAFKA-4167:
---

Assignee: Eno Thereska

> Add cache metrics
> -
>
> Key: KAFKA-4167
> URL: https://issues.apache.org/jira/browse/KAFKA-4167
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
> Fix For: 0.10.1.0
>
>
> Would be good to report things like hits and misses, overwrites, number of 
> puts and gets, number of flushes etc.





[jira] [Commented] (KAFKA-4169) Calculation of message size is too conservative for compressed messages

2016-09-14 Thread Dustin Cote (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490387#comment-15490387
 ] 

Dustin Cote commented on KAFKA-4169:


Ah, good point.  The user reporting this issue suggested pushing the check into 
the RecordAccumulator.  That might be a better option.

> Calculation of message size is too conservative for compressed messages
> ---
>
> Key: KAFKA-4169
> URL: https://issues.apache.org/jira/browse/KAFKA-4169
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.0.1
>Reporter: Dustin Cote
>
> Currently the producer uses the uncompressed message size to check against 
> {{max.request.size}} even if a {{compression.type}} is defined.  This can be 
> reproduced as follows:
> {code}
> # dd if=/dev/zero of=/tmp/outsmaller.dat bs=1024 count=1000
> # cat /tmp/out.dat | bin/kafka-console-producer --broker-list localhost:9092 
> --topic tester --producer-property compression.type=gzip
> {code}
> The above code creates a file that is the same size as the default for 
> {{max.request.size}} and the added overhead of the message pushes the 
> uncompressed size over the limit.  Compressing the message ahead of time 
> allows the message to go through.  When the message is blocked, the following 
> exception is produced:
> {code}
> [2016-09-14 08:56:19,558] ERROR Error when sending message to topic tester 
> with key: null, value: 1048576 bytes with error: 
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> org.apache.kafka.common.errors.RecordTooLargeException: The message is 
> 1048610 bytes when serialized which is larger than the maximum request size 
> you have configured with the max.request.size configuration.
> {code}
> For completeness, I have confirmed that the console producer is setting 
> {{compression.type}} properly by enabling DEBUG so this appears to be a 
> problem in the size estimate of the message itself.  I would suggest we 
> compress before we serialize instead of the other way around to avoid this.





[jira] [Commented] (KAFKA-4169) Calculation of message size is too conservative for compressed messages

2016-09-14 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490383#comment-15490383
 ] 

Ismael Juma commented on KAFKA-4169:


We can't compress before serializing as the compression is done on a batch of 
messages to achieve a better compression ratio. I think we need to move the 
check so that it happens later in the process instead of doing it right after 
we serialize the message.

> Calculation of message size is too conservative for compressed messages
> ---
>
> Key: KAFKA-4169
> URL: https://issues.apache.org/jira/browse/KAFKA-4169
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.0.1
>Reporter: Dustin Cote
>
> Currently the producer uses the uncompressed message size to check against 
> {{max.request.size}} even if a {{compression.type}} is defined.  This can be 
> reproduced as follows:
> {code}
> # dd if=/dev/zero of=/tmp/outsmaller.dat bs=1024 count=1000
> # cat /tmp/out.dat | bin/kafka-console-producer --broker-list localhost:9092 
> --topic tester --producer-property compression.type=gzip
> {code}
> The above code creates a file that is the same size as the default for 
> {{max.request.size}} and the added overhead of the message pushes the 
> uncompressed size over the limit.  Compressing the message ahead of time 
> allows the message to go through.  When the message is blocked, the following 
> exception is produced:
> {code}
> [2016-09-14 08:56:19,558] ERROR Error when sending message to topic tester 
> with key: null, value: 1048576 bytes with error: 
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> org.apache.kafka.common.errors.RecordTooLargeException: The message is 
> 1048610 bytes when serialized which is larger than the maximum request size 
> you have configured with the max.request.size configuration.
> {code}
> For completeness, I have confirmed that the console producer is setting 
> {{compression.type}} properly by enabling DEBUG so this appears to be a 
> problem in the size estimate of the message itself.  I would suggest we 
> compress before we serialize instead of the other way around to avoid this.





[jira] [Created] (KAFKA-4169) Calculation of message size is too conservative for compressed messages

2016-09-14 Thread Dustin Cote (JIRA)
Dustin Cote created KAFKA-4169:
--

 Summary: Calculation of message size is too conservative for 
compressed messages
 Key: KAFKA-4169
 URL: https://issues.apache.org/jira/browse/KAFKA-4169
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.10.0.1
Reporter: Dustin Cote


Currently the producer uses the uncompressed message size to check against 
{{max.request.size}} even if a {{compression.type}} is defined.  This can be 
reproduced as follows:

{code}
# dd if=/dev/zero of=/tmp/outsmaller.dat bs=1024 count=1000

# cat /tmp/out.dat | bin/kafka-console-producer --broker-list localhost:9092 
--topic tester --producer-property compression.type=gzip
{code}

The above code creates a file that is the same size as the default for 
{{max.request.size}} and the added overhead of the message pushes the 
uncompressed size over the limit.  Compressing the message ahead of time allows 
the message to go through.  When the message is blocked, the following 
exception is produced:
{code}
[2016-09-14 08:56:19,558] ERROR Error when sending message to topic tester with 
key: null, value: 1048576 bytes with error: 
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.RecordTooLargeException: The message is 1048610 
bytes when serialized which is larger than the maximum request size you have 
configured with the max.request.size configuration.
{code}

For completeness, I have confirmed that the console producer is setting 
{{compression.type}} properly by enabling DEBUG so this appears to be a problem 
in the size estimate of the message itself.  I would suggest we compress before 
we serialize instead of the other way around to avoid this.
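The mismatch is easy to demonstrate with plain gzip (a standalone sketch, not the producer's actual accounting; the 34-byte overhead figure is illustrative): a 1 MiB record of zeros fails the pre-compression size check even though its compressed form is far below the limit.

```python
import gzip

MAX_REQUEST_SIZE = 1048576      # producer default for max.request.size

payload = bytes(1024 * 1024)    # 1 MiB of zeros, like the dd reproduction
overhead = 34                   # rough per-record framing, illustrative only

uncompressed_estimate = len(payload) + overhead
compressed_size = len(gzip.compress(payload)) + overhead

# The pre-compression check rejects the record...
rejected = uncompressed_estimate > MAX_REQUEST_SIZE
# ...even though the gzipped form is a tiny fraction of the limit.
```

This is why checking the size after batching and compression (e.g. in the accumulator) would accept records that the current check rejects.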





[jira] [Resolved] (KAFKA-3183) Add metrics for persistent store caching layer

2016-09-14 Thread Eno Thereska (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eno Thereska resolved KAFKA-3183.
-
Resolution: Duplicate

> Add metrics for persistent store caching layer
> --
>
> Key: KAFKA-3183
> URL: https://issues.apache.org/jira/browse/KAFKA-3183
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: user-experience
> Fix For: 0.10.1.0
>
>
> We need to add metrics collection such as cache hits / misses, cache size, 
> dirty key size, etc. for the RocksDBStore. However, this may require 
> refactoring the RocksDBStore a bit, since caching is currently not exposed 
> to the MeteredKeyValueStore, and it uses an LRUCacheStore as the cache, 
> which does not keep the dirty key set.





[jira] [Created] (KAFKA-4168) More precise accounting of memory usage

2016-09-14 Thread Eno Thereska (JIRA)
Eno Thereska created KAFKA-4168:
---

 Summary: More precise accounting of memory usage
 Key: KAFKA-4168
 URL: https://issues.apache.org/jira/browse/KAFKA-4168
 Project: Kafka
  Issue Type: Sub-task
Affects Versions: 0.10.1.0
Reporter: Eno Thereska
 Fix For: 0.10.1.0


Right now, the cache.max.bytes.buffering parameter controls the size of the 
cache used. Specifically the size includes the size of the values stored in the 
cache plus basic overheads, such as key size, all the LRU entry sizes, etc. 
However, we could be more fine-grained in the memory accounting and add up the 
size of hash sets, hash maps and their entries more precisely. For example, 
currently a dirty entry is placed into a dirty keys set, but we do not account 
for the size of that set in the memory consumption calculation.

It is likely this falls under "memory management" rather than "buffer cache 
management".





[jira] [Created] (KAFKA-4167) Add cache metrics

2016-09-14 Thread Eno Thereska (JIRA)
Eno Thereska created KAFKA-4167:
---

 Summary: Add cache metrics
 Key: KAFKA-4167
 URL: https://issues.apache.org/jira/browse/KAFKA-4167
 Project: Kafka
  Issue Type: Sub-task
Affects Versions: 0.10.1.0
Reporter: Eno Thereska
 Fix For: 0.10.1.0


Would be good to report things like hits and misses, overwrites, number of puts 
and gets, number of flushes etc.
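A minimal sketch of the counters being asked for (an illustrative wrapper, not the Streams cache): instrument each operation and expose the totals as a metrics map.

```python
class CountingCache:
    """Wrap a plain dict and count hits, misses, puts, gets,
    overwrites and flushes."""
    def __init__(self):
        self._data = {}
        self.metrics = {"hits": 0, "misses": 0, "puts": 0,
                        "gets": 0, "overwrites": 0, "flushes": 0}

    def get(self, key):
        self.metrics["gets"] += 1
        if key in self._data:
            self.metrics["hits"] += 1
            return self._data[key]
        self.metrics["misses"] += 1
        return None

    def put(self, key, value):
        self.metrics["puts"] += 1
        if key in self._data:
            self.metrics["overwrites"] += 1
        self._data[key] = value

    def flush(self):
        self.metrics["flushes"] += 1
        self._data.clear()

cache = CountingCache()
cache.put("a", 1)
cache.put("a", 2)   # overwrite
cache.get("a")      # hit
cache.get("b")      # miss
cache.flush()
```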





[jira] [Created] (KAFKA-4166) TestMirrorMakerService.test_bounce transient system test failure

2016-09-14 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-4166:
--

 Summary: TestMirrorMakerService.test_bounce transient system test 
failure
 Key: KAFKA-4166
 URL: https://issues.apache.org/jira/browse/KAFKA-4166
 Project: Kafka
  Issue Type: Bug
Reporter: Ismael Juma


We've only seen one failure so far and it's a timeout error so it could be an 
environment issue. Filing it here so that we can track it in case there are 
additional failures:

{code}
Module: kafkatest.tests.core.mirror_maker_test
Class:  TestMirrorMakerService
Method: test_bounce
Arguments:
{
  "clean_shutdown": true,
  "new_consumer": true,
  "security_protocol": "SASL_SSL"
}

{code}
 
{code}
test_id:
2016-09-12--001.kafkatest.tests.core.mirror_maker_test.TestMirrorMakerService.test_bounce.clean_shutdown=True.security_protocol=SASL_SSL.new_consumer=True
status: FAIL
run time:   3 minutes 30.354 seconds

Traceback (most recent call last):
  File 
"/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape/tests/runner.py",
 line 106, in run_all_tests
data = self.run_single_test()
  File 
"/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape/tests/runner.py",
 line 162, in run_single_test
return self.current_test_context.function(self.current_test)
  File 
"/var/lib/jenkins/workspace/system-test-kafka/kafka/venv/local/lib/python2.7/site-packages/ducktape/mark/_mark.py",
 line 331, in wrapper
return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
  File 
"/var/lib/jenkins/workspace/system-test-kafka/kafka/tests/kafkatest/tests/core/mirror_maker_test.py",
 line 178, in test_bounce
self.run_produce_consume_validate(core_test_action=lambda: 
self.bounce(clean_shutdown=clean_shutdown))
  File 
"/var/lib/jenkins/workspace/system-test-kafka/kafka/tests/kafkatest/tests/produce_consume_validate.py",
 line 79, in run_produce_consume_validate
raise e
TimeoutError

{code}
 
http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-09-12--001.1473700895--apache--trunk--a7ab9cb/TestMirrorMakerService/test_bounce/clean_shutdown=True.security_protocol=SASL_SSL.new_consumer=True.tgz





[jira] [Updated] (KAFKA-3586) Quotas do not revert to default values when override config is deleted

2016-09-14 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram updated KAFKA-3586:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Fixed under KAFKA-4158.

> Quotas do not revert to default values when override config is deleted
> --
>
> Key: KAFKA-3586
> URL: https://issues.apache.org/jira/browse/KAFKA-3586
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.1
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> When quota override configuration for a clientId is updated (eg using 
> kafka-configs.sh), the quotas of the client are dynamically updated by the 
> broker. But when the quota override configuration is deleted, the changes are 
> ignored (until broker restart). It would make sense for the client's quotas 
> to revert to the default quota configuration when the override is removed 
> from the config.
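The expected semantics can be sketched with a simple fallback lookup (illustrative names and values, not the broker's quota manager): once an override is deleted, lookups should immediately fall back to the default.

```python
DEFAULT_QUOTA = 10.0  # illustrative default quota (e.g. MB/sec)

class QuotaManager:
    def __init__(self, default=DEFAULT_QUOTA):
        self.default = default
        self.overrides = {}

    def update_override(self, client_id, quota):
        self.overrides[client_id] = quota

    def delete_override(self, client_id):
        # The bug: the broker ignored this deletion until restart.
        self.overrides.pop(client_id, None)

    def quota_for(self, client_id):
        # Desired behaviour: revert to the default once the override is gone.
        return self.overrides.get(client_id, self.default)

qm = QuotaManager()
qm.update_override("clientA", 50.0)
before = qm.quota_for("clientA")
qm.delete_override("clientA")
after = qm.quota_for("clientA")
```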





[jira] [Commented] (KAFKA-4126) No relevant log when the topic is non-existent

2016-09-14 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490118#comment-15490118
 ] 

Manikumar Reddy commented on KAFKA-4126:


[~vahid] I think I tried it with auto topic creation (auto.create.topics.enable) disabled.

> No relevant log when the topic is non-existent
> --
>
> Key: KAFKA-4126
> URL: https://issues.apache.org/jira/browse/KAFKA-4126
> Project: Kafka
>  Issue Type: Bug
>Reporter: Balázs Barnabás
>Assignee: Vahid Hashemian
>Priority: Minor
>
> When a producer sends a ProducerRecord into a Kafka topic that doesn't 
> exist, there is no relevant debug/error log that points out the error.





[jira] [Updated] (KAFKA-4165) Add 0.10.0.1 as a source for compatibility tests where relevant

2016-09-14 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4165:
---
Issue Type: Improvement  (was: Bug)

> Add 0.10.0.1 as a source for compatibility tests where relevant
> ---
>
> Key: KAFKA-4165
> URL: https://issues.apache.org/jira/browse/KAFKA-4165
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Priority: Blocker
> Fix For: 0.10.1.0
>
>
> We have a few compatibility tests: message_format_change_test.py, 
> compatibility_test_new_broker_test.py, upgrade_test.py that don't currently 
> test with 0.10.0.1 as the source. We will probably update upgrade_test as 
> part of the cluster id work, but we need to check if any other ones need to 
> be updated too.





[jira] [Created] (KAFKA-4165) Add 0.10.0.1 as a source for compatibility tests where relevant

2016-09-14 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-4165:
--

 Summary: Add 0.10.0.1 as a source for compatibility tests where 
relevant
 Key: KAFKA-4165
 URL: https://issues.apache.org/jira/browse/KAFKA-4165
 Project: Kafka
  Issue Type: Bug
Reporter: Ismael Juma
Priority: Blocker
 Fix For: 0.10.1.0


We have a few compatibility tests: message_format_change_test.py, 
compatibility_test_new_broker_test.py, upgrade_test.py that don't currently 
test with 0.10.0.1 as the source. We will probably update upgrade_test as part 
of the cluster id work, but we need to check if any other ones need to be 
updated too.





[jira] [Commented] (KAFKA-4126) No relevant log when the topic is non-existent

2016-09-14 Thread JIRA

[ 
https://issues.apache.org/jira/browse/KAFKA-4126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15490076#comment-15490076
 ] 

Balázs Barnabás commented on KAFKA-4126:


My Scala application sends a ProducerRecord to Kafka the following way:
{noformat}
val producer = new KafkaProducer[String, Status](kafkaProps)
val data = new ProducerRecord[String, Status]("new_topic", status)
producer.send(data)
{noformat}
I expected to find a log like the ones you wrote, but I don't see them. (Not in 
IntelliJ, Zookeeper or Kafka.)
I'm using Kafka 0.10.0.0 with Scala 2.11.8.

> No relevant log when the topic is non-existent
> --
>
> Key: KAFKA-4126
> URL: https://issues.apache.org/jira/browse/KAFKA-4126
> Project: Kafka
>  Issue Type: Bug
>Reporter: Balázs Barnabás
>Assignee: Vahid Hashemian
>Priority: Minor
>
> When a producer sends a ProducerRecord into a Kafka topic that doesn't 
> exist, there is no relevant debug/error log that points out the error.





[jira] [Created] (KAFKA-4164) Kafka produces excessive logs when publishing message to non-existent topic

2016-09-14 Thread Vimal Sharma (JIRA)
Vimal Sharma created KAFKA-4164:
---

 Summary: Kafka produces excessive logs when publishing message to 
non-existent topic
 Key: KAFKA-4164
 URL: https://issues.apache.org/jira/browse/KAFKA-4164
 Project: Kafka
  Issue Type: Bug
Reporter: Vimal Sharma


When a message is published to a topic which is not already created (and 
auto.create.topics.enable is set to false), Kafka produces excessive WARN logs 
stating that metadata could not be fetched. Below are the logs:

{code}
2016-08-22 06:43:47,655 WARN [kafka-producer-network-thread | producer-1]: clients.NetworkClient (NetworkClient.java:handleResponse(600)) - Error while fetching metadata with correlation id 1177 : {ATLAS_HOOK=UNKNOWN_TOPIC_OR_PARTITION}
2016-08-22 06:43:47,756 WARN [kafka-producer-network-thread | producer-1]: clients.NetworkClient (NetworkClient.java:handleResponse(600)) - Error while fetching metadata with correlation id 1178 : {ATLAS_HOOK=UNKNOWN_TOPIC_OR_PARTITION}
2016-08-22 06:43:47,858 WARN [kafka-producer-network-thread | producer-1]: clients.NetworkClient (NetworkClient.java:handleResponse(600)) - Error while fetching metadata with correlation id 1179 : {ATLAS_HOOK=UNKNOWN_TOPIC_OR_PARTITION}
2016-08-22 06:43:47,961 WARN [kafka-producer-network-thread | producer-1]: clients.NetworkClient (NetworkClient.java:handleResponse(600)) - Error while fetching metadata with correlation id 1180 : {ATLAS_HOOK=UNKNOWN_TOPIC_OR_PARTITION}
2016-08-22 06:43:48,062 WARN [kafka-producer-network-thread | producer-1]: clients.NetworkClient (NetworkClient.java:handleResponse(600)) - Error while fetching metadata with correlation id 1181 : {ATLAS_HOOK=UNKNOWN_TOPIC_OR_PARTITION}
2016-08-22 06:43:48,165 WARN [kafka-producer-network-thread | producer-1]: clients.NetworkClient (NetworkClient.java:handleResponse(600)) - Error while fetching metadata with correlation id 1182 : {ATLAS_HOOK=UNKNOWN_TOPIC_OR_PARTITION}
2016-08-22 06:43:48,265 WARN [kafka-producer-network-thread | producer-1]: clients.NetworkClient (NetworkClient.java:handleResponse(600)) - Error while fetching metadata with correlation id 1183 : {ATLAS_HOOK=UNKNOWN_TOPIC_OR_PARTITION}
2016-08-22 06:43:48,366 WARN [kafka-producer-network-thread | producer-1]: clients.NetworkClient (NetworkClient.java:handleResponse(600)) - Error while fetching metadata with correlation id 1184 : {ATLAS_HOOK=UNKNOWN_TOPIC_OR_PARTITION}
2016-08-22 06:43:48,467 WARN [kafka-producer-network-thread | producer-1]: clients.NetworkClient (NetworkClient.java:handleResponse(600)) - Error while fetching metadata with correlation id 1185 : {ATLAS_HOOK=UNKNOWN_TOPIC_OR_PARTITION}
{code}

The error is not communicated to the caller, so if these logs are suppressed by 
setting the Kafka log level to ERROR, there is no way to debug the issue. It 
would be helpful if the error message (for example 
{ATLAS_HOOK=UNKNOWN_TOPIC_OR_PARTITION}) could be communicated to the caller.
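One possible mitigation for the log volume can be sketched as follows (a standalone illustration with hypothetical names, not the NetworkClient code): emit a given warning at most once per interval and count the suppressed repeats.

```python
import time

class DedupingWarner:
    """Emit a repeated warning at most once per interval,
    counting how many occurrences were suppressed in between."""
    def __init__(self, interval_s=60.0, clock=time.monotonic):
        self.interval_s = interval_s
        self.clock = clock            # injectable for testing
        self._last = {}               # message -> last emit time
        self.suppressed = {}          # message -> suppressed count
        self.emitted = []

    def warn(self, message):
        now = self.clock()
        last = self._last.get(message)
        if last is None or now - last >= self.interval_s:
            self._last[message] = now
            self.emitted.append(message)
        else:
            self.suppressed[message] = self.suppressed.get(message, 0) + 1

# Simulate the burst of identical warnings using a fake clock.
fake_now = [0.0]
warner = DedupingWarner(interval_s=60.0, clock=lambda: fake_now[0])
for _ in range(9):                    # nine identical warnings ~100 ms apart
    warner.warn("ATLAS_HOOK=UNKNOWN_TOPIC_OR_PARTITION")
    fake_now[0] += 0.1
```

This only addresses the log noise; surfacing the error to the caller would still require a separate change.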







[jira] [Work started] (KAFKA-4163) NPE in StreamsMetadataState during re-balance operations

2016-09-14 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4163 started by Damian Guy.
-
> NPE in StreamsMetadataState during re-balance operations
> 
>
> Key: KAFKA-4163
> URL: https://issues.apache.org/jira/browse/KAFKA-4163
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.1.0
>
>
> During rebalance operations it is possible that an NPE can be thrown on 
> StreamsMetadataState operations. We should first check if the Cluster object 
> is non-empty. If it is empty we should return StreamsMetadata.NOT_AVAILABLE.
> Also, we should tidy up InvalidStateStoreException messages in the store API 
> to suggest that the store may have migrated to another instance.





[jira] [Updated] (KAFKA-4163) NPE in StreamsMetadataState during re-balance operations

2016-09-14 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy updated KAFKA-4163:
--
Status: Patch Available  (was: In Progress)

> NPE in StreamsMetadataState during re-balance operations
> 
>
> Key: KAFKA-4163
> URL: https://issues.apache.org/jira/browse/KAFKA-4163
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.1.0
>
>
> During rebalance operations it is possible that an NPE can be thrown on 
> StreamsMetadataState operations. We should first check if the Cluster object 
> is non-empty. If it is empty we should return StreamsMetadata.NOT_AVAILABLE.
> Also, we should tidy up InvalidStateStoreException messages in the store API 
> to suggest that the store may have migrated to another instance.





[jira] [Created] (KAFKA-4163) NPE in StreamsMetadataState during re-balance operations

2016-09-14 Thread Damian Guy (JIRA)
Damian Guy created KAFKA-4163:
-

 Summary: NPE in StreamsMetadataState during re-balance operations
 Key: KAFKA-4163
 URL: https://issues.apache.org/jira/browse/KAFKA-4163
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.10.1.0
Reporter: Damian Guy
Assignee: Damian Guy
 Fix For: 0.10.1.0


During rebalance operations it is possible that an NPE can be thrown on 
StreamsMetadataState operations. We should first check if the Cluster object is 
non-empty. If it is empty, we should return StreamsMetadata.NOT_AVAILABLE.
Also, we should tidy up the InvalidStateStoreException messages in the store API to 
suggest that the store may have migrated to another instance.
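The fix described above is a guard at the top of each metadata lookup. The following standalone Java sketch illustrates the pattern only; the `Cluster` and `StreamsMetadata` classes here are simplified stand-ins for the real Kafka classes (`org.apache.kafka.common.Cluster`, `org.apache.kafka.streams.state.StreamsMetadata`), and `metadataForStore` is a hypothetical lookup, not the actual StreamsMetadataState API.

```java
import java.util.Collection;
import java.util.Collections;

public class MetadataGuardSketch {

    // Simplified stand-in for org.apache.kafka.common.Cluster.
    static final class Cluster {
        private final Collection<String> topics;
        Cluster(Collection<String> topics) { this.topics = topics; }
        boolean isEmpty() { return topics.isEmpty(); }
    }

    // Simplified stand-in for org.apache.kafka.streams.state.StreamsMetadata;
    // NOT_AVAILABLE is the sentinel returned while metadata is unknown.
    static final class StreamsMetadata {
        static final StreamsMetadata NOT_AVAILABLE = new StreamsMetadata("unavailable");
        final String host;
        StreamsMetadata(String host) { this.host = host; }
    }

    // Starts empty, as it would be mid-rebalance before any metadata update.
    private volatile Cluster clusterMetadata = new Cluster(Collections.emptyList());

    void onChange(Cluster cluster) { this.clusterMetadata = cluster; }

    // Hypothetical lookup: guard first, so an empty cluster view yields the
    // sentinel instead of dereferencing missing metadata and throwing an NPE.
    StreamsMetadata metadataForStore(String storeName) {
        if (clusterMetadata.isEmpty()) {
            return StreamsMetadata.NOT_AVAILABLE;
        }
        return new StreamsMetadata("host-for-" + storeName); // placeholder result
    }

    public static void main(String[] args) {
        MetadataGuardSketch state = new MetadataGuardSketch();
        // Mid-rebalance: empty cluster view returns the sentinel, no NPE.
        System.out.println(state.metadataForStore("word-counts") == StreamsMetadata.NOT_AVAILABLE);
        // After a metadata update the lookup proceeds normally.
        state.onChange(new Cluster(Collections.singletonList("topic-a")));
        System.out.println(state.metadataForStore("word-counts") == StreamsMetadata.NOT_AVAILABLE);
    }
}
```

Callers then check for the sentinel rather than catching an exception, which matches the ticket's goal of making "metadata not yet available" an expected state during rebalancing.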





[GitHub] kafka pull request #1853: Fixed typo "rebalance"

2016-09-14 Thread mvj3
GitHub user mvj3 opened a pull request:

https://github.com/apache/kafka/pull/1853

Fixed typo "rebalance"



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mvj3/kafka KAFKA-4162

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1853.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1853


commit a0875cca5d44abcc4d1da76ad1fe4a0474ab4285
Author: David Chen 
Date:   2016-09-14T09:20:10Z

Fixed typo "rebalance"




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-4162) Typo in Kafka Connect document

2016-09-14 Thread David Chen (JIRA)
David Chen created KAFKA-4162:
-

 Summary: Typo in Kafka Connect document
 Key: KAFKA-4162
 URL: https://issues.apache.org/jira/browse/KAFKA-4162
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 0.10.0.1
Reporter: David Chen
Assignee: Ewen Cheslack-Postava
Priority: Trivial


Fixed typo "reblaance" -> "rebalance".





[GitHub] kafka pull request #1438: MINOR: Kafka And ZooKeeper Stop Scripts doesn't re...

2016-09-14 Thread Kamal15
Github user Kamal15 closed the pull request at:

https://github.com/apache/kafka/pull/1438

