Build failed in Jenkins: kafka-trunk-jdk8 #506

2016-04-05 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2998: log warnings when client is disconnected from bootstrap

[me] KAFKA-3384: Conform to POSIX kill usage

--
[...truncated 2640 lines...]
kafka.api.PlaintextConsumerTest > testPatternUnsubscription PASSED

kafka.api.PlaintextConsumerTest > testGroupConsumption PASSED

kafka.api.PlaintextConsumerTest > testPartitionsFor PASSED

kafka.api.PlaintextConsumerTest > testInterceptorsWithWrongKeyValue PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPartitionPauseAndResume PASSED

kafka.api.PlaintextConsumerTest > testConsumeMessagesWithLogAppendTime PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnCloseAfterWakeup PASSED

kafka.api.PlaintextConsumerTest > testMaxPollRecords PASSED

kafka.api.PlaintextConsumerTest > testAutoOffsetReset PASSED

kafka.api.PlaintextConsumerTest > testFetchInvalidOffset PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitIntercept FAILED
java.lang.AssertionError: Expected partitions [topic-0, topic-1, topic2-0, 
topic2-1] but actually got [topic-0, topic-1]
at org.junit.Assert.fail(Assert.java:88)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:765)
at 
kafka.api.PlaintextConsumerTest.changeConsumerSubscriptionAndValidateAssignment(PlaintextConsumerTest.scala:938)
at 
kafka.api.PlaintextConsumerTest.testAutoCommitIntercept(PlaintextConsumerTest.scala:662)

kafka.api.PlaintextConsumerTest > testCommitMetadata PASSED

kafka.api.PlaintextConsumerTest > testRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPatternSubscription PASSED

kafka.api.PlaintextConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.PlaintextConsumerTest > testUnsubscribeTopic PASSED

kafka.api.PlaintextConsumerTest > testListTopics PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.PlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.PlaintextConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.PlaintextConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.ApiUtilsTest > testShortStringNonASCII PASSED

kafka.api.ApiUtilsTest > testShortStringASCII PASSED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures PASSED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures PASSED

kafka.api.QuotasTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.QuotasTest > testThrottledProducerConsumer PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testCompressionSetConsumption 
PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testLeaderSelectionForPartition 
PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testConsumerDecoder PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testConsumerRebalanceListener 
PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testCompression PASSED

kafka.consumer.TopicFilterTest > testWhitelists PASSED

kafka.consumer.TopicFilterTest > 
testWildcardTopicCountGetTopicCountMapEscapeJson PASSED

kafka.consumer.TopicFilterTest > testBlacklists PASSED

kafka.consumer.ConsumerIteratorTest > 
testConsumerIteratorDeduplicationDeepIterator PASSED

kafka.consumer.ConsumerIteratorTest > testConsumerIteratorDecodingFailure PASSED

kafka.consumer.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.consumer.MetricsTest > testMetricsLeak PASSED

kafka.consumer.PartitionAssignorTest > testRoundRobinPartitionAssignor PASSED

kafka.consumer.PartitionAssignorTest > testRangePartitionAssignor PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIPOverrides PASSED

kafka.network.SocketServerTest > testSslSocketServer PASSED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testFromString PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED


[jira] [Updated] (KAFKA-3508) Transient failure in kafka.security.auth.SimpleAclAuthorizerTest.testHighConcurrencyModificationOfResourceAcls

2016-04-05 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3508:
---
Assignee: Grant Henke

> Transient failure in 
> kafka.security.auth.SimpleAclAuthorizerTest.testHighConcurrencyModificationOfResourceAcls
> --
>
> Key: KAFKA-3508
> URL: https://issues.apache.org/jira/browse/KAFKA-3508
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Grant Henke
>
> {code}
> Stacktrace
> java.lang.AssertionError: Should support many concurrent calls failed with 
> exception(s) ArrayBuffer(java.util.concurrent.ExecutionException: 
> java.lang.IllegalStateException: Failed to update ACLs for Topic:test after 
> trying a maximum of 10 times)
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at kafka.utils.TestUtils$.assertConcurrent(TestUtils.scala:1123)
>   at 
> kafka.security.auth.SimpleAclAuthorizerTest.testHighConcurrencyModificationOfResourceAcls(SimpleAclAuthorizerTest.scala:335)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:49)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40

[jira] [Updated] (KAFKA-3508) Transient failure in kafka.security.auth.SimpleAclAuthorizerTest.testHighConcurrencyModificationOfResourceAcls

2016-04-05 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3508:
---
Status: Patch Available  (was: Open)

There's a PR for this here: https://github.com/apache/kafka/pull/1156

> Transient failure in 
> kafka.security.auth.SimpleAclAuthorizerTest.testHighConcurrencyModificationOfResourceAcls
> --
>
> Key: KAFKA-3508
> URL: https://issues.apache.org/jira/browse/KAFKA-3508
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Grant Henke
>
> {code}
> Stacktrace
> java.lang.AssertionError: Should support many concurrent calls failed with 
> exception(s) ArrayBuffer(java.util.concurrent.ExecutionException: 
> java.lang.IllegalStateException: Failed to update ACLs for Topic:test after 
> trying a maximum of 10 times)
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at kafka.utils.TestUtils$.assertConcurrent(TestUtils.scala:1123)
>   at 
> kafka.security.auth.SimpleAclAuthorizerTest.testHighConcurrencyModificationOfResourceAcls(SimpleAclAuthorizerTest.scala:335)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:49)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> or

[jira] [Commented] (KAFKA-3332) Consumer can't consume messages from zookeeper chroot

2016-04-05 Thread Sergey Vergun (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225868#comment-15225868
 ] 

Sergey Vergun commented on KAFKA-3332:
--

Can I get any answer?

> Consumer can't consume messages from zookeeper chroot
> -
>
> Key: KAFKA-3332
> URL: https://issues.apache.org/jira/browse/KAFKA-3332
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.8.2.2, 0.9.0.1
> Environment: RHEL 6.X, OS X
>Reporter: Sergey Vergun
>Assignee: Neha Narkhede
>
> I have faced an issue where the consumer can't consume messages from a
> zookeeper chroot. It was tested on Kafka 0.8.2.2 and Kafka 0.9.0.1.
> My zookeeper options in server.properties:
> $cat config/server.properties | grep zookeeper
> zookeeper.connect=localhost:2181/kafka-cluster/kafka-0.9.0.1
> zookeeper.session.timeout.ms=1
> zookeeper.connection.timeout.ms=1
> zookeeper.sync.time.ms=2000
> I can create successfully a new topic
> $kafka-topics.sh --create --partition 3 --replication-factor 1 --topic 
> __TEST-Topic_1 --zookeeper localhost:2181/kafka-cluster/kafka-0.9.0.1
> Created topic "__TEST-Topic_1".
> and produce messages into it
> $kafka-console-producer.sh --topic __TEST-Topic_1 --broker-list localhost:9092
> 1
> 2
> 3
> 4
> 5
> In Kafka Manager I see that the messages were delivered:
> Sum of partition offsets  5
> But I can't consume the messages via kafka-console-consumer
> $kafka-console-consumer.sh --topic TEST-Topic_1 --zookeeper 
> localhost:2181/kafka-cluster/kafka-0.9.0.1 --from-beginning
> The consumer is present in zookeeper
> [zk: localhost:2181(CONNECTED) 10] ls /kafka-cluster/kafka-0.9.0.1/consumers
> [console-consumer-62895] 
> [zk: localhost:2181(CONNECTED) 12] ls 
> /kafka-cluster/kafka-0.9.0.1/consumers/console-consumer-62895/ids
> [console-consumer-62895_SV-Macbook-1457097451996-64640cc1] 
> If I reconfigure kafka cluster with zookeeper chroot "/" then everything is 
> ok.
> $cat config/server.properties | grep zookeeper
> zookeeper.connect=localhost:2181
> zookeeper.session.timeout.ms=1
> zookeeper.connection.timeout.ms=1
> zookeeper.sync.time.ms=2000
> $kafka-console-producer.sh --topic __TEST-Topic_1 --broker-list localhost:9092
> 1
> 2
> 3
> 4
> 5
> $kafka-console-consumer.sh --topic TEST-Topic_1 --zookeeper localhost:2181 
> --from-beginning
> 1
> 2
> 3
> 4
> 5
> Is it a bug or my mistake?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3510) OffsetIndex thread safety

2016-04-05 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-3510:
--

 Summary: OffsetIndex thread safety
 Key: KAFKA-3510
 URL: https://issues.apache.org/jira/browse/KAFKA-3510
 Project: Kafka
  Issue Type: Bug
Reporter: Ismael Juma
Assignee: Ismael Juma
 Fix For: 0.10.0.0


We expose non-volatile variables without a lock and outside the class. We also 
use an `AtomicInteger` unnecessarily since it's always modified within a lock.
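
A rough sketch of the pattern being described (hypothetical code, not the actual
OffsetIndex implementation; the class and field names are made up for illustration):
fields read outside the lock become volatile, mutation stays inside the class and
the lock, and the AtomicInteger becomes a plain int.
{code}
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of the locking pattern above; not the real OffsetIndex code.
public class IndexSketch {
    private final ReentrantLock lock = new ReentrantLock();

    // Read by other threads without the lock, so it must be volatile.
    private volatile int entries = 0;

    // Safe to read without the lock: the field is volatile and writes only
    // happen below, while the lock is held.
    public int entries() {
        return entries;
    }

    // Mutation is confined to this class and guarded by the lock, so a plain
    // int is sufficient and an AtomicInteger is unnecessary.
    public void append(long offset, int position) {
        lock.lock();
        try {
            // ... write the entry to the underlying buffer ...
            entries += 1;
        } finally {
            lock.unlock();
        }
    }
}
{code}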



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3510) OffsetIndex thread safety

2016-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15225957#comment-15225957
 ] 

ASF GitHub Bot commented on KAFKA-3510:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/1188

KAFKA-3510; OffsetIndex thread safety

* Make all fields accessed outside of a lock `volatile`
* Only allow mutation within the class
* Remove unnecessary `AtomicInteger` since mutation always happens inside a 
lock

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-3510-offset-index-thread-safety

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1188.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1188


commit 0f570b0c620df9f1145ed8c3e31363560f3835e1
Author: Ismael Juma 
Date:   2016-04-05T09:07:56Z

KAFKA-3510; OffsetIndex thread safety

* Make all fields accessed outside of a lock volatile
* Only allow mutation within the class
* Remove unnecessary `AtomicInteger` since mutation always happens inside a 
lock




> OffsetIndex thread safety
> -
>
> Key: KAFKA-3510
> URL: https://issues.apache.org/jira/browse/KAFKA-3510
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> We expose non-volatile variables without a lock and outside the class. We 
> also use an `AtomicInteger` unnecessarily since it's always modified within a 
> lock.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3510; OffsetIndex thread safety

2016-04-05 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/1188

KAFKA-3510; OffsetIndex thread safety

* Make all fields accessed outside of a lock `volatile`
* Only allow mutation within the class
* Remove unnecessary `AtomicInteger` since mutation always happens inside a 
lock

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-3510-offset-index-thread-safety

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1188.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1188


commit 0f570b0c620df9f1145ed8c3e31363560f3835e1
Author: Ismael Juma 
Date:   2016-04-05T09:07:56Z

KAFKA-3510; OffsetIndex thread safety

* Make all fields accessed outside of a lock volatile
* Only allow mutation within the class
* Remove unnecessary `AtomicInteger` since mutation always happens inside a 
lock




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3510) OffsetIndex thread safety

2016-04-05 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3510:
---
Status: Patch Available  (was: Open)

> OffsetIndex thread safety
> -
>
> Key: KAFKA-3510
> URL: https://issues.apache.org/jira/browse/KAFKA-3510
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> We expose non-volatile variables without a lock and outside the class. We 
> also use an `AtomicInteger` unnecessarily since it's always modified within a 
> lock.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-04-05 Thread Ewen Cheslack-Postava
On Mon, Apr 4, 2016 at 11:24 AM, Gwen Shapira  wrote:

> >
> >
> >In case of connection closures, the KIP recommends that clients should
> >use some other method of determining the apiRequest version to use,
> > like,
> >probing. For instance, client will send V0 version of apiVersion
> request
> >and will try higher versions incrementally. In case b, client will
> >eventually get apiVersion response and know what api versions it
> should
> >use. For case a and c, client will eventually give up and propagate an
> >error to application.
> >
> >
> I strongly disagree that we should recommend this probing method.
> Probing is essentially what clients do now (since we lack any way to
> communicate versions), and is what we are trying to solve with this KIP.
> Considering that different brokers could have different versions, and that
> brokers can change version at any point, this sounds slow, difficult to
> implement and fragile.
>
> Also note that even with this method, without VersionRequest v0, we will
> break clients in the one way Kafka currently promises to never break: Old
> clients won't be able to work with new brokers.
>
> If this is part of KIP-35, I am against it.
>
> Since all Request/Responses in our protocol have versions, publishing
> versions for each request/response should be something we can easily
> support into the future. It sounds far easier than asking every single
> client to implement the method you specified above.
>

Gwen,

Agreed, and I think it would be fine to make permanent support (barring
massive changes to the protocol) part of the KIP text. There's really no
reason not to and it basically just turns this into the basis for a pretty
simple handshake protocol.

(I'm tempted to not even bring this up given that we're converging, but one
reason I could see this being changed in the future is that protocol
support is only conveyed in one direction. This could also be turned into a
slightly more general handshake approach where the client also advertises
what it supports. However, given the way request/response versioning works,
I can't think of a reason we'd need this atm.)

-Ewen


>
> Gwen
>



-- 
Thanks,
Ewen
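
To make the alternative to probing concrete, here is a hedged sketch of how a client
could pick the highest mutually supported version per API key once the broker
advertises its ranges. The response layout is assumed for illustration only, since
the exact KIP-35 schema is still being finalized.
{code}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch; the actual ApiVersions response format is still under discussion.
final class VersionRange {
    final short min;
    final short max;
    VersionRange(short min, short max) { this.min = min; this.max = max; }
}

final class VersionNegotiation {
    // Highest version supported by both client and broker, or -1 if the ranges don't overlap.
    static short pickVersion(VersionRange client, VersionRange broker) {
        short candidate = (short) Math.min(client.max, broker.max);
        short floor = (short) Math.max(client.min, broker.min);
        return candidate >= floor ? candidate : (short) -1;
    }

    public static void main(String[] args) {
        // Ranges the broker would advertise, keyed by API key (values are illustrative).
        Map<Short, VersionRange> broker = new HashMap<>();
        broker.put((short) 0, new VersionRange((short) 0, (short) 2));
        broker.put((short) 1, new VersionRange((short) 0, (short) 1));

        VersionRange clientFetch = new VersionRange((short) 1, (short) 2);
        System.out.println(pickVersion(clientFetch, broker.get((short) 1))); // prints 1
    }
}
{code}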


[jira] [Commented] (KAFKA-3503) Throw exception on missing/non-existent partition

2016-04-05 Thread Navin Markandeya (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15226573#comment-15226573
 ] 

Navin Markandeya commented on KAFKA-3503:
-

So I would expect an exception instead of 
{{14:08:02.892 [main] DEBUG o.a.k.c.consumer.internals.Fetcher - Partition 
mytopic-3 is unknown for fetching offset, wait for metadata refresh}} or even 
before.
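
In the meantime, a hedged workaround sketch using only the public 0.9 Java consumer
API (the topic and partition number are just the ones from this report): check
partitionsFor() before assign() so the application can fail fast on its own.
{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

public class AssignWithCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "check-demo");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        try {
            TopicPartition wanted = new TopicPartition("mytopic", 3); // topic only has partitions 0..2

            // Fail fast in the application instead of waiting on endless metadata refreshes.
            boolean exists = false;
            for (PartitionInfo info : consumer.partitionsFor("mytopic"))
                if (info.partition() == wanted.partition())
                    exists = true;
            if (!exists)
                throw new IllegalArgumentException("No such partition: " + wanted);

            consumer.assign(Collections.singletonList(wanted));
            // ... poll() as usual ...
        } finally {
            consumer.close();
        }
    }
}
{code}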


> Throw exception on missing/non-existent  partition 
> ---
>
> Key: KAFKA-3503
> URL: https://issues.apache.org/jira/browse/KAFKA-3503
> Project: Kafka
>  Issue Type: Wish
>Affects Versions: 0.9.0.1
> Environment: Java 1.8.0_60. 
> Linux  centos65vm 2.6.32-573.el6.x86_64 #1 SMP Thu Jul 23 15:44:03 UTC
>Reporter: Navin Markandeya
>Priority: Minor
>
> I would expect some exception to be thrown when a consumer tries to access a 
> non-existent partition. I did not see anyone reporting it. If it is already
> known, please link and close this.
> {code}
> java version "1.8.0_60"
> Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
> Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
> {code}
> {code}
> Linux centos65vm 2.6.32-573.el6.x86_64 #1 SMP Thu Jul 23 15:44:03 UTC 2015 
> x86_64 x86_64 x86_64 GNU/Linux
> {code}
> {{Kafka release - kafka_2.11-0.9.0.1}}
> Created a topic with 3 partitions
> {code}
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic mytopic
> Topic:mytopic PartitionCount:3ReplicationFactor:1 Configs:
>   Topic: mytopic  Partition: 0Leader: 0   Replicas: 0 Isr: 0
>   Topic: mytopic  Partition: 1Leader: 0   Replicas: 0 Isr: 0
>   Topic: mytopic  Partition: 2Leader: 0   Replicas: 0 Isr: 0
> {code}
> The consumer application does not terminate. An exception stating that there is
> no such partition {{mytopic-3}} would help to terminate it gracefully.
> {code}
> 14:08:02.885 [main] DEBUG o.a.k.c.c.i.ConsumerCoordinator - Fetching 
> committed offsets for partitions: [mytopic-3, mytopic-0, mytopic-1, mytopic-2]
> 14:08:02.887 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor 
> with name node-2147483647.bytes-sent
> 14:08:02.888 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor 
> with name node-2147483647.bytes-received
> 14:08:02.888 [main] DEBUG o.a.kafka.common.metrics.Metrics - Added sensor 
> with name node-2147483647.latency
> 14:08:02.888 [main] DEBUG o.apache.kafka.clients.NetworkClient - Completed 
> connection to node 2147483647
> 14:08:02.891 [main] DEBUG o.a.k.c.c.i.ConsumerCoordinator - No committed 
> offset for partition mytopic-3
> 14:08:02.891 [main] DEBUG o.a.k.c.consumer.internals.Fetcher - Resetting 
> offset for partition mytopic-3 to latest offset.
> 14:08:02.892 [main] DEBUG o.a.k.c.consumer.internals.Fetcher - Partition 
> mytopic-3 is unknown for fetching offset, wait for metadata refresh
> 14:08:02.965 [main] DEBUG o.apache.kafka.clients.NetworkClient - Sending 
> metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=4,client_id=consumer-2},
>  body={topics=[mytopic]}), isInitiatedByNetworkClient, 
> createdTimeMs=1459804082965, sendTimeMs=0) to node 0
> 14:08:02.968 [main] DEBUG org.apache.kafka.clients.Metadata - Updated cluster 
> metadata version 3 to Cluster(nodes = [Node(0, centos65vm, 9092)], partitions 
> = [Partition(topic = mytopic, partition = 0, leader = 0, replicas = [0,], isr 
> = [0,], Partition(topic = mytopic, partition = 1, leader = 0, replicas = 
> [0,], isr = [0,], Partition(topic = mytopic, partition = 2, leader = 0, 
> replicas = [0,], isr = [0,]])
> 14:08:02.968 [main] DEBUG o.a.k.c.consumer.internals.Fetcher - Partition 
> mytopic-3 is unknown for fetching offset, wait for metadata refresh
> 14:08:03.071 [main] DEBUG o.apache.kafka.clients.NetworkClient - Sending 
> metadata request ClientRequest(expectResponse=true, callback=null, 
> request=RequestSend(header={api_key=3,api_version=0,correlation_id=5,client_id=consumer-2},
>  body={topics=[mytopic]}), isInitiatedByNetworkClient, 
> createdTimeMs=1459804083071, sendTimeMs=0) to node 0
> 14:08:03.073 [main] DEBUG org.apache.kafka.clients.Metadata - Updated cluster 
> metadata version 4 to Cluster(nodes = [Node(0, centos65vm, 9092)], partitions 
> = [Partition(topic = mytopic, partition = 0, leader = 0, replicas = [0,], isr 
> = [0,], Partition(topic = mytopic, partition = 1, leader = 0, replicas = 
> [0,], isr = [0,], Partition(topic = mytopic, partition = 2, leader = 0, 
> replicas = [0,], isr = [0,]])
> 14:08:03.073 [main] DEBUG o.a.k.c.consumer.internals.Fetcher - Partition 
> mytopic-3 is unknown for fetching offset, wait for metadata refresh
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332

Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-04-05 Thread Ewen Cheslack-Postava
Also, just a thought but is empty list the sentinel we really want to
indicate we want all API versions? We've got nullable string and nullable
bytes in the protocol. Should we have nullable array support as well and
use that to indicate we want everything? I can't think of a use case for
sending an empty list, but null seems like a better sentinel than an empty
list.

-Ewen

On Tue, Apr 5, 2016 at 9:23 AM, Ewen Cheslack-Postava 
wrote:

>
>
> On Mon, Apr 4, 2016 at 11:24 AM, Gwen Shapira  wrote:
>
>> >
>> >
>> >In case of connection closures, the KIP recommends that clients
>> should
>> >use some other method of determining the apiRequest version to use,
>> > like,
>> >probing. For instance, client will send V0 version of apiVersion
>> request
>> >and will try higher versions incrementally. In case b, client will
>> >eventually get apiVersion response and know what api versions it
>> should
>> >use. For case a and c, client will eventually give up and propagate
>> an
>> >error to application.
>> >
>> >
>> I strongly disagree that we should recommend this probing method.
>> Probing is essentially what clients do now (since we lack any way to
>> communicate versions), and is what we are trying to solve with this KIP.
>> Considering that different brokers could have different versions, and that
>> brokers can change version at any point, this sounds slow, difficult to
>> implement and fragile.
>>
>> Also note that even with this method, without VersionRequest v0, we will
>> break clients in the one way Kafka currently promises to never break: Old
>> clients won't be able to work with new brokers.
>>
>> If this is part of KIP-35, I am against it.
>>
>> Since all Request/Responses in our protocol have versions, publishing
>> versions for each request/response should be something we can easily
>> support into the future. It sounds far easier than asking every single
>> client to implement the method you specified above.
>>
>
> Gwen,
>
> Agreed, and I think it would be fine to make permanent support (barring
> massive changes to the protocol) part of the KIP text. There's really no
> reason not to and it basically just turns this into the basis for a pretty
> simple handshake protocol.
>
> (I'm tempted to not even bring this up given that we're converging, but
> one reason I could see this being changed in the future is that protocol
> support is only conveyed in one direction. This could also be turned into a
> slightly more general handshake approach where the client also advertises
> what it supports. However, given the way request/response versioning works,
> I can't think of a reason we'd need this atm.)
>
> -Ewen
>
>
>>
>> Gwen
>>
>
>
>
> --
> Thanks,
> Ewen
>



-- 
Thanks,
Ewen
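
To make the sentinel question concrete, a hedged sketch of the wire-level
difference, assuming nullable arrays would reuse the length = -1 convention that
nullable strings and bytes already use (the encoding below is illustrative, not the
final KIP-35 format):
{code}
import java.nio.ByteBuffer;

// Hypothetical encoder; assumes nullable arrays reuse the -1 length sentinel of
// nullable string/bytes. Not the final KIP-35 wire format.
final class NullableArraySketch {
    // Nullable array of INT16 API keys: size -1 means "null" (all versions),
    // size 0 is a genuinely empty list, otherwise the element count then the elements.
    static void writeApiKeys(ByteBuffer buffer, short[] apiKeys) {
        if (apiKeys == null) {
            buffer.putInt(-1); // null sentinel: "give me everything"
            return;
        }
        buffer.putInt(apiKeys.length);
        for (short key : apiKeys)
            buffer.putShort(key);
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(64);
        writeApiKeys(buf, null);                    // 4 bytes: ff ff ff ff
        writeApiKeys(buf, new short[] { 0, 1, 3 }); // 4 + 3 * 2 bytes
        System.out.println("bytes written: " + buf.position()); // 14
    }
}
{code}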


Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-04-05 Thread Ismael Juma
Yeah, we should use nullable arrays (which have been introduced in
KIP-4-Metadata) instead of empty list to indicate all versions.

Ismael
On 5 Apr 2016 18:01, "Ewen Cheslack-Postava"  wrote:

> Also, just a thought but is empty list the sentinel we really want to
> indicate we want all API versions? We've got nullable string and nullable
> bytes in the protocol. Should we have nullable array support as well and
> use that to indicate we want everything? I can't think of a use case for
> sending an empty list, but null seems like a better sentinel than an empty
> list.
>
> -Ewen
>
> On Tue, Apr 5, 2016 at 9:23 AM, Ewen Cheslack-Postava 
> wrote:
>
> >
> >
> > On Mon, Apr 4, 2016 at 11:24 AM, Gwen Shapira  wrote:
> >
> >> >
> >> >
> >> >In case of connection closures, the KIP recommends that clients
> >> should
> >> >use some other method of determining the apiRequest version to use,
> >> > like,
> >> >probing. For instance, client will send V0 version of apiVersion
> >> request
> >> >and will try higher versions incrementally. In case b, client will
> >> >eventually get apiVersion response and know what api versions it
> >> should
> >> >use. For case a and c, client will eventually give up and propagate
> >> an
> >> >error to application.
> >> >
> >> >
> >> I strongly disagree that we should recommend this probing method.
> >> Probing is essentially what clients do now (since we lack any way to
> >> communicate versions), and is what we are trying to solve with this KIP.
> >> Considering that different brokers could have different versions, and
> that
> >> brokers can change version at any point, this sounds slow, difficult to
> >> implement and fragile.
> >>
> >> Also note that even with this method, without VersionRequest v0, we will
> >> break clients in the one way Kafka currently promises to never break:
> Old
> >> clients won't be able to work with new brokers.
> >>
> >> If this is part of KIP-35, I am against it.
> >>
> >> Since all Request/Responses in our protocol have versions, publishing
> >> versions for each request/response should be something we can easily
> >> support into the future. It sounds far easier than asking every single
> >> client to implement the method you specified above.
> >>
> >
> > Gwen,
> >
> > Agreed, and I think it would be fine to make permanent support (barring
> > massive changes to the protocol) part of the KIP text. There's really no
> > reason not to and it basically just turns this into the basis for a
> pretty
> > simple handshake protocol.
> >
> > (I'm tempted to not even bring this up given that we're converging, but
> > one reason I could see this being changed in the future is that protocol
> > support is only conveyed in one direction. This could also be turned
> into a
> > slightly more general handshake approach where the client also advertises
> > what it supports. However, given the way request/response versioning
> works,
> > I can't think of a reason we'd need this atm.)
> >
> > -Ewen
> >
> >
> >>
> >> Gwen
> >>
> >
> >
> >
> > --
> > Thanks,
> > Ewen
> >
>
>
>
> --
> Thanks,
> Ewen
>


Re: [VOTE] KIP-50 - Enhance Authorizer interface to be aware of supported Principal Types

2016-04-05 Thread Ashish Singh
Jun,

KIP-50 is now updated. Mind taking a look?

On Mon, Apr 4, 2016 at 10:40 PM, Ashish Singh  wrote:

> Jun,
>
> Your suggested approach works, will update the KIP and re-initiate voting.
> Thanks!
>
> On Sun, Apr 3, 2016 at 8:37 AM, Jun Rao  wrote:
>
>> Ashish,
>>
>> For the two benefits that you listed, fail fast can be done just in the
>> implementation w/o getSupportedPrincipalTypes(). For avoiding the same
>> check in all implementations, I think Grant is thinking of adding some ACL
>> RPC request/response btw the client and broker directly, instead of
>> writing
>> the ACL to ZK directly in the future. If we do that, the right place to do
>> any sanity check is probably in each implementation instead of in
>> AclCommand since AclCommand won't necessarily be the only entry point for
>> changing ACLs. So, I am still not quite convinced that we should add
>> getSupportedPrincipalTypes() right now. It's easy to add a new method to the
>> interface, but hard to
>> remove/change an existing method. Grant, could you comment on the latter?
>>
>> Thanks,
>>
>> Jun
>>
>> On Sat, Apr 2, 2016 at 11:43 AM, Ashish Singh 
>> wrote:
>>
>> > Hello Jun,
>> >
>> >
>> > On Fri, Apr 1, 2016 at 9:57 PM, Jun Rao  wrote:
>> >
>> > > Ashish,
>> > >
>> > > Thanks for the KIP.
>> > >
>> > > It seems that a specific implementation of Authorizer can reject
>> invalid
>> > > user type in addAcl/removeAcl without needing the new
>> > > getSupportedPrincipalTypes()
>> > > method, right? It's probably useful to provide the supported user
>> types
>> > as
>> > > information through CLI (e.g., when --help is specified). Then, there
>> may
>> > > other information that a specific authorizer may want to provide. So,
>> if
>> > > this is just informational, would it be better to add sth like
>> > > getDescription() in the Authorizer interface and expose that through
>> CLI?
>> > >
>> > Providing information is definitely an important reason; some other reasons
>> > were to fail fast and to avoid the same check in all implementations. I agree
>> > having a generic getDescription() will be handy for authorizer
>> > implementations to provide more implementation-specific info, including
>> > supported principal types and more. However, do you think the other two
>> > reasons I mentioned can convince you of the current proposal?
>> >
>> > >
>> > > Jun
>> > >
>> > > On Wed, Mar 30, 2016 at 11:02 AM, Ashish Singh 
>> > > wrote:
>> > >
>> > > > Hi Guys,
>> > > >
>> > > > I would like to open the vote for KIP-50.
>> > > >
>> > > > KIP:
>> > > >
>> > > >
>> > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-50+-+Enhance+Authorizer+interface+to+be+aware+of+supported+Principal+Types
>> > > >
>> > > > Discuss thread: here
>> > > > <
>> > > >
>> > >
>> >
>> http://mail-archives.apache.org/mod_mbox/kafka-dev/201603.mbox/%3CCAGQG9cUCLDO0owdziDcL9iStXNF1wURyVNcEZedQJg%3DUuC7j%3DQ%40mail.gmail.com%3E
>> > > > >
>> > > >
>> > > > JIRA: https://issues.apache.org/jira/browse/KAFKA-3186
>> > > >
>> > > > PR: https://github.com/apache/kafka/pull/861
>> > > > ​
>> > > > --
>> > > >
>> > > > Regards,
>> > > > Ashish
>> > > >
>> > >
>> >
>> >
>> >
>> > --
>> >
>> > Regards,
>> > Ashish
>> >
>>
>
>
>
> --
>
> Regards,
> Ashish
>



-- 

Regards,
Ashish
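
For readers skimming the thread, a hedged sketch of the two shapes under
discussion. The real Authorizer interface is Scala and the final signatures are
exactly what KIP-50 is deciding; this Java rendering only makes the trade-off
concrete.
{code}
import java.util.Set;

// Hypothetical rendering of the options discussed; the actual interface is
// kafka.security.auth.Authorizer (Scala) and the final method set is up to KIP-50.
interface AuthorizerSketch {
    // KIP-50 proposal: lets callers such as AclCommand fail fast on unsupported
    // principal types without each implementation repeating the check.
    Set<String> getSupportedPrincipalTypes();

    // Jun's alternative: a free-form description exposed via the CLI (e.g. --help),
    // which could mention supported principal types among other details.
    String getDescription();
}
{code}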


Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-04-05 Thread Ashish Singh
Null array sounds good to me as well.

On Tue, Apr 5, 2016 at 10:06 AM, Ismael Juma  wrote:

> Yeah, we should use nullable arrays (which have been introduced in
> KIP-4-Metadata) instead of empty list to indicate all versions.
>
> Ismael
> On 5 Apr 2016 18:01, "Ewen Cheslack-Postava"  wrote:
>
> > Also, just a thought but is empty list the sentinel we really want to
> > indicate we want all API versions? We've got nullable string and nullable
> > bytes in the protocol. Should we have nullable array support as well and
> > use that to indicate we want everything? I can't think of a use case for
> > sending an empty list, but null seems like a better sentinel than an
> empty
> > list.
> >
> > -Ewen
> >
> > On Tue, Apr 5, 2016 at 9:23 AM, Ewen Cheslack-Postava  >
> > wrote:
> >
> > >
> > >
> > > On Mon, Apr 4, 2016 at 11:24 AM, Gwen Shapira 
> wrote:
> > >
> > >> >
> > >> >
> > >> >In case of connection closures, the KIP recommends that clients
> > >> should
> > >> >use some other method of determining the apiRequest version to
> use,
> > >> > like,
> > >> >probing. For instance, client will send V0 version of apiVersion
> > >> request
> > >> >and will try higher versions incrementally. In case b, client
> will
> > >> >eventually get apiVersion response and know what api versions it
> > >> should
> > >> >use. For case a and c, client will eventually give up and
> propagate
> > >> an
> > >> >error to application.
> > >> >
> > >> >
> > >> I strongly disagree that we should recommend this probing method.
> > >> Probing is essentially what clients do now (since we lack any way to
> > >> communicate versions), and is what we are trying to solve with this
> KIP.
> > >> Considering that different brokers could have different versions, and
> > that
> > >> brokers can change version at any point, this sounds slow, difficult
> to
> > >> implement and fragile.
> > >>
> > >> Also note that even with this method, without VersionRequest v0, we
> will
> > >> break clients in the one way Kafka currently promises to never break:
> > Old
> > >> clients won't be able to work with new brokers.
> > >>
> > >> If this is part of KIP-35, I am against it.
> > >>
> > >> Since all Request/Responses in our protocol have versions, publishing
> > >> versions for each request/response should be something we can easily
> > >> support into the future. It sounds far easier than asking every single
> > >> client to implement the method you specified above.
> > >>
> > >
> > > Gwen,
> > >
> > > Agreed, and I think it would be fine to make permanent support (barring
> > > massive changes to the protocol) part of the KIP text. There's really
> no
> > > reason not to and it basically just turns this into the basis for a
> > pretty
> > > simple handshake protocol.
> > >
> > > (I'm tempted to not even bring this up given that we're converging, but
> > > one reason I could see this being changed in the future is that
> protocol
> > > support is only conveyed in one direction. This could also be turned
> > into a
> > > slightly more general handshake approach where the client also
> advertises
> > > what it supports. However, given the way request/response versioning
> > works,
> > > I can't think of a reason we'd need this atm.)
> > >
> > > -Ewen
> > >
> > >
> > >>
> > >> Gwen
> > >>
> > >
> > >
> > >
> > > --
> > > Thanks,
> > > Ewen
> > >
> >
> >
> >
> > --
> > Thanks,
> > Ewen
> >
>



-- 

Regards,
Ashish


Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-04-05 Thread Ashish Singh
Sounds fair. I am OK with putting down permanent support of ApiVersion api
versions as a limitation in the KIP.

On Tue, Apr 5, 2016 at 9:23 AM, Ewen Cheslack-Postava 
wrote:

> On Mon, Apr 4, 2016 at 11:24 AM, Gwen Shapira  wrote:
>
> > >
> > >
> > >In case of connection closures, the KIP recommends that clients
> should
> > >use some other method of determining the apiRequest version to use,
> > > like,
> > >probing. For instance, client will send V0 version of apiVersion
> > request
> > >and will try higher versions incrementally. In case b, client will
> > >eventually get apiVersion response and know what api versions it
> > should
> > >use. For case a and c, client will eventually give up and propagate
> an
> > >error to application.
> > >
> > >
> > I strongly disagree that we should recommend this probing method.
> > Probing is essentially what clients do now (since we lack any way to
> > communicate versions), and is what we are trying to solve with this KIP.
> > Considering that different brokers could have different versions, and
> that
> > brokers can change version at any point, this sounds slow, difficult to
> > implement and fragile.
> >
> > Also note that even with this method, without VersionRequest v0, we will
> > break clients in the one way Kafka currently promises to never break: Old
> > clients won't be able to work with new brokers.
> >
> > If this is part of KIP-35, I am against it.
> >
> > Since all Request/Responses in our protocol have versions, publishing
> > versions for each request/response should be something we can easily
> > support into the future. It sounds far easier than asking every single
> > client to implement the method you specified above.
> >
>
> Gwen,
>
> Agreed, and I think it would be fine to make permanent support (barring
> massive changes to the protocol) part of the KIP text. There's really no
> reason not to and it basically just turns this into the basis for a pretty
> simple handshake protocol.
>
> (I'm tempted to not even bring this up given that we're converging, but one
> reason I could see this being changed in the future is that protocol
> support is only conveyed in one direction. This could also be turned into a
> slightly more general handshake approach where the client also advertises
> what it supports. However, given the way request/response versioning works,
> I can't think of a reason we'd need this atm.)
>
> -Ewen
>
>
> >
> > Gwen
> >
>
>
>
> --
> Thanks,
> Ewen
>



-- 

Regards,
Ashish


Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-04-05 Thread Magnus Edenhill
Empty arrays are already used in MetadataRequest to retrieve all topics in
the cluster; the ApiVersion request will have the same standard semantics.


2016-04-05 19:01 GMT+02:00 Ewen Cheslack-Postava :

> Also, just a thought but is empty list the sentinel we really want to
> indicate we want all API versions? We've got nullable string and nullable
> bytes in the protocol. Should we have nullable array support as well and
> use that to indicate we want everything? I can't think of a use case for
> sending an empty list, but null seems like a better sentinel than an empty
> list.
>
> -Ewen
>
> On Tue, Apr 5, 2016 at 9:23 AM, Ewen Cheslack-Postava 
> wrote:
>
> >
> >
> > On Mon, Apr 4, 2016 at 11:24 AM, Gwen Shapira  wrote:
> >
> >> >
> >> >
> >> >In case of connection closures, the KIP recommends that clients
> >> should
> >> >use some other method of determining the apiRequest version to use,
> >> > like,
> >> >probing. For instance, client will send V0 version of apiVersion
> >> request
> >> >and will try higher versions incrementally. In case b, client will
> >> >eventually get apiVersion response and know what api versions it
> >> should
> >> >use. For case a and c, client will eventually give up and propagate
> >> an
> >> >error to application.
> >> >
> >> >
> >> I strongly disagree that we should recommend this probing method.
> >> Probing is essentially what clients do now (since we lack any way to
> >> communicate versions), and is what we are trying to solve with this KIP.
> >> Considering that different brokers could have different versions, and
> that
> >> brokers can change version at any point, this sounds slow, difficult to
> >> implement and fragile.
> >>
> >> Also note that even with this method, without VersionRequest v0, we will
> >> break clients in the one way Kafka currently promises to never break:
> Old
> >> clients won't be able to work with new brokers.
> >>
> >> If this is part of KIP-35, I am against it.
> >>
> >> Since all Request/Responses in our protocol have versions, publishing
> >> versions for each request/response should be something we can easily
> >> support into the future. It sounds far easier than asking every single
> >> client to implement the method you specified above.
> >>
> >
> > Gwen,
> >
> > Agreed, and I think it would be fine to make permanent support (barring
> > massive changes to the protocol) part of the KIP text. There's really no
> > reason not to and it basically just turns this into the basis for a
> pretty
> > simple handshake protocol.
> >
> > (I'm tempted to not even bring this up given that we're converging, but
> > one reason I could see this being changed in the future is that protocol
> > support is only conveyed in one direction. This could also be turned
> into a
> > slightly more general handshake approach where the client also advertises
> > what it supports. However, given the way request/response versioning
> works,
> > I can't think of a reason we'd need this atm.)
> >
> > -Ewen
> >
> >
> >>
> >> Gwen
> >>
> >
> >
> >
> > --
> > Thanks,
> > Ewen
> >
>
>
>
> --
> Thanks,
> Ewen
>


Re: How will KIP-35 and KIP-43 play together?

2016-04-05 Thread Gwen Shapira
I think we pretty much agreed on KIP-35 and are just finalizing details.

Given that we are merging both KIP-35 and KIP-43, I would like some
direction on what this will look like.

Magnus suggested adding a new Request type as part of KIP-43, which will
allow us to advertise the new extension to clients. I think Rajini agreed
that this could work (with a full handshake), although I may have
misunderstood.

If we all agree that with KIP-35 any new Kafka capability will require
either (1) new Request type or (2) bumping an existing request, we are all
good.

Gwen


On Mon, Apr 4, 2016 at 8:51 PM, Dana Powers  wrote:

> I don't have anything specific to say wrt SASL features, but I think
> this circumstance makes it clear that there are only 2 ways forward:
>
> (1) official java client continues releasing w/ broker versioning as
> an implicit compatibility test ("java client X.Y requires broker X.Y")
> AND support is added to brokers so that all clients can query broker
> version ("0.10.0.0") via API, enabling similar implicit compatibility
> tests in non-java clients, or
>
> (2) java client versioning is decoupled from broker versioning,
> breaking reliance on implicit compatibility tests, AND all clients
> forced to rely on explicit protocol compatibility tests exposed via
> API (such as via KIP-35)
>
> Is there any other way to avoid this continuing to be an issue?
>
> -Dana
>
>
> On Mon, Apr 4, 2016 at 7:14 PM, Gwen Shapira  wrote:
> > Magnus,
> >
> > It sounds like KIP-43 will need to change in order to support the KIP-35
> > protocol. Can you add more details on what you had in mind?
> >
> > On Mon, Apr 4, 2016 at 12:41 PM, Magnus Edenhill 
> wrote:
> >
> >> As Jun says the SASL (and SSL) handshake is not done using the Kafka
> >> protocol
> >> and is performed before any Kafka protocol requests pass between client
> and
> >> server.
> >>
> >> It might make sense to move the SASL handshake from its custom protocol
> >> format
> >> into the Kafka protocol and make it use the proper Kafka protocol
> framing.
> >>
> >> (For SSL this is isnt needed since TLS has its own _standardised_
> >> hand-shake format and existing SSL implementations take care of it.)
> >>
> >> 2016-04-04 21:20 GMT+02:00 Ismael Juma :
> >>
> >> > An option would be to add a version for the handshake in the KIP-35
> >> > response.
> >> >
> >> > Ismael
> >> > On 4 Apr 2016 20:09, "Gwen Shapira"  wrote:
> >> >
> >> > > I think the challenge here is that even after KIP-35 clients will
> not
> >> > know
> >> > > whether the server supports new sasl mechanisms or not, so non-Java
> >> > clients
> >> > > will have to assume it is not supported (and will therefore lag
> behind
> >> on
> >> > > features).
> >> > >
> >> > > I think this highlights a short-coming of KIP-35, and I'm wondering
> if
> >> > > there are good ways to address this.
> >> > >
> >> > > Gwen
> >> > >
> >> > >
> >> > > On Mon, Apr 4, 2016 at 12:05 PM, Jun Rao  wrote:
> >> > >
> >> > > > I think with KIP-43, the existing way of sasl handshake during
> >> > connection
> >> > > > still works. It's just that if you want to support non-GSSAPI, you
> >> will
> >> > > > need a new sasl handshake implementation in the client. It's
> >> > unfortunate
> >> > > > that Protocol currently only covers the communication after the
> >> > > connection
> >> > > > is ready to use, but not during handshake. For now, we can
> probably
> >> > just
> >> > > > document this change during handshake since changing the
> >> implementation
> >> > > is
> >> > > > optional.
> >> > > >
> >> > > > Thanks,
> >> > > >
> >> > > > Jun
> >> > > >
> >> > > > On Mon, Apr 4, 2016 at 11:28 AM, Gwen Shapira 
> >> > wrote:
> >> > > >
> >> > > > > Hi Kafka Team,
> >> > > > >
> >> > > > > As a practical test-case of KIP-35, I'd like to turn your
> attention
> >> > to
> >> > > > > KIP-43:
> >> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-43
> >> > > > >
> >> > > > > KIP-43 makes an interesting modification to the protocol, but
> only
> >> > > under
> >> > > > > specific conditions:
> >> > > > >
> >> > > > > "*Client flow:*
> >> > > > >
> >> > > > >1. If sasl.mechanism is not GSSAPI, send a packet with the
> >> > mechanism
> >> > > > >name to the server. Otherwise go to Step 3.
> >> > > > >   - Packet Format: | Version (Int16) | Mechanism (String) |
> >> > > > >2. Wait for response from the server. If the error code in
> the
> >> > > > response
> >> > > > >is non-zero, indicating failure, report the error and fail
> >> > > > > authentication.
> >> > > > >3. Perform SASL authentication with the configured client
> >> > mechanism
> >> > > > >
> >> > > > > *Server flow:*
> >> > > > >
> >> > > > >1. Wait for first authentication packet from client
> >> > > > >2. If this packet is a not valid mechanism request, go to
> Step 4
> >> > and
> >> > > > >process this packet as the first GSSAPI client token
> >> > > > >3. If the client mechanism received in Step 2 is enabled i

Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-04-05 Thread Ashish Singh
Magnus, it is proposed to be changed in version 1,
https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations#KIP-4-Commandlineandcentralizedadministrativeoperations-MetadataSchema
.

On Tue, Apr 5, 2016 at 10:23 AM, Magnus Edenhill  wrote:

> Empty arrays are already used in MetadataRequest to retrieve all topics in
> the cluster,
> ApiVersion request will have the same standard semantics.
>
>
> 2016-04-05 19:01 GMT+02:00 Ewen Cheslack-Postava :
>
> > Also, just a thought but is empty list the sentinel we really want to
> > indicate we want all API versions? We've got nullable string and nullable
> > bytes in the protocol. Should we have nullable array support as well and
> > use that to indicate we want everything? I can't think of a use case for
> > sending an empty list, but null seems like a better sentinel than an
> empty
> > list.
> >
> > -Ewen
> >
> > On Tue, Apr 5, 2016 at 9:23 AM, Ewen Cheslack-Postava  >
> > wrote:
> >
> > >
> > >
> > > On Mon, Apr 4, 2016 at 11:24 AM, Gwen Shapira 
> wrote:
> > >
> > >> >
> > >> >
> > >> >In case of connection closures, the KIP recommends that clients
> > >> should
> > >> >use some other method of determining the apiRequest version to
> use,
> > >> > like,
> > >> >probing. For instance, client will send V0 version of apiVersion
> > >> request
> > >> >and will try higher versions incrementally. In case b, client
> will
> > >> >eventually get apiVersion response and know what api versions it
> > >> should
> > >> >use. For case a and c, client will eventually give up and
> propagate
> > >> an
> > >> >error to application.
> > >> >
> > >> >
> > >> I strongly disagree that we should recommend this probing method.
> > >> Probing is essentially what clients do now (since we lack any way to
> > >> communicate versions), and is what we are trying to solve with this
> KIP.
> > >> Considering that different brokers could have different versions, and
> > that
> > >> brokers can change version at any point, this sounds slow, difficult
> to
> > >> implement and fragile.
> > >>
> > >> Also note that even with this method, without VersionRequest v0, we
> will
> > >> break clients in the one way Kafka currently promises to never break:
> > Old
> > >> clients won't be able to work with new brokers.
> > >>
> > >> If this is part of KIP-35, I am against it.
> > >>
> > >> Since all Request/Responses in our protocol have versions, publishing
> > >> versions for each request/response should be something we can easily
> > >> support into the future. It sounds far easier than asking every single
> > >> client to implement the method you specified above.
> > >>
> > >
> > > Gwen,
> > >
> > > Agreed, and I think it would be fine to make permanent support (barring
> > > massive changes to the protocol) part of the KIP text. There's really
> no
> > > reason not to and it basically just turns this into the basis for a
> > pretty
> > > simple handshake protocol.
> > >
> > > (I'm tempted to not even bring this up given that we're converging, but
> > > one reason I could see this being changed in the future is that
> protocol
> > > support is only conveyed in one direction. This could also be turned
> > into a
> > > slightly more general handshake approach where the client also
> advertises
> > > what it supports. However, given the way request/response versioning
> > works,
> > > I can't think of a reason we'd need this atm.)
> > >
> > > -Ewen
> > >
> > >
> > >>
> > >> Gwen
> > >>
> > >
> > >
> > >
> > > --
> > > Thanks,
> > > Ewen
> > >
> >
> >
> >
> > --
> > Thanks,
> > Ewen
> >
>



-- 

Regards,
Ashish


Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-04-05 Thread Magnus Edenhill
Ashish, thanks, didn't know that.

For ApiVersionRequest, requesting no APIs to be returned doesn't make sense,
so the distinction isn't necessary, but I'm fine with adding Null to be more
in line with future protocol requests, as long as it doesn't delay this KIP
any longer! :)

2016-04-05 19:38 GMT+02:00 Ashish Singh :

> Magnus, it is proposed to be changed in version 1,
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations#KIP-4-Commandlineandcentralizedadministrativeoperations-MetadataSchema
> .
>
> On Tue, Apr 5, 2016 at 10:23 AM, Magnus Edenhill 
> wrote:
>
> > Empty arrays are already used in MetadataRequest to retrieve all topics
> in
> > the cluster,
> > ApiVersion request will have the same standard semantics.
> >
> >
> > 2016-04-05 19:01 GMT+02:00 Ewen Cheslack-Postava :
> >
> > > Also, just a thought but is empty list the sentinel we really want to
> > > indicate we want all API versions? We've got nullable string and
> nullable
> > > bytes in the protocol. Should we have nullable array support as well
> and
> > > use that to indicate we want everything? I can't think of a use case
> for
> > > sending an empty list, but null seems like a better sentinel than an
> > empty
> > > list.
> > >
> > > -Ewen
> > >
> > > On Tue, Apr 5, 2016 at 9:23 AM, Ewen Cheslack-Postava <
> e...@confluent.io
> > >
> > > wrote:
> > >
> > > >
> > > >
> > > > On Mon, Apr 4, 2016 at 11:24 AM, Gwen Shapira 
> > wrote:
> > > >
> > > >> >
> > > >> >
> > > >> >In case of connection closures, the KIP recommends that clients
> > > >> should
> > > >> >use some other method of determining the apiRequest version to
> > use,
> > > >> > like,
> > > >> >probing. For instance, client will send V0 version of
> apiVersion
> > > >> request
> > > >> >and will try higher versions incrementally. In case b, client
> > will
> > > >> >eventually get apiVersion response and know what api versions
> it
> > > >> should
> > > >> >use. For case a and c, client will eventually give up and
> > propagate
> > > >> an
> > > >> >error to application.
> > > >> >
> > > >> >
> > > >> I strongly disagree that we should recommend this probing method.
> > > >> Probing is essentially what clients do now (since we lack any way to
> > > >> communicate versions), and is what we are trying to solve with this
> > KIP.
> > > >> Considering that different brokers could have different versions,
> and
> > > that
> > > >> brokers can change version at any point, this sounds slow, difficult
> > to
> > > >> implement and fragile.
> > > >>
> > > >> Also note that even with this method, without VersionRequest v0, we
> > will
> > > >> break clients in the one way Kafka currently promises to never
> break:
> > > Old
> > > >> clients won't be able to work with new brokers.
> > > >>
> > > >> If this is part of KIP-35, I am against it.
> > > >>
> > > >> Since all Request/Responses in our protocol have versions,
> publishing
> > > >> versions for each request/response should be something we can easily
> > > >> support into the future. It sounds far easier than asking every
> single
> > > >> client to implement the method you specified above.
> > > >>
> > > >
> > > > Gwen,
> > > >
> > > > Agreed, and I think it would be fine to make permanent support
> (barring
> > > > massive changes to the protocol) part of the KIP text. There's really
> > no
> > > > reason not to and it basically just turns this into the basis for a
> > > pretty
> > > > simple handshake protocol.
> > > >
> > > > (I'm tempted to not even bring this up given that we're converging,
> but
> > > > one reason I could see this being changed in the future is that
> > protocol
> > > > support is only conveyed in one direction. This could also be turned
> > > into a
> > > > slightly more general handshake approach where the client also
> > advertises
> > > > what it supports. However, given the way request/response versioning
> > > works,
> > > > I can't think of a reason we'd need this atm.)
> > > >
> > > > -Ewen
> > > >
> > > >
> > > >>
> > > >> Gwen
> > > >>
> > > >
> > > >
> > > >
> > > > --
> > > > Thanks,
> > > > Ewen
> > > >
> > >
> > >
> > >
> > > --
> > > Thanks,
> > > Ewen
> > >
> >
>
>
>
> --
>
> Regards,
> Ashish
>


Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-04-05 Thread Magnus Edenhill
Hey,

People have had concerns about the complexity, on the client side, of mapping
API versions to features, so I implemented this in librdkafka and it is rather
straightforward.

See here:
https://github.com/edenhill/librdkafka/blob/KIP-35/src/rdkafka_feature.c#L52

Consider it a proof of concept at this point.
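
For readers who don't want to dig into the C code, a rough Java sketch of the
same idea follows. It is illustrative only (the class, method and feature names
are made up, and it is neither librdkafka nor Kafka client code): a feature is
enabled only if every api key it depends on is advertised by the broker within
a compatible version range.

import java.util.HashMap;
import java.util.Map;

public class FeatureCheck {

    /** Broker-advertised [min, max] supported versions for one api key. */
    static final class VersionRange {
        final short min;
        final short max;
        VersionRange(short min, short max) { this.min = min; this.max = max; }
    }

    /** True only if every required (apiKey -> version) pair falls inside the broker's range. */
    static boolean featureSupported(Map<Short, VersionRange> brokerApis,
                                    Map<Short, Short> requiredVersions) {
        for (Map.Entry<Short, Short> req : requiredVersions.entrySet()) {
            VersionRange range = brokerApis.get(req.getKey());
            if (range == null || req.getValue() < range.min || req.getValue() > range.max)
                return false;
        }
        return true;
    }

    public static void main(String[] args) {
        Map<Short, VersionRange> brokerApis = new HashMap<>();
        brokerApis.put((short) 1, new VersionRange((short) 0, (short) 2)); // example api key
        Map<Short, Short> needsV2OfApi1 = new HashMap<>();
        needsV2OfApi1.put((short) 1, (short) 2);
        System.out.println("feature enabled: " + featureSupported(brokerApis, needsV2OfApi1));
    }
}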

Thanks,
Magnus




2016-04-05 19:40 GMT+02:00 Magnus Edenhill :

> Ashish, thanks, didnt know that.
>
> For ApiVersionRequest requesting no Apis  to be returned doesnt make sense
> so the distinction isn't necessary,
> but I'm fine with adding Null to be more in line with future protocol
> requests, as long as it doesn't delay
> this KIP any longer! :)
>
> 2016-04-05 19:38 GMT+02:00 Ashish Singh :
>
>> Magnus, it is proposed to be changed in version 1,
>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations#KIP-4-Commandlineandcentralizedadministrativeoperations-MetadataSchema
>> .
>>
>> On Tue, Apr 5, 2016 at 10:23 AM, Magnus Edenhill 
>> wrote:
>>
>> > Empty arrays are already used in MetadataRequest to retrieve all topics
>> in
>> > the cluster,
>> > ApiVersion request will have the same standard semantics.
>> >
>> >
>> > 2016-04-05 19:01 GMT+02:00 Ewen Cheslack-Postava :
>> >
>> > > Also, just a thought but is empty list the sentinel we really want to
>> > > indicate we want all API versions? We've got nullable string and
>> nullable
>> > > bytes in the protocol. Should we have nullable array support as well
>> and
>> > > use that to indicate we want everything? I can't think of a use case
>> for
>> > > sending an empty list, but null seems like a better sentinel than an
>> > empty
>> > > list.
>> > >
>> > > -Ewen
>> > >
>> > > On Tue, Apr 5, 2016 at 9:23 AM, Ewen Cheslack-Postava <
>> e...@confluent.io
>> > >
>> > > wrote:
>> > >
>> > > >
>> > > >
>> > > > On Mon, Apr 4, 2016 at 11:24 AM, Gwen Shapira 
>> > wrote:
>> > > >
>> > > >> >
>> > > >> >
>> > > >> >In case of connection closures, the KIP recommends that
>> clients
>> > > >> should
>> > > >> >use some other method of determining the apiRequest version to
>> > use,
>> > > >> > like,
>> > > >> >probing. For instance, client will send V0 version of
>> apiVersion
>> > > >> request
>> > > >> >and will try higher versions incrementally. In case b, client
>> > will
>> > > >> >eventually get apiVersion response and know what api versions
>> it
>> > > >> should
>> > > >> >use. For case a and c, client will eventually give up and
>> > propagate
>> > > >> an
>> > > >> >error to application.
>> > > >> >
>> > > >> >
>> > > >> I strongly disagree that we should recommend this probing method.
>> > > >> Probing is essentially what clients do now (since we lack any way
>> to
>> > > >> communicate versions), and is what we are trying to solve with this
>> > KIP.
>> > > >> Considering that different brokers could have different versions,
>> and
>> > > that
>> > > >> brokers can change version at any point, this sounds slow,
>> difficult
>> > to
>> > > >> implement and fragile.
>> > > >>
>> > > >> Also note that even with this method, without VersionRequest v0, we
>> > will
>> > > >> break clients in the one way Kafka currently promises to never
>> break:
>> > > Old
>> > > >> clients won't be able to work with new brokers.
>> > > >>
>> > > >> If this is part of KIP-35, I am against it.
>> > > >>
>> > > >> Since all Request/Responses in our protocol have versions,
>> publishing
>> > > >> versions for each request/response should be something we can
>> easily
>> > > >> support into the future. It sounds far easier than asking every
>> single
>> > > >> client to implement the method you specified above.
>> > > >>
>> > > >
>> > > > Gwen,
>> > > >
>> > > > Agreed, and I think it would be fine to make permanent support
>> (barring
>> > > > massive changes to the protocol) part of the KIP text. There's
>> really
>> > no
>> > > > reason not to and it basically just turns this into the basis for a
>> > > pretty
>> > > > simple handshake protocol.
>> > > >
>> > > > (I'm tempted to not even bring this up given that we're converging,
>> but
>> > > > one reason I could see this being changed in the future is that
>> > protocol
>> > > > support is only conveyed in one direction. This could also be turned
>> > > into a
>> > > > slightly more general handshake approach where the client also
>> > advertises
>> > > > what it supports. However, given the way request/response versioning
>> > > works,
>> > > > I can't think of a reason we'd need this atm.)
>> > > >
>> > > > -Ewen
>> > > >
>> > > >
>> > > >>
>> > > >> Gwen
>> > > >>
>> > > >
>> > > >
>> > > >
>> > > > --
>> > > > Thanks,
>> > > > Ewen
>> > > >
>> > >
>> > >
>> > >
>> > > --
>> > > Thanks,
>> > > Ewen
>> > >
>> >
>>
>>
>>
>> --
>>
>> Regards,
>> Ashish
>>
>
>


[GitHub] kafka pull request: KAFKA-3510; OffsetIndex thread safety

2016-04-05 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1188


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3510) OffsetIndex thread safety

2016-04-05 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3510:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1188
[https://github.com/apache/kafka/pull/1188]

> OffsetIndex thread safety
> -
>
> Key: KAFKA-3510
> URL: https://issues.apache.org/jira/browse/KAFKA-3510
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> We expose non-volatile variables without a lock and outside the class. We 
> also use an `AtomicInteger` unnecessarily since it's always modified within a 
> lock.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3510) OffsetIndex thread safety

2016-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15226902#comment-15226902
 ] 

ASF GitHub Bot commented on KAFKA-3510:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1188


> OffsetIndex thread safety
> -
>
> Key: KAFKA-3510
> URL: https://issues.apache.org/jira/browse/KAFKA-3510
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> We expose non-volatile variables without a lock and outside the class. We 
> also use an `AtomicInteger` unnecessarily since it's always modified within a 
> lock.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3511) Provide built-in aggregators sum() and avg() in Kafka Streams DSL

2016-04-05 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-3511:


 Summary: Provide built-in aggregators sum() and avg() in Kafka 
Streams DSL
 Key: KAFKA-3511
 URL: https://issues.apache.org/jira/browse/KAFKA-3511
 Project: Kafka
  Issue Type: Bug
  Components: kafka streams
Reporter: Guozhang Wang
 Fix For: 0.10.0.0


Currently we only have one built-in aggregate function count() in the Kafka 
Streams DSL, but we want to add more aggregation functions like sum() and avg().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3512) Add a foreach() operator in Kafka Streams DSL

2016-04-05 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-3512:


 Summary: Add a foreach() operator in Kafka Streams DSL
 Key: KAFKA-3512
 URL: https://issues.apache.org/jira/browse/KAFKA-3512
 Project: Kafka
  Issue Type: Bug
  Components: kafka streams
Reporter: Guozhang Wang
 Fix For: 0.10.0.0


This would be a more intuitive operator to replace the misuse of map():

{code}
map((k, v) -> process(k, v))
{code}
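
For illustration only, a sketch of how the proposed operator might read next to
the map() workaround above (foreach() is the proposal here, not an existing API,
and process() is an arbitrary user function):

{code}
// Today: abusing map() purely for its side effect and ignoring the returned stream.
stream.map((k, v) -> { process(k, v); return new KeyValue<>(k, v); });

// Proposed: a terminal, side-effect-only operator.
stream.foreach((k, v) -> process(k, v));
{code}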



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Search on queue?

2016-04-05 Thread Manish Degan
Hi,

I have been thinking of some high-volume use cases where I want to use
Kafka. The message objects in my queue might have a date property. I want
to pull all messages up to a date to perform certain actions. Currently, I
can keep listening but cannot confirm whether there are any more messages
for that date still in the queue. Would it be a good idea to have
configurable search properties in a wrapper object that comes to Kafka, so
that users can use the field they set for searches? It's a bit of a
complicated use case, but it might be good functionality.

Regards,
Manish


Build failed in Jenkins: kafka-trunk-jdk8 #507

2016-04-05 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3510; OffsetIndex thread safety

--
[...truncated 5403 lines...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testFormatConversionWithPartialMessage PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testMessageFormatConversion PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testAppendWithOutOfOrderOffsetsThrowsException PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.Lo

[jira] [Created] (KAFKA-3513) Transient failure of OffsetValidationTest

2016-04-05 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-3513:


 Summary: Transient failure of OffsetValidationTest
 Key: KAFKA-3513
 URL: https://issues.apache.org/jira/browse/KAFKA-3513
 Project: Kafka
  Issue Type: Bug
  Components: consumer, system tests
Reporter: Ewen Cheslack-Postava
Assignee: Jason Gustafson


http://confluent-kafka-system-test-results.s3-us-west-2.amazonaws.com/2016-04-05--001.1459840046--apache--trunk--31e263e/report.html

The version of the test that fails in this case is:
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_broker_failure
Arguments:
{
  "clean_shutdown": true,
  "enable_autocommit": false
}

and others passed. It's unclear if the parameters actually have any impact on 
the failure.

I did some initial triage and it looks like the test code isn't seeing all the 
group members join the group (receive partition assignments), but it appears 
from the logs that they all did. This could indicate a simple timing issue, but 
I haven't been able to verify that yet.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1178

2016-04-05 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3510; OffsetIndex thread safety

--
[...truncated  lines...]
org.apache.kafka.streams.kstream.internals.KTableKTableOuterJoinTest > 
testNotSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KStreamTransformValuesTest > 
testTransform PASSED

org.apache.kafka.streams.kstream.internals.KTableMapValuesTest > 
testSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableMapValuesTest > 
testNotSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableMapValuesTest > testKTable 
PASSED

org.apache.kafka.streams.kstream.internals.KTableMapValuesTest > 
testValueGetter PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testOuterJoin PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testWindowing PASSED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > testNumProcesses 
PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testNotSedingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > 
testSedingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testKTable PASSED

org.apache.kafka.streams.kstream.internals.KTableSourceTest > testValueGetter 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamBranchTest > 
testKStreamBranch PASSED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapValuesTest > 
testFlatMapValues PASSED

org.apache.kafka.streams.kstream.internals.KStreamWindowAggregateTest > 
testAggBasic PASSED

org.apache.kafka.streams.kstream.internals.KStreamWindowAggregateTest > 
testJoin PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > testStateStore 
PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > testKTable PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > testValueGetter 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamFlatMapTest > testFlatMap 
PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testMerge PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testFrom PASSED

org.apache.kafka.streams.kstream.KStreamBuilderTest > testNewName PASSED

org.apache.kafka.streams.KeyValueTest > testHashcode PASSED

org.apache.kafka.streams.KeyValueTest > testEquals PASSED

org.apache.kafka.streams.state.internals.OffsetCheckpointTest > testReadWrite 
PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > testEvict 
PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestore PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueStoreTest > 
testPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueStoreTest > 
testRestore PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.InMemoryKeyValueStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testRestore 
PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutAndFetch PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutAndFetchBefore PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testInitialLoading PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > testRestore 
PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > testRolling 
PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testSegmentMaintenance PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutSameKeyTimestamp PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutAndFetchAfter PASSED

org.apache.kafka.streams.state.internals.StoreChangeLoggerTest > testRaw PASSED

o

[jira] [Commented] (KAFKA-3511) Provide built-in aggregators sum() and avg() in Kafka Streams DSL

2016-04-05 Thread Jay Kreps (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15227096#comment-15227096
 ] 

Jay Kreps commented on KAFKA-3511:
--

The issue is that there are dozens of common aggregation functions; which ones 
will be built in and which will not?

I would argue in the other direction: why not remove countByKey and implement all 
of these as Aggregator implementations, so you would do something like:

myStream.aggregateByKey(new Count(), )
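
For example, a rough sketch of what such a reusable aggregator could look like
(illustrative only; the exact Aggregator signature is assumed here, and the
remaining aggregateByKey arguments are elided as in the line above):

public class Count<K, V> implements Aggregator<K, V, Long> {
    @Override
    public Long apply(K aggKey, V value, Long aggregate) {
        return aggregate + 1;   // each record increments the running count
    }
}

// KTable<K, Long> counts = myStream.aggregateByKey(new Count<K, V>(), ...);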

> Provide built-in aggregators sum() and avg() in Kafka Streams DSL
> -
>
> Key: KAFKA-3511
> URL: https://issues.apache.org/jira/browse/KAFKA-3511
> Project: Kafka
>  Issue Type: Bug
>  Components: kafka streams
>Reporter: Guozhang Wang
>  Labels: newbie
> Fix For: 0.10.0.0
>
>
> Currently we only have one built-in aggregate function count() in the Kafka 
> Streams DSL, but we want to add more aggregation functions like sum() and 
> avg().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-04-05 Thread Ashish Singh
Ismael, thanks for the review.

On Fri, Apr 1, 2016 at 1:22 AM, Ismael Juma  wrote:

> A couple of questions:
>
> 1. The KIP says "Specific version may be deprecated through protocol
> documentation but must still be supported (although it is fair to return an
> error code if the specific API supports it).". It may be worth expanding
> this a little more. For example, what does it mean to support the API? I
> guess this means that the broker must not disconnect the client and the
> broker must return a valid protocol response.

Sure, will add to KIP.

> Given that it says that it is
> "fair" (I would probably replace "fair" with "valid") to return an error
> code if the specific API supports it, it sounds like we are saying that we
> don't have to maintain the semantic behaviour (i.e. we could _always_
> return an error for a deprecated API?). Is this true?
>
Yes. Typically we should support deprecated APIs; however, in cases where an
intermediate buggy version was backported, if the API has an error field, it
should be valid to use it to signal an error. An example of such backporting
is provided in the KIP; below is an excerpt. Note that this KIP is not asking
for any new error type.
> For instance, say 0.9.0 had protocol versions [0] for api key 1. On
trunk, version 1 of the api key was added. Users running off trunk started
using version 1 of the api and found out a major bug. To rectify that
version 2 of the api is added to trunk. For some reason, it is now deemed
important to have version 2 of the api in 0.9.1 as well. To do so, version
1 and version 2 both of the api will be backported to the 0.9.1 branch.
0.9.1 broker will return 0 as min supported version for the api and 2 for
the max supported version for the api. However, the version 1 should be
clearly marked as deprecated on its documentation. It will be client's
responsibility to make sure they are not using any such deprecated version
to the best knowledge of the client at the time of development (or
alternatively by configuration).

>
> 2. ApiVersionQueryRequest seems a bit verbose, why not ApiVersionRequest?
>
Adopted.

>
> Ismael
>



-- 

Regards,
Ashish


Re: [DISCUSS] KIP-35 - Retrieve protocol version

2016-04-05 Thread Ashish Singh
On Fri, Apr 1, 2016 at 1:32 AM, Ismael Juma  wrote:

> Two more things:
>
> 3. We talk about backporting of new request versions to stable branches in
> the KIP. In practice, we can't do that until the Java client is changed so
> that it doesn't blindly use the latest protocol version. Otherwise, if new
> request versions were added to 0.9.0.2, the client would break when talking
> to a 0.9.0.1 broker (given Jason's proposal, it would fail a bit more
> gracefully, but that's still not good enough for a stable branch). It may
> be worth making this clear in the KIP (yes, it is a bit orthogonal and
> doesn't prevent the KIP from being adopted, but good to avoid confusion).
>
Good point. Adding this note and also adding a note that Kafka has not
backported an api version so far.

>
> 4. The paragraph below is a bit confusing. It starts talking about 0.9.0
> and trunk and then switches to 0.9.1. Is that intentional?
>
Yes.

>
> "Deprecation of a protocol version will be done by marking a protocol
> version as deprecated in protocol documentation. Documentation shall also
> be used to indicate a protocol version that must not be used, or for any
> such information.For instance, say 0.9.0 had protocol versions [0] for api
> key 1. On trunk, version 1 of the api key was added. Users running off
> trunk started using version 1 of the api and found out a major bug. To
> rectify that version 2 of the api is added to trunk. For some reason, it is
> now deemed important to have version 2 of the api in 0.9.1 as well. To do
> so, version 1 and version 2 both of the api will be backported to the 0.9.1
> branch. 0.9.1 broker will return 0 as min supported version for the api and
> 2 for the max supported version for the api. However, the version 1 should
> be clearly marked as deprecated on its documentation. It will be client's
> responsibility to make sure they are not using any such deprecated version
> to the best knowledge of the client at the time of development (or
> alternatively by configuration)."
>
> Ismael
>
>
>
> On Fri, Apr 1, 2016 at 9:22 AM, Ismael Juma  wrote:
>
> > A couple of questions:
> >
> > 1. The KIP says "Specific version may be deprecated through protocol
> > documentation but must still be supported (although it is fair to return
> an
> > error code if the specific API supports it).". It may be worth expanding
> > this a little more. For example, what does it mean to support the API? I
> > guess this means that the broker must not disconnect the client and the
> > broker must return a valid protocol response. Given that it says that it
> is
> > "fair" (I would probably replace "fair" with "valid") to return an error
> > code if the specific API supports it, it sounds like we are saying that
> we
> > don't have to maintain the semantic behaviour (i.e. we could _always_
> > return an error for a deprecated API?). Is this true?
> >
> > 2. ApiVersionQueryRequest seems a bit verbose, why not ApiVersionRequest?
> >
> > Ismael
> >
>



-- 

Regards,
Ashish


Re: [VOTE] KIP-4 Metadata Schema

2016-04-05 Thread Grant Henke
Hi Jun,

See my responses below:

2. The issues that I was thinking are the following. (a) Say the controller
> has topic deletion disabled and a topic deletion request is submitted to
> ZK. In this case, the controller will ignore this request. However, the
> broker may pick up the topic deletion marker in a transient window. (b)
> Suppose that a topic is deleted and then recreated immediately. It is
> possible for a broker to see the newly created topic and then the previous
> topic deletion marker in a transient window. Thinking about this a bit
> more. Both seem to be transient. So, it may not be a big concern. So, I am
> ok with this as long as the interim solution is not too complicated.
> Another thing to think through. If a topic is marked for deletion, do we
> still return the partition level metadata?


I am not changing anything about the metadata content, only adding a
boolean based on the marked-for-deletion flag in zookeeper. This maintains
the same approach that the topics script uses today. I do think delete
improvements should be considered/reviewed. The goal here is to allow the
broker to report the value that it sees, which is the value in zookeeper.

5. The issue is the following. If you have a partition with 3 replicas
> 4,5,6, leader is on replica 4 and replica 5 is down. Currently, the broker
> will send a REPLICA_NOT_AVAILABLE error code and only replicas 4,6 in the
> assigned replicas. It's more intuitive to send no error code and 4,5,6 in
> the assigned replicas in this case.


Should the list with no error code just be 4,6 since 5 is not available?


On Mon, Apr 4, 2016 at 1:34 PM, Jun Rao  wrote:

> Grant,
>
> 2. The issues that I was thinking are the following. (a) Say the controller
> has topic deletion disabled and a topic deletion request is submitted to
> ZK. In this case, the controller will ignore this request. However, the
> broker may pick up the topic deletion marker in a transient window. (b)
> Suppose that a topic is deleted and then recreated immediately. It is
> possible for a broker to see the newly created topic and then the previous
> topic deletion marker in a transient window. Thinking about this a bit
> more. Both seem to be transient. So, it may not be a big concern. So, I am
> ok with this as long as the interim solution is not too complicated.
> Another thing to think through. If a topic is marked for deletion, do we
> still return the partition level metadata?
>
> 3. Your explanation on controller id seems reasonable to me.
>
> 5. The issue is the following. If you have a partition with 3 replicas
> 4,5,6, leader is on replica 4 and replica 5 is down. Currently, the broker
> will send a REPLICA_NOT_AVAILABLE error code and only replicas 4,6 in the
> assigned replicas. It's more intuitive to send no error code and 4,5,6 in
> the assigned replicas in this case.
>
> Thanks,
>
> Jun
>
> On Mon, Apr 4, 2016 at 8:33 AM, Grant Henke  wrote:
>
> > Hi Jun,
> >
> > Please See my responses below:
> >
> > Hmm, I am not sure about the listener approach. It ignores configs like
> > > enable.topic.deletion and also opens the door for potential ordering
> > issues
> > > since now there are two separate paths for propagating the metadata to
> > the
> > > brokers.
> >
> >
> > This mechanism is very similar to how deletes are tracked on the
> controller
> > itself. It is also the same way ACLs are tracked on brokers in the
> default
> > implementation. I am not sure I understand what ordering issue there
> could
> > be. This is used to report what topics are marked for deletion, which
> today
> > has no dependency on enable.topic.deletion. I agree that the delete
> > mechanism in Kafka has a lot of room for improvement, but the goal in
> this
> > change is just to enable reporting it to the user, not to fix/improve
> > existing issues. If you have an alternate approach that does not require
> > major changes to the controller code, I would be open to investigate it.
> >
> > Could we just leave out markedForDeletion for now? In the common
> > > case, if a topic is deleted, it will only be in markedForDeletion state
> > for
> > > a few milli seconds anyway.
> >
> >
> > I don't think we should leave it out. The point of these changes is to
> > prevent a user from needing to talk directly to zookeeper. We need a way
> > for a user to see if a topic has been marked for deletion. Given the
> issues
> > with the current delete implementation, its fairly common for a topic to
> > remain marked as deleted for quite some time.
> >
> > Yes, for those usage, it just seems it's a bit weird for the client to
> > > issue a MetadataRequest to get the controller info since it doesn't
> need
> > > any topic metadata.
> >
> >
> > Why does this seam weird? The MetadataRequest is the request used to
> > discover the cluster and metadata about that cluster regardless of the
> > topics you are interested in, if any. In fact, a big motivation for the
> > change to allow requesting "no topics" 

Re: [VOTE] KIP-4 Metadata Schema

2016-04-05 Thread Grant Henke
After the discussion today about the clarity and flexibility of the
flag/lists for internal topics and deleted topics, I think I will switch
back to using booleans inside the topic metadata. This is a clearer
representation of the intent, should not add too much overhead (especially
because users can now request no topics), and is simpler and more flexible to
evolve if we ever decide we need to represent more "states".

I will update the patch in the next day or so and post a vote if no issue
is raised with that.

Thanks,
Grant

On Tue, Apr 5, 2016 at 4:05 PM, Grant Henke  wrote:

> Hi Jun,
>
> See my responses below:
>
> 2. The issues that I was thinking are the following. (a) Say the controller
>> has topic deletion disabled and a topic deletion request is submitted to
>> ZK. In this case, the controller will ignore this request. However, the
>> broker may pick up the topic deletion marker in a transient window. (b)
>> Suppose that a topic is deleted and then recreated immediately. It is
>> possible for a broker to see the newly created topic and then the previous
>> topic deletion marker in a transient window. Thinking about this a bit
>> more. Both seem to be transient. So, it may not be a big concern. So, I am
>> ok with this as long as the interim solution is not too complicated.
>> Another thing to think through. If a topic is marked for deletion, do we
>> still return the partition level metadata?
>
>
> I am not changing anything about the metadata content, only adding a
> boolean based on the marked for deletion flag in zookeeper. This is
> maintaining the same method that the topics script does today. I do think
> delete improvements should be considered/reviewed. The goal here is to
> allow the broker to report the value that its sees, which is the value in
> zookeeper.
>
> 5. The issue is the following. If you have a partition with 3 replicas
>> 4,5,6, leader is on replica 4 and replica 5 is down. Currently, the broker
>> will send a REPLICA_NOT_AVAILABLE error code and only replicas 4,6 in the
>> assigned replicas. It's more intuitive to send no error code and 4,5,6 in
>> the assigned replicas in this case.
>
>
> Should the list with no error code just be 4,6 since 5 is not available?
>
>
> On Mon, Apr 4, 2016 at 1:34 PM, Jun Rao  wrote:
>
>> Grant,
>>
>> 2. The issues that I was thinking are the following. (a) Say the
>> controller
>> has topic deletion disabled and a topic deletion request is submitted to
>> ZK. In this case, the controller will ignore this request. However, the
>> broker may pick up the topic deletion marker in a transient window. (b)
>> Suppose that a topic is deleted and then recreated immediately. It is
>> possible for a broker to see the newly created topic and then the previous
>> topic deletion marker in a transient window. Thinking about this a bit
>> more. Both seem to be transient. So, it may not be a big concern. So, I am
>> ok with this as long as the interim solution is not too complicated.
>> Another thing to think through. If a topic is marked for deletion, do we
>> still return the partition level metadata?
>>
>> 3. Your explanation on controller id seems reasonable to me.
>>
>> 5. The issue is the following. If you have a partition with 3 replicas
>> 4,5,6, leader is on replica 4 and replica 5 is down. Currently, the broker
>> will send a REPLICA_NOT_AVAILABLE error code and only replicas 4,6 in the
>> assigned replicas. It's more intuitive to send no error code and 4,5,6 in
>> the assigned replicas in this case.
>>
>> Thanks,
>>
>> Jun
>>
>> On Mon, Apr 4, 2016 at 8:33 AM, Grant Henke  wrote:
>>
>> > Hi Jun,
>> >
>> > Please See my responses below:
>> >
>> > Hmm, I am not sure about the listener approach. It ignores configs like
>> > > enable.topic.deletion and also opens the door for potential ordering
>> > issues
>> > > since now there are two separate paths for propagating the metadata to
>> > the
>> > > brokers.
>> >
>> >
>> > This mechanism is very similar to how deletes are tracked on the
>> controller
>> > itself. It is also the same way ACLs are tracked on brokers in the
>> default
>> > implementation. I am not sure I understand what ordering issue there
>> could
>> > be. This is used to report what topics are marked for deletion, which
>> today
>> > has no dependency on enable.topic.deletion. I agree that the delete
>> > mechanism in Kafka has a lot of room for improvement, but the goal in
>> this
>> > change is just to enable reporting it to the user, not to fix/improve
>> > existing issues. If you have an alternate approach that does not require
>> > major changes to the controller code, I would be open to investigate it.
>> >
>> > Could we just leave out markedForDeletion for now? In the common
>> > > case, if a topic is deleted, it will only be in markedForDeletion
>> state
>> > for
>> > > a few milli seconds anyway.
>> >
>> >
>> > I don't think we should leave it out. The point of these changes is to
>> > prevent a user from needing to tal

[jira] [Updated] (KAFKA-3184) Add Checkpoint for In-memory State Store

2016-04-05 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3184:
-
Fix Version/s: 0.10.1.0

> Add Checkpoint for In-memory State Store
> 
>
> Key: KAFKA-3184
> URL: https://issues.apache.org/jira/browse/KAFKA-3184
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Guozhang Wang
> Fix For: 0.10.1.0
>
>
> Currently Kafka Streams does not make a checkpoint of the persistent state 
> store upon committing, which would be expensive since it is "stopping the 
> world" and writing to disk: for example, RocksDB would naively require you to 
> copy the file directory to make a copy. 
> However, for in-memory stores checkpointing may be doable in an asynchronous 
> manner, hence it can be done quickly. And the benefit of having an intermediate 
> checkpoint is to avoid restoring from scratch if standby tasks are not 
> present.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3185) Allow users to cleanup internal data

2016-04-05 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3185:
-
Fix Version/s: 0.10.1.0

> Allow users to cleanup internal data
> 
>
> Key: KAFKA-3185
> URL: https://issues.apache.org/jira/browse/KAFKA-3185
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Guozhang Wang
>Priority: Blocker
> Fix For: 0.10.1.0
>
>
> Currently the internal data is managed completely by the Kafka Streams framework 
> and users cannot clean it up actively. This results in a bad out-of-the-box 
> user experience, especially for running demo programs, since it leaves behind 
> internal data (changelog topics, RocksDB files, etc.) that needs to be cleaned 
> up manually. It would be better to add a
> {code}
> KafkaStreams.cleanup()
> {code}
> function call to clean up this internal data programmatically.
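
A minimal usage sketch of the proposed call (illustrative only; cleanup() is the
proposed method, the builder is assumed to be an already-populated topology
builder, and the config values are placeholders):

{code}
Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "demo-app");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

KafkaStreams streams = new KafkaStreams(builder, props);
streams.cleanup();   // proposed: remove this application's internal data before (re)starting
streams.start();
{code}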



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3183) Add metrics for persistent store caching layer

2016-04-05 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3183:
-
Fix Version/s: 0.10.1.0

> Add metrics for persistent store caching layer
> --
>
> Key: KAFKA-3183
> URL: https://issues.apache.org/jira/browse/KAFKA-3183
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Guozhang Wang
> Fix For: 0.10.1.0
>
>
> We need to add metrics collection such as cache hits / misses, cache 
> size, dirty key size, etc. for the RocksDBStore. However, this may require 
> refactoring the RocksDBStore a little bit, since currently caching is not exposed 
> to the MeteredKeyValueStore, and it uses an LRUCacheStore as the cache, which 
> does not keep the dirty key set.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3262) Make KafkaStreams debugging friendly

2016-04-05 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3262:
-
Fix Version/s: (was: 0.10.0.1)
   0.10.1.0

> Make KafkaStreams debugging friendly
> 
>
> Key: KAFKA-3262
> URL: https://issues.apache.org/jira/browse/KAFKA-3262
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.10.0.0
>Reporter: Yasuhiro Matsuda
> Fix For: 0.10.1.0
>
>
> Currently KafkaStreams polls records in the same thread as the data processing 
> thread. This makes debugging user code, as well as KafkaStreams itself, 
> difficult. When the thread is suspended by the debugger, the next heartbeat 
> of the consumer tied to the thread won't be sent until the thread is resumed. 
> This often results in missed heartbeats and causes a group rebalance, so it 
> may well be a completely different context when the thread hits the break 
> point the next time.
> We should consider using separate threads for polling and processing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3439) Document possible exception thrown in public APIs

2016-04-05 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3439:
-
Fix Version/s: (was: 0.10.0.1)
   0.10.0.0

> Document possible exception thrown in public APIs
> -
>
> Key: KAFKA-3439
> URL: https://issues.apache.org/jira/browse/KAFKA-3439
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Guozhang Wang
> Fix For: 0.10.0.0
>
>
> Candidate interfaces include all the ones in "kstream", "processor" and 
> "state" packages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3429) Remove Serdes needed for repartitioning in KTable stateful operations

2016-04-05 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3429:
-
Fix Version/s: (was: 0.10.0.1)
   0.10.1.0

> Remove Serdes needed for repartitioning in KTable stateful operations
> -
>
> Key: KAFKA-3429
> URL: https://issues.apache.org/jira/browse/KAFKA-3429
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Guozhang Wang
>  Labels: newbie++
> Fix For: 0.10.1.0
>
>
> Currently in KTable aggregate operations where a repartition is possibly 
> needed since the aggregation key may not be the same as the original primary 
> key, we require the users to provide serdes (default to configured ones) for 
> read / write to the internally created re-partition topic. However, these are 
> not necessary since for all KTable instances either generated from the topics 
> directly:
> {code}table = builder.table(...){code}
> or from aggregation operations:
> {code}table = stream.aggregate(...){code}
> There is already a serde provided for materializing the data, and hence the 
> same serde can be re-used when the resulting KTable is involved in future 
> aggregation operations. For example:
> {code}
> table1 = stream.aggregateByKey(serde);
> table2 = table1.aggregate(aggregator, selector, originalSerde, 
> aggregateSerde);
> {code}
> We would not need to require users to specify the "originalSerde" in 
> table1.aggregate since it could always reuse the "serde" from 
> stream.aggregateByKey, which is used to materialize the table1 object.
> In order to get rid of it, implementation-wise we need to carry the serde 
> information along with the KTableImpl instance in order to re-use it in a 
> future operation that requires repartitioning.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3440) Add Javadoc for KTable (changelog stream) and KStream (record stream)

2016-04-05 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3440:
-
Fix Version/s: (was: 0.10.0.1)
   0.10.0.0

> Add Javadoc for KTable (changelog stream) and KStream (record stream)
> -
>
> Key: KAFKA-3440
> URL: https://issues.apache.org/jira/browse/KAFKA-3440
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Guozhang Wang
> Fix For: 0.10.0.0
>
>
> Currently we only have a 1-liner in the {code}KTable{code} and 
> {code}KStream{code} classes describing the changelog and record streams. We 
> should have a more detailed explanation in the Javadocs as well, as in the web 
> docs.
> Also we want to have some more description of the windowed {code}KTable{code}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3430) Allow users to set key in KTable.toStream() and KStream

2016-04-05 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3430:
-
Fix Version/s: (was: 0.10.0.1)
   0.10.0.0

> Allow users to set key in KTable.toStream() and KStream
> ---
>
> Key: KAFKA-3430
> URL: https://issues.apache.org/jira/browse/KAFKA-3430
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Guozhang Wang
> Fix For: 0.10.0.0
>
>
> Currently KTable.toStream does not take any parameters, and hence users who 
> want to set the key need to do it in two steps:
> {code}table.toStream().map(...){code} in order to do so. We can make it one 
> step by providing a mapper parameter in toStream.
> Similarly, today users usually need to call {code} KStream.map() {code} in 
> order to select the key before an aggregation-by-key operation if the original 
> stream does not contain keys. 
> We can consider adding a specific function in KStream to do so:
> {code}KStream.selectKey(mapper){code}
> which essentially is the same as
> {code}KStream.map(/* mapper that does not change the value, but only the key 
> */){code}
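
For illustration, the two styles side by side (selectKey() is the proposal here;
the Order type and its userId field are made up for the example):

{code}
// Today: re-key via map(), leaving the value untouched.
KStream<String, Order> reKeyed = orders.map((k, order) -> new KeyValue<>(order.userId, order));

// Proposed: select the new key directly.
KStream<String, Order> reKeyedProposed = orders.selectKey((k, order) -> order.userId);
{code}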



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3443) Support regex topics in addSource() and stream()

2016-04-05 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3443:
-
Fix Version/s: (was: 0.10.0.1)
   0.10.1.0

> Support regex topics in addSource() and stream()
> 
>
> Key: KAFKA-3443
> URL: https://issues.apache.org/jira/browse/KAFKA-3443
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Guozhang Wang
> Fix For: 0.10.1.0
>
>
> Currently Kafka Streams only support specific topics in creating source 
> streams, while we can leverage consumer's regex subscription to allow regex 
> topics as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3497) Streams ProcessorContext should support forward() based on child name

2016-04-05 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3497:
-
Fix Version/s: 0.10.0.0

> Streams ProcessorContext should support forward() based on child name
> -
>
> Key: KAFKA-3497
> URL: https://issues.apache.org/jira/browse/KAFKA-3497
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.10.0.1
>Reporter: Yuto Kawamura
> Fix For: 0.10.0.0
>
>
> Currently {{ProcessorContext}} only supports {{forward(K, V)}}, which forwards 
> the KV pair to all children, and {{forward(K, V, int childIndex)}}, which forwards 
> it to a specific child identified by its index in the children list.
> When letting a Processor issue messages with arbitrarily different downstream 
> destinations, it is not handy to keep the ordering of {{addProcessor}} (or 
> {{addSink}}) calls consistent with the childIndex.
> Here I'd like to suggest introducing another signature, {{forward(K, V, String 
> childName)}}, which allows using the child name (the first argument to addProcessor 
> or addSink) to indicate the downstream destination.
> Thread on user mailing list: 
> http://mail-archives.apache.org/mod_mbox/kafka-users/201604.mbox/
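
A sketch of how the proposed overload might be used inside a processor
(illustrative only; the forward(K, V, String childName) overload is the proposal,
and the child names are whatever was passed to addProcessor()/addSink() when
building the topology):

{code}
public class RoutingProcessor implements Processor<String, String> {
    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) { this.context = context; }

    @Override
    public void process(String key, String value) {
        if (value.startsWith("AUDIT"))
            context.forward(key, value, "audit-sink");   // proposed: route by child name
        else
            context.forward(key, value, "main-sink");
    }

    @Override
    public void punctuate(long timestamp) { }

    @Override
    public void close() { }
}
{code}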



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3477) Add customizable StreamPartition into #to functions of Streams DSL

2016-04-05 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3477:
-
Fix Version/s: (was: 0.10.0.1)
   0.10.0.0

> Add customizable StreamPartition into #to functions of Streams DSL
> --
>
> Key: KAFKA-3477
> URL: https://issues.apache.org/jira/browse/KAFKA-3477
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>  Labels: newbie
> Fix For: 0.10.0.0
>
>
> In the lower-level Processor API we allow users to pass in a customizable 
> StreamPartitioner when creating a new sink processor node to the topology:
> {code}
> builder.addSink(String name, String topic, StreamPartitioner partitioner, 
> String... parentNames));
> {code}
> This StreamPartitioner allows users to specify any partitioning scheme based 
> on record values instead of using the default behavior of hashing on the 
> message key, but it is not exposed in the higher-level Streams DSL.
> We can add this parameter to the Streams DSL as well:
> {code}
> KStream#to(String topic, StreamPartitioner partitioner);
> KTable#to(String topic, StreamPartitioner partitioner);
> {code}
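
A sketch of a custom partitioner and the proposed DSL call (illustrative only;
it assumes the existing StreamPartitioner interface with
partition(key, value, numPartitions), and the PageView type and its region
field are made up):

{code}
public class RegionPartitioner implements StreamPartitioner<String, PageView> {
    @Override
    public Integer partition(String key, PageView value, int numPartitions) {
        // Partition on a value field instead of hashing the message key;
        // mask the sign bit so the result is always non-negative.
        return (value.region.hashCode() & 0x7fffffff) % numPartitions;
    }
}

// With the proposed overload:
// pageViews.to("page-views-by-region", new RegionPartitioner());
{code}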



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3455) Connect custom processors with the streams DSL

2016-04-05 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3455:
-
Fix Version/s: 0.10.1.0

> Connect custom processors with the streams DSL
> --
>
> Key: KAFKA-3455
> URL: https://issues.apache.org/jira/browse/KAFKA-3455
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.10.0.1
>Reporter: Jonathan Bender
> Fix For: 0.10.1.0
>
>
> From the kafka users email thread, we discussed the idea of connecting custom 
> processors with topologies defined from the Streams DSL (and being able to 
> sink data from the processor).  Possibly this could involve exposing the 
> underlying processor's name in the streams DSL so it can be connected with 
> the standard processor API.
> {quote}
> Thanks for the feedback. This is definitely something we wanted to support
> in the Streams DSL.
> One tricky thing, though, is that some operations do not translate to a
> single processor, but a sub-graph of processors (think of a stream-stream
> join, which is translated to actually 5 processors for windowing / state
> queries / merging, each with a different internal name). So how to define
> the API to return the processor name needs some more thinking.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3504) Changelog partition configured to enable log compaction

2016-04-05 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3504:
-
Fix Version/s: (was: 0.10.1.0)
   0.10.0.0

> Changelog partition configured to enable log compaction
> ---
>
> Key: KAFKA-3504
> URL: https://issues.apache.org/jira/browse/KAFKA-3504
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Guozhang Wang
> Fix For: 0.10.0.0
>
>
> Today Kafka Streams automatically configures changelog topics for state 
> stores; however, these changelog topics are not configured with log compaction 
> enabled. We should set the right configs when auto-creating these internal 
> topics.
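
For reference, the broker-side topic setting involved is cleanup.policy=compact;
a minimal sketch of the topic-level config such internal topics would need (how
Streams passes this at auto-creation time is exactly what this ticket is about):

{code}
Properties changelogConfig = new Properties();
changelogConfig.put("cleanup.policy", "compact");   // enable log compaction for the changelog topic
{code}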



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3506: Kafka Connect restart APIs

2016-04-05 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1189

KAFKA-3506: Kafka Connect restart APIs



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3506

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1189.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1189


commit 0c84974f3aa30e30098af81d1264bb503825b0e5
Author: Jason Gustafson 
Date:   2016-04-05T21:31:01Z

KAFKA-3506: Kafka Connect restart APIs




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3506) Kafka Connect Task Restart API

2016-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15227204#comment-15227204
 ] 

ASF GitHub Bot commented on KAFKA-3506:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1189

KAFKA-3506: Kafka Connect restart APIs



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3506

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1189.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1189


commit 0c84974f3aa30e30098af81d1264bb503825b0e5
Author: Jason Gustafson 
Date:   2016-04-05T21:31:01Z

KAFKA-3506: Kafka Connect restart APIs




> Kafka Connect Task Restart API
> --
>
> Key: KAFKA-3506
> URL: https://issues.apache.org/jira/browse/KAFKA-3506
> Project: Kafka
>  Issue Type: Improvement
>  Components: copycat
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> This covers the connector and task restart APIs as documented on KIP-52: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-52%3A+Connector+Control+APIs.
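
For illustration, a minimal sketch of calling the restart endpoint proposed in 
KIP-52 from Java; the worker URL and connector name ("local-file-sink") are 
hypothetical, and a task can be restarted analogously via 
/connectors/{name}/tasks/{taskId}/restart:

{code}
import java.net.HttpURLConnection;
import java.net.URL;

public class RestartConnector {
    public static void main(String[] args) throws Exception {
        // POST with an empty body against the Connect worker's REST port.
        URL url = new URL("http://localhost:8083/connectors/local-file-sink/restart");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        System.out.println("HTTP " + conn.getResponseCode());  // 2xx expected on success
        conn.disconnect();
    }
}
{code}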



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3506) Kafka Connect Task Restart API

2016-04-05 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-3506:
---
Status: Patch Available  (was: In Progress)

> Kafka Connect Task Restart API
> --
>
> Key: KAFKA-3506
> URL: https://issues.apache.org/jira/browse/KAFKA-3506
> Project: Kafka
>  Issue Type: Improvement
>  Components: copycat
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> This covers the connector and task restart APIs as documented on KIP-52: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-52%3A+Connector+Control+APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3514) Stream timestamp computation needs some further thoughts

2016-04-05 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-3514:


 Summary: Stream timestamp computation needs some further thoughts
 Key: KAFKA-3514
 URL: https://issues.apache.org/jira/browse/KAFKA-3514
 Project: Kafka
  Issue Type: Bug
  Components: kafka streams
Reporter: Guozhang Wang
 Fix For: 0.10.1.0


Our current stream task's timestamp is used for the punctuate function as well 
as for selecting which stream to process next (i.e. best-effort stream 
synchronization). It is defined as the smallest timestamp over all 
partitions in the task's partition group. This results in two unintuitive 
corner cases:

1) Observing a late-arriving record keeps that stream's timestamp low for a 
period of time, and hence that stream keeps being processed until the late 
record itself is reached. For example, take two partitions within the same 
task, annotated by their timestamps:

{code}
Stream A: 5, 6, 7, 8, 9, 1, 10
{code}

{code}
Stream B: 2, 3, 4, 5
{code}

The late-arriving record with timestamp "1" will cause stream A to be selected 
continuously in the thread loop (i.e. the messages with timestamps 5, 6, 7, 8, 9) 
until the record itself is dequeued and processed; only then will stream B be 
selected, starting with timestamp 2.

2) An empty buffered partition prevents its timestamp from advancing, and hence 
the task timestamp as well, since the task timestamp is the smallest among all 
partitions. This may not be as severe a problem as 1) above, though.
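
For illustration, a minimal sketch (not the actual Streams code) of the rule 
described above: the task timestamp is the smallest head-record timestamp 
across the partitions in the partition group, which is why a single late record 
pins it low until that record is dequeued:

{code}
import java.util.Map;

public class TaskTimestamp {
    // headTimestamps: partition -> timestamp of the next buffered record.
    public static long taskTimestamp(Map<String, Long> headTimestamps) {
        long min = Long.MAX_VALUE;
        for (long ts : headTimestamps.values())
            min = Math.min(min, ts);
        return min;  // stays at the late record's timestamp until it is processed
    }
}
{code}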



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3489; Update request metrics if a client...

2016-04-05 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1172


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3489) Update request metrics if client closes connection while broker response is in flight

2016-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15227234#comment-15227234
 ] 

ASF GitHub Bot commented on KAFKA-3489:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1172


> Update request metrics if client closes connection while broker response is 
> in flight
> -
>
> Key: KAFKA-3489
> URL: https://issues.apache.org/jira/browse/KAFKA-3489
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> We currently don't update request metrics or request logging for this case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3489) Update request metrics if client closes connection while broker response is in flight

2016-04-05 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-3489:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1172
[https://github.com/apache/kafka/pull/1172]

> Update request metrics if client closes connection while broker response is 
> in flight
> -
>
> Key: KAFKA-3489
> URL: https://issues.apache.org/jira/browse/KAFKA-3489
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.10.0.0
>
>
> We currently don't update request metrics or request logging for this case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3508: Fix transient SimpleACLAuthorizerT...

2016-04-05 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1156


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3508) Transient failure in kafka.security.auth.SimpleAclAuthorizerTest.testHighConcurrencyModificationOfResourceAcls

2016-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15227239#comment-15227239
 ] 

ASF GitHub Bot commented on KAFKA-3508:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1156


> Transient failure in 
> kafka.security.auth.SimpleAclAuthorizerTest.testHighConcurrencyModificationOfResourceAcls
> --
>
> Key: KAFKA-3508
> URL: https://issues.apache.org/jira/browse/KAFKA-3508
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Grant Henke
> Fix For: 0.10.1.0
>
>
> {code}
> Stacktrace
> java.lang.AssertionError: Should support many concurrent calls failed with 
> exception(s) ArrayBuffer(java.util.concurrent.ExecutionException: 
> java.lang.IllegalStateException: Failed to update ACLs for Topic:test after 
> trying a maximum of 10 times)
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at kafka.utils.TestUtils$.assertConcurrent(TestUtils.scala:1123)
>   at 
> kafka.security.auth.SimpleAclAuthorizerTest.testHighConcurrencyModificationOfResourceAcls(SimpleAclAuthorizerTest.scala:335)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:49)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.Executo

[jira] [Updated] (KAFKA-3508) Transient failure in kafka.security.auth.SimpleAclAuthorizerTest.testHighConcurrencyModificationOfResourceAcls

2016-04-05 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3508:
-
   Resolution: Fixed
Fix Version/s: 0.10.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1156
[https://github.com/apache/kafka/pull/1156]

> Transient failure in 
> kafka.security.auth.SimpleAclAuthorizerTest.testHighConcurrencyModificationOfResourceAcls
> --
>
> Key: KAFKA-3508
> URL: https://issues.apache.org/jira/browse/KAFKA-3508
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Grant Henke
> Fix For: 0.10.1.0
>
>
> {code}
> Stacktrace
> java.lang.AssertionError: Should support many concurrent calls failed with 
> exception(s) ArrayBuffer(java.util.concurrent.ExecutionException: 
> java.lang.IllegalStateException: Failed to update ACLs for Topic:test after 
> trying a maximum of 10 times)
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at kafka.utils.TestUtils$.assertConcurrent(TestUtils.scala:1123)
>   at 
> kafka.security.auth.SimpleAclAuthorizerTest.testHighConcurrencyModificationOfResourceAcls(SimpleAclAuthorizerTest.scala:335)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:49)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>

[GitHub] kafka pull request: KAFKA-3505: Fix punctuate generated record met...

2016-04-05 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/1190

KAFKA-3505: Fix punctuate generated record metadata



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K3505

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1190.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1190


commit a8cf941427a9c445945ccecf68775941bc2f5a17
Author: Guozhang Wang 
Date:   2016-04-05T04:28:28Z

use record timestamp for punctuate, and use sentinels instead of exceptions 
for punctuate

commit 469754817de9c7979026e8d2273a41c554130f27
Author: Guozhang Wang 
Date:   2016-04-05T18:28:41Z

fix one checkstyle failure

commit 7b7b07eb0e648d9fb96b09243057f2716f9d8abf
Author: Guozhang Wang 
Date:   2016-04-05T22:10:11Z

add unit tests

commit 0d556838004cde49a9f3b490608483f2a2f6f384
Author: Guozhang Wang 
Date:   2016-04-05T22:20:37Z

minor fixes on unit tests




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3505) Set curRecord in punctuate() functions

2016-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15227248#comment-15227248
 ] 

ASF GitHub Bot commented on KAFKA-3505:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/1190

KAFKA-3505: Fix punctuate generated record metadata



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K3505

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1190.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1190


commit a8cf941427a9c445945ccecf68775941bc2f5a17
Author: Guozhang Wang 
Date:   2016-04-05T04:28:28Z

use record timestamp for punctuate, and use sentinels instead of exceptions 
for punctuate

commit 469754817de9c7979026e8d2273a41c554130f27
Author: Guozhang Wang 
Date:   2016-04-05T18:28:41Z

fix one checkstyle failure

commit 7b7b07eb0e648d9fb96b09243057f2716f9d8abf
Author: Guozhang Wang 
Date:   2016-04-05T22:10:11Z

add unit tests

commit 0d556838004cde49a9f3b490608483f2a2f6f384
Author: Guozhang Wang 
Date:   2016-04-05T22:20:37Z

minor fixes on unit tests




> Set curRecord in punctuate() functions
> --
>
> Key: KAFKA-3505
> URL: https://issues.apache.org/jira/browse/KAFKA-3505
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.10.0.0
>
>
> The punctuate() function in a processor or transformer needs to be handled a 
> bit differently from process(), since it can generate new records to pass 
> through the topology from anywhere in the topology, whereas in the latter case 
> a record is always polled from Kafka and passed via the source processors.
> Today, because we do not set the curRecord correctly, calls to timestamp() / 
> topic() / etc. would actually trigger a KafkaStreamsException.
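
For illustration, a minimal sketch (hypothetical processor, hypothetical 
punctuate interval) of the pattern affected by this issue: forwarding records 
from punctuate() and reading context metadata there relies on the current 
record context being set correctly:

{code}
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

public class PunctuatingProcessor implements Processor<String, String> {
    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
        context.schedule(60 * 1000L);  // ask for punctuate roughly once a minute
    }

    @Override
    public void process(String key, String value) {
        context.forward(key, value);   // metadata here comes from the polled record
    }

    @Override
    public void punctuate(long timestamp) {
        // Without the fix, timestamp() / topic() here can throw because no
        // record context was set for punctuate-generated records.
        context.forward("summary", "ts=" + context.timestamp());
    }

    @Override
    public void close() { }
}
{code}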



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: [KAFKA-3477] [Kafka Streams] extended KStream/...

2016-04-05 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1180


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-3477) Add customizable StreamPartition into #to functions of Streams DSL

2016-04-05 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3477.
--
Resolution: Fixed

Issue resolved by pull request 1180
[https://github.com/apache/kafka/pull/1180]

> Add customizable StreamPartition into #to functions of Streams DSL
> --
>
> Key: KAFKA-3477
> URL: https://issues.apache.org/jira/browse/KAFKA-3477
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>  Labels: newbie
> Fix For: 0.10.0.0
>
>
> In the lower-level Processor API we allow users to pass in a customizable 
> StreamPartitioner when creating a new sink processor node to the topology:
> {code}
> builder.addSink(String name, String topic, StreamPartitioner partitioner, 
> String... parentNames));
> {code}
> This StreamPartitioner allows users to specify any partitioning scheme based 
> on record values instead of using the default behavior of hashing on the 
> message key, but it is not exposed in the higher-level Streams DSL.
> We can add this parameter to the Streams DSL as well:
> {code}
> KStream#to(String topic, StreamPartitioner partitioner);
> KTable#to(String topic, StreamPartitioner partitioner);
> {code}
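
For illustration, a minimal sketch of a custom StreamPartitioner that could be 
passed to the proposed overloads (the PageView value type and its region() 
accessor are hypothetical):

{code}
import org.apache.kafka.streams.processor.StreamPartitioner;

public class RegionPartitioner implements StreamPartitioner<String, PageView> {
    @Override
    public Integer partition(String key, PageView value, int numPartitions) {
        // Route by a field of the value instead of hashing the message key,
        // so all records for one region land in the same partition.
        return (value.region().hashCode() & 0x7fffffff) % numPartitions;
    }
}
{code}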



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3477) Add customizable StreamPartition into #to functions of Streams DSL

2016-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15227308#comment-15227308
 ] 

ASF GitHub Bot commented on KAFKA-3477:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1180


> Add customizable StreamPartition into #to functions of Streams DSL
> --
>
> Key: KAFKA-3477
> URL: https://issues.apache.org/jira/browse/KAFKA-3477
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Guozhang Wang
>Assignee: Matthias J. Sax
>  Labels: newbie
> Fix For: 0.10.0.0
>
>
> In the lower-level Processor API we allow users to pass in a customizable 
> StreamPartitioner when creating a new sink processor node to the topology:
> {code}
> builder.addSink(String name, String topic, StreamPartitioner partitioner, 
> String... parentNames));
> {code}
> This StreamPartitioner allows users to specify any partitioning scheme based 
> on record values instead of using the default behavior of hashing on the 
> message key, but it is not exposed in the higher-level Streams DSL.
> We can add this parameter to the Streams DSL as well:
> {code}
> KStream#to(String topic, StreamPartitioner partitioner);
> KTable#to(String topic, StreamPartitioner partitioner);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3515) Migrate org.apache.kafka.connect.json.JsonSerializer / Deser to common package

2016-04-05 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-3515:


 Summary: Migrate org.apache.kafka.connect.json.JsonSerializer / 
Deser to common package 
 Key: KAFKA-3515
 URL: https://issues.apache.org/jira/browse/KAFKA-3515
 Project: Kafka
  Issue Type: Bug
Reporter: Guozhang Wang
 Fix For: 0.10.0.0


We have these two classes in org.apache.kafka.connect.json but they should 
really be in o.a.k.common.serialization. To maintain backward compatibility we 
need to duplicate them for now and mark the ones in connect as deprecated.
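
For illustration, a minimal sketch of the "duplicate and deprecate" approach 
described above; the common-package class is the proposed destination and does 
not exist yet, and extending it is just one possible way to keep the two copies 
in sync:

{code}
package org.apache.kafka.connect.json;

/** @deprecated Use org.apache.kafka.common.serialization.JsonSerializer instead. */
@Deprecated
public class JsonSerializer extends org.apache.kafka.common.serialization.JsonSerializer {
    // Intentionally empty: kept only so existing Connect users keep compiling
    // against the old package until they migrate to the common one.
}
{code}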



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3515) Migrate org.apache.kafka.connect.json.JsonSerializer / Deser to common package

2016-04-05 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reassigned KAFKA-3515:


Assignee: Guozhang Wang

> Migrate org.apache.kafka.connect.json.JsonSerializer / Deser to common 
> package 
> ---
>
> Key: KAFKA-3515
> URL: https://issues.apache.org/jira/browse/KAFKA-3515
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.10.0.0
>
>
> We have these two classes in org.apache.kafka.connect.json but they should 
> really be in o.a.k.common.serialization. To maintain backward compatibility 
> we need to duplicate them for now and mark the ones in connect as deprecated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[VOTE] KIP-52: Kafka Connect Control APIs

2016-04-05 Thread Jason Gustafson
I'd like to open the vote on KIP-52, which adds several control APIs to
Kafka Connect:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-52%3A+Connector+Control+APIs.
Compared to some of the other active KIPs, this is a relatively small
feature, but it makes administration of Connect clusters much easier. If
adopted, I'm hoping to get this into 0.10.

Thanks!
Jason


Build failed in Jenkins: kafka-trunk-jdk8 #508

2016-04-05 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3489; Update request metrics if a client closes a connection 
while

[me] KAFKA-3508: Fix transient SimpleACLAuthorizerTest failures

[wangguoz] KAFKA-3477: extended KStream/KTable API to specify custom partitioner

--
[...truncated 1591 lines...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testKafkaConfigToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.coordinator.MemberMetadataTest > testMatchesSupportedProtocols PASSED

kafka.coordinator.MemberMetadataTest > testMetadata PASSED

kafka.coordinator.MemberMetadataTest > testMetadataRaisesOnUnsupportedProtocol 
PASSED

kafka.coordinator.MemberMetadataTest > testVoteForPreferredProtocol PASSED

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinat

Jenkins build is back to normal : kafka-trunk-jdk7 #1179

2016-04-05 Thread Apache Jenkins Server
See 



Rebased 0.10.0 on trunk

2016-04-05 Thread Gwen Shapira
Hi Team,

In order to make sure that the eventual 0.10.0 release will include
everything we are working on, we agreed to merge trunk into 0.10.0 branch
on a weekly basis.

I just rebased 0.10.0 on trunk and pushed:

Chens-MacBook-Pro:kafka gwen$ git push apache 0.10.0
Counting objects: 266, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (164/164), done.
Writing objects: 100% (266/266), 33.56 KiB | 0 bytes/s, done.
Total 266 (delta 138), reused 78 (delta 27)
remote: kafka git commit: KAFKA-3477: extended KStream/KTable API to
specify custom partitioner for sinks
remote: kafka git commit: MINOR: update new version in additional places
remote: kafka git commit: KAFKA-3508: Fix transient SimpleACLAuthorizerTest
failures
remote: kafka git commit: KAFKA-3384: Conform to POSIX kill usage
remote: kafka git commit: KAFKA-3489; Update request metrics if a client
closes a connection while the broker response is in flight
remote: kafka git commit: KAFKA-2998: log warnings when client is
disconnected from bootstrap brokers
remote: kafka git commit: KAFKA-3464: Add system tests for Connect with
Kafka security enabled
remote: kafka git commit: HOTFIX: set timestamp in SinkNode
remote: kafka git commit: KAFKA-3495; NetworkClient.blockingSendAndReceive`
should rely on requestTimeout
remote: kafka git commit: KAFKA-3419: clarify difference between topic
subscription and partition assignment
remote: kafka git commit: KAFKA-2844; Separate keytabs for sasl tests
remote: kafka git commit: MINOR: small code optimizations in streams
remote: kafka git commit: MINOR: Add check for empty topics iterator in
ReplicaVerificationTool.
remote: kafka git commit: KAFKA-3407 - ErrorLoggingCallback trims helpful
diagnostic information.
remote: kafka git commit: KAFKA-3445: Validate TASKS_MAX_CONFIG's lower
bound
remote: kafka git commit: MINOR: update new version in additional places
remote: kafka git commit: Changing version to 0.10.1.0-SNAPSHOT
remote: kafka git commit: KAFKA-2930: Update references to ZooKeeper in the
docs.
remote: kafka git commit: KAFKA-3510; OffsetIndex thread safety
remote: kafka git commit: Changing version to 0.10.0.0
remote: kafka git commit: MINOR: Revert 0.10.0 branch to SNAPSHOT per
change in release process
To https://git-wip-us.apache.org/repos/asf/kafka.git
   e0ac36f..0773bc4  0.10.0 -> 0.10.0

If you are seeing any issues, let me know so I can figure out what to do :)

Gwen


Re: [VOTE] KIP-52: Kafka Connect Control APIs

2016-04-05 Thread Gwen Shapira
+1

Super useful, thanks Jason.

On Tue, Apr 5, 2016 at 4:59 PM, Jason Gustafson  wrote:

> I'd like to open the vote on KIP-52, which adds several control APIs to
> Kafka Connect:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-52%3A+Connector+Control+APIs
> .
> Compared to some of the other active KIPs, this is a relatively small
> feature, but it makes administration of Connect clusters much easier. If
> adopted, I'm hoping to get this into 0.10.
>
> Thanks!
> Jason
>


[GitHub] kafka pull request: KAFKA-3515: migrate json serde from connect to...

2016-04-05 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/1191

KAFKA-3515: migrate json serde from connect to common



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K3515

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1191.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1191


commit cf1eceacdf9ed0fdad0ba82d2872c47d4c8933da
Author: Guozhang Wang 
Date:   2016-04-06T00:12:18Z

migrate json serde from connect to common




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3515) Migrate org.apache.kafka.connect.json.JsonSerializer / Deser to common package

2016-04-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15227414#comment-15227414
 ] 

ASF GitHub Bot commented on KAFKA-3515:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/1191

KAFKA-3515: migrate json serde from connect to common



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K3515

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1191.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1191


commit cf1eceacdf9ed0fdad0ba82d2872c47d4c8933da
Author: Guozhang Wang 
Date:   2016-04-06T00:12:18Z

migrate json serde from connect to common




> Migrate org.apache.kafka.connect.json.JsonSerializer / Deser to common 
> package 
> ---
>
> Key: KAFKA-3515
> URL: https://issues.apache.org/jira/browse/KAFKA-3515
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.10.0.0
>
>
> We have these two classes in org.apache.kafka.connect.json but they should 
> really be in o.a.k.common.serialization. To maintain backward compatibility 
> we need to duplicate them for now and mark the ones in connect as deprecated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] KIP-52: Kafka Connect Control APIs

2016-04-05 Thread Ewen Cheslack-Postava
+1

On Tue, Apr 5, 2016 at 5:12 PM, Gwen Shapira  wrote:

> +1
>
> Super useful, thanks Jason.
>
> On Tue, Apr 5, 2016 at 4:59 PM, Jason Gustafson 
> wrote:
>
> > I'd like to open the vote on KIP-52, which adds several control APIs to
> > Kafka Connect:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-52%3A+Connector+Control+APIs
> > .
> > Compared to some of the other active KIPs, this is a relatively small
> > feature, but it makes administration of Connect clusters much easier. If
> > adopted, I'm hoping to get this into 0.10.
> >
> > Thanks!
> > Jason
> >
>



-- 
Thanks,
Ewen


[GitHub] kafka pull request: MINOR: ensure original use of prop_file in ver...

2016-04-05 Thread apovzner
GitHub user apovzner opened a pull request:

https://github.com/apache/kafka/pull/1192

MINOR: ensure original use of prop_file in verifiable producer

This PR: https://github.com/apache/kafka/pull/958 fixed the use of 
prop_file in the situation where we have multiple producers (before, every 
producer would add to the config). However, it assumes that self.prop_file is 
initially "". This is correct for all existing tests, but it precludes us from 
extending the verifiable producer and adding more properties to the producer 
config (same as the console consumer). This is a small PR to restore the 
original behavior while keeping the fix for multiple producers in the 
verifiable producer.

@granders please review.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apovzner/kafka fix_verifiable_producer

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1192.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1192


commit 1ff47c5f64ca22b28b476142a92d7a68966505f5
Author: Anna Povzner 
Date:   2016-04-06T00:45:33Z

MINOR: ensure original use of prop_file in verifiable producer




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [VOTE] KIP-4 Metadata Schema

2016-04-05 Thread Jun Rao
5. You will return no error and 4,5,6 as replicas. The response also
includes a list of live brokers. So the client can figure out 5 is not live
directly w/o relying on the error code.

Thanks,

Jun

On Tue, Apr 5, 2016 at 5:05 PM, Grant Henke  wrote:

> Hi Jun,
>
> See my responses below:
>
> 2. The issues that I was thinking are the following. (a) Say the controller
> > has topic deletion disabled and a topic deletion request is submitted to
> > ZK. In this case, the controller will ignore this request. However, the
> > broker may pick up the topic deletion marker in a transient window. (b)
> > Suppose that a topic is deleted and then recreated immediately. It is
> > possible for a broker to see the newly created topic and then the
> previous
> > topic deletion marker in a transient window. Thinking about this a bit
> > more. Both seem to be transient. So, it may not be a big concern. So, I
> am
> > ok with this as long as the interim solution is not too complicated.
> > Another thing to think through. If a topic is marked for deletion, do we
> > still return the partition level metadata?
>
>
> I am not changing anything about the metadata content, only adding a
> boolean based on the marked for deletion flag in zookeeper. This is
> maintaining the same method that the topics script does today. I do think
> delete improvements should be considered/reviewed. The goal here is to
> allow the broker to report the value that it sees, which is the value in
> zookeeper.
>
> 5. The issue is the following. If you have a partition with 3 replicas
> > 4,5,6, leader is on replica 4 and replica 5 is down. Currently, the
> broker
> > will send a REPLICA_NOT_AVAILABLE error code and only replicas 4,6 in the
> > assigned replicas. It's more intuitive to send no error code and 4,5,6 in
> > the assigned replicas in this case.
>
>
> Should the list with no error code just be 4,6 since 5 is not available?
>
>
> On Mon, Apr 4, 2016 at 1:34 PM, Jun Rao  wrote:
>
> > Grant,
> >
> > 2. The issues that I was thinking are the following. (a) Say the
> controller
> > has topic deletion disabled and a topic deletion request is submitted to
> > ZK. In this case, the controller will ignore this request. However, the
> > broker may pick up the topic deletion marker in a transient window. (b)
> > Suppose that a topic is deleted and then recreated immediately. It is
> > possible for a broker to see the newly created topic and then the
> previous
> > topic deletion marker in a transient window. Thinking about this a bit
> > more. Both seem to be transient. So, it may not be a big concern. So, I
> am
> > ok with this as long as the interim solution is not too complicated.
> > Another thing to think through. If a topic is marked for deletion, do we
> > still return the partition level metadata?
> >
> > 3. Your explanation on controller id seems reasonable to me.
> >
> > 5. The issue is the following. If you have a partition with 3 replicas
> > 4,5,6, leader is on replica 4 and replica 5 is down. Currently, the
> broker
> > will send a REPLICA_NOT_AVAILABLE error code and only replicas 4,6 in the
> > assigned replicas. It's more intuitive to send no error code and 4,5,6 in
> > the assigned replicas in this case.
> >
> > Thanks,
> >
> > Jun
> >
> > On Mon, Apr 4, 2016 at 8:33 AM, Grant Henke  wrote:
> >
> > > Hi Jun,
> > >
> > > Please See my responses below:
> > >
> > > Hmm, I am not sure about the listener approach. It ignores configs like
> > > > enable.topic.deletion and also opens the door for potential ordering
> > > issues
> > > > since now there are two separate paths for propagating the metadata
> to
> > > the
> > > > brokers.
> > >
> > >
> > > This mechanism is very similar to how deletes are tracked on the
> > controller
> > > itself. It is also the same way ACLs are tracked on brokers in the
> > default
> > > implementation. I am not sure I understand what ordering issue there
> > could
> > > be. This is used to report what topics are marked for deletion, which
> > today
> > > has no dependency on enable.topic.deletion. I agree that the delete
> > > mechanism in Kafka has a lot of room for improvement, but the goal in
> > this
> > > change is just to enable reporting it to the user, not to fix/improve
> > > existing issues. If you have an alternate approach that does not
> require
> > > major changes to the controller code, I would be open to investigate
> it.
> > >
> > > Could we just leave out markedForDeletion for now? In the common
> > > > case, if a topic is deleted, it will only be in markedForDeletion
> state
> > > for
> > > > a few milli seconds anyway.
> > >
> > >
> > > I don't think we should leave it out. The point of these changes is to
> > > prevent a user from needing to talk directly to zookeeper. We need a
> way
> > > for a user to see if a topic has been marked for deletion. Given the
> > issues
> > > with the current delete implementation, it's fairly common for a topic
> to
> > > remain marked as deleted for qui

Re: [VOTE] KIP-50 - Enhance Authorizer interface to be aware of supported Principal Types

2016-04-05 Thread Jun Rao
Ashish,

+1 from me. The KIP title is misleading now since it's not just about
principal type. Could we change that?

Thanks,

Jun

On Tue, Apr 5, 2016 at 1:07 PM, Ashish Singh  wrote:

> Jun,
>
> KIP-50 is now updated. Mind taking a look.
>
> On Mon, Apr 4, 2016 at 10:40 PM, Ashish Singh  wrote:
>
> > Jun,
> >
> > Your suggested approach works, will update the KIP and re-initiate
> voting.
> > Thanks!
> >
> > On Sun, Apr 3, 2016 at 8:37 AM, Jun Rao  wrote:
> >
> >> Ashish,
> >>
> >> For the two benefits that you listed, fail fast can be done just in the
> >> implementation w/o getSupportedPrincipalTypes(). For avoiding the same
> >> check in all implementations, I think Grant is thinking of adding some
> ACL
> >> RPC request/response btw the client and broker directly, instead of
> >> writing
> >> the ACL to ZK directly in the future. If we do that, the right place to
> do
> >> any sanity check is probably in each implementation instead of in
> >> AclCommand since AclCommand won't necessarily be the only entry point
> for
> >> changing ACLs. So, I am still not quite convinced that we should
> >> getSupportedPrincipalTypes()
> >> right now. It's easy to add a new method to the interface, but hard to
> >> remove/change an existing method. Grant, could you comment on the
> latter?
> >>
> >> Thanks,
> >>
> >> Jun
> >>
> >> On Sat, Apr 2, 2016 at 11:43 AM, Ashish Singh 
> >> wrote:
> >>
> >> > Hello Jun,
> >> >
> >> >
> >> > On Fri, Apr 1, 2016 at 9:57 PM, Jun Rao  wrote:
> >> >
> >> > > Ashish,
> >> > >
> >> > > Thanks for the KIP.
> >> > >
> >> > > It seems that a specific implementation of Authorizer can reject
> >> invalid
> >> > > user type in addAcl/removeAcl without needing the new
> >> > > getSupportedPrincipalTypes()
> >> > > method, right? It's probably useful to provide the supported user
> >> types
> >> > as
> >> > > information through CLI (e.g., when --help is specified). Then,
> there
> >> may
> >> > > other information that a specific authorizer may want to provide.
> So,
> >> if
> >> > > this is just informational, would it be better to add sth like
> >> > > getDescription() in the Authorizer interface and expose that through
> >> CLI?
> >> > >
> >> > Providing information is definitely an important reason, some other
> >> reasons
> >> > were to fail fast and to avoid same check in all implementations. I
> >> agree
> >> > having a generic getDescription() will be handy for authorizer
> >> > implementations to provide more implementation specific info,
> including
> >> > supported principal types and more. However, do you think other two
> >> reasons
> >> > I mentioned can convince you for current proposal?
> >> >
> >> > >
> >> > > Jun
> >> > >
> >> > > On Wed, Mar 30, 2016 at 11:02 AM, Ashish Singh  >
> >> > > wrote:
> >> > >
> >> > > > Hi Guys,
> >> > > >
> >> > > > I would like to open the vote for KIP-50.
> >> > > >
> >> > > > KIP:
> >> > > >
> >> > > >
> >> > >
> >> >
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-50+-+Enhance+Authorizer+interface+to+be+aware+of+supported+Principal+Types
> >> > > >
> >> > > > Discuss thread: here
> >> > > > <
> >> > > >
> >> > >
> >> >
> >>
> http://mail-archives.apache.org/mod_mbox/kafka-dev/201603.mbox/%3CCAGQG9cUCLDO0owdziDcL9iStXNF1wURyVNcEZedQJg%3DUuC7j%3DQ%40mail.gmail.com%3E
> >> > > > >
> >> > > >
> >> > > > JIRA: https://issues.apache.org/jira/browse/KAFKA-3186
> >> > > >
> >> > > > PR: https://github.com/apache/kafka/pull/861
> >> > > > ​
> >> > > > --
> >> > > >
> >> > > > Regards,
> >> > > > Ashish
> >> > > >
> >> > >
> >> >
> >> >
> >> >
> >> > --
> >> >
> >> > Regards,
> >> > Ashish
> >> >
> >>
> >
> >
> >
> > --
> >
> > Regards,
> > Ashish
> >
>
>
>
> --
>
> Regards,
> Ashish
>


Build failed in Jenkins: kafka-trunk-jdk7 #1180

2016-04-05 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3477: extended KStream/KTable API to specify custom partitioner

--
[...truncated 1602 lines...]

kafka.log.LogTest > testTruncateTo PASSED

kafka.log.LogTest > testCleanShutdownFile PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.CleanerTest > testBuildOffsetMap PASSED

kafka.log.CleanerTest > testSegmentGrouping PASSED

kafka.log.CleanerTest > testCleanSegmentsWithAbort PASSED

kafka.log.CleanerTest > testSegmentGroupingWithSparseOffsets PASSED

kafka.log.CleanerTest > testRecoveryAfterCrash PASSED

kafka.log.CleanerTest > testLogToClean PASSED

kafka.log.CleanerTest > testCleaningWithDeletes PASSED

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceCausesRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentGroupProtocol PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooLarge PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupSessionTimeoutTooSmall PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupEmptyAssignment 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetWithDefaultGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedLeaderShouldRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatRebalanceInProgress PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesRebalancingGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFollowerAfterLeader PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testCommitOffsetInAwaitingSync 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testJoinGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testSyncGroupFromUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupInconsistentProtocolType PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testCommitOffsetFromUnknownGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaveGroupWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testLeaveGroupUnknownConsumerExistingGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupUnknownConsumerNewGroup PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testJoinGroupFromUnchangedFollowerDoesNotRebalance PASSED

kafka.coordinator.GroupCoordinatorR

[jira] [Commented] (KAFKA-3042) updateIsr should stop after failed several times due to zkVersion issue

2016-04-05 Thread Robert Christ (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15227519#comment-15227519
 ] 

Robert Christ commented on KAFKA-3042:
--

I work with James and we have seen this problem repeatedly.  We have been
able to reproduce the problem somewhat reliably and the pattern seems
to be:

1) hard kill the controller (say broker 1)
2) after session timeout, the zookeeper session expires for broker 1
3) another node (say broker 2) takes ownership of the /controller node
4) The zookeeper session for broker 2 expires even though broker 2 continues to 
function (see below)
5) another (say broker 3) takes ownership of the /controller node
6) At some point in the future, possibly after broker 3 finishes taking 
controllership or broker 1 resumes from the hard stop,
broker 2 will spew unending streams of the "Cached zkVersion..." message.
7) Restarting broker 2 will cause the zkVersion problem to go away.

While the zkVersion message is appearing, the ISR lists do not get updated and 
we have under-replicated partitions.

So step 4 is the mystery. I believe it happens because some form of 
network/disk/CPU contention causes the ping from broker 2 not to reach, or not 
to be acknowledged by, ZooKeeper within the session timeout.
We are actively working to try to figure that out but I believe it is 
triggering some race condition or bug where
the active controller loses control of the /controller node and another node 
takes it.

I have logs (oh so many logs) from when this was occurring and can reproduce it 
fairly easily.


> updateIsr should stop after failed several times due to zkVersion issue
> ---
>
> Key: KAFKA-3042
> URL: https://issues.apache.org/jira/browse/KAFKA-3042
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
> Environment: jdk 1.7
> centos 6.4
>Reporter: Jiahongchao
> Attachments: controller.log, server.log.2016-03-23-01, 
> state-change.log
>
>
> Sometimes one broker may repeatedly log
> "Cached zkVersion 54 not equal to that in zookeeper, skip updating ISR"
> I think this is because the broker considers itself the leader when in fact 
> it's a follower.
> So after several failed tries, it needs to find out who the leader is.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3493) Replica fetcher load is not balanced over fetcher threads

2016-04-05 Thread Maysam Yabandeh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15227557#comment-15227557
 ] 

Maysam Yabandeh commented on KAFKA-3493:


[~nehanarkhede], git suggests that you have the most recent context about this 
function. Could you please chime in? 
Thanks.

> Replica fetcher load is not balanced over fetcher threads
> -
>
> Key: KAFKA-3493
> URL: https://issues.apache.org/jira/browse/KAFKA-3493
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.1
>Reporter: Maysam Yabandeh
> Fix For: 0.10.0.1
>
>
> The replicas are not evenly distributed among the fetcher threads. This has 
> caused some fetcher threads to get overloaded, and hence their requests time 
> out frequently. This is especially a big issue when a new node is added to 
> the cluster and the fetch traffic is high. 
> Here is an example run in a test cluster with 10 brokers and 6 fetcher 
> threads (per source broker). A single topic consisting of 500+ partitions was 
> assigned to have a replica for each partition on the newly added broker.
> {code}[kafka-jetstream.canary]myabandeh@sjc8c-rl17-23b:~$ for i in `seq 0 5`; 
> do grep ReplicaFetcherThread-$i- /var/log/kafka/server.log | grep "reset its 
> fetch offset from 0" | wc -l; done
> 85
> 83
> 85
> 83
> 85
> 85
> [kafka-jetstream.canary]myabandeh@sjc8c-rl17-23b:~$ for i in `seq 0 5`; do 
> grep ReplicaFetcherThread-$i-22 /var/log/kafka/server.log | grep "reset its 
> fetch offset from 0" | wc -l; done
> 15
> 1
> 13
> 1
> 14
> 1
> {code}
> The problem is that AbstractFetcherManager::getFetcherId method does not take 
> the broker id into account:
> {code}
>   private def getFetcherId(topic: String, partitionId: Int) : Int = {
> Utils.abs(31 * topic.hashCode() + partitionId) % numFetchers
>   }
> {code}
> Hence, although the replicas are evenly distributed among the fetcher ids 
> across all source brokers, this is not necessarily the case for each broker 
> separately. 
> I think a random function would do a much better job of distributing the 
> load over the fetcher threads from each source broker.
> Thoughts?
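>
> As a rough editorial sketch of the idea above (not a committed patch), the fetcher id 
> could also fold the source broker id into the hash, so that each broker's partitions 
> spread over the fetcher threads independently; brokerId is assumed to be available to 
> the caller.
> {code}
>   // Sketch only: a getFetcherId variant that also hashes the source broker id.
>   // Utils.abs and numFetchers are as in the snippet above; brokerId is assumed
>   // to be passed in by the caller.
>   private def getFetcherId(topic: String, partitionId: Int, brokerId: Int): Int =
>     Utils.abs(31 * (31 * topic.hashCode() + partitionId) + brokerId) % numFetchers
> {code}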



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-0.10.0-jdk7 #33

2016-04-05 Thread Apache Jenkins Server
See 

Changes:

[cshapi] Changing version to 0.10.1.0-SNAPSHOT

[cshapi] MINOR: update new version in additional places

[cshapi] KAFKA-3445: Validate TASKS_MAX_CONFIG's lower bound

[cshapi] KAFKA-3407 - ErrorLoggingCallback trims helpful diagnostic information.

[cshapi] MINOR: Add check for empty topics iterator in ReplicaVerificationTool.

[cshapi] KAFKA-2844; Separate keytabs for sasl tests

[cshapi] KAFKA-2930: Update references to ZooKeeper in the docs.

[cshapi] MINOR: small code optimizations in streams

[cshapi] KAFKA-3419: clarify difference between topic subscription and partition

[cshapi] KAFKA-3495; NetworkClient.blockingSendAndReceive` should rely on

[cshapi] HOTFIX: set timestamp in SinkNode

[cshapi] KAFKA-3464: Add system tests for Connect with Kafka security enabled

[cshapi] KAFKA-2998: log warnings when client is disconnected from bootstrap

[cshapi] KAFKA-3384: Conform to POSIX kill usage

[cshapi] KAFKA-3510; OffsetIndex thread safety

[cshapi] KAFKA-3489; Update request metrics if a client closes a connection 
while

[cshapi] KAFKA-3508: Fix transient SimpleACLAuthorizerTest failures

[cshapi] KAFKA-3477: extended KStream/KTable API to specify custom partitioner

[cshapi] Changing version to 0.10.0.0

[cshapi] MINOR: update new version in additional places

[cshapi] MINOR: Revert 0.10.0 branch to SNAPSHOT per change in release process

--
[...truncated 3124 lines...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.LogConfigTest > testFromPropsEmpty PASSED

kafka.log.LogConfigTest > testKafkaConfigToProps PASSED

kafka.log.LogConfigTest > testFromPropsInvalid PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.coordinator.MemberMetadataTest > testMatchesSupportedProtocols PASSED

kafka.coordinator.MemberMetadataTest > testMetadata PASSED

kafka.coordinator.MemberMetadataTest > testMetadataRaisesOnUnsupportedProtocol 
PASSED

kafka.coordinator.MemberMetadataTest > testVoteForPreferredProtocol PASSED

kafka.coordinator.MemberMetadataTest > testVoteRaisesOnNoSupportedProtocols 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatWrongCoordinator 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupStable PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatIllegalGeneration 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testDescribeGroupWrongCoordinator PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testDescribeGroupRebalancing 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testLeaderFailureInSyncGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testGenerationIdIncrementsOnRebalance PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testSyncGroupFromIllegalGeneration PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testInvalidGroupId PASSED

kafka.coordinator.GroupCoordinatorResponseTest > testHeartbeatUnknownGroup 
PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testListGroupsIncludesStableGroups PASSED

kafka.coordinator.GroupCoordinatorResponseTest > 
testHeartbeatDuringRebalanceC

Re: [DISCUSS] KIP-50 - Enhance Authorizer interface to be aware of supported Principal Types

2016-04-05 Thread Jay Kreps
Given that we're breaking compatibility anyway should we:
1. Remove the get prefix on this method and the existing one which violate
our own code style guidelines (Oops! Kind of sad we went through the whole
KIP process and no one even flagged this)
2. Move the interface out of scala to be a normal Java interface

This breaks source compatibility, but it is probably what we should have done
originally, I suspect. Probably there are few enough implementations of this
that it is better to just rip the bandaid off.

-Jay

On Sat, Apr 2, 2016 at 11:18 AM, Ashish Singh  wrote:

> Hello Ismael,
>
> On Fri, Apr 1, 2016 at 4:08 PM, Ismael Juma  wrote:
>
> > Hi Ashish,
> >
> > A few comments on the proposal:
> >
> > 1. In Kafka, we don't use the getter convention generally. However, the
> > other methods in the `Authorizer` interface do follow the getter
> > convention, which is unfortunate. So, I am OK with the name you suggested
> > (getSupportedPrincipalTypes instead of supportedPrincipalTypes), but I
> > wanted to mention this in case others have a different opinion.
> >
> If multiple people feel the same, I am happy to rename the method.
>
> >
> > 2. The proposed change to the Authorizer trait (adding a method with a
> > default implementation) is source compatible, but _not_ binary compatible.
> > So, it won't be possible for someone to implement the Authorizer and
> > compile it once so that it works with both Kafka 0.9.0.x and Kafka
> > 0.10.0.x. Not sure how much of an issue this is, but it's worth mentioning
> > it in the KIP.
> >
> Right, will mention that. It is binary incompatible only when someone uses
> >= 0.10 AdminCommand with an Authorizer implementation based on 0.9.
>
> >
> > 3. If an Authorizer wanted to support any type of principal without
> > specifying them, is there a way to do that? Is it something that we want to
> > support? Before the KIP was proposed, there was a discussion in the PR
> > about different principal types potentially being used by the
> > authentication layer where the Authorizer is agnostic.
> >
> Authorizer is agnostic of principal types right now. As we have already
> seen, this opens up space for user errors. We want to keep kafka-acls
> generic so that various authorizer implementations can use the same CLI.
> Each implementation has the freedom to support its own principal types.
> How do we expect users to know what principal types are supported by a
> particular implementation? Sure, we can document it; we have it documented
> for SimpleAclAuthorizer, yet users still ran into issues. The worst part is
> that this error happens silently: invalid ACLs are persisted by the
> authorizer, as authorizers themselves are not aware of the principal types
> they support. This is what the KIP is trying to solve.
>
>
> > 4. There is a PR for introducing a "group" principal type (
> > https://github.com/apache/kafka/pull/483/files), would that have any
> > impact
> > on this proposal?
> >
> The override of getSupportedPrincipalTypes in SimpleAclAuthorizer will have
> to return group as well.
>
> >
> > Ismael
> >
> > On Mon, Mar 28, 2016 at 9:28 PM, Ashish Singh 
> wrote:
> >
> > > Hello Harsha,
> > >
> > > Pinging again. This is a minor KIP and it has been lying around for quite
> > > some time. If providing supported principal types via a config is what you
> > > suggest, I am fine with it.
> > >
> > > On Wed, Mar 9, 2016 at 12:32 PM, Ashish Singh 
> > wrote:
> > >
> > > > Hi Harsha,
> > > >
> > > > On Wed, Mar 9, 2016 at 12:04 PM, Harsha  wrote:
> > > >
> > > >> Why do we need to add this additional method just for validation? This
> > > >> will invalidate all the existing authorizer implementations.
> > > >
> > > > As PrincipalTypes is implementation specific, wouldn't it be required
> > > > for users to know what principal types they can use in add/removeAcls?
> > > >
> > > > All the existing authorizer implementations will continue to work, as
> > > > the method by default will return List(User), as User is the only
> > > > principal type that is supported out of the box as of now. Let me know
> > > > if I missed your question here.
> > > >
> > > >
> > > >> Why can't we add
> > > >> the logic for validation and pass it as authorizer config.
> > > >>
> > > > Do you mean passing PrincipalTypes as authorizer config? If I am
> > > > getting your question correctly, then we are asking users to be aware
> > > > of what PrincipalTypes an authorizer supports. That defeats the purpose
> > > > of validation, right?
> > > >
> > > >>
> > > >> -Harsha
> > > >>
> > > >> On Mon, Mar 7, 2016, at 04:33 PM, Ashish Singh wrote:
> > > >> > + Parth, Harsha
> > > >> >
> > > >> > On Mon, Mar 7, 2016 at 4:32 PM, Ashish Singh  >
> > > >> wrote:
> > > >> >
> > > >> > > Thanks Gwen.
> > > >> > >
> > > >> > > @Parth, @Harsha pinging you guys for your feedback. Based on the
> > > >> > > discussion on JIRA, we have the following open questions.
> > > >> > >
> > > >> > >1.
> > > >> > >
>
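
As an editorial illustration of the validation idea discussed in this thread (the helper
below is hypothetical and not the actual kafka-acls code; getSupportedPrincipalTypes is the
method proposed in KIP-50), the CLI or the authorizer could check a requested principal type
against the types the authorizer advertises, so invalid ACLs fail loudly instead of being
persisted silently:
{code}
// Sketch only: PrincipalTypeValidation and its helper are hypothetical;
// getSupportedPrincipalTypes is the method proposed in KIP-50.
object PrincipalTypeValidation {
  def validate(principalType: String, supportedTypes: Set[String]): Unit = {
    if (!supportedTypes.contains(principalType))
      throw new IllegalArgumentException(
        s"Principal type '$principalType' is not supported. Supported types: " +
          supportedTypes.mkString(", "))
  }
}

// Example: an authorizer that only advertises "User" rejects a "Group" ACL up front.
// PrincipalTypeValidation.validate("Group", Set("User"))  // throws IllegalArgumentException
{code}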

Re: [VOTE] KIP-52: Kafka Connect Control APIs

2016-04-05 Thread Jay Kreps
+1

On Tue, Apr 5, 2016 at 4:59 PM, Jason Gustafson  wrote:

> I'd like to open the vote on KIP-52, which adds several control APIs to
> Kafka Connect:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-52%3A+Connector+Control+APIs
> .
> Compared to some of the other active KIPs, this is a relatively small
> feature, but it makes administration of Connect clusters much easier. If
> adopted, I'm hoping to get this into 0.10.
>
> Thanks!
> Jason
>


Re: [VOTE] KIP-52: Kafka Connect Control APIs

2016-04-05 Thread Neha Narkhede
+1

On Tue, Apr 5, 2016 at 7:53 PM, Jay Kreps  wrote:

> +1
>
> On Tue, Apr 5, 2016 at 4:59 PM, Jason Gustafson 
> wrote:
>
> > I'd like to open the vote on KIP-52, which adds several control APIs to
> > Kafka Connect:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-52%3A+Connector+Control+APIs
> > .
> > Compared to some of the other active KIPs, this is a relatively small
> > feature, but it makes administration of Connect clusters much easier. If
> > adopted, I'm hoping to get this into 0.10.
> >
> > Thanks!
> > Jason
> >
>



-- 
Thanks,
Neha


Re: [VOTE] KIP-52: Kafka Connect Control APIs

2016-04-05 Thread Guozhang Wang
+1

On Tue, Apr 5, 2016 at 8:06 PM, Neha Narkhede  wrote:

> +1
>
> On Tue, Apr 5, 2016 at 7:53 PM, Jay Kreps  wrote:
>
> > +1
> >
> > On Tue, Apr 5, 2016 at 4:59 PM, Jason Gustafson 
> > wrote:
> >
> > > I'd like to open the vote on KIP-52, which adds several control APIs to
> > > Kafka Connect:
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-52%3A+Connector+Control+APIs
> > > .
> > > Compared to some of the other active KIPs, this is a relatively small
> > > feature, but it makes administration of Connect clusters much easier. If
> > > adopted, I'm hoping to get this into 0.10.
> > >
> > > Thanks!
> > > Jason
> > >
> >
>
>
>
> --
> Thanks,
> Neha
>



-- 
-- Guozhang


assigning jiras

2016-04-05 Thread sunil kalva
Hi
I was trying to assign a JIRA to myself to start working, but it looks like I
don't have permission.
Can someone give me access?

my id: sunilkalva

-- 
SunilKalva


Re: [DISCUSS] KIP-50 - Enhance Authorizer interface to be aware of supported Principal Types

2016-04-05 Thread Ismael Juma
Hi Jay,

On Wed, Apr 6, 2016 at 3:48 AM, Jay Kreps  wrote:

> Given that we're breaking compatibility anyway should we:
>

We are not breaking source compatibility since the new method has a default
implementation. I take it that you mean binary compatibility?


> 1. Remove the get prefix on this method and the existing one which violate
> our own code style guidelines (Oops! Kind of sad we went through the whole
> KIP process and no one even flagged this)
>

I did flag this during the discussion and Ashish said he would change it if
other people felt that it should be changed.


> 2. Move the interface out of scala to be a normal Java interface
>
> This breaks source compatibility, but it is probably what we should have done
> originally, I suspect. Probably there are few enough implementations of this
> that it is better to just rip the bandaid off.
>

Can you please explain the motivation? It did come up in previous
discussions that some things like Operation and ResourceType should be in
the clients library, but not Authorizer itself. Are we saying that any
pluggable interface should be in Java so that users can implement it
without including the Scala library?

Grant, you originally suggested that some things would have to be in the
Java side as well. Can you please elaborate on this?

Ismael
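
For readers following the thread, a minimal sketch of the approach under discussion
(simplified names and types; not the exact KIP-50 / kafka.security.auth API): a new method
on the Scala Authorizer trait with a default body keeps existing implementations compiling
unchanged (source compatible), but classes compiled against the old trait lack the method
at runtime, which is the binary-compatibility caveat mentioned above.
{code}
// Sketch only: simplified trait, not the exact KIP-50 API.
trait Authorizer {
  // ... existing methods such as authorize/addAcls/removeAcls elided ...

  // New method with a default implementation: existing sources still compile,
  // but classes compiled against the old trait will not have this method at
  // runtime (hence "source compatible, not binary compatible").
  def getSupportedPrincipalTypes: List[String] = List("User")
}

// An implementation that also supports a "Group" principal type would override it:
// override def getSupportedPrincipalTypes: List[String] = List("User", "Group")
{code}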


Re: [DISCUSS] KIP-52 - Add Connector Control APIs

2016-04-05 Thread Ismael Juma
Thank you, Jason. One more question: the response codes for all endpoints are
described as:

Response Codes: 202 (Accepted) on successful restart initiation, 404 if the
connector doesn't exist

What is the response code in the no-op case?

Ismael


On Mon, Apr 4, 2016 at 6:52 PM, Jason Gustafson  wrote:

> Hey Ismael, thanks for having a look. I've changed pause/resume to use PUT.
>
> -Jason
>
> On Sun, Apr 3, 2016 at 7:30 PM, Ewen Cheslack-Postava 
> wrote:
>
> > Ismael,
> >
> > Great point. Pause and resume should be idempotent and actually represent
> > updating a resource that gets written to Kafka (although I must admit I
> > don't know if the use of 202/Accepted should affect this at all); the
> > restart endpoints seem a bit different, as they are one-off immediate
> > commands.
> >
> > -Ewen
> >
> > On Fri, Apr 1, 2016 at 3:52 PM, Ismael Juma  wrote:
> >
> > > Hi Jason,
> > >
> > > Do I understand correctly that these requests are idempotent? If so, why
> > > are they POSTs instead of PUTs?
> > >
> > > Ismael
> > >
> > > On Thu, Mar 31, 2016 at 5:31 PM, Jason Gustafson 
> > > wrote:
> > >
> > > > Hi All, I've written a short KIP to add control APIs to Kafka Connect
> > > > to make administration easier:
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-52%3A+Connector+Control+APIs
> > > > .
> > > > Please let me know your thoughts.
> > > >
> > > > Thanks,
> > > > Jason
> > > >
> > >
> >
> >
> >
> > --
> > Thanks,
> > Ewen
> >
>
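
For context, a small editorial sketch (in Scala, against a hypothetical Connect worker at
localhost:8083 and a connector named "my-sink"; the exact endpoint paths are assumptions
based on this thread) of how a client might call the proposed control endpoints, with
pause/resume as idempotent PUTs and restart as a one-off POST:
{code}
import java.net.{HttpURLConnection, URL}

// Sketch only: host, port, connector name and exact paths are assumptions.
object ConnectControlSketch {
  private def request(method: String, path: String): Int = {
    val conn = new URL(s"http://localhost:8083$path")
      .openConnection().asInstanceOf[HttpURLConnection]
    conn.setRequestMethod(method)
    val code = conn.getResponseCode // e.g. 202 Accepted, or 404 if the connector doesn't exist
    conn.disconnect()
    code
  }

  def main(args: Array[String]): Unit = {
    println(request("PUT", "/connectors/my-sink/pause"))    // idempotent
    println(request("PUT", "/connectors/my-sink/resume"))   // idempotent
    println(request("POST", "/connectors/my-sink/restart")) // one-off command
  }
}
{code}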