[GitHub] kafka pull request #2543: MINOR: Remove unused MessageWriter

2017-02-12 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/2543

MINOR: Remove unused MessageWriter



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka remove-message-writer

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2543.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2543


commit bb23ce17a1249b27e0c49dd4b2690b0e13b6aad7
Author: Jason Gustafson 
Date:   2017-02-13T02:01:43Z

MINOR: Remove unused MessageWriter




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #1272

2017-02-12 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4758; Connect missing checks for NO_TIMESTAMP

--
[...truncated 4115 lines...]

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopicWithCollision STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.PlaintextTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics STARTED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.MetricsDuringTopicCreationDeletionTest > 

[jira] [Resolved] (KAFKA-4758) Connect WorkerSinkTask is missing checks for NO_TIMESTAMP

2017-02-12 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-4758.

   Resolution: Fixed
Fix Version/s: 0.10.3.0

Issue resolved by pull request 2533
[https://github.com/apache/kafka/pull/2533]

> Connect WorkerSinkTask is missing checks for NO_TIMESTAMP
> ---------------------------------------------------------
>
> Key: KAFKA-4758
> URL: https://issues.apache.org/jira/browse/KAFKA-4758
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Jason Gustafson
>Assignee: Ryan P
> Fix For: 0.10.3.0
>
>
> The current check for NO_TIMESTAMP_TYPE is not sufficient. Up-converted 
> messages will have a timestamp type, but if the topic is set to use 
> CREATE_TIME, the timestamp will be NO_TIMESTAMP (-1). We should use {{null}} 
> in that case.
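
A minimal sketch of the stricter check described above (illustrative only: the class and method names here are assumed, and this is not the actual WorkerSinkTask patch):

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.common.record.Record;
    import org.apache.kafka.common.record.TimestampType;

    public class TimestampCheckSketch {
        // Treat NO_TIMESTAMP (-1) as "no timestamp" even when an up-converted
        // message carries a timestamp type such as CREATE_TIME.
        static Long effectiveTimestamp(ConsumerRecord<?, ?> msg) {
            if (msg.timestampType() == TimestampType.NO_TIMESTAMP_TYPE
                    || msg.timestamp() == Record.NO_TIMESTAMP) // NO_TIMESTAMP == -1L
                return null;
            return msg.timestamp();
        }
    }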



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] kafka pull request #2533: KAFKA-4758: Checking the TimestampType is insuffic...

2017-02-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2533




[jira] [Commented] (KAFKA-4758) Connect WorkerSinkTask is missing checks for NO_TIMESTAMP

2017-02-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863055#comment-15863055
 ] 

ASF GitHub Bot commented on KAFKA-4758:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2533


> Connect WorkerSinkTask is missing checks for NO_TIMESTAMP
> ---------------------------------------------------------
>
> Key: KAFKA-4758
> URL: https://issues.apache.org/jira/browse/KAFKA-4758
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Jason Gustafson
>Assignee: Ryan P
> Fix For: 0.10.3.0
>
>
> The current check for NO_TIMESTAMP_TYPE is not sufficient. Up-converted 
> messages will have a timestamp type, but if the topic is set to use 
> CREATE_TIME, the timestamp will be NO_TIMESTAMP (-1). We should use {{null}} 
> in that case.





Re: [VOTE] 0.10.2.0 RC1

2017-02-12 Thread Guozhang Wang
Hello Ian,

I looked at the release artifact and tried running a simple streams app
with multiple threads against the jar, but could not reproduce your issue. If
you are still blocked on this, could you provide a code sketch, without any
app business logic, for me to try to reproduce it locally?
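
For illustration, a skeleton of the kind of code sketch being asked for might look like this (hypothetical: topic names, bootstrap servers, and thread count are assumptions, using the 0.10.2-era KStreamBuilder API; this is not Ian's actual app):

    import java.util.Properties;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStreamBuilder;

    public class MultiThreadedSkeleton {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "text-pipeline");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // multiple stream threads, as in the reported setup
            props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 8);

            // no business logic: read one topic and write it straight back out
            KStreamBuilder builder = new KStreamBuilder();
            builder.stream("input-topic").to("output-topic");

            new KafkaStreams(builder, props).start();
        }
    }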


Guozhang


On Sun, Feb 12, 2017 at 2:54 AM, Ian Duffy  wrote:

> Hi Matthias,
>
> Thank you for your fast response.
> I am not using any custom class loaders and it is the 10.2 jar that is
> being used.
>
> I'll try clearing out the state on next failure.
>
> The config parameters we are setting are:
>
> consumer:
> heartbeat.interval.ms = "100"
> auto.offset.reset = "earliest"
> group.id = "text-pipeline"
> session.timeout.ms = "1"
> max.poll.interval.ms = "30"
> max.poll.records = "500"
>
> stream:
>   num.stream.threads = "8"
>
> Within 10.1 we saw this issue:
> https://issues.apache.org/jira/browse/KAFKA-4582; with 10.2 that has
> changed to the above exception and hanging while attempting to rejoin the
> group id.
>
>
> On 11 February 2017 at 04:23, Matthias J. Sax 
> wrote:
>
> > Hi Ian,
> >
> > thanks for reporting this. I had a look at the stack trace and code and
> > the whole situation is quite confusing. The exception itself is expected
> > but we have a try-catch block that should swallow the exception and it
> > should never bubble up:
> >
> > In
> >   AbstractTaskCreator.retryWithBackoff
> >
> > a call to
> >   TaskCreator.createTask
> >
> > is done (cf. your stack trace). This call is guarded against a
> > LockException (cf. the StreamThread.java code):
> >
> > > try {
> > > createTask(taskId, partitions);
> > > it.remove();
> > > } catch (final LockException e) {
> > > // ignore and retry
> > > log.warn("Could not create task {}. Will retry.", taskId, e);
> > > }
> >
> >
> > Can you verify that you loaded the correct jar file when running the
> > test? I.e., that it's not a caching issue loading old code, etc.
> >
> > Another theory is about class loading. Do you use custom class loaders?
> >
> > One more thing you can try out is to delete the local app state
> > directory. This will give you a clean restart -- at the cost of state
> > recreation (for the first start only). Afterward, stop and restart your
> > app to see if the issue is resolved.
> >
> >
> > Right now, I cannot reproduce the problem.
> >
> >
> > -Matthias
> >
> >
> >
> > On 2/10/17 9:47 AM, Ian Duffy wrote:
> > > Seeing the following failure when using multi-threaded streams
> > >
> > > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: org.apache.kafka.streams.errors.LockException: task [0_21] Failed to lock the state directory: /tmp/kafka-streams/text_pipeline_id/0_21
> > > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.streams.processor.internals.ProcessorStateManager.<init>(ProcessorStateManager.java:102)
> > > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:73)
> > > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:108)
> > > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:834)
> > > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1207)
> > > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1180)
> > > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:937)
> > > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.streams.processor.internals.StreamThread.access$500(StreamThread.java:69)
> > > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:236)
> > > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:255)
> > > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:339)
> > > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
> > 

[jira] [Commented] (KAFKA-4732) Unstable test: KStreamKTableJoinIntegrationTest.shouldCountClicksPerRegion[1]

2017-02-12 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15862912#comment-15862912
 ] 

Guozhang Wang commented on KAFKA-4732:
--

New instances of this failure:

https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/1645/
https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/1648/

{code}
Stacktrace

java.lang.AssertionError: Condition not met within timeout 3. Expecting 3 records from topic output-topic-2 while only received 0: []
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
at org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:207)
at org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:176)
at org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest.shouldCountClicksPerRegion(KStreamKTableJoinIntegrationTest.java:295)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 

Jenkins build is back to normal : kafka-trunk-jdk8 #1271

2017-02-12 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-4652) Improve test coverage KStreamBuilder

2017-02-12 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15862883#comment-15862883
 ] 

Bill Bejeck commented on KAFKA-4652:


Missed these as well in the auto-offset-reset PR; picking them up.

> Improve test coverage KStreamBuilder
> ------------------------------------
>
> Key: KAFKA-4652
> URL: https://issues.apache.org/jira/browse/KAFKA-4652
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Bill Bejeck
>Priority: Minor
> Fix For: 0.10.3.0
>
>
> Some methods are not covered, e.g.,
> {{public <K, V> KTable<K, V> table(AutoOffsetReset offsetReset, String topic,
> final String storeName)}}
> {{public <K, V> KStream<K, V> stream(Serde<K> keySerde, Serde<V> valSerde,
> Pattern topicPattern)}}
> {{public <K, V> KStream<K, V> stream(AutoOffsetReset offsetReset, String...
> topics)}}
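
For illustration, a sketch that would exercise the uncovered overloads listed above (assumptions: a trunk-era KStreamBuilder where AutoOffsetReset is the TopologyBuilder.AutoOffsetReset enum, and invented topic/store names; not an actual test from the eventual PR):

    import java.util.regex.Pattern;
    import org.apache.kafka.common.serialization.Serde;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.kstream.KStreamBuilder;
    import org.apache.kafka.streams.processor.TopologyBuilder;

    public class KStreamBuilderCoverageSketch {
        public static void main(String[] args) {
            KStreamBuilder builder = new KStreamBuilder();
            Serde<String> serde = Serdes.String();

            // Serde + Pattern overload from the list above
            builder.stream(serde, serde, Pattern.compile("topic-[a-z]+"));

            // AutoOffsetReset overloads from the list above
            builder.stream(TopologyBuilder.AutoOffsetReset.EARLIEST, "topic-a");
            builder.table(TopologyBuilder.AutoOffsetReset.LATEST, "topic-b", "store-b");
        }
    }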





Re: [DISCUSS] KIP-117: Add a public AdministrativeClient API for Kafka admin operations

2017-02-12 Thread Jay Kreps
Hey Colin,

Thanks for the hard work on this. I know going back and forth on APIs is
kind of frustrating but we're at the point where these things live long
enough and are used by enough people that it is worth the pain. I'm sure
it'll come down in the right place eventually. A couple of things I've found
have helped in the past:

   1. The burden of evidence needs to fall on the complicator. i.e. if
   person X thinks the api should be async they need to produce a set of
   common use cases that require this. Otherwise you are perpetually having to
   think "we might need x". I think it is good to have a rule of "simple until
   proven insufficient".
   2. Make sure we frame things for the intended audience. At this point
   our apis get used by a very broad set of Java engineers. This is a very
   different audience from our developer mailing list. These people code for a
   living not necessarily as a passion, and may not understand details of the
   internals of our system or even basic things like multi-threaded
   programming. I don't think this means we want to dumb things down, but
   rather try really hard to make things truly simple when possible.

Okay, here are a couple of comments:

   1. Conceptually what is a TopicContext? I think it means something like
   TopicAdmin? It is not literally context about Topics right? What is the
   relationship of Contexts to clients? Is there a threadsafety difference?
   Would be nice to not have to think about this, this is what I mean by
   "conceptual weight". We introduce a new concept that is a bit nebulous that
   I have to figure out to use what could be a simple api. I'm sure you've
   been through this experience before where you have these various objects
   and you're trying to figure out what they represent (the connection to the
   server? the information to create a connection? a request session?).
   2. We've tried to avoid the Impl naming convention. In general the rule
   has been if there is only going to be one implementation you don't need an
   interface. If there will be multiple, distinguish it from the others. The
   other clients follow this pattern: Producer, KafkaProducer, MockProducer;
   Consumer, KafkaConsumer, MockConsumer.
   3. We generally don't use setters or getters as a naming convention. I
   personally think mutating the setting in place seems kind of like late 90s
   Java style. I think it likely has thread-safety issues. i.e. even if it is
   volatile you may not get the value you just set if there is another
   thread... I actually really liked what you described as your original idea
   of having a single parameter object like CreateTopicRequest that holds all
   these parameters and defaults. This lets you evolve the api with all the
   various combinations of arguments without overloading insanity. After doing
   literally tens of thousands of remote APIs at LinkedIn we eventually
   converged on a rule, which is ultimately every remote api needs a single
   argument object you can add to over time and it must be batched. Which
   brings me to my next point...
   4. I agree batch apis are annoying but I suspect we'll end up adding
   one. Doing 1000 requests for 1000 operations if creating or deleting will
   be bad, right? This won't be the common case, but when you do it it will be
   a deal-breaker problem. I don't think we should try to fix this one behind
   the scenes.
   5. Are we going to do CompletableFuture (which requires Java 8) or
   plain Future? A plain Future is utterly useless for most things other than
   blocking on get(); see the sketch after this list. If we can evolve in
   place from Future to CompletableFuture that is fantastic (we could do it
   for the producer too!). My belief was that this was binary incompatible,
   but I actually don't know (obviously it's source compatible).

-Jay

On Wed, Feb 8, 2017 at 5:00 PM, Colin McCabe  wrote:

> Hi all,
>
> I made some major revisions to the proposal on the wiki, so please check
> it out.
>
> The new API is based on Ismael's suggestion of grouping related APIs.
> There is only one layer of grouping.  I think that it's actually pretty
> intuitive.  It's also based on the idea of using Futures, which several
> people commented that they'd like to see.
>
> Here's a simple example:
>
>  > AdminClient client = new AdminClientImpl(myConfig);
>  > try {
>  >   client.topics().create("foo", 3, (short) 2, false).get();
>  >   Collection<String> topicNames = client.topics().list(false).get();
>  >   log.info("Found topics: {}", Utils.mkString(topicNames, ", "));
>  >   Collection<Node> nodes = client.nodes().list().get();
>  >   log.info("Found cluster nodes: {}", Utils.mkString(nodes, ", "));
>  > } finally {
>  >   client.close();
>  > }
>
> The good thing is, there is no Try, no 'get' prefixes, no messing with
> batch APIs.  If there is an error, then Future#get() throws an
> ExecutionException which wraps the relevant exception in the standard
> Java way.
>
> Here's a slightly 

[jira] [Assigned] (KAFKA-4652) Improve test coverage KStreamBuilder

2017-02-12 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reassigned KAFKA-4652:
--

Assignee: Bill Bejeck

> Improve test coverage KStreamBuilder
> ------------------------------------
>
> Key: KAFKA-4652
> URL: https://issues.apache.org/jira/browse/KAFKA-4652
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>Assignee: Bill Bejeck
>Priority: Minor
> Fix For: 0.10.3.0
>
>
> Some methods are not covered, e.g.,
> {{public <K, V> KTable<K, V> table(AutoOffsetReset offsetReset, String topic,
> final String storeName)}}
> {{public <K, V> KStream<K, V> stream(Serde<K> keySerde, Serde<V> valSerde,
> Pattern topicPattern)}}
> {{public <K, V> KStream<K, V> stream(AutoOffsetReset offsetReset, String...
> topics)}}





[GitHub] kafka pull request #2534: MINOR: Update SimpleAclAuthorizer.scala

2017-02-12 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2534




Re: log.flush.interval.messages setting of Kafka 0.9.0.0

2017-02-12 Thread +8618611626818
Can someone help explain this?

Sent from my super phone

On 2016-12-16 at 18:17, Json Tu wrote:
>
> Hi all,
> We have a 0.9.0.0 cluster with 3 nodes and a topic with 3 replicas, and we
> send to it with acks=-1; our average send latency is 7ms. I am preparing to
> optimize the cluster's performance by adjusting some parameters.
> We found that our brokers have the following config item set:
> log.flush.interval.messages=1
> while the other relevant parameters are at their defaults. I see that the
> default value of log.flush.interval.messages is LONG.MAX_VALUE; setting this
> config forces an active flush, which may affect performance. I wonder: can I
> remove this config item's setting and use the default value?
>
> I think using the default value may have the two drawbacks below.
> 1. The recovery checkpoint cannot be updated, so when loading segments it
> will scan from beginning to end.
> 2. It may lose data when the leader partition's broker's VM restarts, but I
> think 3 replicas can remedy this drawback if the network between them is good.
>
> Any suggestions? Thank you
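
For reference, the two settings being weighed would look like this in a broker's server.properties (illustrative snippet; the default noted above is LONG.MAX_VALUE):

    # Current setting: force a flush (fsync) after every single message.
    log.flush.interval.messages=1

    # Default: never flush based on message count (Long.MAX_VALUE); rely on the
    # OS page cache plus replication (3 replicas, acks=-1) for durability.
    #log.flush.interval.messages=9223372036854775807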


Re: [VOTE] 0.10.2.0 RC1

2017-02-12 Thread Ian Duffy
Hi Matthias,

Thank you for your fast response.
I am not using any custom class loaders and it is the 10.2 jar that is
being used.

I'll try clearing out the state on next failure.

The config parameters we are setting are:

consumer:
heartbeat.interval.ms = "100"
auto.offset.reset = "earliest"
group.id = "text-pipeline"
session.timeout.ms = "1"
max.poll.interval.ms = "30"
max.poll.records = "500"

stream:
  num.stream.threads = "8"

Within 10.1 we saw this issue:
https://issues.apache.org/jira/browse/KAFKA-4582; with 10.2 that has changed
to the above exception and hanging while attempting to rejoin the group id.


On 11 February 2017 at 04:23, Matthias J. Sax  wrote:

> Hi Ian,
>
> thanks for reporting this. I had a look at the stack trace and code and
> the whole situation is quite confusing. The exception itself is expected
> but we have a try-catch block that should swallow the exception and it
> should never bubble up:
>
> In
>   AbstractTaskCreator.retryWithBackoff
>
> a call to
>   TaskCreator.createTask
>
> is done (cf. your stack trace). This call is guarded against a
> LockException (cf. the StreamThread.java code):
>
> > try {
> > createTask(taskId, partitions);
> > it.remove();
> > } catch (final LockException e) {
> > // ignore and retry
> > log.warn("Could not create task {}. Will retry.", taskId, e);
> > }
>
>
> Can you verify that you loaded the correct jar file when running the
> test? I.e., that it's not a caching issue loading old code, etc.
>
> Another theory is about class loading. Do you use custom class loaders?
>
> One more thing you can try out is to delete the local app state
> directory. This will give you a clean restart -- at the cost of state
> recreation (for the first start only). Afterward, stop and restart your
> app to see if the issue is resolved.
>
>
> Right now, I cannot reproduce the problem.
>
>
> -Matthias
>
>
>
> On 2/10/17 9:47 AM, Ian Duffy wrote:
> > Seeing the following failure when using multi-threaded streams
> >
> > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: org.apache.kafka.streams.errors.LockException: task [0_21] Failed to lock the state directory: /tmp/kafka-streams/text_pipeline_id/0_21
> > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.streams.processor.internals.ProcessorStateManager.<init>(ProcessorStateManager.java:102)
> > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:73)
> > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:108)
> > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:834)
> > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1207)
> > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1180)
> > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:937)
> > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.streams.processor.internals.StreamThread.access$500(StreamThread.java:69)
> > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:236)
> > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:255)
> > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:339)
> > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
> > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:286)
> > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1030)
> > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
> > Feb 10 17:21:15 ip-172-31-137-57 docker/43e65fe123cd[826]: #011at
> > 

[GitHub] kafka pull request #2542: MINOR: Stream metrics documentation

2017-02-12 Thread enothereska
GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/2542

MINOR: Stream metrics documentation



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka minor-streams-metrics

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2542.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2542


commit 33506adfcbc12a443e16e0a4f2d4aa54c9384f7d
Author: Eno Thereska 
Date:   2017-02-12T10:44:42Z

Stream metrics



