Re: [VOTE] 0.11.0.0 RC2

2017-06-26 Thread Gwen Shapira
Hi,

One super minor issue (that can be fixed without a new RC): The big
exactly-once stuff (KIP-98) doesn't actually show up as a new feature in the
release notes. Most chunks appear as sub-tasks, but the new feature itself
(KAFKA-4815) is marked as 0.11.1.0, so it is missing. I get that this is
cosmetic, but having the biggest feature of the release missing from the
release notes seems like a big deal to me :)

Other than that...
Validated signatures, ran quickstart, ran tests and everything looks good.

+1 (binding).


On Mon, Jun 26, 2017 at 6:54 PM Ismael Juma  wrote:

> Hi Vahid,
>
> There are a few known issues when running Kafka on Windows. A PR with some
> fixes is: https://github.com/apache/kafka/pull/3283. The fact that the
> index cannot be accessed indicates that it may be a similar issue. I
> suggest we move this discussion to the relevant JIRAs instead of the
> release thread.
>
> Ismael
>
> On Mon, Jun 26, 2017 at 11:25 PM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
>
> > Hi Ismael,
> >
> > This is the output of core tests from the start until the first failed
> > test.
> >
> > kafka.admin.AdminRackAwareTest > testAssignmentWithRackAwareWithUnevenRacks PASSED
> >
> > [...]

Re: [kafka-clients] Re: [VOTE] 0.11.0.0 RC2

2017-06-26 Thread Jun Rao
Hi, Ismael,

Thanks for running the release. +1. Verified quickstart on the 2.11 binary.

Jun

On Mon, Jun 26, 2017 at 3:53 PM, Ismael Juma  wrote:

> Hi Vahid,
>
> There are a few known issues when running Kafka on Windows. A PR with some
> fixes is: https://github.com/apache/kafka/pull/3283. The fact that the
> index cannot be accessed indicates that it may be a similar issue. I
> suggest we move this discussion to the relevant JIRAs instead of the
> release thread.
>
> Ismael
>
> On Mon, Jun 26, 2017 at 11:25 PM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
>
>> Hi Ismael,
>>
>> This is the output of core tests from the start until the first failed
>> test.
>>
>> kafka.admin.AdminRackAwareTest > testAssignmentWithRackAwareWithUnevenRacks PASSED
>>
>> [...]

[GitHub] kafka pull request #3441: MINOR: Enable the TransactionsBounceTest

2017-06-26 Thread apurvam
GitHub user apurvam opened a pull request:

https://github.com/apache/kafka/pull/3441

MINOR: Enable the TransactionsBounceTest

I'll let this have multiple runs on the branch builder to see if it fails, 
and investigate if so.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apurvam/kafka 
MINOR-enable-transactions-bounce-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3441.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3441


commit e9286e72cb061245f400a6b32e20f245084522cb
Author: Apurva Mehta 
Date:   2017-06-27T05:22:56Z

Enable the TransactionsBounceTest to see if it is stable




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[DISCUSS] KIP-172 Add regular-expression topic support for sink connector

2017-06-26 Thread Kenji Hayashida
Hi all,

I have raised a new KIP at
https://cwiki.apache.org/confluence/display/KAFKA/KIP+172%3A+Add+regular-expression+topic+support+for+sink+connector

The corresponding JIRA is at https://issues.apache.org/jira/browse/KAFKA-3073

I look forward to your feedback.

Thanks,
Kenji Hayashida
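
For illustration, a minimal sketch of what a sink connector configuration
could look like under this proposal. The "topics.regex" property name is an
assumption based on the KIP title, not a released Connect option, and
FileStreamSinkConnector is just a convenient example:

    import java.util.Properties;

    public class RegexSinkConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("name", "file-sink");
            props.put("connector.class",
                    "org.apache.kafka.connect.file.FileStreamSinkConnector");
            // Today a sink connector must enumerate its topics explicitly:
            //   props.put("topics", "metrics-host1,metrics-host2");
            // Under the KIP, a regular expression would match current and
            // future topics alike ("topics.regex" is hypothetical):
            props.put("topics.regex", "metrics-.*");
            props.put("file", "/tmp/metrics.out");
            System.out.println(props);
        }
    }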


Build failed in Jenkins: kafka-trunk-jdk8 #1762

2017-06-26 Thread Apache Jenkins Server
See 

--
[...truncated 972.54 KB...]

kafka.tools.MirrorMakerTest > 
testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage STARTED

kafka.tools.MirrorMakerTest > 
testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage PASSED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler STARTED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler PASSED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandlerWithHeaders 
STARTED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandlerWithHeaders 
PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp STARTED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer STARTED

kafka.tools.ConsoleProducerTest > testValidConfigsOldProducer PASSED

kafka.tools.ConsoleProducerTest > testInvalidConfigs STARTED

kafka.tools.ConsoleProducerTest > testInvalidConfigs PASSED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer STARTED

kafka.tools.ConsoleProducerTest > testValidConfigsNewProducer PASSED

kafka.tools.ReplicaVerificationToolTest > testReplicaBufferVerifyChecksum 
STARTED

kafka.tools.ReplicaVerificationToolTest > testReplicaBufferVerifyChecksum PASSED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit STARTED

kafka.tools.ConsoleConsumerTest > shouldLimitReadsToMaxMessageLimit PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidNewConsumerValidConfig PASSED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails STARTED

kafka.tools.ConsoleConsumerTest > shouldStopWhenOutputCheckErrorFails PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithStringOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithStringOffset PASSED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile STARTED

kafka.tools.ConsoleConsumerTest > shouldParseConfigsFromFile PASSED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset STARTED

kafka.tools.ConsoleConsumerTest > 
shouldParseValidNewSimpleConsumerValidConfigWithNumericOffset PASSED

kafka.tools.ConsoleConsumerTest > testDefaultConsumer STARTED

kafka.tools.ConsoleConsumerTest > testDefaultConsumer PASSED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig STARTED

kafka.tools.ConsoleConsumerTest > shouldParseValidOldConsumerValidConfig PASSED

kafka.security.auth.PermissionTypeTest > testJavaConversions STARTED

kafka.security.auth.PermissionTypeTest > testJavaConversions PASSED

kafka.security.auth.PermissionTypeTest > testFromString STARTED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ZkAuthorizationTest > classMethod STARTED

kafka.security.auth.ZkAuthorizationTest > classMethod FAILED
java.lang.AssertionError: Found unexpected threads, 
allThreads=Set(ThrottledRequestReaper-Produce, Test 
worker-SendThread(127.0.0.1:44450), ProcessThread(sid:0 cport:44450):, 
SessionTracker, metrics-meter-tick-thread-2, /0:0:0:0:0:0:0:1:35563 to 
/0:0:0:0:0:0:0:1:60744 workers Thread 2, Signal Dispatcher, main, Reference 
Handler, /0:0:0:0:0:0:0:1:35563 to /0:0:0:0:0:0:0:1:60744 workers Thread 3, 
ExpirationReaper-0-Produce, ExpirationReaper-0-DeleteRecords, 
ThrottledRequestReaper-Fetch, kafka-request-handler-1, 
ZkClient-EventThread-40872-127.0.0.1:44450, ThrottledRequestReaper-Request, 
Test worker, SyncThread:0, ReplicaFetcherThread-0-1, ForkJoinPool-1-worker-5, 
NIOServerCxn.Factory:/127.0.0.1:0, Test worker-EventThread, 
ExpirationReaper-0-Fetch, Finalizer, kafka-coordinator-heartbeat-thread | 
group1, metrics-meter-tick-thread-1)

kafka.security.auth.ZkAuthorizationTest > classMethod STARTED

kafka.security.auth.ZkAuthorizationTest > classMethod FAILED
java.lang.AssertionError: Found unexpected threads, 
allThreads=Set(ThrottledRequestReaper-Produce, Test 
worker-SendThread(127.0.0.1:44450), ProcessThread(sid:0 cport:44450):, 
SessionTracker, metrics-meter-tick-thread-2, /0:0:0:0:0:0:0:1:35563 to 
/0:0:0:0:0:0:0:1:60744 workers Thread 2, Signal Dispatcher, main, Reference 
Handler, /0:0:0:0:0:0:0:1:35563 to /0:0:0:0:0:0:0:1:60744 workers Thread 3, 

[GitHub] kafka pull request #3440: MINOR: Close producer in all KafkaProducer tests

2017-06-26 Thread 10110346
GitHub user 10110346 opened a pull request:

https://github.com/apache/kafka/pull/3440

MINOR: Close producer in all KafkaProducer tests

After running KafkaProducer tests, we should close the producer to free
resources.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/10110346/kafka wip-lx-0627

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3440.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3440


commit 5bc992893170952d5c8344ed3be9eddca168
Author: liuxian 
Date:   2017-06-27T02:54:42Z

 Close producer in all KafkaProducer tests






Jenkins build is back to normal : kafka-trunk-jdk8 #1761

2017-06-26 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-0.10.1-jdk7 #121

2017-06-26 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-5522) ListOffset should take LSO into account when searching by timestamp

2017-06-26 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-5522:
--

 Summary: ListOffset should take LSO into account when searching by timestamp
 Key: KAFKA-5522
 URL: https://issues.apache.org/jira/browse/KAFKA-5522
 Project: Kafka
  Issue Type: Sub-task
Affects Versions: 0.11.0.0
Reporter: Jason Gustafson
Assignee: Jason Gustafson
 Fix For: 0.11.0.1


For a normal read_uncommitted consumer, we bound the offset returned from 
ListOffsets by the high watermark. For read_committed consumers, we should 
similarly bound offsets by the LSO. Currently we only handle the case of 
fetching the end offset.
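
The described bound can be sketched as follows; this is illustrative Java (the
actual broker logic lives in the Scala request handling path), and all names
here are hypothetical:

    import java.util.Optional;

    final class ListOffsetsBoundSketch {
        enum IsolationLevel { READ_UNCOMMITTED, READ_COMMITTED }

        // Return the offset found for a timestamp only if it is visible to the
        // consumer's isolation level; otherwise report no match, mirroring how
        // the high watermark already bounds read_uncommitted lookups.
        static Optional<Long> boundedLookup(long offsetForTimestamp,
                                            long highWatermark,
                                            long lastStableOffset,
                                            IsolationLevel level) {
            long bound = (level == IsolationLevel.READ_COMMITTED)
                    ? lastStableOffset   // LSO bound for read_committed
                    : highWatermark;     // HW bound for read_uncommitted
            return offsetForTimestamp < bound
                    ? Optional.of(offsetForTimestamp)
                    : Optional.empty();
        }
    }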



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3437: MINOR: Make JmxMixin wait for the monitored proces...

2017-06-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3437




Re: [Vote] KIP-150 - Kafka-Streams Cogroup

2017-06-26 Thread Guozhang Wang
Kyle,

It seems you already have enough committer votes (me, Jay, Sriram, Damian).


Guozhang

On Mon, Jun 26, 2017 at 11:06 AM, Kyle Winkelman 
wrote:

> Bumping this so it is easy to find now that the discussions have died down.
>
> Thanks,
> Kyle
>
> On Jun 9, 2017 6:32 PM, "Sriram Subramanian"  wrote:
>
> > +1
> >
> > On Fri, Jun 9, 2017 at 2:24 PM, Jay Kreps  wrote:
> >
> > > +1
> > >
> > > -Jay
> > >
> > > On Thu, Jun 8, 2017 at 11:16 AM, Guozhang Wang 
> > wrote:
> > >
> > > > I think we can continue on this voting thread.
> > > >
> > > > Currently we have one binding vote and 2 non-binding votes. I would
> > > > like to call out for other people, especially committers, to also take
> > > > a look at this proposal and vote.
> > > >
> > > >
> > > > Guozhang
> > > >
> > > >
> > > > On Wed, Jun 7, 2017 at 6:37 PM, Kyle Winkelman <winkelman.k...@gmail.com>
> > > > wrote:
> > > >
> > > > > Just bringing people's attention to the vote thread for my KIP. I
> > > > > started it before another round of discussion happened. Not sure of
> > > > > the protocol, so someone let me know if I am supposed to restart the
> > > > > vote.
> > > > > Thanks,
> > > > > Kyle
> > > > >
> > > > > On May 24, 2017 8:49 AM, "Bill Bejeck"  wrote:
> > > > >
> > > > > > +1  for the KIP and +1 what Xavier said as well.
> > > > > >
> > > > > > On Wed, May 24, 2017 at 3:57 AM, Damian Guy <damian@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Also, +1 for the KIP
> > > > > > >
> > > > > > > On Wed, 24 May 2017 at 08:57 Damian Guy 
> > > > wrote:
> > > > > > >
> > > > > > > > +1 to what Xavier said
> > > > > > > >
> > > > > > > > On Wed, 24 May 2017 at 06:45 Xavier Léauté <xav...@confluent.io>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > >> I don't think we should wait for entries from each stream, since
> > > > > > > >> that might limit the usefulness of the cogroup operator. There are
> > > > > > > >> instances where it can be useful to compute something based on data
> > > > > > > >> from one or more streams, without having to wait for all the streams
> > > > > > > >> to produce something for the group. In the example I gave in the
> > > > > > > >> discussion, it is possible to compute impression/auction statistics
> > > > > > > >> without having to wait for click data, which can typically arrive
> > > > > > > >> several minutes late.
> > > > > > > >>
> > > > > > > >> We could have a separate discussion around adding inner / outer
> > > > > > > >> modifiers to each of the streams to decide which fields are optional
> > > > > > > >> / required before sending updates if we think that might be useful.
> > > > > > > >>
> > > > > > > >>
> > > > > > > >>
> > > > > > > >> On Tue, May 23, 2017 at 6:28 PM Guozhang Wang <wangg...@gmail.com>
> > > > > > > >> wrote:
> > > > > > > >>
> > > > > > > >> > The proposal LGTM, +1
> > > > > > > >> >
> > > > > > > >> > One question I have is about when to send the record to the
> > > > > > > >> > resulting KTable changelog. For example, in your code snippet in
> > > > > > > >> > the wiki page, before you see the end result of
> > > > > > > >> >
> > > > > > > >> > 1L, Customer[
> > > > > > > >> >   cart:{Item[no:01], Item[no:03], Item[no:04]},
> > > > > > > >> >   purchases:{Item[no:07], Item[no:08]},
> > > > > > > >> >   wishList:{Item[no:11]}
> > > > > > > >> > ]
> > > > > > > >> >
> > > > > > > >> > You will first see
> > > > > > > >> >
> > > > > > > >> > 1L, Customer[
> > > > > > > >> >   cart:{Item[no:01]},
> > > > > > > >> >   purchases:{},
> > > > > > > >> >   wishList:{}
> > > > > > > >> > ]
> > > > > > > >> >
> > > > > > > >> > 1L, Customer[
> > > > > > > >> >   cart:{Item[no:01]},
> > > > > > > >> >   purchases:{Item[no:07], Item[no:08]},
> > > > > > > >> >   wishList:{}
> > > > > > > >> > ]
> > > > > > > >> >
> > > > > > > >> > 1L, Customer[
> > > > > > > >> >   cart:{Item[no:01]},
> > > > > > > >> >   purchases:{Item[no:07], Item[no:08]},
> > > > > > > >> >   wishList:{}
> > > > > > > >> > ]
> > > > > > > >> >
> > > > > > > >> > ...
> > > > > > > >> >
> > > > > > > >> >
> > > > > > > >> > I'm wondering if it makes more sense to only start sending the
> > > > > > > >> > update
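
A self-contained sketch (plain Java, no Streams dependency) of the update
sequence discussed in this thread: with cogroup, every arriving record from
any of the grouped streams immediately emits an updated aggregate, rather than
waiting until all streams have contributed. The Customer shape follows the
example above; everything else is illustrative:

    import java.util.ArrayList;
    import java.util.List;

    public class CogroupUpdateSketch {
        static class Customer {
            final List<String> cart = new ArrayList<>();
            final List<String> purchases = new ArrayList<>();
            final List<String> wishList = new ArrayList<>();
            @Override
            public String toString() {
                return "Customer[cart:" + cart + ", purchases:" + purchases
                        + ", wishList:" + wishList + "]";
            }
        }

        public static void main(String[] args) {
            Customer customer = new Customer();
            // Records arrive interleaved from three cogrouped streams; each
            // record would emit a new aggregate to the changelog immediately.
            customer.cart.add("Item[no:01]");
            System.out.println("1L, " + customer); // cart populated only
            customer.purchases.add("Item[no:07]");
            customer.purchases.add("Item[no:08]");
            System.out.println("1L, " + customer); // purchases added
            customer.wishList.add("Item[no:11]");
            System.out.println("1L, " + customer); // wish list added
        }
    }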

Re: [DISCUSS] KIP-167: Add a restoreAll method to StateRestoreCallback

2017-06-26 Thread Guozhang Wang
Hmm... I'm not sure how this can achieve the "close the store before
restoration, re-open it with a slightly different config, and then
close-and-reopen the store for query" pattern. You need to be able to access
the store object in order to do this, right?


Guozhang

On Mon, Jun 26, 2017 at 7:40 AM, Bill Bejeck  wrote:

> Thinking about this some more, I have another approach. Leave the first
> parameter as String in the StateRestoreListener interface.
>
> But we'll provide 2 default abstract classes one implementing
> StateRestoreCallback and the other implementing the
> BatchingStateRestoreCallback.  Both abstract classes will also implement
> the StateRestoreListener interface with no-op methods provided for the
> restore progress methods.
>
> WDYT?
>
> On Mon, Jun 26, 2017 at 10:13 AM, Bill Bejeck  wrote:
>
> > Guozhang,
> >
> > Thanks for the comments.
> >
> > I think that will work, but my concern is it might not be as clear to
> > users that want to receive external notification of the restore progress
> > separately (say, for reporting purposes) and still send separate signals to
> > the state store for resource management tasks.
> >
> > However, I like this approach better and I have some ideas for the
> > implementation, so I'll update the KIP accordingly.
> >
> > Thanks,
> > Bill
> >
> > On Wed, Jun 21, 2017 at 10:14 PM, Guozhang Wang 
> > wrote:
> >
> >> More specifically, if we can replace the first parameter, the String
> >> store name, with the store instance itself, would that be sufficient to
> >> cover `StateRestoreNotification`?
> >>
> >> On Wed, Jun 21, 2017 at 7:13 PM, Guozhang Wang 
> >> wrote:
> >>
> >> > Bill,
> >> >
> >> > I'm wondering why we need the `StateRestoreNotification` while still
> >> > having `StateRestoreListener`; could the above setup be achieved just
> >> > with `StateRestoreListener.onRestoreStart / onRestoreEnd`? I.e., it
> >> > seems the latter can subsume any use cases intended for the former API.
> >> >
> >> > Guozhang
> >> >
> >> > On Mon, Jun 19, 2017 at 3:23 PM, Bill Bejeck 
> wrote:
> >> >
> >> >> I'm going to update the KIP with a new interface,
> >> >> StateRestoreNotification, containing two methods, startRestore and
> >> >> endRestore.
> >> >>
> >> >> While the naming is very similar to methods already proposed on the
> >> >> StateRestoreListener, the intent of these methods is not for user
> >> >> notification of restore status. Instead, these new methods are for
> >> >> internal use by the state store to perform any required setup and
> >> >> teardown work due to a batch restoration process.
> >> >>
> >> >> Here's one current use case: when using RocksDB we should optimize
> for
> >> a
> >> >> bulk load by setting Options.prepareForBulkload().
> >> >>
> >> >>    1. If the database has already been opened, we'll need to close it,
> >> >>       set the "prepareForBulkload" option, and re-open the database.
> >> >>    2. Once the restore is completed, we'll need to close and re-open the
> >> >>       database with the "prepareForBulkload" option turned off.
> >> >>
> >> >> While we mention the RocksDB use case above, the addition of this
> >> >> interface is not specific to any particular implementation of a
> >> >> persistent state store.
> >> >>
> >> >> Additionally, a separate interface is needed so that any user can
> >> >> implement the state restore notification feature regardless of the
> >> >> state restore callback used.
> >> >>
> >> >> I'll also remove the "getStateRestoreListener" method and stick with
> >> the
> >> >> notion of a "global" restore listener for now.
> >> >>
> >> >> On Mon, Jun 19, 2017 at 1:05 PM, Bill Bejeck 
> >> wrote:
> >> >>
> >> >> > Yes it is, more of an oversight on my part, I'll remove it from the
> >> KIP.
> >> >> >
> >> >> >
> >> >> > On Mon, Jun 19, 2017 at 12:48 PM, Matthias J. Sax <
> >> >> matth...@confluent.io>
> >> >> > wrote:
> >> >> >
> >> >> >> Hi,
> >> >> >>
> >> >> >> I think for now it's good enough to start with a single global
> >> >> >> restore listener. We can incrementally improve this later on if
> >> >> >> required. Of course, if it's easy to do right away we can also be
> >> >> >> more fine-grained. But for KTable, we might want to add this after
> >> >> >> getting rid of all the overloads we have atm.
> >> >> >>
> >> >> >> One question: what is the purpose of parameter "endOffset" in
> >> >> >> #onRestoreEnd() -- isn't this the same value as provided in
> >> >> >> #onRestoreStart() ?
> >> >> >>
> >> >> >>
> >> >> >> -Matthias
> >> >> >>
> >> >> >>
> >> >> >>
> >> >> >> On 6/15/17 6:18 AM, Bill Bejeck wrote:
> >> >> >> > Thinking about the custom StateRestoreListener approach: having a
> >> >> >> > get method on the interface will really only work for custom state
> >> >> >> > stores.
> >> >> >> >
> >> >> >> > So we'll 
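
A rough sketch of the abstract-class idea from this thread: a base class that
implements both the restore callback and the listener, with no-op listener
methods, so a store overrides only what it needs (for example, toggling
RocksDB's Options#prepareForBulkLoad() around a restore). The interface shapes
here follow the KIP discussion and are assumptions, not a released API:

    // Interface shapes approximating the KIP discussion; not the final API.
    interface StateRestoreCallback {
        void restore(byte[] key, byte[] value);
    }

    interface StateRestoreListener {
        void onRestoreStart(String storeName, long startingOffset, long endingOffset);
        void onRestoreEnd(String storeName, long totalRestored);
    }

    abstract class AbstractNotifyingRestoreCallback
            implements StateRestoreCallback, StateRestoreListener {
        @Override
        public void onRestoreStart(String storeName, long startingOffset,
                                   long endingOffset) {
            // No-op by default. A RocksDB-backed store could close the DB here
            // and re-open it with bulk-load options enabled.
        }

        @Override
        public void onRestoreEnd(String storeName, long totalRestored) {
            // No-op by default. Close and re-open the store with bulk-load
            // options disabled once restoration completes.
        }
    }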

[jira] [Created] (KAFKA-5521) Support replicas movement between log directories (KIP-113)

2017-06-26 Thread Dong Lin (JIRA)
Dong Lin created KAFKA-5521:
---

 Summary: Support replicas movement between log directories (KIP-113)
 Key: KAFKA-5521
 URL: https://issues.apache.org/jira/browse/KAFKA-5521
 Project: Kafka
  Issue Type: Bug
Reporter: Dong Lin
Assignee: Dong Lin






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5367) Producer should not expiry topic from metadata cache if accumulator still has data for this topic

2017-06-26 Thread Dong Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Lin resolved KAFKA-5367.
-
Resolution: Invalid

> Producer should not expiry topic from metadata cache if accumulator still has 
> data for this topic
> -
>
> Key: KAFKA-5367
> URL: https://issues.apache.org/jira/browse/KAFKA-5367
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Dong Lin
>
> To be added.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3439: KAFKA-5464: StreamsKafkaClient should not use Stre...

2017-06-26 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/3439

KAFKA-5464: StreamsKafkaClient should not use StreamsConfig.POLL_MS_CONFIG



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka kafka-5464-streamskafkaclient-poll

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3439.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3439


commit 609ceeb0d546f02b9da2638545784f050dc9558a
Author: Matthias J. Sax 
Date:   2017-06-26T22:52:56Z

KAFKA-5464: StreamsKafkaClient should not use StreamsConfig.POLL_MS_CONFIG






Re: [VOTE] 0.11.0.0 RC2

2017-06-26 Thread Ismael Juma
Hi Vahid,

There are a few known issues when running Kafka on Windows. A PR with some
fixes is: https://github.com/apache/kafka/pull/3283. The fact that the
index cannot be accessed indicates that it may be a similar issue. I
suggest we move this discussion to the relevant JIRAs instead of the
release thread.

Ismael

On Mon, Jun 26, 2017 at 11:25 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi Ismael,
>
> This is the output of core tests from the start until the first failed
> test.
>
> kafka.admin.AdminRackAwareTest > testAssignmentWithRackAwareWithUnevenRacks PASSED
>
> [...]

Re: Open PRs

2017-06-26 Thread Arseniy Tashoyan
Hi Tom,

The team is really overloaded; we can see it from the activity in this dev
group. Let's wait a bit while the team finishes their awesome work.

With best wishes,
A guy awaiting review for about 2 months
@tashoyan

2017-06-26 22:27 GMT+03:00 Paolo Patierno :

> Hi Ismael,
>
> thanks for replying to Tom's comment, because it is the same situation here,
> with some PRs opened as long as 2 weeks ago.
>
> I was pretty sure that the reason was the 0.11.0 release and you have just
> confirmed it.
>
> Looking forward to seeing the new Kafka version out, and to pushing more on
> the next one, even with new contributions ;)
>
> Keep up the great work !
>
> Paolo
> 
> From: isma...@gmail.com  on behalf of Ismael Juma <
> ism...@juma.me.uk>
> Sent: Monday, June 26, 2017 6:32:56 PM
> To: dev@kafka.apache.org
> Subject: Re: Open PRs
>
> Hey Tom,
>
> Thanks for the PRs you have submitted. As you suggested, most of us are
> busy testing the upcoming release. It's a major release with many exciting
> features and we want to make sure it's a high quality release.
>
> I understand that it can be frustrating to wait. At the same time, there
> are not enough hours in a day to do everything we'd like to do (even if one
> gives up sleep).
>
> Ismael
>
> On Mon, Jun 26, 2017 at 5:15 PM, Tom Bentley 
> wrote:
>
> > I realise that 0.11.0.0 is imminent and so the committers are rightly
> going
> > to be rather focussed on that, but I opened some PRs nearly a week ago
> and
> > they don't seem to have been looked at.
> >
> > Even a comment on the PR to the effect of "We'll look at this right after
> > 0.11.0.0" would at least reassure contributors that the PR isn't just
> being
> > ignored. But even that is second best to actually merging or rejecting
> PRs.
> > For instance there are other PRs I want to open but right now there's no
> > point because I know they'll conflict with PRs I've already opened.
> >
> > Cheers,
> >
> > Tom
> >
> > p.s. I was encouraged to whinge because this page
> > https://kafka.apache.org/contributing says I should :-)
> >
>


Re: [VOTE] 0.11.0.0 RC2

2017-06-26 Thread Vahid S Hashemian
Hi Ismael,

This is the output of core tests from the start until the first failed 
test.

kafka.admin.AdminRackAwareTest > 
testAssignmentWithRackAwareWithUnevenRacks PASSED

kafka.admin.AdminRackAwareTest > testAssignmentWith2ReplicasRackAware 
PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWithRackAwareWithUnevenReplicas PASSED

kafka.admin.AdminRackAwareTest > testSkipBrokerWithReplicaAlreadyAssigned 
PASSED

kafka.admin.AdminRackAwareTest > testAssignmentWithRackAware PASSED

kafka.admin.AdminRackAwareTest > testRackAwareExpansion PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWith2ReplicasRackAwareWith6Partitions PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWith2ReplicasRackAwareWith6PartitionsAnd3Brokers PASSED

kafka.admin.AdminRackAwareTest > 
testGetRackAlternatedBrokerListAndAssignReplicasToBrokers PASSED

kafka.admin.AdminRackAwareTest > testMoreReplicasThanRacks PASSED

kafka.admin.AdminRackAwareTest > testSingleRack PASSED

kafka.admin.AdminRackAwareTest > 
testAssignmentWithRackAwareWithRandomStartIndex PASSED

kafka.admin.AdminRackAwareTest > testLargeNumberPartitionsAssignment 
PASSED

kafka.admin.AdminRackAwareTest > testLessReplicasThanRacks PASSED

kafka.admin.AclCommandTest > testInvalidAuthorizerProperty PASSED

kafka.admin.ConfigCommandTest > testScramCredentials PASSED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForTopicsEntityType 
PASSED

kafka.admin.ConfigCommandTest > testUserClientQuotaOpts PASSED

kafka.admin.ConfigCommandTest > shouldAddTopicConfig PASSED

kafka.admin.ConfigCommandTest > shouldAddClientConfig PASSED

kafka.admin.ConfigCommandTest > shouldDeleteBrokerConfig PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupWideDeleteInZKDoesNothingForActiveConsumerGroup PASSED

kafka.admin.ConfigCommandTest > testQuotaConfigEntity PASSED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedBracketConfig PASSED

kafka.admin.ConfigCommandTest > shouldFailIfUnrecognisedEntityType PASSED

kafka.admin.AdminTest > testBasicPreferredReplicaElection PASSED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfNonExistingConfigIsDeleted PASSED

kafka.admin.AdminTest > testPreferredReplicaJsonData PASSED

kafka.admin.BrokerApiVersionsCommandTest > 
checkBrokerApiVersionCommandOutput PASSED

kafka.admin.ReassignPartitionsCommandTest > 
shouldRemoveThrottleReplicaListBasedOnProposedAssignment PASSED

kafka.admin.ReassignPartitionsCommandTest > 
shouldFindMovingReplicasMultipleTopics PASSED

kafka.admin.ReassignPartitionsCommandTest > 
shouldNotOverwriteExistingPropertiesWhenLimitIsAdded PASSED

kafka.admin.ReassignPartitionsCommandTest > 
shouldFindMovingReplicasMultipleTopicsAndPartitions PASSED

kafka.admin.ReassignPartitionsCommandTest > 
shouldRemoveThrottleLimitFromAllBrokers PASSED

kafka.admin.ReassignPartitionsCommandTest > shouldFindMovingReplicas 
PASSED

kafka.admin.ReassignPartitionsCommandTest > 
shouldFindMovingReplicasMultiplePartitions PASSED

kafka.admin.ReassignPartitionsCommandTest > shouldSetQuotaLimit PASSED

kafka.admin.ReassignPartitionsCommandTest > 
shouldFindMovingReplicasWhenProposedIsSubsetOfExisting PASSED

kafka.admin.ReassignPartitionsCommandTest > shouldUpdateQuotaLimit PASSED

kafka.admin.ReassignPartitionsCommandTest > 
shouldFindTwoMovingReplicasInSamePartition PASSED

kafka.admin.ReassignPartitionsCommandTest > 
shouldNotOverwriteEntityConfigsWhenUpdatingThrottledReplicas PASSED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedEntityName PASSED

kafka.admin.ConfigCommandTest > shouldSupportCommaSeparatedValues PASSED

kafka.admin.ConfigCommandTest > 
shouldNotUpdateBrokerConfigIfMalformedConfig PASSED

kafka.admin.DeleteConsumerGroupTest > 
testGroupTopicWideDeleteInZKDoesNothingForActiveGroupConsumingMultipleTopics 
PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacementAllServers PASSED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForBrokersEntityType 
PASSED

kafka.admin.ConfigCommandTest > shouldAddBrokerConfig PASSED

kafka.admin.AdminTest > testReassigningNonExistingPartition PASSED

kafka.admin.ConfigCommandTest > testQuotaDescribeEntities PASSED

kafka.admin.AdminTest > testGetBrokerMetadatas PASSED

kafka.admin.ConfigCommandTest > shouldParseArgumentsForClientsEntityType 
PASSED

kafka.admin.AclCommandTest > testAclCli PASSED

kafka.admin.ReassignPartitionsIntegrationTest > testRackAwareReassign 
PASSED

kafka.admin.AdminTest > testBootstrapClientIdConfig PASSED

kafka.admin.ReassignPartitionsClusterTest > 
shouldExecuteThrottledReassignment FAILED
java.nio.file.FileSystemException: C:\Users\IBM_AD~1\AppData\Local\Temp\kafka-719085320148197500\my-topic-0\.index: The process cannot access the file because it is being used by another process.


From the error message, it sounds like one of the prior tests does not do 
a proper clean-up?!

Thanks.
--Vahid
 




Re: [DISCUSS] KIP-171: Extend Consumer Group Reset Offset for Stream Application

2017-06-26 Thread Matthias J. Sax
Jorge,

thanks a lot for this KIP. Allowing streams applications to be reset to an
arbitrary start offset is something we have gotten multiple requests for
already.

Couple of clarification question:

 - why do you want to deprecate the current tool instead of extending it
with what the offset reset tool can do (i.e., use the offset reset tool
internally)?

 - you suggest extending the offset reset tool to replace the streams
reset tool: how would the reset tool know whether it is resetting a streams
application or a regular consumer group?



-Matthias


On 6/26/17 1:28 PM, Jorge Esteban Quilcate Otoya wrote:
> Hi all,
> 
> I'd like to start the discussion to add reset offset tooling for Stream
> applications.
> The KIP can be found here:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-171+-+Extend+Consumer+Group+Reset+Offset+for+Stream+Application
> 
> Thanks,
> Jorge.
> 





Re: [VOTE] 0.11.0.0 RC2

2017-06-26 Thread Jason Gustafson
+1 from me. I verified the artifacts and tested basic producing and
consuming. I also verified the procedure for upgrading to the new message
format.

-Jason

On Mon, Jun 26, 2017 at 1:53 PM, Ismael Juma  wrote:

> Hi Vahid,
>
> Can you please check which test fails first? The errors you mentioned can
> happen if a test fails and doesn't clean-up properly.
>
> Ismael
>
> On Mon, Jun 26, 2017 at 8:41 PM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
>
> > Hi Ismael,
> >
> > To answer your questions:
> >
> > 1. Yes, the issue exists in trunk too.
> >
> > 2. I haven't checked with Cygwin, but I can give it a try.
> >
> > And thanks for addressing this issue. I can confirm with your PR I no
> > longer see it.
> > But now that the tests progress I see quite a few errors like this in
> > core:
> >
> > kafka.server.ReplicaFetchTest > classMethod FAILED
> > java.lang.AssertionError: Found unexpected threads, allThreads=Set([...])
> >
> > I tested on a VM and a physical machine, and both give me a lot of errors
> > like this.
> >
> > Thanks.
> > --Vahid
> >
> >
> >
> >
> > [...]

Re: [VOTE] 0.11.0.0 RC2

2017-06-26 Thread Ismael Juma
Hi Vahid,

Can you please check which test fails first? The errors you mentioned can
happen if a test fails and doesn't clean-up properly.

Ismael

On Mon, Jun 26, 2017 at 8:41 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi Ismael,
>
> To answer your questions:
>
> 1. Yes, the issue exists in trunk too.
>
> 2. I haven't checked with Cygwin, but I can give it a try.
>
> And thanks for addressing this issue. I can confirm with your PR I no
> longer see it.
> But now that the tests progress I see quite a few errors like this in
> core:
>
> kafka.server.ReplicaFetchTest > classMethod FAILED
> java.lang.AssertionError: Found unexpected threads,
> allThreads=Set(ZkClient-EventThread-268-127.0.0.1:56565,
> ProcessThread(sid:0 cport:56565):, metrics-mete
> r-tick-thread-2, SessionTracker, Signal Dispatcher, main, Reference
> Handler, ForkJoinPool-1-worker-1, Attach Listener, ProcessThread(sid:0
> cport:59720):, ZkClie
> nt-EventThread-1347-127.0.0.1:59720, kafka-producer-network-thread |
> producer-1, Test worker-SendThread(127.0.0.1:56565), /127.0.0.1:54942 to
> /127.0.0.1:54926 w
> orkers Thread 2, Test worker, SyncThread:0,
> NIOServerCxn.Factory:/127.0.0.1:0, Test worker-EventThread, Test
> worker-SendThread(127.0.0.1:59720), /127.0.0.1:5494
> 2 to /127.0.0.1:54926 workers Thread 3,
> ZkClient-EventThread-22-127.0.0.1:54976, ProcessThread(sid:0
> cport:54976):, Test worker-SendThread(127.0.0.1:54976), Fin
> alizer, metrics-meter-tick-thread-1)
>
> I tested on a VM and a physical machine, and both give me a lot of errors
> like this.
>
> Thanks.
> --Vahid
>
>
>
>
> From:   Ismael Juma 
> To: Vahid S Hashemian 
> Cc: dev@kafka.apache.org, kafka-clients
> , Kafka Users 
> Date:   06/26/2017 03:53 AM
> Subject:Re: [VOTE] 0.11.0.0 RC2
>
>
>
> Hi Vahid,
>
> Sorry for not replying to the previous email, I had missed it. A couple of
> questions:
>
> 1. Is this also happening in trunk? Seems like it should be the case for
> months and seemingly no-one reported it until the RC stage.
> 2. Is it correct that this only happens when compiling on Windows without
> Cygwin?
>
> I believe the following PR should fix it, please verify:
>
> https://github.com/apache/kafka/pull/3431
>
> Ismael
>
> On Fri, Jun 23, 2017 at 8:25 PM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
>
> > Hi Ismael,
> >
> > Not sure if my response on RC1 was lost or this issue is not a
> > show-stopper:
> >
> > I checked again and with RC2, tests still fail in my Windows 64-bit
> > environment.
> >
> > :clients:checkstyleMain
> > [ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\protocol\Errors.java:89:1:
> > Class Data Abstraction Coupling is 57 (max allowed is 20) classes
> > [ApiExceptionBuilder, BrokerNotAvailableException,
> > ClusterAuthorizationException, ConcurrentTransactionsException,
> > ControllerMovedException, CoordinatorLoadInProgressException,
> > CoordinatorNotAvailableException, CorruptRecordException,
> > DuplicateSequenceNumberException, GroupAuthorizationException,
> > IllegalGenerationException, IllegalSaslStateException,
> > InconsistentGroupProtocolException, InvalidCommitOffsetSizeException,
> > InvalidConfigurationException, InvalidFetchSizeException,
> > InvalidGroupIdException, InvalidPartitionsException,
> > InvalidPidMappingException, InvalidReplicaAssignmentException,
> > InvalidReplicationFactorException, InvalidRequestException,
> > InvalidRequiredAcksException, InvalidSessionTimeoutException,
> > InvalidTimestampException, InvalidTopicException,
> InvalidTxnStateException,
> > InvalidTxnTimeoutException, LeaderNotAvailableException,
> NetworkException,
> > NotControllerException, NotCoordinatorException,
> > NotEnoughReplicasAfterAppendException, NotEnoughReplicasException,
> > NotLeaderForPartitionException, OffsetMetadataTooLarge,
> > OffsetOutOfRangeException, OperationNotAttemptedException,
> > OutOfOrderSequenceException, PolicyViolationException,
> > ProducerFencedException, RebalanceInProgressException,
> > RecordBatchTooLargeException, RecordTooLargeException,
> > ReplicaNotAvailableException, SecurityDisabledException,
> TimeoutException,
> > TopicAuthorizationException, TopicExistsException,
> > TransactionCoordinatorFencedException,
> TransactionalIdAuthorizationException,
> > UnknownMemberIdException, UnknownServerException,
> > UnknownTopicOrPartitionException, UnsupportedForMessageFormatException,
> > UnsupportedSaslMechanismException, UnsupportedVersionException].
> > [ClassDataAbstractionCoupling]
> > [ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\protocol\Errors.java:89:1:
> > Class Fan-Out Complexity is 60 (max allowed is 40). [ClassFanOutComplexity]
> > [ant:checkstyle] [ERROR] 

Re: [VOTE] 0.11.0.0 RC2

2017-06-26 Thread Jeff Chao
Hi,

Heroku has been doing additional performance testing on (1) log compaction
and, separately, (2) Go clients with the older message format against 0.11-rc2
brokers with the new message format.

For log compaction, we've tested with messages using a single key, messages
using unique keys, and messages with a bounded key range. There were no
notable negative performance impacts.

For client testing with the old format vs. the new format, we had Sarama Go
async producer clients speaking their older client protocol versions,
producing messages in a tight loop. This resulted in a high percentage of
errors, though some messages went through:

Failed to produce message kafka: Failed to produce message to topic
rc2-topic: kafka server: Message was too large, server rejected it to avoid
allocation error.

Although this is to be expected, as mentioned in the docs (
http://kafka.apache.org/0110/documentation.html#upgrade_11_message_format),
where messages may in aggregate become larger than the broker's
max.message.bytes, we'd like to point out that this might be confusing for
users running older clients against 0.11. That said, users can work around
this issue by tuning their request size to be less than max.message.bytes.
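
For the Java producer, a minimal sketch of this tuning (Sarama exposes an
analogous producer-side message size limit); the 900000-byte value is an
arbitrary illustration chosen to stay below a typical max.message.bytes, not a
recommendation:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class BoundedRequestSizeProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                    StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                    StringSerializer.class.getName());
            // Keep produce requests below the broker's max.message.bytes so that
            // batches up-converted to the new format are not rejected as too large.
            props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, "900000");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // ... send records as usual ...
            }
        }
    }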

This, along with the testing previously mentioned by Tom wraps up our
performance testing. Overall, we're a +1 (non-binding) for this release,
but wanted to point out the client issue above.

Thanks,
Jeff

On Mon, Jun 26, 2017 at 12:41 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi Ismael,
>
> To answer your questions:
>
> 1. Yes, the issue exists in trunk too.
>
> 2. I haven't checked with Cygwin, but I can give it a try.
>
> And thanks for addressing this issue. I can confirm with your PR I no
> longer see it.
> But now that the tests progress I see quite a few errors like this in
> core:
>
> kafka.server.ReplicaFetchTest > classMethod FAILED
> java.lang.AssertionError: Found unexpected threads, allThreads=Set([...])
>
> I tested on a VM and a physical machine, and both give me a lot of errors
> like this.
>
> Thanks.
> --Vahid
>
>
>
>
> From:   Ismael Juma 
> To: Vahid S Hashemian 
> Cc: dev@kafka.apache.org, kafka-clients
> , Kafka Users 
> Date:   06/26/2017 03:53 AM
> Subject:Re: [VOTE] 0.11.0.0 RC2
>
>
>
> Hi Vahid,
>
> Sorry for not replying to the previous email, I had missed it. A couple of
> questions:
>
> 1. Is this also happening in trunk? It seems like it has been the case for
> months, and seemingly no one reported it until the RC stage.
> 2. Is it correct that this only happens when compiling on Windows without
> Cygwin?
>
> I believe the following PR should fix it, please verify:
>
> https://github.com/apache/kafka/pull/3431
>
> Ismael
>
> On Fri, Jun 23, 2017 at 8:25 PM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
>
> > Hi Ismael,
> >
> > Not sure if my response on RC1 was lost or this issue is not a
> > show-stopper:
> >
> > I checked again and with RC2, tests still fail in my Windows 64-bit
> > environment.
> >
> > :clients:checkstyleMain
> > [ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\protocol\Errors.java:89:1:
> > Class Data Abstraction Coupling is 57 (max allowed is 20) classes
> > [ApiExceptionBuilder, BrokerNotAvailableException,
> > ClusterAuthorizationException, ConcurrentTransactionsException,
> > ControllerMovedException, CoordinatorLoadInProgressException,
> > CoordinatorNotAvailableException, CorruptRecordException,
> > DuplicateSequenceNumberException, GroupAuthorizationException,
> > IllegalGenerationException, IllegalSaslStateException,
> > InconsistentGroupProtocolException, InvalidCommitOffsetSizeException,
> > InvalidConfigurationException, InvalidFetchSizeException,
> > InvalidGroupIdException, InvalidPartitionsException,
> > InvalidPidMappingException, InvalidReplicaAssignmentException,
> > InvalidReplicationFactorException, InvalidRequestException,
> > InvalidRequiredAcksException, InvalidSessionTimeoutException,
> > InvalidTimestampException, 

[GitHub] kafka pull request #3438: KAFKA-3465: Clarify warning message of ConsumerOff...

2017-06-26 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/3438

KAFKA-3465: Clarify warning message of ConsumerOffsetChecker

Add that the tool works with the old consumer only.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-3465

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3438.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3438


commit fa68749267b40d156b3811368e2f49c5c8813a43
Author: Vahid Hashemian 
Date:   2017-06-26T20:24:46Z

KAFKA-3465: Clarify warning message of ConsumerOffsetChecker

Add that the tool works with the old consumer only.






[DISCUSS] KIP-171: Extend Consumer Group Reset Offset for Stream Application

2017-06-26 Thread Jorge Esteban Quilcate Otoya
Hi all,

I'd like to start the discussion to add reset offset tooling for Stream
applications.
The KIP can be found here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-171+-+Extend+Consumer+Group+Reset+Offset+for+Stream+Application

Thanks,
Jorge.


[jira] [Created] (KAFKA-5520) Extend Consumer Group Reset Offset tool for Stream Applications

2017-06-26 Thread Jorge Quilcate (JIRA)
Jorge Quilcate created KAFKA-5520:
-

 Summary: Extend Consumer Group Reset Offset tool for Stream 
Applications
 Key: KAFKA-5520
 URL: https://issues.apache.org/jira/browse/KAFKA-5520
 Project: Kafka
  Issue Type: Improvement
  Components: core, tools
Reporter: Jorge Quilcate
 Fix For: 0.11.1.0


KIP documentation: TODO





Re: [VOTE] 0.11.0.0 RC2

2017-06-26 Thread Vahid S Hashemian
Hi Ismael,

To answer your questions:

1. Yes, the issue exists in trunk too.

2. I haven't checked with Cygwin, but I can give it a try.

And thanks for addressing this issue. I can confirm with your PR I no 
longer see it.
But now that the tests make progress, I see quite a few errors like this in
core:

kafka.server.ReplicaFetchTest > classMethod FAILED
java.lang.AssertionError: Found unexpected threads,
allThreads=Set(ZkClient-EventThread-268-127.0.0.1:56565,
ProcessThread(sid:0 cport:56565):, metrics-meter-tick-thread-2,
SessionTracker, Signal Dispatcher, main, Reference Handler,
ForkJoinPool-1-worker-1, Attach Listener, ProcessThread(sid:0 cport:59720):,
ZkClient-EventThread-1347-127.0.0.1:59720, kafka-producer-network-thread |
producer-1, Test worker-SendThread(127.0.0.1:56565), /127.0.0.1:54942 to
/127.0.0.1:54926 workers Thread 2, Test worker, SyncThread:0,
NIOServerCxn.Factory:/127.0.0.1:0, Test worker-EventThread, Test
worker-SendThread(127.0.0.1:59720), /127.0.0.1:54942 to /127.0.0.1:54926
workers Thread 3, ZkClient-EventThread-22-127.0.0.1:54976,
ProcessThread(sid:0 cport:54976):, Test worker-SendThread(127.0.0.1:54976),
Finalizer, metrics-meter-tick-thread-1)

I tested on a VM and a physical machine, and both give me a lot of errors 
like this.

Thanks.
--Vahid




From:   Ismael Juma 
To: Vahid S Hashemian 
Cc: dev@kafka.apache.org, kafka-clients 
, Kafka Users 
Date:   06/26/2017 03:53 AM
Subject:Re: [VOTE] 0.11.0.0 RC2



Hi Vahid,

Sorry for not replying to the previous email, I had missed it. A couple of
questions:

1. Is this also happening in trunk? It seems like it has been the case for
months, and seemingly no one reported it until the RC stage.
2. Is it correct that this only happens when compiling on Windows without
Cygwin?

I believe the following PR should fix it, please verify:

https://github.com/apache/kafka/pull/3431

Ismael

On Fri, Jun 23, 2017 at 8:25 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi Ismael,
>
> Not sure if my response on RC1 was lost or this issue is not a
> show-stopper:
>
> I checked again and with RC2, tests still fail in my Windows 64-bit
> environment.
>
> :clients:checkstyleMain
> [ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\protocol\Errors.java:89:1:
> Class Data Abstraction Coupling is 57 (max allowed is 20) classes
> [ApiExceptionBuilder, BrokerNotAvailableException,
> ClusterAuthorizationException, ConcurrentTransactionsException,
> ControllerMovedException, CoordinatorLoadInProgressException,
> CoordinatorNotAvailableException, CorruptRecordException,
> DuplicateSequenceNumberException, GroupAuthorizationException,
> IllegalGenerationException, IllegalSaslStateException,
> InconsistentGroupProtocolException, InvalidCommitOffsetSizeException,
> InvalidConfigurationException, InvalidFetchSizeException,
> InvalidGroupIdException, InvalidPartitionsException,
> InvalidPidMappingException, InvalidReplicaAssignmentException,
> InvalidReplicationFactorException, InvalidRequestException,
> InvalidRequiredAcksException, InvalidSessionTimeoutException,
> InvalidTimestampException, InvalidTopicException, InvalidTxnStateException,
> InvalidTxnTimeoutException, LeaderNotAvailableException, NetworkException,
> NotControllerException, NotCoordinatorException,
> NotEnoughReplicasAfterAppendException, NotEnoughReplicasException,
> NotLeaderForPartitionException, OffsetMetadataTooLarge,
> OffsetOutOfRangeException, OperationNotAttemptedException,
> OutOfOrderSequenceException, PolicyViolationException,
> ProducerFencedException, RebalanceInProgressException,
> RecordBatchTooLargeException, RecordTooLargeException,
> ReplicaNotAvailableException, SecurityDisabledException, TimeoutException,
> TopicAuthorizationException, TopicExistsException,
> TransactionCoordinatorFencedException, TransactionalIdAuthorizationException,
> UnknownMemberIdException, UnknownServerException,
> UnknownTopicOrPartitionException, UnsupportedForMessageFormatException,
> UnsupportedSaslMechanismException, UnsupportedVersionException].
> [ClassDataAbstractionCoupling]
> [ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\protocol\Errors.java:89:1:
> Class Fan-Out Complexity is 60 (max allowed is 40). [ClassFanOutComplexity]
> [ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\requests\AbstractRequest.java:26:1: Class Fan-Out Complexity is 43 (max allowed is 40). [ClassFanOutComplexity]
> [ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\requests\AbstractResponse.java:26:1: Class Fan-Out Complexity is 42 (max allowed is 40). [ClassFanOutComplexity]
> 

Re: Open PRs

2017-06-26 Thread Paolo Patierno
Hi Ismael,

thanks for replying to Tom's comment; it's the same situation here, with some
PRs opened as long as two weeks ago.

I was pretty sure that the reason was the 0.11.0 release and you have just 
confirmed it.

Looking forward to seeing the new Kafka version out, and to pushing the next
one forward with new contributions ;)

Keep up the great work!

Paolo

From: isma...@gmail.com  on behalf of Ismael Juma 

Sent: Monday, June 26, 2017 6:32:56 PM
To: dev@kafka.apache.org
Subject: Re: Open PRs

Hey Tom,

Thanks for the PRs you have submitted. As you suggested, most of us are
busy testing the upcoming release. It's a major release with many exciting
features and we want to make sure it's a high quality release.

I understand that it can be frustrating to wait. At the same time, there
are not enough hours in a day to do everything we'd like to do (even if one
gives up sleep).

Ismael

On Mon, Jun 26, 2017 at 5:15 PM, Tom Bentley  wrote:

> I realise that 0.11.0.0 is imminent and so the committers are rightly going
> to be rather focussed on that, but I opened some PRs nearly a week ago and
> they don't seem to have been looked at.
>
> Even a comment on the PR to the effect of "We'll look at this right after
> 0.11.0.0" would at least reassure contributors that the PR isn't just being
> ignored. But even that is second best to actually merging or rejecting PRs.
> For instance there are other PRs I want to open but right now there's no
> point because I know they'll conflict with PRs I've already opened.
>
> Cheers,
>
> Tom
>
> p.s. I was encouraged to whinge because this page
> https://kafka.apache.org/contributing says I should :-)
>


[jira] [Created] (KAFKA-5519) Support for multiple certificates in a single keystore

2017-06-26 Thread Alla Tumarkin (JIRA)
Alla Tumarkin created KAFKA-5519:


 Summary: Support for multiple certificates in a single keystore
 Key: KAFKA-5519
 URL: https://issues.apache.org/jira/browse/KAFKA-5519
 Project: Kafka
  Issue Type: New Feature
  Components: security
Affects Versions: 0.10.2.1
Reporter: Alla Tumarkin


Background
Currently, we need to have a keystore exclusive to the component with exactly 
one key in it. Looking at the JSSE Reference guide, it seems like we would need 
to introduce our own KeyManager into the SSLContext which selects a 
configurable key alias name.
https://docs.oracle.com/javase/7/docs/api/javax/net/ssl/X509KeyManager.html 
has methods for dealing with aliases.
The goal here is to use a specific certificate (with proper ACLs set for this
client), and not just the first one that matches.
It looks like this requires a code change to the SSLChannelBuilder.
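
For illustration, a minimal sketch (not Kafka code) of the approach described
above: wrap the default X509KeyManager so that connections always present the
key under a configured alias instead of the first matching entry. The class
name and constructor are hypothetical:

    import java.net.Socket;
    import java.security.Principal;
    import java.security.PrivateKey;
    import java.security.cert.X509Certificate;
    import javax.net.ssl.X509KeyManager;

    public class AliasSelectingKeyManager implements X509KeyManager {
        private final X509KeyManager delegate;
        private final String alias;

        public AliasSelectingKeyManager(X509KeyManager delegate, String alias) {
            this.delegate = delegate;
            this.alias = alias;
        }

        // Force the configured certificate for client and server handshakes.
        @Override
        public String chooseClientAlias(String[] keyTypes, Principal[] issuers, Socket socket) {
            return alias;
        }

        @Override
        public String chooseServerAlias(String keyType, Principal[] issuers, Socket socket) {
            return alias;
        }

        // Everything else simply delegates to the default key manager.
        @Override
        public String[] getClientAliases(String keyType, Principal[] issuers) {
            return delegate.getClientAliases(keyType, issuers);
        }

        @Override
        public String[] getServerAliases(String keyType, Principal[] issuers) {
            return delegate.getServerAliases(keyType, issuers);
        }

        @Override
        public X509Certificate[] getCertificateChain(String alias) {
            return delegate.getCertificateChain(alias);
        }

        @Override
        public PrivateKey getPrivateKey(String alias) {
            return delegate.getPrivateKey(alias);
        }
    }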





[GitHub] kafka pull request #3437: MINOR: Make JmxMixin wait for the monitored proce...

2017-06-26 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/3437

MINOR: Make JmxMixin wait for the monitored process to be listening on the 
JMX port before launching JmxTool



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka wait-jmx-listening

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3437.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3437








Re: [Vote] KIP-150 - Kafka-Streams Cogroup

2017-06-26 Thread Kyle Winkelman
Bumping this so it is easy to find now that the discussions have died down.

Thanks,
Kyle

On Jun 9, 2017 6:32 PM, "Sriram Subramanian"  wrote:

> +1
>
> On Fri, Jun 9, 2017 at 2:24 PM, Jay Kreps  wrote:
>
> > +1
> >
> > -Jay
> >
> > On Thu, Jun 8, 2017 at 11:16 AM, Guozhang Wang 
> wrote:
> >
> > > I think we can continue on this voting thread.
> > >
> > > Currently we have one binding vote and 2 non-binding votes. I would
> like
> > to
> > > call out for other people especially committers to also take a look at
> > this
> > > proposal and vote.
> > >
> > >
> > > Guozhang
> > >
> > >
> > > On Wed, Jun 7, 2017 at 6:37 PM, Kyle Winkelman <
> winkelman.k...@gmail.com
> > >
> > > wrote:
> > >
> > > > Just bringing people's attention to the vote thread for my KIP. I
> > started
> > > > it before another round of discussion happened. Not sure the protocol
> > so
> > > > someone let me know if I am supposed to restart the vote.
> > > > Thanks,
> > > > Kyle
> > > >
> > > > On May 24, 2017 8:49 AM, "Bill Bejeck"  wrote:
> > > >
> > > > > +1  for the KIP and +1 what Xavier said as well.
> > > > >
> > > > > On Wed, May 24, 2017 at 3:57 AM, Damian Guy 
> > > > wrote:
> > > > >
> > > > > > Also, +1 for the KIP
> > > > > >
> > > > > > On Wed, 24 May 2017 at 08:57 Damian Guy 
> > > wrote:
> > > > > >
> > > > > > > +1 to what Xavier said
> > > > > > >
> > > > > > > On Wed, 24 May 2017 at 06:45 Xavier Léauté <
> xav...@confluent.io>
> > > > > wrote:
> > > > > > >
> > > > > > >> I don't think we should wait for entries from each stream,
> since
> > > > that
> > > > > > >> might
> > > > > > >> limit the usefulness of the cogroup operator. There are
> > instances
> > > > > where
> > > > > > it
> > > > > > >> can be useful to compute something based on data from one or
> > more
> > > > > > stream,
> > > > > > >> without having to wait for all the streams to produce
> something
> > > for
> > > > > the
> > > > > > >> group. In the example I gave in the discussion, it is possible
> > to
> > > > > > compute
> > > > > > >> impression/auction statistics without having to wait for click
> > > data,
> > > > > > which
> > > > > > >> can typically arrive several minutes late.
> > > > > > >>
> > > > > > >> We could have a separate discussion around adding inner /
> outer
> > > > > > modifiers
> > > > > > >> to each of the streams to decide which fields are optional /
> > > > required
> > > > > > >> before sending updates if we think that might be useful.
> > > > > > >>
> > > > > > >>
> > > > > > >>
> > > > > > >> On Tue, May 23, 2017 at 6:28 PM Guozhang Wang <
> > wangg...@gmail.com
> > > >
> > > > > > wrote:
> > > > > > >>
> > > > > > >> > The proposal LGTM, +1
> > > > > > >> >
> > > > > > >> > One question I have is about when to send the record to the
> > > > resulted
> > > > > > >> KTable
> > > > > > >> > changelog. For example in your code snippet in the wiki
> page,
> > > > before
> > > > > > you
> > > > > > >> > see the end result of
> > > > > > >> >
> > > > > > >> > 1L, Customer[
> > > > > > >> >
> > > > > > >> >   cart:{Item[no:01], Item[no:03],
> > > > Item[no:04]},
> > > > > > >> >   purchases:{Item[no:07], Item[no:08]},
> > > > > > >> >   wishList:{Item[no:11]}
> > > > > > >> >   ]
> > > > > > >> >
> > > > > > >> >
> > > > > > >> > You will first see
> > > > > > >> >
> > > > > > >> > 1L, Customer[
> > > > > > >> >
> > > > > > >> >   cart:{Item[no:01]},
> > > > > > >> >   purchases:{},
> > > > > > >> >   wishList:{}
> > > > > > >> >   ]
> > > > > > >> >
> > > > > > >> > 1L, Customer[
> > > > > > >> >
> > > > > > >> >   cart:{Item[no:01]},
> > > > > > >> >   purchases:{Item[no:07],Item[no:08]},
> > > > > > >> >
> > > > > > >> >   wishList:{}
> > > > > > >> >   ]
> > > > > > >> >
> > > > > > >> > 1L, Customer[
> > > > > > >> >
> > > > > > >> >   cart:{Item[no:01]},
> > > > > > >> >   purchases:{Item[no:07],Item[no:08]},
> > > > > > >> >
> > > > > > >> >   wishList:{}
> > > > > > >> >   ]
> > > > > > >> >
> > > > > > >> > ...
> > > > > > >> >
> > > > > > >> >
> > > > > > >> > I'm wondering if it makes more sense to only start sending
> the
> > > > > update
> > > > > > if
> > > > > > >> > the corresponding agg-key has seen at least one input from
> > each
> > > of
> > > > > the
> > > > > > >> > input stream? Maybe it is out of the scope of this KIP and
> we
> > > can
> > > > > make
> > > > > > >> it a
> > > > > > >> > more general discussion in a separate one.
> > > > > > >> >
> > > > > > >> >
> > > > > > >> > Guozhang
> > > > > > >> >
> > > > > > >> >
> > > > > > >> > On Fri, May 19, 2017 at 8:37 AM, Xavier Léauté <
> > > > xav...@confluent.io
> > > > > >
> > 

Re: [DISCUSS] KIP-150 - Kafka-Streams Cogroup

2017-06-26 Thread Kyle Winkelman
The KIP and PR have been updated please go take a look and vote.

For those worried about the [DISCUSS] Streams DSL/StateStore Refactoring
email thread affecting this I believe the cogroup methods fit well into the
streams dsl and won't need to change. We can update the aggregate methods
in the same way we choose to update them in KGroupedStream.
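
For readers following along, here is a rough sketch of the DSL shape under
discussion (the Customer type, the grouped streams, the aggregators and the
store name are hypothetical, and the signatures follow the KIP draft rather
than any released API):

    // Aggregator on each cogroup call; initializer and serde on the final
    // aggregate call, per the consensus earlier in this thread.
    CogroupedKStream<Long, Customer> cogrouped =
        groupedCartStream.cogroup(cartAggregator)
                         .cogroup(groupedPurchaseStream, purchaseAggregator)
                         .cogroup(groupedWishListStream, wishListAggregator);

    KTable<Long, Customer> customers =
        cogrouped.aggregate(Customer::new, customerSerde, "customer-store");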

Thanks,
Kyle

On Jun 14, 2017 8:14 PM, "Bill Bejeck"  wrote:

> +1
>
> Thanks,
> Bill
>
> On Wed, Jun 14, 2017 at 8:10 PM, Xavier Léauté 
> wrote:
>
> > +1 from me
> >
> > any stream should be able to initialize the cogroup
> >
> > On Wed, Jun 14, 2017 at 3:44 PM Kyle Winkelman  >
> > wrote:
> >
> > > I will update the kip to have only the aggregator in the first cogroup
> > call
> > > and the initializer and serde in the aggregate calls.
> > >
> > > This seems to be the consensus on the email chain.
> > >
> > > Thanks,
> > > Kyle
> > >
> > > On Jun 14, 2017 5:41 PM, wrote:
> > >
> > > That is not the case. No matter which stream the record comes in on, the
> > > initializer is called if there is not currently an object in the store.
> > >
> > > On Jun 14, 2017 5:12 PM, "Guozhang Wang"  wrote:
> > >
> > > Regarding where we should ask users to set serdes: I think I'm
> > > convinced by Xavier that they should be in the `aggregate` call (but only
> > > those that do not pass in a state store supplier) instead of the
> > > `KStream#cogroup` call, to be consistent with other aggregate functions.
> > >
> > > BTW another motivation for me to suggest keeping the initializer on the
> > > first stream is that by reviewing the PR (some time ago though, so
> again
> > I
> > > might be wrong) we will trigger the initializer only when we received
> an
> > > incoming record from the first stream whose key is not in the state
> store
> > > yet, while for other streams we will just drop it on the floor. If that
> > is
> > > actually not the case, that we call initializer on any one of the
> > > co-grouped streams' incoming records, then I'm open to set the
> > initializer
> > > at the `aggregate` call as well.
> > >
> > >
> > > Guozhang
> > >
> > > On Wed, Jun 14, 2017 at 2:23 PM, Guozhang Wang 
> > wrote:
> > >
> > > > I'd suggest we do not block this KIP until the serde work has been
> > sorted
> > > > out: we cannot estimate yet how long it will take yet. Instead let's
> > say
> > > > make an agreement on where we want to specify the serdes: whether on
> > the
> > > > first co-group call or on the aggregate call.
> > > >
> > > > Also about the initializer specification I actually felt that the
> first
> > > > cogrouped stream is special (Kyle please feel free to correct me if
> I'm
> > > > wrong) and that is why I thought it is better to specify the
> > initializer
> > > at
> > > > the beginning: since from the typing you can see that the final
> > > aggregated
> > > > value type is defined to be the same as the first co-grouped stream,
> > and
> > > > for any intermediate stream to co-group, their value types will not be
> > > > inherited but their values will be "incorporated" into the original stream:
> > > >
> > > >   <T> CogroupedKStream<K, V> cogroup(final KGroupedStream<K, T>
> > > > groupedStream, final Aggregator<? super K, ? super T, V> aggregator)
> > > >
> > > > Note that we do not have a cogroup function that returns
> > > > CogroupedKStream.
> > > >
> > > >
> > > > Guozhang
> > > >
> > > >
> > > > On Tue, Jun 13, 2017 at 2:31 PM, Bill Bejeck 
> > wrote:
> > > >
> > > >> +1 on deferring discussion on Serdes until API improvements are
> ironed
> > > >> out.
> > > >>
> > > >> On Tue, Jun 13, 2017 at 2:06 PM, Matthias J. Sax <
> > matth...@confluent.io
> > > >
> > > >> wrote:
> > > >>
> > > >> > Hi,
> > > >> >
> > > >> > I am just catching up on this thread. (1) as most people agree, we
> > > >> > should not add anything to KStreamBuilder (btw: we actually plan
> to
> > > move
> > > >> > #merge() to KStream and deprecate it on KStreamBuilder as it's a
> > quite
> > > >> > unnatural API atm).
> > > >> >
> > > >> > About specifying Serdes: there is still the idea to improve to
> > overall
> > > >> > API from the current "we are adding more overloads"-pattern to a
> > > >> > builder-like pattern. This might make the whole discussion void if
> > we
> > > do
> > > >> > this. Thus, it might make sense to keep this in mind (or even
> delay
> > > this
> > > >> > KIP?). It seems a waste of time to discuss all this if we are
> going
> > to
> > > >> > chance it in 2 month anyway... Just saying.
> > > >> >
> > > >> >
> > > >> > -Matthias
> > > >> >
> > > >> > On 6/13/17 8:05 AM, Michal Borowiecki wrote:
> > > >> > > You're right, I haven't thought of that.
> > > >> > >
> > > >> > > Cheers,
> > > >> > >
> > > >> > > Michał
> > > >> > >
> > > >> > >
> > > >> > > On 13/06/17 13:00, Kyle Winkelman wrote:
> > > >> > >> First, I would prefer not calling it aggregate because there
> 

Re: [DISCUSS] KIP-168: Add TotalTopicCount metric per cluster

2017-06-26 Thread Joel Koshy
+1 on the original KIP
I actually prefer TotalTopicCount because it makes it clearer that it is a
cluster-wide count. OfflinePartitionsCount is global to the cluster (but it
is fairly clear that the controller is the source of truth on that).
TopicCount, on the other hand, could be misread as a local count, since
PartitionCount, URP, etc. are all local counts.

On Thu, Jun 22, 2017 at 9:20 AM, Abhishek Mendhekar <
abhishek.mendhe...@gmail.com> wrote:

> Hi Kafka Dev,
>
> Below is the link to the update KIP proposal.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 168%3A+Add+TopicCount+metric+per+cluster
>
> Thanks,
> Abhishek
>
> On Wed, Jun 21, 2017 at 3:55 PM, Abhishek Mendhekar <
> abhishek.mendhe...@gmail.com> wrote:
>
> > Hi Dong,
> >
> > Thanks for the suggestion!
> >
> > I think TopicCount sounds reasonable to me and it definitely seems
> > consistent with the other metric names. I will update the proposal to
> > reflect this change.
> >
> > Thanks,
> > Abhishek
> >
> > On Wed, Jun 21, 2017 at 2:17 PM, Dong Lin  wrote:
> >
> >> Hey Abhishek,
> >>
> >> I think the metric is useful. Sorry for being late on this. I am
> wondering
> >> if TopicCount is a better name than TotalTopicCount, given that we
> >> currently have metric with names OfflinePartitionsCount, LeaderCount,
> >> PartitionCount etc.
> >>
> >> Thanks,
> >> Dong
> >>
> >> On Fri, Jun 16, 2017 at 9:09 AM, Abhishek Mendhekar <
> >> abhishek.mendhe...@gmail.com> wrote:
> >>
> >> > Hi Kafka Dev,
> >> >
> >> > I created KIP-168 to propose adding a metric to emit total topic count
> >> > in a cluster. The metric will be emitted by the controller.
> >> >
> >> > The KIP can be found here
> >> > (https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >> > 168%3A+Add+TotalTopicCount+metric+per+cluster)
> >> > and the associated JIRA improvement is KAFKA-5461
> >> > (https://issues.apache.org/jira/browse/KAFKA-5461)
> >> >
> >> > Appreciate all the comments.
> >> >
> >> > Best,
> >> >
> >> > Abhishek
> >> >
> >>
> >
> >
> >
> > --
> > Abhishek Mendhekar
> > abhishek.mendhe...@gmail.com | 818.263.7030 <(818)%20263-7030>
> >
>
>
>
> --
> Abhishek Mendhekar
> abhishek.mendhe...@gmail.com | 818.263.7030
>


[jira] [Created] (KAFKA-5518) General Kafka connector performance workload

2017-06-26 Thread Chen He (JIRA)
Chen He created KAFKA-5518:
--

 Summary: General Kafka connector performance workload
 Key: KAFKA-5518
 URL: https://issues.apache.org/jira/browse/KAFKA-5518
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 0.10.2.1
Reporter: Chen He


Sorry, first time creating a Kafka JIRA. Just curious whether there is a
general-purpose performance workload for Kafka connectors (HDFS, S3, etc.).
Then we can set up a standard and evaluate the performance of further
connectors such as Swift, etc.

Please feel free to comment or mark as a duplicate if there is already a JIRA
tracking this.





Jenkins build is back to normal : kafka-0.10.2-jdk7 #177

2017-06-26 Thread Apache Jenkins Server
See 




Re: Open PRs

2017-06-26 Thread Ismael Juma
Hey Tom,

Thanks for the PRs you have submitted. As you suggested, most of us are
busy testing the upcoming release. It's a major release with many exciting
features and we want to make sure it's a high quality release.

I understand that it can be frustrating to wait. At the same time, there
are not enough hours in a day to do everything we'd like to do (even if one
gives up sleep).

Ismael

On Mon, Jun 26, 2017 at 5:15 PM, Tom Bentley  wrote:

> I realise that 0.11.0.0 is imminent and so the committers are rightly going
> to be rather focussed on that, but I opened some PRs nearly a week ago and
> they don't seem to have been looked at.
>
> Even a comment on the PR to the effect of "We'll look at this right after
> 0.11.0.0" would at least reassure contributors that the PR isn't just being
> ignored. But even that is second best to actually merging or rejecting PRs.
> For instance there are other PRs I want to open but right now there's no
> point because I know they'll conflict with PRs I've already opened.
>
> Cheers,
>
> Tom
>
> p.s. I was encouraged to whinge because this page
> https://kafka.apache.org/contributing says I should :-)
>


Open PRs

2017-06-26 Thread Tom Bentley
I realise that 0.11.0.0 is imminent and so the committers are rightly going
to be rather focussed on that, but I opened some PRs nearly a week ago and
they don't seem to have been looked at.

Even a comment on the PR to the effect of "We'll look at this right after
0.11.0.0" would at least reassure contributors that the PR isn't just being
ignored. But even that is second best to actually merging or rejecting PRs.
For instance there are other PRs I want to open but right now there's no
point because I know they'll conflict with PRs I've already opened.

Cheers,

Tom

p.s. I was encouraged to whinge because this page
https://kafka.apache.org/contributing says I should :-)


[GitHub] kafka pull request #3436: KAFKA-5517: Add id to config HTML tables to allow ...

2017-06-26 Thread tombentley
GitHub user tombentley opened a pull request:

https://github.com/apache/kafka/pull/3436

KAFKA-5517: Add id to config HTML tables to allow linking



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tombentley/kafka KAFKA-5517

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3436.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3436








[jira] [Created] (KAFKA-5517) Support linking to particular configuration parameters

2017-06-26 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-5517:
--

 Summary: Support linking to particular configuration parameters
 Key: KAFKA-5517
 URL: https://issues.apache.org/jira/browse/KAFKA-5517
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Reporter: Tom Bentley
Assignee: Tom Bentley
Priority: Minor


Currently the configuration parameters are documented in long tables, and it's
only possible to link to the heading before a particular table. When discussing
configuration parameters on forums it would be helpful to be able to link to
the particular parameter under discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [VOTE] KIP-160: Augment KStream.print() to allow users pass in extra parameters in the printed string

2017-06-26 Thread Bill Bejeck
+1

On Sat, Jun 24, 2017 at 4:37 AM, Matthias J. Sax 
wrote:

> +1
>
> On 6/23/17 9:43 PM, Guozhang Wang wrote:
> > +1
> >
> > On Fri, Jun 23, 2017 at 3:16 AM, Eno Thereska 
> > wrote:
> >
> >> +1 thanks!
> >>
> >> Eno
> >>> On 23 Jun 2017, at 05:29, James Chain 
> wrote:
> >>>
> >>> Hi all,
> >>>
> >>> I apply original idea on KStream#writeAsText() and also update my pull
> >>> request.
> >>> Please re-review and re-cast the vote.
> >>>
> >>> James Chien
> >>
> >>
> >
> >
>
>


Re: [DISCUSS] KIP-167: Add a restoreAll method to StateRestoreCallback

2017-06-26 Thread Bill Bejeck
Thinking about this some more, I have another approach. Leave the first
parameter as a String in the StateRestoreListener interface.

But we'll provide two default abstract classes, one implementing
StateRestoreCallback and the other implementing
BatchingStateRestoreCallback. Both abstract classes will also implement
the StateRestoreListener interface, with no-op methods provided for the
restore progress methods.
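
As a rough illustration, assuming interface shapes along the lines discussed
in this thread (method names and signatures are illustrative, not final), the
second abstract class could look like:

    // Sketch only: implementors provide the batch restore logic and opt into
    // progress notifications by overriding the no-op listener methods.
    public abstract class AbstractBatchingRestoreCallback
            implements BatchingStateRestoreCallback, StateRestoreListener {

        @Override
        public void onRestoreStart(final String storeName,
                                   final long startingOffset,
                                   final long endingOffset) {
            // no-op by default
        }

        @Override
        public void onRestoreEnd(final String storeName, final long totalRestored) {
            // no-op by default
        }
    }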

WDYT?

On Mon, Jun 26, 2017 at 10:13 AM, Bill Bejeck  wrote:

> Guozhang,
>
> Thanks for the comments.
>
> I think that will work, but my concern is it might not be as clear to
> users that want to receive external notification of the restore progress
> separately (say for reporting purposes) and still send separate signals to
> the state store for resource management tasks.
>
> However I like this approach better and I have some ideas I can do in the
> implementation, so I'll update the KIP accordingly.
>
> Thanks,
> Bill
>
> On Wed, Jun 21, 2017 at 10:14 PM, Guozhang Wang 
> wrote:
>
>> More specifically, if we can replace the first parameter from the String
>> store name to the store instance itself, would that be sufficient to
>> cover `
>> StateRestoreNotification`?
>>
>> On Wed, Jun 21, 2017 at 7:13 PM, Guozhang Wang 
>> wrote:
>>
>> > Bill,
>> >
>> > I'm wondering why we need the `StateRestoreNotification` while still
>> > having `StateRestoreListener`, could the above setup achievable just
>> with
>> > `StateRestoreListener.onRestoreStart / onRestoreEnd`? I.e. it seems the
>> > later can subsume any use cases intended for the former API.
>> >
>> > Guozhang
>> >
>> > On Mon, Jun 19, 2017 at 3:23 PM, Bill Bejeck  wrote:
>> >
>> >> I'm going to update the KIP with new interface StateRestoreNotification
>> >> containing two methods, startRestore and endRestore.
>> >>
>> >> While naming is very similar to methods already proposed on the
>> >> StateRestoreListener, the intent of these methods is not for user
>> >> notification of restore status.  Instead these new methods are for
>> >> internal
>> >> use by the state store to perform any required setup and teardown work
>> due
>> >> to a batch restoration process.
>> >>
>> >> Here's one current use case: when using RocksDB we should optimize for
>> a
>> >> bulk load by setting Options.prepareForBulkload().
>> >>
>> >>1. If the database has already been opened, we'll need to close it,
>> set
>> >>the "prepareForBulkload" and re-open the database.
>> >>2. Once the restore is completed we'll need to close and re-open the
>> >>database with the "prepareForBulkload" option turned off.
>> >>
> >> While we are mentioning the RocksDB use case above, the addition of this
> >> interface is not tied to any particular implementation of a persistent
> >> state store.
>> >>
>> >> Additionally, a separate interface is needed so that any user can
>> >> implement
>> >> the state restore notification feature regardless of the state restore
>> >> callback used.
>> >>
>> >> I'll also remove the "getStateRestoreListener" method and stick with
>> the
>> >> notion of a "global" restore listener for now.
>> >>
>> >> On Mon, Jun 19, 2017 at 1:05 PM, Bill Bejeck 
>> wrote:
>> >>
>> >> > Yes it is, more of an oversight on my part, I'll remove it from the
>> KIP.
>> >> >
>> >> >
>> >> > On Mon, Jun 19, 2017 at 12:48 PM, Matthias J. Sax <
>> >> matth...@confluent.io>
>> >> > wrote:
>> >> >
>> >> >> Hi,
>> >> >>
>> >> >> I thinks for now it's good enough to start with a single global
>> restore
>> >> >> listener. We can incrementally improve this later on if required. Of
>> >> >> course, if it's easy to do right away we can also be more fine
>> grained.
>> >> >> But for KTable, we might want to add this after getting rid of all
>> the
>> >> >> overloads we have atm.
>> >> >>
>> >> >> One question: what is the purpose of parameter "endOffset" in
>> >> >> #onRestoreEnd() -- isn't this the same value as provided in
>> >> >> #onRestoreStart() ?
>> >> >>
>> >> >>
>> >> >> -Matthias
>> >> >>
>> >> >>
>> >> >>
>> >> >> On 6/15/17 6:18 AM, Bill Bejeck wrote:
>> >> >> > Thinking about the custom StateRestoreListener approach and
>> having a
>> >> get
>> >> >> > method on the interface will really only work for custom state
>> >> stores.
>> >> >> >
>> >> >> > So we'll need to provide another way for users to set behavior
>> with
>> >> >> > provided state stores.  The only option that comes to mind now is
>> >> also
>> >> >> > adding a parameter to the StateStoreSupplier.
>> >> >> >
>> >> >> >
>> >> >> > Bill
>> >> >> >
>> >> >> >
>> >> >> > On Wed, Jun 14, 2017 at 5:39 PM, Bill Bejeck 
>> >> wrote:
>> >> >> >
>> >> >> >> Guozhang,
>> >> >> >>
>> >> >> >> Thanks for the comments.
>> >> >> >>
>> >> >> >> 1.  As for the granularity, I agree that having one global
>> >> >> >> StateRestoreListener could be restrictive.  But I think it's
>> >> 

Re: [DISCUSS] KIP-167: Add a restoreAll method to StateRestoreCallback

2017-06-26 Thread Bill Bejeck
Guozhang,

Thanks for the comments.

I think that will work, but my concern is it might not be as clear to users
that want to receive external notification of the restore progress
separately (say for reporting purposes) and still send separate signals to
the state store for resource management tasks.

However I like this approach better and I have some ideas I can do in the
implementation, so I'll update the KIP accordingly.

Thanks,
Bill

On Wed, Jun 21, 2017 at 10:14 PM, Guozhang Wang  wrote:

> More specifically, if we can replace the first parameter from the String
> store name to the store instance itself, would that be sufficient to cover
> `
> StateRestoreNotification`?
>
> On Wed, Jun 21, 2017 at 7:13 PM, Guozhang Wang  wrote:
>
> > Bill,
> >
> > I'm wondering why we need the `StateRestoreNotification` while still
> > having `StateRestoreListener`, could the above setup achievable just with
> > `StateRestoreListener.onRestoreStart / onRestoreEnd`? I.e. it seems the
> > later can subsume any use cases intended for the former API.
> >
> > Guozhang
> >
> > On Mon, Jun 19, 2017 at 3:23 PM, Bill Bejeck  wrote:
> >
> >> I'm going to update the KIP with new interface StateRestoreNotification
> >> containing two methods, startRestore and endRestore.
> >>
> >> While naming is very similar to methods already proposed on the
> >> StateRestoreListener, the intent of these methods is not for user
> >> notification of restore status.  Instead these new methods are for
> >> internal
> >> use by the state store to perform any required setup and teardown work
> due
> >> to a batch restoration process.
> >>
> >> Here's one current use case: when using RocksDB we should optimize for a
> >> bulk load by setting Options.prepareForBulkload().
> >>
> >>1. If the database has already been opened, we'll need to close it,
> set
> >>the "prepareForBulkload" and re-open the database.
> >>2. Once the restore is completed we'll need to close and re-open the
> >>database with the "prepareForBulkload" option turned off.
> >>
> >> While we are mentioning the RocksDB use case above, the addition of this
> >> interface is not tied to any particular implementation of a persistent
> >> state store.
> >>
> >> Additionally, a separate interface is needed so that any user can
> >> implement
> >> the state restore notification feature regardless of the state restore
> >> callback used.
> >>
> >> I'll also remove the "getStateRestoreListener" method and stick with the
> >> notion of a "global" restore listener for now.
> >>
> >> On Mon, Jun 19, 2017 at 1:05 PM, Bill Bejeck  wrote:
> >>
> >> > Yes it is, more of an oversight on my part, I'll remove it from the
> KIP.
> >> >
> >> >
> >> > On Mon, Jun 19, 2017 at 12:48 PM, Matthias J. Sax <
> >> matth...@confluent.io>
> >> > wrote:
> >> >
> >> >> Hi,
> >> >>
> >> >> I thinks for now it's good enough to start with a single global
> restore
> >> >> listener. We can incrementally improve this later on if required. Of
> >> >> course, if it's easy to do right away we can also be more fine
> grained.
> >> >> But for KTable, we might want to add this after getting rid of all
> the
> >> >> overloads we have atm.
> >> >>
> >> >> One question: what is the purpose of parameter "endOffset" in
> >> >> #onRestoreEnd() -- isn't this the same value as provided in
> >> >> #onRestoreStart() ?
> >> >>
> >> >>
> >> >> -Matthias
> >> >>
> >> >>
> >> >>
> >> >> On 6/15/17 6:18 AM, Bill Bejeck wrote:
> >> >> > Thinking about the custom StateRestoreListener approach and having
> a
> >> get
> >> >> > method on the interface will really only work for custom state
> >> stores.
> >> >> >
> >> >> > So we'll need to provide another way for users to set behavior with
> >> >> > provided state stores.  The only option that comes to mind now is
> >> also
> >> >> > adding a parameter to the StateStoreSupplier.
> >> >> >
> >> >> >
> >> >> > Bill
> >> >> >
> >> >> >
> >> >> > On Wed, Jun 14, 2017 at 5:39 PM, Bill Bejeck 
> >> wrote:
> >> >> >
> >> >> >> Guozhang,
> >> >> >>
> >> >> >> Thanks for the comments.
> >> >> >>
> >> >> >> 1.  As for the granularity, I agree that having one global
> >> >> >> StateRestoreListener could be restrictive.  But I think it's
> >> important
> >> >> to
> >> >> >> have a "setStateRestoreListener" on KafkaStreams as this allows
> >> users
> >> >> to
> >> >> >> define an anonymous instance that has access to local scope for
> >> >> reporting
> >> >> >> purposes.  This is a similar pattern we use for
> >> >> >> KafkaStreams.setStateListener.
> >> >> >>
> >> >> >> As an alternative, what if we add a method to the
> >> >> BatchingStateRestoreCallback
> >> >> >> interface named "getStateStoreListener".   Then in an abstract
> >> adapter
> >> >> >> class we return null from getStateStoreListener.   But if users
> >> want to
> >> >> >> supply a different StateRestoreListener strategy per 

[GitHub] kafka pull request #3435: KAFKA-4388 Recommended values for converters from ...

2017-06-26 Thread evis
GitHub user evis opened a pull request:

https://github.com/apache/kafka/pull/3435

KAFKA-4388 Recommended values for converters from plugins

Questions to reviewers:
1. Should we cache `converterRecommenders.validValues()`, 
`SinkConnectorConfig.configDef()` and `SourceConnectorConfig.configDef()` 
results?
2. What is the appropriate place for testing the new
`ConnectorConfig.configDef(plugins)` functionality?

cc @ewencp 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/evis/kafka converters_values

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3435.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3435


commit 069a1ba832f844c20224598c622fa19576b0ba61
Author: Evgeny Veretennikov 
Date:   2017-06-26T13:44:40Z

KAFKA-4388 Recommended values for converters from plugins

ConnectorConfig.configDef() takes a Plugins parameter now. The list of
recommended values for converters is taken from plugins.converters().






[jira] [Created] (KAFKA-5516) Formatting verifiable producer/consumer output in a similar fashion

2017-06-26 Thread Paolo Patierno (JIRA)
Paolo Patierno created KAFKA-5516:
-

 Summary: Formatting verifiable producer/consumer output in a 
similar fashion
 Key: KAFKA-5516
 URL: https://issues.apache.org/jira/browse/KAFKA-5516
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Paolo Patierno
Assignee: Paolo Patierno
Priority: Trivial


Hi,
following the proposal to have the verifiable producer and consumer provide
very similar output, where the "timestamp" is always the first field, followed
by the event "name" and then all the data specific to that event.
It includes refactoring the verifiable producer so its output is formatted the
same way as the verifiable consumer's.

Thanks,
Paolo





[GitHub] kafka pull request #3434: KAFKA-5516: Formatting verifiable producer/consume...

2017-06-26 Thread ppatierno
GitHub user ppatierno opened a pull request:

https://github.com/apache/kafka/pull/3434

KAFKA-5516: Formatting verifiable producer/consumer output in a similar 
fashion



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ppatierno/kafka verifiable-consumer-producer

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3434.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3434


commit 6e6d728dce83ea689061a196e1b6811b447d4db7
Author: ppatierno 
Date:   2017-06-26T12:07:08Z

Modified JSON order attributes in a more readable fashion

commit 9235aadfccf446415ffbfd5d90c8d4faeddecc08
Author: ppatierno 
Date:   2017-06-26T12:56:07Z

Fixed documentation about old request.required.acks producer parameter
Modified JSON order attributes in a more readable fashion
Refactoring on verifiable producer to be like the verifiable consumer






[jira] [Created] (KAFKA-5515) Consider removing date formatting from Segments class

2017-06-26 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-5515:
--

 Summary: Consider removing date formatting from Segments class
 Key: KAFKA-5515
 URL: https://issues.apache.org/jira/browse/KAFKA-5515
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Bill Bejeck


Currently the {{Segments}} class uses a date when calculating the segment id
and uses {{SimpleDateFormat}} for formatting the segment id. However, this is a
high-volume code path and creating a new {{SimpleDateFormat}} for each segment
id is expensive. We should look into removing the date from the segment id or,
at a minimum, using a faster alternative to {{SimpleDateFormat}}.
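
For illustration, one possible shape of the minimal fix (not the actual Streams
code; the date pattern is a made-up example) is to reuse one formatter per
thread instead of allocating a new {{SimpleDateFormat}} per segment id:

    import java.text.SimpleDateFormat;
    import java.util.Date;

    public class SegmentIdFormatter {
        // SimpleDateFormat is not thread-safe, hence one cached instance per
        // thread rather than a single shared static instance.
        private static final ThreadLocal<SimpleDateFormat> FORMATTER =
                ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyyMMddHHmm"));

        public static String format(final long segmentTimestampMs) {
            return FORMATTER.get().format(new Date(segmentTimestampMs));
        }
    }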





[jira] [Created] (KAFKA-5514) KafkaConsumer ignores default values in Properties object because of incorrect use of Properties object.

2017-06-26 Thread Geert Schuring (JIRA)
Geert Schuring created KAFKA-5514:
-

 Summary: KafkaConsumer ignores default values in Properties object 
because of incorrect use of Properties object.
 Key: KAFKA-5514
 URL: https://issues.apache.org/jira/browse/KAFKA-5514
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.10.2.1
Reporter: Geert Schuring


When setting default values in a Properties object, the KafkaConsumer ignores
these values because the Properties object is being treated as a Map. The
ConsumerConfig object uses the putAll method to copy properties from the
incoming object to its local copy. (See
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerConfig.java#L471)

This is incorrect because putAll only copies the explicitly set properties and
ignores the default values also present in the Properties object. (Also see:
https://stackoverflow.com/questions/2004833/how-to-merge-two-java-util-properties-objects)
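
A small self-contained sketch of the pitfall (not Kafka code; the property name
is illustrative):

    import java.util.Properties;

    public class PropertiesDefaultsPitfall {
        public static void main(String[] args) {
            Properties defaults = new Properties();
            defaults.setProperty("group.id", "my-group");

            // "group.id" lives in the defaults table, not the main table.
            Properties props = new Properties(defaults);

            // Map-style copy: only the main table is copied, defaults are lost.
            Properties copy = new Properties();
            copy.putAll(props);

            System.out.println(props.getProperty("group.id")); // my-group
            System.out.println(copy.getProperty("group.id"));  // null

            // A copy that preserves defaults must go through the property view.
            Properties safeCopy = new Properties();
            for (String name : props.stringPropertyNames()) {
                safeCopy.setProperty(name, props.getProperty(name));
            }
            System.out.println(safeCopy.getProperty("group.id")); // my-group
        }
    }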





[GitHub] kafka pull request #3433: MINOR: typo in variable name "unkownInfo"

2017-06-26 Thread evis
GitHub user evis opened a pull request:

https://github.com/apache/kafka/pull/3433

MINOR: typo in variable name "unkownInfo"



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/evis/kafka unkown_info_typo

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3433.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3433


commit 0a8cf330c5eb5d49494c70eba95de8018a66e17c
Author: Evgeny Veretennikov 
Date:   2017-06-26T12:22:34Z

MINOR: typo in variable name "unkownInfo"






[GitHub] kafka pull request #3432: KAFKA-5372: fixes to state transitions

2017-06-26 Thread enothereska
GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/3432

KAFKA-5372: fixes to state transitions



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka KAFKA-5372-state-transitions

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3432.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3432


commit 5f57558ef293351d9f5db11edb62089367e39b76
Author: Eno Thereska 
Date:   2017-06-26T11:04:20Z

Checkpoint

commit ac372b196998052a024aac47af64dbd803a65733
Author: Eno Thereska 
Date:   2017-06-26T12:11:26Z

Some fixes






Re: [VOTE] 0.11.0.0 RC2

2017-06-26 Thread Ismael Juma
Hi Vahid,

Sorry for not replying to the previous email, I had missed it. A couple of
questions:

1. Is this also happening in trunk? It seems like it has been the case for
months, and seemingly no one reported it until the RC stage.
2. Is it correct that this only happens when compiling on Windows without
Cygwin?

I believe the following PR should fix it, please verify:

https://github.com/apache/kafka/pull/3431

Ismael

On Fri, Jun 23, 2017 at 8:25 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Hi Ismael,
>
> Not sure if my response on RC1 was lost or this issue is not a
> show-stopper:
>
> I checked again and with RC2, tests still fail in my Windows 64-bit
> environment.
>
> :clients:checkstyleMain
> [ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\protocol\Errors.java:89:1:
> Class Data Abstraction Coupling is 57 (max allowed is 20) classes
> [ApiExceptionBuilder, BrokerNotAvailableException,
> ClusterAuthorizationException, ConcurrentTransactionsException,
> ControllerMovedException, CoordinatorLoadInProgressException,
> CoordinatorNotAvailableException, CorruptRecordException,
> DuplicateSequenceNumberException, GroupAuthorizationException,
> IllegalGenerationException, IllegalSaslStateException,
> InconsistentGroupProtocolException, InvalidCommitOffsetSizeException,
> InvalidConfigurationException, InvalidFetchSizeException,
> InvalidGroupIdException, InvalidPartitionsException,
> InvalidPidMappingException, InvalidReplicaAssignmentException,
> InvalidReplicationFactorException, InvalidRequestException,
> InvalidRequiredAcksException, InvalidSessionTimeoutException,
> InvalidTimestampException, InvalidTopicException, InvalidTxnStateException,
> InvalidTxnTimeoutException, LeaderNotAvailableException, NetworkException,
> NotControllerException, NotCoordinatorException,
> NotEnoughReplicasAfterAppendException, NotEnoughReplicasException,
> NotLeaderForPartitionException, OffsetMetadataTooLarge,
> OffsetOutOfRangeException, OperationNotAttemptedException,
> OutOfOrderSequenceException, PolicyViolationException,
> ProducerFencedException, RebalanceInProgressException,
> RecordBatchTooLargeException, RecordTooLargeException,
> ReplicaNotAvailableException, SecurityDisabledException, TimeoutException,
> TopicAuthorizationException, TopicExistsException,
> TransactionCoordinatorFencedException, TransactionalIdAuthorizationException,
> UnknownMemberIdException, UnknownServerException,
> UnknownTopicOrPartitionException, UnsupportedForMessageFormatException,
> UnsupportedSaslMechanismException, UnsupportedVersionException].
> [ClassDataAbstractionCoupling]
> [ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\protocol\Errors.java:89:1:
> Class Fan-Out Complexity is 60 (max allowed is 40). [ClassFanOutComplexity]
> [ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\requests\AbstractRequest.java:26:1: Class Fan-Out Complexity is 43 (max allowed is 40). [ClassFanOutComplexity]
> [ant:checkstyle] [ERROR] C:\Users\User\Downloads\kafka-0.11.0.0-src\clients\src\main\java\org\apache\kafka\common\requests\AbstractResponse.java:26:1: Class Fan-Out Complexity is 42 (max allowed is 40). [ClassFanOutComplexity]
> :clients:checkstyleMain FAILED
>
> FAILURE: Build failed with an exception.
>
> Thanks.
> --Vahid
>
>
>
> From:Ismael Juma 
> To:dev@kafka.apache.org, Kafka Users ,
> kafka-clients 
> Date:06/22/2017 06:16 PM
> Subject:[VOTE] 0.11.0.0 RC2
> Sent by:isma...@gmail.com
> --
>
>
>
> Hello Kafka users, developers and client-developers,
>
> This is the third candidate for release of Apache Kafka 0.11.0.0.
>
> This is a major version release of Apache Kafka. It includes 32 new KIPs.
> See the release notes and release plan (
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.11.0.0)
> for more details. A few feature highlights:
>
> * Exactly-once delivery and transactional messaging
> * Streams exactly-once semantics
> * Admin client with support for topic, ACLs and config management
> * Record headers
> * Request rate quotas
> * Improved resiliency: replication protocol improvement and single-threaded
> controller
> * Richer and more efficient message format
>
> Release notes for the 0.11.0.0 release:
> http://home.apache.org/~ijuma/kafka-0.11.0.0-rc2/RELEASE_NOTES.html
>
> *** Please download, test and vote by Tuesday, June 27, 6pm PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~ijuma/kafka-0.11.0.0-rc2/
>
> * Maven artifacts to be voted upon:
> 

[GitHub] kafka pull request #3431: MINOR: Adjust checkstyle suppression paths to work...

2017-06-26 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/3431

MINOR: Adjust checkstyle suppression paths to work on Windows

Use the file name whenever possible and replace / with [/\\]
when it's not possible.

Also remove unnecessary suppressions.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
fix-checkstyle-suppressions-on-windows

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3431.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3431


commit 9166fefccfea9e4e41026e09cf0ce03ea8174ca1
Author: Ismael Juma 
Date:   2017-06-26T10:43:28Z

MINOR: Adjust checkstyle suppression paths to work on Windows

Use the file name whenever possible and replace / with [/\\]
when it's not possible.

Also remove unnecessary suppressions.






[jira] [Created] (KAFKA-5513) Contradicting scalaDoc for AdminUtils.assignReplicasToBrokers

2017-06-26 Thread Charly Molter (JIRA)
Charly Molter created KAFKA-5513:


 Summary: Contradicting scalaDoc for 
AdminUtils.assignReplicasToBrokers
 Key: KAFKA-5513
 URL: https://issues.apache.org/jira/browse/KAFKA-5513
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Charly Molter
Priority: Trivial


The documentation for AdminUtils.assignReplicasToBrokers seems to contradict
itself.

It says in the description: "As the result, if the number of replicas is equal
to or greater than the number of racks, it will ensure that each rack will get
at least one replica."

This means that it is possible to get an assignment where there are multiple
replicas in a rack (if there are fewer racks than the replication factor).

However, the @throws clause says: "@throws AdminOperationException If rack
information is supplied but it is incomplete, or if it is not possible to
assign each replica to a unique rack."

This seems to contradict the first claim.

In practice it doesn't throw when RF > #racks, so the point in the @throws
clause should probably be removed.

https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/admin/AdminUtils.scala#L121-L130





[jira] [Created] (KAFKA-5512) KafkaConsumer: High memory allocation rate when idle

2017-06-26 Thread Stephane Roset (JIRA)
Stephane Roset created KAFKA-5512:
-

 Summary: KafkaConsumer: High memory allocation rate when idle
 Key: KAFKA-5512
 URL: https://issues.apache.org/jira/browse/KAFKA-5512
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.10.1.1
Reporter: Stephane Roset


Hi,

We noticed in our application that the memory allocation rate increased
significantly when there were no Kafka messages to consume. We isolated the
issue by using a JVM that simply runs 128 Kafka consumers. These consumers
consume 128 partitions (so each consumer consumes one partition). The
partitions are empty and no message was sent during the test. The consumers
were configured with default values (session.timeout.ms=30000,
fetch.max.wait.ms=500, receive.buffer.bytes=65536, heartbeat.interval.ms=3000,
max.poll.interval.ms=300000, max.poll.records=500). The Kafka cluster was made
of 3 brokers. In this context, the allocation rate was about 55 MiB/s. This
high allocation rate generates a lot of GC activity (to collect the young
generation) and was an issue for our project.

We profiled the JVM with JProfiler. We noticed that there was a huge quantity 
of ArrayList$Itr instances in memory. These collections were mainly 
instantiated by the methods handleCompletedReceives, handleCompletedSends, 
handleConnections and handleDisconnections of the class NetworkClient. We also 
noticed a lot of calls to the method pollOnce of the class KafkaConsumer.

So we decided to run only one consumer and to profile the calls to the method 
pollOnce. We noticed that a huge number of calls is regularly made to this 
method, up to 268000 calls within 100 ms. The pollOnce method calls the 
NetworkClient.handle* methods. These methods iterate on collections (even if 
they are empty), which explains the huge number of iterators in memory.

The large number of calls is related to the heartbeat mechanism. The pollOnce 
method calculates the poll timeout; if a heartbeat needs to be done, the 
timeout is set to 0. The problem is that the heartbeat thread only checks every 
100 ms (default value of retry.backoff.ms) whether a heartbeat should be sent, 
so the KafkaConsumer calls the poll method in a loop with no timeout until the 
heartbeat thread wakes up. For example, if the heartbeat thread has just 
started to wait and will wake in 99 ms, then for those 99 ms the KafkaConsumer 
will call pollOnce in a loop with a timeout of 0. That explains how we can get 
268000 calls within 100 ms.
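
A rough sketch of that interaction (the names below are illustrative, not the 
actual Kafka internals):

    // Illustrative only: the poll timeout is capped by the time until the
    // next heartbeat, which is already due, so it collapses to 0.
    long timeToNextHeartbeat = 0;                             // heartbeat due now
    long pollTimeout = Math.min(5000L, timeToNextHeartbeat);  // == 0
    // The heartbeat thread only re-checks every retry.backoff.ms (100 ms),
    // so until it wakes up, every pollOnce(0) returns immediately and the
    // consumer spins, producing the call counts observed above.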

The heartbeat thread calls the method AbstractCoordinator.wait() to sleep, so I 
think the Kafka consumer should wake the heartbeat thread with a notify() when 
needed.

We made two quick fixes to solve this issue:
  - In NetworkClient.handle*(), we don't iterate on collections if they are 
empty (to avoid unnecessary iterator instantiations).
  - In KafkaConsumer.pollOnce(), if the poll timeout is equal to 0 we notify 
the heartbeat thread to wake it (a dirty fix, because we don't handle the 
autocommit case).

With these two quick fixes and 128 consumers, the allocation rate drops from 
55 MiB/s to 4 MiB/s.
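
A minimal, self-contained sketch of the first fix (simplified; the real 
NetworkClient code is more involved, and the method and element types below 
are placeholders):

    import java.util.Collections;
    import java.util.List;

    public class EmptyIterationSketch {
        // A for-each loop over an empty ArrayList still allocates an
        // ArrayList$Itr, so bail out before entering the loop.
        static void handleCompletedReceives(List<String> completedReceives) {
            if (completedReceives.isEmpty())
                return;                      // no iterator allocated when idle
            for (String receive : completedReceives) {
                System.out.println("handling " + receive);
            }
        }

        public static void main(String[] args) {
            handleCompletedReceives(Collections.emptyList()); // allocates nothing
        }
    }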










[GitHub] kafka pull request #3430: KAFKA-4750: RocksDBStore always deletes null value...

2017-06-26 Thread evis
GitHub user evis opened a pull request:

https://github.com/apache/kafka/pull/3430

KAFKA-4750: RocksDBStore always deletes null values

@guozhangwang @mjsax 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/evis/kafka rocksdbstore_null_values

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3430.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3430


commit 47b4b2b2a5fe8af4ca4a60c2cae97dcab4e7eec3
Author: Evgeny Veretennikov 
Date:   2017-06-26T09:21:35Z

KAFKA-4750: RocksDBStore always deletes null values

Deletion occurs even if serde serializes null value
to non-null byte array.






[jira] [Created] (KAFKA-5511) ConfigDef.define() overloads take too many parameters

2017-06-26 Thread Evgeny Veretennikov (JIRA)
Evgeny Veretennikov created KAFKA-5511:

 Summary: ConfigDef.define() overloads take too many parameters
 Key: KAFKA-5511
 URL: https://issues.apache.org/jira/browse/KAFKA-5511
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Reporter: Evgeny Veretennikov
Priority: Minor


A builder pattern could help get rid of all these {{define()}} overloads. I 
think it would be better to create some {{ConfigKeyBuilder}} class to construct 
keys.
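
A rough sketch of what such a builder could look like (a hypothetical API, not 
anything that exists in Kafka):

    // Hypothetical ConfigKeyBuilder: named, chainable setters instead of a
    // long positional define() argument list.
    public final class ConfigKeyBuilder {
        private final String name;
        private String type = "STRING";
        private Object defaultValue;
        private String importance = "MEDIUM";
        private String documentation = "";

        public ConfigKeyBuilder(String name) { this.name = name; }
        public ConfigKeyBuilder type(String t) { this.type = t; return this; }
        public ConfigKeyBuilder defaultValue(Object v) { this.defaultValue = v; return this; }
        public ConfigKeyBuilder importance(String i) { this.importance = i; return this; }
        public ConfigKeyBuilder documentation(String d) { this.documentation = d; return this; }

        @Override
        public String toString() {
            return name + " (" + type + ", default=" + defaultValue
                    + ", importance=" + importance + "): " + documentation;
        }
    }

    // Usage sketch:
    //   new ConfigKeyBuilder("retries").type("INT").defaultValue(0)
    //       .importance("HIGH").documentation("Number of retries.");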





RE: Pause and max.poll.records

2017-06-26 Thread Yan Wang
Sorry, it seems it works as you said. System.out.print("+") in my test is 
buffered: it does not show the "+" immediately until I print a newline. Weird.

Thanks a lot!

Yan
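
A minimal illustration of the buffering effect (standard Java, nothing 
Kafka-specific):

    public class FlushDemo {
        public static void main(String[] args) {
            System.out.print("+");   // may sit in the stream buffer on some consoles
            System.out.flush();      // forces the "+" to appear immediately
        }
    }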

-Original Message-
From: Yan Wang 
Sent: Sunday, June 25, 2017 11:18 PM
To: dev@kafka.apache.org
Subject: RE: Pause and max.poll.records

As I tested, it seems that if I pause p1 and do not call pause/resume again on 
p1, I cannot pull any messages from p2. But if I keep calling 
consumer.pause(p1) before each consumer.poll, I get one message from p2 per 
call.

Is this a bug or the correct behavior?

Thanks a lot!

Yan

-Original Message-
From: Jason Gustafson [mailto:ja...@confluent.io]
Sent: Friday, June 23, 2017 3:08 PM
To: dev@kafka.apache.org
Subject: Re: Pause and max.poll.records

Fetches for each partition are sent once all the pending data for that 
partition has been consumed. The only difference that pause makes is that we do 
not bother fetching paused partitions. So in your example, p2 would be fetched 
even if p1 is paused and has more than max.poll.records fetched records pending.

-Jason

On Fri, Jun 23, 2017 at 1:43 PM, Yan Wang  wrote:

> How does pause influence max.poll.records? If I have two partitions, say
> p1 and p2. p1 has more locally fetched records than max.poll.records; p2
> has no local data but has data on the server (not fetched yet). Now, if I
> pause p1, does the next poll send a fetch request to the server so that I
> can get data for p2, or do we get nothing from p2 until we resume p1 and
> drain it below max.poll.records?
>
>
> Thanks a lot!
>
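
To make the behavior concrete, a minimal sketch using the public consumer API 
(broker address, topic, group, and partition numbers are made-up values):

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class PauseDemo {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "pause-demo");
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            TopicPartition p1 = new TopicPartition("demo-topic", 0);
            TopicPartition p2 = new TopicPartition("demo-topic", 1);
            consumer.assign(Arrays.asList(p1, p2));

            consumer.pause(Arrays.asList(p1)); // pausing once is enough
            while (true) {
                // per the explanation above, p2 is still fetched while p1
                // stays paused, so records from p2 keep arriving here
                ConsumerRecords<String, String> records = consumer.poll(100);
                records.forEach(r ->
                        System.out.println(r.partition() + ": " + r.value()));
            }
        }
    }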

