Re: [DISCUSS] KIP-232: Detect outdated metadata by adding ControllerMetadataEpoch field

2018-01-18 Thread Dong Lin
Hey Jun,

I agree. I have updated the KIP to remove the OffsetEpoch class and replace
OffsetEpoch with byte[] in the APIs that use it. Can you check whether it looks good?
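For readers following along, here is a rough Java sketch of the consumer-facing
shape being discussed. OffsetAndOffsetEpoch and positionAndOffsetEpoch(...) are
named later in this thread; the field layout and the seek(...) overload are
illustrative assumptions, not the KIP text.

// Hypothetical sketch only: names follow this thread, the layout is assumed.
public class OffsetAndOffsetEpoch {
    private final long offset;
    private final byte[] offsetEpoch; // opaque blob that replaces the OffsetEpoch class

    public OffsetAndOffsetEpoch(long offset, byte[] offsetEpoch) {
        this.offset = offset;
        this.offsetEpoch = offsetEpoch;
    }

    public long offset() { return offset; }
    public byte[] offsetEpoch() { return offsetEpoch; }
}

// Consumer-side shape discussed in the thread (illustrative signatures):
//   OffsetAndOffsetEpoch positionAndOffsetEpoch(TopicPartition partition);
//   void seek(TopicPartition partition, long offset, byte[] offsetEpoch);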

Thanks!
Dong

On Thu, Jan 18, 2018 at 6:07 PM, Jun Rao  wrote:

> Hi, Dong,
>
> Thanks for the updated KIP. It looks good to me now. The only remaining
> question is about OffsetEpoch.
> If we expose the individual fields in the class, we probably don't need the
> encode/decode methods. If we want to hide the details of OffsetEpoch, we
> probably don't want to expose the individual fields.
>
> Jun
>
> On Wed, Jan 17, 2018 at 10:10 AM, Dong Lin  wrote:
>
> > Thinking about point 61 more, I realize that the async zookeeper read may
> > make it less of an issue for the controller to read more zookeeper nodes.
> > Writing partition_epoch in the per-partition znode makes it simpler to
> > handle a broker failure between the zookeeper writes for a topic creation.
> > I have updated the KIP to use the suggested approach.
> >
> >
> > On Wed, Jan 17, 2018 at 9:57 AM, Dong Lin  wrote:
> >
> > > Hey Jun,
> > >
> > > Thanks much for the comments. Please see my comments inline.
> > >
> > > On Tue, Jan 16, 2018 at 4:38 PM, Jun Rao  wrote:
> > >
> > >> Hi, Dong,
> > >>
> > >> Thanks for the updated KIP. Looks good to me overall. Just a few minor
> > >> comments.
> > >>
> > >> 60. OffsetAndMetadata positionAndOffsetEpoch(TopicPartition partition):
> > >> It seems that there is no need to return metadata. We probably want to
> > >> return something like OffsetAndEpoch.
> > >>
> > >
> > > Previously I thought we might want to re-use the existing class to keep
> > > our consumer interface simpler. I have updated the KIP to add the class
> > > OffsetAndOffsetEpoch. I didn't use OffsetAndEpoch because users may
> > > confuse this name with OffsetEpoch. Does this sound OK?
> > >
> > >
> > >>
> > >> 61. Should we store partition_epoch in
> > >> /brokers/topics/[topic]/partitions/[partitionId] in ZK?
> > >>
> > >
> > > I have considered this. I think the advantage of adding the
> > > partition->partition_epoch map in the existing
> > > znode /brokers/topics/[topic]/partitions is that the controller only
> > > needs to read one znode per topic to get its partition_epoch information.
> > > Otherwise the controller may need to read one extra znode per partition
> > > to get the same information.
> > >
> > > When we delete a partition or expand the partitions of a topic, someone
> > > needs to modify the partition->partition_epoch map in the znode
> > > /brokers/topics/[topic]/partitions. This may seem a bit more complicated
> > > than simply adding or deleting the znode
> > > /brokers/topics/[topic]/partitions/[partitionId].
> > > But the complexity is probably similar to the existing operation of
> > > modifying the partition->replica_list mapping in the znode
> > > /brokers/topics/[topic]. So I am not sure it is better to store the
> > > partition_epoch in /brokers/topics/[topic]/partitions/[partitionId].
> > > What do you think?
> > >
> > >
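For concreteness, a hedged sketch of the two znode layouts being weighed here;
the field names are illustrative, not the KIP's exact schema:

  Single per-topic znode, /brokers/topics/[topic]/partitions:
    { "version": 1, "partition_epochs": { "0": 5, "1": 7 } }

  One znode per partition, /brokers/topics/[topic]/partitions/[partitionId]:
    { "version": 1, "partition_epoch": 5 }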
> > >>
> > >> 62. For checking outdated metadata in the client, we probably want to
> > >> add when max_partition_epoch will be used.
> > >>
> > >
> > > The max_partition_epoch is used in the Proposed Changes -> Client's
> > > metadata refresh section to determine whether the metadata is outdated,
> > > and that formula is referenced and re-used in other sections for the
> > > same check. Does this formula look OK?
> > >
> > >
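As a hedged paraphrase of that check (the authoritative formula is in the KIP's
"Client's metadata refresh" section; the accessor names below are invented for
illustration only):

// Illustrative only: hypothetical accessors on new (m) and cached (c) metadata.
// A new metadata response m would be treated as outdated if, for example:
boolean outdated =
        m.maxPartitionEpoch() < c.maxPartitionEpoch()
     || m.partitionEpoch(tp) < c.partitionEpoch(tp)
     || (m.partitionEpoch(tp) == c.partitionEpoch(tp)
         && m.leaderEpoch(tp) < c.leaderEpoch(tp));
if (outdated) {
    // discard m and keep refreshing until metadata at least as new arrives
}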
> > >>
> > >> 63. "The leader_epoch should be the largest leader_epoch of messages
> > whose
> > >> offset < the commit offset. If no message has been consumed since
> > consumer
> > >> initialization, the leader_epoch from seek(...) or OffsetFetchResponse
> > >> should be used. The partition_epoch should be read from the last
> > >> FetchResponse corresponding to the given partition and commit offset.
> ":
> > >> leader_epoch and partition_epoch are associated with an offset. So, if
> > no
> > >> message is consumed, there is no offset and therefore there is no need
> > to
> > >> read leader_epoch and partition_epoch. Also, the leader_epoch
> associated
> > >> with the offset should just come from the messages returned in the
> fetch
> > >> response.
> > >>
> > >
> > > I am thinking that, if the user calls seek(...) and commitSync(...)
> > > without consuming any messages, we should re-use the leader_epoch and
> > > partition_epoch provided by seek(...) in the OffsetCommitRequest. And
> > > if messages have been successfully consumed, then leader_epoch will
> > > come from the messages returned in the fetch response. The condition
> > > "messages whose offset < the commit offset" is needed to take care of
> > > log-compacted topics, which may have offset gaps due to log cleaning.
> > >
> > > Did I miss something here? Or should I rephrase the paragraph to make
> > > it less confusing?
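A hedged sketch of the two commit paths described above. seek and commitSync
are the real 2018 consumer calls; the epoch plumbing lives only in comments,
because the exact KIP signatures were still being settled in this thread.

import java.util.Collections;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

class Kip232CommitSketch {
    static void commitAfterSeek(KafkaConsumer<byte[], byte[]> consumer,
                                TopicPartition tp, long offset) {
        // Case 1: nothing consumed since seek(): the epochs supplied to
        // seek(...) (or returned in OffsetFetchResponse) are the only ones
        // available, so the proposal re-uses them in the OffsetCommitRequest.
        consumer.seek(tp, offset);
        consumer.commitSync(
                Collections.singletonMap(tp, new OffsetAndMetadata(offset)));

        // Case 2: messages have been consumed: leader_epoch is the largest
        // leader_epoch among fetched messages with offset < the commit offset;
        // strictly "<" because compacted topics can have offset gaps.
    }
}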
> > >
> > >
> >> 64. Could you include the public methods in the OffsetEpoch class?

Build failed in Jenkins: kafka-trunk-jdk9 #320

2018-01-18 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Fix typo in Log.scala: "actually recovery" > "actually recover"

--
[...truncated 1.87 MB...]
org.apache.kafka.streams.tools.StreamsResetterTest > shouldDeleteTopic STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > shouldDeleteTopic PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetToSpecificOffsetWhenBetweenBeginningAndEndOffset STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetToSpecificOffsetWhenBetweenBeginningAndEndOffset PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testShiftOffsetByWhenAfterEndOffset STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testShiftOffsetByWhenAfterEndOffset PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetToSpecificOffsetWhenBeforeBeginningOffset STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetToSpecificOffsetWhenBeforeBeginningOffset PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > shouldSeekToEndOffset 
STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > shouldSeekToEndOffset 
PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetUsingPlanWhenBetweenBeginningAndEndOffset STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetUsingPlanWhenBetweenBeginningAndEndOffset PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetUsingPlanWhenBeforeBeginningOffset STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetUsingPlanWhenBeforeBeginningOffset PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
shouldThrowOnInvalidDateFormat STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
shouldThrowOnInvalidDateFormat PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetUsingPlanWhenAfterEndOffset STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testResetUsingPlanWhenAfterEndOffset PASSED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testShiftOffsetByWhenBeforeBeginningOffset STARTED

org.apache.kafka.streams.tools.StreamsResetterTest > 
testShiftOffsetByWhenBeforeBeginningOffset PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfProducerEnableIdempotenceIsOverriddenIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldResetToDefaultIfProducerEnableIdempotenceIsOverriddenIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
STARTED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled 
STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails 

Re: [VOTE] KIP-227: Introduce Fetch Requests that are Incremental to Increase Partition Scalability

2018-01-18 Thread Colin McCabe
Hi all,

I updated the KIP.  There is also an implementation of this KIP here: 
https://github.com/apache/kafka/pull/4418

The updated implementation simplifies a few things, and adds the ability to 
incrementally add or remove individual partitions in an incremental fetch 
request.
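For readers skimming the thread, a rough illustration of the incremental fetch
session idea. The class and method names here are invented; the real protocol
is specified in the KIP wiki and the PR above.

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.TopicPartition;

// Invented names; sketches the session bookkeeping, not the real classes.
final class FetchSessionSketch {
    private final Map<TopicPartition, Long> sessionPartitions = new HashMap<>();

    // Full fetch request: establish the session with the complete partition set.
    void establish(Map<TopicPartition, Long> allPartitions) {
        sessionPartitions.clear();
        sessionPartitions.putAll(allPartitions);
    }

    // Incremental fetch request: only the changed partitions go on the wire.
    void addOrUpdate(TopicPartition tp, long fetchOffset) {
        sessionPartitions.put(tp, fetchOffset);
    }

    void remove(TopicPartition tp) {
        sessionPartitions.remove(tp);
    }
}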

best,
Colin


On Tue, Dec 19, 2017, at 19:28, Colin McCabe wrote:
> Hi all,
> 
> I'd like to start the vote on KIP-227: Incremental Fetch Requests.
> 
> The KIP is here: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-227%3A+Introduce+Incremental+FetchRequests+to+Increase+Partition+Scalability
> 
> and discussion thread earlier: 
> https://www.mail-archive.com/dev@kafka.apache.org/msg83115.html
> 
> thanks,
> Colin


Re: [DISCUSS] KIP-232: Detect outdated metadata by adding ControllerMetadataEpoch field

2018-01-18 Thread Jun Rao
Hi, Dong,

Thanks for the updated KIP. It looks good to me now. The only remaining
question is about OffsetEpoch.
If we expose the individual fields in the class, we probably don't need the
encode/decode methods. If we want to hide the details of OffsetEpoch, we
probably don't want to expose the individual fields.

Jun

On Wed, Jan 17, 2018 at 10:10 AM, Dong Lin  wrote:

> Thinking about point 61 more, I realize that the async zookeeper read may
> make it less of an issue for the controller to read more zookeeper nodes.
> Writing partition_epoch in the per-partition znode makes it simpler to
> handle a broker failure between the zookeeper writes for a topic creation. I
> have updated the KIP to use the suggested approach.
>
>
> On Wed, Jan 17, 2018 at 9:57 AM, Dong Lin  wrote:
>
> > Hey Jun,
> >
> > Thanks much for the comments. Please see my comments inline.
> >
> > On Tue, Jan 16, 2018 at 4:38 PM, Jun Rao  wrote:
> >
> >> Hi, Dong,
> >>
> >> Thanks for the updated KIP. Looks good to me overall. Just a few minor
> >> comments.
> >>
> >> 60. OffsetAndMetadata positionAndOffsetEpoch(TopicPartition partition):
> >> It seems that there is no need to return metadata. We probably want to
> >> return something like OffsetAndEpoch.
> >>
> >
> > Previously I thought we might want to re-use the existing class to keep
> > our consumer interface simpler. I have updated the KIP to add the class
> > OffsetAndOffsetEpoch. I didn't use OffsetAndEpoch because users may
> > confuse this name with OffsetEpoch. Does this sound OK?
> >
> >
> >>
> >> 61. Should we store partition_epoch in
> >> /brokers/topics/[topic]/partitions/[partitionId] in ZK?
> >>
> >
> > I have considered this. I think the advantage of adding the
> > partition->partition_epoch map in the existing
> > znode /brokers/topics/[topic]/partitions is that the controller only
> > needs to read one znode per topic to get its partition_epoch information.
> > Otherwise the controller may need to read one extra znode per partition
> > to get the same information.
> >
> > When we delete a partition or expand the partitions of a topic, someone
> > needs to modify the partition->partition_epoch map in the znode
> > /brokers/topics/[topic]/partitions. This may seem a bit more complicated
> > than simply adding or deleting the znode
> > /brokers/topics/[topic]/partitions/[partitionId].
> > But the complexity is probably similar to the existing operation of
> > modifying the partition->replica_list mapping in the znode
> > /brokers/topics/[topic]. So I am not sure it is better to store the
> > partition_epoch in /brokers/topics/[topic]/partitions/[partitionId].
> > What do you think?
> >
> >
> >>
> >> 62. For checking outdated metadata in the client, we probably want to
> >> add when max_partition_epoch will be used.
> >>
> >
> > The max_partition_epoch is used in the Proposed Changes -> Client's
> > metadata refresh section to determine whether the metadata is outdated,
> > and that formula is referenced and re-used in other sections for the
> > same check. Does this formula look OK?
> >
> >
> >>
> >> 63. "The leader_epoch should be the largest leader_epoch of messages
> whose
> >> offset < the commit offset. If no message has been consumed since
> consumer
> >> initialization, the leader_epoch from seek(...) or OffsetFetchResponse
> >> should be used. The partition_epoch should be read from the last
> >> FetchResponse corresponding to the given partition and commit offset. ":
> >> leader_epoch and partition_epoch are associated with an offset. So, if
> no
> >> message is consumed, there is no offset and therefore there is no need
> to
> >> read leader_epoch and partition_epoch. Also, the leader_epoch associated
> >> with the offset should just come from the messages returned in the fetch
> >> response.
> >>
> >
> > I am thinking that, if the user calls seek(...) and commitSync(...) without
> > consuming any messages, we should re-use the leader_epoch and
> > partition_epoch provided by seek(...) in the OffsetCommitRequest. And
> > if messages have been successfully consumed, then leader_epoch will come
> > from the messages returned in the fetch response. The condition "messages
> > whose offset < the commit offset" is needed to take care of log-compacted
> > topics, which may have offset gaps due to log cleaning.
> >
> > Did I miss something here? Or should I rephrase the paragraph to make it
> > less confusing?
> >
> >
> >> 64. Could you include the public methods in the OffsetEpoch class?
> >>
> >
> > I mistakenly deleted the definition of the OffsetEpoch class from the KIP.
> > I just added it back with the public methods. Could you take another look?
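A hedged reconstruction of what the re-added OffsetEpoch class might look like,
based only on the encode/decode wording earlier in this thread. The field
layout is an assumption, and the class was later replaced by a raw byte[]
anyway.

import java.nio.ByteBuffer;

public class OffsetEpoch {
    private final int partitionEpoch; // assumed fields, not the KIP's exact layout
    private final int leaderEpoch;

    public OffsetEpoch(int partitionEpoch, int leaderEpoch) {
        this.partitionEpoch = partitionEpoch;
        this.leaderEpoch = leaderEpoch;
    }

    public byte[] encode() {
        return ByteBuffer.allocate(8)
                .putInt(partitionEpoch)
                .putInt(leaderEpoch)
                .array();
    }

    public static OffsetEpoch decode(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes);
        return new OffsetEpoch(buf.getInt(), buf.getInt());
    }
}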
> >
> >
> >>
> >> Jun
> >>
> >>
> >> On Thu, Jan 11, 2018 at 5:43 PM, Dong Lin  wrote:
> >>
> >> > Hey Jun,
> >> >
> >> > Thanks much. I agree that we cannot rely on committed offsets always
> >> > being deleted when we delete a topic. So it is necessary to use a
> 

Build failed in Jenkins: kafka-trunk-jdk8 #2341

2018-01-18 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-6398: Return value getter based on KTable materialization 
status

[github] MINOR: Improve on reset integration test (#4436)

[jason] MINOR: Fix typo in Log.scala: "actually recovery" > "actually recover"

--
[...truncated 1.85 MB...]

org.apache.kafka.streams.KafkaStreamsTest > shouldReturnThreadMetadata PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning 
STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > testStateThreadClose STARTED

org.apache.kafka.streams.KafkaStreamsTest > testStateThreadClose PASSED

org.apache.kafka.streams.KafkaStreamsTest > testStateChanges STARTED

org.apache.kafka.streams.KafkaStreamsTest > testStateChanges PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullTopicsWhenAddingSourceWithPattern STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullTopicsWhenAddingSourceWithPattern PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowZeroTopicsWhenAddingSource STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowZeroTopicsWhenAddingSource PASSED

org.apache.kafka.streams.TopologyTest > shouldFailOnUnknownSource STARTED

org.apache.kafka.streams.TopologyTest > shouldFailOnUnknownSource PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowNullNameWhenAddingSink 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowNullNameWhenAddingSink 
PASSED

org.apache.kafka.streams.TopologyTest > 
multipleSourcesShouldHaveDistinctSubtopologies STARTED

org.apache.kafka.streams.TopologyTest > 
multipleSourcesShouldHaveDistinctSubtopologies PASSED

org.apache.kafka.streams.TopologyTest > 
testNamedTopicMatchesAlreadyProvidedPattern STARTED

org.apache.kafka.streams.TopologyTest > 
testNamedTopicMatchesAlreadyProvidedPattern PASSED

org.apache.kafka.streams.TopologyTest > 
processorsWithSharedStateShouldHaveSameSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
processorsWithSharedStateShouldHaveSameSubtopology PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddTopicTwice STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddTopicTwice PASSED

org.apache.kafka.streams.TopologyTest > 
processorWithMultipleSourcesShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
processorWithMultipleSourcesShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddStateStoreToSink 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddStateStoreToSink 
PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAddNullStateStoreSupplier 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAddNullStateStoreSupplier 
PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddStateStoreToNonExistingProcessor STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddStateStoreToNonExistingProcessor PASSED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullStoreNameWhenConnectingProcessorAndStateStores STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullStoreNameWhenConnectingProcessorAndStateStores PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullNameWhenAddingSourceWithTopic STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullNameWhenAddingSourceWithTopic PASSED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorWithStateShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorWithStateShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowNullTopicWhenAddingSink 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowNullTopicWhenAddingSink 
PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName PASSED

org.apache.kafka.streams.TopologyTest > 
singleSourceWithListOfTopicsShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
singleSourceWithListOfTopicsShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > 
sourceWithMultipleProcessorsShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest 

Build failed in Jenkins: kafka-trunk-jdk7 #3105

2018-01-18 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Improve on reset integration test (#4436)

[jason] MINOR: Fix typo in Log.scala: "actually recovery" > "actually recover"

--
[...truncated 1.89 MB...]

org.apache.kafka.streams.kstream.SessionWindowsTest > 
retentionTimeMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
shouldNotBeEqualWhenMaintainMsDifferent STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
shouldNotBeEqualWhenMaintainMsDifferent PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > windowSizeMustNotBeZero 
STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > windowSizeMustNotBeZero 
PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
windowSizeMustNotBeNegative STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
windowSizeMustNotBeNegative PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
shouldSetWindowRetentionTime STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
shouldSetWindowRetentionTime PASSED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
shouldNotBeEqualWhenGapIsDifferent STARTED

org.apache.kafka.streams.kstream.SessionWindowsTest > 
shouldNotBeEqualWhenGapIsDifferent PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullTopicsWhenAddingSourceWithPattern STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullTopicsWhenAddingSourceWithPattern PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowZeroTopicsWhenAddingSource STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowZeroTopicsWhenAddingSource PASSED

org.apache.kafka.streams.TopologyTest > shouldFailOnUnknownSource STARTED

org.apache.kafka.streams.TopologyTest > shouldFailOnUnknownSource PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowNullNameWhenAddingSink 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowNullNameWhenAddingSink 
PASSED

org.apache.kafka.streams.TopologyTest > 
multipleSourcesShouldHaveDistinctSubtopologies STARTED

org.apache.kafka.streams.TopologyTest > 
multipleSourcesShouldHaveDistinctSubtopologies PASSED

org.apache.kafka.streams.TopologyTest > 
testNamedTopicMatchesAlreadyProvidedPattern STARTED

org.apache.kafka.streams.TopologyTest > 
testNamedTopicMatchesAlreadyProvidedPattern PASSED

org.apache.kafka.streams.TopologyTest > 
processorsWithSharedStateShouldHaveSameSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
processorsWithSharedStateShouldHaveSameSubtopology PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddTopicTwice STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddTopicTwice PASSED

org.apache.kafka.streams.TopologyTest > 
processorWithMultipleSourcesShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
processorWithMultipleSourcesShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddStateStoreToSink 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddStateStoreToSink 
PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAddNullStateStoreSupplier 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAddNullStateStoreSupplier 
PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddStateStoreToNonExistingProcessor STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddStateStoreToNonExistingProcessor PASSED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullStoreNameWhenConnectingProcessorAndStateStores STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullStoreNameWhenConnectingProcessorAndStateStores PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullNameWhenAddingSourceWithTopic STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullNameWhenAddingSourceWithTopic PASSED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorWithStateShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorWithStateShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowNullTopicWhenAddingSink 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowNullTopicWhenAddingSink 
PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName PASSED

org.apache.kafka.streams.TopologyTest > 
singleSourceWithListOfTopicsShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
singleSourceWithListOfTopicsShouldHaveSingleSubtopology PASSED


[jira] [Created] (KAFKA-6462) ResetIntegrationTest unstable

2018-01-18 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-6462:
--

 Summary: ResetIntegrationTest unstable
 Key: KAFKA-6462
 URL: https://issues.apache.org/jira/browse/KAFKA-6462
 Project: Kafka
  Issue Type: Improvement
  Components: streams, unit tests
Reporter: Matthias J. Sax
Assignee: Matthias J. Sax


{noformat}
java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
after 6 ms.
at 
org.apache.kafka.clients.producer.KafkaProducer$FutureFailure.<init>(KafkaProducer.java:1155)
at 
org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:847)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:784)
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:671)
at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.produceKeyValuesSynchronouslyWithTimestamp(IntegrationTestUtils.java:133)
at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.produceKeyValuesSynchronouslyWithTimestamp(IntegrationTestUtils.java:118)
at 
org.apache.kafka.streams.integration.AbstractResetIntegrationTest.add10InputElements(AbstractResetIntegrationTest.java:199)
at 
org.apache.kafka.streams.integration.AbstractResetIntegrationTest.prepareTest(AbstractResetIntegrationTest.java:175)
at 
org.apache.kafka.streams.integration.ResetIntegrationTest.before(ResetIntegrationTest.java:56){noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-247: Add public test utils for Kafka Streams

2018-01-18 Thread Matthias J. Sax
I added the new method to the KIP and also updated the PR.
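For reference, a minimal usage sketch of the method discussed below. The thread
calls it allStateStores(); this sketch assumes it lands roughly as
getAllStateStores() on TopologyTestDriver, so treat the exact name as per the
final KIP.

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.TopologyTestDriver;
import org.apache.kafka.streams.processor.StateStore;

public class AllStoresExample {
    public static void main(String[] args) {
        // Trivial topology, just enough to construct the driver.
        Topology topology = new Topology();
        topology.addSource("source", "input-topic");

        Properties config = new Properties();
        config.put(StreamsConfig.APPLICATION_ID_CONFIG, "test-driver-example");
        config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");

        TopologyTestDriver driver = new TopologyTestDriver(topology, config);
        // Inspect every store, including implicitly created internal ones:
        Map<String, StateStore> stores = driver.getAllStateStores(); // name assumed
        stores.forEach((name, store) ->
                System.out.println(name + " open=" + store.isOpen()));
        driver.close();
    }
}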

-Matthias

On 1/18/18 10:48 AM, Guozhang Wang wrote:
> @Matthias
> 
> This came to me while reviewing another PR that uses the test driver: could we
> add a `Map<String, StateStore> allStateStores()` to the
> `TopologyTestDriver` besides all the get-store-by-name functions? This is
> because some of the internal state stores may be implicitly created, but
> users may still want to check their state.
> 
> 
> Guozhang
> 
> 
> On Thu, Jan 18, 2018 at 8:40 AM, James Cheng  wrote:
> 
>> +1 (non-binding)
>>
>> -James
>>
>> Sent from my iPhone
>>
>>> On Jan 17, 2018, at 6:09 PM, Matthias J. Sax 
>> wrote:
>>>
>>> Hi,
>>>
>>> I would like to start the vote for KIP-247:
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-247%3A+Add+public+test+utils+for+Kafka+Streams
>>>
>>>
>>> -Matthias
>>>
>>
> 
> 
> 





Build failed in Jenkins: kafka-trunk-jdk9 #319

2018-01-18 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-6398: Return value getter based on KTable materialization 
status

[github] MINOR: Improve on reset integration test (#4436)

--
[...truncated 1.87 MB...]

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftOuter[caching enabled = true] FAILED
java.lang.AssertionError: Condition not met within timeout 15000. Never 
received expected final result.
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:254)
at 
org.apache.kafka.streams.integration.AbstractJoinIntegrationTest.runTest(AbstractJoinIntegrationTest.java:248)
at 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testLeftOuter(TableTableJoinIntegrationTest.java:391)

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftLeft[caching enabled = true] FAILED
java.lang.AssertionError: Condition not met within timeout 15000. Never 
received expected final result.
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:254)
at 
org.apache.kafka.streams.integration.AbstractJoinIntegrationTest.runTest(AbstractJoinIntegrationTest.java:248)
at 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testLeftLeft(TableTableJoinIntegrationTest.java:352)

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuterLeft[caching enabled = true] FAILED
java.lang.AssertionError: Condition not met within timeout 15000. Never 
received expected final result.
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:254)
at 
org.apache.kafka.streams.integration.AbstractJoinIntegrationTest.runTest(AbstractJoinIntegrationTest.java:248)
at 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testOuterLeft(TableTableJoinIntegrationTest.java:468)

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInner[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInner[caching enabled = true] FAILED
java.lang.AssertionError: Condition not met within timeout 15000. Never 
received expected final result.
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:254)
at 
org.apache.kafka.streams.integration.AbstractJoinIntegrationTest.runTest(AbstractJoinIntegrationTest.java:248)
at 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testInner(TableTableJoinIntegrationTest.java:99)

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuter[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testOuter[caching enabled = true] FAILED
java.lang.AssertionError: Condition not met within timeout 15000. Never 
received expected final result.
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:254)
at 
org.apache.kafka.streams.integration.AbstractJoinIntegrationTest.runTest(AbstractJoinIntegrationTest.java:248)
at 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testOuter(TableTableJoinIntegrationTest.java:161)

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeft[caching enabled = true] STARTED

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeft[caching enabled = true] FAILED
java.lang.AssertionError: Condition not met within timeout 15000. Never 
received expected final result.
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:254)
at 
org.apache.kafka.streams.integration.AbstractJoinIntegrationTest.runTest(AbstractJoinIntegrationTest.java:248)
at 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testLeft(TableTableJoinIntegrationTest.java:130)

org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testInnerInner[caching enabled = true] STARTED


Re: [DISCUSS] KIP-208: Add SSL support to Kafka Connect REST interface

2018-01-18 Thread Jakub Scholz
Hi Randall,

Yes, the KIP should be up to date. The VOTE thread has actually been running
for more than 2 months. So the only thing we need is the votes. I have
pinged the vote thread so that it gets more attention.
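For anyone tracking the prefixed-values discussion below, a hedged sketch of
worker properties using the KIP's listeners.https. prefix; the paths and
passwords are placeholders:

listeners=https://myhost:8443
listeners.https.ssl.keystore.location=/var/private/ssl/connect.keystore.jks
listeners.https.ssl.keystore.password=changeit
listeners.https.ssl.key.password=changeit
listeners.https.ssl.truststore.location=/var/private/ssl/connect.truststore.jks
listeners.https.ssl.truststore.password=changeit

Per the conclusion below, once any listeners.https.-prefixed option is set,
only the prefixed values would apply, instead of mixing them with the
top-level ssl.* settings.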

Thanks & Regards
Jakub

On Thu, Jan 18, 2018 at 7:33 PM, Randall Hauch  wrote:

> Jakub, have you had a chance to update the KIP with the latest changes?
> Would be great to start the vote today so that it's open long enough to
> adopt before the deadline on Tuesday. Let me know if I can help.
>
> On Wed, Jan 17, 2018 at 1:25 AM, Jakub Scholz  wrote:
>
> > I was thinking about this a bit more yesterday while updating the
> > code. I think you are right, we should use only the prefixed values if at
> > least one of them exists. Even I got quite easily confused about which
> > setup is actually used when the fields are mixed :-). Randall was also in
> > favour of this approach. So I think we should go this way. I will update
> > the KIP accordingly.
> >
> >
> > > I'm fine with consistency, but maybe the thing to do here then is to
> > > ensure that we definitely log the "effective" or "derived" config before
> > > using it so there is at least some useful trace of what we actually used
> > > that can be helpful in debugging.
> >
>


Re: [VOTE] KIP-208: Add SSL support to Kafka Connect REST interface

2018-01-18 Thread Jakub Scholz
Hi all,

We still need at least 2 more binding +1s. I think that the PR (
https://github.com/apache/kafka/pull/4429) is shaping up well. If we get the
votes, we should be able to make the 1.1.0 release.

Thanks & Regards
Jakub

On Fri, Jan 5, 2018 at 4:30 AM, Ewen Cheslack-Postava 
wrote:

> Jakub,
>
> I left a few comments in the discuss thread, but I'll also reply here just
> to bump the VOTE thread's visibility. I would like to resolve the few
> comments I left, but I am effectively +1 on this, the comments I left were
> mainly details.
>
> Committers that could help with the necessary votes would probably be Gwen
> and Jason (but others more than welcome to help out too :)
>
> -Ewen
>
> On Mon, Nov 6, 2017 at 1:52 AM, Jakub Scholz  wrote:
>
> > Hi all,
> >
> > Just a reminder that this is still up for vote. I think this is an
> > important feature which deserves your votes.
> >
> > Regards
> > Jakub
> >
> > On Mon, Oct 30, 2017 at 9:24 PM, Jakub Scholz  wrote:
> >
> > > Hi,
> > >
> > > It seems there are no more comments for this KIP, so I would like to
> > start
> > > the voting .
> > >
> > > For more details about KIP-208, go to
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-208%3A+Add+SSL+support+to+Kafka+Connect+REST+interface
> > >
> > > Thanks & Regards
> > > Jakub
> > >
> >
>


Build failed in Jenkins: kafka-1.0-jdk7 #127

2018-01-18 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-6398: Return value getter based on KTable materialization 
status

--
[...truncated 1.44 MB...]
:199:
 warning: [deprecation] filter(Predicate,String) in KTable 
has been deprecated
public KTable filter(final Predicate predicate,
^
  where K,V are type-variables:
K extends Object declared in interface KTable
V extends Object declared in interface KTable
:800:
 warning: [deprecation] leftJoin(KTable,ValueJoiner,Serde,Serde) in KStream has been deprecated
public  KStream leftJoin(final KTable other,
 ^
  where VT,VR,K,V are type-variables:
VT extends Object declared in method 
leftJoin(KTable,ValueJoiner,Serde,Serde)
VR extends Object declared in method 
leftJoin(KTable,ValueJoiner,Serde,Serde)
K extends Object declared in interface KStream
V extends Object declared in interface KStream
:726:
 warning: [deprecation] join(KTable,ValueJoiner,Serde,Serde) in KStream has been deprecated
public  KStream join(final KTable other,
 ^
  where VT,VR,K,V are type-variables:
VT extends Object declared in method join(KTable,ValueJoiner,Serde,Serde)
VR extends Object declared in method join(KTable,ValueJoiner,Serde,Serde)
K extends Object declared in interface KStream
V extends Object declared in interface KStream
:568:
 warning: [deprecation] outerJoin(KStream,ValueJoiner,JoinWindows,Serde,Serde,Serde) in KStream has 
been deprecated
public  KStream outerJoin(final KStream other,
 ^
  where VO,VR,K,V are type-variables:
VO extends Object declared in method 
outerJoin(KStream,ValueJoiner,JoinWindows,Serde,Serde,Serde)
VR extends Object declared in method 
outerJoin(KStream,ValueJoiner,JoinWindows,Serde,Serde,Serde)
K extends Object declared in interface KStream
V extends Object declared in interface KStream
:670:
 warning: [deprecation] leftJoin(KStream,ValueJoiner,JoinWindows,Serde,Serde,Serde) in KStream has 
been deprecated
public  KStream leftJoin(final KStream other,
 ^
  where VO,VR,K,V are type-variables:
VO extends Object declared in method 
leftJoin(KStream,ValueJoiner,JoinWindows,Serde,Serde,Serde)
VR extends Object declared in method 
leftJoin(KStream,ValueJoiner,JoinWindows,Serde,Serde,Serde)
K extends Object declared in interface KStream
V extends Object declared in interface KStream
:540:
 warning: [deprecation] join(KStream,ValueJoiner,JoinWindows,Serde,Serde,Serde) in KStream has been 
deprecated
public  KStream join(final KStream other,
 ^
  where VO,VR,K,V are type-variables:
VO extends Object declared in method 
join(KStream,ValueJoiner,JoinWindows,Serde,Serde,Serde)
VR extends Object declared in method 
join(KStream,ValueJoiner,JoinWindows,Serde,Serde,Serde)
K extends Object declared in interface KStream
V extends Object declared in interface KStream
:829:
 warning: [deprecation] groupBy(KeyValueMapper,Serde,Serde) in KStream has been deprecated
public  KGroupedStream groupBy(final KeyValueMapper selector,
  ^
  where KR,K,V are type-variables:
KR extends Object declared in method groupBy(KeyValueMapper,Serde,Serde)
K extends Object declared in interface KStream
V extends Object declared in interface KStream
:855:
 warning: [deprecation] groupByKey(Serde,Serde) in KStream has been 

Build failed in Jenkins: kafka-trunk-jdk7 #3104

2018-01-18 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] MINOR: additional check to follower fetch handling  (#4433)

[jason] MINOR: Fix a typo in a comment in config/server.properties (#4373)

[wangguoz] KAFKA-6398: Return value getter based on KTable materialization 
status

--
[...truncated 405.18 KB...]
kafka.api.PlaintextConsumerTest > testCommitMetadata PASSED

kafka.api.PlaintextConsumerTest > testHeadersExtendedSerializerDeserializer 
STARTED

kafka.api.PlaintextConsumerTest > testHeadersExtendedSerializerDeserializer 
PASSED

kafka.api.PlaintextConsumerTest > testRoundRobinAssignment STARTED

kafka.api.PlaintextConsumerTest > testRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPatternSubscription STARTED

kafka.api.PlaintextConsumerTest > testPatternSubscription PASSED

kafka.api.PlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.PlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.PlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.PlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.SslConsumerTest > testCoordinatorFailover STARTED

kafka.api.SslConsumerTest > testCoordinatorFailover PASSED

kafka.api.SslConsumerTest > testSimpleConsumption STARTED

kafka.api.SslConsumerTest > testSimpleConsumption PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testTransactionalProducerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testTransactionalProducerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testProducerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testProducerWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerGroupServiceWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure 
STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testManualAssignmentConsumerWithAutoCommitDisabledWithAuthenticationFailure 
PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testKafkaAdminClientWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testKafkaAdminClientWithAuthenticationFailure PASSED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerWithAuthenticationFailure STARTED

kafka.api.SaslClientsWithInvalidCredentialsTest > 
testConsumerWithAuthenticationFailure PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeViaAssign STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeViaAssign PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoProduceWithDescribeAcl STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoProduceWithDescribeAcl PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe PASSED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl STARTED

kafka.api.DelegationTokenEndToEndAuthorizationTest > 
testNoProduceWithoutDescribeAcl PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials STARTED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials PASSED

kafka.api.SaslGssapiSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED


Build failed in Jenkins: kafka-trunk-jdk8 #2340

2018-01-18 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] MINOR: additional check to follower fetch handling  (#4433)

[jason] MINOR: Fix a typo in a comment in config/server.properties (#4373)

--
[...truncated 1.84 MB...]

org.apache.kafka.streams.TopologyTest > 
processorWithMultipleSourcesShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddStateStoreToSink 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddStateStoreToSink 
PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAddNullStateStoreSupplier 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAddNullStateStoreSupplier 
PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddStateStoreToNonExistingProcessor STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddStateStoreToNonExistingProcessor PASSED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullStoreNameWhenConnectingProcessorAndStateStores STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullStoreNameWhenConnectingProcessorAndStateStores PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullNameWhenAddingSourceWithTopic STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullNameWhenAddingSourceWithTopic PASSED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorWithStateShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorWithStateShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowNullTopicWhenAddingSink 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowNullTopicWhenAddingSink 
PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName PASSED

org.apache.kafka.streams.TopologyTest > 
singleSourceWithListOfTopicsShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
singleSourceWithListOfTopicsShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > 
sourceWithMultipleProcessorsShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
sourceWithMultipleProcessorsShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowZeroStoreNameWhenConnectingProcessorAndStateStores STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowZeroStoreNameWhenConnectingProcessorAndStateStores PASSED

org.apache.kafka.streams.TopologyTest > 
multipleSourcesWithProcessorsShouldHaveDistinctSubtopologies STARTED

org.apache.kafka.streams.TopologyTest > 
multipleSourcesWithProcessorsShouldHaveDistinctSubtopologies PASSED

org.apache.kafka.streams.TopologyTest > shouldThrowOnUnassignedStateStoreAccess 
STARTED

org.apache.kafka.streams.TopologyTest > shouldThrowOnUnassignedStateStoreAccess 
PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullProcessorNameWhenConnectingProcessorAndStateStores STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullProcessorNameWhenConnectingProcessorAndStateStores PASSED

org.apache.kafka.streams.TopologyTest > shouldDescribeGlobalStoreTopology 
STARTED

org.apache.kafka.streams.TopologyTest > shouldDescribeGlobalStoreTopology PASSED

org.apache.kafka.streams.TopologyTest > 
shouldDescribeMultipleGlobalStoreTopology STARTED

org.apache.kafka.streams.TopologyTest > 
shouldDescribeMultipleGlobalStoreTopology PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddSinkWithSameName 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddSinkWithSameName 
PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullProcessorSupplierWhenAddingProcessor STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullProcessorSupplierWhenAddingProcessor PASSED

org.apache.kafka.streams.TopologyTest > shoudNotAllowToAddProcessorWithSameName 
STARTED

org.apache.kafka.streams.TopologyTest > shoudNotAllowToAddProcessorWithSameName 
PASSED

org.apache.kafka.streams.TopologyTest > shouldFailIfSinkIsParent STARTED

org.apache.kafka.streams.TopologyTest > shouldFailIfSinkIsParent PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddSourcesWithSameName 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddSourcesWithSameName 
PASSED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorWithMultipleStatesShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorWithMultipleStatesShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > 

[jira] [Created] (KAFKA-6461) TableTableJoinIntegrationTest is unstable if caching is enabled

2018-01-18 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-6461:
--

 Summary: TableTableJoinIntegrationTest is unstable if caching is 
enabled
 Key: KAFKA-6461
 URL: https://issues.apache.org/jira/browse/KAFKA-6461
 Project: Kafka
  Issue Type: Improvement
  Components: streams, unit tests
Reporter: Matthias J. Sax
Assignee: Matthias J. Sax


{noformat}
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest > 
testLeftInner[caching enabled = true] FAILED
20:41:05 java.lang.AssertionError: Condition not met within timeout 15000. 
Never received expected final result.
20:41:05 at 
org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276)
20:41:05 at 
org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:254)
20:41:05 at 
org.apache.kafka.streams.integration.AbstractJoinIntegrationTest.runTest(AbstractJoinIntegrationTest.java:248)
20:41:05 at 
org.apache.kafka.streams.integration.TableTableJoinIntegrationTest.testLeftInner(TableTableJoinIntegrationTest.java:313){noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-1.0-jdk7 #126

2018-01-18 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-6398) Non-aggregation KTable generation operator does not construct value getter correctly

2018-01-18 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-6398.
--
   Resolution: Fixed
Fix Version/s: 1.0.1
   1.1.0

> Non-aggregation KTable generation operator does not construct value getter 
> correctly
> 
>
> Key: KAFKA-6398
> URL: https://issues.apache.org/jira/browse/KAFKA-6398
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1, 1.0.0
>Reporter: Matthias J. Sax
>Assignee: Guozhang Wang
>Priority: Critical
>  Labels: bug
> Fix For: 1.1.0, 1.0.1
>
>
> For any operator that generates a KTable, its {{valueGetterSupplier}} has 
> three code paths:
> 1. If the operator is a KTable source operator, using its materialized state 
> store for value getter (note that currently we always materialize on KTable 
> source).
> 2. If the operator is an aggregation operator, then its generated KTable 
> should always be materialized so we just use its materialized state store.
> 3. Otherwise, we treat the value getter on a per-operator basis.
> For 3) above, what we SHOULD do is that, if the generated KTable is 
> materialized, the value getter would just rely on its materialized state 
> store to get the value; otherwise we just rely on the operator itself to 
> define which parent's value getter to inherit and what computational logic to 
> apply on-the-fly to get the value. For example, for {{KTable#filter()}} where 
> the {{Materialized}} is not specified, in {{KTableFilterValueGetter}} we just 
> get from parent's value getter and then apply the filter on the fly; and in 
> addition we should let future operators access its parent's
> materialized state store via {{connectProcessorAndStateStore}}.
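A hedged, self-contained sketch of the "apply the filter on the fly" value
getter described above; the interface is simplified here and does not match
the internal KTableValueGetter API exactly.

import java.util.function.BiPredicate;

interface ValueGetter<K, V> {
    V get(K key);
}

// Wraps the parent's getter and filters on the fly instead of reading a
// materialized store, which is the behavior the description above calls for.
final class FilterValueGetter<K, V> implements ValueGetter<K, V> {
    private final ValueGetter<K, V> parent;
    private final BiPredicate<K, V> predicate;

    FilterValueGetter(ValueGetter<K, V> parent, BiPredicate<K, V> predicate) {
        this.parent = parent;
        this.predicate = predicate;
    }

    @Override
    public V get(K key) {
        V value = parent.get(key);
        return (value != null && predicate.test(key, value)) ? value : null;
    }
}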
> However, the current code does not do this correctly: it 1) does not check if
> the result KTable is materialized or not, but always tries to use its parent's
> value getter, and 2) it does not try to connect its parent's materialized 
> store to the future operator. As a result, operators such as 
> {{KTable#filter}}, {{KTable#mapValues}}, and {{KTable#join(KTable)}} would 
> result in TopologyException when building. The following is an example:
> 
> Using a non-materialized KTable in a stream-table join fails:
> {noformat}
> final KTable filteredKTable = builder.table("table-topic").filter(...);
> builder.stream("stream-topic").join(filteredKTable,...);
> {noformat}
> fails with
> {noformat}
> org.apache.kafka.streams.errors.TopologyBuilderException: Invalid topology 
> building: StateStore null is not added yet.
>   at 
> org.apache.kafka.streams.processor.TopologyBuilder.connectProcessorAndStateStore(TopologyBuilder.java:1021)
>   at 
> org.apache.kafka.streams.processor.TopologyBuilder.connectProcessorAndStateStores(TopologyBuilder.java:949)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamImpl.doStreamTableJoin(KStreamImpl.java:621)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamImpl.join(KStreamImpl.java:577)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamImpl.join(KStreamImpl.java:563)
> {noformat}
> Adding a store name is not sufficient as a workaround but fails differently:
> {noformat}
> final KTable filteredKTable = builder.table("table-topic").filter(..., 
> "STORE-NAME");
> builder.stream("stream-topic").join(filteredKTable,...);
> {noformat}
> error:
> {noformat}
> org.apache.kafka.streams.errors.StreamsException: failed to initialize 
> processor KSTREAM-JOIN-05
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.init(ProcessorNode.java:113)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.initTopology(StreamTask.java:339)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.initialize(StreamTask.java:153)
> Caused by: org.apache.kafka.streams.errors.TopologyBuilderException: Invalid 
> topology building: Processor KSTREAM-JOIN-05 has no access to 
> StateStore KTABLE-SOURCE-STATE-STORE-00
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.getStateStore(ProcessorContextImpl.java:69)
>   at 
> org.apache.kafka.streams.kstream.internals.KTableSourceValueGetterSupplier$KTableSourceValueGetter.init(KTableSourceValueGetterSupplier.java:45)
>   at 
> org.apache.kafka.streams.kstream.internals.KTableFilter$KTableFilterValueGetter.init(KTableFilter.java:121)
>   at 
> 

Build failed in Jenkins: kafka-trunk-jdk7 #3103

2018-01-18 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6456; JavaDoc clarification for Connect SourceTask#poll() (#4432)

[wangguoz] Streams Use case anchor (#4420)

--
[...truncated 1.89 MB...]
org.apache.kafka.streams.TopologyTest > 
processorWithMultipleSourcesShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
processorWithMultipleSourcesShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddStateStoreToSink 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddStateStoreToSink 
PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAddNullStateStoreSupplier 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAddNullStateStoreSupplier 
PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddStateStoreToNonExistingProcessor STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddStateStoreToNonExistingProcessor PASSED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullStoreNameWhenConnectingProcessorAndStateStores STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullStoreNameWhenConnectingProcessorAndStateStores PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullNameWhenAddingSourceWithTopic STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullNameWhenAddingSourceWithTopic PASSED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorWithStateShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorWithStateShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowNullTopicWhenAddingSink 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowNullTopicWhenAddingSink 
PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName PASSED

org.apache.kafka.streams.TopologyTest > 
singleSourceWithListOfTopicsShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
singleSourceWithListOfTopicsShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > 
sourceWithMultipleProcessorsShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
sourceWithMultipleProcessorsShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowZeroStoreNameWhenConnectingProcessorAndStateStores STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowZeroStoreNameWhenConnectingProcessorAndStateStores PASSED

org.apache.kafka.streams.TopologyTest > 
multipleSourcesWithProcessorsShouldHaveDistinctSubtopologies STARTED

org.apache.kafka.streams.TopologyTest > 
multipleSourcesWithProcessorsShouldHaveDistinctSubtopologies PASSED

org.apache.kafka.streams.TopologyTest > shouldThrowOnUnassignedStateStoreAccess 
STARTED

org.apache.kafka.streams.TopologyTest > shouldThrowOnUnassignedStateStoreAccess 
PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullProcessorNameWhenConnectingProcessorAndStateStores STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullProcessorNameWhenConnectingProcessorAndStateStores PASSED

org.apache.kafka.streams.TopologyTest > shouldDescribeGlobalStoreTopology 
STARTED

org.apache.kafka.streams.TopologyTest > shouldDescribeGlobalStoreTopology PASSED

org.apache.kafka.streams.TopologyTest > 
shouldDescribeMultipleGlobalStoreTopology STARTED

org.apache.kafka.streams.TopologyTest > 
shouldDescribeMultipleGlobalStoreTopology PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddSinkWithSameName 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddSinkWithSameName 
PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullProcessorSupplierWhenAddingProcessor STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullProcessorSupplierWhenAddingProcessor PASSED

org.apache.kafka.streams.TopologyTest > shoudNotAllowToAddProcessorWithSameName 
STARTED

org.apache.kafka.streams.TopologyTest > shoudNotAllowToAddProcessorWithSameName 
PASSED

org.apache.kafka.streams.TopologyTest > shouldFailIfSinkIsParent STARTED

org.apache.kafka.streams.TopologyTest > shouldFailIfSinkIsParent PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddSourcesWithSameName 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddSourcesWithSameName 
PASSED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorWithMultipleStatesShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 

Error sink parse Json

2018-01-18 Thread Maria Pilar
Hi everyone,

I am trying to send events from a Kafka topic to Google Cloud Pub/Sub.

This is the sink JSON configuration I’ve used:



{
  "name": "CPSConnector",
  "config": {
    "connector.class": "com.google.pubsub.kafka.sink.CloudPubSubSinkConnector",
    "tasks.max": "1",
    "topics": "STREAM-CUSTOMER-ACCOUNTS",
    "cps.topic": "test",
    "cps.project": "test-dev",
    "maxBufferSize": "10"
  }
}


and this is the converter configuration I have in the properties file
(basically the defaults):




key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true

internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false





but I’m getting this trace in the Connect log:



"org.apache.kafka.connect.errors.DataException: Converting byte[] to Kafka
Connect data failed due to serialization error: \n\tat
org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:304)\n\tat
org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:453)\n\tat
org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:287)\n\tat
org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:198)\n\tat
org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:166)\n\tat
org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)\n\tat
org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)\n\tat
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat
java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat
java.lang.Thread.run(Thread.java:748)\nCaused by:
org.apache.kafka.common.errors.SerializationException:
com.fasterxml.jackson.core.JsonParseException: Unrecognized token
'ac0e69cb': was expecting ('true', 'false' or 'null')\n at [Source:
(byte[])\"ac0e69cb-5cab-4134-90c1-91ac70ce8b11\"; line: 1, column:
10]\nCaused by: com.fasterxml.jackson.core.JsonParseException: Unrecognized
token 'ac0e69cb': was expecting ('true', 'false' or 'null')\n at [Source:
(byte[])\"ac0e69cb-5cab-4134-90c1-91ac70ce8b11\"; line: 1, column:
10]\n\tat
com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1798)\n\tat
com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:673)\n\tat
com.fasterxml.jackson.core.json.UTF8StreamJsonParser._reportInvalidToken(UTF8StreamJsonParser.java:3527)\n\tat
com.fasterxml.jackson.core.json.UTF8StreamJsonParser._handleUnexpectedValue(UTF8StreamJsonParser.java:2622)\n\tat
com.fasterxml.jackson.core.json.UTF8StreamJsonParser._nextTokenNotInObject(UTF8StreamJsonParser.java:826)\n\tat
com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextToken(UTF8StreamJsonParser.java:723)\n\tat
com.fasterxml.jackson.databind.ObjectMapper._readTreeAndClose(ObjectMapper.java:4030)\n\tat
com.fasterxml.jackson.databind.ObjectMapper.readTree(ObjectMapper.java:2559)\n\tat
org.apache.kafka.connect.json.JsonDeserializer.deserialize(JsonDeserializer.java:50)\n\tat
org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:302)\n\tat
org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:453)\n\tat
org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:287)\n\tat
org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:198)\n\tat
org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:166)\n\tat
org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)\n\tat
org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)\n\tat
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat
java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat
java.lang.Thread.run(Thread.java:748)\n"




I’m not sure what configuration is needed to be able to send the events.


Thanks
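
The trace gives a strong hint: the key bytes are the bare string
ac0e69cb-5cab-4134-90c1-91ac70ce8b11, which is not valid JSON, so
JsonConverter fails as soon as it tries to parse that side of the record
(most likely the key). Assuming the keys really are plain UUID strings, a
minimal sketch of the converter properties that should avoid this parse
error -- not a verified configuration -- would be:

# Keys are raw UUID strings, not JSON -- parse them as plain strings.
key.converter=org.apache.kafka.connect.storage.StringConverter

# Values stay JSON. If the value payloads are bare JSON without the
# {"schema": ..., "payload": ...} envelope, schemas must be disabled too.
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false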


Re: 1.1 KIPs

2018-01-18 Thread Damian Guy
Gentle reminder that the KIP deadline is just 5 days away. If there is
anything that you want in 1.1 and that hasn't been voted on yet, now is the time!

On Thu, 18 Jan 2018 at 08:49 Damian Guy  wrote:

> Hi Xavier,
> I'll add it to the plan.
>
> Thanks,
> Damian
>
> On Tue, 16 Jan 2018 at 19:04 Xavier Léauté  wrote:
>
>> Hi Damian, I believe the list should also include KAFKA-5886 (KIP-91)
>> which
>> was voted for 1.0 but wasn't ready to be merged in time.
>>
>> On Tue, Jan 16, 2018 at 5:13 AM Damian Guy  wrote:
>>
>> > Hi,
>> >
>> > This is a reminder that we have one week left until the KIP deadline of
>> Jan
>> > 23. There are still some KIPs that are under discussion and/or being
>> voted
>> > on. Please keep in mind that the voting needs to be complete before the
>> > deadline for the KIP to be added to the release.
>> >
>> >
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75957546
>> >
>> > Thanks,
>> > Damian
>> >
>>
>


Build failed in Jenkins: kafka-trunk-jdk9 #318

2018-01-18 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6456; JavaDoc clarification for Connect SourceTask#poll() (#4432)

[wangguoz] Streams Use case anchor (#4420)

[rajinisivaram] MINOR: additional check to follower fetch handling  (#4433)

[jason] MINOR: Fix a typo in a comment in config/server.properties (#4373)

--
[...truncated 1.02 MB...]
org.apache.kafka.clients.producer.internals.ProducerBatchTest > 
testSplitPreservesHeaders STARTED

org.apache.kafka.clients.producer.internals.ProducerBatchTest > 
testSplitPreservesHeaders PASSED

org.apache.kafka.clients.producer.MockProducerTest > shouldInitTransactions 
STARTED

org.apache.kafka.clients.producer.MockProducerTest > shouldInitTransactions 
PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnAbortTransactionIfProducerIsClosed STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnAbortTransactionIfProducerIsClosed PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldBeFlushedWithAutoCompleteIfBufferedRecords STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldBeFlushedWithAutoCompleteIfBufferedRecords PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldNotBeFlushedAfterFlush STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldNotBeFlushedAfterFlush PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnBeginTransactionIfTransactionsNotInitialized STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnBeginTransactionIfTransactionsNotInitialized PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnSendIfProducerGotFenced STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnSendIfProducerGotFenced PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnCommitTransactionIfNoTransactionGotStarted STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnCommitTransactionIfNoTransactionGotStarted PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowSendOffsetsToTransactionIfProducerIsClosed STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowSendOffsetsToTransactionIfProducerIsClosed PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnCloseIfProducerIsClosed STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnCloseIfProducerIsClosed PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldPublishLatestAndCumulativeConsumerGroupOffsetsOnlyAfterCommitIfTransactionsAreEnabled
 STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldPublishLatestAndCumulativeConsumerGroupOffsetsOnlyAfterCommitIfTransactionsAreEnabled
 PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldPublishConsumerGroupOffsetsOnlyAfterCommitIfTransactionsAreEnabled STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldPublishConsumerGroupOffsetsOnlyAfterCommitIfTransactionsAreEnabled PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnSendOffsetsToTransactionTransactionIfNoTransactionGotStarted 
STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnSendOffsetsToTransactionTransactionIfNoTransactionGotStarted PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnNullConsumerGroupIdWhenSendOffsetsToTransaction STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnNullConsumerGroupIdWhenSendOffsetsToTransaction PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldAddOffsetsWhenSendOffsetsToTransaction STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldAddOffsetsWhenSendOffsetsToTransaction PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnAbortForNonAutoCompleteIfTransactionsAreEnabled STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnAbortForNonAutoCompleteIfTransactionsAreEnabled PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldResetSentOffsetsFlagOnlyWhenBeginningNewTransaction STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldResetSentOffsetsFlagOnlyWhenBeginningNewTransaction PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnInitTransactionIfProducerIsClosed STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldThrowOnInitTransactionIfProducerIsClosed PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldPreserveCommittedMessagesOnAbortIfTransactionsAreEnabled STARTED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldPreserveCommittedMessagesOnAbortIfTransactionsAreEnabled PASSED

org.apache.kafka.clients.producer.MockProducerTest > 
shouldIgnoreEmptyOffsetsWhenSendOffsetsToTransaction STARTED

org.apache.kafka.clients.producer.MockProducerTest > 

Build failed in Jenkins: kafka-1.0-jdk7 #125

2018-01-18 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Fix error message in KafkaConfig validation (#4417)

--
[...truncated 1.83 MB...]

org.apache.kafka.streams.processor.TopologyBuilderTest > testAddStateStore 
STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testAddStateStore 
PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkConnectedWithParent STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkConnectedWithParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithNonExistingProcessor STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithNonExistingProcessor PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCleanupPolicyDeleteForInternalTopics STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCleanupPolicyDeleteForInternalTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldCorrectlyMapStateStoreToInternalTopics STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldCorrectlyMapStateStoreToInternalTopics PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullTopicsWhenAddingSourceWithPattern STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullTopicsWhenAddingSourceWithPattern PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowZeroTopicsWhenAddingSource STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowZeroTopicsWhenAddingSource PASSED

org.apache.kafka.streams.TopologyTest > shouldFailOnUnknownSource STARTED

org.apache.kafka.streams.TopologyTest > shouldFailOnUnknownSource PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowNullNameWhenAddingSink 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowNullNameWhenAddingSink 
PASSED

org.apache.kafka.streams.TopologyTest > 
multipleSourcesShouldHaveDistinctSubtopologies STARTED

org.apache.kafka.streams.TopologyTest > 
multipleSourcesShouldHaveDistinctSubtopologies PASSED

org.apache.kafka.streams.TopologyTest > 
testNamedTopicMatchesAlreadyProvidedPattern STARTED

org.apache.kafka.streams.TopologyTest > 
testNamedTopicMatchesAlreadyProvidedPattern PASSED

org.apache.kafka.streams.TopologyTest > 
processorsWithSharedStateShouldHaveSameSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
processorsWithSharedStateShouldHaveSameSubtopology PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddTopicTwice STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddTopicTwice PASSED

org.apache.kafka.streams.TopologyTest > 
processorWithMultipleSourcesShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
processorWithMultipleSourcesShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddStateStoreToSink 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowToAddStateStoreToSink 
PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAddNullStateStoreSupplier 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAddNullStateStoreSupplier 
PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddStateStoreToNonExistingProcessor STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddStateStoreToNonExistingProcessor PASSED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullStoreNameWhenConnectingProcessorAndStateStores STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullStoreNameWhenConnectingProcessorAndStateStores PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullNameWhenAddingSourceWithTopic STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowNullNameWhenAddingSourceWithTopic PASSED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorWithStateShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
sourceAndProcessorWithStateShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > shouldNotAllowNullTopicWhenAddingSink 
STARTED

org.apache.kafka.streams.TopologyTest > shouldNotAllowNullTopicWhenAddingSink 
PASSED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName STARTED

org.apache.kafka.streams.TopologyTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName PASSED

org.apache.kafka.streams.TopologyTest > 
singleSourceWithListOfTopicsShouldHaveSingleSubtopology STARTED

org.apache.kafka.streams.TopologyTest > 
singleSourceWithListOfTopicsShouldHaveSingleSubtopology PASSED

org.apache.kafka.streams.TopologyTest > 

Re: [ANNOUNCE] New committer: Matthias J. Sax

2018-01-18 Thread Matthias J. Sax
Thanks a lot everyone!

It's a real pleasure to work with all of you :)


-Matthias

On 1/17/18 3:38 AM, Satish Duggana wrote:
> Congratulations Matthias!
> 
> On Tue, Jan 16, 2018 at 11:52 AM, Becket Qin  wrote:
> 
>> Congrats, Matthias!
>>
>> On Mon, Jan 15, 2018 at 9:54 PM, Konstantine Karantasis <
>> konstant...@confluent.io> wrote:
>>
>>> Matthias! Congratulations!
>>>
>>> Konstantine
>>>
>>> On Mon, Jan 15, 2018 at 4:18 AM, Edoardo Comar 
>> wrote:
>>>
 Congratulations Matthias !
 --

 Edoardo Comar

 IBM Message Hub

 IBM UK Ltd, Hursley Park, SO21 2JN



 From:   UMESH CHAUDHARY 
 To: dev@kafka.apache.org
 Date:   15/01/2018 11:47
 Subject:Re: [ANNOUNCE] New committer: Matthias J. Sax



 Congratulations Matthias :)

 On Mon, 15 Jan 2018 at 17:06 Michael Noll 
>> wrote:

> Herzlichen Glückwunsch, Matthias. ;-)
>
>
>
> On Mon, 15 Jan 2018 at 11:38 Viktor Somogyi >>
> wrote:
>
>> Congrats Matthias!
>>
>> On Mon, Jan 15, 2018 at 9:17 AM, Jorge Esteban Quilcate Otoya <
>> quilcate.jo...@gmail.com> wrote:
>>
>>> Congratulations Matthias!!
>>>
>>> El lun., 15 ene. 2018 a las 9:08, Boyang Chen
 ()
>>> escribió:
>>>
 Great news Matthias!


 
 From: Kaufman Ng 
 Sent: Monday, January 15, 2018 11:32 AM
 To: us...@kafka.apache.org
 Cc: dev@kafka.apache.org
 Subject: Re: [ANNOUNCE] New committer: Matthias J. Sax

 Congrats Matthias!

 On Sun, Jan 14, 2018 at 4:52 AM, Rajini Sivaram <
>> rajinisiva...@gmail.com

 wrote:

> Congratulations Matthias!
>
> On Sat, Jan 13, 2018 at 11:34 AM, Mickael Maison <
 mickael.mai...@gmail.com
>>
> wrote:
>
>> Congratulations Matthias !
>>
>> On Sat, Jan 13, 2018 at 7:01 AM, Paolo Patierno <
>> ppatie...@live.com>
>> wrote:
>>> Congratulations Matthias ! Very well deserved !
>>> 
>>> From: Guozhang Wang 
>>> Sent: Friday, January 12, 2018 11:59:21 PM
>>> To: dev@kafka.apache.org; us...@kafka.apache.org
>>> Subject: [ANNOUNCE] New committer: Matthias J. Sax
>>>
>>> Hello everyone,
>>>
> >>> The PMC of Apache Kafka is pleased to announce Matthias J. Sax as our
> >>> newest Kafka committer.
> >>>
> >>> Matthias has made tremendous contributions to the Kafka Streams API
> >>> since early 2016. His footprint has been all over the place in Streams:
> >>> in the past two years he has been the main driver on improving the join
> >>> semantics inside the Streams DSL, summarizing all their shortcomings and
> >>> bridging the gaps; he has also been working extensively on the
> >>> exactly-once semantics of Streams by leveraging the transactional
> >>> messaging feature in 0.11.0. In addition, Matthias has been very active
> >>> in community activity that goes beyond the mailing list: he has close to
> >>> 1000 upvotes and 100 helpful flags on SO for answering almost all
> >>> questions about Kafka Streams.
> >>>
> >>> Thank you for your contribution and welcome to Apache Kafka, Matthias!
> >>>
> >>>
> >>> Guozhang, on behalf of the Apache Kafka PMC
>>
>



 --
 Kaufman Ng
 +1 646 961 8063
 Solutions Architect | Confluent | www.confluent.io

Build failed in Jenkins: kafka-trunk-jdk8 #2339

2018-01-18 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Fix error message in KafkaConfig validation (#4417)

[jason] KAFKA-6456; JavaDoc clarification for Connect SourceTask#poll() (#4432)

[wangguoz] Streams Use case anchor (#4420)

--
[...truncated 402.77 KB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 

Build failed in Jenkins: kafka-trunk-jdk9 #317

2018-01-18 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Need to get a new transformer for each get() call. can't 
share'em

[github] improve internal topic integration test (#4437)

[jason] MINOR: Fix typo in KafkaConsumer javadoc (#4422)

[jason] MINOR: Fix error message in KafkaConfig validation (#4417)

--
[...truncated 1.86 MB...]

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowOffsetResetSourceWithDuplicateSourceName STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowOffsetResetSourceWithDuplicateSourceName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigForNonWindowStores STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigForNonWindowStores PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics 
STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSink STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testNamedTopicMatchesAlreadyProvidedPattern STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testNamedTopicMatchesAlreadyProvidedPattern PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddTimestampExtractorWithOffsetResetAndPatternPerSource STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddTimestampExtractorWithOffsetResetAndPatternPerSource PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSameName STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSelfParent STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSelfParent STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAssociateStateStoreNameWhenStateStoreSupplierIsInternal STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAssociateStateStoreNameWhenStateStoreSupplierIsInternal PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSink STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testTopicGroups STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testTopicGroups PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testBuild STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testBuild PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowOffsetResetSourceWithoutTopics STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowOffsetResetSourceWithoutTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAddNullStateStoreSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAddNullStateStoreSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSource STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSource PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullTopicWhenAddingSink STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullTopicWhenAddingSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddSourceWithOffsetReset STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddSourceWithOffsetReset PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSource STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSource PASSED


Re: [DISCUSS] KIP-248 Create New ConfigCommand That Uses The New AdminClient

2018-01-18 Thread Rajini Sivaram
Hi Viktor,

Thanks for the updates.

*QuotaSource* currently has *Self/Default/Parent*. Not sure if that is
sufficient.
For the entity <user, client-id>, quota could be used from any of these
configs:

   1. /config/users/<user>/clients/<client-id>
   2. /config/users/<user>/clients/<default>
   3. /config/users/<user>
   4. /config/users/<default>/clients/<client-id>
   5. /config/users/<default>/clients/<default>
   6. /config/users/<default>
   7. /config/clients/<client-id>
   8. /config/clients/<default>

So perhaps we should have a *QuotaSource* entry for each of these eight?
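
To make that concrete, a sketch of what the expanded enum could look like
(entry names are purely illustrative, not taken from the KIP):

/**
 * Illustrative sketch only -- one entry per ZooKeeper config path that a
 * (user, client-id) quota can be resolved from, highest precedence first.
 */
public enum QuotaSource {
    USER_CLIENT,                  // /config/users/<user>/clients/<client-id>
    USER_DEFAULT_CLIENT,          // /config/users/<user>/clients/<default>
    USER,                         // /config/users/<user>
    DEFAULT_USER_CLIENT,          // /config/users/<default>/clients/<client-id>
    DEFAULT_USER_DEFAULT_CLIENT,  // /config/users/<default>/clients/<default>
    DEFAULT_USER,                 // /config/users/<default>
    CLIENT,                       // /config/clients/<client-id>
    DEFAULT_CLIENT                // /config/clients/<default>
}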

A couple of minor points:

   - *Help Message* still uses --config.properties
   - The other AdminClient APIs don't use aliases for various collections.
   So not sure if we need the aliases here. I think you can leave it as-is and
   see what others think.

Yes, please do start the voting thread to make it in time for the KIP
freeze.

Thank you,

Rajini


On Thu, Jan 18, 2018 at 6:15 PM, Viktor Somogyi 
wrote:

> Rajini, I have updated the KIP as agreed. Could you please have a second
> look at it?
> I have also added a section about SCRAM:
> "To enable describing and altering SCRAM credentials we will use the
> DescribeConfigs and AlterConfigs protocols. There are no changes in the
> protocol's structure but we will allow the USER resource type to be passed
> in the protocol. When this happens, the server will know that SCRAM configs
> are being requested and will send them in the response. In the case of
> AlterConfigs, if a USER resource type is passed, the server will validate
> that only SCRAM credentials are being changed; if not, it will fail with
> InvalidRequestException."
>
> If you don't have any comments, we might start voting as we're close to KIP
> freeze.
>
> On Thu, Jan 18, 2018 at 12:12 PM, Viktor Somogyi 
> wrote:
>
> > 3. Ok, I'll remove this from the KIP for now and perhaps add a future
> > considerations section with the idea.
> >
> > 9. Ok, I can do that.
> >
> > On Thu, Jan 18, 2018 at 11:18 AM, Rajini Sivaram <
> rajinisiva...@gmail.com>
> > wrote:
> >
> >> Hi Viktor,
> >>
> >> 3. Agree that it would be better to use something like ConfigEntityList
> >> rather than ListQuotas. But I would leave it out for now since we are so
> >> close to KIP freeze. We can introduce it later if required. Earlier, I
> was
> >> thinking that if we just wanted to get a list of entities without their
> >> actual quota values, you could have an option in DescribeQuotas to
> return
> >> the entities without the quota values. But actually that doesn't make
> >> sense
> >> since you will need to read all the ZK entries and find the ones with
> >> quotas in the first place. So let's just leave DescribeQuotas as-is.
> >>
> >> 7. Yes, with client-id quotas at the lowest level. The full list in the
> >> order of precedence is here:
> >> https://kafka.apache.org/documentation/#design_quotasconfig
> >>
> >> 9. One more suggestion. Since DescribeQuotas and AlterQuotas are
> specific
> >> to quotas, we could use *quota* instead of *config* in the protocol (and
> >> AdminClient API). Instead of *config_name*, we could use a *quota_type*
> >> enum (we have three types). And *config_value* could be *quota_value*,
> >> which is a double rather than a string.
> >>
> >> On Thu, Jan 18, 2018 at 9:38 AM, Viktor Somogyi <
> viktorsomo...@gmail.com>
> >> wrote:
> >>
> >> > Hi Rajini,
> >> >
> >> > 1. Yes, --adminclient.config it is. I missed that, sorry :)
> >> >
> >> > 3. Indeed SCRAM in this case can raise complications. Since we'd like
> to
> >> > handle altering SCRAM credentials via AlterConfigs, I think we should
> >> use
> >> > DescribeConfigs to describe them. That is, to describe all the
> entities
> >> of
> >> > a given type we might need to introduce some kind of generic way of
> >> getting
> >> > metadata of config entities instead of ListQuotas which is very
> >> specific.
> >> > Therefore instead of ListQuotas we could have a ConfigMetadata (or
> >> > ConfigEntityList) protocol tailored to the needs of the admin client.
> >> This
> >> > protocol would accept a list of config entities for the request and
> >> produce
> >> > a list of entities as a response. For instance requesting (type:USER
> >> > name:user1, type:CLIENT name:) resources would return all the clients
> of
> >> > user1. This I think would better fit the use case and potentially
> future
> >> > use case too (like 'group' that you mentioned).
> >> > What do you think? Should we introduce a protocol like this or shall
> we
> >> > solve the problem without it?
> >> > Also, in a previous email you mentioned that we could use options.
> Could
> >> > you please elaborate on this idea?
> >> >
> >> > 7. Yea, that is a good idea. How are quotas applied? Does it first
> fall
> >> > back to the default on the same level and if there is no default then
> >> > applies the parent's config?
> >> >
> >> > Regards,
> >> > Viktor
> >> >
> >> >
> >> > On Wed, Jan 17, 2018 at 12:56 PM, Rajini Sivaram <
> >> rajinisiva...@gmail.com>
> >> > wrote:
> >> >

Re: [VOTE] KIP-247: Add public test utils for Kafka Streams

2018-01-18 Thread Guozhang Wang
@Matthias

This came to mind while reviewing another PR that uses the test driver: could
we add a `Map<String, StateStore> allStateStores()` method to the
`TopologyTestDriver` besides all the get-store-by-name functions? This is
because some of the internal state stores may be created implicitly, but
users may still want to check their state.
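
For what it's worth, a usage sketch of the proposed method (the method does
not exist in the current API, so everything below is illustrative only):

import java.util.Map;
import java.util.Properties;

import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.TopologyTestDriver;
import org.apache.kafka.streams.processor.StateStore;

public class AllStoresCheck {
    // Iterate over every state store in the driver, including stores the
    // Streams DSL created implicitly.
    public static void printStores(final Topology topology, final Properties config) {
        final TopologyTestDriver driver = new TopologyTestDriver(topology, config);
        try {
            final Map<String, StateStore> stores = driver.allStateStores();
            for (final Map.Entry<String, StateStore> entry : stores.entrySet()) {
                System.out.println(entry.getKey() + " open=" + entry.getValue().isOpen());
            }
        } finally {
            driver.close();
        }
    }
}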


Guozhang


On Thu, Jan 18, 2018 at 8:40 AM, James Cheng  wrote:

> +1 (non-binding)
>
> -James
>
> Sent from my iPhone
>
> > On Jan 17, 2018, at 6:09 PM, Matthias J. Sax 
> wrote:
> >
> > Hi,
> >
> > I would like to start the vote for KIP-247:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 247%3A+Add+public+test+utils+for+Kafka+Streams
> >
> >
> > -Matthias
> >
>



-- 
-- Guozhang


Build failed in Jenkins: kafka-trunk-jdk7 #3102

2018-01-18 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Need to get a new transformer for each get() call. can't 
share'em

[github] improve internal topic integration test (#4437)

[jason] MINOR: Fix typo in KafkaConsumer javadoc (#4422)

[jason] MINOR: Fix error message in KafkaConfig validation (#4417)

--
[...truncated 404.11 KB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow STARTED


Re: [DISCUSS] KIP-208: Add SSL support to Kafka Connect REST interface

2018-01-18 Thread Randall Hauch
Jakub, have you had a chance to update the KIP with the latest changes?
Would be great to start the vote today so that it's open long enough to
adopt before the deadline on Tuesday. Let me know if I can help.

On Wed, Jan 17, 2018 at 1:25 AM, Jakub Scholz  wrote:

> I have been thinking about this a bit more yesterday while updating the
> code. I think you are right, we should use only the prefixed values if at
> least one of them exists. Even I got quite easily confused about which setup is
> actually used when the fields are mixed :-). Randall was also in favour of
> this approach. So I think we should go this way. I will update the KIP
> accordingly.
>
>
> > I'm fine with consistency, but maybe the thing to do here then is to ensure
> > that we definitely log the "effective" or "derived" config before using it
> > so there is at least some useful trace of what we actually used that can be
> > helpful in debugging.
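
For illustration, the prefixed form under discussion would look roughly like
this in the worker properties (key names follow the KIP draft at the time of
writing and should be treated as provisional until the vote):

# The REST interface listener.
listeners=https://myhost:8443

# Prefixed SSL options apply only to the REST interface. With the approach
# agreed above, as soon as any prefixed option is set, the non-prefixed
# ssl.* values below are ignored entirely for the REST listener.
listeners.https.ssl.keystore.location=/var/private/ssl/rest.keystore.jks
listeners.https.ssl.keystore.password=rest-secret

# Non-prefixed SSL options, shared with the worker's Kafka connections.
ssl.keystore.location=/var/private/ssl/worker.keystore.jks
ssl.keystore.password=worker-secret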


Build failed in Jenkins: kafka-trunk-jdk8 #2338

2018-01-18 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Need to get a new transformer for each get() call. can't 
share'em

[github] improve internal topic integration test (#4437)

[jason] MINOR: Fix typo in KafkaConsumer javadoc (#4422)

--
[...truncated 410.84 KB...]

kafka.utils.CoreUtilsTest > testReadInt PASSED

kafka.utils.CoreUtilsTest > testAtomicGetOrUpdate STARTED

kafka.utils.CoreUtilsTest > testAtomicGetOrUpdate PASSED

kafka.utils.CoreUtilsTest > testUrlSafeBase64EncodeUUID STARTED

kafka.utils.CoreUtilsTest > testUrlSafeBase64EncodeUUID PASSED

kafka.utils.CoreUtilsTest > testCsvMap STARTED

kafka.utils.CoreUtilsTest > testCsvMap PASSED

kafka.utils.CoreUtilsTest > testInLock STARTED

kafka.utils.CoreUtilsTest > testInLock PASSED

kafka.utils.CoreUtilsTest > testTryAll STARTED

kafka.utils.CoreUtilsTest > testTryAll PASSED

kafka.utils.CoreUtilsTest > testSwallow STARTED

kafka.utils.CoreUtilsTest > testSwallow PASSED

kafka.utils.IteratorTemplateTest > testIterator STARTED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.utils.json.JsonValueTest > testJsonObjectIterator STARTED

kafka.utils.json.JsonValueTest > testJsonObjectIterator PASSED

kafka.utils.json.JsonValueTest > testDecodeLong STARTED

kafka.utils.json.JsonValueTest > testDecodeLong PASSED

kafka.utils.json.JsonValueTest > testAsJsonObject STARTED

kafka.utils.json.JsonValueTest > testAsJsonObject PASSED

kafka.utils.json.JsonValueTest > testDecodeDouble STARTED

kafka.utils.json.JsonValueTest > testDecodeDouble PASSED

kafka.utils.json.JsonValueTest > testDecodeOption STARTED

kafka.utils.json.JsonValueTest > testDecodeOption PASSED

kafka.utils.json.JsonValueTest > testDecodeString STARTED

kafka.utils.json.JsonValueTest > testDecodeString PASSED

kafka.utils.json.JsonValueTest > testJsonValueToString STARTED

kafka.utils.json.JsonValueTest > testJsonValueToString PASSED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArray STARTED

kafka.utils.json.JsonValueTest > testAsJsonArray PASSED

kafka.utils.json.JsonValueTest > testJsonValueHashCode STARTED

kafka.utils.json.JsonValueTest > testJsonValueHashCode PASSED

kafka.utils.json.JsonValueTest > testDecodeInt STARTED

kafka.utils.json.JsonValueTest > testDecodeInt PASSED

kafka.utils.json.JsonValueTest > testDecodeMap STARTED

kafka.utils.json.JsonValueTest > testDecodeMap PASSED

kafka.utils.json.JsonValueTest > testDecodeSeq STARTED

kafka.utils.json.JsonValueTest > testDecodeSeq PASSED

kafka.utils.json.JsonValueTest > testJsonObjectGet STARTED

kafka.utils.json.JsonValueTest > testJsonObjectGet PASSED

kafka.utils.json.JsonValueTest > testJsonValueEquals STARTED

kafka.utils.json.JsonValueTest > testJsonValueEquals PASSED

kafka.utils.json.JsonValueTest > testJsonArrayIterator STARTED

kafka.utils.json.JsonValueTest > testJsonArrayIterator PASSED

kafka.utils.json.JsonValueTest > testJsonObjectApply STARTED

kafka.utils.json.JsonValueTest > testJsonObjectApply PASSED

kafka.utils.json.JsonValueTest > testDecodeBoolean STARTED

kafka.utils.json.JsonValueTest > testDecodeBoolean PASSED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic STARTED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic PASSED

kafka.producer.AsyncProducerTest > testQueueTimeExpired STARTED

kafka.producer.AsyncProducerTest > testQueueTimeExpired PASSED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents STARTED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents PASSED

kafka.producer.AsyncProducerTest > testBatchSize STARTED

kafka.producer.AsyncProducerTest > testBatchSize PASSED

kafka.producer.AsyncProducerTest > testSerializeEvents STARTED

kafka.producer.AsyncProducerTest > testSerializeEvents PASSED

kafka.producer.AsyncProducerTest > testProducerQueueSize STARTED

kafka.producer.AsyncProducerTest > testProducerQueueSize PASSED

kafka.producer.AsyncProducerTest > testRandomPartitioner STARTED

kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED

kafka.producer.AsyncProducerTest > testInvalidConfiguration STARTED

kafka.producer.AsyncProducerTest > testInvalidConfiguration PASSED

kafka.producer.AsyncProducerTest > testInvalidPartition STARTED

kafka.producer.AsyncProducerTest > testInvalidPartition PASSED

kafka.producer.AsyncProducerTest > testNoBroker STARTED

kafka.producer.AsyncProducerTest > testNoBroker PASSED

kafka.producer.AsyncProducerTest > testProduceAfterClosed STARTED

kafka.producer.AsyncProducerTest > testProduceAfterClosed PASSED

kafka.producer.AsyncProducerTest > testJavaProducer STARTED

kafka.producer.AsyncProducerTest > testJavaProducer PASSED


Re: [DISCUSS] KIP-248 Create New ConfigCommand That Uses The New AdminClient

2018-01-18 Thread Viktor Somogyi
Rajini, I have updated the KIP as agreed. Could you please have a second
look at it?
I have also added a section about SCRAM:
"To enable describing and altering SCRAM credentials we will use the
DescribeConfigs and AlterConfigs protocols. There are no changes in the
protocol's structure but we will allow the USER resource type to be passed
in the protocol. When this happens, the server will know that SCRAM configs
are being requested and will send them in the response. In the case of
AlterConfigs, if a USER resource type is passed, the server will validate
that only SCRAM credentials are being changed; if not, it will fail with
InvalidRequestException."

If you don't have any comments, we might start voting as we're close to KIP
freeze.

On Thu, Jan 18, 2018 at 12:12 PM, Viktor Somogyi 
wrote:

> 3. Ok, I'll remove this from the KIP for now and perhaps add a future
> considerations section with the idea.
>
> 9. Ok, I can do that.
>
> On Thu, Jan 18, 2018 at 11:18 AM, Rajini Sivaram 
> wrote:
>
>> Hi Viktor,
>>
>> 3. Agree that it would be better to use something like ConfigEntityList
>> rather than ListQuotas. But I would leave it out for now since we are so
>> close to KIP freeze. We can introduce it later if required. Earlier, I was
>> thinking that if we just wanted to get a list of entities without their
>> actual quota values, you could have an option in DescribeQuotas to return
>> the entities without the quota values. But actually that doesn't make
>> sense
>> since you will need to read all the ZK entries and find the ones with
>> quotas in the first place. So let's just leave DescribeQuotas as-is.
>>
>> 7. Yes, with client-id quotas at the lowest level. The full list in the
>> order of precedence is here:
>> https://kafka.apache.org/documentation/#design_quotasconfig
>>
>> 9. One more suggestion. Since DescribeQuotas and AlterQuotas are specific
>> to quotas, we could use *quota* instead of *config* in the protocol (and
>> AdminClient API). Instead of *config_name*, we could use a *quota_type*
>> enum (we have three types). And *config_value* could be *quota_value*,
>> which is a double rather than a string.
>>
>> On Thu, Jan 18, 2018 at 9:38 AM, Viktor Somogyi 
>> wrote:
>>
>> > Hi Rajini,
>> >
>> > 1. Yes, --adminclient.config it is. I missed that, sorry :)
>> >
>> > 3. Indeed SCRAM in this case can raise complications. Since we'd like to
>> > handle altering SCRAM credentials via AlterConfigs, I think we should
>> use
>> > DescribeConfigs to describe them. That is, to describe all the entities
>> of
>> > a given type we might need to introduce some kind of generic way of
>> getting
>> > metadata of config entities instead of ListQuotas which is very
>> specific.
>> > Therefore instead of ListQuotas we could have a ConfigMetadata (or
>> > ConfigEntityList) protocol tailored to the needs of the admin client.
>> This
>> > protocol would accept a list of config entities for the request and
>> produce
>> > a list of entities as a response. For instance requesting (type:USER
>> > name:user1, type:CLIENT name:) resources would return all the clients of
>> > user1. This I think would better fit the use case and potentially future
>> > use case too (like 'group' that you mentioned).
>> > What do you think? Should we introduce a protocol like this or shall we
>> > solve the problem without it?
>> > Also, in a previous email you mentioned that we could use options. Could
>> > you please elaborate on this idea?
>> >
>> > 7. Yea, that is a good idea. How are quotas applied? Does it first fall
>> > back to the default on the same level and if there is no default then
>> > applies the parent's config?
>> >
>> > Regards,
>> > Viktor
>> >
>> >
>> > On Wed, Jan 17, 2018 at 12:56 PM, Rajini Sivaram <
>> rajinisiva...@gmail.com>
>> > wrote:
>> >
>> > > Hi Viktor,
>> > >
>> > > Thank you for the responses.
>> > >
>> > > 1. ConsoleProducer uses *--producer.config  --producer-property
>> > > key=value*, ConsoleConsumer uses* --consumer.config 
>> > > --consumer-property key=value*, so perhaps we should use
>> > > *--adminclient.config
>> > > *rather than *--config.properties*?
>> > >
>> > > 3. The one difference is that with ListGroups, ListTopics etc. you are
>> > > listing the entities (groups/topics). With ConfigCommand, you can list
>> > > entities and that makes sense. But with ListQuotas, quota is an
>> > attribute,
>> > > the entity is user/clientid/(user, clientId). This is significant
>> since
>> > we
>> > > can store other attributes for those entities. For instance, we store
>> > SCRAM
>> > > credentials along with quotas for 'user'. So to ListQuotas for user,
>> you
>> > > need to actually get the entries and check if quotas are defined or
>> just
>> > > credentials.
>> > >
>> > > 7.
>> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> > > 226+-+Dynamic+Broker+Configuration
>> > > is
>> > > replacing the *is_default* flag in the config 

Re: add to contributor list

2018-01-18 Thread Guozhang Wang
Hello Brandon,

Thanks for your interest in contributing. I have added you to the
contributors list.

Cheers,

Guozhang

On Thu, Jan 18, 2018 at 9:22 AM, Brandon Kirchner <
brandon.kirch...@gmail.com> wrote:

> Hello!
>
> Could someone please add me to the contributors list in JIRA so I can
> assign a jira card to myself?
>
> thanks!
>
> Brandon K.
>



-- 
-- Guozhang


Re: [VOTE] KIP-222 - Add "describe consumer group" to KafkaAdminClient

2018-01-18 Thread Mickael Maison
+1 (non binding), thanks

On Thu, Jan 18, 2018 at 5:41 PM, Colin McCabe  wrote:
> +1 (non-binding)
>
> Colin
>
>
> On Thu, Jan 18, 2018, at 07:36, Ted Yu wrote:
>> +1
>>  Original message From: Bill Bejeck 
>> Date: 1/18/18  6:59 AM  (GMT-08:00) To: dev@kafka.apache.org Subject:
>> Re: [VOTE] KIP-222 - Add "describe consumer group" to KafkaAdminClient
>> Thanks for the KIP
>>
>> +1
>>
>> Bill
>>
>> On Thu, Jan 18, 2018 at 4:24 AM, Rajini Sivaram 
>> wrote:
>>
>> > +1 (binding)
>> >
>> > Thanks for the KIP, Jorge.
>> >
>> > Regards,
>> >
>> > Rajini
>> >
>> > On Wed, Jan 17, 2018 at 9:04 PM, Guozhang Wang  wrote:
>> >
>> > > +1 (binding). Thanks Jorge.
>> > >
>> > >
>> > > Guozhang
>> > >
>> > > On Wed, Jan 17, 2018 at 11:29 AM, Gwen Shapira 
>> > wrote:
>> > >
>> > > > Hey, since there were no additional comments in the discussion, I'd
>> > like
>> > > to
>> > > > resume the voting.
>> > > >
>> > > > +1 (binding)
>> > > >
>> > > > On Fri, Nov 17, 2017 at 9:15 AM Guozhang Wang 
>> > > wrote:
>> > > >
>> > > > > Hello Jorge,
>> > > > >
>> > > > > I left some comments on the discuss thread. The wiki page itself
>> > looks
>> > > > good
>> > > > > overall.
>> > > > >
>> > > > >
>> > > > > Guozhang
>> > > > >
>> > > > > On Tue, Nov 14, 2017 at 10:02 AM, Jorge Esteban Quilcate Otoya <
>> > > > > quilcate.jo...@gmail.com> wrote:
>> > > > >
>> > > > > > Added.
>> > > > > >
>> > > > > > El mar., 14 nov. 2017 a las 19:00, Ted Yu ()
>> > > > > > escribió:
>> > > > > >
>> > > > > > > Please fill in JIRA number in Status section.
>> > > > > > >
>> > > > > > > On Tue, Nov 14, 2017 at 9:57 AM, Jorge Esteban Quilcate Otoya <
>> > > > > > > quilcate.jo...@gmail.com> wrote:
>> > > > > > >
>> > > > > > > > JIRA issue title updated.
>> > > > > > > >
>> > > > > > > > El mar., 14 nov. 2017 a las 18:45, Ted Yu (<
>> > yuzhih...@gmail.com
>> > > >)
>> > > > > > > > escribió:
>> > > > > > > >
>> > > > > > > > > Can you fill in JIRA number (KAFKA-6058
>> > > > > > > > > ) ?
>> > > > > > > > >
>> > > > > > > > > If one JIRA is used for the two additions, consider updating
>> > > the
>> > > > > JIRA
>> > > > > > > > > title.
>> > > > > > > > >
>> > > > > > > > > On Tue, Nov 14, 2017 at 9:04 AM, Jorge Esteban Quilcate
>> > Otoya <
>> > > > > > > > > quilcate.jo...@gmail.com> wrote:
>> > > > > > > > >
>> > > > > > > > > > Hi all,
>> > > > > > > > > >
>> > > > > > > > > > As I didn't see any further discussion around this KIP, I'd
>> > > > like
>> > > > > to
>> > > > > > > > start
>> > > > > > > > > > voting.
>> > > > > > > > > >
>> > > > > > > > > > KIP documentation:
>> > > > > > > > > >
>> > > > > > > > > https://cwiki.apache.org/confluence/pages/viewpage.
>> > > > > > > > action?pageId=74686265
>> > > > > > > > > >
>> > > > > > > > > > Cheers,
>> > > > > > > > > > Jorge.
>> > > > > > > > > >
>> > > > > > > > >
>> > > > > > > >
>> > > > > > >
>> > > > > >
>> > > > >
>> > > > >
>> > > > >
>> > > > > --
>> > > > > -- Guozhang
>> > > > >
>> > > >
>> > >
>> > >
>> > >
>> > > --
>> > > -- Guozhang
>> > >
>> >


Re: [VOTE] KIP-222 - Add "describe consumer group" to KafkaAdminClient

2018-01-18 Thread Colin McCabe
+1 (non-binding)

Colin


On Thu, Jan 18, 2018, at 07:36, Ted Yu wrote:
> +1
>  Original message From: Bill Bejeck  
> Date: 1/18/18  6:59 AM  (GMT-08:00) To: dev@kafka.apache.org Subject: 
> Re: [VOTE] KIP-222 - Add "describe consumer group" to KafkaAdminClient 
> Thanks for the KIP
> 
> +1
> 
> Bill
> 
> On Thu, Jan 18, 2018 at 4:24 AM, Rajini Sivaram 
> wrote:
> 
> > +1 (binding)
> >
> > Thanks for the KIP, Jorge.
> >
> > Regards,
> >
> > Rajini
> >
> > On Wed, Jan 17, 2018 at 9:04 PM, Guozhang Wang  wrote:
> >
> > > +1 (binding). Thanks Jorge.
> > >
> > >
> > > Guozhang
> > >
> > > On Wed, Jan 17, 2018 at 11:29 AM, Gwen Shapira 
> > wrote:
> > >
> > > > Hey, since there were no additional comments in the discussion, I'd
> > like
> > > to
> > > > resume the voting.
> > > >
> > > > +1 (binding)
> > > >
> > > > On Fri, Nov 17, 2017 at 9:15 AM Guozhang Wang 
> > > wrote:
> > > >
> > > > > Hello Jorge,
> > > > >
> > > > > I left some comments on the discuss thread. The wiki page itself
> > looks
> > > > good
> > > > > overall.
> > > > >
> > > > >
> > > > > Guozhang
> > > > >
> > > > > On Tue, Nov 14, 2017 at 10:02 AM, Jorge Esteban Quilcate Otoya <
> > > > > quilcate.jo...@gmail.com> wrote:
> > > > >
> > > > > > Added.
> > > > > >
> > > > > > El mar., 14 nov. 2017 a las 19:00, Ted Yu ()
> > > > > > escribió:
> > > > > >
> > > > > > > Please fill in JIRA number in Status section.
> > > > > > >
> > > > > > > On Tue, Nov 14, 2017 at 9:57 AM, Jorge Esteban Quilcate Otoya <
> > > > > > > quilcate.jo...@gmail.com> wrote:
> > > > > > >
> > > > > > > > JIRA issue title updated.
> > > > > > > >
> > > > > > > > El mar., 14 nov. 2017 a las 18:45, Ted Yu (<
> > yuzhih...@gmail.com
> > > >)
> > > > > > > > escribió:
> > > > > > > >
> > > > > > > > > Can you fill in JIRA number (KAFKA-6058
> > > > > > > > > ) ?
> > > > > > > > >
> > > > > > > > > If one JIRA is used for the two additions, consider updating
> > > the
> > > > > JIRA
> > > > > > > > > title.
> > > > > > > > >
> > > > > > > > > On Tue, Nov 14, 2017 at 9:04 AM, Jorge Esteban Quilcate
> > Otoya <
> > > > > > > > > quilcate.jo...@gmail.com> wrote:
> > > > > > > > >
> > > > > > > > > > Hi all,
> > > > > > > > > >
> > > > > > > > > > As I didn't see any further discussion around this KIP, I'd
> > > > like
> > > > > to
> > > > > > > > start
> > > > > > > > > > voting.
> > > > > > > > > >
> > > > > > > > > > KIP documentation:
> > > > > > > > > >
> > > > > > > > > https://cwiki.apache.org/confluence/pages/viewpage.
> > > > > > > > action?pageId=74686265
> > > > > > > > > >
> > > > > > > > > > Cheers,
> > > > > > > > > > Jorge.
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > -- Guozhang
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >


add to contributor list

2018-01-18 Thread Brandon Kirchner
Hello!

Could someone please add me to the contributors list in JIRA so I can
assign a JIRA card to myself?

thanks!

Brandon K.


Re: [VOTE] KIP-86: Configurable SASL callback handlers

2018-01-18 Thread tao xiao
+1 (non-binding)

On Fri, 19 Jan 2018 at 00:47 Rajini Sivaram  wrote:

> Hi all,
>
> I would like to restart the vote for KIP-86:
>https://cwiki.apache.org/confluence/display/KAFKA/KIP-86
> %3A+Configurable+SASL+callback+handlers
>
> The KIP makes callback handlers for SASL configurable to make it simpler to
> integrate with a custom authentication database or custom authentication
> servers. This is particularly useful for SASL/PLAIN, where the
> implementation in Kafka based on credentials stored in jaas.conf is not
> suitable for production use. It is also useful for SCRAM in environments
> where ZooKeeper is not secure. The KIP has also been updated to simplify
> the addition of new SASL mechanisms by making the Login class configurable.
>
> The PR for the KIP has been rebased and updated (
> https://github.com/apache/kafka/pull/2022)
>
> Thank you,
>
> Rajini
>
>
>
> On Mon, Dec 11, 2017 at 2:22 PM, Ted Yu  wrote:
>
> > +1
> >  Original message From: Tom Bentley <
> t.j.bent...@gmail.com>
> > Date: 12/11/17  6:06 AM  (GMT-08:00) To: dev@kafka.apache.org Subject:
> > Re: [VOTE] KIP-86: Configurable SASL callback handlers
> > +1 (non-binding)
> >
> > On 5 May 2017 at 11:57, Mickael Maison  wrote:
> >
> > > Thanks for the KIP Rajini, this will significantly simplify providing
> > > custom credential providers
> > > +1 (non binding)
> > >
> > > On Wed, May 3, 2017 at 8:25 AM, Rajini Sivaram <
> rajinisiva...@gmail.com>
> > > wrote:
> > > > Can we have some more reviews or votes for this KIP to include in
> > > 0.11.0.0?
> > > > It is not a breaking change and the code is ready for integration, so
> > it
> > > > will be good to get it in if possible.
> > > >
> > > > Ismael/Jun, since you had reviewed the KIP earlier, can you let me
> know
> > > if
> > > > I can do anything more to get your votes?
> > > >
> > > >
> > > > Thank you,
> > > >
> > > > Rajini
> > > >
> > > >
> > > > On Mon, Apr 10, 2017 at 12:18 PM, Edoardo Comar 
> > > wrote:
> > > >
> > > >> +1 (non binding)
> > > >> many thanks Rajini !
> > > >>
> > > >> --
> > > >> Edoardo Comar
> > > >> IBM MessageHub
> > > >> eco...@uk.ibm.com
> > > >> IBM UK Ltd, Hursley Park, SO21 2JN
> > > >>
> > > >> IBM United Kingdom Limited Registered in England and Wales with
> number
> > > >> 741598 Registered office: PO Box 41, North Harbour, Portsmouth,
> Hants.
> > > PO6
> > > >> 3AU
> > > >>
> > > >>
> > > >>
> > > >> From:   Rajini Sivaram 
> > > >> To: dev@kafka.apache.org
> > > >> Date:   06/04/2017 10:53
> > > >> Subject:[VOTE] KIP-86: Configurable SASL callback handlers
> > > >>
> > > >>
> > > >>
> > > >> Hi all,
> > > >>
> > > >> I would like to start the voting process for KIP-86:
> > > >>
> > > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > >> 86%3A+Configurable+SASL+callback+handlers
> > > >>
> > > >>
> > > >> The KIP makes callback handlers for SASL configurable to make it
> > simpler
> > > >> to
> > > >> integrate with custom authentication database or custom
> authentication
> > > >> servers. This is particularly useful for SASL/PLAIN where the
> > > >> implementation in Kafka based on credentials stored in jaas.conf is
> > not
> > > >> suitable for production use. It is also useful for SCRAM in
> > environments
> > > >> where ZooKeeper is not secure.
> > > >>
> > > >> Thank you...
> > > >>
> > > >> Regards,
> > > >>
> > > >> Rajini
> > > >>
> > > >>
> > > >>
> > > >> Unless stated otherwise above:
> > > >> IBM United Kingdom Limited - Registered in England and Wales with
> > number
> > > >> 741598.
> > > >> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire
> PO6
> > > 3AU
> > > >>
> > >
> >
>


Re: [VOTE] KIP-86: Configurable SASL callback handlers

2018-01-18 Thread Rajini Sivaram
Hi all,

I would like to restart the vote for KIP-86:
   https://cwiki.apache.org/confluence/display/KAFKA/KIP-86
%3A+Configurable+SASL+callback+handlers

The KIP makes callback handlers for SASL configurable to make it simpler to
integrate with a custom authentication database or custom authentication
servers. This is particularly useful for SASL/PLAIN, where the
implementation in Kafka based on credentials stored in jaas.conf is not
suitable for production use. It is also useful for SCRAM in environments
where ZooKeeper is not secure. The KIP has also been updated to simplify
the addition of new SASL mechanisms by making the Login class configurable.
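
To make this concrete, a custom server callback handler for SASL/PLAIN could
look roughly like the sketch below. This is a sketch only: interface,
callback, and package names follow the KIP proposal and may differ in the
final implementation, and the credential store is a hypothetical placeholder.

import java.util.List;
import java.util.Map;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.login.AppConfigurationEntry;
import org.apache.kafka.common.security.auth.AuthenticateCallbackHandler;
import org.apache.kafka.common.security.plain.PlainAuthenticateCallback;

public class ExternalPlainCallbackHandler implements AuthenticateCallbackHandler {

    // Hypothetical stand-in for an external credential store.
    interface PasswordStore {
        boolean matches(String username, char[] password);
    }

    private PasswordStore store;

    @Override
    public void configure(Map<String, ?> configs, String saslMechanism,
                          List<AppConfigurationEntry> jaasConfigEntries) {
        this.store = createStore(configs);
    }

    @Override
    public void handle(Callback[] callbacks) {
        String username = null;
        for (Callback callback : callbacks) {
            if (callback instanceof NameCallback) {
                username = ((NameCallback) callback).getDefaultName();
            } else if (callback instanceof PlainAuthenticateCallback) {
                PlainAuthenticateCallback plain = (PlainAuthenticateCallback) callback;
                // Accept or reject the connection based on the external store.
                plain.authenticated(store.matches(username, plain.password()));
            }
        }
    }

    @Override
    public void close() { }

    private PasswordStore createStore(Map<String, ?> configs) {
        // Hypothetical: wire up the real credential store here.
        throw new UnsupportedOperationException("plug in a real store");
    }
}

The handler would then be enabled with the server-side configuration proposed
in the KIP, e.g. sasl.server.callback.handler.class set to the class above
(the exact property name may differ in the final version).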

The PR for the KIP has been rebased and updated (
https://github.com/apache/kafka/pull/2022)

Thank you,

Rajini



On Mon, Dec 11, 2017 at 2:22 PM, Ted Yu  wrote:

> +1
>  Original message From: Tom Bentley 
> Date: 12/11/17  6:06 AM  (GMT-08:00) To: dev@kafka.apache.org Subject:
> Re: [VOTE] KIP-86: Configurable SASL callback handlers
> +1 (non-binding)
>
> On 5 May 2017 at 11:57, Mickael Maison  wrote:
>
> > Thanks for the KIP Rajini, this will significantly simplify providing
> > custom credential providers
> > +1 (non binding)
> >
> > On Wed, May 3, 2017 at 8:25 AM, Rajini Sivaram 
> > wrote:
> > > Can we have some more reviews or votes for this KIP to include in
> > 0.11.0.0?
> > > It is not a breaking change and the code is ready for integration, so
> it
> > > will be good to get it in if possible.
> > >
> > > Ismael/Jun, since you had reviewed the KIP earlier, can you let me know
> > if
> > > I can do anything more to get your votes?
> > >
> > >
> > > Thank you,
> > >
> > > Rajini
> > >
> > >
> > > On Mon, Apr 10, 2017 at 12:18 PM, Edoardo Comar 
> > wrote:
> > >
> > >> +1 (non binding)
> > >> many thanks Rajini !
> > >>
> > >> --
> > >> Edoardo Comar
> > >> IBM MessageHub
> > >> eco...@uk.ibm.com
> > >> IBM UK Ltd, Hursley Park, SO21 2JN
> > >>
> > >> IBM United Kingdom Limited Registered in England and Wales with number
> > >> 741598 Registered office: PO Box 41, North Harbour, Portsmouth, Hants.
> > PO6
> > >> 3AU
> > >>
> > >>
> > >>
> > >> From:   Rajini Sivaram 
> > >> To: dev@kafka.apache.org
> > >> Date:   06/04/2017 10:53
> > >> Subject:[VOTE] KIP-86: Configurable SASL callback handlers
> > >>
> > >>
> > >>
> > >> Hi all,
> > >>
> > >> I would like to start the voting process for KIP-86:
> > >>
> > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > >> 86%3A+Configurable+SASL+callback+handlers
> > >>
> > >>
> > >> The KIP makes callback handlers for SASL configurable to make it
> simpler
> > >> to
> > >> integrate with custom authentication database or custom authentication
> > >> servers. This is particularly useful for SASL/PLAIN where the
> > >> implementation in Kafka based on credentials stored in jaas.conf is
> not
> > >> suitable for production use. It is also useful for SCRAM in
> environments
> > >> where ZooKeeper is not secure.
> > >>
> > >> Thank you...
> > >>
> > >> Regards,
> > >>
> > >> Rajini
> > >>
> > >>
> > >>
> > >> Unless stated otherwise above:
> > >> IBM United Kingdom Limited - Registered in England and Wales with
> number
> > >> 741598.
> > >> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6
> > 3AU
> > >>
> >
>


Re: [ANNOUNCE] New Kafka PMC Member: Rajini Sivaram

2018-01-18 Thread James Cheng
Congrats Rajini!

-James

Sent from my iPhone

> On Jan 17, 2018, at 10:48 AM, Gwen Shapira  wrote:
> 
> Dear Kafka Developers, Users and Fans,
> 
> Rajini Sivaram became a committer in April 2017.  Since then, she remained
> active in the community and contributed major patches, reviews and KIP
> discussions. I am glad to announce that Rajini is now a member of the
> Apache Kafka PMC.
> 
> Congratulations, Rajini and looking forward to your future contributions.
> 
> Gwen, on behalf of Apache Kafka PMC


Re: [VOTE] KIP-247: Add public test utils for Kafka Streams

2018-01-18 Thread James Cheng
+1 (non-binding)

-James

Sent from my iPhone

> On Jan 17, 2018, at 6:09 PM, Matthias J. Sax  wrote:
> 
> Hi,
> 
> I would like to start the vote for KIP-247:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-247%3A+Add+public+test+utils+for+Kafka+Streams
> 
> 
> -Matthias
> 
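
For context, here is a minimal usage sketch of the proposed test driver,
using the class and method names from the KIP wiki (they may still change
during review) and a trivial pass-through topology invented for the example:

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.TopologyTestDriver;
import org.apache.kafka.streams.test.ConsumerRecordFactory;
import org.apache.kafka.streams.test.OutputVerifier;
import org.junit.Test;

public class PassThroughTopologyTest {

    @Test
    public void shouldPipeInputToOutput() {
        // Trivial topology used only for this example.
        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("input-topic").to("output-topic");
        Topology topology = builder.build();

        Properties config = new Properties();
        config.put(StreamsConfig.APPLICATION_ID_CONFIG, "test");
        config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234"); // never contacted
        config.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        config.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        TopologyTestDriver driver = new TopologyTestDriver(topology, config);
        try {
            // Pipe a record into the input topic, no broker required.
            ConsumerRecordFactory<String, String> factory = new ConsumerRecordFactory<>(
                    "input-topic", new StringSerializer(), new StringSerializer());
            driver.pipeInput(factory.create("input-topic", "key", "value"));

            // Read and verify what the topology produced.
            ProducerRecord<String, String> output = driver.readOutput(
                    "output-topic", new StringDeserializer(), new StringDeserializer());
            OutputVerifier.compareKeyValue(output, "key", "value");
        } finally {
            driver.close();
        }
    }
}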


Re: [VOTE] KIP-247: Add public test utils for Kafka Streams

2018-01-18 Thread Bill Bejeck
Thanks for the KIP.

+1

-Bill

On Wed, Jan 17, 2018 at 9:09 PM, Matthias J. Sax 
wrote:

> Hi,
>
> I would like to start the vote for KIP-247:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 247%3A+Add+public+test+utils+for+Kafka+Streams
>
>
> -Matthias
>
>


Re: [VOTE] KIP-247: Add public test utils for Kafka Streams

2018-01-18 Thread Damian Guy
+1

On Thu, 18 Jan 2018 at 15:14 Bill Bejeck  wrote:

> Thanks for the KIP.
>
> +1
>
> -Bill
>
> On Wed, Jan 17, 2018 at 9:09 PM, Matthias J. Sax 
> wrote:
>
> > Hi,
> >
> > I would like to start the vote for KIP-247:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 247%3A+Add+public+test+utils+for+Kafka+Streams
> >
> >
> > -Matthias
> >
> >
>


Re: [VOTE] KIP-222 - Add "describe consumer group" to KafkaAdminClient

2018-01-18 Thread Ted Yu
+1
 Original message From: Bill Bejeck  Date: 
1/18/18  6:59 AM  (GMT-08:00) To: dev@kafka.apache.org Subject: Re: [VOTE] 
KIP-222 - Add "describe consumer group" to KafkaAdminClient 
Thanks for the KIP

+1

Bill

On Thu, Jan 18, 2018 at 4:24 AM, Rajini Sivaram 
wrote:

> +1 (binding)
>
> Thanks for the KIP, Jorge.
>
> Regards,
>
> Rajini
>
> On Wed, Jan 17, 2018 at 9:04 PM, Guozhang Wang  wrote:
>
> > +1 (binding). Thanks Jorge.
> >
> >
> > Guozhang
> >
> > On Wed, Jan 17, 2018 at 11:29 AM, Gwen Shapira 
> wrote:
> >
> > > Hey, since there were no additional comments in the discussion, I'd
> like
> > to
> > > resume the voting.
> > >
> > > +1 (binding)
> > >
> > > On Fri, Nov 17, 2017 at 9:15 AM Guozhang Wang 
> > wrote:
> > >
> > > > Hello Jorge,
> > > >
> > > > I left some comments on the discuss thread. The wiki page itself
> looks
> > > good
> > > > overall.
> > > >
> > > >
> > > > Guozhang
> > > >
> > > > On Tue, Nov 14, 2017 at 10:02 AM, Jorge Esteban Quilcate Otoya <
> > > > quilcate.jo...@gmail.com> wrote:
> > > >
> > > > > Added.
> > > > >
> > > > > El mar., 14 nov. 2017 a las 19:00, Ted Yu ()
> > > > > escribió:
> > > > >
> > > > > > Please fill in JIRA number in Status section.
> > > > > >
> > > > > > On Tue, Nov 14, 2017 at 9:57 AM, Jorge Esteban Quilcate Otoya <
> > > > > > quilcate.jo...@gmail.com> wrote:
> > > > > >
> > > > > > > JIRA issue title updated.
> > > > > > >
> > > > > > > El mar., 14 nov. 2017 a las 18:45, Ted Yu (<
> yuzhih...@gmail.com
> > >)
> > > > > > > escribió:
> > > > > > >
> > > > > > > > Can you fill in JIRA number (KAFKA-6058
> > > > > > > > ) ?
> > > > > > > >
> > > > > > > > If one JIRA is used for the two additions, consider updating
> > the
> > > > JIRA
> > > > > > > > title.
> > > > > > > >
> > > > > > > > On Tue, Nov 14, 2017 at 9:04 AM, Jorge Esteban Quilcate
> Otoya <
> > > > > > > > quilcate.jo...@gmail.com> wrote:
> > > > > > > >
> > > > > > > > > Hi all,
> > > > > > > > >
> > > > > > > > > As I didn't see any further discussion around this KIP, I'd
> > > like
> > > > to
> > > > > > > start
> > > > > > > > > voting.
> > > > > > > > >
> > > > > > > > > KIP documentation:
> > > > > > > > >
> > > > > > > > https://cwiki.apache.org/confluence/pages/viewpage.
> > > > > > > action?pageId=74686265
> > > > > > > > >
> > > > > > > > > Cheers,
> > > > > > > > > Jorge.
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > > >
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>


Re: [VOTE] KIP-222 - Add "describe consumer group" to KafkaAdminClient

2018-01-18 Thread Bill Bejeck
Thanks for the KIP

+1

Bill

On Thu, Jan 18, 2018 at 4:24 AM, Rajini Sivaram 
wrote:

> +1 (binding)
>
> Thanks for the KIP, Jorge.
>
> Regards,
>
> Rajini
>
> On Wed, Jan 17, 2018 at 9:04 PM, Guozhang Wang  wrote:
>
> > +1 (binding). Thanks Jorge.
> >
> >
> > Guozhang
> >
> > On Wed, Jan 17, 2018 at 11:29 AM, Gwen Shapira 
> wrote:
> >
> > > Hey, since there were no additional comments in the discussion, I'd
> like
> > to
> > > resume the voting.
> > >
> > > +1 (binding)
> > >
> > > On Fri, Nov 17, 2017 at 9:15 AM Guozhang Wang 
> > wrote:
> > >
> > > > Hello Jorge,
> > > >
> > > > I left some comments on the discuss thread. The wiki page itself
> looks
> > > good
> > > > overall.
> > > >
> > > >
> > > > Guozhang
> > > >
> > > > On Tue, Nov 14, 2017 at 10:02 AM, Jorge Esteban Quilcate Otoya <
> > > > quilcate.jo...@gmail.com> wrote:
> > > >
> > > > > Added.
> > > > >
> > > > > El mar., 14 nov. 2017 a las 19:00, Ted Yu ()
> > > > > escribió:
> > > > >
> > > > > > Please fill in JIRA number in Status section.
> > > > > >
> > > > > > On Tue, Nov 14, 2017 at 9:57 AM, Jorge Esteban Quilcate Otoya <
> > > > > > quilcate.jo...@gmail.com> wrote:
> > > > > >
> > > > > > > JIRA issue title updated.
> > > > > > >
> > > > > > > El mar., 14 nov. 2017 a las 18:45, Ted Yu (<
> yuzhih...@gmail.com
> > >)
> > > > > > > escribió:
> > > > > > >
> > > > > > > > Can you fill in JIRA number (KAFKA-6058
> > > > > > > > ) ?
> > > > > > > >
> > > > > > > > If one JIRA is used for the two additions, consider updating
> > the
> > > > JIRA
> > > > > > > > title.
> > > > > > > >
> > > > > > > > On Tue, Nov 14, 2017 at 9:04 AM, Jorge Esteban Quilcate
> Otoya <
> > > > > > > > quilcate.jo...@gmail.com> wrote:
> > > > > > > >
> > > > > > > > > Hi all,
> > > > > > > > >
> > > > > > > > > As I didn't see any further discussion around this KIP, I'd
> > > like
> > > > to
> > > > > > > start
> > > > > > > > > voting.
> > > > > > > > >
> > > > > > > > > KIP documentation:
> > > > > > > > >
> > > > > > > > https://cwiki.apache.org/confluence/pages/viewpage.
> > > > > > > action?pageId=74686265
> > > > > > > > >
> > > > > > > > > Cheers,
> > > > > > > > > Jorge.
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > > >
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>


Re: [DISCUSS] KIP-218: Make KafkaFuture.Function java 8 lambda compatible

2018-01-18 Thread Steven Aerts
Ok, will cook something up by the end of the weekend.

Op wo 17 jan. 2018 om 18:36 schreef Xavier Léauté :

> Hi Steven, do you think you'll get a chance to address the points Ismael
> made? It'd be great to get this change into 1.1.
>
> Thanks!
> Xavier
>
> On Tue, Dec 19, 2017 at 12:20 PM Ismael Juma  wrote:
>
> > Hi Steven,
> >
> > As a general rule, we don't freeze KIPs after the vote passes. It's
> > reasonably common for things to come up during code review, for example.
> If
> > we think of improvements, we shouldn't refrain from doing them because of
> > of the vote. If we do minor changes after the KIP passes, we usually
> send a
> > follow-up to the vote thread and assume it's all good if no objections
> are
> > raised. Only significant changes require a vote from scratch (this tends
> to
> > be rare). More inline.
> >
> > On Tue, Dec 19, 2017 at 7:58 PM, Steven Aerts 
> > wrote:
> > >
> > > > 1. The KIP seems to rely on the pull request for some of the details
> of
> > > the
> > > > proposal. Generally, the KIP should stand on its own.
> > >
> > > Looking back at what I wrote in the KIP, I agree that its style is too
> > > descriptive
> > > and relies too much on the content of the PR.
> > > I will keep it in mind, and try to do better next time.  But as the
> > > voting is over I
> > > assume I better not alter it any more.
> > >
> >
> > I think we should fix this. At a minimum, the public interfaces section
> > should include the signature of interfaces and methods being added (as I
> > said before).
> >
> > > 2. Do we really need to deprecate `Function`? This will add build noise
> > to
> > > > any library that builds with 1.1+ but also wants to support 0.11 and
> > 1.0.
> > >
> > > No we don't.  It is all a matter of how fast we can and want an api
> > tagged
> > > with
> > > @Evolving, to evolve.
> > > As we know, that it will evolve again when KIP-118 (dropping java 7) is
> > > implemented.
> > >
> >
> > For widely used APIs like the AdminClient, it's better to be
> conservative.
> > We can look at deprecations once we drop Java 7 so that we do them all at
> > once.
> >
> >
> > > > 3. `FunctionInterface` is a bit of a clunky name. Due to lambdas,
> users
> > > > don't have to type the name themselves, so maybe it's fine as it is.
> An
> > > > alternative would be `BaseFunction` or something like that.
> > >
> > > I share a little bit your feeling, as the best name for me would just
> be
> > > `Function`.  But that one is taken.
> > >
> >
> > Yeah, it's a case of choosing the second best option.
> >
> > Ismael
> >
>
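
To make the shape of the change concrete, the discussion boils down to
something like the following self-contained sketch ("BaseFunction" is only
one of the candidate names, not a final decision):

public class LambdaCompatSketch {

    // Candidate functional interface (name still under discussion).
    @FunctionalInterface
    interface BaseFunction<A, B> {
        B apply(A a);
    }

    // The existing abstract class, retrofitted to implement the new
    // interface so that current subclasses keep compiling.
    abstract static class Function<A, B> implements BaseFunction<A, B> { }

    // A method that accepts the interface takes both lambdas and the
    // old anonymous subclasses.
    static <A, B> B invoke(BaseFunction<A, B> f, A arg) {
        return f.apply(arg);
    }

    public static void main(String[] args) {
        // Java 8 lambda -- not possible when the parameter type is an
        // abstract class:
        System.out.println(invoke(s -> s.length(), "kafka")); // prints 5

        // Pre-Java-8 style still works unchanged:
        System.out.println(invoke(new Function<String, Integer>() {
            public Integer apply(String s) {
                return s.length();
            }
        }, "kafka")); // prints 5
    }
}

Applied to KafkaFuture, methods like thenApply would accept the interface
type, which is what makes call sites lambda-friendly.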


Re: [DISCUSS] KIP-248 Create New ConfigCommand That Uses The New AdminClient

2018-01-18 Thread Viktor Somogyi
3. Ok, I'll remove this from the KIP for now and perhaps add a future
considerations section with the idea.

9. Ok, I can do that.

On Thu, Jan 18, 2018 at 11:18 AM, Rajini Sivaram 
wrote:

> Hi Viktor,
>
> 3. Agree that it would be better to use something like ConfigEntityList
> rather than ListQuotas. But I would leave it out for now since we are so
> close to KIP freeze. We can introduce it later if required. Earlier, I was
> thinking that if we just wanted to get a list of entities without their
> actual quota values, you could have an option in DescribeQuotas to return
> the entities without the quota values. But actually that doesn't make sense
> since you will need to read all the ZK entries and find the ones with
> quotas in the first place. So let's just leave DescribeQuotas as-is.
>
> 7. Yes, with client-id quotas at the lowest level. The full list in the
> order of precedence is here:
> https://kafka.apache.org/documentation/#design_quotasconfig
>
> 9. One more suggestion. Since DescribeQuotas and AlterQuotas are specific
> to quotas, we could use *quota* instead of *config* in the protocol (and
> AdminClient API). Instead of *config_name*, we could use a *quota_type*
> enum (we have three types). And *config_value *could be *quota_value *that
> is a double rather than a string.
>
> On Thu, Jan 18, 2018 at 9:38 AM, Viktor Somogyi 
> wrote:
>
> > Hi Rajini,
> >
> > 1. Yes, --adminclient.config it is. I missed that, sorry :)
> >
> > 3. Indeed SCRAM in this case can raise complications. Since we'd like to
> > handle altering SCRAM credentials via AlterConfigs, I think we should use
> > DescribeConfigs to describe them. That is, to describe all the entities
> of
> > a given type we might need to introduce some kind of generic way of
> getting
> > metadata of config entities instead of ListQuotas which is very specific.
> > Therefore instead of ListQuotas we could have a ConfigMetadata (or
> > ConfigEntityList) protocol tailored to the needs of the admin client.
> This
> > protocol would accept a list of config entities for the request and
> produce
> > a list of entities as a response. For instance requesting (type:USER
> > name:user1, type:CLIENT name:) resources would return all the clients of
> > user1. This I think would better fit the use case and potentially future
> > use case too (like 'group' that you mentioned).
> > What do you think? Should we introduce a protocol like this or shall we
> > solve the problem without it?
> > Also, in a previous email you mentioned that we could use options. Could
> > you please elaborate on this idea?
> >
> > 7. Yea, that is a good idea. How are quotas applied? Do they first fall
> > back to the default on the same level and, if there is no default, then
> > apply the parent's config?
> >
> > Regards,
> > Viktor
> >
> >
> > On Wed, Jan 17, 2018 at 12:56 PM, Rajini Sivaram <
> rajinisiva...@gmail.com>
> > wrote:
> >
> > > Hi Viktor,
> > >
> > > Thank you for the responses.
> > >
> > > 1. ConsoleProducer uses *--producer.config  --producer-property
> > > key=value*, ConsoleConsumer uses* --consumer.config 
> > > --consumer-property key=value*, so perhaps we should use
> > > *--adminclient.config
> > > *rather than *--config.properties*?
> > >
> > > 3. The one difference is that with ListGroups, ListTopics etc. you are
> > > listing the entities (groups/topics). With ConfigCommand, you can list
> > > entities and that makes sense. But with ListQuotas, quota is an
> > attribute,
> > > the entity is user/clientid/(user, clientId). This is significant since
> > we
> > > can store other attributes for those entities. For instance, we store
> > SCRAM
> > > credentials along with quotas for 'user'. So to ListQuotas for user,
> you
> > > need to actually get the entries and check if quotas are defined or
> just
> > > credentials.
> > >
> > > 7.
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 226+-+Dynamic+Broker+Configuration
> > > is
> > > replacing* is_default *flag in the config entry for DescribeConfigs
> with
> > a*
> > > config_source* enum which indicates where the config came from. Perhaps
> > we
> > > could do something similar here?
> > >
> > > 8. Yes, that is correct.
> > >
> > > Regards,
> > >
> > > Rajini
> > >
> > > On Tue, Jan 16, 2018 at 2:59 PM, Viktor Somogyi <
> viktorsomo...@gmail.com
> > >
> > > wrote:
> > >
> > > > Hi Rajini,
> > > >
> > > > 1 and 2: corrected it in my code. So there will be 3 properties in
> this
> > > > group: --bootstrap-server, --config.properties and
> > --adminclient-property
> > > > (following the conventions established elsewhere, like the
> > > > console-producer).
> > > >
> > > > 3: Let me explain the reason for ListQuotas. In the current version
> of
> > > > kafka-configs you can do this:
> > > > bin/kafka-configs.sh --zookeeper localhost:2181 --describe
> > --entity-type
> > > > users
> > > And this will return you all the configs for users under /config/users
> > > znode.

Re: [DISCUSS] KIP-248 Create New ConfigCommand That Uses The New AdminClient

2018-01-18 Thread Rajini Sivaram
Hi Viktor,

3. Agree that it would be better to use something like ConfigEntityList
rather than ListQuotas. But I would leave it out for now since we are so
close to KIP freeze. We can introduce it later if required. Earlier, I was
thinking that if we just wanted to get a list of entities without their
actual quota values, you could have an option in DescribeQuotas to return
the entities without the quota values. But actually that doesn't make sense
since you will need to read all the ZK entries and find the ones with
quotas in the first place. So let's just leave DescribeQuotas as-is.

7. Yes, with client-id quotas at the lowest level. The full list in the
order of precedence is here:
https://kafka.apache.org/documentation/#design_quotasconfig
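
For reference, the precedence order documented on that page, from most
specific to least specific, is:

  1. /config/users/<user>/clients/<client-id>
  2. /config/users/<user>/clients/<default>
  3. /config/users/<user>
  4. /config/users/<default>/clients/<client-id>
  5. /config/users/<default>/clients/<default>
  6. /config/users/<default>
  7. /config/clients/<client-id>
  8. /config/clients/<default>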

9. One more suggestion. Since DescribeQuotas and AlterQuotas are specific
to quotas, we could use *quota* instead of *config* in the protocol (and
AdminClient API). Instead of *config_name*, we could use a *quota_type*
enum (we have three types). And *config_value *could be *quota_value *that
is a double rather than a string.
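
A rough sketch of what that typed representation could look like
(illustrative only; nothing below is in the KIP yet):

// Illustrative sketch, not final protocol or API code.
enum QuotaType {
    PRODUCER_BYTE_RATE,   // bytes/second
    CONSUMER_BYTE_RATE,   // bytes/second
    REQUEST_PERCENTAGE    // fraction of request-handler time
}

final class Quota {
    private final QuotaType type;
    private final double value;   // a typed double instead of a config String

    Quota(QuotaType type, double value) {
        this.type = type;
        this.value = value;
    }

    QuotaType type() { return type; }
    double value() { return value; }
}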

On Thu, Jan 18, 2018 at 9:38 AM, Viktor Somogyi 
wrote:

> Hi Rajini,
>
> 1. Yes, --adminclient.config it is. I missed that, sorry :)
>
> 3. Indeed SCRAM in this case can raise complications. Since we'd like to
> handle altering SCRAM credentials via AlterConfigs, I think we should use
> DescribeConfigs to describe them. That is, to describe all the entities of
> a given type we might need to introduce some kind of generic way of getting
> metadata of config entities instead of ListQuotas which is very specific.
> Therefore instead of ListQuotas we could have a ConfigMetadata (or
> ConfigEntityList) protocol tailored to the needs of the admin client. This
> protocol would accept a list of config entities for the request and produce
> a list of entities as a response. For instance requesting (type:USER
> name:user1, type:CLIENT name:) resources would return all the clients of
> user1. This I think would better fit the use case and potentially future
> use case too (like 'group' that you mentioned).
> What do you think? Should we introduce a protocol like this or shall we
> solve the problem without it?
> Also, in a previous email you mentioned that we could use options. Could
> you please elaborate on this idea?
>
> 7. Yea, that is a good idea. How are quotas applied? Do they first fall
> back to the default on the same level and, if there is no default, then
> apply the parent's config?
>
> Regards,
> Viktor
>
>
> On Wed, Jan 17, 2018 at 12:56 PM, Rajini Sivaram 
> wrote:
>
> > Hi Viktor,
> >
> > Thank you for the responses.
> >
> > 1. ConsoleProducer uses *--producer.config  --producer-property
> > key=value*, ConsoleConsumer uses* --consumer.config 
> > --consumer-property key=value*, so perhaps we should use
> > *--adminclient.config
> > *rather than *--config.properties*?
> >
> > 3. The one difference is that with ListGroups, ListTopics etc. you are
> > listing the entities (groups/topics). With ConfigCommand, you can list
> > entities and that makes sense. But with ListQuotas, quota is an
> attribute,
> > the entity is user/clientid/(user, clientId). This is significant since
> we
> > can store other attributes for those entities. For instance, we store
> SCRAM
> > credentials along with quotas for 'user'. So to ListQuotas for user, you
> > need to actually get the entries and check if quotas are defined or just
> > credentials.
> >
> > 7.
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 226+-+Dynamic+Broker+Configuration
> > is
> > replacing* is_default *flag in the config entry for DescribeConfigs with
> a*
> > config_source* enum which indicates where the config came from. Perhaps
> we
> > could do something similar here?
> >
> > 8. Yes, that is correct.
> >
> > Regards,
> >
> > Rajini
> >
> > On Tue, Jan 16, 2018 at 2:59 PM, Viktor Somogyi  >
> > wrote:
> >
> > > Hi Rajini,
> > >
> > > 1 and 2: corrected it in my code. So there will be 3 properties in this
> > > group: --bootstrap-server, --config.properties and
> --adminclient-property
> > > (following the conventions established elsewhere, like the
> > > console-producer).
> > >
> > > 3: Let me explain the reason for ListQuotas. In the current version of
> > > kafka-configs you can do this:
> > > bin/kafka-configs.sh --zookeeper localhost:2181 --describe
> --entity-type
> > > users
> > > And this will return you all the configs for users under /config/users
> > > znode. In that command you have direct access to zookeeper, so you can
> > > instantly do an iteration through the znode. Therefore I looked at
> other
> > > protocols (ListAcls ListGroups, ListTopics) and thought it would be
> > aligned
> > > with those if I separated off listing. This has the pros of being able
> to
> > > return a list of entities or fine tuning permissions (you might have
> > users
> > > who don't have to know other users' quota settings).

One type of event per topic?

2018-01-18 Thread Maria Pilar
Hi everyone,

I'm working on the topic configuration for an integration between an API
and a data platform system. We have created a topic for each entity that
needs to be integrated into the data warehouse.


My question, and I hope you can help me, is this: each entity will have
different types of events, for example "create entity" or "update entity",
and I'm not sure whether to create one topic per event type, or to share
different event types within one topic. I think the shared option adds
more complexity.

Thanks a lot
Maria


Re: [ANNOUNCE] New Kafka PMC Member: Rajini Sivaram

2018-01-18 Thread Viktor Somogyi
Congratulations! :)

On Thu, Jan 18, 2018 at 10:38 AM, Rajini Sivaram 
wrote:

> Thanks everyone!
>
> Regards,
>
> Rajini
>
> On Thu, Jan 18, 2018 at 8:53 AM, Damian Guy  wrote:
>
> > Congratulations Rajini!
> >
> > On Thu, 18 Jan 2018 at 00:57 Hu Xi  wrote:
> >
> > > Congratulations, Rajini Sivaram.  Very well deserved!
> > >
> > >
> > > 
> > > From: Konstantine Karantasis
> > > Sent: January 18, 2018 6:23
> > > To: dev@kafka.apache.org
> > > Cc: us...@kafka.apache.org
> > > Subject: Re: [ANNOUNCE] New Kafka PMC Member: Rajini Sivaram
> > >
> > > Congrats Rajini!
> > >
> > > -Konstantine
> > >
> > > On Wed, Jan 17, 2018 at 2:18 PM, Becket Qin 
> > wrote:
> > >
> > > > Congratulations, Rajini!
> > > >
> > > > On Wed, Jan 17, 2018 at 1:52 PM, Ismael Juma 
> > wrote:
> > > >
> > > > > Congratulations Rajini!
> > > > >
> > > > > On 17 Jan 2018 10:49 am, "Gwen Shapira"  wrote:
> > > > >
> > > > > Dear Kafka Developers, Users and Fans,
> > > > >
> > > > > Rajini Sivaram became a committer in April 2017.  Since then, she
> > > > remained
> > > > > active in the community and contributed major patches, reviews and
> > KIP
> > > > > discussions. I am glad to announce that Rajini is now a member of
> the
> > > > > Apache Kafka PMC.
> > > > >
> > > > > Congratulations, Rajini and looking forward to your future
> > > contributions.
> > > > >
> > > > > Gwen, on behalf of Apache Kafka PMC
> > > > >
> > > >
> > >
> >
>


Re: [ANNOUNCE] New Kafka PMC Member: Rajini Sivaram

2018-01-18 Thread Rajini Sivaram
Thanks everyone!

Regards,

Rajini

On Thu, Jan 18, 2018 at 8:53 AM, Damian Guy  wrote:

> Congratulations Rajini!
>
> On Thu, 18 Jan 2018 at 00:57 Hu Xi  wrote:
>
> > Congratulations, Rajini Sivaram.  Very well deserved!
> >
> >
> > 
> > From: Konstantine Karantasis
> > Sent: January 18, 2018 6:23
> > To: dev@kafka.apache.org
> > Cc: us...@kafka.apache.org
> > Subject: Re: [ANNOUNCE] New Kafka PMC Member: Rajini Sivaram
> >
> > Congrats Rajini!
> >
> > -Konstantine
> >
> > On Wed, Jan 17, 2018 at 2:18 PM, Becket Qin 
> wrote:
> >
> > > Congratulations, Rajini!
> > >
> > > On Wed, Jan 17, 2018 at 1:52 PM, Ismael Juma 
> wrote:
> > >
> > > > Congratulations Rajini!
> > > >
> > > > On 17 Jan 2018 10:49 am, "Gwen Shapira"  wrote:
> > > >
> > > > Dear Kafka Developers, Users and Fans,
> > > >
> > > > Rajini Sivaram became a committer in April 2017.  Since then, she
> > > remained
> > > > active in the community and contributed major patches, reviews and
> KIP
> > > > discussions. I am glad to announce that Rajini is now a member of the
> > > > Apache Kafka PMC.
> > > >
> > > > Congratulations, Rajini and looking forward to your future
> > contributions.
> > > >
> > > > Gwen, on behalf of Apache Kafka PMC
> > > >
> > >
> >
>


Re: [DISCUSS] KIP-248 Create New ConfigCommand That Uses The New AdminClient

2018-01-18 Thread Viktor Somogyi
Hi Rajini,

1. Yes, --adminclient.config it is. I missed that, sorry :)

3. Indeed SCRAM in this case can raise complications. Since we'd like to
handle altering SCRAM credentials via AlterConfigs, I think we should use
DescribeConfigs to describe them. That is, to describe all the entities of
a given type we might need to introduce some kind of generic way of getting
metadata of config entities instead of ListQuotas which is very specific.
Therefore instead of ListQuotas we could have a ConfigMetadata (or
ConfigEntityList) protocol tailored to the needs of the admin client. This
protocol would accept a list of config entities for the request and produce
a list of entities as a response. For instance requesting (type:USER
name:user1, type:CLIENT name:) resources would return all the clients of
user1. This I think would better fit the use case and potentially future
use case too (like 'group' that you mentioned).
What do you think? Should we introduce a protocol like this or shall we
solve the problem without it?
Also, in a previous email you mentioned that we could use options. Could
you please elaborate on this idea?
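
To make the idea a bit more concrete, here is a placeholder sketch of the
entity type such a protocol could carry (all names invented for
illustration; none of this is in the KIP yet):

public final class ConfigEntity {
    public enum Type { USER, CLIENT }

    private final Type type;
    private final String name; // empty string = "all entities of this type"

    public ConfigEntity(Type type, String name) {
        this.type = type;
        this.name = name;
    }

    public Type type() { return type; }
    public String name() { return name; }
}

// Request:  [(USER, "user1"), (CLIENT, "")]
// Response: [(USER, "user1"), (CLIENT, "client-1")],
//           [(USER, "user1"), (CLIENT, "client-2")], ...
// i.e. the concrete entities matching the filter, which could then be
// fed into a bulk DescribeQuotas/DescribeConfigs call.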

7. Yea, that is a good idea. How are quotas applied? Do they first fall
back to the default on the same level and, if there is no default, then
apply the parent's config?

Regards,
Viktor


On Wed, Jan 17, 2018 at 12:56 PM, Rajini Sivaram 
wrote:

> Hi Viktor,
>
> Thank you for the responses.
>
> 1. ConsoleProducer uses *--producer.config  --producer-property
> key=value*, ConsoleConsumer uses* --consumer.config 
> --consumer-property key=value*, so perhaps we should use
> *--adminclient.config
> *rather than *--config.properties*?
>
> 3. The one difference is that with ListGroups, ListTopics etc. you are
> listing the entities (groups/topics). With ConfigCommand, you can list
> entities and that makes sense. But with ListQuotas, quota is an attribute,
> the entity is user/clientid/(user, clientId). This is significant since we
> can store other attributes for those entities. For instance, we store SCRAM
> credentials along with quotas for 'user'. So to ListQuotas for user, you
> need to actually get the entries and check if quotas are defined or just
> credentials.
>
> 7.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 226+-+Dynamic+Broker+Configuration
> is
> replacing* is_default *flag in the config entry for DescribeConfigs with a*
> config_source* enum which indicates where the config came from. Perhaps we
> could do something similar here?
>
> 8. Yes, that is correct.
>
> Regards,
>
> Rajini
>
> On Tue, Jan 16, 2018 at 2:59 PM, Viktor Somogyi 
> wrote:
>
> > Hi Rajini,
> >
> > 1 and 2: corrected it in my code. So there will be 3 properties in this
> > group: --bootstrap-server, --config.properties and --adminclient-property
> > (following the conventions established elsewhere, like the
> > console-producer).
> >
> > 3: Let me explain the reason for ListQuotas. In the current version of
> > kafka-configs you can do this:
> > bin/kafka-configs.sh --zookeeper localhost:2181 --describe --entity-type
> > users
> > And this will return you all the configs for users under /config/users
> > znode. In that command you have direct access to zookeeper, so you can
> > instantly do an iteration through the znode. Therefore I looked at other
> > protocols (ListAcls ListGroups, ListTopics) and thought it would be
> aligned
> > with those if I separated off listing. This has the pros of being able to
> > return a list of entities or fine tuning permissions (you might have
> users
> > who don't have to know other users' quota settings). Once the list of
> > resources is returned, the user can initiate a bulk describe.
> > Of course the cons of having ListQuotas as a separate protocol is that it
> > might do something too simple for a protocol and actually as you say it
> > might be implemented with DescribeQuotasOptions perhaps by only using an
> > extra flag in the DescribeQuotas protocol (like "describe_all").
> > Do you think it would be better to add an option to "describe all"? Also
> of
> > course the response would be "asymmetric" to the request in this case
> > meaning that I send one resource and might get back more. One of my
> reasons
> > of implementing this "list then describe" way of doing things was to be
> > aligned with DescribeConfigs (as that is similarly symmetric).
> >
> > 4. OK, I think I can do that, it makes sense.
> >
> > 5. Sure, I can do that. In fact I started with this but then reverted as
> I
> > didn't know if it's really planned to have more levels.
> >
> > 6. OK, will remove those.
> >
> > 7. Well, at this point if you specify (userA, client1) it will simply
> > get
> > the znode's data at /config/users/userA/clients/client1 . If there is no
> > such client it returns empty. This now functionally compatible with the
> > current ConfigCommand. However what you're saying I think makes sense,
> > meaning that 

Re: [VOTE] KIP-219 - Improve Quota Communication

2018-01-18 Thread Rajini Sivaram
Hi Becket,

Thanks for the update. Yes, it does address my concern.

+1 (binding)

Regards,

Rajini

On Wed, Jan 17, 2018 at 5:24 PM, Becket Qin  wrote:

> Actually returning an empty fetch response may still be useful to reduce the
> throttle time due to request quota violation because the FetchResponse send
> time will be less. I just updated the KIP.
>
> Rajini, does that address your concern?
>
> On Tue, Jan 16, 2018 at 7:01 PM, Becket Qin  wrote:
>
> > Thanks for the reply, Jun.
> >
> > Currently the byte rate quota does not apply to HeartbeatRequest,
> > JoinGroupRequest/SyncGroupRequest. So the only case those requests are
> > throttled is because the request quota is violated. In that case, the
> > throttle time does not really matter whether we return a full
> FetchResponse
> > or an empty one. Would it be more consistent if we throttle based on the
> > actual throttle time / channel mute time?
> >
> > Thanks,
> >
> > Jiangjie (Becket) Qin
> >
> > On Tue, Jan 16, 2018 at 3:45 PM, Jun Rao  wrote:
> >
> >> Hi, Jiangjie,
> >>
> >> You are right that the heartbeat is done in a channel different from the
> >> fetch request. I think it's still useful to return an empty fetch
> response
> >> when the quota is violated. This way, the throttle time for the
> heartbeat
> >> request won't be large. I agree that we can just mute the channel for
> the
> >> fetch request for the throttle time computed based on a full fetch
> >> response. This probably also partially addresses Rajini's #1 concern.
> >>
> >> Thanks,
> >>
> >> Jun
> >>
> >> On Mon, Jan 15, 2018 at 9:27 PM, Becket Qin 
> wrote:
> >>
> >> > Hi Rajini,
> >> >
> >> > Thanks for the comments. Pleas see the reply inline.
> >> >
> >> > Hi Jun,
> >> >
> >> > Thinking about the consumer rebalance case a bit more, I am not sure
> if
> >> we
> >> > need to worry about the delayed rebalance due to quota violation or
> not.
> >> > The rebalance actually uses a separate channel to coordinator.
> Therefore
> >> > unless the request quota is hit, the rebalance won't be throttled.
> Even
> >> if
> >> > request quota is hit, it seems unlikely to be delayed long. So it
> looks
> >> > that we don't need to unmute the channel earlier than needed. Does
> this
> >> > address your concern?
> >> >
> >> > Thanks,
> >> >
> >> > Jiangjie (Becket) Qin
> >> >
> >> > On Mon, Jan 15, 2018 at 4:22 AM, Rajini Sivaram <
> >> rajinisiva...@gmail.com>
> >> > wrote:
> >> >
> >> > > Hi Becket,
> >> > >
> >> > > A few questions:
> >> > >
> >> > > 1. KIP says: *Although older client implementations (prior to
> >> knowledge
> >> > of
> >> > > this KIP) will immediately send the next request after the broker
> >> > responds
> >> > > without paying attention to the throttle time field, the broker is
> >> > > protected by virtue of muting the channel for time X. i.e., the next
> >> > > request will not be processed until the channel is unmuted. *
> >> > > For fetch requests, the response is sent immediately and the mute
> >> time on
> >> > > the broker based on empty fetch response will often be zero (unlike
> >> the
> >> > > throttle time returned to the client based on non-empty response).
> >> Won't
> >> > > that lead to a tight loop of fetch requests from old clients
> >> > (particularly
> >> > > expensive with SSL)? Wouldn't it be better to retain current
> behaviour
> >> > for
> >> > > old clients? Also, if we change the behaviour for old clients,
> client
> >> > > metrics that track throttle time will report a lot of throttle
> >> unrelated
> >> > to
> >> > >  actual throttle time.
> >> > >
> >> > For consumers, if quota is violated, the throttle time on the broker
> >> will
> >> > not be 0. It is just that the throttle time will not be increasing
> >> because
> >> > the consumer will return an empty response in this case. So there
> should
> >> > not be a tight loop.
> >> >
> >> >
> >> > > 2. KIP says: *The usual idle timeout i.e., connections.max.idle.ms will
> >> > > still be honored during the throttle time X. This makes sure that the
> >> > > brokers will detect client connection closure in a bounded time.*
> >> > > Wouldn't it be better to bound the maximum throttle time to
> >> > > *connections.max.idle.ms*? If we mute for a time greater than
> >> > > *connections.max.idle.ms* and then close a client connection simply
> >> > > because we had muted it on the broker for a longer throttle time, we
> >> > > force a reconnection and read the next request from the new connection
> >> > > straight away. This feels the same as bounding the throttle time to
> >> > > *connections.max.idle.ms*, but with added load on the broker for
> >> > > authenticating another connection (expensive with SSL).
> >> > >
> >> 
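
For anyone skimming the thread, the broker-side behaviour under vote amounts
to roughly the following (pseudo-Java with assumed names throughout; this is
not the actual broker code):

// Sketch only: all identifiers here are assumptions for illustration.
long throttleTimeMs = quotaManager.recordAndGetThrottleTimeMs(clientId, value);

// 1. Send the response immediately, carrying the throttle time so that
//    well-behaved clients can back off on their own.
sendResponse(request, response.withThrottleTimeMs(throttleTimeMs));

// 2. Keep the channel muted for the same period, so that clients that
//    ignore throttle_time_ms (e.g. older clients) still cannot get
//    another request processed before the throttle expires.
if (throttleTimeMs > 0)
    selector.mute(channelId, throttleTimeMs); // unmuted after throttleTimeMs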

Re: [VOTE] KIP-222 - Add "describe consumer group" to KafkaAdminClient

2018-01-18 Thread Rajini Sivaram
+1 (binding)

Thanks for the KIP, Jorge.

Regards,

Rajini

On Wed, Jan 17, 2018 at 9:04 PM, Guozhang Wang  wrote:

> +1 (binding). Thanks Jorge.
>
>
> Guozhang
>
> On Wed, Jan 17, 2018 at 11:29 AM, Gwen Shapira  wrote:
>
> > Hey, since there were no additional comments in the discussion, I'd like
> to
> > resume the voting.
> >
> > +1 (binding)
> >
> > On Fri, Nov 17, 2017 at 9:15 AM Guozhang Wang 
> wrote:
> >
> > > Hello Jorge,
> > >
> > > I left some comments on the discuss thread. The wiki page itself looks
> > good
> > > overall.
> > >
> > >
> > > Guozhang
> > >
> > > On Tue, Nov 14, 2017 at 10:02 AM, Jorge Esteban Quilcate Otoya <
> > > quilcate.jo...@gmail.com> wrote:
> > >
> > > > Added.
> > > >
> > > > El mar., 14 nov. 2017 a las 19:00, Ted Yu ()
> > > > escribió:
> > > >
> > > > > Please fill in JIRA number in Status section.
> > > > >
> > > > > On Tue, Nov 14, 2017 at 9:57 AM, Jorge Esteban Quilcate Otoya <
> > > > > quilcate.jo...@gmail.com> wrote:
> > > > >
> > > > > > JIRA issue title updated.
> > > > > >
> > > > > > El mar., 14 nov. 2017 a las 18:45, Ted Yu ( >)
> > > > > > escribió:
> > > > > >
> > > > > > > Can you fill in JIRA number (KAFKA-6058
> > > > > > > ) ?
> > > > > > >
> > > > > > > If one JIRA is used for the two additions, consider updating
> the
> > > JIRA
> > > > > > > title.
> > > > > > >
> > > > > > > On Tue, Nov 14, 2017 at 9:04 AM, Jorge Esteban Quilcate Otoya <
> > > > > > > quilcate.jo...@gmail.com> wrote:
> > > > > > >
> > > > > > > > Hi all,
> > > > > > > >
> > > > > > > > As I didn't see any further discussion around this KIP, I'd
> > like
> > > to
> > > > > > start
> > > > > > > > voting.
> > > > > > > >
> > > > > > > > KIP documentation:
> > > > > > > >
> > > > > > > https://cwiki.apache.org/confluence/pages/viewpage.
> > > > > > action?pageId=74686265
> > > > > > > >
> > > > > > > > Cheers,
> > > > > > > > Jorge.
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
>
>
>
> --
> -- Guozhang
>


Re: [ANNOUNCE] New Kafka PMC Member: Rajini Sivaram

2018-01-18 Thread Damian Guy
Congratulations Rajini!

On Thu, 18 Jan 2018 at 00:57 Hu Xi  wrote:

> Congratulations, Rajini Sivaram.  Very well deserved!
>
>
> 
> From: Konstantine Karantasis
> Sent: January 18, 2018 6:23
> To: dev@kafka.apache.org
> Cc: us...@kafka.apache.org
> Subject: Re: [ANNOUNCE] New Kafka PMC Member: Rajini Sivaram
>
> Congrats Rajini!
>
> -Konstantine
>
> On Wed, Jan 17, 2018 at 2:18 PM, Becket Qin  wrote:
>
> > Congratulations, Rajini!
> >
> > On Wed, Jan 17, 2018 at 1:52 PM, Ismael Juma  wrote:
> >
> > > Congratulations Rajini!
> > >
> > > On 17 Jan 2018 10:49 am, "Gwen Shapira"  wrote:
> > >
> > > Dear Kafka Developers, Users and Fans,
> > >
> > > Rajini Sivaram became a committer in April 2017.  Since then, she
> > remained
> > > active in the community and contributed major patches, reviews and KIP
> > > discussions. I am glad to announce that Rajini is now a member of the
> > > Apache Kafka PMC.
> > >
> > > Congratulations, Rajini and looking forward to your future
> contributions.
> > >
> > > Gwen, on behalf of Apache Kafka PMC
> > >
> >
>


Re: 1.1 KIPs

2018-01-18 Thread Damian Guy
Hi Xavier,
I'll add it to the plan.

Thanks,
Damian

On Tue, 16 Jan 2018 at 19:04 Xavier Léauté  wrote:

> Hi Damian, I believe the list should also include KAFKA-5886 (KIP-91) which
> was voted for 1.0 but wasn't ready to be merged in time.
>
> On Tue, Jan 16, 2018 at 5:13 AM Damian Guy  wrote:
>
> > Hi,
> >
> > This is a reminder that we have one week left until the KIP deadline of
> Jan
> > 23. There are still some KIPs that are under discussion and/or being
> voted
> > on. Please keep in mind that the voting needs to be complete before the
> > deadline for the KIP to be added to the release.
> >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75957546
> >
> > Thanks,
> > Damian
> >
>