Dead link in Kafka Streams documentation

2018-04-06 Thread Stephane Maarek
https://kafka.apache.org/documentation/streams/developer-guide/

"Testing a Streams application" points to
https://kafka.apache.org/documentation/streams/developer-guide/testing.html
which is a dead link.

The working link is
https://kafka.apache.org/11/documentation/streams/developer-guide/testing.html
but I believe it should be changed to be consistent with the rest of the
docs.

I'm unsure where to make a PR for the docs, but if you can do a quick fix
it'll be appreciated!

Thanks :)
Stephane


Jenkins build is back to normal : kafka-trunk-jdk7 #3321

2018-04-06 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #2534

2018-04-06 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-6054: Fix upgrade path from Kafka Streams v0.10.0 (#4779)

--
[...truncated 1.87 MB...]

org.apache.kafka.streams.kstream.internals.KStreamGlobalKTableLeftJoinTest > 
shouldJoinOnNullKeyMapperValues STARTED

org.apache.kafka.streams.kstream.internals.KStreamGlobalKTableLeftJoinTest > 
shouldJoinOnNullKeyMapperValues PASSED

org.apache.kafka.streams.kstream.internals.KStreamSessionWindowAggregateProcessorTest
 > shouldRemoveMergedSessionsFromStateStore STARTED

org.apache.kafka.streams.kstream.internals.KStreamSessionWindowAggregateProcessorTest
 > shouldRemoveMergedSessionsFromStateStore PASSED

org.apache.kafka.streams.kstream.internals.KStreamSessionWindowAggregateProcessorTest
 > shouldMergeSessions STARTED

org.apache.kafka.streams.kstream.internals.KStreamSessionWindowAggregateProcessorTest
 > shouldMergeSessions PASSED

org.apache.kafka.streams.kstream.internals.KStreamSessionWindowAggregateProcessorTest
 > shouldHandleMultipleSessionsAndMerging STARTED

org.apache.kafka.streams.kstream.internals.KStreamSessionWindowAggregateProcessorTest
 > shouldHandleMultipleSessionsAndMerging PASSED

org.apache.kafka.streams.kstream.internals.KStreamSessionWindowAggregateProcessorTest
 > shouldImmediatelyForwardNewSessionWhenNonCachedStore STARTED

org.apache.kafka.streams.kstream.internals.KStreamSessionWindowAggregateProcessorTest
 > shouldImmediatelyForwardNewSessionWhenNonCachedStore PASSED

org.apache.kafka.streams.kstream.internals.KStreamSessionWindowAggregateProcessorTest
 > shouldGetAggregatedValuesFromValueGetter STARTED

org.apache.kafka.streams.kstream.internals.KStreamSessionWindowAggregateProcessorTest
 > shouldGetAggregatedValuesFromValueGetter PASSED

org.apache.kafka.streams.kstream.internals.KStreamSessionWindowAggregateProcessorTest
 > shouldImmediatelyForwardRemovedSessionsWhenMerging STARTED

org.apache.kafka.streams.kstream.internals.KStreamSessionWindowAggregateProcessorTest
 > shouldImmediatelyForwardRemovedSessionsWhenMerging PASSED

org.apache.kafka.streams.kstream.internals.KStreamSessionWindowAggregateProcessorTest
 > shouldUpdateSessionIfTheSameTime STARTED

org.apache.kafka.streams.kstream.internals.KStreamSessionWindowAggregateProcessorTest
 > shouldUpdateSessionIfTheSameTime PASSED

org.apache.kafka.streams.kstream.internals.KStreamSessionWindowAggregateProcessorTest
 > shouldHaveMultipleSessionsForSameIdWhenTimestampApartBySessionGap STARTED

org.apache.kafka.streams.kstream.internals.KStreamSessionWindowAggregateProcessorTest
 > shouldHaveMultipleSessionsForSameIdWhenTimestampApartBySessionGap PASSED

org.apache.kafka.streams.kstream.internals.KStreamSessionWindowAggregateProcessorTest
 > shouldCreateSingleSessionWhenWithinGap STARTED

org.apache.kafka.streams.kstream.internals.KStreamSessionWindowAggregateProcessorTest
 > shouldCreateSingleSessionWhenWithinGap PASSED

org.apache.kafka.streams.kstream.internals.InternalStreamsBuilderTest > 
shouldAddRegexTopicToLatestAutoOffsetResetList STARTED

org.apache.kafka.streams.kstream.internals.InternalStreamsBuilderTest > 
shouldAddRegexTopicToLatestAutoOffsetResetList PASSED

org.apache.kafka.streams.kstream.internals.InternalStreamsBuilderTest > 
shouldMapStateStoresToCorrectSourceTopics STARTED

org.apache.kafka.streams.kstream.internals.InternalStreamsBuilderTest > 
shouldMapStateStoresToCorrectSourceTopics PASSED

org.apache.kafka.streams.kstream.internals.InternalStreamsBuilderTest > 
shouldAddTableToEarliestAutoOffsetResetList STARTED

org.apache.kafka.streams.kstream.internals.InternalStreamsBuilderTest > 
shouldAddTableToEarliestAutoOffsetResetList PASSED

org.apache.kafka.streams.kstream.internals.InternalStreamsBuilderTest > 
shouldHaveNullTimestampExtractorWhenNoneSupplied STARTED

org.apache.kafka.streams.kstream.internals.InternalStreamsBuilderTest > 
shouldHaveNullTimestampExtractorWhenNoneSupplied PASSED

org.apache.kafka.streams.kstream.internals.InternalStreamsBuilderTest > 
shouldBuildGlobalTopologyWithAllGlobalTables STARTED

org.apache.kafka.streams.kstream.internals.InternalStreamsBuilderTest > 
shouldBuildGlobalTopologyWithAllGlobalTables PASSED

org.apache.kafka.streams.kstream.internals.InternalStreamsBuilderTest > 
shouldAddRegexTopicToEarliestAutoOffsetResetList STARTED

org.apache.kafka.streams.kstream.internals.InternalStreamsBuilderTest > 
shouldAddRegexTopicToEarliestAutoOffsetResetList PASSED

org.apache.kafka.streams.kstream.internals.InternalStreamsBuilderTest > 
shouldAddTopicToEarliestAutoOffsetResetList STARTED

org.apache.kafka.streams.kstream.internals.InternalStreamsBuilderTest > 
shouldAddTopicToEarliestAutoOffsetResetList PASSED

org.apache.kafka.streams.kstream.internals.InternalStreamsBuilderTest > 
shouldHaveCorrectSourceTopicsForTableFromMergedStream STARTED

org.apache.kafka.streams.kstream.internals.InternalStreamsBuilderTest > 

Re: [DISCUSS] KIP-279: Fix log divergence between leader and follower after fast leader fail over

2018-04-06 Thread Ted Yu
Makes sense.
Thanks for the explanation.

-------- Original message --------
From: Anna Povzner
Date: 4/6/18 5:38 PM (GMT-08:00)
To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-279: Fix log divergence between leader and follower after fast leader fail over
Hi Ted,

I updated the Rejected Alternatives section with a more thorough
description of alternatives and reasoning for choosing the solution we
proposed.

While it is clearer why the second alternative guarantees one roundtrip
in the clean leader election case, the proposed solution guarantees it as
well. This is based on the fact that we cannot have more than one
back-to-back leader change due to preferred leader election where the
leader is not pushed out of the ISR, which means the follower will have at
most one leader epoch unknown to the new leader, and so the leader will be
able to respond with the epoch that the follower knows about in the first
response.

For the unclean leader election case, the second alternative reduces the
number of roundtrips, but only in rare cases: we need at least 3 fast
leader changes to see the advantage. Approximate calculation: the proposed
solution requires (N+1)/2 roundtrips for N fast leader changes (worst
case; it could be fewer roundtrips for the same number of leader changes);
the alternative solution requires at most 2 roundtrips (except in super
rare cases, where we may want to limit the size of the OffsetForLeaderEpoch
request). This comes at the cost of a bigger change to the
OffsetForLeaderEpoch request, a larger OffsetForLeaderEpoch request size on
average, and the additional complexity of deciding how long the sequence
should be for subsequent OffsetForLeaderEpoch requests and of handling the
edge/contrived cases where the sequence may become too long.

So I think the main trade-off here is improving the efficiency of a broker
becoming a follower in rare cases (unclean leader election with at least 3
fast leader changes) versus less complexity in the common case. The
proposed solution in the KIP opts for less complexity.

Please let me know if you have any concerns or suggestions.

Thanks,
Anna

On Thu, Apr 5, 2018 at 1:33 PM, Ted Yu  wrote:

> For the second alternative which was rejected (The follower sends all
> sequences of {leader_epoch, end_offset})
>
> bq. also increases the size of OffsetForLeaderEpoch request by at least
> 64bit
>
> Though the size increases, the number of roundtrips is reduced meaningfully,
> which would increase the robustness of the solution.
>
> Please expand the reasoning for unclean leader election for this
> alternative.
>
> Thanks
>
> On Thu, Apr 5, 2018 at 12:17 PM, Anna Povzner  wrote:
>
> > Hi,
> >
> >
> > I just created KIP-279 to fix edge cases of log divergence for both clean
> > and unclean leader election configs.
> >
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 279%3A+Fix+log+divergence+between+leader+and+follower+
> > after+fast+leader+fail+over
> >
> >
> > The KIP is basically a follow up to KIP-101, and proposes a slight
> > extension to the replication protocol to fix edge cases where logs can
> > diverge due to fast leader fail over.
> >
> >
> > Feedback and suggestions are welcome!
> >
> >
> > Thanks,
> >
> > Anna
> >
>


Re: [DISCUSS] KIP-279: Fix log divergence between leader and follower after fast leader fail over

2018-04-06 Thread Anna Povzner
Hi Ted,

I updated the Rejected Alternatives section with a more thorough
description of alternatives and reasoning for choosing the solution we
proposed.

While it is clearer why the second alternative guarantees one roundtrip
in the clean leader election case, the proposed solution guarantees it as
well. This is based on the fact that we cannot have more than one
back-to-back leader change due to preferred leader election where the
leader is not pushed out of the ISR, which means the follower will have at
most one leader epoch unknown to the new leader, and so the leader will be
able to respond with the epoch that the follower knows about in the first
response.

For the unclean leader election case, the second alternative reduces the
number of roundtrips, but only in rare cases: we need at least 3 fast
leader changes to see the advantage. Approximate calculation: the proposed
solution requires (N+1)/2 roundtrips for N fast leader changes (worst
case; it could be fewer roundtrips for the same number of leader changes);
the alternative solution requires at most 2 roundtrips (except in super
rare cases, where we may want to limit the size of the OffsetForLeaderEpoch
request). This comes at the cost of a bigger change to the
OffsetForLeaderEpoch request, a larger OffsetForLeaderEpoch request size on
average, and the additional complexity of deciding how long the sequence
should be for subsequent OffsetForLeaderEpoch requests and of handling the
edge/contrived cases where the sequence may become too long.

So I think the main trade-off here is improving the efficiency of a broker
becoming a follower in rare cases (unclean leader election with at least 3
fast leader changes) versus less complexity in the common case. The
proposed solution in the KIP opts for less complexity.
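
The approximate calculation above can be turned into a toy comparison. The formulas are only this thread's approximations (with integer division assumed for (N+1)/2), not a simulation of the actual replication protocol:

```java
// Toy comparison of the worst-case roundtrip counts quoted above, for N
// back-to-back fast leader changes. These are the thread's approximations
// (integer division assumed), not Kafka's actual protocol behavior.
public class RoundtripSketch {

    // Proposed solution: roughly (N + 1) / 2 roundtrips in the worst case.
    static int proposedRoundtrips(int n) {
        return (n + 1) / 2;
    }

    // Rejected alternative: the follower sends its whole
    // {leader_epoch, end_offset} sequence, so at most 2 roundtrips
    // (ignoring the rare cases where the request size must be capped).
    static int alternativeRoundtrips(int n) {
        return Math.min(n, 2);
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 6; n++) {
            System.out.println("N=" + n
                    + "  proposed=" + proposedRoundtrips(n)
                    + "  alternative=" + alternativeRoundtrips(n));
        }
    }
}
```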

Please let me know if you have any concerns or suggestions.

Thanks,
Anna

On Thu, Apr 5, 2018 at 1:33 PM, Ted Yu  wrote:

> For the second alternative which was rejected (The follower sends all
> sequences of {leader_epoch, end_offset})
>
> bq. also increases the size of OffsetForLeaderEpoch request by at least
> 64bit
>
> Though the size increases, the number of roundtrips is reduced meaningfully,
> which would increase the robustness of the solution.
>
> Please expand the reasoning for unclean leader election for this
> alternative.
>
> Thanks
>
> On Thu, Apr 5, 2018 at 12:17 PM, Anna Povzner  wrote:
>
> > Hi,
> >
> >
> > I just created KIP-279 to fix edge cases of log divergence for both clean
> > and unclean leader election configs.
> >
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 279%3A+Fix+log+divergence+between+leader+and+follower+
> > after+fast+leader+fail+over
> >
> >
> > The KIP is basically a follow up to KIP-101, and proposes a slight
> > extension to the replication protocol to fix edge cases where logs can
> > diverge due to fast leader fail over.
> >
> >
> > Feedback and suggestions are welcome!
> >
> >
> > Thanks,
> >
> > Anna
> >
>


Jenkins build is back to normal : kafka-trunk-jdk9 #539

2018-04-06 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk7 #3320

2018-04-06 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-6576: Configurable Quota Management (KIP-257) (#4699)

--
[...truncated 410.19 KB...]
kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow PASSED


Re: [DISCUSS] KIP-283: Efficient Memory Usage for Down-Conversion

2018-04-06 Thread Dhruvil Shah
Hi Ted,

Thanks for the comments.



>> bq. we can perform down-conversion when Records.writeTo is called.
>> Wouldn't this delay the network thread (though maybe the duration is short)?

Yes, this is noted in the Cons section. I think we have a precedent for
this in the `SSLTransportLayer` implementation, so I am trying to follow a
similar model here.

>> Can you expand on the structure of LazyDownConvertedRecords in more detail?

I added the basic structure to the KIP.

>> bq. even if it exceeds fetch.max.bytes
>> I did a brief search but didn't see the above config. Did you mean
>> message.max.bytes?

Yes, thanks for the correction.

>> After the buffers grow, is there a way to trim them down if subsequent
>> down-conversion doesn't need that much memory?

The easiest way is probably to allocate and use a new buffer for each
topic-partition. I think we would not require any trimming down if we do
this. The buffer will be available for garbage collection as soon as we are
done serializing and writing all messages to the socket for the particular
topic-partition.
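
A minimal sketch of the "new buffer per topic-partition" idea, with invented names (downConvert, the byte-array batches); the KIP's actual LazyDownConvertedRecords design is described on the wiki, not here:

```java
import java.nio.ByteBuffer;
import java.util.List;

// Sketch of allocating a fresh buffer per topic-partition during
// down-conversion. All names here are invented for illustration; this is
// not the KIP's actual implementation.
public class PerPartitionBufferSketch {

    // Stand-in for real record-format down-conversion.
    static ByteBuffer downConvert(byte[] batch) {
        ByteBuffer buf = ByteBuffer.allocate(batch.length);
        buf.put(batch);
        buf.flip();
        return buf;
    }

    public static void main(String[] args) {
        // One batch per topic-partition in the fetch response.
        List<byte[]> batches = List.of(new byte[128], new byte[256]);
        for (byte[] batch : batches) {
            ByteBuffer buf = downConvert(batch);
            // After the buffer's contents are written to the socket, nothing
            // references it, so it is garbage-collectible; no trimming needed.
            System.out.println("converted " + buf.remaining() + " bytes");
        }
    }
}
```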

Thanks,
Dhruvil


On Fri, Apr 6, 2018 at 3:23 PM, Ted Yu  wrote:

> bq. we can perform down-conversion when Records.writeTo is called.
>
> Wouldn't this delay the network thread (though maybe the duration is short)?
>
> Can you expand on the structure of LazyDownConvertedRecords in more detail?
>
> bq. even if it exceeds fetch.max.bytes
>
> I did a brief search but didn't see the above config. Did you mean
> message.max.bytes?
>
> bq. with possibility to grow if the allocation
>
> After the buffers grow, is there a way to trim them down if subsequent
> down-conversion doesn't need that much memory?
>
> Thanks
>
>
> On Fri, Apr 6, 2018 at 2:56 PM, Dhruvil Shah  wrote:
>
> > Hi,
> >
> > I created a KIP to help mitigate out of memory issues during
> > down-conversion. The KIP proposes introducing a configuration that can
> > prevent down-conversions altogether, and also describes a design for
> > efficient memory usage for down-conversion.
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 283%3A+Efficient+Memory+Usage+for+Down-Conversion
> >
> > Suggestions and feedback are welcome!
> >
> > Thanks,
> > Dhruvil
> >
>


Build failed in Jenkins: kafka-trunk-jdk8 #2533

2018-04-06 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-6576: Configurable Quota Management (KIP-257) (#4699)

--
[...truncated 413.81 KB...]
kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow STARTED


Re: [DISCUSS] KIP-283: Efficient Memory Usage for Down-Conversion

2018-04-06 Thread Ted Yu
bq. we can perform down-conversion when Records.writeTo is called.

Wouldn't this delay the network thread (though maybe the duration is short)?

Can you expand on the structure of LazyDownConvertedRecords in more detail?

bq. even if it exceeds fetch.max.bytes

I did a brief search but didn't see the above config. Did you mean
message.max.bytes?

bq. with possibility to grow if the allocation

After the buffers grow, is there a way to trim them down if subsequent
down-conversion doesn't need that much memory?

Thanks


On Fri, Apr 6, 2018 at 2:56 PM, Dhruvil Shah  wrote:

> Hi,
>
> I created a KIP to help mitigate out of memory issues during
> down-conversion. The KIP proposes introducing a configuration that can
> prevent down-conversions altogether, and also describes a design for
> efficient memory usage for down-conversion.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 283%3A+Efficient+Memory+Usage+for+Down-Conversion
>
> Suggestions and feedback are welcome!
>
> Thanks,
> Dhruvil
>


Re: [DISCUSS] KIP-283: Efficient Memory Usage for Down-Conversion

2018-04-06 Thread Dhruvil Shah
I fixed the diagrams - let me know if you are still having trouble seeing
them.

Thanks,
Dhruvil

On Fri, Apr 6, 2018 at 3:05 PM, Ted Yu  wrote:

> The two embedded diagrams seem broken.
>
> Can you double check ?
>
> Thanks
>
> On Fri, Apr 6, 2018 at 2:56 PM, Dhruvil Shah  wrote:
>
> > Hi,
> >
> > I created a KIP to help mitigate out of memory issues during
> > down-conversion. The KIP proposes introducing a configuration that can
> > prevent down-conversions altogether, and also describes a design for
> > efficient memory usage for down-conversion.
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 283%3A+Efficient+Memory+Usage+for+Down-Conversion
> >
> > Suggestions and feedback are welcome!
> >
> > Thanks,
> > Dhruvil
> >
>


Re: [DISCUSS] KIP-283: Efficient Memory Usage for Down-Conversion

2018-04-06 Thread Ted Yu
The two embedded diagrams seem broken.

Can you double check ?

Thanks

On Fri, Apr 6, 2018 at 2:56 PM, Dhruvil Shah  wrote:

> Hi,
>
> I created a KIP to help mitigate out of memory issues during
> down-conversion. The KIP proposes introducing a configuration that can
> prevent down-conversions altogether, and also describes a design for
> efficient memory usage for down-conversion.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 283%3A+Efficient+Memory+Usage+for+Down-Conversion
>
> Suggestions and feedback are welcome!
>
> Thanks,
> Dhruvil
>


[jira] [Resolved] (KAFKA-6576) Configurable Quota Management (KIP-257)

2018-04-06 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-6576.
---
   Resolution: Fixed
 Reviewer: Jun Rao
Fix Version/s: 1.2.0

> Configurable Quota Management (KIP-257)
> ---
>
> Key: KAFKA-6576
> URL: https://issues.apache.org/jira/browse/KAFKA-6576
> Project: Kafka
>  Issue Type: New Feature
>  Components: core
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 1.2.0
>
>
> See 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-257+-+Configurable+Quota+Management]
>  for details.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[DISCUSS] KIP-283: Efficient Memory Usage for Down-Conversion

2018-04-06 Thread Dhruvil Shah
Hi,

I created a KIP to help mitigate out of memory issues during
down-conversion. The KIP proposes introducing a configuration that can
prevent down-conversions altogether, and also describes a design for
efficient memory usage for down-conversion.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-283%3A+Efficient+Memory+Usage+for+Down-Conversion

Suggestions and feedback are welcome!

Thanks,
Dhruvil


[jira] [Created] (KAFKA-6761) Reduce Kafka Streams Footprint

2018-04-06 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-6761:
--

 Summary: Reduce Kafka Streams Footprint
 Key: KAFKA-6761
 URL: https://issues.apache.org/jira/browse/KAFKA-6761
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Bill Bejeck
Assignee: Bill Bejeck
 Fix For: 1.2.0


The persistent storage footprint of a Kafka Streams application contains the 
following aspects:
 # The internal topics created on the Kafka cluster side.
 # The materialized state stores on the Kafka Streams application instances 
side.

There have been some questions about reducing these footprints, especially
since many of them are not necessary. For example, there are redundant internal
topics, as well as unnecessary state stores that take up space and also affect
performance. When people push Streams to production with high traffic,
this issue becomes more common and severe. Reducing the footprint of Streams
has clear benefits for Kafka Streams operations, and also for KSQL deployment
resource utilization.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6760) responses not logged properly in controller

2018-04-06 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-6760:
--

 Summary: responses not logged properly in controller
 Key: KAFKA-6760
 URL: https://issues.apache.org/jira/browse/KAFKA-6760
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 1.1.0
Reporter: Jun Rao


Saw the following logging in controller.log. We need to log the 
StopReplicaResponse properly in KafkaController.

[2018-04-05 14:38:41,878] DEBUG [Controller id=0] Delete topic callback invoked 
for org.apache.kafka.common.requests.StopReplicaResponse@263d40c 
(kafka.controller.KafkaController)

It seems that the same issue exists for LeaderAndIsrResponse as well.
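
The @263d40c in the log line is the default Object.toString() output, meaning the response's fields were never rendered. A generic illustration of the failure mode, with invented class and field names rather than Kafka's actual API:

```java
// Illustration of why the controller log shows "StopReplicaResponse@263d40c":
// logging an object whose class does not override toString() prints only
// ClassName@hashCode. The class and field names below are invented.
public class ResponseLoggingSketch {

    static class FakeStopReplicaResponse { // no toString() override
        final short errorCode;
        FakeStopReplicaResponse(short errorCode) { this.errorCode = errorCode; }
    }

    // Render the fields explicitly instead of relying on toString().
    static String describe(FakeStopReplicaResponse r) {
        return "StopReplicaResponse(errorCode=" + r.errorCode + ")";
    }

    public static void main(String[] args) {
        FakeStopReplicaResponse r = new FakeStopReplicaResponse((short) 0);
        System.out.println("default:     " + r);           // ClassName@hashCode
        System.out.println("descriptive: " + describe(r)); // readable fields
    }
}
```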



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6759) Include valid Fetch Response high_watermark and log_start_offset when Error Code 1 OFFSET_OUT_OF_RANGE

2018-04-06 Thread John R. Fallows (JIRA)
John R. Fallows created KAFKA-6759:
--

 Summary: Include valid Fetch Response high_watermark and 
log_start_offset when Error Code 1 OFFSET_OUT_OF_RANGE
 Key: KAFKA-6759
 URL: https://issues.apache.org/jira/browse/KAFKA-6759
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 1.0.0
Reporter: John R. Fallows


When FETCH version 6 response has Error Code 1 OFFSET_OUT_OF_RANGE, any reason 
why the FETCH response could *not* also include valid values for high_watermark 
and log_start_offset (they are both currently -1) ?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-253: Support in-order message delivery with partition expansion

2018-04-06 Thread Ray Chiang
Hi Jun, please add me to the invitation as well.  If this is happening 
near Palo Alto, let me know if I can join in person. Thanks.


-Ray

On 4/4/18 1:34 PM, Jun Rao wrote:

Hi, Jan, Dong, John, Guozhang,

Perhaps it will be useful to have a KIP meeting to discuss this together as
a group. Would Apr. 9 (Monday) at 9:00am PDT work? If so, I will send out
an invite to the mailing list.

Thanks,

Jun


On Wed, Apr 4, 2018 at 1:25 AM, Jan Filipiak wrote:


I want to quickly step in here again because this is going places again.

The last part of the discussion is just a pain to read and has completely
diverged from what I suggested, without making the reasons clear to me.

I don't know why this happens; here are my comments anyway.

@Guozhang: That Streams is working on automatically creating
copartition-usable topics is great for Streams, but it has literally
nothing to do with the KIP, as we want to grow the input topic. Everyone
can reshuffle relatively easily, but that is not what we need to do; we
need to grow the topic in question. After Streams automatically
reshuffles, the input topic still has the same size, and it didn't help a
bit. I fail to see why this is relevant. What am I missing here?

@Dong
I am still of the position that the current proposal takes us in the
wrong direction, especially introducing PartitionKeyRebalanceListener.
From this point we can never move to proper stateful handling
without completely deprecating this creature from hell again.
Linear hashing is not the optimizing step we have to do here. An
interface where a topic is always the same topic, even after it has grown
or shrunk, is important. So from my POV I have major concerns about
whether this KIP is beneficial in its current state.

What is it that makes everyone so attached to the idea of linear hashing?
It is not attractive at all to me. And with stateful consumers it is
still a complete mess. Why not stick with the Kappa architecture?





On 03.04.2018 17:38, Dong Lin wrote:


Hey John,

Thanks much for your comments!!

I have yet to go through the emails of John/Jun/Guozhang in detail. But
let
me present my idea for how to minimize the delay for state loading for
stream use-case.

For ease of understanding, let's assume that the initial partition number
of input topics and change log topic are both 10. And initial number of
stream processor is also 10. If we only increase initial partition number
of input topics to 15 without changing number of stream processor, the
current KIP already guarantees in-order delivery and no state needs to be
moved between consumers for stream use-case. Next, let's say we want to
increase the number of processor to expand the processing capacity for
stream use-case. This requires us to move state between processors which
will take time. Our goal is to minimize the impact (i.e. delay) for
processing while we increase the number of processors.

Note that stream processor generally includes both consumer and producer.
In addition to consume from the input topic, consumer may also need to
consume from change log topic on startup for recovery. And producer may
produce state to the change log topic.



The solution will include the following steps:

1) Increase partition number of the input topic from 10 to 15. Since the
messages with the same key will still go to the same consumer before and
after the partition expansion, this step can be done without having to
move
state between processors.

2) Increase partition number of the change log topic from 10 to 15. Note
that this step can also be done without impacting existing workflow. After
we increase partition number of the change log topic, key space may split
and some key will be produced to the newly-added partition. But the same
key will still go to the same processor (i.e. consumer) before and after
the partition expansion. Thus this step can also be done without having to move
state
between processors.
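To make the key-splitting behavior in step 2 concrete, here is a sketch of a linear-hashing partition function. This is an illustrative assumption: the thread mentions linear hashing, but the exact function is not spelled out in this discussion. Growing from n to m partitions (n < m <= 2n), a key either stays in its old partition p or moves to exactly p + n, which is why the same processor can keep owning it:

```java
// Hypothetical linear-hashing partitioner; a sketch, not the KIP's actual code.
public class LinearHashPartitioner {
    // hash: a non-negative key hash; requires oldCount < newCount <= 2 * oldCount.
    public static int partitionFor(int hash, int oldCount, int newCount) {
        int candidate = Math.floorMod(hash, 2 * oldCount); // try the doubled key space
        // Partitions >= newCount do not exist yet, so fall back to the old modulus.
        return candidate < newCount ? candidate : Math.floorMod(hash, oldCount);
    }

    public static void main(String[] args) {
        // Growing 10 -> 15 partitions: keys either stay put or move to p + 10.
        System.out.println(partitionFor(3, 10, 15));   // prints 3: stays in partition 3
        System.out.println(partitionFor(13, 10, 15));  // prints 13: split into new partition 13
        System.out.println(partitionFor(17, 10, 15));  // prints 7: partition 17 doesn't exist yet
    }
}
```

Only keys hashing into the split partitions (0 through 4 in this example) can move; everything else is untouched.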

3) Now, let's add 5 new consumers whose groupId is different from the
existing processor's groupId. Thus these new consumers will not impact
existing workflow. Each of these new consumers should consume two
partitions from the earliest offset, where these two partitions are the
same partitions that will be consumed if the consumers have the same
groupId as the existing processor's groupId. For example, the first of the
five consumers will consume partition 0 and partition 10. The purpose of
these consumers is to rebuild the state (e.g. RocksDB) for the processors
in advance. Also note that, by design of the current KIP, each consumer
will
consume the existing partition of the change log topic up to the offset
before the partition expansion. Then they will only need to consume the
state of the new partition of the change log topic.
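The partition pairing in step 3 can be written down as a tiny helper. This is a hypothetical sketch (the function name and layout are mine, not Kafka Streams code), assuming that when the count grows from 10 to 15, old partition p splits into {p, p + 10}:

```java
// Sketch of the warm-up assignment described in step 3: consumer j replays
// the old changelog partition whose key space split, plus the new partition
// it split into. Illustrative only.
public class WarmupAssignment {
    // Valid for 0 <= j < newCount - oldCount.
    public static int[] changelogPartitions(int j, int oldCount) {
        return new int[] {j, j + oldCount};
    }

    public static void main(String[] args) {
        int[] first = changelogPartitions(0, 10);
        System.out.println(first[0] + " and " + first[1]); // prints "0 and 10"
    }
}
```

This matches the example in the text: the first of the five warm-up consumers reads partitions 0 and 10.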

4) After consumers have caught up in step 3), we should stop these
consumers and add 5 new processors to the stream processing job. These 5
new processors should run in the same location as the previous 5 consumers
to 

Build failed in Jenkins: kafka-trunk-jdk8 #2532

2018-04-06 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] Trogdor: Added commonClientConf and adminClientConf to workload 
specs

--
[...truncated 1.46 MB...]

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefaultString STARTED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefaultString PASSED

org.apache.kafka.common.config.ConfigDefTest > testSslPasswords STARTED

org.apache.kafka.common.config.ConfigDefTest > testSslPasswords PASSED

org.apache.kafka.common.config.ConfigDefTest > testDynamicUpdateModeInDocs 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testDynamicUpdateModeInDocs 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testBaseConfigDefDependents 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testBaseConfigDefDependents 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringClass 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringClass 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringShort 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringShort 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefault STARTED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefault PASSED

org.apache.kafka.common.config.ConfigDefTest > toRst STARTED

org.apache.kafka.common.config.ConfigDefTest > toRst PASSED

org.apache.kafka.common.config.ConfigDefTest > testCanAddInternalConfig STARTED

org.apache.kafka.common.config.ConfigDefTest > testCanAddInternalConfig PASSED

org.apache.kafka.common.config.ConfigDefTest > testMissingRequired STARTED

org.apache.kafka.common.config.ConfigDefTest > testMissingRequired PASSED

org.apache.kafka.common.config.ConfigDefTest > 
testParsingEmptyDefaultValueForStringFieldShouldSucceed STARTED

org.apache.kafka.common.config.ConfigDefTest > 
testParsingEmptyDefaultValueForStringFieldShouldSucceed PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringPassword 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringPassword 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringList 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringList 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringLong 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringLong 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringBoolean 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringBoolean 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testNullDefaultWithValidator 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testNullDefaultWithValidator 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testMissingDependentConfigs 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testMissingDependentConfigs 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringDouble 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringDouble 
PASSED

org.apache.kafka.common.config.ConfigDefTest > toEnrichedRst STARTED

org.apache.kafka.common.config.ConfigDefTest > toEnrichedRst PASSED

org.apache.kafka.common.config.ConfigDefTest > testDefinedTwice STARTED

org.apache.kafka.common.config.ConfigDefTest > testDefinedTwice PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringString 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringString 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testNestedClass STARTED

org.apache.kafka.common.config.ConfigDefTest > testNestedClass PASSED

org.apache.kafka.common.config.ConfigDefTest > testBadInputs STARTED

org.apache.kafka.common.config.ConfigDefTest > testBadInputs PASSED

org.apache.kafka.common.config.ConfigDefTest > testValidateMissingConfigKey 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testValidateMissingConfigKey 
PASSED

org.apache.kafka.common.config.AbstractConfigTest > testOriginalsWithPrefix 
STARTED

org.apache.kafka.common.config.AbstractConfigTest > testOriginalsWithPrefix 
PASSED

org.apache.kafka.common.config.AbstractConfigTest > testConfiguredInstances 
STARTED

org.apache.kafka.common.config.AbstractConfigTest > testConfiguredInstances 
PASSED

org.apache.kafka.common.config.AbstractConfigTest > testEmptyList STARTED

org.apache.kafka.common.config.AbstractConfigTest > testEmptyList PASSED

org.apache.kafka.common.config.AbstractConfigTest > 
testValuesWithSecondaryPrefix STARTED

org.apache.kafka.common.config.AbstractConfigTest > 
testValuesWithSecondaryPrefix PASSED

org.apache.kafka.common.config.AbstractConfigTest > 
testValuesWithPrefixOverride STARTED

org.apache.kafka.common.config.AbstractConfigTest > 

Re: Wiki edit permission request

2018-04-06 Thread Ray Chiang

Got it.  Thanks.

-Ray

On 4/6/18 12:03 PM, Matthias J. Sax wrote:

Please keep it on the mailing list.

Wiki permissions granted.


-Matthias

On 4/6/18 11:58 AM, Ray Chiang wrote:

ID is rchiang.

-Ray

On 4/6/18 11:23 AM, Matthias J. Sax wrote:

What is your wiki ID?

-Matthias

On 4/6/18 10:54 AM, Ray Chiang wrote:

As best as I can tell, I currently don't have edit access.

-Ray





Build failed in Jenkins: kafka-trunk-jdk9 #538

2018-04-06 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] Trogdor: Added commonClientConf and adminClientConf to workload 
specs

--
[...truncated 1.48 MB...]
kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
STARTED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
PASSED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent STARTED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent PASSED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException 
STARTED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrLive STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrLive PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionLastIsrShuttingDown STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionLastIsrShuttingDown PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaInIsrNotLive STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaInIsrNotLive PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithNoLiveIsr STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithNoLiveIsr PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElection STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElection PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithEmptyIsr STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testReassignPartitionLeaderElectionWithEmptyIsr PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testControlledShutdownPartitionLeaderElectionAllIsrSimultaneouslyShutdown PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionEnabled 
PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testPreferredReplicaPartitionLeaderElectionPreferredReplicaNotInIsrNotLive 
PASSED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled 
STARTED

kafka.controller.PartitionLeaderElectionAlgorithmsTest > 
testOfflinePartitionLeaderElectionLastIsrOfflineUncleanLeaderElectionDisabled 
PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > 

Request for comments on my new feature request

2018-04-06 Thread Bill DeStein
Hello Developers,

I'd appreciate your comments on my recent feature request:

https://issues.apache.org/jira/browse/KAFKA-6754

Thanks and best,

Bill


Re: Wiki edit permission request

2018-04-06 Thread Matthias J. Sax
Please keep it on the mailing list.

Wiki permissions granted.


-Matthias

On 4/6/18 11:58 AM, Ray Chiang wrote:
> ID is rchiang.
> 
> -Ray
> 
> On 4/6/18 11:23 AM, Matthias J. Sax wrote:
>> What is your wiki ID?
>>
>> -Matthias
>>
>> On 4/6/18 10:54 AM, Ray Chiang wrote:
>>> As best as I can tell, I currently don't have edit access.
>>>
>>> -Ray
>>>
> 



signature.asc
Description: OpenPGP digital signature


Re: Wiki edit permission request

2018-04-06 Thread Matthias J. Sax
What is your wiki ID?

-Matthias

On 4/6/18 10:54 AM, Ray Chiang wrote:
> As best as I can tell, I currently don't have edit access.
> 
> -Ray
> 



signature.asc
Description: OpenPGP digital signature


Wiki edit permission request

2018-04-06 Thread Ray Chiang

As best as I can tell, I currently don't have edit access.

-Ray



Re: [DISCUSS] KIP-253: Support in-order message delivery with partition expansion

2018-04-06 Thread Dong Lin
Hey John,

Thanks much for your super-detailed explanation. This is very helpful.

Now that I have finished reading through your email, I think the proposed
solution in my previous email probably meets requirement #6 without
requiring additional coordination (w.r.t. partition function) among
clients. My understanding of requirement #6 is that, after partition
expansion, messages with the given key will go to the same consumer before
and after the partition expansion such that stream processing jobs won't be
affected. Thus this approach seems to be better than backfilling since it
does not require data copy for input topics.

In order for the proposed solution to meet requirements #6, we need two
extra requirements in addition to what has been described in the previous
email: 1) stream processing job starts with the same number of processors
as the initial number of partitions of the input topics; and 2) at any
given time the number of partitions of the input topic >= the number of
processors of the given stream processing job.

Could you take a look at the proposed solution and see if any of the claims
above is false?


Hey Jan,

Maybe it is more efficient for us to discuss your concern in the KIP
Meeting.


Thanks,
Dong


On Thu, Mar 29, 2018 at 2:05 PM, John Roesler  wrote:

> Hi Jun,
>
> Thanks for the response. I'm very new to this project, but I will share my
> perspective. I'm going to say a bunch of stuff that I know you know
> already, but just so we're on the same page...
>
> This may also be a good time to get feedback from the other KStreams folks.
>
> Using KStreams as a reference implementation for how stream processing
> frameworks may interact with Kafka, I think it's important to eschew
> knowledge about how KStreams currently handles internal communication,
> making state durable, etc. Both because these details may change, and
> because they won't be shared with other stream processors.
>
> =
> Background
>
> We are looking at a picture like this:
>
>        input   input   input
>           \       |       /
>        +---------------------+
>   +----+     Consumer(s)     +----+
>   |    +---------------------+    |
>   |                               |
>   |     KStreams Application      |
>   |                               |
>   |    +---------------------+    |
>   +----+     Producer(s)     +----+
>        +---------------------+
>              /          \
>         output          output
>
> The inputs and outputs are Kafka topics (and therefore have 1 or more
> partitions). We'd have at least 1 input and 0 or more outputs. The
> Consumers and Producers are both the official KafkaConsumer and
> KafkaProducer.
>
> In general, we'll assume that the input topics are provided by actors over
> which we have no control, although we may as well assume they are friendly
> and amenable to requests, and also that their promises are trustworthy.
> This is important because we must depend on them to uphold some promises:
> * That they tell us the schema of the data they publish, and abide by that
> schema. Without this, the inputs are essentially garbage.
> * That they tell us some defining characteristics of the topics (more on
> this in a sec.) and again strictly abide by that promise.
>
> What are the topic characteristics we care about?
> 1. The name (or name pattern)
> 2. How the messages are keyed (if at all)
> 3. Whether the message timestamps are meaningful, and if so, what their
> meaning is
> 4. Assuming the records have identity, whether the partitions partition the
> records' identity space
> 5. Whether the topic completely contains the data set
> 6. Whether the messages in the topic are ordered
>
> #1 is obvious: without this information, we cannot access the data at all.
>
> For #2, #3, #4, and #6, we may or may not need this information, depending
> on the logic of the application. For example, a trivial application that
> simply counts all events it sees doesn't care about #2, #3, #4, or #6. But
> an application that groups by some attribute can take advantage of #2 and
> #4 if the topic data is already keyed and partitioned over that attribute.
> Likewise, if the application includes some temporal semantics on a temporal
> dimension that is already captured in #3, it may take advantage of that
> fact.
>
> Note that #2, #3, #4, and #6 are all optional. If they are not promised, we
> can do extra work inside the application to accomplish what we need.
> However, if they are promised (and if we depend on that promise), it is
> essential that the topic providers uphold those promises, as we may not be
> in a position to verify them.
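The six topic characteristics enumerated above could be captured as a small value type. The following is a sketch with hypothetical names, not an actual KStreams API:

```java
// Hypothetical value type recording the promises a topic provider makes (#1-#6).
public class TopicContract {
    public final String namePattern;            // #1 name or name pattern
    public final String keyAttribute;           // #2 how messages are keyed (null if unkeyed)
    public final String timestampMeaning;       // #3 e.g. "event-time", "produce-time", or null
    public final boolean partitionedByIdentity; // #4 partitions split the identity space
    public final boolean complete;              // #5 topic contains the whole data set
    public final boolean ordered;               // #6 messages are ordered

    public TopicContract(String namePattern, String keyAttribute, String timestampMeaning,
                         boolean partitionedByIdentity, boolean complete, boolean ordered) {
        this.namePattern = namePattern;
        this.keyAttribute = keyAttribute;
        this.timestampMeaning = timestampMeaning;
        this.partitionedByIdentity = partitionedByIdentity;
        this.complete = complete;
        this.ordered = ordered;
    }

    public static void main(String[] args) {
        TopicContract c = new TopicContract("orders-*", "orderId", "event-time", true, true, true);
        System.out.println(c.namePattern + " keyed by " + c.keyAttribute);
    }
}
```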
>
> Note also that if they make a promise, but it doesn't happen to line up
> with our needs (data is keyed by attr1, but we need it by attr2, or
> timestamp is produce-time, but we need it by event-time, etc.), then we
> will have to go ahead and do that extra work internally anyway. This also
> captures the situation in which two inputs are produced by different
> providers, one of 

[jira] [Created] (KAFKA-6758) Default "" consumer group tracks committed offsets, but is otherwise not a real group

2018-04-06 Thread David van Geest (JIRA)
David van Geest created KAFKA-6758:
--

 Summary: Default "" consumer group tracks committed offsets, but 
is otherwise not a real group
 Key: KAFKA-6758
 URL: https://issues.apache.org/jira/browse/KAFKA-6758
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.11.0.2
Reporter: David van Geest


*To reproduce:*
 * Use the default config for `group.id` of "" (the empty string)
 * Use the default config for `enable.auto.commit` of `true`
 * Use manually assigned partitions with `assign`
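A minimal consumer configuration matching the reproduction steps above might look like this (the bootstrap server is a placeholder; the explicit settings shown are the documented defaults, spelled out for clarity):

```java
import java.util.Properties;

public class ReproConfig {
    // Config matching the reproduction: default group.id ("") and default
    // enable.auto.commit (true); the consumer would then use assign(...),
    // since subscribe(...) fails with an empty group.id.
    public static Properties consumerConfig() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "");                 // the default (empty string)
        props.setProperty("enable.auto.commit", "true");   // the default
        props.setProperty("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerConfig());
    }
}
```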

*Actual (unexpected) behaviour:*

Consumer offsets are stored for the "" group. Example:

{{~ $ /opt/kafka/kafka_2.11-0.11.0.2/bin/kafka-consumer-groups.sh 
--bootstrap-server localhost:9092 --describe --group ""}}
{{Note: This will only show information about consumers that use the Java consumer API (non-ZooKeeper-based consumers).}}
{{Consumer group '' has no active members.}}
{{TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID}}
{{my_topic 54 7859593 7865082 5489 - - -}}
{{my_topic 5 14252813 14266419 13606 - - -}}
{{my_topic 39 19099099 19122441 23342 - - -}}
{{my_topic 43 16434573 16449180 14607 - - -}}



 

However, the "" is not a real group. It doesn't show up with:

{{~ $ /opt/kafka/kafka_2.11-0.11.0.2/bin/kafka-consumer-groups.sh 
--bootstrap-server localhost:9092 --list}}

You also can't do dynamic partition assignment with it - if you try to 
`subscribe` when using the default "" group ID, you get:

{{AbstractCoordinator: Attempt to join group  failed due to fatal error: The 
configured groupId is invalid}}

*Better behaviours:*

(any of these would be preferable, in my opinion)
 * Don't commit offsets with the "" group, and log a warning telling the user 
that `enable.auto.commit = true` is meaningless in this situation. This is what 
I would have expected.
 * Don't have a default `group.id`. Some of my reading indicates that the new 
consumer basically needs a `group.id` to function. If so, force users to choose 
a group ID so that they're more aware of what will happen.
 * Have a default `group.id` of `default`, and make it a real consumer group. 
That is, it shows up in lists of groups, it has dynamic partitioning, etc.

As a user, when I don't set `group.id` I expect that I'm not using consumer 
groups. Therefore, I expect that there will be no offset tracking in Kafka.

In my specific application, I was wanting `auto.offset.reset` to kick in so 
that a failed consumer would start at the `latest` offset. However, it started 
at this unexpectedly stored offset instead.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-269: Substitution Within Configuration Values

2018-04-06 Thread Ron Dagostino
Hi folks.  I think there are a couple of issues that were just raised in
this thread.  One is whether the ability to use PasswordCallback exists,
and if so whether that impacts the applicability of this KIP to the
SASL/OAUTHBEARER KIP-255.  The second issue is related to how we might
leverage this KIP more broadly (including as an alternative to KIP-76)
while maintaining forward compatibility and not causing unexpected
substitutions/parsing exceptions.

Let me address the second issue (more broad use) first, since I think
Rajini hit on a good possibility.  Currently this KIP addresses the
possibility of an unexpected substitution by saying "This would cause a
substitution to be attempted, which of course would fail and raise an
exception."  I think Rajini's idea is to explicitly state that any
substitution that cannot be parsed is to be treated as a pass-thru or a
no-op.  So, for example, if a configured password happened to look like
"Asd$[,mhsd_4]Q" and a substitution was attempted on that value then the
result should not be an exception simply because "$[,mhsd_4]" couldn't be
parsed according to the required delimited syntax but should instead just
end up doing nothing and the password would remain "Asd$[,mhsd_4]Q".
Rajini, do I have that right?  If so, then I think it is worth considering
the possibility that substitution could be turned on more broadly with an
acceptably low risk.  In the interest of caution substitution could still
be turned on only as an opt-in, but it could be a global opt-in if we
explicitly take a "do no harm" approach to things that have delimiters but
do not parse as valid substitutions.
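To make the "do no harm" idea concrete, a pass-through substitution pass might look like the following sketch. The `$[name=value]` delimiter syntax and the `literal` substitution type are illustrative assumptions for this example, not the KIP's final syntax:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: substitute recognized $[name=value] tokens, and pass anything that
// does not parse as a substitution through unchanged instead of throwing.
public class PassThroughSubstitution {
    private static final Pattern TOKEN = Pattern.compile("\\$\\[(\\w+)=([^\\]]*)\\]");

    public static String substitute(String value) {
        Matcher m = TOKEN.matcher(value);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            if ("literal".equals(m.group(1))) {
                // Known substitution type: replace with the delimited value.
                m.appendReplacement(sb, Matcher.quoteReplacement(m.group(2)));
            } else {
                // Unknown type: leave the original text intact (do no harm).
                m.appendReplacement(sb, Matcher.quoteReplacement(m.group(0)));
            }
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        // A password that merely looks like a substitution survives intact,
        // because "$[,mhsd_4]" does not match the name=value token syntax.
        System.out.println(substitute("Asd$[,mhsd_4]Q"));    // prints "Asd$[,mhsd_4]Q"
        System.out.println(substitute("$[literal=secret]")); // prints "secret"
    }
}
```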

Regarding whether the ability to use PasswordCallback exists in
SASL/OAUTHBEARER KIP-255, I don't think it does.  The reason is because
customers do not generally write the login module implementation; they use
the implementation provided, which is short and delegates the token
retrieval to the callback handler (which users are expected to provide).
Here is the login module code:

@Override
public boolean login() throws LoginException {
    OAuthBearerLoginCallback callback = new OAuthBearerLoginCallback();
    try {
        this.callbackHandler.handle(new Callback[] {callback});
    } catch (IOException | UnsupportedCallbackException e) {
        log.error(e.getMessage(), e);
        throw new LoginException("An internal error occurred");
    }
    token = callback.token();
    if (token == null) {
        log.info(String.format("Login failed: %s : %s (URI=%s)",
                callback.errorCode(), callback.errorDescription(),
                callback.errorUri()));
        throw new LoginException(callback.errorDescription());
    }
    log.info("Login succeeded");
    return true;
}

I don't see the callbackHandler using a PasswordCallback instance to ask
(itself?) to retrieve a password.  If the callbackHandler needs a password,
then I imagine it will get that password from a login module option, and
that could in turn come from a file, environment variable, password vault,
etc. if substitution is applicable.
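For reference, a callback handler that answers PasswordCallback requests from a map of pre-loaded secrets might look like this sketch. The map-backed source is an illustrative assumption (in practice the secrets could come from a file, environment variable, or vault as described above); it uses only JDK classes, not a Kafka-specific interface:

```java
import java.io.IOException;
import java.util.Map;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.callback.UnsupportedCallbackException;

// Sketch: serve PasswordCallback from a map of secrets keyed by prompt name.
public class MapPasswordCallbackHandler implements CallbackHandler {
    private final Map<String, String> secrets;

    public MapPasswordCallbackHandler(Map<String, String> secrets) {
        this.secrets = secrets;
    }

    @Override
    public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
        for (Callback cb : callbacks) {
            if (cb instanceof PasswordCallback) {
                PasswordCallback pc = (PasswordCallback) cb;
                String secret = secrets.get(pc.getPrompt()); // look up by prompt name
                if (secret != null)
                    pc.setPassword(secret.toCharArray());
            } else {
                throw new UnsupportedCallbackException(cb);
            }
        }
    }
}
```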

Ron



On Fri, Apr 6, 2018 at 4:41 AM, Rajini Sivaram 
wrote:

> Yes, I was going to suggest that we should do this for all configs earlier,
> but was reluctant to do that since in its current form, there is a
> property enableSubstitution
> (in JAAS config at the moment) that indicates if substitution is to be
> performed. If enabled, all values in that config are considered for
> substitution. That works for JAAS configs with a small number of
> properties, but I wasn't sure it was reasonable to do this for all Kafka
> configs where there are several configs that may contain arbitrary
> characters including substitution delimiters. It will be good if some
> configs that contain arbitrary characters can be specified directly in the
> config while others are substituted from elsewhere. Perhaps a substitution
> type that simply uses the value within delimiters would work? Ron, what do
> you think?
>
>
>
> On Fri, Apr 6, 2018 at 7:49 AM, Manikumar 
> wrote:
>
> > Hi,
> >
> > Substitution mechanism can be useful to configure regular password configs
> > like ssl.keystore.password, ssl.truststore.password, etc.
> > This can be a good alternative to the previously proposed KIP-76 and will
> > give more options to the user.
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 76+Enable+getting+password+from+executable+rather+than+
> > passing+as+plaintext+in+config+files
> >
> >
> > Thanks,
> >
> > On Fri, Apr 6, 2018 at 4:29 AM, Rajini Sivaram 
> > wrote:
> >
> > > Hi Ron,
> > >
> > > For the password example, you could define a login CallbackHandler that
> > > processes PasswordCallback to provide passwords. We don't currently do
> > this
> > > with PLAIN/SCRAM because login callback handlers were not configurable
> > > earlier and we 

Re: [DISCUSS] KIP-269: Substitution Within Configuration Values

2018-04-06 Thread Rajini Sivaram
Yes, I was going to suggest that we should do this for all configs earlier,
but was reluctant to do that since in its current form, there is a
property enableSubstitution
(in JAAS config at the moment) that indicates if substitution is to be
performed. If enabled, all values in that config are considered for
substitution. That works for JAAS configs with a small number of
properties, but I wasn't sure it was reasonable to do this for all Kafka
configs where there are several configs that may contain arbitrary
characters including substitution delimiters. It will be good if some
configs that contain arbitrary characters can be specified directly in the
config while others are substituted from elsewhere. Perhaps a substitution
type that simply uses the value within delimiters would work? Ron, what do
you think?



On Fri, Apr 6, 2018 at 7:49 AM, Manikumar  wrote:

> Hi,
>
> Substitution mechanism can be useful to configure regular password configs
> like ssl.keystore.password, ssl.truststore.password, etc.
> This can be a good alternative to the previously proposed KIP-76 and will give
> more options to the user.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 76+Enable+getting+password+from+executable+rather+than+
> passing+as+plaintext+in+config+files
>
>
> Thanks,
>
> On Fri, Apr 6, 2018 at 4:29 AM, Rajini Sivaram 
> wrote:
>
> > Hi Ron,
> >
> > For the password example, you could define a login CallbackHandler that
> > processes PasswordCallback to provide passwords. We don't currently do
> this
> > with PLAIN/SCRAM because login callback handlers were not configurable
> > earlier and we haven't updated the login modules to do this. But that
> could
> > be one way of providing passwords and integrating with other password
> > sources, now that we have configurable login callback handlers. I was
> > wondering whether similar approach could be used for the parameters that
> > OAuth needed to obtain at runtime. We could still have this KIP with
> > built-in substitutable types to handle common cases like getting options
> > from a file without writing any code. But I wasn't sure if there were
> OAuth
> > options that couldn't be handled as callbacks using the login callback
> > handler.
> >
> > On Thu, Apr 5, 2018 at 10:25 PM, Ron Dagostino 
> wrote:
> >
> > > Hi Rajini.  Thanks for the questions.  I could see someone wanting to
> > > retrieve a password from a vended password vault solution (for
> example);
> > > that is the kind of scenario that the ability to add new substitutable
> > > types would be meant for.  I do still consider this KIP 269 to be a
> > > prerequisite for the SASL/OAUTHBEARER KIP 255.  I am open to a
> different
> > > perspective in case I missed or misunderstood your point.
> > >
> > > Ron
> > >
> > > On Thu, Apr 5, 2018 at 8:13 AM, Rajini Sivaram <
> rajinisiva...@gmail.com>
> > > wrote:
> > >
> > > > Hi Ron,
> > > >
> > > > Now that login callback handlers are configurable, is this KIP still
> a
> > > > pre-req for OAuth? I was wondering whether we still need the ability
> to
> > > add
> > > > new substitutable types or whether it would be sufficient to add the
> > > > built-in ones to read from file etc.
> > > >
> > > >
> > > > On Thu, Mar 29, 2018 at 6:48 AM, Ron Dagostino 
> > > wrote:
> > > >
> > > > > Hi everyone.  There have been no comments on this KIP, so I intend
> to
> > > put
> > > > > it to a vote next week if there are no comments that might entail
> > > changes
> > > > > between now and then.  Please take a look in the meantime if you
> > wish.
> > > > >
> > > > > Ron
> > > > >
> > > > > On Thu, Mar 15, 2018 at 2:36 PM, Ron Dagostino 
> > > > wrote:
> > > > >
> > > > > > Hi everyone.
> > > > > >
> > > > > > I created KIP-269: Substitution Within Configuration Values
> > > > > > (https://cwiki.apache.org/confluence/display/KAFKA/KIP+269+
> > > > > > Substitution+Within+Configuration+Values).
> > > > > >
> > > > > > This KIP proposes adding support for substitution within client
> > JAAS
> > > > > > configuration values for PLAIN and SCRAM-related SASL mechanisms
> > in a
> > > > > > backwards-compatible manner and making the functionality
> available
> > to
> > > > > other
> > > > > > existing (or future) configuration contexts where it is deemed
> > > > > appropriate.
> > > > > >
> > > > > > This KIP was extracted from (and is now a prerequisite for)
> > KIP-255:
> > > > > > OAuth Authentication via SASL/OAUTHBEARER
> > > > > >  > > > > action?pageId=75968876>
> > > > > > based on discussion of that KIP.
> > > > > >
> > > > > > Ron
> > > > > >
> > > > >
> > > >
> 

Re: [VOTE] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2018-04-06 Thread Manikumar
+1 (non-binding)

Thanks for the detailed KIP.

On Fri, Apr 6, 2018 at 8:54 AM, Guozhang Wang  wrote:

> Thanks for the KIP!
>
> +1 (binding)
>
> On Thu, Apr 5, 2018 at 3:11 PM, Matthias J. Sax 
> wrote:
>
> > +1 (binding)
> >
> >
> > -Matthias
> >
> > On 4/5/18 4:36 AM, Ted Yu wrote:
> > > +1
> > >  Original message From: Mickael Maison <
> > mickael.mai...@gmail.com> Date: 4/5/18  1:42 AM  (GMT-08:00) To: dev <
> > dev@kafka.apache.org> Subject: Re: [VOTE] KIP-211: Revise Expiration
> > Semantics of Consumer Group Offsets
> > > +1 (non-binding)
> > > Thanks for the KIP!
> > >
> > > On Thu, Apr 5, 2018 at 8:08 AM, Jason Gustafson 
> > wrote:
> > >> +1 Thanks Vahid!
> > >>
> > >> On Wed, Mar 28, 2018 at 7:27 PM, James Cheng 
> > wrote:
> > >>
> > >>> +1 (non-binding)
> > >>>
> > >>> Thanks for all the hard work on this, Vahid!
> > >>>
> > >>> -James
> > >>>
> >  On Mar 28, 2018, at 10:34 AM, Vahid S Hashemian <
> > >>> vahidhashem...@us.ibm.com> wrote:
> > 
> >  Hi all,
> > 
> >  As I believe the feedback and suggestions on this KIP have been
> > addressed
> >  so far, I'd like to start a vote.
> >  The KIP can be found at
> >  https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > >>> 211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets
> > 
> >  Thanks in advance for voting :)
> > 
> >  --Vahid
> > 
> > >>>
> > >>>
> >
> >
>
>
> --
> -- Guozhang
>


[jira] [Created] (KAFKA-6757) Log runtime exception in case of server startup errors

2018-04-06 Thread Enrico Olivelli (JIRA)
Enrico Olivelli created KAFKA-6757:
--

 Summary: Log runtime exception in case of server startup errors
 Key: KAFKA-6757
 URL: https://issues.apache.org/jira/browse/KAFKA-6757
 Project: Kafka
  Issue Type: Improvement
  Components: log
Affects Versions: 1.0.1
Reporter: Enrico Olivelli


Sometimes, while running a Kafka server inside the tests of an upstream application, 
the server cannot start due to a runtime error, such as a 
missing jar on the classpath.

I would like KafkaServerStartable to log any 'Throwable' in order to catch these 
unpredictable errors

 

like this
{code:java}

java.lang.NoClassDefFoundError: com/fasterxml/jackson/annotation/JsonMerge
    at 
com.fasterxml.jackson.databind.introspect.JacksonAnnotationIntrospector.<init>(JacksonAnnotationIntrospector.java:50)
    at 
com.fasterxml.jackson.databind.ObjectMapper.<init>(ObjectMapper.java:291)
    at kafka.utils.Json$.<init>(Json.scala:29)
    at kafka.utils.Json$.<clinit>(Json.scala)
    at kafka.utils.ZkUtils$ClusterId$.toJson(ZkUtils.scala:299)
    at kafka.utils.ZkUtils.createOrGetClusterId(ZkUtils.scala:314)
    at 
kafka.server.KafkaServer$$anonfun$getOrGenerateClusterId$1.apply(KafkaServer.scala:356)
    at 
kafka.server.KafkaServer$$anonfun$getOrGenerateClusterId$1.apply(KafkaServer.scala:356)
    at scala.Option.getOrElse(Option.scala:121)
    at kafka.server.KafkaServer.getOrGenerateClusterId(KafkaServer.scala:356)
    at kafka.server.KafkaServer.startup(KafkaServer.scala:197)
    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
    at 
magnews.datastream.server.DataStreamServer.start(DataStreamServer.java:96)
    at 
magnews.datastream.server.RealServerSinglePartitionTest.test(RealServerSinglePartitionTest.java:85)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325){code}
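The requested fix could be sketched roughly as follows (class and method names below are illustrative only, not Kafka's actual code; the real change would live in KafkaServerStartable.startup):

```java
// Sketch: catch and log any Throwable raised during server startup, so that
// runtime errors such as NoClassDefFoundError are not silently lost.
// Names are illustrative assumptions, not Kafka's actual implementation.
public class StartableSketch {

    interface Server {
        void startup();
    }

    // Returns true if startup succeeded; logs any Throwable otherwise.
    static boolean startLogged(Server server) {
        try {
            server.startup();
            return true;
        } catch (Throwable t) {  // catches Error subclasses as well as Exception
            System.err.println("Fatal error during server startup: " + t);
            return false;
        }
    }

    public static void main(String[] args) {
        // Simulate the classpath problem from the stack trace above.
        boolean ok = startLogged(() -> {
            throw new NoClassDefFoundError("com/fasterxml/jackson/annotation/JsonMerge");
        });
        System.out.println("started=" + ok);
    }
}
```

Catching Throwable is normally discouraged, but at the top level of server startup it ensures the cause is logged before the process dies.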



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-280: Enhanced log compaction

2018-04-06 Thread Luís Cabral
 
Thank you very much for taking the time to read it.

bq. In the 'Proposed Changes' section, can you expand 'OCC' ?
I've made 'OCC' into a link pointing to the appropriate Wiki page 
explaining what it is. This is not a particularly important part of the change; 
it just notes the similarity between this proposal and the version 
control offered by OCC.

bq. Is it possible to enumerate the keys ?
Do you mean hard-coding the header key used, rather than using a free-text 
solution? If I were to hard-code a header with the key "version", for example, then 
this may conflict with other clients that already use this header for something 
else, making it cumbersome for them to try and use this strategy, should they 
want it.
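To illustrate the core decision rule under discussion (a sketch only, not the KIP's implementation; the header name "version" and the 8-byte long encoding are assumptions for this example): during compaction, when two records share a key, the retained record would be chosen by comparing a version taken from a record header, falling back to offset order when the header is absent.

```java
import java.nio.ByteBuffer;
import java.util.Map;

// Sketch of a header-based compaction rule in the spirit of KIP-280.
// The header key "version" and its encoding are illustrative assumptions.
public class CompactionRule {

    // Decode an 8-byte big-endian long header value, or null if missing.
    static Long headerVersion(Map<String, byte[]> headers) {
        byte[] raw = headers.get("version");
        return raw == null ? null : ByteBuffer.wrap(raw).getLong();
    }

    // True if the candidate record should replace the currently retained one.
    // Falls back to offset order (today's behavior) when either header is absent.
    static boolean replaces(Map<String, byte[]> candidateHeaders, long candidateOffset,
                            Map<String, byte[]> retainedHeaders, long retainedOffset) {
        Long cv = headerVersion(candidateHeaders);
        Long rv = headerVersion(retainedHeaders);
        if (cv != null && rv != null) {
            return cv >= rv;  // higher (or equal, i.e. later) version wins
        }
        return candidateOffset >= retainedOffset;
    }

    // Helper to build a headers map carrying a given version.
    static Map<String, byte[]> version(long v) {
        return Map.of("version", ByteBuffer.allocate(8).putLong(v).array());
    }

    public static void main(String[] args) {
        // A v3 record replaces a v1 record even with a lower offset.
        System.out.println(replaces(version(3), 10, version(1), 20));
        // Without headers, the higher offset wins, as in today's compaction.
        System.out.println(replaces(Map.of(), 10, Map.of(), 20));
    }
}
```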
If I misunderstood your points, then please correct me. I appreciate the 
feedback!

On Thursday, April 5, 2018, 5:13:47 PM GMT+2, Ted Yu  wrote:
 
 In the 'Proposed Changes' section, can you expand 'OCC' ?

bq. Specifically changing this to anything other than "*offset*"

Is it possible to enumerate the keys ? In the future, more metadata would
be defined in record header - it is better to avoid collision.

Cheers

On Thu, Apr 5, 2018 at 2:05 AM, Luís Cabral 
wrote:

>
> This is embarrassingly hard to fix... going again...
> 
> KIP-280:  https://cwiki.apache.org/confluence/display/
> KAFKA/KIP-280%3A+Enhanced+log+compaction
> -
> Pull-4822:  https://github.com/apache/kafka/pull/4822
>
>
>    On Thursday, April 5, 2018, 11:03:22 AM GMT+2, Luís Cabral
>  wrote:
>
>  Fixing the links:
>  KIP-280:  https://cwiki.apache.org/confluence/display/KAFKA/KIP-280%3A+Enhanced+log+compaction
>  Pull-4822:  https://github.com/apache/kafka/pull/4822
>
>
> On 2018/04/05 08:44:00, Luís Cabral  wrote:
> > Hello all,
> > Starting a discussion for this feature.
> > KIP-280:  https://cwiki.apache.org/confluence/display/KAFKA/KIP-280%3A+Enhanced+log+compaction
> > Pull-4822:  https://github.com/apache/kafka/pull/4822
> > Kind Regards,
> > Luís
>
>  

Re: [DISCUSS] KIP-269: Substitution Within Configuration Values

2018-04-06 Thread Manikumar
Hi,

The substitution mechanism can be useful for configuring regular password configs
like ssl.keystore.password, ssl.truststore.password, etc.
It can be a good alternative to the previously proposed KIP-76 and will give
more options to the user.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-76+Enable+getting+password+from+executable+rather+than+passing+as+plaintext+in+config+files
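As a rough illustration of what such a substitution mechanism could look like (the `$[type=argument]` token syntax and the resolver names below are invented for this sketch and are not the KIP's actual notation):

```java
import java.util.Map;
import java.util.function.Function;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of substitution within a configuration value: tokens of the
// (invented) form $[type=argument] are resolved by pluggable resolvers,
// one per "substitutable type".
public class ConfigSubstitution {

    private static final Pattern TOKEN = Pattern.compile("\\$\\[(\\w+)=([^\\]]+)\\]");

    // Replace every token in 'value' using the resolver registered for its type.
    static String substitute(String value, Map<String, Function<String, String>> resolvers) {
        Matcher m = TOKEN.matcher(value);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            Function<String, String> resolver = resolvers.get(m.group(1));
            if (resolver == null)
                throw new IllegalArgumentException("Unknown substitutable type: " + m.group(1));
            m.appendReplacement(out, Matcher.quoteReplacement(resolver.apply(m.group(2))));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        // An "env"-style resolver; a real one might read a file or run an executable.
        Map<String, Function<String, String>> resolvers =
                Map.of("env", name -> "secret-from-" + name);
        System.out.println(substitute("password=$[env=KEYSTORE_PW]", resolvers));
    }
}
```

The resolver map is where new substitutable types would plug in, e.g. a "file" resolver reading the password from disk.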


Thanks,

On Fri, Apr 6, 2018 at 4:29 AM, Rajini Sivaram 
wrote:

> Hi Ron,
>
> For the password example, you could define a login CallbackHandler that
> processes PasswordCallback to provide passwords. We don't currently do this
> with PLAIN/SCRAM because login callback handlers were not configurable
> earlier and we haven't updated the login modules to do this. But that could
> be one way of providing passwords and integrating with other password
> sources, now that we have configurable login callback handlers. I was
> wondering whether similar approach could be used for the parameters that
> OAuth needed to obtain at runtime. We could still have this KIP with
> built-in substitutable types to handle common cases like getting options
> from a file without writing any code. But I wasn't sure if there were OAuth
> options that couldn't be handled as callbacks using the login callback
> handler.
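The callback approach described above could look roughly like this (a minimal sketch using only the standard javax.security.auth.callback API; the class name and password source are invented, and a real handler would fetch the password from a file, executable, or vault):

```java
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.callback.UnsupportedCallbackException;

// Sketch of a login callback handler that supplies a password when the
// login module asks for one via PasswordCallback. Only the plumbing is
// shown; the constant password stands in for a real external source.
public class VaultPasswordHandler implements CallbackHandler {

    private final char[] password;

    public VaultPasswordHandler(char[] password) {
        this.password = password;
    }

    @Override
    public void handle(Callback[] callbacks) throws UnsupportedCallbackException {
        for (Callback cb : callbacks) {
            if (cb instanceof PasswordCallback) {
                ((PasswordCallback) cb).setPassword(password);
            } else {
                throw new UnsupportedCallbackException(cb);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Simulate a login module requesting a password from the handler.
        PasswordCallback cb = new PasswordCallback("password: ", false);
        new VaultPasswordHandler("s3cret".toCharArray()).handle(new Callback[]{cb});
        System.out.println(new String(cb.getPassword()));
    }
}
```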
>
> On Thu, Apr 5, 2018 at 10:25 PM, Ron Dagostino  wrote:
>
> > Hi Rajini.  Thanks for the questions.  I could see someone wanting to
> > retrieve a password from a vended password vault solution (for example);
> > that is the kind of scenario that the ability to add new substitutable
> > types would be meant for.  I do still consider this KIP 269 to be a
> > prerequisite for the SASL/OAUTHBEARER KIP 255.  I am open to a different
> > perspective in case I missed or misunderstood your point.
> >
> > Ron
> >
> > On Thu, Apr 5, 2018 at 8:13 AM, Rajini Sivaram 
> > wrote:
> >
> > > Hi Ron,
> > >
> > > Now that login callback handlers are configurable, is this KIP still a
> > > pre-req for OAuth? I was wondering whether we still need the ability to
> > add
> > > new substitutable types or whether it would be sufficient to add the
> > > built-in ones to read from file etc.
> > >
> > >
> > > On Thu, Mar 29, 2018 at 6:48 AM, Ron Dagostino 
> > wrote:
> > >
> > > > Hi everyone.  There have been no comments on this KIP, so I intend to
> > put
> > > > it to a vote next week if there are no comments that might entail
> > changes
> > > > between now and then.  Please take a look in the meantime if you
> wish.
> > > >
> > > > Ron
> > > >
> > > > On Thu, Mar 15, 2018 at 2:36 PM, Ron Dagostino 
> > > wrote:
> > > >
> > > > > Hi everyone.
> > > > >
> > > > > I created KIP-269: Substitution Within Configuration Values
> > > > >  > > > 269+Substitution+Within+Configuration+Values>
> > > > > (https://cwiki.apache.org/confluence/display/KAFKA/KIP+269+
> > > > > Substitution+Within+Configuration+Values
> > > > >  > > > action?pageId=75968876>
> > > > > ).
> > > > >
> > > > > This KIP proposes adding support for substitution within client
> JAAS
> > > > > configuration values for PLAIN and SCRAM-related SASL mechanisms
> in a
> > > > > backwards-compatible manner and making the functionality available
> to
> > > > other
> > > > > existing (or future) configuration contexts where it is deemed
> > > > appropriate.
> > > > >
> > > > > This KIP was extracted from (and is now a prerequisite for)
> KIP-255:
> > > > > OAuth Authentication via SASL/OAUTHBEARER
> > > > >  > > > action?pageId=75968876>
> > > > > based on discussion of that KIP.
> > > > >
> > > > > Ron
> > > > >
> > > >
> > >
> >
>