Re: [DISCUSS] KIP-250 Add Support for Quorum-based Producer Acknowledgment

2018-02-02 Thread Guozhang Wang
Hi Dong,

Could you elaborate a bit more on how the controller could make leaders
switch between "all" and "quorum"?


Guozhang


On Fri, Feb 2, 2018 at 10:12 PM, Dong Lin  wrote:

> Hey Guozhang,
>
> Got it. Thanks for the detailed explanation. I guess my point is that we
> can probably achieve the best of both worlds, i.e. maintain the existing
> behavior of ack="all" while improving the tail latency.
>
> Thanks,
> Dong
>
>
>
> On Fri, Feb 2, 2018 at 8:43 PM, Guozhang Wang  wrote:
>
>> Hi Dong,
>>
>> Yes, in terms of fault tolerance "quorum" does not do better than "all":
>> as I said, with {min.isr} set to X+1, Kafka is able to tolerate only X failures.
>> So if A and B are partitioned off at the same time, then there are two
>> concurrent failures and we do not guarantee all acked messages will be
>> retained.
>>
>> The goal of my approach is to maintain the behavior of ack="all", which
>> happens to do better than what Kafka actually guarantees: when both A and
>> B are partitioned off, produced records will not be acked, since "all"
>> requires acks from all replicas (not only the ISRs; my previous email used an
>> incorrect term). This does better than tolerating X failures, and I was
>> proposing to keep that behavior so that we would not introduce any regression
>> "surprises" for users who are already using "all". In other words, "quorum"
>> trades a bit of the failure tolerance that is strictly defined by min.isr
>> for better tail latency.
>>
>>
>> Guozhang
>>
>>
>> On Fri, Feb 2, 2018 at 6:25 PM, Dong Lin  wrote:
>>
>>> Hey Guozhang,
>>>
>>> According to the new proposal, with 3 replicas, min.isr=2 and
>>> acks="quorum", it seems that acknowledged messages can still be truncated
>>> in the network partition scenario you mentioned, right? So I guess the goal
>>> is for some user to achieve better tail latency at the cost of potential
>>> message loss?
>>>
>>> If this is the case, then I think it may be better to adopt an approach
>>> where controller dynamically turn on/off this optimization. This provides
>>> user with peace of mind (i.e. no message loss) while still reducing tail
>>> latency. What do you think?
>>>
>>> Thanks,
>>> Dong
>>>
>>>
>>> On Fri, Feb 2, 2018 at 11:11 AM, Guozhang Wang 
>>> wrote:
>>>
 Hello Litao,

 Just double checking on the leader election details, do you have time
 to complete the proposal on that part?

 Also Jun mentioned one caveat related to KIP-250 on the KIP-232
 discussion thread that Dong is working on, I figured it is worth pointing
 out here with a tentative solution:


 ```
 Currently, if the producer uses acks=-1, a write will only succeed if
 the write is received by all in-sync replicas (i.e., committed). This
 is true even when min.isr is set since we first wait for a message to
 be committed and then check the min.isr requirement. KIP-250 may
 change that, but we can discuss the implication there.
 ```

 The caveat is that, if we change the acking semantics in KIP-250 so that
 we only require {min.isr} replicas to acknowledge a produce, then the
 following scenario becomes possible: imagine you have {A, B, C} replicas of a
 partition with A as the leader, all in the ISR list, and min.isr is 2.

 1. Say there is a network partition and both A and B are fenced off. C
 is elected as the new leader and shrinks its ISR list to only {C}; from A's
 point of view it does not know it has become a "ghost" and is no longer the
 leader, and all it does is shrink its ISR list to {A, B}.

 2. At this time, any new writes with ack=-1 to C will not be acked,
 since from C's pov there is only one replica. This is correct.

 3. However, any writes that are sent to A can still be acked (NOTE this is
 totally possible, since producers only refresh metadata periodically;
 additionally, if they happen to ask A or B they will get the stale metadata
 saying A is still the leader): since A thinks the ISR list is {A, B}, as
 long as B has replicated the message, A can ack the produce.

 This is not correct behavior: when the network heals, A will
 realize it is not the leader and will truncate its log. As a
 result the acked records are lost, violating Kafka's guarantees. And
 KIP-232 would not help prevent this scenario.


 Although one can argue that, with 3 replicas and min.isr set to 2,
 Kafka only guarantees tolerating one failure, while the above
 scenario actually involves two concurrent failures (both A and B are considered
 wedged), this is still a regression from the current behavior.

 So to resolve this issue, I'd propose we can change the semantics in
 the following way (this is only slightly different from your proposal):


 1. Add one more value to client-side acks config:

0: 

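To make the client-side knob concrete, here is a minimal producer configuration sketch; "quorum" is only the value proposed in this KIP and is not accepted by released Kafka clients, while the other configuration keys shown are standard.

```
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class AcksConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Today the strongest setting is acks=all (equivalently -1): the leader waits for
        // the full ISR, subject to the broker-side min.insync.replicas check.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        // Under the proposal a client could instead opt in with the new value, e.g.
        // props.put(ProducerConfig.ACKS_CONFIG, "quorum"); // proposed only, not in released clients
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send records as usual; acknowledgment semantics are governed by "acks"
        }
    }
}
```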
[jira] [Created] (KAFKA-6530) Use actual first offset of messages when rolling log segment for magic v2

2018-02-02 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-6530:
--

 Summary: Use actual first offset of messages when rolling log 
segment for magic v2
 Key: KAFKA-6530
 URL: https://issues.apache.org/jira/browse/KAFKA-6530
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson


When rolling a log segment, we currently use a heuristic to determine the base offset
of the next segment without decompressing the message set to find the actual first
offset, in order to avoid offset overflow. With the v2 message format, we can
find the first offset without needing decompression, so we can set the correct
base offset exactly.
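For illustration only (this is not the broker's internal code), the v2 record batch header begins with an 8-byte base offset, so the first offset can be read without decompressing the records:

{code:java}
import java.nio.ByteBuffer;

public class FirstOffsetSketch {
    // In the v2 (magic 2) format, the record batch header starts with an 8-byte
    // base offset, so the first offset is available without touching the compressed records.
    static long firstOffsetOfV2Batch(ByteBuffer batch) {
        return batch.getLong(batch.position());
    }

    public static void main(String[] args) {
        ByteBuffer fake = ByteBuffer.allocate(64);
        fake.putLong(0, 12345L); // pretend base offset written by the producer
        System.out.println(firstOffsetOfV2Batch(fake)); // prints 12345
    }
}
{code}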





Re: [DISCUSS] KIP-250 Add Support for Quorum-based Producer Acknowledgment

2018-02-02 Thread Dong Lin
Hey Guozhang,

Got it. Thanks for the detailed explanation. I guess my point is that we
can probably achieve the best of both worlds, i.e. maintain the existing
behavior of ack="all" while improving the tail latency.

Thanks,
Dong



On Fri, Feb 2, 2018 at 8:43 PM, Guozhang Wang  wrote:

> Hi Dong,
>
> Yes, in terms of fault tolerance "quorum" does not do better than "all":
> as I said, with {min.isr} set to X+1, Kafka is able to tolerate only X failures.
> So if A and B are partitioned off at the same time, then there are two
> concurrent failures and we do not guarantee all acked messages will be
> retained.
>
> The goal of my approach is to maintain the behavior of ack="all", which
> happens to do better than what Kafka actually guarantees: when both A and
> B are partitioned off, produced records will not be acked, since "all"
> requires acks from all replicas (not only the ISRs; my previous email used an
> incorrect term). This does better than tolerating X failures, and I was
> proposing to keep that behavior so that we would not introduce any regression
> "surprises" for users who are already using "all". In other words, "quorum"
> trades a bit of the failure tolerance that is strictly defined by min.isr
> for better tail latency.
>
>
> Guozhang
>
>
> On Fri, Feb 2, 2018 at 6:25 PM, Dong Lin  wrote:
>
>> Hey Guozhang,
>>
>> According to the new proposal, with 3 replicas, min.isr=2 and
>> acks="quorum", it seems that acknowledged messages can still be truncated
>> in the network partition scenario you mentioned, right? So I guess the goal
>> is for some user to achieve better tail latency at the cost of potential
>> message loss?
>>
>> If this is the case, then I think it may be better to adopt an approach
>> where controller dynamically turn on/off this optimization. This provides
>> user with peace of mind (i.e. no message loss) while still reducing tail
>> latency. What do you think?
>>
>> Thanks,
>> Dong
>>
>>
>> On Fri, Feb 2, 2018 at 11:11 AM, Guozhang Wang 
>> wrote:
>>
>>> Hello Litao,
>>>
>>> Just double checking on the leader election details, do you have time to
>>> complete the proposal on that part?
>>>
>>> Also Jun mentioned one caveat related to KIP-250 on the KIP-232
>>> discussion thread that Dong is working on, I figured it is worth pointing
>>> out here with a tentative solution:
>>>
>>>
>>> ```
>>> Currently, if the producer uses acks=-1, a write will only succeed if
>>> the write is received by all in-sync replicas (i.e., committed). This
>>> is true even when min.isr is set since we first wait for a message to
>>> be committed and then check the min.isr requirement. KIP-250 may change
>>> that, but we can discuss the implication there.
>>> ```
>>>
>>> The caveat is that, if we change the acking semantics in KIP-250 so that we
>>> only require {min.isr} replicas to acknowledge a produce, then the
>>> following scenario becomes possible: imagine you have {A, B, C} replicas of a
>>> partition with A as the leader, all in the ISR list, and min.isr is 2.
>>>
>>> 1. Say there is a network partition and both A and B are fenced off. C
>>> is elected as the new leader and shrinks its ISR list to only {C}; from A's
>>> point of view it does not know it has become a "ghost" and is no longer the
>>> leader, and all it does is shrink its ISR list to {A, B}.
>>>
>>> 2. At this time, any new writes with ack=-1 to C will not be acked,
>>> since from C's pov there is only one replica. This is correct.
>>>
>>> 3. However, any writes that are sent to A can still be acked (NOTE this is
>>> totally possible, since producers only refresh metadata periodically;
>>> additionally, if they happen to ask A or B they will get the stale metadata
>>> saying A is still the leader): since A thinks the ISR list is {A, B}, as
>>> long as B has replicated the message, A can ack the produce.
>>>
>>> This is not correct behavior: when the network heals, A will
>>> realize it is not the leader and will truncate its log. As a
>>> result the acked records are lost, violating Kafka's guarantees. And
>>> KIP-232 would not help prevent this scenario.
>>>
>>>
>>> Although one can argue that, with 3 replicas and min.isr set to 2, Kafka
>>> only guarantees tolerating one failure, while the above scenario actually
>>> involves two concurrent failures (both A and B are considered wedged), this
>>> is still a regression from the current behavior.
>>>
>>> So to resolve this issue, I'd propose we can change the semantics in the
>>> following way (this is only slightly different from your proposal):
>>>
>>>
>>> 1. Add one more value to client-side acks config:
>>>
>>>0: no acks needed at all.
>>>1: ack from the leader.
>>>all: acks from ALL the ISR replicas, AND the current number of ISR
>>> replicas has to be no smaller than {min.isr} (i.e. not changing this
>>> semantic).
>>>quorum: this is the new value, it requires ack from enough number of
>>> ISR replicas no 

Re: [DISCUSS] KIP-250 Add Support for Quorum-based Producer Acknowledgment

2018-02-02 Thread Guozhang Wang
Hi Dong,

Yes, in terms of fault tolerance "quorum" does not do better than "all": as
I said, with {min.isr} set to X+1, Kafka is able to tolerate only X failures. So
if A and B are partitioned off at the same time, then there are two
concurrent failures and we do not guarantee all acked messages will be
retained.

The goal of my approach is to maintain the behavior of ack="all", which
happens to do better than what Kafka actually guarantees: when both A and
B are partitioned off, produced records will not be acked, since "all"
requires acks from all replicas (not only the ISRs; my previous email used an
incorrect term). This does better than tolerating X failures, and I was
proposing to keep that behavior so that we would not introduce any regression
"surprises" for users who are already using "all". In other words, "quorum"
trades a bit of the failure tolerance that is strictly defined by min.isr
for better tail latency.


Guozhang


On Fri, Feb 2, 2018 at 6:25 PM, Dong Lin  wrote:

> Hey Guozhang,
>
> According to the new proposal, with 3 replicas, min.isr=2 and
> acks="quorum", it seems that acknowledged messages can still be truncated
> in the network partition scenario you mentioned, right? So I guess the goal
> is for some user to achieve better tail latency at the cost of potential
> message loss?
>
> If this is the case, then I think it may be better to adopt an approach
> where controller dynamically turn on/off this optimization. This provides
> user with peace of mind (i.e. no message loss) while still reducing tail
> latency. What do you think?
>
> Thanks,
> Dong
>
>
> On Fri, Feb 2, 2018 at 11:11 AM, Guozhang Wang  wrote:
>
>> Hello Litao,
>>
>> Just double checking on the leader election details, do you have time to
>> complete the proposal on that part?
>>
>> Also Jun mentioned one caveat related to KIP-250 on the KIP-232
>> discussion thread that Dong is working on, I figured it is worth pointing
>> out here with a tentative solution:
>>
>>
>> ```
>> Currently, if the producer uses acks=-1, a write will only succeed if
>> the write is received by all in-sync replicas (i.e., committed). This is
>> true even when min.isr is set since we first wait for a message to be
>> committed and then check the min.isr requirement. KIP-250 may change
>> that, but we can discuss the implication there.
>> ```
>>
>> The caveat is that, if we change the acking semantics in KIP-250 so that we
>> only require {min.isr} replicas to acknowledge a produce, then the
>> following scenario becomes possible: imagine you have {A, B, C} replicas of a
>> partition with A as the leader, all in the ISR list, and min.isr is 2.
>>
>> 1. Say there is a network partition and both A and B are fenced off. C is
>> elected as the new leader and shrinks its ISR list to only {C}; from A's
>> point of view it does not know it has become a "ghost" and is no longer the
>> leader, and all it does is shrink its ISR list to {A, B}.
>>
>> 2. At this time, any new writes with ack=-1 to C will not be acked, since
>> from C's pov there is only one replica. This is correct.
>>
>> 3. However, any writes that are sent to A can still be acked (NOTE this is
>> totally possible, since producers only refresh metadata periodically;
>> additionally, if they happen to ask A or B they will get the stale metadata
>> saying A is still the leader): since A thinks the ISR list is {A, B}, as
>> long as B has replicated the message, A can ack the produce.
>>
>> This is not correct behavior: when the network heals, A will
>> realize it is not the leader and will truncate its log. As a
>> result the acked records are lost, violating Kafka's guarantees. And
>> KIP-232 would not help prevent this scenario.
>>
>>
>> Although one can argue that, with 3 replicas and min.isr set to 2, Kafka
>> only guarantees tolerating one failure, while the above scenario actually
>> involves two concurrent failures (both A and B are considered wedged), this
>> is still a regression from the current behavior.
>>
>> So to resolve this issue, I'd propose we can change the semantics in the
>> following way (this is only slightly different from your proposal):
>>
>>
>> 1. Add one more value to client-side acks config:
>>
>>0: no acks needed at all.
>>1: ack from the leader.
>>all: acks from ALL the ISR replicas, AND the current number of ISR
>> replicas has to be no smaller than {min.isr} (i.e. not changing this
>> semantic).
>>quorum: this is the new value; it requires acks from a number of
>> ISR replicas that is no smaller than a majority of the replicas AND no
>> smaller than {min.isr}.
>>
>> 2. Clarify in the docs that if a user wants to tolerate X failures, she
>> needs to set client acks=all or acks=quorum (better tail latency than
>> "all") with the broker {min.isr} set to X+1; however, "all" is not necessarily
>> stronger than "quorum":
>>
>> For example, with 3 replicas, and {min.isr} set to 2. Here is a list of
>> scenarios:
>>
>> a. ISR list 

Jenkins build is back to normal : kafka-trunk-jdk8 #2384

2018-02-02 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk7 #3147

2018-02-02 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6494; ConfigCommand update to use AdminClient for broker configs

--
[...truncated 415.99 KB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow PASSED


Build failed in Jenkins: kafka-1.1-jdk7 #7

2018-02-02 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6494; ConfigCommand update to use AdminClient for broker configs

--
[...truncated 411.76 KB...]

kafka.utils.json.JsonValueTest > testDecodeString STARTED

kafka.utils.json.JsonValueTest > testDecodeString PASSED

kafka.utils.json.JsonValueTest > testJsonValueToString STARTED

kafka.utils.json.JsonValueTest > testJsonValueToString PASSED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArray STARTED

kafka.utils.json.JsonValueTest > testAsJsonArray PASSED

kafka.utils.json.JsonValueTest > testJsonValueHashCode STARTED

kafka.utils.json.JsonValueTest > testJsonValueHashCode PASSED

kafka.utils.json.JsonValueTest > testDecodeInt STARTED

kafka.utils.json.JsonValueTest > testDecodeInt PASSED

kafka.utils.json.JsonValueTest > testDecodeMap STARTED

kafka.utils.json.JsonValueTest > testDecodeMap PASSED

kafka.utils.json.JsonValueTest > testDecodeSeq STARTED

kafka.utils.json.JsonValueTest > testDecodeSeq PASSED

kafka.utils.json.JsonValueTest > testJsonObjectGet STARTED

kafka.utils.json.JsonValueTest > testJsonObjectGet PASSED

kafka.utils.json.JsonValueTest > testJsonValueEquals STARTED

kafka.utils.json.JsonValueTest > testJsonValueEquals PASSED

kafka.utils.json.JsonValueTest > testJsonArrayIterator STARTED

kafka.utils.json.JsonValueTest > testJsonArrayIterator PASSED

kafka.utils.json.JsonValueTest > testJsonObjectApply STARTED

kafka.utils.json.JsonValueTest > testJsonObjectApply PASSED

kafka.utils.json.JsonValueTest > testDecodeBoolean STARTED

kafka.utils.json.JsonValueTest > testDecodeBoolean PASSED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic STARTED

kafka.producer.AsyncProducerTest > testFailedSendRetryLogic PASSED

kafka.producer.AsyncProducerTest > testQueueTimeExpired STARTED

kafka.producer.AsyncProducerTest > testQueueTimeExpired PASSED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents STARTED

kafka.producer.AsyncProducerTest > testPartitionAndCollateEvents PASSED

kafka.producer.AsyncProducerTest > testBatchSize STARTED

kafka.producer.AsyncProducerTest > testBatchSize PASSED

kafka.producer.AsyncProducerTest > testSerializeEvents STARTED

kafka.producer.AsyncProducerTest > testSerializeEvents PASSED

kafka.producer.AsyncProducerTest > testProducerQueueSize STARTED

kafka.producer.AsyncProducerTest > testProducerQueueSize PASSED

kafka.producer.AsyncProducerTest > testRandomPartitioner STARTED

kafka.producer.AsyncProducerTest > testRandomPartitioner PASSED

kafka.producer.AsyncProducerTest > testInvalidConfiguration STARTED

kafka.producer.AsyncProducerTest > testInvalidConfiguration PASSED

kafka.producer.AsyncProducerTest > testInvalidPartition STARTED

kafka.producer.AsyncProducerTest > testInvalidPartition PASSED

kafka.producer.AsyncProducerTest > testNoBroker STARTED

kafka.producer.AsyncProducerTest > testNoBroker PASSED

kafka.producer.AsyncProducerTest > testProduceAfterClosed STARTED

kafka.producer.AsyncProducerTest > testProduceAfterClosed PASSED

kafka.producer.AsyncProducerTest > testJavaProducer STARTED

kafka.producer.AsyncProducerTest > testJavaProducer PASSED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder STARTED

kafka.producer.AsyncProducerTest > testIncompatibleEncoder PASSED

kafka.producer.SyncProducerTest > testReachableServer STARTED

kafka.producer.SyncProducerTest > testReachableServer PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge STARTED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge PASSED

kafka.producer.SyncProducerTest > testNotEnoughReplicas STARTED

kafka.producer.SyncProducerTest > testNotEnoughReplicas PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero STARTED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero PASSED

kafka.producer.SyncProducerTest > testProducerCanTimeout STARTED

kafka.producer.SyncProducerTest > testProducerCanTimeout PASSED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse STARTED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse PASSED

kafka.producer.SyncProducerTest > testEmptyProduceRequest STARTED

kafka.producer.SyncProducerTest > testEmptyProduceRequest PASSED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse STARTED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse PASSED

kafka.producer.ProducerTest > testSendToNewTopic STARTED

kafka.producer.ProducerTest > testSendToNewTopic PASSED

kafka.producer.ProducerTest > testAsyncSendCanCorrectlyFailWithTimeout STARTED

kafka.producer.ProducerTest > 

Re: [DISCUSS] KIP-250 Add Support for Quorum-based Producer Acknowledgment

2018-02-02 Thread Dong Lin
Hey Guozhang,

According to the new proposal, with 3 replicas, min.isr=2 and
acks="quorum", it seems that acknowledged messages can still be truncated
in the network partition scenario you mentioned, right? So I guess the goal
is for some users to achieve better tail latency at the cost of potential
message loss?

If this is the case, then I think it may be better to adopt an approach
where the controller dynamically turns this optimization on and off. This provides
users with peace of mind (i.e. no message loss) while still reducing tail
latency. What do you think?

Thanks,
Dong


On Fri, Feb 2, 2018 at 11:11 AM, Guozhang Wang  wrote:

> Hello Litao,
>
> Just double checking on the leader election details, do you have time to
> complete the proposal on that part?
>
> Also Jun mentioned one caveat related to KIP-250 on the KIP-232 discussion
> thread that Dong is working on, I figured it is worth pointing out here
> with a tentative solution:
>
>
> ```
> Currently, if the producer uses acks=-1, a write will only succeed if the
> write is received by all in-sync replicas (i.e., committed). This is true
> even when min.isr is set since we first wait for a message to be
> committed and then check the min.isr requirement. KIP-250 may change
> that, but we can discuss the implication there.
> ```
>
> The caveat is that, if we change the acking semantics in KIP-250 so that we
> only require {min.isr} replicas to acknowledge a produce, then the
> following scenario becomes possible: imagine you have {A, B, C} replicas of a
> partition with A as the leader, all in the ISR list, and min.isr is 2.
>
> 1. Say there is a network partition and both A and B are fenced off. C is
> elected as the new leader and shrinks its ISR list to only {C}; from A's
> point of view it does not know it has become a "ghost" and is no longer the
> leader, and all it does is shrink its ISR list to {A, B}.
>
> 2. At this time, any new writes with ack=-1 to C will not be acked, since
> from C's pov there is only one replica. This is correct.
>
> 3. However, any writes that are sent to A can still be acked (NOTE this is
> totally possible, since producers only refresh metadata periodically;
> additionally, if they happen to ask A or B they will get the stale metadata
> saying A is still the leader): since A thinks the ISR list is {A, B}, as
> long as B has replicated the message, A can ack the produce.
>
> This is not correct behavior: when the network heals, A will
> realize it is not the leader and will truncate its log. As a
> result the acked records are lost, violating Kafka's guarantees. And
> KIP-232 would not help prevent this scenario.
>
>
> Although one can argue that, with 3 replicas and min.isr set to 2, Kafka
> only guarantees tolerating one failure, while the above scenario actually
> involves two concurrent failures (both A and B are considered wedged), this
> is still a regression from the current behavior.
>
> So to resolve this issue, I'd propose we can change the semantics in the
> following way (this is only slightly different from your proposal):
>
>
> 1. Add one more value to client-side acks config:
>
>0: no acks needed at all.
>1: ack from the leader.
>all: acks from ALL the ISR replicas, AND the current number of ISR
> replicas has to be no smaller than {min.isr} (i.e. not changing this
> semantic).
>quorum: this is the new value; it requires acks from a number of
> ISR replicas that is no smaller than a majority of the replicas AND no
> smaller than {min.isr}.
>
> 2. Clarify in the docs that if a user wants to tolerate X failures, she
> needs to set client acks=all or acks=quorum (better tail latency than
> "all") with the broker {min.isr} set to X+1; however, "all" is not necessarily
> stronger than "quorum":
>
> For example, with 3 replicas, and {min.isr} set to 2. Here is a list of
> scenarios:
>
> a. ISR list has 3: "all" waits for all 3, "quorum" waits for 2 of them.
> b. ISR list has 2: "all" and "quorum" wait for both of them.
> c. ISR list has 1: "all" and "quorum" would not ack.
>
> If {min.isr} is set to 1, interestingly, here would be the list of
> scenarios:
>
> a. ISR list has 3: "all" waits for all 3, "quorum" waits for 2 of them.
> b. ISR list has 2: "all" and "quorum" wait for both of them.
> c. ISR list has 1: "all" waits for the leader to return, while "quorum" would
> not ack (because it requires that the number is >= {min.isr} AND >= {majority of
> num.replicas}, so it's actually stronger than "all").
>
>
> WDYT?
>
>
> Guozhang
>
>
> On Thu, Jan 25, 2018 at 8:13 PM, Dong Lin  wrote:
>
>> Hey Litao,
>>
>> Not sure there will be an easy way to select the broker with highest LEO
>> without losing acknowledged messages. In case it is useful, here is another
>> idea. Maybe we can have a mechanism to switch between the min.isr and
>> isr set for determining when to acknowledge a message. Controller can
>> probably use RPC to request the current leader to use isr set 

Jenkins build is back to normal : kafka-1.1-jdk7 #6

2018-02-02 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk7 #3146

2018-02-02 Thread Apache Jenkins Server
See 


Changes:

[mjsax] KAFKA-6354: Update KStream JavaDoc using new State Store API (#4456)

--
[...truncated 413.27 KB...]

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls STARTED

kafka.security.auth.SimpleAclAuthorizerTest > 
testHighConcurrencyModificationOfResourceAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testLoadCache PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive STARTED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.PermissionTypeTest > testJavaConversions STARTED

kafka.security.auth.PermissionTypeTest > testJavaConversions PASSED

kafka.security.auth.PermissionTypeTest > testFromString STARTED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgsWrongSetValue PASSED

kafka.KafkaTest > testKafkaSslPasswords STARTED

kafka.KafkaTest > testKafkaSslPasswords PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgs STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgs PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheEnd PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsOnly PASSED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging STARTED

kafka.KafkaTest > testGetKafkaConfigFromArgsNonArgsAtTheBegging PASSED

kafka.producer.SyncProducerTest > testReachableServer STARTED

kafka.producer.SyncProducerTest > testReachableServer PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge STARTED

kafka.producer.SyncProducerTest > testMessageSizeTooLarge PASSED

kafka.producer.SyncProducerTest > testNotEnoughReplicas STARTED

kafka.producer.SyncProducerTest > testNotEnoughReplicas PASSED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero STARTED

kafka.producer.SyncProducerTest > testMessageSizeTooLargeWithAckZero PASSED

kafka.producer.SyncProducerTest > testProducerCanTimeout STARTED

kafka.producer.SyncProducerTest > testProducerCanTimeout PASSED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse STARTED

kafka.producer.SyncProducerTest > testProduceRequestWithNoResponse PASSED

kafka.producer.SyncProducerTest > testEmptyProduceRequest STARTED

kafka.producer.SyncProducerTest > testEmptyProduceRequest PASSED

kafka.producer.SyncProducerTest > testProduceCorrectlyReceivesResponse STARTED


Jenkins build is back to normal : kafka-1.0-jdk7 #144

2018-02-02 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-6494) Extend ConfigCommand to update broker config using new AdminClient

2018-02-02 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-6494.

   Resolution: Fixed
Fix Version/s: (was: 1.2.0)
   1.1.0

> Extend ConfigCommand to update broker config using new AdminClient
> --
>
> Key: KAFKA-6494
> URL: https://issues.apache.org/jira/browse/KAFKA-6494
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 1.1.0
>
>
> Add --bootstrap-server and --command-config options for new AdminClient. 
> Update ConfigCommand to use new AdminClient for dynamic broker config updates 
> in KIP-226. Full conversion of ConfigCommand to new AdminClient will be done 
> later under KIP-248.
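For illustration, broker config updates of the kind ConfigCommand now issues can also be made directly through the new AdminClient; the sketch below assumes broker id 0 and uses log.cleaner.threads purely as an example of a dynamically updatable config (see the KIP-226 docs for the supported set).

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class BrokerConfigUpdateSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // ConfigResource.Type.BROKER addresses per-broker configs; "0" is the broker id.
            ConfigResource broker0 = new ConfigResource(ConfigResource.Type.BROKER, "0");
            Config update = new Config(
                    Collections.singleton(new ConfigEntry("log.cleaner.threads", "2")));
            admin.alterConfigs(Collections.singletonMap(broker0, update)).all().get();
        }
    }
}
{code}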





Kafka streams API for .net

2018-02-02 Thread Bykov, Serge
Hello.

We are a .NET shop and were wondering if you are planning Streams API support
for .NET?

If not, where can I find documentation on the REST API around KSQL mentioned in 
this ticket:
https://github.com/confluentinc/confluent-kafka-dotnet/issues/344

Finally, if none of the above, then how can I build a .net api (websocket 
based) that can stream notifications near real time via kafka?

Thank you

Serge Bykov
BPI Integration Architecture
Desk: 416-215-3060
Cell: 416-825-2563



[jira] [Created] (KAFKA-6529) Broker leaks memory and file descriptors after sudden client disconnects

2018-02-02 Thread Graham Campbell (JIRA)
Graham Campbell created KAFKA-6529:
--

 Summary: Broker leaks memory and file descriptors after sudden 
client disconnects
 Key: KAFKA-6529
 URL: https://issues.apache.org/jira/browse/KAFKA-6529
 Project: Kafka
  Issue Type: Bug
  Components: network
Affects Versions: 0.11.0.2, 1.0.0
Reporter: Graham Campbell


If a producer forcefully disconnects from a broker while it has staged
receives, that connection enters a limbo state where it is no longer processed
by the SocketServer.Processor, leaking the file descriptor for the socket and
the memory used for the staged receive queue for that connection.

We noticed this during an upgrade from 0.9.0.2 to 0.11.0.2. Immediately after 
the rolling restart to upgrade, open file descriptors on the brokers started 
climbing uncontrollably. In a few cases brokers reached our configured max open 
files limit of 100k and crashed before we rolled back.

We tracked this down to a buildup of muted connections in the 
Selector.closingChannels list. If a client disconnects from the broker with 
multiple pending produce requests, when the broker attempts to send an ack to 
the client it recieves an IOException because the TCP socket has been closed. 
This triggers the Selector to close the channel, but because it still has 
pending requests, it adds it to Selector.closingChannels to process those 
requests. However, because that exception was triggered by trying to send a 
response, the SocketServer.Processor has marked the channel as muted and will 
no longer process it at all.

*Reproduced by:*
1. Start a Kafka broker/cluster.
2. Have a client produce several messages and then disconnect abruptly (eg.
_./rdkafka_performance -P -x 100 -b broker:9092 -t test_topic_).
3. The broker then leaks the file descriptor previously used for the TCP socket and
the memory for the unprocessed messages.

*Proposed solution (which we've implemented internally)*
Whenever an exception is encountered when writing to a socket in
Selector.pollSelectionKeys(...), record that the connection failed a send by
adding the KafkaChannel ID to Selector.failedSends, then re-raise the exception
to still trigger the socket disconnection logic. Since every exception raised
in this function triggers a disconnect, we also treat any exception while
writing to the socket as a failed send.
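A condensed sketch of that idea, using a stand-in Channel interface rather than Kafka's internal KafkaChannel/Selector types:

{code:java}
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

public class FailedSendTrackingSketch {
    // Minimal stand-in for the broker's channel abstraction, for illustration only.
    interface Channel {
        String id();
        void write() throws IOException;
    }

    private final Set<String> failedSends = new HashSet<>();

    // Any exception raised while writing is recorded as a failed send before being
    // re-thrown, so the disconnection logic still runs and staged receives are not leaked.
    void write(Channel channel) throws IOException {
        try {
            channel.write();
        } catch (IOException e) {
            failedSends.add(channel.id()); // analogous to Selector.failedSends above
            throw e;
        }
    }
}
{code}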





Re: [VOTE] KIP-222 - Add "describe consumer group" to KafkaAdminClient

2018-02-02 Thread Matthias J. Sax
Feature freeze for 1.1 has already passed, so KIP-222 will not be part of
the 1.1 release.

I updated the JIRA with target version 1.2.

-Matthias

On 2/1/18 3:57 PM, Jeff Widman wrote:
> Don't forget to update the wiki page now that the vote has passed--it
> currently says this KIP is "under discussion":
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-222+-+Add+Consumer+Group+operations+to+Admin+API
> 
> Also, should the JIRA ticket be tagged with 1.1.0 (provided this is merged
> by then)?
> 
> On Mon, Jan 22, 2018 at 3:26 AM, Jorge Esteban Quilcate Otoya <
> quilcate.jo...@gmail.com> wrote:
> 
>> My bad, KIP is updated:
>>
>> ```
>> public class MemberDescription {
>> private final String consumerId;
>> private final String clientId;
>> private final String host;
>> private final MemberAssignment assignment;
>> }
>> public class MemberAssignment {
>> private final List<TopicPartition> assignment;
>> }
>> ```
>>
>> Cheers,
>> Jorge.
>>
>> El lun., 22 ene. 2018 a las 6:46, Jun Rao () escribió:
>>
>>> Hi, Jorge,
>>>
>>> For #3, I wasn't suggesting using the internal Assignment. We can just
>>> introduce a new public type that wraps List. We can call
>> it
>>> sth like MemberAssignment to distinguish it from the internal one. This
>>> makes extending the type in the future easier.
>>>
>>> Thanks,
>>>
>>> Jun
>>>
>>> On Sun, Jan 21, 2018 at 3:19 PM, Jorge Esteban Quilcate Otoya <
>>> quilcate.jo...@gmail.com> wrote:
>>>
 Hi all,

 Thanks all for your votes and approving this KIP :)

 @Jun Rao:

 1. Yes, KIP is updated with MemberDescription.
 2. Changed:
 ```
 public class ListGroupOffsetsResult {
 final KafkaFuture<Map<TopicPartition, OffsetAndMetadata>> future;
 ```
 3. Not sure about this one as Assignment type is part of
 o.a.k.clients.consumer.internals. Will we be breaking encapsulation
>> if we
 expose it as part of AdminClient?
 Currently is defined as:
 ```
 public class MemberDescription {
 private final String consumerId;
 private final String clientId;
 private final String host;
 private final List<TopicPartition> assignment;
 }
 ```

 BTW: I've created a PR with the work in progress:
 https://github.com/apache/kafka/pull/4454

 Cheers,
 Jorge.

 El vie., 19 ene. 2018 a las 23:52, Jun Rao ()
>>> escribió:

> Hi, Jorge,
>
> Thanks for the KIP. Looks good to me overall. A few comments below.
>
> 1. It seems that ConsumerDescription should be MemberDescription?
>
> 2. Each offset can have an optional metadata. So, in
> ListGroupOffsetsResult, perhaps it's better to have
> KafkaFuture<Map<TopicPartition, OffsetAndMetadata>>, where
> OffsetAndMetadata contains an offset and a metadata of String.
>
> 3. As Jason mentioned in the discussion, it would be nice to extend
>>> this
> api to support general group management, instead of just the consumer
 group
> in the future. For that, it might be better for MemberDescription to
>>> have
> assignment of type Assignment, which consists of a list of
>> partitions.
> Then, in the future, we can add other fields to Assignment.
>
> Jun
>
>
> On Thu, Jan 18, 2018 at 9:45 AM, Mickael Maison <
 mickael.mai...@gmail.com>
> wrote:
>
>> +1 (non binding), thanks
>>
>> On Thu, Jan 18, 2018 at 5:41 PM, Colin McCabe 
> wrote:
>>> +1 (non-binding)
>>>
>>> Colin
>>>
>>>
>>> On Thu, Jan 18, 2018, at 07:36, Ted Yu wrote:
 +1
  Original message From: Bill Bejeck <
> bbej...@gmail.com>
 Date: 1/18/18  6:59 AM  (GMT-08:00) To: dev@kafka.apache.org
 Subject:
 Re: [VOTE] KIP-222 - Add "describe consumer group" to
 KafkaAdminClient
 Thanks for the KIP

 +1

 Bill

 On Thu, Jan 18, 2018 at 4:24 AM, Rajini Sivaram <
>> rajinisiva...@gmail.com>
 wrote:

> +1 (binding)
>
> Thanks for the KIP, Jorge.
>
> Regards,
>
> Rajini
>
> On Wed, Jan 17, 2018 at 9:04 PM, Guozhang Wang <
 wangg...@gmail.com>
>> wrote:
>
>> +1 (binding). Thanks Jorge.
>>
>>
>> Guozhang
>>
>> On Wed, Jan 17, 2018 at 11:29 AM, Gwen Shapira <
 g...@confluent.io
>>
> wrote:
>>
>>> Hey, since there were no additional comments in the
 discussion,
>> I'd
> like
>> to
>>> resume the voting.
>>>
>>> +1 (binding)
>>>
>>> On Fri, Nov 17, 2017 at 9:15 AM Guozhang Wang <
> wangg...@gmail.com
>>>
>> wrote:
>>>
 Hello Jorge,

 I left some comments on the discuss thread. The 
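To make the shape of the proposed result types concrete, here is a hypothetical caller-side sketch; the stand-in classes and accessors below mirror the types quoted in this thread and are assumptions about the KIP, not a released API:

```
import java.util.Collections;
import java.util.List;
import org.apache.kafka.common.TopicPartition;

public class DescribeGroupSketch {
    // Stand-ins mirroring the MemberDescription/MemberAssignment shapes discussed above.
    static class MemberAssignment {
        List<TopicPartition> topicPartitions() { return Collections.emptyList(); }
    }
    static class MemberDescription {
        String consumerId() { return "consumer-1"; }
        String clientId()   { return "client-1"; }
        String host()       { return "host-1"; }
        MemberAssignment assignment() { return new MemberAssignment(); }
    }

    public static void main(String[] args) {
        // A describe-consumer-group result would expose a collection of members:
        for (MemberDescription m : Collections.singletonList(new MemberDescription())) {
            System.out.printf("%s (%s) on %s owns %d partitions%n",
                    m.consumerId(), m.clientId(), m.host(),
                    m.assignment().topicPartitions().size());
        }
    }
}
```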

Re: [DISCUSS] KIP-250 Add Support for Quorum-based Producer Acknowledgment

2018-02-02 Thread Guozhang Wang
Hello Litao,

Just double checking on the leader election details, do you have time to
complete the proposal on that part?

Also Jun mentioned one caveat related to KIP-250 on the KIP-232 discussion
thread that Dong is working on, I figured it is worth pointing out here
with a tentative solution:


```
Currently, if the producer uses acks=-1, a write will only succeed if the
write is received by all in-sync replicas (i.e., committed). This is true
even when min.isr is set since we first wait for a message to be committed
and then check the min.isr requirement. KIP-250 may change that, but we can
discuss the implication there.
```

The caveat is that, if we change the acking semantics in KIP-250 so that we
only require {min.isr} replicas to acknowledge a produce, then the following
scenario becomes possible: imagine you have {A, B, C} replicas of a
partition with A as the leader, all in the ISR list, and min.isr is 2.

1. Say there is a network partition and both A and B are fenced off. C is
elected as the new leader and shrinks its ISR list to only {C}; from A's
point of view it does not know it has become a "ghost" and is no longer the
leader, and all it does is shrink its ISR list to {A, B}.

2. At this time, any new writes with ack=-1 to C will not be acked, since
from C's pov there is only one replica. This is correct.

3. However, any writes that are sent to A can still be acked (NOTE this is
totally possible, since producers only refresh metadata periodically;
additionally, if they happen to ask A or B they will get the stale metadata
saying A is still the leader): since A thinks the ISR list is {A, B}, as long
as B has replicated the message, A can ack the produce.

This is not correct behavior: when the network heals, A will realize
it is not the leader and will truncate its log. As a result the
acked records are lost, violating Kafka's guarantees. And KIP-232 would not
help prevent this scenario.


Although one can argue that, with 3 replicas and min.isr set to 2, Kafka only
guarantees tolerating one failure, while the above scenario actually involves
two concurrent failures (both A and B are considered wedged), this
is still a regression from the current behavior.

So to resolve this issue, I'd propose we can change the semantics in the
following way (this is only slightly different from your proposal):


1. Add one more value to client-side acks config:

   0: no acks needed at all.
   1: ack from the leader.
   all: acks from ALL the ISR replicas, AND the current number of ISR
replicas has to be no smaller than {min.isr} (i.e. not changing this
semantic).
   quorum: this is the new value; it requires acks from a number of ISR
replicas that is no smaller than a majority of the replicas AND no smaller
than {min.isr}.

2. Clarify in the docs that if a user wants to tolerate X failures, she
needs to set client acks=all or acks=quorum (better tail latency than
"all") with the broker {min.isr} set to X+1; however, "all" is not necessarily
stronger than "quorum":

For example, with 3 replicas, and {min.isr} set to 2. Here is a list of
scenarios:

a. ISR list has 3: "all" waits for all 3, "quorum" waits for 2 of them.
b. ISR list has 2: "all" and "quorum" wait for both of them.
c. ISR list has 1: "all" and "quorum" would not ack.

If {min.isr} is set to 1, interestingly, here would be the list of
scenarios:

a. ISR list has 3: "all" waits for all 3, "quorum" waits for 2 of them.
b. ISR list has 2: "all" and "quorum" wait for both of them.
c. ISR list has 1: "all" waits for the leader to return, while "quorum" would
not ack (because it requires that the number is >= {min.isr} AND >= {majority of
num.replicas}, so it's actually stronger than "all").
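A small illustrative sketch of the acking rule described above (not broker code): under "quorum" the required number of ISR acks is the larger of {min.isr} and a majority of the replica count, while "all" waits for the whole current ISR subject to the {min.isr} check.

```
public class AckRuleSketch {
    // Acks required under the proposed "quorum" setting.
    static int requiredAcksForQuorum(int replicationFactor, int minIsr) {
        int majority = replicationFactor / 2 + 1;
        return Math.max(majority, minIsr);
    }

    // Whether a produce could eventually be acked, given the current ISR size.
    static boolean canAck(String acks, int isrSize, int replicationFactor, int minIsr) {
        switch (acks) {
            case "all":    return isrSize >= minIsr; // then waits for the whole current ISR
            case "quorum": return isrSize >= requiredAcksForQuorum(replicationFactor, minIsr);
            default:       return true;              // acks=0 or acks=1
        }
    }

    public static void main(String[] args) {
        // Mirrors scenario c above with 3 replicas, {min.isr}=1 and the ISR shrunk to 1:
        System.out.println(canAck("all", 1, 3, 1));    // true: "all" waits only for the leader
        System.out.println(canAck("quorum", 1, 3, 1)); // false: a majority of 3 is 2
    }
}
```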


WDYT?


Guozhang


On Thu, Jan 25, 2018 at 8:13 PM, Dong Lin  wrote:

> Hey Litao,
>
> Not sure there will be an easy way to select the broker with highest LEO
> without losing acknowledged messages. In case it is useful, here is another
> idea. Maybe we can have a mechanism to switch between the min.isr and
> isr set for determining when to acknowledge a message. Controller can
> probably use RPC to request the current leader to use isr set before it
> sends LeaderAndIsrRequest for leadership change.
>
> Regards,
> Dong
>
>
> On Wed, Jan 24, 2018 at 7:29 PM, Litao Deng  >
> wrote:
>
> > Thanks Jun for the detailed feedback.
> >
> > Yes, for #1, I mean the live replicas from the ISR.
> >
> > Actually, I believe for all of the 4 new leader election strategies
> > (offline, reassign, preferred replica and controlled shutdown), we need
> to
> > make corresponding changes. Will document the details in the KIP.
> >
> > On Wed, Jan 24, 2018 at 3:59 PM, Jun Rao  wrote:
> >
> > > Hi, Litao,
> > >
> > > Thanks for the KIP. Good proposal. A few comments below.
> > >
> > > 1. The KIP says "select the live replica with the largest LEO".  I
> guess
> > > what you meant is selecting the live replicas in ISR with 

[jira] [Created] (KAFKA-6528) Transient failure in DynamicBrokerReconfigurationTest.testThreadPoolResize

2018-02-02 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-6528:
--

 Summary: Transient failure in 
DynamicBrokerReconfigurationTest.testThreadPoolResize
 Key: KAFKA-6528
 URL: https://issues.apache.org/jira/browse/KAFKA-6528
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson


{code:java}
java.lang.AssertionError: expected:<108> but was:<123>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
kafka.server.DynamicBrokerReconfigurationTest.stopAndVerifyProduceConsume(DynamicBrokerReconfigurationTest.scala:755)
at 
kafka.server.DynamicBrokerReconfigurationTest.verifyThreadPoolResize$1(DynamicBrokerReconfigurationTest.scala:443)
at 
kafka.server.DynamicBrokerReconfigurationTest.testThreadPoolResize(DynamicBrokerReconfigurationTest.scala:451){code}





[jira] [Created] (KAFKA-6527) Transient failure in DynamicBrokerReconfigurationTest.testDefaultTopicConfig

2018-02-02 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-6527:
--

 Summary: Transient failure in 
DynamicBrokerReconfigurationTest.testDefaultTopicConfig
 Key: KAFKA-6527
 URL: https://issues.apache.org/jira/browse/KAFKA-6527
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson


{code:java}
java.lang.AssertionError: Log segment size increase not applied
at kafka.utils.TestUtils$.fail(TestUtils.scala:355)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:865)
at 
kafka.server.DynamicBrokerReconfigurationTest.testDefaultTopicConfig(DynamicBrokerReconfigurationTest.scala:348)
{code}





[jira] [Created] (KAFKA-6526) Update controller to handle changes to unclean.leader.election.enable

2018-02-02 Thread Rajini Sivaram (JIRA)
Rajini Sivaram created KAFKA-6526:
-

 Summary: Update controller to handle changes to 
unclean.leader.election.enable
 Key: KAFKA-6526
 URL: https://issues.apache.org/jira/browse/KAFKA-6526
 Project: Kafka
  Issue Type: Sub-task
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram


At the moment, updates to the default unclean.leader.election.enable use the same
code path as updates to topic overrides. This requires a controller change for
the new value to take effect. It would be good if we can update the controller
to handle the change.





[jira] [Resolved] (KAFKA-6496) NAT and Kafka

2018-02-02 Thread Ronald van de Kuil (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ronald van de Kuil resolved KAFKA-6496.
---
Resolution: Not A Bug

> NAT and Kafka
> -
>
> Key: KAFKA-6496
> URL: https://issues.apache.org/jira/browse/KAFKA-6496
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 1.0.0
>Reporter: Ronald van de Kuil
>Priority: Critical
>
> Hi,
> As far as I know Kafka itself does not support NAT based on a test that I did 
> with my physical router.
>  
> I can imagine that a real use case exists where NAT is desirable. For 
> example, an OpenStack installation where Kafka hides behind floating ip 
> addresses.
>  
> Are there any plans to make Kafka NAT-friendly?
>  
> Best Regards,
> Ronald





[jira] [Resolved] (KAFKA-2925) NullPointerException if FileStreamSinkTask is stopped before initialization finishes

2018-02-02 Thread Robert Yokota (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Yokota resolved KAFKA-2925.
--
Resolution: Cannot Reproduce

I wasn't able to reproduce the NPE and by reviewing the code it doesn't seem 
possible any longer.  Closing this as cannot reproduce.

> NullPointerException if FileStreamSinkTask is stopped before initialization 
> finishes
> 
>
> Key: KAFKA-2925
> URL: https://issues.apache.org/jira/browse/KAFKA-2925
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Robert Yokota
>Priority: Minor
>
> If a FileStreamSinkTask is stopped too quickly after a distributed herder 
> rebalances work, it can result in cleanup happening without start() ever 
> being called:
> {quote}
> Sink task org.apache.kafka.connect.runtime.WorkerSinkTask@f9ac651 was stopped 
> before completing join group. Task initialization and start is being skipped 
> (org.apache.kafka.connect.runtime.WorkerSinkTask:150)
> {quote}
> This is actually a bit weird since stop() is still called so resources 
> allocated in the constructor can be cleaned up, but possibly unexpected that 
> stop() will be called without start() ever being called.
> Because the code in FileStreamSinkTask's stop() method assumes start() has 
> been called, it can result in a NullPointerException because it assumes the 
> PrintStream is already initialized.
> The easy fix is to check for nulls before closing. However, we should 
> probably also consider whether the currently possible sequence of events is 
> confusing, and whether we should not invoke stop() and instead make it clear in the SinkTask 
> interface that you should only initialize stuff in the constructor that won't 
> need any manual cleanup later.
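The null-guard fix amounts to something like the following sketch (the field and method names are illustrative, not the exact connector code):

{code:java}
import java.io.PrintStream;

public class NullSafeStopSketch {
    private PrintStream outputStream; // only initialized in start(), so it may still be null

    public void stop() {
        // start() may never have been called if the task was stopped during a rebalance,
        // so guard against an uninitialized stream before closing it.
        if (outputStream != null && outputStream != System.out) {
            outputStream.close();
        }
    }
}
{code}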





Re: [VOTE] KIP-206: Add support for UUID serialization and deserialization

2018-02-02 Thread Colin McCabe
Hi Brandon,

I think people are generally busy working on the upcoming release now.  Sorry 
for the inconvenience.

best,


On Fri, Feb 2, 2018, at 07:33, Brandon Kirchner wrote:
> I'd really like this to get 2 more binding votes. If that doesn't happen,
> how / can this still move forward? Not sure what the procedure is...
> 
> Brandon K.
> 
> On Tue, Jan 30, 2018 at 9:56 AM, Mickael Maison 
> wrote:
> 
> > +1 (non binding)
> > Thanks for the KIP
> >
> > On Tue, Jan 30, 2018 at 9:49 AM, Manikumar 
> > wrote:
> > > +1 (non-binding)
> > >
> > > On Tue, Jan 30, 2018 at 11:50 AM, Ewen Cheslack-Postava <
> > e...@confluent.io>
> > > wrote:
> > >
> > >> +1 (binding)
> > >>
> > >> On Fri, Jan 26, 2018 at 9:16 AM, Colin McCabe 
> > wrote:
> > >>
> > >> > +1 (non-binding)
> > >> >
> > >> >
> > >> >
> > >> > On Fri, Jan 26, 2018, at 08:29, Ted Yu wrote:
> > >> > > +1
> > >> > >
> > >> > > On Fri, Jan 26, 2018 at 7:00 AM, Brandon Kirchner <
> > >> > > brandon.kirch...@gmail.com> wrote:
> > >> > >
> > >> > > > Hi all,
> > >> > > >
> > >> > > > I would like to (re)start the voting process for KIP-206:
> > >> > > >
> > >> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > >> > > > 206%3A+Add+support+for+UUID+serialization+and+deserialization
> > >> > > >
> > >> > > > The KIP adds a UUID serializer and deserializer. Possible
> > >> > implementation
> > >> > > > can be seen here --
> > >> > > >
> > >> > > > https://github.com/apache/kafka/pull/4438
> > >> > > >
> > >> > > > Original discussion and voting thread can be seen here --
> > >> > > > http://search-hadoop.com/m/Kafka/uyzND1dlgePJY7l9?subj=+
> > >> > > > DISCUSS+KIP+206+Add+support+for+UUID+serialization+and+
> > >> deserialization
> > >> > > >
> > >> > > >
> > >> > > > Thanks!
> > >> > > > Brandon K.
> > >> > > >
> > >> >
> > >>
> >
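For reference, a minimal sketch of a UUID serializer/deserializer pair along the lines of KIP-206, assuming a string-based (UTF-8) encoding; the classes actually added by the KIP may differ in naming and error handling:

```
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.UUID;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serializer;

public class UuidSerdeSketch {
    public static class UuidSerializer implements Serializer<UUID> {
        @Override public void configure(Map<String, ?> configs, boolean isKey) { }
        @Override public byte[] serialize(String topic, UUID data) {
            // Encode the canonical string form; null maps to null (tombstone-friendly).
            return data == null ? null : data.toString().getBytes(StandardCharsets.UTF_8);
        }
        @Override public void close() { }
    }

    public static class UuidDeserializer implements Deserializer<UUID> {
        @Override public void configure(Map<String, ?> configs, boolean isKey) { }
        @Override public UUID deserialize(String topic, byte[] data) {
            return data == null ? null : UUID.fromString(new String(data, StandardCharsets.UTF_8));
        }
        @Override public void close() { }
    }
}
```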


[jira] [Created] (KAFKA-6525) Connect should allow pluggable encryption for records

2018-02-02 Thread Randall Hauch (JIRA)
Randall Hauch created KAFKA-6525:


 Summary: Connect should allow pluggable encryption for records
 Key: KAFKA-6525
 URL: https://issues.apache.org/jira/browse/KAFKA-6525
 Project: Kafka
  Issue Type: New Feature
  Components: KafkaConnect
Reporter: Randall Hauch


The Connect framework does not easily support pluggable encryption and 
decryption mechanisms. It is possible to use custom Converters to 
encrypt/decrypt individual keys and values when the encryption metadata (keys, 
algorithm, etc.) can be specified in the Converter, or when the key and/or 
value are _wrapped_ to include the metadata. 

However, if the encryption metadata is to be stored in headers, note that as of 
AK 1.1 Connect supports headers in connectors and SMTs, but not in Converters. 

We should make it easier to plug encryption and decryption mechanisms into 
Connect. Since we're moving to Java 8, one approach might be to change the 
Converter interface to add default methods that also supply the headers (and 
maybe the whole record). 

An alternative is to define a new plugin interface that can be used to 
filter/transform/map the entire source and sink records. We'd call this for 
source connectors before the Converter, and for sink connectors after the 
Converter.
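
As a rough illustration of the default-method idea (not a committed API; the interface below is a stand-in for the real Converter):

{code:java}
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaAndValue;

// Stand-in for org.apache.kafka.connect.storage.Converter, showing how header-aware
// default overloads could let a converter read/write encryption metadata
// (key ids, algorithm, IV, ...) from record headers.
public interface HeaderAwareConverter {

    byte[] fromConnectData(String topic, Schema schema, Object value);

    SchemaAndValue toConnectData(String topic, byte[] value);

    // Default overloads keep existing converters source-compatible while letting
    // new ones opt in to the headers.
    default byte[] fromConnectData(String topic, Headers headers, Schema schema, Object value) {
        return fromConnectData(topic, schema, value);
    }

    default SchemaAndValue toConnectData(String topic, Headers headers, byte[] value) {
        return toConnectData(topic, value);
    }
}
{code}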





Re: [DISCUSS] KIP-252: Extend ACLs to allow filtering based on ip ranges and subnets

2018-02-02 Thread Sönke Liebau
Hi Manikumar,

you are right, 5713 is a bit ambiguous about which fields are considered in
scope, but I agree that wildcards for IPs are not necessary when we have
ranges.

I am wondering, though, whether we might want to extend the scope of this KIP a
bit while we are changing the ACL and authorizer classes anyway.

After considering this a bit on a flight with no wifi yesterday, I came up
with the following:

* wildcards or regular expressions for principals, groups and topics
* extend the KafkaPrincipal object to allow adding custom key-value pairs
in principalbuilder implementations
* extend SimpleAclAuthorizer and the ACL tools to authorize on these
key/value pairs

The second and third bullet points would, for example, allow easy creation of a
principal builder that adds a user's Active Directory groups to its principal,
without requiring the user to also extend the authorizer and create custom ACL
storage. I think this would significantly lower the technical debt incurred by
custom authorization mechanisms (see the rough sketch below).
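
A rough sketch of what that could look like (purely illustrative; the attribute-carrying principal and the hard-coded lookup below are assumptions, not an existing API):

import java.util.Collections;
import java.util.Map;

import org.apache.kafka.common.security.auth.AuthenticationContext;
import org.apache.kafka.common.security.auth.KafkaPrincipal;
import org.apache.kafka.common.security.auth.KafkaPrincipalBuilder;

// A principal that carries extra key/value attributes a custom authorizer could inspect.
public class AttributedPrincipalBuilder implements KafkaPrincipalBuilder {

    public static class AttributedPrincipal extends KafkaPrincipal {
        private final Map<String, String> attributes;

        public AttributedPrincipal(String name, Map<String, String> attributes) {
            super(KafkaPrincipal.USER_TYPE, name);
            this.attributes = attributes;
        }

        public Map<String, String> attributes() {
            return attributes;
        }
    }

    @Override
    public KafkaPrincipal build(AuthenticationContext context) {
        // In a real builder the name would come from the SSL/SASL context and the
        // groups from a directory lookup; both are hard-coded here for illustration.
        String userName = "alice";
        Map<String, String> attributes =
                Collections.singletonMap("ad.groups", "kafka-admins,analytics");
        return new AttributedPrincipal(userName, attributes);
    }
}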

There are a few issues to hash out, of course, but I think in general this
should work nicely and be a step towards meeting corporate authorization
requirements.

Best regards,
Sönke
On 01.02.2018 18:46, "Manikumar"  wrote:

Hi,

There are a few deployments using IPv6, so it would be good to support IPv6 as well.

I think KAFKA-5713 is about adding regular expression support to resource
names (topic, consumer, etc.).
Yes, wildcards (*) in hostnames don't make sense. Range and subnet
support will give us enough flexibility.

On Thu, Feb 1, 2018 at 5:56 PM, Sönke Liebau <
soenke.lie...@opencore.com.invalid> wrote:

> Hi Manikumar,
>
> the current proposal indeed leaves out IPv6 addresses, as I was unsure
> whether Kafka fully supports that yet to be honest. But it would be
> fairly easy to add these to the proposal - I'll update it over the
> weekend.
>
> Regarding KAFKA-5713, I simply listed it as related, since it is
> similar in spirit, if not exact wording.  Parts of that issue
> (wildcards in hosts) would be covered by this KIP - just in a slightly
> different way. Do we really need wildcard support in IP addresses if
> we can specify ranges and subnets? I considered it, but only came up
> with scenarios that seemed fairly academic to me, like allowing the
> same host from multiple subnets (10.0.*.1) for example.
>
> Allowing wildcards has the potential to make the code more complex,
> depending on how we decide to implement this feature, hence I decided
> to leave wildcards out for now.
>
> What do you think?
>
> Best regards,
> Sönke
>
> On Thu, Feb 1, 2018 at 10:14 AM, Manikumar 
> wrote:
> > Hi,
> >
> > 1. Do we support IPv6 CIDR/ranges?
> >
> > 2. KAFKA-5713 is mentioned in Related JIRAs section. But there is no
> > mention of wildcard support in the KIP.
> >
> >
> > Thanks,
> >
> > On Thu, Feb 1, 2018 at 4:05 AM, Sönke Liebau <
> > soenke.lie...@opencore.com.invalid> wrote:
> >
> >> Hey everybody,
> >>
> >> following a brief inital discussion a couple of days ago on this list
> >> I'd like to get a discussion going on KIP-252 which would allow
> >> specifying ip ranges and subnets for the -allow-host and --deny-host
> >> parameters of the acl tool.
> >>
> >> The KIP can be found at
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >> 252+-+Extend+ACLs+to+allow+filtering+based+on+ip+ranges+and+subnets
> >>
> >> Best regards,
> >> Sönke
> >>
>
>
>
> --
> Sönke Liebau
> Partner
> Tel. +49 179 7940878
> OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany
>


Re: [VOTE] KIP-206: Add support for UUID serialization and deserialization

2018-02-02 Thread Brandon Kirchner
I'd really like this to get 2 more binding votes. If that doesn't happen,
how / can this still move forward? Not sure what the procedure is...

Brandon K.

On Tue, Jan 30, 2018 at 9:56 AM, Mickael Maison 
wrote:

> +1 (non binding)
> Thanks for the KIP
>
> On Tue, Jan 30, 2018 at 9:49 AM, Manikumar 
> wrote:
> > +1 (non-binding)
> >
> > On Tue, Jan 30, 2018 at 11:50 AM, Ewen Cheslack-Postava <
> e...@confluent.io>
> > wrote:
> >
> >> +1 (binding)
> >>
> >> On Fri, Jan 26, 2018 at 9:16 AM, Colin McCabe 
> wrote:
> >>
> >> > +1 (non-binding)
> >> >
> >> >
> >> >
> >> > On Fri, Jan 26, 2018, at 08:29, Ted Yu wrote:
> >> > > +1
> >> > >
> >> > > On Fri, Jan 26, 2018 at 7:00 AM, Brandon Kirchner <
> >> > > brandon.kirch...@gmail.com> wrote:
> >> > >
> >> > > > Hi all,
> >> > > >
> >> > > > I would like to (re)start the voting process for KIP-206:
> >> > > >
> >> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >> > > > 206%3A+Add+support+for+UUID+serialization+and+deserialization
> >> > > >
> >> > > > The KIP adds a UUID serializer and deserializer. Possible
> >> > implementation
> >> > > > can be seen here --
> >> > > >
> >> > > > https://github.com/apache/kafka/pull/4438
> >> > > >
> >> > > > Original discussion and voting thread can be seen here --
> >> > > > http://search-hadoop.com/m/Kafka/uyzND1dlgePJY7l9?subj=+
> >> > > > DISCUSS+KIP+206+Add+support+for+UUID+serialization+and+
> >> deserialization
> >> > > >
> >> > > >
> >> > > > Thanks!
> >> > > > Brandon K.
> >> > > >
> >> >
> >>
>


Re: Kafka Log deletion Problem

2018-02-02 Thread Manikumar
It looks like the log segment is not being rotated. You can send more data or
adjust the log.roll.ms / log.segment.bytes configs so segments roll more
frequently.
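
For example (broker-level settings; the values below are purely illustrative and should be tuned to your workload):

# Roll a new segment at least every hour, or once the active segment reaches 64 MB.
log.roll.ms=3600000
log.segment.bytes=67108864

Retention only deletes whole segments, so data in the active segment is not removed until that segment is rolled.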


On Fri, Feb 2, 2018 at 7:08 PM, SenthilKumar K 
wrote:

> Hello experts, we have a Kafka setup running for our analytics pipeline.
> Below is the broker config:
>
> max.message.bytes = 67108864
> replica.fetch.max.bytes = 67108864
> zookeeper.session.timeout.ms = 7000
> replica.socket.timeout.ms = 3
> offsets.commit.timeout.ms = 5000
> request.timeout.ms = 4
> zookeeper.connection.timeout.ms = 7000
> controller.socket.timeout.ms = 3
> num.partitions = 24
> listeners = SSL://23.212.237.10:9093
> broker.id = 1
> socket.receive.buffer.bytes = 102400
> message.max.bytes = 2621440
> auto.create.topics.enable = true
> auto.leader.rebalance.enable = true
> zookeeper.connect = zk1:2181,zk2:2181
> log.retention.ms=8640
> #log.retention.hours = 24
> socket.request.max.bytes = 104857600
> default.replication.factor = 2
> log.dirs = /data/kafka_logs
> compression.codec = 3
>
> Kafka Version : *0.11*
>
> The retention period is set to 24 hours, but I can still see the data on disk
> after 24 hours. What could be the problem here?
>
> Note : There is no topic specific configuration.
>
> --Senthil
>


Kafka Log deletion Problem

2018-02-02 Thread SenthilKumar K
Hello experts, we have a Kafka setup running for our analytics pipeline.
Below is the broker config:

max.message.bytes = 67108864
replica.fetch.max.bytes = 67108864
zookeeper.session.timeout.ms = 7000
replica.socket.timeout.ms = 3
offsets.commit.timeout.ms = 5000
request.timeout.ms = 4
zookeeper.connection.timeout.ms = 7000
controller.socket.timeout.ms = 3
num.partitions = 24
listeners = SSL://23.212.237.10:9093
broker.id = 1
socket.receive.buffer.bytes = 102400
message.max.bytes = 2621440
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
zookeeper.connect = zk1:2181,zk2:2181
log.retention.ms=8640
#log.retention.hours = 24
socket.request.max.bytes = 104857600
default.replication.factor = 2
log.dirs = /data/kafka_logs
compression.codec = 3

Kafka Version : *0.11*

The retention period is set to 24 hours, but I can still see the data on disk
after 24 hours. What could be the problem here?

Note : There is no topic specific configuration.

--Senthil


[jira] [Created] (KAFKA-6524) kafka mirror can't produce to internal topic

2018-02-02 Thread Ahmed Madkour (JIRA)
Ahmed Madkour created KAFKA-6524:


 Summary: kafka mirror can't produce to internal topic
 Key: KAFKA-6524
 URL: https://issues.apache.org/jira/browse/KAFKA-6524
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 1.0.0
Reporter: Ahmed Madkour


We are using kafka-mirror-maker.sh to consume data from a 3-broker Kafka 
cluster and produce the data to another single-broker Kafka cluster.

We want to include internal topics so we added the following in the consumer 
configuration
exclude.internal.topics=false

We keep receiving the following errors:
{code:java}
org.apache.kafka.common.errors.InvalidTopicException: The request attempted to 
perform an operation on an invalid topic.

 ERROR Error when sending message to topic __consumer_offsets with key: 43 
bytes, value: 28 bytes with error: 
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)

{code}

It seems that the producer can't access the internal topic __consumer_offsets.

Any way to fix that?





Build failed in Jenkins: kafka-1.1-jdk7 #5

2018-02-02 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Update docs for KIP-229 (#4499)

--
[...truncated 410.92 KB...]

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData STARTED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline STARTED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure 
PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > testNoOpAction STARTED

kafka.network.SocketServerTest > testNoOpAction PASSED

kafka.network.SocketServerTest > simpleRequest STARTED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > closingChannelException STARTED

kafka.network.SocketServerTest > closingChannelException PASSED

kafka.network.SocketServerTest > testIdleConnection STARTED

kafka.network.SocketServerTest > testIdleConnection PASSED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown STARTED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown PASSED

kafka.network.SocketServerTest > testSessionPrincipal STARTED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > configureNewConnectionException STARTED

kafka.network.SocketServerTest > configureNewConnectionException PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides PASSED

kafka.network.SocketServerTest > processNewResponseException STARTED

kafka.network.SocketServerTest > processNewResponseException PASSED

kafka.network.SocketServerTest > processCompletedSendException STARTED

kafka.network.SocketServerTest > processCompletedSendException PASSED

kafka.network.SocketServerTest > processDisconnectedException STARTED


[jira] [Created] (KAFKA-6523) kafka server not starting

2018-02-02 Thread Sanjeevani Mehra (JIRA)
Sanjeevani Mehra created KAFKA-6523:
---

 Summary: kafka server not starting
 Key: KAFKA-6523
 URL: https://issues.apache.org/jira/browse/KAFKA-6523
 Project: Kafka
  Issue Type: Bug
Reporter: Sanjeevani Mehra
 Attachments: scre.JPG

Hi,

I ran this command: .\bin\windows\kafka-server-start.bat
.\config\server.properties, but it does not start the server (ZooKeeper is
started).

I have also reinstalled Kafka but still no luck.

Can someone suggest a solution?





Build failed in Jenkins: kafka-trunk-jdk7 #3145

2018-02-02 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Update docs for KIP-229 (#4499)

--
[...truncated 411.46 KB...]

kafka.zk.KafkaZkClientTest > testPropagateIsrChanges STARTED

kafka.zk.KafkaZkClientTest > testPropagateIsrChanges PASSED

kafka.zk.KafkaZkClientTest > testDeleteRecursive STARTED

kafka.zk.KafkaZkClientTest > testDeleteRecursive PASSED

kafka.zk.KafkaZkClientTest > testDelegationTokenMethods STARTED

kafka.zk.KafkaZkClientTest > testDelegationTokenMethods PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytesWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > 
testIteratorIsConsistentWithCompression PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testIteratorIsConsistent PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEqualsWithCompression 
PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testWrittenEqualsRead PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testEquals PASSED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes STARTED

kafka.javaapi.message.ByteBufferMessageSetTest > testSizeInBytes PASSED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic STARTED

kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics STARTED

kafka.metrics.MetricsTest > testSessionExpireListenerMetrics PASSED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic STARTED

kafka.metrics.MetricsTest > 
testBrokerTopicMetricsUnregisteredAfterDeletingTopic PASSED

kafka.metrics.MetricsTest > testClusterIdMetric STARTED

kafka.metrics.MetricsTest > testClusterIdMetric PASSED

kafka.metrics.MetricsTest > testControllerMetrics STARTED

kafka.metrics.MetricsTest > testControllerMetrics PASSED

kafka.metrics.MetricsTest > testWindowsStyleTagNames STARTED

kafka.metrics.MetricsTest > testWindowsStyleTagNames PASSED

kafka.metrics.MetricsTest > testMetricsLeak STARTED

kafka.metrics.MetricsTest > testMetricsLeak PASSED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut STARTED

kafka.metrics.MetricsTest > testBrokerTopicMetricsBytesInOut PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer STARTED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.security.auth.ResourceTypeTest > testJavaConversions STARTED

kafka.security.auth.ResourceTypeTest > testJavaConversions PASSED

kafka.security.auth.ResourceTypeTest > testFromString STARTED

kafka.security.auth.ResourceTypeTest > testFromString PASSED

kafka.security.auth.OperationTest > testJavaConversions STARTED

kafka.security.auth.OperationTest > testJavaConversions PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled STARTED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils STARTED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration STARTED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot STARTED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete STARTED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive STARTED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.AclTest > testAclJsonConversion STARTED

kafka.security.auth.AclTest > testAclJsonConversion PASSED

kafka.security.auth.PermissionTypeTest > testJavaConversions STARTED

kafka.security.auth.PermissionTypeTest > testJavaConversions PASSED

kafka.security.auth.PermissionTypeTest > testFromString STARTED

kafka.security.auth.PermissionTypeTest > testFromString PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess STARTED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest >