[jira] [Resolved] (KAFKA-7413) Replace slave terminology with follower in website

2018-10-28 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-7413.

   Resolution: Fixed
Fix Version/s: 2.2.0

> Replace slave terminology with follower in website
> --
>
> Key: KAFKA-7413
> URL: https://issues.apache.org/jira/browse/KAFKA-7413
> Project: Kafka
>  Issue Type: Improvement
>  Components: website
>Reporter: Sayat Satybaldiyev
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 2.2.0
>
>
> I'm proposing to replace the word "slave" with "follower" on the Kafka website, 
> as "slave" has a negative connotation.
>  
> Inspired by: [https://bugs.python.org/issue34605]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-370: Remove Orphan Partitions

2018-10-28 Thread Dong Lin
Hey Xiongqi,

Thanks for the KIP. Here are some comments:

1) The KIP provides two motivations for the timeout/correction phase. One
motivation is to handle outdated requests. Would this still be an issue
after KIP-380? The second motivation seems to be mainly a performance
optimization when there is reassignment. In general we expect data movement
when we reassign partitions to new brokers, so this is probably not a
strong reason for adding a new config.

2) The KIP says "Adding metrics to keep track of the number of orphan
partitions and the size of these orphan partitions". Can you add the
specification of these new metrics? Here is an example doc:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-237%3A+More+Controller+Health+Metrics

Thanks,
Dong
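As background for this thread, the one-time, delayed cleanup that KIP-370 proposes could be sketched roughly as follows. This is illustrative only: the class and method names are hypothetical, not Kafka's actual implementation; it simply shows the idea of waiting out a configured delay (the proposed auto.orphan.partition.removal.delay.ms) before removing partition directories that are on disk but no longer assigned to the broker.

```java
import java.util.*;
import java.util.concurrent.*;

// Illustrative sketch of KIP-370's proposal: after a configured delay
// following broker startup, remove partition directories that are present
// on disk but no longer assigned to this broker. Names are hypothetical.
public class OrphanPartitionCleaner {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Returns the partitions found on disk that are not in the assigned set.
    static Set<String> findOrphans(Set<String> onDisk, Set<String> assigned) {
        Set<String> orphans = new HashSet<>(onDisk);
        orphans.removeAll(assigned);
        return orphans;
    }

    // Schedule a one-time cleanup after removalDelayMs, mirroring the
    // proposed auto.orphan.partition.removal.delay.ms config.
    void scheduleCleanup(Set<String> onDisk, Set<String> assigned,
                         long removalDelayMs) {
        scheduler.schedule(() -> {
            for (String partition : findOrphans(onDisk, assigned)) {
                System.out.println("Deleting orphan partition dir: " + partition);
                // deleteDirectory(partition);  // actual disk deletion would go here
            }
        }, removalDelayMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) {
        Set<String> onDisk = new HashSet<>(Arrays.asList("topicA-0", "topicA-1", "topicB-0"));
        Set<String> assigned = new HashSet<>(Arrays.asList("topicA-0", "topicA-1"));
        System.out.println(findOrphans(onDisk, assigned));  // [topicB-0]
    }
}
```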

On Thu, Sep 20, 2018 at 5:40 PM xiongqi wu  wrote:

> Colin,
>
> Thanks for the comment.
> 1)
> auto.orphan.partition.removal.delay.ms refers to the timeout since the first
> LeaderAndIsr request was received.  The idea is that we want to wait enough
> time to receive up-to-date LeaderAndIsr requests and any old or new
> partition reassignment requests.
>
> 2)
> Is there any logic to remove the partition folders on disk?  I can only
> find references to removing older log segments, but not the folder, in the
> KIP.
> ==> yes, the plan is to remove partition folders as well.
>
> I will update the KIP to make it more clear.
>
>
> Xiongqi (Wesley) Wu
>
>
> On Thu, Sep 20, 2018 at 5:02 PM Colin McCabe  wrote:
>
> > Hi Xiongqi,
> >
> > Thanks for the KIP.
> >
> > Can you be a bit more clear what the timeout
> > auto.orphan.partition.removal.delay.ms refers to?  Is the timeout
> > measured since the partition was supposed to be on the broker?  Or is the
> > timeout measured since the broker started up?
> >
> > Is there any logic to remove the partition folders on disk?  I can only
> > find references to removing older log segments, but not the folder, in
> the
> > KIP.
> >
> > best,
> > Colin
> >
> > On Wed, Sep 19, 2018, at 10:53, xiongqi wu wrote:
> > > Any comments?
> > >
> > > Xiongqi (Wesley) Wu
> > >
> > >
> > > On Mon, Sep 10, 2018 at 3:04 PM xiongqi wu 
> wrote:
> > >
> > > > Here is the implementation for the KIP 370.
> > > >
> > > >
> > > >
> >
> https://github.com/xiowu0/kafka/commit/f1bd3085639f41a7af02567550a8e3018cfac3e9
> > > >
> > > >
> > > > The purpose is to do one time cleanup (after a configured delay) of
> > orphan
> > > > partitions when a broker starts up.
> > > >
> > > >
> > > > Xiongqi (Wesley) Wu
> > > >
> > > >
> > > > On Wed, Sep 5, 2018 at 10:51 AM xiongqi wu 
> > wrote:
> > > >
> > > >>
> > > >> This KIP enables broker to remove orphan partitions automatically.
> > > >>
> > > >>
> > > >>
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-370%3A+Remove+Orphan+Partitions
> > > >>
> > > >>
> > > >> Xiongqi (Wesley) Wu
> > > >>
> > > >
> >
>


Re: [VOTE] KIP-380: Detect outdated control requests and bounced brokers using broker generation

2018-10-28 Thread Dong Lin
Thanks for the updated KIP.

+1 (binding)

On Wed, Oct 24, 2018 at 4:52 PM Patrick Huang  wrote:

> Hi Jun,
>
> Sure. I already updated the KIP. Thanks!
>
> Best,
> Zhanxiang (Patrick) Huang
>
> 
> From: Jun Rao 
> Sent: Wednesday, October 24, 2018 14:17
> To: dev
> Subject: Re: [VOTE] KIP-380: Detect outdated control requests and bounced
> brokers using broker generation
>
> Hi, Patrick,
>
> Could you update the KIP with the changes to ControlledShutdownRequest
> based on the discussion thread?
>
> Thanks,
>
> Jun
>
>
> On Sun, Oct 21, 2018 at 2:25 PM, Mickael Maison 
> wrote:
>
> > +1( non-binding)
> > Thanks for the KIP!
> >
> > On Sun, Oct 21, 2018, 03:31 Harsha Chintalapani  wrote:
> >
> > > +1(binding). LGTM.
> > > -Harsha
> > > On Oct 20, 2018, 4:49 PM -0700, Dong Lin , wrote:
> > > > Thanks much for the KIP Patrick. Looks pretty good.
> > > >
> > > > +1 (binding)
> > > >
> > > > On Fri, Oct 19, 2018 at 10:17 AM Patrick Huang 
> > > wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > I would like to call for a vote on KIP-380:
> > > > >
> > > > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 380%3A+Detect+outdated+control+requests+and+bounced+brokers+using+broker+
> > generation
> > > > >
> > > > > Here is the discussion thread:
> > > > >
> > > > >
> > > https://lists.apache.org/thread.html/2497114df64993342eaf9c78c0f14b
> > f8c1795bc3305f13b03dd39afd@%3Cdev.kafka.apache.org%3E
> > > > > Note: Normalizing the schema is a good-to-have optimization because
> > the
> > > > > memory footprint for the control requests hinders the controller
> from
> > > > > scaling up if we have many topics with large partition counts.
> > > > >
> > > > >
> > > > >
> > > > > Thanks,
> > > > > Zhanxiang (Patrick) Huang
> > > > >
> > >
> >
>


Re: [DISCUSS] KIP-345: Reduce multiple consumer rebalances by specifying member id

2018-10-28 Thread Boyang Chen
Thanks everyone for the input on this thread! (Sorry it's been a while) I feel 
that we are very close to the final solution.


Hey Jason and Mike, I have two quick questions on the new features here:

  1.  So our proposal is that unless we add a new static member to the group 
(scale up), we will not trigger a rebalance until the "registration timeout" 
expires (i.e. the member has been offline for too long)? What about the 
leader's rejoin request? I think we should still trigger a rebalance when that 
happens, since the consumer group may have new topics to consume.
  2.  I'm not very clear on the scale-up scenario in static membership here. 
Should we fall back to dynamic membership while adding/removing hosts (by 
setting member.name = null), or do we still want to add instances with 
`member.name` so that we eventually expand/shrink the static membership? I 
personally feel the easier solution is to spin up new members and wait until 
either the same "registration timeout" or a "scale up timeout" elapses before 
starting the rebalance. What do you think?

Meanwhile I will go ahead and update the KIP with our newly discussed items 
and details. Really excited to see the design become more solid.

Best,
Boyang


From: Jason Gustafson 
Sent: Saturday, August 25, 2018 6:04 AM
To: dev
Subject: Re: [DISCUSS] KIP-345: Reduce multiple consumer rebalances by 
specifying member id

Hey Mike,

Yeah, that's a good point. A long "registration timeout" may not be a great
idea. Perhaps in practice you'd set it long enough to be able to detect a
failure and provision a new instance. Maybe on the order of 10 minutes is
more reasonable.

In any case, it's probably a good idea to have an administrative way to
force deregistration. One option is to extend the DeleteGroups API with a
list of member names.
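As an illustration only (this API extension is hypothetical and not what any KIP specifies), the forced deregistration Jason mentions amounts to removing named static members from the group's metadata so their partitions can be reassigned:

```java
import java.util.*;

// Hypothetical sketch of administrative deregistration for static members:
// remove the named members from the group and report whether a rebalance
// is now needed. Not an actual Kafka API; types are simplified.
public class StaticGroupAdmin {
    // staticMembers: member name -> assigned partitions (simplified metadata)
    static boolean forceDeregister(Map<String, List<String>> staticMembers,
                                   Collection<String> memberNames) {
        boolean removedAny = false;
        for (String name : memberNames) {
            // Removing the member releases its partitions for reassignment.
            removedAny |= staticMembers.remove(name) != null;
        }
        return removedAny;  // true => the coordinator should trigger a rebalance
    }

    public static void main(String[] args) {
        Map<String, List<String>> members = new HashMap<>();
        members.put("consumer-1", Arrays.asList("t-0"));
        members.put("consumer-2", Arrays.asList("t-1"));
        System.out.println(forceDeregister(members, Collections.singletonList("consumer-2")));
        System.out.println(members.keySet());
    }
}
```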

-Jason



On Fri, Aug 24, 2018 at 2:21 PM, Mike Freyberger 
wrote:

> Jason,
>
> Regarding step 4 in your proposal which suggests beginning a long timer
> (30 minutes) when a static member leaves the group, would there also be the
> ability for an admin to force a static membership expiration?
>
> I’m thinking that during particular types of outages or upgrades, users
> would want to forcefully remove a static member from the group.
>
> So the user would shut the consumer down normally, which wouldn’t trigger
> a rebalance. Then the user could use an admin CLI tool to force remove that
> consumer from the group, so the TopicPartitions that were previously owned
> by that consumer can be released.
>
> At a high level, we need consumer groups to gracefully handle intermittent
> failures and permanent failures. Currently, the consumer group protocol
> handles permanent failures well, but does not handle intermittent failures
> well (it creates unnecessary rebalances). I want to make sure the overall
> solution here handles both intermittent failures and permanent failures,
> rather than sacrificing support for permanent failures in order to provide
> support for intermittent failures.
>
> Mike
>
> Sent from my iPhone
>
> > On Aug 24, 2018, at 3:03 PM, Jason Gustafson  wrote:
> >
> > Hey Guozhang,
> >
> > Responses below:
> >
> > Originally I was trying to kill more birds with one stone with KIP-345,
> >> e.g. to fix the multi-rebalance issue on starting up / shutting down a
> >> multi-instance client (mentioned as case 1)/2) in my early email), and
> >> hence proposing to have a pure static-membership protocol. But thinking
> >> twice about it I now feel it may be too ambitious and worth fixing in
> >> another KIP.
> >
> >
> > I was considering an extension to support pre-initialization of the
> static
> > members of the group, but I agree we should probably leave this problem
> for
> > future work.
> >
> > 1. How is this longish static member expiration timeout defined? Is it via a
> >> broker, hence global config, or via a client config which can be
> >> communicated to broker via JoinGroupRequest?
> >
> >
> > I am not too sure. I tend to lean toward server-side configs because they
> > are easier to evolve. If we have to add something to the protocol, then
> > we'll be stuck with it forever.
> >
> > 2. Assuming that for static members, LEAVE_GROUP request will not
> trigger a
> >> rebalance immediately either, similar to session timeout, but only the
> >> longer member expiration timeout, can we remove the internal "
> >> internal.leave.group.on.close" config, which is a quick workaround
> then?
> >
> >
> > Yeah, I hope we can ultimately get rid of it, but we may need it for
> > compatibility with older brokers. A related question is what should be
> the
> > behavior of the consumer if `member.name` is provided but the broker
> does
> > not support it? We could either fail or silently downgrade to dynamic
> > membership.
> >
> > -Jason
> >
> >
> >> On Fri, Aug 24, 2018 at 11:44 AM, Guozhang Wang 
> wrote:
> >>
> >> Hey Jason,
> >>
> >> I like your idea to simplify the upgrade protocol to allow co-exist of
> >> 

Re: [DISCUSS] KIP-354 Time-based log compaction policy

2018-10-28 Thread Dong Lin
Hey Xiongqi,

Sorry for late reply. I have some comments below:

1) As discussed earlier on the mailing list, if a topic is configured with
both deletion and compaction, in some cases messages produced a long time
ago cannot be deleted based on time. This is a valid use-case because we
actually have topics which are configured with both deletion and compaction
policies, and we should enforce the semantics of both policies. Solution A
sounds good. We do not need an interface change (e.g. an extra config) to
enforce solution A. All we need is to update the implementation so that when
the broker compacts a topic, if the messages have timestamps (which is the
common case), messages that are too old (based on the time-based retention
config) will be discarded. Since this is a valid issue and it is also
related to the guarantee of when a message can be deleted, can we include
the solution to this problem in the KIP?

2) It is probably OK to assume that all messages have a timestamp. The
per-message timestamp was introduced in Kafka 0.10.0 with KIP-31 and
KIP-32 as of Feb 2016, and Kafka 0.10.0 and earlier versions are no longer
supported. Also, since the use-case for this feature is primarily GDPR,
we can assume that the client library has already been upgraded to support
SSL, a feature added after KIP-31 and KIP-32.

3) In Proposed Change section 2.a, it is said that segment.largestTimestamp
- maxSegmentMs can be used to determine the timestamp of the earliest
message. Would it be simpler to just use the create time of the file to
determine the time?

4) The KIP suggests using must-clean-ratio to select the partitions to be
compacted. Unlike the dirty ratio, which is mostly about performance, logs
whose "must-clean-ratio" is non-zero must be compacted immediately for
correctness reasons (and for GDPR), and if this cannot be achieved because
e.g. broker compaction throughput is too low, investigation will be needed.
So it seems simpler to first compact logs which have a segment whose earliest
timestamp is earlier than now - max.compaction.lag.ms, instead of defining
must-clean-ratio and sorting logs based on this value.
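The simpler selection rule suggested here can be sketched as follows. This is illustrative only (the Segment class and method names are hypothetical, not Kafka's log cleaner code): a log must be compacted if any of its segments has an estimated earliest message timestamp older than now - max.compaction.lag.ms.

```java
import java.util.*;

// Sketch of the suggested selection rule for KIP-354: a log must be
// compacted if any segment's (estimated) earliest message timestamp is
// older than now - max.compaction.lag.ms. Names are illustrative.
public class CompactionSelector {
    static class Segment {
        final long earliestTimestampMs;  // estimated earliest message time
        Segment(long ts) { this.earliestTimestampMs = ts; }
    }

    // Returns true if the log contains a segment past the max compaction lag.
    static boolean mustCompact(List<Segment> segments, long nowMs,
                               long maxCompactionLagMs) {
        for (Segment s : segments) {
            if (s.earliestTimestampMs < nowMs - maxCompactionLagMs) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        long now = 1_000_000L;
        List<Segment> log = Arrays.asList(new Segment(100_000L), new Segment(990_000L));
        System.out.println(mustCompact(log, now, 500_000L));  // true: first segment too old
    }
}
```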

5) The KIP says max.compaction.lag.ms is 0 by default and it is also
suggested that 0 means disable. Should we set this value to MAX_LONG by
default to effectively disable the feature added in this KIP?

6) It is probably cleaner and more readable not to include in the Public
Interface section those configs whose meaning is not changed.

7) The goal of this KIP is to ensure that a log segment whose earliest
message is older than a given threshold will be compacted. This goal may
not be achieved if the compaction throughput cannot catch up with the total
bytes-in rate for the compacted topics on the broker. Thus we need an easy
way to tell the operator whether this goal is achieved. If we don't already
have such metrics, maybe we can include metrics to show 1) the total number
of log segments (or logs) which need to be immediately compacted as
determined by max.compaction.lag; and 2) the maximum value of now -
earliest_timestamp_of_segment among all segments that need to be
compacted.

8) The Performance Impact section suggests users use the existing metrics to
monitor the performance impact of this KIP. It is useful to list the meaning
of each JMX metric that we want users to monitor, and possibly explain how
to interpret the values of these metrics to determine whether there is a
performance issue.

Thanks,
Dong

On Tue, Oct 16, 2018 at 10:53 AM xiongqi wu  wrote:

> Mayuresh,
>
> Thanks for the comments.
> The requirement is that we need to pick up segments that are older than
> maxCompactionLagMs for compaction.
> maxCompactionLagMs is an upper-bound, which implies that picking up
> segments for compaction earlier doesn't violated the policy.
> We use the creation time of a segment as an estimation of its records
> arrival time, so these records can be compacted no later than
> maxCompactionLagMs.
>
> On the other hand, compaction is an expensive operation, we don't want to
> compact the log partition whenever a new segment is sealed.
> Therefore, we want to pick up a segment for compaction when the segment is
> closed to mandatory max compaction lag (so we use segment creation time as
> an estimation.)
>
>
> Xiongqi (Wesley) Wu
>
>
> On Mon, Oct 15, 2018 at 5:54 PM Mayuresh Gharat <
> gharatmayures...@gmail.com>
> wrote:
>
> > Hi Wesley,
> >
> > Thanks for the KIP and sorry for being late to the party.
> >  I wanted to understand, the scenario you mentioned in Proposed changes :
> >
> > -
> > >
> > > Estimate the earliest message timestamp of an un-compacted log segment.
> > we
> > > only need to estimate earliest message timestamp for un-compacted log
> > > segments to ensure timely compaction because the deletion requests that
> > > belong to compacted segments have already been processed.
> > >
> > >1.
> > >
> > >for the first (earliest) log segment:  The estimated earliest
> > >timestamp is set to the timestamp of the first message 

Build failed in Jenkins: kafka-trunk-jdk8 #3170

2018-10-28 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-7524: Recommend Scala 2.12 and use it for development (#5530)

--
[...truncated 2.76 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Build failed in Jenkins: kafka-trunk-jdk11 #67

2018-10-28 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-7524: Recommend Scala 2.12 and use it for development (#5530)

--
[...truncated 2.35 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 

[jira] [Created] (KAFKA-7561) Console Consumer - system test fails

2018-10-28 Thread Stanislav Kozlovski (JIRA)
Stanislav Kozlovski created KAFKA-7561:
--

 Summary: Console Consumer - system test fails
 Key: KAFKA-7561
 URL: https://issues.apache.org/jira/browse/KAFKA-7561
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.1.0
Reporter: Stanislav Kozlovski


The test under 
`kafkatest.sanity_checks.test_console_consumer.ConsoleConsumerTest.test_lifecycle`
 fails when I run it locally. 7 versions of the test failed for me and they all 
had a similar error message:
{code:java}
AssertionError: Node ducker@ducker11: did not stop within the specified timeout 
of 15 seconds
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7560) Client Quota - system test failure

2018-10-28 Thread Stanislav Kozlovski (JIRA)
Stanislav Kozlovski created KAFKA-7560:
--

 Summary: Client Quota - system test failure
 Key: KAFKA-7560
 URL: https://issues.apache.org/jira/browse/KAFKA-7560
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.1.0
Reporter: Stanislav Kozlovski


The test under `kafkatest.tests.client.quota_test.QuotaTest.test_quota` fails 
when I run it locally. It produces the following error message:


{code:java}
 File "/opt/kafka-dev/tests/kafkatest/tests/client/quota_test.py", line 196, in 
validate     metric.value for k, metrics in 
producer.metrics(group='producer-metrics', name='outgoing-byte-rate', 
client_id=producer.client_id) for metric in metrics ValueError: max() arg is an 
empty sequence
{code}
I assume it cannot find the metric it's searching for.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7559) ConnectStandaloneFileTest system tests do not pass

2018-10-28 Thread Stanislav Kozlovski (JIRA)
Stanislav Kozlovski created KAFKA-7559:
--

 Summary: ConnectStandaloneFileTest system tests do not pass
 Key: KAFKA-7559
 URL: https://issues.apache.org/jira/browse/KAFKA-7559
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 2.1.0
Reporter: Stanislav Kozlovski


Both tests `test_skip_and_log_to_dlq` and `test_file_source_and_sink` under 
`kafkatest.tests.connect.connect_test.ConnectStandaloneFileTest` fail with 
error messages similar to:
"TimeoutError: Kafka Connect failed to start on node: ducker@ducker04 in 
condition mode: LISTEN"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7524) Recommend Scala 2.12 and use it for development

2018-10-28 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-7524.

Resolution: Fixed

> Recommend Scala 2.12 and use it for development
> ---
>
> Key: KAFKA-7524
> URL: https://issues.apache.org/jira/browse/KAFKA-7524
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Major
> Fix For: 2.2.0
>
>
> Scala 2.12 has better support for newer Java versions and includes additional 
> compiler warnings that are helpful during development. In addition, Scala 
> 2.11 hasn't been supported by the Scala community for a long time, the soon 
> to be released Spark 2.4.0 will finally support Scala 2.12 (this was the main 
> reason preventing many from upgrading to Scala 2.12) and Scala 2.13 is at the 
> RC stage. It's time to start recommending the Scala 2.12 build as we prepare 
> support for Scala 2.13 and start thinking about removing support for Scala 
> 2.11.
> In the meantime, Jenkins will continue to build all supported Scala versions 
> (including Scala 2.11) so the PR and trunk jobs will fail if people 
> accidentally use methods introduced in Scala 2.12.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-1.0-jdk7 #252

2018-10-28 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #3169

2018-10-28 Thread Apache Jenkins Server
See 


Changes:

[manikumar.reddy] MINOR: Remove unintentional tilde character from 
kafka-run-class.bat

--
[...truncated 2.52 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest >