Re: [VOTE] KIP-195: AdminClient.createPartitions()

2017-09-13 Thread Tom Bentley
The vote has passed with three binding +1s (Jun, Ismael and Jason), and no -1s
or +0s.

Thanks to those who commented and voted.
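For reference, a minimal sketch of what the adopted API looks like from the
client side (topic names, partition counts and broker ids below are made up
for illustration):

{code}
import java.util.Arrays;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

public class CreatePartitionsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Let the brokers pick the replica assignment for the new partitions.
            admin.createPartitions(Collections.singletonMap(
                    "my-topic", NewPartitions.increaseTo(6))).all().get();

            // Or choose the assignment explicitly: one list of broker ids per
            // added partition (here the topic grows from 2 to 4 partitions),
            // with the first id in each list as the preferred leader.
            admin.createPartitions(Collections.singletonMap(
                    "other-topic", NewPartitions.increaseTo(4, Arrays.asList(
                            Arrays.asList(0, 1),
                            Arrays.asList(1, 2))))).all().get();
        }
    }
}
{code}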

On 13 September 2017 at 23:22, Jason Gustafson  wrote:

> +1. Thanks for the KIP. Just one minor clarification: when no assignment is
> specified by the user, what will be sent in the protocol? A null assignment
> array? Probably worth mentioning this case explicitly in the KIP.
>
> Thanks,
> Jason
>
> On Wed, Sep 13, 2017 at 7:53 AM, Tom Bentley 
> wrote:
>
> > This KIP currently has 2 binding +1 votes. Today is the deadline for KIPs
> > to be added to Kafka 1.0.0. So if anyone else would like to see this
> > feature in 1.0.0 they will need to vote by the end of the day.
> >
> > If there are insufficient votes for the KIP to be adopted today then I
> will
> > keep the vote open for a while longer, but it won't be in 1.0.0 even if
> the
> > vote is eventually successful.
> >
> > Cheers,
> >
> > Tom
> >
> > On 13 September 2017 at 11:43, Ismael Juma  wrote:
> >
> > > Tom,
> > >
> > > Thanks for the KIP, +1 (binding) from me.
> > >
> > > Ismael
> > >
> > > On Fri, Sep 8, 2017 at 5:42 PM, Tom Bentley 
> > wrote:
> > >
> > > > I would like to start the vote on KIP-195 which adds an AdminClient
> API
> > > for
> > > > increasing the number of partitions of a topic. The details are here:
> > > >
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-195%3A+AdminClient.createPartitions
> > > >
> > > > Cheers,
> > > >
> > > > Tom
> > > >
> > >
> >
>


[jira] [Created] (KAFKA-5889) MetricsTest is flaky

2017-09-13 Thread Ted Yu (JIRA)
Ted Yu created KAFKA-5889:
-

 Summary: MetricsTest is flaky
 Key: KAFKA-5889
 URL: https://issues.apache.org/jira/browse/KAFKA-5889
 Project: Kafka
  Issue Type: Test
Reporter: Ted Yu


The following appeared in several recent builds (e.g.
https://builds.apache.org/job/kafka-trunk-jdk7/2758):
{code}
kafka.metrics.MetricsTest > testMetricsLeak FAILED
java.lang.AssertionError: expected:<1216> but was:<1219>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
kafka.metrics.MetricsTest$$anonfun$testMetricsLeak$1.apply$mcVI$sp(MetricsTest.scala:68)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
at kafka.metrics.MetricsTest.testMetricsLeak(MetricsTest.scala:66)
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [VOTE] KIP-196: Add metrics to Kafka Connect framework

2017-09-13 Thread Randall Hauch
The KIP has passed with three binding +1 votes (Gwen, Sriram, Jason) and no
-1 or +0 votes.

Thanks to everyone for the feedback.

On Tue, Sep 12, 2017 at 2:48 PM, Jason Gustafson  wrote:

> +1. Thanks for the KIP.
>
> On Tue, Sep 12, 2017 at 12:42 PM, Sriram Subramanian 
> wrote:
>
> > +1
> >
> > On Tue, Sep 12, 2017 at 12:41 PM, Gwen Shapira 
> wrote:
> >
> > > My +1 remains :)
> > >
> > > On Tue, Sep 12, 2017 at 12:31 PM Randall Hauch 
> wrote:
> > >
> > > > The KIP was modified (most changes due to reorganization of metrics).
> > > Feel
> > > > free to re-vote if you dislike the changes.
> > > >
> > > > On Mon, Sep 11, 2017 at 8:40 PM, Sriram Subramanian <
> r...@confluent.io>
> > > > wrote:
> > > >
> > > > > +1
> > > > >
> > > > > On Mon, Sep 11, 2017 at 2:56 PM, Gwen Shapira 
> > > wrote:
> > > > >
> > > > > > +1
> > > > > >
> > > > > > Thanks for this. Can't wait for more complete monitoring for
> > Connect.
> > > > > >
> > > > > > On Mon, Sep 11, 2017 at 7:40 AM Randall Hauch 
> > > > wrote:
> > > > > >
> > > > > > > I'd like to start the vote on KIP-196 to add metrics to the
> Kafka
> > > > > Connect
> > > > > > > framework so the worker processes can be measured. Details are
> > > here:
> > > > > > >
> > > > > > >
> > > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-196%3A+Add+metrics+to+Kafka+Connect+framework
> > > > > > >
> > > > > > > Thanks, and best regards.
> > > > > > >
> > > > > > > Randall
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: [DISCUSS] KIP-196: Add metrics to Kafka Connect framework

2017-09-13 Thread Roger Hoover
Sorry, one more thing occurred to me.  Can the names of the time-based
metrics include their units?  That makes it much easier for people
consuming the metrics to interpret them correctly.

For example, offset-commit-max-time would become offset-commit-max-time-ms
or offset-commit-max-time-microsecs (-us?) or whatever you plan to make the
unit be.
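To illustrate why the suffix helps metric consumers, here is a minimal sketch
of reading such a metric over JMX; the MBean and attribute names are
hypothetical, pending the final names in the KIP:

{code}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ReadConnectMetric {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Hypothetical MBean name for a Connect task's metrics.
            ObjectName name = new ObjectName(
                    "kafka.connect:type=connector-task-metrics,connector=my-sink,task=0");
            // With the unit in the name, no need to consult the docs to know
            // whether this is milliseconds or microseconds.
            Object value = mbs.getAttribute(name, "offset-commit-max-time-ms");
            System.out.println("offset-commit-max-time-ms = " + value);
        }
    }
}
{code}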

On Tue, Sep 12, 2017 at 6:19 PM, Sriram Subramanian 
wrote:

> FWIW, I agree that time metrics have been very useful in the past. The
> reasoning around perf overhead seems reasonable as well. Can we agree on a
> subset of time metrics that we feel would be super useful for debugging?
>
> On Tue, Sep 12, 2017 at 6:08 PM, Roger Hoover 
> wrote:
>
> > Thanks, Ewen.
> >
> > I agree with you on the overhead of measuring time for SMTs and
> > converters.  I'd still argue for keeping other metrics like flush time
> b/c
> > even small batches should still be small overhead compared to writing to
> a
> > sink.
> >
> > On Tue, Sep 12, 2017 at 3:06 PM, Ewen Cheslack-Postava <
> e...@confluent.io>
> > wrote:
> >
> > > Requests are generally substantial batches of data; you are not guaranteed
> > > that for the processing batches, both because source connectors can hand you
> > > batches of whatever size they want and because the consumer's
> > > max.poll.records can be overridden.
> > >
> > > Both SMTs and converters are a concern because they can both be
> > relatively
> > > cheap such that just checking the time in between them could possibly
> > dwarf
> > > the cost of applying them.
> > >
> > > Also, another thought re: rebalance metrics: we are already getting
> some
> > > info via AbstractCoordinator and those actually provide a bit more
> detail
> > > in some ways (e.g. join & sync vs the entire rebalance). Not sure if we
> > > want to effectively duplicate some info so it can all be located under
> > > Connect names or rely on the existing metrics for some of these.
> > >
> > > -Ewen
> > >
> > > On Tue, Sep 12, 2017 at 2:05 PM, Roger Hoover 
> > > wrote:
> > >
> > > > Ewen,
> > > >
> > > > I don't know the details of the perf concern.  How is it that the
> Kafka
> > > > broker can keep latency stats per request without suffering too much
> > > > performance?  Maybe SMTs are the only concern b/c they are
> per-message.
> > > If
> > > > so, let's remove those and keep timing info for everything else like
> > > > flushes, which are batch-based.
> > > >
> > > >
> > > > On Tue, Sep 12, 2017 at 1:32 PM, Ewen Cheslack-Postava <
> > > e...@confluent.io>
> > > > wrote:
> > > >
> > > > > On Tue, Sep 12, 2017 at 10:55 AM, Gwen Shapira 
> > > > wrote:
> > > > >
> > > > > > Ewen, you gave a nice talk at Kafka Summit where you warned about
> > the
> > > > > > danger of SMTs that slow down the data pipe. If we don't provide
> > the
> > > > time
> > > > > > metrics, how will users know when their SMTs are causing
> > performance
> > > > > > issues?
> > > > > >
> > > > >
> > > > > Metrics aren't the only way to gain insight about performance and
> > > always
> > > > > measuring this even when it's not necessarily being used may not
> make
> > > > > sense. SMT authors are much better off starting out with a JMH or
> > > similar
> > > > > benchmark. What I was referring to in the talk is more about
> > > > understanding
> > > > > that the processing for SMTs is entirely synchronous and that means
> > > > certain
> > > > > classes of operations will just generally be a bad idea, e.g.
> > anything
> > > > that
> > > > > goes out over the network to another service. You don't even really
> > > need
> > > > > performance info to determine that that type of transformation will
> > > cause
> > > > > problems.
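For SMT authors following the JMH suggestion above, a rough sketch of what
such a benchmark could look like, using the built-in InsertField
transformation as a stand-in (configuration and record shape are
illustrative):

{code}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.transforms.InsertField;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
public class SmtBenchmark {
    private InsertField.Value<SourceRecord> transform;
    private SourceRecord record;

    @Setup
    public void setup() {
        transform = new InsertField.Value<>();
        Map<String, String> config = new HashMap<>();
        config.put("static.field", "source");
        config.put("static.value", "benchmark");
        transform.configure(config);
        // Schemaless record: InsertField operates on the Map value.
        record = new SourceRecord(null, null, "topic", null,
                Collections.singletonMap("existing", "value"));
    }

    @Benchmark
    public SourceRecord applyTransform() {
        return transform.apply(record);
    }
}
{code}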
> > > > >
> > > > > But my point wasn't that timing info isn't useful. It's that we
> know
> > > that
> > > > > getting timestamps is pretty expensive and we'll already be doing
> so
> > > > > elsewhere (e.g. if a source record doesn't include a timestamp).
> For
> > > some
> > > > > use cases such as ByteArrayConverter + no SMTs + lightweight
> > processing
> > > > > (e.g. just gets handed to a background thread that deals with
> sending
> > > the
> > > > > data), it wouldn't be out of the question that adding 4 or so more
> > > calls
> > > > to
> > > > > get timestamps could become a bottleneck. Since I don't know if it
> > > would
> > > > > but we have definitely seen the issue come up before, I would be
> > > > > conservative in adding the metrics unless we had some numbers
> showing
> > > it
> > > > > doesn't matter or doesn't matter much.
> > > > >
> > > > > In general, I don't think metrics that require always-on
> measurement
> > > are
> > > > a
> > > > > good way to get fine grained performance information. Instrumenting
> > > > > different phases that imply different types of performance problems
> > can
> > > > be
> > > > > helpful (e.g. "processing time" that should be CPU/memory
> 

Jenkins build is back to normal : kafka-0.11.0-jdk7 #307

2017-09-13 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-189 - Improve principal builder interface and add support for SASL

2017-09-13 Thread Mayuresh Gharat
Sure .

Thanks,

Mayuresh

On Wed, Sep 13, 2017 at 3:12 PM, Jun Rao  wrote:

> Hi, Mayuresh,
>
> Does this KIP obviate the need for KIP-111? If so, could you close that
> one?
>
> Thanks,
>
> Jun
>
> On Wed, Sep 13, 2017 at 8:43 AM, Jason Gustafson 
> wrote:
>
>> Hi All,
>>
>> I wanted to mention one minor change that came out of the code review.
>> We've added an additional method to AuthenticationContext to expose the
>> address of the authenticated client. This can be useful, for example, to
>> enforce host-based quotas. I've updated the KIP.
>>
>> Thanks,
>> Jason
>>
>> On Fri, Sep 8, 2017 at 1:12 AM, Edoardo Comar  wrote:
>>
>> > I am late to the party and my +1 vote is useless - but I eventually took
>> > the time to go through it and it's a great improvement.
>> > It'd enable us to carry along with the Principal a couple of additional
>> > attributes without the hacks we're doing today :-)
>> >
>> > cheers
>> > --
>> >
>> > Edoardo Comar
>> >
>> > IBM Message Hub
>> >
>> > IBM UK Ltd, Hursley Park, SO21 2JN
>> >
>> >
>> >
>> > From:   Jason Gustafson 
>> > To: dev@kafka.apache.org
>> > Date:   07/09/2017 17:23
>> > Subject:Re: [VOTE] KIP-189 - Improve principal builder interface
>> > and add support for SASL
>> >
>> >
>> >
>> > I am closing the vote. Here are the totals:
>> >
>> > Binding: Ismael, Rajini, Jun, (Me)
>> > Non-binding: Mayuresh, Manikumar, Mickael
>> >
>> > Thanks all for the reviews!
>> >
>> >
>> >
>> > On Wed, Sep 6, 2017 at 2:22 PM, Jason Gustafson 
>> > wrote:
>> >
>> > > Hi All,
>> > >
>> > > When implementing this, I found that the SecurityProtocol class has
>> some
>> > > internal details which we might not want to expose to users (in
>> > particular
>> > > to enable testing). Since it's still useful to know the security
>> > protocol
>> > > in use in some cases, and since the security protocol names are
>> already
>> > > exposed in configuration (and hence cannot easily change), I have
>> > modified
>> > > the method in AuthenticationContext to return the name of the security
>> > > protocol instead. Let me know if there are any concerns with this
>> > change.
>> > > Otherwise, I will close out the vote.
>> > >
>> > > Thanks,
>> > > Jason
>> > >
>> > > On Tue, Sep 5, 2017 at 11:10 AM, Ismael Juma 
>> wrote:
>> > >
>> > >> Thanks for the KIP, +1 (binding).
>> > >>
>> > >> Ismael
>> > >>
>> > >> On Wed, Aug 30, 2017 at 4:51 PM, Jason Gustafson > >
>> > >> wrote:
>> > >>
>> > >> > I'd like to open the vote for KIP-189:
>> > >> >
>> > >> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-189%3A+Improve+principal+builder+interface+and+add+support+for+SASL.
>> > >> > Thanks to everyone who helped review.
>> > >> >
>> > >> > -Jason
>> > >> >
>> > >>
>> > >
>> > >
>> >
>> >
>> >
>> > Unless stated otherwise above:
>> > IBM United Kingdom Limited - Registered in England and Wales with number
>> > 741598.
>> > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6
>> 3AU
>> >
>>
>
>


-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


[GitHub] kafka pull request #3855: MINOR: Logging changes

2017-09-13 Thread chetnachaudhari
GitHub user chetnachaudhari opened a pull request:

https://github.com/apache/kafka/pull/3855

MINOR: Logging changes

Capitalised a few log lines to match the rest of the code base. Please
review the change
@junrao 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chetnachaudhari/kafka logging-changes

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3855.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3855


commit 7d21ef3aab980e4db27a3dae6b0cf69ebbb1ea85
Author: Chetna Chaudhari 
Date:   2017-06-21T07:58:58Z

Merge remote-tracking branch 'apache/trunk' into trunk

commit e7c8b9387beef144d8e25e80f999f14b05c5406a
Author: Chetna Chaudhari 
Date:   2017-06-23T21:39:40Z

Merge remote-tracking branch 'apache/trunk' into trunk

commit 5d1522298ed0a71706138ffbe3ff52ce92cd6844
Author: Chetna Chaudhari 
Date:   2017-08-20T22:57:07Z

Merge remote-tracking branch 'apache/trunk' into trunk

commit d31e805304b3475ff213ec944e80ae05e884d1e9
Author: Chetna Chaudhari 
Date:   2017-08-22T04:33:02Z

Merge remote-tracking branch 'apache/trunk' into trunk

commit 24a60e33b3c5051f3942236ef550363a4c042ce6
Author: Chetna Chaudhari 
Date:   2017-09-06T04:17:38Z

Merge remote-tracking branch 'apache/trunk' into trunk

commit 01afa45dbd1a3f9df99e8a4dfd24ff088833f4c6
Author: Chetna Chaudhari 
Date:   2017-09-13T22:02:04Z

Merge remote-tracking branch 'apache/trunk' into trunk

commit 21a3ccc5b0c2190192076819eab4ae93dd6743f1
Author: Chetna Chaudhari 
Date:   2017-09-13T23:05:53Z

Capitalised few log lines to match with rest of codebase




---


[GitHub] kafka pull request #3854: MINOR: Fix LogContext message format in KafkaProdu...

2017-09-13 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/3854

MINOR: Fix LogContext message format in KafkaProducer



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka 
minor/fix_log_context_message_format

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3854.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3854


commit 8f9e448c203fac68b20c304b5a41ed420a46abbf
Author: Vahid Hashemian 
Date:   2017-09-13T22:41:11Z

MINOR: Fix LogContext message format in KafkaProducer




---


[GitHub] kafka pull request #3842: KAFKA-5301 Improve exception handling on consumer ...

2017-09-13 Thread ConcurrencyPractitioner
GitHub user ConcurrencyPractitioner reopened a pull request:

https://github.com/apache/kafka/pull/3842

KAFKA-5301 Improve exception handling on consumer path

This is an improvised approach towards fixing @guozhangwang's second
issue.
I have changed the method return type, as well as its overrides, so that it
returns the exception.
If the returned exception is not null (null being the default), then we skip
the callback.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ConcurrencyPractitioner/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3842.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3842


commit 6290df2070f215d0b355f3e59717d911e50b8973
Author: Richard Yu 
Date:   2017-09-13T03:19:24Z

[Kafka-5301] Improve exception handling on consumer path

commit 3a01de2d4e293d15da5c390bc5179243bbdb833e
Author: Richard Yu 
Date:   2017-09-13T22:34:11Z

Exception handling add-on




---


[GitHub] kafka pull request #3853: MINOR: Capitalise topicPurgatory name

2017-09-13 Thread chetnachaudhari
GitHub user chetnachaudhari opened a pull request:

https://github.com/apache/kafka/pull/3853

MINOR: Capitalise topicPurgatory name 

This is a minor change to capitalise the topicPurgatory name for consistent
logging. Please find the attached snapshot:
https://user-images.githubusercontent.com/1351546/30403652-cfba8eba-9936-11e7-874c-b0cb3bbf4058.png


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chetnachaudhari/kafka minor-change

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3853.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3853


commit 7d21ef3aab980e4db27a3dae6b0cf69ebbb1ea85
Author: Chetna Chaudhari 
Date:   2017-06-21T07:58:58Z

Merge remote-tracking branch 'apache/trunk' into trunk

commit e7c8b9387beef144d8e25e80f999f14b05c5406a
Author: Chetna Chaudhari 
Date:   2017-06-23T21:39:40Z

Merge remote-tracking branch 'apache/trunk' into trunk

commit 5d1522298ed0a71706138ffbe3ff52ce92cd6844
Author: Chetna Chaudhari 
Date:   2017-08-20T22:57:07Z

Merge remote-tracking branch 'apache/trunk' into trunk

commit d31e805304b3475ff213ec944e80ae05e884d1e9
Author: Chetna Chaudhari 
Date:   2017-08-22T04:33:02Z

Merge remote-tracking branch 'apache/trunk' into trunk

commit 24a60e33b3c5051f3942236ef550363a4c042ce6
Author: Chetna Chaudhari 
Date:   2017-09-06T04:17:38Z

Merge remote-tracking branch 'apache/trunk' into trunk

commit 01afa45dbd1a3f9df99e8a4dfd24ff088833f4c6
Author: Chetna Chaudhari 
Date:   2017-09-13T22:02:04Z

Merge remote-tracking branch 'apache/trunk' into trunk

commit 68cf1e42e2a1e07e03c2904d50bcd8b2c4d1fbcb
Author: Chetna Chaudhari 
Date:   2017-09-13T22:20:29Z

Capitalise topicPurgatory name for consistent logging




---


Re: [VOTE] KIP-195: AdminClient.createPartitions()

2017-09-13 Thread Jason Gustafson
+1. Thanks for the KIP. Just one minor clarification: when no assignment is
specified by the user, what will be sent in the protocol? A null assignment
array? Probably worth mentioning this case explicitly in the KIP.

Thanks,
Jason

On Wed, Sep 13, 2017 at 7:53 AM, Tom Bentley  wrote:

> This KIP currently has 2 binding +1 votes. Today is the deadline for KIPs
> to be added to Kafka 1.0.0. So if anyone else would like to see this
> feature in 1.0.0 they will need to vote by the end of the day.
>
> If there are insufficient votes for the KIP to be adopted today then I will
> keep the vote open for a while longer, but it won't be in 1.0.0 even if the
> vote is eventually successful.
>
> Cheers,
>
> Tom
>
> On 13 September 2017 at 11:43, Ismael Juma  wrote:
>
> > Tom,
> >
> > Thanks for the KIP, +1 (binding) from me.
> >
> > Ismael
> >
> > On Fri, Sep 8, 2017 at 5:42 PM, Tom Bentley 
> wrote:
> >
> > > I would like to start the vote on KIP-195 which adds an AdminClient API
> > for
> > > increasing the number of partitions of a topic. The details are here:
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-195%3A+AdminClient.createPartitions
> > >
> > > Cheers,
> > >
> > > Tom
> > >
> >
>


Build failed in Jenkins: kafka-trunk-jdk8 #2018

2017-09-13 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Fix typo in test log4j.properties

[wangguoz] MINOR: update tutorial doc to match ak-site

--
[...truncated 2.03 MB...]

org.apache.kafka.common.security.ssl.SslFactoryTest > testClientMode PASSED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryConfiguration STARTED

org.apache.kafka.common.security.ssl.SslFactoryTest > 
testSslFactoryConfiguration PASSED

org.apache.kafka.common.security.kerberos.KerberosNameTest > testParse STARTED

org.apache.kafka.common.security.kerberos.KerberosNameTest > testParse PASSED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testPrincipalNameCanContainSeparator STARTED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testPrincipalNameCanContainSeparator PASSED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testEqualsAndHashCode STARTED

org.apache.kafka.common.security.auth.KafkaPrincipalTest > 
testEqualsAndHashCode PASSED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithListenerNameOverride STARTED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithListenerNameOverride PASSED

org.apache.kafka.common.security.JaasContextTest > testMissingOptionValue 
STARTED

org.apache.kafka.common.security.JaasContextTest > testMissingOptionValue PASSED

org.apache.kafka.common.security.JaasContextTest > testSingleOption STARTED

org.apache.kafka.common.security.JaasContextTest > testSingleOption PASSED

org.apache.kafka.common.security.JaasContextTest > 
testNumericOptionWithoutQuotes STARTED

org.apache.kafka.common.security.JaasContextTest > 
testNumericOptionWithoutQuotes PASSED

org.apache.kafka.common.security.JaasContextTest > testConfigNoOptions STARTED

org.apache.kafka.common.security.JaasContextTest > testConfigNoOptions PASSED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithWrongListenerName STARTED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithWrongListenerName PASSED

org.apache.kafka.common.security.JaasContextTest > testNumericOptionWithQuotes 
STARTED

org.apache.kafka.common.security.JaasContextTest > testNumericOptionWithQuotes 
PASSED

org.apache.kafka.common.security.JaasContextTest > testQuotedOptionValue STARTED

org.apache.kafka.common.security.JaasContextTest > testQuotedOptionValue PASSED

org.apache.kafka.common.security.JaasContextTest > testMissingLoginModule 
STARTED

org.apache.kafka.common.security.JaasContextTest > testMissingLoginModule PASSED

org.apache.kafka.common.security.JaasContextTest > testMissingSemicolon STARTED

org.apache.kafka.common.security.JaasContextTest > testMissingSemicolon PASSED

org.apache.kafka.common.security.JaasContextTest > testMultipleOptions STARTED

org.apache.kafka.common.security.JaasContextTest > testMultipleOptions PASSED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForClientWithListenerName STARTED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForClientWithListenerName PASSED

org.apache.kafka.common.security.JaasContextTest > testMultipleLoginModules 
STARTED

org.apache.kafka.common.security.JaasContextTest > testMultipleLoginModules 
PASSED

org.apache.kafka.common.security.JaasContextTest > testMissingControlFlag 
STARTED

org.apache.kafka.common.security.JaasContextTest > testMissingControlFlag PASSED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithListenerNameAndFallback STARTED

org.apache.kafka.common.security.JaasContextTest > 
testLoadForServerWithListenerNameAndFallback PASSED

org.apache.kafka.common.security.JaasContextTest > testQuotedOptionName STARTED

org.apache.kafka.common.security.JaasContextTest > testQuotedOptionName PASSED

org.apache.kafka.common.security.JaasContextTest > testControlFlag STARTED

org.apache.kafka.common.security.JaasContextTest > testControlFlag PASSED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
noAuthorizationIdSpecified STARTED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
noAuthorizationIdSpecified PASSED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
authorizatonIdEqualsAuthenticationId STARTED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
authorizatonIdEqualsAuthenticationId PASSED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
authorizatonIdNotEqualsAuthenticationId STARTED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
authorizatonIdNotEqualsAuthenticationId PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMissingUsernameSaslPlain STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMissingUsernameSaslPlain PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testValidSaslScramMechanisms STARTED


Re: [VOTE] KIP-189 - Improve principal builder interface and add support for SASL

2017-09-13 Thread Jun Rao
Hi, Mayuresh,

Does this KIP obviate the need for KIP-111? If so, could you close that one?

Thanks,

Jun

On Wed, Sep 13, 2017 at 8:43 AM, Jason Gustafson  wrote:

> Hi All,
>
> I wanted to mention one minor change that came out of the code review.
> We've added an additional method to AuthenticationContext to expose the
> address of the authenticated client. This can be useful, for example, to
> enforce host-based quotas. I've updated the KIP.
>
> Thanks,
> Jason
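For reference, a minimal sketch of a custom builder using the newly exposed
client address, assuming the interface as described in the KIP (the
trusted/external naming below is purely illustrative):

{code}
import org.apache.kafka.common.security.auth.AuthenticationContext;
import org.apache.kafka.common.security.auth.KafkaPrincipal;
import org.apache.kafka.common.security.auth.KafkaPrincipalBuilder;

public class AddressAwarePrincipalBuilder implements KafkaPrincipalBuilder {
    @Override
    public KafkaPrincipal build(AuthenticationContext context) {
        String host = context.clientAddress().getHostAddress();
        // Tag principals by network segment; the broker can then apply
        // host-based quotas or ACLs to the resulting principal name.
        String name = host.startsWith("10.") ? "trusted:" + host : "external:" + host;
        return new KafkaPrincipal(KafkaPrincipal.USER_TYPE, name);
    }
}
{code}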
>
> On Fri, Sep 8, 2017 at 1:12 AM, Edoardo Comar  wrote:
>
> > I am late to the party and my +1 vote is useless - but I eventually took
> > the time to go through it and it's a great improvement.
> > It'd enable us to carry along with the Principal a couple of additional
> > attributes without the hacks we're doing today :-)
> >
> > cheers
> > --
> >
> > Edoardo Comar
> >
> > IBM Message Hub
> >
> > IBM UK Ltd, Hursley Park, SO21 2JN
> >
> >
> >
> > From:   Jason Gustafson 
> > To: dev@kafka.apache.org
> > Date:   07/09/2017 17:23
> > Subject:Re: [VOTE] KIP-189 - Improve principal builder interface
> > and add support for SASL
> >
> >
> >
> > I am closing the vote. Here are the totals:
> >
> > Binding: Ismael, Rajini, Jun, (Me)
> > Non-binding: Mayuresh, Manikumar, Mickael
> >
> > Thanks all for the reviews!
> >
> >
> >
> > On Wed, Sep 6, 2017 at 2:22 PM, Jason Gustafson 
> > wrote:
> >
> > > Hi All,
> > >
> > > When implementing this, I found that the SecurityProtocol class has
> some
> > > internal details which we might not want to expose to users (in
> > particular
> > > to enable testing). Since it's still useful to know the security
> > protocol
> > > in use in some cases, and since the security protocol names are already
> > > exposed in configuration (and hence cannot easily change), I have
> > modified
> > > the method in AuthenticationContext to return the name of the security
> > > protocol instead. Let me know if there are any concerns with this
> > change.
> > > Otherwise, I will close out the vote.
> > >
> > > Thanks,
> > > Jason
> > >
> > > On Tue, Sep 5, 2017 at 11:10 AM, Ismael Juma 
> wrote:
> > >
> > >> Thanks for the KIP, +1 (binding).
> > >>
> > >> Ismael
> > >>
> > >> On Wed, Aug 30, 2017 at 4:51 PM, Jason Gustafson 
> > >> wrote:
> > >>
> > >> > I'd like to open the vote for KIP-189:
> > >> >
> > >> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-189%3A+Improve+principal+builder+interface+and+add+support+for+SASL.
> > >> > Thanks to everyone who helped review.
> > >> >
> > >> > -Jason
> > >> >
> > >>
> > >
> > >
> >
> >
> >
> > Unless stated otherwise above:
> > IBM United Kingdom Limited - Registered in England and Wales with number
> > 741598.
> > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6
> 3AU
> >
>


Re: [DISCUSS] KIP-190: Handle client-ids consistently between clients and brokers

2017-09-13 Thread Gwen Shapira
OK. LGTM then :)

On Wed, Sep 13, 2017 at 12:38 PM Mickael Maison 
wrote:

> Yes exactly!
> I've updated the KIP to make it more explicit.
>
> Also I noticed my initial email didn't contain the link to the KIP, so
> here it is:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-190%3A+Handle+client-ids+consistently+between+clients+and+brokers
>
> On Tue, Sep 12, 2017 at 7:23 PM, Gwen Shapira  wrote:
> > If I understand you correctly, you are saying:
> > 1. KIP-190 will not affect anyone who doesn't use special characters in
> > their client IDs
> > 2. Those who have special characters in client IDs already have tons of
> > metrics issues and won't be inconvenienced by a KIP that fixes them.
> >
> > Did I get it right?
> >
> > On Sat, Sep 9, 2017 at 9:54 AM Mickael Maison 
> > wrote:
> >
> >> Hi Gwen, thanks for taking a look at the KIP.
> >>
> >> I understand your concern about making the transition as smooth as
> >> possible. However, there are several issues with the way client-ids
> >> with special characters are handled:
> >> Client-ids that contain invalid ObjectName characters (colon, equals,
> >> etc.) currently fail to be registered by the built-in JMX reporter, so
> >> they already don't appear in all monitoring systems! These also cause
> >> issues with Quotas.
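A minimal illustration of the registration failure described above (the
ObjectName shape mirrors the producer's JMX metrics, shortened for the
example):

{code}
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

public class ClientIdObjectNameCheck {
    public static void main(String[] args) {
        String[] clientIds = {"my-client", "bad:client", "bad=client", "bad,client"};
        for (String clientId : clientIds) {
            try {
                new ObjectName("kafka.producer:type=producer-metrics,client-id=" + clientId);
                System.out.println(clientId + " -> OK");
            } catch (MalformedObjectNameException e) {
                System.out.println(clientId + " -> rejected: " + e.getMessage());
            }
        }
    }
}
{code}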
> >>
> >> The Java clients as well as the kafka-configs.sh tool already reject
> >> them (even though the error you get from the Produce/Consumer is
> >> pretty cryptic).
> >>
> >> People currently using client-ids with special characters have to be
> >> running 3rd party clients and probably encounter strange quota issues
> >> as well as missing metrics (if they use JMX).
> >>
> >> So if we really want to do the smallest possible change, we could only
> >> encode ObjectName special characters instead of all special
> >> characters. That way at least the JMX reporter would work correctly.
> >> Display a warning when using any other special characters. Then in a
> >> later release, encode everything like we currently do for the
> >> User/Principal.
> >>
> >> What do you think ?
> >>
> >> On Fri, Sep 1, 2017 at 7:33 AM, Gwen Shapira  wrote:
> >> > Thanks for bumping this. I do have a concern:
> >> >
> >> > This proposal changes the names of existing metrics - as such, it will
> >> > require all owners of monitoring systems to update their dashboards.
> It
> >> > will also complicate monitoring of multiple clusters with different
> >> > versions and require some modifications to existing monitoring
> automation
> >> > systems.
> >> >
> >> > What do you think of an alternative solution:
> >> > 1. For the next release, add the validations on the broker side and
> >> print a
> >> > "warning" that this client id is invalid, that it will break metrics
> and
> >> > that it will be rejected in newer versions.
> >> > 2. Few releases later, actually turn the validation on and return
> >> > InvalidClientID error to clients.
> >> >
> >> > We did something similar when we deprecated acks=2.
> >> >
> >> > Gwen
> >> >
> >> >
> >> > On Thu, Aug 31, 2017 at 12:13 PM Mickael Maison <
> >> mickael.mai...@gmail.com>
> >> > wrote:
> >> >
> >> >> Even though it's pretty non-controversial, I was expecting a few
> >> comments.
> >> >> I'll wait until next week for comments then I'll start the vote.
> >> >>
> >> >> Thanks
> >> >>
> >> >> On Mon, Aug 21, 2017 at 6:51 AM, Mickael Maison
> >> >>  wrote:
> >> >> > Hi all,
> >> >> >
> >> >> > I have created a KIP to cleanup the way client-ids are handled by
> >> >> > brokers and clients.
> >> >> >
> >> >> > Currently the Java clients have some restrictions on the client-ids
> >> >> > that are not enforced by the brokers. Using 3rd party clients,
> >> >> > client-ids containing any characters can be used causing some
> strange
> >> >> > behaviours in the way brokers handle metrics and quotas.
> >> >> >
> >> >> > Feedback is appreciated.
> >> >> >
> >> >> > Thanks
> >> >>
> >>
>


Re: [VOTE] KIP-192 : Provide cleaner semantics when idempotence is enabled

2017-09-13 Thread Apurva Mehta
The KIP has passed with three binding +1 votes (Guozhang, Ismael, Jason)
and no -1 or +0 votes.

Thanks to everyone for the feedback.

Apurva
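For reference on the max.in.flight discussion below, a sketch of the producer
settings involved (values are illustrative; the defaults were still being
debated in this thread):

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class IdempotentProducerConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        // Each in-flight batch whose metadata the broker caches costs memory
        // per (producer, partition); a lower value bounds that overhead.
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 2);
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(
                props, new StringSerializer(), new StringSerializer())) {
            // ... send records ...
        }
    }
}
{code}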

On Mon, Sep 11, 2017 at 8:31 PM, Apurva Mehta  wrote:

> Hi Becket,
>
> You are right: the calculations are per partition produced to by each
> idempotent producer. I actually think this makes the problem more acute
> when we actually end up enabling the idempotent producer by default.
> However, even the most optimized version will still result in an overhead
> of at least 46 bytes per (producer, broker, partition) triplet.
>
> I will call this out in KIP-185. I think we would want to optimize the
> memory utilization of each (partition, broker, producer) triplet before
> turning on the default. Another option could be to default max.in.flight to
> 2 and retain the metadata of just 2 batches. This would significantly
> reduce the memory overhead in the default distribution, and in the most
> common cases would not result in produce responses without the record
> metadata. That may be a simple way to address the issue.
> Thanks,
> Apurva
>
> On Mon, Sep 11, 2017 at 7:40 PM, Becket Qin  wrote:
>
>> Hi Apurva,
>>
>> Thanks for the explanation.
>>
>> I think the STO will be per producer/partition, right? Am I missing
>> something? You are right that the proposal does not strengthen the
>> semantic. The goal is more about trying to bound the memory consumption to
>> some reasonable number and use the memory more efficiently.
>>
>> But I agree it is not a blocker for this KIP as the memory pressure may
>> not
>> be that big. With 5K partitions per broker, 50 producers per partition on
>> average, we are going to consume about 35 MB of memory with 5 entries (142
>> bytes) in cache. So it is probably still OK and it is more urgent to fix
>> the upgrade path.
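(For reference, the arithmetic behind the 35 MB estimate above: 5,000
partitions per broker × 50 producers per partition × 142 bytes per cached
entry ≈ 35.5 MB.)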
>>
>> Thanks,
>>
>> Jiangjie (Becket) Qin
>>
>>
>>
>>
>>
>> On Mon, Sep 11, 2017 at 4:13 PM, Apurva Mehta 
>> wrote:
>>
>> > Hi Becket,
>> >
>> > Regarding the current implementation: we opted for a simpler server side
>> > implementation where we _don't_ snapshot the metadata of the last 5
>> batches
>> > to disk. So if a broker fails, comes back online, and is the leader
>> again,
>> > it will only have the last batch in memory. With max.in.flight = 5, it
>> is
>> > thus possible to receive duplicates of 4 prior batches and yet not have
>> the
>> > metadata.
>> >
>> > Since we can return DuplicateSequence to the client, and since this is
>> > considered a successful response on the client, this is a good
>> solution. It
>> > also means that we no longer have a hard limit of max.in.flight == 5: if
>> > you use a larger value, you are more likely to receive responses without
>> > the offset/timestamp metadata.
>> >
>> > Your suggestion for sending the lastAckdSequence per partition would
>> help
>> > reduce memory on the broker, but it won't bound it: we need at least one
>> > cached batch per producer to do sequence number validation so it will
>> > always grow proportional to the number of active producers. Nor does it
>> > uniquely solve the problem of removing the cap on max.in.flight:
>> producers
>> > are still not guaranteed that the metadata for all their inflight
>> batches
>> > will always be cached and returned.
>> >
>> > So it doesn't strengthen any semantics, but does optimize memory usage
>> on
>> > the broker. But what's the usage with the proposed changes? With 5
>> cached
>> > batches, each producerIdEntry will be 142 bytes, or 7000 active
>> producers
>> > who use idempotence will take 1MB per broker. With 1 cached batch, we
>> save
>> > at most 96 bytes per producer, so we could have at most 21000 active
>> > producers using idempotence per MB of broker memory.
>> >
>> > The savings are significant, but I am not convinced optimizing this is
>> > worth it right now since we are not making the idempotent producer the
>> > default in this release. Further there are a bunch of other items which
>> > make exactly once semantics more usable which we are working on in this
>> > release. Given that there are only 8 days left till feature freeze, I
>> would
>> > rather tackle the usability issues (KIP-192) than make these memory
>> > optimizations on the broker. The payoff from the latter seems much
>> smaller,
>> > and it is always possible to do in the future.
>> >
>> > What does the rest of the community think?
>> >
>> > Thanks,
>> > Apurva
>> >
>> > On Mon, Sep 11, 2017 at 2:39 PM, Becket Qin 
>> wrote:
>> >
>> > > Hi Apurva,
>> > >
>> > > Sorry for being late on this thread. I am trying to understand the
>> > > implementation of the case where we will throw DuplicateSequenceException.
>> My
>> > > understanding is the following:
>> > > 1. On the broker side, we will cache 5 most recent
>> > > sequence/timestamp/offset (STO) for each of the producer ID.
>> > > 2. When duplicate occurs and the producer has 

Build failed in Jenkins: kafka-0.11.0-jdk7 #306

2017-09-13 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Un-hide the tutorial buttons on web docs

--
[...truncated 2.44 MB...]

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldNotWriteToChangeLogOnPutIfAbsentWhenValueForKeyExists PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnNullOnPutIfAbsentWhenNoPreviousValue STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnNullOnPutIfAbsentWhenNoPreviousValue PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldLogKeyNullOnDelete STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldLogKeyNullOnDelete PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnNullOnGetWhenDoesntExist STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnNullOnGetWhenDoesntExist PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnOldValueOnDelete STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnOldValueOnDelete PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldNotWriteToInnerOnPutIfAbsentWhenValueForKeyExists STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldNotWriteToInnerOnPutIfAbsentWhenValueForKeyExists PASSED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnValueOnGetWhenExists STARTED

org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStoreTest > 
shouldReturnValueOnGetWhenExists PASSED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldRemove STARTED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldRemove PASSED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldPutAndFetch STARTED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldPutAndFetch PASSED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldRollSegments STARTED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldRollSegments PASSED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldFindValuesWithinRange STARTED

org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest > 
shouldFindValuesWithinRange PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
STARTED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled 
STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled PASSED


Build failed in Jenkins: kafka-trunk-jdk7 #2758

2017-09-13 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Fix typo in test log4j.properties

[wangguoz] MINOR: update tutorial doc to match ak-site

--
[...truncated 924.06 KB...]
kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange PASSED


Re: [ANNOUCE] Apache Kafka 0.11.0.1 Released

2017-09-13 Thread Ted Yu
Using a downstream project, I verified that artifacts for 0.11.0.1 can be
pulled down.

Thanks

On Wed, Sep 13, 2017 at 2:00 PM, Damian Guy  wrote:

> AFAICT they are there
> http://repo.maven.apache.org/maven2/org/apache/kafka/kafka-clients/0.11.0.1/
> On Wed, 13 Sep 2017 at 21:46, Ted Yu  wrote:
>
> > Damian:
> > Looks like maven artifacts are not populated yet:
> >
> > https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients
> >
> > Do you know when artifacts would be visible for downstream projects ?
> >
> > Thanks
> >
> > On Wed, Sep 13, 2017 at 4:36 AM, Damian Guy 
> wrote:
> >
> > > The Apache Kafka community is pleased to announce the release for
> Apache
> > > Kafka 0.11.0.1. This is a bug fix release that fixes 51 issues in
> > 0.11.0.0.
> > >
> > > All of the changes in this release can be found in the release notes:
> > > https://archive.apache.org/dist/kafka/0.11.0.1/RELEASE_NOTES.html
> > >
> > > Apache Kafka is a distributed streaming platform with four core APIs:
> > >
> > > ** The Producer API allows an application to publish a stream of records
> > > to one or more Kafka topics.
> > >
> > > ** The Consumer API allows an application to subscribe to one or more
> > > topics and process the stream of records produced to them.
> > >
> > > ** The Streams API allows an application to act as a stream processor,
> > > consuming an input stream from one or more topics and producing an
> output
> > > stream to one or more output topics, effectively transforming the input
> > > streams to output streams.
> > >
> > > ** The Connector API allows building and running reusable producers or
> > > consumers that connect Kafka topics to existing applications or data
> > > systems. For example, a connector to a relational database might capture
> > > every change to a table.
> > >
> > >
> > > With these APIs, Kafka can be used for two broad classes of
> application:
> > >
> > > ** Building real-time streaming data pipelines that reliably get data
> > > between systems or applications.
> > >
> > > ** Building real-time streaming applications that transform or react to
> > the
> > > streams of data.
> > >
> > >
> > > You can download the source release from
> > > https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/kafka-0.11.0.1-src.tgz
> > >
> > > and binary releases from
> > > https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/kafka_2.11-0.11.0.1.tgz
> > > https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/kafka_2.12-0.11.0.1.tgz
> > >
> > >
> > > A big thank you to the following 33 contributors to this release!
> > >
> > > Apurva Mehta, Bill Bejeck, Colin P. Mccabe, Damian Guy, Derrick Or,
> Dong
> > > Lin, dongeforever, Eno Thereska, Ewen Cheslack-Postava, Gregor
> > Uhlenheuer,
> > > Guozhang Wang, Hooman Broujerdi, huxihx, Ismael Juma, Jan Burkhardt,
> > Jason
> > > Gustafson, Jeff Klukas, Jiangjie Qin, Joel Dice, Konstantine
> Karantasis,
> > > Manikumar Reddy, Matthias J. Sax, Max Zheng, Paolo Patierno, ppatierno,
> > > radzish, Rajini Sivaram, Randall Hauch, Robin Moffatt, Stephane Roset,
> > > umesh chaudhary, Vahid Hashemian, Xavier Léauté
> > >
> > > We welcome your help and feedback. For more information on how to
> > > report problems, and to get involved, visit the project website at
> > > http://kafka.apache.org/
> > >
> > >
> > > Thanks,
> > > Damian
> > >
> >
>


[jira] [Created] (KAFKA-5888) Transactions system test should check for message order

2017-09-13 Thread Apurva Mehta (JIRA)
Apurva Mehta created KAFKA-5888:
---

 Summary: Transactions system test should check for message order
 Key: KAFKA-5888
 URL: https://issues.apache.org/jira/browse/KAFKA-5888
 Project: Kafka
  Issue Type: Bug
Reporter: Apurva Mehta
Assignee: Apurva Mehta
 Fix For: 1.0.0


Currently, the transactions system test doesn't check for correct ordering of 
the messages in a transaction. With KAFKA-5494, we can have multiple inflight 
requests for a single transaction, which could lead to out-of-order messages
in the log if there are bugs. So we should assert that order is maintained in 
our system tests.
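A rough sketch of the kind of per-partition check meant here, assuming the
test payload carries a monotonically increasing value per partition (details
are illustrative):

{code}
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;

public class OrderAssertion {
    // Fail if the values within any partition are not strictly increasing.
    static void assertOrdered(List<ConsumerRecord<String, Long>> records) {
        Map<Integer, Long> lastSeen = new HashMap<>();
        for (ConsumerRecord<String, Long> record : records) {
            Long previous = lastSeen.put(record.partition(), record.value());
            if (previous != null && record.value() <= previous) {
                throw new AssertionError("Out-of-order message in partition "
                        + record.partition() + ": " + record.value()
                        + " after " + previous);
            }
        }
    }
}
{code}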



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Build failed in Jenkins: kafka-trunk-jdk8 #2017

2017-09-13 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Un-hide the tutorial buttons on web docs

--
[...truncated 2.53 MB...]
org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
shouldAddStateStoreToRegexDefinedSource STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
shouldAddStateStoreToRegexDefinedSource PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithZeroSizedCache STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithZeroSizedCache PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithNonZeroSizedCache STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperationsWithNonZeroSizedCache PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduce STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduce PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregate STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregate PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCount STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCount PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldGroupByKey STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldGroupByKey PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountWithInternalStore STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountWithInternalStore PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin PASSED

org.apache.kafka.streams.integration.ResetIntegrationTest > 
testReprocessingFromScratchAfterResetWithIntermediateUserTopic STARTED

org.apache.kafka.streams.integration.ResetIntegrationTest > 
testReprocessingFromScratchAfterResetWithIntermediateUserTopic FAILED
java.lang.AssertionError: expected:<0> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.kafka.streams.integration.ResetIntegrationTest.cleanGlobal(ResetIntegrationTest.java:363)
at 
org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterResetWithIntermediateUserTopic(ResetIntegrationTest.java:172)


Is ConsumerConfig class supposed to contain all config options?

2017-09-13 Thread Jakub Scholz
Hi,

When going through the Kafka client examples I noticed in the Consumer
example (
https://github.com/apache/kafka/blob/trunk/examples/src/main/java/kafka/examples/Consumer.java)
that it is using the ConsumerConfig class instead of specifying the
configuration keys manually (e.g. ConsumerConfig.GROUP_ID_CONFIG instead of
"group.id"). From this example code I thought that the purpose of the
ConsumerConfig and ProducerConfig classes is to make it easier to configure
the clients and used it like that.

But then I was looking for some SSL options and realised that these two
classes do not contain all configuration options. All the SSL and SASL
options are in SslConfigs and SaslConfigs. Some other options are also in
CommonClientConfigs. But it looks like the keys for some options from
CommonClientConfigs are also partially mirrored in ConsumerConfig and ProducerConfig (e.g.
BOOTSTRAP_SERVERS_CONFIG). This seems a bit confusing for me as a
developer. I know that the documentation on the web lists all the options. But
I thought it would be nice to have this easily available also from the IDE.
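
For illustration, configuring a consumer with SSL today means mixing several
classes (a minimal sketch in Java; the values are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.config.SslConfigs;

public class ConsumerConfigSketch {
    public static Properties sslConsumerProps() {
        Properties props = new Properties();
        // Mirrored in ConsumerConfig (and ProducerConfig):
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9093");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        // Only available via CommonClientConfigs:
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
        // Only available via SslConfigs:
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/path/to/truststore.jks");
        return props;
    }
}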

So I'm a bit wondering ... what is the purpose of the ProducerConfig and
ConsumerConfig classes? IMHO they should either:
- contain all the client options so that they can be used for everything
(i.e. they should link all options from CommonClientConfigs, SslConfigs, etc.)
 or:
- not be used in the examples, so as not to mislead anyone into thinking
they are helper classes with all options.

In either case, I would gladly raise a JIRA and PR for this. So please let
me know what you think about it.

Thanks & Regards
Jakub


Build failed in Jenkins: kafka-trunk-jdk9 #8

2017-09-13 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Un-hide the tutorial buttons on web docs

[jason] MINOR: Fix typo in test log4j.properties

[wangguoz] MINOR: update tutorial doc to match ak-site

--
[...truncated 1.16 MB...]
kafka.api.LegacyAdminClientTest > testOffsetsForTimesWhenOffsetNotFound STARTED

kafka.api.LegacyAdminClientTest > testOffsetsForTimesWhenOffsetNotFound PASSED

kafka.api.LegacyAdminClientTest > testDescribeConsumerGroupForNonExistentGroup 
STARTED

kafka.api.LegacyAdminClientTest > testDescribeConsumerGroupForNonExistentGroup 
PASSED

kafka.api.LegacyAdminClientTest > testLogStartOffsetAfterDeleteRecords STARTED

kafka.api.LegacyAdminClientTest > testLogStartOffsetAfterDeleteRecords PASSED

kafka.api.LegacyAdminClientTest > testGetConsumerGroupSummary STARTED

kafka.api.LegacyAdminClientTest > testGetConsumerGroupSummary PASSED

kafka.api.AdminClientWithPoliciesIntegrationTest > testInvalidAlterConfigs 
STARTED

kafka.api.AdminClientWithPoliciesIntegrationTest > testInvalidAlterConfigs 
PASSED

kafka.api.AdminClientWithPoliciesIntegrationTest > testValidAlterConfigs STARTED

kafka.api.AdminClientWithPoliciesIntegrationTest > testValidAlterConfigs PASSED

kafka.api.AdminClientWithPoliciesIntegrationTest > 
testInvalidAlterConfigsDueToPolicy STARTED

kafka.api.AdminClientWithPoliciesIntegrationTest > 
testInvalidAlterConfigsDueToPolicy PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testAttemptToCreateInvalidAcls 
STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testAttemptToCreateInvalidAcls 
PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testAclAuthorizationDenied STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testAclAuthorizationDenied PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testAclOperations STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testAclOperations PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testAclOperations2 STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testAclOperations2 PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testInvalidAlterConfigs STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testInvalidAlterConfigs PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testClose STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testClose PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testMinimumRequestTimeouts STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testMinimumRequestTimeouts PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testForceClose STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testForceClose PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testListNodes STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testListNodes PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testDelayedClose STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testDelayedClose PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testDescribeReplicaLogDir STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testDescribeReplicaLogDir PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testCreateDeleteTopics STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testCreateDeleteTopics PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testDescribeLogDirs STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testDescribeLogDirs PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testDescribeCluster STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testDescribeCluster PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testDescribeNonExistingTopic 
STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testDescribeNonExistingTopic 
PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testDescribeAndAlterConfigs 
STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testDescribeAndAlterConfigs PASSED

kafka.api.SaslSslAdminClientIntegrationTest > 
testAlterReplicaLogDirBeforeTopicCreation STARTED

kafka.api.SaslSslAdminClientIntegrationTest > 
testAlterReplicaLogDirBeforeTopicCreation PASSED

kafka.api.SaslSslAdminClientIntegrationTest > testCallInFlightTimeouts STARTED

kafka.api.SaslSslAdminClientIntegrationTest > testCallInFlightTimeouts PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testZkAclsDisabled STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testZkAclsDisabled PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.GroupCoordinatorIntegrationTest > 
testGroupCoordinatorPropagatesOfffsetsTopicCompressionCodec STARTED

kafka.api.GroupCoordinatorIntegrationTest > 
testGroupCoordinatorPropagatesOfffsetsTopicCompressionCodec PASSED

kafka.api.test.ProducerCompressionTest > 

[GitHub] kafka pull request #3852: MINOR: Update TransactionManager to use LogContext

2017-09-13 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/3852

MINOR: Update TransactionManager to use LogContext
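
As background, a minimal sketch of the LogContext pattern being adopted here
(the prefix format is illustrative, not necessarily what this patch uses):

import org.apache.kafka.common.utils.LogContext;
import org.slf4j.Logger;

public class TransactionLoggingSketch {
    private final Logger log;

    public TransactionLoggingSketch(String transactionalId) {
        // LogContext stamps every line logged through 'log' with the prefix,
        // e.g. "[TransactionManager transactionalId=my-txn] Beginning ..."
        LogContext logContext = new LogContext(
            "[TransactionManager transactionalId=" + transactionalId + "] ");
        this.log = logContext.logger(TransactionLoggingSketch.class);
    }

    public void begin() {
        log.debug("Beginning new transaction");
    }
}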



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka 
minor-use-log-context-txn-manager

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3852.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3852


commit ed9809dad1cb5647a1a5d4d22ead333d098571bd
Author: Jason Gustafson 
Date:   2017-09-13T21:01:27Z

MINOR: Update TransactionManager to use LogContext




---


Re: [ANNOUNCE] Apache Kafka 0.11.0.1 Released

2017-09-13 Thread Damian Guy
AFAICT they are there
http://repo.maven.apache.org/maven2/org/apache/kafka/kafka-clients/0.11.0.1/
On Wed, 13 Sep 2017 at 21:46, Ted Yu  wrote:

> Damian:
> Looks like maven artifacts are not populated yet:
>
> https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients
>
> Do you know when artifacts would be visible for downstream projects ?
>
> Thanks
>
> On Wed, Sep 13, 2017 at 4:36 AM, Damian Guy  wrote:
>
> > The Apache Kafka community is pleased to announce the release for Apache
> > Kafka 0.11.0.1. This is a bug fix release that fixes 51 issues in
> 0.11.0.0.
> >
> > All of the changes in this release can be found in the release notes:
> > https://archive.apache.org/dist/kafka/0.11.0.1/RELEASE_NOTES.html
> >
> > Apache Kafka is a distributed streaming platform with four core
> > APIs:
> >
> > ** The Producer API allows an application to publish a stream of records to
> > one or more Kafka topics.
> >
> > ** The Consumer API allows an application to subscribe to one or more
> > topics and process the stream of records produced to them.
> >
> > ** The Streams API allows an application to act as a stream processor,
> > consuming an input stream from one or more topics and producing an output
> > stream to one or more output topics, effectively transforming the input
> > streams to output streams.
> >
> > ** The Connector API allows building and running reusable producers or
> > consumers that connect Kafka topics to existing applications or data
> > systems. For example, a connector to a relational database might capture
> > every change to a table.
> >
> >
> > With these APIs, Kafka can be used for two broad classes of application:
> >
> > ** Building real-time streaming data pipelines that reliably get data
> > between systems or applications.
> >
> > ** Building real-time streaming applications that transform or react to
> the
> > streams of data.
> >
> >
> > You can download the source release from
> > https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/kafka-0.11.0.1-src.tgz
> >
> > and binary releases from
> > https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/kafka_2.11-0.11.0.1.tgz
> >
> > https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/kafka_2.12-0.11.0.1.tgz
> >
> >
> > A big thank you to the following 33 contributors to this release!
> >
> > Apurva Mehta, Bill Bejeck, Colin P. Mccabe, Damian Guy, Derrick Or, Dong
> > Lin, dongeforever, Eno Thereska, Ewen Cheslack-Postava, Gregor
> Uhlenheuer,
> > Guozhang Wang, Hooman Broujerdi, huxihx, Ismael Juma, Jan Burkhardt,
> Jason
> > Gustafson, Jeff Klukas, Jiangjie Qin, Joel Dice, Konstantine Karantasis,
> > Manikumar Reddy, Matthias J. Sax, Max Zheng, Paolo Patierno, ppatierno,
> > radzish, Rajini Sivaram, Randall Hauch, Robin Moffatt, Stephane Roset,
> > umesh chaudhary, Vahid Hashemian, Xavier Léauté
> >
> > We welcome your help and feedback. For more information on how to
> > report problems, and to get involved, visit the project website at
> > http://kafka.apache.org/
> >
> >
> > Thanks,
> > Damian
> >
>


Build failed in Jenkins: kafka-trunk-jdk7 #2757

2017-09-13 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] MINOR: Un-hide the tutorial buttons on web docs

--
[...truncated 929.49 KB...]
kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryTailIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnUnsupportedIfNoEpochRecorded PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPersistEpochsBetweenInstances PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 

Re: [ANNOUNCE] Apache Kafka 0.11.0.1 Released

2017-09-13 Thread Ted Yu
Damian:
Looks like maven artifacts are not populated yet:

https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients

Do you know when artifacts would be visible for downstream projects ?

Thanks

On Wed, Sep 13, 2017 at 4:36 AM, Damian Guy  wrote:

> The Apache Kafka community is pleased to announce the release for Apache
> Kafka 0.11.0.1. This is a bug fix release that fixes 51 issues in 0.11.0.0.
>
> All of the changes in this release can be found in the release notes:
> https://archive.apache.org/dist/kafka/0.11.0.1/RELEASE_NOTES.html
>
> Apache Kafka is a distributed streaming platform with four core APIs:
>
> ** The Producer API allows an application to publish a stream of records to
> one or more Kafka topics.
>
> ** The Consumer API allows an application to subscribe to one or more
> topics and process the stream of records produced to them.
>
> ** The Streams API allows an application to act as a stream processor,
> consuming an input stream from one or more topics and producing an output
> stream to one or more output topics, effectively transforming the input
> streams to output streams.
>
> ** The Connector API allows building and running reusable producers or
> consumers that connect Kafka topics to existing applications or data
> systems. For example, a connector to a relational database might capture
> every change to a table.
>
>
> With these APIs, Kafka can be used for two broad classes of application:
>
> ** Building real-time streaming data pipelines that reliably get data
> between systems or applications.
>
> ** Building real-time streaming applications that transform or react to the
> streams of data.
>
>
> You can download the source release from
> https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/kafka-0.11.0.1-src.tgz
>
> and binary releases from
> https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/kafka_2.11-0.11.0.1.tgz
>
> https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/kafka_2.12-0.11.0.1.tgz
>
>
> A big thank you to the following 33 contributors to this release!
>
> Apurva Mehta, Bill Bejeck, Colin P. Mccabe, Damian Guy, Derrick Or, Dong
> Lin, dongeforever, Eno Thereska, Ewen Cheslack-Postava, Gregor Uhlenheuer,
> Guozhang Wang, Hooman Broujerdi, huxihx, Ismael Juma, Jan Burkhardt, Jason
> Gustafson, Jeff Klukas, Jiangjie Qin, Joel Dice, Konstantine Karantasis,
> Manikumar Reddy, Matthias J. Sax, Max Zheng, Paolo Patierno, ppatierno,
> radzish, Rajini Sivaram, Randall Hauch, Robin Moffatt, Stephane Roset,
> umesh chaudhary, Vahid Hashemian, Xavier Léauté
>
> We welcome your help and feedback. For more information on how to
> report problems, and to get involved, visit the project website at
> http://kafka.apache.org/
>
>
> Thanks,
> Damian
>


Re: [ANNOUNCE] Apache Kafka 0.11.0.1 Released

2017-09-13 Thread Sriram Subramanian
Thanks for driving the release Damian.

> On Sep 13, 2017, at 1:18 PM, Guozhang Wang  wrote:
> 
> Thanks for driving this Damian!
> 
> 
> Guozhang
> 
>> On Wed, Sep 13, 2017 at 4:36 AM, Damian Guy  wrote:
>> 
>> The Apache Kafka community is pleased to announce the release for Apache
>> Kafka 0.11.0.1. This is a bug fix release that fixes 51 issues in 0.11.0.0.
>> 
>> All of the changes in this release can be found in the release notes:
>> https://archive.apache.org/dist/kafka/0.11.0.1/RELEASE_NOTES.html
>> 
>> Apache Kafka is a distributed streaming platform with four core APIs:
>> 
>> ** The Producer API allows an application to publish a stream of records to
>> one or more Kafka topics.
>> 
>> ** The Consumer API allows an application to subscribe to one or more
>> topics and process the stream of records produced to them.
>> 
>> ** The Streams API allows an application to act as a stream processor,
>> consuming an input stream from one or more topics and producing an output
>> stream to one or more output topics, effectively transforming the input
>> streams to output streams.
>> 
>> ** The Connector API allows building and running reusable producers or
>> consumers that connect Kafka topics to existing applications or data
>> systems. For example, a connector to a relational database might capture
>> every change to a table.
>> 
>> 
>> With these APIs, Kafka can be used for two broad classes of application:
>> 
>> ** Building real-time streaming data pipelines that reliably get data
>> between systems or applications.
>> 
>> ** Building real-time streaming applications that transform or react to the
>> streams of data.
>> 
>> 
>> You can download the source release from
>> https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/kafka-0.11.0.1-src.tgz
>> 
>> and binary releases from
>> https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/kafka_2.11-0.11.0.1.tgz
>>
>> https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/kafka_2.12-0.11.0.1.tgz
>> 
>> 
>> A big thank you to the following 33 contributors to this release!
>> 
>> Apurva Mehta, Bill Bejeck, Colin P. Mccabe, Damian Guy, Derrick Or, Dong
>> Lin, dongeforever, Eno Thereska, Ewen Cheslack-Postava, Gregor Uhlenheuer,
>> Guozhang Wang, Hooman Broujerdi, huxihx, Ismael Juma, Jan Burkhardt, Jason
>> Gustafson, Jeff Klukas, Jiangjie Qin, Joel Dice, Konstantine Karantasis,
>> Manikumar Reddy, Matthias J. Sax, Max Zheng, Paolo Patierno, ppatierno,
>> radzish, Rajini Sivaram, Randall Hauch, Robin Moffatt, Stephane Roset,
>> umesh chaudhary, Vahid Hashemian, Xavier Léauté
>> 
>> We welcome your help and feedback. For more information on how to
>> report problems, and to get involved, visit the project website at
>> http://kafka.apache.org/
>> 
>> 
>> Thanks,
>> Damian
> 
> 
> 
> -- 
> -- Guozhang


[GitHub] kafka pull request #3851: MINOR: Fix typo

2017-09-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3851


---


[GitHub] kafka pull request #3847: MINOR: update tutorial doc to match ak-site

2017-09-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3847


---


Re: [ANNOUNCE] Apache Kafka 0.11.0.1 Released

2017-09-13 Thread Guozhang Wang
Thanks for driving this Damian!


Guozhang

On Wed, Sep 13, 2017 at 4:36 AM, Damian Guy  wrote:

> The Apache Kafka community is pleased to announce the release for Apache
> Kafka 0.11.0.1. This is a bug fix release that fixes 51 issues in 0.11.0.0.
>
> All of the changes in this release can be found in the release notes:
> https://archive.apache.org/dist/kafka/0.11.0.1/RELEASE_NOTES.html
>
> Apache Kafka is a distributed streaming platform with four core APIs:
>
> ** The Producer API allows an application to publish a stream of records to
> one or more Kafka topics.
>
> ** The Consumer API allows an application to subscribe to one or more
> topics and process the stream of records produced to them.
>
> ** The Streams API allows an application to act as a stream processor,
> consuming an input stream from one or more topics and producing an output
> stream to one or more output topics, effectively transforming the input
> streams to output streams.
>
> ** The Connector API allows building and running reusable producers or
> consumers that connect Kafka topics to existing applications or data
> systems. For example, a connector to a relational database might capture
> every change to a table.
>
>
> With these APIs, Kafka can be used for two broad classes of application:
>
> ** Building real-time streaming data pipelines that reliably get data
> between systems or applications.
>
> ** Building real-time streaming applications that transform or react to the
> streams of data.
>
>
> You can download the source release from
> https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/kafka-0.11.0.1-src.tgz
>
> and binary releases from
> https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/kafka_2.11-0.11.0.1.tgz
>
> https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/kafka_2.12-0.11.0.1.tgz
>
>
> A big thank you to the following 33 contributors to this release!
>
> Apurva Mehta, Bill Bejeck, Colin P. Mccabe, Damian Guy, Derrick Or, Dong
> Lin, dongeforever, Eno Thereska, Ewen Cheslack-Postava, Gregor Uhlenheuer,
> Guozhang Wang, Hooman Broujerdi, huxihx, Ismael Juma, Jan Burkhardt, Jason
> Gustafson, Jeff Klukas, Jiangjie Qin, Joel Dice, Konstantine Karantasis,
> Manikumar Reddy, Matthias J. Sax, Max Zheng, Paolo Patierno, ppatierno,
> radzish, Rajini Sivaram, Randall Hauch, Robin Moffatt, Stephane Roset,
> umesh chaudhary, Vahid Hashemian, Xavier Léauté
>
> We welcome your help and feedback. For more information on how to
> report problems, and to get involved, visit the project website at
> http://kafka.apache.org/
>
>
> Thanks,
> Damian
>



-- 
-- Guozhang


[GitHub] kafka pull request #3850: MINOR: Un-hide the tutorial buttons on web docs

2017-09-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3850


---


Re: [DISCUSS] KIP-190: Handle client-ids consistently between clients and brokers

2017-09-13 Thread Mickael Maison
Yes, exactly!
I've updated the KIP to make it more explicit.

Also I noticed my initial email didn't contain the link to the KIP, so
here it is:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-190%3A+Handle+client-ids+consistently+between+clients+and+brokers
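
To make the JMX problem concrete, here is a small standalone check (plain
JMX, no Kafka dependency; the metric name format below is only illustrative):

import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

public class ClientIdJmxCheck {
    public static void main(String[] args) {
        // Colons and equals signs are illegal in an unquoted ObjectName
        // value, so metric registration fails for such client-ids.
        for (String clientId : new String[]{"my-app", "my:app", "my=app"}) {
            try {
                ObjectName name = new ObjectName(
                    "kafka.producer:type=producer-metrics,client-id=" + clientId);
                System.out.println("registers fine: " + name);
            } catch (MalformedObjectNameException e) {
                // ObjectName.quote(clientId) would be one way to encode it
                System.out.println("rejected: " + clientId);
            }
        }
    }
}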

On Tue, Sep 12, 2017 at 7:23 PM, Gwen Shapira  wrote:
> If I understand you correctly, you are saying:
> 1. KIP-190 will not affect anyone who doesn't use special characters in
> their client IDs
> 2. Those who have special characters in client IDs already have tons of
> metrics issues and won't be inconvenienced by a KIP that fixes them.
>
> Did I get it right?
>
> On Sat, Sep 9, 2017 at 9:54 AM Mickael Maison 
> wrote:
>
>> Hi Gwen, thanks for taking a look at the KIP.
>>
>> I understand your concern trying to make the transition as smooth as
>> possible. However there are several issues with the way client-ids
>> with special characters are handled:
>> Client-ids that contain invalid ObjectName characters (colon, equals,
>> etc) currently fail to be registered by the built-in JMX reporter, so
>> they already don't appear in all monitoring systems! These also cause
>> issues with Quotas.
>>
>> The Java clients as well as the kafka-configs.sh tool already reject
>> them (even though the error you get from the Producer/Consumer is
>> pretty cryptic).
>>
>> People currently using client-ids with special characters have to be
>> running 3rd party clients and probably encounter strange quota issues
>> as well as missing metrics (if they use JMX).
>>
>> So if we really want to make the smallest possible change, we could
>> encode only ObjectName special characters instead of all special
>> characters. That way at least the JMX reporter would work correctly.
>> Display a warning when using any other special characters. Then in a
>> later release, encode everything like we currently do for the
>> User/Principal.
>>
>> What do you think?
>>
>> On Fri, Sep 1, 2017 at 7:33 AM, Gwen Shapira  wrote:
>> > Thanks for bumping this. I do have a concern:
>> >
>> > This proposal changes the names of existing metrics - as such, it will
>> > require all owners of monitoring systems to update their dashboards. It
>> > will also complicate monitoring of multiple clusters with different
>> > versions and require some modifications to existing monitoring automation
>> > systems.
>> >
>> > What do you think of an alternative solution:
>> > 1. For the next release, add the validations on the broker side and
>> print a
>> > "warning" that this client id is invalid, that it will break metrics and
>> > that it will be rejected in newer versions.
>> > 2. Few releases later, actually turn the validation on and return
>> > InvalidClientID error to clients.
>> >
>> > We did something similar when we deprecated acks=2.
>> >
>> > Gwen
>> >
>> >
>> > On Thu, Aug 31, 2017 at 12:13 PM Mickael Maison <
>> mickael.mai...@gmail.com>
>> > wrote:
>> >
>> >> Even though it's pretty non-controversial, I was expecting a few
>> comments.
>> >> I'll wait until next week for comments then I'll start the vote.
>> >>
>> >> Thanks
>> >>
>> >> On Mon, Aug 21, 2017 at 6:51 AM, Mickael Maison
>> >>  wrote:
>> >> > Hi all,
>> >> >
>> >> > I have created a KIP to cleanup the way client-ids are handled by
>> >> > brokers and clients.
>> >> >
>> >> > Currently the Java clients have some restrictions on the client-ids
>> >> > that are not enforced by the brokers. Using 3rd party clients,
>> >> > client-ids containing any characters can be used causing some strange
>> >> > behaviours in the way brokers handle metrics and quotas.
>> >> >
>> >> > Feedback is appreciated.
>> >> >
>> >> > Thanks
>> >>
>>
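
A sketch of the two-phase validation discussed above (the class, method, and
character set here are hypothetical, not actual broker code):

import java.util.regex.Pattern;

public class ClientIdValidationSketch {
    // Roughly the characters the Java clients already accept; illustrative only.
    private static final Pattern LEGAL = Pattern.compile("[a-zA-Z0-9._-]*");

    // Phase 1 (next release): warn but accept.
    // Phase 2 (a few releases later): reject with an error instead.
    static void validate(String clientId) {
        if (!LEGAL.matcher(clientId).matches()) {
            System.err.println("WARN invalid client-id '" + clientId
                + "': it breaks metrics and will be rejected in a future version");
        }
    }
}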


Re: Build failed in Jenkins: kafka-trunk-jdk8 #2016

2017-09-13 Thread Ted Yu
I have filed INFRA-15080 for the 'No space left on device' error.

On Wed, Sep 13, 2017 at 12:07 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See  display/redirect?page=changes>
>
> Changes:
>
> [ismael] KAFKA-4501; Fix EasyMock and disable PowerMock tests under Java 9
>
> --
> [...truncated 4.77 MB...]
> org.apache.kafka.streams.kstream.TimeWindowsTest >
> advanceIntervalMustNotBeLargerThanWindowSize PASSED
>
> org.apache.kafka.streams.kstream.TimeWindowsTest >
> retentionTimeMustNoBeSmallerThanWindowSize STARTED
>
> org.apache.kafka.streams.kstream.TimeWindowsTest >
> retentionTimeMustNoBeSmallerThanWindowSize PASSED
>
> org.apache.kafka.streams.kstream.TimeWindowsTest >
> shouldUseWindowSizeAsRentitionTimeIfWindowSizeIsLargerThanDefaultRetentionTime
> STARTED
>
> org.apache.kafka.streams.kstream.TimeWindowsTest >
> shouldUseWindowSizeAsRentitionTimeIfWindowSizeIsLargerThanDefaultRetentionTime
> PASSED
>
> org.apache.kafka.streams.kstream.WindowTest >
> shouldThrowIfEndIsSmallerThanStart STARTED
>
> org.apache.kafka.streams.kstream.WindowTest >
> shouldThrowIfEndIsSmallerThanStart PASSED
>
> org.apache.kafka.streams.kstream.WindowTest >
> shouldNotBeEqualIfDifferentWindowType STARTED
>
> org.apache.kafka.streams.kstream.WindowTest >
> shouldNotBeEqualIfDifferentWindowType PASSED
>
> org.apache.kafka.streams.kstream.WindowTest >
> shouldBeEqualIfStartAndEndSame STARTED
>
> org.apache.kafka.streams.kstream.WindowTest >
> shouldBeEqualIfStartAndEndSame PASSED
>
> org.apache.kafka.streams.kstream.WindowTest > shouldNotBeEqualIfNull
> STARTED
>
> org.apache.kafka.streams.kstream.WindowTest > shouldNotBeEqualIfNull
> PASSED
>
> org.apache.kafka.streams.kstream.WindowTest >
> shouldThrowIfStartIsNegative STARTED
>
> org.apache.kafka.streams.kstream.WindowTest >
> shouldThrowIfStartIsNegative PASSED
>
> org.apache.kafka.streams.kstream.WindowTest >
> shouldNotBeEqualIfStartOrEndIsDifferent STARTED
>
> org.apache.kafka.streams.kstream.WindowTest >
> shouldNotBeEqualIfStartOrEndIsDifferent PASSED
>
> org.apache.kafka.streams.kstream.PrintedTest >
> shouldCreateProcessorThatPrintsToStdOut STARTED
>
> org.apache.kafka.streams.kstream.PrintedTest >
> shouldCreateProcessorThatPrintsToStdOut PASSED
>
> org.apache.kafka.streams.kstream.PrintedTest >
> shouldPrintWithKeyValueMapper STARTED
>
> org.apache.kafka.streams.kstream.PrintedTest >
> shouldPrintWithKeyValueMapper PASSED
>
> org.apache.kafka.streams.kstream.PrintedTest >
> shouldThrowTopologyExceptionIfFilePathIsEmpty STARTED
>
> org.apache.kafka.streams.kstream.PrintedTest >
> shouldThrowTopologyExceptionIfFilePathIsEmpty PASSED
>
> org.apache.kafka.streams.kstream.PrintedTest >
> shouldThrowNullPointerExceptionIfLabelIsNull STARTED
>
> org.apache.kafka.streams.kstream.PrintedTest >
> shouldThrowNullPointerExceptionIfLabelIsNull PASSED
>
> org.apache.kafka.streams.kstream.PrintedTest >
> shouldThrowNullPointerExceptionIfMapperIsNull STARTED
>
> org.apache.kafka.streams.kstream.PrintedTest >
> shouldThrowNullPointerExceptionIfMapperIsNull PASSED
>
> org.apache.kafka.streams.kstream.PrintedTest > shouldPrintWithLabel
> STARTED
>
> org.apache.kafka.streams.kstream.PrintedTest > shouldPrintWithLabel PASSED
>
> org.apache.kafka.streams.kstream.PrintedTest >
> shouldThrowNullPointerExceptionIfFilePathIsNull STARTED
>
> org.apache.kafka.streams.kstream.PrintedTest >
> shouldThrowNullPointerExceptionIfFilePathIsNull PASSED
>
> org.apache.kafka.streams.kstream.PrintedTest >
> shouldCreateProcessorThatPrintsToFile STARTED
>
> org.apache.kafka.streams.kstream.PrintedTest >
> shouldCreateProcessorThatPrintsToFile FAILED
> java.lang.AssertionError:
> Expected: "[processor]: hi, 1\n"
>  but: was ""
> at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> at org.junit.Assert.assertThat(Assert.java:956)
> at org.junit.Assert.assertThat(Assert.java:923)
> at org.apache.kafka.streams.kstream.PrintedTest.
> shouldCreateProcessorThatPrintsToFile(PrintedTest.java:67)
>
> org.apache.kafka.streams.kstream.PrintedTest >
> shouldThrowTopologyExceptionIfFilePathDoesntExist STARTED
>
> org.apache.kafka.streams.kstream.PrintedTest >
> shouldThrowTopologyExceptionIfFilePathDoesntExist PASSED
>
> org.apache.kafka.streams.kstream.WindowsTest >
> retentionTimeMustNotBeNegative STARTED
>
> org.apache.kafka.streams.kstream.WindowsTest >
> retentionTimeMustNotBeNegative PASSED
>
> org.apache.kafka.streams.kstream.WindowsTest >
> numberOfSegmentsMustBeAtLeastTwo STARTED
>
> org.apache.kafka.streams.kstream.WindowsTest >
> numberOfSegmentsMustBeAtLeastTwo PASSED
>
> org.apache.kafka.streams.kstream.WindowsTest >
> shouldSetWindowRetentionTime STARTED
>
> org.apache.kafka.streams.kstream.WindowsTest >
> shouldSetWindowRetentionTime PASSED
>
> org.apache.kafka.streams.kstream.WindowsTest > shouldSetNumberOfSegments
> STARTED
>
> 

Build failed in Jenkins: kafka-trunk-jdk8 #2016

2017-09-13 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-4501; Fix EasyMock and disable PowerMock tests under Java 9

--
[...truncated 4.77 MB...]
org.apache.kafka.streams.kstream.TimeWindowsTest > 
advanceIntervalMustNotBeLargerThanWindowSize PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
retentionTimeMustNoBeSmallerThanWindowSize STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
retentionTimeMustNoBeSmallerThanWindowSize PASSED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldUseWindowSizeAsRentitionTimeIfWindowSizeIsLargerThanDefaultRetentionTime 
STARTED

org.apache.kafka.streams.kstream.TimeWindowsTest > 
shouldUseWindowSizeAsRentitionTimeIfWindowSizeIsLargerThanDefaultRetentionTime 
PASSED

org.apache.kafka.streams.kstream.WindowTest > 
shouldThrowIfEndIsSmallerThanStart STARTED

org.apache.kafka.streams.kstream.WindowTest > 
shouldThrowIfEndIsSmallerThanStart PASSED

org.apache.kafka.streams.kstream.WindowTest > 
shouldNotBeEqualIfDifferentWindowType STARTED

org.apache.kafka.streams.kstream.WindowTest > 
shouldNotBeEqualIfDifferentWindowType PASSED

org.apache.kafka.streams.kstream.WindowTest > shouldBeEqualIfStartAndEndSame 
STARTED

org.apache.kafka.streams.kstream.WindowTest > shouldBeEqualIfStartAndEndSame 
PASSED

org.apache.kafka.streams.kstream.WindowTest > shouldNotBeEqualIfNull STARTED

org.apache.kafka.streams.kstream.WindowTest > shouldNotBeEqualIfNull PASSED

org.apache.kafka.streams.kstream.WindowTest > shouldThrowIfStartIsNegative 
STARTED

org.apache.kafka.streams.kstream.WindowTest > shouldThrowIfStartIsNegative 
PASSED

org.apache.kafka.streams.kstream.WindowTest > 
shouldNotBeEqualIfStartOrEndIsDifferent STARTED

org.apache.kafka.streams.kstream.WindowTest > 
shouldNotBeEqualIfStartOrEndIsDifferent PASSED

org.apache.kafka.streams.kstream.PrintedTest > 
shouldCreateProcessorThatPrintsToStdOut STARTED

org.apache.kafka.streams.kstream.PrintedTest > 
shouldCreateProcessorThatPrintsToStdOut PASSED

org.apache.kafka.streams.kstream.PrintedTest > shouldPrintWithKeyValueMapper 
STARTED

org.apache.kafka.streams.kstream.PrintedTest > shouldPrintWithKeyValueMapper 
PASSED

org.apache.kafka.streams.kstream.PrintedTest > 
shouldThrowTopologyExceptionIfFilePathIsEmpty STARTED

org.apache.kafka.streams.kstream.PrintedTest > 
shouldThrowTopologyExceptionIfFilePathIsEmpty PASSED

org.apache.kafka.streams.kstream.PrintedTest > 
shouldThrowNullPointerExceptionIfLabelIsNull STARTED

org.apache.kafka.streams.kstream.PrintedTest > 
shouldThrowNullPointerExceptionIfLabelIsNull PASSED

org.apache.kafka.streams.kstream.PrintedTest > 
shouldThrowNullPointerExceptionIfMapperIsNull STARTED

org.apache.kafka.streams.kstream.PrintedTest > 
shouldThrowNullPointerExceptionIfMapperIsNull PASSED

org.apache.kafka.streams.kstream.PrintedTest > shouldPrintWithLabel STARTED

org.apache.kafka.streams.kstream.PrintedTest > shouldPrintWithLabel PASSED

org.apache.kafka.streams.kstream.PrintedTest > 
shouldThrowNullPointerExceptionIfFilePathIsNull STARTED

org.apache.kafka.streams.kstream.PrintedTest > 
shouldThrowNullPointerExceptionIfFilePathIsNull PASSED

org.apache.kafka.streams.kstream.PrintedTest > 
shouldCreateProcessorThatPrintsToFile STARTED

org.apache.kafka.streams.kstream.PrintedTest > 
shouldCreateProcessorThatPrintsToFile FAILED
java.lang.AssertionError: 
Expected: "[processor]: hi, 1\n"
 but: was ""
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
at org.junit.Assert.assertThat(Assert.java:956)
at org.junit.Assert.assertThat(Assert.java:923)
at 
org.apache.kafka.streams.kstream.PrintedTest.shouldCreateProcessorThatPrintsToFile(PrintedTest.java:67)

org.apache.kafka.streams.kstream.PrintedTest > 
shouldThrowTopologyExceptionIfFilePathDoesntExist STARTED

org.apache.kafka.streams.kstream.PrintedTest > 
shouldThrowTopologyExceptionIfFilePathDoesntExist PASSED

org.apache.kafka.streams.kstream.WindowsTest > retentionTimeMustNotBeNegative 
STARTED

org.apache.kafka.streams.kstream.WindowsTest > retentionTimeMustNotBeNegative 
PASSED

org.apache.kafka.streams.kstream.WindowsTest > numberOfSegmentsMustBeAtLeastTwo 
STARTED

org.apache.kafka.streams.kstream.WindowsTest > numberOfSegmentsMustBeAtLeastTwo 
PASSED

org.apache.kafka.streams.kstream.WindowsTest > shouldSetWindowRetentionTime 
STARTED

org.apache.kafka.streams.kstream.WindowsTest > shouldSetWindowRetentionTime 
PASSED

org.apache.kafka.streams.kstream.WindowsTest > shouldSetNumberOfSegments STARTED

org.apache.kafka.streams.kstream.WindowsTest > shouldSetNumberOfSegments PASSED

org.apache.kafka.streams.integration.FanoutIntegrationTest > classMethod STARTED

org.apache.kafka.streams.integration.FanoutIntegrationTest > classMethod FAILED
java.lang.RuntimeException: Failed to create a temp dir

Caused by:
java.nio.file.FileSystemException: 

Build failed in Jenkins: kafka-trunk-jdk7 #2756

2017-09-13 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-4501; Fix EasyMock and disable PowerMock tests under Java 9

--
[...truncated 2.51 MB...]

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryStateWithNonZeroSizedCache PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoinQueryable PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin PASSED

org.apache.kafka.streams.integration.FanoutIntegrationTest > 
shouldFanoutTheInput STARTED

org.apache.kafka.streams.integration.FanoutIntegrationTest > 
shouldFanoutTheInput PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithEosEnabled STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithEosEnabled PASSED


Jenkins build is back to normal : kafka-trunk-jdk9 #6

2017-09-13 Thread Apache Jenkins Server
See 



[GitHub] kafka pull request #3851: MINOR: Fix typo

2017-09-13 Thread jeffwidman
GitHub user jeffwidman opened a pull request:

https://github.com/apache/kafka/pull/3851

MINOR: Fix typo



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jeffwidman/kafka patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3851.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3851


commit 3276b2d28a0db1b60d863daa937b1ff99667f90b
Author: Jeff Widman 
Date:   2017-09-13T18:32:15Z

Fix typo




---


Build failed in Jenkins: kafka-trunk-jdk8 #2015

2017-09-13 Thread Apache Jenkins Server
See 


Changes:

[damian.guy] MINOR: Fix JavaDoc for StreamsConfig.PROCESSING_GUARANTEE_CONFIG

--
[...truncated 3.87 MB...]
org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringInt 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testValidators STARTED

org.apache.kafka.common.config.ConfigDefTest > testValidators PASSED

org.apache.kafka.common.config.ConfigDefTest > testValidateCannotParse STARTED

org.apache.kafka.common.config.ConfigDefTest > testValidateCannotParse PASSED

org.apache.kafka.common.config.ConfigDefTest > testValidate STARTED

org.apache.kafka.common.config.ConfigDefTest > testValidate PASSED

org.apache.kafka.common.config.ConfigDefTest > 
testInternalConfigDoesntShowUpInDocs STARTED

org.apache.kafka.common.config.ConfigDefTest > 
testInternalConfigDoesntShowUpInDocs PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefaultString STARTED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefaultString PASSED

org.apache.kafka.common.config.ConfigDefTest > testSslPasswords STARTED

org.apache.kafka.common.config.ConfigDefTest > testSslPasswords PASSED

org.apache.kafka.common.config.ConfigDefTest > testBaseConfigDefDependents 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testBaseConfigDefDependents 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringClass 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringClass 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringShort 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringShort 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefault STARTED

org.apache.kafka.common.config.ConfigDefTest > testInvalidDefault PASSED

org.apache.kafka.common.config.ConfigDefTest > toRst STARTED

org.apache.kafka.common.config.ConfigDefTest > toRst PASSED

org.apache.kafka.common.config.ConfigDefTest > testCanAddInternalConfig STARTED

org.apache.kafka.common.config.ConfigDefTest > testCanAddInternalConfig PASSED

org.apache.kafka.common.config.ConfigDefTest > testMissingRequired STARTED

org.apache.kafka.common.config.ConfigDefTest > testMissingRequired PASSED

org.apache.kafka.common.config.ConfigDefTest > 
testParsingEmptyDefaultValueForStringFieldShouldSucceed STARTED

org.apache.kafka.common.config.ConfigDefTest > 
testParsingEmptyDefaultValueForStringFieldShouldSucceed PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringPassword 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringPassword 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringList 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringList 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringLong 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringLong 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringBoolean 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringBoolean 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testNullDefaultWithValidator 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testNullDefaultWithValidator 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testMissingDependentConfigs 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testMissingDependentConfigs 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringDouble 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringDouble 
PASSED

org.apache.kafka.common.config.ConfigDefTest > toEnrichedRst STARTED

org.apache.kafka.common.config.ConfigDefTest > toEnrichedRst PASSED

org.apache.kafka.common.config.ConfigDefTest > testDefinedTwice STARTED

org.apache.kafka.common.config.ConfigDefTest > testDefinedTwice PASSED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringString 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testConvertValueToStringString 
PASSED

org.apache.kafka.common.config.ConfigDefTest > testNestedClass STARTED

org.apache.kafka.common.config.ConfigDefTest > testNestedClass PASSED

org.apache.kafka.common.config.ConfigDefTest > testBadInputs STARTED

org.apache.kafka.common.config.ConfigDefTest > testBadInputs PASSED

org.apache.kafka.common.config.ConfigDefTest > testValidateMissingConfigKey 
STARTED

org.apache.kafka.common.config.ConfigDefTest > testValidateMissingConfigKey 
PASSED

org.apache.kafka.common.config.AbstractConfigTest > testOriginalsWithPrefix 
STARTED

org.apache.kafka.common.config.AbstractConfigTest > testOriginalsWithPrefix 
PASSED

org.apache.kafka.common.config.AbstractConfigTest > testConfiguredInstances 
STARTED


[GitHub] kafka pull request #3850: MINOR: Un-hide the tutorial buttons on web docs

2017-09-13 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/3850

MINOR: Un-hide the tutorial buttons on web docs



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka KMinor-unhide-button

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3850.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3850


commit 0f0ed8659039dbd2a02e70a182b17e22d739c7d4
Author: Guozhang Wang 
Date:   2017-09-13T17:52:03Z

unhide the tutorial buttons




---


[jira] [Created] (KAFKA-5887) Enable findBugs (or equivalent) when building with Java 9

2017-09-13 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-5887:
--

 Summary: Enable findBugs (or equivalent) when building with Java 9
 Key: KAFKA-5887
 URL: https://issues.apache.org/jira/browse/KAFKA-5887
 Project: Kafka
  Issue Type: Task
Reporter: Ismael Juma
 Fix For: 2.0.0


findBugs doesn't support Java 9 and it seems to be abandonware at this point:

https://github.com/findbugsproject/findbugs/issues/105
https://github.com/gradle/gradle/issues/720

It has been forked, but the fork requires Java 8:

https://github.com/spotbugs/spotbugs
https://github.com/spotbugs/spotbugs/blob/master/docs/migration.rst#findbugs-gradle-plugin

We should migrate once we move to Java 8 if spotbugs is still being actively 
developed and findBugs continues to be dead.

Additional tasks:

1. Remove the code that disables the Gradle plugin for findBugs (or spotbugs) 
when building with Java 9.

2. Enable the findBugs plugin in Jenkins for the relevant builds.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3849: Implement KIP-91 delivery.timeout.ms

2017-09-13 Thread sutambe
GitHub user sutambe opened a pull request:

https://github.com/apache/kafka/pull/3849

Implement KIP-91 delivery.timeout.ms

First shot at implementing KIP-91 delivery.timeout.ms. Summary:

1. Added the delivery.timeout.ms config. Default: 120,000 ms.
2. Changed retries to MAX_INT.
3. Batches may expire whether they are in-flight or not, so muted is no
longer used in RecordAccumulator.expiredBatches.
4. In some rare situations batch.done may be called twice. Attempted
transitions from failed to succeeded are logged. Successful to successful is an
error (an exception, as before). Other transitions (failed-->aborted,
aborted-->failed) are ignored.
5. The old expiry test in RecordAccumulatorTest is removed and replaced with
three new tests: testExpiredBatchSingle, testExpiredBatchesSize, and
testExpiredBatchesRetry. All of them verify that expiry is independent of muted.
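
As a usage sketch of the proposed configuration (config names and defaults
per the summary above; exact behavior may change before merge):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;

public class Kip91ConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringSerializer");
        // Proposed upper bound on the total time to report success or
        // failure after send() returns:
        props.put("delivery.timeout.ms", "120000");
        // With a delivery timeout bounding the wait, retries can be unbounded:
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.close();
    }
}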

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sutambe/kafka kip91

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3849.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3849


commit 9d9ff99e1e41f92e75ece0da81ad75948bd33132
Author: Sumant Tambe 
Date:   2017-09-13T16:56:03Z

implement delivery.timeout.ms




---


Build failed in Jenkins: kafka-trunk-jdk7 #2755

2017-09-13 Thread Apache Jenkins Server
See 


Changes:

[damian.guy] MINOR: Fix JavaDoc for StreamsConfig.PROCESSING_GUARANTEE_CONFIG

--
[...truncated 926.13 KB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldNotAllowDivergentLogs STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldNotAllowDivergentLogs PASSED

kafka.server.MultipleListenersWithAdditionalJaasContextTest > 
testProduceConsume STARTED

kafka.server.MultipleListenersWithAdditionalJaasContextTest > 
testProduceConsume PASSED

kafka.server.OffsetCommitTest > testUpdateOffsets STARTED

kafka.server.OffsetCommitTest > testUpdateOffsets 

[jira] [Created] (KAFKA-5886) Implement KIP-91

2017-09-13 Thread Sumant Tambe (JIRA)
Sumant Tambe created KAFKA-5886:
---

 Summary: Implement KIP-91
 Key: KAFKA-5886
 URL: https://issues.apache.org/jira/browse/KAFKA-5886
 Project: Kafka
  Issue Type: Improvement
  Components: producer 
Reporter: Sumant Tambe
Assignee: Sumant Tambe
 Fix For: 1.0.0


Implement 
[KIP-91|https://cwiki.apache.org/confluence/display/KAFKA/KIP-91+Provide+Intuitive+User+Timeouts+in+The+Producer]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Build failed in Jenkins: kafka-trunk-jdk9 #5

2017-09-13 Thread Apache Jenkins Server
See 

--
Started by user ijuma
[EnvInject] - Loading node environment variables.
Building remotely on H21 (couchdbtest ubuntu xenial) in workspace 

Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/kafka-4501-support-java-9^{commit} # 
 > timeout=10
 > git rev-parse refs/remotes/origin/origin/kafka-4501-support-java-9^{commit} 
 > # timeout=10
 > git rev-parse origin/kafka-4501-support-java-9^{commit} # timeout=10
ERROR: Couldn't find any revision to build. Verify the repository and branch 
configuration for this job.
Retrying after 10 seconds
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/kafka-4501-support-java-9^{commit} # 
 > timeout=10
 > git rev-parse refs/remotes/origin/origin/kafka-4501-support-java-9^{commit} 
 > # timeout=10
 > git rev-parse origin/kafka-4501-support-java-9^{commit} # timeout=10
ERROR: Couldn't find any revision to build. Verify the repository and branch 
configuration for this job.
Retrying after 10 seconds
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/kafka-4501-support-java-9^{commit} # 
 > timeout=10
 > git rev-parse refs/remotes/origin/origin/kafka-4501-support-java-9^{commit} 
 > # timeout=10
 > git rev-parse origin/kafka-4501-support-java-9^{commit} # timeout=10
ERROR: Couldn't find any revision to build. Verify the repository and branch 
configuration for this job.
Recording test results
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?



[GitHub] kafka pull request #3845: KAFKA-4501: Fix EasyMock and disable PowerMock tes...

2017-09-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3845


---


[GitHub] kafka pull request #3843: MINOR: Fix JavaDoc for StreamsConfig.PROCESSING_GU...

2017-09-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3843


---


[GitHub] kafka pull request #3848: KAFKA-5692: Change PreferredReplicaLeaderElectionC...

2017-09-13 Thread tombentley
GitHub user tombentley opened a pull request:

https://github.com/apache/kafka/pull/3848

KAFKA-5692: Change PreferredReplicaLeaderElectionCommand to use Admin…

…Client

See also KIP-183.

The contribution is my original work and I license the work to the project 
under the project's open source license.

This implements the following algorithm:

1. AdminClient sends ElectPreferredLeadersRequest.
2. KafkaApis receives ElectPreferredLeadersRequest and delegates to
   ReplicaManager.electPreferredLeaders()
3. ReplicaManager delegates to KafkaController.electPreferredLeaders()
4. KafkaController adds a PreferredReplicaLeaderElection to the 
EventManager,
5. ReplicaManager.electPreferredLeaders()'s callback uses the
   delayedElectPreferredReplicasPurgatory to wait for the results of the
   election to appear in the metadata cache. If there are no results
   because of errors, or because the preferred leaders are already leading
   the partitions then a response is returned immediately.

In the EventManager work thread the preferred leader is elected as follows:

1. The EventManager runs PreferredReplicaLeaderElection.process()
2. process() calls KafkaController.onPreferredReplicaElectionWithResults()
3. KafkaController.onPreferredReplicaElectionWithResults()
   calls the PartitionStateMachine.handleStateChangesWithResults() to
   perform the election (asynchronously the PSM will send 
LeaderAndIsrRequest
   to the new and old leaders and UpdateMetadataRequest to all brokers)
   then invokes the callback.

Note: the change in parameter type for CollectionUtils.groupDataByTopic().
This makes sense because the AdminClient APIs use Collection consistently,
rather than List or Set. If binary compatibility is a consideration the old
version should be kept, delegating to the new version.

I had to add PartitionStateMachine.handleStateChangesWithResults()
in order to be able to process a set of state changes in the
PartitionStateMachine *and get back individual results*.
At the same time I noticed that all callers of existing handleStateChange()
were destructuring a TopicAndPartition that they already had in order
to call handleStateChange(), and that handleStateChange() immediately
instantiated a new TopicAndPartition. Since TopicAndPartition is immutable
this is pointless, so I refactored it. handleStateChange() also now returns
any exception it caught, which is necessary for 
handleStateChangesWithResults()
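
For context, a rough sketch of how the new call might be used from the
AdminClient once this lands (names follow KIP-183 and are illustrative,
not final):

    // Hypothetical usage, assuming the method shape proposed in KIP-183:
    AdminClient admin = AdminClient.create(props);
    ElectPreferredLeadersResult result = admin.electPreferredLeaders(
        Collections.singleton(new TopicPartition("my-topic", 0)));
    result.all().get(); // completes once the election appears in the metadata cache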

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tombentley/kafka KAFKA-5692-elect-preferred

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3848.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3848


commit 6b9bf178049e1eedfb5f07771cc3c595c02484d9
Author: Tom Bentley 
Date:   2017-09-06T14:39:24Z

KAFKA-5692: Change PreferredReplicaLeaderElectionCommand to use AdminClient

See also KIP-183.

This implements the following algorithm:

1. AdminClient sends ElectPreferredLeadersRequest.
2. KafkaApis receives ElectPreferredLeadersRequest and delegates to
   ReplicaManager.electPreferredLeaders()
3. ReplicaManager delegates to KafkaController.electPreferredLeaders()
4. KafkaController adds a PreferredReplicaLeaderElection to the 
EventManager,
5. ReplicaManager.electPreferredLeaders()'s callback uses the
   delayedElectPreferredReplicasPurgatory to wait for the results of the
   election to appear in the metadata cache. If there are no results
   because of errors, or because the preferred leaders are already leading
   the partitions then a response is returned immediately.

In the EventManager work thread the preferred leader is elected as follows:

1. The EventManager runs PreferredReplicaLeaderElection.process()
2. process() calls KafkaController.onPreferredReplicaElectionWithResults()
3. KafkaController.onPreferredReplicaElectionWithResults()
   calls the PartitionStateMachine.handleStateChangesWithResults() to
   perform the election (asynchronously the PSM will send 
LeaderAndIsrRequest
   to the new and old leaders and UpdateMetadataRequest to all brokers)
   then invokes the callback.

Note: the change in parameter type for CollectionUtils.groupDataByTopic().
This makes sense because the AdminClient APIs use Collection consistently,
rather than List or Set. If binary compatibility is a consideration the old
version should be kept, delegating to the new version.

I had to add PartitionStateMachine.handleStateChangesWithResults()
in order to be able to 

Re: [VOTE] KIP-189 - Improve principal builder interface and add support for SASL

2017-09-13 Thread Jason Gustafson
Hi All,

I wanted to mention one minor change that came out of the code review.
We've added an additional method to AuthenticationContext to expose the
address of the authenticated client. This can be useful, for example, to
enforce host-based quotas. I've updated the KIP.
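
For reference, the shape of the addition is roughly the following (treat the
exact signature as illustrative):

    public interface AuthenticationContext {
        // New: address of the authenticated client, e.g. for host-based quotas
        InetAddress clientAddress();
        // ... existing accessors, such as the security protocol name, unchanged
    }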

Thanks,
Jason

On Fri, Sep 8, 2017 at 1:12 AM, Edoardo Comar  wrote:

> I am late to the party and my +1 vote is useless - but I eventually took
> the time to go through it and it's a great improvement.
> It'd enable us to carry along with the Principal a couple of additional
> attributes without the hacks we're doing today :-)
>
> cheers
> --
>
> Edoardo Comar
>
> IBM Message Hub
>
> IBM UK Ltd, Hursley Park, SO21 2JN
>
>
>
> From:   Jason Gustafson 
> To: dev@kafka.apache.org
> Date:   07/09/2017 17:23
> Subject:Re: [VOTE] KIP-189 - Improve principal builder interface
> and add support for SASL
>
>
>
> I am closing the vote. Here are the totals:
>
> Binding: Ismael, Rajini, Jun, (Me)
> Non-binding: Mayuresh, Manikumar, Mickael
>
> Thanks all for the reviews!
>
>
>
> On Wed, Sep 6, 2017 at 2:22 PM, Jason Gustafson 
> wrote:
>
> > Hi All,
> >
> > When implementing this, I found that the SecurityProtocol class has some
> > internal details which we might not want to expose to users (in
> particular
> > to enable testing). Since it's still useful to know the security
> protocol
> > in use in some cases, and since the security protocol names are already
> > exposed in configuration (and hence cannot easily change), I have
> modified
> > the method in AuthenticationContext to return the name of the security
> > protocol instead. Let me know if there are any concerns with this
> change.
> > Otherwise, I will close out the vote.
> >
> > Thanks,
> > Jason
> >
> > On Tue, Sep 5, 2017 at 11:10 AM, Ismael Juma  wrote:
> >
> >> Thanks for the KIP, +1 (binding).
> >>
> >> Ismael
> >>
> >> On Wed, Aug 30, 2017 at 4:51 PM, Jason Gustafson 
> >> wrote:
> >>
> >> > I'd like to open the vote for KIP-189:
> >> >
> >> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >> > 189%3A+Improve+principal+builder+interface+and+add+support+for+SASL.
> >> > Thanks to everyone who helped review.
> >> >
> >> > -Jason
> >> >
> >>
> >
> >
>
>
>
> Unless stated otherwise above:
> IBM United Kingdom Limited - Registered in England and Wales with number
> 741598.
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
>


Re: [DISCUSS]: KIP-159: Introducing Rich functions to Streams

2017-09-13 Thread Jeyhun Karimov
Hi Damian,

Thanks for your feedback. Actually, this (what you propose) was the first
idea of KIP-149. Then we decided to divide it into two KIPs. I also
expressed my opinion that keeping the two interfaces (Rich and withKey)
separate would add more overloads. The email discussion concluded that this
would not be a problem.

Our initial idea was similar to :

public abstract class RichValueMapper<K, V, VR> implements
ValueMapperWithKey<K, V, VR>, RichFunction {
..
}


So, inside the called method we check whether the object is a RichXXX or an
XXXWithKey and continue accordingly.
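
In code, the dispatch would look roughly like this (the apply signatures here
are placeholders, not the final API):

    final VR newValue;
    if (mapper instanceof RichValueMapper) {
        // the rich variant additionally receives the record context
        newValue = ((RichValueMapper<K, V, VR>) mapper).apply(key, value, recordContext);
    } else {
        newValue = ((ValueMapperWithKey<K, V, VR>) mapper).apply(key, value);
    }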

If this is ok with the community, I would like to revert the current design
to this again.

Cheers,
Jeyhun

On Wed, Sep 13, 2017 at 3:02 PM Damian Guy  wrote:

> Hi Jeyhun,
>
> Thanks for sending out the update. I guess i was thinking more along the
> lines of option 2 where we collapse the Rich and ValueWithKey etc
> interfaces into 1 interface that has all of the arguments. I think we then
> only need to add one additional overload for each operator?
>
> Thanks,
> Damian
>
> On Wed, 13 Sep 2017 at 10:59 Jeyhun Karimov  wrote:
>
> > Dear all,
> >
> > I would like to resume the discussion on KIP-159. I (and Guozhang) think
> > that releasing KIP-149 and KIP-159 in the same release would make sense
> to
> > avoid a release with "partial" public APIs. There is a KIP [1] proposed
> by
> > Guozhang (and approved by me) to unify both KIPs.
> > Please feel free to comment on this.
> >
> > [1]
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=73637757
> >
> > Cheers,
> > Jeyhun
> >
> > On Fri, Jul 21, 2017 at 2:00 AM Jeyhun Karimov 
> > wrote:
> >
> > > Hi Matthias, Damian, all,
> > >
> > > Thanks for your comments and sorry for super-late update.
> > >
> > > Sure, the DSL refactoring is not blocking for this KIP.
> > > I made some changes to KIP document based on my prototype.
> > >
> > > Please feel free to comment.
> > >
> > > Cheers,
> > > Jeyhun
> > >
> > > On Fri, Jul 7, 2017 at 9:35 PM Matthias J. Sax 
> > > wrote:
> > >
> > >> I would not block this KIP with regard to DSL refactoring. IMHO, we
> can
> > >> just finish this one and the DSL refactoring will help later on to
> > >> reduce the number of overloads.
> > >>
> > >> -Matthias
> > >>
> > >> On 7/7/17 5:28 AM, Jeyhun Karimov wrote:
> > >> > I am following the related thread in the mailing list and looking
> > >> forward
> > >> > to a one-shot solution for the overloads issue.
> > >> >
> > >> > Cheers,
> > >> > Jeyhun
> > >> >
> > >> > On Fri, Jul 7, 2017 at 10:32 AM Damian Guy 
> > >> wrote:
> > >> >
> > >> >> Hi Jeyhun,
> > >> >>
> > >> >> About overrides, what other alternatives do we have? For
> > >> >>> backwards-compatibility we have to add extra methods to the
> existing
> > >> >> ones.
> > >> >>>
> > >> >>>
> > >> >> It wasn't clear to me in the KIP if these are new methods or
> > replacing
> > >> >> existing ones.
> > >> >> Also, we are currently discussing options for replacing the
> > overrides.
> > >> >>
> > >> >> Thanks,
> > >> >> Damian
> > >> >>
> > >> >>
> > >> >>> About ProcessorContext vs RecordContext, you are right. I think I
> > >> need to
> > >> >>> implement a prototype to understand the full picture as some parts
> > of
> > >> the
> > >> >>> KIP might not be as straightforward as I thought.
> > >> >>>
> > >> >>>
> > >> >>> Cheers,
> > >> >>> Jeyhun
> > >> >>>
> > >> >>> On Wed, Jul 5, 2017 at 10:40 AM Damian Guy 
> > >> wrote:
> > >> >>>
> > >>  HI Jeyhun,
> > >> 
> > >>  Is the intention that these methods are new overloads on the
> > KStream,
> > >>  KTable, etc?
> > >> 
> > >>  It is worth noting that a ProcessorContext is not a
> RecordContext.
> > A
> > >>  RecordContext, as it stands, only exists during the processing
> of a
> > >> >>> single
> > >>  record. Whereas the ProcessorContext exists for the lifetime of
> the
> > >>  Processor. So it doesn't make sense to cast a ProcessorContext
> to
> > a
> > >>  RecordContext.
> > >>  You mentioned above passing the InternalProcessorContext to the
> > >> init()
> > >>  calls. It is internal for a reason and i think it should remain
> > that
> > >> >> way.
> > >>  It might be better to move the recordContext() method from
> > >>  InternalProcessorContext to ProcessorContext.
> > >> 
> > >>  In the KIP you have an example showing:
> > >>  richMapper.init((RecordContext) processorContext);
> > >>  But the interface is:
> > >>  public interface RichValueMapper<V, VR> {
> > >>  VR apply(final V value, final RecordContext recordContext);
> > >>  }
> > >>  i.e., there is no init(...), besides as above this wouldn't make
> > >> sense.
> > >> 
> > >>  Thanks,
> > >>  Damian
> > >> 
> > >>  On Tue, 4 Jul 2017 at 23:30 Jeyhun Karimov 

[GitHub] kafka pull request #3842: KAFKA-5301 Improve exception handling on consumer ...

2017-09-13 Thread ConcurrencyPractitioner
Github user ConcurrencyPractitioner closed the pull request at:

https://github.com/apache/kafka/pull/3842


---


Kafka Streaming with filters

2017-09-13 Thread Subramanian.Kumarappan
Dear Team

We are doing a PoC on the filtering option in Kafka Streams with the 
Spring Cloud binder, specifically with @StreamListener and condition. Are there 
any Java sample applications demonstrating the filtering concepts? If there 
are, please let me know.
The use case is:
Messages are published from upstream to multiple topics. Each topic has 
listeners, but each listener only processes the specific messages it is 
intended to operate on. The messages are identified using KStream filters 
(Predicates with business-specific conditions such as account_no, 
message_type, etc.).
The filter option in Kafka Streams retains the messages for which the 
predicates are true and drops those for which they are false.
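
For reference, the predicate-based filtering described above looks roughly 
like this in the plain Kafka Streams DSL (topic names and the matching 
condition are made up for illustration):

    KStreamBuilder builder = new KStreamBuilder();
    KStream<String, String> messages = builder.stream("upstream-topic");
    // Keep only the records this listener is meant to process; drop the rest.
    KStream<String, String> mine =
        messages.filter((key, value) -> value.contains("ACCOUNT"));
    mine.to("filtered-topic");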

Thanks & Regards
Subbu

This e-mail and any files transmitted with it are for the sole use of the 
intended recipient(s) and may contain confidential and privileged information. 
If you are not the intended recipient(s), please reply to the sender and 
destroy all copies of the original message. Any unauthorized review, use, 
disclosure, dissemination, forwarding, printing or copying of this email, 
and/or any action taken in reliance on the contents of this e-mail is strictly 
prohibited and may be unlawful. Where permitted by applicable law, this e-mail 
and other e-mail communications sent to and from Cognizant e-mail addresses may 
be monitored.


Re: [VOTE] KIP-195: AdminClient.createPartitions()

2017-09-13 Thread Tom Bentley
This KIP currently has 2 binding +1 votes. Today is the deadline for KIPs
to be added to Kafka 1.0.0. So if anyone else would like to see this
feature in 1.0.0 they will need to vote by the end of the day.

If there are insufficient votes for the KIP to be adopted today then I will
keep the vote open for a while longer, but it won't be in 1.0.0 even if the
vote is eventually successful.
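
For anyone catching up, the API under vote boils down to roughly the
following (names per the KIP; illustrative until the code is merged):

    Map<String, NewPartitions> increase =
        Collections.singletonMap("my-topic", NewPartitions.increaseTo(6));
    adminClient.createPartitions(increase).all().get();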

Cheers,

Tom

On 13 September 2017 at 11:43, Ismael Juma  wrote:

> Tom,
>
> Thanks for the KIP, +1 (binding) from me.
>
> Ismael
>
> On Fri, Sep 8, 2017 at 5:42 PM, Tom Bentley  wrote:
>
> > I would like to start the vote on KIP-195 which adds an AdminClient API
> for
> > increasing the number of partitions of a topic. The details are here:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-195%3A+AdminClient
> .
> > createPartitions
> >
> > Cheers,
> >
> > Tom
> >
>


Jenkins build is back to normal : kafka-trunk-jdk8 #2014

2017-09-13 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-5885) NPE in ZKClient

2017-09-13 Thread Dustin Cote (JIRA)
Dustin Cote created KAFKA-5885:
--

 Summary: NPE in ZKClient
 Key: KAFKA-5885
 URL: https://issues.apache.org/jira/browse/KAFKA-5885
 Project: Kafka
  Issue Type: Bug
  Components: zkclient
Affects Versions: 0.10.2.1
Reporter: Dustin Cote


A null znode for a topic (how this happened isn't totally clear, but it is not 
the focus of this issue) can currently cause controller leader election to 
fail. Looking at the broker logging, you can see there is a 
NullPointerException emanating from the ZKClient:
{code}
[2017-09-11 00:00:21,441] ERROR Error while electing or becoming leader on 
broker 1010674 (kafka.server.ZookeeperLeaderElector)
kafka.common.KafkaException: Can't parse json string: null
at kafka.utils.Json$.liftedTree1$1(Json.scala:40)
at kafka.utils.Json$.parseFull(Json.scala:36)
at 
kafka.utils.ZkUtils$$anonfun$getReplicaAssignmentForTopics$1.apply(ZkUtils.scala:704)
at 
kafka.utils.ZkUtils$$anonfun$getReplicaAssignmentForTopics$1.apply(ZkUtils.scala:700)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at kafka.utils.ZkUtils.getReplicaAssignmentForTopics(ZkUtils.scala:700)
at 
kafka.controller.KafkaController.initializeControllerContext(KafkaController.scala:742)
at 
kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:333)
at 
kafka.controller.KafkaController$$anonfun$1.apply$mcV$sp(KafkaController.scala:160)
at 
kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:85)
at 
kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply$mcZ$sp(ZookeeperLeaderElector.scala:154)
at 
kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:154)
at 
kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:154)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:213)
at 
kafka.server.ZookeeperLeaderElector$LeaderChangeListener.handleDataDeleted(ZookeeperLeaderElector.scala:153)
at org.I0Itec.zkclient.ZkClient$9.run(ZkClient.java:825)
at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:72)
Caused by: java.lang.NullPointerException
{code}

Regardless of how a null topic znode ended up in ZooKeeper, we can probably 
handle this better, at least by printing the path up to the problematic znode 
in the log. The way this particular problem was resolved was by using the 
``kafka-topics`` command and seeing it persistently fail trying to read a 
particular topic with this same message. Then deleting the null znode allowed 
the leader election to complete.
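
A minimal sketch of the kind of guard suggested (illustrative only; the real 
fix would live in ZkUtils and should surface the offending path):
{code}
String data = zkClient.readData(path, true); // returnNullIfPathNotExists
if (data == null) {
    throw new KafkaException("Null or missing data at znode " + path +
        "; delete or repair the znode before retrying the election");
}
{code}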



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5884) Enable PowerMock tests when running on Java 9

2017-09-13 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-5884:
--

 Summary: Enable PowerMock tests when running on Java 9
 Key: KAFKA-5884
 URL: https://issues.apache.org/jira/browse/KAFKA-5884
 Project: Kafka
  Issue Type: Task
Reporter: Ismael Juma
 Fix For: 1.1.0


PowerMock 2.0.0 will support Java 9. Once that is released, we should upgrade 
to it and remove the following code from build.gradle:

{code}
String[] testsToExclude = []
  if (JavaVersion.current().isJava9Compatible()) {
testsToExclude = [
  "**/KafkaProducerTest.*", "**/BufferPoolTest.*",
  "**/SourceTaskOffsetCommitterTest.*", "**/WorkerSinkTaskTest.*", 
"**/WorkerSinkTaskThreadedTest.*",
  "**/WorkerSourceTaskTest.*", "**/WorkerTest.*", 
"**/DistributedHerderTest.*", "**/WorkerCoordinatorTest.*",
  "**/RestServerTest.*", "**/ConnectorPluginsResourceTest.*", 
"**/ConnectorsResourceTest.*",
  "**/StandaloneHerderTest.*", "**/FileOffsetBakingStoreTest.*", 
"**/KafkaConfigBackingStoreTest.*",
  "**/KafkaOffsetBackingStoreTest.*", "**/OffsetStorageWriterTest.*", 
"**/KafkaBasedLogTest.*"
]
  }
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5076) Remove usage of java.xml.bind.* classes hidden by default in JDK9

2017-09-13 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-5076.

Resolution: Fixed

Done as part of https://github.com/apache/kafka/pull/3647.

> Remove usage of java.xml.bind.* classes hidden by default in JDK9
> -
>
> Key: KAFKA-5076
> URL: https://issues.apache.org/jira/browse/KAFKA-5076
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Xavier Léauté
>Assignee: Ismael Juma
>Priority: Minor
> Fix For: 1.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5883) Run tests on Java 9 with --illegal-access=deny

2017-09-13 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-5883:
--

 Summary: Run tests on Java 9 with --illegal-access=deny
 Key: KAFKA-5883
 URL: https://issues.apache.org/jira/browse/KAFKA-5883
 Project: Kafka
  Issue Type: Task
Reporter: Ismael Juma
 Fix For: 1.1.0


The default was changed from --illegal-access=deny to --illegal-access=warn late 
in the Java 9 cycle. By using the former, we will ensure that our code is not 
relying on functionality that will be removed in a future Java version.
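
For illustration, the stricter flag is simply passed to the test JVMs, e.g. 
(the exact Gradle wiring is the work of this task):
{code}
java --illegal-access=deny -cp <test-classpath> org.junit.runner.JUnitCore SomeTest
{code}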



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS]: KIP-159: Introducing Rich functions to Streams

2017-09-13 Thread Damian Guy
Hi Jeyhun,

Thanks for sending out the update. I guess i was thinking more along the
lines of option 2 where we collapse the Rich and ValueWithKey etc
interfaces into 1 interface that has all of the arguments. I think we then
only need to add one additional overload for each operator?

Thanks,
Damian

On Wed, 13 Sep 2017 at 10:59 Jeyhun Karimov  wrote:

> Dear all,
>
> I would like to resume the discussion on KIP-159. I (and Guozhang) think
> that releasing KIP-149 and KIP-159 in the same release would make sense to
> avoid a release with "partial" public APIs. There is a KIP [1] proposed by
> Guozhang (and approved by me) to unify both KIPs.
> Please feel free to comment on this.
>
> [1]
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=73637757
>
> Cheers,
> Jeyhun
>
> On Fri, Jul 21, 2017 at 2:00 AM Jeyhun Karimov 
> wrote:
>
> > Hi Matthias, Damian, all,
> >
> > Thanks for your comments and sorry for super-late update.
> >
> > Sure, the DSL refactoring is not blocking for this KIP.
> > I made some changes to KIP document based on my prototype.
> >
> > Please feel free to comment.
> >
> > Cheers,
> > Jeyhun
> >
> > On Fri, Jul 7, 2017 at 9:35 PM Matthias J. Sax 
> > wrote:
> >
> >> I would not block this KIP with regard to DSL refactoring. IMHO, we can
> >> just finish this one and the DSL refactoring will help later on to
> >> reduce the number of overloads.
> >>
> >> -Matthias
> >>
> >> On 7/7/17 5:28 AM, Jeyhun Karimov wrote:
> >> > I am following the related thread in the mailing list and looking
> >> forward
> >> > to a one-shot solution for the overloads issue.
> >> >
> >> > Cheers,
> >> > Jeyhun
> >> >
> >> > On Fri, Jul 7, 2017 at 10:32 AM Damian Guy 
> >> wrote:
> >> >
> >> >> Hi Jeyhun,
> >> >>
> >> >> About overrides, what other alternatives do we have? For
> >> >>> backwards-compatibility we have to add extra methods to the existing
> >> >> ones.
> >> >>>
> >> >>>
> >> >> It wasn't clear to me in the KIP if these are new methods or
> replacing
> >> >> existing ones.
> >> >> Also, we are currently discussing options for replacing the
> overrides.
> >> >>
> >> >> Thanks,
> >> >> Damian
> >> >>
> >> >>
> >> >>> About ProcessorContext vs RecordContext, you are right. I think I
> >> need to
> >> >>> implement a prototype to understand the full picture as some parts
> of
> >> the
> >> >>> KIP might not be as straightforward as I thought.
> >> >>>
> >> >>>
> >> >>> Cheers,
> >> >>> Jeyhun
> >> >>>
> >> >>> On Wed, Jul 5, 2017 at 10:40 AM Damian Guy 
> >> wrote:
> >> >>>
> >>  HI Jeyhun,
> >> 
> >>  Is the intention that these methods are new overloads on the
> KStream,
> >>  KTable, etc?
> >> 
> >>  It is worth noting that a ProcessorContext is not a RecordContext.
> A
> >>  RecordContext, as it stands, only exists during the processing of a
> >> >>> single
> >>  record. Whereas the ProcessorContext exists for the lifetime of the
> >>  Processor. So it doesn't make sense to cast a ProcessorContext to
> a
> >>  RecordContext.
> >>  You mentioned above passing the InternalProcessorContext to the
> >> init()
> >>  calls. It is internal for a reason and i think it should remain
> that
> >> >> way.
> >>  It might be better to move the recordContext() method from
> >>  InternalProcessorContext to ProcessorContext.
> >> 
> >>  In the KIP you have an example showing:
> >>  richMapper.init((RecordContext) processorContext);
> >>  But the interface is:
> >>  public interface RichValueMapper<V, VR> {
> >>  VR apply(final V value, final RecordContext recordContext);
> >>  }
> >>  i.e., there is no init(...), besides as above this wouldn't make
> >> sense.
> >> 
> >>  Thanks,
> >>  Damian
> >> 
> >>  On Tue, 4 Jul 2017 at 23:30 Jeyhun Karimov 
> >> >> wrote:
> >> 
> >> > Hi Matthias,
> >> >
> >> > Actually my intend was to provide to RichInitializer and later on
> we
> >>  could
> >> > provide the context of the record as you also mentioned.
> >> > I remove that not to confuse the users.
> >> > Regarding the RecordContext and ProcessorContext interfaces, I
> just
> >> > realized the InternalProcessorContext class. Can't we pass this
> as a
> >> > parameter to init() method of processors? Then we would be able to
> >> >> get
> >> > RecordContext easily with just a method call.
> >> >
> >> >
> >> > Cheers,
> >> > Jeyhun
> >> >
> >> > On Thu, Jun 29, 2017 at 10:14 PM Matthias J. Sax <
> >> >>> matth...@confluent.io>
> >> > wrote:
> >> >
> >> >> One more thing:
> >> >>
> >> >> I don't think `RichInitializer` does make sense. As we don't have
> >> >> any
> >> >> input record, there is also no context. We could of course
> provide
> >> >>> the
> >> >> context 

Build failed in Jenkins: kafka-trunk-jdk7 #2754

2017-09-13 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Use sha512 instead of sha2 suffix in release artifacts

--
[...truncated 2.51 MB...]

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryStateWithNonZeroSizedCache PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerOuterJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerInnerJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftOuterJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterLeftJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoin STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoin PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterInnerJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftInnerJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldInnerLeftJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldLeftLeftJoinQueryable PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoinQueryable STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > 
shouldOuterOuterJoinQueryable PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin PASSED

org.apache.kafka.streams.integration.FanoutIntegrationTest > 
shouldFanoutTheInput STARTED

org.apache.kafka.streams.integration.FanoutIntegrationTest > 
shouldFanoutTheInput PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithEosEnabled STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithEosEnabled PASSED


[GitHub] kafka pull request #3847: MINOR: update tutorial doc to match ak-site

2017-09-13 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3847

MINOR: update tutorial doc to match ak-site



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka minor-doc-update

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3847.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3847


commit 94820fc5508893d7ac0133422ad01967a96ab738
Author: Damian Guy 
Date:   2017-09-13T12:54:43Z

update tutorial doc to match ak-site




---


[jira] [Resolved] (KAFKA-3124) Update protocol wiki page to reflect latest request/response formats

2017-09-13 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3124.
--
Resolution: Fixed

KAFKA-3361 added support for generating protocol documentation page to Kafka 
docs.

> Update protocol wiki page to reflect latest request/response formats
> 
>
> Key: KAFKA-3124
> URL: https://issues.apache.org/jira/browse/KAFKA-3124
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Joel Koshy
>  Labels: newbie
>
> The protocol wiki 
> (https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol)
>  is slightly out of date. It does not have some of the newer request/response 
> formats.
> We should actually figure out a way to _source_ the protocol definitions from 
> the last official release version into that wiki.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3846: MINOR: update release script for streams quickstar...

2017-09-13 Thread dguy
GitHub user dguy opened a pull request:

https://github.com/apache/kafka/pull/3846

MINOR: update release script for streams quickstart



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dguy/kafka minor-release-script

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3846.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3846


commit 3ee5c9c6ca3ddbc0b448afee8da0a79807cc69c0
Author: Damian Guy 
Date:   2017-09-13T12:40:47Z

update release script for streams quickstart




---


[jira] [Resolved] (KAFKA-1354) Failed to load class "org.slf4j.impl.StaticLoggerBinder"

2017-09-13 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1354.
--
Resolution: Fixed

This was fixed in 0.8.2  by adding slf4j-log4j12 binding to Kafka libs.

> Failed to load class "org.slf4j.impl.StaticLoggerBinder"
> 
>
> Key: KAFKA-1354
> URL: https://issues.apache.org/jira/browse/KAFKA-1354
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.1
> Environment: RHEL
>Reporter: RakeshAcharya
>  Labels: newbie, patch, usability
>   Original Estimate: 672h
>  Remaining Estimate: 672h
>
> Getting below errors during Kafka startup
> SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
> SLF4J: Defaulting to no-operation (NOP) logger implementation
> SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further 
> details.
> [2014-03-31 18:55:36,488] INFO Will not load MX4J, mx4j-tools.jar is not in 
> the classpath (kafka.utils.Mx4jLoader$)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3845: KAFKA-4501: Fix EasyMock and disable PowerMock tes...

2017-09-13 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/3845

KAFKA-4501: Fix EasyMock and disable PowerMock tests under Java 9

- EasyMock 3.5 supports Java 9.
- Removed unnecessary PowerMock dependency from 3 tests.
- Disable remaining PowerMock tests when running with Java 9
until https://github.com/powermock/powermock/issues/783 is
done.
- Once we merge this PR, we can run tests on PRs with Java 9.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-4501-easymock-powermock-java-9

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3845.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3845


commit ab60da36358a792e1079aed25859828602c1a619
Author: Ismael Juma 
Date:   2017-08-21T13:51:00Z

Use EasyMock instead of PowerMock in 3 connect tests

commit 6e0f8c7d9c147d0340c711b6edd8cbadac344a31
Author: Ismael Juma 
Date:   2017-08-21T13:51:37Z

Exclude PowerMock tests if running with Java 9

commit f076943217f522d6a5a516a559939dc1e7711b40
Author: Ismael Juma 
Date:   2017-09-13T12:02:35Z

Upgrade easymock to 3.5




---


[jira] [Resolved] (KAFKA-5177) Automatic creation of topic prompts warnings into console

2017-09-13 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5177.
--
Resolution: Won't Fix

As mentioned above, these are transient warning messages. Please reopen if you 
think otherwise.

> Automatic creation of topic prompts warnings into console
> -
>
> Key: KAFKA-5177
> URL: https://issues.apache.org/jira/browse/KAFKA-5177
> Project: Kafka
>  Issue Type: Wish
>Affects Versions: 0.10.1.0
> Environment: Mac OSX 16.5.0 Darwin Kernel Version 16.5.0 
> root:xnu-3789.51.2~3/RELEASE_X86_64 x86_64 & JDK 1.8.0_121
>Reporter: Pranas Baliuka
>Priority: Minor
>
> The quick guide https://kafka.apache.org/0101/documentation.html#quickstart
> Leaves the bad first impression at the step when test messages are appended:
> {code}
> kafka_2.11-0.10.1.0 pranas$ bin/kafka-topics.sh --list --zookeeper 
> localhost:2181
> session-1
> kafka_2.11-0.10.1.0 pranas$ bin/kafka-console-producer.sh --broker-list 
> localhost:9092 --topic test
> Message 1
> [2017-05-05 09:05:10,923] WARN Error while fetching metadata with correlation 
> id 0 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
> Message 2
> Message 3
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #3844: MINOR: Use sha512 instead of sha2 suffix in releas...

2017-09-13 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3844


---


Admin Client : no way to create topic with default partitions and replication factor

2017-09-13 Thread Paolo Patierno
Hi devs,


taking a look at the Admin Client API and the related implementation, it seems 
that:


  *   the CreateTopics request allows "num_partitions" and 
"replication_factor" to be set to -1 (meaning unset), which is used when you 
specify "replica_assignment" instead
  *   the available NewTopic constructor doesn't allow you to use such a 
feature. Even trying to pass null as "replicaAssignments" throws an exception 
because a collection is built from it.

I think it could be useful from an administrative point of view to have this 
possibility, as it already happens when auto creation is enabled and a topic 
is created with default partitions and replicas when a consumer/producer asks 
for metadata (and the topic doesn't exist).


Of course this proposal needs a KIP because we are changing the Admin Client 
API (in a non-breaking way). Other than a change on the admin client side, it 
will need a change in the broker as well, because the current path (when a 
create topic request arrives) doesn't handle the -1 values for 
"num_partitions" and "replication_factor", so it would need to apply the 
default values in this case.


What do you think about that?


Thanks.


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Windows Embedded & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience


[ANNOUCE] Apache Kafka 0.11.0.1 Released

2017-09-13 Thread Damian Guy
The Apache Kafka community is pleased to announce the release for Apache
Kafka 0.11.0.1. This is a bug fix release that fixes 51 issues in 0.11.0.0.

All of the changes in this release can be found in the release notes:
https://archive.apache.org/dist/kafka/0.11.0.1/RELEASE_NOTES.html

Apache Kafka is a distributed streaming platform with four core APIs:

** The Producer API allows an application to publish a stream of records to
one or more Kafka topics.

** The Consumer API allows an application to subscribe to one or more
topics and process the stream of records produced to them.

** The Streams API allows an application to act as a stream processor,
consuming an input stream from one or more topics and producing an output
stream to one or more output topics, effectively transforming the input
streams to output streams.

** The Connector API allows building and running reusable producers or
consumers that connect Kafka topics to existing applications or data
systems. For example, a connector to a relational database might capture
every change to a table.


With these APIs, Kafka can be used for two broad classes of application:

** Building real-time streaming data pipelines that reliably get data
between systems or applications.

** Building real-time streaming applications that transform or react to the
streams of data.


You can download the source release from
https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/kafka-0.11.0.1-src.tgz

and binary releases from
https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/kafka_2.11-0.11.0.1.tgz
https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.1/kafka_2.12-0.11.0.1.tgz


A big thank you to the following 33 contributors to this release!

Apurva Mehta, Bill Bejeck, Colin P. Mccabe, Damian Guy, Derrick Or, Dong
Lin, dongeforever, Eno Thereska, Ewen Cheslack-Postava, Gregor Uhlenheuer,
Guozhang Wang, Hooman Broujerdi, huxihx, Ismael Juma, Jan Burkhardt, Jason
Gustafson, Jeff Klukas, Jiangjie Qin, Joel Dice, Konstantine Karantasis,
Manikumar Reddy, Matthias J. Sax, Max Zheng, Paolo Patierno, ppatierno,
radzish, Rajini Sivaram, Randall Hauch, Robin Moffatt, Stephane Roset,
umesh chaudhary, Vahid Hashemian, Xavier Léauté

We welcome your help and feedback. For more information on how to
report problems, and to get involved, visit the project website at
http://kafka.apache.org/


Thanks,
Damian


[GitHub] kafka pull request #3844: MINOR: Use sha512 instead of sha2 suffix in releas...

2017-09-13 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/3844

MINOR: Use sha512 instead of sha2 suffix in release artifacts

As per Apache guidelines:

http://www.apache.org/dev/release-distribution#sigs-and-sums

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka fix-sha512-naming

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3844.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3844


commit 63683bbe7a6ad19dd4f54456e9e1b502cee73687
Author: Ismael Juma 
Date:   2017-09-13T11:34:57Z

MINOR: Use sha512 instead of sha2 suffix in release artifacts

As per Apache guidelines:
http://www.apache.org/dev/release-distribution#sigs-and-sums




---


Re: [VOTE] 0.11.0.1 RC0

2017-09-13 Thread Ismael Juma
Just a quick follow-up: we have renamed the files ending in .sha2 to end in
.sha512 instead as per Apache guidelines. This was requested by Henk P.
Penning.

Ismael

On Tue, Sep 12, 2017 at 1:54 PM, Damian Guy  wrote:

> Thanks all the vote has now closed and the release has been accepted. I'll
> post an announcement to the dev list shortly.
>
> On Tue, 12 Sep 2017 at 13:14 Thomas Crayford 
> wrote:
>
> > Heroku has vetted this through our typical performance and regression
> > testing, and everything looks good. +1 (non-binding) from us.
> >
> > On Tue, Sep 5, 2017 at 9:34 PM, Damian Guy  wrote:
> >
> > > Hello Kafka users, developers and client-developers,
> > >
> > > This is the first candidate for release of Apache Kafka 0.11.0.1.
> > >
> > > This is a bug fix release and it includes fixes and improvements from
> 49
> > > JIRAs (including a few critical bugs).
> > >
> > > Release notes for the 0.11.0.1 release:
> > > http://home.apache.org/~damianguy/kafka-0.11.0.1-rc0/
> RELEASE_NOTES.html
> > >
> > > *** Please download, test and vote by Saturday, September 9, 9am PT
> > >
> > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > http://kafka.apache.org/KEYS
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > http://home.apache.org/~damianguy/kafka-0.11.0.1-rc0/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging/
> > >
> > > * Javadoc:
> > > http://home.apache.org/~damianguy/kafka-0.11.0.1-rc0/javadoc/
> > >
> > > * Tag to be voted upon (off 0.11.0 branch) is the 0.11.0.1 tag:
> > > https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> > > a8aa61266aedcf62e45b3595a2cf68c819ca1a6c
> > >
> > >
> > > * Documentation:
> > > Note the documentation can't be pushed live due to changes that will
> not
> > go
> > > live until the release. You can manually verify by downloading
> > > http://home.apache.org/~damianguy/kafka-0.11.0.1-rc0/
> > > kafka_2.11-0.11.0.1-site-docs.tgz
> > >
> > >
> > > * Protocol:
> > > http://kafka.apache.org/0110/protocol.html
> > >
> > > * Successful Jenkins builds for the 0.11.0 branch:
> > > Unit/integration tests: https://builds.apache.org/job/
> > > kafka-0.11.0-jdk7/298
> > >
> > > System tests:
> > > http://confluent-kafka-0-11-0-system-test-results.s3-us-
> > > west-2.amazonaws.com/2017-09-05--001.1504612096--apache--0.
> > > 11.0--7b6e5f9/report.html
> > >
> > > /**
> > >
> > > Thanks,
> > > Damian
> > >
> >
>


[GitHub] kafka pull request #3843: MINOR: Fix JavaDoc for StreamsConfig.PROCESSING_GU...

2017-09-13 Thread leigh-perry
GitHub user leigh-perry opened a pull request:

https://github.com/apache/kafka/pull/3843

MINOR: Fix JavaDoc for StreamsConfig.PROCESSING_GUARANTEE_CONFIG

The contribution is my original work and I license the work to the project 
under the project's open source licence.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/leigh-perry/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3843.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3843


commit af56db57e7373457c5a47463927f3285200ab197
Author: lperry 
Date:   2017-09-13T11:03:49Z

Fix JavaDoc for StreamsConfig.PROCESSING_GUARANTEE_CONFIG




---


[jira] [Resolved] (KAFKA-5296) Unable to write to some partitions of newly created topic in 10.2

2017-09-13 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-5296.
--
Resolution: Fixed

Closing as per the above discussion. Multiple controllers can create these 
kinds of issues. Controller-related issues are being addressed in the Kafka 
Controller redesign JIRAs (KAFKA-5027).

> Unable to write to some partitions of newly created topic in 10.2
> -
>
> Key: KAFKA-5296
> URL: https://issues.apache.org/jira/browse/KAFKA-5296
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.2.0
>Reporter: Abhisek Saikia
>
> We are using Kafka 0.10.2 and the cluster was running fine for a month with 50 
> topics; now we are having issues producing messages to newly created 
> topics. The create topic command is successful but producers are throwing 
> errors while writing to some partitions. 
> Error in producer-
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for 
> [topic1]-8: 30039 ms has passed since batch creation plus linger time
>   at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:70)
>  ~[kafka-clients-0.10.2.0.jar:na]
>   at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:57)
>  ~[kafka-clients-0.10.2.0.jar:na]
>   at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:25)
>  ~[kafka-clients-0.10.2.0.jar:na]
> On the broker side, I don't see any topic-partition folder getting created for 
> the broker that is the leader for the partition. 
> While using the 0.8 client, the write used to hang when it started writing to a 
> partition having this issue. With 0.10.2 the producer hang issue is resolved
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [VOTE] KIP-195: AdminClient.createPartitions()

2017-09-13 Thread Ismael Juma
Tom,

Thanks for the KIP, +1 (binding) from me.

Ismael

On Fri, Sep 8, 2017 at 5:42 PM, Tom Bentley  wrote:

> I would like to start the vote on KIP-195 which adds an AdminClient API for
> increasing the number of partitions of a topic. The details are here:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-195%3A+AdminClient.
> createPartitions
>
> Cheers,
>
> Tom
>


Re: [DISCUSS]: KIP-159: Introducing Rich functions to Streams

2017-09-13 Thread Jeyhun Karimov
Dear all,

I would like to resume the discussion on KIP-159. I (and Guozhang) think
that releasing KIP-149 and KIP-159 in the same release would make sense to
avoid a release with "partial" public APIs. There is a KIP [1] proposed by
Guozhang (and approved by me) to unify both KIPs.
Please feel free to comment on this.

[1]
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=73637757

Cheers,
Jeyhun

On Fri, Jul 21, 2017 at 2:00 AM Jeyhun Karimov  wrote:

> Hi Matthias, Damian, all,
>
> Thanks for your comments and sorry for super-late update.
>
> Sure, the DSL refactoring is not blocking for this KIP.
> I made some changes to KIP document based on my prototype.
>
> Please feel free to comment.
>
> Cheers,
> Jeyhun
>
> On Fri, Jul 7, 2017 at 9:35 PM Matthias J. Sax 
> wrote:
>
>> I would not block this KIP with regard to DSL refactoring. IMHO, we can
>> just finish this one and the DSL refactoring will help later on to
>> reduce the number of overloads.
>>
>> -Matthias
>>
>> On 7/7/17 5:28 AM, Jeyhun Karimov wrote:
>> > I am following the related thread in the mailing list and looking
>> forward
>> > to a one-shot solution for the overloads issue.
>> >
>> > Cheers,
>> > Jeyhun
>> >
>> > On Fri, Jul 7, 2017 at 10:32 AM Damian Guy 
>> wrote:
>> >
>> >> Hi Jeyhun,
>> >>
>> >> About overrides, what other alternatives do we have? For
>> >>> backwards-compatibility we have to add extra methods to the existing
>> >> ones.
>> >>>
>> >>>
>> >> It wasn't clear to me in the KIP if these are new methods or replacing
>> >> existing ones.
>> >> Also, we are currently discussing options for replacing the overrides.
>> >>
>> >> Thanks,
>> >> Damian
>> >>
>> >>
>> >>> About ProcessorContext vs RecordContext, you are right. I think I
>> need to
>> >>> implement a prototype to understand the full picture as some parts of
>> the
>> >>> KIP might not be as straightforward as I thought.
>> >>>
>> >>>
>> >>> Cheers,
>> >>> Jeyhun
>> >>>
>> >>> On Wed, Jul 5, 2017 at 10:40 AM Damian Guy 
>> wrote:
>> >>>
>>  HI Jeyhun,
>> 
>>  Is the intention that these methods are new overloads on the KStream,
>>  KTable, etc?
>> 
>>  It is worth noting that a ProcessorContext is not a RecordContext. A
>>  RecordContext, as it stands, only exists during the processing of a
>> >>> single
>>  record. Whereas the ProcessorContext exists for the lifetime of the
>>  Processor. So it doesn't make sense to cast a ProcessorContext to a
>>  RecordContext.
>>  You mentioned above passing the InternalProcessorContext to the
>> init()
>>  calls. It is internal for a reason and i think it should remain that
>> >> way.
>>  It might be better to move the recordContext() method from
>>  InternalProcessorContext to ProcessorContext.
>> 
>>  In the KIP you have an example showing:
>>  richMapper.init((RecordContext) processorContext);
>>  But the interface is:
>>  public interface RichValueMapper<V, VR> {
>>  VR apply(final V value, final RecordContext recordContext);
>>  }
>>  i.e., there is no init(...), besides as above this wouldn't make
>> sense.
>> 
>>  Thanks,
>>  Damian
>> 
>>  On Tue, 4 Jul 2017 at 23:30 Jeyhun Karimov 
>> >> wrote:
>> 
>> > Hi Matthias,
>> >
>> > Actually my intend was to provide to RichInitializer and later on we
>>  could
>> > provide the context of the record as you also mentioned.
>> > I remove that not to confuse the users.
>> > Regarding the RecordContext and ProcessorContext interfaces, I just
>> > realized the InternalProcessorContext class. Can't we pass this as a
>> > parameter to init() method of processors? Then we would be able to
>> >> get
>> > RecordContext easily with just a method call.
>> >
>> >
>> > Cheers,
>> > Jeyhun
>> >
>> > On Thu, Jun 29, 2017 at 10:14 PM Matthias J. Sax <
>> >>> matth...@confluent.io>
>> > wrote:
>> >
>> >> One more thing:
>> >>
>> >> I don't think `RichInitializer` makes sense. As we don't have any
>> >> input record, there is also no context. We could of course provide the
>> >> context of the record that triggers the init call, but this seems to be
>> >> semantically questionable. Also, the context for this first record will
>> >> be provided by the consecutive call to aggregate anyway.
>> >>
>> >>
>> >> -Matthias
>> >>
>> >> On 6/29/17 1:11 PM, Matthias J. Sax wrote:
>> >>> Thanks for updating the KIP.
>> >>>
>> >>> I have one concern with regard to backward compatibility. You suggest
>> >>> to use RecordContext as a base interface for ProcessorContext. This
>> >>> will break compatibility.
>> >>>
>> >>> I think we should just have two independent interfaces.

Re: [VOTE] KIP-182 - Reduce Streams DSL overloads and allow easier use of custom storage engines

2017-09-13 Thread Damian Guy
Hi Guozhang,

I had an offline discussion with Matthias and Bill about it. The feeling was
that `to` offers some benefit as syntactic sugar, so perhaps there is no harm
in keeping it. `through`, however, offers less, since we can now materialize
stores via `filter`, `map`, etc., so one of its main benefits no longer
exists. WDYT?
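For illustration, a rough sketch of the replacement path (names per this KIP;
re-reading the topic is one way to get a KTable back if one is still needed):

// old, to be deprecated:
//   KTable<String, Long> copy = table.through("topic", "store");
// proposed replacement:
KStream<String, Long> stream = table.toStream().through("topic");
// if a KTable is still required downstream, re-read the topic:
KTable<String, Long> copy = builder.table("topic");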

Thanks,
Damian

On Tue, 12 Sep 2017 at 18:17 Guozhang Wang  wrote:

> Hi Damian,
>
> Why we are deprecating KTable.through while keeping KTable.to? Should we
> either keep both of them or deprecate both of them in favor of
> KTable.toStream if people agree that it is confusing to users?
>
>
> Guozhang
>
>
> On Tue, Sep 12, 2017 at 1:18 AM, Damian Guy  wrote:
>
> > Hi All,
> >
> > A minor update to the KIP: I needed to add KTable.to(Produced) for
> > consistency. KTable.through will be deprecated in favour of using
> > KTable.toStream().through().
> >
> > Thanks,
> > Damian
> >
> > On Thu, 7 Sep 2017 at 08:52 Damian Guy  wrote:
> >
> > > Thanks all. The vote is now closed and the KIP has been accepted with:
> > > 2 non binding votes - bill and matthias
> > > 3 binding  - Damian, Guozhang, Sriram
> > >
> > > Regards,
> > > Damian
> > >
> > > On Tue, 5 Sep 2017 at 22:24 Sriram Subramanian 
> wrote:
> > >
> > >> +1
> > >>
> > >> On Tue, Sep 5, 2017 at 1:33 PM, Guozhang Wang 
> > wrote:
> > >>
> > >> > +1
> > >> >
> > >> > On Fri, Sep 1, 2017 at 3:45 PM, Matthias J. Sax <
> > matth...@confluent.io>
> > >> > wrote:
> > >> >
> > >> > > +1
> > >> > >
> > >> > > On 9/1/17 2:53 PM, Bill Bejeck wrote:
> > >> > > > +1
> > >> > > >
> > >> > > > On Thu, Aug 31, 2017 at 10:20 AM, Damian Guy <
> > damian@gmail.com>
> > >> > > wrote:
> > >> > > >
> > >> > > >> Thanks everyone for voting! Unfortunately I've had to make a bit
> > >> > > >> of an update based on some issues found during implementation.
> > >> > > >> The main changes are:
> > >> > > >> BytesStoreSupplier -> StoreSupplier
> > >> > > >> Addition of:
> > >> > > >> WindowBytesStoreSupplier, KeyValueBytesStoreSupplier,
> > >> > > >> SessionBytesStoreSupplier that will restrict store types to
> > >> > > >> <Bytes, byte[]>
> > >> > > >> 3 new overloads added to Materialized to enable developers to
> > >> > > >> create a Materialized of the appropriate type, i.e., WindowStore etc.
> > >> > > >> Update DSL where Materialized is used such that the stores have
> > >> > > >> generic types of <Bytes, byte[]>
> > >> > > >> Some minor changes to the arguments to Stores#persistentWindowStore
> > >> > > >> and Stores#persistentSessionStore
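As a sketch of how the reworked pieces compose (signatures assumed from the
updated KIP document, not final):

WindowBytesStoreSupplier supplier = Stores.persistentWindowStore(
        "counts-store",
        TimeUnit.DAYS.toMillis(1),     // retention period
        3,                             // number of segments
        TimeUnit.MINUTES.toMillis(5),  // window size
        false);                        // retain duplicates
// the underlying store is <Bytes, byte[]>; serdes are carried by Materialized
Materialized<String, Long, WindowStore<Bytes, byte[]>> materialized =
        Materialized.<String, Long>as(supplier)
                .withKeySerde(Serdes.String())
                .withValueSerde(Serdes.Long());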
> > >> > > >>
> > >> > > >> Please take a look and recast the votes.
> > >> > > >>
> > >> > > >> Thanks for your time,
> > >> > > >> Damian
> > >> > > >>
> > >> > > >> On Fri, 25 Aug 2017 at 17:05 Matthias J. Sax <
> > >> matth...@confluent.io>
> > >> > > >> wrote:
> > >> > > >>
> > >> > > >>> Thanks Damian. Great KIP!
> > >> > > >>>
> > >> > > >>> +1
> > >> > > >>>
> > >> > > >>>
> > >> > > >>> -Matthias
> > >> > > >>>
> > >> > > >>> On 8/25/17 6:45 AM, Damian Guy wrote:
> > >> > >  Hi,
> > >> > > 
> > >> > >  I've just realised we need to add two methods to StateStoreBuilder
> > >> > >  or it isn't going to work:
> > >> > > 
> > >> > >  Map<String, String> logConfig();
> > >> > >  boolean loggingEnabled();
> > >> > > 
> > >> > >  These are needed when we are building the topology and determining
> > >> > >  changelog topic names and configs.
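Roughly, the topology builder would consult them like this when wiring
changelogs (internal logic sketched with assumed names, not actual code):

if (storeBuilder.loggingEnabled()) {
    // changelog topics follow the <applicationId>-<storeName>-changelog pattern
    String changelog = applicationId + "-" + storeBuilder.name() + "-changelog";
    Map<String, String> topicConfig = storeBuilder.logConfig();
    // create or validate the changelog topic using topicConfig ...
}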
> > >> > > 
> > >> > > 
> > >> > >  I've also updated the KIP to add
> > >> > > 
> > >> > >  StreamBuilder#stream(String topic)
> > >> > > 
> > >> > >  StreamBuilder#stream(String topic, Consumed options)
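A usage sketch of the two overloads (builder class name and Consumed options
assumed per the final KIP; the serdes are illustrative):

StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> withDefaults = builder.stream("page-views");
KStream<String, String> withOptions = builder.stream("page-views",
        Consumed.with(Serdes.String(), Serdes.String()));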
> > >> > > 
> > >> > > 
> > >> > >  Thanks
> > >> > > 
> > >> > > 
> > >> > >  On Thu, 24 Aug 2017 at 22:11 Sriram Subramanian <
> > >> r...@confluent.io>
> > >> > > >>> wrote:
> > >> > > 
> > >> > > > +1
> > >> > > >
> > >> > > > On Thu, Aug 24, 2017 at 10:20 AM, Guozhang Wang <
> > >> > wangg...@gmail.com>
> > >> > > > wrote:
> > >> > > >
> > >> > > >> +1. Thanks Damian!
> > >> > > >>
> > >> > > >> On Thu, Aug 24, 2017 at 9:47 AM, Bill Bejeck <
> > >> bbej...@gmail.com>
> > >> > > >>> wrote:
> > >> > > >>
> > >> > > >>> Thanks for the KIP!
> > >> > > >>>
> > >> > > >>> +1
> > >> > > >>>
> > >> > > >>> Thanks,
> > >> > > >>> Bill
> > >> > > >>>
> > >> > > >>> On Thu, Aug 24, 2017 at 12:25 PM, Damian Guy <
> > >> > damian@gmail.com
> > >> > > >
> > >> > > >> wrote:
> > >> > > >>>
> > >> > >  Hi,
> > >> > > 
> > >> > >  I'd like to kick off the voting thread for KIP-182:
> > >> > >  https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > >> > >  

[jira] [Created] (KAFKA-5882) NullPointerException in ConsumerCoordinator

2017-09-13 Thread Seweryn Habdank-Wojewodzki (JIRA)
Seweryn Habdank-Wojewodzki created KAFKA-5882:
-

 Summary: NullPointerException in ConsumerCoordinator
 Key: KAFKA-5882
 URL: https://issues.apache.org/jira/browse/KAFKA-5882
 Project: Kafka
  Issue Type: Bug
Reporter: Seweryn Habdank-Wojewodzki


It seems the bugfix [KAFKA-5073|https://issues.apache.org/jira/browse/KAFKA-5073]
was made, but it introduced some other issues.
In some cases (I am not sure which ones) I got an NPE (below).

I would expect that even in the case of a FATAL error, anything other than an NPE is thrown.

{code}
2017-09-12 23:34:54 ERROR ConsumerCoordinator:269 - User provided listener 
org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener for 
group streamer failed on partition assignment
java.lang.NullPointerException: null
at 
org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:123)
 ~[myapp-streamer.jar:?]
at 
org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:1234)
 ~[myapp-streamer.jar:?]
at 
org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:294)
 ~[myapp-streamer.jar:?]
at 
org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:254)
 ~[myapp-streamer.jar:?]
at 
org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:1313)
 ~[myapp-streamer.jar:?]
at 
org.apache.kafka.streams.processor.internals.StreamThread.access$1100(StreamThread.java:73)
 ~[myapp-streamer.jar:?]
at 
org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsAssigned(StreamThread.java:183)
 ~[myapp-streamer.jar:?]
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:265)
 [myapp-streamer.jar:?]
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:363)
 [myapp-streamer.jar:?]
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:310)
 [myapp-streamer.jar:?]
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:297)
 [myapp-streamer.jar:?]
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1078)
 [myapp-streamer.jar:?]
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043) 
[myapp-streamer.jar:?]
at 
org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:582)
 [myapp-streamer.jar:?]
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:553)
 [myapp-streamer.jar:?]
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:527)
 [myapp-streamer.jar:?]
2017-09-12 23:34:54 INFO  StreamThread:1040 - stream-thread 
[streamer-3a44578b-faa8-4b5b-bbeb-7a7f04639563-StreamThread-1] Shutting down
2017-09-12 23:34:54 INFO  KafkaProducer:972 - Closing the Kafka producer with 
timeoutMillis = 9223372036854775807 ms.
{code}
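Until this is fixed, a caller-side handler can at least surface the failure
(a sketch using the public API; it does not prevent the NPE itself):

{code}
KafkaStreams streams = new KafkaStreams(builder, props);
streams.setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
    @Override
    public void uncaughtException(final Thread t, final Throwable e) {
        // react to the dead stream thread: log, alert, or shut the app down
        System.err.println("Stream thread " + t.getName() + " died: " + e);
    }
});
streams.start();
{code}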



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5881) Consuming from added partitions without restarting the consumer

2017-09-13 Thread Viliam Durina (JIRA)
Viliam Durina created KAFKA-5881:


 Summary: Consuming from added partitions without restarting the 
consumer
 Key: KAFKA-5881
 URL: https://issues.apache.org/jira/browse/KAFKA-5881
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Affects Versions: 0.11.0.0
Reporter: Viliam Durina


Currently the {{KafkaConsumer}} is not able to return events from newly added
partitions, either with automatic or with manual assignment; I have to create a
new consumer. This was a surprise to me and to
[https://stackoverflow.com/q/46175275/952135 other users].

With manual assignment, a call to {{consumer.partitionsFor("topic")}} should
eventually return the new partitions.
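For example, a manual-assignment refresh might look like this (a sketch; it
relies on the consumer refreshing its metadata, governed by
{{metadata.max.age.ms}}):

{code}
List<PartitionInfo> infos = consumer.partitionsFor("topic");
if (infos.size() > consumer.assignment().size()) {
    List<TopicPartition> partitions = new ArrayList<>();
    for (PartitionInfo info : infos)
        partitions.add(new TopicPartition(info.topic(), info.partition()));
    consumer.assign(partitions);  // replaces the previous assignment
}
{code}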

With automatic assignment, one of the consumers in the group should start
consuming from the new partitions.

If this is technically not possible, it should at least be documented.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)