Re: [DISCUSS] KIP-209 Connection String Support

2017-10-24 Thread Michael André Pearce
Fair enough on URL encoding, but as mentioned it is important to be able to 
escape; I agree with the backslash option.

I would still like some form of prefix to the string to denote that it is for Kafka.

 E.g. kafka:: (if semicolon separators)
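
To make the escaping concrete, here is a minimal, purely illustrative sketch of splitting such a string on unescaped ';' entries, with backslash escaping and an optional kafka:: prefix. The class and method names are made up for this example and are not part of the KIP.

```
import java.util.ArrayList;
import java.util.List;

// Illustrative only: split a connection string on unescaped ';' entries,
// treating "\\" as an escaped backslash and "\;" as a literal semicolon.
public class ConnectionStringSplitter {

    static List<String> splitEntries(String connectionString) {
        String body = connectionString.startsWith("kafka::")
                ? connectionString.substring("kafka::".length())
                : connectionString;
        List<String> entries = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean escaped = false;
        for (char c : body.toCharArray()) {
            if (escaped) {
                current.append(c);      // escaped character is taken literally
                escaped = false;
            } else if (c == '\\') {
                escaped = true;         // next character is literal
            } else if (c == ';') {
                entries.add(current.toString());
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        entries.add(current.toString());
        return entries;
    }

    public static void main(String[] args) {
        // a value containing an escaped ';' survives the split intact
        System.out.println(splitEntries(
                "kafka::host1:9092,host2:9092;schema.registry.url=http://schema1:80\\;schema2:80/api"));
    }
}
```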

Sent from my iPad

> On 24 Oct 2017, at 17:27, Colin McCabe  wrote:
> 
> Hi Clebert,
> 
> As some other people mentioned, a comma is probably not a great choice
> for the entry separator.  We have a lot of configuration values that
> already include commas.  How about using a semicolon instead?
> 
> You also need an escaping system in case someone needs a semicolon (or
> whatever) that is part of a configuration key or configuration value. 
> How about a simple backslash?  And then if you want a literal backslash,
> you put in two backslashes.
> 
>> On Thu, Oct 19, 2017, at 18:10, Michael André Pearce wrote:
>> Just another point to why I’d propose the below change to the string
>> format I propose , is an ability to encode the strings easily.
>> 
>> We should note that it’s quite typical for serializers to use a
>> schema registry, where one of the properties they will need to set
>> would be in some form like:
>> 
>> schema.registry.url=http://schema1:80,schema2:80/api
>> 
>> So being able to safely encode this is important. 
>> 
>> Sent from my iPhone
>> 
>>> On 20 Oct 2017, at 01:47, Michael André Pearce 
>>>  wrote:
>>> 
>>> Hi Clebert
>>> 
>>> Great kip!
>>> 
>>> Instead of ‘;’ to separate the host section from the params section, could 
>>> it be a ‘?’
>>> 
>>> And likewise, could the ‘,’ param separator be ‘&’? (Keeping the ‘,’ for the host 
>>> separator just makes it easier to distinguish.)
>>> 
>>> Also, this way it makes it easier to encode params etc., as we can just reuse 
>>> URL encoders.
> 
> Please, no.  URL encoders will mangle a lot of things horribly (like
> less than signs, greater than signs, etc.)  We should not make this a
> URL or pseudo-URL (see the discussion above).  We should make it clear
> that this is not a URL.
> 
>> Invalid conversions would throw InvalidArgumentException (with a description 
>> of the invalid conversion)
>> Invalid parameters would throw InvalidArgumentException (with the name of 
>> the invalid parameter).
> 
> This will cause a lot of compatibility problems, right?  If I switch
> back and forth between two Kafka versions, they will support slightly
> different sets of configuration parameters.  It seems saner to simply
> ignore configuration parameters that we don't understand, like we do
> now.
> 
> best,
> Colin
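
To make Colin's compatibility point concrete, a tiny illustrative sketch (names and structure invented for this example) of accepting the parameters we understand and merely logging the rest instead of throwing:

```
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Sketch only: keep known parameters, warn about and ignore unknown ones,
// so that connection strings stay usable across client versions.
public class LenientParamHandling {

    static Map<String, String> keepKnown(Map<String, String> params, Set<String> knownKeys) {
        Map<String, String> accepted = new HashMap<>();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (knownKeys.contains(e.getKey())) {
                accepted.put(e.getKey(), e.getValue());
            } else {
                // unknown parameter: log and ignore rather than throw
                System.out.println("Ignoring unknown parameter: " + e.getKey());
            }
        }
        return accepted;
    }
}
```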
> 
> 
>>> 
>>> Also, as with many systems, it’s typical to note what the connection string is 
>>> for with a prefix, e.g. ‘kafka://‘
>>> 
>>> It just makes it obvious, when an app has a list of connection strings in its 
>>> runtime properties, which is for which technology.
>>> 
>>> Eg example connection string would be:
>>> 
>>> kafka://host1:port1,host2:port2?param1=value1&parm2=value2
>>> 
>>> Cheers
>>> Mike
>>> 
>>> Sent from my iPhone
>>> 
 On 19 Oct 2017, at 19:29, Clebert Suconic  
 wrote:
 
 Do I have to do anything here?
 
 I wonder how long I need to wait before proposing the vote.
 
 On Tue, Oct 17, 2017 at 1:17 PM, Clebert Suconic
  wrote:
> I had these updates in already... you just changed the names at the
> string... but it was pretty much the same thing I think... I had taken
> your suggestions though.
> 
> 
> The Exceptions.. these would be implementation details... all I wanted
> to make sure is that users would get the name of the invalid parameter
> as part of a string on a message.
> 
> On Tue, Oct 17, 2017 at 3:15 AM, Satish Duggana
>  wrote:
>> You may need to update KIP with the details discussed in this thread in
>> proposed changes section.
>> 
 My proposed format for the connection string would be:
 IP1:host1,IP2:host2,...IPN:hostn;parameterName=value1;parameterName2=value2;...
>> parameterNameN=valueN
>> Format should be
>> host1:port1,host2:port2,…host:portn;param-name1=param-val1,..
>> 
 Invalid conversions would throw InvalidArgumentException (with a
>> description of the invalid conversion)
 Invalid parameters would throw InvalidArgumentException (with the name 
 of
>> the invalid parameter).
>> 
>> Should throw IllegalArgumentException with respective message.
>> 
>> Thanks,
>> Satish.
>> 
>> On Tue, Oct 17, 2017 at 4:46 AM, Clebert Suconic 
>> 
>> wrote:
>> 
>>> That works.
>>> 
 On Mon, Oct 16, 2017 at 6:59 PM Ted Yu  wrote:
 
 Can't you use IllegalArgumentException ?
 
 Some example in current code base:
 
 clients/src/main/java/org/apache/kafka/clients/Metadata.java:
 throw new IllegalArgumentException("Max time to wait for metadata
>>> updates
 should not be < 0 milliseconds");
 
 On Mon, Oct 16, 2017

[GitHub] kafka pull request #4130: HOTFIX: Remove sysout logging

2017-10-24 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/4130

HOTFIX: Remove sysout logging



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka KHotfix-0110-remove-logging

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4130.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4130






---


[GitHub] kafka pull request #4129: KAFKA-6115: TaskManager should be type aware

2017-10-24 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/4129

KAFKA-6115: TaskManager should be type aware

 - remove type specific methods from Task interface
 - add generics to preserve task type
 - add sub classes for different task types

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka 
kafka-6115-taskManager-should-be-type-aware

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4129.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4129


commit 48baeb368cb8072669780506f56afde084c943d9
Author: Matthias J. Sax 
Date:   2017-10-24T23:39:34Z

KAFKA-6115: TaskManager should be type aware
 - remove type specific methods from Task interface
 - add generics to preserve task type
 - add sub classes for different task types




---


Re: [DISCUSS] KIP-205: Add getAllKeys() API to ReadOnlyWindowStore

2017-10-24 Thread Richard Yu
I think we can come up with this compromise: range(long timeFrom, long
timeTo) will be changed to getKeys(long timeFrom, long timeTo). Sounds fair?


On Tue, Oct 24, 2017 at 10:44 AM, Xavier Léauté  wrote:

> >
> > Generally I think having `all / range` is better in terms of consistency
> > with key-value windows. I.e. queries with key are named as `get / fetch`
> > for kv / window stores, and queries without key are named as `range /
> all`.
> >
>
> For kv stores, range takes a range of keys, and with this proposal range on
> window stores would take a range of time; that does not sound consistent to
> me at all.
>
> We also already have fetch, which takes both a range of time and keys.
>
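
For readers following along, a rough sketch of what the discussed additions could look like on a window store interface, using the names floated in this thread; the final naming (range vs. getKeys vs. something else) is what the vote will settle.

```
import org.apache.kafka.streams.kstream.Windowed;
import org.apache.kafka.streams.state.KeyValueIterator;

// Sketch only, not the actual ReadOnlyWindowStore definition.
// The existing per-key fetch(K key, long timeFrom, long timeTo) is unchanged.
public interface WindowStoreAdditionsSketch<K, V> {

    // all windowed entries currently in the store
    KeyValueIterator<Windowed<K>, V> all();

    // all windowed entries whose window falls in [timeFrom, timeTo]
    KeyValueIterator<Windowed<K>, V> getKeys(long timeFrom, long timeTo);
}
```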


[jira] [Resolved] (KAFKA-6116) Major performance issue due to excessive logging during leader election

2017-10-24 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-6116.

Resolution: Fixed

Fixed in 0.11.0, 1.0 and trunk. Commit for 0.11.0:

https://github.com/apache/kafka/commit/d798c515992bdfd57b0a958d5b430d1e1a3e296e

> Major performance issue due to excessive logging during leader election
> ---
>
> Key: KAFKA-6116
> URL: https://issues.apache.org/jira/browse/KAFKA-6116
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Reporter: Ismael Juma
>Assignee: Onur Karaman
>Priority: Blocker
> Fix For: 1.0.0, 0.11.0.2, 1.1.0
>
>
> This was particularly problematic in clusters with a large number of 
> partitions.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6116) Major performance issue due to excessive logging during leader election

2017-10-24 Thread Ismael Juma (JIRA)
Ismael Juma created KAFKA-6116:
--

 Summary: Major performance issue due to excessive logging during 
leader election
 Key: KAFKA-6116
 URL: https://issues.apache.org/jira/browse/KAFKA-6116
 Project: Kafka
  Issue Type: Bug
  Components: controller
Reporter: Ismael Juma
Assignee: Onur Karaman
Priority: Blocker
 Fix For: 1.0.0, 0.11.0.2, 1.1.0


This was particularly problematic in clusters with a large number of partitions.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6115) TaskManager should be type aware

2017-10-24 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-6115:
--

 Summary: TaskManager should be type aware
 Key: KAFKA-6115
 URL: https://issues.apache.org/jira/browse/KAFKA-6115
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 1.0.0
Reporter: Matthias J. Sax
Assignee: Matthias J. Sax
 Fix For: 1.1.0


Currently, in {{TaskManager}} we don't distinguish between {{StreamTask}} and 
{{StandbyTask}}. However, both have quite different life cycles and thus we 
should try to distinguish between them.

This also affects the interface {{Task}}, which should only contain methods we need 
for both types of tasks, but not sub-type specific methods.
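
A purely illustrative sketch of that direction (not the actual Streams internals): shared methods stay on the {{Task}} interface, sub types add their own methods, and the manager keeps the concrete type via generics.

```
import java.util.HashMap;
import java.util.Map;

interface Task {
    void initialize();
    void close();
    // no StreamTask- or StandbyTask-specific methods here
}

class StreamTask implements Task {
    @Override public void initialize() { }
    @Override public void close() { }
    void process() { }                 // active-task specific
}

class StandbyTask implements Task {
    @Override public void initialize() { }
    @Override public void close() { }
    void updateStandbyState() { }      // standby-task specific
}

// Holds tasks of one concrete type, so callers get that type back, not plain Task.
class AssignedTasks<T extends Task> {
    private final Map<Integer, T> running = new HashMap<>();

    void add(int taskId, T task) { running.put(taskId, task); }

    T task(int taskId) { return running.get(taskId); }
}
```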



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6114) kafka Java API Consumer and producer Offset value comparison?

2017-10-24 Thread veerendra nath jasthi (JIRA)
veerendra nath jasthi created KAFKA-6114:


 Summary: kafka Java API Consumer and producer Offset value 
comparison?
 Key: KAFKA-6114
 URL: https://issues.apache.org/jira/browse/KAFKA-6114
 Project: Kafka
  Issue Type: Wish
  Components: consumer, offset manager, producer 
Affects Versions: 0.11.0.0
 Environment: Linux 
Reporter: veerendra nath jasthi



I have a requirement to match the Kafka producer offset value to the consumer offset 
using the Java API.

I am new to Kafka; could anyone suggest how to proceed with this?
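
One way to approach this with the public Java API, sketched below with placeholder bootstrap servers, topic, and group id: compare the partition's log end offset (the next offset a producer would write) with the consumer group's last committed offset. This is only an illustration, not an official recipe.

```
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class OffsetLagCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "group-1");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        TopicPartition tp = new TopicPartition("my-topic", 0);
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // log end offset = offset the next produced message would get
            Map<TopicPartition, Long> endOffsets =
                    consumer.endOffsets(Collections.singletonList(tp));
            // last committed offset of this consumer group for the partition
            OffsetAndMetadata committed = consumer.committed(tp);
            long consumed = committed == null ? 0L : committed.offset();
            System.out.println("Partition " + tp + " lag: " + (endOffsets.get(tp) - consumed));
        }
    }
}
```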



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6113) broker failure leads to under replicated partitions

2017-10-24 Thread Takao Kobayashi (JIRA)
Takao Kobayashi created KAFKA-6113:
--

 Summary: broker failure leads to under replicated partitions
 Key: KAFKA-6113
 URL: https://issues.apache.org/jira/browse/KAFKA-6113
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.1.1
Reporter: Takao Kobayashi
 Attachments: Screen Shot 2017-10-20 at 10.57.28 AM.png, kafka1.csv, 
kafka2.csv, kafka3.csv, kafka4.csv, kafka5.csv, zookeeper2.csv

A similar issue to https://issues.apache.org/jira/browse/KAFKA-2729 but with 
some slight differences: We're using a 5 kafka, 3 zookeeper node setup running 
on kubernetes on aws. One node (5.kafka.production1) suddenly failed and was 
offline for ~13min. 
During the outage many partitions were under replicated. As soon as the node 
came back online, all brokers recovered. 
Looking through the logs, there were a bunch of partitions that failed to 
shrink the ISR (to remove the failed broker) since the cached zkVersion on the 
kafka node was not equal to that in zookeeper (a screenshot of one such example 
is attached).
I've attached the logs for all the kafka nodes and one of the zookeeper nodes. 
Any advice or insight would be much appreciated.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4128: MINOR: random cleanup and JavaDoc improvements for...

2017-10-24 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/4128

MINOR: random cleanup and JavaDoc improvements for clients and Streams



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka minor-cleanup

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4128.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4128


commit f6e38aadde3efbb0be483d21c7dcc6d78d61b9dd
Author: Matthias J. Sax 
Date:   2017-10-20T18:10:45Z

MINOR: random cleanup and JavaDoc improvements for clients and Streams




---


[GitHub] kafka-site pull request #103: MINOR: Fix typo in title

2017-10-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka-site/pull/103


---


[GitHub] kafka-site issue #103: MINOR: Fix typo in title

2017-10-24 Thread guozhangwang
Github user guozhangwang commented on the issue:

https://github.com/apache/kafka-site/pull/103
  
LGTM. Merged to `asf-site`.


---


[GitHub] kafka-site issue #103: MINOR: Fix typo in title

2017-10-24 Thread guozhangwang
Github user guozhangwang commented on the issue:

https://github.com/apache/kafka-site/pull/103
  
Could you submit a patch to `kafka` repo under `docs/streams/index.html` as 
well?


---


[GitHub] kafka pull request #4127: MINOR use proper template classes for internalSele...

2017-10-24 Thread tedyu
GitHub user tedyu opened a pull request:

https://github.com/apache/kafka/pull/4127

MINOR use proper template classes for internalSelectKey()

As pointed out in this thread: 
http://search-hadoop.com/m/Kafka/uyzND1fy2K7I85G1?subj=Kafka+source+code+Build+Error
 , Eclipse shows syntax error for the following:
```
return new KeyValue<>(mapper.apply(key, value), value);
```

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tedyu/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4127.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4127


commit 582f353b08f8bd6c1d12e7e85cc72bc7875abaa7
Author: tedyu 
Date:   2017-10-24T20:07:00Z

MINOR use proper template classes for internalSelectKey()




---


Re: [DISCUSS] KIP-212: Enforce set of legal characters for connector names

2017-10-24 Thread Colin McCabe
On Tue, Oct 24, 2017, at 11:28, Sönke Liebau wrote:
> Hi,
> 
> after reading your messages I'll grant that I might have picked a
> somewhat
> draconian option to solve these issues.
> 
> In general I believe that properly encoding the URLs after having created
> the connectors should solve a lot of the issues already. For some
> characters the rest api returns an error on creating the connector as
> well,
> so for that URL encoding won't help. However the connectors do get
> created
> even though an error is returned, I've never investigated if they are in
> a
> consistent state tbh - I'll give this another look.
> 
> @colin: Entity encoding would allow us to encode a lot of characters,
> however I am unsure whether we should prefer it over url encoding in this
> case, as mostly the end user would have to encode the characters himself.
> And due to entity encoding ending every character with a ; which causes
> the
> embedded jetty server to cut the connector name at that character we'd
> probably need to encode that character in URL encoding again for that to
> work out - which might get a bit too complex tbh.

Sorry, I meant to write percent-encoding, not entity refs.
https://en.wikipedia.org/wiki/Percent-encoding

best,
Colin
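
As a rough illustration of that percent-encoding idea applied to connector names in REST paths (a sketch only, not the KIP's design; URLEncoder targets form encoding, so '+' is mapped back to %20 for use in a path):

```
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.net.URLEncoder;

public class ConnectorNameEncoding {

    static String encodeForPath(String connectorName) throws UnsupportedEncodingException {
        return URLEncoder.encode(connectorName, "UTF-8").replace("+", "%20");
    }

    static String decodeFromPath(String encoded) throws UnsupportedEncodingException {
        return URLDecoder.decode(encoded, "UTF-8");
    }

    public static void main(String[] args) throws UnsupportedEncodingException {
        String name = "my connector #1 (test)";
        System.out.println("/connectors/" + encodeForPath(name));
        // prints: /connectors/my%20connector%20%231%20%28test%29
        System.out.println(decodeFromPath(encodeForPath(name)));
        // prints: my connector #1 (test)
    }
}
```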


> I will further investigate which characters the url decoding that jetty
> brings to the table will let us use and if all of these are correctly
> handled during connector creation and report back with a new list of
> characters that I think we can support fairly easily.
> 
> Kind regards,
> Sönke
> 
> 
> On Tue, Oct 24, 2017 at 6:42 PM, Colin McCabe  wrote:
> 
> > It should be possible to use entity references to encode these
> > characters in URLs.  See https://dev.w3.org/html5/html-author/charref
> > Maybe I'm misunderstanding the problem, but can we simply encode the
> > URLs, rather than restricting the names?
> >
> > best,
> > Colin
> >
> >
> > On Mon, Oct 23, 2017, at 14:12, Randall Hauch wrote:
> > > Here's the link to KIP-212:
> > > https://cwiki.apache.org/confluence/pages/viewpage.
> > action?pageId=74684586
> > >
> > > I do think it's worthwhile to define the rules for connector names.
> > > However, I think it would be better to describe the current restrictions
> > > for names outside of them appearing within URLs. For example, if we can
> > > keep connector names relatively free of constraints but instead define
> > > how
> > > names should be encoded when used within URLs (e.g., URL encoding), then
> > > we
> > > may not have (m)any backward compatibility issues other than fixing some
> > > bugs related to proper encoding/decoding.
> > >
> > > Thoughts?
> > >
> > >
> > > On Mon, Oct 23, 2017 at 3:44 PM, Sönke Liebau <
> > > soenke.lie...@opencore.com.invalid> wrote:
> > >
> > > > All,
> > > >
> > > > I've created a KIP to discuss enforcing of rules on what characters are
> > > > allowed in connector names.
> > > >
> > > > Since this may break api calls that are currently working I figured a
> > KIP
> > > > is the better way to go than to just create a jira.
> > > >
> > > > I'd love to hear your input on this!
> > > >
> >
> 
> 
> 
> -- 
> Sönke Liebau
> Partner
> Tel. +49 179 7940878
> OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany


Re: [DISCUSS] KIP-212: Enforce set of legal characters for connector names

2017-10-24 Thread Sönke Liebau
Hi,

after reading your messages I'll grant that I might have picked a somewhat
draconian option to solve these issues.

In general I believe that properly encoding the URLs after having created
the connectors should solve a lot of the issues already. For some
characters the rest api returns an error on creating the connector as well,
so for that URL encoding won't help. However the connectors do get created
even though an error is returned, I've never investigated if they are in a
consistent state tbh - I'll give this another look.

@colin: Entity encoding would allow us to encode a lot of characters,
however I am unsure whether we should prefer it over url encoding in this
case, as mostly the end user would have to encode the characters himself.
And due to entity encoding ending every character with a ; which causes the
embedded jetty server to cut the connector name at that character we'd
probably need to encode that character in URL encoding again for that to
work out - which might get a bit too complex tbh.
I will further investigate which characters the url decoding that jetty
brings to the table will let us use and if all of these are correctly
handled during connector creation and report back with a new list of
characters that I think we can support fairly easily.

Kind regards,
Sönke


On Tue, Oct 24, 2017 at 6:42 PM, Colin McCabe  wrote:

> It should be possible to use entity references to encode these
> characters in URLs.  See https://dev.w3.org/html5/html-author/charref
> Maybe I'm misunderstanding the problem, but can we simply encode the
> URLs, rather than restricting the names?
>
> best,
> Colin
>
>
> On Mon, Oct 23, 2017, at 14:12, Randall Hauch wrote:
> > Here's the link to KIP-212:
> > https://cwiki.apache.org/confluence/pages/viewpage.
> action?pageId=74684586
> >
> > I do think it's worthwhile to define the rules for connector names.
> > However, I think it would be better to describe the current restrictions
> > for names outside of them appearing within URLs. For example, if we can
> > keep connector names relatively free of constraints but instead define
> > how
> > names should be encoded when used within URLs (e.g., URL encoding), then
> > we
> > may not have (m)any backward compatibility issues other than fixing some
> > bugs related to proper encoding/decoding.
> >
> > Thoughts?
> >
> >
> > On Mon, Oct 23, 2017 at 3:44 PM, Sönke Liebau <
> > soenke.lie...@opencore.com.invalid> wrote:
> >
> > > All,
> > >
> > > I've created a KIP to discuss enforcing of rules on what characters are
> > > allowed in connector names.
> > >
> > > Since this may break api calls that are currently working I figured a
> KIP
> > > is the better way to go than to just create a jira.
> > >
> > > I'd love to hear your input on this!
> > >
>



-- 
Sönke Liebau
Partner
Tel. +49 179 7940878
OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany


Re: Before creating KIP : Kafka Connect / Add a configuration provider class

2017-10-24 Thread Florian Hussonnois
Yes, the provider classes will need to be installed on each worker (same
installation mechanism as a connector plugin).

A new provider instance should be created for each connector instance but
will be configured at the worker level.

A provider class may have two behaviors:
 - provide a default configuration subset for connectors (a connector can
still override these defaults).
 - enforce a subset of configuration (connectors can't override a provider
configuration).

This behavior could be configured at the worker level.

Finally, I think a provider should be able to trigger a connector
reconfiguration.
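
To make the idea more tangible, a purely hypothetical sketch of what such a provider contract could look like; no such interface exists in Connect today and all names below are illustrative.

```
import java.util.Map;

public interface ConnectorConfigProvider {

    // Called by the worker when a connector is created or updated; returns
    // extra or overriding configs to merge with the submitted ones.
    Map<String, String> provideConfigs(String connectorName,
                                       Map<String, String> submittedConfigs);

    // Whether the provided keys are enforced (connectors cannot override them)
    // or only act as defaults, per the two behaviors described above.
    default boolean enforced() {
        return false;
    }
}
```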


2017-10-23 17:30 GMT+02:00 Sönke Liebau 
:

> I agree, sounds like an intriguing idea. I think we could probably come up
> with a few common enough implementations that merit including in Kafka.
> FileConfigProvider for example, so you can distribute common configs
> throughout your cluster with some orchestration tool and users simply state
> the identifier of some connection..
>
> What I am wondering is, whether these classes would always return an entire
> configuration, or is it a more specific approach where you might use a
> FileConfigProvider to retrieve some hostname and some other ConfigProvider
> to retrieve credentials, etc...
>
>
>
> On Mon, Oct 23, 2017 at 5:12 PM, Randall Hauch  wrote:
>
> > Very interesting. Would the proposed configuration provider be set at the
> > connector level or the worker level? The latter would obviously be
> required
> > to handle all/multiple connector configurations. Either way, the provider
> > class(es) would need to be installed on the worker (really, every
> worker),
> > correct?
> >
> > Would all provider implementations be custom implementations, or are
> there
> > some provider implementations that are general enough for Connect to
> > include them?
> >
> > Best regards,
> >
> > Randall
> >
> > On Fri, Oct 20, 2017 at 5:08 AM, Florian Hussonnois <
> fhussonn...@gmail.com
> > >
> > wrote:
> >
> > > Hi Team
> > >
> > > Before submitting a new KIP I would like to open the discussion
> regarding
> > > an enhancement of Kafka Connect.
> > >
> > > Currently, the only way to configure a connector (in distributed mode)
> is
> > > through REST endpoints while creating or updating a connector.
> > >
> > > It would be nice to have the possibility to specify a configs provider
> > > class (as we specify the connector class) in the JSON payload sent over
> > the
> > > REST API.
> > > This class would be called during the connector creation to complete
> the
> > > configs submitted via REST.
> > >
> > > The motivation for such a functionality is, for example, to enforce a
> > > configuration for all deployed connectors, to provide default configs or
> > > to provide sensitive configs like user/password.
> > >
> > > I've met these requirements on different projects.
> > >
> > > Do you think this feature merits a new KIP?
> > >
> > > Thanks,
> > >
> > > --
> > > Florian HUSSONNOIS
> > >
> >
>
>
>
> --
> Sönke Liebau
> Partner
> Tel. +49 179 7940878
> OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany
>



-- 
Florian HUSSONNOIS


Re: [DISCUSS] KIP-205: Add getAllKeys() API to ReadOnlyWindowStore

2017-10-24 Thread Xavier Léauté
>
> Generally I think having `all / range` is better in terms of consistency
> with key-value windows. I.e. queries with key are named as `get / fetch`
> for kv / window stores, and queries without key are named as `range / all`.
>

For kv stores, range takes a range of keys, and with this proposal range on
window stores would take a range of time; that does not sound consistent to
me at all.

We also already have fetch, which takes both a range of time and keys.


[GitHub] kafka-site issue #103: MINOR: Fix typo in title

2017-10-24 Thread joel-hamill
Github user joel-hamill commented on the issue:

https://github.com/apache/kafka-site/pull/103
  
ping @guozhangwang 


---


[GitHub] kafka-site pull request #103: MINOR: Fix typo in title

2017-10-24 Thread joel-hamill
GitHub user joel-hamill opened a pull request:

https://github.com/apache/kafka-site/pull/103

MINOR: Fix typo in title



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/joel-hamill/kafka-site dev-guide-tittle

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/103.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #103


commit ede62b9a90763152d690355f83c6dcbf3c7e7052
Author: Joel Hamill 
Date:   2017-10-24T17:40:56Z

MINOR: Fix typo in title




---


[GitHub] kafka-site pull request #102: Dev guide title fix

2017-10-24 Thread joel-hamill
Github user joel-hamill closed the pull request at:

https://github.com/apache/kafka-site/pull/102


---


[GitHub] kafka-site pull request #102: Dev guide title fix

2017-10-24 Thread joel-hamill
GitHub user joel-hamill opened a pull request:

https://github.com/apache/kafka-site/pull/102

Dev guide title fix



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/joel-hamill/kafka-site dev-guide-title-fix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/102.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #102


commit 72b3bc737c4d53a8a4685adca2ef0e5a0f65c355
Author: Joel Hamill 
Date:   2017-10-04T00:51:03Z

MINOR: Update verbiage on landing page

Author: Joel Hamill 
Author: Joel Hamill <11722533+joel-ham...@users.noreply.github.com>

Reviewers: Guozhang Wang , Michael G. Noll 
, Damian Guy 

Closes #77 from joel-hamill/joel-hamill/nav-fixes-streams

commit dba4c03eaf5d1828d41479b1d8760559c0383e2d
Author: Joel Hamill 
Date:   2017-10-24T17:27:14Z

Merge branch 'asf-site' of github.com:apache/kafka-site into asf-site

commit 7f52fcefd0227e248bb4c6ae9c54a9a13d87f6af
Author: Joel Hamill 
Date:   2017-10-24T17:34:21Z

MINOR: Typo in title




---


[GitHub] kafka-site pull request #101: MINOR: Fix typo in Dev Guide title

2017-10-24 Thread joel-hamill
Github user joel-hamill closed the pull request at:

https://github.com/apache/kafka-site/pull/101


---


[GitHub] kafka-site pull request #101: Dev guide title

2017-10-24 Thread joel-hamill
GitHub user joel-hamill opened a pull request:

https://github.com/apache/kafka-site/pull/101

Dev guide title



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/joel-hamill/kafka-site dev-guide-title

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/101.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #101


commit 72b3bc737c4d53a8a4685adca2ef0e5a0f65c355
Author: Joel Hamill 
Date:   2017-10-04T00:51:03Z

MINOR: Update verbiage on landing page

Author: Joel Hamill 
Author: Joel Hamill <11722533+joel-ham...@users.noreply.github.com>

Reviewers: Guozhang Wang , Michael G. Noll 
, Damian Guy 

Closes #77 from joel-hamill/joel-hamill/nav-fixes-streams

commit dba4c03eaf5d1828d41479b1d8760559c0383e2d
Author: Joel Hamill 
Date:   2017-10-24T17:27:14Z

Merge branch 'asf-site' of github.com:apache/kafka-site into asf-site

commit 31be40f1869193bcf9e5a75bf95611b71c68a5cc
Author: Joel Hamill 
Date:   2017-10-24T17:32:22Z

MINOR: Typo in title




---


Re: [DISCUSS] KIP-212: Enforce set of legal characters for connector names

2017-10-24 Thread Colin McCabe
It should be possible to use entity references to encode these
characters in URLs.  See https://dev.w3.org/html5/html-author/charref 
Maybe I'm misunderstanding the problem, but can we simply encode the
URLs, rather than restricting the names?

best,
Colin


On Mon, Oct 23, 2017, at 14:12, Randall Hauch wrote:
> Here's the link to KIP-212:
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=74684586
> 
> I do think it's worthwhile to define the rules for connector names.
> However, I think it would be better to describe the current restrictions
> for names outside of them appearing within URLs. For example, if we can
> keep connector names relatively free of constraints but instead define
> how
> names should be encoded when used within URLs (e.g., URL encoding), then
> we
> may not have (m)any backward compatibility issues other than fixing some
> bugs related to proper encoding/decoding.
> 
> Thoughts?
> 
> 
> On Mon, Oct 23, 2017 at 3:44 PM, Sönke Liebau <
> soenke.lie...@opencore.com.invalid> wrote:
> 
> > All,
> >
> > I've created a KIP to discuss enforcing of rules on what characters are
> > allowed in connector names.
> >
> > Since this may break api calls that are currently working I figured a KIP
> > is the better way to go than to just create a jira.
> >
> > I'd love to hear your input on this!
> >


[GitHub] kafka pull request #4126: KAFKA-6072: User ZookeeperClient in GroupCoordinat...

2017-10-24 Thread omkreddy
GitHub user omkreddy opened a pull request:

https://github.com/apache/kafka/pull/4126

KAFKA-6072: User ZookeeperClient in GroupCoordinator and 
TransactionCoordinator



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/omkreddy/kafka 
KAFKA-6072-ZK-IN-GRoupCoordinator

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4126.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4126


commit 65089d25b2f16f9dbdebc358ebd8252ce37b61d0
Author: Manikumar Reddy 
Date:   2017-10-19T15:51:55Z

KAFKA-6072: User ZookeeperClient in GroupCoordinator and 
TransactionCoordinator




---


Re: [DISCUSS] KIP-209 Connection String Support

2017-10-24 Thread Colin McCabe
Hi Clebert,

As some other people mentioned, a comma is probably not a great choice
for the entry separator.  We have a lot of configuration values that
already include commas.  How about using a semicolon instead?

You also need an escaping system in case someone needs a semicolon (or
whatever) that is part of a configuration key or configuration value. 
How about a simple backslash?  And then if you want a literal backslash,
you put in two backslashes.

On Thu, Oct 19, 2017, at 18:10, Michael André Pearce wrote:
> Just another point to why I’d propose the below change to the string
> format I propose , is an ability to encode the strings easily.
> 
> We should note that it’s quite typical for serializers to use a
> schema registry, where one of the properties they will need to set
> would be in some form like:
> 
> schema.registry.url=http://schema1:80,schema2:80/api
> 
> So being able to safely encode this is important. 
> 
> Sent from my iPhone
> 
> > On 20 Oct 2017, at 01:47, Michael André Pearce 
> >  wrote:
> > 
> > Hi Clebert
> > 
> > Great kip!
> > 
> > Instead of ‘;’ to separate the host section from the params section, could 
> > it be a ‘?’
> > 
> > And likewise, could the ‘,’ param separator be ‘&’? (Keeping the ‘,’ for the host 
> > separator just makes it easier to distinguish.)
> > 
> > Also, this way it makes it easier to encode params etc., as we can just reuse 
> > URL encoders.

Please, no.  URL encoders will mangle a lot of things horribly (like
less than signs, greater than signs, etc.)  We should not make this a
URL or pseudo-URL (see the discussion above).  We should make it clear
that this is not a URL.

> Invalid conversions would throw InvalidArgumentException (with a description 
> of the invalid conversion)
> Invalid parameters would throw InvalidArgumentException (with the name of the 
> invalid parameter).

This will cause a lot of compatibility problems, right?  If I switch
back and forth between two Kafka versions, they will support slightly
different sets of configuration parameters.  It seems saner to simply
ignore configuration parameters that we don't understand, like we do
now.

best,
Colin


> > 
> > Also, as with many systems, it’s typical to note what the connection string is 
> > for with a prefix, e.g. ‘kafka://‘
> > 
> > It just makes it obvious, when an app has a list of connection strings in its 
> > runtime properties, which is for which technology.
> > 
> > Eg example connection string would be:
> > 
> > kafka://host1:port1,host2:port2?param1=value1&parm2=value2
> > 
> > Cheers
> > Mike
> > 
> > Sent from my iPhone
> > 
> >> On 19 Oct 2017, at 19:29, Clebert Suconic  
> >> wrote:
> >> 
> >> Do I have to do anything here?
> >> 
> >> I wonder how long I need to wait before proposing the vote.
> >> 
> >> On Tue, Oct 17, 2017 at 1:17 PM, Clebert Suconic
> >>  wrote:
> >>> I had these updates in already... you just changed the names at the
> >>> string... but it was pretty much the same thing I think... I had taken
> >>> your suggestions though.
> >>> 
> >>> 
> >>> The Exceptions.. these would be implementation details... all I wanted
> >>> to make sure is that users would get the name of the invalid parameter
> >>> as part of a string on a message.
> >>> 
> >>> On Tue, Oct 17, 2017 at 3:15 AM, Satish Duggana
> >>>  wrote:
>  You may need to update KIP with the details discussed in this thread in
>  proposed changes section.
>  
> >> My proposed format for the connection string would be:
> >> IP1:host1,IP2:host2,...IPN:hostn;parameterName=value1;parameterName2=value2;...
>  parameterNameN=valueN
>  Format should be
>  host1:port1,host2:port2,…host:portn;param-name1=param-val1,..
>  
> >> Invalid conversions would throw InvalidArgumentException (with a
>  description of the invalid conversion)
> >> Invalid parameters would throw InvalidArgumentException (with the name 
> >> of
>  the invalid parameter).
>  
>  Should throw IllegalArgumentException with respective message.
>  
>  Thanks,
>  Satish.
>  
>  On Tue, Oct 17, 2017 at 4:46 AM, Clebert Suconic 
>  
>  wrote:
>  
> > That works.
> > 
> >> On Mon, Oct 16, 2017 at 6:59 PM Ted Yu  wrote:
> >> 
> >> Can't you use IllegalArgumentException ?
> >> 
> >> Some example in current code base:
> >> 
> >> clients/src/main/java/org/apache/kafka/clients/Metadata.java:
> >> throw new IllegalArgumentException("Max time to wait for metadata
> > updates
> >> should not be < 0 milliseconds");
> >> 
> >> On Mon, Oct 16, 2017 at 3:06 PM, Clebert Suconic <
> >> clebert.suco...@gmail.com>
> >> wrote:
> >> 
> >>> I updated the wiki with the list on the proposed arguments.
> >>> 
> >>> I also changed it to include a new Exception class that would be named
> >>> InvalidParameterException (since I couldn't find an existing Exception
> >>> class that I could reuse into this). (I

Re: [VOTE] KIP-205: Add all() and range() API to ReadOnlyWindowStore

2017-10-24 Thread Guozhang Wang
+1. Thanks.

On Mon, Oct 23, 2017 at 8:11 PM, Richard Yu 
wrote:

> Hi all,
>
> I want to propose KIP-205 for the addition of new API. It is about adding
> methods similar to those found in ReadOnlyKeyValueStore to the
> ReadOnlyWindowStore class. As it appears the discussion has reached a
> conclusion, I would like to start the voting process.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 205%3A+Add+all%28%29+and+range%28%29+API+to+ReadOnlyWindowStore
>
> Thanks for your patience!
>



-- 
-- Guozhang


[jira] [Created] (KAFKA-6112) SSL + ACL does not seem to work

2017-10-24 Thread Jagadish Prasath Ramu (JIRA)
Jagadish Prasath Ramu created KAFKA-6112:


 Summary: SSL + ACL does not seem to work
 Key: KAFKA-6112
 URL: https://issues.apache.org/jira/browse/KAFKA-6112
 Project: Kafka
  Issue Type: Bug
  Components: security
Affects Versions: 0.11.0.0, 0.11.0.1
Reporter: Jagadish Prasath Ramu


I'm trying to enable ACL for a cluster that has SSL based authentication setup.

Similar issue (or exceptions) has been reported in the following JIRA:
https://issues.apache.org/jira/browse/KAFKA-3687 (refer the last 2 exceptions 
that were posted after the issue was closed).


error messages seen in Producer:

{noformat}



[2017-10-24 18:32:25,254] WARN Error while fetching metadata with correlation 
id 349 : {t1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2017-10-24 18:32:25,362] WARN Error while fetching metadata with correlation 
id 350 : {t1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2017-10-24 18:32:25,470] WARN Error while fetching metadata with correlation 
id 351 : {t1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
[2017-10-24 18:32:25,575] WARN Error while fetching metadata with correlation 
id 352 : {t1=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
{noformat}

security related kafka config.properties:

{noformat}
ssl.keystore.location=kafka.server.keystore.jks
ssl.keystore.password=abc123
ssl.key.password=abc123
ssl.truststore.location=kafka.server.truststore.jks
ssl.truststore.password=abc123

ssl.client.auth=required
ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
ssl.keystore.type = JKS
ssl.truststore.type = JKS
security.inter.broker.protocol = SSL

authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=false

super.users=User:Bob;User:"CN=localhost,OU=XXX,O=,L=XXX,ST=XX,C=XX"

listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093
{noformat}

client configuration file:
{noformat}
security.protocol=SSL
ssl.truststore.location=kafka.client.truststore.jks
ssl.truststore.password=abc123
ssl.keystore.location=kafka.client.keystore.jks
ssl.keystore.password=abc123
ssl.key.password=abc123
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.truststore.type=JKS
ssl.keystore.type=JKS
group.id=group-1
{noformat}

The debug messages of authorizer log does not show any "DENY" messages.

{noformat}
[2017-10-24 18:32:26,319] DEBUG operation = Create on resource = 
Cluster:kafka-cluster from host = 127.0.0.1 is Allow based on acl = 
User:CN=localhost,OU=XXX,O=,L=XXX,ST=XX,C=XX has Allow permission for 
operations: Create from hosts: 127.0.0.1 (kafka.authorizer.logger)
[2017-10-24 18:32:26,319] DEBUG Principal = 
User:CN=localhost,OU=XXX,O=,L=XXX,ST=XX,C=XX is Allowed Operation = Create 
from host = 127.0.0.1 on resource = Cluster:kafka-cluster 
(kafka.authorizer.logger)
{noformat}



I have followed the scripts stated in the thread:
http://comments.gmane.org/gmane.comp.apache.kafka.user/12619






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Metadata class doesn't "expose" topics with errors

2017-10-24 Thread Paolo Patierno
Hi Guozhang,

thanks for replying !


I see your point about the Metadata class, which doesn't need to expose errors 
because they are transient.


Regarding KIP-204, the delete operation in the "legacy" client doesn't 
have any retry logic; it just returns the error to the user, who should 
retry (on the topics where the operation failed).

If I added retry logic in the "new" admin client, then for a delete records 
operation on several topic partitions at the same time I would have to retry if at 
least one of the topic partitions comes back with LEADER_NOT_AVAILABLE (after the 
metadata request), without proceeding with the other topic partitions which do have 
leaders.

Maybe it's better to continue the operation on such topics and come back 
to the user with LEADER_NOT_AVAILABLE for the others (that's the current 
behaviour of the "legacy" admin client).


For now, the implementation I have (I'll push a PR soon) uses the Call 
class to send a MetadataRequest and then, in its handleResponse, uses 
another Call instance to send the DeleteRecordsRequest.
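
For illustration, a minimal sketch of that "group partitions by leader" step, assuming the offsets arrive as a map from partition to long offset; the real KIP-204 code may look different.

```
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.TopicPartition;

public class LeaderGrouping {

    static Map<Node, Map<TopicPartition, Long>> groupByLeader(
            Cluster cluster, Map<TopicPartition, Long> offsets) {
        Map<Node, Map<TopicPartition, Long>> byLeader = new HashMap<>();
        for (Map.Entry<TopicPartition, Long> entry : offsets.entrySet()) {
            Node leader = cluster.leaderFor(entry.getKey());
            if (leader == null) {
                // no leader known: surface LEADER_NOT_AVAILABLE for this
                // partition (or wait for a metadata update and retry)
                continue;
            }
            byLeader.computeIfAbsent(leader, n -> new HashMap<>())
                    .put(entry.getKey(), entry.getValue());
        }
        return byLeader;
    }
}
```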


Thanks


Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Azure & IoT
Microsoft Azure Advisor

Twitter : @ppatierno
Linkedin : paolopatierno
Blog : DevExperience



From: Guozhang Wang 
Sent: Tuesday, October 24, 2017 12:52 AM
To: dev@kafka.apache.org
Subject: Re: Metadata class doesn't "expose" topics with errors

Hello Paolo,

The reason we filtered the errors in the topics in the generated Cluster is
that Metadata and its "fetch()" returned Cluster is a common class that is
used among all clients (producer, consumer, connect, streams, admin), and
is treated as a high-level representation of the current snapshot of the
hosted topic information of the cluster, and hence we intentionally exclude
any transient errors in the representation to abstract such issues away
from its users.

As for your implementation on KIP-204, I think just waiting and retrying until the
updated metadata.fetch() Cluster contains the leader information for the
topic is fine: if a LEADER_NOT_AVAILABLE is returned you'll need to
back off and retry anyway, right?


Guozhang



On Mon, Oct 23, 2017 at 2:36 AM, Paolo Patierno  wrote:

> Finally another plan could be to use nesting of runnable calls.
>
> The first one for asking metadata (using the MetadataRequest which
> provides us all the errors) and then sending the delete records requests in
> the handleResponse() of such metadata request.
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience
>
>
> 
> From: Paolo Patierno 
> Sent: Monday, October 23, 2017 9:06 AM
> To: dev@kafka.apache.org
> Subject: Metadata class doesn't "expose" topics with errors
>
> Hi devs,
>
> while developing the KIP-204 (having delete records operation in the "new"
> Admin Client) I'm facing with the following doubt (or maybe a lack of info)
> ...
>
>
> As described by KIP-107 (which implements this feature at protocol level
> and in the "legacy" Admin Client), the request needs to be sent to the
> leader.
>
>
> For both KIPs, the operation takes a map of partition to offset (the offset is
> a long in the "legacy" API but is becoming a class in the "new"
> API) and, in order to reduce the number of requests to different leaders, my
> code groups partitions having the same leader, so it ends up with a map of leader
> node to a map of partition to offset.
>
>
> In order to know the leaders I need to request metadata and there are two
> ways for doing that :
>
>
>   *   using something like the producer does with Metadata class, putting
> the topics, request update and waiting for it
>   *   using the low level MetadataRequest and handling the related
> response (which is what the "legacy" API does today)
>
> I noticed that building the Cluster object from the MetadataResponse, the
> topics with errors are skipped and it means that in the final "high level"
> Metadata class (fetching the Cluster object) there is no information about
> them. So with first solution we have no info about topics with errors
> (maybe the only errors I'm able to handle is the "LEADER_NOT_AVAILABLE", if
> leaderFor() on the Cluster returns a null Node).
>
> Is there any specific reason why "topics with errors" are not exposed in
> the Metadata instance ?
> Is the preferred pattern using the low level protocol stuff in such case ?
>
> Thanks
>
>
> Paolo Patierno
> Senior Software Engineer (IoT) @ Red Hat
> Microsoft MVP on Azure & IoT
> Microsoft Azure Advisor
>
> Twitter : @ppatierno
> Linkedin : paolopatierno
> Blog : DevExperience

Re: [DISCUSS] KIP-208: Add SSL support to Kafka Connect REST interface

2017-10-24 Thread Jakub Scholz
There has been no discussion since my last update a week ago. Unless someone
has some further comments in the next 48 hours, I will start the voting for
this KIP.

Thanks & Regards
Jakub

On Tue, Oct 17, 2017 at 5:54 PM, Jakub Scholz  wrote:

> Ok, so I updated the KIP according to what we discussed. Please have a
> look at the updates. Two points I'm not 100% sure about:
>
> 1) Should we mark the rest.host.name and rest.port options as deprecated?
>
> 2) I needed to also address the advertised hostname / port. With multiple
> listeners it is not clear anymore which one should be used. I saw as one
> option to add advertised.listeners option and some modified version of
> inter.broker.listener.name option to follow what is done in Kafka
> brokers. But for the Connect REST interface, we do not advertise the
> address to the clients like in Kafka broker. So we only need to tell other
> workers how to connect - and for that we need only one advertised address.
> So I decided to reuse the existing rest.advertised.host.name and
> rest.advertised.port options and add additional option
> rest.advertised.security.protocol to specify whether HTTP or HTTPS should
> be used. Does this make sense to you? Do you think this is the right
> approach?
>
> Thanks & Regards
> Jakub
>
> On Mon, Oct 16, 2017 at 6:34 PM, Randall Hauch  wrote:
>
>> The broker's configuration options are "listeners" (plural) and
>> "listeners.security.protocol.map". I agree that following the pattern set
>> by the broker is better, so these are really good ideas. However, at this
>> point I don't see a need for the "listeners.security.procotol.map", which
>> for the broker must be set if the listener name is not a security
>> protocol.
>> Can we not simply just allow "HTTP" and "HTTPS" as the names of the
>> listeners (rather than the broker's "PLAINTEXT", "SSL", etc.)? If so, then
>> for example "listeners" might be set to "http://myhost:8081,
>> https://myhost:80", which seems to work out nicely without needing
>> listener names other than security protocols.
>>
>> I also like using the worker's SSL and SASL security configs by default if
>> "https" is included in the listener, but allowing the overriding of this
>> via other additional properties. Here, I'm not a big fan of
>> "listeners.name.https.*" prefix, which I think is pretty verbose, but I
>> could see "listener.https.*" as a prefix. This allows us to add other
>> security protocols at some point, if that ever becomes necessary.
>>
>> +1 for continuing down this road. Nice work.
>>
>> On Mon, Oct 16, 2017 at 9:51 AM, Ted Yu  wrote:
>>
>> > +1 to this proposal.
>> >
>> > On Mon, Oct 16, 2017 at 7:49 AM, Jakub Scholz  wrote:
>> >
>> > > I was having some more thoughts about it. We can simply take over what
>> > > Kafka broker implements for the listeners:
>> > > - We can take over the "listener" and "listener.security.protocol.map"
>> > > options to define multiple REST listeners and the security protocol they
>> > > should use
>> > > - The HTTPS interface will by default use the default configuration options
>> > > ("ssl.keystore.location" etc.). But if desired, the values can be
>> > > overridden for a given listener (again, as in Kafka broker
>> > > "listener.name..ssl.keystore.location")
>> > >
>> > > This should address both issues raised. But before I incorporate it
>> into
>> > > the KIP, I would love to get some feedback if this sounds OK. Please
>> let
>> > me
>> > > know what do you think ...
>> > >
>> > > Jakub
>> > >
>> > > On Sun, Oct 15, 2017 at 12:23 AM, Jakub Scholz 
>> wrote:
>> > >
>> > > > I agree, adding both HTTP and HTTPS is not complicated. I just
>> didn't
>> > saw
>> > > > the use case for it. But I can add it. Would you add just support
>> for a
>> > > > single HTTP and single HTTPS interface? Or do you see some value
>> even
>> > in
>> > > > allowing more than 2 interfaces (for example one HTTP and two HTTPS
>> > with
>> > > > different configuration)? It could be done similarly to how the
>> Kafka
>> > > > broker does it through the "listener" configuration parameter with
>> > comma
>> > > > separated list. What do you think?
>> > > >
>> > > > As for the "rest" prefix - if we remove it, some of the same
>> > > configuration
>> > > > options are already used today as the option for connecting from
>> Kafka
>> > > > Connect to Kafka broker. So I'm not sure we should mix them. I can
>> > > > definitely imagine some cases where the client SSL configuration
>> will
>> > not
>> > > > be the same as the REST HTTPS configuration. That is why I added the
>> > > > prefix. If we remove the prefix, how would you deal with this?
>> > > >
>> > > > On Fri, Oct 13, 2017 at 6:25 PM, Randall Hauch 
>> > wrote:
>> > > >
>> > > >> Also, do we need these properties to be preceded with `rest`? I'd
>> > argue
>> > > >> that we're just configuring the worker's SSL information, and that
>> the
>> > > >> REST
>> > > >> API would just use that. If we added another non-
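
To visualise the configuration style being discussed, here is a hypothetical worker setup sketched as Java properties; the per-listener option names follow the broker-style prefix idea from this thread and are not final.

```
import java.util.Properties;

public class ConnectRestListenersSketch {
    public static void main(String[] args) {
        Properties worker = new Properties();
        // one plain HTTP listener and one HTTPS listener for the REST interface
        worker.setProperty("listeners", "http://myhost:8081,https://myhost:8443");
        // per-listener SSL override, following the broker-style prefix idea
        worker.setProperty("listener.https.ssl.keystore.location",
                "/var/private/ssl/connect.keystore.jks");
        worker.setProperty("listener.https.ssl.keystore.password", "changeit");
        // single advertised address other workers use to reach this worker
        worker.setProperty("rest.advertised.host.name", "myhost");
        worker.setProperty("rest.advertised.port", "8443");
        worker.setProperty("rest.advertised.security.protocol", "HTTPS");
        worker.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```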

Build failed in Jenkins: kafka-0.11.0-jdk7 #325

2017-10-24 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] HOTFIX: poll with zero millis during restoration

--
[...truncated 2.45 MB...]

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToPerformMultipleTransactions PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToCommitMultiplePartitionOffsets STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToCommitMultiplePartitionOffsets PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologies STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologies PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskGetsFencedUsingIsolatedAppInstances STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskGetsFencedUsingIsolatedAppInstances PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskFailsWithState STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldNotViolateEosIfOneTaskFailsWithState PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologiesAndMultiplePartitions STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRunWithTwoSubtopologiesAndMultiplePartitions PASSED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRestartAfterClose STARTED

org.apache.kafka.streams.integration.EosIntegrationTest > 
shouldBeAbleToRestartAfterClose PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableLeftJoin PASSED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin STARTED

org.apache.kafka.streams.integration.GlobalKTableIntegrationTest > 
shouldKStreamGlobalKTableJoin PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForStateChangelogs PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldUseCompactAndDeleteForWindowStoreChangelogs PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldUseNewConfigsWhenPresent 
PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce STARTED

org.apache.kafka.streams.StreamsConfigTest > shouldAcceptAtLeastOnce PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldUseCorrectDefaultsWhenNoneSpecified PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerEnableIdempotenceIfEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
STARTED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSetDifferentDefaultsIfEosEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldNotOverrideUserConfigRetriesIfExactlyOnceEnabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportPrefixedConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldOverrideStreamsDefaultConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs STARTED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled 
STARTED

org.apache.kafka.streams.StreamsConfigTest > 
shouldAllowSettingProducerMaxInFlightRequestPerConnectionsWhenEosDisabled PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldThrowStreamsExceptionIfValueSerdeConfigFails STARTED