Build failed in Jenkins: kafka-trunk-jdk8 #2639

2018-05-10 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: A few small cleanups in AdminClient from KAFKA-6299 (#4989)

--
[...truncated 421.81 KB...]

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsNewConsumerExistingTopic STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsNewConsumerExistingTopic PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled 
STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldAllowThrottleOptionOnExecute STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldAllowThrottleOptionOnExecute PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithBrokers STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithBrokers PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithTopicsOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithTopicsOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 

[jira] [Created] (KAFKA-6897) Mirrormaker waits to shut down forever on produce failure with abort.on.send.failure=true

2018-05-10 Thread Koelli Mungee (JIRA)
Koelli Mungee created KAFKA-6897:


 Summary: Mirrormaker waits to shut down forever on produce failure 
with abort.on.send.failure=true 
 Key: KAFKA-6897
 URL: https://issues.apache.org/jira/browse/KAFKA-6897
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.1.0
Reporter: Koelli Mungee


Mirrormaker never shuts down after a produce failure:

 
{code:java}
[2018-05-07 08:29:32,417] ERROR Error when sending message to topic test with 
key: 52 bytes, value: 32615 bytes with error: 
(org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for 
test-11: 45646 ms has passed since last append
[2018-05-07 08:29:32,434] INFO Closing producer due to send failure. 
(kafka.tools.MirrorMaker$)
[2018-05-07 08:29:32,434] INFO [Producer clientId=producer-1] Closing the Kafka 
producer with timeoutMillis = 0 ms. 
(org.apache.kafka.clients.producer.KafkaProducer)
{code}

A stack trace of this MirrorMaker process taken 9 hours later shows that the 
main thread is still active, waiting for metadata it will never receive 
because the producer's sender thread is no longer running.
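For context, the wait described above happens inside send(), which blocks for 
metadata up to the producer's max.block.ms. A minimal sketch (not MirrorMaker's 
actual code) of how a bounded max.block.ms turns this hang into a visible 
failure:

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class BoundedSendSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.ByteArraySerializer");
        // send() blocks waiting for metadata for at most max.block.ms; an
        // effectively unbounded value turns a dead sender thread into a
        // permanent hang for the calling thread.
        props.put("max.block.ms", "60000");

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            // If metadata never arrives within 60s, this throws a
            // TimeoutException instead of blocking forever.
            producer.send(new ProducerRecord<>("test", "value".getBytes()));
        }
    }
}
{code}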



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-275 - Indicate "isClosing" in the SinkTaskContext

2018-05-10 Thread Matt Farmer
Given that the conversation has lingered for a bit, I've gone ahead and
opened up a PR with the initial implementation. Let me know your thoughts!

https://github.com/apache/kafka/pull/5002

Also, voting is open - so if you like this idea please send me some binding
+1's before May 22nd so we can get this in Kafka 2.0 :)
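
For readers following along, a minimal sketch of how a sink task could use the
proposed flag (isClosing() is the accessor this KIP adds to SinkTaskContext;
flushAllBuffers() is a hypothetical helper):

    @Override
    public Map<TopicPartition, OffsetAndMetadata> preCommit(
            Map<TopicPartition, OffsetAndMetadata> currentOffsets) {
        if (context.isClosing()) {    // proposed KIP-275 addition
            flushAllBuffers();        // hypothetical: persist progress before
                                      // the rebalance/shutdown completes
        }
        return currentOffsets;        // offsets safe to commit
    }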

On Tue, Apr 17, 2018 at 7:11 PM, Matt Farmer  wrote:

> Hello all, I've updated the KIP again to add a few sentences about the
> general problem we were facing in the motivation section. Please let me
> know if there is any further feedback.
>
> On Tue, Apr 3, 2018 at 1:46 PM, Matt Farmer  wrote:
>
>> Hey Randall,
>>
>> Devil's advocate sparring is always a fun game so I'm down. =)
>>
>> Rebalance caused by connectivity failure is the case that caused us to
>> notice the issue. But the issue
>> is really more about giving connectors the tools to facilitate rebalances
>> or a Kafka connect shutdown
>> cleanly. Perhaps that wasn't clear in the KIP.
>>
>> In our case timeouts were *not* uniformly affecting tasks. But every
>> time a timeout occurred in one task,
>> all tasks lost whatever forward progress they had made. So, yes, in the
>> specific case of timeouts a
>> backoff jitter in the connector is a solution for that particular issue.
>> However, this KIP *also* gives connectors
>> enough information to behave intelligently during any kind of rebalance
>> or shutdown because they're able
>> to discover that preCommit is being invoked for that specific reason. (As
>> opposed to being invoked
>> during normal operation.)
>>
>> On Tue, Apr 3, 2018 at 12:36 PM, Randall Hauch  wrote:
>>
>>> Matt,
>>>
>>> Let me play devil's advocate. Do we need this additional complexity? The
>>> motivation section talked about needing to deal with task failures due to
>>> connectivity problems. Generally speaking, isn't it possible that if one
>>> task has connectivity problems with either the broker or the external
>>> system that other tasks would as well? And in the particular case of S3,
>>> is
>>> it possible to try and prevent the task shutdown in the first place,
>>> perhaps by improving how the S3 connector retries? (We did this in the
>>> Elasticsearch connector using backoff with jitter; see
>>> https://github.com/confluentinc/kafka-connect-elasticsearch/pull/116 for
>>> details.)
>>>
>>> Best regards,
>>>
>>> Randall
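
For reference, "backoff with jitter" as used in that Elasticsearch connector PR
follows this general shape (an illustrative sketch, not the PR's actual code):
each retry waits a random time up to an exponentially growing, capped bound, so
retries from many tasks don't synchronize:

    import java.util.concurrent.ThreadLocalRandom;

    // "Full jitter": pick a random wait in [0, min(cap, base * 2^attempt)].
    long retryWaitTimeMs(int attempt, long baseMs, long capMs) {
        long bound = Math.min(capMs, baseMs * (1L << Math.min(attempt, 16)));
        return ThreadLocalRandom.current().nextLong(0, bound + 1);
    }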
>>>
>>> On Tue, Apr 3, 2018 at 8:39 AM, Matt Farmer  wrote:
>>>
>>> > I have made the requested updates to the KIP! :)
>>> >
>>> > On Mon, Apr 2, 2018 at 11:02 AM, Matt Farmer  wrote:
>>> >
>>> > > Ugh
>>> > >
>>> > > * I can update
>>> > >
>>> > > I need more caffeine...
>>> > >
>>> > > On Mon, Apr 2, 2018 at 11:01 AM, Matt Farmer  wrote:
>>> > >
>>> > >> I'm can update the rejected alternatives section as you describe.
>>> > >>
>>> > >> Also, adding a paragraph to the preCommit javadoc also seems like a
>>> > >> Very Very Good Idea™ so I'll make that update to the KIP as well.
>>> > >>
>>> > >> On Mon, Apr 2, 2018 at 10:48 AM, Randall Hauch 
>>> > wrote:
>>> > >>
>>> > >>> Thanks for the KIP proposal, Matt.
>>> > >>>
>>> > >>> You mention in the "Rejected Alternatives" section that you
>>> considered
>>> > >>> changing the signature of the `preCommit` method but rejected it
>>> > because
>>> > >>> it
>>> > >>> would break backward compatibility. Strictly speaking, it is
>>> possible
>>> > to
>>> > >>> do
>>> > >>> this without breaking compatibility by introducing a new
>>> `preCommit`
>>> > >>> method, deprecating the old one, and having the new implementation
>>> call
>>> > >>> the
>>> > >>> old one. Such an approach would be complicated, and I'm not sure it
>>> > adds
>>> > >>> any value. In fact, one of the benefits of having a context object
>>> is
>>> > >>> that
>>> > >>> we can add methods like the one you're proposing without causing
>>> any
>>> > >>> compatibility issues. Anyway, it probably is worth updating this
>>> > rejected
>>> > >>> alternative to be a bit more precise.
>>> > >>>
>>> > >>> Otherwise, I think this is a good approach, though I'd request
>>> that we
>>> > >>> update the `preCommit` JavaDoc to add a paragraph that explains
>>> this
>>> > >>> scenario. Thoughts?
>>> > >>>
>>> > >>> Randall
>>> > >>>
>>> > >>> On Wed, Mar 28, 2018 at 9:29 PM, Ted Yu 
>>> wrote:
>>> > >>>
>>> > >>> > I looked at WorkerSinkTask and it seems using a boolean for
>>> KIP-275
>>> > >>> should
>>> > >>> > suffice for now.
>>> > >>> >
>>> > >>> > Thanks
>>> > >>> >
>>> > >>> > On Wed, Mar 28, 2018 at 7:20 PM, Matt Farmer 
>>> wrote:
>>> > >>> >
>>> > >>> > > Hey Ted,
>>> > >>> > >
>>> > >>> > > I have not, actually!
>>> > >>> > >
>>> > >>> > > Do you think that we're likely to add multiple states here
>>> soon?
>>> > >>> > >
>>> > >>> > > My instinct is to keep it simple until there are multiple
>>> states
>>> > >>> that we
>>> > >>> > > 

Jenkins build is back to normal : kafka-trunk-jdk7 #3420

2018-05-10 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-6896) add producer metrics exporting in KafkaStreams.java

2018-05-10 Thread Boyang Chen (JIRA)
Boyang Chen created KAFKA-6896:
--

 Summary: add producer metrics exporting in KafkaStreams.java
 Key: KAFKA-6896
 URL: https://issues.apache.org/jira/browse/KAFKA-6896
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Boyang Chen
Assignee: Boyang Chen


We would like to also export the producer metrics from {{StreamThread}}, just 
like the consumer metrics, so that we can gain more visibility into the stream 
application. The approach is to pass the {{threadProducer}} into the 
StreamThread so that we can export its metrics dynamically.

Note that this is a pure internal change that doesn't require a KIP, and in the 
future we also want to export admin client metrics. A follow-up KIP for the 
admin client will be created once this is merged.
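
As a sketch of what this would look like to users once merged (illustrative; 
{{KafkaStreams#metrics()}} is the existing accessor):

{code:java}
// KafkaStreams#metrics() already exposes thread and consumer metrics; after
// this change, the producer metrics registered by each StreamThread would
// appear in the same map. topology/props are assumed to be defined.
KafkaStreams streams = new KafkaStreams(topology, props);
streams.start();
streams.metrics().forEach((name, metric) -> {
    if (name.group().contains("producer"))
        System.out.println(name + " = " + metric.metricValue());
});
{code}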



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-278: Add version option to Kafka's commands

2018-05-10 Thread Ted Yu
+1

On Thu, May 10, 2018 at 6:42 PM, Sasaki Toru 
wrote:

> Hi all,
>
> I would like to start the vote on KIP-278: Add version option to Kafka's
> commands.
>
> The link to this KIP is here:
>  +Add+version+option+to+Kafka%27s+commands>
>
> The discussion thread is here:
> 
>
>
> Many thanks,
> Sasaki
>
> --
> Sasaki Toru(sasaki...@oss.nttdata.com) NTT DATA CORPORATION
>
>


Re: [DISCUSS] KIP-278: Add version option to Kafka's commands

2018-05-10 Thread Sasaki Toru

I started the voting thread for this KIP. Thanks, Jason and Colin.

From: Colin McCabe 
Date: 2018-05-10 2:53 GMT+09:00
Subject: Re: [DISCUSS] KIP-278: Add version option to Kafka's commands
To: dev@kafka.apache.org


+1. Thanks, Sasaki.

Colin

On Wed, May 9, 2018, at 09:15, Jason Gustafson wrote:

Hi Sasaki,

Thanks for the update. The KIP looks good to me. I'd suggest moving to a
vote.

Thanks,
Jason

On Mon, May 7, 2018 at 7:08 AM, Sasaki Toru 
wrote:


Hi Manikumar, Colin,

Thank you for your comment.

As Colin said, I proposed adding an option to show the version information
of the "local" tool, because many programs have this option and I think
Kafka also needs it.

As you said, a function to show the remote Kafka version would be useful,
but I think it is better to create a new KIP for it, because it has some
points which should be considered.

If you have any better ideas, could you please tell us?


Many thanks,
Sasaki

From: Manikumar 

Date: 2018-05-03 4:11 GMT+09:00

Subject: Re: [DISCUSS] KIP-278: Add version option to Kafka's commands
To: dev 


Hi Colin,

Thanks for explanation. It's definitely useful to have  --version flag.

kafka-broker-api-versions.sh gives the API versions, not the Kafka release
version, and it is not easy to figure out the release version from the API
versions. Currently the release version is available via a metric/JMX.
If required, we can implement this in the future.


Thanks,

On Wed, May 2, 2018 at 10:58 PM, Colin McCabe 

wrote:

Hi Manikumar,

We already have a tool for getting the Kafka broker API versions,
"./bin/kafka-broker-api-versions.sh".  It was added as part of KIP-97.

What Sasaki is proposing here is having a way of getting the version of the
locally installed Kafka software, which may be different from the server
version.  Many pieces of software offer a --version flag, and it's always
understood to refer to the local version of the software, not a version
running somewhere else.  The user has no way to get this information now,
other than perhaps trying to look at the names of jar files.

cheers,
Colin

On Tue, May 1, 2018, at 08:20, Manikumar wrote:


I assume the intent of the KIP is to find out the Kafka broker version.  In
this case, maybe we should expose the version using a Kafka request. This
will help remote scripts/tools to query the Kafka version.

Scripts (kafka-topics.sh, kafka-configs.sh, etc.) may run from remote
machines and may use older Kafka versions. In this case, the current
proposal prints the older version.

On Tue, May 1, 2018 at 7:47 PM, Colin McCabe 
wrote:

Thanks, Sasaki.

Colin

On Sat, Apr 28, 2018, at 00:55, Sasaki Toru wrote:


Hi Colin, Jason,

Thank you for your helpful comments.
I have updated my Pull Request to show the git commit hash in the version
information. In my current Pull Request, we can get a result such as the one
below:

 $ bin/kafka-topics.sh --version
 (snip)
 2.0.0-SNAPSHOT (Commit:f3876cd9617faf7e)


I have also updated it to accept a double-dash for this option (--version)
only.

Many thanks,
Sasaki
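
The version string above is produced from the client utility class
AppInfoParser mentioned later in this thread; a tiny illustrative usage
(not the PR's actual code):

  import org.apache.kafka.common.utils.AppInfoParser;

  public class VersionPrinter {
      public static void main(String[] args) {
          // getVersion()/getCommitId() read the values baked into the jar.
          System.out.println(AppInfoParser.getVersion()
                  + " (Commit:" + AppInfoParser.getCommitId() + ")");
      }
  }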

From: Jason Gustafson 

Date: 2018-04-25 9:42 GMT+09:00
Subject: Re: [DISCUSS] KIP-278: Add version option to Kafka's
commands> > To: dev 


+1 on adding the git commit id to the output. We often encounter
environments which are testing off of trunk or have modifications on
top of an existing release.

-Jason

On Tue, Apr 24, 2018 at 10:06 AM, Colin McCabe wrote:


On Tue, Apr 24, 2018, at 05:36, Sasaki Toru wrote:


Hi Jason, Colin,

Thank you for your comment, and I'm sorry for the late reply.

> we refactored all of the tools so that we could use a common set of
> base options, it might be a little annoying to have to continue
> supporting both variations.

I feel KIP-14 is a good idea (I'm sorry, I didn't know about it), and I
think it's no longer necessary for both single-dash and double-dash to be
supported when this KIP is accepted, as you said.
I revised my Pull Request to support the single-dash option only.

Some of my code in kafka-run-class.sh will become unnecessary when KIP-14
is accepted. But I will keep it temporarily, because it seems useful
before KIP-14 is accepted.

> Can you give a little more detail about what would be displayed when
> the version command was used?

As Ismael said, the version string is obtained from
AppInfoParser#getVersion.
In my Pull Request, we can get a result such as below:

  $ bin/kafka-topics.sh --version
  (snip)
  Kafka 1.2.0-SNAPSHOT


Hi Sasaki,

Thanks for the info.  Can you add this to the KIP?

Also, I really think we should include the git hash.

best,
Colin


Many thanks,

Sasaki


From: Ismael Juma 

Date: 2018-04-24 3:44 GMT+09:00
Subject: Re: [DISCUSS] KIP-278: Add 

[VOTE] KIP-278: Add version option to Kafka's commands

2018-05-10 Thread Sasaki Toru

Hi all,

I would like to start the vote on KIP-278: Add version option to Kafka's 
commands.

The link to this KIP is here:


The discussion thread is here:



Many thanks,
Sasaki

--
Sasaki Toru(sasaki...@oss.nttdata.com) NTT DATA CORPORATION



Re: [VOTE] KIP-281: ConsumerPerformance: Increase Polling Loop Timeout and Make It Reachable by the End User

2018-05-10 Thread Ismael Juma
Thanks for the KIP, +1 (binding). A few suggestions:

1. We normally include the time unit in configs. Not sure if we do it for
command line parameters though, so can we please verify and make it
consistent?
2. The KIP mentions --polling-loop-timeout and --timeout. Which is it?
3. Can we include the description of the new parameter in the KIP? In the
PR it says "Consumer polling loop timeout", but I think this is a bit
unclear. What are we actually measuring here?

Ismael
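
For context on question 3, the timeout under discussion bounds how long the
tool's consume loop keeps polling without receiving any records — roughly
(an illustrative sketch, not ConsumerPerformance's actual code):

  // assumes a subscribed KafkaConsumer<byte[], byte[]> consumer
  long timeoutMs = 10_000L;                      // what the new flag would control
  long lastRecordReceivedMs = System.currentTimeMillis();
  while (messagesRead < totalMessagesToConsume) {
      ConsumerRecords<byte[], byte[]> records = consumer.poll(100);
      if (!records.isEmpty())
          lastRecordReceivedMs = System.currentTimeMillis();
      messagesRead += records.count();
      // Give up once nothing has arrived for a full timeout window.
      if (System.currentTimeMillis() - lastRecordReceivedMs > timeoutMs)
          break;
  }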

On Mon, Apr 16, 2018 at 2:25 PM Alex Dunayevsky 
wrote:

> Hello friends,
>
> Let's start the vote for KIP-281: ConsumerPerformance: Increase Polling
> Loop Timeout and Make It Reachable by the End User:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-281%3A+ConsumerPerformance%3A+Increase+Polling+Loop+Timeout+and+Make+It+Reachable+by+the+End+User
>
> Thank you,
> Alexander Dunayevsky
>


Re: [VOTE] KIP-294 - Enable TLS hostname verification by default

2018-05-10 Thread Ismael Juma
Thanks for the KIP, +1 (binding) from me.

Ismael

On Wed, May 9, 2018 at 8:29 AM Rajini Sivaram 
wrote:

> Hi all,
>
> Since there have been no objections on this straightforward KIP, I would
> like to initiate the voting process. KIP-294 proposes to use a secure
> default value for endpoint identification when using SSL as the security
> protocol. The KIP Is here:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-294+-+Enable+TLS+hostname+verification+by+default
>
> If there are any concerns, please add them to this thread or the discussion
> thread (https://www.mail-archive.com/dev@kafka.apache.org/msg87549.html)
>
> Regards,
>
> Rajini
>


Re: [VOTE] KIP-294 - Enable TLS hostname verification by default

2018-05-10 Thread Jun Rao
Hi, Rajini,

Thanks for the KIP. +1

Could you document in the wiki how to
set ssl.endpoint.identification.algorithm to empty in the server property
file and through dynamic config? It's not obvious how to do that.

Jun
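
For the static case, the properties-file convention is an empty right-hand
side (a sketch; the dynamic-config invocation is exactly the part worth
spelling out in the wiki, as Jun notes):

  # server.properties: an explicitly empty value disables endpoint identification
  ssl.endpoint.identification.algorithm=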

On Wed, May 9, 2018 at 8:28 AM, Rajini Sivaram 
wrote:

> Hi all,
>
> Since there have been no objections on this straightforward KIP, I would
> like to initiate the voting process. KIP-294 proposes to use a secure
> default value for endpoint identification when using SSL as the security
> protocol. The KIP Is here:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 294+-+Enable+TLS+hostname+verification+by+default
>
> If there are any concerns, please add them to this thread or the discussion
> thread (https://www.mail-archive.com/dev@kafka.apache.org/msg87549.html)
>
> Regards,
>
> Rajini
>


Re: Request to create KIP

2018-05-10 Thread Matthias J. Sax
What is your Wiki ID ?

-Matthias

On 5/10/18 3:36 AM, Anurag Jain wrote:
> Hi,
> 
> I would like to create KIP for
> *https://issues.apache.org/jira/browse/KAFKA-2200
> *.
> 
> Can you please give me permission to create KIP for this?
> 
> Thanks,
> Anurag
> 



signature.asc
Description: OpenPGP digital signature


[jira] [Resolved] (KAFKA-6894) Improve error message when connecting processor with a global store

2018-05-10 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-6894.
--
   Resolution: Fixed
Fix Version/s: 2.0.0

> Improve error message when connecting processor with a global store
> ---
>
> Key: KAFKA-6894
> URL: https://issues.apache.org/jira/browse/KAFKA-6894
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.1.0
>Reporter: Robert Yokota
>Assignee: Robert Yokota
>Priority: Minor
> Fix For: 2.0.0
>
>
> I was trying to access a store from a {{GlobalKTable}} in 
> {{KStream.transform()}}, but I got the following error:
> {code}
> org.apache.kafka.streams.errors.TopologyException: Invalid topology: 
> StateStore globalStore is not added yet.
>   at 
> org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.connectProcessorAndStateStore(InternalTopologyBuilder.java:716)
>   at 
> org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.connectProcessorAndStateStores(InternalTopologyBuilder.java:615)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamImpl.transform(KStreamImpl.java:521)
> {code}
> I have submitted a PR to improve the error message.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk8 #2637

2018-05-10 Thread Apache Jenkins Server
See 




Request to create KIP

2018-05-10 Thread Anurag Jain
Hi,

I would like to create KIP for
*https://issues.apache.org/jira/browse/KAFKA-2200
*.

Can you please give me permission to create KIP for this?

Thanks,
Anurag


[jira] [Created] (KAFKA-6895) Schema Inferencing for JsonConverter

2018-05-10 Thread Allen Tang (JIRA)
Allen Tang created KAFKA-6895:
-

 Summary: Schema Inferencing for JsonConverter
 Key: KAFKA-6895
 URL: https://issues.apache.org/jira/browse/KAFKA-6895
 Project: Kafka
  Issue Type: New Feature
  Components: KafkaConnect
Reporter: Allen Tang


Though there does exist a converter in the connect-json library called 
"JsonConverter", there are limitations as to the domain of JSON payloads this 
converter is compatible with on the Sink Connector side when serializing them 
into Kafka Connect datatypes. When reading byte arrays from Kafka, the 
JsonConverter expects its inputs to be a JSON envelope that contains the fields 
"schema" and "payload"; otherwise it throws a DataException reporting:

"JsonConverter with schemas.enable requires 'schema' and 'payload' fields and 
may not contain additional fields. If you are trying to deserialize plain JSON 
data, set schemas.enable=false in your converter configuration."
(when schemas.enable is true) or
"JSON value converted to Kafka Connect must be in envelope containing schema"
(when schemas.enable is false)

For example, if your JSON payload looks something like:

{ "c1": 1, "c2": "bar", "create_ts": 1501834166000, "update_ts": 1501834166000 }

it will not be compatible with Sink Connectors that require the schema for 
data ingest when mapping from Kafka Connect datatypes to, for example, JDBC 
datatypes. Rather, the data is expected to be structured like so:

{ "schema": { "type": "struct", "fields": [{ "type": "int32", "optional": 
true, "field": "c1" }, { "type": "string", "optional": true, "field": "c2" }, 
{ "type": "int64", "optional": false, "name": 
"org.apache.kafka.connect.data.Timestamp", "version": 1, "field": "create_ts" 
}, { "type": "int64", "optional": false, "name": 
"org.apache.kafka.connect.data.Timestamp", "version": 1, "field": "update_ts" 
}], "optional": false, "name": "foobar" }, "payload": { "c1": 1, "c2": 
"bar", "create_ts": 1501834166000, "update_ts": 1501834166000 } }


The "schema" is a necessary component in order to dictate to the JsonConverter 
how to map the payload's JSON datatypes to Kafka Connect datatypes on the 
consumer side.

 

Introduce a new configuration for the JsonConverter class called 
"schemas.infer.enable". When this flag is set to "false", the existing behavior 
is exhibited. When it's set to "true", infer the schema from the contents of 
the JSON record, and return that as part of the SchemaAndValue object for Sink 
Connectors.
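
A sketch of how this would look in a connector configuration 
(schemas.infer.enable is the flag proposed here, not an existing one):

{code}
value.converter=org.apache.kafka.connect.json.JsonConverter
# existing flag: without an envelope this must be false today
value.converter.schemas.enable=false
# proposed flag: infer a schema from the plain JSON payload instead
value.converter.schemas.infer.enable=true
{code}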



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-294 - Enable TLS hostname verification by default

2018-05-10 Thread Jakub Scholz
+1 (non-binding)

On Thu, May 10, 2018 at 11:24 AM, Edoardo Comar  wrote:

> +1 (non-binding)
>
> On 10 May 2018 at 09:36, Manikumar  wrote:
>
> > +1 (non-binding)
> >
> > Thanks.
> >
> > On Wed, May 9, 2018 at 10:09 PM, Mickael Maison <
> mickael.mai...@gmail.com>
> > wrote:
> >
> > > +1, thanks for the KIP!
> > >
> > > On Wed, May 9, 2018 at 4:41 PM, Ted Yu  wrote:
> > > > +1
> > > >
> > > > On Wed, May 9, 2018 at 8:28 AM, Rajini Sivaram <
> > rajinisiva...@gmail.com>
> > > > wrote:
> > > >
> > > >> Hi all,
> > > >>
> > > >> Since there have been no objections on this straightforward KIP, I
> > would
> > > >> like to initiate the voting process. KIP-294 proposes to use a
> secure
> > > >> default value for endpoint identification when using SSL as the
> > security
> > > >> protocol. The KIP Is here:
> > > >>
> > > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > >> 294+-+Enable+TLS+hostname+verification+by+default
> > > >>
> > > >> If there are any concerns, please add them to this thread or the
> > > discussion
> > > >> thread (https://www.mail-archive.com/dev@kafka.apache.org/msg87549.
> > html
> > > )
> > > >>
> > > >> Regards,
> > > >>
> > > >> Rajini
> > > >>
> > >
> >
>
>
>
> --
> "When the people fear their government, there is tyranny; when the
> government fears the people, there is liberty." [Thomas Jefferson]
>


[jira] [Resolved] (KAFKA-6893) Processors created after acceptor started which can cause a brief refusal to accept connections

2018-05-10 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-6893.

   Resolution: Fixed
Fix Version/s: 1.1.1

> Processors created after acceptor started which can cause a brief refusal 
> to accept connections 
> ---
>
> Key: KAFKA-6893
> URL: https://issues.apache.org/jira/browse/KAFKA-6893
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Ryan P
>Assignee: Rajini Sivaram
>Priority: Minor
> Fix For: 1.1.1
>
>
> The acceptor starts before the processor threads are actually created, 
> which creates a very small window in which connections to the broker will 
> fail. When the acceptor takes on a new connection, an illegal arithmetic 
> exception is thrown as shown below. 
>  
> Exception: / by zero
>  at kafka.network.Acceptor.run(SocketServer.scala:391)
>  at java.lang.Thread.run(Thread.java:748)
> [2018-05-09 15:34:05,153] ERROR Error while accepting connection 
> (kafka.network.Acceptor)
> java.lang.ArithmeticException: / by zero
>  at kafka.network.Acceptor.run(SocketServer.scala:391)
>  at java.lang.Thread.run(Thread.java:748)
> [2018-05-09 15:34:05,153] ERROR Error while accepting connection 
> (kafka.network.Acceptor)
> java.lang.ArithmeticException: / by zero
>  at kafka.network.Acceptor.run(SocketServer.scala:391)
>  at java.lang.Thread.run(Thread.java:748)
> [2018-05-09 15:34:05,153] INFO Awaiting socket connections on 
> lontkfk02.mwam.local:9093. (kafka.network.Acceptor)
> [2018-05-09 15:34:05,268] INFO [SocketServer brokerId=2] Started 2 acceptor 
> threads (kafka.network.SocketServer)
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk7 #3417

2018-05-10 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-6894) Cannot access GlobalKTable store from KStream.transform()

2018-05-10 Thread Robert Yokota (JIRA)
Robert Yokota created KAFKA-6894:


 Summary: Cannot access GlobalKTable store from KStream.transform()
 Key: KAFKA-6894
 URL: https://issues.apache.org/jira/browse/KAFKA-6894
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 1.1.0
Reporter: Robert Yokota
Assignee: Robert Yokota


I was trying to access a store from a {{GlobalKTable}} in 
{{KStream.transform()}}, but I got the following error:

{code}
org.apache.kafka.streams.errors.TopologyException: Invalid topology: StateStore 
globalStore is not added yet.
at 
org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.connectProcessorAndStateStore(InternalTopologyBuilder.java:716)
at 
org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.connectProcessorAndStateStores(InternalTopologyBuilder.java:615)
at 
org.apache.kafka.streams.kstream.internals.KStreamImpl.transform(KStreamImpl.java:521)
{code}

I was able to make a change to 
{{InternalTopologyBuilder.connectProcessorAndState}} to allow me to access the 
global store from {{KStream.transform()}}.  I will submit a PR for review.
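
For reference, a sketch of the usage that triggers the error above 
(illustrative fragment against the 1.1 API; assumes a global store registered 
under the name "globalStore"):

{code:java}
StreamsBuilder builder = new StreamsBuilder();
builder.globalTable("global-input",
    Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("globalStore"));

KStream<String, String> stream = builder.stream("input");
// Passing the global store's name here is what raised the TopologyException.
stream.transform(() -> new Transformer<String, String, KeyValue<String, String>>() {
    private KeyValueStore<String, String> store;

    @SuppressWarnings("unchecked")
    @Override
    public void init(ProcessorContext context) {
        store = (KeyValueStore<String, String>) context.getStateStore("globalStore");
    }

    @Override
    public KeyValue<String, String> transform(String key, String value) {
        return KeyValue.pair(key, store.get(key)); // enrich from the global store
    }

    @Override
    public KeyValue<String, String> punctuate(long timestamp) {
        return null; // deprecated in 1.1; unused
    }

    @Override
    public void close() { }
}, "globalStore");
{code}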



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Can anyone take a look at this KIP and Jira?

2018-05-10 Thread Ted Yu
Since the change is internal to the SensorAccess class, it looks like a KIP
is not required.

On Thu, May 10, 2018 at 11:54 AM, qingjun wu  wrote:

> Dear Kafka Developers,
>
> I opened a KIP and also a Jira ticket related to this. Can you please take
> a look? It should be a simple change to Kafka, but it should improve
> performance a lot.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-NEXT%3A+Get+rid+of+
> unnecessary+read+lock
>
>
> https://issues.apache.org/jira/browse/KAFKA-6722
>
>
> --
> Best Regards
>  吴清俊|Wade Wu
>


[jira] [Created] (KAFKA-6893) Processors created after acceptor started which can cause a brief refusal to accept connections

2018-05-10 Thread Ryan P (JIRA)
Ryan P created KAFKA-6893:
-

 Summary: Processors created after acceptor started which can cause 
a brief refusal to accept connections 
 Key: KAFKA-6893
 URL: https://issues.apache.org/jira/browse/KAFKA-6893
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.1.0
Reporter: Ryan P
Assignee: Rajini Sivaram


The acceptor starts before the processor threads are actually created, which 
creates a very small window in which connections to the broker will fail. When 
the acceptor takes on a new connection, an illegal arithmetic exception is 
thrown as shown below. 

 

Exception: / by zero
 at kafka.network.Acceptor.run(SocketServer.scala:391)
 at java.lang.Thread.run(Thread.java:748)
[2018-05-09 15:34:05,153] ERROR Error while accepting connection 
(kafka.network.Acceptor)
java.lang.ArithmeticException: / by zero
 at kafka.network.Acceptor.run(SocketServer.scala:391)
 at java.lang.Thread.run(Thread.java:748)
[2018-05-09 15:34:05,153] ERROR Error while accepting connection 
(kafka.network.Acceptor)
java.lang.ArithmeticException: / by zero
 at kafka.network.Acceptor.run(SocketServer.scala:391)
 at java.lang.Thread.run(Thread.java:748)
[2018-05-09 15:34:05,153] INFO Awaiting socket connections on 
lontkfk02.mwam.local:9093. (kafka.network.Acceptor)
[2018-05-09 15:34:05,268] INFO [SocketServer brokerId=2] Started 2 acceptor 
threads (kafka.network.SocketServer)
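
The arithmetic failure mode is easy to see in isolation (an illustrative 
sketch, not the actual SocketServer code): the acceptor assigns each new 
connection round-robin across the processors, so a modulo over a zero-length, 
not-yet-populated list is exactly the "/ by zero" above.

{code:java}
import java.util.ArrayList;
import java.util.List;

public class AcceptorRaceSketch {
    public static void main(String[] args) {
        List<Runnable> processors = new ArrayList<>(); // not yet populated
        int acceptedSoFar = 7;
        // Round-robin choice; throws ArithmeticException: / by zero when empty.
        int index = acceptedSoFar % processors.size();
        System.out.println(processors.get(index));
    }
}
{code}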




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Can anyone take a look at this KIP and Jira?

2018-05-10 Thread qingjun wu
Dear Kafka Developers,

I opened a KIP and also a Jira ticket related to this. Can you please take
a look? It should be a simple change to Kafka, but it should improve
performance a lot.


https://cwiki.apache.org/confluence/display/KAFKA/KIP-NEXT%3A+Get+rid+of+unnecessary+read+lock


https://issues.apache.org/jira/browse/KAFKA-6722


-- 
Best Regards
 吴清俊|Wade Wu


[jira] [Resolved] (KAFKA-6141) Errors logs when running integration/kafka/tools/MirrorMakerIntegrationTest

2018-05-10 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6141.
--
Resolution: Fixed

Closing this as log level changed to debug in ZOOKEEPER-2795 / Zookeeper 3.4.12

> Errors logs when running integration/kafka/tools/MirrorMakerIntegrationTest
> ---
>
> Key: KAFKA-6141
> URL: https://issues.apache.org/jira/browse/KAFKA-6141
> Project: Kafka
>  Issue Type: Improvement
>  Components: unit tests
>Reporter: Pavel
>Priority: Trivial
>
> There are some error logs when running Tests extended from 
> ZooKeeperTestHarness, for example 
> integration/kafka/tools/MirrorMakerIntegrationTest:
> [2017-10-27 18:28:02,557] ERROR ZKShutdownHandler is not registered, so 
> ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
> changes (org.apache.zookeeper.server.ZooKeeperServer:472)
> [2017-10-27 18:28:09,110] ERROR ZKShutdownHandler is not registered, so 
> ZooKeeper server won't take any action on ERROR or SHUTDOWN server state 
> changes (org.apache.zookeeper.server.ZooKeeperServer:472)
> And these logs have no impact on test results. I think it would be great to 
> eliminate these logs from output by providing a ZKShutdownHandler.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk10 #96

2018-05-10 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #2636

2018-05-10 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Update dynamic broker configuration doc for truststore update

--
[...truncated 422.89 KB...]

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsNewConsumerExistingTopic STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsNewConsumerExistingTopic PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled 
STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldAllowThrottleOptionOnExecute STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldAllowThrottleOptionOnExecute PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithBrokers STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithBrokers PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithTopicsOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithTopicsOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumVerifyOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 

Re: [VOTE] KIP-255: OAuth Authentication via SASL/OAUTHBEARER

2018-05-10 Thread Ron Dagostino
Hi again, everyone.  Still looking for 2 more binding votes.  The PR is now
available at https://github.com/apache/kafka/pull/4994.

Ron
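
For anyone wanting to try the out-of-the-box unsecured path (point 3 below),
the client-side configuration looks roughly like this (a sketch; option names
as described in the KIP):

  security.protocol=SASL_SSL
  sasl.mechanism=OAUTHBEARER
  sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
      unsecuredLoginStringClaim_sub="alice";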

On Tue, May 8, 2018 at 9:45 AM, Ron Dagostino  wrote:

> Hi everyone.  Can we get 2 more binding votes on this KIP (and non-binding
> votes, too)?
>
> Ron
>
> On Fri, May 4, 2018 at 11:53 AM, Rajini Sivaram 
> wrote:
>
>> Hi Ron,
>>
>> +1 (binding)
>>
>> Thanks for the KIP!
>>
>> Regards,
>>
>> Rajini
>>
>> On Fri, May 4, 2018 at 4:55 AM, Ron Dagostino  wrote:
>>
>> > Hi everyone.  I would like to start the vote for KIP-255:
>> > https://cwiki.apache.org/confluence/pages/viewpage.action?
>> pageId=75968876
>> >
>> > This KIP proposes to add the following functionality related to
>> > SASL/OAUTHBEARER:
>> >
>> > 1) Allow clients (both brokers when SASL/OAUTHBEARER is the inter-broker
>> > protocol as well as non-broker clients) to flexibly retrieve an access
>> > token from an OAuth 2 authorization server based on the declaration of a
>> > custom login CallbackHandler implementation and have that access token
>> > transparently and automatically transmitted to a broker for
>> authentication.
>> >
>> > 2) Allow brokers to flexibly validate provided access tokens when a
>> client
>> > establishes a connection based on the declaration of a custom SASL
>> Server
>> > CallbackHandler implementation.
>> >
>> > 3) Provide implementations of the above retrieval and validation
>> features
>> > based on an unsecured JSON Web Token that function out-of-the-box with
>> > minimal configuration required (i.e. implementations of the two types of
>> > callback handlers mentioned above will be used by default with no need
>> to
>> > explicitly declare them).
>> >
>> > 4) Allow clients (both brokers when SASL/OAUTHBEARER is the inter-broker
>> > protocol as well as non-broker clients) to transparently retrieve a new
>> > access token in the background before the existing access token expires
>> in
>> > case the client has to open new connections.
>> >
>> > Thanks,
>> >
>> > Ron
>> >
>>
>
>


Re: [DISCUSS] KIP-290: Support for wildcard suffixed ACLs

2018-05-10 Thread Colin McCabe
Hi Andy,

The issue that I was trying to solve here is the Java API.  Right now, someone 
can write new ResourceFilter(ResourceType.TRANSACTIONAL_ID, "foo*") and have a 
ResourceFilter that applies to a transactional ID named "foo*".  This has to 
continue to work, or else we have broken compatibility.

I was proposing that there would be something like a new function like 
ResourceFilter.fromPattern(ResourceType.TRANSACTIONAL_ID, "foo*") which would 
create a ResourceFilter that applied to transactional IDs starting with "foo", 
rather than transactional IDs named "foo*" specifically.

I don't think it's important whether the Java class has an integer, an enum, or 
two string fields.  The important thing is that there's a new static function, 
or new constructor overload, etc. that works for patterns rather than literal 
strings.
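
Spelled out as code (fromPattern is the hypothetical addition under 
discussion, not an existing API):

    // Existing constructor: matches only the transactional ID literally named "foo*".
    ResourceFilter literal = new ResourceFilter(ResourceType.TRANSACTIONAL_ID, "foo*");

    // Proposed: matches every transactional ID starting with "foo".
    ResourceFilter prefixed = ResourceFilter.fromPattern(ResourceType.TRANSACTIONAL_ID, "foo*");

    // With escaping, the pattern form can still express the literal name "foo*".
    ResourceFilter escaped = ResourceFilter.fromPattern(ResourceType.TRANSACTIONAL_ID, "foo\\*");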

On Thu, May 10, 2018, at 03:30, Andy Coates wrote:
> Rather than having name and pattern fields on the ResourceFilter, where 
> it’s only valid for one to be set, and we want to restrict the character 
> set in case future enhancements need them, we could instead add a new 
> integer ‘nameType’ field, and use constants to indicate how the name 
> field should be interpreted, e.g. 0 = literal, 1 = wildcard. This would 
> be extendable, e.g. we can later add 2 = regex, or whatever, and 
> wouldn’t require any escaping.

This is very user-unfriendly, though.  Users don't want to have to explicitly 
supply a version number when using the API, which is what this would force them 
to do.  I don't think users are going to want to memorize that version 4 
supported "+", whereas version 3 only supported "[0-9]", or whatever.   

Just as an example, do you remember which versions of FetchRequest added which 
features?  I don't.  I always have to look at the code to remember.

Also, escaping is still required any time you overload a character to mean two 
things.  Escaping is required in the current proposal to be able to create a 
pattern that matches only "foo*".  You have to type "foo\*".  It would be 
required if we forced users to specify a version, as well.

best,
Colin

> 
> Sent from my iPhone
> 
> > On 7 May 2018, at 05:16, Piyush Vijay  wrote:
> > 
> > Makes sense. I'll update the KIP.
> > 
> > Does anyone have any other comments? :)
> > 
> > Thanks
> > 
> > 
> > Piyush Vijay
> > 
> >> On Thu, May 3, 2018 at 11:55 AM, Colin McCabe  wrote:
> >> 
> >> Yeah, I guess that's a good point.  It probably makes sense to support the
> >> prefix scheme for consumer groups and transactional IDs as well as topics.
> >> 
> >> I agree that the current situation where anything goes in consumer group
> >> names and transactional ID names is not ideal.  I wish we could rewind the
> >> clock and impose restrictions on the names.  However, it doesn't seem
> >> practical at the moment.  Adding new restrictions would break a lot of
> >> existing users after an upgrade.  It would be a really bad upgrade
> >> experience.
> >> 
> >> However, I think we can support this in a compatible way.  From the
> >> perspective of AdminClient, we just have to add a new field to
> >> ResourceFilter.  Currently, it has two fields, resourceType and name:
> >> 
> >>> /**
> >>> * A filter which matches Resource objects.
> >>> *
> >>> * The API for this class is still evolving and we may break
> >> compatibility in minor releases, if necessary.
> >>> */
> >>> @InterfaceStability.Evolving
> >>> public class ResourceFilter {
> >>>private final ResourceType resourceType;
> >>>private final String name;
> >> 
> >> We can add a third field, pattern.
> >> 
> >> So the API will basically be, if I create a 
> >> ResourceFilter(resourceType=GROUP,
> >> name=foo*, pattern=null), it applies only to the consumer group named
> >> "foo*".  If I create a ResourceFilter(resourceType=GROUP, name=null,
> >> pattern=foo*), it applies to any consumer group starting in "foo".  name
> >> and pattern cannot be both set at the same time.  This preserves
> >> compatibility at the AdminClient level.
> >> 
> >> It's possible that we will want to add more types of pattern in the
> >> future.  So we should reserve "special characters" such as +, /, &, %, #,
> >> $, etc.  These characters should be treated as special unless they are
> >> prefixed with a backslash to escape them.  This will allow us to add
> >> support for using these characters in the future without breaking
> >> compatibility.
> >> 
> >> At the protocol level, we need a new API version for CreateAclsRequest /
> >> DeleteAclsRequest.  The new API version will send all special characters
> >> over the wire escaped rather than directly.  (So there is no need for the
> >> equivalent of both "name" and "pattern"-- we translate name into a validly
> >> escaped pattern that matches only one thing, by adding escape characters as
> >> appropriate.)  The broker will validate the new API version and reject
> >> malformed or unsupported 

Re: KIP-244: Add Record Header support to Kafka Streams

2018-05-10 Thread Matthias J. Sax
Thanks Guozhang! Sounds good to me!

-Matthias

On 5/10/18 7:55 AM, Guozhang Wang wrote:
> Thanks for your thoughts Matthias. I think if we do want to bring KIP-244
> into 2.0 then we need to keep its scope small and well defined. For that
> I'm proposing:
> 
> 1. Make the inheritance implementation of headers consistent with what we
> had with other record context fields. I.e. pass through the record context
> in `context.forward()`. Note that within a processor node, users can
> already manipulate the Headers with the given APIs, so at the time of
> forwarding, the library can just copy whatever is left / updated to the
> next processor node.
> 
> 2. In the sink node, where a record is being sent to the Kafka topic, we
> should consider the following:
> 
> a. For sink topics, we will set the headers into the producer record.
> b. For repartition topics, we will set the headers into the producer record.
> c. For changelog topics, we will drop the headers in the produced record
> since they will not be used in restoration and not stored in the state
> store either.
> 
> 
> We can discuss extending the current protocol, how to enable users to
> override those rules, and how to expose them in the DSL layer in a future
> KIP.
> 
> 
> 
> Guozhang
> 
> 
> On Mon, May 7, 2018 at 5:49 PM, Matthias J. Sax 
> wrote:
> 
>> Guozhang,
>>
>> if you advocate to forward headers by default, it might be a better
>> default strategy to forward the headers for all operators (similar to
>> topic/partition/offset metadata). It's usually harder for users to
>> reason about different cases and thus I would prefer to have consistent
>> behavior, ie, only one default strategy instead of introducing different
>> cases.
>>
>> Btw: My argument about dropping headers by default only implies that
>> users need to copy the headers explicitly to the output records in their
>> code if they want to inspect them later -- it does not imply that
>> headers cannot be forwarded downstream. (Not sure if this was clear).
>>
>> I am also ok with copying by default though (for me, it's a 51/49
>> preference for dropping by default only).
>>
>>
>> -Matthias
>>
>> On 5/7/18 4:52 PM, Guozhang Wang wrote:
>>> Hi Matthias,
>>>
>>> My concern of setting `null` in all cases is that it would make headers
>> not
>>> very useful in KIP-244 then, because headers will only be available at
>> the
>>> source stream / table, but not in any of the following instances. In
>>> practice users may be more likely to look into the headers later in the
>>> pipeline. Personally I'd suggest we pass the headers for all stateless
>>> operators in DSL and everywhere in PAPI's context.forward(). For
>>> repartition topics and sink topics, we also set them in the produced
>>> records accordingly; for changelog topics, we do not set them since they
>>> are not going to be used anywhere in the store.
>>>
>>>
>>> Guozhang
>>>
>>>
>>> On Sun, May 6, 2018 at 9:03 PM, Matthias J. Sax 
>>> wrote:
>>>
 I agree, that we should not block this KIP if possible. Nevertheless, we
 should try to get a reasonable default strategy for inheriting the
 headers so we don't need to change it later on.

 Let's see what others think. I still tend slightly to set to `null` by
 default for all cases. If the default strategy is different for
 different operators as you suggest, it might be confusing to users.
 IMHO, the default behavior should be as simple as possible.


 -Matthias


 On 5/6/18 8:53 PM, Guozhang Wang wrote:
> Matthias, thanks for sharing your opinions on the inheritance protocol of
> the record context. I'm thinking maybe we should make this discussion a
> separate KIP by itself? If yes, then KIP-244's scope would be smaller, and
> within KIP-244 we can have a simple inheritance rule of setting it to
> null when 1) going through stateful operators and 2) sending to any
> topics.
>
>
> Guozhang
>
> On Sun, May 6, 2018 at 10:24 AM, Matthias J. Sax <
>> matth...@confluent.io>
> wrote:
>
>> Making the inheritance protocol a public contract seems reasonable to
 me.
>>
>> In the current implementation, all output records inherit the offset,
>> timestamp, topic, and partition metadata from the input record. We
>> already added an API to change the timestamp explicitly for the output
>> record though.
>>
>> I think it makes sense to keep the inheritance of offset, topic, and
>> partition. For headers, it's worth to discuss. I see arguments for two
>> strategies: (1) inherit by default, (2) set `null` by default.
>> Independent of the default behavior, we should add an API to set
>> headers
>> for output records explicitly though (similar to the "set timestamp
 API").
>>
>> From my point of view, timestamp/headers are a different
>> "class/category" of 

Re: KIP-244: Add Record Header support to Kafka Streams

2018-05-10 Thread Guozhang Wang
Thanks for your thoughts Matthias. I think if we do want to bring KIP-244
into 2.0 then we need to keep its scope small and well defined. For that
I'm proposing:

1. Make the inheritance implementation of headers consistent with what we
had with other record context fields. I.e. pass through the record context
in `context.forward()`. Note that within a processor node, users can
already manipulate the Headers with the given APIs, so at the time of
forwarding, the library can just copy whatever is left / updated to the
next processor node.

2. In the sink node, where a record is being sent to the Kafka topic, we
should consider the following:

a. For sink topics, we will set the headers into the producer record.
b. For repartition topics, we will set the headers into the producer record.
c. For changelog topics, we will drop the headers in the produced record
since they will not be used in restoration and not stored in the state
store either.


We can discuss extending the current protocol, how to enable users to
override those rules, and how to expose them in the DSL layer in a future
KIP.



Guozhang
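
As a sketch of what rule 1 means for PAPI users (illustrative; headers() is
the record-context accessor this KIP exposes on ProcessorContext):

    public class AuditProcessor extends AbstractProcessor<String, String> {
        @Override
        public void process(String key, String value) {
            // Mutate the headers carried in the record context...
            context().headers().add("audited-by", "my-app".getBytes());
            // ...and forward: the (updated) headers travel with the record.
            context().forward(key, value);
        }
    }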


On Mon, May 7, 2018 at 5:49 PM, Matthias J. Sax 
wrote:

> Guozhang,
>
> if you advocate to forward headers by default, it might be a better
> default strategy to forward the headers for all operators (similar to
> topic/partition/offset metadata). It's usually harder for users to
> reason about different cases and thus I would prefer to have consistent
> behavior, ie, only one default strategy instead of introducing different
> cases.
>
> Btw: My argument about dropping headers by default only implies that
> users need to copy the headers explicitly to the output records in their
> code if they want to inspect them later -- it does not imply that
> headers cannot be forwarded downstream. (Not sure if this was clear).
>
> I am also ok with copying by default though (for me, it's a 51/49
> preference for dropping by default only).
>
>
> -Matthias
>
> On 5/7/18 4:52 PM, Guozhang Wang wrote:
> > Hi Matthias,
> >
> > My concern of setting `null` in all cases is that it would make headers
> not
> > very useful in KIP-244 then, because headers will only be available at
> the
> > source stream / table, but not in any of the following instances. In
> > practice users may be more likely to look into the headers later in the
> > pipeline. Personally I'd suggest we pass the headers for all stateless
> > operators in DSL and everywhere in PAPI's context.forward(). For
> > repartition topics and sink topics, we also set them in the produced
> > records accordingly; for changelog topics, we do not set them since they
> > are not going to be used anywhere in the store.
> >
> >
> > Guozhang
> >
> >
> > On Sun, May 6, 2018 at 9:03 PM, Matthias J. Sax 
> > wrote:
> >
> >> I agree, that we should not block this KIP if possible. Nevertheless, we
> >> should try to get a reasonable default strategy for inheriting the
> >> headers so we don't need to change it later on.
> >>
> >> Let's see what others think. I still tend slightly to set to `null` by
> >> default for all cases. If the default strategy is different for
> >> different operators as you suggest, it might be confusing to users.
> >> IMHO, the default behavior should be as simple as possible.
> >>
> >>
> >> -Matthias
> >>
> >>
> >> On 5/6/18 8:53 PM, Guozhang Wang wrote:
> >>> Matthias, thanks for sharing your opinions on the inheritance protocol of
> >>> the record context. I'm thinking maybe we should make this discussion a
> >>> separate KIP by itself? If yes, then KIP-244's scope would be smaller, and
> >>> within KIP-244 we can have a simple inheritance rule of setting it to
> >>> null when 1) going through stateful operators and 2) sending to any
> >>> topics.
> >>>
> >>>
> >>> Guozhang
> >>>
> >>> On Sun, May 6, 2018 at 10:24 AM, Matthias J. Sax <
> matth...@confluent.io>
> >>> wrote:
> >>>
>  Making the inheritance protocol a public contract seems reasonable to
> >> me.
> 
>  In the current implementation, all output records inherit the offset,
>  timestamp, topic, and partition metadata from the input record. We
>  already added an API to change the timestamp explicitly for the output
>  record though.
> 
>  I think it makes sense to keep the inheritance of offset, topic, and
>  partition. For headers, it's worth to discuss. I see arguments for two
>  strategies: (1) inherit by default, (2) set `null` by default.
>  Independent of the default behavior, we should add an API to set
> headers
>  for output records explicitly though (similar to the "set timestamp
> >> API").
> 
>  From my point of view, timestamp/headers are a different
>  "class/category" of data/metadata than topic/partition/offset. For the
>  first category, it makes sense to manipulate them and it's more than
>  "plain metadata"; especially the timestamp. For 

[jira] [Resolved] (KAFKA-6825) DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG is private

2018-05-10 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6825.
--
   Resolution: Fixed
 Assignee: Guozhang Wang
Fix Version/s: 2.0.0

> DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG is private
> 
>
> Key: KAFKA-6825
> URL: https://issues.apache.org/jira/browse/KAFKA-6825
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.1.0
>Reporter: Anna O
>Assignee: Guozhang Wang
>Priority: Minor
> Fix For: 2.0.0
>
>
> The org.apache.kafka.streams.StreamsConfig 
> DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG constant is private.
> It should be public.
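For reference, the fix amounts to widening the constant's visibility; a sketch (the key string matches the definition in StreamsConfig, the handler class in the usage comment is illustrative):

{code:java}
// In org.apache.kafka.streams.StreamsConfig -- widen the visibility so that
// applications can reference the constant instead of hard-coding the key:
public static final String DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG =
        "default.production.exception.handler";

// Typical usage once public:
// props.put(StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG,
//           MyProductionExceptionHandler.class);
{code}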



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6828) Index files are no longer sparse in Java 9/10 due to OpenJDK regression

2018-05-10 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-6828.

Resolution: Fixed

Great! Let's close this then.

> Index files are no longer sparse in Java 9/10 due to OpenJDK regression
> ---
>
> Key: KAFKA-6828
> URL: https://issues.apache.org/jira/browse/KAFKA-6828
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 1.0.0
> Environment: CentosOS 7 on EXT4 FS
>Reporter: Enrico Olivelli
>Priority: Critical
>
> This is a very strange case. I have a Kafka broker (part of a cluster of 3 
> brokers) which cannot start after upgrading Java from Oracle JDK 8 to Oracle JDK 
> 9.0.4 (the same happens with JDK 10.0.0).
> There are a lot of .index and .timeindex files taking 10MB; they are for 
> empty partitions.
> Running with Java 9 the server seems to rebuild these files and each file 
> takes "really" 10MB. The sum of all the files (calculated using du -sh) is 
> 22GB, and the broker crashes during startup: the disk becomes full and no more 
> logs are written. (I can send an extract of the logs, but they only tell about 
> 'rebuilding index', the same as on Java 8.)
> Reverting the same broker to Java 8 and removing the index files, the broker 
> rebuilds such files, each file takes 10MB, but the full sum of sizes 
> (calculated using du -sh) is 38 MB!
> I am running this broker on CentOS 7 on an EXT4 FS.
> I have upgraded the broker to the latest and greatest Kafka 1.0.0 (from 0.10.2) 
> without any success.
> After checking on the JDK nio-dev list, it appears to be a regression in the 
> behaviour of RandomAccessFile.
> Just for reference, see this discussion on the nio-dev list on OpenJDK:
> [http://mail.openjdk.java.net/pipermail/nio-dev/2018-April/005008.html]
> and see
> [https://bugs.openjdk.java.net/browse/JDK-8168628]
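For readers unfamiliar with the mechanism: the broker preallocates each index file, relying on the filesystem to keep it sparse. A minimal way to observe the difference (a sketch; the file name is illustrative, and the preallocation call mirrors what the broker roughly does):

{code:java}
import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch: preallocate a 10MB index file. On a JDK without the regression,
// `ls -l` reports 10MB while `du` reports ~0 because the file stays sparse;
// with the regression the blocks are actually allocated on disk.
public class SparseIndexSketch {
    public static void main(final String[] args) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile("00000000.index", "rw")) {
            raf.setLength(10 * 1024 * 1024);
        }
    }
}
{code}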



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6870) Concurrency conflicts in SampledStat

2018-05-10 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-6870.
---
Resolution: Fixed
  Reviewer: Rajini Sivaram

Thanks for the reviews. [~rsivaram]

> Concurrency conflicts in SampledStat
> 
>
> Key: KAFKA-6870
> URL: https://issues.apache.org/jira/browse/KAFKA-6870
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
> Fix For: 2.0.0, 1.1.1
>
>
> The samples stored in SampledStat are not thread-safe. However, the 
> ReplicaFetcherThreads used to handle replication from specified brokers may update 
> (when the samples list is empty, we will add a new sample to it) and iterate the 
> samples concurrently, and then cause a ConcurrentModificationException.
> {code:java}
> [2018-05-03 13:50:56,087] ERROR [ReplicaFetcher replicaId=106, leaderId=100, 
> fetcherId=0] Error due to (kafka.server.ReplicaFetcherThread:76)
> java.util.ConcurrentModificationException
> at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:909)
> at java.util.ArrayList$Itr.next(ArrayList.java:859)
> at 
> org.apache.kafka.common.metrics.stats.Rate$SampledTotal.combine(Rate.java:132)
> at 
> org.apache.kafka.common.metrics.stats.SampledStat.measure(SampledStat.java:78)
> at org.apache.kafka.common.metrics.stats.Rate.measure(Rate.java:66)
> at 
> org.apache.kafka.common.metrics.KafkaMetric.measurableValue(KafkaMetric.java:85)
> at org.apache.kafka.common.metrics.Sensor.checkQuotas(Sensor.java:201)
> at org.apache.kafka.common.metrics.Sensor.checkQuotas(Sensor.java:192)
> at 
> kafka.server.ReplicationQuotaManager.isQuotaExceeded(ReplicationQuotaManager.scala:104)
> at 
> kafka.server.ReplicaFetcherThread.kafka$server$ReplicaFetcherThread$$shouldFollowerThrottle(ReplicaFetcherThread.scala:384)
> at 
> kafka.server.ReplicaFetcherThread$$anonfun$buildFetchRequest$1.apply(ReplicaFetcherThread.scala:263)
> at 
> kafka.server.ReplicaFetcherThread$$anonfun$buildFetchRequest$1.apply(ReplicaFetcherThread.scala:261)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at 
> kafka.server.ReplicaFetcherThread.buildFetchRequest(ReplicaFetcherThread.scala:261)
> at 
> kafka.server.AbstractFetcherThread$$anonfun$2.apply(AbstractFetcherThread.scala:102)
> at 
> kafka.server.AbstractFetcherThread$$anonfun$2.apply(AbstractFetcherThread.scala:101)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
> at 
> kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:101)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
> {code}
> Before 
> [https://github.com/apache/kafka/commit/d734f4e56d276f84b8c52b602edd67d41cbb6c35]
> the ConcurrentModificationException didn't exist, since all changes to 
> samples were appends. Using get(index) avoids the 
> ConcurrentModificationException.
> In short, we can just make samples thread-safe, or replace the foreach 
> loop with get(index) if we have concerns about the performance of a thread-safe 
> list...
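A compact illustration of the two iteration strategies mentioned (the Sample type here is a stand-in, not the real SampledStat internals):

{code:java}
import java.util.List;

class SampledStatSketch {
    static class Sample { double value; }

    // Fail-fast: throws ConcurrentModificationException if a fetcher thread
    // appends a sample while we iterate.
    static double combineWithForeach(final List<Sample> samples) {
        double total = 0;
        for (final Sample s : samples)
            total += s.value;
        return total;
    }

    // Index-based: sidesteps the fail-fast check (concurrent changes only
    // append to the tail), though a genuinely thread-safe list is the more
    // robust fix.
    static double combineWithIndex(final List<Sample> samples) {
        double total = 0;
        for (int i = 0; i < samples.size(); i++)
            total += samples.get(i).value;
        return total;
    }
}
{code}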



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk10 #95

2018-05-10 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-6870 Concurrency conflicts in SampledStat (#4985)

--
[...truncated 1.50 MB...]
kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > testNoOpAction STARTED

kafka.network.SocketServerTest > testNoOpAction PASSED

kafka.network.SocketServerTest > simpleRequest STARTED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > closingChannelException STARTED

kafka.network.SocketServerTest > closingChannelException PASSED

kafka.network.SocketServerTest > testIdleConnection STARTED

kafka.network.SocketServerTest > testIdleConnection PASSED

kafka.network.SocketServerTest > 
testClientDisconnectionWithStagedReceivesFullyProcessed STARTED

kafka.network.SocketServerTest > 
testClientDisconnectionWithStagedReceivesFullyProcessed PASSED

kafka.network.SocketServerTest > testZeroMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testZeroMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown STARTED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown PASSED

kafka.network.SocketServerTest > testSessionPrincipal STARTED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > configureNewConnectionException STARTED

kafka.network.SocketServerTest > configureNewConnectionException PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides PASSED

kafka.network.SocketServerTest > processNewResponseException STARTED

kafka.network.SocketServerTest > processNewResponseException PASSED

kafka.network.SocketServerTest > processCompletedSendException STARTED

kafka.network.SocketServerTest > processCompletedSendException PASSED

kafka.network.SocketServerTest > processDisconnectedException STARTED

kafka.network.SocketServerTest > processDisconnectedException PASSED

kafka.network.SocketServerTest > sendCancelledKeyException STARTED

kafka.network.SocketServerTest > sendCancelledKeyException PASSED

kafka.network.SocketServerTest > processCompletedReceiveException STARTED

kafka.network.SocketServerTest > processCompletedReceiveException PASSED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown STARTED

kafka.network.SocketServerTest > testSocketsCloseOnShutdown PASSED

kafka.network.SocketServerTest > pollException STARTED

kafka.network.SocketServerTest > pollException PASSED

kafka.network.SocketServerTest > testSslSocketServer STARTED

kafka.network.SocketServerTest > testSslSocketServer PASSED

kafka.network.SocketServerTest > tooBigRequestIsRejected STARTED

kafka.network.SocketServerTest > tooBigRequestIsRejected PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 

[jira] [Created] (KAFKA-6892) Kafka Streams memory usage grows

2018-05-10 Thread Dawid Kulig (JIRA)
Dawid Kulig created KAFKA-6892:
--

 Summary: Kafka Streams memory usage grows 
 Key: KAFKA-6892
 URL: https://issues.apache.org/jira/browse/KAFKA-6892
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 1.1.0
Reporter: Dawid Kulig
 Attachments: kafka-streams-per-pod-resources-usage.png

Hi. I am observing unbounded memory growth in my kafka-streams application. It 
gets killed by the OS when reaching the memory limit (10GB). 

It's running two unrelated pipelines (read from 4 source topics with 100 
partitions each, aggregate the data, and write to two destination topics). 

My environment: 
 * Kubernetes cluster
 * 4 app instances
 * 10GB memory limit per pod (instance)
 * JRE 8

JVM:
 * -Xms2g
 * -Xmx4g
 * num.stream.threads = 4

 

When my app has been running for 24 hours it reaches the 10GB memory limit. Heap and GC 
look good; non-heap avg memory usage is 120MB. I've read it might be related 
to the RocksDB that works underneath the streams app, so I tried to tune it 
using the [confluent 
doc|https://docs.confluent.io/current/streams/developer-guide/config-streams.html#streams-developer-guide-rocksdb-config],
 however with no luck. 

RocksDB config #1:
{code:java}
tableConfig.setBlockCacheSize(16 * 1024 * 1024L); // 16 MB block cache
tableConfig.setBlockSize(16 * 1024L);             // 16 KB data blocks
tableConfig.setCacheIndexAndFilterBlocks(true);   // charge index/filter blocks to the cache
options.setTableFormatConfig(tableConfig);
options.setMaxWriteBufferNumber(2);               // at most two memtables per store{code}
RocksDB config #2 
{code:java}
tableConfig.setBlockCacheSize(1024 * 1024L);      // shrink the block cache to 1 MB
tableConfig.setBlockSize(16 * 1024L);
tableConfig.setCacheIndexAndFilterBlocks(true);
options.setTableFormatConfig(tableConfig);
options.setMaxWriteBufferNumber(2);
options.setWriteBufferSize(8 * 1024L);            // 8 KB memtable write buffer{code}
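(For context, snippets like these are applied through Streams' RocksDBConfigSetter. A minimal sketch of the wiring, with an illustrative class name:)

{code:java}
import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Options;

// Sketch: how per-store RocksDB options are typically supplied.
public class BoundedMemoryConfigSetter implements RocksDBConfigSetter {
    @Override
    public void setConfig(final String storeName, final Options options,
                          final Map<String, Object> configs) {
        final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        tableConfig.setBlockCacheSize(16 * 1024 * 1024L);
        tableConfig.setBlockSize(16 * 1024L);
        tableConfig.setCacheIndexAndFilterBlocks(true);
        options.setTableFormatConfig(tableConfig);
        options.setMaxWriteBufferNumber(2);
    }
}
// Registered via:
// props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG,
//           BoundedMemoryConfigSetter.class);
{code}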
 

This behavior has only been observed with our production traffic, where the 
per-topic input message rate is 10 msg/sec. I am attaching cluster resource usage 
from the last 24h.

Any help or advice would be much appreciated. 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-235 Add DNS alias support for secured connection

2018-05-10 Thread Rajini Sivaram
Thanks Jonathan. You have binding votes from me and Gwen. One more binding
vote is required for this KIP to be approved.

On Thu, May 10, 2018 at 1:14 PM, Skrzypek, Jonathan <
jonathan.skrzy...@gs.com> wrote:

> Hi,
>
> Have implemented the changes discussed.
> bootstrap.reverse.dns.lookup is disabled by default.
> When enabled, the client will perform reverse dns lookup regardless of the
> security protocol used.
>
> https://github.com/apache/kafka/pull/4485
>
>
> Jonathan Skrzypek
>
>
> -Original Message-
> From: Skrzypek, Jonathan [Tech]
> Sent: 01 May 2018 17:17
> To: dev
> Subject: RE: [VOTE] KIP-235 Add DNS alias support for secured connection
>
> Oops, yes indeed that makes sense, got confused between SASL_SSL and SSL.
>
> Updated the KIP.
>
>
>
> Jonathan Skrzypek
>
>
> -Original Message-
> From: Rajini Sivaram [mailto:rajinisiva...@gmail.com]
> Sent: 01 May 2018 11:08
> To: dev
> Subject: Re: [VOTE] KIP-235 Add DNS alias support for secured connection
>
> Jonathan,
>
> Not doing the reverse lookup for SASL_SSL limits the usability of this KIP
> since it can no longer be used in a secure environment where Kerberos is
> used with TLS. Perhaps the best option is to do the lookup if the option is
> explicitly enabled regardless of what the security protocol is. If there is
> an SSL handshake failure with this option enabled, the error message can be
> updated to indicate that it could be because a reverse lookup was used. Can
> you state in the KIP that the default value of
> bootstrap.reverse.dns.lookup will
> be false, and hence that there is no backwards compatibility issue?
>
> On Mon, Apr 30, 2018 at 1:41 PM, Skrzypek, Jonathan <
> jonathan.skrzy...@gs.com> wrote:
>
> > Thanks for your comments.
> > Have updated the KIP.
> >
> > I agree SSL and SASL_SSL will face similar issues and should behave the
> > same.
> > Thinking about this further,  I'm wondering whether setting
> > bootstrap.reverse.dns.lookup to true whilst using any of those protocols
> > should throw a critical error and stop, or at least log a warning stating
> > that the lookup won't be performed.
> > This sounds better than silently ignoring it and leaving users with the
> > impression that they can use SSL and bootstrap server aliases.
> > Abruptly stopping the client sounds a bit extreme so I'm leaning towards
> a
> > warning.
> >
> > Thoughts?
> >
> > I'm not sure about checking whether the list has IP addresses.
> > There could be cases where the list has a mix of FQDNs and IPs, so I
> > would rather perform the lookup in all cases when the parameter is
> > enabled.
> >
> > On the security aspects, I am by no means a security or SASL expert so
> > commented the KIP with what I believe to be the case.
> >
> > Jonathan Skrzypek
> >
> > -Original Message-
> > From: Rajini Sivaram [mailto:rajinisiva...@gmail.com]
> > Sent: 29 April 2018 15:38
> > To: dev
> > Subject: Re: [VOTE] KIP-235 Add DNS alias support for secured connection
> >
> > Hi Jonathan,
> >
> > Thanks for the KIP.
> >
> > +1 (binding) with a couple comments below to add more detail to the KIP.
> >
> >1. Make it clearer when the new option `bootstrap.reverse.dns.lookup`
> >should or shouldn't be used. Document security considerations as well
> as
> >other system configurations that may have an impact.
> >2. The PR currently disables the new code path for security protocol
> >SSL. But this doesn't address SASL_SSL which could also do hostname
> >verification. Do we even want to do reverse lookup if bootstrap list
> >contains IP addresses? If we do, we should handle SSL and SASL_SSL in
> > the
> >same way (which basically means handling all protocols in the same
> way).
> >
> >
> > On Thu, Apr 26, 2018 at 2:16 PM, Stephane Maarek <
> > steph...@simplemachines.com.au> wrote:
> >
> > > +1 as a user
> > > BUT
> > >
> > > I am no security expert. I have experienced that issue while setting up a
> > > cluster, and while I would have liked a feature like that (I opened a JIRA
> > > at the time), I always guessed that the reason was some security
> > > protection.
> > >
> > > Now from a setup point of view this helps a ton, but I really want to
> > make
> > > sure this doesn't introduce any security risk by relaxing a constraint.
> > >
> > > Is there a security assessment possible by someone accredited?
> > >
> > > Sorry for raising these questions; I just want to make sure it's addressed
> > >
> > > On Thu., 26 Apr. 2018, 5:32 pm Gwen Shapira, 
> wrote:
> > >
> > > > +1 (binding)
> > > >
> > > > This KIP is quite vital to running secured clusters in cloud/container
> > > > environments. Would love to see more support from the community on this
> > > > (or feedback...)
> > > >
> > > > Gwen
> > > >
> > > > On Mon, Apr 16, 2018 at 4:52 PM, Skrzypek, Jonathan <
> > > > jonathan.skrzy...@gs.com> wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > Could anyone take a look ?
> > > > > Does 

Build failed in Jenkins: kafka-trunk-jdk8 #2635

2018-05-10 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-6870 Concurrency conflicts in SampledStat (#4985)

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H24 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 4f7c11a1df26836c7a15f062a5431adb3d371a86 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 4f7c11a1df26836c7a15f062a5431adb3d371a86
Commit message: "KAFKA-6870 Concurrency conflicts in SampledStat (#4985)"
 > git rev-list --no-walk 9679c44d2b521b5c627e7bde375c0883f5857e0c # timeout=10
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/jenkins674739647489873.sh
+ rm -rf 
+ /home/jenkins/tools/gradle/4.4/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/4.4.1/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing

FAILURE: Build failed with an exception.

* What went wrong:
Could not create service of type ScriptPluginFactory using 
BuildScopeServices.createScriptPluginFactory().
> Could not create service of type FileHasher using 
> BuildSessionScopeServices.createFileSnapshotter().

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

BUILD FAILED in 2s
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=4f7c11a1df26836c7a15f062a5431adb3d371a86, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #2633
Recording test results
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
Not sending mail to unregistered user wangg...@gmail.com
Not sending mail to unregistered user git...@alasdairhodge.co.uk
Not sending mail to unregistered user rajinisiva...@googlemail.com


RE: [VOTE] KIP-235 Add DNS alias support for secured connection

2018-05-10 Thread Skrzypek, Jonathan
Hi,

Have implemented the changes discussed.
bootstrap.reverse.dns.lookup is disabled by default.
When enabled, the client will perform reverse dns lookup regardless of the 
security protocol used.

https://github.com/apache/kafka/pull/4485
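To make the behaviour concrete, client configuration would look roughly like this (the key name is from the KIP; the alias, port, and protocol values are illustrative):

{code:java}
import java.util.Properties;

// Sketch: opting in to reverse DNS lookup of the bootstrap addresses.
class Kip235Sketch {
    static Properties clientProps() {
        final Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-alias.example.com:9093"); // DNS alias
        props.put("security.protocol", "SASL_SSL"); // e.g. Kerberos with TLS
        props.put("bootstrap.reverse.dns.lookup", "true"); // proposed; default false
        return props;
    }
}
{code}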


Jonathan Skrzypek 


-Original Message-
From: Skrzypek, Jonathan [Tech] 
Sent: 01 May 2018 17:17
To: dev
Subject: RE: [VOTE] KIP-235 Add DNS alias support for secured connection

Oops, yes indeed that makes sense, got confused between SASL_SSL and SSL.

Updated the KIP.



Jonathan Skrzypek 


-Original Message-
From: Rajini Sivaram [mailto:rajinisiva...@gmail.com] 
Sent: 01 May 2018 11:08
To: dev
Subject: Re: [VOTE] KIP-235 Add DNS alias support for secured connection

Jonathan,

Not doing the reverse lookup for SASL_SSL limits the usability of this KIP
since it can no longer be used in a secure environment where Kerberos is
used with TLS. Perhaps the best option is to do the lookup if the option is
explicitly enabled regardless of what the security protocol is. If there is
an SSL handshake failure with this option enabled, the error message can be
updated to indicate that it could be because a reverse lookup was used. Can
you state in the KIP that the default value of
bootstrap.reverse.dns.lookup will
be false, and hence that there is no backwards compatibility issue?

On Mon, Apr 30, 2018 at 1:41 PM, Skrzypek, Jonathan <
jonathan.skrzy...@gs.com> wrote:

> Thanks for your comments.
> Have updated the KIP.
>
> I agree SSL and SASL_SSL will face similar issues and should behave the
> same.
> Thinking about this further,  I'm wondering whether setting
> bootstrap.reverse.dns.lookup to true whilst using any of those protocols
> should throw a critical error and stop, or at least log a warning stating
> that the lookup won't be performed.
> This sounds better than silently ignoring it and leaving users with the
> impression that they can use SSL and bootstrap server aliases.
> Abruptly stopping the client sounds a bit extreme so I'm leaning towards a
> warning.
>
> Thoughts?
>
> I'm not sure about checking whether the list has IP addresses.
> There could be cases where the list has a mix of FQDNs and IPs, so I would
> rather perform the lookup in all cases when the parameter is
> enabled.
>
> On the security aspects, I am by no means a security or SASL expert so
> commented the KIP with what I believe to be the case.
>
> Jonathan Skrzypek
>
> -Original Message-
> From: Rajini Sivaram [mailto:rajinisiva...@gmail.com]
> Sent: 29 April 2018 15:38
> To: dev
> Subject: Re: [VOTE] KIP-235 Add DNS alias support for secured connection
>
> Hi Jonathan,
>
> Thanks for the KIP.
>
> +1 (binding) with a couple comments below to add more detail to the KIP.
>
>1. Make it clearer when the new option `bootstrap.reverse.dns.lookup`
>should or shouldn't be used. Document security considerations as well as
>other system configurations that may have an impact.
>2. The PR currently disables the new code path for security protocol
>SSL. But this doesn't address SASL_SSL which could also do hostname
>verification. Do we even want to do reverse lookup if bootstrap list
>contains IP addresses? If we do, we should handle SSL and SASL_SSL in
> the
>same way (which basically means handling all protocols in the same way).
>
>
> On Thu, Apr 26, 2018 at 2:16 PM, Stephane Maarek <
> steph...@simplemachines.com.au> wrote:
>
> > +1 as a user
> > BUT
> >
> > I am no security expert. I have experienced that issue while setting up a
> > cluster, and while I would have liked a feature like that (I opened a JIRA
> > at the time), I always guessed that the reason was some security
> > protection.
> >
> > Now from a setup point of view this helps a ton, but I really want to
> make
> > sure this doesn't introduce any security risk by relaxing a constraint.
> >
> > Is there a security assessment possible by someone accredited?
> >
> > Sorry for raising these questions; I just want to make sure it's addressed
> >
> > On Thu., 26 Apr. 2018, 5:32 pm Gwen Shapira,  wrote:
> >
> > > +1 (binding)
> > >
> > > This KIP is quite vital to running secured clusters in cloud/container
> > > environments. Would love to see more support from the community on this
> > > (or feedback...)
> > >
> > > Gwen
> > >
> > > On Mon, Apr 16, 2018 at 4:52 PM, Skrzypek, Jonathan <
> > > jonathan.skrzy...@gs.com> wrote:
> > >
> > > > Hi,
> > > >
> > > > Could anyone take a look ?
> > > > Does the proposal sound reasonable ?
> > > >
> > > > Jonathan Skrzypek
> > > >
> > > >
> > > > From: Skrzypek, Jonathan [Tech]
> > > > Sent: 23 March 2018 19:05
> > > > To: dev@kafka.apache.org
> > > > Subject: [VOTE] KIP-235 Add DNS alias support for secured connection
> > > >
> > > > Hi,
> > > >
> > > > I would like to start a vote for KIP-235
> > > >
> > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__cwiki.
> apache.org_confluence_display_KAFKA_KIP-2D=DwIBaQ=
> 

Build failed in Jenkins: kafka-trunk-jdk7 #3416

2018-05-10 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-6870 Concurrency conflicts in SampledStat (#4985)

--
[...truncated 64.07 KB...]
withAuthorizer(opts) { authorizer =>
  ^
:124:
 error writing <$anon: Function1>: 
:
 

 is not a directory
else resources.map(resource => resource -> authorizer.getAcls(resource))
^
:126:
 error writing <$anon: Function1>: 
:
 

 is not a directory
  for ((resource, acls) <- resourceToAcls)
   ^
:126:
 error writing <$anon: Function1>: 
:
 

 is not a directory
  for ((resource, acls) <- resourceToAcls)
^
:127:
 error writing <$anon: Function1>: 
:
 

 is not a directory
println(s"Current ACLs for resource `$resource`: $Newline 
${acls.map("\t" + _).mkString(Newline)} $Newline")

  ^
:101:
 error writing <$anon: Function1>: 
:
 

 is not a directory
withAuthorizer(opts) { authorizer =>
  ^
:104:
 error writing <$anon: Function1>: 
:
 

 is not a directory
  for ((resource, acls) <- resourceToAcl) {
   ^
:104:
 error writing <$anon: Function1>: 
:
 

 is not a directory
  for ((resource, acls) <- resourceToAcl) {
^
:109:
 error writing <$anon: Function1>: 
:
 

 is not a directory
  if (confirmAction(opts, s"Are you sure you want to remove ACLs: 
$Newline ${acls.map("\t" + _).mkString(Newline)} $Newline from resource 
`$resource`? (y/n)"))

   ^
:267:
 error writing <$anon: Function1>: 
:
 

Re: [DISCUSS] KIP-290: Support for wildcard suffixed ACLs

2018-05-10 Thread Andy Coates
Rather than having name and pattern fields on the ResourceFilter, where it’s 
only valid for one to be set, and where we want to restrict the character set in case 
future enhancements need it, we could instead add a new integer ‘nameType’ 
field and use constants to indicate how the name field should be interpreted, 
e.g. 0 = literal, 1 = wildcard. This would be extensible, e.g. we could later add 
2 = regex, or whatever, and it wouldn’t require any escaping.
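A rough sketch of that shape (the constant names and values are hypothetical, not an agreed design):

{code:java}
// Hypothetical sketch of the 'nameType' idea from this message.
public class ResourceFilter {
    public static final int NAME_TYPE_LITERAL = 0;   // name matched exactly
    public static final int NAME_TYPE_WILDCARD = 1;  // e.g. "foo*" prefix match
    // extensible later: 2 = regex, ...

    private final ResourceType resourceType;
    private final String name;      // interpreted according to nameType
    private final int nameType;

    public ResourceFilter(final ResourceType resourceType, final String name,
                          final int nameType) {
        this.resourceType = resourceType;
        this.name = name;
        this.nameType = nameType;
    }
}
{code}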

Sent from my iPhone

> On 7 May 2018, at 05:16, Piyush Vijay  wrote:
> 
> Makes sense. I'll update the KIP.
> 
> Does anyone have any other comments? :)
> 
> Thanks
> 
> 
> Piyush Vijay
> 
>> On Thu, May 3, 2018 at 11:55 AM, Colin McCabe  wrote:
>> 
>> Yeah, I guess that's a good point.  It probably makes sense to support the
>> prefix scheme for consumer groups and transactional IDs as well as topics.
>> 
>> I agree that the current situation where anything goes in consumer group
>> names and transactional ID names is not ideal.  I wish we could rewind the
>> clock and impose restrictions on the names.  However, it doesn't seem
>> practical at the moment.  Adding new restrictions would break a lot of
>> existing users after an upgrade.  It would be a really bad upgrade
>> experience.
>> 
>> However, I think we can support this in a compatible way.  From the
>> perspective of AdminClient, we just have to add a new field to
>> ResourceFilter.  Currently, it has two fields, resourceType and name:
>> 
>>> /**
>>> * A filter which matches Resource objects.
>>> *
>>> * The API for this class is still evolving and we may break
>> compatibility in minor releases, if necessary.
>>> */
>>> @InterfaceStability.Evolving
>>> public class ResourceFilter {
>>>private final ResourceType resourceType;
>>>private final String name;
>> 
>> We can add a third field, pattern.
>> 
>> So the API will basically be, if I create a 
>> ResourceFilter(resourceType=GROUP,
>> name=foo*, pattern=null), it applies only to the consumer group named
>> "foo*".  If I create a ResourceFilter(resourceType=GROUP, name=null,
>> pattern=foo*), it applies to any consumer group starting in "foo".  name
>> and pattern cannot be both set at the same time.  This preserves
>> compatibility at the AdminClient level.
>> 
>> It's possible that we will want to add more types of pattern in the
>> future.  So we should reserve "special characters" such as +, /, &, %, #,
>> $, etc.  These characters should be treated as special unless they are
>> prefixed with a backslash to escape them.  This will allow us to add
>> support for using these characters in the future without breaking
>> compatibility.
>> 
>> At the protocol level, we need a new API version for CreateAclsRequest /
>> DeleteAclsRequest.  The new API version will send all special characters
>> over the wire escaped rather than directly.  (So there is no need for the
>> equivalent of both "name" and "pattern"-- we translate name into a validly
>> escaped pattern that matches only one thing, by adding escape characters as
>> appropriate.)  The broker will validate the new API version and reject
>> malformed or unsupported patterns.
>> 
>> At the ZK level, we can introduce a protocol version to the data in ZK--
>> or store it under a different root.
>> 
>> best,
>> Colin
>> 
>> 
>>> On Wed, May 2, 2018, at 18:09, Piyush Vijay wrote:
>>> Thank you everyone for the interest and the prompt, valuable feedback. I
>>> really appreciate the quick turnaround. I’ve tried to organize the
>> comments
>>> into common headings. See my replies below:
>>> 
>>> 
>>> 
>>> *Case of ‘*’ might already be present in consumer groups and
>> transactional
>>> ids*
>>> 
>>> 
>>> 
>>>   - We definitely need wildcard ACLs support for resources like consumer
>>>   groups and transactional ids for the reason Andy mentioned. A big win
>> of
>>>   this feature is that service providers don’t have to track and keep
>>>   up-to-date all the consumer groups their customers are using.
>>>   - I agree with Andy’s thoughts on the two possible ways.
>>>   - My vote would be to do the breaking change because we should
>> restrict
>>>   the format of consumer groups and transactional ids sooner than later.
>>>  - Consumer groups and transactional ids are basic Kafka concepts.
>>>  There is a lot of value in having a defined naming convention on
>> these
>>>  concepts.
>>>  - This will help us not run into more issues down the line.
>>>  - I’m not sure if people actually use ‘*’ in their consumer group
>>>  names anyway.
>>>  - Escaping ‘*’ isn’t trivial because ‘\’ is an allowed character
>> too.
>>> 
>>> 
>>> *Why introduce two new APIs?*
>>> 
>>> 
>>> 
>>>   - It’s possible to make this change without introducing new APIs but
>> new
>>>   APIs are required for inspection.
>>>   - For example: If I want to fetch all ACLs that match ’topicA*’, it’s
>>>   not possible without introducing new APIs AND 

Re: Use of a formatter like Scalafmt

2018-05-10 Thread Joan Goyeau
Thanks guys for your feedback.

Matthias, we can change the config to use JavaDoc style (1 space) instead of the
ScalaDoc style (2 spaces); that would indeed limit the change.

When I say we can apply this change module by module, I mean we can specify
folders, so we could break down core too.

I updated the PR to include only the streams folder and the JavaDoc change,
see:
https://github.com/apache/kafka/pull/4965
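For readers who haven't opened the PR, the kind of configuration under discussion looks roughly like this (scalafmt option names; the exact values here are illustrative, not what the PR settles on):

{code}
# .scalafmt.conf (sketch)
docstrings = JavaDoc            # 1-space asterisk alignment instead of ScalaDoc's 2
maxColumn = 120
project.includeFilters = [
  "streams/.*"                  # start with the new Streams Scala module only
]
{code}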

That would be a good start; we can see how it goes and later move on to some
other modules.
Let me know your thoughts on the PR too, about the rewritings that you like
or don't like.

Thanks


On Thu, 10 May 2018, 05:18 Jeff Widman,  wrote:

> It certainly annoys me every time I open the code and my linter starts
> highlighting that some code is indented with spaces and some with tabs...
> increasing consistency across the codebase would be appreciated.
>
> On Wed, May 9, 2018 at 9:10 PM, Ismael Juma  wrote:
>
> > Sounds good about doing this for the Kafka Streams Scala module first. Core is a bit
> > more complicated, so it may require more discussion.
> >
> > Ismael
> >
> > On Wed, 9 May 2018, 16:59 Matthias J. Sax, 
> wrote:
> >
> > > Joan,
> > >
> > > thanks for starting this initiative. I am overall +1
> > >
> > > However, I am worried about applying it to the `core` module as the change
> > > is massive. For the Kafka Streams Scala module, which is new and has not
> > > been released yet, I am +1.
> > >
> > > A personal thing about the style: the 2-space indentation for JavaDocs is
> > > a little weird.
> > >
> > > /**
> > >  *
> > >  */
> > >
> > > is changed to
> > >
> > > /**
> > >   *
> > >   */
> > >
> > > Not sure if this can be fixed easily in the style file? If not, I am
> > > also fine with the change.
> > >
> > > This change also affects the license headers of many files, exposing
> > > the fact that those use the wrong comment format anyway. They should use
> > > regular comments
> > >
> > > /*
> > >  *
> > >  */
> > >
> > > but not JavaDoc comments
> > >
> > > /**
> > >  *
> > >  */
> > >
> > > (We fixed this for Java source code in the past already -- maybe it's
> > > time to fix it for the Scala code base, too.)
> > >
> > >
> > >
> > > -Matthias
> > >
> > > On 5/9/18 4:45 PM, Joan Goyeau wrote:
> > > > Hi Ted,
> > > >
> > > > As far as I understand this is an issue for PRs and back porting the
> > > > changes to other branches.
> > > >
> > > > Applying the tool to the other branches should also resolve the
> > conflicts
> > > > as the formattings will match, leaving only the actual changes in the
> > > diffs.
> > > > That's what we did some time ago at my work and it went quite
> > > > smoothly.
> > > >
> > > > If we don't want to do a big-bang commit, then I'm thinking we might
> > > > want to do it gradually by applying it module by module?
> > > > This is one idea do you have any other?
> > > >
> > > > I know formatting sounds like the useless thing that doesn't matter,
> > > > and I totally agree with this; that's why I don't want to care about it
> > > > while coding.
> > > >
> > > > Thanks
> > > >
> > > > On Thu, 10 May 2018 at 00:15 Ted Yu  wrote:
> > > >
> > > >> Applying the tool across code base would result in massive changes.
> > > >> How would this be handled ?
> > > >>  Original message From: Joan Goyeau <
> j...@goyeau.com>
> > > >> Date: 5/9/18  3:31 PM  (GMT-08:00) To: dev@kafka.apache.org
> Subject:
> > > Use
> > > >> of a formatter like Scalafmt
> > > >> Hi,
> > > >>
> > > >> Contributing to Kafka Streams' Scala API, I've been kinda lost on how
> > > >> I should format my code.
> > > >> I know formatting is the start of religious wars but personally I have
> > > >> no preference at all. I just want consistency across the codebase, no
> > > >> unnecessary formatting diffs in PRs, and to offload the formatting to a
> > > >> tool that will do it for me so I can concentrate on what matters (not
> > > >> formatting).
> > > >>
> > > >> So I opened the following PR where I put arbitrary rules in
> > > .scalafmt.conf
> > > >> <
> > > >>
> > > https://github.com/apache/kafka/pull/4965/files#diff-
> > 8af3e1355c23c331ee2b848e12c5219f
> > > >>>
> > > >> :
> > > >> https://github.com/apache/kafka/pull/4965
> > > >>
> > > >> Please let me know what do you think and if we can move this forward
> > and
> > > >> settle something.
> > > >>
> > > >> Thanks
> > > >>
> > > >
> > >
> > >
> >
>
>
>
> --
>
> *Jeff Widman*
> jeffwidman.com  | 740-WIDMAN-J (943-6265)
> <><
>


Re: [VOTE] KIP-266: Add TimeoutException for KafkaConsumer#position

2018-05-10 Thread Edoardo Comar
+1 (non-binding)

On 10 May 2018 at 10:29, zhenya Sun  wrote:

> +1 non-binding
>
> > On 10 May 2018, at 17:19, Manikumar  wrote:
> >
> > +1 (non-binding).
> > Thanks.
> >
> > On Thu, May 10, 2018 at 2:33 PM, Mickael Maison <
> mickael.mai...@gmail.com>
> > wrote:
> >
> >> +1 (non binding)
> >> Thanks
> >>
> >> On Thu, May 10, 2018 at 9:39 AM, Rajini Sivaram <
> rajinisiva...@gmail.com>
> >> wrote:
> >>> Hi Richard, Thanks for the KIP.
> >>>
> >>> +1 (binding)
> >>>
> >>> Regards,
> >>>
> >>> Rajini
> >>>
> >>> On Wed, May 9, 2018 at 10:54 PM, Guozhang Wang 
> >> wrote:
> >>>
>  +1 from me, thanks!
> 
> 
>  Guozhang
> 
>  On Wed, May 9, 2018 at 10:46 AM, Jason Gustafson 
>  wrote:
> 
> > Thanks for the KIP, +1 (binding).
> >
> > One small correction: the KIP mentions that close() will be
> >> deprecated,
>  but
> > we do not want to do this because it is needed by the Closeable
>  interface.
> > We only want to deprecate close(long, TimeUnit) in favor of
> > close(Duration).
> >
> > -Jason
> >
> > On Tue, May 8, 2018 at 12:43 AM, khaireddine Rezgui <
> > khaireddine...@gmail.com> wrote:
> >
> >> +1
> >>
> >> 2018-05-07 20:35 GMT+01:00 Bill Bejeck :
> >>
> >>> +1
> >>>
> >>> Thanks,
> >>> Bill
> >>>
> >>> On Fri, May 4, 2018 at 7:21 PM, Richard Yu <
>  yohan.richard...@gmail.com
> >>
> >>> wrote:
> >>>
>  Hi all, I would like to bump this thread since discussion in the
>  KIP
>  appears to be reaching its conclusion.
> 
> 
> 
>  On Thu, Mar 15, 2018 at 3:30 PM, Richard Yu <
> >> yohan.richard...@gmail.com>
>  wrote:
> 
> > Hi all,
> >
> > Since there does not seem to be too much discussion in
> >> KIP-266, I
> >> will
> >>> be
> > starting a voting thread.
> > Here is the link to KIP-266 for reference:
> >
> > https://cwiki.apache.org/confluence/pages/viewpage.
>  action?pageId=75974886
> >
> > Recently, I have made some updates to the KIP. To reiterate, I
>  have
> > included KafkaConsumer's commitSync,
> > poll, and committed in the KIP. (We will be adding a
> > TimeoutException to them as well, in a similar manner
> > to what we will be doing for position().)
> >
> > Thanks,
> > Richard Yu
> >
> >
> 
> >>>
> >>
> >>
> >>
> >> --
> >> Ingénieur en informatique
> >>
> >
> 
> 
> 
>  --
>  -- Guozhang
> 
> >>
>
>


-- 
"When the people fear their government, there is tyranny; when the
government fears the people, there is liberty." [Thomas Jefferson]
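To make the proposal concrete for readers skimming the votes, a sketch of the bounded calls under discussion (the names come from this thread; the exact signatures are whatever the final KIP specifies):

{code:java}
import java.time.Duration;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.TimeoutException;

// Sketch of the Duration-bounded variants discussed above.
class PositionWithTimeoutSketch {
    static long positionOrDefault(final KafkaConsumer<String, String> consumer,
                                  final TopicPartition tp) {
        try {
            // Bounded: fails with TimeoutException instead of blocking forever.
            return consumer.position(tp, Duration.ofSeconds(30));
        } catch (final TimeoutException e) {
            return -1L; // caller decides how to handle the timeout
        }
    }
}
// Per Jason's note, close(long, TimeUnit) is deprecated in favour of:
// consumer.close(Duration.ofSeconds(5));
{code}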


Re: [VOTE] KIP-266: Add TimeoutException for KafkaConsumer#position

2018-05-10 Thread zhenya Sun
+1 non-binding

> On 10 May 2018, at 17:19, Manikumar  wrote:
> 
> +1 (non-binding).
> Thanks.
> 
> On Thu, May 10, 2018 at 2:33 PM, Mickael Maison 
> wrote:
> 
>> +1 (non binding)
>> Thanks
>> 
>> On Thu, May 10, 2018 at 9:39 AM, Rajini Sivaram 
>> wrote:
>>> Hi Richard, Thanks for the KIP.
>>> 
>>> +1 (binding)
>>> 
>>> Regards,
>>> 
>>> Rajini
>>> 
>>> On Wed, May 9, 2018 at 10:54 PM, Guozhang Wang 
>> wrote:
>>> 
 +1 from me, thanks!
 
 
 Guozhang
 
 On Wed, May 9, 2018 at 10:46 AM, Jason Gustafson 
 wrote:
 
> Thanks for the KIP, +1 (binding).
> 
> One small correction: the KIP mentions that close() will be
>> deprecated,
 but
> we do not want to do this because it is needed by the Closeable
 interface.
> We only want to deprecate close(long, TimeUnit) in favor of
> close(Duration).
> 
> -Jason
> 
> On Tue, May 8, 2018 at 12:43 AM, khaireddine Rezgui <
> khaireddine...@gmail.com> wrote:
> 
>> +1
>> 
>> 2018-05-07 20:35 GMT+01:00 Bill Bejeck :
>> 
>>> +1
>>> 
>>> Thanks,
>>> Bill
>>> 
>>> On Fri, May 4, 2018 at 7:21 PM, Richard Yu <
 yohan.richard...@gmail.com
>> 
>>> wrote:
>>> 
 Hi all, I would like to bump this thread since discussion in the
 KIP
 appears to be reaching its conclusion.
 
 
 
 On Thu, Mar 15, 2018 at 3:30 PM, Richard Yu <
>> yohan.richard...@gmail.com>
 wrote:
 
> Hi all,
> 
> Since there does not seem to be too much discussion in
>> KIP-266, I
>> will
>>> be
> starting a voting thread.
> Here is the link to KIP-266 for reference:
> 
> https://cwiki.apache.org/confluence/pages/viewpage.
 action?pageId=75974886
> 
> Recently, I have made some updates to the KIP. To reiterate, I
 have
> included KafkaConsumer's commitSync,
> poll, and committed in the KIP. (We will be adding a
> TimeoutException to them as well, in a similar manner
> to what we will be doing for position().)
> 
> Thanks,
> Richard Yu
> 
> 
 
>>> 
>> 
>> 
>> 
>> --
>> Ingénieur en informatique
>> 
> 
 
 
 
 --
 -- Guozhang
 
>> 



Re: [VOTE] KIP-283: Efficient Memory Usage for Down-Conversion

2018-05-10 Thread Edoardo Comar
+1 (non-binding)

On 10 May 2018 at 09:56, Rajini Sivaram  wrote:

> Hi Dhruvil, Thanks for the KIP!
>
> +1 (binding)
>
> Regards,
>
> Rajini
>
> On Wed, May 9, 2018 at 9:28 PM, Dhruvil Shah  wrote:
>
> > Thanks for the feedback, Jason and Ismael. I renamed the config to
> > "message.downconversion.enable".
> >
> > Also, as an update, I found a potential problem with one of the
> suggestions
> > the KIP made, specifically about the case where the size of messages
> after
> > down-conversion is greater than the size before, and so we are not able
> to
> > send all the messages. Both the old and new consumers expect to receive
> at
> > least one full batch of messages for each partition, and throw a
> > `RecordTooLargeException` if that is not the case. Because of this, I
> made
> > a small change to the KIP to make sure we are able to send at least one
> > full message batch for each partition. Because this is more of an
> internal
> > implementation specific change and does not affect user-visible
> > functionality in any way, I went ahead and updated the KIP with this
> logic
> > (under the "Ensuring Consumer Progress" section).
> >
> > Thanks,
> > Dhruvil
> >
> > On Wed, May 9, 2018 at 9:09 AM, Ismael Juma  wrote:
> >
> > > Maybe it should be message instead of record, to be consistent with
> > > message.format.version.
> > >
> > > On Wed, 9 May 2018, 09:04 Jason Gustafson,  wrote:
> > >
> > > > Hi Dhruvil,
> > > >
> > > > Thanks for the KIP. +1 from me. Just a minor nitpick on the name of
> the
> > > new
> > > > config. I would suggest "record.downconversion.enable". The "record"
> > > prefix
> > > > emphasizes what is being down-converted and similar existing configs
> > use
> > > > "enable" rather than "enabled."
> > > >
> > > > -Jason
> > > >
> > > > On Wed, May 2, 2018 at 9:35 AM, Ted Yu  wrote:
> > > >
> > > > > +1
> > > > >
> > > > > On Wed, May 2, 2018 at 9:27 AM, Dhruvil Shah  >
> > > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > I would like to start the vote on KIP-238: Efficient Memory Usage
> > for
> > > > > > Down-Conversion.
> > > > > >
> > > > > > For reference, the link to the KIP is here:
> > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > > 283%3A+Efficient+Memory+Usage+for+Down-Conversion
> > > > > >
> > > > > > and the discussion thread is here:
> > > > > > https://www.mail-archive.com/dev@kafka.apache.org/msg86799.html
> > > > > >
> > > > > > Thanks,
> > > > > > Dhruvil
> > > > > >
> > > > >
> > > >
> > >
> >
>



-- 
"When the people fear their government, there is tyranny; when the
government fears the people, there is liberty." [Thomas Jefferson]


Re: [VOTE] KIP-294 - Enable TLS hostname verification by default

2018-05-10 Thread Edoardo Comar
+1 (non-binding)

On 10 May 2018 at 09:36, Manikumar  wrote:

> +1 (non-binding)
>
> Thanks.
>
> On Wed, May 9, 2018 at 10:09 PM, Mickael Maison 
> wrote:
>
> > +1, thanks for the KIP!
> >
> > On Wed, May 9, 2018 at 4:41 PM, Ted Yu  wrote:
> > > +1
> > >
> > > On Wed, May 9, 2018 at 8:28 AM, Rajini Sivaram <
> rajinisiva...@gmail.com>
> > > wrote:
> > >
> > >> Hi all,
> > >>
> > >> Since there have been no objections on this straightforward KIP, I
> would
> > >> like to initiate the voting process. KIP-294 proposes to use a secure
> > >> default value for endpoint identification when using SSL as the
> security
> > >> protocol. The KIP is here:
> > >>
> > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > >> 294+-+Enable+TLS+hostname+verification+by+default
> > >>
> > >> If there are any concerns, please add them to this thread or the
> > discussion
> > >> thread (https://www.mail-archive.com/dev@kafka.apache.org/msg87549.
> html
> > )
> > >>
> > >> Regards,
> > >>
> > >> Rajini
> > >>
> >
>



-- 
"When the people fear their government, there is tyranny; when the
government fears the people, there is liberty." [Thomas Jefferson]


Re: [VOTE] KIP-266: Add TimeoutException for KafkaConsumer#position

2018-05-10 Thread Manikumar
+1 (non-binding).
Thanks.

On Thu, May 10, 2018 at 2:33 PM, Mickael Maison 
wrote:

> +1 (non binding)
> Thanks
>
> On Thu, May 10, 2018 at 9:39 AM, Rajini Sivaram 
> wrote:
> > Hi Richard, Thanks for the KIP.
> >
> > +1 (binding)
> >
> > Regards,
> >
> > Rajini
> >
> > On Wed, May 9, 2018 at 10:54 PM, Guozhang Wang 
> wrote:
> >
> >> +1 from me, thanks!
> >>
> >>
> >> Guozhang
> >>
> >> On Wed, May 9, 2018 at 10:46 AM, Jason Gustafson 
> >> wrote:
> >>
> >> > Thanks for the KIP, +1 (binding).
> >> >
> >> > One small correction: the KIP mentions that close() will be
> deprecated,
> >> but
> >> > we do not want to do this because it is needed by the Closeable
> >> interface.
> >> > We only want to deprecate close(long, TimeUnit) in favor of
> >> > close(Duration).
> >> >
> >> > -Jason
> >> >
> >> > On Tue, May 8, 2018 at 12:43 AM, khaireddine Rezgui <
> >> > khaireddine...@gmail.com> wrote:
> >> >
> >> > > +1
> >> > >
> >> > > 2018-05-07 20:35 GMT+01:00 Bill Bejeck :
> >> > >
> >> > > > +1
> >> > > >
> >> > > > Thanks,
> >> > > > Bill
> >> > > >
> >> > > > On Fri, May 4, 2018 at 7:21 PM, Richard Yu <
> >> yohan.richard...@gmail.com
> >> > >
> >> > > > wrote:
> >> > > >
> >> > > > > Hi all, I would like to bump this thread since discussion in the
> >> KIP
> >> > > > > appears to be reaching its conclusion.
> >> > > > >
> >> > > > >
> >> > > > >
> >> > > > > On Thu, Mar 15, 2018 at 3:30 PM, Richard Yu <
> >> > > yohan.richard...@gmail.com>
> >> > > > > wrote:
> >> > > > >
> >> > > > > > Hi all,
> >> > > > > >
> >> > > > > > Since there does not seem to be too much discussion in
> KIP-266, I
> >> > > will
> >> > > > be
> >> > > > > > starting a voting thread.
> >> > > > > > Here is the link to KIP-266 for reference:
> >> > > > > >
> >> > > > > > https://cwiki.apache.org/confluence/pages/viewpage.
> >> > > > > action?pageId=75974886
> >> > > > > >
> >> > > > > > Recently, I have made some updates to the KIP. To reiterate, I
> >> have
> >> > > > > > included KafkaConsumer's commitSync,
> >> > > > > > poll, and committed in the KIP. (We will be adding a
> >> > > > > > TimeoutException to them as well, in a similar manner
> >> > > > > > to what we will be doing for position().)
> >> > > > > >
> >> > > > > > Thanks,
> >> > > > > > Richard Yu
> >> > > > > >
> >> > > > > >
> >> > > > >
> >> > > >
> >> > >
> >> > >
> >> > >
> >> > > --
> >> > > Ingénieur en informatique
> >> > >
> >> >
> >>
> >>
> >>
> >> --
> >> -- Guozhang
> >>
>


Re: [VOTE] KIP-266: Add TimeoutException for KafkaConsumer#position

2018-05-10 Thread Mickael Maison
+1 (non binding)
Thanks

On Thu, May 10, 2018 at 9:39 AM, Rajini Sivaram  wrote:
> Hi Richard, Thanks for the KIP.
>
> +1 (binding)
>
> Regards,
>
> Rajini
>
> On Wed, May 9, 2018 at 10:54 PM, Guozhang Wang  wrote:
>
>> +1 from me, thanks!
>>
>>
>> Guozhang
>>
>> On Wed, May 9, 2018 at 10:46 AM, Jason Gustafson 
>> wrote:
>>
>> > Thanks for the KIP, +1 (binding).
>> >
>> > One small correction: the KIP mentions that close() will be deprecated,
>> but
>> > we do not want to do this because it is needed by the Closeable
>> interface.
>> > We only want to deprecate close(long, TimeUnit) in favor of
>> > close(Duration).
>> >
>> > -Jason
>> >
>> > On Tue, May 8, 2018 at 12:43 AM, khaireddine Rezgui <
>> > khaireddine...@gmail.com> wrote:
>> >
>> > > +1
>> > >
>> > > 2018-05-07 20:35 GMT+01:00 Bill Bejeck :
>> > >
>> > > > +1
>> > > >
>> > > > Thanks,
>> > > > Bill
>> > > >
>> > > > On Fri, May 4, 2018 at 7:21 PM, Richard Yu <
>> yohan.richard...@gmail.com
>> > >
>> > > > wrote:
>> > > >
>> > > > > Hi all, I would like to bump this thread since discussion in the
>> KIP
>> > > > > appears to be reaching its conclusion.
>> > > > >
>> > > > >
>> > > > >
>> > > > > On Thu, Mar 15, 2018 at 3:30 PM, Richard Yu <
>> > > yohan.richard...@gmail.com>
>> > > > > wrote:
>> > > > >
>> > > > > > Hi all,
>> > > > > >
>> > > > > > Since there does not seem to be too much discussion in KIP-266, I
>> > > will
>> > > > be
>> > > > > > starting a voting thread.
>> > > > > > Here is the link to KIP-266 for reference:
>> > > > > >
>> > > > > > https://cwiki.apache.org/confluence/pages/viewpage.
>> > > > > action?pageId=75974886
>> > > > > >
>> > > > > > Recently, I have made some updates to the KIP. To reiterate, I
>> have
>> > > > > > included KafkaConsumer's commitSync,
>> > > > > > poll, and committed in the KIP. (We will be adding a
>> > > > > > TimeoutException to them as well, in a similar manner
>> > > > > > to what we will be doing for position().)
>> > > > > >
>> > > > > > Thanks,
>> > > > > > Richard Yu
>> > > > > >
>> > > > > >
>> > > > >
>> > > >
>> > >
>> > >
>> > >
>> > > --
>> > > Ingénieur en informatique
>> > >
>> >
>>
>>
>>
>> --
>> -- Guozhang
>>


Re: [VOTE] KIP-283: Efficient Memory Usage for Down-Conversion

2018-05-10 Thread Rajini Sivaram
Hi Dhruvil, Thanks for the KIP!

+1 (binding)

Regards,

Rajini

On Wed, May 9, 2018 at 9:28 PM, Dhruvil Shah  wrote:

> Thanks for the feedback, Jason and Ismael. I renamed the config to
> "message.downconversion.enable".
>
> Also, as an update, I found a potential problem with one of the suggestions
> the KIP made, specifically about the case where the size of messages after
> down-conversion is greater than the size before, and so we are not able to
> send all the messages. Both the old and new consumers expect to receive at
> least one full batch of messages for each partition, and throw a
> `RecordTooLargeException` if that is not the case. Because of this, I made
> a small change to the KIP to make sure we are able to send at least one
> full message batch for each partition. Because this is more of an internal
> implementation specific change and does not affect user-visible
> functionality in any way, I went ahead and updated the KIP with this logic
> (under the "Ensuring Consumer Progress" section).
>
> Thanks,
> Dhruvil
>
> On Wed, May 9, 2018 at 9:09 AM, Ismael Juma  wrote:
>
> > Maybe it should be message instead of record, to be consistent with
> > message.format.version.
> >
> > On Wed, 9 May 2018, 09:04 Jason Gustafson,  wrote:
> >
> > > Hi Dhruvil,
> > >
> > > Thanks for the KIP. +1 from me. Just a minor nitpick on the name of the
> > new
> > > config. I would suggest "record.downconversion.enable". The "record"
> > prefix
> > > emphasizes what is being down-converted and similar existing configs
> use
> > > "enable" rather than "enabled."
> > >
> > > -Jason
> > >
> > > On Wed, May 2, 2018 at 9:35 AM, Ted Yu  wrote:
> > >
> > > > +1
> > > >
> > > > On Wed, May 2, 2018 at 9:27 AM, Dhruvil Shah 
> > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I would like to start the vote on KIP-238: Efficient Memory Usage
> for
> > > > > Down-Conversion.
> > > > >
> > > > > For reference, the link to the KIP is here:
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > 283%3A+Efficient+Memory+Usage+for+Down-Conversion
> > > > >
> > > > > and the discussion thread is here:
> > > > > https://www.mail-archive.com/dev@kafka.apache.org/msg86799.html
> > > > >
> > > > > Thanks,
> > > > > Dhruvil
> > > > >
> > > >
> > >
> >
>
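For reference, the renamed switch is a topic-level config; disabling it for a new topic would look roughly like this (topic name, partition counts, and admin wiring are illustrative; the key name comes from the thread above):

{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

// Sketch: create a topic with down-conversion disabled, so fetches from
// pre-0.11 consumers are rejected rather than converted in broker memory.
class DownConversionSketch {
    static void createTopic(final Properties adminProps) throws Exception {
        final Map<String, String> configs =
                Collections.singletonMap("message.downconversion.enable", "false");
        final NewTopic topic = new NewTopic("events", 3, (short) 2).configs(configs);
        try (AdminClient admin = AdminClient.create(adminProps)) {
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
{code}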


Re: [VOTE] KIP-266: Add TimeoutException for KafkaConsumer#position

2018-05-10 Thread Rajini Sivaram
Hi Richard, Thanks for the KIP.

+1 (binding)

Regards,

Rajini

On Wed, May 9, 2018 at 10:54 PM, Guozhang Wang  wrote:

> +1 from me, thanks!
>
>
> Guozhang
>
> On Wed, May 9, 2018 at 10:46 AM, Jason Gustafson 
> wrote:
>
> > Thanks for the KIP, +1 (binding).
> >
> > One small correction: the KIP mentions that close() will be deprecated,
> but
> > we do not want to do this because it is needed by the Closeable
> interface.
> > We only want to deprecate close(long, TimeUnit) in favor of
> > close(Duration).
> >
> > -Jason
> >
> > On Tue, May 8, 2018 at 12:43 AM, khaireddine Rezgui <
> > khaireddine...@gmail.com> wrote:
> >
> > > +1
> > >
> > > 2018-05-07 20:35 GMT+01:00 Bill Bejeck :
> > >
> > > > +1
> > > >
> > > > Thanks,
> > > > Bill
> > > >
> > > > On Fri, May 4, 2018 at 7:21 PM, Richard Yu <
> yohan.richard...@gmail.com
> > >
> > > > wrote:
> > > >
> > > > > Hi all, I would like to bump this thread since discussion in the
> KIP
> > > > > appears to be reaching its conclusion.
> > > > >
> > > > >
> > > > >
> > > > > On Thu, Mar 15, 2018 at 3:30 PM, Richard Yu <
> > > yohan.richard...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi all,
> > > > > >
> > > > > > Since there does not seem to be too much discussion in KIP-266, I
> > > will
> > > > be
> > > > > > starting a voting thread.
> > > > > > Here is the link to KIP-266 for reference:
> > > > > >
> > > > > > https://cwiki.apache.org/confluence/pages/viewpage.
> > > > > action?pageId=75974886
> > > > > >
> > > > > > Recently, I have made some updates to the KIP. To reiterate, I
> have
> > > > > > included KafkaConsumer's commitSync,
> > > > > > poll, and committed in the KIP. (We will be adding a
> > > > > > TimeoutException to them as well, in a similar manner
> > > > > > to what we will be doing for position().)
> > > > > >
> > > > > > Thanks,
> > > > > > Richard Yu
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > Ingénieur en informatique
> > >
> >
>
>
>
> --
> -- Guozhang
>


Re: [VOTE] KIP-294 - Enable TLS hostname verification by default

2018-05-10 Thread Manikumar
+1 (non-binding)

Thanks.

On Wed, May 9, 2018 at 10:09 PM, Mickael Maison 
wrote:

> +1, thanks for the KIP!
>
> On Wed, May 9, 2018 at 4:41 PM, Ted Yu  wrote:
> > +1
> >
> > On Wed, May 9, 2018 at 8:28 AM, Rajini Sivaram 
> > wrote:
> >
> >> Hi all,
> >>
> >> Since there have been no objections on this straightforward KIP, I would
> >> like to initiate the voting process. KIP-294 proposes to use a secure
> >> default value for endpoint identification when using SSL as the security
> >> protocol. The KIP is here:
> >>
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >> 294+-+Enable+TLS+hostname+verification+by+default
> >>
> >> If there are any concerns, please add them to this thread or the
> discussion
> >> thread (https://www.mail-archive.com/dev@kafka.apache.org/msg87549.html
> )
> >>
> >> Regards,
> >>
> >> Rajini
> >>
>

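In practical terms the change amounts to a new default for one SSL client property; a sketch of the resulting client configuration (property name per the KIP discussion; broker address and protocol are illustrative):

{code:java}
import java.util.Properties;

// Sketch: what KIP-294's new default means for a client configuration.
class Kip294Sketch {
    static Properties clientProps() {
        final Properties props = new Properties();
        props.put("bootstrap.servers", "broker.example.com:9093");
        props.put("security.protocol", "SSL");
        // New default under KIP-294: hostname verification is performed.
        props.put("ssl.endpoint.identification.algorithm", "https");
        // The old behaviour (no verification) must now be requested explicitly:
        // props.put("ssl.endpoint.identification.algorithm", "");
        return props;
    }
}
{code}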

Re: [DISCUSS] Apache Kafka 2.0.0 Release Plan

2018-05-10 Thread Rajini Sivaram
Hi all,

A reminder that KIP freeze for 2.0.0 is May 22. I have updated the release
page with all the approved KIPs:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820

We have many KIPs still under discussion and/or ready for voting. Please
participate in discussions and votes to enable these to be added to the
release (or postponed if required). Voting needs to be complete by May 22
for the KIP to be added to the release.

Voting is in progress for these KIPs:


   - KIP-206:
   
https://cwiki.apache.org/confluence/display/KAFKA/KIP-206%3A+Add+support+for+UUID+serialization+and+deserialization
   - KIP-235:
   
https://cwiki.apache.org/confluence/display/KAFKA/KIP-235%3A+Add+DNS+alias+support+for+secured+connection
   - KIP-248:
   
https://cwiki.apache.org/confluence/display/KAFKA/KIP-248+-+Create+New+ConfigCommand+That+Uses+The+New+AdminClient#KIP-248-CreateNewConfigCommandThatUsesTheNewAdminClient-DescribeQuotas
   - KIP-255:
   https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75968876
   - KIP-266:
   https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75974886
   - KIP-275:
   https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75977607
   - KIP-277:
   
https://cwiki.apache.org/confluence/display/KAFKA/KIP-277+-+Fine+Grained+ACL+for+CreateTopics+API
   - KIP-281:
   
https://cwiki.apache.org/confluence/display/KAFKA/KIP-281%3A+ConsumerPerformance%3A+Increase+Polling+Loop+Timeout+and+Make+It+Reachable+by+the+End+User
   - KIP-282:
   
https://cwiki.apache.org/confluence/display/KAFKA/KIP-282%3A+Add+the+listener+name+to+the+authentication+context
   - KIP-283:
   
https://cwiki.apache.org/confluence/display/KAFKA/KIP-283%3A+Efficient+Memory+Usage+for+Down-Conversion
   - KIP-294:
   
https://cwiki.apache.org/confluence/display/KAFKA/KIP-294+-+Enable+TLS+hostname+verification+by+default


Thanks,

Rajini

On Wed, Apr 25, 2018 at 11:18 AM, Rajini Sivaram 
wrote:

> I would like to volunteer to be the release manager for our next
> time-based feature release (v2.0.0). See https://cwiki.apache.org/
> confluence/display/KAFKA/Time+Based+Release+Plan
> 
> if you missed previous communication on time-based releases or need a
> reminder. See https://lists.apache.org/thread.html/
> 8a5ccd348c5ee6b16976ec4acf69bda074fa2e253ebc17be6110f776@%
> 3Cdev.kafka.apache.org%3E if you missed the voting thread on bumping the
> version of the June release to 2.0.0.
>
> I put together a draft release plan with June 2018 as the release month and
> a list of KIPs that have already been voted:
>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820
>
> Version 1.2.0 has been merged into 2.0.0.
>
> Some important (and fast approaching) dates:
>
>- KIP Freeze: May 22, 2018 (a KIP must be accepted by this date in
>order to be considered for this release)
>- Feature Freeze: May 29, 2018 (major features merged & working on
>stabilization, minor features have PR, release branch cut; anything not in
>this state will be automatically moved to the next release in JIRA)
>
>
> Regards,
>
> Rajini
>


Jenkins build is back to normal : kafka-trunk-jdk10 #94

2018-05-10 Thread Apache Jenkins Server
See