[jira] [Created] (KAFKA-7802) Connection to Broker Disconnected Taking Down the Whole Cluster

2019-01-08 Thread Candice Wan (JIRA)
Candice Wan created KAFKA-7802:
--

 Summary: Connection to Broker Disconnected Taking Down the Whole 
Cluster
 Key: KAFKA-7802
 URL: https://issues.apache.org/jira/browse/KAFKA-7802
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.1.0
Reporter: Candice Wan
 Attachments: thread_dump.log

We recently upgraded to 2.1.0. Since then, several times per day, we observe 
that some brokers get disconnected when other brokers try to fetch replicas from 
them. This issue takes down the whole cluster, leaving all producers and 
consumers unable to publish or consume messages.

Here is an example of what we're seeing in the broker that was trying to send a 
fetch request to the problematic one:

2019-01-09 08:05:10.445 [ReplicaFetcherThread-0-3] INFO 
o.a.k.clients.FetchSessionHandler - [ReplicaFetcher replicaId=1, leaderId=3, 
fetcherId=0] Error sending fetch request (sessionId=937967566, epoch=1599941) 
to node 3: java.io.IOException: Connection to 3 was disconnected before the 
response was read.
2019-01-09 08:05:10.445 [ReplicaFetcherThread-1-3] INFO 
o.a.k.clients.FetchSessionHandler - [ReplicaFetcher replicaId=1, leaderId=3, 
fetcherId=1] Error sending fetch request (sessionId=506217047, epoch=1375749) 
to node 3: java.io.IOException: Connection to 3 was disconnected before the 
response was read.
2019-01-09 08:05:10.445 [ReplicaFetcherThread-0-3] WARN 
kafka.server.ReplicaFetcherThread - [ReplicaFetcher replicaId=1, leaderId=3, 
fetcherId=0] Error in response for fetch request (type=FetchRequest, 
replicaId=1, maxWait=500, minBytes=1, maxBytes=10485760, 
fetchData=\{__consumer_offsets-11=(offset=421032847, logStartOffset=0, 
maxBytes=1048576, currentLeaderEpoch=Optional[178])}, 
isolationLevel=READ_UNCOMMITTED, toForget=, metadata=(sessionId=937967566, 
epoch=1599941))
java.io.IOException: Connection to 3 was disconnected before the response was 
read
 at 
org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:100)
 at 
kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:99)
 at 
kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:199)
 at 
kafka.server.AbstractFetcherThread.kafka$server$AbstractFetcherThread$$processFetchRequest(AbstractFetcherThread.scala:241)
 at 
kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:130)
 at 
kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:129)
 at scala.Option.foreach(Option.scala:257)
 at 
kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:129)
 at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:111)
 at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)

 

We also took a thread dump of the problematic broker (attached). We found that all 
the kafka-request-handler threads were hanging, waiting on locks, which suggests 
a resource leak there.

 

FYI, the Java version we are running is 11.0.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7801) TopicCommand should not be able to alter transaction topic partition count

2019-01-08 Thread huxihx (JIRA)
huxihx created KAFKA-7801:
-

 Summary: TopicCommand should not be able to alter transaction 
topic partition count
 Key: KAFKA-7801
 URL: https://issues.apache.org/jira/browse/KAFKA-7801
 Project: Kafka
  Issue Type: Bug
  Components: admin
Affects Versions: 2.1.0
Reporter: huxihx
Assignee: huxihx


To stay aligned with the way it handles the offsets topic, TopicCommand should not 
be able to alter the transaction topic partition count.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk11 #192

2019-01-08 Thread Apache Jenkins Server
See 




Re: [DISCUSS] Kafka 2.2.0 in February 2019

2019-01-08 Thread Ismael Juma
Thanks for volunteering Matthias! The plan sounds good to me.

Ismael

On Tue, Jan 8, 2019, 1:07 PM Matthias J. Sax wrote:

> Hi all,
>
> I would like to propose a release plan (with me being release manager)
> for the next time-based feature release 2.2.0 in February.
>
> The recent Kafka release history can be found at
> https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan.
> The release plan (with open issues and planned KIPs) for 2.2.0 can be
> found at
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=100827512
> .
>
>
> Here are the suggested dates for Apache Kafka 2.2.0 release:
>
> 1) KIP Freeze: Jan 24, 2019.
>
> A KIP must be accepted by this date in order to be considered for this
> release.
>
> 2) Feature Freeze: Jan 31, 2019
>
> Major features merged & working on stabilization, minor features have
> PR, release branch cut; anything not in this state will be automatically
> moved to the next release in JIRA.
>
> 3) Code Freeze: Feb 14, 2019
>
> The KIP and feature freeze dates are about 2-3 weeks from now. Please plan
> accordingly for the features you want to push into the Apache Kafka 2.2.0 release.
>
> 4) Release Date: Feb 28, 2019 (tentative)
>
>
> -Matthias
>
>


Re: [DISCUSS] KIP-411: Add option to make Kafka Connect task client ID values unique

2019-01-08 Thread Randall Hauch
Hi, Paul.

I concur with the others, and I like the new approach that avoids a new
configuration, especially because it does not change the behavior for
anyone already using `producer.client.id` and/or `consumer.client.id`. I
did leave a few comments on the PR. Perhaps the biggest one is whether the
producer used for the sink task error reporter (for dead letter queue)
should be `connector-producer-`, and whether that is distinct
enough from source tasks, which will be of the form
`connector-producer-`. Maybe it is fine. (The other
comments were minor.)

Best regards,

Randall

On Mon, Jan 7, 2019 at 1:19 PM Paul Davidson 
wrote:

> Thanks all. I've submitted a new PR with a possible implementation:
> https://github.com/apache/kafka/pull/6097. Note I did not include the
> group
> ID as part of the default client ID, mainly to avoid the connector name
> appearing twice by default. As noted in the original Jira (
> https://issues.apache.org/jira/browse/KAFKA-5061), leaving out the group
> ID
> could lead to naming conflicts if multiple Connect clusters run against the same Kafka
> cluster. This would probably not be a problem for many (including us) as
> metrics exporters can usually be configured to include a cluster ID and
> guarantee uniqueness. Will be interested to hear your thoughts on this.
>
> Paul
>
>
>
> On Mon, Jan 7, 2019 at 10:27 AM Ryanne Dolan 
> wrote:
>
> > I'd also prefer to avoid the new configuration property if possible.
> Seems
> > like a lighter touch without it.
> >
> > Ryanne
> >
> > On Sun, Jan 6, 2019 at 7:25 PM Paul Davidson 
> > wrote:
> >
> > > Hi Konstantine,
> > >
> > > Thanks for your feedback!  I think my reply to Ewen covers most of your
> > > points, and I mostly agree.  If there is general agreement that
> changing
> > > the default behavior is preferable to a config change I will update my
> PR
> > > to use  that approach.
> > >
> > > Paul
> > >
> > > On Fri, Jan 4, 2019 at 5:55 PM Konstantine Karantasis <
> > > konstant...@confluent.io> wrote:
> > >
> > > > Hi Paul.
> > > >
> > > > I second Ewen and I intended to give similar feedback:
> > > >
> > > > 1) Can we avoid a config altogether?
> > > > 2) If we prefer to add a config anyways, can we use a set of allowed
> > > values
> > > > instead of a boolean, even if initially these values are only two? As
> > the
> > > > discussion on Jira highlights, there is a potential for more naming
> > > > conventions in the future, even if now the extra functionality
> doesn't
> > > seem
> > > > essential. It's not optimal to have to deprecate a config instead of
> > just
> > > > extending its set of values.
> > > > 3) I agree, the config name sounds too general. How about
> > > > "client.ids.naming.policy" or "client.ids.naming" if you want two
> more
> > > > options?
> > > >
> > > > Konstantine
> > > >
> > > > On Fri, Jan 4, 2019 at 7:38 AM Ewen Cheslack-Postava <
> > e...@confluent.io>
> > > > wrote:
> > > >
> > > > > Hi Paul,
> > > > >
> > > > > Thanks for the KIP. A few comments.
> > > > >
> > > > > To me, biggest question here is if we can fix this behavior without
> > > > adding
> > > > > a config. In particular, today, we don't even set the client.id
> for
> > > the
> > > > > producer and consumer at all, right? The *only* way it is set is if
> > you
> > > > > include an override in the worker config, but in that case you need
> > to
> > > be
> > > > > explicitly opting in with a `producer.` or `consumer.` prefix, i.e.
> > the
> > > > > settings are `producer.client.id` and `consumer.client.id`.
> > > Otherwise, I
> > > > > think we're getting the default behavior where we generate unique,
> > > > > per-process IDs, i.e. via this logic
> > > > >
> > > > >
> > > >
> > >
> >
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L662-L664
> > > > >
> > > > > If that's the case, would it maybe be possible to compatibly change
> > the
> > > > > default to use task IDs in the client ID, but only if we don't see
> an
> > > > > existing override from the worker config? This would only change
> the
> > > > > behavior when someone is using the default, but since the default
> > would
> > > > > just use what is effectively a random ID that is useless for
> > monitoring
> > > > > metrics, presumably this wouldn't affect any existing users. I
> think
> > > that
> > > > > would avoid having to introduce the config, give better out of the
> > box
> > > > > behavior, and still be a safe, compatible change to make.
> > > > >
> > > > >
> > > > > Other than that, just two minor comments. On the config naming, not
> > > sure
> > > > > about a better name, but I think the config name could be a bit
> > clearer
> > > > if
> > > > > we need to have it. Maybe something including "task" like
> > > > > "task.based.client.ids" or something like that (or change the type
> to
> > > be
> > > > an
> > > > > enum and make it something like task.client.ids=[default|task] and
> > > leave
> > > > it
> > > > > 

[VOTE] KIP-414: Expose Embedded ClientIds in Kafka Streams

2019-01-08 Thread Guozhang Wang
Hello folks,

I'd like to start a voting process for the following KIP:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-414%3A+Expose+Embedded+ClientIds+in+Kafka+Streams

It is a pretty straight-forward and small augment to Stream's public
ThreadMetadata interface, as an outcome of the discussion on KIP-345. Hence
I think we can skip the DISCUSS thread and move on to voting directly. If
people have any questions about the context of KIP-414 or KIP-345, please
feel free to read these two wiki pages and let me know.


-- Guozhang


Jenkins build is back to normal : kafka-trunk-jdk8 #3293

2019-01-08 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk11 #191

2019-01-08 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: code cleanup (#6056)

[wangguoz] MINOR: Put states in proper order, increase timeout for starting 
(#6105)

--
[...truncated 2.25 MB...]

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testPluginUrlsWithRelativeSymlinkForwards PASSED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testOrderOfPluginUrlsWithJars STARTED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testOrderOfPluginUrlsWithJars PASSED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testPluginUrlsWithClasses STARTED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testPluginUrlsWithClasses PASSED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testConnectFrameworkClasses STARTED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testConnectFrameworkClasses PASSED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testThirdPartyClasses STARTED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testThirdPartyClasses PASSED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testClientConfigProvider STARTED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testClientConfigProvider PASSED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testAllowedConnectFrameworkClasses STARTED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testAllowedConnectFrameworkClasses PASSED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testJavaLibraryClasses STARTED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testJavaLibraryClasses PASSED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testEmptyPluginUrls STARTED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testEmptyPluginUrls PASSED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testPluginUrlsWithJars STARTED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testPluginUrlsWithJars PASSED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testPluginUrlsWithZips STARTED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testPluginUrlsWithZips PASSED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testPluginUrlsWithRelativeSymlinkBackwards STARTED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testPluginUrlsWithRelativeSymlinkBackwards PASSED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testPluginUrlsWithAbsoluteSymlink STARTED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testPluginUrlsWithAbsoluteSymlink PASSED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testEmptyStructurePluginUrls STARTED

org.apache.kafka.connect.runtime.isolation.PluginUtilsTest > 
testEmptyStructurePluginUrls PASSED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testPluginDescWithNullVersion STARTED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testPluginDescWithNullVersion PASSED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testPluginDescComparison STARTED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testPluginDescComparison PASSED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testRegularPluginDesc STARTED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testRegularPluginDesc PASSED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testPluginDescEquality STARTED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testPluginDescEquality PASSED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testPluginDescWithSystemClassLoader STARTED

org.apache.kafka.connect.runtime.isolation.PluginDescTest > 
testPluginDescWithSystemClassLoader PASSED

org.apache.kafka.connect.runtime.isolation.DelegatingClassLoaderTest > 
testWhiteListedManifestResources STARTED

org.apache.kafka.connect.runtime.isolation.DelegatingClassLoaderTest > 
testWhiteListedManifestResources PASSED

org.apache.kafka.connect.runtime.isolation.DelegatingClassLoaderTest > 
testOtherResources STARTED

org.apache.kafka.connect.runtime.isolation.DelegatingClassLoaderTest > 
testOtherResources PASSED

org.apache.kafka.connect.runtime.ConnectorConfigTest > unconfiguredTransform 
STARTED

org.apache.kafka.connect.runtime.ConnectorConfigTest > unconfiguredTransform 
PASSED

org.apache.kafka.connect.runtime.ConnectorConfigTest > 
multipleTransformsOneDangling STARTED

org.apache.kafka.connect.runtime.ConnectorConfigTest > 
multipleTransformsOneDangling PASSED

org.apache.kafka.connect.runtime.ConnectorConfigTest > misconfiguredTransform 
STARTED

org.apache.kafka.connect.runtime.ConnectorConfigTest > misconfiguredTransform 
PASSED

org.apache.kafka.connect.runtime.ConnectorConfigTest > noTransforms STARTED


Re: [DISCUSSION] KIP-412: Extend Admin API to support dynamic application log levels

2019-01-08 Thread Ryanne Dolan
> To differentiate between the normal Kafka config settings and the
application's log level settings, we will introduce a new resource type -
BROKER_LOGGERS

Stanislav, can you explain why log level wouldn't be a "normal Kafka config
setting"?

Ryanne

On Tue, Jan 8, 2019, 4:26 PM Stanislav Kozlovski wrote:

> Hey there everybody,
>
> I'd like to start a discussion about KIP-412. Please take a look at the KIP
> if you can, I would appreciate any feedback :)
>
> KIP: KIP-412
> <
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-412%3A+Extend+Admin+API+to+support+dynamic+application+log+levels
> >
> JIRA: KAFKA-7800 
>
> --
> Best,
> Stanislav
>


[DISCUSS] KIP-300: Add Windowed KTable API in StreamsBuilder

2019-01-08 Thread Boyang Chen
Hey folks,

I would like to start a discussion thread on adding new time/session windowed 
KTable APIs for KStream:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-300%3A+Add+Windowed+KTable+API+in+StreamsBuilder

We started working on this around 7 months ago, and it has been successfully 
applied in our internal stream applications to enable data sharing across 
multiple jobs. Materialization of windowed stores is a concrete use case that 
could unblock stream users from building more complex modules.
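
For illustration, today a plain topic can be materialized into a local store via 
the existing table() API, and this KIP would add an analogous entry point for 
time/session windowed topics. A minimal sketch of the existing API follows (topic 
and store names are placeholders; the windowed variant in the comment is 
illustrative only, not the final API):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

public class WindowedTableSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Existing API: materialize a plain topic into a local KTable-backed store.
        KTable<String, String> table = builder.table(
                "plain-topic",
                Consumed.with(Serdes.String(), Serdes.String()),
                Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("plain-store"));

        // Proposed direction (illustrative only): a similar entry point for windowed
        // topics, e.g. builder.windowedTable("windowed-topic", ...), materializing
        // into a windowed store instead of a key-value store.
    }
}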

Let me know if the API changes make sense.

Best,
Boyang




[jira] [Created] (KAFKA-7800) Extend Admin API to support dynamic application log levels

2019-01-08 Thread Stanislav Kozlovski (JIRA)
Stanislav Kozlovski created KAFKA-7800:
--

 Summary: Extend Admin API to support dynamic application log levels
 Key: KAFKA-7800
 URL: https://issues.apache.org/jira/browse/KAFKA-7800
 Project: Kafka
  Issue Type: New Feature
Reporter: Stanislav Kozlovski
Assignee: Stanislav Kozlovski


[https://cwiki.apache.org/confluence/display/KAFKA/KIP-412%3A+Extend+Admin+API+to+support+dynamic+application+log+levels]



Logging is a critical part of any system's infrastructure. It is the most 
direct way of observing what is happening with a system. In the case of issues, 
it helps us diagnose the problem quickly which in turn helps lower the 
[MTTR|http://enterprisedevops.org/article/devops-metric-mean-time-to-recovery-mttr-definition-and-reasoning].

Kafka supports application logging via the log4j library and outputs messages 
in various log levels (TRACE, DEBUG, INFO, WARN, ERROR). Log4j is a rich 
library that supports fine-grained logging configurations (e.g. use INFO-level 
logging in {{kafka.server.ReplicaManager}} and use DEBUG-level in 
{{kafka.server.KafkaApis}}).
This is statically configurable through the 
[log4j.properties|https://github.com/apache/kafka/blob/trunk/config/log4j.properties]
 file which gets read once at broker start-up.

A problem with this static configuration is that we cannot alter the log levels 
when a problem arises. It is severely impractical to edit a properties file and 
restart all brokers in order to gain visibility of a problem taking place in 
production.
It would be very useful if we support dynamically altering the log levels at 
runtime without needing to restart the Kafka process.

Log4j itself supports dynamically altering the log levels in a programmatic way 
and Kafka exposes a [JMX 
API|https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/utils/Log4jController.scala]
 that lets you alter them. This allows users to change the log levels via a GUI 
(jconsole) or a CLI (jmxterm) that uses JMX.
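
For illustration, invoking that MBean from code could look roughly like the 
following (a sketch only: the MBean object name and operation signature are 
quoted from memory and may differ between versions, and the JMX host/port are 
placeholders):
{code:java}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class SetBrokerLogLevel {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; the broker must be started with remote JMX enabled.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();
            // Kafka's Log4jController MBean (object name may vary by version).
            ObjectName logController = new ObjectName("kafka:type=kafka.Log4jController");
            // setLogLevel(loggerName, level) changes the level at runtime, per broker.
            conn.invoke(logController,
                    "setLogLevel",
                    new Object[]{"kafka.server.KafkaApis", "DEBUG"},
                    new String[]{"java.lang.String", "java.lang.String"});
        }
    }
}
{code}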

There is one problem with changing log levels through JMX that we hope to 
address and that is *Ease of Use*:
 * Establishing a connection - Connecting to a remote process via JMX requires 
configuring and exposing multiple JMX ports to the outside world. This is a 
burden on users, as most production deployments may stand behind layers of 
firewalls and have policies against opening ports. This makes opening the ports 
and connections in the middle of an incident even more burdensome
 * Security - JMX and tools around it support authentication and authorization 
but it is an additional hassle to set up credentials for another system.

 * Manual process - Changing the whole cluster's log level requires manually 
connecting to each broker. In big deployments, this is severely impractical and 
forces users to build tooling around it.

h4. Proposition

Ideally, Kafka would support dynamically changing log levels and address all of 
the aforementioned concerns out of the box.
We propose extending the IncrementalAlterConfig/DescribeConfig Admin API with 
functionality for dynamically altering the broker's log level.
This approach would also pave the way for even finer-grained logging logic (e.g. 
log DEBUG level only for a certain topic) and would allow us to leverage the 
existing *AlterConfigPolicy* for custom user-defined validation of log-level 
changes.
These log-level changes will be *temporary* and reverted on broker restart - we 
will not persist them anywhere.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[DISCUSSION] KIP-412: Extend Admin API to support dynamic application log levels

2019-01-08 Thread Stanislav Kozlovski
Hey there everybody,

I'd like to start a discussion about KIP-412. Please take a look at the KIP
if you can, I would appreciate any feedback :)

KIP: KIP-412

JIRA: KAFKA-7800 

-- 
Best,
Stanislav


Build failed in Jenkins: kafka-2.1-jdk8 #98

2019-01-08 Thread Apache Jenkins Server
See 


Changes:

[manikumar.reddy] MINOR: Update command line options in Authorization and ACLs

[jason] KAFKA-7253; The returned connector type is always null when creating

--
[...truncated 918.48 KB...]
kafka.log.ProducerStateManagerTest > testPrepareUpdateDoesNotMutate STARTED

kafka.log.ProducerStateManagerTest > testPrepareUpdateDoesNotMutate PASSED

kafka.log.ProducerStateManagerTest > 
testSequenceNotValidatedForGroupMetadataTopic STARTED

kafka.log.ProducerStateManagerTest > 
testSequenceNotValidatedForGroupMetadataTopic PASSED

kafka.log.ProducerStateManagerTest > 
testLoadFromSnapshotRemovesNonRetainedProducers STARTED

kafka.log.ProducerStateManagerTest > 
testLoadFromSnapshotRemovesNonRetainedProducers PASSED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffset STARTED

kafka.log.ProducerStateManagerTest > testFirstUnstableOffset PASSED

kafka.log.ProducerStateManagerTest > testTxnFirstOffsetMetadataCached STARTED

kafka.log.ProducerStateManagerTest > testTxnFirstOffsetMetadataCached PASSED

kafka.log.ProducerStateManagerTest > testCoordinatorFencedAfterReload STARTED

kafka.log.ProducerStateManagerTest > testCoordinatorFencedAfterReload PASSED

kafka.log.ProducerStateManagerTest > testControlRecordBumpsEpoch STARTED

kafka.log.ProducerStateManagerTest > testControlRecordBumpsEpoch PASSED

kafka.log.ProducerStateManagerTest > 
testAcceptAppendWithoutProducerStateOnReplica STARTED

kafka.log.ProducerStateManagerTest > 
testAcceptAppendWithoutProducerStateOnReplica PASSED

kafka.log.ProducerStateManagerTest > testLoadFromCorruptSnapshotFile STARTED

kafka.log.ProducerStateManagerTest > testLoadFromCorruptSnapshotFile PASSED

kafka.log.ProducerStateManagerTest > testProducerSequenceWrapAround STARTED

kafka.log.ProducerStateManagerTest > testProducerSequenceWrapAround PASSED

kafka.log.ProducerStateManagerTest > testPidExpirationTimeout STARTED

kafka.log.ProducerStateManagerTest > testPidExpirationTimeout PASSED

kafka.log.ProducerStateManagerTest > testAcceptAppendWithSequenceGapsOnReplica 
STARTED

kafka.log.ProducerStateManagerTest > testAcceptAppendWithSequenceGapsOnReplica 
PASSED

kafka.log.ProducerStateManagerTest > testAppendTxnMarkerWithNoProducerState 
STARTED

kafka.log.ProducerStateManagerTest > testAppendTxnMarkerWithNoProducerState 
PASSED

kafka.log.ProducerStateManagerTest > testOldEpochForControlRecord STARTED

kafka.log.ProducerStateManagerTest > testOldEpochForControlRecord PASSED

kafka.log.ProducerStateManagerTest > 
testTruncateAndReloadRemovesOutOfRangeSnapshots STARTED

kafka.log.ProducerStateManagerTest > 
testTruncateAndReloadRemovesOutOfRangeSnapshots PASSED

kafka.log.ProducerStateManagerTest > testStartOffset STARTED

kafka.log.ProducerStateManagerTest > testStartOffset PASSED

kafka.log.ProducerStateManagerTest > testProducerSequenceInvalidWrapAround 
STARTED

kafka.log.ProducerStateManagerTest > testProducerSequenceInvalidWrapAround 
PASSED

kafka.log.ProducerStateManagerTest > testTruncateHead STARTED

kafka.log.ProducerStateManagerTest > testTruncateHead PASSED

kafka.log.ProducerStateManagerTest > 
testNonTransactionalAppendWithOngoingTransaction STARTED

kafka.log.ProducerStateManagerTest > 
testNonTransactionalAppendWithOngoingTransaction PASSED

kafka.log.ProducerStateManagerTest > testSkipSnapshotIfOffsetUnchanged STARTED

kafka.log.ProducerStateManagerTest > testSkipSnapshotIfOffsetUnchanged PASSED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[0] STARTED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[1] STARTED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[2] STARTED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[3] STARTED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[4] STARTED

kafka.log.LogCleanerLagIntegrationTest > cleanerTest[4] PASSED

kafka.log.LogCleanerParameterizedIntegrationTest > cleanerConfigUpdateTest[0] 
STARTED

kafka.log.LogCleanerParameterizedIntegrationTest > cleanerConfigUpdateTest[0] 
PASSED

kafka.log.LogCleanerParameterizedIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] STARTED

kafka.log.LogCleanerParameterizedIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic[0] PASSED

kafka.log.LogCleanerParameterizedIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] STARTED

kafka.log.LogCleanerParameterizedIntegrationTest > 
testCleaningNestedMessagesWithMultipleVersions[0] PASSED

kafka.log.LogCleanerParameterizedIntegrationTest > cleanerTest[0] STARTED

kafka.log.LogCleanerParameterizedIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerParameterizedIntegrationTest > 

[jira] [Created] (KAFKA-7799) Fix flaky test RestServerTest.testCORSEnabled

2019-01-08 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-7799:
--

 Summary: Fix flaky test RestServerTest.testCORSEnabled
 Key: KAFKA-7799
 URL: https://issues.apache.org/jira/browse/KAFKA-7799
 Project: Kafka
  Issue Type: Test
  Components: KafkaConnect
Reporter: Jason Gustafson


Starting to see this failure quite a lot, locally and on jenkins:

{code}

org.apache.kafka.connect.runtime.rest.RestServerTest.testCORSEnabled

Failing for the past 7 builds (Since Failed#18600 )
Took 0.7 sec.
Error Message
java.lang.AssertionError: expected: but was:
Stacktrace
java.lang.AssertionError: expected: but was:
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.kafka.connect.runtime.rest.RestServerTest.checkCORSRequest(RestServerTest.java:221)
at 
org.apache.kafka.connect.runtime.rest.RestServerTest.testCORSEnabled(RestServerTest.java:84)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:326)
at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:89)
at 
org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:97)
{code}

If it helps, I see an uncaught exception in the stdout:

{code}
[2019-01-08 19:35:23,664] ERROR Uncaught exception in REST call to 
/connector-plugins/FileStreamSource/validate 
(org.apache.kafka.connect.runtime.rest.errors.ConnectExceptionMapper:61)
javax.ws.rs.NotFoundException: HTTP 404 Not Found
at 
org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:274)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:272)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:268)
at org.glassfish.jersey.internal.Errors.process(Errors.java:316)
at org.glassfish.jersey.internal.Errors.process(Errors.java:298)
at org.glassfish.jersey.internal.Errors.process(Errors.java:268)
at 
org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:289)
at 
org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:256)
at 
org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:703)
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7798) Expose embedded client context from KafkaStreams threadMetadata

2019-01-08 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-7798:


 Summary: Expose embedded client context from KafkaStreams 
threadMetadata
 Key: KAFKA-7798
 URL: https://issues.apache.org/jira/browse/KAFKA-7798
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Guozhang Wang
Assignee: Guozhang Wang


A KafkaStreams client today contains multiple embedded clients: producer, 
consumer and admin client. Currently these clients' context, such as the client 
id, is not exposed via KafkaStreams. This ticket proposes to expose that context 
information on a per-thread basis (since each thread has its own embedded 
clients) via ThreadMetadata.

This also has an interplay with KIP-345: as we add group.instance.id in that 
KIP, that information should be exposed as well.
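
As a rough illustration of where this would surface (only localThreadsMetadata() 
and threadName() exist today; the commented accessors are the kind of additions 
this ticket proposes and their names are illustrative):
{code:java}
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.processor.ThreadMetadata;

public class PrintEmbeddedClientIds {
    static void print(KafkaStreams streams) {
        for (ThreadMetadata thread : streams.localThreadsMetadata()) {
            System.out.println("stream thread: " + thread.threadName());
            // Proposed per-thread context (illustrative names, not yet in the API):
            // thread.consumerClientId();
            // thread.restoreConsumerClientId();
            // thread.producerClientIds();
            // thread.adminClientId();
        }
    }
}
{code}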



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7797) Replication throttling configs aren't in the docs

2019-01-08 Thread James Cheng (JIRA)
James Cheng created KAFKA-7797:
--

 Summary: Replication throttling configs aren't in the docs
 Key: KAFKA-7797
 URL: https://issues.apache.org/jira/browse/KAFKA-7797
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.1.0
Reporter: James Cheng


Docs for the following configs are not on the website:
 * leader.replication.throttled.rate
 * follower.replication.throttled.rate
 * replica.alter.log.dirs.io.max.bytes.per.second

 

They are available in the Operations section titled "Limiting Bandwidth Usage 
during Data Migration", but they are not in the general config section.

I think these are generally applicable, right? Not just during 
kafka-reassign-partitions.sh? If so, then they should be in the auto-generated 
docs.

Related: I think none of the configs in 
core/src/main/scala/kafka/server/DynamicConfig.scala are in the generated docs.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-345: Introduce static membership protocol to reduce consumer rebalances

2019-01-08 Thread Boyang Chen
Hey friends,

we will be closing this vote thread since we already have 3 binding votes 
(Guozhang, Harsha and Jason) and 2 non-binding votes (Stanislav, Mayuresh). 
Thanks again to everyone who has been actively contributing to this thread 
(Dong, Matthias, Colin, Mike, John, Konstantine); this has been a really 
exciting journey and I have learnt a lot through the whole process!

For next steps, we will separate the KStream changes for exposing consumer info 
into a separate KIP, which will be led by Guozhang. I shall discuss the rollout 
plan with Matthias since he will be managing the 2.2 release, and we hope to get 
some work done before the deadline. Feel free to keep raising comments or 
questions on the discussion thread; I will reply to them at my earliest 
convenience.

Best,
Boyang


From: Boyang Chen 
Sent: Saturday, January 5, 2019 8:58 AM
To: dev@kafka.apache.org; dev@kafka.apache.org
Subject: Re: [VOTE] KIP-345: Introduce static membership protocol to reduce 
consumer rebalances

Thanks so much Harsha and Jason! Will address the comment and make it a tuple.

Get Outlook for iOS


From: Jason Gustafson 
Sent: Friday, January 4, 2019 3:50 PM
To: dev
Subject: Re: [VOTE] KIP-345: Introduce static membership protocol to reduce 
consumer rebalances

Hey Boyang,

I had just a follow-up to my comment above. I wanted to suggest an
alternative schema for LeaveGroup:

LeaveGroupRequest => GroupId [GroupInstanceId MemberId]

So we have a single array instead of two arrays. Each element identifies a
single member. For dynamic members, we would use GroupInstanceId="" and
provide the dynamic MemberId, which is consistent with JoinGroup. For
static members, GroupInstanceId must be provided and MemberId could be
considered optional. I think this makes the schema more coherent, but I'll
leave it to you if there is a good reason to keep them separate.

In any case, my vote is +1. Thanks for the hard work on this KIP!

Best,
Jason

On Fri, Jan 4, 2019 at 1:09 PM Boyang Chen  wrote:

> Thanks Guozhang for the proposal! The update is done.
>
> 
> From: Guozhang Wang 
> Sent: Saturday, January 5, 2019 3:33 AM
> To: dev
> Subject: Re: [VOTE] KIP-345: Introduce static membership protocol to
> reduce consumer rebalances
>
> Hello Boyang,
>
> I've made another pass on the wiki page again. One minor comment is on the
> "Server Behavior Changes" section, we should have a paragraph on the
> logical changes on handling new versions of LeaveGroupRequest (e.g. how to
> handle dynamic member v.s. static member etc).
>
> Other than that, I do not have further comments. I think we can continue
> the voting process after that.
>
> Guozhang
>
> On Wed, Jan 2, 2019 at 10:00 AM Boyang Chen  wrote:
>
> > Thanks Jason for the comment! I answered it on the discuss thread.
> >
> > Folks, could we continue the vote for this KIP? This is a very critical
> > improvement for our streaming system
> > stability and we need to get things rolling right at the start of 2019.
> >
> > Thank you for your time!
> > Boyang
> >
> > 
> > From: Jason Gustafson 
> > Sent: Tuesday, December 18, 2018 7:40 AM
> > To: dev
> > Subject: Re: [VOTE] KIP-345: Introduce static membership protocol to
> > reduce consumer rebalances
> >
> > Hi Boyang,
> >
> > Thanks, the KIP looks good. Just one comment.
> >
> > The new schema for the LeaveGroup request is slightly odd since it is
> > handling both the single consumer use case and the administrative use
> case.
> > I wonder we could make it consistent from a batching perspective.
> >
> > In other words, instead of this:
> > LeaveGroupRequest => GroupId MemberId [GroupInstanceId]
> >
> > Maybe we could do this:
> > LeaveGroupRequest => GroupId [GroupInstanceId MemberId]
> >
> > For dynamic members, GroupInstanceId could be empty, which is consistent
> > with JoinGroup. What do you think?
> >
> > Also, just for clarification, what is the expected behavior if the
> current
> > memberId of a static member is passed to LeaveGroup? Will the static
> member
> > be removed? I know the consumer will not do this, but we'll still have to
> > handle the case on the broker.
> >
> > Best,
> > Jason
> >
> >
> > On Mon, Dec 10, 2018 at 11:54 PM Boyang Chen 
> wrote:
> >
> > > Thanks Stanislav!
> > >
> > > Get Outlook for iOS
> > >
> > > 
> > > From: Stanislav Kozlovski 
> > > Sent: Monday, December 10, 2018 11:28 PM
> > > To: dev@kafka.apache.org
> > > Subject: Re: [VOTE] KIP-345: Introduce static membership protocol to
> > > reduce consumer rebalances
> > >
> > > This is great work, Boyang. Thank you very much.
> > >
> > > +1 (non-binding)
> > >
> > > On Mon, Dec 10, 2018 at 6:09 PM Boyang Chen 
> wrote:
> > >
> > > > Hey there, could I get more votes on this thread?
> > > >
> > > > Thanks for the vote from Mayuresh and Mike 

[DISCUSS] Kafka 2.2.0 in February 2019

2019-01-08 Thread Matthias J. Sax
Hi all,

I would like to propose a release plan (with me being release manager)
for the next time-based feature release 2.2.0 in February.

The recent Kafka release history can be found at
https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan.
The release plan (with open issues and planned KIPs) for 2.2.0 can be
found at
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=100827512.


Here are the suggested dates for Apache Kafka 2.2.0 release:

1) KIP Freeze: Jan 24, 2019.

A KIP must be accepted by this date in order to be considered for this
release.

2) Feature Freeze: Jan 31, 2019

Major features merged & working on stabilization, minor features have
PR, release branch cut; anything not in this state will be automatically
moved to the next release in JIRA.

3) Code Freeze: Feb 14, 2019

The KIP and feature freeze dates are about 2-3 weeks from now. Please plan
accordingly for the features you want to push into the Apache Kafka 2.2.0 release.

4) Release Date: Feb 28, 2019 (tentative)


-Matthias



signature.asc
Description: OpenPGP digital signature


[jira] [Resolved] (KAFKA-7051) Improve the efficiency of the ReplicaManager when there are many partitions

2019-01-08 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-7051.

Resolution: Fixed

> Improve the efficiency of the ReplicaManager when there are many partitions
> ---
>
> Key: KAFKA-7051
> URL: https://issues.apache.org/jira/browse/KAFKA-7051
> Project: Kafka
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 0.8.0
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>Priority: Major
> Fix For: 2.2.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #3292

2019-01-08 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-7773; Add end to end system test relying on verifiable 
consumer

--
[...truncated 4.51 MB...]
org.apache.kafka.connect.data.SchemaBuilderTest > testInt32Builder STARTED

org.apache.kafka.connect.data.SchemaBuilderTest > testInt32Builder PASSED

org.apache.kafka.connect.data.SchemaBuilderTest > testBooleanBuilder STARTED

org.apache.kafka.connect.data.SchemaBuilderTest > testBooleanBuilder PASSED

org.apache.kafka.connect.data.SchemaBuilderTest > testArraySchemaNull STARTED

org.apache.kafka.connect.data.SchemaBuilderTest > testArraySchemaNull PASSED

org.apache.kafka.connect.data.SchemaBuilderTest > testDoubleBuilder STARTED

org.apache.kafka.connect.data.SchemaBuilderTest > testDoubleBuilder PASSED

org.apache.kafka.connect.data.FieldTest > testEquality STARTED

org.apache.kafka.connect.data.FieldTest > testEquality PASSED

org.apache.kafka.connect.data.DateTest > 
testFromLogicalInvalidHasTimeComponents STARTED

org.apache.kafka.connect.data.DateTest > 
testFromLogicalInvalidHasTimeComponents PASSED

org.apache.kafka.connect.data.DateTest > testFromLogicalInvalidSchema STARTED

org.apache.kafka.connect.data.DateTest > testFromLogicalInvalidSchema PASSED

org.apache.kafka.connect.data.DateTest > testToLogical STARTED

org.apache.kafka.connect.data.DateTest > testToLogical PASSED

org.apache.kafka.connect.data.DateTest > testFromLogical STARTED

org.apache.kafka.connect.data.DateTest > testFromLogical PASSED

org.apache.kafka.connect.data.DateTest > testBuilder STARTED

org.apache.kafka.connect.data.DateTest > testBuilder PASSED

org.apache.kafka.connect.data.DateTest > testToLogicalInvalidSchema STARTED

org.apache.kafka.connect.data.DateTest > testToLogicalInvalidSchema PASSED

org.apache.kafka.connect.data.TimestampTest > testFromLogicalInvalidSchema 
STARTED

org.apache.kafka.connect.data.TimestampTest > testFromLogicalInvalidSchema 
PASSED

org.apache.kafka.connect.data.TimestampTest > testToLogical STARTED

org.apache.kafka.connect.data.TimestampTest > testToLogical PASSED

org.apache.kafka.connect.data.TimestampTest > testFromLogical STARTED

org.apache.kafka.connect.data.TimestampTest > testFromLogical PASSED

org.apache.kafka.connect.data.TimestampTest > testBuilder STARTED

org.apache.kafka.connect.data.TimestampTest > testBuilder PASSED

org.apache.kafka.connect.data.TimestampTest > testToLogicalInvalidSchema STARTED

org.apache.kafka.connect.data.TimestampTest > testToLogicalInvalidSchema PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldNotAllowNullKey 
STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldNotAllowNullKey 
PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldValidateLogicalTypes 
STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldValidateLogicalTypes 
PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldHaveToString STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldHaveToString PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldTransformHeadersWithKey STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldTransformHeadersWithKey PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldAddDate STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldAddDate PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldAddTime STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldAddTime PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldNotAddHeadersWithObjectValuesAndMismatchedSchema STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldNotAddHeadersWithObjectValuesAndMismatchedSchema PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldNotValidateMismatchedValuesWithBuiltInTypes STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldNotValidateMismatchedValuesWithBuiltInTypes PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldRemoveAllHeadersWithSameKey STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldRemoveAllHeadersWithSameKey PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldTransformAndRemoveHeaders STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > 
shouldTransformAndRemoveHeaders PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldBeEquals STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldBeEquals PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldValidateBuildInTypes 
STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldValidateBuildInTypes 
PASSED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldRemoveAllHeaders 
STARTED

org.apache.kafka.connect.header.ConnectHeadersTest > shouldRemoveAllHeaders 
PASSED


Build failed in Jenkins: kafka-trunk-jdk11 #190

2019-01-08 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-7253; The returned connector type is always null when creating

--
[...truncated 2.26 MB...]
org.apache.kafka.connect.transforms.TimestampConverterTest > testKey STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > testKey PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaFieldConversion STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaFieldConversion PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaDateToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaIdentity PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToDate PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToTime PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessTimestampToUnix PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigMissingFormat PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigNoTargetType PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaStringToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimeToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testSchemalessUnixToTimestamp PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testWithSchemaTimestampToString PASSED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat STARTED

org.apache.kafka.connect.transforms.TimestampConverterTest > 
testConfigInvalidFormat PASSED

org.apache.kafka.connect.transforms.TimestampRouterTest > defaultConfiguration 
STARTED

org.apache.kafka.connect.transforms.TimestampRouterTest > defaultConfiguration 
PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > doesntMatch STARTED

org.apache.kafka.connect.transforms.RegexRouterTest > doesntMatch PASSED

org.apache.kafka.connect.transforms.RegexRouterTest > identity STARTED


Build failed in Jenkins: kafka-2.0-jdk8 #210

2019-01-08 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-7253; The returned connector type is always null when creating

--
[...truncated 437.65 KB...]
kafka.coordinator.group.GroupMetadataTest > testStableToStableIllegalTransition 
STARTED

kafka.coordinator.group.GroupMetadataTest > testStableToStableIllegalTransition 
PASSED

kafka.coordinator.group.GroupMetadataTest > 
testOffsetCommitFailureWithAnotherPending STARTED

kafka.coordinator.group.GroupMetadataTest > 
testOffsetCommitFailureWithAnotherPending PASSED

kafka.coordinator.group.GroupMetadataTest > testDeadToStableIllegalTransition 
STARTED

kafka.coordinator.group.GroupMetadataTest > testDeadToStableIllegalTransition 
PASSED

kafka.coordinator.group.GroupMetadataTest > testOffsetCommit STARTED

kafka.coordinator.group.GroupMetadataTest > testOffsetCommit PASSED

kafka.coordinator.group.GroupMetadataTest > 
testAwaitingRebalanceToStableTransition STARTED

kafka.coordinator.group.GroupMetadataTest > 
testAwaitingRebalanceToStableTransition PASSED

kafka.coordinator.group.GroupMetadataTest > testSupportsProtocols STARTED

kafka.coordinator.group.GroupMetadataTest > testSupportsProtocols PASSED

kafka.coordinator.group.GroupMetadataTest > testEmptyToStableIllegalTransition 
STARTED

kafka.coordinator.group.GroupMetadataTest > testEmptyToStableIllegalTransition 
PASSED

kafka.coordinator.group.GroupMetadataTest > testCanRebalanceWhenStable STARTED

kafka.coordinator.group.GroupMetadataTest > testCanRebalanceWhenStable PASSED

kafka.coordinator.group.GroupMetadataTest > testOffsetCommitWithAnotherPending 
STARTED

kafka.coordinator.group.GroupMetadataTest > testOffsetCommitWithAnotherPending 
PASSED

kafka.coordinator.group.GroupMetadataTest > 
testPreparingRebalanceToPreparingRebalanceIllegalTransition STARTED

kafka.coordinator.group.GroupMetadataTest > 
testPreparingRebalanceToPreparingRebalanceIllegalTransition PASSED

kafka.coordinator.group.GroupMetadataTest > 
testSelectProtocolChoosesCompatibleProtocol STARTED

kafka.coordinator.group.GroupMetadataTest > 
testSelectProtocolChoosesCompatibleProtocol PASSED

kafka.coordinator.group.GroupCoordinatorConcurrencyTest > 
testConcurrentGoodPathSequence STARTED

kafka.coordinator.group.GroupCoordinatorConcurrencyTest > 
testConcurrentGoodPathSequence PASSED

kafka.coordinator.group.GroupCoordinatorConcurrencyTest > 
testConcurrentTxnGoodPathSequence STARTED

kafka.coordinator.group.GroupCoordinatorConcurrencyTest > 
testConcurrentTxnGoodPathSequence PASSED

kafka.coordinator.group.GroupCoordinatorConcurrencyTest > 
testConcurrentRandomSequence STARTED

kafka.coordinator.group.GroupCoordinatorConcurrencyTest > 
testConcurrentRandomSequence PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadOffsetsWithEmptyControlBatch STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadOffsetsWithEmptyControlBatch PASSED

kafka.coordinator.group.GroupMetadataManagerTest > testStoreNonEmptyGroup 
STARTED

kafka.coordinator.group.GroupMetadataManagerTest > testStoreNonEmptyGroup PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadOffsetsWithTombstones STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadOffsetsWithTombstones PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadWithCommittedAndAbortedTransactionalOffsetCommits STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadWithCommittedAndAbortedTransactionalOffsetCommits PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testTransactionalCommitOffsetCommitted STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testTransactionalCommitOffsetCommitted PASSED

kafka.coordinator.group.GroupMetadataManagerTest > testLoadOffsetsWithoutGroup 
STARTED

kafka.coordinator.group.GroupMetadataManagerTest > testLoadOffsetsWithoutGroup 
PASSED

kafka.coordinator.group.GroupMetadataManagerTest > testGroupNotExists STARTED

kafka.coordinator.group.GroupMetadataManagerTest > testGroupNotExists PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadEmptyGroupWithOffsets STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadEmptyGroupWithOffsets PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadTransactionalOffsetCommitsFromMultipleProducers STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadTransactionalOffsetCommitsFromMultipleProducers PASSED

kafka.coordinator.group.GroupMetadataManagerTest > testStoreEmptySimpleGroup 
STARTED

kafka.coordinator.group.GroupMetadataManagerTest > testStoreEmptySimpleGroup 
PASSED

kafka.coordinator.group.GroupMetadataManagerTest > testAddGroup STARTED

kafka.coordinator.group.GroupMetadataManagerTest > testAddGroup PASSED

kafka.coordinator.group.GroupMetadataManagerTest > 
testLoadGroupWithLargeGroupMetadataRecord STARTED

kafka.coordinator.group.GroupMetadataManagerTest > 

[jira] [Created] (KAFKA-7796) structured streaming fetched wrong current offset from kafka

2019-01-08 Thread Ryne Yang (JIRA)
Ryne Yang created KAFKA-7796:


 Summary: structured streaming fetched wrong current offset from 
kafka
 Key: KAFKA-7796
 URL: https://issues.apache.org/jira/browse/KAFKA-7796
 Project: Kafka
  Issue Type: Bug
  Components: consumer
 Environment: Linux, Centos 7
Reporter: Ryne Yang


When running Spark structured streaming using the lib `"org.apache.spark" %% 
"spark-sql-kafka-0-10" % "2.4.0"`, we keep getting errors regarding current 
offset fetching:
{code:java}
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: 
Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 
0.0 (TID 3, qa2-hdp-4.acuityads.org, executor 2): java.lang.AssertionError: 
assertion failed: latest offset -9223372036854775808 does not equal -1
at scala.Predef$.assert(Predef.scala:170)
at 
org.apache.spark.sql.kafka010.KafkaMicroBatchInputPartitionReader.resolveRange(KafkaMicroBatchReader.scala:371)
at 
org.apache.spark.sql.kafka010.KafkaMicroBatchInputPartitionReader.(KafkaMicroBatchReader.scala:329)
at 
org.apache.spark.sql.kafka010.KafkaMicroBatchInputPartition.createPartitionReader(KafkaMicroBatchReader.scala:314)
at 
org.apache.spark.sql.execution.datasources.v2.DataSourceRDD.compute(DataSourceRDD.scala:42)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at 
org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}
For some reason, it looks like fetchLatestOffset returned Long.MIN_VALUE for one 
of the partitions. I checked the structured streaming checkpoint and it was 
correct; it is the currentAvailableOffset that was set to Long.MIN_VALUE.

kafka broker version: 1.1.0.
lib we used:

{{libraryDependencies += "org.apache.spark" %% "spark-sql-kafka-0-10" % "2.4.0" 
}}

how to reproduce:
basically, we started a structured streaming job and subscribed to a topic with 4 
partitions, then produced some messages into the topic; the job crashed and 
logged the stack trace above.

also the committed offsets seem fine as we see in the logs: 
{code:java}
=== Streaming Query ===
Identifier: [id = c46c67ee-3514-4788-8370-a696837b21b1, runId = 
31878627-d473-4ee8-955d-d4d3f3f45eb9]
Current Committed Offsets: {KafkaV2[Subscribe[REVENUEEVENT]]: 
{"REVENUEEVENT":{"0":1}}}
Current Available Offsets: {KafkaV2[Subscribe[REVENUEEVENT]]: 
{"REVENUEEVENT":{"0":-9223372036854775808}}}
{code}
So Spark recorded the correct committed offset for partition 0, but the current 
available offset returned from Kafka shows Long.MIN_VALUE. 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7795) Broker fails on duplicate configuration keys

2019-01-08 Thread David Arthur (JIRA)
David Arthur created KAFKA-7795:
---

 Summary: Broker fails on duplicate configuration keys
 Key: KAFKA-7795
 URL: https://issues.apache.org/jira/browse/KAFKA-7795
 Project: Kafka
  Issue Type: Improvement
  Components: config
Reporter: David Arthur


It would be nice to perform some basic validation of the broker config so that 
things aren't unexpectedly overwritten.

For example, 

{code}
unclean.leader.election.enable=false

// many other lines of config

unclean.leader.election.enable=true
{code}
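
One possible shape for such a check (a rough sketch, not a patch; it assumes the file is 
read via plain java.util.Properties, and the class and file names are illustrative): 
Properties silently keeps the last value for a repeated key, so overriding put() during 
load() is enough to flag duplicates:

{code}
import java.io.FileReader;
import java.io.IOException;
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;

public class DuplicateKeyCheckingProperties extends Properties {
    private final Set<Object> seen = new HashSet<>();

    @Override
    public synchronized Object put(Object key, Object value) {
        // Properties.load() funnels every parsed line through put(), so a repeated key shows up here.
        if (!seen.add(key)) {
            throw new IllegalArgumentException("Duplicate configuration key: " + key);
        }
        return super.put(key, value);
    }

    public static void main(String[] args) throws IOException {
        Properties props = new DuplicateKeyCheckingProperties();
        try (FileReader reader = new FileReader("server.properties")) { // placeholder path
            props.load(reader); // throws if e.g. unclean.leader.election.enable appears twice
        }
    }
}
{code}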



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-396: Add Commit/List Offsets Operations to AdminClient

2019-01-08 Thread Edoardo Comar
+1 (non-binding)
Thanks Mickael!

On Tue, 8 Jan 2019 at 17:39, Patrik Kleindl  wrote:

> +1 (non-binding)
> Thanks, sounds very helpful
> Best regards
> Patrik
>
> > > On 08.01.2019 at 18:10, Mickael Maison wrote:
> >
> > Hi all,
> >
> > I'd like to start the vote on KIP-396:
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97551484
> >
> > Thanks
>


-- 
"When the people fear their government, there is tyranny; when the
government fears the people, there is liberty." [Thomas Jefferson]


Re: [VOTE] KIP-396: Add Commit/List Offsets Operations to AdminClient

2019-01-08 Thread Patrik Kleindl
+1 (non-binding)
Thanks, sounds very helpful
Best regards
Patrik

> On 08.01.2019 at 18:10, Mickael Maison wrote:
> 
> Hi all,
> 
> I'd like to start the vote on KIP-396:
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97551484
> 
> Thanks


[jira] [Created] (KAFKA-7794) kafka.tools.GetOffsetShell does not return the offset in some cases

2019-01-08 Thread Daniele Ascione (JIRA)
Daniele Ascione created KAFKA-7794:
--

 Summary: kafka.tools.GetOffsetShell does not return the offset in 
some cases
 Key: KAFKA-7794
 URL: https://issues.apache.org/jira/browse/KAFKA-7794
 Project: Kafka
  Issue Type: Bug
  Components: tools
Affects Versions: 0.10.2.2, 0.10.2.1, 0.10.2.0
Reporter: Daniele Ascione


For some timestamp inputs (different from -1 or -2), GetOffsetShell is not able 
to retrieve the offset.


For example, if _x_ is a timestamp in that non-working range and you execute:
{code:java}
bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list $KAFKA_ADDRESS 
--topic $MY_TOPIC --time x
{code}
The output is:
{code:java}
MY_TOPIC:8:
MY_TOPIC:2:
MY_TOPIC:5:
MY_TOPIC:4:
MY_TOPIC:7:
MY_TOPIC:1:
MY_TOPIC:9:{code}
whereas an integer representing the offset is expected after the last ":".

Steps to reproduce it:
 # Consume all the messages from the beginning, printing their timestamps:
{code:java}
bin/kafka-simple-consumer-shell.sh --no-wait-at-logend --broker-list 
$KAFKA_ADDRESS --topic $MY_TOPIC --property print.timestamp=true  > 
messages{code}

 # Sort the messages by timestamp and get some of the oldest messages:
{code:java}
 awk -F "CreateTime:" '{ print $2 }' messages | sort -n > msg_sorted{code}

 # Take (for example) the timestamp of the 10th oldest message, and verify that 
GetOffsetShell is not able to print the offset:
{code:java}
timestamp="$(sed '10q;d' msg_sorted | cut -f1)"

bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list $KAFKA_ADDRESS 
--topic $MY_TOPIC --time $timestamp

# The output should be something like:
# MY_TOPIC:1:
# MY_TOPIC:2:
(repeated for every partition){code}

 # Verify that the message with that timestamp is still in Kafka:
{code:java}
bin/kafka-simple-consumer-shell.sh --no-wait-at-logend --broker-list 
$KAFKA_ADDRESS --topic $MY_TOPIC --property print.timestamp=true | grep 
"CreateTime:$timestamp" {code}

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[VOTE] KIP-391: Allow Producing with Offsets for Cluster Replication

2019-01-08 Thread Edoardo Comar
Hi All,
I'd like to start a vote on KIP-391 

https://cwiki.apache.org/confluence/display/KAFKA/KIP-391%3A+Allow+Producing+with+Offsets+for+Cluster+Replication

thanks!

PS - Discussion thread 
https://lists.apache.org/list.html?dev@kafka.apache.org:lte=3M:KIP-391

--

Edoardo Comar

IBM Event Streams
IBM UK Ltd, Hursley Park, SO21 2JN

Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU


Re: [DISCUSS] KIP-409: Allow creating under-replicated topics and partitions

2019-01-08 Thread Mickael Maison
Hi,

We've not received any feedback yet on this KIP. We still believe this
would be a nice improvement.

Thanks

On Tue, Dec 18, 2018 at 4:27 PM Mickael Maison  wrote:
>
> Hi,
>
> We have submitted a KIP to handle topics and partitions creation when
> a cluster is not fully available:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-409%3A+Allow+creating+under-replicated+topics+and+partitions
>
> As always, we welcome feedback and suggestions.
>
> Thanks
> Mickael and Edoardo


[VOTE] KIP-396: Add Commit/List Offsets Operations to AdminClient

2019-01-08 Thread Mickael Maison
Hi all,

I'd like to start the vote on KIP-396:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97551484

Thanks


[jira] [Resolved] (KAFKA-7253) The connector type responded by worker is always null when creating connector

2019-01-08 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-7253.

   Resolution: Fixed
Fix Version/s: 2.0.2
   2.1.1
   2.2.0

> The connector type responded by worker is always null when creating connector
> -
>
> Key: KAFKA-7253
> URL: https://issues.apache.org/jira/browse/KAFKA-7253
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.2.0, 2.1.1, 2.0.2
>
>
> {code}
> // Note that we use the updated connector config despite the fact that we don't have an updated
> // snapshot yet. The existing task info should still be accurate.
> Map<String, String> map = configState.connectorConfig(connName);
> ConnectorInfo info = new ConnectorInfo(connName, config, configState.tasks(connName),
>         map == null ? null : connectorTypeForClass(map.get(ConnectorConfig.CONNECTOR_CLASS_CONFIG)));
> {code}
> The null map causes the null type in the response. The connector class name can 
> be taken from the config in the request instead, since we require the config 
> to contain the connector class name.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk11 #189

2019-01-08 Thread Apache Jenkins Server
See 




[jira] [Reopened] (KAFKA-5117) Kafka Connect REST endpoints reveal Password typed values

2019-01-08 Thread Randall Hauch (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch reopened KAFKA-5117:
--
  Assignee: Randall Hauch  (was: Ewen Cheslack-Postava)

I'm reopening this because there appears to be a bug in the worker that 
incorrectly returns the resolved secret values when getting the connector config 
from the REST API. Instead, the REST API should return the config with the raw 
configuration values, including placeholders of the form `${provider:path:key}`.

[KIP-297|https://cwiki.apache.org/confluence/display/KAFKA/KIP-297%3A+Externalizing+Secrets+for+Connect+Configurations]
 is actually not really clear on whether the REST API would be affected. IMO its 
behavior should not have been changed by the implementation of KIP-297: the REST 
API should return the connector configuration as submitted via PUT and stored by 
the worker, just as before KIP-297. Only that behavior works with tooling that 
edits connector configurations via GET and PUT operations. IMO, *any other 
behavioral changes should be determined through a new KIP.*

Here are the details. Consider a standalone worker config that defines a config 
provider (using the example `{{FileConfigProvider}}` added by KIP-297):
{code}
...
# Define a config provider that reads from any file
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
{code}

Then, create a connector with at least one property with an externalized 
placeholder for its value:

{code}
name=my-connector
...
connection.user=foobar
connection.password=${file:/path/to/secret.properties:db.password}
...
{code}

where the `{{/path/to/secret.properties}}` file contains:

{code}
...
db.password=my-secret
...
{code}

then Connect will use the FileConfigProvider to replace the placeholder in the 
`{{connection.password}}` value with the `{{db.password}}` property from the 
`{{/path/to/secret.properties}}` file *before* these properties are given to 
the connector upon startup.
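
(For reference, that resolution step can be reproduced with the public provider API 
alone; a minimal sketch, where the path and key are the placeholders from the example 
above:)

{code}
import java.io.IOException;
import java.util.Collections;
import org.apache.kafka.common.config.ConfigData;
import org.apache.kafka.common.config.provider.FileConfigProvider;

public class ResolveSecretExample {
    public static void main(String[] args) throws IOException {
        try (FileConfigProvider provider = new FileConfigProvider()) {
            provider.configure(Collections.emptyMap());
            // This is the lookup behind ${file:/path/to/secret.properties:db.password}
            ConfigData data = provider.get("/path/to/secret.properties",
                    Collections.singleton("db.password"));
            System.out.println("connection.password = " + data.data().get("db.password"));
        }
    }
}
{code}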

In fact, this works just fine. The problem is that when we perform a 
`{{GET connectors/my-connector/}}` we get the following:

{code}
{
  "name": "my-connector",
  "config": {
"connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
"mode": "incrementing",
"incrementing.column.name": "id",
"topic.prefix": "jdbc-",
"connection.password": "my-secret",
"connection.user": "foobar",
...
{code}

Note how the `{{connection.password}}` property is `{{my-secret}}` but should 
instead be `{{${file:/path/to/secret.properties:db.password}}}`.



> Kafka Connect REST endpoints reveal Password typed values
> -
>
> Key: KAFKA-5117
> URL: https://issues.apache.org/jira/browse/KAFKA-5117
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Thomas Holmes
>Assignee: Randall Hauch
>Priority: Major
>  Labels: needs-kip
> Fix For: 2.0.0
>
>
> A Kafka Connect connector can specify ConfigDef keys as type of Password. 
> This type was added to prevent logging the values (instead "[hidden]" is 
> logged).
> This change does not apply to the values returned by executing a GET on 
> {{connectors/\{connector-name\}}} and 
> {{connectors/\{connector-name\}/config}}. This creates an easily accessible 
> way for an attacker who has infiltrated your network to gain access to 
> potential secrets that should not be available.
> I have started on a code change that addresses this issue by parsing the 
> config values through the ConfigDef for the connector and returning their 
> output instead (which leads to the masking of Password typed configs as 
> [hidden]).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Implementing a custom request

2019-01-08 Thread grosserl
Dear all, I'm planning to implement a custom network request where a 
Consumer requests some partition metadata from one or more brokers.
I would like to know if there are any references on what to consider 
when implementing a new request. Since there is already quite a range 
of different network requests, I guess someone has figured out an 
efficient way of adding a new one. :)
To my knowledge I'll have to create a custom Builder class that builds 
the new request (similar to the MetadataRequest.java class), add the 
corresponding API keys, and then create a new handleCustomRequest 
function which returns the requested metadata. Does that sound about 
right? I'm not looking for code but rather for things I would have to 
consider.

Best regards


Re: [DISCUSS] KIP-252: Extend ACLs to allow filtering based on ip ranges and subnets

2019-01-08 Thread Sönke Liebau
Hi Colin,

thanks for your response!

In theory we could get away without any additional path changes, I
think. I am still somewhat unsure about the best way of addressing
this, so I'll outline my current idea and the concerns that I still
have; maybe you have some thoughts on it.

ACLs are currently stored in two places in ZK: /kafka-acl and
/kafka-acl-extended based on whether they make use of prefixes or not.
The reasoning[1] for this is not fundamentally changed by anything we
are discussing here, so I think that split will need to remain.

ACLs are then stored in the form of a JSON array:
[zk: 127.0.0.1:2181(CONNECTED) 9] get /kafka-acl/Topic/*
{"version":1,"acls":[{"principal":"User:sliebau","permissionType":"Allow","operation":"Read","host":"*"},{"principal":"User:sliebau","permissionType":"Allow","operation":"Describe","host":"*"},{"principal":"User:sliebau2","permissionType":"Allow","operation":"Describe","host":"*"},{"principal":"User:sliebau2","permissionType":"Allow","operation":"Read","host":"*"}]}

What we could do is add a version property to the individual ACL
elements like so:
[
  {
"principal": "User:sliebau",
"permissionType": "Allow",
"operation": "Read",
"host": "*",
"acl_version": "1"
  }
]

We define the current state of ACLs as version 0 and the Authorizer
will default a missing "acl_version" element to this value for
backwards compatibility. So there should hopefully be no need to
migrate existing ACLs (concerns notwithstanding, see later).
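
To make that defaulting concrete, here is a rough sketch of the lookup
(illustrative only, written in Java for brevity; the field names follow
the proposal above and Jackson is assumed on the classpath):

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class AclVersionParser {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    public static void main(String[] args) throws Exception {
        // Example of an existing, pre-versioning ACL node as stored in ZK today.
        String node = "{\"version\":1,\"acls\":[{\"principal\":\"User:sliebau\","
                + "\"permissionType\":\"Allow\",\"operation\":\"Read\",\"host\":\"*\"}]}";

        for (JsonNode acl : MAPPER.readTree(node).get("acls")) {
            // A missing field marks a pre-versioning ACL and is treated as version 0.
            int aclVersion = acl.has("acl_version") ? acl.get("acl_version").asInt() : 0;
            System.out.println(acl.get("principal").asText() + " -> acl_version " + aclVersion);
        }
    }
}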

Additionally, the authorizer would get a max_supported_acl_version
setting which causes it to ignore any ACLs with a version higher than
that value, allowing for controlled upgrades similar to the
inter-broker protocol version process. If ACLs are ignored for this
reason we should probably log a warning in case it was unintentional;
maybe we even add a setting that controls whether startup is possible
when not all ACLs are in effect.

As I mentioned, I still have a few concerns and open questions on this:
- This approach would necessitate staying backwards compatible with all
earlier versions of ACLs unless we also add a min_acl_version setting -
which would put the topic of ACL migrations back on the agenda.
- Do we need to touch the wire protocol for the admin client for this?
In theory I think not, as the authorizer would write ACLs in the most
current version it knows (unless forced down by max_acl_version), but
this takes any control over the version away from the user.
- This adds JSON parsing logic to the Authorizer, as it would have to
check the version first, look up the proper ACL schema for that
version and then re-parse the ACL string with that schema. That should
not be a real issue if the initial parsing is robust, but strictly
speaking we are parsing something we don't know the schema of, which
might create issues with updates down the line.

Beyond the practical concerns outlined above there are also some
broader things worth thinking about. The long-term goal is to move
away from ZooKeeper, and other data like consumer group offsets have
already been moved into Kafka topics - is that something we'd want to
consider for ACLs as well? With the current storage model we'd need
more than one topic to cleanly separate resources and prefixed ACLs -
if we consider pursuing this option it might be a chance for a
"larger" change to the format which introduces versioning and allows
storing everything in one compacted topic.

Any thoughts on this?

Best regards,
Sönke



[1] 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-290%3A+Support+for+Prefixed+ACLs


On Sat, Dec 22, 2018 at 5:51 AM Colin McCabe  wrote:
>
> Hi Sönke,
>
> One path forward would be to forbid the new ACL types from being created 
> until the inter-broker protocol had been upgraded.  We'd also have to figure 
> out how the new ACLs were stored in ZooKeeper.  There are a bunch of 
> proposals in this thread that could work for that-- I really hope we don't 
> keep changing the ZK path each time there is a version bump.
>
> best,
> Colin
>
>
> On Thu, Nov 29, 2018, at 14:25, Sönke Liebau wrote:
> > This has been dormant for a while now, can I interest anybody in chiming in
> > here?
> >
> > I think we need to come up with an idea of how to handle changes to ACLs
> > going forward, i.e. some sort of versioning scheme. Not necessarily what I
> > proposed in my previous mail, but something.
> > Currently this fairly simple change is stuck due to this being unsolved.
> >
> > I am happy to move forward without addressing the larger issue (I think the
> > issue raised by Colin is valid but could be mitigated in the release
> > notes), but that would mean that the next KIP to touch ACLs would inherit
> > the issue, which somehow doesn't seem right.
> >
> > Looking forward to your input :)
> >
> > Best regards,
> > Sönke
> >
> > On Tue, Jun 19, 2018 at 5:32 PM Sönke Liebau 
> > wrote:
> >
> > > Picking this back up, now that KIP-290 has been merged..
>