[jira] [Resolved] (KAFKA-14079) Source task will not commit offsets and develops memory leak if "error.tolerance" is set to "all"

2022-07-18 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-14079.
---
  Reviewer: Randall Hauch
Resolution: Fixed

> Source task will not commit offsets and develops memory leak if 
> "error.tolerance" is set to "all"
> -
>
> Key: KAFKA-14079
> URL: https://issues.apache.org/jira/browse/KAFKA-14079
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 3.2.0
>Reporter: Christopher L. Shannon
>Assignee: Christopher L. Shannon
>Priority: Critical
> Fix For: 3.3.0, 3.2.1
>
>
> KAFKA-13348 added the ability to ignore producer exceptions by setting 
> {{error.tolerance}} to {{all}}. When this is set, a null record metadata is 
> passed to commitRecord() and the task continues.
> The issue is that records are tracked by {{SubmittedRecords}}, and the first 
> time an error happens the code does not ack the failed record; it just skips 
> it, so the record's offsets are never committed and it is never removed from 
> {{SubmittedRecords}} before commitRecord() is called.
> This leads to a bug where future offsets are no longer committed, and also to 
> a memory leak: the algorithm that removes acked records from the internal map 
> in order to commit offsets 
> [looks|https://github.com/apache/kafka/blob/3.2.0/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/SubmittedRecords.java#L177]
>  at the head of the Deque where the records are tracked, and if it sees that 
> the head record is unacked it will not process any more removals. As a 
> result, all new records that go through the task continue to be added, never 
> have their offsets committed, and are never removed from tracking until an 
> OOM error occurs.
> The fix is to ack the failed records so their offsets can be committed and 
> they can be removed from tracking. This is safe because the records are 
> intended to be skipped and not reprocessed. Metrics also need to be updated.
>  
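> A minimal, self-contained illustration of the Deque-head algorithm described 
> above (a standalone demo, not the actual {{SubmittedRecords}} class):
> {code:java}
> import java.util.ArrayDeque;
> import java.util.Deque;
> 
> public class AckedDequeDemo {
>     static class Record {
>         final int offset;
>         boolean acked;
>         Record(int offset) { this.offset = offset; }
>     }
> 
>     public static void main(String[] args) {
>         Deque<Record> submitted = new ArrayDeque<>();
>         for (int i = 0; i < 5; i++) submitted.add(new Record(i));
> 
>         // Records 1..4 are acked, but record 0 failed and was skipped without an ack.
>         submitted.stream().skip(1).forEach(r -> r.acked = true);
> 
>         // Drain logic: stop at the first unacked record, like the linked
>         // SubmittedRecords algorithm that computes committable offsets.
>         while (!submitted.isEmpty() && submitted.peek().acked) submitted.poll();
>         System.out.println("Still tracked: " + submitted.size()); // prints 5
> 
>         // The fix: ack tolerated failures too, and the drain proceeds.
>         submitted.peek().acked = true;
>         while (!submitted.isEmpty() && submitted.peek().acked) submitted.poll();
>         System.out.println("Still tracked: " + submitted.size()); // prints 0
>     }
> }
> {code}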



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-13770) Regression when Connect uses 0.10.x brokers due to recently added retry logic in KafkaBasedLog

2022-03-24 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-13770.
---
  Reviewer: Konstantine Karantasis
Resolution: Fixed

> Regression when Connect uses 0.10.x brokers due to recently added retry logic 
> in KafkaBasedLog
> --
>
> Key: KAFKA-13770
> URL: https://issues.apache.org/jira/browse/KAFKA-13770
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.5.2, 2.8.2, 3.2.0, 3.1.1, 3.0.2, 2.7.3, 2.6.4
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Blocker
> Fix For: 2.8.2, 3.2.0, 3.1.1, 3.0.2, 2.7.3, 2.6.4
>
>
> KAFKA-12879 recently modified Connect's `KafkaBasedLog` class to add retry 
> logic when trying to get the latest offsets for the topic as the 
> `KafkaBasedLog` starts up. This method calls a new method in `TopicAdmin` to 
> read the latest offsets using retries.
> When Connect is using an old broker (version 0.10.x or earlier), the old 
> `KafkaBasedLog` logic would catch the `UnsupportedVersionException` thrown by 
> the `TopicAdmin` method, and use the consumer to read offsets instead. The 
> new retry logic unfortunately _wrapped_ the `UnsupportedVersionException` in 
> a `ConnectException`, which means the `KafkaBasedLog` logic doesn't degrade 
> and use the consumer, and instead fails.
> The `TopicAdmin.retryEndOffsets(...)` method should propagate the 
> `UnsupportedVersionException` rather than wrapping it. All other exceptions 
> from the admin client are either retriable or already wrapped by a 
> `ConnectException`. Therefore, it appears that `UnsupportedVersionException` 
> is the only special case here.
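> A hedged sketch of that change (illustrative names; not the actual 
> {{TopicAdmin}} source):
> {code:java}
> import java.util.Map;
> import java.util.function.Supplier;
> import org.apache.kafka.common.TopicPartition;
> import org.apache.kafka.common.errors.UnsupportedVersionException;
> import org.apache.kafka.connect.errors.ConnectException;
> 
> public class RetryEndOffsetsSketch {
>     static Map<TopicPartition, Long> retryEndOffsets(
>             Supplier<Map<TopicPartition, Long>> endOffsets, int maxAttempts) {
>         for (int attempt = 1; ; attempt++) {
>             try {
>                 return endOffsets.get();
>             } catch (UnsupportedVersionException e) {
>                 // Propagate unwrapped so KafkaBasedLog can degrade to the
>                 // consumer-based offset read on old (0.10.x) brokers.
>                 throw e;
>             } catch (RuntimeException e) {
>                 if (attempt >= maxAttempts)
>                     throw new ConnectException("Failed to read end offsets", e);
>                 // otherwise retry
>             }
>         }
>     }
> }
> {code}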
> KAFKA-12879 was backported to a lot of branches (though only the revert was 
> merged to 2.5), so this new fix should be as well. It does not appear that 
> any releases were made from any of those branches with the KAFKA-12879 change.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (KAFKA-13770) Regression when Connect uses 0.10.x brokers due to recently added retry logic in KafkaBasedLog

2022-03-24 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-13770:
-

 Summary: Regression when Connect uses 0.10.x brokers due to 
recently added retry logic in KafkaBasedLog
 Key: KAFKA-13770
 URL: https://issues.apache.org/jira/browse/KAFKA-13770
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.5.2, 2.8.2, 3.2.0, 3.1.1, 3.0.2, 2.7.3, 2.6.4
Reporter: Randall Hauch
 Fix For: 2.5.2, 2.8.2, 3.2.0, 3.1.1, 3.0.2, 2.7.3, 2.6.4


KAFKA-12879 recently modified Connect's `KafkaBasedLog` class to add retry 
logic when trying to get the latest offsets for the topic as the 
`KafkaBasedLog` starts up. This method calls a new method in `TopicAdmin` to 
read the latest offsets using retries.

When Connect is using an old broker (version 0.10.x or earlier), the old 
`KafkaBasedLog` logic would catch the `UnsupportedVersionException` thrown by 
the `TopicAdmin` method, and use the consumer to read offsets instead. The new 
retry logic unfortunately _wrapped_ the `UnsupportedVersionException` in a 
`ConnectException`, which means the `KafkaBasedLog` logic doesn't degrade and 
use the consumer, and instead fails.

The `TopicAdmin.retryEndOffsets(...)` method should propagate the 
`UnsupportedVersionException` rather than wrapping it. All other exceptions 
from the admin client are either retriable or already wrapped by a 
`ConnectException`. Therefore, it appears that `UnsupportedVersionException` is 
the only special case here.

KAFKA-12879 was backported to a lot of branches, so this new fix should be as 
well. It does not appear any releases were made with the KAFKA-12879 change.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (KAFKA-12879) Compatibility break in Admin.listOffsets()

2022-03-10 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-12879.
---
  Reviewer: Randall Hauch
Resolution: Fixed

> Compatibility break in Admin.listOffsets()
> --
>
> Key: KAFKA-12879
> URL: https://issues.apache.org/jira/browse/KAFKA-12879
> Project: Kafka
>  Issue Type: Bug
>  Components: admin
>Affects Versions: 2.8.0, 2.7.1, 2.6.2
>Reporter: Tom Bentley
>Assignee: Philip Nee
>Priority: Major
> Fix For: 2.5.2, 2.8.2, 3.2.0, 3.1.1, 3.0.2, 2.7.3, 2.6.4
>
>
> KAFKA-12339 incompatibly changed the semantics of Admin.listOffsets(). 
> Previously it would fail with {{UnknownTopicOrPartitionException}} when a 
> topic didn't exist. Now it will (eventually) fail with {{TimeoutException}}. 
> It seems this was more or less intentional, even though it would break code 
> which was expecting and handling the {{UnknownTopicOrPartitionException}}. A 
> workaround is to use {{retries=1}} and inspect the cause of the 
> {{TimeoutException}} (sketched below), but this isn't really suitable for 
> cases where the same Admin client instance is being used for other calls for 
> which retries are desirable.
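> A hedged sketch of that workaround (the exact cause chain may vary by client 
> version):
> {code:java}
> import java.util.Map;
> import java.util.Properties;
> import java.util.concurrent.ExecutionException;
> import org.apache.kafka.clients.admin.Admin;
> import org.apache.kafka.clients.admin.AdminClientConfig;
> import org.apache.kafka.clients.admin.OffsetSpec;
> import org.apache.kafka.common.TopicPartition;
> import org.apache.kafka.common.errors.TimeoutException;
> import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;
> 
> public class ListOffsetsWorkaround {
>     public static void main(String[] args) throws Exception {
>         Properties props = new Properties();
>         props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
>         props.put(AdminClientConfig.RETRIES_CONFIG, 1); // dedicated client: fail fast
>         try (Admin admin = Admin.create(props)) {
>             TopicPartition tp = new TopicPartition("maybe-missing-topic", 0);
>             try {
>                 long end = admin.listOffsets(Map.of(tp, OffsetSpec.latest()))
>                                 .partitionResult(tp).get().offset();
>                 System.out.println("End offset: " + end);
>             } catch (ExecutionException e) {
>                 // Inspect the cause chain for the "topic missing" case.
>                 Throwable cause = e.getCause();
>                 if (cause instanceof TimeoutException
>                         && cause.getCause() instanceof UnknownTopicOrPartitionException) {
>                     System.out.println("Topic does not exist");
>                 } else {
>                     throw e;
>                 }
>             }
>         }
>     }
> }
> {code}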
> Furthermore as well as the intended effect on {{listOffsets()}} it seems that 
> the change could actually affect other methods of Admin.
> More generally, the Admin client API is vague about which exceptions can 
> propagate from which methods. This means that it's not possible to say, in 
> cases like this, whether the calling code _should_ have been relying on the 
> {{UnknownTopicOrPartitionException}} or not.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (KAFKA-13118) Backport KAFKA-9887 to 3.0 branch after 3.0.0 release

2021-10-28 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-13118.
---
Resolution: Fixed

Backported to the `3.0` branch. Will be in the 3.0.1 release.

> Backport KAFKA-9887 to 3.0 branch after 3.0.0 release
> -
>
> Key: KAFKA-13118
> URL: https://issues.apache.org/jira/browse/KAFKA-13118
> Project: Kafka
>  Issue Type: Task
>  Components: KafkaConnect
>Affects Versions: 3.0.1
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Blocker
> Fix For: 3.0.1
>
>
> We need to backport the fix (commit hash `0314801a8e`) for KAFKA-9887 to the 
> `3.0` branch. That fix was merged to `trunk`, `2.8`, and `2.7` _after_ the 
> 3.0 code freeze, and that issue is not a blocker or regression.
> Be sure to update the "fix version" on KAFKA-9887 when the backport is 
> complete.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13118) Backport KAFKA-9887 to 3.0 branch after 3.0.0 release

2021-07-21 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-13118:
-

 Summary: Backport KAFKA-9887 to 3.0 branch after 3.0.0 release
 Key: KAFKA-13118
 URL: https://issues.apache.org/jira/browse/KAFKA-13118
 Project: Kafka
  Issue Type: Task
  Components: KafkaConnect
Affects Versions: 3.0.1
Reporter: Randall Hauch
Assignee: Randall Hauch
 Fix For: 3.0.1


We need to backport the fix (commit hash `0314801a8e`) for KAFKA-9887 to the 
`3.0` branch. That fix was merged to `trunk`, `2.8`, and `2.7` _after_ the 3.0 
code freeze, and that issue is not a blocker or regression.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-13035) Kafka Connect: Update documentation for POST /connectors/(string: name)/restart to include task Restart behavior

2021-07-06 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-13035.
---
Fix Version/s: 3.0.0
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to `trunk` in time for 3.0.0

> Kafka Connect: Update documentation for POST /connectors/(string: 
> name)/restart to include task Restart behavior  
> --
>
> Key: KAFKA-13035
> URL: https://issues.apache.org/jira/browse/KAFKA-13035
> Project: Kafka
>  Issue Type: Task
>  Components: KafkaConnect
>Reporter: Kalpesh Patel
>Assignee: Kalpesh Patel
>Priority: Minor
> Fix For: 3.0.0
>
>
> KAFKA-4793 updated the behavior of POST /connectors/(string: name)/restart 
> based on the query parameters onlyFailed and includeTasks, per 
> [KIP-745|https://cwiki.apache.org/confluence/display/KAFKA/KIP-745%3A+Connect+API+to+restart+connector+and+tasks].
>  We should update the documentation to reflect this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13028) AbstractConfig should allow config provider configuration to use variables referencing other config providers earlier in the list

2021-07-02 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-13028:
-

 Summary: AbstractConfig should allow config provider configuration 
to use variables referencing other config providers earlier in the list
 Key: KAFKA-13028
 URL: https://issues.apache.org/jira/browse/KAFKA-13028
 Project: Kafka
  Issue Type: Improvement
  Components: clients, KafkaConnect
Reporter: Randall Hauch


When AbstractConfig recognizes config provider properties, it instantiates all 
of the config providers first and then uses those config providers to resolve 
any variables in remaining configurations. This means that if you define two 
config providers with:

{code}
config.providers=providerA,providerB
...
{code}
then the configuration properties for the second provider (e.g., `providerB`) 
cannot use variables that reference the first provider (e.g., `providerA`). In 
other words, this is not possible:

{code}
config.providers=providerA,providerB
config.providers.providerA.class=FileConfigProvider
config.providers.providerB.class=ComplexConfigProvider
config.providers.providerB.param.client.key=${file:/usr/secrets:complex.client.key}
config.providers.providerB.param.client.secret=${file:/usr/secrets:complex.client.secret}
{code}

This should be possible if the config providers are instantiated and configured 
in the same order as they appear in the `config.providers` property. The 
benefit is that it allows another level of indirection, so that any secrets 
required by a config provider can be resolved using an earlier, simpler config 
provider. A minimal sketch of this ordering is shown below.
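A minimal sketch of the proposed ordering, under the assumption that declared 
order is preserved (illustrative code, not the actual `AbstractConfig` 
implementation):

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.kafka.common.config.provider.ConfigProvider;

public class OrderedProviderSetup {
    static Map<String, ConfigProvider> createProviders(
            Map<String, Class<? extends ConfigProvider>> declared,  // insertion-ordered
            Map<String, Map<String, Object>> providerConfigs) throws Exception {
        Map<String, ConfigProvider> ready = new LinkedHashMap<>();
        for (Map.Entry<String, Class<? extends ConfigProvider>> e : declared.entrySet()) {
            // Resolve this provider's own config using only the providers
            // that appear earlier in the config.providers list.
            Map<String, Object> cfg = resolveVariables(providerConfigs.get(e.getKey()), ready);
            ConfigProvider provider = e.getValue().getDeclaredConstructor().newInstance();
            provider.configure(cfg);
            ready.put(e.getKey(), provider);
        }
        return ready;
    }

    // Placeholder for the existing ${provider:path:key} substitution logic,
    // limited here to the providers instantiated so far.
    static Map<String, Object> resolveVariables(Map<String, Object> raw,
                                                Map<String, ConfigProvider> available) {
        return raw; // real resolution elided
    }
}
{code}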

For example, config providers are often defined in Connect worker 
configurations to resolve secrets within connector configurations, or to 
resolve secrets within the worker configuration itself (e.g., producer or 
consumer secrets). But it would be useful to also be able to resolve the 
secrets needed by one configuration provider using another configuration 
provider that is defined earlier in the list.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12717) Remove internal converter config properties

2021-07-01 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-12717.
---
  Reviewer: Randall Hauch
Resolution: Fixed

Merged to the `trunk` branch for 3.0.0.

> Remove internal converter config properties
> ---
>
> Key: KAFKA-12717
> URL: https://issues.apache.org/jira/browse/KAFKA-12717
> Project: Kafka
>  Issue Type: Task
>  Components: KafkaConnect
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
>  Labels: needs-kip
> Fix For: 3.0.0
>
>
> KAFKA-5540 / 
> [KIP-174|https://cwiki.apache.org/confluence/display/KAFKA/KIP-174+-+Deprecate+and+remove+internal+converter+configs+in+WorkerConfig]
>  deprecated but did not officially remove Connect's internal converter worker 
> config properties. With the upcoming 3.0 release, we can make the 
> backwards-incompatible change of completely removing these properties once 
> and for all.
>  
> One migration path for users who may still be running Connect clusters with 
> different internal converters can be:
>  # Stop all workers on the cluster
>  # For each internal topic (config, offsets, and status):
>  ## Create a new topic to take the place of the existing one
>  ## For every message in the existing topic:
>  ### Deserialize the message's key and value using the Connect cluster's old 
> internal key and value converters
>  ### Serialize the message's key and value using the [JSON 
> converter|https://github.com/apache/kafka/blob/trunk/connect/json/src/main/java/org/apache/kafka/connect/json/JsonConverter.java]
>  with schemas disabled (by setting the {{schemas.enable}} property to 
> {{false}}); see the sketch after this list
>  ### Write a message with the new key and value to the new internal topic
>  # Reconfigure each Connect worker to use the newly-created internal topics 
> from step 2
>  # Start all workers on the cluster
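> A hedged sketch of the per-message re-encoding in step 2 (assumes the old 
> converter instance and raw bytes are supplied by the surrounding migration 
> tool; this is not shipped Kafka code):
> {code:java}
> import java.util.Map;
> import org.apache.kafka.connect.data.SchemaAndValue;
> import org.apache.kafka.connect.json.JsonConverter;
> import org.apache.kafka.connect.storage.Converter;
> 
> public class InternalTopicMigration {
>     static byte[] reencode(Converter oldConverter, String topic, byte[] oldBytes) {
>         JsonConverter json = new JsonConverter();
>         // Schemas disabled, matching the internal-topic format the workers expect.
>         json.configure(Map.of("schemas.enable", "false"), false); // true for keys
>         SchemaAndValue decoded = oldConverter.toConnectData(topic, oldBytes);
>         return json.fromConnectData(topic, decoded.schema(), decoded.value());
>     }
> }
> {code}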



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12482) Remove deprecated rest.host.name and rest.port Connect worker configs

2021-06-23 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-12482.
---
  Reviewer: Randall Hauch
Resolution: Fixed

Merged to `trunk`.

> Remove deprecated rest.host.name and rest.port Connect worker configs
> -
>
> Key: KAFKA-12482
> URL: https://issues.apache.org/jira/browse/KAFKA-12482
> Project: Kafka
>  Issue Type: Task
>  Components: KafkaConnect
>Reporter: Randall Hauch
>Assignee: Kalpesh Patel
>Priority: Critical
> Fix For: 3.0.0
>
>
> The following Connect worker configuration properties were deprecated and 
> should be removed in 3.0.0:
>  * {{rest.host.name}} (deprecated in 
> [KIP-208|https://cwiki.apache.org/confluence/display/KAFKA/KIP-208%3A+Add+SSL+support+to+Kafka+Connect+REST+interface])
>  * {{rest.port}} (deprecated in 
> [KIP-208|https://cwiki.apache.org/confluence/display/KAFKA/KIP-208%3A+Add+SSL+support+to+Kafka+Connect+REST+interface])
> See KAFKA-12717 for removing the internal converter configurations:
>  * {{internal.key.converter}} (deprecated in 
> [KIP-174|https://cwiki.apache.org/confluence/display/KAFKA/KIP-174+-+Deprecate+and+remove+internal+converter+configs+in+WorkerConfig])
>  * {{internal.value.converter}} (deprecated in 
> [KIP-174|https://cwiki.apache.org/confluence/display/KAFKA/KIP-174+-+Deprecate+and+remove+internal+converter+configs+in+WorkerConfig])



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12484) Enable Connect's connector log contexts by default

2021-06-22 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-12484.
---
  Reviewer: Konstantine Karantasis
Resolution: Fixed

Merged to the `trunk` branch for inclusion in 3.0.0.

> Enable Connect's connector log contexts by default
> --
>
> Key: KAFKA-12484
> URL: https://issues.apache.org/jira/browse/KAFKA-12484
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Critical
>  Labels: needs-kip
> Fix For: 3.0.0
>
>
> Connect's Log4J configuration does not by default log the connector contexts. 
> That feature was added in 
> [KIP-449|https://cwiki.apache.org/confluence/display/KAFKA/KIP-449%3A+Add+connector+contexts+to+Connect+worker+logs]
>  and first appeared in AK 2.3.0, but it was not enabled by default since that 
> would not have been backward compatible.
> But with AK 3.0.0, we have the opportunity to change the default in 
> {{config/connect-log4j.properties}} to enable connector log contexts.
> See 
> [KIP-721|https://cwiki.apache.org/confluence/display/KAFKA/KIP-721%3A+Enable+connector+log+contexts+in+Connect+Log4j+configuration].
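> For reference, a sketch of the kind of change involved (the exact default 
> pattern shipped in 3.0.0 is defined by KIP-721 and may differ):
> {code}
> # config/connect-log4j.properties
> connect.log.pattern=[%d] %p %X{connector.context}%m (%c:%L)%n
> log4j.appender.stdout.layout.ConversionPattern=${connect.log.pattern}
> {code}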



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12483) Enable client overrides in connector configs by default

2021-06-22 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-12483.
---
Resolution: Fixed

Merged to the `trunk` branch for inclusion in 3.0.0.

> Enable client overrides in connector configs by default
> ---
>
> Key: KAFKA-12483
> URL: https://issues.apache.org/jira/browse/KAFKA-12483
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Critical
>  Labels: needs-kip
> Fix For: 3.0.0
>
>
> Connector-specific client overrides were added in 
> [KIP-458|https://cwiki.apache.org/confluence/display/KAFKA/KIP-458%3A+Connector+Client+Config+Override+Policy],
>  but that feature is not enabled by default since it would not have been 
> backward compatible.
> But with AK 3.0.0, we have the opportunity to enable connector client 
> overrides by default by changing the worker config's 
> {{connector.client.config.override.policy}} default value to {{All}}.
> See 
> [KIP-722|https://cwiki.apache.org/confluence/display/KAFKA/KIP-722%3A+Enable+connector+client+overrides+by+default].
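> A sketch of the effect of that default (worker and connector config, assuming 
> the KIP-458 override prefixes):
> {code}
> # Worker config: with KIP-722 this becomes the default
> connector.client.config.override.policy=All
> 
> # Connector config: per-connector client override, e.g. for a sink's consumer
> consumer.override.max.poll.records=5000
> {code}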



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12904) Connect's validate REST endpoint uses incorrect timeout

2021-06-07 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-12904:
-

 Summary: Connect's validate REST endpoint uses incorrect timeout
 Key: KAFKA-12904
 URL: https://issues.apache.org/jira/browse/KAFKA-12904
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.6.0
Reporter: Randall Hauch


The fix for KAFKA-9374 changed how the `ConnectorPluginsResource` and its 
method to validate connector configurations used the 
`ConnectorsResource.REQUEST_TIMEOUT_MS` constant (90 seconds). However, in 
doing so it introduced a bug where the timeout was actually 1000x longer than 
specified: the constant is a number of milliseconds, but it was passed as a 
number of seconds.

In particular, the following line is currently:
```
return validationCallback.get(ConnectorsResource.REQUEST_TIMEOUT_MS, TimeUnit.SECONDS);
```
but should be:
```
return validationCallback.get(ConnectorsResource.REQUEST_TIMEOUT_MS, TimeUnit.MILLISECONDS);
```




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-7749) confluent does not provide option to set consumer properties at connector level

2021-06-01 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-7749.
--
Fix Version/s: 2.3.0
   Resolution: Fixed

[KIP-458|https://cwiki.apache.org/confluence/display/KAFKA/KIP-458%3A+Connector+Client+Config+Override+Policy]
 introduced in AK 2.3.0 added support for connector-specific client overrides 
like the one described here.

Marking as resolved.

> confluent does not provide option to set consumer properties at connector 
> level
> ---
>
> Key: KAFKA-7749
> URL: https://issues.apache.org/jira/browse/KAFKA-7749
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Manjeet Duhan
>Priority: Major
> Fix For: 2.3.0
>
>
> _We want to increase consumer.max.poll.record to improve performance, but 
> this value can only be set in the worker properties, which apply to all 
> connectors in a given cluster._
> _Operative situation: we have one project communicating with Elasticsearch, 
> and we set consumer.max.poll.record=500 after multiple performance tests; 
> this worked fine for a year._
> _Then another project was onboarded in the same cluster which required 
> consumer.max.poll.record=5000 based on its performance tests. This 
> configuration was moved to production._
> _Admetric then started failing, as it was taking more than 5 minutes to 
> process 5000 polled records and started throwing CommitFailedException, which 
> is a vicious cycle as it will process the same data over and over again._
> _We can control the above if we start the consumer using plain Java, but this 
> control was not available per consumer in the Confluent connector._
> _I have overridden Kafka code to accept connector-level properties that are 
> applied to a single connector while others keep using the default properties. 
> These changes have been running in production for more than 5 months._
> _Some of the properties which were useful for us._
> max.poll.records
> max.poll.interval.ms
> request.timeout.ms
> key.deserializer
> value.deserializer
> heartbeat.interval.ms
> session.timeout.ms
> auto.offset.reset
> connections.max.idle.ms
> enable.auto.commit
>  
> auto.commit.interval.ms
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12252) Distributed herder tick thread loops rapidly when worker loses leadership

2021-05-06 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-12252.
---
Fix Version/s: 2.8.1
   3.0.0
   Resolution: Fixed

I'm still working on backporting this to the 2.7 and 2.6 branches. When I'm 
able to do that, I'll update the fix versions on this issue.

> Distributed herder tick thread loops rapidly when worker loses leadership
> -
>
> Key: KAFKA-12252
> URL: https://issues.apache.org/jira/browse/KAFKA-12252
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 3.0.0, 2.8.1
>
>
> When a new session key is read from the config topic, if the worker is the 
> leader, it [schedules a new key 
> rotation|https://github.com/apache/kafka/blob/5cf9cfcaba67cffa2435b07ade58365449c60bd9/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java#L1579-L1581].
>  The time between key rotations is configurable but defaults to an hour.
> The herder then continues its tick loop, which usually ends with a long poll 
> for rebalance activity. However, when a key rotation is scheduled, it will 
> [limit the time spent 
> polling|https://github.com/apache/kafka/blob/5cf9cfcaba67cffa2435b07ade58365449c60bd9/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java#L384-L388]
>  at the end of the tick loop in order to be able to perform the rotation.
> Once woken up, the worker checks to see if a key rotation is necessary and, 
> if so, [sets the expected key rotation time to 
> Long.MAX_VALUE|https://github.com/apache/kafka/blob/bf4afae8f53471ab6403cbbfcd2c4e427bdd4568/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java#L344],
>  then [writes a new session key to the config 
> topic|https://github.com/apache/kafka/blob/bf4afae8f53471ab6403cbbfcd2c4e427bdd4568/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java#L345-L348].
>  The problem is, [the worker only ever decides a key rotation is necessary if 
> it is still the 
> leader|https://github.com/apache/kafka/blob/5cf9cfcaba67cffa2435b07ade58365449c60bd9/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java#L456-L469].
>  If the worker is no longer the leader at the time of the key rotation 
> (likely due to falling out of the cluster after losing contact with the group 
> coordinator), its key expiration time won’t be reset, and the long poll for 
> rebalance activity at the end of the tick loop will be given a timeout of 0 
> ms and result in the tick loop being immediately restarted. Even if the 
> worker reads a new session key from the config topic, it’ll continue looping 
> like this since its scheduled key rotation won’t be updated. At this point, 
> the only thing that would help the worker get back into a healthy state would 
> be if it were made the leader of the cluster again.
> One possible fix could be to add a conditional check in the tick thread to 
> only limit the time spent on rebalance polling if the worker is currently the 
> leader.
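> A hedged sketch of that conditional (illustrative names only, not the actual 
> DistributedHerder code):
> {code:java}
> // Only the leader needs to wake up early to rotate the session key;
> // followers should poll for rebalance activity with the full timeout.
> long pollTimeoutMs = rebalancePollTimeoutMs;
> if (isLeader() && keyExpiration != Long.MAX_VALUE) {
>     long untilRotation = Math.max(0L, keyExpiration - time.milliseconds());
>     pollTimeoutMs = Math.min(pollTimeoutMs, untilRotation);
> }
> member.poll(pollTimeoutMs);
> {code}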



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8551) Comments for connectors() in Herder interface

2021-04-05 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-8551.
--
Resolution: Won't Fix

Marking as won't fix, since the details are insufficient to try to address.

> Comments for connectors() in Herder interface 
> --
>
> Key: KAFKA-8551
> URL: https://issues.apache.org/jira/browse/KAFKA-8551
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.2.1
>Reporter: Luying Liu
>Priority: Major
>
> There are mistakes in the comments for connectors() in the Herder interface. 
> The mistakes are in the file 
> [connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Herder.java|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Herder.java].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8664) non-JSON format messages when streaming data from Kafka to Mongo

2021-04-05 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-8664.
--
Resolution: Won't Fix

The reported problem is for a connector implementation that is not owned by the 
Apache Kafka project. Please report the issue with the provider of the 
connector.

> non-JSON format messages when streaming data from Kafka to Mongo
> 
>
> Key: KAFKA-8664
> URL: https://issues.apache.org/jira/browse/KAFKA-8664
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.2.0
>Reporter: Vu Le
>Priority: Major
> Attachments: MongoSinkConnector.properties, 
> log_error_when_stream_data_not_a_json_format.txt
>
>
> Hi team,
> I can stream data from Kafka to MongoDB with JSON messages. I use the MongoDB 
> Kafka Connector 
> ([https://github.com/mongodb/mongo-kafka/blob/master/docs/install.md]).
> However, if I send a message that is not in JSON format, the connector dies. 
> Please see the log file for details.
> My config file:
> {code:java}
> name=mongo-sink
> topics=test
> connector.class=com.mongodb.kafka.connect.MongoSinkConnector
> tasks.max=1
> key.ignore=true
> # Specific global MongoDB Sink Connector configuration
> connection.uri=mongodb://localhost:27017
> database=test_kafka
> collection=transaction
> max.num.retries=3
> retries.defer.timeout=5000
> type.name=kafka-connect
> key.converter=org.apache.kafka.connect.json.JsonConverter
> key.converter.schemas.enable=false
> value.converter=org.apache.kafka.connect.json.JsonConverter
> value.converter.schemas.enable=false
> {code}
> I have 2 separate questions:
>  # how to ignore messages which are not in JSON format?
>  # how to define a default key for this kind of message (for example: abc -> 
> \{ "non-json": "abc" } )
> Thanks



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8867) Kafka Connect JDBC fails to create PostgreSQL table with default boolean value in schema

2021-04-05 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-8867.
--
Resolution: Won't Fix

The reported problem is for the Confluent JDBC source/sink connector, and 
should be reported via that connector's GitHub repository issues.

> Kafka Connect JDBC fails to create PostgreSQL table with default boolean 
> value in schema
> 
>
> Key: KAFKA-8867
> URL: https://issues.apache.org/jira/browse/KAFKA-8867
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.3.0
>Reporter: Tudor
>Priority: Major
>
> The `CREATE TABLE ..` statement generated for JDBC sink connectors when 
> configured with `auto.create: true` generates field declarations that do not 
> conform to allowed PostgreSQL syntax when considering fields of type boolean 
> with default values.
> Example record value Avro schema:
> {code:java}
> {
>   "namespace": "com.test.avro.schema.v1",
>   "type": "record",
>   "name": "SomeEvent",
>   "fields": [
> {
>   "name": "boolean_field",
>   "type": "boolean",
>   "default": false
> }
>   ]
> }
> {code}
> The connector task fails with:  
> {code:java}
> ERROR WorkerSinkTask{id=test-events-sink-0} RetriableException from SinkTask: 
> (org.apache.kafka.connect.runtime.WorkerSinkTask:551)
> org.apache.kafka.connect.errors.RetriableException: java.sql.SQLException: 
> org.postgresql.util.PSQLException: ERROR: column "boolean_field" is of type 
> boolean but default expression is of type integer
>   Hint: You will need to rewrite or cast the expression.
>   at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:93)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:538)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
>   at 
> org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
>   at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748){code}
>  
> The generated SQL statement is: 
> {code:java}
> CREATE TABLE "test_data" ("boolean_field" BOOLEAN DEFAULT 0){code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8961) Unable to create secure JDBC connection through Kafka Connect

2021-04-05 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-8961.
--
Resolution: Won't Fix

This is not a problem of the Connect framework, and is instead an issue with 
the connector implementation – or more likely the _installation_ of the 
connector in the user's environment.

> Unable to create secure JDBC connection through Kafka Connect
> -
>
> Key: KAFKA-8961
> URL: https://issues.apache.org/jira/browse/KAFKA-8961
> Project: Kafka
>  Issue Type: Bug
>  Components: build, clients, KafkaConnect, network
>Affects Versions: 2.2.1
>Reporter: Monika Bainsala
>Priority: Major
>
> As per the article below on enabling a secure JDBC connection, we can use an 
> updated URL parameter when calling the create connector REST API.
> Example:
> jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=YES)(FAILOVER=YES)(ADDRESS=(PROTOCOL=tcp)(HOST=X)(PORT=1520)))(CONNECT_DATA=(SERVICE_NAME=XXAP)));EncryptionLevel=requested;EncryptionTypes=RC4_256;DataIntegrityLevel=requested;DataIntegrityTypes=MD5"
>  
> But this approach is not working currently; kindly help in resolving this 
> issue.
>  
> Reference :
> [https://docs.confluent.io/current/connect/kafka-connect-jdbc/source-connector/source_config_options.html]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10715) Support Kafka connect converter for AVRO

2021-04-05 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10715.
---
Resolution: Won't Do

> Support Kafka connect converter for AVRO
> 
>
> Key: KAFKA-10715
> URL: https://issues.apache.org/jira/browse/KAFKA-10715
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Reporter: Ravindranath Kakarla
>Priority: Minor
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> I want to add support for an Avro data format converter to Kafka Connect. 
> Right now, Kafka Connect ships a [JSON 
> converter|https://github.com/apache/kafka/tree/trunk/connect/json]. Since 
> Avro is a commonly used data format with Kafka, it would be great to have 
> support for it.
>  
> Confluent Schema Registry libraries have 
> [support|https://github.com/confluentinc/schema-registry/blob/master/avro-converter/src/main/java/io/confluent/connect/avro/AvroConverter.java]
>  for it. The code seems to be pretty generic and can be used directly with 
> Kafka connect without schema registry. They are also licensed under Apache 
> 2.0.
>  
> Can they be copied to this repository and made available for all users of 
> Kafka Connect?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12474) Worker can die if unable to write new session key

2021-04-01 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-12474.
---
Fix Version/s: 2.6.2
   2.7.1
   2.5.2
   3.0.0
 Reviewer: Randall Hauch
   Resolution: Fixed

> Worker can die if unable to write new session key
> -
>
> Key: KAFKA-12474
> URL: https://issues.apache.org/jira/browse/KAFKA-12474
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 3.0.0, 2.4.2, 2.5.2, 2.8.0, 2.7.1, 2.6.2
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 3.0.0, 2.5.2, 2.7.1, 2.6.2
>
>
> If a distributed worker is unable to write (and then read back) a new session 
> key to the config topic, an uncaught exception will be thrown from its 
> herder's tick thread, killing the worker.
> See 
> [https://github.com/apache/kafka/blob/8da65936d7fc53d24c665c0d01893d25a430933b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java#L366-L369]
> One way we can handle this case is by forcing a read to the end of the config 
> topic whenever an attempt to write a new session key to the config topic 
> fails.
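> A hedged sketch of that handling (illustrative names, not the actual 
> DistributedHerder code):
> {code:java}
> try {
>     configBackingStore.putSessionKey(newSessionKey());
> } catch (ConnectException e) {
>     log.warn("Failed to write new session key; forcing read to end of config topic", e);
>     canReadConfigs = false; // next tick re-reads the config topic instead of dying
> }
> {code}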



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10329) Enable connector context in logs by default

2021-03-21 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10329.
---
Resolution: Duplicate

This is older, but marking as duplicate of KAFKA-12484, which already has a KIP 
associated with it.

> Enable connector context in logs by default
> ---
>
> Key: KAFKA-10329
> URL: https://issues.apache.org/jira/browse/KAFKA-10329
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 3.0.0
>Reporter: Randall Hauch
>Priority: Blocker
>  Labels: needs-kip
> Fix For: 3.0.0
>
>
> When 
> [KIP-449|https://cwiki.apache.org/confluence/display/KAFKA/KIP-449%3A+Add+connector+contexts+to+Connect+worker+logs]
>  was implemented and released as part of AK 2.3, we chose to not enable these 
> extra logging context information by default because it was not backward 
> compatible, and anyone relying upon the `connect-log4j.properties` file 
> provided by the AK distribution would after an upgrade to AK 2.3 (or later) 
> see different formats for their logs, which could break any log processing 
> functionality they were relying upon.
> However, we should enable this in AK 3.0, whenever that comes. Doing so will 
> require a fairly minor KIP to change the `connect-log4j.properties` file 
> slightly.
> Marked this as BLOCKER since it's a backward incompatible change that we 
> definitely want to do in the 3.0.0 release.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-3813) Let SourceConnector implementations access the offset reader

2021-03-18 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-3813.
--
Resolution: Duplicate

> Let SourceConnector implementations access the offset reader
> 
>
> Key: KAFKA-3813
> URL: https://issues.apache.org/jira/browse/KAFKA-3813
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0
>Reporter: Randall Hauch
>Priority: Major
>  Labels: needs-kip
>
> When a source connector is started, having access to the 
> {{OffsetStorageReader}} would allow it to more intelligently configure its 
> tasks based upon the stored offsets. (Currently only the {{SourceTask}} 
> implementations can access the offset reader (via the {{SourceTaskContext}}), 
> but the {{SourceConnector}} does not have access to the offset reader.)
> Of course, accessing the stored offsets is not likely to be useful for sink 
> connectors.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (KAFKA-3813) Let SourceConnector implementations access the offset reader

2021-03-18 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch reopened KAFKA-3813:
--

> Let SourceConnector implementations access the offset reader
> 
>
> Key: KAFKA-3813
> URL: https://issues.apache.org/jira/browse/KAFKA-3813
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0
>Reporter: Randall Hauch
>Priority: Major
>  Labels: needs-kip
>
> When a source connector is started, having access to the 
> {{OffsetStorageReader}} would allow it to more intelligently configure its 
> tasks based upon the stored offsets. (Currently only the {{SourceTask}} 
> implementations can access the offset reader (via the {{SourceTaskContext}}), 
> but the {{SourceConnector}} does not have access to the offset reader.)
> Of course, accessing the stored offsets is not likely to be useful for sink 
> connectors.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10340) Source connectors should report error when trying to produce records to non-existent topics instead of hanging forever

2021-03-18 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10340.
---
  Reviewer: Randall Hauch
Resolution: Fixed

> Source connectors should report error when trying to produce records to 
> non-existent topics instead of hanging forever
> --
>
> Key: KAFKA-10340
> URL: https://issues.apache.org/jira/browse/KAFKA-10340
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.5.1, 2.7.0, 2.6.1, 2.8.0
>Reporter: Arjun Satish
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 3.0.0, 2.7.1, 2.6.2
>
>
> Currently, a source connector will blindly attempt to write a record to a 
> Kafka topic. When the topic does not exist, its creation is controlled by the 
> {{auto.create.topics.enable}} config on the brokers. When auto.create is 
> disabled, the producer.send() call on the Connect worker will hang 
> indefinitely (due to the "infinite retries" configuration for said producer). 
> In setups where this config is usually disabled, the source connector simply 
> appears to hang and not produce any output.
> It is desirable to log an info or error message (or inform the user somehow) 
> that the connector is simply stuck waiting for the destination topic to be 
> created. When the worker has permission to inspect the broker settings, it 
> can use the {{listTopics}} and {{describeConfigs}} APIs in AdminClient to 
> check whether the topic exists and whether the broker can auto-create topics 
> ({{auto.create.topics.enable}}), and if neither holds, throw an error.
> With the recently merged 
> [KIP-158|https://cwiki.apache.org/confluence/display/KAFKA/KIP-158%3A+Kafka+Connect+should+allow+source+connectors+to+set+topic-specific+settings+for+new+topics],
>  this becomes even more specific a corner case: when topic creation settings 
> are enabled, the worker should handle the corner case where topic creation is 
> disabled, {{auto.create.topics.enable}} is set to false and topic does not 
> exist.
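> A hedged sketch of such a pre-flight check (illustrative, not the actual 
> worker code; assumes permission to describe broker configs):
> {code:java}
> import java.util.Set;
> import org.apache.kafka.clients.admin.Admin;
> import org.apache.kafka.clients.admin.Config;
> import org.apache.kafka.common.config.ConfigResource;
> import org.apache.kafka.connect.errors.ConnectException;
> 
> public class TopicPreflightCheck {
>     static void checkDestinationTopic(Admin admin, String topic, String brokerId)
>             throws Exception {
>         if (admin.listTopics().names().get().contains(topic))
>             return; // topic already exists
>         ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, brokerId);
>         Config cfg = admin.describeConfigs(Set.of(broker)).all().get().get(broker);
>         boolean autoCreate = Boolean.parseBoolean(cfg.get("auto.create.topics.enable").value());
>         if (!autoCreate)
>             throw new ConnectException("Topic '" + topic
>                     + "' does not exist and brokers will not auto-create it");
>     }
> }
> {code}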



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12484) Enable Connect's connector log contexts by default

2021-03-16 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-12484:
-

 Summary: Enable Connect's connector log contexts by default
 Key: KAFKA-12484
 URL: https://issues.apache.org/jira/browse/KAFKA-12484
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Randall Hauch
 Fix For: 3.0.0


Connect's Log4J configuration does not by default log the connector contexts. 
That feature was added in 
[KIP-449|https://cwiki.apache.org/confluence/display/KAFKA/KIP-449%3A+Add+connector+contexts+to+Connect+worker+logs]
 and first appeared in AK 2.3.0, but it was not enabled by default since that 
would not have been backward compatible.

But with AK 3.0.0, we have the opportunity to change the default in 
{{config/connect-log4j.properties}} to enable connector log contexts.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12483) Enable client overrides in connector configs by default

2021-03-16 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-12483:
-

 Summary: Enable client overrides in connector configs by default
 Key: KAFKA-12483
 URL: https://issues.apache.org/jira/browse/KAFKA-12483
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Randall Hauch
 Fix For: 3.0.0


Connector-specific client overrides were added in 
[KIP-458|https://cwiki.apache.org/confluence/display/KAFKA/KIP-458%3A+Connector+Client+Config+Override+Policy],
 but that feature is not enabled by default since it would not have been 
backward compatible.

But with AK 3.0.0, we have the opportunity to enable connector client overrides 
by default by changing the worker config's 
{{connector.client.config.override.policy}} default value to {{All}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12482) Remove deprecated Connect worker configs

2021-03-16 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-12482:
-

 Summary: Remove deprecated Connect worker configs
 Key: KAFKA-12482
 URL: https://issues.apache.org/jira/browse/KAFKA-12482
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Randall Hauch
 Fix For: 3.0.0


The following Connect worker configuration properties were deprecated and 
should be removed in 3.0.0:
 * {{rest.host.name}} (deprecated in 
[KIP-208|https://cwiki.apache.org/confluence/display/KAFKA/KIP-208%3A+Add+SSL+support+to+Kafka+Connect+REST+interface])

 * {{rest.port}} (deprecated in 
[KIP-208|https://cwiki.apache.org/confluence/display/KAFKA/KIP-208%3A+Add+SSL+support+to+Kafka+Connect+REST+interface])
 * {{internal.key.converter}} (deprecated in 
[KIP-174|https://cwiki.apache.org/confluence/display/KAFKA/KIP-174+-+Deprecate+and+remove+internal+converter+configs+in+WorkerConfig])
 * {{internal.value.converter}} (deprecated in 
[KIP-174|https://cwiki.apache.org/confluence/display/KAFKA/KIP-174+-+Deprecate+and+remove+internal+converter+configs+in+WorkerConfig])



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12380) Executor in Connect's Worker is not shut down when the worker is

2021-02-26 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-12380:
-

 Summary: Executor in Connect's Worker is not shut down when the 
worker is
 Key: KAFKA-12380
 URL: https://issues.apache.org/jira/browse/KAFKA-12380
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Randall Hauch


The `Worker` class has an [`executor` 
field|https://github.com/apache/kafka/blob/02226fa090513882b9229ac834fd493d71ae6d96/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java#L100]
 that the public constructor [initializes with a new cached thread 
pool|https://github.com/apache/kafka/blob/02226fa090513882b9229ac834fd493d71ae6d96/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java#L127].

When the worker is stopped, it does not shutdown this executor. This is 
normally okay in the Connect runtime and MirrorMaker 2 runtimes, because the 
worker is stopped only when the JVM is stopped (via the shutdown hook in the 
herders).

However, we instantiate and stop the herder many times in our integration 
tests, and this means we're not necessarily shutting down the worker's 
executor. Normally this won't hurt, as long as all of the runnables that the 
executor threads run actually do terminate. But it's possible those threads 
*might* not terminate in all tests. TBH, I don't know that such cases actually 
exist.
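A minimal sketch of the missing cleanup (illustrative; the actual `Worker.stop()` 
change may differ):

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public class WorkerShutdownSketch {
    static void stopExecutor(ExecutorService executor) throws InterruptedException {
        executor.shutdown();                        // reject new work
        if (!executor.awaitTermination(30, TimeUnit.SECONDS)) {
            executor.shutdownNow();                 // interrupt lingering runnables
            executor.awaitTermination(10, TimeUnit.SECONDS);
        }
    }
}
{code}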

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12339) Add retry to admin client's listOffsets

2021-02-22 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-12339.
---
  Reviewer: Randall Hauch
Resolution: Fixed

Merged to `trunk`, and cherry-picked to:
 * `2.8` for inclusion in 2.8.0 (with release manager approval)
 * `2.7` for inclusion in 2.7.1
 * `2.6` for inclusion in 2.6.2 (with release manager approval)
 * `2.5` for inclusion in 2.5.2

> Add retry to admin client's listOffsets
> ---
>
> Key: KAFKA-12339
> URL: https://issues.apache.org/jira/browse/KAFKA-12339
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.5.2, 2.8.0, 2.7.1, 2.6.2
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Blocker
> Fix For: 2.5.2, 2.8.0, 2.7.1, 2.6.2
>
>
> After upgrading our connector env to 2.9.0-SNAPSHOT, sometimes the connect 
> cluster encounters following error.
> {quote}Uncaught exception in herder work thread, exiting:  
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder:324)
> org.apache.kafka.connect.errors.ConnectException: Error while getting end 
> offsets for topic 'connect-storage-topic-connect-cluster-1'
> at org.apache.kafka.connect.util.TopicAdmin.endOffsets(TopicAdmin.java:689)
> at 
> org.apache.kafka.connect.util.KafkaBasedLog.readToLogEnd(KafkaBasedLog.java:338)
> at org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:195)
> at 
> org.apache.kafka.connect.storage.KafkaStatusBackingStore.start(KafkaStatusBackingStore.java:216)
> at 
> org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:129)
> at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:310)
> at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
> at org.apache.kafka.connect.util.TopicAdmin.endOffsets(TopicAdmin.java:668)
> ... 10 more
> {quote}
> [https://github.com/apache/kafka/pull/9780] added a shared admin client to 
> get end offsets. KafkaAdmin#listOffsets does not handle topic-level errors, 
> hence a topic-level UnknownTopicOrPartitionException can prevent the worker 
> from running when a new internal topic has not yet been synced to all brokers.
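> A hedged sketch of the retry shape (illustrative, not the actual 
> TopicAdmin.retryEndOffsets implementation):
> {code:java}
> import java.time.Duration;
> import java.util.Map;
> import java.util.function.Supplier;
> import org.apache.kafka.common.TopicPartition;
> import org.apache.kafka.common.errors.RetriableException;
> import org.apache.kafka.connect.errors.ConnectException;
> 
> public class EndOffsetsRetrySketch {
>     static Map<TopicPartition, Long> endOffsetsWithRetry(
>             Supplier<Map<TopicPartition, Long>> endOffsets,
>             int maxAttempts, Duration backoff) throws InterruptedException {
>         for (int attempt = 1; ; attempt++) {
>             try {
>                 return endOffsets.get();
>             } catch (RetriableException e) { // includes UnknownTopicOrPartitionException
>                 if (attempt >= maxAttempts)
>                     throw new ConnectException("Topic metadata never became available", e);
>                 Thread.sleep(backoff.toMillis()); // wait for metadata to propagate
>             }
>         }
>     }
> }
> {code}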



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12340) Recent change to use SharedTopicAdmin results in potential resource leak in deprecated backing store constructors

2021-02-22 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-12340.
---
Resolution: Fixed

Merged to `trunk`, and cherry-picked to:
 * `2.8` for inclusion in 2.8.0 (with release manager approval)
 * `2.7` for inclusion in 2.7.1
 * `2.6` for inclusion in 2.6.2 (with release manager approval)
 * `2.5` for inclusion in 2.5.2

> Recent change to use SharedTopicAdmin results in potential resource leak in 
> deprecated backing store constructors
> -
>
> Key: KAFKA-12340
> URL: https://issues.apache.org/jira/browse/KAFKA-12340
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.5.2, 2.8.0, 2.7.1, 2.6.2
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Blocker
> Fix For: 2.5.2, 2.8.0, 2.7.1, 2.6.2
>
>
> When KAFKA-10021 modified the Connect `Kafka*BackingStore` classes, we 
> deprecated the old constructors and changed all uses within AK to use the new 
> constructors that take a `Supplier`.
> If the old deprecated constructors are used (outside of AK), then they will 
> not close the Admin clients that are created by the "default" supplier.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12343) Recent change to use SharedTopicAdmin in KafkaBasedLog fails with AK 0.10.x brokers

2021-02-19 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-12343.
---
  Reviewer: Konstantine Karantasis
Resolution: Fixed

> Recent change to use SharedTopicAdmin in KafkaBasedLog fails with AK 0.10.x 
> brokers
> 
>
> Key: KAFKA-12343
> URL: https://issues.apache.org/jira/browse/KAFKA-12343
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.5.2, 2.8.0, 2.7.1, 2.6.2
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Blocker
> Fix For: 2.5.2, 2.8.0, 2.7.1, 2.6.2
>
>
> System test failure 
> ([sample|http://confluent-kafka-2-7-system-test-results.s3-us-west-2.amazonaws.com/2021-02-18--001.1613655226--confluentinc--2.7--54952635e5/report.html]):
> {code:java}
> Java.lang.Exception: UnsupportedVersionException: MetadataRequest versions 
> older than 4 don't support the allowAutoTopicCreation field
> at 
> org.apache.kafka.clients.admin.KafkaAdminClient$Call.fail(KafkaAdminClient.java:755)
> at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:1136)
> at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1301)
> at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1224)
> at java.lang.Thread.run(Thread.java:748)
> [2021-02-16 12:05:11,735] ERROR [Worker clientId=connect-1, 
> groupId=connect-cluster] Uncaught exception in herder work thread, exiting:  
> (org.apache.kafka.connect.runtime.distributed.Di
> stributedHerder)
> org.apache.kafka.connect.errors.ConnectException: API to get the get the end 
> offsets for topic 'connect-offsets' is unsupported on brokers at worker25:9092
> at 
> org.apache.kafka.connect.util.TopicAdmin.endOffsets(TopicAdmin.java:680)
> at 
> org.apache.kafka.connect.util.KafkaBasedLog.readToLogEnd(KafkaBasedLog.java:338)
> at 
> org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:195)
> at 
> org.apache.kafka.connect.storage.KafkaOffsetBackingStore.start(KafkaOffsetBackingStore.java:136)
> at org.apache.kafka.connect.runtime.Worker.start(Worker.java:197)
> at 
> org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:128)
> at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:311)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.UnsupportedVersionException: MetadataRequest 
> versions older than 4 don't support the allowAutoTopicCre
> ation field
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
> at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
> at 
> org.apache.kafka.connect.util.TopicAdmin.endOffsets(TopicAdmin.java:668)
> ... 11 more   {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12343) Recent change to use SharedTopicAdmin in KafkaBasedLog fails with AK 0.10.x brokers

2021-02-18 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-12343:
-

 Summary: Recent change to use SharedTopicAdmin in KafkaBasedLog 
fails with AK 0.10.x brokers
 Key: KAFKA-12343
 URL: https://issues.apache.org/jira/browse/KAFKA-12343
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.5.2, 2.8.0, 2.7.1, 2.6.2
Reporter: Randall Hauch
Assignee: Randall Hauch


System test failure:
{code:java}

Java.lang.Exception: UnsupportedVersionException: MetadataRequest versions 
older than 4 don't support the allowAutoTopicCreation field
at 
org.apache.kafka.clients.admin.KafkaAdminClient$Call.fail(KafkaAdminClient.java:755)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.handleResponses(KafkaAdminClient.java:1136)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.processRequests(KafkaAdminClient.java:1301)
at 
org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1224)
at java.lang.Thread.run(Thread.java:748)
[2021-02-16 12:05:11,735] ERROR [Worker clientId=connect-1, 
groupId=connect-cluster] Uncaught exception in herder work thread, exiting:  
(org.apache.kafka.connect.runtime.distributed.Di
stributedHerder)
org.apache.kafka.connect.errors.ConnectException: API to get the get the end 
offsets for topic 'connect-offsets' is unsupported on brokers at worker25:9092
at 
org.apache.kafka.connect.util.TopicAdmin.endOffsets(TopicAdmin.java:680)
at 
org.apache.kafka.connect.util.KafkaBasedLog.readToLogEnd(KafkaBasedLog.java:338)
at 
org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:195)
at 
org.apache.kafka.connect.storage.KafkaOffsetBackingStore.start(KafkaOffsetBackingStore.java:136)
at org.apache.kafka.connect.runtime.Worker.start(Worker.java:197)
at 
org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:128)
at 
org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:311)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.UnsupportedVersionException: MetadataRequest 
versions older than 4 don't support the allowAutoTopicCre
ation field
at 
org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at 
org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at 
org.apache.kafka.connect.util.TopicAdmin.endOffsets(TopicAdmin.java:668)
... 11 more   {code}
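For context, the compatibility fallback this failure calls for looks roughly like the following. This is a hedged sketch only, not the actual patch: `admin` is Connect's TopicAdmin, `consumer` a plain Kafka consumer, and both are assumed to be configured elsewhere.

{code:java}
import java.util.Map;
import java.util.Set;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.UnsupportedVersionException;
import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.util.TopicAdmin;

static Map<TopicPartition, Long> readEndOffsets(TopicAdmin admin,
                                                Consumer<byte[], byte[]> consumer,
                                                Set<TopicPartition> partitions) {
    try {
        // The admin-based lookup uses request versions that 0.10.x brokers
        // do not support, which is what produces the stack trace above.
        return admin.endOffsets(partitions);
    } catch (UnsupportedVersionException | ConnectException e) {
        // On old brokers the failure may arrive wrapped in a ConnectException,
        // as in the trace above; the consumer's end-offset lookup is
        // compatible with 0.10.x, so fall back to it.
        return consumer.endOffsets(partitions);
    }
}
{code}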



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12340) Recent change to use SharedTopicAdmin results in potential resource leak in deprecated backing store constructors

2021-02-18 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-12340:
-

 Summary: Recent change to use SharedTopicAdmin results in 
potential resource leak in deprecated backing store constructors
 Key: KAFKA-12340
 URL: https://issues.apache.org/jira/browse/KAFKA-12340
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.5.2, 2.8.0, 2.7.1, 2.6.2
Reporter: Randall Hauch
Assignee: Randall Hauch
 Fix For: 2.5.2, 2.8.0, 2.7.1, 2.6.2


When KAFKA-10021 modified the Connect `Kafka*BackingStore` classes, we 
deprecated the old constructors and changed all uses within AK to use the new 
constructors that take a `Supplier`.

If the old deprecated constructors are used (outside of AK), then they will not 
close the Admin clients that are created by the "default" supplier.
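A hedged sketch of the leak pattern, with a hypothetical class name (KafkaFooBackingStore) standing in for the real backing stores: the deprecated constructor creates a "default" supplier internally, so no caller ever holds a handle that can close the resulting Admin client.

{code:java}
import java.util.function.Supplier;
import org.apache.kafka.connect.runtime.WorkerConfig;
import org.apache.kafka.connect.util.TopicAdmin;

// Simplified, hedged sketch -- not the real class.
public class KafkaFooBackingStore {
    private final Supplier<TopicAdmin> adminSupplier;

    @Deprecated
    public KafkaFooBackingStore(WorkerConfig config) {
        // Leak: this supplier creates an Admin client on first use, but the
        // store does not know it owns it, so nothing ever closes it.
        this(() -> new TopicAdmin(config.originals()));
    }

    public KafkaFooBackingStore(Supplier<TopicAdmin> adminSupplier) {
        this.adminSupplier = adminSupplier; // caller owns the admin lifecycle
    }

    public void stop() {
        // Stops the log, but has no record of an admin client created by
        // the deprecated constructor above -- hence the leak.
    }
}
{code}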



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12270) Kafka Connect may fail a task when racing to create topic

2021-02-03 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-12270.
---
Fix Version/s: 2.6.2
   2.7.1
   2.8.0
 Reviewer: Konstantine Karantasis
   Resolution: Fixed

Merged to `trunk` for the upcoming 2.8.0, and cherry-picked to the 2.7 branch 
for the next 2.7.1 and to the 2.6 branch for the next 2.6.2.

> Kafka Connect may fail a task when racing to create topic
> -
>
> Key: KAFKA-12270
> URL: https://issues.apache.org/jira/browse/KAFKA-12270
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.6.0, 2.7.0, 2.8.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Critical
> Fix For: 2.8.0, 2.7.1, 2.6.2
>
>
> When a source connector that is configured with many tasks and with the new 
> topic creation feature is run, multiple tasks may attempt to write to the 
> same topic, see that the topic does not exist, and then race to create it. 
> The topic is only created once, but some tasks might fail with:
> {code:java}
> org.apache.kafka.connect.errors.ConnectException: Task failed to create new 
> topic (name=TOPICX, numPartitions=8, replicationFactor=3, 
> replicasAssignments=null, configs={cleanup.policy=delete}). Ensure that the 
> task is authorized to create topics or that the topic exists and restart the 
> task
>   at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.maybeCreateTopic(WorkerSourceTask.java:436)
>   at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:364)
>   at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:264)
>   at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)
>   at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)
> ... {code}
> The reason appears to be that the WorkerSourceTask throws an exception if 
> topic creation fails, and does not account for the fact that the topic may 
> have been created between the time the WorkerSourceTask lists existing topics 
> and the time it tries to create the topic.
>  
> See in particular: 
> [https://github.com/apache/kafka/blob/5c562efb2d76407011ea88c1ca1b2355079935bc/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java#L415-L423]
>  
> This is only an issue when using topic creation settings in the source 
> connector configuration, and when running multiple tasks that write to the 
> same topic.
> The workaround is to create the topics manually before starting the 
> connector, or to simply restart the failed tasks using the REST API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12270) Kafka Connect may fail a task when racing to create topic

2021-02-02 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-12270:
-

 Summary: Kafka Connect may fail a task when racing to create topic
 Key: KAFKA-12270
 URL: https://issues.apache.org/jira/browse/KAFKA-12270
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.7.0, 2.6.0, 2.8.0
Reporter: Randall Hauch
Assignee: Randall Hauch


When a source connector that is configured with many tasks and with the new 
topic creation feature is run, multiple tasks may attempt to write to the same 
topic, see that the topic does not exist, and then race to create it. The topic 
is only created once, but some tasks might fail with:
{code:java}
org.apache.kafka.connect.errors.ConnectException: Task failed to create new 
topic (name=TOPICX, numPartitions=8, replicationFactor=3, 
replicasAssignments=null, configs={cleanup.policy=delete}). Ensure that the 
task is authorized to create topics or that the topic exists and restart the 
task
  at 
org.apache.kafka.connect.runtime.WorkerSourceTask.maybeCreateTopic(WorkerSourceTask.java:436)
  at 
org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:364)
  at 
org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:264)
  at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)
  at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)
... {code}
The reason appears to be that the WorkerSourceTask throws an exception if 
topic creation fails, and does not account for the fact that the topic may 
have been created between the time the WorkerSourceTask lists existing topics 
and the time it tries to create the topic.

 

See in particular: 
https://github.com/apache/kafka/blob/5c562efb2d76407011ea88c1ca1b2355079935bc/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java#L415-L423
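A hedged sketch of the race-tolerant pattern the fix calls for. The helper below is hypothetical (it is not the actual WorkerSourceTask code); the point is that losing the create-topic race must be treated as success, not failure.

{code:java}
import java.util.Collections;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.TopicExistsException;
import org.apache.kafka.connect.errors.ConnectException;

static void ensureTopic(Admin admin, NewTopic topic) {
    try {
        admin.createTopics(Collections.singleton(topic)).all().get();
    } catch (ExecutionException e) {
        // Another task may have won the race between our metadata check and
        // our create request; the topic exists either way, so carry on.
        if (!(e.getCause() instanceof TopicExistsException)) {
            throw new ConnectException("Task failed to create topic " + topic.name(), e);
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new ConnectException("Interrupted while creating topic " + topic.name(), e);
    }
}
{code}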



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10816) Connect REST API should have a resource that can be used as a readiness probe

2020-12-07 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-10816:
-

 Summary: Connect REST API should have a resource that can be used 
as a readiness probe
 Key: KAFKA-10816
 URL: https://issues.apache.org/jira/browse/KAFKA-10816
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Randall Hauch


There are a few ways to accurately detect whether a Connect worker is 
*completely* ready to process all REST requests:

# Wait for `Herder started` in the Connect worker logs
# Use the REST API to issue a request that will be completed only after the 
herder has started, such as `GET /connectors/{name}/` or `GET 
/connectors/{name}/status`.

Other techniques can be used to detect other startup states, though none of 
these will guarantee that the worker has indeed completely started up and can 
process all REST requests:

* `GET /` can be used to know when the REST server has started, but this may be 
before the worker has started completely and successfully.
* `GET /connectors` can be used to know when the REST server has started, but 
this may be before the worker has started completely and successfully. And, for 
the distributed Connect worker, this may actually return an older list of 
connectors if the worker hasn't yet completely read through the internal config 
topic. It's also possible that this request returns even if the worker is 
having trouble reading from the internal config topic.
* `GET /connector-plugins` can be used to know when the REST server has 
started, but this may be before the worker has started completely and 
successfully.

The Connect REST API should have an endpoint that more obviously and more 
simply can be used as a readiness probe. This could be a new resource (e.g., 
`GET /status`), though this would only work on newer Connect runtimes, and 
existing tooling, installations, and examples would have to be modified to take 
advantage of this feature (if it exists). 

Alternatively, we could make sure that the existing resources (e.g., `GET /` or 
`GET /connectors`) wait for the herder to start completely; this wouldn't 
require a KIP and it would not require clients to use different techniques for 
newer and older Connect runtimes. (Whether or not we backport this is another 
question altogether, since it's debatable whether the behavior of the existing 
REST resources is truly a bug.)
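A minimal, hedged sketch of technique 2 from the list above: poll a request that the herder must process, and report ready only once it answers. The worker URL and connector name are placeholders.

{code:java}
import java.net.HttpURLConnection;
import java.net.URL;

static boolean workerReady() {
    try {
        // Placeholder URL and connector name; status requests go through
        // the herder, so an answer implies the herder has started.
        URL url = new URL("http://localhost:8083/connectors/my-connector/status");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(2000);
        conn.setReadTimeout(2000);
        int code = conn.getResponseCode();
        // 200 means the connector status was served; 404 still proves the
        // herder processed the request (unknown connector name).
        return code == 200 || code == 404;
    } catch (Exception e) {
        return false; // connection refused or timed out: not ready yet
    }
}
{code}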



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10811) System exit from MirrorConnectorsIntegrationTest#testReplication

2020-12-04 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-10811:
-

 Summary: System exit from 
MirrorConnectorsIntegrationTest#testReplication
 Key: KAFKA-10811
 URL: https://issues.apache.org/jira/browse/KAFKA-10811
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect, mirrormaker
Affects Versions: 2.5.1, 2.6.0, 2.7.0, 2.8.0
Reporter: Randall Hauch
Assignee: Randall Hauch
 Fix For: 2.7.0, 2.5.2, 2.6.1, 2.8.0


The MirrorConnectorsIntegrationTest::testReplication has been very frequently 
causing the build to fail with:

{noformat}
FAILURE: Build failed with an exception.
13:50:17  
13:50:17  * What went wrong:
13:50:17  Execution failed for task ':connect:mirror:integrationTest'.
13:50:17  > Process 'Gradle Test Executor 52' finished with non-zero exit value 
1
13:50:17This problem might be caused by incorrect test process 
configuration.
13:50:17Please refer to the test execution section in the User Manual at 
https://docs.gradle.org/6.7.1/userguide/java_testing.html#sec:test_execution
{noformat}

Even running these tests locally resulted in mostly failures; specifically, the 
`MirrorConnectorsIntegrationTest::testReplication` test method reliably fails 
because the process exits.

[~ChrisEgerton] traced this to the fact that these integration tests are 
creating multiple EmbeddedConnectCluster instances, each of which by default:
* mask the Exit procedures upon startup
* reset the Exit procedures upon stop

But since *each* cluster does this, {{Exit.resetExitProcedure()}} is called 
when the first Connect cluster is stopped, and if any problems occur while the 
second Connect cluster is being stopped (e.g., the KafkaBasedLog produce thread 
is interrupted) then the Exit called by the Connect worker results in the 
termination of the JVM.

The solution is to change the MirrorConnectorsIntegrationTest to own the 
overriding of the exit procedures, and to tell the EmbeddedConnectCluster 
instances to not mask the exit procedures.

With these changes, the tests always passed locally for me.
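A hedged sketch of the described ownership change: the test class, not each embedded cluster, overrides the JVM-wide exit procedures once and restores them once. The real change also tells each EmbeddedConnectCluster not to mask or reset the procedures itself (a builder option for that is assumed here, not shown).

{code:java}
import org.apache.kafka.common.utils.Exit;

public class ExitOwnershipSketch {
    public static void install() {
        // Turn worker-initiated exits/halts into exceptions so a failure
        // while one cluster stops cannot terminate the whole test JVM.
        Exit.setExitProcedure((code, msg) -> {
            throw new IllegalStateException("exit(" + code + "): " + msg);
        });
        Exit.setHaltProcedure((code, msg) -> {
            throw new IllegalStateException("halt(" + code + "): " + msg);
        });
    }

    public static void restore() {
        // Called exactly once, after ALL embedded clusters have stopped.
        Exit.resetExitProcedure();
        Exit.resetHaltProcedure();
    }
}
{code}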



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10572) Rename MirrorMaker 2 blacklist configs for KIP-629

2020-10-20 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10572.
---
Fix Version/s: 2.7.0
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to the `trunk` branch and cherry-picked to the `2.7` branch for 
inclusion in 2.7.0.

> Rename MirrorMaker 2 blacklist configs for KIP-629
> --
>
> Key: KAFKA-10572
> URL: https://issues.apache.org/jira/browse/KAFKA-10572
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
>Priority: Major
> Fix For: 2.7.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10332) MirrorMaker2 fails to detect topic if remote topic is created first

2020-10-19 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10332.
---
Fix Version/s: 2.6.1
   2.5.2
   2.7.0
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to the `trunk` branch (for the future 2.8 release), and cherry-picked to 
the `2.7` branch for inclusion in the upcoming 2.7.0, the `2.6` branch for 
inclusion in the next 2.6.1 if/when it's released, and the `2.5` branch for the 
next 2.5.2 if/when it's released.

> MirrorMaker2 fails to detect topic if remote topic is created first
> ---
>
> Key: KAFKA-10332
> URL: https://issues.apache.org/jira/browse/KAFKA-10332
> Project: Kafka
>  Issue Type: Bug
>  Components: mirrormaker
>Affects Versions: 2.6.0
>Reporter: Mickael Maison
>Assignee: Mickael Maison
>Priority: Major
> Fix For: 2.7.0, 2.5.2, 2.6.1
>
>
> Setup:
> - 2 clusters: source and target
> - Mirroring data from source to target
> - create a topic called source.mytopic on the target cluster
> - create a topic called mytopic on the source cluster
> At this point, MM2 does not start mirroring the topic.
> This also happens if you delete and recreate a topic that is being mirrored.
> The issue is in 
> [refreshTopicPartitions()|https://github.com/apache/kafka/blob/trunk/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorSourceConnector.java#L211-L232]
>  which basically does a diff between the 2 clusters.
> Creating the topic on the source cluster last makes the partition lists of 
> the two clusters match, hence not triggering a reconfiguration.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8370) Kafka Connect should check for existence of internal topics before attempting to create them

2020-10-16 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-8370.
--
Resolution: Won't Fix

As mentioned above, we can never avoid the race condition of two Connect 
workers trying to create the same topic, so it's imperative that the 
create-topic request be handled atomically and throw TopicExistsException when 
the topic already exists. KAFKA-8875 now ensures that happens, and Connect 
already properly handles the case when a create-topic request fails with 
TopicExistsException.

The conclusion: there is no need for a check before creating the topic, because 
such a check is not guaranteed to be sufficient anyway.

> Kafka Connect should check for existence of internal topics before attempting 
> to create them
> 
>
> Key: KAFKA-8370
> URL: https://issues.apache.org/jira/browse/KAFKA-8370
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Major
>
> The Connect worker doesn't currently check for the existence of the internal 
> topics, and instead issues a CreateTopic request and handles a 
> TopicExistsException. However, this can cause problems when the number of 
> brokers is fewer than the replication factor, *even if the topic already 
> exists* and the partitions of those topics all remain available on the 
> remaining brokers.
> One problem of the current approach is that the broker checks the requested 
> replication factor before checking for the existence of the topic, resulting 
> in unexpected exceptions when the topic does exist:
> {noformat}
> connect  | [2019-05-14 19:24:25,166] ERROR Uncaught exception in herder 
> work thread, exiting:  
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> connect  | org.apache.kafka.connect.errors.ConnectException: Error while 
> attempting to create/find topic(s) 'connect-offsets'
> connect  |at 
> org.apache.kafka.connect.util.TopicAdmin.createTopics(TopicAdmin.java:255)
> connect  |at 
> org.apache.kafka.connect.storage.KafkaOffsetBackingStore$1.run(KafkaOffsetBackingStore.java:99)
> connect  |at 
> org.apache.kafka.connect.util.KafkaBasedLog.start(KafkaBasedLog.java:127)
> connect  |at 
> org.apache.kafka.connect.storage.KafkaOffsetBackingStore.start(KafkaOffsetBackingStore.java:109)
> connect  |at 
> org.apache.kafka.connect.runtime.Worker.start(Worker.java:164)
> connect  |at 
> org.apache.kafka.connect.runtime.AbstractHerder.startServices(AbstractHerder.java:114)
> connect  |at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:214)
> connect  |at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> connect  |at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> connect  |at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> connect  |at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> connect  |at java.lang.Thread.run(Thread.java:748)
> connect  | Caused by: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication 
> factor: 3 larger than available brokers: 2.
> connect  |at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
> connect  |at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
> connect  |at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
> connect  |at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
> connect  |at 
> org.apache.kafka.connect.util.TopicAdmin.createTopics(TopicAdmin.java:228)
> connect  |... 11 more
> connect  | Caused by: 
> org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication 
> factor: 3 larger than available brokers: 2.
> connect  | [2019-05-14 19:24:25,168] INFO Kafka Connect stopping 
> (org.apache.kafka.connect.runtime.Connect)
> {noformat}
> Instead of always issuing a CreateTopic request, the worker's admin client 
> should first check whether the topic exists, and if not *then* attempt to 
> create the topic.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10573) Rename connect transform configs for KIP-629

2020-10-13 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10573.
---
Fix Version/s: (was: 2.8.0)
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to `trunk` for inclusion in 2.8.0 (or whatever major/minor release 
follows 2.7.0), and to the `2.7` branch for inclusion in 2.7.0.

> Rename connect transform configs for KIP-629
> 
>
> Key: KAFKA-10573
> URL: https://issues.apache.org/jira/browse/KAFKA-10573
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
>Priority: Major
>  Labels: needs-kip
> Fix For: 2.7.0
>
>
> Part of the implementation for 
> [KIP-629|https://cwiki.apache.org/confluence/display/KAFKA/KIP-629%3A+Use+racially+neutral+terms+in+our+codebase]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10600) Connect adds error to property in validation result if connector does not define the property

2020-10-12 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-10600:
-

 Summary: Connect adds error to property in validation result if 
connector does not define the property
 Key: KAFKA-10600
 URL: https://issues.apache.org/jira/browse/KAFKA-10600
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 0.10.0.0
Reporter: Randall Hauch


Kafka Connect's {{AbstractHerder.generateResult(...)}} method is responsible 
for taking the result of a {{Connector.validate(...)}} call and constructing 
the {{ConfigInfos}} object that is then mapped to the JSON representation.

As this method (see 
[code|https://github.com/apache/kafka/blob/1f8ac6e6fee3aa404fc1a4c01ac2e0c48429a306/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java#L504-L507])
 iterates over the {{ConfigKey}} objects in the connector's {{ConfigDef}} and 
the {{ConfigValue}} objects returned by {{Connector.validate(...)}}, it adds an 
error message to any {{ConfigValue}} whose {{configValue.name()}} does not 
correspond to a {{ConfigKey}} in the connector's {{ConfigDef}}. 

{code}
if (!configKeys.containsKey(configName)) {
    configValue.addErrorMessage("Configuration is not defined: " + configName);
    configInfoList.add(new ConfigInfo(null, convertConfigValue(configValue, null)));
}
{code}

Interestingly, these errors are not included in the total error count of the 
response. Is that intentional??

This behavior does not allow connectors to report validation errors against 
extra properties not defined in the connector's {{ConfigDef}}. 

Consider a connector that allows arbitrary properties with some prefix (e.g., 
{{connection.*}}) to be included and used in the connector properties. One 
example is to supply additional properties to a JDBC connection, where the 
connector may not be able to know these "additional properties" in advance 
because the connector either works with multiple JDBC drivers or the connection 
properties allowed by a JDBC driver are many and/or vary over different JDBC 
driver versions or server versions.

Such "additional properties" are not prohibited by Connect API, yet if a 
connector implementation chooses to include any such additional properties in 
the {{Connector.validate(...)}} result (whether or not the corresponding 
{{ConfigValue}} has an error) then Connect will always add the following error 
to that property. 

{quote}
Configuration is not defined: 
{quote}

This code was in the 0.10.0.0 release of Kafka via the 
[PR|https://github.com/apache/kafka/pull/964] for KAFKA-3315, which is one of 
the tasks that implemented 
[KIP-26|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=58851767]
 for Kafka Connect (approved and partially added in 0.9.0.0). There is no 
mention of "validation" in KIP-26 nor any followup KIP (that I can find).

I can kind of imagine the original thought process: any user-supplied property 
that is not defined by a {{ConfigDef}} is inherently an error. However, this 
assumption is not matched by any mention in the Connect API, documentation, or 
any of Connect's KIPs.
IMO, this is a bug in the {{AbstractHerder}} that over-constrains the connector 
properties to be only those defined in the connector's {{ConfigDef}}.

Quite a few connectors already support additional properties, and it's perhaps 
only by chance that this happens to work: 
* If a connector does not override {{Connector.validate(...)}}, extra 
properties are not validated and therefore are not included in the resulting 
{{Config}} response, which contains one {{ConfigValue}} per property defined in 
the connector's {{ConfigDef}}.
* If a connector does override {{Connector.validate(...)}} and includes in the 
{{Config}} response a {{ConfigValue}} for any additional properties, the 
{{AbstractHerder.generateResult(...)}} method does add the error but does not 
include this error in the error count, which is actually what is used to 
determine whether there are any validation problems before starting/updating 
the connector.

I propose that the {{AbstractHerder.generateResult(...)}} method be changed to 
not add its error message to the validation result, and to properly handle all 
{{ConfigValue}} objects regardless of whether there is a corresponding 
{{ConfigKey}} in the connector's {{ConfigDef}}.
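For illustration, a hedged sketch of the connector pattern this issue defends: a {{validate(...)}} override that returns {{ConfigValue}} objects for arbitrary prefixed properties. The enclosing class and the "connection." prefix are hypothetical.

{code:java}
import java.util.List;
import java.util.Map;
import org.apache.kafka.common.config.Config;
import org.apache.kafka.common.config.ConfigValue;

@Override
public Config validate(Map<String, String> connectorConfigs) {
    // Validate the declared keys as usual.
    List<ConfigValue> results = config().validate(connectorConfigs);
    // Then report on the undeclared, prefixed pass-through properties.
    for (Map.Entry<String, String> entry : connectorConfigs.entrySet()) {
        if (!entry.getKey().startsWith("connection.")) {
            continue;
        }
        ConfigValue value = new ConfigValue(entry.getKey());
        value.value(entry.getValue());
        if (entry.getValue() == null || entry.getValue().isEmpty()) {
            value.addErrorMessage("connection.* properties must have a value");
        }
        // Today, AbstractHerder.generateResult(...) tacks a
        // "Configuration is not defined" error onto each of these.
        results.add(value);
    }
    return new Config(results);
}
{code}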



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9546) Make FileStreamSourceTask extendable with generic streams

2020-09-29 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9546.
--
Resolution: Won't Fix

I'm going to close this as WONTFIX, per my previous comment.

> Make FileStreamSourceTask extendable with generic streams
> -
>
> Key: KAFKA-9546
> URL: https://issues.apache.org/jira/browse/KAFKA-9546
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Csaba Galyo
>Assignee: Csaba Galyo
>Priority: Major
>  Labels: connect-api, needs-kip
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> Use case: I want to read a ZIP compressed text file with a file connector and 
> send it to Kafka.
> Currently, we have FileStreamSourceConnector which reads a \n delimited text 
> file. This connector always returns a task of type FileStreamSourceTask.
> The FileStreamSourceTask reads from stdio or opens a file InputStream. The 
> issue with this approach is that the input needs to be a text file, otherwise 
> it won't work. 
> The code should be modified so that users could change the default 
> InputStream to, e.g., ZipInputStream or any other format. The code is 
> currently written in such a way that it's not possible to extend it; we 
> cannot use a different input stream. 
> See example here where the code got copy-pasted just so it could read from a 
> ZstdInputStream (which reads ZSTD compressed files): 
> [https://github.com/gcsaba2/kafka-zstd/tree/master/src/main/java/org/apache/kafka/connect/file]
>  
> I suggest 2 changes:
>  # FileStreamSourceConnector should be extendable to return tasks of 
> different types. These types would be input by the user through the 
> configuration map
>  # FileStreamSourceTask should be modified so it could be extended and child 
> classes could define different input streams.
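For illustration only: a hypothetical sketch of what suggestion #2 in the description might have looked like. The {{wrapStream}} hook does not exist in FileStreamSourceTask (the issue was closed as Won't Fix); it is invented here purely to show the requested extension point.

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.ZipInputStream;
import org.apache.kafka.connect.file.FileStreamSourceTask;

// Assumes a hypothetical protected hook in FileStreamSourceTask that lets
// subclasses wrap the raw file stream before line-by-line reading begins.
public class ZipFileStreamSourceTask extends FileStreamSourceTask {
    @Override
    protected InputStream wrapStream(InputStream raw) throws IOException {
        ZipInputStream zip = new ZipInputStream(raw);
        zip.getNextEntry(); // position the stream at the first entry's data
        return zip;
    }
}
{code}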



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10341) Add version 2.6 to streams and systems tests

2020-08-04 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10341.
---
  Reviewer: Matthias J. Sax
Resolution: Fixed

Merged the PR to the `trunk` branch, and did not backport.

> Add version 2.6 to streams and systems tests
> 
>
> Key: KAFKA-10341
> URL: https://issues.apache.org/jira/browse/KAFKA-10341
> Project: Kafka
>  Issue Type: Task
>  Components: build, streams, system tests
>Affects Versions: 2.7.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Major
> Fix For: 2.7.0
>
>
> Part of the [2.6.0 release 
> process|https://cwiki.apache.org/confluence/display/KAFKA/Release+Process#ReleaseProcess-AnnouncetheRC].
>  This will be merged only to `trunk` for inclusion in 2.7.0
> See KAFKA-9779 for the changes made for the 2.5 release.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10359) Test failure during verification build of AK 2.6.0 RC2

2020-08-04 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-10359:
-

 Summary: Test failure during verification build of AK 2.6.0 RC2
 Key: KAFKA-10359
 URL: https://issues.apache.org/jira/browse/KAFKA-10359
 Project: Kafka
  Issue Type: Bug
  Components: tools, unit tests
Affects Versions: 2.6.0
Reporter: Randall Hauch


The following error was reported by [~gshapira_impala_35cc] when she was 
verifying AK 2.6.0 RC2:
{noformat}
org.apache.kafka.trogdor.agent.AgentTest.testAgentGetStatus failed, log
available in
/Users/gwenshap/releases/2.6.0-rc2/kafka-2.6.0-src/tools/build/reports/testOutput/org.apache.kafka.trogdor.agent.AgentTest.testAgentGetStatus.test.stdout

org.apache.kafka.trogdor.agent.AgentTest > testAgentGetStatus FAILED
java.lang.RuntimeException:
at 
org.apache.kafka.trogdor.rest.RestExceptionMapper.toException(RestExceptionMapper.java:69)
at
org.apache.kafka.trogdor.rest.JsonRestServer$HttpResponse.body(JsonRestServer.java:285)
at
org.apache.kafka.trogdor.agent.AgentClient.status(AgentClient.java:130)
at
org.apache.kafka.trogdor.agent.AgentTest.testAgentGetStatus(AgentTest.java:115)
{noformat}

No similar issue appears to have been reported previously; it did not occur in 
the Jenkins builds for 2.6.0 RC2, nor was it reported by anyone else.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10358) Remove the 2.12 sitedocs

2020-08-04 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-10358:
-

 Summary: Remove the 2.12 sitedocs
 Key: KAFKA-10358
 URL: https://issues.apache.org/jira/browse/KAFKA-10358
 Project: Kafka
  Issue Type: Task
  Components: release, build
Affects Versions: 2.7.0
Reporter: Randall Hauch


Per [~gshapira_impala_35cc]'s comment during the [AK 2.6.0 RC2 
vote|https://lists.apache.org/thread.html/rc8a3aa6986204adbb9ff326b8de849b3c8bac5b6b2b436e8143afea9%40%3Cdev.kafka.apache.org%3E]:
{quote}
There were two sitedoc files - 2.12 and 2.13, we don't really need two
sitedocs generated. Not a big deal, but maybe worth tracking and fixing.
{quote}

During the release, we're publishing site-docs for both 2.12 and 2.13, but we 
really don't need both. For example, in AK 2.6.0 we published:
{noformat}
...
kafka_2.12-2.6.0-site-docs.tgz
kafka_2.12-2.6.0-site-docs.tgz.asc
kafka_2.12-2.6.0-site-docs.tgz.md5
kafka_2.12-2.6.0-site-docs.tgz.sha1
kafka_2.12-2.6.0-site-docs.tgz.sha512
...
kafka_2.13-2.6.0-site-docs.tgz
kafka_2.13-2.6.0-site-docs.tgz.asc
kafka_2.13-2.6.0-site-docs.tgz.md5
kafka_2.13-2.6.0-site-docs.tgz.sha1
kafka_2.13-2.6.0-site-docs.tgz.sha512
{noformat}

Ideally we would change the build to avoid producing the site-docs for both 
Scala versions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10341) Add version 2.6 to streams and systems tests

2020-08-03 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-10341:
-

 Summary: Add version 2.6 to streams and systems tests
 Key: KAFKA-10341
 URL: https://issues.apache.org/jira/browse/KAFKA-10341
 Project: Kafka
  Issue Type: Task
  Components: build, streams, system tests
Affects Versions: 2.7.0
Reporter: Randall Hauch
Assignee: Randall Hauch


Part of the [2.6.0 release 
process|https://cwiki.apache.org/confluence/display/KAFKA/Release+Process#ReleaseProcess-AnnouncetheRC].

See KAFKA-9779 for the changes made for the 2.5 release.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10329) Enable connector context in logs by default

2020-07-30 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-10329:
-

 Summary: Enable connector context in logs by default
 Key: KAFKA-10329
 URL: https://issues.apache.org/jira/browse/KAFKA-10329
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Affects Versions: 3.0.0
Reporter: Randall Hauch
 Fix For: 3.0.0


When 
[KIP-449|https://cwiki.apache.org/confluence/display/KAFKA/KIP-449%3A+Add+connector+contexts+to+Connect+worker+logs]
 was implemented and released as part of AK 2.3, we chose not to enable this 
extra logging context information by default because it was not backward 
compatible: anyone relying upon the `connect-log4j.properties` file provided 
by the AK distribution would, after an upgrade to AK 2.3 (or later), see a 
different format for their logs, which could break any log-processing 
functionality they relied upon.

However, we should enable this in AK 3.0, whenever that comes. Doing so will 
require a fairly minor KIP to change the `connect-log4j.properties` file 
slightly.

Marked this as BLOCKER since it's a backward incompatible change that we 
definitely want to do in the 3.0.0 release.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10295) ConnectDistributedTest.test_bounce should wait for graceful stop

2020-07-20 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10295.
---
  Reviewer: Randall Hauch
Resolution: Fixed

> ConnectDistributedTest.test_bounce should wait for graceful stop
> 
>
> Key: KAFKA-10295
> URL: https://issues.apache.org/jira/browse/KAFKA-10295
> Project: Kafka
>  Issue Type: Test
>  Components: KafkaConnect
>Affects Versions: 2.3.1, 2.5.0, 2.4.1, 2.6.0
>Reporter: Greg Harris
>Assignee: Greg Harris
>Priority: Minor
> Fix For: 2.3.2, 2.6.0, 2.4.2, 2.5.1, 2.7.0
>
>
> In ConnectDistributedTest.test_bounce, there are flaky failures that appear 
> to follow this pattern:
>  # The test is parameterized for hard bounces, and with Incremental 
> Cooperative Rebalancing enabled (does not appear for protocol=eager)
>  # A source task is on a worker that will experience a hard bounce
>  # The source task has written records which it has not yet committed in 
> source offsets
>  # The worker is hard-bounced, and the source task is lost
>  # Incremental Cooperative Rebalance starts its 
> scheduled.rebalance.max.delay.ms delay before recovering the task
>  # The test ends, connectors and Connect are stopped
>  # The test verifies that the sink connector has only written records that 
> have been committed by the source connector
>  # This verification fails because the source offsets are stale, and there 
> are un-committed records in the topic, and the sink connector has written at 
> least one of them.
> This can be addressed by ensuring that the test waits for the rebalance delay 
> to expire, and for the lost task to recover and commit offsets past the 
> progress it made before the bounce.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10286) Connect system tests should wait for workers to join group

2020-07-20 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10286.
---
  Reviewer: Randall Hauch
Resolution: Fixed

> Connect system tests should wait for workers to join group
> --
>
> Key: KAFKA-10286
> URL: https://issues.apache.org/jira/browse/KAFKA-10286
> Project: Kafka
>  Issue Type: Test
>  Components: KafkaConnect
>Affects Versions: 2.6.0
>Reporter: Greg Harris
>Assignee: Greg Harris
>Priority: Minor
> Fix For: 2.3.2, 2.6.0, 2.4.2, 2.5.1, 2.7.0
>
>
> There are a few flaky test failures for {{connect_distributed_test}} in 
> which one of the workers does not join the group quickly, and the test fails 
> in the following manner:
>  # The test starts each of the connect workers, and waits for their REST APIs 
> to become available
>  # All workers start up, complete plugin scanning, and start their REST API
>  # At least one worker kicks off an asynchronous job to join the group that 
> hangs for an as-yet-unknown reason (30s timeout)
>  # The test continues without all of the members joined
>  # The test makes a call to the REST api that it expects to succeed, and gets 
> an error
>  # The test fails without the worker ever joining the group
> Instead of allowing the test to fail in this manner, we could wait for each 
> worker to join the group with the existing 60s startup timeout. This change 
> would go into effect for all system tests using the 
> {{ConnectDistributedService}}, currently just {{connect_distributed_test}} 
> and {{connect_rest_test}}. 
> Alternatively we could retry the operation that failed, or ensure that we use 
> a known-good worker to continue the test, but these would require more 
> involved code changes. The existing wait-for-startup logic is the most 
> natural place to fix this issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (KAFKA-5722) Refactor ConfigCommand to use the AdminClient

2020-07-13 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch reopened KAFKA-5722:
--

> Refactor ConfigCommand to use the AdminClient
> -
>
> Key: KAFKA-5722
> URL: https://issues.apache.org/jira/browse/KAFKA-5722
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Viktor Somogyi-Vass
>Assignee: Viktor Somogyi-Vass
>Priority: Major
>  Labels: kip, needs-kip
> Fix For: 2.6.0
>
>
> The ConfigCommand currently uses a direct connection to zookeeper. The 
> zookeeper dependency should be deprecated and an AdminClient API created to 
> be used instead.
> This change requires a KIP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-5722) Refactor ConfigCommand to use the AdminClient

2020-07-13 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-5722.
--
Resolution: Duplicate

> Refactor ConfigCommand to use the AdminClient
> -
>
> Key: KAFKA-5722
> URL: https://issues.apache.org/jira/browse/KAFKA-5722
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Viktor Somogyi-Vass
>Assignee: Viktor Somogyi-Vass
>Priority: Major
>  Labels: kip, needs-kip
> Fix For: 2.6.0
>
>
> The ConfigCommand currently uses a direct connection to zookeeper. The 
> zookeeper dependency should be deprecated and an AdminClient API created to 
> be used instead.
> This change requires a KIP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-5722) Refactor ConfigCommand to use the AdminClient

2020-07-13 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-5722.
--
Resolution: Fixed

> Refactor ConfigCommand to use the AdminClient
> -
>
> Key: KAFKA-5722
> URL: https://issues.apache.org/jira/browse/KAFKA-5722
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Viktor Somogyi-Vass
>Assignee: Viktor Somogyi-Vass
>Priority: Major
>  Labels: kip, needs-kip
> Fix For: 2.6.0
>
>
> The ConfigCommand currently uses a direct connection to zookeeper. The 
> zookeeper dependency should be deprecated and an AdminClient API created to 
> be used instead.
> This change requires a KIP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (KAFKA-5722) Refactor ConfigCommand to use the AdminClient

2020-07-13 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch reopened KAFKA-5722:
--

> Refactor ConfigCommand to use the AdminClient
> -
>
> Key: KAFKA-5722
> URL: https://issues.apache.org/jira/browse/KAFKA-5722
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Viktor Somogyi-Vass
>Assignee: Viktor Somogyi-Vass
>Priority: Major
>  Labels: kip, needs-kip
> Fix For: 2.6.0
>
>
> The ConfigCommand currently uses a direct connection to zookeeper. The 
> zookeeper dependency should be deprecated and an AdminClient API created to 
> be used instead.
> This change requires a KIP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9018) Kafka Connect - throw clearer exceptions on serialisation errors

2020-07-01 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9018.
--
Resolution: Fixed

> Kafka Connect - throw clearer exceptions on serialisation errors
> 
>
> Key: KAFKA-9018
> URL: https://issues.apache.org/jira/browse/KAFKA-9018
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 2.5.0, 2.4.1
>Reporter: Robin Moffatt
>Assignee: Mario Molina
>Priority: Minor
> Fix For: 2.7.0
>
>
> When Connect fails on a deserialisation error, it doesn't show whether it was 
> the *key or value* that threw the error, nor does it give the user any 
> indication of the *topic/partition/offset* of the message. Kafka Connect 
> should be improved to return this information.
> Example message that user will get (in this case caused by reading non-Avro 
> data with the Avro converter)
> {code:java}
> Caused by: org.apache.kafka.connect.errors.DataException: Failed to 
> deserialize data for topic sample_topic to Avro:
>  at 
> io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:110)
>  at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:487)
>  at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
>  at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
>  ... 13 more
>  Caused by: org.apache.kafka.common.errors.SerializationException: Error 
> deserializing Avro message for id -1
>  Caused by: org.apache.kafka.common.errors.SerializationException: Unknown 
> magic byte!{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10153) Error Reporting in Connect Documentation

2020-07-01 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10153.
---
  Reviewer: Randall Hauch
Resolution: Fixed

> Error Reporting in Connect Documentation
> 
>
> Key: KAFKA-10153
> URL: https://issues.apache.org/jira/browse/KAFKA-10153
> Project: Kafka
>  Issue Type: Task
>  Components: documentation, KafkaConnect
>Affects Versions: 2.6.0
>Reporter: Aakash Shah
>Assignee: Aakash Shah
>Priority: Major
> Fix For: 2.6.0, 2.7.0
>
>
> Add documentation for error reporting in Kafka Connect.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10147) MockAdminClient#describeConfigs(Collection) is unable to handle broker resource

2020-06-17 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10147.
---
Fix Version/s: 2.7.0
 Reviewer: Boyang Chen
   Resolution: Fixed

Merged to `trunk` and backported to `2.6`.

> MockAdminClient#describeConfigs(Collection) is unable to 
> handle broker resource
> ---
>
> Key: KAFKA-10147
> URL: https://issues.apache.org/jira/browse/KAFKA-10147
> Project: Kafka
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Blocker
> Fix For: 2.6.0, 2.7.0
>
>
> MockAdminClient#describeConfigs(Collection) has new 
> implementation introduced by 
> https://github.com/apache/kafka/commit/48b56e533b3ff22ae0e2cf7fcc649e7df19f2b06.
>  It does not handle broker resource so 
> ReassignPartitionsUnitTest#testModifyBrokerThrottles throws NPE



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-7239) Kafka Connect secret externalization not working

2020-06-16 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-7239.
--
Resolution: Invalid

Closing at the request of the reporter.

> Kafka Connect secret externalization not working
> 
>
> Key: KAFKA-7239
> URL: https://issues.apache.org/jira/browse/KAFKA-7239
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: satyanarayan komandur
>Priority: Major
>
> I used the Kafka FileConfigProvider to externalize properties like 
> connection.user and connection.password for the JDBC source connector. I 
> noticed that the values in the connection properties are being replaced after 
> the connector has attempted to establish a connection with the original 
> (untransformed) key/value pairs. This results in a connection failure. I am 
> not sure whether this issue belongs to the Kafka Connect framework or is an 
> issue with the JDBC source connector.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9374) Worker can be disabled by blocked connectors

2020-06-11 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9374.
--
  Reviewer: Konstantine Karantasis
Resolution: Fixed

Merged to `trunk` and backported to the `2.6` branch for inclusion in 2.6.0.

> Worker can be disabled by blocked connectors
> 
>
> Key: KAFKA-9374
> URL: https://issues.apache.org/jira/browse/KAFKA-9374
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, 2.0.0, 2.0.1, 2.1.0, 
> 2.2.0, 2.1.1, 2.3.0, 2.2.1, 2.2.2, 2.4.0, 2.3.1
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 2.6.0
>
>
> If a connector hangs during any of its {{initialize}}, {{start}}, {{stop}}, 
> {{taskConfigs}}, {{taskClass}}, {{version}}, {{config}}, or {{validate}} 
> methods, the worker will be disabled for some types of requests thereafter, 
> including connector creation, connector reconfiguration, and connector 
> deletion.
>  -This only occurs in distributed mode and is due to the threading model used 
> by the 
> [DistributedHerder|https://github.com/apache/kafka/blob/03f763df8a8d9482d8c099806336f00cf2521465/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedHerder.java]
>  class.- This affects both distributed and standalone mode. Distributed 
> herders perform some connector work synchronously in their {{tick}} thread, 
> which also handles group membership and some REST requests. The majority of 
> the herder methods for the standalone herder are {{synchronized}}, including 
> those for creating, updating, and deleting connectors; as long as one of 
> those methods blocks, all subsequent calls to any of these methods will also 
> be blocked.
>  
> One potential solution could be to treat connectors that fail to start, stop, 
> etc. in time similarly to tasks that fail to stop within the [task graceful 
> shutdown timeout 
> period|https://github.com/apache/kafka/blob/03f763df8a8d9482d8c099806336f00cf2521465/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfig.java#L121-L126]
>  by handling all connector interactions on a separate thread, waiting for 
> them to complete within a timeout, and abandoning the thread (and 
> transitioning the connector to the {{FAILED}} state, if it has been created 
> at all) if that timeout expires.
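A hedged sketch of the timeout pattern the last paragraph proposes, not the actual herder code. `connector`, `props`, and `timeoutMs` are assumed to come from the surrounding worker.

{code:java}
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import org.apache.kafka.connect.connector.Connector;

static void startWithTimeout(Connector connector, Map<String, String> props, long timeoutMs) {
    ExecutorService executor = Executors.newSingleThreadExecutor();
    Future<?> start = executor.submit(() -> connector.start(props));
    try {
        start.get(timeoutMs, TimeUnit.MILLISECONDS);
    } catch (TimeoutException e) {
        start.cancel(true); // abandon the hung thread...
        // ...and transition the connector to FAILED so the herder's tick
        // thread (or the standalone herder's lock) stays available.
    } catch (InterruptedException | ExecutionException e) {
        // start() threw, or we were interrupted: also a startup failure.
    }
}
{code}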



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9969) ConnectorClientConfigRequest is loaded in isolation and throws LinkageError

2020-06-11 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9969.
--
  Reviewer: Konstantine Karantasis
Resolution: Fixed

[~kkonstantine] merged to `trunk` and backported to:
* `2.6` for inclusion in upcoming 2.6.0
* `2.5` for inclusion in upcoming 2.5.1
* `2.4` for inclusion in a future 2.4.2
* `2.3` for inclusion in a future 2.3.2

> ConnectorClientConfigRequest is loaded in isolation and throws LinkageError
> ---
>
> Key: KAFKA-9969
> URL: https://issues.apache.org/jira/browse/KAFKA-9969
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.5.0, 2.4.1
>Reporter: Greg Harris
>Assignee: Greg Harris
>Priority: Major
> Fix For: 2.3.2, 2.6.0, 2.4.2, 2.5.1
>
>
> ConnectorClientConfigRequest (added by 
> [KIP-458|https://cwiki.apache.org/confluence/display/KAFKA/KIP-458%3A+Connector+Client+Config+Override+Policy])
>  is a class in connect-api, and should always be loaded by the system 
> classloader. If a plugin packages the connect-api jar, the REST API may fail 
> with the following stacktrace:
> {noformat}
> java.lang.LinkageError: loader constraint violation: loader (instance of 
> sun/misc/Launcher$AppClassLoader) previously initiated loading for a 
> different type with name 
> "org/apache/kafka/connect/connector/policy/ConnectorClientConfigRequest" at 
> java.lang.ClassLoader.defineClass1(Native Method) at 
> java.lang.ClassLoader.defineClass(ClassLoader.java:763) at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at 
> java.net.URLClassLoader.defineClass(URLClassLoader.java:468) at 
> java.net.URLClassLoader.access$100(URLClassLoader.java:74) at 
> java.net.URLClassLoader$1.run(URLClassLoader.java:369) at 
> java.net.URLClassLoader$1.run(URLClassLoader.java:363) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:362) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:424) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:357) at 
> org.apache.kafka.connect.runtime.AbstractHerder.validateClientOverrides(AbstractHerder.java:416)
>  at 
> org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:342)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder$6.call(DistributedHerder.java:745)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder$6.call(DistributedHerder.java:742)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:342)
>  at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:282)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  ... 1 more
> {noformat}
> It appears that the other class in org.apache.kafka.connect.connector.policy, 
> ConnectorClientConfigOverridePolicy, had a similar issue in KAFKA-8415 and 
> received a fix.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9216) Enforce connect internal topic configuration at startup

2020-06-10 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9216.
--
  Reviewer: Randall Hauch
Resolution: Fixed

Merged the second PR, which enforces the `cleanup.policy` setting on Connect's 
three internal topics, to `trunk`, and cherry-picked it to the `2.6` branch 
(for the upcoming 2.6.0). However, merging to earlier branches would require 
too many changes in integration tests.
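A hedged sketch of the kind of one-partition startup check the report below asks for, written against the plain Admin API; the real enforcement lives inside Connect's config backing store.

{code:java}
import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.config.ConfigException;

static void requireSinglePartition(Admin admin, String configTopic) throws Exception {
    Map<String, TopicDescription> descriptions =
            admin.describeTopics(Collections.singleton(configTopic)).all().get();
    int partitions = descriptions.get(configTopic).partitions().size();
    if (partitions != 1) {
        // Fail fast at startup instead of forcing endless rebalances later.
        throw new ConfigException("Topic '" + configTopic
                + "' must have exactly one partition, but has " + partitions);
    }
}
{code}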

> Enforce connect internal topic configuration at startup
> ---
>
> Key: KAFKA-9216
> URL: https://issues.apache.org/jira/browse/KAFKA-9216
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Randall Hauch
>Assignee: Evelyn Bayes
>Priority: Major
> Fix For: 2.3.2, 2.6.0, 2.4.2, 2.5.1
>
>
> Users sometimes create Connect's internal configuration topic with more than 
> one partition. Exactly one partition is expected, however, and using more 
> than one leads to weird behavior that is sometimes not easy to spot.
> Here's one example of a log message:
> {noformat}
> "textPayload": "[2019-11-20 11:12:14,049] INFO [Worker clientId=connect-1, 
> groupId=td-connect-server] Current config state offset 284 does not match 
> group assignment 274. Forcing rebalance. 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder:942)\n"
> {noformat}
> Would it be possible to add a check in the KafkaConfigBackingStore and 
> prevent the worker from starting if the config topic's partition count != 1?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9845) plugin.path property does not work with config provider

2020-06-10 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9845.
--
Fix Version/s: 2.7.0
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to `trunk`, and backported to `2.6` (for upcoming 2.6.0), `2.5` (for 
upcoming 2.5.1), and `2.4` (for future 2.4.2).

> plugin.path property does not work with config provider
> ---
>
> Key: KAFKA-9845
> URL: https://issues.apache.org/jira/browse/KAFKA-9845
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.3.0, 2.4.0, 2.3.1, 2.5.0, 2.4.1
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Minor
> Fix For: 2.6.0, 2.4.2, 2.5.1, 2.7.0
>
>
> The config provider mechanism doesn't work if used for the {{plugin.path}} 
> property of a standalone or distributed Connect worker. This is because the 
> {{Plugins}} instance which performs plugin path scanning is created using the 
> raw worker config, pre-transformation (see 
> [ConnectStandalone|https://github.com/apache/kafka/blob/371ad143a6bb973927c89c0788d048a17ebac91a/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectStandalone.java#L79]
>  and 
> [ConnectDistributed|https://github.com/apache/kafka/blob/371ad143a6bb973927c89c0788d048a17ebac91a/connect/runtime/src/main/java/org/apache/kafka/connect/cli/ConnectDistributed.java#L91]).
> Unfortunately, because config providers are loaded as plugins, there's a 
> circular dependency issue here. The {{Plugins}} instance needs to be created 
> _before_ the {{DistributedConfig}}/{{StandaloneConfig}} is created in order 
> for the config providers to be loaded correctly, and the config providers 
> need to be loaded in order to perform their logic on any properties 
> (including the {{plugin.path}} property).
> There is no clear fix for this issue in the code base, and the only known 
> workaround is to refrain from using config providers for the {{plugin.path}} 
> property.
> A couple improvements could potentially be made to improve the UX when this 
> issue arises:
>  #  Alter the config logging performed by the {{DistributedConfig}} and 
> {{StandaloneConfig}} classes to _always_ log the raw value for the 
> {{plugin.path}} property. Right now, the transformed value is logged even 
> though it isn't used, which is likely to cause confusion.
>  # Issue a {{WARN}}- or even {{ERROR}}-level log message when it's detected 
> that the user is attempting to use config providers for the {{plugin.path}} 
> property, which states that config providers cannot be used for that specific 
> property, instructs them to change the value for the property accordingly, 
> and/or informs them of the actual value that the framework will use for that 
> property when performing plugin path scanning.
> We should _not_ throw an error on startup if this condition is detected, as 
> this could cause previously-functioning, benignly-misconfigured Connect 
> workers to fail to start after an upgrade.
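
For illustration, the kind of worker configuration that runs into this 
limitation looks roughly like the following (the file path and key are made 
up); the config provider reference in the {{plugin.path}} value is never 
resolved before plugin scanning:
{noformat}
# This does NOT work: plugin.path is read before config providers are applied.
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
plugin.path=${file:/etc/kafka/connect-paths.properties:plugin.path}
{noformat}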



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-6942) Connect connectors api doesn't show versions of connectors

2020-06-10 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-6942.
--
Resolution: Invalid

I'm going to close this as INVALID because the versions are available in the 
API, as noted above.

> Connect connectors api doesn't show versions of connectors
> --
>
> Key: KAFKA-6942
> URL: https://issues.apache.org/jira/browse/KAFKA-6942
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Affects Versions: 1.1.0
>Reporter: Antony Stubbs
>Priority: Minor
>  Labels: needs-kip
>
> Would be very useful to have the connector list API response also return the 
> version of the installed connectors.
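
As the resolution notes, the versions are already exposed through the REST 
API's {{/connector-plugins}} endpoint; for example (host and output 
illustrative and abbreviated):
{noformat}
$ curl -s http://localhost:8083/connector-plugins
[{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":"sink","version":"2.6.0"}, ...]
{noformat}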



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10115) Incorporate errors.tolerance with the Errant Record Reporter

2020-06-10 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10115.
---
  Reviewer: Randall Hauch
Resolution: Fixed

Merged to `2.6` rather than `trunk` (accidentally) and cherry-picked to `trunk`.

> Incorporate errors.tolerance with the Errant Record Reporter
> 
>
> Key: KAFKA-10115
> URL: https://issues.apache.org/jira/browse/KAFKA-10115
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 2.6.0
>Reporter: Aakash Shah
>Assignee: Aakash Shah
>Priority: Major
> Fix For: 2.6.0
>
>
> The errors.tolerance config is currently not being used when using the Errant 
> Record Reporter. If errors.tolerance is none, then the task should fail after 
> sending the errant record to the DLQ in Kafka.
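
For reference, the relevant error-handling settings on a sink connector look 
roughly like this (the DLQ topic name is illustrative):
{noformat}
errors.tolerance=none
errors.deadletterqueue.topic.name=my-connector-dlq
errors.deadletterqueue.context.headers.enable=true
{noformat}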



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9066) Kafka Connect JMX : source & sink task metrics missing for tasks in failed state

2020-06-10 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9066.
--
Fix Version/s: 2.7.0
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to `trunk`, and backported to `2.6` (for upcoming 2.6.0). I'll file a 
separate issue to backport this to `2.5` (since we're in-progress on releasing 
2.5.1) and `2.4`.

> Kafka Connect JMX : source & sink task metrics missing for tasks in failed 
> state
> 
>
> Key: KAFKA-9066
> URL: https://issues.apache.org/jira/browse/KAFKA-9066
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.1.1
>Reporter: Mikołaj Stefaniak
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 2.6.0, 2.7.0
>
>
> h2. Overview
> Kafka Connect exposes various metrics via JMX. Those metrics can be exported 
> i.e. by _Prometheus JMX Exporter_ for further processing.
> One of crucial attributes is connector's *task status.*
> According to official Kafka docs, status is available as +status+ attribute 
> of following MBean:
> {quote}kafka.connect:type=connector-task-metrics,connector="{connector}",task="{task}" 
> status - The status of the connector task. One of 'unassigned', 'running', 
> 'paused', 'failed', or 'destroyed'.
> {quote}
> h2. Issue
> Generally +connector-task-metrics+ are exposed properly for tasks in +running+ 
> status, but not exposed at all if the task is +failed+.
> Failed Task *appears* properly with failed status when queried via *REST API*:
>  
> {code:java}
> $ curl -X GET -u 'user:pass' 
> http://kafka-connect.mydomain.com/connectors/customerconnector/status
> {"name":"customerconnector","connector":{"state":"RUNNING","worker_id":"kafka-connect.mydomain.com:8080"},"tasks":[{"id":0,"state":"FAILED","worker_id":"kafka-connect.mydomain.com:8080","trace":"org.apache.kafka.connect.errors.ConnectException:
>  Received DML 'DELETE FROM mysql.rds_sysinfo .."}],"type":"source"}
> $ {code}
>  
> Failed Task *doesn't appear* as bean with +connector-task-metrics+ type when 
> queried via *JMX*:
>  
> {code:java}
> $ echo "beans -d kafka.connect" | java -jar 
> target/jmxterm-1.1.0-SNAPSHOT-uber.jar -l localhost:8081 -n -v silent | grep 
> connector=customerconnector
> kafka.connect:connector=customerconnector,task=0,type=task-error-metrics
> kafka.connect:connector=customerconnector,type=connector-metrics
> $
> {code}
> h2. Expected result
> It is expected, that bean with +connector-task-metrics+ type will appear also 
> for tasks that failed.
> Below is example of how beans are properly registered for tasks in Running 
> state:
>  
> {code:java}
> $ echo "get -b 
> kafka.connect:connector=sinkConsentSubscription-1000,task=0,type=connector-task-metrics
>  status" | java -jar target/jmxterm-1.1.0-SNAPSHOT-uber.jar -l 
> localhost:8081 -n -v silent
> status = running;
> $
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10146) Backport KAFKA-9066 to 2.5 and 2.4 branches

2020-06-10 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-10146:
-

 Summary: Backport KAFKA-9066 to 2.5 and 2.4 branches
 Key: KAFKA-10146
 URL: https://issues.apache.org/jira/browse/KAFKA-10146
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Randall Hauch
Assignee: Randall Hauch
 Fix For: 2.4.2, 2.5.2


KAFKA-9066 was merged on the same day we were trying to release 2.5.1, so this 
was not backported at the time. However, once 2.5.1 is out the door, the 
`775f0d484` commit on `trunk` should be backported to the `2.5` and `2.4` 
branches.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9468) config.storage.topic partition count issue is hard to debug

2020-06-07 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9468.
--
  Assignee: Randall Hauch
Resolution: Fixed

> config.storage.topic partition count issue is hard to debug
> ---
>
> Key: KAFKA-9468
> URL: https://issues.apache.org/jira/browse/KAFKA-9468
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 1.0.2, 1.1.1, 2.0.1, 2.1.1, 2.2.2, 2.4.0, 2.3.1
>Reporter: Evelyn Bayes
>Assignee: Randall Hauch
>Priority: Minor
> Fix For: 2.3.2, 2.6.0, 2.4.2, 2.5.1
>
>
> When you run Connect in distributed mode with 2 or more workers and 
> config.storage.topic has more than 1 partition, you can end up with one of 
> the workers rebalancing endlessly:
> [2020-01-13 12:53:23,535] INFO [Worker clientId=connect-1, 
> groupId=connect-cluster] Current config state offset 37 is behind group 
> assignment 63, reading to end of config log 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
>  [2020-01-13 12:53:23,584] INFO [Worker clientId=connect-1, 
> groupId=connect-cluster] Finished reading to end of log and updated config 
> snapshot, new config log offset: 37 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
>  [2020-01-13 12:53:23,584] INFO [Worker clientId=connect-1, 
> groupId=connect-cluster] Current config state offset 37 does not match group 
> assignment 63. Forcing rebalance. 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
>  
> In case anyone viewing this doesn't know: this topic is only ever meant to 
> be created with one partition.
>  
> *Suggested Solution*
> Make the Connect worker check the partition count when it starts; if the 
> partition count is > 1, Kafka Connect stops and logs the reason why.
> I think this is reasonable, as it would stop users just starting out from 
> building it incorrectly and would be easy to fix early. For those upgrading, 
> this would easily be caught in a pre-production environment. And even if they 
> upgraded directly in production, they would only be impacted if they upgraded 
> all Connect workers at the same time.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (KAFKA-9216) Enforce connect internal topic configuration at startup

2020-06-07 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch reopened KAFKA-9216:
--

The previous PR only checked the number of partitions, so I'm going to reopen 
this to add another PR that checks the internal topic cleanup policy, which 
should be `compact` (only), and should not be `delete,compact` or `delete`. 
Using any other topic cleanup policy for the internal topics can lead to lost 
configurations, source offsets, or statuses.
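
For example, a correctly configured internal config topic is created with a 
single partition and a compact-only cleanup policy, along these lines (topic 
name, bootstrap server, and replication factor are illustrative):
{noformat}
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 \
  --topic connect-configs --partitions 1 --replication-factor 3 \
  --config cleanup.policy=compact
{noformat}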

> Enforce connect internal topic configuration at startup
> ---
>
> Key: KAFKA-9216
> URL: https://issues.apache.org/jira/browse/KAFKA-9216
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Randall Hauch
>Assignee: Evelyn Bayes
>Priority: Major
> Fix For: 2.3.2, 2.6.0, 2.4.2, 2.5.1
>
>
> Users sometimes configure Connect's internal configuration topic with more 
> than one partition. Exactly one partition is expected, however, and using 
> more than one leads to odd behavior that is often hard to spot.
> Here's one example of a log message:
> {noformat}
> "textPayload": "[2019-11-20 11:12:14,049] INFO [Worker clientId=connect-1, 
> groupId=td-connect-server] Current config state offset 284 does not match 
> group assignment 274. Forcing rebalance. 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder:942)\n"
> {noformat}
> Would it be possible to add a check in the KafkaConfigBackingStore and 
> prevent the worker from starting if the connect config topic's partition 
> count != 1?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9570) SSL cannot be configured for Connect in standalone mode

2020-06-05 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9570.
--
Fix Version/s: 2.5.1
   2.4.2
   2.6.0
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to `trunk` and backported to the `2.6`, `2.5` and `2.4` branches.

> SSL cannot be configured for Connect in standalone mode
> ---
>
> Key: KAFKA-9570
> URL: https://issues.apache.org/jira/browse/KAFKA-9570
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.1.1, 2.0.2, 2.3.0, 2.1.2, 
> 2.2.1, 2.2.2, 2.4.0, 2.3.1, 2.2.3, 2.5.0, 2.3.2, 2.4.1
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 2.6.0, 2.4.2, 2.5.1
>
>
> When Connect is brought up in standalone mode, if the worker config contains 
> _any_ properties that begin with the {{listeners.https.}} prefix, SSL will 
> not be enabled on the worker.
> This is because the relevant SSL configs are only defined in the [distributed 
> worker 
> config|https://github.com/apache/kafka/blob/ebcdcd9fa94efbff80e52b02c85d4a61c09f850b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/distributed/DistributedConfig.java#L260]
>  instead of the [superclass worker 
> config|https://github.com/apache/kafka/blob/ebcdcd9fa94efbff80e52b02c85d4a61c09f850b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConfig.java].
>  This, in conjunction with [a call 
> to|https://github.com/apache/kafka/blob/ebcdcd9fa94efbff80e52b02c85d4a61c09f850b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/util/SSLUtils.java#L42]
>  
> [AbstractConfig::valuesWithPrefixAllOrNothing|https://github.com/apache/kafka/blob/ebcdcd9fa94efbff80e52b02c85d4a61c09f850b/clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java],
>  causes all configs not defined in the {{WorkerConfig}} used by the worker to 
> be silently dropped when the worker configures its REST server if there is at 
> least one config present with the {{listeners.https.}} prefix.
> Unfortunately, the workaround of specifying all SSL configs without the 
> {{listeners.https.}} prefix will also fail if any passwords need to be 
> specified. This is because the password values in the {{Map}} returned from 
> {{AbstractConfig::valuesWithPrefixAllOrNothing}} aren't parsed as passwords, 
> but the [framework expects them to 
> be|https://github.com/apache/kafka/blob/ebcdcd9fa94efbff80e52b02c85d4a61c09f850b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/util/SSLUtils.java#L87].
>  However, if no keystore, truststore, or key passwords need to be configured, 
> then it should be possible to work around the issue by specifying all of 
> those configurations without a prefix (as long as they don't conflict with 
> any other configs in that namespace).
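
For illustration, a standalone worker config of this shape would come up 
without SSL prior to the fix (paths and password are placeholders):
{noformat}
listeners=https://0.0.0.0:8443
listeners.https.ssl.keystore.location=/path/to/keystore.jks
listeners.https.ssl.keystore.password=changeit
{noformat}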



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10111) SinkTaskContext.errantRecordReporter() added in KIP-610 should be a default method

2020-06-05 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-10111:
-

 Summary: SinkTaskContext.errantRecordReporter() added in KIP-610 
should be a default method
 Key: KAFKA-10111
 URL: https://issues.apache.org/jira/browse/KAFKA-10111
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.6.0
Reporter: Randall Hauch
Assignee: Randall Hauch
 Fix For: 2.6.0


[KIP-610|https://cwiki.apache.org/confluence/display/KAFKA/KIP-610%3A+Error+Reporting+in+Sink+Connectors]
 added a new `errantRecordReporter()` method to `SinkTaskContext`, but the KIP 
didn't make this method a default method. While the AK project can add this 
method to all of its implementations (actual and test), other projects such as 
connector projects might have their own mock implementations just to help test 
the connector implementation. That means when those projects upgrade, they'd 
get compilation problems for their own implementations of `SinkTaskContext`.

Making this method default will save such problems with downstream projects, 
and is actually easy since the method is already defined to return null if no 
reporter is configured.
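
A sketch of the change, assuming the method keeps its documented 
null-when-unconfigured behavior:
{code:java}
public interface SinkTaskContext {
    // ... existing methods elided ...

    // Default implementation returns null, meaning no errant record reporter
    // is configured; existing implementations need no source changes.
    default ErrantRecordReporter errantRecordReporter() {
        return null;
    }
}
{code}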



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10110) ConnectDistributed fails with NPE when Kafka cluster has no ID

2020-06-05 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-10110:
-

 Summary: ConnectDistributed fails with NPE when Kafka cluster has 
no ID
 Key: KAFKA-10110
 URL: https://issues.apache.org/jira/browse/KAFKA-10110
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.6.0
Reporter: Randall Hauch
Assignee: Randall Hauch
 Fix For: 2.6.0


When a Connect worker starts, recent changes from KIP-606 / KAFKA-9960 attempt 
to put the Kafka cluster ID into the new KafkaMetricsContext. But the Kafka 
cluster ID can be null, resulting in an NPE shown in the following log snippet:
{noformat}
[2020-06-04 15:01:02,900] INFO Kafka cluster ID: null 
(org.apache.kafka.connect.util.ConnectUtils)
...
[2020-06-04 15:01:03,271] ERROR Stopping due to error 
(org.apache.kafka.connect.cli.ConnectDistributed)
java.lang.NullPointerException
    at org.apache.kafka.common.metrics.KafkaMetricsContext.lambda$new$0(KafkaMetricsContext.java:48)
    at java.util.HashMap.forEach(HashMap.java:1289)
    at org.apache.kafka.common.metrics.KafkaMetricsContext.<init>(KafkaMetricsContext.java:48)
    at org.apache.kafka.connect.runtime.ConnectMetrics.<init>(ConnectMetrics.java:100)
    at org.apache.kafka.connect.runtime.Worker.<init>(Worker.java:135)
    at org.apache.kafka.connect.runtime.Worker.<init>(Worker.java:121)
    at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:111)
    at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:78)
{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9673) Conditionally apply SMTs

2020-05-28 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9673.
--
Fix Version/s: 2.6.0
 Reviewer: Konstantine Karantasis
   Resolution: Fixed

KIP-585 was approved by the 2.6.0 KIP freeze, and the PR was approved and 
merged to `trunk` before 2.6.0 feature freeze.

> Conditionally apply SMTs
> 
>
> Key: KAFKA-9673
> URL: https://issues.apache.org/jira/browse/KAFKA-9673
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Major
> Fix For: 2.6.0
>
>
> KAFKA-7052 ended up using IAE with a message, rather than NPE, in the case of 
> an SMT being applied to a record lacking a given field. It's still not 
> possible to apply an SMT conditionally, which is what things like Debezium 
> really need in order to apply transformations only to non-schema-change 
> events.
> [~rhauch] suggested a mechanism to conditionally apply any SMT but was 
> concerned about the possibility of a naming collision (assuming it was 
> configured by a simple config)
> I'd like to propose something which would solve this problem without the 
> possibility of such collisions. The idea is to have a higher-level condition, 
> which applies an arbitrary transformation (or transformation chain) according 
> to some predicate on the record. 
> More concretely, it might be configured like this:
> {noformat}
>   transforms.conditionalExtract.type: Conditional
>   transforms.conditionalExtract.transforms: extractInt
>   transforms.conditionalExtract.transforms.extractInt.type: 
> org.apache.kafka.connect.transforms.ExtractField$Key
>   transforms.conditionalExtract.transforms.extractInt.field: c1
>   transforms.conditionalExtract.condition: topic-matches:<pattern>
> {noformat}
> * The {{Conditional}} SMT is configured with its own list of transforms 
> ({{transforms.conditionalExtract.transforms}}) to apply. This would work just 
> like the top level {{transforms}} config, so subkeys can be used to configure 
> these transforms in the usual way.
> * The {{condition}} config defines the predicate for when the transforms are 
> applied to a record, using a {{<type>:<value>}} syntax.
> We could initially support three condition types:
> *{{topic-matches:<pattern>}}* The transformation would be applied if the 
> record's topic name matched the given regular expression pattern. For 
> example, the following would apply the transformation on records being sent 
> to any topic with a name beginning with "my-prefix-":
> {noformat}
>transforms.conditionalExtract.condition: topic-matches:my-prefix-.*
> {noformat}
>
> *{{has-header:<name>}}* The transformation would be applied if the 
> record had at least one header with the given name. For example, the 
> following will apply the transformation on records with at least one header 
> with the name "my-header":
> {noformat}
>transforms.conditionalExtract.condition: has-header:my-header
> {noformat}
>
> *{{not:<condition>}}* This would negate the result of another named 
> condition using the condition config prefix. For example, the following will 
> apply the transformation on records which lack any header with the name 
> my-header:
> {noformat}
>   transforms.conditionalExtract.condition: not:hasMyHeader
>   transforms.conditionalExtract.condition.hasMyHeader: 
> has-header:my-header
> {noformat}
> I foresee one implementation concern with this approach, which is that 
> currently {{Transformation}} has to return a fixed {{ConfigDef}}, and this 
> proposal would require something more flexible in order to allow the config 
> parameters to depend on the listed transform aliases (and similarly for named 
> predicate used for the {{not:}} predicate). I think this could be done by 
> adding a {{default}} method to {{Transformation}} for getting the ConfigDef 
> given the config, for example.
> Obviously this would require a KIP, but before I spend any more time on this 
> I'd be interested in your thoughts [~rhauch], [~rmoff], [~gunnar.morling].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9971) Error Reporting in Sink Connectors

2020-05-28 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9971.
--
Fix Version/s: 2.6.0
 Reviewer: Randall Hauch
 Assignee: Aakash Shah
   Resolution: Fixed

Merged to `trunk` for inclusion in the upcoming 2.6.0 release. This was 
approved and merged before feature freeze.

> Error Reporting in Sink Connectors
> --
>
> Key: KAFKA-9971
> URL: https://issues.apache.org/jira/browse/KAFKA-9971
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 2.6.0
>Reporter: Aakash Shah
>Assignee: Aakash Shah
>Priority: Critical
> Fix For: 2.6.0
>
>
> Currently, 
> [KIP-298|https://cwiki.apache.org/confluence/display/KAFKA/KIP-298%3A+Error+Handling+in+Connect]
>  provides error handling in Kafka Connect that includes functionality such as 
> retrying, logging, and sending errant records to a dead letter queue. 
> However, the dead letter queue functionality from KIP-298 only supports error 
> reporting within the contexts of the transform operation and the key, value, 
> and header converter operations. Within the context of the {{put(...)}} 
> method in sink connectors, there is no support for dead letter queue/error 
> reporting functionality. 
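
The KIP-610 work resolved here adds an errant record reporter that a sink task 
can obtain from its context and use inside {{put(...)}}. A minimal usage 
sketch, with the connector-specific processing elided ({{process}} is a 
hypothetical method):
{code:java}
import java.util.Collection;
import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.sink.ErrantRecordReporter;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public abstract class ExampleSinkTask extends SinkTask {
    @Override
    public void put(Collection<SinkRecord> records) {
        // May be null if the worker or connector predates KIP-610.
        ErrantRecordReporter reporter = context.errantRecordReporter();
        for (SinkRecord record : records) {
            try {
                process(record); // hypothetical connector-specific logic
            } catch (Exception e) {
                if (reporter != null) {
                    // Routed to the DLQ (or logged) per the errors.* config.
                    reporter.report(record, e);
                } else {
                    throw new ConnectException(e);
                }
            }
        }
    }

    protected abstract void process(SinkRecord record);
}
{code}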



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9960) Metrics Reporter should support additional context tags

2020-05-27 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9960.
--
Resolution: Fixed

Merged the PR to the `trunk` branch for inclusion in the AK 2.6.0 release.

> Metrics Reporter should support additional context tags
> ---
>
> Key: KAFKA-9960
> URL: https://issues.apache.org/jira/browse/KAFKA-9960
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
>Priority: Major
> Fix For: 2.6.0
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-606%3A+Add+Metadata+Context+to+MetricsReporter
> MetricsReporters often rely on additional context that is currently hard to 
> access or propagate through an application. The KIP linked above proposes to 
> address those shortcomings.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-6755) MaskField SMT should optionally take a literal value to use instead of using null

2020-05-24 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-6755.
--
Fix Version/s: 2.6.0
 Reviewer: Randall Hauch
   Resolution: Fixed

[KIP-437|https://cwiki.apache.org/confluence/display/KAFKA/KIP-437%3A+Custom+replacement+for+MaskField+SMT]
 was approved, and the PR was merged to the `trunk` branch for inclusion in the 
upcoming 2.6.0.

> MaskField SMT should optionally take a literal value to use instead of using 
> null
> -
>
> Key: KAFKA-6755
> URL: https://issues.apache.org/jira/browse/KAFKA-6755
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Randall Hauch
>Assignee: Valeria Vasylieva
>Priority: Major
>  Labels: needs-kip, newbie
> Fix For: 2.6.0
>
>   Original Estimate: 8h
>  Remaining Estimate: 8h
>
> The existing {{org.apache.kafka.connect.transforms.MaskField}} SMT always 
> masks a field with the null-equivalent value for that field's type. It'd be 
> nice to *optionally* be able to specify a literal replacement value, where 
> the SMT would convert the literal string value in the configuration to the 
> desired type (using the new {{Values}} methods).
> Use cases: mask out the IP address, or SSN, or other personally identifiable 
> information (PII).
> Since this changes the API, it will require a KIP.
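
With KIP-437 merged, a literal replacement can be supplied via the transform's 
new {{replacement}} config; an illustrative connector snippet:
{noformat}
transforms=maskSsn
transforms.maskSsn.type=org.apache.kafka.connect.transforms.MaskField$Value
transforms.maskSsn.fields=ssn
transforms.maskSsn.replacement=XXX-XX-XXXX
{noformat}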



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9767) Basic auth extension should have logging

2020-05-24 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9767.
--
Fix Version/s: 2.6.0
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to the `trunk` branch for inclusion in the upcoming 2.6.0 release.

> Basic auth extension should have logging
> 
>
> Key: KAFKA-9767
> URL: https://issues.apache.org/jira/browse/KAFKA-9767
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Minor
> Fix For: 2.6.0
>
>
> The three classes for Connect's basic auth REST extension are pretty light on 
> logging, which makes debugging issues and interplay between the REST 
> extension and whatever JAAS login module the user may have configured 
> difficult. We should add some more extensive logging to these classes, while 
> taking care to prevent log spam in the case of repeated unauthenticated 
> requests or leaking of sensitive information.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9944) Allow HTTP Response Headers to be Configured for Kafka Connect

2020-05-24 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9944.
--
  Reviewer: Randall Hauch
Resolution: Fixed

KIP-577 has been approved, and I merged the PR to the `trunk` branch for 
inclusion in the upcoming 2.6.0 release.

> Allow HTTP Response Headers to be Configured for Kafka Connect
> --
>
> Key: KAFKA-9944
> URL: https://issues.apache.org/jira/browse/KAFKA-9944
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Jeff Huang
>Assignee: Jeff Huang
>Priority: Minor
>  Labels: need-kip
> Fix For: 2.6.0
>
>
> More details here: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP+577%3A+Allow+HTTP+Response+Headers+to+be+Configured+for+Kafka+Connect
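
KIP-577 adds a {{response.http.headers.config}} worker property that takes a 
comma-separated list of header directives; an illustrative value:
{noformat}
response.http.headers.config=add X-Frame-Options: DENY, add Cache-Control: no-cache
{noformat}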



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9780) Deprecate commit records without record metadata

2020-05-21 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9780.
--
  Reviewer: Randall Hauch
Resolution: Fixed

Merged to `trunk` after 
[KIP-586|https://cwiki.apache.org/confluence/display/KAFKA/KIP-586%3A+Deprecate+commit+records+without+record+metadata]
 has been adopted

> Deprecate commit records without record metadata
> 
>
> Key: KAFKA-9780
> URL: https://issues.apache.org/jira/browse/KAFKA-9780
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 2.4.1
>Reporter: Mario Molina
>Assignee: Mario Molina
>Priority: Minor
> Fix For: 2.6.0
>
>
> Since KIP-382 (MirrorMaker 2.0), a new {{commitRecord}} method has been 
> included in the {{SourceTask}} class to be called by the worker, adding a new 
> parameter with the record metadata. The old {{commitRecord}} method is called 
> from the new one, and it's preserved just for backwards compatibility.
> The idea is to deprecate the old method so that we can remove it in a future 
> release.
> There is a KIP for this ticket: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-586%3A+Deprecate+commit+records+without+record+metadata]
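
A sketch of the resulting {{SourceTask}} API shape (signatures abbreviated and 
surrounding members elided):
{code:java}
import org.apache.kafka.clients.producer.RecordMetadata;

public abstract class SourceTask implements Task {
    // Deprecated per KIP-586: no record metadata is available here.
    @Deprecated
    public void commitRecord(SourceRecord record) throws InterruptedException {
        // no-op by default
    }

    // Preferred overload: the worker passes the metadata of the produced
    // record; the default implementation delegates to the deprecated method.
    public void commitRecord(SourceRecord record, RecordMetadata metadata)
            throws InterruptedException {
        commitRecord(record);
    }
}
{code}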



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9537) Abstract transformations in configurations cause unfriendly error message.

2020-05-15 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9537.
--
Fix Version/s: 2.5.1
   2.4.2
   2.6.0
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to `trunk` for inclusion in 2.6.0, and cherry-picked to `2.5` (for 
future 2.5.1 release) and `2.4` (for future 2.4.2 release).

> Abstract transformations in configurations cause unfriendly error message.
> --
>
> Key: KAFKA-9537
> URL: https://issues.apache.org/jira/browse/KAFKA-9537
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.4.0
>Reporter: Jeremy Custenborder
>Assignee: Jeremy Custenborder
>Priority: Minor
> Fix For: 2.6.0, 2.4.2, 2.5.1
>
>
> I was working with a coworker who had a bash script posting a config to 
> Connect with
> {code:java}org.apache.kafka.connect.transforms.ExtractField.$Key{code} in the 
> script. Bash removed the $Key because it wasn't escaped properly, so
> {code:java}
> org.apache.kafka.connect.transforms.ExtractField.{code}
> is what made it to the REST interface. A Class was created for the abstract 
> ExtractField implementation and passed to getConfigDefFromTransformation. 
> It tried to call newInstance, which threw an exception. The following gets 
> returned via the REST interface. 
> {code}
> {
>   "error_code": 400,
>   "message": "Connector configuration is invalid and contains the following 1 
> error(s):\nInvalid value class 
> org.apache.kafka.connect.transforms.ExtractField for configuration 
> transforms.extractString.type: Error getting config definition from 
> Transformation: null\nYou can also find the above list of errors at the 
> endpoint `/{connectorType}/config/validate`"
> }
> {code}
> It would be a much better user experience if we returned something like 
> {code}
> {
>   "error_code": 400,
>   "message": "Connector configuration is invalid and contains the following 1 
> error(s):\nInvalid value class 
> org.apache.kafka.connect.transforms.ExtractField for configuration 
> transforms.extractString.type: Error getting config definition from 
> Transformation: Transformation is abstract and cannot be created.\nYou can 
> also find the above list of errors at the endpoint 
> `/{connectorType}/config/validate`"
> }
> {code}
> or
> {code}
> {
>   "error_code": 400,
>   "message": "Connector configuration is invalid and contains the following 1 
> error(s):\nInvalid value class 
> org.apache.kafka.connect.transforms.ExtractField for configuration 
> transforms.extractString.type: Error getting config definition from 
> Transformation: Transformation is abstract and cannot be created. Did you 
> mean ExtractField$Key, ExtractField$Value?\nYou can also find the above list 
> of errors at the endpoint `/{connectorType}/config/validate`"
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9931) Kafka Connect should accept '-1' as a valid replication factor

2020-04-28 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-9931:


 Summary: Kafka Connect should accept '-1' as a valid replication 
factor
 Key: KAFKA-9931
 URL: https://issues.apache.org/jira/browse/KAFKA-9931
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Affects Versions: 2.5.0
Reporter: Randall Hauch
Assignee: Randall Hauch
 Fix For: 2.6.0


[https://cwiki.apache.org/confluence/display/KAFKA/KIP-464%3A+Defaults+for+AdminClient%23createTopic]

As of KIP-464, the admin client accepts '-1' for the replication factor or 
partition count to mean 'use the broker defaults'. The Kafka Connect framework 
does not currently accept anything less than 1 as a valid replication factor. 
This should be changed so that Connect worker configurations can specify `-1` 
for the internal topic replication factors, to default to the broker's default 
replication factor for new topics.
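
With this change, a distributed worker config could defer to the broker 
defaults for all three internal topics, e.g.:
{noformat}
config.storage.replication.factor=-1
offset.storage.replication.factor=-1
status.storage.replication.factor=-1
{noformat}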



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9883) Connect request to restart task can result in IllegalArgumentError: "uriTemplate" parameter is null

2020-04-23 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9883.
--
Resolution: Fixed

> Connect request to restart task can result in IllegalArgumentError: 
> "uriTemplate" parameter is null
> ---
>
> Key: KAFKA-9883
> URL: https://issues.apache.org/jira/browse/KAFKA-9883
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.4.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Minor
> Fix For: 2.6.0, 2.4.2, 2.5.1
>
>
> When attempting to restart a connector, the following is logged by Connect:
>  
> {code:java}
> ERROR Uncaught exception in REST call to 
> /connectors/my-connector/tasks/0/restart 
> (org.apache.kafka.connect.runtime.rest.errors.ConnectExceptionMapper)
> java.lang.IllegalArgumentException: "uriTemplate" parameter is null.
> at 
> org.glassfish.jersey.uri.internal.JerseyUriBuilder.uri(JerseyUriBuilder.java:189)
> at 
> org.glassfish.jersey.uri.internal.JerseyUriBuilder.uri(JerseyUriBuilder.java:72)
> at javax.ws.rs.core.UriBuilder.fromUri(UriBuilder.java:96)
> at 
> org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource.completeOrForwardRequest(ConnectorsResource.java:263)
> at 
> org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource.completeOrForwardRequest(ConnectorsResource.java:298)
> at 
> org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource.restartTask(ConnectorsResource.java:218)
> {code}
> Resubmitting the restart REST request will usually resolve the problem.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9883) Connect request to restart task can result in IllegalArgumentError: "uriTemplate" parameter is null

2020-04-16 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-9883:


 Summary: Connect request to restart task can result in 
IllegalArgumentError: "uriTemplate" parameter is null
 Key: KAFKA-9883
 URL: https://issues.apache.org/jira/browse/KAFKA-9883
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.4.0
Reporter: Randall Hauch


When attempting to restart a connector, the following is logged by Connect:
{code:java}
ERROR Uncaught exception in REST call to 
/connectors/my-connector/tasks/0/restart 
(org.apache.kafka.connect.runtime.rest.errors.ConnectExceptionMapper)
java.lang.IllegalArgumentException: "uriTemplate" parameter is null.
at 
org.glassfish.jersey.uri.internal.JerseyUriBuilder.uri(JerseyUriBuilder.java:189)
at 
org.glassfish.jersey.uri.internal.JerseyUriBuilder.uri(JerseyUriBuilder.java:72)
at javax.ws.rs.core.UriBuilder.fromUri(UriBuilder.java:96)
at 
org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource.completeOrForwardRequest(ConnectorsResource.java:263)
at 
org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource.completeOrForwardRequest(ConnectorsResource.java:298)
at 
org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource.restartTask(ConnectorsResource.java:218)
{code}
Resubmitting the restart REST request will usually resolve the problem.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9763) Recent changes to Connect's InsertField will fail to inject field on key of tombstone record

2020-03-25 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9763.
--
Resolution: Duplicate

Duplicate of KAFKA-9707, so closing this issue.

> Recent changes to Connect's InsertField will fail to inject field on key of 
> tombstone record
> 
>
> Key: KAFKA-9763
> URL: https://issues.apache.org/jira/browse/KAFKA-9763
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.3, 1.1.2, 2.0.2, 2.1.2, 2.2.2, 2.4.0, 2.3.1, 2.5.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Major
> Fix For: 1.0.3, 1.1.2, 2.0.2, 2.1.2, 2.2.3, 2.3.2, 2.4.2, 2.5.1
>
>
> This is a regression due to the changes for KAFKA-8523.
> KAFKA-8523 was backported to multiple versions, and was released into 2.2.2, 
> 2.3.1, and 2.4.0, and will soon be released in 2.5.0.
> Unfortunately, that fix always makes the `InsertField` SMT skip all tombstone 
> records, even when using the `InsertField$Key`.
> Rather than:
> {code:java}
> private boolean isTombstoneRecord(R record) {
> return record.value() == null;
> }
> {code}
> the correct behavior would be:
> {code:java}
>  private boolean isTombstoneRecord(R record) {
>  return operatingValue(record) == null;
>  }
> {code}
> The method no longer detects just tombstone records, so the code should be 
> refactored to return the record unchanged if the operatingValue for the 
> record (which for `InsertField$Key` is the record key and for 
> `InsertField$Value` is the record value) is null.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9763) Recent changes to Connect's InsertField will fail to inject field on key of tombstone record

2020-03-25 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-9763:


 Summary: Recent changes to Connect's InsertField will fail to 
inject field on key of tombstone record
 Key: KAFKA-9763
 URL: https://issues.apache.org/jira/browse/KAFKA-9763
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.3.1, 2.4.0, 2.2.2, 1.0.3, 1.1.2, 2.0.2, 2.1.2, 2.5.0
Reporter: Randall Hauch
Assignee: Randall Hauch
 Fix For: 1.0.3, 1.1.2, 2.0.2, 2.1.2, 2.2.3, 2.3.2, 2.4.2, 2.5.1


This is a regression due to the changes for KAFKA-8523.

KAFKA-8523 was backported to multiple versions, and was released into 2.2.2, 
2.3.1, and 2.4.0, and will soon be released in 2.5.0.

Unfortunately, that fix always makes the `InsertField` SMT skip all tombstone 
records, even when using the `InsertField$Key`.

Rather than:
{code:java}
private boolean isTombstoneRecord(R record) {
return record.value() == null;
}
{code}
the correct behavior would be:
{code:java}
 private boolean isTombstoneRecord(R record) {
 return operatingValue(record) == null;
 }
{code}
The method no longer detects just tombstone records, so the code should be 
refactored to return the record unchanged if the operatingValue for the record 
(which for `InsertField$Key` is the record key and for `InsertField$Value` is 
the record value) is null.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (KAFKA-7509) Kafka Connect logs unnecessary warnings about unused configurations

2020-03-20 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch reopened KAFKA-7509:
--

> Kafka Connect logs unnecessary warnings about unused configurations
> ---
>
> Key: KAFKA-7509
> URL: https://issues.apache.org/jira/browse/KAFKA-7509
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Major
>
> When running Connect, the logs contain quite a few warnings about "The 
> configuration '{}' was supplied but isn't a known config." This occurs when 
> Connect creates producers, consumers, and admin clients, because the 
> AbstractConfig is logging unused configuration properties upon construction. 
> It's complicated by the fact that the Producer, Consumer, and AdminClient all 
> create their own AbstractConfig instances within the constructor, so we can't 
> even call its {{ignore(String key)}} method.
> See also KAFKA-6793 for a similar issue with Streams.
> There are no arguments in the Producer, Consumer, or AdminClient constructors 
> to control whether the configs log these warnings, so a simpler workaround 
> is to only pass those configuration properties to the Producer, Consumer, and 
> AdminClient that the ProducerConfig, ConsumerConfig, and AdminClientConfig 
> configdefs know about.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9601) Workers log raw connector configs, including values

2020-02-26 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9601.
--
  Reviewer: Randall Hauch
Resolution: Fixed

Thanks for the fix, [~ChrisEgerton]!

Merged to trunk and cherry-picked to the 2.5, 2.4, 2.3, 2.2, 2.1, 2.0, 1.1, and 
1.0 branches; I didn't go back farther since it's unlikely we will issue 
additional patches for earlier branches.

> Workers log raw connector configs, including values
> ---
>
> Key: KAFKA-9601
> URL: https://issues.apache.org/jira/browse/KAFKA-9601
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Critical
> Fix For: 1.0.3, 1.1.2, 2.0.2, 2.1.2, 2.2.3, 2.5.0, 2.3.2, 2.4.1
>
>
> [This line right 
> here|https://github.com/apache/kafka/blob/5359b2e3bc1cf13a301f32490a6630802afc4974/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerConnector.java#L78]
>  logs all configs (key and value) for a connector, which is bad, since it can 
> lead to secrets (db credentials, cloud storage credentials, etc.) being 
> logged in plaintext.
> We can remove this line. Or change it to just log config keys. Or try to do 
> some super-fancy parsing that masks sensitive values. Well, hopefully not 
> that. That sounds like a lot of work.
> Affects all versions of Connect back through 0.10.1.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9556) KIP-558 cannot be fully disabled and when enabled topic reset not working on connector deletion

2020-02-14 Thread Randall Hauch (Jira)
Randall Hauch created KAFKA-9556:


 Summary: KIP-558 cannot be fully disabled and when enabled topic 
reset not working on connector deletion
 Key: KAFKA-9556
 URL: https://issues.apache.org/jira/browse/KAFKA-9556
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.5.0
Reporter: Randall Hauch
Assignee: Konstantine Karantasis
 Fix For: 2.5.0


According to KIP-558, which added the new Connect topic tracking feature, 
Connect should not write topic status records when the feature is disabled. 
However, currently that is not the case: when the new topic tracking (KIP-558) 
feature is disabled, Connect still writes topic status records to the internal 
status topic. 

Also, according to the KIP, Connect should automatically reset the topic status 
when a connector is deleted, but that is not happening.

It'd be good to increase test coverage on the new feature.
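
For reference, KIP-558 exposes the feature through the worker properties 
{{topic.tracking.enable}} and {{topic.tracking.allow.reset}}; the first 
problem above was observed with the feature nominally disabled:
{noformat}
topic.tracking.enable=false
{noformat}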



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9204) ReplaceField transformation fails when encountering tombstone event

2020-02-12 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9204.
--
Fix Version/s: 2.4.1
   2.3.2
   2.5.0
   2.2.3
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to the `trunk`, `2.5`, `2.4`, `2.3`, and `2.2` branches.

> ReplaceField transformation fails when encountering tombstone event
> ---
>
> Key: KAFKA-9204
> URL: https://issues.apache.org/jira/browse/KAFKA-9204
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.3.0
>Reporter: Georgios Kalogiros
>Priority: Major
> Fix For: 2.2.3, 2.5.0, 2.3.2, 2.4.1
>
>
> When applying the {{ReplaceField}} transformation to a tombstone event, an 
> exception is raised:
>  
> {code:java}
> org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error 
> handler
>   at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
>   at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
>   at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:506)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:464)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:320)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
>   at 
> org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
>   at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.connect.errors.DataException: Only Map objects 
> supported in absence of schema for [field replacement], found: null
>   at 
> org.apache.kafka.connect.transforms.util.Requirements.requireMap(Requirements.java:38)
>   at 
> org.apache.kafka.connect.transforms.ReplaceField.applySchemaless(ReplaceField.java:134)
>   at 
> org.apache.kafka.connect.transforms.ReplaceField.apply(ReplaceField.java:127)
>   at 
> org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
>   at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
>   at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
>   ... 14 more
> {code}
> There was a similar bug in the InsertField transformation whose fix got 
> merged recently:
> https://issues.apache.org/jira/browse/KAFKA-8523
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9192) NullPointerException if field in schema not present in value

2020-02-12 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9192.
--
Fix Version/s: 2.4.1
   2.3.2
   2.5.0
   2.2.3
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to the `trunk`, `2.5`, `2.4`, `2.3`, and `2.2` branches.

> NullPointerException if field in schema not present in value
> 
>
> Key: KAFKA-9192
> URL: https://issues.apache.org/jira/browse/KAFKA-9192
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.2.1
>Reporter: Mark Tinsley
>Priority: Major
> Fix For: 2.2.3, 2.5.0, 2.3.2, 2.4.1
>
>
> Given a message:
> {code:java}
> {
>"schema":{
>   "type":"struct",
>   "fields":[
>  {
> "type":"string",
> "optional":true,
> "field":"abc"
>  }
>   ],
>   "optional":false,
>   "name":"foobar"
>},
>"payload":{
>}
> }
> {code}
> I would expect, given the field is optional, for the JsonConverter to still 
> process this value. 
> What happens is I get a null pointer exception, the stacktrace points to this 
> line: 
> https://github.com/apache/kafka/blob/2.1/connect/json/src/main/java/org/apache/kafka/connect/json/JsonConverter.java#L701
>  called by 
> https://github.com/apache/kafka/blob/2.1/connect/json/src/main/java/org/apache/kafka/connect/json/JsonConverter.java#L181
> The issue seems to be that we need to check whether the jsonValue itself is 
> null before checking whether the jsonValue holds a null value.
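
A minimal sketch of the kind of guard needed (the method name is illustrative, 
not the exact code in JsonConverter):
{code:java}
import com.fasterxml.jackson.databind.JsonNode;
import org.apache.kafka.connect.data.Schema;

final class JsonNullGuardExample {
    // Hypothetical guard: a field absent from the payload yields a null
    // JsonNode, so check for that before calling methods such as isNull().
    static Object toConnectValue(Schema schema, JsonNode jsonValue) {
        if (jsonValue == null || jsonValue.isNull()) {
            // Optional field with no value: fall back to the schema default
            // (often null); non-optional fields would be an error elsewhere.
            return schema != null && schema.isOptional() ? schema.defaultValue() : null;
        }
        // ... type-specific conversion elided in this sketch ...
        return jsonValue.asText();
    }
}
{code}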



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-7052) ExtractField SMT throws NPE - needs clearer error message

2020-02-11 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-7052.
--
Fix Version/s: 2.4.1
   2.3.2
   2.5.0
   2.2.3
 Reviewer: Randall Hauch
   Resolution: Fixed

Merged to the `trunk`, `2.5`, `2.4`, `2.3` and `2.2` branches.

> ExtractField SMT throws NPE - needs clearer error message
> -
>
> Key: KAFKA-7052
> URL: https://issues.apache.org/jira/browse/KAFKA-7052
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Robin Moffatt
>Priority: Major
> Fix For: 2.2.3, 2.5.0, 2.3.2, 2.4.1
>
>
> With the following Single Message Transform: 
> {code:java}
> "transforms.ExtractId.type":"org.apache.kafka.connect.transforms.ExtractField$Key",
> "transforms.ExtractId.field":"id"{code}
> Kafka Connect errors with : 
> {code:java}
> java.lang.NullPointerException
> at 
> org.apache.kafka.connect.transforms.ExtractField.apply(ExtractField.java:61)
> at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:38){code}
> There should be a better error message here, identifying the reason for the 
> NPE.
> Version: Confluent Platform 4.1.1



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-7509) Kafka Connect logs unnecessary warnings about unused configurations

2020-02-05 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-7509.
--
Resolution: Won't Fix

> Kafka Connect logs unnecessary warnings about unused configurations
> ---
>
> Key: KAFKA-7509
> URL: https://issues.apache.org/jira/browse/KAFKA-7509
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Major
>
> When running Connect, the logs contain quite a few warnings about "The 
> configuration '{}' was supplied but isn't a known config." This occurs when 
> Connect creates producers, consumers, and admin clients, because the 
> AbstractConfig is logging unused configuration properties upon construction. 
> It's complicated by the fact that the Producer, Consumer, and AdminClient all 
> create their own AbstractConfig instances within the constructor, so we can't 
> even call its {{ignore(String key)}} method.
> See also KAFKA-6793 for a similar issue with Streams.
> There are no arguments in the Producer, Consumer, or AdminClient constructors 
> to control whether the configs log these warnings, so a simpler workaround 
> is to only pass those configuration properties to the Producer, Consumer, and 
> AdminClient that the ProducerConfig, ConsumerConfig, and AdminClientConfig 
> configdefs know about.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9074) Connect's Values class does not parse time or timestamp values from string literals

2020-02-04 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9074.
--
Fix Version/s: 2.4.1
   2.3.2
   2.5.0
 Reviewer: Jason Gustafson
   Resolution: Fixed

Merged to the `trunk`, `2.5`, `2.4`, and `2.3` branches. 

> Connect's Values class does not parse time or timestamp values from string 
> literals
> ---
>
> Key: KAFKA-9074
> URL: https://issues.apache.org/jira/browse/KAFKA-9074
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.1.0, 2.0.0, 2.1.0, 2.2.0, 2.3.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Major
> Fix For: 2.5.0, 2.3.2, 2.4.1
>
>
> The `Values.parseString(String)` method that returns a `SchemaAndValue` is 
> not able to parse a string that contains a time or timestamp literal into a 
> logical time or timestamp value. This is likely because the `:` is a 
> delimiter for the internal parser, and so literal values such as 
> `2019-08-23T14:34:54.346Z` and `14:34:54.346Z` are separated into multiple 
> tokens before matching the pattern.
> The colon can be escaped to prevent the unexpected tokenization, but then the 
> literal string contains the backslash character before each colon, and again 
> the pattern matching for the time and timestamp literal strings fails to 
> match.
> This should be backported as far back as possible: the `Values` class was 
> introduced in AK 1.1.0.
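
For example, one would expect the following to parse into a logical timestamp, 
but before the fix the colon tokenization broke the pattern match (a small 
demonstration sketch):
{code:java}
import org.apache.kafka.connect.data.SchemaAndValue;
import org.apache.kafka.connect.data.Values;

public class ValuesParseExample {
    public static void main(String[] args) {
        // Prior to the fix, the colons in the literal were treated as token
        // delimiters, so this did not yield a Timestamp-typed SchemaAndValue.
        SchemaAndValue parsed = Values.parseString("2019-08-23T14:34:54.346Z");
        System.out.println(parsed.schema() + " -> " + parsed.value());
    }
}
{code}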



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9462) Correct exception message in DistributedHerder

2020-01-24 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9462.
--
  Reviewer: Randall Hauch
Resolution: Fixed

Thanks, [~yuzhih...@gmail.com]. Merged to the `trunk` branch. IMO backporting 
is not really warranted, since this is a very minor change to an exception 
message.

> Correct exception message in DistributedHerder
> --
>
> Key: KAFKA-9462
> URL: https://issues.apache.org/jira/browse/KAFKA-9462
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.4.0
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Trivial
> Fix For: 2.5.0
>
>
> There are a few exception messages in DistributedHerder which were copied 
> from other exception messages.
> This task corrects the messages to reflect the actual condition.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9024) org.apache.kafka.connect.transforms.ValueToKey throws NPE

2020-01-21 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9024.
--
  Reviewer: Randall Hauch
Resolution: Fixed

Merged to `trunk` and backported to the `2.4`, `2.3`, and `2.2` branches, since 
we typically backport only to the last 2-3 branches.

> org.apache.kafka.connect.transforms.ValueToKey throws NPE
> -
>
> Key: KAFKA-9024
> URL: https://issues.apache.org/jira/browse/KAFKA-9024
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Robin Moffatt
>Assignee: Nigel Liang
>Priority: Minor
> Fix For: 2.2.3, 2.5.0, 2.3.2, 2.4.1
>
>
> If a field named in the SMT does not exist a NPE is thrown. This is not 
> helpful to users and should be caught correctly and reported back in a more 
> friendly way.
> For example, importing data from a database with this transform: 
>  
> {code:java}
> transforms = [ksqlCreateKey, ksqlExtractString]
> transforms.ksqlCreateKey.fields = [ID]
> transforms.ksqlCreateKey.type = class 
> org.apache.kafka.connect.transforms.ValueToKey
> transforms.ksqlExtractString.field = ID
> transforms.ksqlExtractString.type = class 
> org.apache.kafka.connect.transforms.ExtractField$Key
> {code}
> If the field name is {{id}} not {{ID}} then the task fails : 
> {code:java}
> org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error 
> handler
>at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
>at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
>at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)
>at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:293)
>at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:229)
>at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
>at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
>at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException
>at 
> org.apache.kafka.connect.transforms.ValueToKey.applyWithSchema(ValueToKey.java:85)
>at org.apache.kafka.connect.transforms.ValueToKey.apply(ValueToKey.java:65)
>at 
> org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
>at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
>at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
>... 11 more
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9083) Various parsing issues in Values class

2020-01-21 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9083.
--
  Reviewer: Randall Hauch
Resolution: Fixed

Merged to the `trunk`, `2.4`, `2.3`, and `2.2` branches; we typically don't 
push bugfixes back further than 2-3 branches.
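
For reference, the first three cases reduce to the following (a minimal 
sketch of the post-fix expectations, using the public {{Values.parseString}} 
API):

{code:java}
import org.apache.kafka.connect.data.SchemaAndValue;
import org.apache.kafka.connect.data.Values;

public class ValuesParsingSketch {
    public static void main(String[] args) {
        // Before the fix the first two threw NullPointerException and the
        // third returned a null schema; afterwards they parse as described.
        SchemaAndValue nullElement = Values.parseString("[null]"); // array with one null element
        SchemaAndValue emptyArray  = Values.parseString("[]");     // empty array
        SchemaAndValue emptyMap    = Values.parseString("{}");     // empty map
        System.out.println(nullElement.value());
        System.out.println(emptyArray.value());
        System.out.println(emptyMap.value());
    }
}
{code}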

> Various parsing issues in Values class
> --
>
> Key: KAFKA-9083
> URL: https://issues.apache.org/jira/browse/KAFKA-9083
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 2.2.3, 2.5.0, 2.3.2, 2.4.1
>
>
> There are a number of small issues with the Connect framework's {{Values}} 
> class that lead to either unexpected exceptions, unintuitive (and arguably 
> incorrect) behavior, or confusing log messages. These include:
>  * A {{NullPointerException}} is thrown when parsing the string {{[null]}} 
> (which should be parsed as an array containing a single null element)
>  * A {{NullPointerException}} is thrown when parsing the string {{[]}} (which 
> should be parsed as an empty array)
>  * The returned schema is null when parsing the string {{{}}} (instead, it 
> should be a map schema, possibly with null key and value schemas)
> * Strings that begin with what appear to be booleans (i.e., the literals 
> {{true}} or {{false}}) and which are followed by token delimiters (e.g., '}', 
> ']', ':', etc.) are parsed as booleans when they should arguably be parsed as 
> strings; for example, the string 'true}' is parsed as the boolean {{true}} 
> but should probably just be parsed as the string 'true}'
>  * Arrays not containing commas are still parsed as arrays in some cases; for 
> example, the string {{[0 1 2]}} is parsed as the array {{[0, 1, 2]}} when it 
> should arguably be parsed as the string literal {{[0 1 2]}}
> * An exception is logged when attempting to parse input as a map when the 
> input doesn't contain a final closing brace; the message states "Map is 
> missing terminating ']'" even though the expected terminating character is 
> actually '}' and not ']'
> * Finally, and possibly most contentious, escape sequences are not stripped 
> from string literals. Thus, an input string containing an escaped delimiter, 
> such as 'foobar\]', is parsed as the literal string 'foobar\]' with the 
> escape retained, which is somewhat unexpected, since that means it is 
> impossible to pass in a string that is parsed as the string literal 
> 'foobar]', and it is the job of the caller to handle stripping of such escape 
> sequences. Given that the escape sequence can itself be escaped, it seems 
> better to automatically strip it from parsed string literals before returning 
> them to the user.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-7489) ConnectDistributedTest is always running broker with dev version

2019-12-06 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-7489.
--
  Reviewer: John Roesler
Resolution: Fixed

Merged this to the `2.3`, `2.2`, and `2.1` branches. As mentioned above, it's 
already fixed on `2.4` and `trunk`.

> ConnectDistributedTest is always running broker with dev version
> 
>
> Key: KAFKA-7489
> URL: https://issues.apache.org/jira/browse/KAFKA-7489
> Project: Kafka
>  Issue Type: Test
>  Components: KafkaConnect, system tests
>Reporter: Andras Katona
>Assignee: Randall Hauch
>Priority: Major
> Fix For: 2.1.2, 2.2.3, 2.3.2
>
>
> h2. Test class
> kafkatest.tests.connect.connect_distributed_test.ConnectDistributedTest
> h2. Details
> _test_broker_compatibility_ is +parametrized+ with different types of 
> brokers, yet the broker version is passed as a string to _setup_services_, so 
> KafkaService ends up initialised with the DEV version.
> This is easy to fix: just wrap _broker_version_ with KafkaVersion:
> {panel}
> self.setup_services(broker_version=KafkaVersion(broker_version),
>  auto_create_topics=auto_create_topics, security_protocol=security_protocol)
> {panel}
> But the test fails with the parameter LATEST_0_9, with the following error 
> message:
> {noformat}
> Kafka Connect failed to start on node: ducker@ducker02 in condition mode: 
> LISTEN
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9184) Redundant task creation and periodic rebalances after zombie worker rejoins the group

2019-12-04 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9184.
--
  Reviewer: Randall Hauch
Resolution: Fixed

> Redundant task creation and periodic rebalances after zombie worker rejoins 
> the group
> -
>
> Key: KAFKA-9184
> URL: https://issues.apache.org/jira/browse/KAFKA-9184
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.4.0, 2.3.2
>Reporter: Konstantine Karantasis
>Assignee: Konstantine Karantasis
>Priority: Blocker
> Fix For: 2.4.0, 2.5.0, 2.3.2
>
>
> First reported here: 
> https://stackoverflow.com/questions/58631092/kafka-connect-assigns-same-task-to-multiple-workers
> There seems to be an issue with task reassignment when a worker rejoins after 
> an unsuccessful join request. The worker seems to be outside the group for a 
> generation, but when it joins again, the same task ends up running on more 
> than one worker.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9258) Connect ConnectorStatusMetricsGroup sometimes NPE

2019-12-03 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9258.
--
  Reviewer: Randall Hauch
Resolution: Fixed

Merged into `trunk` and backported to the `2.4` branch after this was approved 
as a blocker for AK 2.4.0.
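
The general shape of the fix is to tolerate a connector whose metrics have 
already been removed (e.g., concurrently during a rebalance) instead of 
dereferencing a null entry. A minimal sketch with hypothetical names, not the 
exact patch:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class ConnectorStatusMetricsSketch {
    // Hypothetical stand-in for the per-connector task counters kept by
    // ConnectorStatusMetricsGroup.
    private final Map<String, AtomicInteger> taskCounts = new ConcurrentHashMap<>();

    public void recordTaskRemoved(String connector) {
        AtomicInteger count = taskCounts.get(connector);
        if (count == null) {
            return; // metrics already cleaned up for this connector; nothing to do
        }
        count.decrementAndGet();
    }
}
{code}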

> Connect ConnectorStatusMetricsGroup sometimes NPE
> -
>
> Key: KAFKA-9258
> URL: https://issues.apache.org/jira/browse/KAFKA-9258
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.4.0, 2.5.0
>Reporter: Cyrus Vafadari
>Assignee: Cyrus Vafadari
>Priority: Blocker
> Fix For: 2.4.0, 2.5.0
>
>
> java.lang.NullPointerException
>   at 
> org.apache.kafka.connect.runtime.Worker$ConnectorStatusMetricsGroup.recordTaskRemoved(Worker.java:901)
>   at 
> org.apache.kafka.connect.runtime.Worker.awaitStopTask(Worker.java:720)
>   at 
> org.apache.kafka.connect.runtime.Worker.awaitStopTasks(Worker.java:740)
>   at 
> org.apache.kafka.connect.runtime.Worker.stopAndAwaitTasks(Worker.java:758)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.processTaskConfigUpdatesWithIncrementalCooperative(DistributedHerder.java:575)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:396)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:282)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266)
>   at java.util.concurrent.FutureTask.run(FutureTask.java)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9051) Source task source offset reads can block graceful shutdown

2019-11-22 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-9051.
--
Resolution: Fixed

> Source task source offset reads can block graceful shutdown
> ---
>
> Key: KAFKA-9051
> URL: https://issues.apache.org/jira/browse/KAFKA-9051
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.2, 1.1.1, 2.0.1, 2.1.1, 2.3.0, 2.2.1, 2.4.0, 2.5.0
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 2.0.2, 2.1.2, 2.2.3, 2.5.0, 2.3.2, 2.4.1
>
>
> When source tasks request source offsets from the framework, this results in 
> a call to 
> [Future.get()|https://github.com/apache/kafka/blob/8966d066bd2f80c6d8f270423e7e9982097f97b9/connect/runtime/src/main/java/org/apache/kafka/connect/storage/OffsetStorageReaderImpl.java#L79]
>  with no timeout. In distributed workers, the future is blocked on a 
> successful [read to the 
> end|https://github.com/apache/kafka/blob/8966d066bd2f80c6d8f270423e7e9982097f97b9/connect/runtime/src/main/java/org/apache/kafka/connect/storage/KafkaOffsetBackingStore.java#L136]
>  of the source offsets topic, which in turn will [poll that topic 
> indefinitely|https://github.com/apache/kafka/blob/8966d066bd2f80c6d8f270423e7e9982097f97b9/connect/runtime/src/main/java/org/apache/kafka/connect/util/KafkaBasedLog.java#L287]
>  until the latest messages for every partition of that topic have been 
> consumed.
> This normally completes in a reasonable amount of time. However, if the 
> connectivity between the Connect worker and the Kafka cluster is degraded or 
> dropped in the middle of one of these reads, it will block until connectivity 
> is restored and the request completes successfully.
> If a task is stopped (due to a manual restart via the REST API, a rebalance, 
> worker shutdown, etc.) while blocked on a read of source offsets during its 
> {{start}} method, not only will it fail to gracefully stop, but the framework 
> [will not even invoke its stop 
> method|https://github.com/apache/kafka/blob/8966d066bd2f80c6d8f270423e7e9982097f97b9/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java#L183]
>  until its {{start}} method (and, as a result, the source offset read 
> request) [has 
> completed|https://github.com/apache/kafka/blob/8966d066bd2f80c6d8f270423e7e9982097f97b9/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSourceTask.java#L202-L206].
>  This prevents the task from being able to clean up any resources it has 
> allocated and can lead to OOM errors, excessive thread creation, and other 
> problems.
>  
> I've confirmed that this affects every release of Connect back through 1.0 at 
> least; I've tagged the most recent bug fix release of every major/minor 
> version from then on in the {{Affects Version/s}} field to avoid just putting 
> every version in that field.
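> 
> As an illustration only (a sketch of the blocking pattern and a bounded 
> alternative, not the actual fix), the difference looks like this:
> {code:java}
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import java.util.concurrent.Future;
> import java.util.concurrent.TimeUnit;
> import java.util.concurrent.TimeoutException;
> 
> public class OffsetReadSketch {
>     public static void main(String[] args) throws Exception {
>         ExecutorService executor = Executors.newSingleThreadExecutor();
>         // Stand-in for the read-to-end of the offsets topic.
>         Future<String> offsets = executor.submit(() -> {
>             Thread.sleep(60_000); // simulates degraded connectivity
>             return "offsets";
>         });
>         // Problematic pattern: offsets.get() with no timeout blocks
>         // indefinitely, so a task stuck here cannot be stopped gracefully.
>         // Bounded alternative: give up after a timeout and release resources.
>         try {
>             System.out.println(offsets.get(5, TimeUnit.SECONDS));
>         } catch (TimeoutException e) {
>             offsets.cancel(true); // interrupt the blocked read
>         } finally {
>             executor.shutdownNow();
>         }
>     }
> }
> {code}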



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


  1   2   3   4   >