[jira] [Created] (KAFKA-15050) Prompts in the quickstarts

2023-06-02 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-15050:
---

 Summary: Prompts in the quickstarts
 Key: KAFKA-15050
 URL: https://issues.apache.org/jira/browse/KAFKA-15050
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Reporter: Tom Bentley


In the quickstarts [Steps 
1-5|https://kafka.apache.org/documentation/#quickstart] use {{$}} to indicate 
the command prompt. When we start to use Kafka Connect in [Step 
6|https://kafka.apache.org/documentation/#quickstart_kafkaconnect] we switch to 
{{{}>{}}}. The [Kafka Streams 
quickstart|https://kafka.apache.org/documentation/streams/quickstart] also uses 
{{{}>{}}}. I don't think there's a reason for this, but if there is one (root 
vs user account?) it should be explained.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15049) Flaky test DynamicBrokerReconfigurationTest#testAdvertisedListenerUpdate

2023-06-02 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-15049:
---

 Summary: Flaky test 
DynamicBrokerReconfigurationTest#testAdvertisedListenerUpdate
 Key: KAFKA-15049
 URL: https://issues.apache.org/jira/browse/KAFKA-15049
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley


While testing 3.4.1 RC3, DynamicBrokerReconfigurationTest.testAdvertisedListenerUpdate failed repeatedly 
on my machine, always with the following stacktrace.

{noformat}
org.opentest4j.AssertionFailedError: Unexpected exception type thrown, expected:  but was: 

 at app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
 at app//org.junit.jupiter.api.AssertThrows.assertThrows(AssertThrows.java:67)
 at app//org.junit.jupiter.api.AssertThrows.assertThrows(AssertThrows.java:35)
 at app//org.junit.jupiter.api.Assertions.assertThrows(Assertions.java:3083)
 at app//kafka.server.DynamicBrokerReconfigurationTest.testAdvertisedListenerUpdate(DynamicBrokerReconfigurationTest.scala:1066)
{noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14566) Please Close This Ticket As It Was Inadvertently Created. See KAFKA-14565 For The Correct Ticket.

2023-01-11 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-14566.
-
Resolution: Invalid

> Please Close This Ticket As It Was Inadvertently Created.  See KAFKA-14565 
> For The Correct Ticket.
> --
>
> Key: KAFKA-14566
> URL: https://issues.apache.org/jira/browse/KAFKA-14566
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Reporter: Terry Beard
>Priority: Major
>  Labels: needs-kip
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-13898) metrics.recording.level is underdocumented

2022-05-13 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-13898:
---

 Summary: metrics.recording.level is underdocumented
 Key: KAFKA-13898
 URL: https://issues.apache.org/jira/browse/KAFKA-13898
 Project: Kafka
  Issue Type: Improvement
  Components: docs
Reporter: Tom Bentley


{{metrics.recording.level}} is only briefly described in the documentation. In 
particular, the recording level associated with each metric is not documented, 
which makes it difficult to know the effect of changing the level.
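
For example, a client wanting more detailed metrics would set the following 
({{INFO}} is the default; {{DEBUG}} enables the debug-level metrics in addition 
to the info-level ones):

{noformat}
metrics.recording.level=DEBUG
{noformat}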





--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (KAFKA-13897) Add 3.1.1 to system tests and streams upgrade tests

2022-05-13 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-13897:
---

 Summary: Add 3.1.1 to system tests and streams upgrade tests
 Key: KAFKA-13897
 URL: https://issues.apache.org/jira/browse/KAFKA-13897
 Project: Kafka
  Issue Type: Task
  Components: streams, system tests
Reporter: Tom Bentley
 Fix For: 3.3.0, 3.1.2, 3.2.1


Per the penultimate bullet on the [release 
checklist|https://cwiki.apache.org/confluence/display/KAFKA/Release+Process#ReleaseProcess-Afterthevotepasses],
 now that Kafka v3.1.1 is released we should add this version to the system tests.





--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (KAFKA-13895) Fix javadocs build with JDK < 12

2022-05-12 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-13895:
---

 Summary: Fix javadocs build with JDK < 12
 Key: KAFKA-13895
 URL: https://issues.apache.org/jira/browse/KAFKA-13895
 Project: Kafka
  Issue Type: Task
  Components: docs
Reporter: Tom Bentley


While doing the "Website update process" in the 3.1.1 release I found that I'd 
broken the javadoc search functionality due to having built the Java docs with 
Java 11. Java < 12 has [a bug|https://bugs.openjdk.java.net/browse/JDK-8215291] 
that means the javadoc search functionality adds /undefined/ in the URL path 
(even though links between pages otherwise work). 

We could fix the build.gradle to use {{--no-module-directories}} when running 
with javadoc < v12, but that will then break the links to the JDK classes' 
javadocs from the Kafka javadoc, [as described 
here|https://github.com/spring-projects/spring-security/issues/10944].
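
A minimal sketch of what the first option might look like in build.gradle 
(Groovy DSL; untested, the exact wiring is an assumption):

{code}
// Hypothetical: pass --no-module-directories only when running on JDK < 12.
// Gradle's addBooleanOption prepends a single '-', hence the one leading dash.
tasks.withType(Javadoc) {
    if (JavaVersion.current() < JavaVersion.VERSION_12) {
        options.addBooleanOption('-no-module-directories', true)
    }
}
{code}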

Alternatively we could change the release process docs to require building with 
Java 17. While this would fix the problem for the Javadocs published on the 
website, anyone building the javadocs for themselves would still be affected.




--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (KAFKA-13882) Dockerfile for previewing website

2022-05-06 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-13882:
---

 Summary: Dockerfile for previewing website
 Key: KAFKA-13882
 URL: https://issues.apache.org/jira/browse/KAFKA-13882
 Project: Kafka
  Issue Type: Task
  Components: docs, website
Reporter: Tom Bentley


Previewing changes to the website/documentation is rather difficult because you 
either have to [hack with the 
HTML|https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Website+Documentation+Changes#ContributingWebsiteDocumentationChanges-KafkaWebsiteRepository]
 or [install 
httpd|https://cwiki.apache.org/confluence/display/KAFKA/Setup+Kafka+Website+on+Local+Apache+Server].
 This is a barrier to contribution.

Having a Dockerfile for previewing the Kafka website (i.e. with httpd properly 
set up) would make it easier for people to contribute website/docs changes.
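
For illustration, a minimal sketch of such a Dockerfile (untested; it assumes 
the kafka-site checkout is the build context and that the site needs httpd's 
server-side includes enabled):

{noformat}
FROM httpd:2.4
# Enable mod_include and allow SSI for the docs tree (assumed requirement)
RUN sed -i 's/^#\(LoadModule include_module\)/\1/' conf/httpd.conf && \
    printf 'Options +Includes\nAddOutputFilter INCLUDES .html\n' >> conf/httpd.conf
COPY . /usr/local/apache2/htdocs/
EXPOSE 80
{noformat}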



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (KAFKA-13881) Add package.java for public package javadoc

2022-05-06 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-13881:
---

 Summary: Add package.java for public package javadoc
 Key: KAFKA-13881
 URL: https://issues.apache.org/jira/browse/KAFKA-13881
 Project: Kafka
  Issue Type: Task
Reporter: Tom Bentley


Our public javadoc ([https://kafka.apache.org/31/javadoc/index.html]) doesn't 
have any package descriptions, which is a bit intimidating for anyone who is 
new to the project.
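
Package descriptions are conventionally added via package-info.java files; a 
minimal sketch (the wording is illustrative, not proposed text):

{code:java}
/**
 * Provides the administrative client for managing topics, brokers,
 * configurations and ACLs.
 */
package org.apache.kafka.clients.admin;
{code}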



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (KAFKA-13285) Use consistent access modifier for Admin clients Result classes

2021-09-09 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-13285:
---

 Summary: Use consistent access modifier for Admin clients Result 
classes
 Key: KAFKA-13285
 URL: https://issues.apache.org/jira/browse/KAFKA-13285
 Project: Kafka
  Issue Type: Task
  Components: admin
Affects Versions: 3.0.0
Reporter: Tom Bentley


The following classes in the Admin client have public constructors, while the 
rest have package-private constructors:
AlterClientQuotasResult
AlterUserScramCredentialsResult
DeleteRecordsResult
DescribeClientQuotasResult
DescribeConsumerGroupsResult
ListOffsetsResult

There should be consistency across all the Result classes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13276) Public DescribeConsumerGroupsResult constructor refers to KafkaFutureImpl

2021-09-06 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-13276:
---

 Summary: Public DescribeConsumerGroupsResult constructor refers to 
KafkaFutureImpl
 Key: KAFKA-13276
 URL: https://issues.apache.org/jira/browse/KAFKA-13276
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Tom Bentley


The new public DescribeConsumerGroupsResult constructor refers to the 
non-public API {{KafkaFutureImpl}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-13017) Excessive logging on sink task deserialization errors

2021-07-16 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-13017.
-
Fix Version/s: 3.1.0
   Resolution: Fixed

> Excessive logging on sink task deserialization errors
> -
>
> Key: KAFKA-13017
> URL: https://issues.apache.org/jira/browse/KAFKA-13017
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 3.0.0, 2.7.0, 2.8.0
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Major
> Fix For: 3.1.0
>
>
> Even with {{errors.log.enable}} set to {{false}}, deserialization failures 
> are still logged at {{ERROR}} level by the 
> {{org.apache.kafka.connect.runtime.WorkerSinkTask}} namespace. This becomes 
> problematic in pipelines with {{errors.tolerance}} set to {{all}}, and can 
> generate excessive logging of stack traces when deserialization errors are 
> encountered for most if not all of the records being consumed by a sink task.
> The logging added to the {{WorkerSinkTask}} class in KAFKA-9018 should be 
> removed and, if necessary, any valuable information from it not already 
> present in the log messages generated by Connect with {{errors.log.enable}} 
> and {{errors.log.include.messages}} set to {{true}} should be added in that 
> place instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-13049) Log recovery threads use default thread pool naming

2021-07-14 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-13049.
-
Fix Version/s: 3.1.0
   Resolution: Fixed

> Log recovery threads use default thread pool naming
> ---
>
> Key: KAFKA-13049
> URL: https://issues.apache.org/jira/browse/KAFKA-13049
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
> Fix For: 3.1.0
>
>
> The threads used for log recovery use a pool 
> {{Executors.newFixedThreadPool(int)}} and hence pick up the naming scheme 
> from {{Executors.defaultThreadFactory()}}. It's not so clear in a thread dump 
> taken during log recovery what those threads are. They should have clearer 
> names. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13049) Log recovery threads use default thread pool naming

2021-07-08 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-13049:
---

 Summary: Log recovery threads use default thread pool naming
 Key: KAFKA-13049
 URL: https://issues.apache.org/jira/browse/KAFKA-13049
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Tom Bentley
Assignee: Tom Bentley


The threads used for log recovery use a pool 
{{Executors.newFixedThreadPool(int)}} and hence pick up the naming scheme from 
{{Executors.defaultThreadFactory()}}. It's not so clear in a thread dump taken 
during log recovery what those threads are. They should have clearer names. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12879) Compatibility break in Admin.listOffsets()

2021-06-02 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-12879:
---

 Summary: Compatibility break in Admin.listOffsets()
 Key: KAFKA-12879
 URL: https://issues.apache.org/jira/browse/KAFKA-12879
 Project: Kafka
  Issue Type: Bug
  Components: admin
Affects Versions: 2.6.2, 2.7.1, 2.8.0
Reporter: Tom Bentley


KAFKA-12339 incompatibly changed the semantics of Admin.listOffsets(). 
Previously it would fail with {{UnknownTopicOrPartitionException}} when a topic 
didn't exist. Now it will (eventually) fail with {{TimeoutException}}. It seems 
this was more or less intentional, even though it breaks code which was 
expecting and handling the {{UnknownTopicOrPartitionException}}. A workaround 
is to use {{retries=1}} and inspect the cause of the {{TimeoutException}}, but 
this isn't really suitable for cases where the same Admin client instance is 
being used for other calls where retries are desirable.
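
A minimal sketch of that workaround (the topic name is hypothetical; inspecting 
the cause chain is per the behaviour described above):

{code:java}
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.TimeoutException;
import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;

public class ListOffsetsWorkaround {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(AdminClientConfig.RETRIES_CONFIG, "1"); // fail fast rather than retry
        try (Admin admin = Admin.create(props)) {
            admin.listOffsets(Map.of(new TopicPartition("maybe-missing", 0), OffsetSpec.latest()))
                 .all().get();
        } catch (ExecutionException e) {
            // Recover the original error from the TimeoutException's cause
            Throwable cause = e.getCause();
            if (cause instanceof TimeoutException
                    && cause.getCause() instanceof UnknownTopicOrPartitionException) {
                System.out.println("topic really does not exist");
            }
        }
    }
}
{code}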

Furthermore, as well as the intended effect on {{listOffsets()}}, it seems that 
the change could also affect other methods of Admin.

More generally, the Admin client API is vague about which exceptions can 
propagate from which methods. This means that it's not possible to say, in 
cases like this, whether the calling code _should_ have been relying on the 
{{UnknownTopicOrPartitionException}} or not.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12685) ReplicaStateMachine attempts invalid transition

2021-04-19 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-12685:
---

 Summary: ReplicaStateMachine attempts invalid transition
 Key: KAFKA-12685
 URL: https://issues.apache.org/jira/browse/KAFKA-12685
 Project: Kafka
  Issue Type: Bug
  Components: replication
Affects Versions: 2.7.0
Reporter: Tom Bentley
 Attachments: invalid-transition.log

The ReplicaStateMachine tried to perform the invalid transition {{NewReplica}} 
-> {{NewReplica}}, in a cluster which was being rolling restarted at the same 
time as the problem partition was being reassigned (first removing and then 
re-adding the replica).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8863) Add InsertHeader and DropHeaders connect transforms KIP-145

2021-04-16 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-8863.

Fix Version/s: 3.0.0
 Reviewer: Mickael Maison
 Assignee: Tom Bentley
   Resolution: Fixed

> Add InsertHeader and DropHeaders connect transforms KIP-145
> ---
>
> Key: KAFKA-8863
> URL: https://issues.apache.org/jira/browse/KAFKA-8863
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients, KafkaConnect
>Reporter: Albert Lozano
>Assignee: Tom Bentley
>Priority: Major
> Fix For: 3.0.0
>
>
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-145+-+Expose+Record+Headers+in+Kafka+Connect]
> Continuing the work done in the PR 
> [https://github.com/apache/kafka/pull/4319] implementing the transforms to 
> work with headers would be awesome.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-12408) Document omitted ReplicaManager metrics

2021-04-12 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-12408.
-
Fix Version/s: 3.0.0
 Reviewer: Tom Bentley
   Resolution: Fixed

> Document omitted ReplicaManager metrics
> ---
>
> Key: KAFKA-12408
> URL: https://issues.apache.org/jira/browse/KAFKA-12408
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Dongjin Lee
>Assignee: Dongjin Lee
>Priority: Minor
> Fix For: 3.0.0
>
>
> There are several problems in ReplicaManager metrics documentation:
>  * kafka.server:type=ReplicaManager,name=OfflineReplicaCount is omitted.
>  * kafka.server:type=ReplicaManager,name=FailedIsrUpdatesPerSec is omitted.
>  * kafka.server:type=ReplicaManager,name=[PartitionCount|LeaderCount]'s 
> descriptions are omitted: 'mostly even across brokers'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12165) org.apache.kafka.common.quota classes omitted from Javadoc

2021-01-08 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-12165:
---

 Summary: org.apache.kafka.common.quota classes omitted from Javadoc
 Key: KAFKA-12165
 URL: https://issues.apache.org/jira/browse/KAFKA-12165
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 2.7.0
Reporter: Tom Bentley
Assignee: Tom Bentley


The public API classes in `org.apache.kafka.common.quota` should be included in 
the javadoc, but are currently omitted. E.g. see 
https://kafka.apache.org/27/javadoc/org/apache/kafka/clients/admin/Admin.html#alterClientQuotas-java.util.Collection-



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-12156) Document consequences of single threaded response handling

2021-01-07 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-12156:
---

 Summary: Document consequences of single threaded response handling
 Key: KAFKA-12156
 URL: https://issues.apache.org/jira/browse/KAFKA-12156
 Project: Kafka
  Issue Type: Improvement
  Components: admin
Reporter: Tom Bentley
Assignee: Tom Bentley


If users block the response handling thread in one call waiting for the result 
of a second "nested" call then the client effectively hangs because the 2nd 
call's response will never be processed. For example:


{code:java}
admin.listTopics().names().thenApply(topics -> {
    // ... some code to decide the topicsToCreate based on the topics
    admin.createTopics(topicsToCreate).all().get(); // deadlocks: see below
    return null;
}).get();
{code}


The {{createTopics()...get()}} call blocks indefinitely, preventing the 
{{ListTopics}} response processing from dealing with the {{CreateTopics}} 
response.

This can be surprising to users of the Admin API, so we should at least 
document that this pattern should not be used. 
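
A minimal sketch of a safe alternative is to let the first call complete before 
issuing the dependent call ({{decideTopics}} here is a hypothetical helper):

{code:java}
Set<String> topics = admin.listTopics().names().get();      // first call fully completes
Collection<NewTopic> topicsToCreate = decideTopics(topics); // hypothetical helper
admin.createTopics(topicsToCreate).all().get();             // no nested wait, so no hang
{code}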



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10846) FileStreamSourceTask buffer can grow without bound

2020-12-11 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-10846:
---

 Summary: FileStreamSourceTask buffer can grow without bound
 Key: KAFKA-10846
 URL: https://issues.apache.org/jira/browse/KAFKA-10846
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Tom Bentley
Assignee: Tom Bentley


When reading a large file the buffer used by {{FileStreamSourceTask}} can grow 
without bound. Even in the unit test 
{{org.apache.kafka.connect.file.FileStreamSourceTaskTest#testBatchSize}} the buffer 
grows from 1,024 to 524,288 bytes just from reading 10,000 copies of a line of 
<100 chars.

The problem is that the condition for growing the buffer is incorrect: the 
buffer is doubled whenever some bytes were read and the used space in the 
buffer equals the buffer length.
Instead, the decision to grow the buffer should depend on whether 
{{extractLine()}} actually managed to read any lines. It's only when no 
complete line has been read since the last call to {{read()}} that we need to 
increase the buffer size (to cope with a line larger than the buffer).
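
A minimal sketch of the intended condition (the names here are illustrative, 
not the actual {{FileStreamSourceTask}} fields):

{code:java}
// Grow only when a line cannot fit: bytes were read, the buffer is full,
// and extractLine() produced no complete line since the last read().
if (nread > 0 && offset == buffer.length && linesRead == 0) {
    buffer = Arrays.copyOf(buffer, buffer.length * 2);
}
{code}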




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (KAFKA-10713) Surprising behaviour when bootstrap servers are separated by semicolons

2020-12-09 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley reopened KAFKA-10713:
-

> Surprising behaviour when bootstrap servers are separated by semicolons
> ---
>
> Key: KAFKA-10713
> URL: https://issues.apache.org/jira/browse/KAFKA-10713
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Mickael Maison
>Assignee: Tom Bentley
>Priority: Major
> Fix For: 2.8.0
>
>
> When creating a Kafka client with {{bootstrap.servers}} set to 
> "kafka-0:9092;kafka-1:9092;kafka-2:9092", it has a strange behaviour.
> For one, there are no warning or error messages. The client will connect and 
> start working. However, it will only use the hostname after the last 
> semicolon as bootstrap server!
> The configuration {{bootstrap.servers}} is defined as a {{List}} in 
> {{AbstractConfig}}. So from a configuration point of view, 
> "kafka-0:9092;kafka-1:9092;kafka-2:9092" is a single entry.
> Then, {{Utils.getHost()}} returns "kafka-2" when parsing that string.
> {code:java}
> assertEquals("kafka-2", getHost("kafka-1:9092;kafka-1:9092;kafka-2:9092"));
> {code}
> So the client ends up with a single bootstrap server! 
> I believe semicolons are not valid characters in hostnames/domain names, so we 
> should be able to provide better validation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10566) Erroneous unknown config warnings about ssl and sasl configs

2020-12-03 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-10566.
-
Resolution: Duplicate

> Erroneous unknown config warnings about ssl and sasl configs
> 
>
> Key: KAFKA-10566
> URL: https://issues.apache.org/jira/browse/KAFKA-10566
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>
> Warnings get printed about `ssl...` and `sasl...` configs being supplied but 
> unknown, for example:
> [2020-10-02 10:30:36,756] WARN The configuration 'ssl.trustmanager.algorithm' 
> was supplied but isn't a known config. 
> (org.apache.kafka.clients.admin.AdminClientConfig:366)
> [2020-10-02 10:30:36,756] WARN The configuration 'ssl.protocol' was supplied 
> but isn't a known config. 
> (org.apache.kafka.clients.admin.AdminClientConfig:366)
> [2020-10-02 10:30:36,756] WARN The configuration 'ssl.truststore.location' 
> was supplied but isn't a known config. 
> (org.apache.kafka.clients.admin.AdminClientConfig:366)
> [2020-10-02 10:30:36,756] WARN The configuration 'ssl.enabled.protocols' was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.admin.AdminClientConfig:366)
> [2020-10-02 10:30:36,756] WARN The configuration 'ssl.truststore.password' 
> was supplied but isn't a known config. 
> (org.apache.kafka.clients.admin.AdminClientConfig:366)
> [2020-10-02 10:30:36,757] WARN The configuration 'ssl.truststore.type' was 
> supplied but isn't a known config. 
> (org.apache.kafka.clients.admin.AdminClientConfig:366)
> This can be seen from the tools and also from, e.g., 
> `SaslSslAdminIntegrationTest` with logging turned up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10801) Docs on configuration have multiple places using the same HTML anchor tag

2020-12-03 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-10801.
-
Resolution: Fixed

> Docs on configuration have multiple places using the same HTML anchor tag
> -
>
> Key: KAFKA-10801
> URL: https://issues.apache.org/jira/browse/KAFKA-10801
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.6.0, 2.5.1
>Reporter: James Cheng
>Priority: Minor
> Fix For: 2.7.0
>
>
> The configuration option "compression.type" is a configuration option on the 
> Kafka Producer as well as on the Kafka brokers.
>  
> The same HTML anchor #compression.type is used on both of those entries. So 
> if you click or bookmark the link 
> [http://kafka.apache.org/documentation/#compression.type] , it will always 
> bring you to the first entry (the broker-side config). It will never bring 
> you to the 2nd entry (producer config).
>  
> I've only noticed this for the compression.type config, but it is possible 
> that it also applies to any other config option that is the same between the 
> broker/producer/consumer.
>  
> We should at least fix it for compression.type, and we should possibly fix it 
> across the entire document.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10692) Rename broker master key config for KIP 681

2020-11-06 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-10692:
---

 Summary: Rename broker master key config for KIP 681
 Key: KAFKA-10692
 URL: https://issues.apache.org/jira/browse/KAFKA-10692
 Project: Kafka
  Issue Type: Sub-task
  Components: core
Reporter: Tom Bentley
Assignee: Tom Bentley






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10566) Erroneous unknown config warnings about ssl and sasl configs

2020-10-02 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-10566:
---

 Summary: Erroneous unknown config warnings about ssl and sasl 
configs
 Key: KAFKA-10566
 URL: https://issues.apache.org/jira/browse/KAFKA-10566
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
Assignee: Tom Bentley


Warnings get printed about `ssl...` and `sasl...` configs being supplied but 
unknown, for example:

{noformat}
[2020-10-02 10:30:36,756] WARN The configuration 'ssl.trustmanager.algorithm' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:366)
[2020-10-02 10:30:36,756] WARN The configuration 'ssl.protocol' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:366)
[2020-10-02 10:30:36,756] WARN The configuration 'ssl.truststore.location' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:366)
[2020-10-02 10:30:36,756] WARN The configuration 'ssl.enabled.protocols' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:366)
[2020-10-02 10:30:36,756] WARN The configuration 'ssl.truststore.password' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:366)
[2020-10-02 10:30:36,757] WARN The configuration 'ssl.truststore.type' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:366)
{noformat}

This can be seen from the tools and also from, e.g., 
`SaslSslAdminIntegrationTest` with logging turned up.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10469) describeConfigs() for broker loggers returns incorrect values

2020-09-08 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-10469:
---

 Summary: describeConfigs() for broker loggers returns incorrect 
values
 Key: KAFKA-10469
 URL: https://issues.apache.org/jira/browse/KAFKA-10469
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Tom Bentley
Assignee: Tom Bentley


{{Log4jController#loggers}} incorrectly uses the root logger's log level for 
any logger which lacks a configured log level of its own. Loggers without an 
explicit level inherit their level from their parent logger, and this resolved 
level might be different from the root logger's level. This means that the 
levels reported by {{Admin.describeConfigs}}, which uses 
{{Log4jController#loggers}}, are incorrect. This can be shown by using the 
default {{log4j.properties}} and describing a broker's loggers, which reports:

{noformat}
kafka.controller=TRACE
kafka.controller.ControllerChannelManager=INFO
kafka.controller.ControllerEventManager$ControllerEventThread=INFO
kafka.controller.KafkaController=INFO
kafka.controller.RequestSendThread=INFO
kafka.controller.TopicDeletionManager=INFO
kafka.controller.ZkPartitionStateMachine=INFO
kafka.controller.ZkReplicaStateMachine=INFO
{noformat}

The default {{log4j.properties}} does indeed set {{kafka.controller}} to 
{{TRACE}}, but it does not configure the others, so they're actually at 
{{TRACE}} (inherited from {{kafka.controller}}), not {{INFO}} as reported.
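
A minimal sketch (log4j 1.x API) of the distinction being missed, using a 
logger name from the report above:

{code:java}
import org.apache.log4j.Logger;

public class EffectiveLevelDemo {
    public static void main(String[] args) {
        Logger child = Logger.getLogger("kafka.controller.KafkaController");
        System.out.println(child.getLevel());          // null: no level configured explicitly
        System.out.println(child.getEffectiveLevel()); // TRACE with the default log4j.properties,
                                                       // inherited from "kafka.controller"
        // The bug: reporting the root logger's level (INFO) instead of the effective level.
    }
}
{code}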







--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10468) Log4jController.getLoggers serialization

2020-09-08 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-10468:
---

 Summary: Log4jController.getLoggers serialization
 Key: KAFKA-10468
 URL: https://issues.apache.org/jira/browse/KAFKA-10468
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Tom Bentley


{{Log4jController#getLoggers()}} returns a {{java.util.List}} wrapper for a 
Scala {{List}}, which results in a {{ClassNotFoundException}} on any MBean 
client which doesn't have the Scala wrapper class on its classpath.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10245) Using vulnerable log4j version

2020-07-08 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-10245.
-
Resolution: Duplicate

> Using vulnerable log4j version
> --
>
> Key: KAFKA-10245
> URL: https://issues.apache.org/jira/browse/KAFKA-10245
> Project: Kafka
>  Issue Type: Bug
>  Components: core, KafkaConnect
>Affects Versions: 2.5.0
>Reporter: Pavel Kuznetsov
>Priority: Major
>  Labels: security
>
> *Description*
> I checked the kafka_2.12-2.5.0.tgz distribution with WhiteSource and found 
> that the log4j version used in kafka-connect and the kafka broker has 
> vulnerabilities
>  * log4j-1.2.17.jar has 
> [CVE-2019-17571|https://github.com/advisories/GHSA-2qrg-x229-3v8q] and 
> [CVE-2020-9488|https://github.com/advisories/GHSA-vwqq-5vrc-xw9h] 
> vulnerabilities. The way to fix it is to upgrade to 
> org.apache.logging.log4j:log4j-core:2.13.2
> *To Reproduce*
> Download kafka_2.12-2.5.0.tgz
> Open libs folder in it and find log4j-1.2.17.jar.
> Check [CVE-2019-17571|https://github.com/advisories/GHSA-2qrg-x229-3v8q] and 
> [CVE-2020-9488|https://github.com/advisories/GHSA-vwqq-5vrc-xw9h] to see that 
> log4j 1.2.17 is vulnerable.
> *Expected*
>  * log4j is log4j-core 2.13.2 or higher
> *Actual*
>  * log4j is 1.2.17



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10211) Add DirectoryConfigProvider

2020-06-29 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-10211:
---

 Summary: Add DirectoryConfigProvider
 Key: KAFKA-10211
 URL: https://issues.apache.org/jira/browse/KAFKA-10211
 Project: Kafka
  Issue Type: Improvement
Reporter: Tom Bentley
Assignee: Tom Bentley


Add a ConfigProvider which reads secrets from files in a directory, per 
[KIP-632|https://cwiki.apache.org/confluence/display/KAFKA/KIP-632%3A+Add+DirectoryConfigProvider].
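
A minimal sketch of how such a provider would be referenced from a properties 
file, following the existing {{ConfigProvider}} conventions (the path and key 
are examples):

{noformat}
config.providers=dir
config.providers.dir.class=org.apache.kafka.common.config.provider.DirectoryConfigProvider
# each file in the directory holds one secret; the file name is the key
ssl.keystore.password=${dir:/mnt/secrets/keystore:password}
{noformat}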



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10206) Admin can transiently return incorrect results about topics

2020-06-26 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-10206:
---

 Summary: Admin can transiently return incorrect results about 
topics
 Key: KAFKA-10206
 URL: https://issues.apache.org/jira/browse/KAFKA-10206
 Project: Kafka
  Issue Type: Bug
  Components: admin, core
Reporter: Tom Bentley
Assignee: Tom Bentley


When a broker starts up it can handle metadata requests before it has 
received UPDATE_METADATA requests from the controller. 
This manifests in the admin client via:

* listTopics returning an empty list
* describeTopics and describeConfigs of topics erroneously returning 
{{UnknownTopicOrPartitionException}}

I assume this also affects the producer and consumer, though since 
`UnknownTopicOrPartitionException` is retriable those clients recover.

Testing locally suggests that the window for this happening is typically <1s.

There doesn't seem to be any way for the caller of the Admin client to detect 
this situation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10120) DescribeLogDirsResult exposes internal classes

2020-06-08 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-10120:
---

 Summary: DescribeLogDirsResult exposes internal classes
 Key: KAFKA-10120
 URL: https://issues.apache.org/jira/browse/KAFKA-10120
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
Assignee: Tom Bentley


DescribeLogDirsResult (returned by AdminClient#describeLogDirs(Collection)) 
exposes a number of internal types:
 * {{DescribeLogDirsResponse.LogDirInfo}}
 * {{DescribeLogDirsResponse.ReplicaInfo}}
 * {{Errors}}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10109) kafka-acls.sh/AclCommand opens multiple AdminClients

2020-06-05 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-10109:
---

 Summary: kafka-acls.sh/AclCommand opens multiple AdminClients
 Key: KAFKA-10109
 URL: https://issues.apache.org/jira/browse/KAFKA-10109
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Tom Bentley
Assignee: Tom Bentley


{{AclCommand.AclCommandService}} uses {{withAdminClient(opts: 
AclCommandOptions)(f: Admin => Unit)}} to abstract the execution of an action 
using an {{AdminClient}} instance. Unfortunately the use of this method in 
implemeting {{addAcls()}} and {{removeAcls()}} calls {{listAcls()}}. This 
causes the creation of a second {{AdminClient}} instance which then fails to 
register an MBean, resulting in a warning being logged.

{code}
./bin/kafka-acls.sh --bootstrap-server localhost:9092 --command-config 
config/broker_connection.conf.reproducing --add --allow-principal User:alice 
--operation Describe --topic 'test' --resource-pattern-type prefixed
Adding ACLs for resource `ResourcePattern(resourceType=TOPIC, name=test, 
patternType=PREFIXED)`: 
(principal=User:alice, host=*, operation=DESCRIBE, 
permissionType=ALLOW) 

[2020-06-03 18:43:12,190] WARN Error registering AppInfo mbean 
(org.apache.kafka.common.utils.AppInfoParser)
javax.management.InstanceAlreadyExistsException: 
kafka.admin.client:type=app-info,id=administrator_data
at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
at 
org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:64)
at 
org.apache.kafka.clients.admin.KafkaAdminClient.(KafkaAdminClient.java:500)
at 
org.apache.kafka.clients.admin.KafkaAdminClient.createInternal(KafkaAdminClient.java:444)
at org.apache.kafka.clients.admin.Admin.create(Admin.java:59)
at 
org.apache.kafka.clients.admin.AdminClient.create(AdminClient.java:39)
at 
kafka.admin.AclCommand$AdminClientService.withAdminClient(AclCommand.scala:105)
at 
kafka.admin.AclCommand$AdminClientService.listAcls(AclCommand.scala:146)
at 
kafka.admin.AclCommand$AdminClientService.$anonfun$addAcls$1(AclCommand.scala:123)
at 
kafka.admin.AclCommand$AdminClientService.$anonfun$addAcls$1$adapted(AclCommand.scala:116)
at 
kafka.admin.AclCommand$AdminClientService.withAdminClient(AclCommand.scala:108)
at 
kafka.admin.AclCommand$AdminClientService.addAcls(AclCommand.scala:116)
at kafka.admin.AclCommand$.main(AclCommand.scala:78)
at kafka.admin.AclCommand.main(AclCommand.scala)
Current ACLs for resource `ResourcePattern(resourceType=TOPIC, name=test, 
patternType=PREFIXED)`: 
(principal=User:alice, host=*, operation=DESCRIBE, permissionType=ALLOW)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10051) kafka_2.11 2.5.0 isn't available on Maven Central

2020-05-27 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-10051.
-
Resolution: Won't Fix

Kafka 2.5.0 dropped support for Scala 2.11.

> kafka_2.11 2.5.0 isn't available on Maven Central
> -
>
> Key: KAFKA-10051
> URL: https://issues.apache.org/jira/browse/KAFKA-10051
> Project: Kafka
>  Issue Type: Bug
>  Components: packaging
>Affects Versions: 2.5.0
>Reporter: Stefan Zwanenburg
>Priority: Blocker
>
> I'm using Spring Boot in a project, and I tried updating it to version 2.3.0, 
> which internally depends on all sorts of Kafka artifacts version 2.5.0.
> One of these artifacts is _kafka_2.11_, which doesn't appear to be available 
> on Maven Central. This means my builds fail, unless I somehow force 
> dependencies on versions of Kafka artifacts that are available on Maven 
> Central.
> Links:
>  - [Search for kafka_2.11 version 2.5.0 on Maven 
> Central|https://search.maven.org/search?q=a:kafka_2.11%20AND%20v:2.5.0]
> PS: Not sure what the priority ought to be, but it's definitely blocking me 
> in my work :)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-5601) Refactor ReassignPartitionsCommand to use AdminClient

2020-05-15 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-5601.

Resolution: Fixed

Addressed by KIP-455

> Refactor ReassignPartitionsCommand to use AdminClient
> -
>
> Key: KAFKA-5601
> URL: https://issues.apache.org/jira/browse/KAFKA-5601
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Major
>  Labels: kip
>
> Currently the {{ReassignPartitionsCommand}} (used by 
> {{kafka-reassign-partitions.sh}}) talks directly to ZooKeeper. It would be 
> better to have it use the AdminClient API instead. 
> This would entail creating two new protocol APIs, one to initiate the request 
> and another to request the status of an in-progress reassignment. As such 
> this would require a KIP.
> This touches on the work of KIP-166, but that proposes to use the 
> {{ReassignPartitionsCommand}} API, so should not be affected so long as that 
> API is maintained. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-6379) Work for KIP-240

2020-05-15 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-6379.

Resolution: Fixed

Addressed by KIP-455

> Work for KIP-240
> 
>
> Key: KAFKA-6379
> URL: https://issues.apache.org/jira/browse/KAFKA-6379
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>
> This issue is for the work described in KIP-240.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-1368) Upgrade log4j

2020-04-01 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-1368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-1368.

Resolution: Duplicate

> Upgrade log4j
> -
>
> Key: KAFKA-1368
> URL: https://issues.apache.org/jira/browse/KAFKA-1368
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.0
>Reporter: Vladislav Pernin
>Assignee: Tom Bentley
>Priority: Major
>
> Upgrade log4j to at least 1.2.16 or 1.2.17.
> Usage of EnhancedPatternLayout will be possible.
> It allows to set delimiters around the full log, stacktrace included, making 
> log messages collection easier with tools like Logstash.
> Example : <[%d{}]...[%t] %m%throwable>%n
> <[2014-04-08 11:07:20,360] ERROR [KafkaApi-1] Error when processing fetch 
> request for partition [X,6] offset 700 from consumer with correlation id 
> 0 (kafka.server.KafkaApis)
> kafka.common.OffsetOutOfRangeException: Request for offset 700 but we only 
> have log segments in the range 16021 to 16021.
> at kafka.log.Log.read(Log.scala:429)
> ...
> at java.lang.Thread.run(Thread.java:744)>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9775) IllegalFormatConversionException from kafka-consumer-perf-test.sh

2020-03-27 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9775:
--

 Summary: IllegalFormatConversionException from 
kafka-consumer-perf-test.sh
 Key: KAFKA-9775
 URL: https://issues.apache.org/jira/browse/KAFKA-9775
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Tom Bentley
Assignee: Tom Bentley


{noformat}
Exception in thread "main" java.util.IllegalFormatConversionException: f != java.lang.Integer
at 
java.base/java.util.Formatter$FormatSpecifier.failConversion(Formatter.java:4426)
at 
java.base/java.util.Formatter$FormatSpecifier.printFloat(Formatter.java:2951)
at 
java.base/java.util.Formatter$FormatSpecifier.print(Formatter.java:2898)
at java.base/java.util.Formatter.format(Formatter.java:2673)
at java.base/java.util.Formatter.format(Formatter.java:2609)
at java.base/java.lang.String.format(String.java:2897)
at scala.collection.immutable.StringLike.format(StringLike.scala:354)
at scala.collection.immutable.StringLike.format$(StringLike.scala:353)
at scala.collection.immutable.StringOps.format(StringOps.scala:33)
at kafka.utils.ToolsUtils$.$anonfun$printMetrics$3(ToolsUtils.scala:60)
at 
kafka.utils.ToolsUtils$.$anonfun$printMetrics$3$adapted(ToolsUtils.scala:58)
at 
scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at 
scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at kafka.utils.ToolsUtils$.printMetrics(ToolsUtils.scala:58)
at kafka.tools.ConsumerPerformance$.main(ConsumerPerformance.scala:82)
at kafka.tools.ConsumerPerformance.main(ConsumerPerformance.scala)
{noformat}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9692) Flaky test - kafka.admin.ReassignPartitionsClusterTest#znodeReassignmentShouldOverrideApiTriggeredReassignment

2020-03-20 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-9692.

Resolution: Done

This test got deleted as part of KAFKA-8820, when much of the testing changed 
to unit tests.

> Flaky test - 
> kafka.admin.ReassignPartitionsClusterTest#znodeReassignmentShouldOverrideApiTriggeredReassignment
> --
>
> Key: KAFKA-9692
> URL: https://issues.apache.org/jira/browse/KAFKA-9692
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 2.5.0
>Reporter: Tom Bentley
>Priority: Major
>  Labels: flaky-test
>
> {noformat}
> java.lang.AssertionError: expected: but was: 101)>
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.failNotEquals(Assert.java:835)
>   at org.junit.Assert.assertEquals(Assert.java:120)
>   at org.junit.Assert.assertEquals(Assert.java:146)
>   at 
> kafka.admin.ReassignPartitionsClusterTest.assertReplicas(ReassignPartitionsClusterTest.scala:1220)
>   at 
> kafka.admin.ReassignPartitionsClusterTest.assertIsReassigning(ReassignPartitionsClusterTest.scala:1191)
>   at 
> kafka.admin.ReassignPartitionsClusterTest.znodeReassignmentShouldOverrideApiTriggeredReassignment(ReassignPartitionsClusterTest.scala:897)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at jdk.internal.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
>   at 
> 

[jira] [Created] (KAFKA-9692) Flaky test - kafka.admin.ReassignPartitionsClusterTest#znodeReassignmentShouldOverrideApiTriggeredReassignment

2020-03-10 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9692:
--

 Summary: Flaky test - 
kafka.admin.ReassignPartitionsClusterTest#znodeReassignmentShouldOverrideApiTriggeredReassignment
 Key: KAFKA-9692
 URL: https://issues.apache.org/jira/browse/KAFKA-9692
 Project: Kafka
  Issue Type: Bug
  Components: unit tests
Affects Versions: 2.5.0
Reporter: Tom Bentley


{noformat}
java.lang.AssertionError: expected: but was:
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:120)
at org.junit.Assert.assertEquals(Assert.java:146)
at 
kafka.admin.ReassignPartitionsClusterTest.assertReplicas(ReassignPartitionsClusterTest.scala:1220)
at 
kafka.admin.ReassignPartitionsClusterTest.assertIsReassigning(ReassignPartitionsClusterTest.scala:1191)
at 
kafka.admin.ReassignPartitionsClusterTest.znodeReassignmentShouldOverrideApiTriggeredReassignment(ReassignPartitionsClusterTest.scala:897)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at jdk.internal.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
at jdk.internal.reflect.GeneratedMethodAccessor33.invoke(Unknown Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 

[jira] [Created] (KAFKA-9691) Flaky test kafka.admin.TopicCommandWithAdminClientTest#testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress

2020-03-10 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9691:
--

 Summary: Flaky test 
kafka.admin.TopicCommandWithAdminClientTest#testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress
 Key: KAFKA-9691
 URL: https://issues.apache.org/jira/browse/KAFKA-9691
 Project: Kafka
  Issue Type: Bug
  Components: unit tests
Affects Versions: 2.5.0
Reporter: Tom Bentley


Stacktrace:

{noformat}
java.lang.NullPointerException
at 
kafka.admin.TopicCommandWithAdminClientTest.$anonfun$testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress$3(TopicCommandWithAdminClientTest.scala:673)
at 
kafka.admin.TopicCommandWithAdminClientTest.testDescribeUnderReplicatedPartitionsWhenReassignmentIsInProgress(TopicCommandWithAdminClientTest.scala:671)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at 

[jira] [Created] (KAFKA-9673) Conditionally apply SMTs

2020-03-06 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9673:
--

 Summary: Conditionally apply SMTs
 Key: KAFKA-9673
 URL: https://issues.apache.org/jira/browse/KAFKA-9673
 Project: Kafka
  Issue Type: New Feature
  Components: KafkaConnect
Reporter: Tom Bentley
Assignee: Tom Bentley


KAFKA-7052 ended up using IAE with a message, rather than NPE, in the case of an 
SMT being applied to a record lacking a given field. It's still not possible to 
apply an SMT conditionally, which is what things like Debezium really need in 
order to apply transformations only to non-schema change events.

[~rhauch] suggested a mechanism to conditionally apply any SMT but was 
concerned about the possibility of a naming collision (assuming it was 
configured by a simple config)

I'd like to propose something which would solve this problem without the 
possibility of such collisions. The idea is to have a higher-level condition, 
which applies an arbitrary transformation (or transformation chain) according 
to some predicate on the record. 

More concretely, it might be configured like this:

{noformat}
  transforms.conditionalExtract.type: Conditional
  transforms.conditionalExtract.transforms: extractInt
  transforms.conditionalExtract.transforms.extractInt.type: 
org.apache.kafka.connect.transforms.ExtractField$Key
  transforms.conditionalExtract.transforms.extractInt.field: c1
  transforms.conditionalExtract.condition: topic-matches:<pattern>
{noformat}

* The {{Conditional}} SMT is configured with its own list of transforms 
({{transforms.conditionalExtract.transforms}}) to apply. This would work just 
like the top level {{transforms}} config, so subkeys can be used to configure 
these transforms in the usual way.
* The {{condition}} config defines the predicate for when the transforms are 
applied to a record, using a {{<conditionType>:<value>}} syntax.

We could initially support three condition types:

*{{topic-matches:<pattern>}}* The transformation would be applied if the 
record's topic name matched the given regular expression pattern. For example, 
the following would apply the transformation on records being sent to any topic 
with a name beginning with "my-prefix-":
{noformat}
   transforms.conditionalExtract.condition: topic-matches:my-prefix-.*
{noformat}
   
*{{has-header:<header name>}}* The transformation would be applied if the 
record had at least one header with the given name. For example, the following 
will apply the transformation on records with at least one header with the name 
"my-header":
{noformat}
   transforms.conditionalExtract.condition: has-header:my-header
{noformat}
   
*{{not:<named condition>}}* This would negate the result of another named 
condition using the condition config prefix. For example, the following will 
apply the transformation on records which lack any header with the name 
"my-header":

{noformat}
  transforms.conditionalExtract.condition: not:hasMyHeader
  transforms.conditionalExtract.condition.hasMyHeader: has-header:my-header
{noformat}

I foresee one implementation concern with this approach: currently 
{{Transformation}} has to return a fixed {{ConfigDef}}, and this proposal would 
require something more flexible in order to allow the config parameters to 
depend on the listed transform aliases (and similarly for the named condition 
used by the {{not:}} predicate). I think this could be done by adding a 
{{default}} method to {{Transformation}} for getting the {{ConfigDef}} given 
the config, for example; see the sketch below.
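
To make the concern concrete, a hedged sketch of what such a {{default}} method 
might look like (the two-argument {{config}} overload is hypothetical, not an 
existing Kafka API):

{noformat}
import java.io.Closeable;
import java.util.Map;
import org.apache.kafka.common.Configurable;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.ConnectRecord;

public interface Transformation<R extends ConnectRecord<R>> extends Configurable, Closeable {

    R apply(R record);

    ConfigDef config();

    void close();

    // Hypothetical addition: implementations whose config parameters depend on
    // the supplied config (e.g. the Conditional SMT's per-alias subkeys) could
    // override this; the default preserves today's fixed-ConfigDef behaviour.
    default ConfigDef config(Map<String, String> props) {
        return config();
    }
}
{noformat}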

Obviously this would require a KIP, but before I spend any more time on this 
I'd be interested in your thoughts [~rhauch], [~rmoff], [~gunnar.morling].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9663) KafkaStreams.metadataForKey, queryMetadataForKey docs don't mention null

2020-03-05 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9663:
--

 Summary: KafkaStreams.metadataForKey, queryMetadataForKey docs 
don't mention null
 Key: KAFKA-9663
 URL: https://issues.apache.org/jira/browse/KAFKA-9663
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Tom Bentley
Assignee: Tom Bentley


The Javadocs for {{KafkaStreams.metadataForKey}} and 
{{KafkaStreams.queryMetadataForKey}} don't document the possible null return 
value.
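
For illustration, a hedged sketch of the null check a caller currently needs 
(the store name, wrapper class and key type are illustrative):

{noformat}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.state.StreamsMetadata;

public class MetadataLookup {
    // Returns "host:port" for the instance hosting the key, or null when the
    // metadata is unavailable -- the case the Javadoc doesn't mention.
    static String hostForKey(KafkaStreams streams, String key) {
        StreamsMetadata metadata =
            streams.metadataForKey("my-store", key, Serdes.String().serializer());
        if (metadata == null) {
            return null; // undocumented return value
        }
        return metadata.host() + ":" + metadata.port();
    }
}
{noformat}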



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9651) Divide by zero in DefaultPartitioner

2020-03-04 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9651:
--

 Summary: Divide by zero in DefaultPartitioner
 Key: KAFKA-9651
 URL: https://issues.apache.org/jira/browse/KAFKA-9651
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
Assignee: Tom Bentley


The following exception was observed in a Kafka Streams application running on 
Kafka 2.3:

java.lang.ArithmeticException: / by zero
at 
org.apache.kafka.clients.producer.internals.DefaultPartitioner.partition(DefaultPartitioner.java:69)
at 
org.apache.kafka.streams.processor.internals.DefaultStreamPartitioner.partition(DefaultStreamPartitioner.java:39)
at 
org.apache.kafka.streams.processor.internals.StreamsMetadataState.getStreamsMetadataForKey(StreamsMetadataState.java:255)
at 
org.apache.kafka.streams.processor.internals.StreamsMetadataState.getMetadataWithKey(StreamsMetadataState.java:155)
at 
org.apache.kafka.streams.KafkaStreams.metadataForKey(KafkaStreams.java:1019)

The cause is that the {{Cluster}} returns an empty list from 
{{partitionsForTopic(topic)}} and the size is then used as a divisor.

The same pattern of using the size of the partition list as a divisor is used 
in other implementations of {{Partitioner}} and also in {{StickyPartitionCache}}, 
so presumably they're also prone to this problem when the {{Cluster}} lacks 
information about a topic. A paraphrased sketch of the failing shape follows.
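
The sketch below paraphrases the failing code path (class and method names are 
illustrative; only the division pattern is taken from the report):

{noformat}
import java.util.List;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.utils.Utils;

public class PartitionSketch {
    static int partitionForKey(byte[] keyBytes, String topic, Cluster cluster) {
        List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
        // size() is 0 when the Cluster has no metadata for the topic...
        int numPartitions = partitions.size();
        // ...so this throws ArithmeticException: / by zero
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }
}
{noformat}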



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9650) Include human readable quantities for default config docs

2020-03-04 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9650:
--

 Summary: Include human readable quantities for default config docs
 Key: KAFKA-9650
 URL: https://issues.apache.org/jira/browse/KAFKA-9650
 Project: Kafka
  Issue Type: Improvement
  Components: docs
Reporter: Tom Bentley
Assignee: Tom Bentley


The Kafka config docs include default values for quantities in milliseconds and 
bytes, for example {{log.segment.bytes}} has default: {{1073741824}}. Many 
readers won't know that that's 1GiB, so will have to work it out. It would make 
the docs more readable if we included the quantity in the appropriate unit in 
parenthesis after the actual default value, like this:

default: 1073741824 (=1GiB)

Similarly for values in milliseconds, e.g. {{604800000}} (=7 days). 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9635) Should ConfigProvider.subscribe be deprecated?

2020-03-02 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9635:
--

 Summary: Should ConfigProvider.subscribe be deprecated?
 Key: KAFKA-9635
 URL: https://issues.apache.org/jira/browse/KAFKA-9635
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
Assignee: Tom Bentley


KIP-297 added the ConfigProvider interface for use with Kafka Connect.
It seems that at that time it was anticipated that config providers should 
have a change notification mechanism to facilitate dynamic reconfiguration. 
This was realised by having {{subscribe()}}, {{unsubscribe()}} and 
{{unsubscribeAll()}} methods in the ConfigProvider interface.

KIP-421 subsequently added the ability to use config providers with other 
configs (e.g. client, broker and Kafka Streams). KIP-421 didn't end up using 
the change notification feature, since it was incompatible with being able to 
update broker configs atomically.

As things currently stand the {{subscribe()}}, {{unsubscribe()}} and 
{{unsubscribeAll()}} methods remain in the ConfigProvider interface but are not 
used anywhere in the Kafka code base. Is there an intention to make use of 
these methods, or should they be deprecated?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9634) ConfigProvider does not document thread safety

2020-03-02 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9634:
--

 Summary: ConfigProvider does not document thread safety
 Key: KAFKA-9634
 URL: https://issues.apache.org/jira/browse/KAFKA-9634
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
Assignee: Tom Bentley


In Kafka Connect {{ConfigProvider}} can be used concurrently (e.g. via PUT to 
{{/{connectorType}/config/validate}}), but there is no mention of concurrent 
usage in the Javadocs for {{ConfigProvider}}. It's probably worth calling out 
that implementations need to be thread safe.
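
For illustration, one possible (hedged) wording, sketched against the interface 
as it stands (existing methods elided):

{noformat}
import java.io.Closeable;
import org.apache.kafka.common.Configurable;

/**
 * A provider of configuration data.
 *
 * <p>Implementations must be thread safe: a single instance may be used
 * concurrently from multiple threads, for example during connector config
 * validation in Kafka Connect.
 */
public interface ConfigProvider extends Configurable, Closeable {
    // existing get(...) and subscribe(...) methods unchanged
}
{noformat}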



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9633) ConfigProvider.close() not called

2020-03-02 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9633:
--

 Summary: ConfigProvider.close() not called
 Key: KAFKA-9633
 URL: https://issues.apache.org/jira/browse/KAFKA-9633
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
Assignee: Tom Bentley


ConfigProvider extends Closeable, but in the following contexts the {{close()}} 
method is never called:

1. AbstractConfig
2. WorkerConfigTransformer
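
A hedged sketch of the missing cleanup (the method and its wiring are 
illustrative, not actual Kafka internals):

{noformat}
import java.util.Map;
import org.apache.kafka.common.config.provider.ConfigProvider;
import org.apache.kafka.common.utils.Utils;

public class ProviderLifecycle {
    // Whoever instantiates ConfigProviders should eventually close them;
    // today neither AbstractConfig nor WorkerConfigTransformer does.
    static void resolveAndClose(Map<String, ConfigProvider> providers) {
        try {
            // ... resolve config variables via the providers ...
        } finally {
            for (ConfigProvider provider : providers.values()) {
                Utils.closeQuietly(provider, "config provider");
            }
        }
    }
}
{noformat}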



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-5554) Hilight config settings for particular common use cases

2020-02-25 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-5554.

Resolution: Abandoned

> Hilight config settings for particular common use cases
> ---
>
> Key: KAFKA-5554
> URL: https://issues.apache.org/jira/browse/KAFKA-5554
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>
> Judging by the sorts of questions seen on the mailing list, Stack Overflow 
> etc it seems common for users to assume that Kafka will default to settings 
> which won't lose messages. They start using Kafka and at some later time find 
> messages have been lost.
> While it's not our fault if users don't read the documentation, there's a lot 
> of configuration documentation to digest and it's easy for people to miss an 
> important setting.
> Therefore, I'd like to suggest that in addition to the current configuration 
> docs we add a short section highlighting those settings which pertain to 
> common use cases, such as:
> * configs to avoid lost messages
> * configs for low latency
> I'm sure some users will continue to not read the documentation, but when 
> they inevitably start asking questions it means people can respond with "have 
> you configured everything as described here?"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-6359) Work for KIP-236

2020-02-25 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-6359.

Resolution: Implemented

This was addressed by KIP-455 instead.

> Work for KIP-236
> 
>
> Key: KAFKA-6359
> URL: https://issues.apache.org/jira/browse/KAFKA-6359
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Tom Bentley
>Assignee: GEORGE LI
>Priority: Minor
>
> This issue is for the work described in KIP-236.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9435) Replace DescribeLogDirs request/response with automated protocol

2020-01-15 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9435:
--

 Summary: Replace DescribeLogDirs request/response with automated 
protocol
 Key: KAFKA-9435
 URL: https://issues.apache.org/jira/browse/KAFKA-9435
 Project: Kafka
  Issue Type: Sub-task
Reporter: Tom Bentley
Assignee: Tom Bentley






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9434) Replace AlterReplicaLogDirs request/response with automated protocol

2020-01-15 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9434:
--

 Summary: Replace AlterReplicaLogDirs request/response with 
automated protocol
 Key: KAFKA-9434
 URL: https://issues.apache.org/jira/browse/KAFKA-9434
 Project: Kafka
  Issue Type: Sub-task
Reporter: Tom Bentley
Assignee: Tom Bentley






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9432) Replace DescribeConfigs request/response with automated protocol

2020-01-15 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9432:
--

 Summary: Replace DescribeConfigs request/response with automated 
protocol
 Key: KAFKA-9432
 URL: https://issues.apache.org/jira/browse/KAFKA-9432
 Project: Kafka
  Issue Type: Sub-task
Reporter: Tom Bentley
Assignee: Tom Bentley






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9433) Replace AlterConfigs request/response with automated protocol

2020-01-15 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-9433:
--

 Summary: Replace AlterConfigs request/response with automated 
protocol
 Key: KAFKA-9433
 URL: https://issues.apache.org/jira/browse/KAFKA-9433
 Project: Kafka
  Issue Type: Sub-task
Reporter: Tom Bentley
Assignee: Tom Bentley






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-8862) Misleading exception message for non-existent partition

2019-09-03 Thread Tom Bentley (Jira)
Tom Bentley created KAFKA-8862:
--

 Summary: Misleading exception message for non-existent partition
 Key: KAFKA-8862
 URL: https://issues.apache.org/jira/browse/KAFKA-8862
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 2.3.0
Reporter: Tom Bentley
Assignee: Tom Bentley


https://issues.apache.org/jira/browse/KAFKA-6833 changed the logic of 
{{KafkaProducer.waitOnMetadata}} so that if a partition did not exist it would 
wait for it to exist.
This means that if called with an incorrect partition the method will eventually 
throw a {{TimeoutException}}, which covers both the topic and partition 
non-existence cases.

However, the exception message was not changed for the case where 
{{metadata.awaitUpdate(version, remainingWaitMs)}} throws a 
{{TimeoutException}}.

This results in a confusing exception message. For example, if a producer tries 
to send to a non-existent partition of an existing topic the message is 
"Topic %s not present in metadata after %d ms.", whereas a timeout via the other 
code path would come with the message 
"Partition %d of topic %s with partition count %d is not present in metadata 
after %d ms."





--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (KAFKA-8780) Set SCRAM passwords via the Admin interface

2019-08-09 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-8780:
--

 Summary: Set SCRAM passwords via the Admin interface
 Key: KAFKA-8780
 URL: https://issues.apache.org/jira/browse/KAFKA-8780
 Project: Kafka
  Issue Type: New Feature
  Components: admin
Reporter: Tom Bentley
Assignee: Tom Bentley


It should be possible to set users' SCRAM passwords via the Admin interface.
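
For illustration, a hypothetical sketch of what an Admin-based API for this 
could look like (the class and method names here are assumptions at the time of 
this ticket; an API of roughly this shape was later specified by KIP-554):

{noformat}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.ScramCredentialInfo;
import org.apache.kafka.clients.admin.ScramMechanism;
import org.apache.kafka.clients.admin.UserScramCredentialUpsertion;

public class SetScramPassword {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        try (Admin admin = Admin.create(props)) {
            // Upsert a SCRAM-SHA-256 credential for user "alice" (8192 iterations)
            admin.alterUserScramCredentials(Collections.singletonList(
                new UserScramCredentialUpsertion(
                    "alice",
                    new ScramCredentialInfo(ScramMechanism.SCRAM_SHA_256, 8192),
                    "alice-secret"))).all().get();
        }
    }
}
{noformat}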



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (KAFKA-7789) SSL-related unit tests hang when run on Fedora 29

2019-01-07 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-7789:
--

 Summary: SSL-related unit tests hang when run on Fedora 29
 Key: KAFKA-7789
 URL: https://issues.apache.org/jira/browse/KAFKA-7789
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
Assignee: Tom Bentley


Various SSL-related unit tests (such as {{SslSelectorTest}}) hang when executed 
on Fedora 29. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6379) Work for KIP-240

2017-12-18 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-6379:
--

 Summary: Work for KIP-240
 Key: KAFKA-6379
 URL: https://issues.apache.org/jira/browse/KAFKA-6379
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
Assignee: Tom Bentley
Priority: Minor


This issue is for the work described in KIP-240.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6359) Work for KIP-236

2017-12-13 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-6359:
--

 Summary: Work for KIP-236
 Key: KAFKA-6359
 URL: https://issues.apache.org/jira/browse/KAFKA-6359
 Project: Kafka
  Issue Type: Improvement
Reporter: Tom Bentley
Assignee: Tom Bentley
Priority: Minor


This issue is for the work described in KIP-236.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-6283) Configuration of custom SCRAM SaslServer implementations

2017-12-11 Thread Tom Bentley (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-6283.

Resolution: Duplicate

> Configuration of custom SCRAM SaslServer implementations
> 
>
> Key: KAFKA-6283
> URL: https://issues.apache.org/jira/browse/KAFKA-6283
> Project: Kafka
>  Issue Type: Bug
>Reporter: Tom Bentley
>Assignee: Tom Bentley
>Priority: Minor
>
> It is difficult to supply configuration information to a custom 
> {{SaslServer}} implementation when a SCRAM mechanism is used. 
> {{SaslServerAuthenticator.createSaslServer()}} creates a {{SaslServer}} for a 
> given mechanism. The call to {{Sasl.createSaslServer()}} passes the broker 
> config and a callback handler. In the case of a SCRAM mechanism the callback 
> handler is a {{ScramServerCallbackHandler}} which doesn't have access to the 
> {{jaasContext}}. This makes it hard to configure such a {{SaslServer}} 
> because I can't supply custom keys to the broker config (any unknown ones get 
> removed) and I don't have access to the JAAS config.
> In the case of a non-SCRAM {{SaslServer}}, I at least have access to the JAAS 
> config via the {{SaslServerCallbackHandler}}.
> A simple way to solve this would be to pass the {{jaasContext}} to the 
> {{ScramServerCallbackHandler}} from where a custom {{SaslServerFactory}} 
> could retrieve it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6283) Configuration of custom SCRAM SaslServer implementations

2017-11-29 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-6283:
--

 Summary: Configuration of custom SCRAM SaslServer implementations
 Key: KAFKA-6283
 URL: https://issues.apache.org/jira/browse/KAFKA-6283
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
Assignee: Tom Bentley
Priority: Minor


It is difficult to supply configuration information to a custom {{SaslServer}} 
implementation when a SCRAM mechanism is used. 

{{SaslServerAuthenticator.createSaslServer()}} creates a {{SaslServer}} for a 
given mechanism. The call to {{Sasl.createSaslServer()}} passes the broker 
config and a callback handler. In the case of a SCRAM mechanism the callback 
handler is a {{ScramServerCallbackHandler}} which doesn't have access to the 
{{jaasContext}}. This makes it hard to configure such a {{SaslServer}} 
because I can't supply custom keys to the broker config (any unknown ones get 
removed) and I don't have access to the JAAS config.

In the case of a non-SCRAM {{SaslServer}}, I at least have access to the JAAS 
config via the {{SaslServerCallbackHandler}}.

A simple way to solve this would be to pass the {{jaasContext}} to the 
{{ScramServerCallbackHandler}} from where a custom {{SaslServerFactory}} could 
retrieve it.
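
To make the constraint concrete, a hedged sketch using only the JDK SASL API 
(the factory method is illustrative, and a SCRAM {{SaslServer}} provider must 
be registered for the call to succeed at runtime):

{noformat}
import java.util.Map;
import javax.security.auth.callback.CallbackHandler;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslException;
import javax.security.sasl.SaslServer;

public class CustomScramServerFactory {
    // A custom factory only ever sees the props map and the callback handler,
    // so if the handler (today's ScramServerCallbackHandler) doesn't carry the
    // JaasContext, there is no way to reach the JAAS config from here.
    static SaslServer create(Map<String, ?> brokerProps, CallbackHandler scramHandler)
            throws SaslException {
        return Sasl.createSaslServer("SCRAM-SHA-256", "kafka", null, brokerProps, scramHandler);
    }
}
{noformat}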



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6272) SASL PLAIN and SCRAM do not apply SASLPrep

2017-11-27 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-6272:
--

 Summary: SASL PLAIN and SCRAM do not apply SASLPrep
 Key: KAFKA-6272
 URL: https://issues.apache.org/jira/browse/KAFKA-6272
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Tom Bentley
Assignee: Tom Bentley
Priority: Minor


[RFC 5802|https://tools.ietf.org/html/rfc5802] (SASL SCRAM) says:

{quote}
Before sending the username to the server, the client SHOULD
prepare the username using the "SASLprep" profile [RFC4013] of
the "stringprep" algorithm [RFC3454] treating it as a query
string (i.e., unassigned Unicode code points are allowed).
{quote}

{{ScramSaslClient}} uses {{ScramFormatter.normalize()}}, which just UTF-8 
encodes the bytes.
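
For reference, a hedged sketch of that behaviour (the wrapper class is 
illustrative); an RFC-compliant implementation would run the string through 
SASLprep (RFC 4013) before encoding:

{noformat}
import java.nio.charset.StandardCharsets;

public final class ScramNormalization {
    private ScramNormalization() { }

    // Current behaviour as described above: plain UTF-8 encoding, with no
    // SASLprep preparation applied first.
    public static byte[] normalize(String str) {
        return str.getBytes(StandardCharsets.UTF_8);
    }
}
{noformat}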

Likewise [RFC 4616|https://tools.ietf.org/html/rfc4616] (SASL PLAIN) says:

{quote}
The presented authentication identity and password strings, as well
as the database authentication identity and password strings, are to
be prepared before being used in the verification process.  The
[SASLPrep] profile of the [StringPrep] algorithm is the RECOMMENDED
preparation algorithm. The SASLprep preparation algorithm is recommended to 
improve the likelihood that comparisons behave in an expected manner.  The 
SASLprep preparation algorithm is not mandatory so as to allow the server to 
employ other preparation algorithms (including none) when appropriate.  For 
instance, use of a different preparation algorithm may be necessary for the 
server to interoperate with an external system.
{quote}

But the comparison is simply on the bare strings.

This doesn't cause problems with the SASL components distributed with Kafka 
(because they consistently don't do any string preparation), but it makes it 
harder to, for example, use the Kafka {{SaslClient}}s on clients while 
configuring a different {{SaslServer}} on brokers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6143) VerifiableProducer & VerifiableConsumer need tests

2017-10-30 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-6143:
--

 Summary: VerifiableProducer & VerifiableConsumer need tests
 Key: KAFKA-6143
 URL: https://issues.apache.org/jira/browse/KAFKA-6143
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
Priority: Minor


The {{VerifiableProducer}} and {{VerifiableConsumer}} are used for system 
tests, but don't have any tests themselves. They should.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6130) VerifiableConsumer with --max-messages

2017-10-26 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-6130:
--

 Summary: VerifiableConsumer with --max-messages 
 Key: KAFKA-6130
 URL: https://issues.apache.org/jira/browse/KAFKA-6130
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
Assignee: Tom Bentley
Priority: Minor


If I run {{kafka-verifiable-consumer.sh --max-messages=N}} I expect the tool to 
consume N messages and then exit. It will actually consume as many messages as 
are in the topic and then block.

The problem is that although the max messages will cause the loop in 
{{onRecordsReceived()}} to break, the loop in {{run()}} will just call 
{{onRecordsReceived()}} again; see the sketch below.
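
A minimal hedged sketch of the control flow (names and bodies are illustrative, 
not the actual tool code), showing the buggy shape and the fix:

{noformat}
import java.util.concurrent.atomic.AtomicLong;

public class MaxMessagesBug {
    static final long MAX_MESSAGES = 10;
    static final AtomicLong consumed = new AtomicLong();

    // Stops early once MAX_MESSAGES is reached -- but only for this batch.
    static void onRecordsReceived(int batchSize) {
        for (int i = 0; i < batchSize; i++) {
            if (consumed.incrementAndGet() >= MAX_MESSAGES)
                return;
        }
    }

    public static void main(String[] args) {
        // Buggy shape: while (true) { onRecordsReceived(poll()); } -- never exits.
        // Fixed shape: honour the limit in the outer loop as well:
        while (consumed.get() < MAX_MESSAGES) {
            onRecordsReceived(100); // stand-in for a consumer.poll() batch
        }
    }
}
{noformat}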



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6050) Cannot alter default topic config

2017-10-11 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-6050:
--

 Summary: Cannot alter default topic config
 Key: KAFKA-6050
 URL: https://issues.apache.org/jira/browse/KAFKA-6050
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley


The command to describe the default topic config
{noformat}
bin/kafka-configs.sh --zookeeper localhost:2181 \
  --describe --entity-type topics --entity-name ''
{noformat}

returns without error, but the equivalent command to alter the default topic 
config:

{noformat}
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name '' --add-config retention.ms=1000
{noformat}

returns an error:

{noformat}
Error while executing config command Topic name "" is illegal, it 
contains a character other than ASCII alphanumerics, '.', '_' and '-'
org.apache.kafka.common.errors.InvalidTopicException: Topic name "" is 
illegal, it contains a character other than ASCII alphanumerics, '.', '_' and 
'-'
at org.apache.kafka.common.internals.Topic.validate(Topic.java:45)
at kafka.admin.AdminUtils$.validateTopicConfig(AdminUtils.scala:578)
at kafka.admin.AdminUtils$.changeTopicConfig(AdminUtils.scala:595)
at kafka.admin.AdminUtilities$class.changeConfigs(AdminUtils.scala:52)
at kafka.admin.AdminUtils$.changeConfigs(AdminUtils.scala:63)
at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:103)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:70)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6046) DeleteRecordsRequest to a non-leader

2017-10-10 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-6046:
--

 Summary: DeleteRecordsRequest to a non-leader
 Key: KAFKA-6046
 URL: https://issues.apache.org/jira/browse/KAFKA-6046
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
 Fix For: 1.1.0


When a `DeleteRecordsRequest` is sent to a broker that's not the leader for the 
partition, the `DeleteRecordsResponse` returns `UNKNOWN_TOPIC_OR_PARTITION`. 
This is ambiguous (does the topic not exist on any broker, or did we just send 
the request to the wrong broker?), and inconsistent (a `ProduceRequest` would 
return `NOT_LEADER_FOR_PARTITION`).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5860) Prevent non-consecutive partition ids

2017-09-08 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-5860:
--

 Summary: Prevent non-consecutive partition ids
 Key: KAFKA-5860
 URL: https://issues.apache.org/jira/browse/KAFKA-5860
 Project: Kafka
  Issue Type: Improvement
Reporter: Tom Bentley
Priority: Minor


It is possible to create non-consecutive partition ids via 
{{AdminClient.createTopics()}} and {{kafka-topics.sh}}. It's not clear that this 
has any use cases, nor that it is well tested. 

Since people generally assume partition ids will be consecutive it is likely to 
be a cause of bugs in both Kafka and user code.

We should remove the ability to create topics with non-consecutive partition 
ids.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5856) AdminClient should be able to increase number of partitions

2017-09-07 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-5856:
--

 Summary: AdminClient should be able to increase number of 
partitions
 Key: KAFKA-5856
 URL: https://issues.apache.org/jira/browse/KAFKA-5856
 Project: Kafka
  Issue Type: Improvement
Reporter: Tom Bentley
Assignee: Tom Bentley


It should be possible to increase the partition count using the AdminClient. 

See 
[KIP-195|https://cwiki.apache.org/confluence/display/KAFKA/KIP-195%3A+AdminClient.increasePartitions]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5693) TopicCreationPolicy and AlterConfigsPolicy overlap

2017-08-02 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-5693:
--

 Summary: TopicCreationPolicy and AlterConfigsPolicy overlap
 Key: KAFKA-5693
 URL: https://issues.apache.org/jira/browse/KAFKA-5693
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
Priority: Minor


The administrator of a cluster can configure a {{CreateTopicPolicy}}, which has 
access to the topic configs as well as other metadata to make its decision 
about whether a topic creation is allowed. Thus in theory the decision could be 
based on a combination of of the replication factor, and the topic configs, for 
example. 

Separately there is an AlterConfigPolicy, which only has access to the configs 
(and can apply to configurable entities other than just topics).

There are potential issues with this. For example, although the 
CreateTopicPolicy is checked at creation time, it's not checked for any later 
alterations to the topic config. So policies which depend on both the topic 
configs and other topic metadata could be worked around by changing the configs 
after creation.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5692) Refactor PreferredReplicaLeaderElectionCommand to use AdminClient

2017-08-02 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-5692:
--

 Summary: Refactor PreferredReplicaLeaderElectionCommand to use 
AdminClient
 Key: KAFKA-5692
 URL: https://issues.apache.org/jira/browse/KAFKA-5692
 Project: Kafka
  Issue Type: Bug
Reporter: Tom Bentley
Assignee: Tom Bentley
Priority: Minor


The PreferredReplicaLeaderElectionCommand currently uses a direct connection 
to ZooKeeper. The ZooKeeper dependency should be deprecated and an AdminClient 
API created to be used instead. 

This change will require a KIP.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5601) Refactor ReassignPartitionsCommand to use AdminClient

2017-07-17 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-5601:
--

 Summary: Refactor ReassignPartitionsCommand to use AdminClient
 Key: KAFKA-5601
 URL: https://issues.apache.org/jira/browse/KAFKA-5601
 Project: Kafka
  Issue Type: Improvement
Reporter: Tom Bentley
Assignee: Tom Bentley


Currently the {{ReassignPartitionsCommand}} (used by 
{{kafka-reassign-partitions.sh}}) talks directly to ZooKeeper. It would be 
better to have it use the AdminClient API instead. 

This would entail creating two new protocol APIs, one to initiate the request 
and another to request the status of an in-progress reassignment. As such this 
would require a KIP.

This touches on the work of KIP-166, but that proposes to use the 
{{ReassignPartitionsCommand}} API, so should not be affected so long as that 
API is maintained. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5554) Hilight config settings for particular common use cases

2017-07-04 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-5554:
--

 Summary: Hilight config settings for particular common use cases
 Key: KAFKA-5554
 URL: https://issues.apache.org/jira/browse/KAFKA-5554
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Reporter: Tom Bentley
Assignee: Tom Bentley
Priority: Minor


Judging by the sorts of questions seen on the mailing list, Stack Overflow etc 
it seems common for users to assume that Kafka will default to settings which 
won't lose messages. They start using Kafka and at some later time find 
messages have been lost.

While it's not our fault if users don't read the documentation, there's a lot 
of configuration documentation to digest and it's easy for people to miss an 
important setting.

Therefore, I'd like to suggest that in addition to the current configuration 
docs we add a short section highlighting those settings which pertain to common 
use cases, such as:

* configs to avoid lost messages
* configs for low latency

I'm sure some users will continue to not read the documentation, but when they 
inevitably start asking questions it means people can respond with "have you 
configured everything as described here?"



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5517) Support linking to particular configuration parameters

2017-06-26 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-5517:
--

 Summary: Support linking to particular configuration parameters
 Key: KAFKA-5517
 URL: https://issues.apache.org/jira/browse/KAFKA-5517
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Reporter: Tom Bentley
Assignee: Tom Bentley
Priority: Minor


Currently the configuration parameters are documented in long tables, and it's 
only possible to link to the heading before a particular table. When discussing 
configuration parameters on forums it would be helpful to be able to link to 
the particular parameter under discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5508) Documentation for altering topics

2017-06-23 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-5508:
--

 Summary: Documentation for altering topics
 Key: KAFKA-5508
 URL: https://issues.apache.org/jira/browse/KAFKA-5508
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Reporter: Tom Bentley
Priority: Minor


According to the upgrade documentation:

bq. Altering topic configuration from the kafka-topics.sh script 
(kafka.admin.TopicCommand) has been deprecated. Going forward, please use the 
kafka-configs.sh script (kafka.admin.ConfigCommand) for this functionality. 

But the Operations documentation still tells people to use kafka-topics.sh to 
alter their topic configurations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5496) Consistency in documentation

2017-06-22 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-5496:
--

 Summary: Consistency in documentation
 Key: KAFKA-5496
 URL: https://issues.apache.org/jira/browse/KAFKA-5496
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Reporter: Tom Bentley
Assignee: Tom Bentley
Priority: Minor


The documentation is full of inconsistencies, including:

* Some tool examples feature a {{>}} prompt, but others do not.
* Code/config in {{<pre>}} tags with different amounts of indentation (often 
there's no actual need for indentation at all)
* Missing or inconsistent typographical conventions for file and script names, 
class and method names etc, making some of the documentation harder to read.
* {{<pre class="brush: ...">}} tags for syntax highlighting, but no syntax 
highlighting on the site.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5479) Docs for authorization omit authorizer.class.name

2017-06-20 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-5479:
--

 Summary: Docs for authorization omit authorizer.class.name
 Key: KAFKA-5479
 URL: https://issues.apache.org/jira/browse/KAFKA-5479
 Project: Kafka
  Issue Type: Improvement
  Components: documentation
Reporter: Tom Bentley
Priority: Minor


The documentation in §7.4 Authorization and ACLs doesn't mention the 
{{authorizer.class.name}} setting. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5459) Support kafka-console-producer.sh messages as whole file

2017-06-16 Thread Tom Bentley (JIRA)
Tom Bentley created KAFKA-5459:
--

 Summary: Support kafka-console-producer.sh messages as whole file
 Key: KAFKA-5459
 URL: https://issues.apache.org/jira/browse/KAFKA-5459
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Affects Versions: 0.10.2.1
Reporter: Tom Bentley
Priority: Trivial


{{kafka-console-producer.sh}} treats each line read as a separate message. This 
can be controlled using the {{--line-reader}} option and the corresponding 
{{MessageReader}} trait. It would be useful to have built-in support for 
sending the whole input stream/file as the message. 





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-2111) Command Line Standardization - Add Help Arguments & List Required Fields

2017-06-14 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049265#comment-16049265
 ] 

Tom Bentley commented on KAFKA-2111:


I'm happy to work on this if [~johnma] is no longer looking at it.

> Command Line Standardization - Add Help Arguments & List Required Fields
> 
>
> Key: KAFKA-2111
> URL: https://issues.apache.org/jira/browse/KAFKA-2111
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Reporter: Matt Warhaftig
>Assignee: Mariam John
>Priority: Minor
>  Labels: newbie
>
> KIP-14 is the standardization of tool command line arguments.  As an offshoot 
> of that proposal there are standardization changes that don't need to be part 
> of the KIP since they are less invasive.  They are:
> - Properly format argument descriptions (into sentences) and add any missing 
> "REQUIRED" notes.
> - Add 'help' argument to any top-level tool scripts that were missing it.
> This JIRA is for tracking them.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5421) Getting InvalidRecordException

2017-06-14 Thread Tom Bentley (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16049037#comment-16049037
 ] 

Tom Bentley commented on KAFKA-5421:


The root cause is probably bitrot on the disk storing that replica. You can 
confirm that using {{bin/kafka-run-class.sh kafka.tools.DumpLogSegments}}. 

If you have another replica for that partition you could read the record from 
there. 

Otherwise you will need to skip at least that message. You can do that 
programmatically (see the sketch below), but I'm not aware of a tool which would 
let you do that (but I'm new to Kafka so perhaps such a thing exists).
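
A hedged sketch of the programmatic skip (topic, partition and offset are taken 
from the stack trace quoted below; connection settings are assumptions):

{noformat}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class SkipCorruptRecord {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("TcpMessage", 1);
            consumer.assign(Collections.singletonList(tp));
            consumer.seek(tp, 155884487L + 1); // one past the corrupt offset
            // subsequent consumer.poll(...) calls resume after the corrupt record
        }
    }
}
{noformat}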

> Getting InvalidRecordException
> --
>
> Key: KAFKA-5421
> URL: https://issues.apache.org/jira/browse/KAFKA-5421
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
>Reporter: Rishi Reddy Bokka
>Priority: Blocker
>
> In my application, I get data which gets queued using kafka and saved on the 
> disk and the consumer which gets this data from kafka and does the 
> processing. But When my consumer is trying to read data from kafka I am 
> getting below exceptions :
> 2017-06-09 10:57:24,733 ERROR NetworkClient Uncaught error in request 
> completion:
> org.apache.kafka.common.KafkaException: Error deserializing key/value for 
> partition TcpMessage-1 at offset 155884487
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.parseRecord(Fetcher.java:628)
>  ~[kafka-clients-0.9.0.1.jar:?]
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.handleFetchResponse(Fetcher.java:566)
>  ~[kafka-clients-0.9.0.1.jar:?]
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.access$000(Fetcher.java:69)
>  ~[kafka-clients-0.9.0.1.jar:?]
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$1.onSuccess(Fetcher.java:139)
>  ~[kafka-clients-0.9.0.1.jar:?]
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher$1.onSuccess(Fetcher.java:136)
>  ~[kafka-clients-0.9.0.1.jar:?]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
>  ~[kafka-clients-0.9.0.1.jar:?]
> at 
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
>  ~[kafka-clients-0.9.0.1.jar:?]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
>  ~[kafka-clients-0.9.0.1.jar:?]
> at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274) 
> [kafka-clients-0.9.0.1.jar:?]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
>  [kafka-clients-0.9.0.1.jar:?]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
>  [kafka-clients-0.9.0.1.jar:?]
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
>  [kafka-clients-0.9.0.1.jar:?]
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:908)
>  [kafka-clients-0.9.0.1.jar:?]
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853) 
> [kafka-clients-0.9.0.1.jar:?]
> at 
> com.affirmed.mediation.edr.kafka.tcpMessage.TcpMessageConsumer.doWork(TcpMessageConsumer.java:190)
>  [EdrServer.jar:?]
> at 
> com.affirmed.mediation.edr.kafka.tcpMessage.TcpMessageConsumer.run(TcpMessageConsumer.java:248)
>  [EdrServer.jar:?]
> Caused by: org.apache.kafka.common.record.InvalidRecordException: Record is 
> corrupt (stored crc = 2016852547, computed crc = 1399853379)
> at org.apache.kafka.common.record.Record.ensureValid(Record.java:226) 
> ~[kafka-clients-0.9.0.1.jar:?]
> at 
> org.apache.kafka.clients.consumer.internals.Fetcher.parseRecord(Fetcher.java:617)
>  ~[kafka-clients-0.9.0.1.jar:?]
> ... 15 more
> Could anyone please help me with this. I got stuck with it and not able to 
> figure out the root.
> When this occurs is there any way to catch this exception and move the 
> offset? Currently, consumer is keep polling for the same range of records in 
> the next poll  as   
> result never moving forward.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)