[jira] [Updated] (KAFKA-8563) Minor: Remove method call in networkSend

2019-06-18 Thread karan kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

karan kumar updated KAFKA-8563:
---
Environment: 
Darwin WM-CXX 18.2.0 Darwin Kernel Version 18.2.0: Thu Dec 20 20:46:53 PST 
2018; root:xnu-4903.241.1~1/RELEASE_X86_64 x86_64

ProductName:Mac OS X
ProductVersion: 10.14.3

java version "1.8.0_201"
Java(TM) SE Runtime Environment (build 1.8.0_201-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)



  was:
Darwin WM-CXX 18.2.0 Darwin Kernel Version 18.2.0: Thu Dec 20 20:46:53 PST 
2018; root:xnu-4903.241.1~1/RELEASE_X86_64 x86_64

ProductName:Mac OS X
ProductVersion: 10.14.3



> Minor: Remove method call in networkSend
> -
>
> Key: KAFKA-8563
> URL: https://issues.apache.org/jira/browse/KAFKA-8563
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 2.4.0
> Environment: Darwin WM-CXX 18.2.0 Darwin Kernel Version 18.2.0: 
> Thu Dec 20 20:46:53 PST 2018; root:xnu-4903.241.1~1/RELEASE_X86_64 x86_64
> ProductName:  Mac OS X
> ProductVersion:   10.14.3
> java version "1.8.0_201"
> Java(TM) SE Runtime Environment (build 1.8.0_201-b09)
> Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)
>Reporter: karan kumar
>Priority: Minor
>
> There is a method call
> [https://github.com/apache/kafka/blob/93bf96589471acadfb90e57ebfecbd91f679f77b/clients/src/main/java/org/apache/kafka/common/network/NetworkSend.java#L30]
>  that can be removed from the NetworkSend class.
>  
> Initial JMH benchmarks suggest minimal improvement after removing this method 
> call.
>  
> Present network send JMH report:
>  
> {code:java}
> // code placeholder
> running JMH with args [-f 2 ByteBufferSendBenchmark]
> # JMH version: 1.21
> # VM version: JDK 1.8.0_201, Java HotSpot(TM) 64-Bit Server VM, 25.201-b09
> # VM invoker: 
> /Library/Java/JavaVirtualMachines/jdk1.8.0_201.jdk/Contents/Home/jre/bin/java
> # VM options: 
> # Warmup: 5 iterations, 2000 ms each
> # Measurement: 5 iterations, 5000 ms each
> # Timeout: 10 min per iteration
> # Threads: 1 thread, will synchronize iterations
> # Benchmark mode: Throughput, ops/time
> # Benchmark: org.apache.kafka.jmh.common.ByteBufferSendBenchmark.testMethod
> # Run progress: 0.00% complete, ETA 00:01:10
> # Fork: 1 of 2
> # Warmup Iteration 1: 38.961 ops/us
> # Warmup Iteration 2: 66.493 ops/us
> # Warmup Iteration 3: 63.502 ops/us
> # Warmup Iteration 4: 64.205 ops/us
> # Warmup Iteration 5: 63.676 ops/us
> Iteration 1: 63.537 ops/us
> Iteration 2: 63.863 ops/us
> Iteration 3: 58.472 ops/us
> Iteration 4: 62.780 ops/us
> Iteration 5: 63.454 ops/us
> # Run progress: 50.00% complete, ETA 00:00:35
> # Fork: 2 of 2
> # Warmup Iteration 1: 41.128 ops/us
> # Warmup Iteration 2: 66.872 ops/us
> # Warmup Iteration 3: 64.279 ops/us
> # Warmup Iteration 4: 64.307 ops/us
> # Warmup Iteration 5: 64.101 ops/us
> Iteration 1: 64.315 ops/us
> Iteration 2: 64.370 ops/us
> Iteration 3: 64.043 ops/us
> Iteration 4: 60.844 ops/us
> Iteration 5: 62.936 ops/us
> Result "org.apache.kafka.jmh.common.ByteBufferSendBenchmark.testMethod":
> 62.861 ±(99.9%) 2.804 ops/us [Average]
> (min, avg, max) = (58.472, 62.861, 64.370), stdev = 1.854
> CI (99.9%): [60.058, 65.665] (assumes normal distribution)
> # Run complete. Total time: 00:01:10
> REMEMBER: The numbers below are just data. To gain reusable insights, you 
> need to follow up on
> why the numbers are the way they are. Use profilers (see -prof, -lprof), 
> design factorial
> experiments, perform baseline and negative tests that provide experimental 
> control, make sure
> the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from 
> the domain experts.
> Do not assume the numbers tell you what you want them to tell.
> Benchmark Mode Cnt Score Error Units
> ByteBufferSendBenchmark.testMethod thrpt 10 62.861 ± 2.804 ops/us
> {code}
> and after removing the method call
>  
> {code:java}
> // code placeholder
> running JMH with args [-f 2 ByteBufferSendBenchmark]
> # JMH version: 1.21
> # VM version: JDK 1.8.0_201, Java HotSpot(TM) 64-Bit Server VM, 25.201-b09
> # VM invoker: 
> /Library/Java/JavaVirtualMachines/jdk1.8.0_201.jdk/Contents/Home/jre/bin/java
> # VM options: 
> # Warmup: 5 iterations, 2000 ms each
> # Measurement: 5 iterations, 5000 ms each
> # Timeout: 10 min per iteration
> # Threads: 1 thread, will synchronize iterations
> # Benchmark mode: Throughput, ops/time
> # Benchmark: org.apache.kafka.jmh.common.ByteBufferSendBenchmark.testMethod
> # Run progress: 0.00% complete, ETA 00:01:10
> # Fork: 1 of 2
> # Warmup Iteration 1: 40.512 ops/us
> # Warmup Iteration 2: 67.002 ops/us
> # Warmup Iteration 3: 63.399 ops/us
> # Warmup Iteration 4: 63.288 ops/us
> # Warmup Iteration 5: 63.776 ops/us
> Iteration 1: 63.539 ops/us

[jira] [Created] (KAFKA-8563) Minor: Remove method call in networkSend

2019-06-18 Thread karan kumar (JIRA)
karan kumar created KAFKA-8563:
--

 Summary: Minor: Remove method call in networkSend
 Key: KAFKA-8563
 URL: https://issues.apache.org/jira/browse/KAFKA-8563
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 2.4.0
 Environment: Darwin WM-CXX 18.2.0 Darwin Kernel Version 18.2.0: 
Thu Dec 20 20:46:53 PST 2018; root:xnu-4903.241.1~1/RELEASE_X86_64 x86_64

ProductName:Mac OS X
ProductVersion: 10.14.3

Reporter: karan kumar


There is a method call
[https://github.com/apache/kafka/blob/93bf96589471acadfb90e57ebfecbd91f679f77b/clients/src/main/java/org/apache/kafka/common/network/NetworkSend.java#L30]
 that can be removed from the NetworkSend class.

 

Initial JMH benchmarks suggest minimal improvement after removing this method 
call.
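For context, the framing NetworkSend performs amounts to prepending a 4-byte
size prefix to the payload buffer. The class below is a simplified,
stand-alone sketch of that size-delimited framing (illustrative only, not the
actual Kafka source):

```java
import java.nio.ByteBuffer;

public class SizeDelimited {
    // Prepend a 4-byte big-endian length prefix to the payload, mirroring
    // the size-delimited framing that NetworkSend applies to each send.
    static ByteBuffer[] sizeDelimit(ByteBuffer payload) {
        ByteBuffer sizeBuffer = ByteBuffer.allocate(4);
        sizeBuffer.putInt(payload.remaining());
        sizeBuffer.rewind();  // make the prefix readable from position 0
        return new ByteBuffer[] {sizeBuffer, payload};
    }

    public static void main(String[] args) {
        ByteBuffer payload = ByteBuffer.wrap(new byte[] {1, 2, 3});
        ByteBuffer[] framed = sizeDelimit(payload);
        // prints "3:3" - a 3-byte payload announced by a 4-byte prefix
        System.out.println(framed[0].getInt() + ":" + framed[1].remaining());
    }
}
```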

 

Present network send JMH report:

 
{code:java}
// code placeholder
running JMH with args [-f 2 ByteBufferSendBenchmark]
# JMH version: 1.21
# VM version: JDK 1.8.0_201, Java HotSpot(TM) 64-Bit Server VM, 25.201-b09
# VM invoker: 
/Library/Java/JavaVirtualMachines/jdk1.8.0_201.jdk/Contents/Home/jre/bin/java
# VM options: 
# Warmup: 5 iterations, 2000 ms each
# Measurement: 5 iterations, 5000 ms each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Throughput, ops/time
# Benchmark: org.apache.kafka.jmh.common.ByteBufferSendBenchmark.testMethod

# Run progress: 0.00% complete, ETA 00:01:10
# Fork: 1 of 2
# Warmup Iteration 1: 38.961 ops/us
# Warmup Iteration 2: 66.493 ops/us
# Warmup Iteration 3: 63.502 ops/us
# Warmup Iteration 4: 64.205 ops/us
# Warmup Iteration 5: 63.676 ops/us
Iteration 1: 63.537 ops/us
Iteration 2: 63.863 ops/us
Iteration 3: 58.472 ops/us
Iteration 4: 62.780 ops/us
Iteration 5: 63.454 ops/us

# Run progress: 50.00% complete, ETA 00:00:35
# Fork: 2 of 2
# Warmup Iteration 1: 41.128 ops/us
# Warmup Iteration 2: 66.872 ops/us
# Warmup Iteration 3: 64.279 ops/us
# Warmup Iteration 4: 64.307 ops/us
# Warmup Iteration 5: 64.101 ops/us
Iteration 1: 64.315 ops/us
Iteration 2: 64.370 ops/us
Iteration 3: 64.043 ops/us
Iteration 4: 60.844 ops/us
Iteration 5: 62.936 ops/us


Result "org.apache.kafka.jmh.common.ByteBufferSendBenchmark.testMethod":
62.861 ±(99.9%) 2.804 ops/us [Average]
(min, avg, max) = (58.472, 62.861, 64.370), stdev = 1.854
CI (99.9%): [60.058, 65.665] (assumes normal distribution)


# Run complete. Total time: 00:01:10

REMEMBER: The numbers below are just data. To gain reusable insights, you need 
to follow up on
why the numbers are the way they are. Use profilers (see -prof, -lprof), design 
factorial
experiments, perform baseline and negative tests that provide experimental 
control, make sure
the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from 
the domain experts.
Do not assume the numbers tell you what you want them to tell.

Benchmark Mode Cnt Score Error Units
ByteBufferSendBenchmark.testMethod thrpt 10 62.861 ± 2.804 ops/us
{code}
and after removing the method call

 
{code:java}
// code placeholder

running JMH with args [-f 2 ByteBufferSendBenchmark]
# JMH version: 1.21
# VM version: JDK 1.8.0_201, Java HotSpot(TM) 64-Bit Server VM, 25.201-b09
# VM invoker: 
/Library/Java/JavaVirtualMachines/jdk1.8.0_201.jdk/Contents/Home/jre/bin/java
# VM options: 
# Warmup: 5 iterations, 2000 ms each
# Measurement: 5 iterations, 5000 ms each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Throughput, ops/time
# Benchmark: org.apache.kafka.jmh.common.ByteBufferSendBenchmark.testMethod

# Run progress: 0.00% complete, ETA 00:01:10
# Fork: 1 of 2
# Warmup Iteration 1: 40.512 ops/us
# Warmup Iteration 2: 67.002 ops/us
# Warmup Iteration 3: 63.399 ops/us
# Warmup Iteration 4: 63.288 ops/us
# Warmup Iteration 5: 63.776 ops/us
Iteration 1: 63.539 ops/us
Iteration 2: 63.204 ops/us
Iteration 3: 63.114 ops/us
Iteration 4: 63.106 ops/us
Iteration 5: 63.708 ops/us

# Run progress: 50.00% complete, ETA 00:00:35
# Fork: 2 of 2
# Warmup Iteration 1: 40.290 ops/us
# Warmup Iteration 2: 65.076 ops/us
# Warmup Iteration 3: 62.961 ops/us
# Warmup Iteration 4: 63.219 ops/us
# Warmup Iteration 5: 63.380 ops/us
Iteration 1: 63.186 ops/us
Iteration 2: 63.411 ops/us
Iteration 3: 63.427 ops/us
Iteration 4: 63.441 ops/us
Iteration 5: 63.483 ops/us


Result "org.apache.kafka.jmh.common.ByteBufferSendBenchmark.testMethod":
63.362 ±(99.9%) 0.303 ops/us [Average]
(min, avg, max) = (63.106, 63.362, 63.708), stdev = 0.200
CI (99.9%): [63.059, 63.665] (assumes normal distribution)


# Run complete. Total time: 00:01:10

REMEMBER: The numbers below are just data. To gain reusable insights, you need 
to follow up on
why the numbers are the way they are. Use profilers (see -prof, -lprof), design 
factorial
experiments, perform baseline and negative tests that provide experimental 
control, make sure
the 

[jira] [Updated] (KAFKA-8488) FetchSessionHandler logging create 73 mb allocation in TLAB which could be no op

2019-06-18 Thread Kamal Chandraprakash (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kamal Chandraprakash updated KAFKA-8488:

Fix Version/s: 2.4.0

> FetchSessionHandler logging create 73 mb allocation in TLAB which could be no 
> op 
> -
>
> Key: KAFKA-8488
> URL: https://issues.apache.org/jira/browse/KAFKA-8488
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Wenshuai Hou
>Priority: Minor
> Fix For: 2.4.0
>
> Attachments: image-2019-06-05-14-04-35-668.png
>
>
> !image-2019-06-05-14-04-35-668.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8559) PartitionStates.partitionStates cause array grow allocation.

2019-06-18 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-8559.

   Resolution: Fixed
Fix Version/s: 2.4.0

> PartitionStates.partitionStates cause array grow allocation. 
> -
>
> Key: KAFKA-8559
> URL: https://issues.apache.org/jira/browse/KAFKA-8559
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Wenshuai Hou
>Priority: Major
> Fix For: 2.4.0
>
> Attachments: image-2019-06-18-19-10-10-633.png
>
>
> This method causes 238 TLAB allocations totaling 297 MB in Arrays.copyOf().
> The size of the array can be determined at creation.
> !image-2019-06-18-19-10-10-633.png!





[jira] [Commented] (KAFKA-8559) PartitionStates.partitionStates cause array grow allocation.

2019-06-18 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867205#comment-16867205
 ] 

ASF GitHub Bot commented on KAFKA-8559:
---

ijuma commented on pull request #6964: KAFKA-8559: avoid kafka array list grow
URL: https://github.com/apache/kafka/pull/6964
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> PartitionStates.partitionStates cause array grow allocation. 
> -
>
> Key: KAFKA-8559
> URL: https://issues.apache.org/jira/browse/KAFKA-8559
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Wenshuai Hou
>Priority: Major
> Attachments: image-2019-06-18-19-10-10-633.png
>
>
> This method causes 238 TLAB allocations totaling 297 MB in Arrays.copyOf().
> The size of the array can be determined at creation.
> !image-2019-06-18-19-10-10-633.png!





[jira] [Updated] (KAFKA-8561) Make Kafka Pluggable with customized Keystore/TrustStore

2019-06-18 Thread Thomas Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Zhou updated KAFKA-8561:
---
Priority: Major  (was: Minor)

> Make Kafka Pluggable with customized Keystore/TrustStore
> 
>
> Key: KAFKA-8561
> URL: https://issues.apache.org/jira/browse/KAFKA-8561
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients, security
>Reporter: Thomas Zhou
>Assignee: Thomas Zhou
>Priority: Major
>
> Many companies need to enable TLS for Kafka from a security perspective, and 
> Kafka provides file-based configuration to load the keystore and truststore 
> from disk.
> However, it is hard to plug in a customized in-memory keystore and 
> truststore in the current Kafka version.
> We want to make the keystore and truststore pluggable, meaning the Kafka 
> broker and client could load them from another service at startup, so Kafka 
> can use a customized keystore and truststore.





[jira] [Commented] (KAFKA-8335) Log cleaner skips Transactional mark and batch record, causing unlimited growth of __consumer_offsets

2019-06-18 Thread Boquan Tang (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867181#comment-16867181
 ] 

Boquan Tang commented on KAFKA-8335:


[~francisco.juan] This patch worked for us.
I have a suspicion about why this is still happening to you: you may find an 
offset in {{/your/kafka/data/directory/cleaner-offset-checkpoint}} that 
corresponds to a recent offset (5006278217 in your case). This is invalid 
because the log before that offset is not really cleaned up, due to this bug.
However, since the LogCleanerManager yields (5069232666,5069232666) as the 
dirty portion, which may be extremely small compared to the log size, the log 
cleaner won't treat this partition as the 'filthiest', and I doubt the dirty 
ratio will exceed 50%, the default min.cleanable.dirty.ratio.

To solve it, you can either stop the server and manually clean up 
cleaner-offset-checkpoint, or temporarily apply a retention.ms that is long 
enough (so you won't end up cleaning any needed consumer offsets) to help chop 
off a large part of the old segments, thus increasing the dirty ratio so this 
topic partition can be picked up by the log cleaner.

FYI, we once configured retention.ms=1209600000 (two weeks) and 
min.cleanable.dirty.ratio=0.2.
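The arithmetic behind that eligibility decision can be sketched as follows;
this is a simplified illustration of the dirty-ratio check against
min.cleanable.dirty.ratio, not the broker's actual LogCleanerManager code:

```java
public class DirtyRatioCheck {
    // Fraction of the log (by bytes) that lies past the cleaner checkpoint.
    static double dirtyRatio(long cleanBytes, long dirtyBytes) {
        long total = cleanBytes + dirtyBytes;
        return total == 0 ? 0.0 : (double) dirtyBytes / total;
    }

    // A partition is only eligible for compaction once its dirty ratio
    // reaches the configured minimum (default 0.5).
    static boolean cleanable(long cleanBytes, long dirtyBytes, double minCleanableDirtyRatio) {
        return dirtyRatio(cleanBytes, dirtyBytes) >= minCleanableDirtyRatio;
    }

    public static void main(String[] args) {
        // A 30 GB log with a tiny dirty section is skipped at the default ratio,
        // which is why a stale checkpoint keeps the partition uncleaned.
        long clean = 30L * 1024 * 1024 * 1024;
        long dirty = 64L * 1024 * 1024;
        System.out.println(cleanable(clean, dirty, 0.5));  // prints false
    }
}
```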

> Log cleaner skips Transactional mark and batch record, causing unlimited 
> growth of __consumer_offsets
> -
>
> Key: KAFKA-8335
> URL: https://issues.apache.org/jira/browse/KAFKA-8335
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Boquan Tang
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 2.0.2, 2.1.2, 2.2.1
>
> Attachments: seg_april_25.zip, segment.zip
>
>
> My colleague Weichu already sent a mail to the kafka user mailing list 
> regarding this issue, but we think it's worth having a ticket to track it.
> We have been using Kafka Streams with exactly-once enabled on a Kafka
> cluster for a while.
> Recently we found that the size of __consumer_offsets partitions grew huge.
> Some partitions went over 30 GB. This caused Kafka to take quite long to load
> "__consumer_offsets" on startup (it loads the topic in order to
> become group coordinator).
> We dumped the __consumer_offsets segments and found that while normal
> offset commits are nicely compacted, transaction records (COMMIT, etc.) are
> all preserved. It looks like, since these messages don't have a key, the
> LogCleaner is keeping them all:
> --
> $ bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files
> /003484332061.log --key-decoder-class
> kafka.serializer.StringDecoder 2>/dev/null | cat -v | head
> Dumping 003484332061.log
> Starting offset: 3484332061
> offset: 3484332089 position: 549 CreateTime: 1556003706952 isvalid: true
> keysize: 4 valuesize: 6 magic: 2 compresscodec: NONE producerId: 1006
> producerEpoch: 2530 sequence: -1 isTransactional: true headerKeys: []
> endTxnMarker: COMMIT coordinatorEpoch: 81
> offset: 3484332090 position: 627 CreateTime: 1556003706952 isvalid: true
> keysize: 4 valuesize: 6 magic: 2 compresscodec: NONE producerId: 4005
> producerEpoch: 2520 sequence: -1 isTransactional: true headerKeys: []
> endTxnMarker: COMMIT coordinatorEpoch: 84
> ...
> --
> Streams does transaction commits every 100 ms (commit.interval.ms=100 when
> exactly-once is enabled), so __consumer_offsets grows really fast.
> Is this (keeping all transaction markers) by design, or is it a LogCleaner
> bug?  What would be the way to clean up the topic?





[jira] [Commented] (KAFKA-8491) Bump up Consumer Protocol to v2 (part 1)

2019-06-18 Thread Boyang Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867163#comment-16867163
 ] 

Boyang Chen commented on KAFKA-8491:


Could we link the PR here?

> Bump up Consumer Protocol to v2 (part 1)
> 
>
> Key: KAFKA-8491
> URL: https://issues.apache.org/jira/browse/KAFKA-8491
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Major
>






[jira] [Created] (KAFKA-8562) SASL_SSL still performs reverse DNS lookup despite KAFKA-5051

2019-06-18 Thread Badai Aqrandista (JIRA)
Badai Aqrandista created KAFKA-8562:
---

 Summary: SASL_SSL still performs reverse DNS lookup despite 
KAFKA-5051
 Key: KAFKA-8562
 URL: https://issues.apache.org/jira/browse/KAFKA-8562
 Project: Kafka
  Issue Type: Bug
Reporter: Badai Aqrandista


When using SASL_SSL, the Kafka client performs a reverse DNS lookup to resolve 
the IP address to a DNS name, which circumvents the security fix made in 
KAFKA-5051.

This is the line of code from AK 2.2 where it performs the lookup:

https://github.com/apache/kafka/blob/2.2.0/clients/src/main/java/org/apache/kafka/common/network/SaslChannelBuilder.java#L205

The following log messages show that the consumer initially tried to connect 
using the IP address 10.0.2.15, but then created the SaslClient with a 
hostname:

{code:java}
[2019-06-18 06:23:36,486] INFO Kafka commitId: 00d486623990ed9d 
(org.apache.kafka.common.utils.AppInfoParser)
[2019-06-18 06:23:36,487] DEBUG [Consumer clientId=KafkaStore-reader-_schemas, 
groupId=schema-registry-10.0.2.15-18081] Kafka consumer initialized 
(org.apache.kafka.clients.consumer.KafkaConsumer)
[2019-06-18 06:23:36,505] DEBUG [Consumer clientId=KafkaStore-reader-_schemas, 
groupId=schema-registry-10.0.2.15-18081] Initiating connection to node 
10.0.2.15:19094 (id: -1 rack: null) using address /10.0.2.15 
(org.apache.kafka.clients.NetworkClient)
[2019-06-18 06:23:36,512] DEBUG Set SASL client state to 
SEND_APIVERSIONS_REQUEST 
(org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
[2019-06-18 06:23:36,515] DEBUG Creating SaslClient: 
client=null;service=kafka;serviceHostname=quickstart.confluent.io;mechs=[PLAIN] 
(org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
{code}
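The behavioral difference comes down to which InetSocketAddress accessor is
used: getHostName() may issue a reverse DNS query for an address created from
a raw IP, while getHostString() returns the literal string without resolving.
A minimal stand-alone illustration (the address values are just examples):

```java
import java.net.InetSocketAddress;

public class HostNameVsHostString {
    public static void main(String[] args) {
        InetSocketAddress addr = new InetSocketAddress("10.0.2.15", 19094);
        // getHostString() never triggers a reverse lookup: it returns the
        // string the address was created with, or the literal IP.
        System.out.println(addr.getHostString());  // prints 10.0.2.15
        // getHostName(), by contrast, may query DNS to map the IP back to a
        // hostname - the lookup this ticket is about. Left commented out so
        // running this sketch performs no network I/O:
        // System.out.println(addr.getHostName());
    }
}
```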

Thanks
Badai





[jira] [Updated] (KAFKA-8562) SASL_SSL still performs reverse DNS lookup despite KAFKA-5051

2019-06-18 Thread Badai Aqrandista (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Badai Aqrandista updated KAFKA-8562:

Priority: Minor  (was: Major)

> SASL_SSL still performs reverse DNS lookup despite KAFKA-5051
> -
>
> Key: KAFKA-8562
> URL: https://issues.apache.org/jira/browse/KAFKA-8562
> Project: Kafka
>  Issue Type: Bug
>Reporter: Badai Aqrandista
>Priority: Minor
>
> When using SASL_SSL, the Kafka client performs a reverse DNS lookup to 
> resolve the IP address to a DNS name, which circumvents the security fix 
> made in KAFKA-5051.
> This is the line of code from AK 2.2 where it performs the lookup:
> https://github.com/apache/kafka/blob/2.2.0/clients/src/main/java/org/apache/kafka/common/network/SaslChannelBuilder.java#L205
> The following log messages show that the consumer initially tried to connect 
> using the IP address 10.0.2.15, but then created the SaslClient with a 
> hostname:
> {code:java}
> [2019-06-18 06:23:36,486] INFO Kafka commitId: 00d486623990ed9d 
> (org.apache.kafka.common.utils.AppInfoParser)
> [2019-06-18 06:23:36,487] DEBUG [Consumer 
> clientId=KafkaStore-reader-_schemas, groupId=schema-registry-10.0.2.15-18081] 
> Kafka consumer initialized (org.apache.kafka.clients.consumer.KafkaConsumer)
> [2019-06-18 06:23:36,505] DEBUG [Consumer 
> clientId=KafkaStore-reader-_schemas, groupId=schema-registry-10.0.2.15-18081] 
> Initiating connection to node 10.0.2.15:19094 (id: -1 rack: null) using 
> address /10.0.2.15 (org.apache.kafka.clients.NetworkClient)
> [2019-06-18 06:23:36,512] DEBUG Set SASL client state to 
> SEND_APIVERSIONS_REQUEST 
> (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
> [2019-06-18 06:23:36,515] DEBUG Creating SaslClient: 
> client=null;service=kafka;serviceHostname=quickstart.confluent.io;mechs=[PLAIN]
>  (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
> {code}
> Thanks
> Badai





[jira] [Created] (KAFKA-8561) Make Kafka Pluggable with customized Keystore/TrustStore

2019-06-18 Thread Thomas Zhou (JIRA)
Thomas Zhou created KAFKA-8561:
--

 Summary: Make Kafka Pluggable with customized Keystore/TrustStore
 Key: KAFKA-8561
 URL: https://issues.apache.org/jira/browse/KAFKA-8561
 Project: Kafka
  Issue Type: New Feature
  Components: clients, security
Reporter: Thomas Zhou
Assignee: Thomas Zhou


Many companies need to enable TLS for Kafka from a security perspective, and 
Kafka provides file-based configuration to load the keystore and truststore 
from disk.

However, it is hard to plug in a customized in-memory keystore and truststore 
in the current Kafka version.

We want to make the keystore and truststore pluggable, meaning the Kafka 
broker and client could load them from another service at startup, so Kafka 
can use a customized keystore and truststore.
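One shape such a plug-in point could take is a supplier that hands the channel
builder an in-memory KeyStore populated from an external service. The
interface and names below are hypothetical, sketched for illustration only;
they are not an existing Kafka API:

```java
import java.security.KeyStore;

public class InMemoryKeyStores {
    // Hypothetical hook a pluggable channel builder could call instead of
    // reading ssl.keystore.location from disk.
    interface KeyStoreSupplier {
        KeyStore get() throws Exception;
    }

    // Builds an empty in-memory PKCS12 store; a real implementation would
    // populate it with key material fetched from a secret-management service.
    static KeyStoreSupplier inMemory() {
        return () -> {
            KeyStore ks = KeyStore.getInstance("PKCS12");
            ks.load(null, null);  // initialize empty, with no file backing
            return ks;
        };
    }

    public static void main(String[] args) throws Exception {
        KeyStore ks = inMemory().get();
        System.out.println(ks.getType() + " entries=" + ks.size());
    }
}
```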





[jira] [Comment Edited] (KAFKA-8450) Augment processed in MockProcessor as KeyValueAndTimestamp

2019-06-18 Thread SuryaTeja Duggi (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866019#comment-16866019
 ] 

SuryaTeja Duggi edited comment on KAFKA-8450 at 6/19/19 12:11 AM:
--

[~guozhang] Please find the PR [https://github.com/apache/kafka/pull/6933]


was (Author: suryateja...@gmail.com):
[https://github.com/apache/kafka/pull/6933]

> Augment processed in MockProcessor as KeyValueAndTimestamp
> --
>
> Key: KAFKA-8450
> URL: https://issues.apache.org/jira/browse/KAFKA-8450
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams, unit tests
>Reporter: Guozhang Wang
>Assignee: SuryaTeja Duggi
>Priority: Major
>  Labels: newbie
>
> Today the book-keeping list of `processed` records in MockProcessor holds 
> Strings: we just call the key/value types' toString functions to record each 
> entry, which loses the type information and drops the associated timestamp.
> It's better to change its type to `KeyValueAndTimestamp` and refactor the 
> impacted unit tests.
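A minimal sketch of such a typed book-keeping record (hypothetical shape, not
the actual Streams test-utils class) makes the difference concrete:

```java
public class KeyValueAndTimestampDemo {
    // Typed record holder: preserves the key/value types and the timestamp,
    // unlike a list of toString() results.
    static final class KeyValueAndTimestamp<K, V> {
        final K key;
        final V value;
        final long timestamp;
        KeyValueAndTimestamp(K key, V value, long timestamp) {
            this.key = key;
            this.value = value;
            this.timestamp = timestamp;
        }
    }

    public static void main(String[] args) {
        KeyValueAndTimestamp<String, Integer> rec =
            new KeyValueAndTimestamp<>("k1", 42, 1556003706952L);
        // All three components remain individually accessible and typed.
        System.out.println(rec.key + "=" + rec.value + "@" + rec.timestamp);
    }
}
```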





[jira] [Resolved] (KAFKA-8491) Bump up Consumer Protocol to v2 (part 1)

2019-06-18 Thread Sophie Blee-Goldman (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sophie Blee-Goldman resolved KAFKA-8491.

Resolution: Fixed

> Bump up Consumer Protocol to v2 (part 1)
> 
>
> Key: KAFKA-8491
> URL: https://issues.apache.org/jira/browse/KAFKA-8491
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Major
>






[jira] [Updated] (KAFKA-8559) PartitionStates.partitionStates cause array grow allocation.

2019-06-18 Thread Boyang Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen updated KAFKA-8559:
---
Component/s: consumer

> PartitionStates.partitionStates cause array grow allocation. 
> -
>
> Key: KAFKA-8559
> URL: https://issues.apache.org/jira/browse/KAFKA-8559
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Wenshuai Hou
>Priority: Major
> Attachments: image-2019-06-18-19-10-10-633.png
>
>
> This method causes 238 TLAB allocations totaling 297 MB in Arrays.copyOf().
> The size of the array can be determined at creation.
> !image-2019-06-18-19-10-10-633.png!





[jira] [Commented] (KAFKA-8560) The Kafka protocol generator should support common structures

2019-06-18 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867125#comment-16867125
 ] 

ASF GitHub Bot commented on KAFKA-8560:
---

cmccabe commented on pull request #6966: KAFKA-8560. The Kafka protocol 
generator should support common structures
URL: https://github.com/apache/kafka/pull/6966
 
 
   
 



> The Kafka protocol generator should support common structures
> -
>
> Key: KAFKA-8560
> URL: https://issues.apache.org/jira/browse/KAFKA-8560
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>Priority: Major
>
> The Kafka protocol generator should support common structures.  This would 
> make things simpler in cases where we need to refer to a single structure 
> from multiple places in a message.





[jira] [Created] (KAFKA-8560) The Kafka protocol generator should support common structures

2019-06-18 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-8560:
--

 Summary: The Kafka protocol generator should support common 
structures
 Key: KAFKA-8560
 URL: https://issues.apache.org/jira/browse/KAFKA-8560
 Project: Kafka
  Issue Type: Improvement
Reporter: Colin P. McCabe
Assignee: Colin P. McCabe


The Kafka protocol generator should support common structures.  This would make 
things simpler in cases where we need to refer to a single structure from 
multiple places in a message.
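Concretely, a shared structure could then be declared once and referenced by
name from several fields. The snippet below is an illustrative sketch of what
such a message definition might look like, with hypothetical names; it is not
an actual Kafka message spec:

```json
{
  "apiKey": 42,
  "type": "request",
  "name": "ExampleRequest",
  "validVersions": "0",
  "commonStructs": [
    { "name": "ExampleEndpoint", "versions": "0+",
      "fields": [
        { "name": "Host", "type": "string", "versions": "0+" },
        { "name": "Port", "type": "int32", "versions": "0+" }
      ]
    }
  ],
  "fields": [
    { "name": "Primary", "type": "ExampleEndpoint", "versions": "0+" },
    { "name": "Replicas", "type": "[]ExampleEndpoint", "versions": "0+" }
  ]
}
```

Both fields refer to the one ExampleEndpoint definition instead of repeating
its field list inline.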





[jira] [Commented] (KAFKA-8488) FetchSessionHandler logging create 73 mb allocation in TLAB which could be no op

2019-06-18 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867119#comment-16867119
 ] 

ASF GitHub Bot commented on KAFKA-8488:
---

wenhoujx commented on pull request #6964: KAFKA-8488: avoid kafka array list 
grow
URL: https://github.com/apache/kafka/pull/6964
 
 
   *More detailed description of your change,
   if necessary. The PR title and PR message become
   the squashed commit message, so use a separate
   comment to ping reviewers.*
   
   see https://issues.apache.org/jira/browse/KAFKA-8488
   
   *Summary of testing strategy (including rationale)
   for the feature or bug fix. Unit and/or integration
   tests are expected for any behaviour change and
   system tests should be considered for larger changes.*
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 



> FetchSessionHandler logging create 73 mb allocation in TLAB which could be no 
> op 
> -
>
> Key: KAFKA-8488
> URL: https://issues.apache.org/jira/browse/KAFKA-8488
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Wenshuai Hou
>Priority: Minor
> Attachments: image-2019-06-05-14-04-35-668.png
>
>
> !image-2019-06-05-14-04-35-668.png!





[jira] [Created] (KAFKA-8559) PartitionStates.partitionStates cause array grow allocation.

2019-06-18 Thread Wenshuai Hou (JIRA)
Wenshuai Hou created KAFKA-8559:
---

 Summary: PartitionStates.partitionStates cause array grow 
allocation. 
 Key: KAFKA-8559
 URL: https://issues.apache.org/jira/browse/KAFKA-8559
 Project: Kafka
  Issue Type: Improvement
Reporter: Wenshuai Hou
 Attachments: image-2019-06-18-19-10-10-633.png

This method causes 238 TLAB allocations totaling 297 MB in Arrays.copyOf().
The size of the array can be determined at creation.

!image-2019-06-18-19-10-10-633.png!
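The suggested fix, sizing the destination collection up front so the ArrayList
never has to grow, can be sketched like this (illustrative names; not the
actual PartitionStates code):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PreSizedCopy {
    // Copying map entries into a list sized up-front avoids the repeated
    // internal Arrays.copyOf calls an ArrayList performs while growing.
    static <K, V> List<Map.Entry<K, V>> entriesPreSized(Map<K, V> map) {
        List<Map.Entry<K, V>> out = new ArrayList<>(map.size()); // exact capacity
        out.addAll(map.entrySet());
        return out;
    }

    public static void main(String[] args) {
        Map<String, Integer> partitions = new LinkedHashMap<>();
        partitions.put("topic-0", 0);
        partitions.put("topic-1", 1);
        System.out.println(entriesPreSized(partitions).size());  // prints 2
    }
}
```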





[jira] [Commented] (KAFKA-8179) Incremental Rebalance Protocol for Kafka Consumer

2019-06-18 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867049#comment-16867049
 ] 

ASF GitHub Bot commented on KAFKA-8179:
---

ableegoldman commented on pull request #6963: KAFKA-8179: Part 4, add 
CooperativeStickyAssignor
URL: https://github.com/apache/kafka/pull/6963
 
 
   Should be rebased after #6884 is merged
 



> Incremental Rebalance Protocol for Kafka Consumer
> -
>
> Key: KAFKA-8179
> URL: https://issues.apache.org/jira/browse/KAFKA-8179
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Major
>
> Recently the Kafka community has been promoting cooperative rebalancing to 
> mitigate the pain points of the stop-the-world rebalancing protocol. This 
> ticket initiates that idea in the Kafka consumer client, which will benefit 
> heavily stateful consumers such as Kafka Streams applications.
> In short, the scope of this ticket includes reducing unnecessary rebalance 
> latency due to heavy partition migration, i.e. partitions being revoked and 
> re-assigned. This would make the built-in consumer assignors (range, 
> round-robin, etc.) aware of previously assigned partitions and sticky on a 
> best-effort basis.
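The cooperative idea can be sketched in miniature: compute the difference between the old and new assignments, and revoke only the partitions that actually change owners. The names below are illustrative, not the actual assignor API.

```java
import java.util.HashSet;
import java.util.Set;

public class StickyDiffSketch {
    // Instead of revoking every owned partition on a rebalance (stop-the-world),
    // revoke only the partitions that move; the rest keep processing.
    static Set<Integer> toRevoke(Set<Integer> owned, Set<Integer> newAssignment) {
        Set<Integer> revoke = new HashSet<>(owned);
        revoke.removeAll(newAssignment); // partitions we still own stay put
        return revoke;
    }

    public static void main(String[] args) {
        Set<Integer> owned = Set.of(0, 1, 2);
        Set<Integer> next = Set.of(1, 2, 3);
        System.out.println(toRevoke(owned, next)); // only partition 0 is revoked
    }
}
```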



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8558) KIP-479 - Add Materialized Overload to KStream#Join

2019-06-18 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-8558:
--

 Summary: KIP-479 - Add Materialized Overload to KStream#Join 
 Key: KAFKA-8558
 URL: https://issues.apache.org/jira/browse/KAFKA-8558
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Bill Bejeck
Assignee: Bill Bejeck
 Fix For: 2.4.0


To prevent a topology incompatibility with the 2.4 release and the naming of 
Join operations, we'll add an overloaded KStream#join method accepting a 
Materialized parameter. The overloads will apply to all flavors of KStream#join 
(inner, left, and right). 

Additionally, new methods withQueryingEnabled and withQueryingDisabled will be 
added to Materialized.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8557) Support named listeners in system tests

2019-06-18 Thread Stanislav Vodetskyi (JIRA)
Stanislav Vodetskyi created KAFKA-8557:
--

 Summary: Support named listeners in system tests
 Key: KAFKA-8557
 URL: https://issues.apache.org/jira/browse/KAFKA-8557
 Project: Kafka
  Issue Type: Test
  Components: system tests
Reporter: Stanislav Vodetskyi


Kafka currently supports named listeners, where you can have two or more 
listeners with the same security protocol but different names. The current 
{{KafkaService}} implementation, however, doesn't allow that, since listeners 
in {{port_mappings}} are keyed by {{security_protocol}}, creating a 1:1 
relationship. Kafka clients in system tests use the {{bootstrap_servers()}} 
method, which also accepts {{security_protocol}}, as a way to pick a port to 
talk to Kafka.
The scope of this jira is to refactor KafkaService to support named listeners, 
specifically two things: the ability to have custom-named listeners and the 
ability to have several listeners with the same security protocol. 
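The refactor the ticket asks for amounts to keying listeners by name rather than by security protocol. A Java sketch of that data-model change (the Listener record and its fields are hypothetical, not the actual KafkaService attributes, which live in Python):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ListenerMapSketch {
    // Hypothetical listener description; field names are illustrative.
    record Listener(String name, String securityProtocol, int port) {}

    // Keying the mappings by listener name lets two listeners share a
    // security protocol, which a protocol-keyed map cannot express
    // (the second put would silently overwrite the first).
    static Map<String, Listener> byName(List<Listener> listeners) {
        Map<String, Listener> mappings = new LinkedHashMap<>();
        for (Listener l : listeners) {
            mappings.put(l.name(), l);
        }
        return mappings;
    }

    public static void main(String[] args) {
        Map<String, Listener> m = byName(List.of(
                new Listener("EXTERNAL", "SASL_SSL", 9092),
                new Listener("INTERNAL", "SASL_SSL", 9093)));
        System.out.println(m.size()); // both SASL_SSL listeners coexist
    }
}
```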



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (KAFKA-4222) Transient failure in QueryableStateIntegrationTest.queryOnRebalance

2019-06-18 Thread Boyang Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen reopened KAFKA-4222:

  Assignee: (was: Damian Guy)

> Transient failure in QueryableStateIntegrationTest.queryOnRebalance
> ---
>
> Key: KAFKA-4222
> URL: https://issues.apache.org/jira/browse/KAFKA-4222
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams, unit tests
>Reporter: Jason Gustafson
>Priority: Major
> Fix For: 0.11.0.0
>
>
> Seen here: https://builds.apache.org/job/kafka-trunk-jdk8/915/console
> {code}
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
> queryOnRebalance[1] FAILED
> java.lang.AssertionError: Condition not met within timeout 3. waiting 
> for metadata, store and value to be non null
> at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.verifyAllKVKeys(QueryableStateIntegrationTest.java:263)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:342)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-4222) Transient failure in QueryableStateIntegrationTest.queryOnRebalance

2019-06-18 Thread Boyang Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-4222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16867000#comment-16867000
 ] 

Boyang Chen commented on KAFKA-4222:


[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/22793/console]

 
*21:59:31* org.apache.kafka.streams.integration.QueryableStateIntegrationTest > queryOnRebalance FAILED
*21:59:31* java.lang.AssertionError: Condition not met within timeout 12. waiting for metadata, store and value to be non null
*21:59:31* at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:375)
*21:59:31* at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:352)
*21:59:31* at org.apache.kafka.streams.integration.QueryableStateIntegrationTest.verifyAllKVKeys(QueryableStateIntegrationTest.java:292)
*21:59:31* at org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:382)

> Transient failure in QueryableStateIntegrationTest.queryOnRebalance
> ---
>
> Key: KAFKA-4222
> URL: https://issues.apache.org/jira/browse/KAFKA-4222
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams, unit tests
>Reporter: Jason Gustafson
>Assignee: Damian Guy
>Priority: Major
> Fix For: 0.11.0.0
>
>
> Seen here: https://builds.apache.org/job/kafka-trunk-jdk8/915/console
> {code}
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
> queryOnRebalance[1] FAILED
> java.lang.AssertionError: Condition not met within timeout 3. waiting 
> for metadata, store and value to be non null
> at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.verifyAllKVKeys(QueryableStateIntegrationTest.java:263)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:342)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8245) Flaky Test DeleteConsumerGroupsTest#testDeleteCmdAllGroups

2019-06-18 Thread Boyang Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866997#comment-16866997
 ] 

Boyang Chen commented on KAFKA-8245:


[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/22793/console]

Failed again

> Flaky Test DeleteConsumerGroupsTest#testDeleteCmdAllGroups
> --
>
> Key: KAFKA-8245
> URL: https://issues.apache.org/jira/browse/KAFKA-8245
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.4.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3781/testReport/junit/kafka.admin/DeleteConsumerGroupsTest/testDeleteCmdAllGroups/]
> {quote}java.lang.AssertionError: The group did not become empty as expected. at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.admin.DeleteConsumerGroupsTest.testDeleteCmdAllGroups(DeleteConsumerGroupsTest.scala:148){quote}
> STDOUT
> {quote}Error: Deletion of some consumer groups failed: * Group 'test.group' 
> could not be deleted due to: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.GroupNotEmptyException: The group is not 
> empty. Error: Deletion of some consumer groups failed: * Group 
> 'missing.group' could not be deleted due to: 
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.GroupIdNotFoundException: The group id does 
> not exist. [2019-04-16 09:42:02,316] WARN Unable to read additional data from 
> client sessionid 0x104f958dba3, likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376) Deletion of requested 
> consumer groups ('test.group') was successful. Error: Deletion of some 
> consumer groups failed: * Group 'missing.group' could not be deleted due to: 
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.GroupIdNotFoundException: The group id does 
> not exist. These consumer groups were deleted successfully: 
> 'test.group'{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8448) Too many kafka.log.Log instances (Memory Leak)

2019-06-18 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866994#comment-16866994
 ] 

ASF GitHub Bot commented on KAFKA-8448:
---

cmccabe commented on pull request #6892: KAFKA-8448: Fix "too many 
kafka.log.Log instances"
URL: https://github.com/apache/kafka/pull/6892
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Too many kafka.log.Log instances (Memory Leak)
> --
>
> Key: KAFKA-8448
> URL: https://issues.apache.org/jira/browse/KAFKA-8448
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.2.0
> Environment: Red Hat 4.4.7-16, java version "1.8.0_152", 
> kafka_2.12-2.2.0
>Reporter: Juan Olivares
>Assignee: Justine Olshan
>Priority: Major
> Fix For: 2.4.0
>
>
> We have a custom Kafka health check which creates a topic, adds some ACLs 
> (read/write topic and group), produces & consumes a single message, and then 
> quickly removes it and all the related ACLs. We close the consumer involved, 
> but not the producer.
> We have observed that the number of {{kafka.log.Log}} instances keeps 
> growing, while there's no evidence of topics being leaked, neither when 
> running {{/opt/kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --describe}} 
> nor when looking at the disk directory where topics are stored.
> After looking at the heap dump we've observed the following:
>  - None of the {{kafka.log.Log}} references ({{currentLogs}}, 
> {{logsToBeDeleted}} and {{logsToBeDeleted}}) in {{kafka.log.LogManager}} 
> holds the large number of {{kafka.log.Log}} instances.
>  - The only reference preventing {{kafka.log.Log}} from being garbage 
> collected seems to be 
> {{java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue}}, which 
> contains scheduled tasks created with the name 
> {{PeriodicProducerExpirationCheck}}.
> I can see in the code that for every {{kafka.log.Log}} a task with this name 
> is scheduled.
> {code:java}
>   scheduler.schedule(name = "PeriodicProducerExpirationCheck", fun = () => {
> lock synchronized {
>   producerStateManager.removeExpiredProducers(time.milliseconds)
> }
>   }, period = producerIdExpirationCheckIntervalMs, delay = 
> producerIdExpirationCheckIntervalMs, unit = TimeUnit.MILLISECONDS)
> {code}
> However, it seems those tasks are never unscheduled/cancelled.
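The retention mechanism described in the ticket can be sketched with a plain ScheduledThreadPoolExecutor: a periodic task that is never cancelled stays in the executor's DelayedWorkQueue forever, pinning everything it captures. The pool setup and no-op task below are illustrative, not Kafka's actual scheduler.

```java
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ScheduledLeakSketch {
    // Schedules a periodic no-op (standing in for PeriodicProducerExpirationCheck)
    // and reports how many tasks the DelayedWorkQueue still holds,
    // with and without cancelling the task.
    static int pendingTasks(boolean cancelTask) {
        ScheduledThreadPoolExecutor pool = new ScheduledThreadPoolExecutor(1);
        ScheduledFuture<?> task = pool.scheduleAtFixedRate(
                () -> { /* e.g. remove expired producers */ },
                10, 10, TimeUnit.MINUTES);
        if (cancelTask) {
            // Without this cancel (the reported bug), the queue keeps a
            // reference to the task -- and everything it closes over.
            task.cancel(false);
            pool.purge(); // drop the cancelled task from the work queue
        }
        int pending = pool.getQueue().size();
        pool.shutdownNow();
        return pending;
    }

    public static void main(String[] args) {
        System.out.println(pendingTasks(false)); // prints 1: task retained
        System.out.println(pendingTasks(true));  // prints 0: cancelled + purged
    }
}
```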



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-8556) Add system tests for assignment stickiness validation

2019-06-18 Thread Boyang Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen reassigned KAFKA-8556:
--

Assignee: Boyang Chen

> Add system tests for assignment stickiness validation
> -
>
> Key: KAFKA-8556
> URL: https://issues.apache.org/jira/browse/KAFKA-8556
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8556) Add system tests for assignment stickiness validation

2019-06-18 Thread Boyang Chen (JIRA)
Boyang Chen created KAFKA-8556:
--

 Summary: Add system tests for assignment stickiness validation
 Key: KAFKA-8556
 URL: https://issues.apache.org/jira/browse/KAFKA-8556
 Project: Kafka
  Issue Type: Sub-task
Reporter: Boyang Chen






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8448) Too many kafka.log.Log instances (Memory Leak)

2019-06-18 Thread Colin P. McCabe (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe resolved KAFKA-8448.

   Resolution: Fixed
Fix Version/s: 2.4.0

> Too many kafka.log.Log instances (Memory Leak)
> --
>
> Key: KAFKA-8448
> URL: https://issues.apache.org/jira/browse/KAFKA-8448
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.2.0
> Environment: Red Hat 4.4.7-16, java version "1.8.0_152", 
> kafka_2.12-2.2.0
>Reporter: Juan Olivares
>Assignee: Justine Olshan
>Priority: Major
> Fix For: 2.4.0
>
>
> We have a custom Kafka health check which creates a topic, adds some ACLs 
> (read/write topic and group), produces & consumes a single message, and then 
> quickly removes it and all the related ACLs. We close the consumer involved, 
> but not the producer.
> We have observed that the number of {{kafka.log.Log}} instances keeps 
> growing, while there's no evidence of topics being leaked, neither when 
> running {{/opt/kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --describe}} 
> nor when looking at the disk directory where topics are stored.
> After looking at the heap dump we've observed the following:
>  - None of the {{kafka.log.Log}} references ({{currentLogs}}, 
> {{logsToBeDeleted}} and {{logsToBeDeleted}}) in {{kafka.log.LogManager}} 
> holds the large number of {{kafka.log.Log}} instances.
>  - The only reference preventing {{kafka.log.Log}} from being garbage 
> collected seems to be 
> {{java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue}}, which 
> contains scheduled tasks created with the name 
> {{PeriodicProducerExpirationCheck}}.
> I can see in the code that for every {{kafka.log.Log}} a task with this name 
> is scheduled.
> {code:java}
>   scheduler.schedule(name = "PeriodicProducerExpirationCheck", fun = () => {
> lock synchronized {
>   producerStateManager.removeExpiredProducers(time.milliseconds)
> }
>   }, period = producerIdExpirationCheckIntervalMs, delay = 
> producerIdExpirationCheckIntervalMs, unit = TimeUnit.MILLISECONDS)
> {code}
> However, it seems those tasks are never unscheduled/cancelled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8448) Too many kafka.log.Log instances (Memory Leak)

2019-06-18 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866889#comment-16866889
 ] 

ASF GitHub Bot commented on KAFKA-8448:
---

cmccabe commented on pull request #6847: KAFKA-8448: Too many kafka.log.Log 
instances (Memory Leak)
URL: https://github.com/apache/kafka/pull/6847
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Too many kafka.log.Log instances (Memory Leak)
> --
>
> Key: KAFKA-8448
> URL: https://issues.apache.org/jira/browse/KAFKA-8448
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.2.0
> Environment: Red Hat 4.4.7-16, java version "1.8.0_152", 
> kafka_2.12-2.2.0
>Reporter: Juan Olivares
>Assignee: Justine Olshan
>Priority: Major
>
> We have a custom Kafka health check which creates a topic, adds some ACLs 
> (read/write topic and group), produces & consumes a single message, and then 
> quickly removes it and all the related ACLs. We close the consumer involved, 
> but not the producer.
> We have observed that the number of {{kafka.log.Log}} instances keeps 
> growing, while there's no evidence of topics being leaked, neither when 
> running {{/opt/kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --describe}} 
> nor when looking at the disk directory where topics are stored.
> After looking at the heap dump we've observed the following:
>  - None of the {{kafka.log.Log}} references ({{currentLogs}}, 
> {{logsToBeDeleted}} and {{logsToBeDeleted}}) in {{kafka.log.LogManager}} 
> holds the large number of {{kafka.log.Log}} instances.
>  - The only reference preventing {{kafka.log.Log}} from being garbage 
> collected seems to be 
> {{java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue}}, which 
> contains scheduled tasks created with the name 
> {{PeriodicProducerExpirationCheck}}.
> I can see in the code that for every {{kafka.log.Log}} a task with this name 
> is scheduled.
> is scheduled.
> {code:java}
>   scheduler.schedule(name = "PeriodicProducerExpirationCheck", fun = () => {
> lock synchronized {
>   producerStateManager.removeExpiredProducers(time.milliseconds)
> }
>   }, period = producerIdExpirationCheckIntervalMs, delay = 
> producerIdExpirationCheckIntervalMs, unit = TimeUnit.MILLISECONDS)
> {code}
> However, it seems those tasks are never unscheduled/cancelled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8405) Remove deprecated preferred leader RPC and Command

2019-06-18 Thread Jose Armando Garcia Sancio (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866848#comment-16866848
 ] 

Jose Armando Garcia Sancio commented on KAFKA-8405:
---

Correct. In 2.4.0 we implemented 
[KIP-460|https://cwiki.apache.org/confluence/display/KAFKA/KIP-460%3A+Admin+Leader+Election+RPC].

[~m.sandeep] I would pick a different issue to work on as we won't be able to 
merge it for a while. We can only remove those classes/symbols in Kafka 3.0.0. 
From my point of view there is no concrete date for that version of Kafka. 

> Remove deprecated preferred leader RPC and Command
> --
>
> Key: KAFKA-8405
> URL: https://issues.apache.org/jira/browse/KAFKA-8405
> Project: Kafka
>  Issue Type: Task
>  Components: admin
>Affects Versions: 3.0.0
>Reporter: Jose Armando Garcia Sancio
>Priority: Blocker
> Fix For: 3.0.0
>
>
> For version 2.4.0, we deprecated:
> # AdminClient.electPreferredLeaders
> # ElectPreferredLeadersResult
> # ElectPreferredLeadersOptions
> # PreferredReplicaLeaderElectionCommand.
> For version 3.0.0 we should remove all of these symbols and the references to 
> them. For the command, that includes:
> # bin/kafka-preferred-replica-election.sh
> # bin/windows/kafka-preferred-replica-election.bat



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8555) Flaky test ExampleConnectIntegrationTest#testSourceConnector

2019-06-18 Thread Boyang Chen (JIRA)
Boyang Chen created KAFKA-8555:
--

 Summary: Flaky test 
ExampleConnectIntegrationTest#testSourceConnector
 Key: KAFKA-8555
 URL: https://issues.apache.org/jira/browse/KAFKA-8555
 Project: Kafka
  Issue Type: Bug
Reporter: Boyang Chen


[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/22798/console]
*02:03:21* org.apache.kafka.connect.integration.ExampleConnectIntegrationTest.testSourceConnector failed, log available in /home/jenkins/jenkins-slave/workspace/kafka-pr-jdk8-scala2.11/connect/runtime/build/reports/testOutput/org.apache.kafka.connect.integration.ExampleConnectIntegrationTest.testSourceConnector.test.stdout
*02:03:21*
*02:03:21* org.apache.kafka.connect.integration.ExampleConnectIntegrationTest > testSourceConnector FAILED
*02:03:21* org.apache.kafka.connect.errors.DataException: Insufficient records committed by connector simple-conn in 15000 millis. Records expected=2000, actual=1013
*02:03:21* at org.apache.kafka.connect.integration.ConnectorHandle.awaitCommits(ConnectorHandle.java:188)
*02:03:21* at org.apache.kafka.connect.integration.ExampleConnectIntegrationTest.testSourceConnector(ExampleConnectIntegrationTest.java:181)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-3287) Add over-wire encryption support between KAFKA and ZK

2019-06-18 Thread JIRA


[ 
https://issues.apache.org/jira/browse/KAFKA-3287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866772#comment-16866772
 ] 

Gérald Quintana commented on KAFKA-3287:


This means adding Zookeeper client SSL support in Kafka Server and also in 
Kafka tooling like kafka-topics.sh, kafka-acls.sh, kafka-configs.sh...

> Add over-wire encryption support between KAFKA and ZK
> -
>
> Key: KAFKA-3287
> URL: https://issues.apache.org/jira/browse/KAFKA-3287
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ashish Singh
>Assignee: Ashish Singh
>Priority: Major
>
> ZOOKEEPER-2125 added support for SSL. After Kafka upgrades ZK's dependency to 
> 3.5.1+ or 3.6.0+, SSL support between kafka broker and zk can be added.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-3288) Update ZK dependency to 3.5.1 when it is marked as stable

2019-06-18 Thread JIRA


[ 
https://issues.apache.org/jira/browse/KAFKA-3288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866761#comment-16866761
 ] 

Gérald Quintana commented on KAFKA-3288:


Zookeeper 3.5.5 is marked as stable

> Update ZK dependency to 3.5.1 when it is marked as stable
> -
>
> Key: KAFKA-3288
> URL: https://issues.apache.org/jira/browse/KAFKA-3288
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ashish Singh
>Assignee: Ashish Singh
>Priority: Major
>
> When a stable version of ZK 3.5.1+ is released, update Kafka's ZK dependency.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-8335) Log cleaner skips Transactional mark and batch record, causing unlimited growth of __consumer_offsets

2019-06-18 Thread Francisco Juan (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1685#comment-1685
 ] 

Francisco Juan edited comment on KAFKA-8335 at 6/18/19 2:51 PM:


Hello, we have recently updated a Kafka cluster with this same problem from 
version 1.1 to version 2.2.1, without updating inter.broker.protocol.version 
yet; it is still set to 1.1.

We were expecting this update to reduce the size of some partitions of 
__consumer_offsets that keep growing. The observed behaviour is that there are 
still many segments full of only "isTransactional: true" messages.

This is a sample of the kafka-dump-log.sh:
{code:java}
/usr/kafka/bin/kafka-dump-log.sh --files 004107011120.log 
--value-decoder-class "kafka.serializer.StringDecoder" | head -n 20
Dumping 004107011120.log
Starting offset: 4107011120
baseOffset: 4107011154 lastOffset: 4107011154 count: 1 baseSequence: -1 
lastSequence: -1 producerId: 558010 producerEpoch: 0 partitionLeaderEpoch: 490 
isTransactional: true isControl: true position: 0 CreateTime: 1556123964832 
size: 78 magic: 2 compresscodec: NONE crc: 1007341472 isvalid: true
| offset: 4107011154 CreateTime: 1556123964832 keysize: 4 valuesize: 6 
sequence: -1 headerKeys: [] endTxnMarker: COMMIT coordinatorEpoch: 84
baseOffset: 4107011178 lastOffset: 4107011178 count: 1 baseSequence: -1 
lastSequence: -1 producerId: 559002 producerEpoch: 0 partitionLeaderEpoch: 490 
isTransactional: true isControl: true position: 78 CreateTime: 1556123964895 
size: 78 magic: 2 compresscodec: NONE crc: 470005994 isvalid: true
| offset: 4107011178 CreateTime: 1556123964895 keysize: 4 valuesize: 6 
sequence: -1 headerKeys: [] endTxnMarker: COMMIT coordinatorEpoch: 84
baseOffset: 4107011180 lastOffset: 4107011180 count: 1 baseSequence: -1 
lastSequence: -1 producerId: 559002 producerEpoch: 0 partitionLeaderEpoch: 490 
isTransactional: true isControl: true position: 156 CreateTime: 1556123964916 
size: 78 magic: 2 compresscodec: NONE crc: 681157535 isvalid: true
| offset: 4107011180 CreateTime: 1556123964916 keysize: 4 valuesize: 6 
sequence: -1 headerKeys: [] endTxnMarker: COMMIT coordinatorEpoch: 84{code}
This command is executed on Jun 18th.

The `offsets.retention.minutes` is set to 40 days.

The timestamps shown on this dump are way beyond the retention period.

The LogCleaner DEBUG log is next:
{code:java}
DEBUG Finding range of cleanable offsets for log=__consumer_offsets-6 
topicPartition=__consumer_offsets-6. Last clean offset=Some(5006278217) 
now=1560855479531 => firstDirtyOffset=5006278217 
firstUncleanableOffset=5069232666 activeSegment.baseOffset=5069232666 
(kafka.log.LogCleanerManager$)
{code}
Offsets shown in the dump are not in the active segment and are way below the 
firstUncleanableOffset.


was (Author: francisco.juan):
Hello, we have recently updated a Kafka cluster with this same problem from 
version 1.1 to version 2.2.1, without updating the 
inter.broker.protocol.version yet, still set as 1.1.

We were expecting this update to reduce the size on some partitions of 
__consumer_offsets that keep growing. The observed behaviour is that there's 
still many segments with full of only "isTransactional: true" kind of.

This is a sample of the kafka-dump-log.sh:
{code:java}
/usr/kafka/bin/kafka-dump-log.sh --files 004107011120.log 
--value-decoder-class "kafka.serializer.StringDecoder" | head -n 20
Dumping 004107011120.log
Starting offset: 4107011120
baseOffset: 4107011154 lastOffset: 4107011154 count: 1 baseSequence: -1 
lastSequence: -1 producerId: 558010 producerEpoch: 0 partitionLeaderEpoch: 490 
isTransactional: true isControl: true position: 0 CreateTime: 1556123964832 
size: 78 magic: 2 compresscodec: NONE crc: 1007341472 isvalid: true
| offset: 4107011154 CreateTime: 1556123964832 keysize: 4 valuesize: 6 
sequence: -1 headerKeys: [] endTxnMarker: COMMIT coordinatorEpoch: 84
baseOffset: 4107011178 lastOffset: 4107011178 count: 1 baseSequence: -1 
lastSequence: -1 producerId: 559002 producerEpoch: 0 partitionLeaderEpoch: 490 
isTransactional: true isControl: true position: 78 CreateTime: 1556123964895 
size: 78 magic: 2 compresscodec: NONE crc: 470005994 isvalid: true
| offset: 4107011178 CreateTime: 1556123964895 keysize: 4 valuesize: 6 
sequence: -1 headerKeys: [] endTxnMarker: COMMIT coordinatorEpoch: 84
baseOffset: 4107011180 lastOffset: 4107011180 count: 1 baseSequence: -1 
lastSequence: -1 producerId: 559002 producerEpoch: 0 partitionLeaderEpoch: 490 
isTransactional: true isControl: true position: 156 CreateTime: 1556123964916 
size: 78 magic: 2 compresscodec: NONE crc: 681157535 isvalid: true
| offset: 4107011180 CreateTime: 1556123964916 keysize: 4 valuesize: 6 
sequence: -1 headerKeys: [] endTxnMarker: COMMIT coordinatorEpoch: 84{code}
This command is executed on Jun 18th.

The `offsets.retention.minutes` is set to 40 days.

[jira] [Comment Edited] (KAFKA-8335) Log cleaner skips Transactional mark and batch record, causing unlimited growth of __consumer_offsets

2019-06-18 Thread Francisco Juan (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1685#comment-1685
 ] 

Francisco Juan edited comment on KAFKA-8335 at 6/18/19 2:50 PM:


Hello, we have recently updated a Kafka cluster with this same problem from 
version 1.1 to version 2.2.1, without updating inter.broker.protocol.version 
yet; it is still set to 1.1.

We were expecting this update to reduce the size of some partitions of 
__consumer_offsets that keep growing. The observed behaviour is that there are 
still many segments full of only "isTransactional: true" messages.

This is a sample of the kafka-dump-log.sh:
{code:java}
/usr/kafka/bin/kafka-dump-log.sh --files 004107011120.log 
--value-decoder-class "kafka.serializer.StringDecoder" | head -n 20
Dumping 004107011120.log
Starting offset: 4107011120
baseOffset: 4107011154 lastOffset: 4107011154 count: 1 baseSequence: -1 
lastSequence: -1 producerId: 558010 producerEpoch: 0 partitionLeaderEpoch: 490 
isTransactional: true isControl: true position: 0 CreateTime: 1556123964832 
size: 78 magic: 2 compresscodec: NONE crc: 1007341472 isvalid: true
| offset: 4107011154 CreateTime: 1556123964832 keysize: 4 valuesize: 6 
sequence: -1 headerKeys: [] endTxnMarker: COMMIT coordinatorEpoch: 84
baseOffset: 4107011178 lastOffset: 4107011178 count: 1 baseSequence: -1 
lastSequence: -1 producerId: 559002 producerEpoch: 0 partitionLeaderEpoch: 490 
isTransactional: true isControl: true position: 78 CreateTime: 1556123964895 
size: 78 magic: 2 compresscodec: NONE crc: 470005994 isvalid: true
| offset: 4107011178 CreateTime: 1556123964895 keysize: 4 valuesize: 6 
sequence: -1 headerKeys: [] endTxnMarker: COMMIT coordinatorEpoch: 84
baseOffset: 4107011180 lastOffset: 4107011180 count: 1 baseSequence: -1 
lastSequence: -1 producerId: 559002 producerEpoch: 0 partitionLeaderEpoch: 490 
isTransactional: true isControl: true position: 156 CreateTime: 1556123964916 
size: 78 magic: 2 compresscodec: NONE crc: 681157535 isvalid: true
| offset: 4107011180 CreateTime: 1556123964916 keysize: 4 valuesize: 6 
sequence: -1 headerKeys: [] endTxnMarker: COMMIT coordinatorEpoch: 84{code}
This command is executed on Jun 18th.

The `offsets.retention.minutes` is set to 40 days.

The timestamps shown on this dump are way beyond the retention period.

The LogCleaner DEBUG log is next:
{code:java}
DEBUG Finding range of cleanable offsets for log=__consumer_offsets-6 
topicPartition=__consumer_offsets-6. Last clean offset=Some(5006278217) 
now=1560855479531 => firstDirtyOffset=5006278217 
firstUncleanableOffset=5069232666 activeSegment.baseOffset=5069232666 
(kafka.log.LogCleanerManager$)
{code}
Offsets shown in the dump are not in the active segment and are way below the 
firstUncleanableOffset.


was (Author: francisco.juan):
Hello, we have recently updated a Kafka cluster with this same problem from 
version 1.1 to version 2.2.1, without updating the 
inter.broker.protocol.version yet, still set as 1.1.

We were expecting this update to reduce the size on some partitions of 
__consumer_offsets that keep growing. The observed behaviour is that there's 
still many segments with full of only "isTransactional: true" kind of.

This is a sample of the kafka-dump-log.sh:

> Log cleaner skips Transactional mark and batch record, causing unlimited 
> growth of __consumer_offsets
> -
>
> Key: KAFKA-8335
> URL: https://issues.apache.org/jira/browse/KAFKA-8335
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Boquan Tang
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 2.0.2, 2.1.2, 2.2.1
>
> Attachments: seg_april_25.zip, segment.zip
>
>
> My colleague Weichu already sent a mail to the Kafka user mailing list 
> regarding this issue, but we think it's worth having a ticket to track it.
> We have been using Kafka Streams with exactly-once enabled on a Kafka
> cluster for a while.
> Recently we found that the size of __consumer_offsets partitions grew huge.
> Some partitions went over 30G. This caused Kafka to take quite long to load
> the "__consumer_offsets" topic on startup (it loads the topic in order to
> become group coordinator).
> We dumped the __consumer_offsets segments and found that while normal
> offset commits are nicely compacted, transaction records (COMMIT, etc.) are
> all preserved. It looks like, since these messages don't have a key, the
> LogCleaner keeps them all:
> --
> $ bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files
> /003484332061.log --key-decoder-class
> kafka.serializer.StringDecoder 2>/dev/null | cat -v | head
> Dumping 003484332061.log
> Starting offset: 3484332061
> offset: 

[jira] [Commented] (KAFKA-8335) Log cleaner skips Transactional mark and batch record, causing unlimited growth of __consumer_offsets

2019-06-18 Thread Francisco Juan (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1685#comment-1685
 ] 

Francisco Juan commented on KAFKA-8335:
---

Hello, we have recently updated a Kafka cluster with this same problem from 
version 1.1 to version 2.2.1, without updating inter.broker.protocol.version 
yet; it is still set to 1.1.

We were expecting this update to reduce the size of some partitions of 
__consumer_offsets that keep growing. The observed behaviour is that there are 
still many segments full of only "isTransactional: true" messages.

This is a sample of the kafka-dump-log.sh:

> Log cleaner skips Transactional mark and batch record, causing unlimited 
> growth of __consumer_offsets
> -
>
> Key: KAFKA-8335
> URL: https://issues.apache.org/jira/browse/KAFKA-8335
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Boquan Tang
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 2.0.2, 2.1.2, 2.2.1
>
> Attachments: seg_april_25.zip, segment.zip
>
>
> My colleague Weichu already sent a mail to the Kafka user mailing list 
> regarding this issue, but we think it's worth having a ticket tracking it.
> We have been using Kafka Streams with exactly-once enabled on a Kafka
> cluster for a while.
> Recently we found that the size of __consumer_offsets partitions grew huge.
> Some partition went over 30G. This caused Kafka to take quite long to load
> "__consumer_offsets" topic on startup (it loads the topic in order to
> become group coordinator).
> We dumped the __consumer_offsets segments and found that while normal
> offset commits are nicely compacted, transaction records (COMMIT, etc) are
> all preserved. It looks like the LogCleaner is keeping them all because
> these messages don't have a key:
> --
> $ bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files
> /003484332061.log --key-decoder-class
> kafka.serializer.StringDecoder 2>/dev/null | cat -v | head
> Dumping 003484332061.log
> Starting offset: 3484332061
> offset: 3484332089 position: 549 CreateTime: 1556003706952 isvalid: true
> keysize: 4 valuesize: 6 magic: 2 compresscodec: NONE producerId: 1006
> producerEpoch: 2530 sequence: -1 isTransactional: true headerKeys: []
> endTxnMarker: COMMIT coordinatorEpoch: 81
> offset: 3484332090 position: 627 CreateTime: 1556003706952 isvalid: true
> keysize: 4 valuesize: 6 magic: 2 compresscodec: NONE producerId: 4005
> producerEpoch: 2520 sequence: -1 isTransactional: true headerKeys: []
> endTxnMarker: COMMIT coordinatorEpoch: 84
> ...
> --
> Streams is doing transaction commits per 100ms (commit.interval.ms=100 when
> exactly-once) so the __consumer_offsets is growing really fast.
> Is this (to keep all transactions) by design, or is that a bug for
> LogCleaner?  What would be the way to clean up the topic?
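The behaviour described above can be illustrated with a toy model (this is not the actual LogCleaner code): compaction retains only the latest record per key, so keyless records such as transaction markers are never superseded and survive every pass.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CompactionSketch {
    record Rec(String key, String value) {}

    // Toy compaction: keep only the latest value per key, but records with a
    // null key have nothing to supersede them, so they are all retained.
    static List<Rec> compact(List<Rec> log) {
        Map<String, Rec> latestByKey = new LinkedHashMap<>();
        List<Rec> keyless = new ArrayList<>();
        for (Rec r : log) {
            if (r.key() == null) keyless.add(r);
            else latestByKey.put(r.key(), r);
        }
        List<Rec> compacted = new ArrayList<>(latestByKey.values());
        compacted.addAll(keyless);
        return compacted;
    }

    public static void main(String[] args) {
        List<Rec> log = List.of(
                new Rec("group-1", "offset=5"),
                new Rec(null, "COMMIT marker"),
                new Rec("group-1", "offset=9"),
                new Rec(null, "COMMIT marker"));
        // One surviving offset commit, but both keyless markers remain.
        System.out.println(compact(log).size());
    }
}
```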



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-4090) JVM runs into OOM if (Java) client uses a SSL port without setting the security protocol

2019-06-18 Thread Kurt T Stam (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-4090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1681#comment-1681
 ] 

Kurt T Stam edited comment on KAFKA-4090 at 6/18/19 2:43 PM:
-

Is this a hard bug to fix? Can't we just add some validation and send back an 
error rather than having the client blow up? Seems like a pretty nasty defect 
to me; please add your vote so we can get some priority on this.  --Kurt


was (Author: kstam):
Is this a hard bug to fix? Can't we just add some validation and send back an 
error rather than having the client blow up? Seems like a pretty nasty defect 
to me.  --Kurt

> JVM runs into OOM if (Java) client uses a SSL port without setting the 
> security protocol
> 
>
> Key: KAFKA-4090
> URL: https://issues.apache.org/jira/browse/KAFKA-4090
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1, 0.10.0.1
>Reporter: jaikiran pai
>Priority: Major
>
> Quoting from the mail thread that was sent to Kafka mailing list:
> {quote}
> We have been using Kafka 0.9.0.1 (server and Java client libraries). So far 
> we had been using it with plaintext transport but recently have been 
> considering upgrading to using SSL. It mostly works except that a 
> mis-configured producer (and even consumer) causes a hard to relate 
> OutOfMemory exception and thus causing the JVM in which the client is 
> running, to go into a bad state. We can consistently reproduce that OOM very 
> easily. We decided to check if this is something that is fixed in 0.10.0.1 so 
> upgraded one of our test systems to that version (both server and client 
> libraries) but still see the same issue. Here's how it can be easily 
> reproduced
> 1. Enable SSL listener on the broker via server.properties, as per the Kafka 
> documentation
> {code}
> listeners=PLAINTEXT://:9092,SSL://:9093
> ssl.keystore.location=
> ssl.keystore.password=pass
> ssl.key.password=pass
> ssl.truststore.location=
> ssl.truststore.password=pass
> {code}
> 2. Start zookeeper and kafka server
> 3. Create a "oom-test" topic (which will be used for these tests):
> {code}
> kafka-topics.sh --zookeeper localhost:2181 --create --topic oom-test  
> --partitions 1 --replication-factor 1
> {code}
> 4. Create a simple producer which sends a single message to the topic via 
> Java (new producer) APIs:
> {code}
> public class OOMTest {
> public static void main(final String[] args) throws Exception {
> final Properties kafkaProducerConfigs = new Properties();
> // NOTE: Intentionally use a SSL port without specifying 
> security.protocol as SSL
> 
> kafkaProducerConfigs.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, 
> "localhost:9093");
> 
> kafkaProducerConfigs.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, 
> StringSerializer.class.getName());
> 
> kafkaProducerConfigs.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
>  StringSerializer.class.getName());
> try (KafkaProducer<String, String> producer = new 
> KafkaProducer<>(kafkaProducerConfigs)) {
> System.out.println("Created Kafka producer");
> final String topicName = "oom-test";
> final String message = "Hello OOM!";
> // send a message to the topic
> final Future<RecordMetadata> recordMetadataFuture = 
> producer.send(new ProducerRecord<>(topicName, message));
> final RecordMetadata sentRecordMetadata = 
> recordMetadataFuture.get();
> System.out.println("Sent message '" + message + "' to topic '" + 
> topicName + "'");
> }
> System.out.println("Tests complete");
> }
> }
> {code}
> Notice that the server URL is using a SSL endpoint localhost:9093 but isn't 
> specifying any of the other necessary SSL configs like security.protocol.
> 5. For the sake of easily reproducing this issue run this class with a max 
> heap size of 256MB (-Xmx256M). Running this code throws up the following 
> OutOfMemoryError in one of the Sender threads:
> {code}
> 18:33:25,770 ERROR [KafkaThread] - Uncaught exception in 
> kafka-producer-network-thread | producer-1:
> java.lang.OutOfMemoryError: Java heap space
> at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
> at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
> at 
> org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
> at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
> at 
> org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:153)
> at 
> org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:134)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:286)
> at 

[jira] [Commented] (KAFKA-4090) JVM runs into OOM if (Java) client uses a SSL port without setting the security protocol

2019-06-18 Thread Kurt T Stam (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-4090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1681#comment-1681
 ] 

Kurt T Stam commented on KAFKA-4090:


Is this a hard bug to fix? Can't we just add some validation and send back an 
error rather than having the client blow up? Seems like a pretty nasty defect 
to me.  --Kurt

> JVM runs into OOM if (Java) client uses a SSL port without setting the 
> security protocol
> 
>
> Key: KAFKA-4090
> URL: https://issues.apache.org/jira/browse/KAFKA-4090
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.1, 0.10.0.1
>Reporter: jaikiran pai
>Priority: Major
>
> Quoting from the mail thread that was sent to Kafka mailing list:
> {quote}
> We have been using Kafka 0.9.0.1 (server and Java client libraries). So far 
> we had been using it with plaintext transport but recently have been 
> considering upgrading to using SSL. It mostly works except that a 
> mis-configured producer (and even consumer) causes a hard to relate 
> OutOfMemory exception and thus causing the JVM in which the client is 
> running, to go into a bad state. We can consistently reproduce that OOM very 
> easily. We decided to check if this is something that is fixed in 0.10.0.1 so 
> upgraded one of our test systems to that version (both server and client 
> libraries) but still see the same issue. Here's how it can be easily 
> reproduced
> 1. Enable SSL listener on the broker via server.properties, as per the Kafka 
> documentation
> {code}
> listeners=PLAINTEXT://:9092,SSL://:9093
> ssl.keystore.location=
> ssl.keystore.password=pass
> ssl.key.password=pass
> ssl.truststore.location=
> ssl.truststore.password=pass
> {code}
> 2. Start zookeeper and kafka server
> 3. Create a "oom-test" topic (which will be used for these tests):
> {code}
> kafka-topics.sh --zookeeper localhost:2181 --create --topic oom-test  
> --partitions 1 --replication-factor 1
> {code}
> 4. Create a simple producer which sends a single message to the topic via 
> Java (new producer) APIs:
> {code}
> public class OOMTest {
> public static void main(final String[] args) throws Exception {
> final Properties kafkaProducerConfigs = new Properties();
> // NOTE: Intentionally use a SSL port without specifying 
> security.protocol as SSL
> 
> kafkaProducerConfigs.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, 
> "localhost:9093");
> 
> kafkaProducerConfigs.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, 
> StringSerializer.class.getName());
> 
> kafkaProducerConfigs.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
>  StringSerializer.class.getName());
> try (KafkaProducer<String, String> producer = new 
> KafkaProducer<>(kafkaProducerConfigs)) {
> System.out.println("Created Kafka producer");
> final String topicName = "oom-test";
> final String message = "Hello OOM!";
> // send a message to the topic
> final Future<RecordMetadata> recordMetadataFuture = 
> producer.send(new ProducerRecord<>(topicName, message));
> final RecordMetadata sentRecordMetadata = 
> recordMetadataFuture.get();
> System.out.println("Sent message '" + message + "' to topic '" + 
> topicName + "'");
> }
> System.out.println("Tests complete");
> }
> }
> {code}
> Notice that the server URL is using a SSL endpoint localhost:9093 but isn't 
> specifying any of the other necessary SSL configs like security.protocol.
> 5. For the sake of easily reproducing this issue run this class with a max 
> heap size of 256MB (-Xmx256M). Running this code throws up the following 
> OutOfMemoryError in one of the Sender threads:
> {code}
> 18:33:25,770 ERROR [KafkaThread] - Uncaught exception in 
> kafka-producer-network-thread | producer-1:
> java.lang.OutOfMemoryError: Java heap space
> at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
> at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
> at 
> org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
> at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
> at 
> org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:153)
> at 
> org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:134)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:286)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:256)
> at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
> at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Note that I set 
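For reference, the reproduction above is missing the one setting that tells the client the port speaks SSL. A minimal sketch of the corrected producer configuration (truststore path and password are placeholders):

```java
import java.util.Properties;

public class SslProducerConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9093");
        // The missing setting: without it the plaintext transport reads the
        // TLS handshake bytes as a size prefix and tries to allocate a huge
        // receive buffer, producing the OutOfMemoryError shown above.
        props.setProperty("security.protocol", "SSL");
        props.setProperty("ssl.truststore.location", "/path/to/truststore.jks"); // placeholder
        props.setProperty("ssl.truststore.password", "pass");                    // placeholder
        System.out.println(props.getProperty("security.protocol"));
    }
}
```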

[jira] [Commented] (KAFKA-8554) Generate Topic/Key from Json Transform

2019-06-18 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866640#comment-16866640
 ] 

ASF GitHub Bot commented on KAFKA-8554:
---

gokhansari commented on pull request #6960: KAFKA-8554 Generate Topic/Key from 
Json Transform
URL: https://github.com/apache/kafka/pull/6960
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Generate Topic/Key from Json Transform
> --
>
> Key: KAFKA-8554
> URL: https://issues.apache.org/jira/browse/KAFKA-8554
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Reporter: Gokhan Sari
>Priority: Major
>
> Topic and key generation from a configurable pattern is needed. The pattern 
> could include static values and dynamic parameters which exist in the JSON 
> tree.
> Eg:
>  * property.format = "*signals_\{appId}_\{date}" >> 
> "signals_app01_18-06-2019"*
>  ** static '*signals*'
>  ** parameter '*appId*' from the JSON tree
>  ** parameter or record '*date*'
>  * property.date.field = "*details.signalCreationDate"*
>  ** parameter '*details.signalCreationDate*' path from the JSON tree
>  * property.date.format = "*dd-MM-"*
>  ** date format for date parameters or record dates
>  
> Extracting the topic or key (properties) this way will let developers use 
> them according to their business logic.
>  
> This will be especially useful for the Elasticsearch Kafka Connector when 
> dynamic index names and document ids need to be generated.
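As a rough sketch of the requested substitution (the `render` helper and class name are hypothetical, not part of any Connect API), the pattern could be expanded like this:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TopicPatternSketch {
    // Hypothetical helper: replace {param} placeholders with values that
    // would be extracted from the record's JSON tree.
    static String render(String format, Map<String, String> params) {
        Matcher m = Pattern.compile("\\{(\\w+)}").matcher(format);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String value = params.getOrDefault(m.group(1), "");
            m.appendReplacement(sb, Matcher.quoteReplacement(value));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        String topic = render("signals_{appId}_{date}",
                Map.of("appId", "app01", "date", "18-06-2019"));
        System.out.println(topic);
    }
}
```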



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8554) Generate Topic/Key from Json Transform

2019-06-18 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866630#comment-16866630
 ] 

ASF GitHub Bot commented on KAFKA-8554:
---

gokhansari commented on pull request #6960: KAFKA-8554 Generate Topic/Key from 
Json Transform
URL: https://github.com/apache/kafka/pull/6960
 
 
   In a configurable pattern, topic and key generation is needed. The pattern 
could include static values and dynamic parameters which exist in the JSON tree.
   
   Eg:
   
   - property.format = "signals_{appId}_{date}" >> "signals_app01_18-06-2019"
   static 'signals'
   parameter 'appId' from the JSON tree
   parameter or record 'date'
   - property.date.field = "details.signalCreationDate"
   parameter 'details.signalCreationDate' path from the JSON tree
   - property.date.format = "dd-MM-"
   date format for date parameters or record dates
   

   Extracting the topic or key (properties) this way will let developers use 
them according to their business logic.
   
   
   This will be especially useful for the Elasticsearch Kafka Connector when 
dynamic index names and document ids need to be generated.
 



> Generate Topic/Key from Json Transform
> --
>
> Key: KAFKA-8554
> URL: https://issues.apache.org/jira/browse/KAFKA-8554
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Reporter: Gokhan Sari
>Priority: Major
>
> Topic and key generation from a configurable pattern is needed. The pattern 
> could include static values and dynamic parameters which exist in the JSON 
> tree.
> Eg:
>  * property.format = "*signals_\{appId}_\{date}" >> 
> "signals_app01_18-06-2019"*
>  ** static '*signals*'
>  ** parameter '*appId*' from the JSON tree
>  ** parameter or record '*date*'
>  * property.date.field = "*details.signalCreationDate"*
>  ** parameter '*details.signalCreationDate*' path from the JSON tree
>  * property.date.format = "*dd-MM-"*
>  ** date format for date parameters or record dates
>  
> Extracting the topic or key (properties) this way will let developers use 
> them according to their business logic.
>  
> This will be especially useful for the Elasticsearch Kafka Connector when 
> dynamic index names and document ids need to be generated.





[jira] [Updated] (KAFKA-8554) Generate Topic/Key from Json Transform

2019-06-18 Thread Gokhan Sari (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gokhan Sari updated KAFKA-8554:
---
Description: 
Topic and key generation from a configurable pattern is needed. The pattern 
could include static values and dynamic parameters which exist in the JSON tree.

Eg:
 * property.format = "*signals_\{appId}_\{date}" >> "signals_app01_18-06-2019"*
 ** static '*signals*'
 ** parameter '*appId*' from the JSON tree
 ** parameter or record '*date*'

 * property.date.field = "*details.signalCreationDate"*
 ** parameter '*details.signalCreationDate*' path from the JSON tree

 * property.date.format = "*dd-MM-"*
 ** date format for date parameters or record dates

Extracting the topic or key (properties) this way will let developers use them 
according to their business logic.

This will be especially useful for the Elasticsearch Kafka Connector when 
dynamic index names and document ids need to be generated.

  was:
In a configurable pattern, topic and key generation is needed. This pattern 
could include static values and dynamic parameters which are exist in json tree.

Eg:
 * property.format = "*signals_\{appId}_\{date}" >> "signals_app01_18-06-2019"*
 ** static '*signals*' *sdf*
 ** parameter '*appId*' from json tree
 ** parameter or record '*date*'

 * property.date.field = "*details.signalCreationDate"*
 ** parameter '*details.signalCreationDate*' path from json tree

 * property.date.format = "*dd-MM-"*
 ** date format for date parameters or record dates

 

Extracting topic or key (properties) with these way will led developers to use 
them according to their business logic.

 

Especially this will be useful for Elasticsearch Kafka Connector in case of 
dynamic index name and dynamic document id generation needs.


> Generate Topic/Key from Json Transform
> --
>
> Key: KAFKA-8554
> URL: https://issues.apache.org/jira/browse/KAFKA-8554
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Reporter: Gokhan Sari
>Priority: Major
>
> Topic and key generation from a configurable pattern is needed. The pattern 
> could include static values and dynamic parameters which exist in the JSON 
> tree.
> Eg:
>  * property.format = "*signals_\{appId}_\{date}" >> 
> "signals_app01_18-06-2019"*
>  ** static '*signals*'
>  ** parameter '*appId*' from the JSON tree
>  ** parameter or record '*date*'
>  * property.date.field = "*details.signalCreationDate"*
>  ** parameter '*details.signalCreationDate*' path from the JSON tree
>  * property.date.format = "*dd-MM-"*
>  ** date format for date parameters or record dates
>  
> Extracting the topic or key (properties) this way will let developers use 
> them according to their business logic.
>  
> This will be especially useful for the Elasticsearch Kafka Connector when 
> dynamic index names and document ids need to be generated.





[jira] [Created] (KAFKA-8554) Generate Topic/Key from Json Transform

2019-06-18 Thread Gokhan Sari (JIRA)
Gokhan Sari created KAFKA-8554:
--

 Summary: Generate Topic/Key from Json Transform
 Key: KAFKA-8554
 URL: https://issues.apache.org/jira/browse/KAFKA-8554
 Project: Kafka
  Issue Type: New Feature
  Components: KafkaConnect
Reporter: Gokhan Sari


Topic and key generation from a configurable pattern is needed. The pattern 
could include static values and dynamic parameters which exist in the JSON tree.

Eg:
 * property.format = "*signals_\{appId}_\{date}" >> "signals_app01_18-06-2019"*
 ** static '*signals*'
 ** parameter '*appId*' from the JSON tree
 ** parameter or record '*date*'

 * property.date.field = "*details.signalCreationDate"*
 ** parameter '*details.signalCreationDate*' path from the JSON tree

 * property.date.format = "*dd-MM-"*
 ** date format for date parameters or record dates

Extracting the topic or key (properties) this way will let developers use them 
according to their business logic.

This will be especially useful for the Elasticsearch Kafka Connector when 
dynamic index names and document ids need to be generated.





[jira] [Commented] (KAFKA-8534) retention.bytes does not work as documented

2019-06-18 Thread Evelyn Bayes (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16866592#comment-16866592
 ] 

Evelyn Bayes commented on KAFKA-8534:
-

Right, still getting my head around the test suite.

My guess is my change breaks a lot of tests.
There are 6 alone in core:unitTest, and these would be expecting a specific 
number of log segments to still exist, etc.

I'm going to patch all the tests tomorrow.

 

p.s. can't assign the Jira to myself for some reason

> retention.bytes does not work as documented
> ---
>
> Key: KAFKA-8534
> URL: https://issues.apache.org/jira/browse/KAFKA-8534
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.11.0.3, 1.0.2, 1.1.1, 2.0.1, 2.1.1, 2.2.1
>Reporter: Evelyn Bayes
>Priority: Major
>
> A log segment isn't deleted when a partition reaches retention.bytes.
> Instead, a log segment is deleted when a partition reaches retention.bytes + 
> segment.bytes.
> This conflicts with the definition of retention.bytes:
> *_This configuration controls the maximum size a partition (which consists of 
> log segments) can grow to before we will discard old log segments to free up 
> space if we are using the "delete" retention policy. By default there is no 
> size limit only a time limit. Since this limit is enforced at the partition 
> level, multiply it by the number of partitions to compute the topic retention 
> in bytes._*
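The gap being reported is easy to quantify. A small illustration (the +segment.bytes offset reflects the reported behaviour, not the documented contract; the byte values are arbitrary examples):

```java
public class RetentionGapExample {
    public static void main(String[] args) {
        long retentionBytes = 1_073_741_824L; // retention.bytes = 1 GiB
        long segmentBytes = 268_435_456L;     // segment.bytes = 256 MiB

        // Documented: deletion should start once the partition passes retention.bytes.
        // Reported: deletion only starts past retention.bytes + segment.bytes.
        long observedThreshold = retentionBytes + segmentBytes;
        System.out.println(observedThreshold);
    }
}
```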





[jira] [Resolved] (KAFKA-8488) FetchSessionHandler logging create 73 mb allocation in TLAB which could be no op

2019-06-18 Thread Kamal Chandraprakash (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kamal Chandraprakash resolved KAFKA-8488.
-
Resolution: Fixed

> FetchSessionHandler logging create 73 mb allocation in TLAB which could be no 
> op 
> -
>
> Key: KAFKA-8488
> URL: https://issues.apache.org/jira/browse/KAFKA-8488
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Wenshuai Hou
>Priority: Minor
> Attachments: image-2019-06-05-14-04-35-668.png
>
>
> !image-2019-06-05-14-04-35-668.png!





[jira] [Updated] (KAFKA-8553) Kafka Connect Schema Compatibility Checks for Name Changes

2019-06-18 Thread Omer van Kloeten (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omer van Kloeten updated KAFKA-8553:

Description: 
{{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
compatible with another one. This is used for projection when using 
{{schema.compatibility}}.

Unfortunately, nowhere is it documented that if you change the name of the 
schema, this would break compatibility entirely.

For instance, the following two Avro schemas are fully compatible, but Connect 
says they're not:

!foo1.png!!image-2019-06-18-14-59-54-643.png!

Either this is the expected behavior and it is not documented, or it is 
unexpected behavior and an issue with the implementation.

  was:
{{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
compatible with another one. This is used for projection when using 
{{schema.compatibility}}.

Unfortunately, nowhere is it documented that if you change the name of the 
schema, this would break compatibility entirely.

For instance, the following two Avro schemas are fully compatible, but Connect 
says they're not:

!foo1.png!!image-2019-06-18-14-59-54-643.png!

This is either the expected behavior and is not documented or unexpected 
behavior and is a bug.


> Kafka Connect Schema Compatibility Checks for Name Changes
> --
>
> Key: KAFKA-8553
> URL: https://issues.apache.org/jira/browse/KAFKA-8553
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Omer van Kloeten
>Priority: Major
> Attachments: foo1.png, image-2019-06-18-14-59-54-643.png
>
>
> {{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
> compatible with another one. This is used for projection when using 
> {{schema.compatibility}}.
> Unfortunately, nowhere is it documented that if you change the name of the 
> schema, this would break compatibility entirely.
> For instance, the following two Avro schemas are fully compatible, but 
> Connect says they're not:
> !foo1.png!!image-2019-06-18-14-59-54-643.png!
> Either this is the expected behavior and it is not documented, or it is 
> unexpected behavior and an issue with the implementation.
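A toy model of the comparison being described (simplified; the real `SchemaProjector.checkMaybeCompatible` also checks types and fields, while this sketch isolates only the name comparison):

```java
import java.util.Objects;

public class SchemaNameCheckSketch {
    // Simplified model: projection is rejected as soon as the schema names
    // differ, even when the field structure is otherwise identical.
    static boolean namesCompatible(String sourceName, String targetName) {
        return Objects.equals(sourceName, targetName);
    }

    public static void main(String[] args) {
        // Two structurally identical schemas that differ only in name:
        System.out.println(namesCompatible("com.example.Foo", "com.example.Bar"));
    }
}
```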





[jira] [Updated] (KAFKA-8553) Kafka Connect Schema Compatibility Checks for Name Changes

2019-06-18 Thread Omer van Kloeten (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omer van Kloeten updated KAFKA-8553:

Attachment: image-2019-06-18-14-59-54-643.png

> Kafka Connect Schema Compatibility Checks for Name Changes
> --
>
> Key: KAFKA-8553
> URL: https://issues.apache.org/jira/browse/KAFKA-8553
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Omer van Kloeten
>Priority: Major
> Attachments: foo1.png, image-2019-06-18-14-59-54-643.png
>
>
> {{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
> compatible with another one. This is used for projection when using 
> {{schema.compatibility}}.
> Unfortunately, nowhere is it documented that if you change the name of the 
> schema, this would break compatibility entirely.
> For instance, the following two Avro schemas are fully compatible, but 
> Connect says they're not:
> This is either the expected behavior and is not documented or unexpected 
> behavior and is a bug.





[jira] [Updated] (KAFKA-8553) Kafka Connect Schema Compatibility Checks for Name Changes

2019-06-18 Thread Omer van Kloeten (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omer van Kloeten updated KAFKA-8553:

Attachment: foo1.png

> Kafka Connect Schema Compatibility Checks for Name Changes
> --
>
> Key: KAFKA-8553
> URL: https://issues.apache.org/jira/browse/KAFKA-8553
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Omer van Kloeten
>Priority: Major
> Attachments: foo1.png, image-2019-06-18-14-59-54-643.png
>
>
> {{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
> compatible with another one. This is used for projection when using 
> {{schema.compatibility}}.
> Unfortunately, nowhere is it documented that if you change the name of the 
> schema, this would break compatibility entirely.
> For instance, the following two Avro schemas are fully compatible, but 
> Connect says they're not:
> This is either the expected behavior and is not documented or unexpected 
> behavior and is a bug.





[jira] [Updated] (KAFKA-8553) Kafka Connect Schema Compatibility Checks for Name Changes

2019-06-18 Thread Omer van Kloeten (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omer van Kloeten updated KAFKA-8553:

Description: 
{{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
compatible with another one. This is used for projection when using 
{{schema.compatibility}}.

Unfortunately, nowhere is it documented that if you change the name of the 
schema, this would break compatibility entirely.

For instance, the following two Avro schemas are fully compatible, but Connect 
says they're not:

!foo1.png!!image-2019-06-18-14-59-54-643.png!

This is either the expected behavior and is not documented or unexpected 
behavior and is a bug.

  was:
{{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
compatible with another one. This is used for projection when using 
{{schema.compatibility}}.

Unfortunately, nowhere is it documented that if you change the name of the 
schema, this would break compatibility entirely.

For instance, the following two Avro schemas are fully compatible, but Connect 
says they're not:

This is either the expected behavior and is not documented or unexpected 
behavior and is a bug.


> Kafka Connect Schema Compatibility Checks for Name Changes
> --
>
> Key: KAFKA-8553
> URL: https://issues.apache.org/jira/browse/KAFKA-8553
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Omer van Kloeten
>Priority: Major
> Attachments: foo1.png, image-2019-06-18-14-59-54-643.png
>
>
> {{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
> compatible with another one. This is used for projection when using 
> {{schema.compatibility}}.
> Unfortunately, nowhere is it documented that if you change the name of the 
> schema, this would break compatibility entirely.
> For instance, the following two Avro schemas are fully compatible, but 
> Connect says they're not:
> !foo1.png!!image-2019-06-18-14-59-54-643.png!
> This is either the expected behavior and is not documented or unexpected 
> behavior and is a bug.





[jira] [Updated] (KAFKA-8553) Kafka Connect Schema Compatibility Checks for Name Changes

2019-06-18 Thread Omer van Kloeten (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omer van Kloeten updated KAFKA-8553:

Description: 
{{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
compatible with another one. This is used for projection when using 
{{schema.compatibility}}.

Unfortunately, nowhere is it documented that if you change the name of the 
schema, this would break compatibility entirely.

For instance, the following two Avro schemas are fully compatible, but Connect 
says they're not:

This is either the expected behavior and is not documented or unexpected 
behavior and is a bug.

  was:
{{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
compatible with another one. This is used for projection when using 
{{schema.compatibility}}.

Unfortunately, nowhere is it documented that if you change the name of the 
schema, this would break compatibility entirely.

For instance, the following two Avro schemas are fully compatible, but Connect 
says they're not:

!data-pipelinescode_data-pipelines__idea_pants-projects_a44ff4d5a7b47b280b58163799be5d1547316f03__-___Library_Preferences_IntelliJIdea2019_1_scratches_scratch_30_json.png!

This is either the expected behavior and is not documented or unexpected 
behavior and is a bug.


> Kafka Connect Schema Compatibility Checks for Name Changes
> --
>
> Key: KAFKA-8553
> URL: https://issues.apache.org/jira/browse/KAFKA-8553
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Omer van Kloeten
>Priority: Major
> Attachments: foo1.png, image-2019-06-18-14-59-54-643.png
>
>
> {{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
> compatible with another one. This is used for projection when using 
> {{schema.compatibility}}.
> Unfortunately, nowhere is it documented that if you change the name of the 
> schema, this would break compatibility entirely.
> For instance, the following two Avro schemas are fully compatible, but 
> Connect says they're not:
> This is either the expected behavior and is not documented or unexpected 
> behavior and is a bug.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8553) Kafka Connect Schema Compatibility Checks for Name Changes

2019-06-18 Thread Omer van Kloeten (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omer van Kloeten updated KAFKA-8553:

Attachment: (was: 
data-pipelinescode_data-pipelines__idea_pants-projects_a44ff4d5a7b47b280b58163799be5d1547316f03__-___Library_Preferences_IntelliJIdea2019_1_scratches_scratch_30_json.png)

> Kafka Connect Schema Compatibility Checks for Name Changes
> --
>
> Key: KAFKA-8553
> URL: https://issues.apache.org/jira/browse/KAFKA-8553
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Omer van Kloeten
>Priority: Major
>
> {{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
> compatible with another one. This is used for projection when using 
> {{schema.compatibility}}.
> Unfortunately, nowhere is it documented that if you change the name of the 
> schema, this would break compatibility entirely.
> For instance, the following two Avro schemas are fully compatible, but 
> Connect says they're not:
> !data-pipelinescode_data-pipelines__idea_pants-projects_a44ff4d5a7b47b280b58163799be5d1547316f03__-___Library_Preferences_IntelliJIdea2019_1_scratches_scratch_30_json.png!
> This is either the expected behavior and is not documented or unexpected 
> behavior and is a bug.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8553) Kafka Connect Schema Compatibility Checks for Name Changes

2019-06-18 Thread Omer van Kloeten (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omer van Kloeten updated KAFKA-8553:

Description: 
{{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
compatible with another one. This is used for projection when using 
{{schema.compatibility}}.

Unfortunately, nowhere is it documented that if you change the name of the 
schema, this would break compatibility entirely.

For instance, the following two Avro schemas are fully compatible, but Connect 
says they're not:

!data-pipelinescode_data-pipelines__idea_pants-projects_a44ff4d5a7b47b280b58163799be5d1547316f03__-___Library_Preferences_IntelliJIdea2019_1_scratches_scratch_30_json.png!

This is either the expected behavior and is not documented or unexpected 
behavior and is a bug.

  was:
{{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
compatible with another one. This is used for projection when using 
{{schema.compatibility}}.

Unfortunately, nowhere is it documented that if you change the name of the 
schema, this would break compatibility entirely.

For instance, the following two Avro schemas are fully compatible, but Connect 
says they're not:

{code:java}
{
  "type": "record",
  "name": "Foo1",
  "namespace": "example",
  "fields": []
}

{
  "type": "record",
  "name": "Foo2",
  "namespace": "example",
  "fields": [
    { "name": "bar", "type": ["null", "string"] }
  ]
}
{code}

This is either the expected behavior and is not documented or unexpected 
behavior and is a bug.


> Kafka Connect Schema Compatibility Checks for Name Changes
> --
>
> Key: KAFKA-8553
> URL: https://issues.apache.org/jira/browse/KAFKA-8553
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Omer van Kloeten
>Priority: Major
>
> {{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
> compatible with another one. This is used for projection when using 
> {{schema.compatibility}}.
> Unfortunately, nowhere is it documented that if you change the name of the 
> schema, this would break compatibility entirely.
> For instance, the following two Avro schemas are fully compatible, but 
> Connect says they're not:
> !data-pipelinescode_data-pipelines__idea_pants-projects_a44ff4d5a7b47b280b58163799be5d1547316f03__-___Library_Preferences_IntelliJIdea2019_1_scratches_scratch_30_json.png!
> This is either the expected behavior and is not documented or unexpected 
> behavior and is a bug.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8553) Kafka Connect Schema Compatibility Checks for Name Changes

2019-06-18 Thread Omer van Kloeten (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omer van Kloeten updated KAFKA-8553:

Attachment: 
data-pipelinescode_data-pipelines__idea_pants-projects_a44ff4d5a7b47b280b58163799be5d1547316f03__-___Library_Preferences_IntelliJIdea2019_1_scratches_scratch_30_json.png

> Kafka Connect Schema Compatibility Checks for Name Changes
> --
>
> Key: KAFKA-8553
> URL: https://issues.apache.org/jira/browse/KAFKA-8553
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Omer van Kloeten
>Priority: Major
>
> {{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
> compatible with another one. This is used for projection when using 
> {{schema.compatibility}}.
> Unfortunately, nowhere is it documented that if you change the name of the 
> schema, this would break compatibility entirely.
> For instance, the following two Avro schemas are fully compatible, but 
> Connect says they're not:
> {code:java}
> {
>   "type": "record",
>   "name": "Foo1",
>   "namespace": "example",
>   "fields": []
> }
>
> {
>   "type": "record",
>   "name": "Foo2",
>   "namespace": "example",
>   "fields": [
>     { "name": "bar", "type": ["null", "string"] }
>   ]
> }
> {code}
> This is either the expected behavior and is not documented or unexpected 
> behavior and is a bug.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8553) Kafka Connect Schema Compatibility Checks for Name Changes

2019-06-18 Thread Omer van Kloeten (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omer van Kloeten updated KAFKA-8553:

Description: 
{{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
compatible with another one. This is used for projection when using 
{{schema.compatibility}}.

Unfortunately, nowhere is it documented that if you change the name of the 
schema, this would break compatibility entirely.

For instance, the following two Avro schemas are fully compatible, but Connect 
says they're not:

{code:java}
{
  "type": "record",
  "name": "Foo1",
  "namespace": "example",
  "fields": []
}

{
  "type": "record",
  "name": "Foo2",
  "namespace": "example",
  "fields": [
    { "name": "bar", "type": ["null", "string"] }
  ]
}
{code}

This is either the expected behavior and is not documented or unexpected 
behavior and is a bug.

  was:
{{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
compatible with another one. This is used for projection when using 
{{schema.compatibility}}.

Unfortunately, nowhere is it documented that if you change the name of the 
schema, this would break compatibility entirely.

For instance, the following two Avro schemas are fully compatible, but Connect 
says they're not:

{code:java}
{
  "type": "record",
  "name": "Foo1",
  "namespace": "example",
  "fields": []
}

{
  "type": "record",
  "name": "Foo2",
  "namespace": "example",
  "fields": [
    { "name": "bar", "type": ["null", "string"] }
  ]
}
{code}

This is either the expected behavior and is not documented or unexpected 
behavior and is a bug.


> Kafka Connect Schema Compatibility Checks for Name Changes
> --
>
> Key: KAFKA-8553
> URL: https://issues.apache.org/jira/browse/KAFKA-8553
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Omer van Kloeten
>Priority: Major
>
> {{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
> compatible with another one. This is used for projection when using 
> {{schema.compatibility}}.
> Unfortunately, nowhere is it documented that if you change the name of the 
> schema, this would break compatibility entirely.
> For instance, the following two Avro schemas are fully compatible, but 
> Connect says they're not:
> {code:java}
> {
>   "type": "record",
>   "name": "Foo1",
>   "namespace": "example",
>   "fields": []
> }
>
> {
>   "type": "record",
>   "name": "Foo2",
>   "namespace": "example",
>   "fields": [
>     { "name": "bar", "type": ["null", "string"] }
>   ]
> }
> {code}
> This is either the expected behavior and is not documented or unexpected 
> behavior and is a bug.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8553) Kafka Connect Schema Compatibility Checks for Name Changes

2019-06-18 Thread Omer van Kloeten (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omer van Kloeten updated KAFKA-8553:

Description: 
{{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
compatible with another one. This is used for projection when using 
{{schema.compatibility}}.

Unfortunately, nowhere is it documented that if you change the name of the 
schema, this would break compatibility entirely.

For instance, the following two Avro schemas are fully compatible, but Connect 
says they're not:

{code:java}
{
  "type": "record",
  "name": "Foo1",
  "namespace": "example",
  "fields": []
}

{
  "type": "record",
  "name": "Foo2",
  "namespace": "example",
  "fields": [
    { "name": "bar", "type": ["null", "string"] }
  ]
}
{code}

This is either the expected behavior and is not documented or unexpected 
behavior and is a bug.

  was:
{{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
compatible with another one. This is used for projection when using 
{{schema.compatibility}}.

Unfortunately, nowhere is it documented that if you change the name of the 
schema, this would break compatibility entirely.

For instance, the following two Avro schemas are fully compatible, but Connect 
says they're not:

{code:java}
{
  "type": "record",
  "name": "Foo1",
  "namespace": "example",
  "fields": []
}

{
  "type": "record",
  "name": "Foo2",
  "namespace": "example",
  "fields": [
    { "name": "bar", "type": ["null", "string"] }
  ]
}
{code}

This is either the expected behavior and is not documented or unexpected 
behavior and is a bug.


> Kafka Connect Schema Compatibility Checks for Name Changes
> --
>
> Key: KAFKA-8553
> URL: https://issues.apache.org/jira/browse/KAFKA-8553
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Omer van Kloeten
>Priority: Major
>
> {{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
> compatible with another one. This is used for projection when using 
> {{schema.compatibility}}.
> Unfortunately, nowhere is it documented that if you change the name of the 
> schema, this would break compatibility entirely.
> For instance, the following two Avro schemas are fully compatible, but 
> Connect says they're not:
> {code:java}
> {
>   "type": "record",
>   "name": "Foo1",
>   "namespace": "example",
>   "fields": []
> }
>
> {
>   "type": "record",
>   "name": "Foo2",
>   "namespace": "example",
>   "fields": [
>     { "name": "bar", "type": ["null", "string"] }
>   ]
> }
> {code}
> This is either the expected behavior and is not documented or unexpected 
> behavior and is a bug.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8553) Kafka Connect Schema Compatibility Checks for Name Changes

2019-06-18 Thread Omer van Kloeten (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Omer van Kloeten updated KAFKA-8553:

Description: 
{{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
compatible with another one. This is used for projection when using 
{{schema.compatibility}}.

Unfortunately, nowhere is it documented that if you change the name of the 
schema, this would break compatibility entirely.

For instance, the following two Avro schemas are fully compatible, but Connect 
says they're not:

{code:java}
{
  "type": "record",
  "name": "Foo1",
  "namespace": "example",
  "fields": []
}

{
  "type": "record",
  "name": "Foo2",
  "namespace": "example",
  "fields": [
    { "name": "bar", "type": ["null", "string"] }
  ]
}
{code}

This is either the expected behavior and is not documented or unexpected 
behavior and is a bug.

  was:
{{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
compatible with another one. This is used for projection when using 
{{schema.compatibility}}.

Unfortunately, nowhere is it documented that if you change the name of the 
schema, this would break compatibility entirely.

For instance, the following two Avro schemas are fully compatible, but Connect 
says they're not:

{code:java}
{
  "type": "record",
  "name": "Foo1",
  "namespace": "example",
  "fields": []
}

{
  "type": "record",
  "name": "Foo2",
  "namespace": "example",
  "fields": [
    { "name": "bar", "type": ["null", "string"] }
  ]
}
{code}

This is either the expected behavior and is not documented or unexpected 
behavior and is a bug.


> Kafka Connect Schema Compatibility Checks for Name Changes
> --
>
> Key: KAFKA-8553
> URL: https://issues.apache.org/jira/browse/KAFKA-8553
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Omer van Kloeten
>Priority: Major
>
> {{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
> compatible with another one. This is used for projection when using 
> {{schema.compatibility}}.
> Unfortunately, nowhere is it documented that if you change the name of the 
> schema, this would break compatibility entirely.
> For instance, the following two Avro schemas are fully compatible, but 
> Connect says they're not:
> {code:java}
> {
>   "type": "record",
>   "name": "Foo1",
>   "namespace": "example",
>   "fields": []
> }
>
> {
>   "type": "record",
>   "name": "Foo2",
>   "namespace": "example",
>   "fields": [
>     { "name": "bar", "type": ["null", "string"] }
>   ]
> }
> {code}
> This is either the expected behavior and is not documented or unexpected 
> behavior and is a bug.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8553) Kafka Connect Schema Compatibility Checks for Name Changes

2019-06-18 Thread Omer van Kloeten (JIRA)
Omer van Kloeten created KAFKA-8553:
---

 Summary: Kafka Connect Schema Compatibility Checks for Name Changes
 Key: KAFKA-8553
 URL: https://issues.apache.org/jira/browse/KAFKA-8553
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Omer van Kloeten


{{SchemaProjector.checkMaybeCompatible}} checks whether the Connect schema is 
compatible with another one. This is used for projection when using 
{{schema.compatibility}}.

Unfortunately, nowhere is it documented that if you change the name of the 
schema, this would break compatibility entirely.

For instance, the following two Avro schemas are fully compatible, but Connect 
says they're not:

{code:java}
{
  "type": "record",
  "name": "Foo1",
  "namespace": "example",
  "fields": []
}

{
  "type": "record",
  "name": "Foo2",
  "namespace": "example",
  "fields": [
    { "name": "bar", "type": ["null", "string"] }
  ]
}
{code}

This is either expected behavior that is not documented, or unexpected 
behavior and therefore a bug.
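To make the reported behavior concrete, here is a minimal, hypothetical Java sketch of a name-sensitive compatibility check of the kind described above. The class and method names (NameCheckSketch, namesCompatible) are illustrative only, not Connect's actual API; Connect's real SchemaProjector logic is more involved.

```java
import java.util.Objects;

public class NameCheckSketch {
    // Illustrative stand-in for the behavior described in the report:
    // projection is rejected whenever the two schema names differ,
    // regardless of the fields being structurally compatible.
    static boolean namesCompatible(String sourceName, String targetName) {
        return Objects.equals(sourceName, targetName);
    }

    public static void main(String[] args) {
        // Avro schema resolution would accept example.Foo1 -> example.Foo2
        // here (Foo2 only adds a nullable field), but a purely name-based
        // check does not.
        System.out.println(namesCompatible("example.Foo1", "example.Foo2")); // false
        System.out.println(namesCompatible("example.Foo1", "example.Foo1")); // true
    }
}
```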



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8552) Use SASL authentication in ConfigCommand for connection to zookeeper

2019-06-18 Thread Tim Lansbergen (JIRA)
Tim Lansbergen created KAFKA-8552:
-

 Summary: Use SASL authentication in ConfigCommand for connection 
to zookeeper
 Key: KAFKA-8552
 URL: https://issues.apache.org/jira/browse/KAFKA-8552
 Project: Kafka
  Issue Type: Improvement
  Components: zkclient
Affects Versions: 2.2.1
Reporter: Tim Lansbergen


Currently we are using the kafka-configs script to create SCRAM users in 
zookeeper. I execute the following command on the machine:

*./kafka-configs --zookeeper _ip-adres_:2181 --alter --add-config 
'SCRAM-SHA-256=[password=password]' --entity-type users --entity-name user123*

I would like to create users dynamically via a Java API. Since it is not 
possible to create SCRAM users via the KafkaAdminApi (please confirm?), I am 
now using the Kafka Scala class 'AdminZkClient' to create users the same way 
as the ConfigCommand currently does. It looks like the AdminZkClient doesn't 
provide a way to authenticate against zookeeper using SASL. I'm currently 
connecting to zookeeper without authentication, which is a security issue. Is 
it possible to connect with the AdminZkClient with SASL authentication?

I'm aware of issue KAFKA-5722 which is an improvement to use the AdminClient in 
the ConfigCommand class so this issue might be a duplicate but I would like to 
know if it is possible to authenticate using SASL with the AdminZkClient.
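For reference, the standard way the ZooKeeper client (and therefore code built on AdminZkClient) authenticates with SASL is a JAAS file containing a "Client" section, pointed at via the java.security.auth.login.config system property before the ZooKeeper connection is created. The sketch below only writes the file and sets the property; the file path, username, and password are placeholders, not values from this thread.

```java
import java.io.FileWriter;
import java.io.IOException;

public class ZkSaslSetupSketch {
    public static void main(String[] args) throws IOException {
        // Hypothetical JAAS file with the ZooKeeper "Client" section;
        // username and password are placeholders.
        String jaas =
            "Client {\n" +
            "    org.apache.zookeeper.server.auth.DigestLoginModule required\n" +
            "    username=\"zkclient\"\n" +
            "    password=\"zkclient-secret\";\n" +
            "};\n";
        String path = "/tmp/zk_client_jaas.conf";
        try (FileWriter w = new FileWriter(path)) {
            w.write(jaas);
        }
        // The ZooKeeper client reads the "Client" section via this JVM-wide
        // system property; it must be set before connecting.
        System.setProperty("java.security.auth.login.config", path);
        System.out.println(System.getProperty("java.security.auth.login.config"));
    }
}
```

The same property can be passed on the command line (e.g. via KAFKA_OPTS for the kafka-configs script) instead of being set programmatically.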

Thanks!

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)