[jira] [Resolved] (KAFKA-6277) Make loadClass thread-safe for class loaders of Connect plugins
[ https://issues.apache.org/jira/browse/KAFKA-6277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ewen Cheslack-Postava resolved KAFKA-6277. -- Resolution: Fixed Fix Version/s: 1.1.0 Issue resolved by pull request 4428 [https://github.com/apache/kafka/pull/4428] > Make loadClass thread-safe for class loaders of Connect plugins > --- > > Key: KAFKA-6277 > URL: https://issues.apache.org/jira/browse/KAFKA-6277 > Project: Kafka > Issue Type: Bug > Components: KafkaConnect >Affects Versions: 1.0.0, 0.11.0.2 >Reporter: Konstantine Karantasis >Assignee: Konstantine Karantasis >Priority: Blocker > Fix For: 1.1.0, 1.0.1, 0.11.0.3 > > > In Connect's classloading isolation framework, the {{PluginClassLoader}} class > encounters a race condition when several threads corresponding to tasks using > a specific plugin (e.g. a Connector) try to load the same class at the same > time on a single JVM. > The race condition is related to calls to the method {{defineClass}} which, > contrary to {{findClass}}, is not thread-safe for classloaders that override > {{loadClass}}. More details here: > https://docs.oracle.com/javase/7/docs/technotes/guides/lang/cl-mt.html -- This message was sent by Atlassian JIRA (v7.6.3#76005)
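The direction of the fix described above can be sketched as a Java 7+ parallel-capable class loader that serializes loading per class name. This is an illustrative minimal example, not the actual Kafka patch; the name SafePluginClassLoader is hypothetical:

```java
// Illustrative sketch only: mirrors java.lang.ClassLoader's API, but is not
// the exact change made in the Kafka pull request.
public class SafePluginClassLoader extends ClassLoader {

    static {
        // Opt in to per-class-name locking (JRE 1.7+). Without this, a
        // loadClass override that reaches defineClass from multiple threads
        // can race and throw LinkageError ("duplicate class definition").
        ClassLoader.registerAsParallelCapable();
    }

    public SafePluginClassLoader(ClassLoader parent) {
        super(parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        // Lock per class name, so two tasks resolving the same plugin class
        // cannot both attempt to define it, while loads of different classes
        // still proceed in parallel.
        synchronized (getClassLoadingLock(name)) {
            Class<?> c = findLoadedClass(name);
            if (c == null) {
                c = super.loadClass(name, false);
            }
            if (resolve) {
                resolveClass(c);
            }
            return c;
        }
    }
}
```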
[jira] [Created] (KAFKA-6465) Add a metric for the number of records per log
Ivan Babrou created KAFKA-6465: -- Summary: Add a metric for the number of records per log Key: KAFKA-6465 URL: https://issues.apache.org/jira/browse/KAFKA-6465 Project: Kafka Issue Type: Bug Reporter: Ivan Babrou Currently there are log metrics for: * Start offset * End offset * Size in bytes * Number of segments I propose to add another metric to track the number of record batches in the log. This should provide operators with an idea of how much batching is happening on the producers. Having this metric in one place seems easier than scraping the metric from each producer. Having an absolute counter may be infeasible (batches are not assigned sequential IDs), but a gauge should be ok. Average batch size can be calculated as (end offset - start offset) / number of batches. This will be heavily skewed for logs with long retention, though. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
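The derived value proposed above can be sketched as follows; BatchMetrics and averageBatchSize are hypothetical names for illustration, not an existing Kafka metric:

```java
// Hypothetical helper illustrating the proposal: average records per batch,
// computed from the metrics the log already exposes plus the proposed gauge.
public final class BatchMetrics {
    private BatchMetrics() {}

    /**
     * (end offset - start offset) / number of batches, as suggested in the
     * issue. Returns 0.0 when there are no batches to avoid division by zero.
     */
    public static double averageBatchSize(long startOffset, long endOffset, long numBatches) {
        if (numBatches <= 0) {
            return 0.0;
        }
        return (double) (endOffset - startOffset) / numBatches;
    }
}
```

Note the caveat from the issue: for logs with long retention this average is skewed, since it mixes old and recent batching behavior.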
[jira] [Comment Edited] (KAFKA-6464) Base64URL encoding under JRE 1.7 is broken due to incorrect padding assumption
[ https://issues.apache.org/jira/browse/KAFKA-6464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16333715#comment-16333715 ] Ron Dagostino edited comment on KAFKA-6464 at 1/21/18 10:43 PM: Please assign to me; I will submit a pull request. was (Author: rndgstn): Please assign to me; I will submit a patch. > Base64URL encoding under JRE 1.7 is broken due to incorrect padding assumption > -- > > Key: KAFKA-6464 > URL: https://issues.apache.org/jira/browse/KAFKA-6464 > Project: Kafka > Issue Type: Bug > Components: clients >Affects Versions: 1.0.0 >Reporter: Ron Dagostino >Priority: Minor > Original Estimate: 1h > Remaining Estimate: 1h > > The org.apache.kafka.common.utils.Base64 class defers Base64 > encoding/decoding to the java.util.Base64 class beginning with JRE 1.8 but > leverages javax.xml.bind.DatatypeConverter under JRE 1.7. The implementation > of the encodeToString(bytes[]) method returned under JRE 1.7 by > Base64.urlEncoderNoPadding() blindly removes the last two trailing characters > of the Base64 encoding under the assumption that they will always be the > string "==" but that is incorrect; padding can be "=", "==", or non-existent. > For example, this statement: > > {code:java} > Base64.urlEncoderNoPadding().encodeToString( > "{\"alg\":\"none\"}".getBytes(StandardCharsets.UTF_8));{code} > > Yields this, which is incorrect: (because the padding on the Base64 encoded > value is "=" instead of the assumed "==", so an extra character is > incorrectly trimmed): > {{eyJhbGciOiJub25lIn}} > The correct value is: > {{eyJhbGciOiJub25lIn0}} > There is also no Base64.urlDecoder() method, which aside from providing > useful functionality would also make it easy to write a unit test (there > currently is none). > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-6464) Base64URL encoding under JRE 1.7 is broken due to incorrect padding assumption
[ https://issues.apache.org/jira/browse/KAFKA-6464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16333715#comment-16333715 ] Ron Dagostino commented on KAFKA-6464: -- Please assign to me; I will submit a patch. > Base64URL encoding under JRE 1.7 is broken due to incorrect padding assumption > -- > > Key: KAFKA-6464 > URL: https://issues.apache.org/jira/browse/KAFKA-6464 > Project: Kafka > Issue Type: Bug > Components: clients >Affects Versions: 1.0.0 >Reporter: Ron Dagostino >Priority: Minor > Original Estimate: 1h > Remaining Estimate: 1h > > The org.apache.kafka.common.utils.Base64 class defers Base64 > encoding/decoding to the java.util.Base64 class beginning with JRE 1.8 but > leverages javax.xml.bind.DatatypeConverter under JRE 1.7. The implementation > of the encodeToString(bytes[]) method returned under JRE 1.7 by > Base64.urlEncoderNoPadding() blindly removes the last two trailing characters > of the Base64 encoding under the assumption that they will always be the > string "==" but that is incorrect; padding can be "=", "==", or non-existent. > For example, this statement: > > {code:java} > Base64.urlEncoderNoPadding().encodeToString( > "{\"alg\":\"none\"}".getBytes(StandardCharsets.UTF_8));{code} > > Yields this, which is incorrect: (because the padding on the Base64 encoded > value is "=" instead of the assumed "==", so an extra character is > incorrectly trimmed): > {{eyJhbGciOiJub25lIn}} > The correct value is: > {{eyJhbGciOiJub25lIn0}} > There is also no Base64.urlDecoder() method, which aside from providing > useful functionality would also make it easy to write a unit test (there > currently is none). > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (KAFKA-6464) Base64URL encoding under JRE 1.7 is broken due to incorrect padding assumption
Ron Dagostino created KAFKA-6464: Summary: Base64URL encoding under JRE 1.7 is broken due to incorrect padding assumption Key: KAFKA-6464 URL: https://issues.apache.org/jira/browse/KAFKA-6464 Project: Kafka Issue Type: Bug Components: clients Affects Versions: 1.0.0 Reporter: Ron Dagostino The org.apache.kafka.common.utils.Base64 class defers Base64 encoding/decoding to the java.util.Base64 class beginning with JRE 1.8 but leverages javax.xml.bind.DatatypeConverter under JRE 1.7. The implementation of the encodeToString(bytes[]) method returned under JRE 1.7 by Base64.urlEncoderNoPadding() blindly removes the last two trailing characters of the Base64 encoding under the assumption that they will always be the string "==", but that is incorrect; padding can be "=", "==", or non-existent. For example, this statement: {code:java} Base64.urlEncoderNoPadding().encodeToString( "{\"alg\":\"none\"}".getBytes(StandardCharsets.UTF_8));{code} Yields the following incorrect result (because the padding on the Base64-encoded value is "=" rather than the assumed "==", an extra character is trimmed): {{eyJhbGciOiJub25lIn}} The correct value is: {{eyJhbGciOiJub25lIn0}} There is also no Base64.urlDecoder() method, which aside from providing useful functionality would also make it easy to write a unit test (there currently is none). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
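A padding-safe version of the trimming logic might look like the sketch below. UrlSafeBase64 is a hypothetical name, and java.util.Base64.getEncoder() stands in here for DatatypeConverter.printBase64Binary (both produce the same padded, standard-alphabet output); the point is stripping however many trailing '=' characters are actually present instead of assuming exactly two:

```java
import java.util.Base64;

// Sketch of a padding-safe no-padding URL encoder. Padding may be "", "=",
// or "==" depending on input length mod 3, so we strip whatever is there.
public final class UrlSafeBase64 {
    private UrlSafeBase64() {}

    public static String encodeNoPadding(byte[] bytes) {
        // Standard padded encoding, then translate to the URL-safe alphabet.
        String padded = Base64.getEncoder().encodeToString(bytes)
                .replace('+', '-')
                .replace('/', '_');
        // Strip 0, 1, or 2 trailing '=' characters, never a data character.
        int end = padded.length();
        while (end > 0 && padded.charAt(end - 1) == '=') {
            end--;
        }
        return padded.substring(0, end);
    }
}
```

With this logic the example from the issue yields the correct {{eyJhbGciOiJub25lIn0}}, since only the single '=' is removed.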
[jira] [Comment Edited] (KAFKA-5846) Use singleton NoOpConsumerRebalanceListener in subscribe() call where listener is not specified
[ https://issues.apache.org/jira/browse/KAFKA-5846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16191405#comment-16191405 ] Ted Yu edited comment on KAFKA-5846 at 1/21/18 5:59 PM: Patch looks good to me. was (Author: yuzhih...@gmail.com): Patch looks good. > Use singleton NoOpConsumerRebalanceListener in subscribe() call where > listener is not specified > --- > > Key: KAFKA-5846 > URL: https://issues.apache.org/jira/browse/KAFKA-5846 > Project: Kafka > Issue Type: Task >Reporter: Ted Yu >Assignee: Kamal Chandraprakash >Priority: Minor > > Currently KafkaConsumer creates instance of NoOpConsumerRebalanceListener for > each subscribe() call where ConsumerRebalanceListener is not specified: > {code} > public void subscribe(Pattern pattern) { > subscribe(pattern, new NoOpConsumerRebalanceListener()); > {code} > We can create a singleton NoOpConsumerRebalanceListener to be used in such > scenarios. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
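The change proposed above is the standard singleton idiom. The sketch below uses hypothetical names rather than Kafka's actual ConsumerRebalanceListener interface, just to illustrate the pattern:

```java
// Sketch of the proposal: hold one shared no-op listener instead of
// allocating a fresh instance on every subscribe() call.
public final class NoOpRebalanceListener {
    // Single shared instance; the listener is stateless, so reuse is safe
    // across consumers and threads.
    public static final NoOpRebalanceListener INSTANCE = new NoOpRebalanceListener();

    private NoOpRebalanceListener() {}

    public void onPartitionsRevoked() { /* intentionally no-op */ }

    public void onPartitionsAssigned() { /* intentionally no-op */ }
}
```

Call sites would then pass NoOpRebalanceListener.INSTANCE where no listener is specified, avoiding one small allocation per subscribe() call.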
[jira] [Commented] (KAFKA-5540) Deprecate and remove internal converter configs
[ https://issues.apache.org/jira/browse/KAFKA-5540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16333530#comment-16333530 ] Umesh Chaudhary commented on KAFKA-5540: Sure [~rhauch], I'll send the PR within this week. > Deprecate and remove internal converter configs > --- > > Key: KAFKA-5540 > URL: https://issues.apache.org/jira/browse/KAFKA-5540 > Project: Kafka > Issue Type: Bug > Components: KafkaConnect >Affects Versions: 0.11.0.0 >Reporter: Ewen Cheslack-Postava >Priority: Major > Labels: needs-kip > > The internal.key.converter and internal.value.converter were originally exposed > as configs because a) they are actually pluggable and b) providing a default > would require relying on the JsonConverter always being available, which, > until we had classloader isolation, could possibly have been removed for > compatibility reasons. > However, this has ultimately just caused a lot more trouble and confusion > than it is worth. We should deprecate the configs, give them a default of > JsonConverter (which is also kind of nice since it results in human-readable > data in the internal topics), and then ultimately remove them in the next > major version. > These are all public APIs so this will need a small KIP before we can make > the change. -- This message was sent by Atlassian JIRA (v7.6.3#76005)