[jira] [Resolved] (KAFKA-7813) JmxTool throws NPE when --object-name is omitted

2019-03-17 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7813.
--
   Resolution: Fixed
Fix Version/s: 2.3.0

Issue resolved by pull request 6139
[https://github.com/apache/kafka/pull/6139]

> JmxTool throws NPE when --object-name is omitted
> 
>
> Key: KAFKA-7813
> URL: https://issues.apache.org/jira/browse/KAFKA-7813
> Project: Kafka
>  Issue Type: Bug
>Reporter: Attila Sasvari
>Assignee: huxihx
>Priority: Minor
> Fix For: 2.3.0
>
>
> Running the JMX tool without the --object-name parameter results in a 
> NullPointerException:
> {code}
> $ bin/kafka-run-class.sh kafka.tools.JmxTool  --jmx-url 
> service:jmx:rmi:///jndi/rmi://127.0.0.1:/jmxrmi
> ...
> Exception in thread "main" java.lang.NullPointerException
>   at kafka.tools.JmxTool$$anonfun$3.apply(JmxTool.scala:143)
>   at kafka.tools.JmxTool$$anonfun$3.apply(JmxTool.scala:143)
>   at 
> scala.collection.LinearSeqOptimized$class.exists(LinearSeqOptimized.scala:93)
>   at scala.collection.immutable.List.exists(List.scala:84)
>   at kafka.tools.JmxTool$.main(JmxTool.scala:143)
>   at kafka.tools.JmxTool.main(JmxTool.scala)
> {code} 
> Documentation of the tool says:
> {code}
> --object-name   A JMX object name to use as a query. This can contain wild
>                 cards, and this option can be given multiple times to specify
>                 more than one query. If no objects are specified all objects
>                 will be queried.
> {code}
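> The documented fall-back (query everything when no name is given) maps onto 
> plain JMX, where a null ObjectName pattern matches all registered MBeans. A 
> minimal illustrative sketch using only the standard javax.management API (the 
> URL and port below are placeholders), not the actual JmxTool code:
> {code:java}
> import java.util.Set;
> import javax.management.MBeanServerConnection;
> import javax.management.ObjectName;
> import javax.management.remote.JMXConnector;
> import javax.management.remote.JMXConnectorFactory;
> import javax.management.remote.JMXServiceURL;
>
> public class QueryAllMBeans {
>     public static void main(String[] args) throws Exception {
>         // Placeholder URL; substitute the process's actual JMX port.
>         JMXServiceURL url = new JMXServiceURL(
>                 "service:jmx:rmi:///jndi/rmi://127.0.0.1:9999/jmxrmi");
>         try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
>             MBeanServerConnection conn = connector.getMBeanServerConnection();
>             // A null ObjectName pattern matches every registered MBean.
>             Set<ObjectName> names = conn.queryNames(null, null);
>             names.forEach(System.out::println);
>         }
>     }
> }
> {code}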
> Running the tool with {{--object-name ''}} also results in an NPE:
> {code}
> $ bin/kafka-run-class.sh kafka.tools.JmxTool  --jmx-url 
> service:jmx:rmi:///jndi/rmi://127.0.0.1:/jmxrmi --object-name ''
> ...
> Exception in thread "main" java.lang.NullPointerException
>   at kafka.tools.JmxTool$.main(JmxTool.scala:197)
>   at kafka.tools.JmxTool.main(JmxTool.scala)
> {code}
> Running the tool with --object-name but without an argument fails with an 
> OptionMissingRequiredArgumentException:
> {code}
> $ bin/kafka-run-class.sh kafka.tools.JmxTool  --jmx-url 
> service:jmx:rmi:///jndi/rmi://127.0.0.1:/jmxrmi --object-name 
> Exception in thread "main" joptsimple.OptionMissingRequiredArgumentException: 
> Option object-name requires an argument
>   at 
> joptsimple.RequiredArgumentOptionSpec.detectOptionArgument(RequiredArgumentOptionSpec.java:48)
>   at 
> joptsimple.ArgumentAcceptingOptionSpec.handleOption(ArgumentAcceptingOptionSpec.java:257)
>   at joptsimple.OptionParser.handleLongOptionToken(OptionParser.java:513)
>   at 
> joptsimple.OptionParserState$2.handleArgument(OptionParserState.java:56)
>   at joptsimple.OptionParser.parse(OptionParser.java:396)
>   at kafka.tools.JmxTool$.main(JmxTool.scala:104)
>   at kafka.tools.JmxTool.main(JmxTool.scala)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7834) Extend collected logs in system test services to include heap dumps

2019-02-04 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7834.
--
   Resolution: Fixed
Fix Version/s: (was: 1.1.2)
   (was: 3.0.0)
   (was: 1.0.3)
   2.3.0

Issue resolved by pull request 6158
[https://github.com/apache/kafka/pull/6158]

> Extend collected logs in system test services to include heap dumps
> ---
>
> Key: KAFKA-7834
> URL: https://issues.apache.org/jira/browse/KAFKA-7834
> Project: Kafka
>  Issue Type: Improvement
>  Components: system tests
>Reporter: Konstantine Karantasis
>Assignee: Konstantine Karantasis
>Priority: Major
> Fix For: 2.3.0, 2.2.0, 2.0.2
>
>
> Overall I'd suggest enabling by default: 
> {{-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=}}
> in the major system test services, so that a heap dump is captured on OOM. 
> Given these flags, we should also extend the set of collected logs in each 
> service to include the predetermined filename for the heap dump. 
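> As a concrete illustration, the flags could be injected through KAFKA_OPTS; 
> the path here is hypothetical and each service would substitute its own 
> collected-logs directory:
> {code}
> export KAFKA_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/mnt/kafka/heap_dump.bin"
> {code}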



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7461) Connect Values converter should have coverage of logical types

2019-01-14 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7461.
--
   Resolution: Fixed
Fix Version/s: 2.2.0

Issue resolved by pull request 6077
[https://github.com/apache/kafka/pull/6077]

> Connect Values converter should have coverage of logical types
> --
>
> Key: KAFKA-7461
> URL: https://issues.apache.org/jira/browse/KAFKA-7461
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 1.1.1, 2.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Andrew Schofield
>Priority: Blocker
>  Labels: newbie, test
> Fix For: 2.2.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Per fix from KAFKA-7460, we've got some gaps in testing for the Values 
> converter added in KIP-145, in particular for logical types. It looks like 
> there are a few other gaps (e.g. from quick scan of coverage, maybe the float 
> types as well), but logical types seem to be the bulk other than trivial 
> wrapper methods.
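> One shape such a coverage test could take, sketched against the public 
> Values/Decimal API (assertions elided; a real test would assert on the 
> recovered schema and value):
> {code:java}
> import java.math.BigDecimal;
> import org.apache.kafka.connect.data.Decimal;
> import org.apache.kafka.connect.data.Schema;
> import org.apache.kafka.connect.data.SchemaAndValue;
> import org.apache.kafka.connect.data.Values;
>
> public class ValuesLogicalTypeRoundTrip {
>     public static void main(String[] args) {
>         Schema decimalSchema = Decimal.schema(2); // logical type carried on BYTES
>         BigDecimal input = new BigDecimal("12.34");
>         // Serialize through the Values utility, then parse it back.
>         String serialized = Values.convertToString(decimalSchema, input);
>         SchemaAndValue roundTrip = Values.parseString(serialized);
>         System.out.println(serialized + " -> " + roundTrip.value());
>     }
> }
> {code}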



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7503) Integration Test Framework for Connect

2019-01-14 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7503.
--
   Resolution: Fixed
Fix Version/s: 2.0.2
   2.1.1
   2.2.0

Issue resolved by pull request 5516
[https://github.com/apache/kafka/pull/5516]

> Integration Test Framework for Connect
> --
>
> Key: KAFKA-7503
> URL: https://issues.apache.org/jira/browse/KAFKA-7503
> Project: Kafka
>  Issue Type: Task
>  Components: KafkaConnect
>Reporter: Arjun Satish
>Assignee: Arjun Satish
>Priority: Minor
> Fix For: 2.2.0, 2.1.1, 2.0.2
>
>
> Implement a framework which enables writing and executing integration tests 
> against real connect workers and kafka brokers. The worker and brokers would 
> run within the same process the test is running (which is similar to how 
> integration tests are written in Streams and Core). The complexity of these 
> tests would lie somewhere between unit tests and system tests. The main 
> utility is to be able to run end-to-end tests within the java test framework, 
> and facilitate development of large features which could modify many parts of 
> the framework.
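> A rough sketch of how a test against this framework might look, assuming the 
> EmbeddedConnectCluster helper introduced by the PR (method names are from 
> memory and may differ from the merged API):
> {code:java}
> import org.apache.kafka.connect.util.clusters.EmbeddedConnectCluster;
>
> public class ConnectIntegrationSketch {
>     public static void main(String[] args) throws Exception {
>         // Workers and brokers run in the test process itself.
>         EmbeddedConnectCluster connect = new EmbeddedConnectCluster.Builder()
>                 .name("connect-it-cluster")
>                 .numWorkers(2)
>                 .build();
>         connect.start();
>         try {
>             connect.kafka().createTopic("test-topic");
>             // ... configure a connector and assert on produced/consumed records ...
>         } finally {
>             connect.stop();
>         }
>     }
> }
> {code}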



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7551) Refactor to create both producer & consumer in Worker

2018-11-29 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7551.
--
Resolution: Fixed

Merged [https://github.com/apache/kafka/pull/5842], my bad for screwing up 
closing the Jira along with the fix...

> Refactor to create both producer & consumer in Worker
> -
>
> Key: KAFKA-7551
> URL: https://issues.apache.org/jira/browse/KAFKA-7551
> Project: Kafka
>  Issue Type: Task
>  Components: KafkaConnect
>Reporter: Magesh kumar Nandakumar
>Assignee: Magesh kumar Nandakumar
>Priority: Minor
> Fix For: 2.2.0
>
>
> In distributed mode,  the producer is created in the Worker and the consumer 
> is created in the WorkerSinkTask. The proposal is to refactor it so that both 
> of them are created in Worker. This will not affect any functionality and is 
> just a refactoring to make the code consistent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7620) ConfigProvider is broken for KafkaConnect when TTL is not null

2018-11-27 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7620.
--
   Resolution: Fixed
Fix Version/s: 2.0.2
   2.1.1
   2.2.0

Issue resolved by pull request 5914
[https://github.com/apache/kafka/pull/5914]

> ConfigProvider is broken for KafkaConnect when TTL is not null
> --
>
> Key: KAFKA-7620
> URL: https://issues.apache.org/jira/browse/KAFKA-7620
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.0.0, 2.0.1
>Reporter: Ye Ji
>Assignee: Robert Yokota
>Priority: Major
> Fix For: 2.2.0, 2.1.1, 2.0.2
>
>
> If the ConfigData returned by ConfigProvider.get implementations has a non-null 
> and non-negative ttl, it will trigger infinite recursion; here is an excerpt 
> of the stack trace:
> {code:java}
> at 
> org.apache.kafka.connect.runtime.WorkerConfigTransformer.scheduleReload(WorkerConfigTransformer.java:62)
>   at 
> org.apache.kafka.connect.runtime.WorkerConfigTransformer.scheduleReload(WorkerConfigTransformer.java:56)
>   at 
> org.apache.kafka.connect.runtime.WorkerConfigTransformer.transform(WorkerConfigTransformer.java:49)
>   at 
> org.apache.kafka.connect.runtime.distributed.ClusterConfigState.connectorConfig(ClusterConfigState.java:121)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.connectorConfigReloadAction(DistributedHerder.java:648)
>   at 
> org.apache.kafka.connect.runtime.WorkerConfigTransformer.scheduleReload(WorkerConfigTransformer.java:62)
>   at 
> org.apache.kafka.connect.runtime.WorkerConfigTransformer.scheduleReload(WorkerConfigTransformer.java:56)
>   at 
> org.apache.kafka.connect.runtime.WorkerConfigTransformer.transform(WorkerConfigTransformer.java:49)
> {code}
> Basically:
> 1) if a non-null ttl is returned from the config provider, the connect runtime 
> will try to schedule a reload in the future;
> 2) the scheduleReload function reads the config again to see whether it is a 
> restart or not, by calling 
> org.apache.kafka.connect.runtime.WorkerConfigTransformer.transform to 
> transform the config;
> 3) the transform function calls the config provider and gets a non-null ttl, 
> causing scheduleReload to be called again, and we are back to step 1.
> To reproduce, simply fork the provided 
> [FileConfigProvider|https://github.com/apache/kafka/blob/3cdc78e6bb1f83973a14ce1550fe3874f7348b05/clients/src/main/java/org/apache/kafka/common/config/provider/FileConfigProvider.java],
>  and add a non-negative ttl to the ConfigData returned by the get functions.
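> A toy provider along those lines (an illustrative sketch, not the actual repro 
> code), which always reports a 1-second TTL and is therefore enough to trigger 
> the reload/transform cycle above:
> {code:java}
> import java.util.Collections;
> import java.util.Map;
> import java.util.Set;
> import org.apache.kafka.common.config.ConfigData;
> import org.apache.kafka.common.config.provider.ConfigProvider;
>
> public class TtlConfigProvider implements ConfigProvider {
>     @Override
>     public ConfigData get(String path) {
>         // Non-null TTL: the Connect runtime schedules a reload in 1s.
>         return new ConfigData(Collections.singletonMap("secret", "value"), 1000L);
>     }
>
>     @Override
>     public ConfigData get(String path, Set<String> keys) {
>         return get(path);
>     }
>
>     @Override
>     public void configure(Map<String, ?> configs) { }
>
>     @Override
>     public void close() { }
> }
> {code}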



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7560) PushHttpMetricsReporter should not convert metric value to double

2018-11-07 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7560.
--
   Resolution: Fixed
Fix Version/s: 2.1.0
   2.2.0

Issue resolved by pull request 5886
[https://github.com/apache/kafka/pull/5886]

> PushHttpMetricsReporter should not convert metric value to double
> -
>
> Key: KAFKA-7560
> URL: https://issues.apache.org/jira/browse/KAFKA-7560
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Stanislav Kozlovski
>Assignee: Dong Lin
>Priority: Blocker
> Fix For: 2.2.0, 2.1.0
>
>
> Currently PushHttpMetricsReporter will convert the value from 
> KafkaMetric.metricValue() to double. This will not work for non-numerical 
> metrics such as version in AppInfoParser, whose value can be a string. This has 
> caused an issue for PushHttpMetricsReporter, which in turn caused the system 
> test kafkatest.tests.client.quota_test.QuotaTest.test_quota to fail with the 
> following exception:
> {code:java}
>  File "/opt/kafka-dev/tests/kafkatest/tests/client/quota_test.py", line 196, 
> in validate     metric.value for k, metrics in 
> producer.metrics(group='producer-metrics', name='outgoing-byte-rate', 
> client_id=producer.client_id) for metric in metrics ValueError: max() arg is 
> an empty sequence
> {code}
> Since we allow metric value to be object, PushHttpMetricsReporter should also 
> read metric value as object and pass it to the http server.
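> The direction of the fix, sketched as a hypothetical helper (not the merged 
> code): report whatever Object the metric yields instead of forcing a double.
> {code:java}
> import org.apache.kafka.common.metrics.KafkaMetric;
>
> public class MetricValues {
>     // metricValue() returns Object: a Double for measurable metrics, but a
>     // String for e.g. the app-info "version" metric, so pass it through as-is.
>     static Object reportableValue(KafkaMetric metric) {
>         return metric.metricValue();
>     }
> }
> {code}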



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6490) JSON SerializationException Stops Connect

2018-10-20 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6490.
--
   Resolution: Fixed
Fix Version/s: 2.0.0

Closing as this is effectively fixed by 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-298%3A+Error+Handling+in+Connect]
 which allows you to configure how errors are handled, and should apply to 
errors in Converters, Transformations, and Connectors.

> JSON SerializationException Stops Connect
> -
>
> Key: KAFKA-6490
> URL: https://issues.apache.org/jira/browse/KAFKA-6490
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.0
>Reporter: William R. Speirs
>Assignee: Prasanna Subburaj
>Priority: Major
> Fix For: 2.0.0
>
> Attachments: KAFKA-6490_v1.patch
>
>
> If you configure KafkaConnect to parse JSON messages and you send it a 
> non-JSON message, the SerializationException will bubble up to the top and 
> stop KafkaConnect. While I understand sending non-JSON to a JSON serializer 
> is a bad idea, I think that a single malformed message stopping all of 
> KafkaConnect is even worse.
> The data exception is thrown here: 
> [https://github.com/apache/kafka/blob/trunk/connect/json/src/main/java/org/apache/kafka/connect/json/JsonConverter.java#L305]
>  
> From the call here: 
> [https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/WorkerSinkTask.java#L476]
> This bubbles all the way up to the top, and KafkaConnect simply stops with 
> the message: {{ERROR WorkerSinkTask\{id=elasticsearch-sink-0} Task threw an 
> uncaught and unrecoverable exception 
> (org.apache.kafka.connect.runtime.WorkerTask:172)}}
> Thoughts on adding a {{try/catch}} around the {{for}} loop in 
> WorkerSinkTask's {{convertMessages}} so messages that don't properly parse 
> are logged, but simply ignored? This way KafkaConnect can keep working even 
> when it encounters a message it cannot decode?
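> With KIP-298 this behaviour is now configurable per connector instead of 
> requiring the try/catch above; for example (the dead letter queue topic name 
> here is illustrative):
> {code}
> errors.tolerance=all
> errors.log.enable=true
> errors.log.include.messages=true
> errors.deadletterqueue.topic.name=my-connector-errors
> {code}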



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5117) Kafka Connect REST endpoints reveal Password typed values

2018-10-05 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5117.
--
   Resolution: Duplicate
 Assignee: Ewen Cheslack-Postava
Fix Version/s: 2.0.0

> Kafka Connect REST endpoints reveal Password typed values
> -
>
> Key: KAFKA-5117
> URL: https://issues.apache.org/jira/browse/KAFKA-5117
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Thomas Holmes
>Assignee: Ewen Cheslack-Postava
>Priority: Major
>  Labels: needs-kip
> Fix For: 2.0.0
>
>
> A Kafka Connect connector can specify ConfigDef keys as type of Password. 
> This type was added to prevent logging the values (instead "[hidden]" is 
> logged).
> This change does not apply to the values returned by executing a GET on 
> {{connectors/\{connector-name\}}} and 
> {{connectors/\{connector-name\}/config}}. This creates an easily accessible 
> way for an attacker who has infiltrated your network to gain access to 
> potential secrets that should not be available.
> I have started on a code change that addresses this issue by parsing the 
> config values through the ConfigDef for the connector and returning their 
> output instead (which leads to the masking of Password typed configs as 
> [hidden]).
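> The masking behaviour being relied on, shown with the real ConfigDef/Password 
> types (a minimal sketch, not the patch itself):
> {code:java}
> import org.apache.kafka.common.config.ConfigDef;
> import org.apache.kafka.common.config.ConfigDef.Importance;
> import org.apache.kafka.common.config.ConfigDef.Type;
> import org.apache.kafka.common.config.types.Password;
>
> public class PasswordMasking {
>     public static void main(String[] args) {
>         // Declaring the key as PASSWORD makes parsed values come back as
>         // Password objects instead of plain strings.
>         ConfigDef def = new ConfigDef()
>                 .define("connection.password", Type.PASSWORD, Importance.HIGH, "DB password");
>         Password p = new Password("actual-secret");
>         System.out.println(p);          // prints [hidden]
>         System.out.println(p.value());  // real value, only on explicit access
>     }
> }
> {code}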



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7476) SchemaProjector is not properly handling Date-based logical types

2018-10-04 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7476.
--
   Resolution: Fixed
Fix Version/s: 2.1.0
   0.10.2.3
   2.0.1
   0.9.0.2
   1.0.3
   0.11.0.4
   0.10.1.2
   0.10.0.2
   2.2.0
   1.1.2

Issue resolved by pull request 5736
[https://github.com/apache/kafka/pull/5736]

> SchemaProjector is not properly handling Date-based logical types
> -
>
> Key: KAFKA-7476
> URL: https://issues.apache.org/jira/browse/KAFKA-7476
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Robert Yokota
>Assignee: Robert Yokota
>Priority: Major
> Fix For: 1.1.2, 2.2.0, 0.10.0.2, 0.10.1.2, 0.11.0.4, 1.0.3, 
> 0.9.0.2, 2.0.1, 0.10.2.3, 2.1.0
>
>
> SchemaProjector is not properly handling Date-based logical types.  An 
> exception of the following form is thrown:  
> {{Caused by: java.lang.ClassCastException: java.util.Date cannot be cast to 
> java.lang.Number}}
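> A minimal sketch of the failing path (on fixed versions this prints the 
> projected Date instead of throwing):
> {code:java}
> import java.util.Date;
> import org.apache.kafka.connect.data.Schema;
> import org.apache.kafka.connect.data.SchemaProjector;
> import org.apache.kafka.connect.data.Timestamp;
>
> public class ProjectTimestamp {
>     public static void main(String[] args) {
>         Schema source = Timestamp.SCHEMA; // logical type carried on INT64
>         Schema target = Timestamp.SCHEMA;
>         // Before the fix, the java.util.Date payload hit a Number cast
>         // inside SchemaProjector and threw ClassCastException.
>         Object projected = SchemaProjector.project(source, new Date(), target);
>         System.out.println(projected);
>     }
> }
> {code}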



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7460) Connect Values converter uses incorrect date format string

2018-09-30 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7460.
--
   Resolution: Fixed
Fix Version/s: 2.1.0
   2.0.1
   1.1.2

Issue resolved by pull request 5718
[https://github.com/apache/kafka/pull/5718]

> Connect Values converter uses incorrect date format string
> --
>
> Key: KAFKA-7460
> URL: https://issues.apache.org/jira/browse/KAFKA-7460
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.1.0, 1.1.1, 2.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Blocker
> Fix For: 1.1.2, 2.0.1, 2.1.0
>
>
> Discovered in KAFKA-6684, the converter is using week date year (YYYY) 
> instead of plain year (yyyy) and day in year (DD) instead of date in month 
> (dd).
> Filing this so we have independent tracking of the issue since it was only 
> tangentially related and discovered in that issue.
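> The pitfall is easy to demonstrate with SimpleDateFormat itself; for 
> 2018-12-31 the week-year/day-in-year pattern already reports year 2019:
> {code:java}
> import java.text.SimpleDateFormat;
> import java.util.Calendar;
> import java.util.Locale;
>
> public class WeekYearPitfall {
>     public static void main(String[] args) {
>         Calendar cal = Calendar.getInstance(Locale.US);
>         cal.set(2018, Calendar.DECEMBER, 31);
>         // "YYYY" is the week year and "DD" the day in year...
>         System.out.println(new SimpleDateFormat("YYYY-MM-DD", Locale.US).format(cal.getTime())); // 2019-12-365
>         // ...while "yyyy"/"dd" give the intended calendar fields.
>         System.out.println(new SimpleDateFormat("yyyy-MM-dd", Locale.US).format(cal.getTime())); // 2018-12-31
>     }
> }
> {code}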



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7461) Connect Values converter should have coverage of logical types

2018-09-30 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-7461:


 Summary: Connect Values converter should have coverage of logical 
types
 Key: KAFKA-7461
 URL: https://issues.apache.org/jira/browse/KAFKA-7461
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Affects Versions: 2.0.0, 1.1.1
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava


Per fix from KAFKA-7460, we've got some gaps in testing for the Values 
converter added in KIP-145, in particular for logical types. It looks like 
there are a few other gaps (e.g. from quick scan of coverage, maybe the float 
types as well), but logical types seem to be the bulk other than trivial 
wrapper methods.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7460) Connect Values converter uses incorrect date format string

2018-09-30 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-7460:


 Summary: Connect Values converter uses incorrect date format string
 Key: KAFKA-7460
 URL: https://issues.apache.org/jira/browse/KAFKA-7460
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.0.0, 1.1.1, 1.1.0
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava


Discovered in KAFKA-6684, the converter is using week date year (YYYY) instead 
of plain year (yyyy) and day in year (DD) instead of date in month (dd).

Filing this so we have independent tracking of the issue since it was only 
tangentially related and discovered in that issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7434) DeadLetterQueueReporter throws NPE if transform throws NPE

2018-09-29 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7434.
--
   Resolution: Fixed
Fix Version/s: 2.1.0
   2.0.1

Issue resolved by pull request 5700
[https://github.com/apache/kafka/pull/5700]

> DeadLetterQueueReporter throws NPE if transform throws NPE
> --
>
> Key: KAFKA-7434
> URL: https://issues.apache.org/jira/browse/KAFKA-7434
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.0.0
> Environment: jdk 8
>Reporter: Michal Borowiecki
>Assignee: Michal Borowiecki
>Priority: Major
> Fix For: 2.0.1, 2.1.0
>
>
> An NPE thrown from a transform in a connector configured with
> errors.deadletterqueue.context.headers.enable=true
> causes DeadLetterQueueReporter to break with an NPE of its own.
> {code}
> Executing stage 'TRANSFORMATION' with class 
> 'org.apache.kafka.connect.transforms.Flatten$Value', where consumed record is 
> {topic='', partition=1, offset=0, timestamp=1537370573366, 
> timestampType=CreateTime}. 
> (org.apache.kafka.connect.runtime.errors.LogReporter)
> java.lang.NullPointerException
> Task threw an uncaught and unrecoverable exception 
> (org.apache.kafka.connect.runtime.WorkerTask)
> java.lang.NullPointerException
>   at 
> org.apache.kafka.connect.runtime.errors.DeadLetterQueueReporter.toBytes(DeadLetterQueueReporter.java:202)
>   at 
> org.apache.kafka.connect.runtime.errors.DeadLetterQueueReporter.populateContextHeaders(DeadLetterQueueReporter.java:172)
>   at 
> org.apache.kafka.connect.runtime.errors.DeadLetterQueueReporter.report(DeadLetterQueueReporter.java:146)
>   at 
> org.apache.kafka.connect.runtime.errors.ProcessingContext.report(ProcessingContext.java:137)
>   at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:108)
>   at 
> org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:44)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:532)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:490)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:225)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)
>   at 
> org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
>   at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
>  
> This is caused by populateContextHeaders only checking if the Throwable is 
> not null, but not checking that the message in the Throwable is not null 
> before trying to serialize the message:
> [https://github.com/apache/kafka/blob/cfd33b313c9856ae2b4b45ed3d4aac41d6ef5a6b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/errors/DeadLetterQueueReporter.java#L170-L177]
> {code:java}
> if (context.error() != null) {
>      headers.add(ERROR_HEADER_EXCEPTION, 
> toBytes(context.error().getClass().getName()));
>      headers.add(ERROR_HEADER_EXCEPTION_MESSAGE, 
> toBytes(context.error().getMessage()));
> {code}
> toBytes throws an NPE if passed null as the parameter.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-4932) Add UUID Serde

2018-09-09 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-4932.
--
Resolution: Fixed

Issue resolved by pull request 4438
[https://github.com/apache/kafka/pull/4438]

> Add UUID Serde
> --
>
> Key: KAFKA-4932
> URL: https://issues.apache.org/jira/browse/KAFKA-4932
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients, streams
>Reporter: Jeff Klukas
>Assignee: Brandon Kirchner
>Priority: Minor
>  Labels: needs-kip, newbie
> Fix For: 2.1.0
>
>
> I propose adding serializers and deserializers for the java.util.UUID class.
> I have many use cases where I want to set the key of a Kafka message to be a 
> UUID. Currently, I need to turn UUIDs into strings or byte arrays and use 
> their associated Serdes, but it would be more convenient to serialize and 
> deserialize UUIDs directly.
> I'd propose that the serializer and deserializer use the 36-byte string 
> representation, calling UUID.toString and UUID.fromString, and then using the 
> existing StringSerializer / StringDeserializer to finish the job. We would 
> also wrap these in a Serde and modify the streams Serdes class to include 
> this in the list of supported types.
> Optionally, we could have the deserializer support a 16-byte representation 
> and it would check the size of the input byte array to determine whether it's 
> a binary or string representation of the UUID. It's not well defined whether 
> the most significant bits or least significant go first, so this deserializer 
> would have to support only one or the other.
> Similarly, if the deserializer supported a 16-byte representation, there could 
> be two variants of the serializer, a UUIDStringSerializer and a 
> UUIDBytesSerializer.
> I would be willing to write this PR, but am looking for feedback about 
> whether there are significant concerns here around ambiguity of what the byte 
> representation of a UUID should be, or if there's desire to keep the list of 
> built-in Serdes minimal such that a PR would be unlikely to be accepted.
> KIP Link: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-206%3A+Add+support+for+UUID+serialization+and+deserialization
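> A sketch of the string-based variant described above (illustrative only, not 
> the merged KIP-206 code):
> {code:java}
> import java.nio.charset.StandardCharsets;
> import java.util.Map;
> import java.util.UUID;
> import org.apache.kafka.common.serialization.Deserializer;
> import org.apache.kafka.common.serialization.Serializer;
>
> public class UUIDStringSerializer implements Serializer<UUID> {
>     @Override public void configure(Map<String, ?> configs, boolean isKey) { }
>
>     @Override
>     public byte[] serialize(String topic, UUID data) {
>         // 36-character canonical form, encoded as UTF-8.
>         return data == null ? null : data.toString().getBytes(StandardCharsets.UTF_8);
>     }
>
>     @Override public void close() { }
> }
>
> class UUIDStringDeserializer implements Deserializer<UUID> {
>     @Override public void configure(Map<String, ?> configs, boolean isKey) { }
>
>     @Override
>     public UUID deserialize(String topic, byte[] data) {
>         return data == null ? null : UUID.fromString(new String(data, StandardCharsets.UTF_8));
>     }
>
>     @Override public void close() { }
> }
> {code}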



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7353) Connect logs 'this' for anonymous inner classes

2018-09-05 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7353.
--
   Resolution: Fixed
Fix Version/s: 2.1.0
   2.0.1
   1.0.3
   1.1.2

Issue resolved by pull request 5583
[https://github.com/apache/kafka/pull/5583]

> Connect logs 'this' for anonymous inner classes
> ---
>
> Key: KAFKA-7353
> URL: https://issues.apache.org/jira/browse/KAFKA-7353
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.2, 1.1.1, 2.0.0
>Reporter: Kevin Lafferty
>Priority: Minor
> Fix For: 1.1.2, 1.0.3, 2.0.1, 2.1.0
>
>
> Some classes in the Kafka Connect runtime create anonymous inner classes that 
> log 'this', resulting in log messages that can't be correlated with any other 
> messages. These should scope 'this' to the outer class to have consistent log 
> messages.
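> For illustration, a minimal before/after of the pattern (hypothetical class 
> names):
> {code:java}
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
>
> public class OuterTask {
>     private static final Logger log = LoggerFactory.getLogger(OuterTask.class);
>
>     public Runnable newCallback() {
>         return new Runnable() {
>             @Override
>             public void run() {
>                 // 'this' is the anonymous Runnable (e.g. OuterTask$1@1a2b3c),
>                 // which correlates with nothing else in the logs.
>                 log.info("Finished {}", this);
>                 // Scoping to the outer instance keeps log lines consistent.
>                 log.info("Finished {}", OuterTask.this);
>             }
>         };
>     }
> }
> {code}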



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7242) Externalized secrets are revealed in task configuration

2018-08-28 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7242.
--
   Resolution: Fixed
Fix Version/s: 2.1.0
   2.0.1

Issue resolved by pull request 5475
[https://github.com/apache/kafka/pull/5475]

> Externalized secrets are revealed in task configuration
> ---
>
> Key: KAFKA-7242
> URL: https://issues.apache.org/jira/browse/KAFKA-7242
> Project: Kafka
>  Issue Type: Bug
>Reporter: Bahdan Siamionau
>Assignee: Robert Yokota
>Priority: Major
> Fix For: 2.0.1, 2.1.0
>
>
> Trying to use the new [externalized 
> secrets|https://issues.apache.org/jira/browse/KAFKA-6886] feature, I noticed 
> that the task configuration is being saved in the config topic with disclosed 
> secrets. It seems like the main goal of the feature was not achieved - secrets 
> are still persisted in plain text. Probably I'm misusing this new config, 
> please correct me if I'm wrong.
> I'm running connect in distributed mode, creating connector with following 
> config:
> {code:java}
> {
>   "name" : "jdbc-sink-test",
>   "config" : {
> "connector.class" : "io.confluent.connect.jdbc.JdbcSinkConnector",
> "tasks.max" : "1",
> "config.providers" : "file",
> "config.providers.file.class" : 
> "org.apache.kafka.common.config.provider.FileConfigProvider",
> "config.providers.file.param.secrets" : "/opt/mysecrets",
> "topics" : "test_topic",
> "connection.url" : "${file:/opt/mysecrets:url}",
> "connection.user" : "${file:/opt/mysecrets:user}",
> "connection.password" : "${file:/opt/mysecrets:password}",
> "insert.mode" : "upsert",
> "pk.mode" : "record_value",
> "pk.field" : "id"
>   }
> }
> {code}
> The connector works fine, placeholders are substituted with the correct values 
> from the file, but then the updated config is written into the topic again 
> (see the 3 following records in the config topic):
> {code:java}
> key: connector-jdbc-sink-test
> value:
> {
> "properties": {
> "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
> "tasks.max": "1",
> "config.providers": "file",
> "config.providers.file.class": 
> "org.apache.kafka.common.config.provider.FileConfigProvider",
> "config.providers.file.param.secrets": "/opt/mysecrets",
> "topics": "test_topic",
> "connection.url": "${file:/opt/mysecrets:url}",
> "connection.user": "${file:/opt/mysecrets:user}",
> "connection.password": "${file:/opt/mysecrets:password}",
> "insert.mode": "upsert",
> "pk.mode": "record_value",
> "pk.field": "id",
> "name": "jdbc-sink-test"
> }
> }
> key: task-jdbc-sink-test-0
> value:
> {
> "properties": {
> "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
> "config.providers.file.param.secrets": "/opt/mysecrets",
> "connection.password": "actualpassword",
> "tasks.max": "1",
> "topics": "test_topic",
> "config.providers": "file",
> "pk.field": "id",
> "task.class": "io.confluent.connect.jdbc.sink.JdbcSinkTask",
> "connection.user": "datawarehouse",
> "name": "jdbc-sink-test",
> "config.providers.file.class": 
> "org.apache.kafka.common.config.provider.FileConfigProvider",
> "connection.url": 
> "jdbc:postgresql://actualurl:5432/datawarehouse?stringtype=unspecified",
> "insert.mode": "upsert",
> "pk.mode": "record_value"
> }
> }
> key: commit-jdbc-sink-test
> value:
> {
> "tasks":1
> }
> {code}
> Please advise: have I misunderstood the goal of the feature, have I missed 
> something in the configuration, or is it actually a bug? Thank you



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7225) Kafka Connect ConfigProvider not invoked before validation

2018-08-07 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7225.
--
Resolution: Fixed

> Kafka Connect ConfigProvider not invoked before validation
> --
>
> Key: KAFKA-7225
> URL: https://issues.apache.org/jira/browse/KAFKA-7225
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.0.0
>Reporter: Nacho Munoz
>Assignee: Robert Yokota
>Priority: Minor
> Fix For: 2.0.1, 2.1.0
>
>
> When trying to register a JDBC connector with externalised secrets (e.g. 
> connection.password), the validation fails and the endpoint returns a 500. I 
> think the problem is that the config transformer is not being invoked before 
> the validation, so trying to exercise the credentials against the database 
> fails. I have checked that publishing the connector configuration directly to 
> the connect-config topic (skipping the validation) and restarting the server 
> is enough to get the connector working, which confirms that we are simply not 
> calling the config transformer before validating the connector. Please let me 
> know if you need further information.
> I'm happy to open a PR to address this issue given that I think that this is 
> easy enough to fix for a new contributor to the project. So please feel free 
> to assign the resolution of the bug to me.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7228) DeadLetterQueue throws a NullPointerException

2018-08-02 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7228.
--
Resolution: Fixed

> DeadLetterQueue throws a NullPointerException
> -
>
> Key: KAFKA-7228
> URL: https://issues.apache.org/jira/browse/KAFKA-7228
> Project: Kafka
>  Issue Type: Task
>  Components: KafkaConnect
>Reporter: Arjun Satish
>Assignee: Arjun Satish
>Priority: Major
> Fix For: 2.0.1, 2.1.0
>
>
> Using the dead letter queue results in an NPE:
> {code:java}
> [2018-08-01 07:41:08,907] ERROR WorkerSinkTask{id=s3-sink-yanivr-2-4} Task 
> threw an uncaught and unrecoverable exception 
> (org.apache.kafka.connect.runtime.WorkerTask)
> java.lang.NullPointerException
> at 
> org.apache.kafka.connect.runtime.errors.DeadLetterQueueReporter.report(DeadLetterQueueReporter.java:124)
> at 
> org.apache.kafka.connect.runtime.errors.ProcessingContext.report(ProcessingContext.java:137)
> at 
> org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:108)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:513)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:490)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:225)
> at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)
> at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
> at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> [2018-08-01 07:41:08,908] ERROR WorkerSinkTask{id=s3-sink-yanivr-2-4} Task is 
> being killed and will not recover until manually restarted 
> (org.apache.kafka.connect.runtime.WorkerTask)
> {code}
> The DLQ reporter does not get an {{errorHandlingMetrics}} object when 
> initialized through the WorkerSinkTask.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7068) ConfigTransformer doesn't handle null values

2018-06-17 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7068.
--
   Resolution: Fixed
Fix Version/s: 2.1.0

Issue resolved by pull request 5241
[https://github.com/apache/kafka/pull/5241]

> ConfigTransformer doesn't handle null values
> 
>
> Key: KAFKA-7068
> URL: https://issues.apache.org/jira/browse/KAFKA-7068
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.0.0
>Reporter: Magesh kumar Nandakumar
>Priority: Blocker
> Fix For: 2.0.0, 2.1.0
>
>
> ConfigTransformer fails with NPE when the input configs have keys with null 
> values. This is a blocker for 2.0.0 since connector configs can have null 
> values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7047) Connect isolation whitelist does not include SimpleHeaderConverter

2018-06-16 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7047.
--
   Resolution: Fixed
 Reviewer: Ewen Cheslack-Postava
Fix Version/s: 2.1.0
   1.1.1
   2.0.0

> Connect isolation whitelist does not include SimpleHeaderConverter
> --
>
> Key: KAFKA-7047
> URL: https://issues.apache.org/jira/browse/KAFKA-7047
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.1.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Critical
> Fix For: 2.0.0, 1.1.1, 2.1.0
>
>
> The SimpleHeaderConverter added in 1.1.0 was never added to the PluginUtils 
> whitelist, so this header converter is not loaded in isolation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7039) DelegatingClassLoader creates plugin instance even if its not Versioned

2018-06-16 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7039.
--
   Resolution: Fixed
Fix Version/s: 2.1.0

Issue resolved by pull request 5191
[https://github.com/apache/kafka/pull/5191]

> DelegatingClassLoader creates plugin instance even if its not Versioned
> ---
>
> Key: KAFKA-7039
> URL: https://issues.apache.org/jira/browse/KAFKA-7039
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.0.0
>Reporter: Magesh kumar Nandakumar
>Assignee: Magesh kumar Nandakumar
>Priority: Blocker
> Fix For: 2.0.0, 2.1.0
>
>
> The versioned interface was introduced as part of 
> [KIP-285|https://cwiki.apache.org/confluence/display/KAFKA/KIP-285%3A+Connect+Rest+Extension+Plugin].
>  DelegatingClassLoader is now attempting to create an instance of all the 
> plugins, even if it's not required.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7056) Connect's new numeric converters should be in a different package

2018-06-15 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7056.
--
   Resolution: Fixed
Fix Version/s: 2.1.0

Issue resolved by pull request 5222
[https://github.com/apache/kafka/pull/5222]

> Connect's new numeric converters should be in a different package
> -
>
> Key: KAFKA-7056
> URL: https://issues.apache.org/jira/browse/KAFKA-7056
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.0.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Critical
> Fix For: 2.0.0, 2.1.0
>
>
> KIP-305 added several new primitive converters, but placed them alongside 
> {{StringConverter}} in the {{...connect.storage}} package rather than 
> alongside {{ByteArrayConverter}} in the {{...connect.converters}} package. We 
> should move them to the {{converters}} package. See 
> https://github.com/apache/kafka/pull/5198 for a discussion.
> Need to also update the plugins whitelist (see KAFKA-7043).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7009) Mute logger for reflections.org at the warn level in system tests

2018-06-12 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7009.
--
   Resolution: Fixed
Fix Version/s: 2.1.0
   0.11.0.3
   0.10.2.2
   0.10.1.2
   0.10.0.2

Issue resolved by pull request 5151
[https://github.com/apache/kafka/pull/5151]

> Mute logger for reflections.org at the warn level in system tests
> -
>
> Key: KAFKA-7009
> URL: https://issues.apache.org/jira/browse/KAFKA-7009
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect, system tests
>Affects Versions: 1.1.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Critical
> Fix For: 0.10.0.2, 0.10.1.2, 0.10.2.2, 2.0.0, 0.11.0.3, 2.1.0
>
>
> AK's Log4J configuration file for Connect includes [these 
> lines|https://github.com/apache/kafka/blob/trunk/config/connect-log4j.properties#L25]:
> {code}
> log4j.logger.org.apache.zookeeper=ERROR
> log4j.logger.org.I0Itec.zkclient=ERROR
> log4j.logger.org.reflections=ERROR
> {code}
> The last one suppresses lots of Reflections warnings like the following that 
> are output during classpath scanning and are harmless:
> {noformat}
> [2018-06-06 13:52:39,448] WARN could not create Vfs.Dir from url. ignoring 
> the exception and continuing (org.reflections.Reflections)
> org.reflections.ReflectionsException: could not create Vfs.Dir from url, no 
> matching UrlType was found 
> [file:/usr/bin/../share/java/confluent-support-metrics/*]
> either use fromURL(final URL url, final List<UrlType> urlTypes) or use the 
> static setDefaultURLTypes(final List<UrlType> urlTypes) or 
> addDefaultURLTypes(UrlType urlType) with your specialized UrlType.
> at org.reflections.vfs.Vfs.fromURL(Vfs.java:109)
> at org.reflections.vfs.Vfs.fromURL(Vfs.java:91)
> at org.reflections.Reflections.scan(Reflections.java:240)
> at 
> org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader$InternalReflections.scan(DelegatingClassLoader.java:373)
> at org.reflections.Reflections$1.run(Reflections.java:198)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> The last line also needs to be added to [Connect's Log4J configuration file in 
> the AK system 
> tests|https://github.com/apache/kafka/blob/trunk/tests/kafkatest/services/templates/connect_log4j.properties].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7031) Kafka Connect API module depends on Jersey

2018-06-12 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7031.
--
   Resolution: Fixed
Fix Version/s: 2.1.0

Issue resolved by pull request 5190
[https://github.com/apache/kafka/pull/5190]

> Kafka Connect API module depends on Jersey
> --
>
> Key: KAFKA-7031
> URL: https://issues.apache.org/jira/browse/KAFKA-7031
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.0.0
>Reporter: Randall Hauch
>Assignee: Magesh kumar Nandakumar
>Priority: Blocker
> Fix For: 2.0.0, 2.1.0
>
>
> The Kafka Connect API module for 2.0.0 brings in Jersey dependencies. When I 
> run {{mvn dependency:tree}} on a project that depends only on the snapshot 
> version of {{org.apache.kafka:kafka-connect-api}}, the following are shown:
> {noformat}
> [INFO] +- org.apache.kafka:connect-api:jar:2.0.0-SNAPSHOT:compile
> [INFO] |  +- org.slf4j:slf4j-api:jar:1.7.25:compile
> [INFO] |  \- 
> org.glassfish.jersey.containers:jersey-container-servlet:jar:2.27:compile
> [INFO] | +- 
> org.glassfish.jersey.containers:jersey-container-servlet-core:jar:2.27:compile
> [INFO] | |  \- 
> org.glassfish.hk2.external:javax.inject:jar:2.5.0-b42:compile
> [INFO] | +- org.glassfish.jersey.core:jersey-common:jar:2.27:compile
> [INFO] | |  +- javax.annotation:javax.annotation-api:jar:1.2:compile
> [INFO] | |  \- org.glassfish.hk2:osgi-resource-locator:jar:1.0.1:compile
> [INFO] | +- org.glassfish.jersey.core:jersey-server:jar:2.27:compile
> [INFO] | |  +- org.glassfish.jersey.core:jersey-client:jar:2.27:compile
> [INFO] | |  +- 
> org.glassfish.jersey.media:jersey-media-jaxb:jar:2.27:compile
> [INFO] | |  \- javax.validation:validation-api:jar:1.1.0.Final:compile
> [INFO] | \- javax.ws.rs:javax.ws.rs-api:jar:2.1:compile
> ...
> {noformat}
> This may have been an unintended side effect of the 
> [KIP-285|https://cwiki.apache.org/confluence/display/KAFKA/KIP-285%3A+Connect+Rest+Extension+Plugin]
>  effort, which added the REST extension for Connect.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7043) Connect isolation whitelist does not include new primitive converters (KIP-305)

2018-06-12 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7043.
--
   Resolution: Fixed
Fix Version/s: 2.1.0

Issue resolved by pull request 5198
[https://github.com/apache/kafka/pull/5198]

> Connect isolation whitelist does not include new primitive converters 
> (KIP-305)
> ---
>
> Key: KAFKA-7043
> URL: https://issues.apache.org/jira/browse/KAFKA-7043
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.0.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Blocker
> Fix For: 2.0.0, 2.1.0
>
>
> KIP-305 added several new primitive converters, but the PR did not add them 
> to the whitelist for the plugin isolation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7003) Add headers with error context in messages written to the Connect DeadLetterQueue topic

2018-06-11 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7003.
--
   Resolution: Fixed
Fix Version/s: 2.1.0
   2.0.0

Issue resolved by pull request 5159
[https://github.com/apache/kafka/pull/5159]

> Add headers with error context in messages written to the Connect 
> DeadLetterQueue topic
> ---
>
> Key: KAFKA-7003
> URL: https://issues.apache.org/jira/browse/KAFKA-7003
> Project: Kafka
>  Issue Type: Task
>Reporter: Arjun Satish
>Priority: Major
> Fix For: 2.0.0, 2.1.0
>
>
> This was added to the KIP after the feature freeze. 
> If the property {{errors.deadletterqueue.context.headers.enable}} is set 
> to {{true}}, the following headers will be added to the produced raw 
> message (only if they don't already exist in the message). All values will be 
> serialized as UTF-8 strings.
> ||Header Name||Description||
> |__connect.errors.topic|Name of the topic that contained the message.|
> |__connect.errors.task.id|The numeric ID of the task that encountered the 
> error (encoded as a UTF-8 string).|
> |__connect.errors.stage|The name of the stage where the error occurred.|
> |__connect.errors.partition|The numeric ID of the partition in the original 
> topic that contained the message (encoded as a UTF-8 string).|
> |__connect.errors.offset|The numeric value of the message offset in the 
> original topic (encoded as a UTF-8 string).|
> |__connect.errors.exception.stacktrace|The stacktrace of the exception.|
> |__connect.errors.exception.message|The message in the exception.|
> |__connect.errors.exception.class.name|The fully qualified classname of the 
> exception that was thrown during the execution.|
> |__connect.errors.connector.name|The name of the connector which encountered 
> the error.|
> |__connect.errors.class.name|The fully qualified name of the class that 
> caused the error.|
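> On the consuming side, the context can be read back off a dead letter queue 
> record with the ordinary header API; a small sketch (header name taken from 
> the table above):
> {code:java}
> import java.nio.charset.StandardCharsets;
> import org.apache.kafka.clients.consumer.ConsumerRecord;
> import org.apache.kafka.common.header.Header;
>
> public class DlqHeaderReader {
>     // Returns the original topic of a failed record, or null if absent.
>     static String errorTopic(ConsumerRecord<byte[], byte[]> record) {
>         Header h = record.headers().lastHeader("__connect.errors.topic");
>         return h == null ? null : new String(h.value(), StandardCharsets.UTF_8);
>     }
> }
> {code}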



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6997) Kafka run class doesn't exclude test-sources jar

2018-06-05 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6997.
--
Resolution: Fixed

Issue resolved by pull request 5139
[https://github.com/apache/kafka/pull/5139]

> Kafka run class doesn't exclude test-sources jar
> 
>
> Key: KAFKA-6997
> URL: https://issues.apache.org/jira/browse/KAFKA-6997
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 2.0.0
>Reporter: Magesh kumar Nandakumar
>Assignee: Magesh kumar Nandakumar
>Priority: Minor
> Fix For: 2.0.0
>
>
> kafka-run-class.sh has a flag INCLUDE_TEST_JAR. This doesn't exclude 
> test-sources jar files when the flag is set to false. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6981) Missing Connector Config (errors.deadletterqueue.topic.name) kills Connect Clusters

2018-06-05 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6981.
--
Resolution: Fixed

Issue resolved by pull request 5125
[https://github.com/apache/kafka/pull/5125]

> Missing Connector Config (errors.deadletterqueue.topic.name) kills Connect 
> Clusters
> ---
>
> Key: KAFKA-6981
> URL: https://issues.apache.org/jira/browse/KAFKA-6981
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Arjun Satish
>Assignee: Arjun Satish
>Priority: Major
> Fix For: 2.0.0
>
>
> The trunk version of AK currently incorrectly tries to read the property 
> (errors.deadletterqueue.topic.name) when starting a sink connector. This 
> fails no matter what the contents of the connector config are. The 
> ConnectorConfig does not define this property, and any calls to getString 
> will throw a ConfigException (since only known properties are retained by 
> AbstractConfig). 
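> The AbstractConfig behaviour in question, as a minimal standalone 
> demonstration:
> {code:java}
> import java.util.Collections;
> import org.apache.kafka.common.config.AbstractConfig;
> import org.apache.kafka.common.config.ConfigDef;
> import org.apache.kafka.common.config.ConfigException;
>
> public class UnknownKeyDemo {
>     public static void main(String[] args) {
>         AbstractConfig config = new AbstractConfig(new ConfigDef(), Collections.emptyMap());
>         try {
>             // The key was never define()d, so getString() throws instead of
>             // returning null - which is what killed sink connector start-up.
>             config.getString("errors.deadletterqueue.topic.name");
>         } catch (ConfigException e) {
>             System.out.println("ConfigException: " + e.getMessage());
>         }
>     }
> }
> {code}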



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5807) Check Connector.config() and Transformation.config() returns a valid ConfigDef

2018-05-22 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5807.
--
   Resolution: Fixed
Fix Version/s: 2.0.0

Issue resolved by pull request 3762
[https://github.com/apache/kafka/pull/3762]

> Check Connector.config() and Transformation.config() returns a valid ConfigDef
> --
>
> Key: KAFKA-5807
> URL: https://issues.apache.org/jira/browse/KAFKA-5807
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jeremy Custenborder
>Assignee: Jeremy Custenborder
>Priority: Minor
> Fix For: 2.0.0
>
>
> An NPE is thrown when a developer returns null from an overridden config(); it 
> surfaces in Connector.validate():
> {code}
> [2017-08-23 13:36:30,086] ERROR Stopping after connector error 
> (org.apache.kafka.connect.cli.ConnectStandalone:99)
> java.lang.NullPointerException
> at 
> org.apache.kafka.connect.connector.Connector.validate(Connector.java:134)
> at 
> org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:254)
> at 
> org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:158)
> at 
> org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:93)
> {code}
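> The shape of the guard implied by the title (a sketch; the merged change may 
> differ in detail):
> {code:java}
> import java.util.Map;
> import org.apache.kafka.common.config.Config;
> import org.apache.kafka.common.config.ConfigDef;
> import org.apache.kafka.connect.connector.Connector;
> import org.apache.kafka.connect.errors.ConnectException;
>
> public class SafeValidate {
>     static Config validate(Connector connector, Map<String, String> connectorConfigs) {
>         ConfigDef configDef = connector.config();
>         if (configDef == null) {
>             // Fail with a descriptive error instead of the NPE above.
>             throw new ConnectException(connector.getClass().getName()
>                     + ".config() must return a ConfigDef that is not null");
>         }
>         return new Config(configDef.validate(connectorConfigs));
>     }
> }
> {code}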



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6566) SourceTask#stop() not called after exception raised in poll()

2018-05-18 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6566.
--
Resolution: Fixed

> SourceTask#stop() not called after exception raised in poll()
> -
>
> Key: KAFKA-6566
> URL: https://issues.apache.org/jira/browse/KAFKA-6566
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.0
>Reporter: Gunnar Morling
>Assignee: Robert Yokota
>Priority: Blocker
> Fix For: 2.0.0, 0.11.0.3, 1.0.2, 1.1.1
>
>
> Having discussed this with [~rhauch], it has been my assumption that 
> {{SourceTask#stop()}} will be called by the Kafka Connect framework in case 
> an exception has been raised in {{poll()}}. That's not the case, though. As 
> an example see the connector and task below.
> Calling {{stop()}} after an exception in {{poll()}} seems like a very useful 
> action to take, as it'll allow the task to clean up any resources such as 
> releasing any database connections, right after that failure and not only 
> once the connector is stopped.
> {code}
> package com.example;
> import java.util.Collections;
> import java.util.List;
> import java.util.Map;
> import org.apache.kafka.common.config.ConfigDef;
> import org.apache.kafka.connect.connector.Task;
> import org.apache.kafka.connect.source.SourceConnector;
> import org.apache.kafka.connect.source.SourceRecord;
> import org.apache.kafka.connect.source.SourceTask;
> public class TestConnector extends SourceConnector {
>     @Override
>     public String version() {
>         return null;
>     }
>
>     @Override
>     public void start(Map<String, String> props) {
>     }
>
>     @Override
>     public Class<? extends Task> taskClass() {
>         return TestTask.class;
>     }
>
>     @Override
>     public List<Map<String, String>> taskConfigs(int maxTasks) {
>         return Collections.singletonList(Collections.singletonMap("foo", "bar"));
>     }
>
>     @Override
>     public void stop() {
>     }
>
>     @Override
>     public ConfigDef config() {
>         return new ConfigDef();
>     }
>
>     public static class TestTask extends SourceTask {
>         @Override
>         public String version() {
>             return null;
>         }
>
>         @Override
>         public void start(Map<String, String> props) {
>         }
>
>         @Override
>         public List<SourceRecord> poll() throws InterruptedException {
>             throw new RuntimeException();
>         }
>
>         @Override
>         public void stop() {
>             System.out.println("stop() called");
>         }
>     }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5141) WorkerTest.testCleanupTasksOnStop transient failure due to NPE

2018-04-29 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5141.
--
Resolution: Fixed
  Assignee: Ewen Cheslack-Postava

Not sure of the fix, but we haven't seen this test failure in a long time. 
Pretty sure it has been fixed somewhere along the way. We can re-open if we see 
the same issue again.

> WorkerTest.testCleanupTasksOnStop transient failure due to NPE
> --
>
> Key: KAFKA-5141
> URL: https://issues.apache.org/jira/browse/KAFKA-5141
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Major
>  Labels: transient-unit-test-failure
>
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/3281/testReport/junit/org.apache.kafka.connect.runtime/WorkerTest/testCleanupTasksOnStop/
> Looks like the potential culprit is a NullPointerException when trying to 
> start a connector. It's likely being caught and logged via a catch 
> (Throwable). From the lines being executed it looks like the null might be 
> due to the instantiation of the Connector returning null, although I don't 
> see how that is possible given the current code. We may need more logging 
> output to track the issue down.
> {quote}
> Error Message
> java.lang.AssertionError: 
>   Expectation failure on verify:
> WorkerSourceTask.run(): expected: 1, actual: 0
> Stacktrace
> java.lang.AssertionError: 
>   Expectation failure on verify:
> WorkerSourceTask.run(): expected: 1, actual: 0
>   at org.easymock.internal.MocksControl.verify(MocksControl.java:225)
>   at 
> org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:132)
>   at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1466)
>   at org.powermock.api.easymock.PowerMock.verifyAll(PowerMock.java:1405)
>   at 
> org.apache.kafka.connect.runtime.WorkerTest.testCleanupTasksOnStop(WorkerTest.java:480)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:310)
>   at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:89)
>   at 
> org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:97)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:294)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:127)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:282)
>   at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:87)
>   at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:50)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:207)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:146)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:120)
>   at 
> org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34)
>   at 
> org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.run(PowerMockJUnit44RunnerDelegateImpl.java:122)
>   at 
> org.powermock.modules.junit4.common.internal.impl.JUnit4TestSuiteChunkerImpl.run(JUnit4TestSuiteChunkerImpl.java:106)
>   at 
> org.powermock.modules.junit4.common.internal.impl.AbstractCommonPowerMockRunner.run(AbstractCommonPowerMockRunner.java:53)
>   at 
> 

[jira] [Resolved] (KAFKA-6728) Kafka Connect Header Null Pointer Exception

2018-04-03 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6728.
--
   Resolution: Fixed
Fix Version/s: 1.1.1
   1.2.0

Issue resolved by pull request 4815
[https://github.com/apache/kafka/pull/4815]

> Kafka Connect Header Null Pointer Exception
> ---
>
> Key: KAFKA-6728
> URL: https://issues.apache.org/jira/browse/KAFKA-6728
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.1.0
> Environment: Linux Mint
>Reporter: Philippe Hong
>Priority: Critical
> Fix For: 1.2.0, 1.1.1
>
>
> I am trying to use the newly released Kafka Connect that supports headers by 
> using the standalone connector to write to a text file (so in this case I am 
> only using the sink component).
> I am sadly greeted by a NullPointerException:
> {noformat}
> ERROR WorkerSinkTask{id=local-file-sink-0} Task threw an uncaught and 
> unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:172)
> java.lang.NullPointerException
>     at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertHeadersFor(WorkerSinkTask.java:501)
>     at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:469)
>     at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:301)
>     at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:205)
>     at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:173)
>     at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:170)
>     at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:214)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
> {noformat}
> I launched ZooKeeper and Kafka 1.1.0 locally and sent a 
> ProducerRecord[String, Array[Byte]] using a KafkaProducer[String, 
> Array[Byte]] with a header that has a key and value.
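> For reference, a minimal Java equivalent of that send might look like this 
> (topic and header names are made up):
> {code:java}
> import java.nio.charset.StandardCharsets;
> import java.util.Properties;
> import org.apache.kafka.clients.producer.KafkaProducer;
> import org.apache.kafka.clients.producer.ProducerRecord;
> 
> public class HeaderSend {
>     public static void main(String[] args) {
>         Properties props = new Properties();
>         props.put("bootstrap.servers", "localhost:9092");
>         props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
>         props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
>         try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props)) {
>             ProducerRecord<String, byte[]> record = new ProducerRecord<>(
>                     "connect-test", "some-key", "some-value".getBytes(StandardCharsets.UTF_8));
>             // Attach a header with a key and a value, as described above.
>             record.headers().add("header-key", "header-value".getBytes(StandardCharsets.UTF_8));
>             producer.send(record);
>         }
>     }
> }
> {code}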
> I can read the record with a console consumer as well as using a 
> KafkaConsumer (where in this case I can see the content of the header of the 
> record I sent previously) so no problem here.
> I only made two changes to the Kafka configuration:
>      - I used the StringConverter for the key and the ByteArrayConverter for 
> the value. 
>      - I also changed the topic the sink connects to.
> If I forgot something, please tell me, as this is the first time I am creating 
> an issue on Jira.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6676) System tests do not handle ZK chroot properly with SCRAM

2018-03-17 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-6676:


 Summary: System tests do not handle ZK chroot properly with SCRAM
 Key: KAFKA-6676
 URL: https://issues.apache.org/jira/browse/KAFKA-6676
 Project: Kafka
  Issue Type: Bug
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava


This is related to the issue observed in KAFKA-6672. There, we are now 
automatically creating parent nodes if they do not exist. However, if using a 
chroot within ZK and that chroot does not yet exist, you get an error message 
about "Path length must be > 0" as it tries to create all the parent paths.

It would probably be better to detect this issue and account for it. Currently, 
system test code will fail if you use SCRAM together with a chroot: while Kafka 
will create the chroot when it starts up, there are some security-related 
commands that may need to be executed before that, and they assume the chroot 
will already be there.

We're currently missing this because, while the chroot option is there, nothing 
in Kafka's tests currently exercises it. So given what is apparently a common 
assumption in tools that the chroot already exists (since I think the core 
Kafka server is the only thing that handles creating it if needed), I think 
the fix here would be two-fold:
 # Make KafkaService ensure the chroot exists before running any commands that 
might need it (a rough sketch of the ZooKeeper operation follows below).
 # On at least one test that exercises security support, use a zk_chroot so 
that functionality is at least reasonably well exercised.
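For illustration, ensuring a chroot exists boils down to creating each missing 
path segment. A minimal sketch with the plain ZooKeeper Java client (the system 
test code itself is Python; the class and path handling here are illustrative):
{code:java}
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class EnsureChroot {
    // Create each missing segment of the chroot path, e.g. /kafka/cluster-1.
    public static void ensure(ZooKeeper zk, String chroot) throws Exception {
        StringBuilder path = new StringBuilder();
        for (String segment : chroot.split("/")) {
            if (segment.isEmpty())
                continue;
            path.append('/').append(segment);
            if (zk.exists(path.toString(), false) == null) {
                try {
                    zk.create(path.toString(), new byte[0],
                            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
                } catch (KeeperException.NodeExistsException e) {
                    // Created concurrently by someone else; that's fine.
                }
            }
        }
    }
}
{code}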

It would be good to have this in both trunk and 1.1 branches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5999) Offset Fetch Request

2018-03-06 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5999.
--
Resolution: Invalid
  Assignee: Ewen Cheslack-Postava

Closing as Invalid until we get some clarification about the issue.

> Offset Fetch Request
> 
>
> Key: KAFKA-5999
> URL: https://issues.apache.org/jira/browse/KAFKA-5999
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Zhao Weilong
>Assignee: Ewen Cheslack-Postava
>Priority: Major
>
> Newer Kafka (found in 0.10.2.1) supports a new feature for fetching all 
> topics, by setting the number of topics to -1 (v2).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5471) Original Kafka paper link broken

2018-03-05 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5471.
--
Resolution: Fixed
  Assignee: Ewen Cheslack-Postava

Updated the link to what appears to be a more permanent NetDB pdf link.

> Original Kafka paper link broken
> 
>
> Key: KAFKA-5471
> URL: https://issues.apache.org/jira/browse/KAFKA-5471
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Reporter: Jeremy Hanna
>Assignee: Ewen Cheslack-Postava
>Priority: Trivial
>
> Currently on the [Kafka papers and presentations 
> site|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+papers+and+presentations]
>  the original Kafka paper is linked but it's a broken link.
> Currently it links to 
> [here|http://research.microsoft.com/en-us/um/people/srikanth/netdb11/netdb11papers/netdb11-final12.pdf]
>  but that person may have taken the paper down.  I found it 
> [here|http://notes.stephenholiday.com/Kafka.pdf] but that could have a 
> similar problem in the future.  We should be able to put the file as an 
> attachment in the confluence wiki to make it a more permanent link.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-4854) Producer RecordBatch executes callbacks with `null` provided for metadata if an exception is encountered

2018-03-02 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-4854.
--
Resolution: Not A Bug
  Assignee: Ewen Cheslack-Postava

This behavior is intended. The idea is to have *either* valid metadata about 
the produced messages based on the successful reply from the broker *or* an 
exception indicating why production failed. Metadata about produced messages 
doesn't make sense in the case of an exception since the exception implies the 
messages were not successfully added to the log.
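In other words, a user callback should branch on the exception argument. A 
minimal sketch (topic and config values are illustrative):
{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CallbackExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"), (metadata, exception) -> {
                if (exception != null) {
                    // Production failed: metadata is null by design.
                    System.err.println("Send failed: " + exception);
                } else {
                    // Success: metadata carries the offset, partition, etc.
                    System.out.println("Stored at offset " + metadata.offset());
                }
            });
        }
    }
}
{code}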

> Producer RecordBatch executes callbacks with `null` provided for metadata if 
> an exception is encountered
> 
>
> Key: KAFKA-4854
> URL: https://issues.apache.org/jira/browse/KAFKA-4854
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.1.1
>Reporter: Robert Quinlivan
>Assignee: Ewen Cheslack-Postava
>Priority: Minor
>
> When using a user-provided callback with the producer, the `RecordBatch` 
> executes the callbacks with a null metadata argument if an exception was 
> encountered. For monitoring and debugging purposes, I would prefer if the 
> metadata were provided, perhaps optionally. For example, it would be useful 
> to know the size of the serialized payload and the offset so these values 
> could appear in application logs.
> To be entirely clear, the piece of code I am considering is in 
> `org.apache.kafka.clients.producer.internals.RecordBatch#done`:
> ```java
> // execute callbacks
> for (Thunk thunk : thunks) {
> try {
> if (exception == null) {
> RecordMetadata metadata = thunk.future.value();
> thunk.callback.onCompletion(metadata, null);
> } else {
> thunk.callback.onCompletion(null, exception);
> }
> } catch (Exception e) {
> log.error("Error executing user-provided callback on message 
> for topic-partition '{}'", topicPartition, e);
> }
> }
> ```



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6236) stream not picking data from topic - after rebalancing

2018-02-23 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6236.
--
Resolution: Cannot Reproduce

Reporter unresponsive, so we can't track this down. Please re-open if there's 
more info to be shared.

> stream not picking data from topic - after rebalancing 
> ---
>
> Key: KAFKA-6236
> URL: https://issues.apache.org/jira/browse/KAFKA-6236
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: DHRUV BANSAL
>Priority: Critical
>
> Kafka Streams is not polling new messages from the topic. 
> Querying the consumer group shows it is in a rebalancing state.
> Command output:
> ./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group 
>  --describe
> Warning: Consumer group 'name' is rebalancing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6239) Consume group hung into rebalancing state, now stream not able to poll data

2018-02-23 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6239.
--
Resolution: Duplicate

> Consume group hung into rebalancing state, now stream not able to poll data
> ---
>
> Key: KAFKA-6239
> URL: https://issues.apache.org/jira/browse/KAFKA-6239
> Project: Kafka
>  Issue Type: Bug
>Reporter: DHRUV BANSAL
>Priority: Critical
>
> ./bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group 
> mitra-log-parser --describe
> Note: This will only show information about consumers that use the Java 
> consumer API (non-ZooKeeper-based consumers).
> Warning: Consumer group 'mitra-log-parser' is rebalancing.
> How to restore the consumer group state?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6439) "com.streamsets.pipeline.api.StageException: KAFKA_50 - Error writing data to the Kafka broker: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.N

2018-02-23 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6439.
--
Resolution: Not A Bug

> "com.streamsets.pipeline.api.StageException: KAFKA_50 - Error writing data to 
> the Kafka broker: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received"
> -
>
> Key: KAFKA-6439
> URL: https://issues.apache.org/jira/browse/KAFKA-6439
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.10.0.1
> Environment: Ubuntu 64bit
>Reporter: srithar durairaj
>Priority: Major
>
> We are using StreamSets to produce data into a Kafka topic (3-node cluster). 
> We are facing the following error frequently in production.  
> "com.streamsets.pipeline.api.StageException: KAFKA_50 - Error writing data to 
> the Kafka broker: java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6580) Connect bin scripts have incorrect usage

2018-02-21 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-6580:


 Summary: Connect bin scripts have incorrect usage
 Key: KAFKA-6580
 URL: https://issues.apache.org/jira/browse/KAFKA-6580
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Ewen Cheslack-Postava


Originally we left usage to the actual binaries, which get it correct (modulo 
the fact that they use the class names rather than the bin scripts). We added 
usage to the scripts, I believe, because some features of the bin scripts 
weren't listed in the usage.

However, there are a couple of problems. First, we seem to have accidentally 
swapped the standalone and distributed names in the two files, which is 
confusing. Second, the usage is incomplete; for example, it does not mention 
that standalone needs both the worker properties and the connector properties. 
We should fill these out to be more complete.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6503) Connect: Plugin scan is very slow

2018-02-14 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6503.
--
   Resolution: Fixed
Fix Version/s: 1.2.0

Issue resolved by pull request 4561
[https://github.com/apache/kafka/pull/4561]

> Connect: Plugin scan is very slow
> -
>
> Key: KAFKA-6503
> URL: https://issues.apache.org/jira/browse/KAFKA-6503
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.0
>Reporter: Per Steffensen
>Assignee: Robert Yokota
>Priority: Critical
> Fix For: 1.2.0, 1.1.0
>
>
> Just upgraded to 1.0.0. It seems some plugin scan has been introduced. It is 
> very slow - see logs from starting my Kafka-Connect instance at the bottom. 
> It takes almost 4 minutes scanning. I am running Kafka-Connect in docker 
> based on confluentinc/cp-kafka-connect:4.0.0. I set plugin.path to 
> /usr/share/java. The only thing I have added is a 13MB jar in 
> /usr/share/java/kafka-connect-file-streamer-client containing two connectors 
> and a converter. That one alone seems to take 20 secs.
> If it were just scanning in the background, and everything kept working, it 
> probably would not be a big issue. But it does not: right after starting the 
> Kafka-Connect instance I try to create a connector via the /connectors 
> endpoint, and it will not succeed before the plugin scanning has finished (4 
> minutes).
> I am not even sure why scanning is necessary. Is it not always true that 
> connectors, converters, etc. are mentioned by name? To see if one exists, 
> just try to load the class - the classloader will tell if it is available. 
> Hmm, there is probably a reason.
> Anyway, either it should be made much faster, or at least Kafka-Connect 
> should be fully functional (or as functional as possible) while scanning is 
> going on.
> {code}
> [2018-01-30 13:52:26,834] INFO Scanning for plugin classes. This might take a 
> moment ... (org.apache.kafka.connect.cli.ConnectDistributed)
> [2018-01-30 13:52:27,218] INFO Loading plugin from: 
> /usr/share/java/kafka-connect-file-streamer-client 
> (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
> [2018-01-30 13:52:43,037] INFO Registered loader: 
> PluginClassLoader{pluginLocation=file:/usr/share/java/kafka-connect-file-streamer-client/}
>  (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
> [2018-01-30 13:52:43,038] INFO Added plugin 
> 'com.tlt.common.files.streamer.client.kafka.connect.OneTaskPerStreamSourceConnectorManager'
>  (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
> [2018-01-30 13:52:43,039] INFO Added plugin 
> 'com.tlt.common.files.streamer.client.kafka.connect.OneTaskPerFilesStreamerServerSourceConnectorManager'
>  (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
> [2018-01-30 13:52:43,040] INFO Added plugin 
> 'com.tlt.common.files.streamer.client.kafka.connect.KafkaConnectByteArrayConverter'
>  (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
> [2018-01-30 13:52:43,049] INFO Loading plugin from: 
> /usr/share/java/kafka-connect-elasticsearch 
> (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
> [2018-01-30 13:52:47,595] INFO Registered loader: 
> PluginClassLoader{pluginLocation=file:/usr/share/java/kafka-connect-elasticsearch/}
>  (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
> [2018-01-30 13:52:47,611] INFO Added plugin 
> 'io.confluent.connect.elasticsearch.ElasticsearchSinkConnector' 
> (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
> [2018-01-30 13:52:47,651] INFO Loading plugin from: 
> /usr/share/java/kafka-connect-jdbc 
> (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
> [2018-01-30 13:52:49,491] INFO Registered loader: 
> PluginClassLoader{pluginLocation=file:/usr/share/java/kafka-connect-jdbc/} 
> (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
> [2018-01-30 13:52:49,491] INFO Added plugin 
> 'io.confluent.connect.jdbc.JdbcSinkConnector' 
> (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
> [2018-01-30 13:52:49,492] INFO Added plugin 
> 'io.confluent.connect.jdbc.JdbcSourceConnector' 
> (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
> [2018-01-30 13:52:49,663] INFO Loading plugin from: 
> /usr/share/java/kafka-connect-s3 
> (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
> [2018-01-30 13:53:51,055] INFO Registered loader: 
> PluginClassLoader{pluginLocation=file:/usr/share/java/kafka-connect-s3/} 
> (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
> [2018-01-30 13:53:51,055] INFO Added plugin 
> 'io.confluent.connect.s3.S3SinkConnector' 
> 

[jira] [Resolved] (KAFKA-6513) New Connect header support doesn't define `converter.type` property correctly

2018-02-09 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6513.
--
   Resolution: Fixed
Fix Version/s: 1.2.0

Issue resolved by pull request 4512
[https://github.com/apache/kafka/pull/4512]

> New Connect header support doesn't define `converter.type` property correctly
> -
>
> Key: KAFKA-6513
> URL: https://issues.apache.org/jira/browse/KAFKA-6513
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.1.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Blocker
> Fix For: 1.2.0, 1.1.0
>
>
> The recent feature (KAFKA-5142) added a new {{converter.type}} property to 
> make the {{Converter}} implementations implement {{Configurable}}. However, 
> the worker is not correctly setting these new properties and is instead 
> incorrectly assuming the existing {{Converter}} implementations will set 
> them. For example:
> {noformat}
> Exception in thread "main" org.apache.kafka.common.config.ConfigException: 
> Missing required configuration "converter.type" which has no default value.
> at 
> org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:472)
> at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:462)
> at 
> org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:62)
> at 
> org.apache.kafka.connect.storage.ConverterConfig.<init>(ConverterConfig.java:48)
> at 
> org.apache.kafka.connect.json.JsonConverterConfig.<init>(JsonConverterConfig.java:59)
> at 
> org.apache.kafka.connect.json.JsonConverter.configure(JsonConverter.java:284)
> at 
> org.apache.kafka.connect.runtime.isolation.Plugins.newConfiguredPlugin(Plugins.java:77)
> at 
> org.apache.kafka.connect.runtime.isolation.Plugins.newConverter(Plugins.java:208)
> at org.apache.kafka.connect.runtime.Worker.<init>(Worker.java:107)
> at 
> io.confluent.connect.replicator.ReplicatorApp.config(ReplicatorApp.java:104)
> at 
> io.confluent.connect.replicator.ReplicatorApp.main(ReplicatorApp.java:60)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6536) Streams quickstart pom.xml is missing versions for a bunch of plugins

2018-02-05 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-6536:


 Summary: Streams quickstart pom.xml is missing versions for a 
bunch of plugins
 Key: KAFKA-6536
 URL: https://issues.apache.org/jira/browse/KAFKA-6536
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.11.0.2, 1.0.0, 1.0.1
Reporter: Ewen Cheslack-Postava


There are a bunch of plugins being used that maven helpfully warns you about 
being unversioned:
{code:java}
> [INFO] Scanning for projects...
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.kafka:streams-quickstart-java:maven-archetype:1.0.1
> [WARNING] 'build.plugins.plugin.version' for 
> org.apache.maven.plugins:maven-shade-plugin is missing. @ 
> org.apache.kafka:streams-quickstart:1.0.1, 
> /Users/ewencp/kafka.git/.release_work_dir/kafka/streams/quickstart/pom.xml, 
> line 64, column 21
> [WARNING] 'build.plugins.plugin.version' for 
> com.github.siom79.japicmp:japicmp-maven-plugin is missing. @ 
> org.apache.kafka:streams-quickstart:1.0.1, 
> /Users/ewencp/kafka.git/.release_work_dir/kafka/streams/quickstart/pom.xml, 
> line 74, column 21
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.kafka:streams-quickstart:pom:1.0.1
> [WARNING] 'build.plugins.plugin.version' for 
> org.apache.maven.plugins:maven-shade-plugin is missing. @ line 64, column 21
> [WARNING] 'build.plugins.plugin.version' for 
> com.github.siom79.japicmp:japicmp-maven-plugin is missing. @ line 74, column 
> 21
> [WARNING]
> [WARNING] It is highly recommended to fix these problems because they 
> threaten the stability of your build.
> [WARNING]
> [WARNING] For this reason, future Maven versions might no longer support 
> building such malformed projects.{code}
Unversioned dependencies are dangerous as they make the build non-reproducible. 
In fact, a released version may become very difficult to build as the user 
would have to track down the working versions of the plugins. This seems 
particularly bad for the quickstart as it's likely to be copy/pasted into 
people's own projects.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5987) Kafka metrics templates used in document generation should maintain order of tags

2018-02-05 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5987.
--
   Resolution: Fixed
Fix Version/s: 1.1.0
   1.2.0

Issue resolved by pull request 3985
[https://github.com/apache/kafka/pull/3985]

> Kafka metrics templates used in document generation should maintain order of 
> tags
> -
>
> Key: KAFKA-5987
> URL: https://issues.apache.org/jira/browse/KAFKA-5987
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 1.0.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Blocker
> Fix For: 1.2.0, 1.1.0, 1.0.1
>
>
> KAFKA-5191 recently added a new {{MetricNameTemplate}} that is used to create 
> the {{MetricName}} objects in the producer and consumer, as well as in the 
> newly-added generation of metric documentation. The {{MetricNameTemplate}} 
> and the {{Metric.toHtmlTable}} do not maintain the order of the tags, which 
> means the resulting HTML documentation will order the table of MBean 
> attributes based upon the lexicographical ordering of the MBeans, each of 
> which uses the lexicographical ordering of its tags. This can result in the 
> following order:
> {noformat}
> kafka.connect:type=sink-task-metrics,connector={connector},partition={partition},task={task},topic={topic}
> kafka.connect:type=sink-task-metrics,connector={connector},task={task}
> {noformat}
> However, if the MBeans maintained the order of the tags then the 
> documentation would use the following order:
> {noformat}
> kafka.connect:type=sink-task-metrics,connector={connector},task={task}
> kafka.connect:type=sink-task-metrics,connector={connector},task={task},topic={topic},partition={partition}
> {noformat}
> This would be more readable, and the code that is creating the templates 
> would have control over the order of the tags. 
> To maintain order, {{MetricNameTemplate}} should use a {{LinkedHashSet}} for 
> the tags, and the {{Metrics.toHtmlTable}} method should also use a 
> {{LinkedHashMap}} when building up the tags used in the MBean name.
> Note that JMX MBean names use {{ObjectName}} that does not maintain order, so 
> this change should have no impact on JMX MBean names.
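> To illustrate the difference (a small standalone sketch, not Kafka code):
> {code:java}
> import java.util.Arrays;
> import java.util.HashSet;
> import java.util.LinkedHashSet;
> 
> public class TagOrder {
>     public static void main(String[] args) {
>         // LinkedHashSet iterates in insertion order, so tags keep the order
>         // the template author chose; HashSet offers no ordering guarantee.
>         System.out.println(new LinkedHashSet<>(Arrays.asList("connector", "task", "topic", "partition")));
>         System.out.println(new HashSet<>(Arrays.asList("connector", "task", "topic", "partition")));
>     }
> }
> {code}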



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6253) Improve sink connector topic regex validation

2018-02-05 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6253.
--
   Resolution: Fixed
 Reviewer: Ewen Cheslack-Postava
Fix Version/s: 1.2.0

> Improve sink connector topic regex validation
> -
>
> Key: KAFKA-6253
> URL: https://issues.apache.org/jira/browse/KAFKA-6253
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Ewen Cheslack-Postava
>Assignee: Jeff Klukas
>Priority: Major
> Fix For: 1.1.0, 1.2.0
>
>
> KAFKA-3073 adds topic regex support for sink connectors. The addition 
> requires that you only specify one of topics or topics.regex settings. This 
> is being validated in one place, but not during submission of connectors. We 
> should improve this since this means it's possible to get a bad connector 
> config into the config topic.
> For more detailed discussion, see 
> https://github.com/apache/kafka/pull/4151#pullrequestreview-77300221



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5142) KIP-145 - Expose Record Headers in Kafka Connect

2018-01-31 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5142.
--
   Resolution: Fixed
Fix Version/s: 1.1.0

Issue resolved by pull request 4319
[https://github.com/apache/kafka/pull/4319]

> KIP-145 - Expose Record Headers in Kafka Connect
> 
>
> Key: KAFKA-5142
> URL: https://issues.apache.org/jira/browse/KAFKA-5142
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients
>Reporter: Michael Andre Pearce
>Assignee: Michael Andre Pearce
>Priority: Major
> Fix For: 1.1.0
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-145+-+Expose+Record+Headers+in+Kafka+Connect
> As KIP-82 introduced Headers into the core Kafka Product, it would be 
> advantageous to expose them in the Kafka Connect Framework.
> Connectors that replicate data between Kafka cluster or between other 
> messaging products and Kafka would want to replicate the headers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6506) Integration or unit test for Connect REST SSL Support

2018-01-30 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-6506:


 Summary: Integration or unit test for Connect REST SSL Support
 Key: KAFKA-6506
 URL: https://issues.apache.org/jira/browse/KAFKA-6506
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Ewen Cheslack-Postava


This would be a follow-up for KAFKA-4029 / KIP-208. We have some unit test 
coverage and manual verification, but it would be good to have a better 
end-to-end test (possibly covering both no client auth and w/ client auth 
scenarios). This isn't particularly risky as is and the code should basically 
never change, so it's mainly a nice-to-have / safety net.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6018) Make KafkaFuture.Function java 8 lambda compatible

2018-01-30 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6018.
--
   Resolution: Fixed
Fix Version/s: 1.1.0

Issue resolved by pull request 4033
[https://github.com/apache/kafka/pull/4033]

> Make KafkaFuture.Function java 8 lambda compatible
> --
>
> Key: KAFKA-6018
> URL: https://issues.apache.org/jira/browse/KAFKA-6018
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Steven Aerts
>Priority: Major
> Fix For: 1.1.0
>
>
> KafkaFuture.Function is currently an empty public abstract class.
> This means you cannot implement it as a Java lambda, and you end up with 
> constructs such as:
> {code:java}
> new KafkaFuture.Function<Set<String>, Object>() {
> @Override
> public Object apply(Set<String> strings) {
> return foo;
> }
> }
> {code}
> I propose to define them as interfaces.
> So this code can become, in Java 8:
> {code:java}
> strings -> foo
> {code}
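> The proposed shape would be roughly (a sketch, not the final API):
> {code:java}
> public interface Function<A, B> {
>     // A single abstract method makes the type usable as a lambda target.
>     B apply(A a);
> }
> {code}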
> I know this change is backwards incompatible (extends becomes implements).
> But {{KafkaFuture}} is marked as {{@InterfaceStability.Evolving}}, and 
> KafkaFuture states in its javadoc:
> {quote}This will eventually become a thin shim on top of Java 8's 
> CompletableFuture.{quote}
> I think this change might be worth considering.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-4029) SSL support for Connect REST API

2018-01-30 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-4029.
--
   Resolution: Fixed
Fix Version/s: 1.1.0

Issue resolved by pull request 4429
[https://github.com/apache/kafka/pull/4429]

> SSL support for Connect REST API
> 
>
> Key: KAFKA-4029
> URL: https://issues.apache.org/jira/browse/KAFKA-4029
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Ewen Cheslack-Postava
>Assignee: Jakub Scholz
>Priority: Major
> Fix For: 1.1.0
>
>
> Currently the Connect REST API only supports http. We should also add SSL 
> support so access to the REST API can be secured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-3988) KafkaConfigBackingStore assumes configs will be stored as schemaless maps

2018-01-22 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-3988.
--
Resolution: Won't Fix

Not going to fix since KIP-174 is deprecating the internal converter configs 
and will instead always use schemaless JsonConverter.

> KafkaConfigBackingStore assumes configs will be stored as schemaless maps
> -
>
> Key: KAFKA-3988
> URL: https://issues.apache.org/jira/browse/KAFKA-3988
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Major
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> If you use an internal key/value converter that drops schema information (as 
> is the default in the config files we provide since we use JsonConverter with 
> schemas.enable=false), the schemas we use that are structs get converted to 
> maps since we don't know the structure to decode them to. Because our tests 
> run with these settings, we haven't validated that the code works if schemas 
> are preserved.
> When they are preserved, we'll hit an error message like this
> {quote}
> [2016-07-25 07:36:34,828] ERROR Found connector configuration 
> (connector-test-mysql-jdbc) in wrong format: class 
> org.apache.kafka.connect.data.Struct 
> (org.apache.kafka.connect.storage.KafkaConfigBackingStore:498)
> {quote}
> because the code currently checks that it is working with a map. We should 
> actually be checking for either a Struct or a Map. This same problem probably 
> affects a couple of other types of data in the same class, since connector 
> configs, task configs, Connect task lists, and target states are all Structs.
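> A rough sketch of the tolerant check (the helper name is made up):
> {code:java}
> import java.util.HashMap;
> import java.util.Map;
> import org.apache.kafka.connect.data.Field;
> import org.apache.kafka.connect.data.Struct;
> import org.apache.kafka.connect.errors.DataException;
> 
> public class ConfigValues {
>     @SuppressWarnings("unchecked")
>     static Map<String, Object> asMap(Object connectValue) {
>         if (connectValue instanceof Map)
>             return (Map<String, Object>) connectValue;
>         if (connectValue instanceof Struct) {
>             // Schemas were preserved: flatten the Struct into a map.
>             Struct struct = (Struct) connectValue;
>             Map<String, Object> map = new HashMap<>();
>             for (Field field : struct.schema().fields())
>                 map.put(field.name(), struct.get(field));
>             return map;
>         }
>         throw new DataException("Unexpected config format: " + connectValue.getClass());
>     }
> }
> {code}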



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6277) Make loadClass thread-safe for class loaders of Connect plugins

2018-01-21 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6277.
--
   Resolution: Fixed
Fix Version/s: 1.1.0

Issue resolved by pull request 4428
[https://github.com/apache/kafka/pull/4428]

> Make loadClass thread-safe for class loaders of Connect plugins
> ---
>
> Key: KAFKA-6277
> URL: https://issues.apache.org/jira/browse/KAFKA-6277
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.0, 0.11.0.2
>Reporter: Konstantine Karantasis
>Assignee: Konstantine Karantasis
>Priority: Blocker
> Fix For: 1.1.0, 1.0.1, 0.11.0.3
>
>
> In Connect's classloading isolation framework, {{PluginClassLoader}} class 
> encounters a race condition when several threads corresponding to tasks using 
> a specific plugin (e.g. a Connector) try to load the same class at the same 
> time on a single JVM. 
> The race condition is related to calls to the method {{defineClass}} which, 
> contrary to {{findClass}}, is not thread safe for classloaders that override 
> {{loadClass}}. More details here: 
> https://docs.oracle.com/javase/7/docs/technotes/guides/lang/cl-mt.html
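> The standard thread-safe pattern here - a sketch of the general technique, 
> not necessarily the exact change in the PR - is to register the loader as 
> parallel capable and take the per-name lock in {{loadClass}}:
> {code:java}
> import java.net.URL;
> import java.net.URLClassLoader;
> 
> public class PluginClassLoaderSketch extends URLClassLoader {
>     static {
>         ClassLoader.registerAsParallelCapable();
>     }
> 
>     public PluginClassLoaderSketch(URL[] urls, ClassLoader parent) {
>         super(urls, parent);
>     }
> 
>     @Override
>     protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
>         // Lock per class name instead of locking the whole loader.
>         synchronized (getClassLoadingLock(name)) {
>             Class<?> klass = findLoadedClass(name);
>             if (klass == null) {
>                 try {
>                     klass = findClass(name);              // plugin-first lookup
>                 } catch (ClassNotFoundException e) {
>                     klass = super.loadClass(name, false); // fall back to the parent
>                 }
>             }
>             if (resolve)
>                 resolveClass(klass);
>             return klass;
>         }
>     }
> }
> {code}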



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6428) Fail builds on findbugs warnings

2018-01-08 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6428.
--
Resolution: Invalid

Seems it was already set up to report and fail; the reporting on Jenkins was 
just missing.

> Fail builds on findbugs warnings
> 
>
> Key: KAFKA-6428
> URL: https://issues.apache.org/jira/browse/KAFKA-6428
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>
> Findbugs spots likely bugs, and especially for warnings at the High level, it 
> actually has pretty good signal for real bugs (or just things that might be 
> risky). We should be failing builds, especially PRs, if any sufficiently high 
> warnings are listed. We should get this enabled for that level and then 
> decide if we want to adjust the level of warnings we want to address.
> This likely relates to KAFKA-5887 since findbugs may not be sufficiently 
> maintained for JDK9 support. In any case, the intent is to fail the build 
> based on whichever tool is used.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-6252) A metric named 'XX' already exists, can't register another one.

2018-01-05 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6252.
--
Resolution: Fixed
  Reviewer: Ewen Cheslack-Postava

> A metric named 'XX' already exists, can't register another one.
> ---
>
> Key: KAFKA-6252
> URL: https://issues.apache.org/jira/browse/KAFKA-6252
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.0
> Environment: Linux
>Reporter: Alexis Sellier
>Assignee: Arjun Satish
>Priority: Critical
> Fix For: 1.1.0, 1.0.1
>
>
> When a connector crashes (or is not implemented correctly, e.g. by not 
> stopping/interrupting {{poll()}}), it cannot be restarted and an exception 
> like this is thrown:
> {code:java}
> java.lang.IllegalArgumentException: A metric named 'MetricName 
> [name=offset-commit-max-time-ms, group=connector-task-metrics, 
> description=The maximum time in milliseconds taken by this task to commit 
> offsets., tags={connector=hdfs-sink-connector-recover, task=0}]' already 
> exists, can't register another one.
>   at 
> org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:532)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:256)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:241)
>   at 
> org.apache.kafka.connect.runtime.WorkerTask$TaskMetricsGroup.<init>(WorkerTask.java:328)
>   at 
> org.apache.kafka.connect.runtime.WorkerTask.<init>(WorkerTask.java:69)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.<init>(WorkerSinkTask.java:98)
>   at 
> org.apache.kafka.connect.runtime.Worker.buildWorkerTask(Worker.java:449)
>   at org.apache.kafka.connect.runtime.Worker.startTask(Worker.java:404)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.startTask(DistributedHerder.java:852)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1600(DistributedHerder.java:108)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder$13.call(DistributedHerder.java:866)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder$13.call(DistributedHerder.java:862)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> I guess it's because the function taskMetricsGroup.close is not called in all 
> cases.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6428) Fail builds on findbugs warnings

2018-01-05 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-6428:


 Summary: Fail builds on findbugs warnings
 Key: KAFKA-6428
 URL: https://issues.apache.org/jira/browse/KAFKA-6428
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava


Findbugs spots likely bugs, and especially for warnings at the High level, it 
actually has pretty good signal for real bugs (or just things that might be 
risky). We should be failing builds, especially PRs, if any sufficiently high 
warnings are listed. We should get this enabled for that level and then decide 
if we want to adjust the level of warnings we want to address.

This likely relates to KAFKA-5887 since findbugs may not be sufficiently 
maintained for JDK9 support. In any case, the intent is to fail the build based 
on whichever tool is used.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4335) FileStreamSource Connector not working for large files (~ 1GB)

2018-01-05 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-4335.
--
   Resolution: Fixed
Fix Version/s: 1.1.0

Issue resolved by pull request 4356
[https://github.com/apache/kafka/pull/4356]

> FileStreamSource Connector not working for large files (~ 1GB)
> --
>
> Key: KAFKA-4335
> URL: https://issues.apache.org/jira/browse/KAFKA-4335
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0
>Reporter: Rahul Shukla
> Fix For: 1.1.0
>
>
> I was trying to stream a large file (about 1 GB). The FileStreamSource 
> connector is not working for that; it works fine for small files.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-6284) System Test failed: ConnectRestApiTest

2017-11-30 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6284.
--
Resolution: Fixed

Issue resolved by pull request 4279
[https://github.com/apache/kafka/pull/4279]

> System Test failed: ConnectRestApiTest 
> ---
>
> Key: KAFKA-6284
> URL: https://issues.apache.org/jira/browse/KAFKA-6284
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.1.0
>Reporter: Mikkin Patel
>Assignee: Mikkin Patel
> Fix For: 1.1.0
>
>
> KAFKA-3073 introduced topic regex support for Connect sinks. The 
> ConnectRestApiTest failed to verify the ConfigDef against the expected response. 
> {noformat}
> Traceback (most recent call last):
>   File 
> "/home/jenkins/workspace/system-test-kafka-ci_trunk-STZ4BWS25LWQAJN6JJZHKK6GP4YF7HCVYRFRFXB2KMAQ4PC5H4ZA/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.7.1-py2.7.egg/ducktape/tests/runner_client.py",
>  line 132, in run
> data = self.run_test()
>   File 
> "/home/jenkins/workspace/system-test-kafka-ci_trunk-STZ4BWS25LWQAJN6JJZHKK6GP4YF7HCVYRFRFXB2KMAQ4PC5H4ZA/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.7.1-py2.7.egg/ducktape/tests/runner_client.py",
>  line 185, in run_test
> return self.test_context.function(self.test)
>   File 
> "/home/jenkins/workspace/system-test-kafka-ci_trunk-STZ4BWS25LWQAJN6JJZHKK6GP4YF7HCVYRFRFXB2KMAQ4PC5H4ZA/kafka/tests/kafkatest/tests/connect/connect_rest_test.py",
>  line 92, in test_rest_api
> self.verify_config(self.FILE_SINK_CONNECTOR, self.FILE_SINK_CONFIGS, 
> configs)
>   File 
> "/home/jenkins/workspace/system-test-kafka-ci_trunk-STZ4BWS25LWQAJN6JJZHKK6GP4YF7HCVYRFRFXB2KMAQ4PC5H4ZA/kafka/tests/kafkatest/tests/connect/connect_rest_test.py",
>  line 200, in verify_config
> assert config_def == set(config_names)
> AssertionError
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5563) Clarify handling of connector name in config

2017-11-25 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5563.
--
   Resolution: Fixed
Fix Version/s: 1.1.0

Issue resolved by pull request 4230
[https://github.com/apache/kafka/pull/4230]

> Clarify handling of connector name in config 
> -
>
> Key: KAFKA-5563
> URL: https://issues.apache.org/jira/browse/KAFKA-5563
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Sönke Liebau
>Assignee: Sönke Liebau
>Priority: Minor
> Fix For: 1.1.0
>
>
> The connector name is currently being stored in two places, once at the root 
> level of the connector and once in the config:
> {code:java}
> {
>   "name": "test",
>   "config": {
>   "connector.class": 
> "org.apache.kafka.connect.tools.MockSinkConnector",
>   "tasks.max": "3",
>   "topics": "test-topic",
>   "name": "test"
>   },
>   "tasks": [
>   {
>   "connector": "test",
>   "task": 0
>   }
>   ]
> }
> {code}
> If no name is provided in the "config" element, then the name from the root 
> level is [copied there when the connector is being 
> created|https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/resources/ConnectorsResource.java#L95].
>  If however a name is provided in the config then it is not touched, which 
> means it is possible to create a connector with a different name at the root 
> level and in the config like this:
> {code:java}
> {
>   "name": "test1",
>   "config": {
>   "connector.class": 
> "org.apache.kafka.connect.tools.MockSinkConnector",
>   "tasks.max": "3",
>   "topics": "test-topic",
>   "name": "differentname"
>   },
>   "tasks": [
>   {
>   "connector": "test1",
>   "task": 0
>   }
>   ]
> }
> {code}
> I am not aware of any issues that this currently causes, but it is at least 
> confusing, probably not intended behavior, and definitely bears potential for 
> bugs if different functions take the name from different places.
> Would it make sense to add a check to reject requests that provide different 
> names in the request and the config section?
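> Such a check might look like this (a sketch; names and the exception type are 
> illustrative):
> {code:java}
> import java.util.Map;
> 
> public class NameCheck {
>     // Reject a create request whose root-level name disagrees with config["name"].
>     static void validate(String rootName, Map<String, String> config) {
>         String configName = config.get("name");
>         if (configName != null && !configName.equals(rootName)) {
>             throw new IllegalArgumentException("Connector name in config (" + configName
>                     + ") does not match the request name (" + rootName + ")");
>         }
>     }
> }
> {code}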



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4827) Kafka connect: error with special characters in connector name

2017-11-22 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-4827.
--
   Resolution: Fixed
Fix Version/s: (was: 0.11.0.3)
   (was: 0.10.2.2)
   (was: 0.10.1.2)
   1.0.1

Issue resolved by pull request 4205
[https://github.com/apache/kafka/pull/4205]

> Kafka connect: error with special characters in connector name
> --
>
> Key: KAFKA-4827
> URL: https://issues.apache.org/jira/browse/KAFKA-4827
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.1.0
>Reporter: Aymeric Bouvet
>Assignee: Arjun Satish
>Priority: Minor
> Fix For: 1.1.0, 1.0.1
>
>
> When creating a connector, if the connector name (and possibly other 
> properties) ends with a carriage return, Kafka Connect will create the config 
> but report an error:
> {code}
> cat << EOF > file-connector.json
> {
>   "name": "file-connector\r",
>   "config": {
> "topic": "kafka-connect-logs\r",
> "tasks.max": "1",
> "file": "/var/log/ansible-confluent/connect.log",
> "connector.class": 
> "org.apache.kafka.connect.file.FileStreamSourceConnector"
>   }
> }
> EOF
> curl -X POST -H "Content-Type: application/json" -H "Accept: 
> application/json" -d @file-connector.json localhost:8083/connectors 
> {code}
> returns an error 500 and logs the following
> {code}
> [2017-03-01 18:25:23,895] WARN  (org.eclipse.jetty.servlet.ServletHandler)
> javax.servlet.ServletException: java.lang.IllegalArgumentException: Illegal 
> character in path at index 27: /connectors/file-connector4
> at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:489)
> at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
> at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:812)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:587)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at 
> org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:159)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalArgumentException: Illegal character in path at 
> index 27: /connectors/file-connector4
> at java.net.URI.create(URI.java:852)
> at 
> org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource.createConnector(ConnectorsResource.java:100)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81)
> at 
> org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:144)
> 

[jira] [Created] (KAFKA-6253) Improve sink connector topic regex validation

2017-11-21 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-6253:


 Summary: Improve sink connector topic regex validation
 Key: KAFKA-6253
 URL: https://issues.apache.org/jira/browse/KAFKA-6253
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Ewen Cheslack-Postava
Assignee: Randall Hauch
 Fix For: 1.1.0


KAFKA-3073 adds topic regex support for sink connectors. The addition requires 
that you only specify one of topics or topics.regex settings. This is being 
validated in one place, but not during submission of connectors. We should 
improve this since this means it's possible to get a bad connector config into 
the config topic.

For more detailed discussion, see 
https://github.com/apache/kafka/pull/4151#pullrequestreview-77300221



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-6168) Connect Schema comparison is slow for large schemas

2017-11-21 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6168.
--
   Resolution: Fixed
Fix Version/s: 1.1.0

Issue resolved by pull request 4176
[https://github.com/apache/kafka/pull/4176]

> Connect Schema comparison is slow for large schemas
> ---
>
> Key: KAFKA-6168
> URL: https://issues.apache.org/jira/browse/KAFKA-6168
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 1.0.0
>Reporter: Randall Hauch
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 1.1.0
>
> Attachments: 6168.v1.txt
>
>
> The {{ConnectSchema}} implementation computes the hash code every time it's 
> needed, and {{equals(Object)}} is a deep equality check. This extra work can 
> be expensive for large schemas, especially in code like the {{AvroConverter}} 
> (or rather {{AvroData}} in the converter) that uses instances as keys in a 
> hash map that then requires significant use of {{hashCode}} and {{equals}}.
> The {{ConnectSchema}} is an immutable object and should at a minimum 
> precompute the hash code. Also, the order that the fields are compared in 
> {{equals(...)}} should use the cheapest comparisons first (e.g., the {{name}} 
> field is one of the _last_ fields to be checked). Finally, it might be worth 
> considering having each instance precompute and cache a string or byte[] 
> representation of all fields that can be used for faster equality checking.
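> The precomputation idea in a nutshell (a generic sketch, not the actual 
> ConnectSchema code):
> {code:java}
> import java.util.ArrayList;
> import java.util.Collections;
> import java.util.List;
> import java.util.Objects;
> 
> public final class CachedHashValue {
>     private final String name;
>     private final List<String> fields;
>     private final int hash; // computed once; the class is immutable
> 
>     public CachedHashValue(String name, List<String> fields) {
>         this.name = name;
>         this.fields = Collections.unmodifiableList(new ArrayList<>(fields));
>         this.hash = Objects.hash(name, this.fields);
>     }
> 
>     @Override
>     public int hashCode() {
>         return hash;
>     }
> 
>     @Override
>     public boolean equals(Object o) {
>         if (this == o)
>             return true;
>         if (!(o instanceof CachedHashValue))
>             return false;
>         CachedHashValue that = (CachedHashValue) o;
>         // Cheapest comparisons first: the cached hash, then the deep checks.
>         return hash == that.hash && Objects.equals(name, that.name) && fields.equals(that.fields);
>     }
> }
> {code}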



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-6218) Optimize condition in if statement to reduce the number of comparisons

2017-11-16 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6218.
--
Resolution: Fixed
  Reviewer: Ewen Cheslack-Postava

> Optimize condition in if statement to reduce the number of comparisons
> --
>
> Key: KAFKA-6218
> URL: https://issues.apache.org/jira/browse/KAFKA-6218
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.9.0.0, 0.9.0.1, 0.10.0.0, 0.10.0.1, 0.10.1.0, 
> 0.10.1.1, 0.10.2.0, 0.10.2.1, 0.11.0.0, 0.11.0.1, 1.0.0
>Reporter: sachin bhalekar
>Assignee: sachin bhalekar
>Priority: Trivial
>  Labels: newbie
> Fix For: 1.1.0
>
> Attachments: kafka_optimize_if.JPG
>
>
> Optimizing the condition in the *if* statement 
> *(schema.name() == null || !(schema.name().equals(LOGICAL_NAME)))*, which
> requires two comparisons in the worst case, with
> *(!LOGICAL_NAME.equals(schema.name()))*, which requires a single comparison
> in all cases and _avoids a null pointer exception_.
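> The two forms side by side (a standalone sketch; the constant is just an 
> example):
> {code:java}
> import org.apache.kafka.connect.data.Schema;
> 
> public class LogicalNameCheck {
>     static final String LOGICAL_NAME = "org.apache.kafka.connect.data.Date";
> 
>     // Before: up to two comparisons, and schema.name() is dereferenced twice.
>     static boolean isNotLogicalVerbose(Schema schema) {
>         return schema.name() == null || !(schema.name().equals(LOGICAL_NAME));
>     }
> 
>     // After: one null-safe comparison with the constant on the left.
>     static boolean isNotLogicalCompact(Schema schema) {
>         return !LOGICAL_NAME.equals(schema.name());
>     }
> }
> {code}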



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-6087) Scanning plugin.path needs to support relative symlinks

2017-10-19 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6087.
--
   Resolution: Fixed
Fix Version/s: 1.1.0

Issue resolved by pull request 4092
[https://github.com/apache/kafka/pull/4092]

> Scanning plugin.path needs to support relative symlinks
> ---
>
> Key: KAFKA-6087
> URL: https://issues.apache.org/jira/browse/KAFKA-6087
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Konstantine Karantasis
>Assignee: Konstantine Karantasis
>Priority: Blocker
> Fix For: 1.1.0, 1.0.0, 0.11.0.2
>
>   Original Estimate: 6h
>  Remaining Estimate: 6h
>
> Discovery of Kafka Connect plugins supports symbolic links from within the 
> {{plugin.path}} locations, but this ability is restricted to absolute 
> symbolic links.
> It's essential to support relative symbolic links, as this is the most common 
> use case from within the plugin locations. 
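> For what it's worth, resolving a possibly-relative symlink target in plain 
> Java NIO looks like this (the paths are made up):
> {code:java}
> import java.io.IOException;
> import java.nio.file.Files;
> import java.nio.file.Path;
> import java.nio.file.Paths;
> 
> public class ResolveSymlink {
>     public static void main(String[] args) throws IOException {
>         Path link = Paths.get("/opt/connect/plugins/my-plugin.jar");
>         Path target = Files.readSymbolicLink(link); // may be relative, e.g. ../shared/my-plugin.jar
>         Path resolved = target.isAbsolute()
>                 ? target
>                 : link.getParent().resolve(target).normalize();
>         System.out.println(resolved);
>     }
> }
> {code}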



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5953) Connect classloader isolation may be broken for JDBC drivers

2017-10-06 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5953.
--
   Resolution: Fixed
Fix Version/s: 1.0.0
   1.1.0

Issue resolved by pull request 4030
[https://github.com/apache/kafka/pull/4030]

> Connect classloader isolation may be broken for JDBC drivers
> 
>
> Key: KAFKA-5953
> URL: https://issues.apache.org/jira/browse/KAFKA-5953
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Jiri Pechanec
>Assignee: Konstantine Karantasis
>Priority: Critical
> Fix For: 1.1.0, 1.0.0
>
>
> Let's suppose there are two connectors deployed:
> # using JDBC driver (Debezium MySQL connector)
> # using PostgreSQL JDBC driver (JDBC sink).
> Connector 1 is started first - it executes a statement
> {code:java}
> Connection conn = DriverManager.getConnection(url, props);
> {code}
> As a result, {{DriverManager}} calls {{ServiceLoader}} and searches for all 
> JDBC drivers. The Postgres driver from connector 2) is found and associated with 
> the classloader from connector 1).
> Connector 2 is started after that - it executes a statement
> {code:java}
> connection = DriverManager.getConnection(url, username, password);
> {code}
> DriverManager finds the driver that was loaded in the previous step, but because 
> the calling classloader is now different - we are now under classloader 2) - it 
> refuses to return the class and no JDBC driver is found.
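> A common workaround for DriverManager's caller-classloader check is to bypass the
> global registry and instantiate the driver from the connector's own classloader (a
> sketch of the general technique, not the actual Connect fix):
> {code:java}
> import java.sql.Connection;
> import java.sql.Driver;
> import java.util.Properties;
>
> public class IsolatedDriverLoading {
>     public static Connection connect(ClassLoader pluginLoader, String driverClass,
>                                      String url, Properties props) throws Exception {
>         // Load the driver through the connector's classloader and invoke it
>         // directly, so DriverManager's per-caller visibility rules never apply.
>         Driver driver = (Driver) Class.forName(driverClass, true, pluginLoader)
>                 .getDeclaredConstructor().newInstance();
>         return driver.connect(url, props);
>     }
> }
> {code}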



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5903) Create Connect metrics for workers

2017-10-05 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5903.
--
Resolution: Fixed

Issue resolved by pull request 4011
[https://github.com/apache/kafka/pull/4011]

> Create Connect metrics for workers
> --
>
> Key: KAFKA-5903
> URL: https://issues.apache.org/jira/browse/KAFKA-5903
> Project: Kafka
>  Issue Type: Sub-task
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Blocker
> Fix For: 1.0.0
>
>
> See KAFKA-2376 for parent task and 
> [KIP-196|https://cwiki.apache.org/confluence/display/KAFKA/KIP-196%3A+Add+metrics+to+Kafka+Connect+framework]
>  for the details on the metrics. This subtask is to create the "Worker 
> Metrics".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-6008) Kafka Connect: Unsanitized workerID causes exception during startup

2017-10-04 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-6008.
--
Resolution: Fixed

Issue resolved by pull request 4012
[https://github.com/apache/kafka/pull/4012]

> Kafka Connect: Unsanitized workerID causes exception during startup
> ---
>
> Key: KAFKA-6008
> URL: https://issues.apache.org/jira/browse/KAFKA-6008
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.0
> Environment: MacOS, Java 1.8.0_77-b03
>Reporter: Jakub Scholz
>Assignee: Jakub Scholz
> Fix For: 1.0.0
>
>
> When Kafka Connect starts, it seems to use the unsanitized workerId when creating 
> metrics. As a result it throws the following exception:
> {code}
> [2017-10-04 13:16:08,886] WARN Error registering AppInfo mbean 
> (org.apache.kafka.common.utils.AppInfoParser:66)
> javax.management.MalformedObjectNameException: Invalid character ':' in value 
> part of property
>   at javax.management.ObjectName.construct(ObjectName.java:618)
>   at javax.management.ObjectName.<init>(ObjectName.java:1382)
>   at 
> org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:60)
>   at 
> org.apache.kafka.connect.runtime.ConnectMetrics.<init>(ConnectMetrics.java:77)
>   at org.apache.kafka.connect.runtime.Worker.<init>(Worker.java:88)
>   at 
> org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:81)
> {code}
> It looks like in my case the generated workerId is <host>:<port>. The 
> workerId should be sanitized before creating the metric.
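> One way to sanitize the value is {{ObjectName.quote}}, which escapes characters such
> as ':' that are invalid in an unquoted property value (a sketch of the approach; the
> domain and property names are illustrative):
> {code:java}
> import javax.management.MalformedObjectNameException;
> import javax.management.ObjectName;
>
> public class WorkerIdSanitizer {
>     public static ObjectName appInfoName(String workerId)
>             throws MalformedObjectNameException {
>         // Quote the workerId so values like "host:port" become legal.
>         String safeId = ObjectName.quote(workerId);
>         return new ObjectName("kafka.connect:type=app-info,id=" + safeId);
>     }
> }
> {code}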



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6011) AppInfoParser should only use metrics API and should not register JMX mbeans directly

2017-10-04 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-6011:


 Summary: AppInfoParser should only use metrics API and should not 
register JMX mbeans directly
 Key: KAFKA-6011
 URL: https://issues.apache.org/jira/browse/KAFKA-6011
 Project: Kafka
  Issue Type: Bug
  Components: metrics
Reporter: Ewen Cheslack-Postava
Priority: Minor


AppInfoParser collects info about the app ID, version, and commit ID and logs 
them + exposes corresponding metrics. For some reason we ended up with the app 
ID metric being registered directly to JMX while the version and commit ID use 
the metrics API. This means the app ID would not be accessible to custom 
metrics reporters.

This isn't a huge loss as this is probably a rarely used metric, but we should 
really only be using the metrics API. Only using the metrics API would also 
reduce and centralize the places we need to do name mangling to handle 
characters that might not be valid for metrics.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5990) Add generated documentation for Connect metrics

2017-10-04 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5990.
--
Resolution: Fixed

Issue resolved by pull request 3987
[https://github.com/apache/kafka/pull/3987]

> Add generated documentation for Connect metrics
> ---
>
> Key: KAFKA-5990
> URL: https://issues.apache.org/jira/browse/KAFKA-5990
> Project: Kafka
>  Issue Type: Sub-task
>  Components: documentation, KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Blocker
> Fix For: 1.0.0
>
>
> KAFKA-5191 recently added a new {{MetricNameTemplate}} that is used to create 
> the {{MetricName}} objects in the producer and consumer, as well as in the 
> newly-added generation of metric documentation. The {{Metrics.toHtmlTable}} 
> method then takes these templates and generates HTML documentation for the 
> metrics.
> Change the Connect metrics to use these templates and update the build to 
> generate these metrics and include them in the Kafka documentation.
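> A rough sketch of the template-based generation (the metric shown is illustrative):
> {code:java}
> import java.util.Collections;
> import java.util.List;
> import org.apache.kafka.common.MetricNameTemplate;
> import org.apache.kafka.common.metrics.Metrics;
>
> public class ConnectMetricsDocGenerator {
>     public static String htmlTable() {
>         List<MetricNameTemplate> templates = Collections.singletonList(
>                 new MetricNameTemplate("source-record-poll-rate", "source-task-metrics",
>                         "The average per-second number of records polled.",
>                         "connector", "task"));
>         // Render all templates into the HTML table embedded in the docs build.
>         return Metrics.toHtmlTable("kafka.connect", templates);
>     }
> }
> {code}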



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6009) Fix formatting of autogenerated docs tables

2017-10-04 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-6009:


 Summary: Fix formatting of autogenerated docs tables
 Key: KAFKA-6009
 URL: https://issues.apache.org/jira/browse/KAFKA-6009
 Project: Kafka
  Issue Type: Sub-task
  Components: documentation
Reporter: Ewen Cheslack-Postava


In reviewing https://github.com/apache/kafka/pull/3987 I noticed that the 
autogenerated tables currently differ from the manually created ones. The 
manual ones have 3 columns -- metric/attribute name, description, and mbean 
name with a regex. The new ones have 3 columns, but for some reason the first 
one is just empty, the second is the metric/attribute name, the last one is the 
description, and there is no regex.

We could potentially just drop to two columns since the regex column is 
generally very repetitive and is now handled by a header row giving the general 
group mbean name info/format. The one thing that seems to currently be missing 
is the regex that would restrict the format of these (although these weren't 
really technically enforced and some of the restrictions are being removed, 
e.g. see some of the follow up discussion to 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-190%3A+Handle+client-ids+consistently+between+clients+and+brokers).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5902) Create Connect metrics for sink tasks

2017-10-03 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5902.
--
Resolution: Fixed

Issue resolved by pull request 3975
[https://github.com/apache/kafka/pull/3975]

> Create Connect metrics for sink tasks
> -
>
> Key: KAFKA-5902
> URL: https://issues.apache.org/jira/browse/KAFKA-5902
> Project: Kafka
>  Issue Type: Sub-task
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Blocker
> Fix For: 1.0.0
>
>
> See KAFKA-2376 for parent task and 
> [KIP-196|https://cwiki.apache.org/confluence/display/KAFKA/KIP-196%3A+Add+metrics+to+Kafka+Connect+framework]
>  for the details on the metrics. This subtask is to create the "Sink Task 
> Metrics".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5901) Create Connect metrics for source tasks

2017-09-28 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5901.
--
Resolution: Fixed

Issue resolved by pull request 3959
[https://github.com/apache/kafka/pull/3959]

> Create Connect metrics for source tasks
> ---
>
> Key: KAFKA-5901
> URL: https://issues.apache.org/jira/browse/KAFKA-5901
> Project: Kafka
>  Issue Type: Sub-task
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Blocker
> Fix For: 1.0.0
>
>
> See KAFKA-2376 for parent task and 
> [KIP-196|https://cwiki.apache.org/confluence/display/KAFKA/KIP-196%3A+Add+metrics+to+Kafka+Connect+framework]
>  for the details on the metrics. This subtask is to create the "Source Task 
> Metrics".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5900) Create Connect metrics common to source and sink tasks

2017-09-26 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5900.
--
Resolution: Fixed
  Reviewer: Ewen Cheslack-Postava

Fixed via https://github.com/apache/kafka/pull/3911

> Create Connect metrics common to source and sink tasks
> --
>
> Key: KAFKA-5900
> URL: https://issues.apache.org/jira/browse/KAFKA-5900
> Project: Kafka
>  Issue Type: Sub-task
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Blocker
> Fix For: 1.0.0
>
>
> See KAFKA-2376 for parent task and 
> [KIP-196|https://cwiki.apache.org/confluence/display/KAFKA/KIP-196%3A+Add+metrics+to+Kafka+Connect+framework]
>  for the details on the metrics. This subtask is to create the "Common Task 
> Metrics".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5330) Use per-task converters in Connect

2017-09-21 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5330.
--
   Resolution: Fixed
Fix Version/s: 1.0.0

Issue resolved by pull request 3196
[https://github.com/apache/kafka/pull/3196]

> Use per-task converters in Connect
> --
>
> Key: KAFKA-5330
> URL: https://issues.apache.org/jira/browse/KAFKA-5330
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Ewen Cheslack-Postava
> Fix For: 1.0.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Because Connect started with a worker-wide model of data formats, we 
> currently allocate a single Converter per worker and only allocate an 
> independent one when the user overrides the converter.
> This can lead to performance problems when the worker-level default converter 
> is used by a large number of tasks because converters need to be threadsafe 
> to support this model and they may spend a lot of time just on 
> synchronization.
> We could, instead, simply allocate one converter per task. There is some 
> overhead involved, but generally it shouldn't be that large. For example, 
> Confluent's Avro converters will each have their own schema cache and have to 
> make their own calls to the schema registry API, but these are relatively 
> small, likely inconsequential compared to any normal overhead we would 
> already have for creating and managing each task. 
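> A minimal sketch of the per-task model (configuration details elided; the helper is
> illustrative, not the actual Worker code):
> {code:java}
> import java.util.Map;
> import org.apache.kafka.connect.json.JsonConverter;
> import org.apache.kafka.connect.storage.Converter;
>
> public class PerTaskConverters {
>     // Per-task model: build a fresh instance for each task, trading a little
>     // duplicated state (e.g. schema caches) for zero cross-task contention on
>     // a shared, necessarily-threadsafe instance.
>     public static Converter newTaskConverter(Map<String, Object> config, boolean isKey) {
>         Converter converter = new JsonConverter();
>         converter.configure(config, isKey);
>         return converter;
>     }
> }
> {code}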



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5657) Connect REST API should include the connector type when describing a connector

2017-09-20 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5657.
--
Resolution: Fixed

Issue resolved by pull request 3812
[https://github.com/apache/kafka/pull/3812]

> Connect REST API should include the connector type when describing a connector
> --
>
> Key: KAFKA-5657
> URL: https://issues.apache.org/jira/browse/KAFKA-5657
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Randall Hauch
>  Labels: needs-kip, newbie
> Fix For: 1.0.0
>
> Attachments: 5657.v1.txt
>
>
> Kafka Connect's REST API's {{connectors/}} and {{connectors/\{name\}}} 
> endpoints should include whether the connector is a source or a sink.
> See KAFKA-4343 and KIP-151 for the related modification of the 
> {{connector-plugins}} endpoint.
> Also see KAFKA-4279 for converter-related endpoints.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5742) Support passing ZK chroot in system tests

2017-08-17 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5742.
--
   Resolution: Fixed
Fix Version/s: 1.0.0
   0.11.0.1

> Support passing ZK chroot in system tests
> -
>
> Key: KAFKA-5742
> URL: https://issues.apache.org/jira/browse/KAFKA-5742
> Project: Kafka
>  Issue Type: Test
>  Components: system tests
>Reporter: Xavier Léauté
>Assignee: Xavier Léauté
> Fix For: 0.11.0.1, 1.0.0
>
>
> Currently spinning up multiple Kafka clusters in a system tests requires at 
> least one ZK node per Kafka cluster, which wastes a lot of resources. We 
> currently also don't test anything outside of the ZK root path.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5704) Auto topic creation causes failure with older clusters

2017-08-08 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5704.
--
   Resolution: Fixed
Fix Version/s: 1.0.0

Issue resolved by pull request 3641
[https://github.com/apache/kafka/pull/3641]

> Auto topic creation causes failure with older clusters
> --
>
> Key: KAFKA-5704
> URL: https://issues.apache.org/jira/browse/KAFKA-5704
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Randall Hauch
> Fix For: 1.0.0, 0.11.0.1
>
>
> The new automatic internal topic creation always tries to check the topic and 
> create it if missing. However, older brokers that we should still be 
> compatible with don't support some requests that are used. This results in an 
> UnsupportedVersionException, which some of the TopicAdmin code notes it 
> can throw but which then isn't caught in the initializers, causing the entire 
> process to fail.
> We should probably just catch it, log a message, and allow things to proceed 
> hoping that the user has already created the topics correctly (as we used to 
> do).
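> A sketch of the proposed handling ({{createTopicIfMissing}} is a hypothetical helper,
> not the actual TopicAdmin API):
> {code:java}
> try {
>     admin.createTopicIfMissing(topicName);
> } catch (UnsupportedVersionException e) {
>     // Older brokers don't support the admin requests we issue here; log and
>     // proceed, assuming the user pre-created the topic (as we used to require).
>     log.warn("Broker does not support topic admin APIs; assuming topic '{}' exists",
>             topicName, e);
> }
> {code}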



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5704) Auto topic creation causes failure with older clusters

2017-08-04 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-5704:


 Summary: Auto topic creation causes failure with older clusters
 Key: KAFKA-5704
 URL: https://issues.apache.org/jira/browse/KAFKA-5704
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 0.11.0.0
Reporter: Ewen Cheslack-Postava


The new automatic internal topic creation always tries to check the topic and 
create it if missing. However, older brokers that we should still be compatible 
with don't support some requests that are used. This results in an 
UnsupportedVersionException which some of the TopicAdmin code notes that it can 
throw but then isn't caught in the initializers, causing the entire process to 
fail.

We should probably just catch it, log a message, and allow things to proceed 
hoping that the user has already created the topics correctly (as we used to 
do).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5667) kafka.api.LogDirFailureTest testProduceAfterLogDirFailure flaky test

2017-07-27 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-5667:


 Summary: kafka.api.LogDirFailureTest testProduceAfterLogDirFailure 
flaky test
 Key: KAFKA-5667
 URL: https://issues.apache.org/jira/browse/KAFKA-5667
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Ewen Cheslack-Postava


We observed this on our Jenkins build against trunk:

{quote}
10:59:38 kafka.api.LogDirFailureTest > testProduceAfterLogDirFailure FAILED
10:59:38 java.lang.AssertionError
10:59:38 at org.junit.Assert.fail(Assert.java:86)
10:59:38 at org.junit.Assert.assertTrue(Assert.java:41)
10:59:38 at org.junit.Assert.assertTrue(Assert.java:52)
10:59:38 at 
kafka.api.LogDirFailureTest.testProduceAfterLogDirFailure(LogDirFailureTest.scala:76)
{quote}

Not sure yet how flaky the test is, we've only seen this once so far.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5089) JAR mismatch in KafkaConnect leads to NoSuchMethodError in HDP 2.6

2017-07-21 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5089.
--
Resolution: Fixed

Going to close this since it should be resolved by 
[KIP-146|https://cwiki.apache.org/confluence/display/KAFKA/KIP-146+-+Classloading+Isolation+in+Connect]
 which provides better classloader isolation. Please reopen if this is still an 
issue even after that feature was added.

> JAR mismatch in KafkaConnect leads to NoSuchMethodError in HDP 2.6
> --
>
> Key: KAFKA-5089
> URL: https://issues.apache.org/jira/browse/KAFKA-5089
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.1.1
> Environment: HDP 2.6, Centos 7.3.1611, 
> kafka-0.10.1.2.6.0.3-8.el6.noarch
>Reporter: Christoph Körner
>
> When I follow the steps on the Getting Started Guide of KafkaConnect 
> (https://kafka.apache.org/quickstart#quickstart_kafkaconnect), it throws a 
> NoSuchMethodError. 
> {code:borderStyle=solid}
> [root@devbox kafka-broker]# ./bin/connect-standalone.sh 
> config/connect-standalone.properties config/connect-file-source.properties 
> config/ connect-file-sink.properties
> [2017-04-19 14:38:36,583] INFO StandaloneConfig values:
> access.control.allow.methods =
> access.control.allow.origin =
> bootstrap.servers = [localhost:6667]
> internal.key.converter = class 
> org.apache.kafka.connect.json.JsonConverter
> internal.value.converter = class 
> org.apache.kafka.connect.json.JsonConverter
> key.converter = class org.apache.kafka.connect.json.JsonConverter
> offset.flush.interval.ms = 1
> offset.flush.timeout.ms = 5000
> offset.storage.file.filename = /tmp/connect.offsets
> rest.advertised.host.name = null
> rest.advertised.port = null
> rest.host.name = null
> rest.port = 8083
> task.shutdown.graceful.timeout.ms = 5000
> value.converter = class org.apache.kafka.connect.json.JsonConverter
>  (org.apache.kafka.connect.runtime.standalone.StandaloneConfig:180)
> [2017-04-19 14:38:36,756] INFO Logging initialized @714ms 
> (org.eclipse.jetty.util.log:186)
> [2017-04-19 14:38:36,871] INFO Kafka Connect starting 
> (org.apache.kafka.connect.runtime.Connect:52)
> [2017-04-19 14:38:36,872] INFO Herder starting 
> (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:70)
> [2017-04-19 14:38:36,872] INFO Worker starting 
> (org.apache.kafka.connect.runtime.Worker:114)
> [2017-04-19 14:38:36,873] INFO Starting FileOffsetBackingStore with file 
> /tmp/connect.offsets 
> (org.apache.kafka.connect.storage.FileOffsetBackingStore:60)
> [2017-04-19 14:38:36,877] INFO Worker started 
> (org.apache.kafka.connect.runtime.Worker:119)
> [2017-04-19 14:38:36,878] INFO Herder started 
> (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:72)
> [2017-04-19 14:38:36,878] INFO Starting REST server 
> (org.apache.kafka.connect.runtime.rest.RestServer:98)
> [2017-04-19 14:38:37,077] INFO jetty-9.2.15.v20160210 
> (org.eclipse.jetty.server.Server:327)
> [2017-04-19 14:38:37,154] WARN FAILED 
> o.e.j.s.ServletContextHandler@3c46e67a{/,null,STARTING}: 
> java.lang.NoSuchMethodError: 
> javax.ws.rs.core.Application.getProperties()Ljava/util/Map; 
> (org.eclipse.jetty.util.component.AbstractLifeCycle:212)
> java.lang.NoSuchMethodError: 
> javax.ws.rs.core.Application.getProperties()Ljava/util/Map;
> at 
> org.glassfish.jersey.server.ApplicationHandler.<init>(ApplicationHandler.java:331)
> at 
> org.glassfish.jersey.servlet.WebComponent.<init>(WebComponent.java:392)
> at 
> org.glassfish.jersey.servlet.ServletContainer.init(ServletContainer.java:177)
> at 
> org.glassfish.jersey.servlet.ServletContainer.init(ServletContainer.java:369)
> at javax.servlet.GenericServlet.init(GenericServlet.java:241)
> at 
> org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:616)
> at 
> org.eclipse.jetty.servlet.ServletHolder.initialize(ServletHolder.java:396)
> at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:871)
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741)
> at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
> at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
> at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
> at 
> org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
>   

[jira] [Resolved] (KAFKA-5623) ducktape kafka service: do not assume Service contains num_nodes

2017-07-20 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5623.
--
   Resolution: Fixed
Fix Version/s: 0.11.0.1
   0.10.2.2
   0.11.1.0

Issue resolved by pull request 3557
[https://github.com/apache/kafka/pull/3557]

> ducktape kafka service: do not assume Service contains num_nodes
> 
>
> Key: KAFKA-5623
> URL: https://issues.apache.org/jira/browse/KAFKA-5623
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 0.11.0.1
>Reporter: Colin P. McCabe
> Fix For: 0.11.1.0, 0.10.2.2, 0.11.0.1
>
>
> In the ducktape kafka service, we should not assume that {{ducktape.Service}} 
> contains {{num_nodes}}.  In newer versions of ducktape, it does not.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5614) pep8/flake8 checks for system tests

2017-07-19 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-5614:


 Summary: pep8/flake8 checks for system tests
 Key: KAFKA-5614
 URL: https://issues.apache.org/jira/browse/KAFKA-5614
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Affects Versions: 0.11.0.0
Reporter: Ewen Cheslack-Postava


Currently the python code for system tests doesn't have any checks on style. 
Similar to checkstyle in java/scala, we should be checking the style for Python 
code. This consists of at least 2 parts:

* Add the configs that allow you to run these checks
* Get the Kafka CI systems to run these as a regular part of the build process, 
in addition to building & testing the java/scala code.

The latter is hard to estimate the effort for since it depends on what is 
available on the Apache Jenkins slaves that we build on. Hopefully at least 
python + pip would be there such that augmenting the Jenkins build to also run 
flake8 wouldn't be too tough.

If it makes sense while doing this, we might also want to get Python3 
compatibility and even just move to it being a requirement. Whether this is a 
requirement for this JIRA might depend on the Apache jenkins environments and 
what python versions they have available.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5613) Deprecate JmxTool?

2017-07-19 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-5613:


 Summary: Deprecate JmxTool?
 Key: KAFKA-5613
 URL: https://issues.apache.org/jira/browse/KAFKA-5613
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.11.0.0
Reporter: Ewen Cheslack-Postava


According to git-blame, JmxTool has been around since October 2011. We use it 
in system tests, but we are thinking it might be best to replace it: 
https://issues.apache.org/jira/browse/KAFKA-5612

When making modifications for system tests, we've had to take into account 
compatibility because this tool is technically included in our distribution 
and, perhaps unintentionally, a public utility.

We know that "real" tools for JMX, like jmxtrans, are more commonly used, but 
we don't know who might be using JmxTool simply because it ships with Kafka. 
That said, it also isn't documented in the Kafka documentation, so you probably 
have to dig around to find it.

Hopefully we can deprecate this and eventually move it either to a jar that is 
only used for system tests, or even better just remove it entirely. To do any 
of this, we'd probably need to do at least a cursory survey of the community to 
get a feel for usage level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5612) Replace JmxTool with a MetricsReporter in system tests

2017-07-19 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-5612:


 Summary: Replace JmxTool with a MetricsReporter in system tests
 Key: KAFKA-5612
 URL: https://issues.apache.org/jira/browse/KAFKA-5612
 Project: Kafka
  Issue Type: Bug
  Components: system tests
Affects Versions: 0.11.0.0
Reporter: Ewen Cheslack-Postava


I marked this as affecting 0.11.0.0, but it affects all earlier versions as 
well, at least as far back as 0.10.1.

The discussion in https://github.com/apache/kafka/pull/3547 probably gives the 
clearest explanation, but the basic issue is that ever since JmxMixin was 
introduced to the system tests, we've faced race condition issues because the 
second process that performs the monitoring has various timing issues with the 
process it is monitoring. It can be both too fast and too slow, and the exact 
conditions it needs to wait for may not even be externally visible (e.g. that 
all metrics have been registered).

An alternative solution would be to introduce a MetricsReporter implementation 
that accomplishes the same thing, but just requires overriding some configs for 
the service that is utilizing JmxMixin. In particular, the reporter could 
output data to a simple file, ideally would not require all metrics that are 
reported to be available up front (i.e., no CSV format that requires a fixed 
header that cannot be changed), and wouldn't have any timing constraints (e.g., 
could at least guarantee that metrics are reported once at the beginning and 
end of the program).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5608) System test failure due to timeout starting Jmx tool

2017-07-19 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5608.
--
   Resolution: Fixed
Fix Version/s: 0.11.0.1
   0.11.1.0

Issue resolved by pull request 3547
[https://github.com/apache/kafka/pull/3547]

> System test failure due to timeout starting Jmx tool
> 
>
> Key: KAFKA-5608
> URL: https://issues.apache.org/jira/browse/KAFKA-5608
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.11.1.0, 0.11.0.1
>
>
> Began seeing this in some failing system tests:
> {code}
> [INFO  - 2017-07-18 14:25:55,375 - background_thread - _protected_worker - 
> lineno:39]: Traceback (most recent call last):
>   File 
> "/home/jenkins/workspace/system-test-kafka-0.11.0/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/services/background_thread.py",
>  line 35, in _protected_worker
> self._worker(idx, node)
>   File 
> "/home/jenkins/workspace/system-test-kafka-0.11.0/kafka/tests/kafkatest/services/console_consumer.py",
>  line 261, in _worker
> self.start_jmx_tool(idx, node)
>   File 
> "/home/jenkins/workspace/system-test-kafka-0.11.0/kafka/tests/kafkatest/services/monitor/jmx.py",
>  line 73, in start_jmx_tool
> wait_until(lambda: self._jmx_has_output(node), timeout_sec=10, 
> backoff_sec=.5, err_msg="%s: Jmx tool took too long to start" % node.account)
>   File 
> "/home/jenkins/workspace/system-test-kafka-0.11.0/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/utils/util.py",
>  line 36, in wait_until
> raise TimeoutError(err_msg)
> TimeoutError: ubuntu@worker7: Jmx tool took too long to start
> {code}
> This is immediately followed by a consumer timeout in the failing cases:
> {code}
> [INFO  - 2017-07-18 14:26:46,907 - runner_client - log - lineno:221]: 
> RunnerClient: 
> kafkatest.tests.core.security_rolling_upgrade_test.TestSecurityRollingUpgrade.test_rolling_upgrade_phase_two.broker_protocol=SASL_SSL.client_protocol=SASL_SSL:
>  FAIL: Consumer failed to consume messages for 60s.
> Traceback (most recent call last):
>   File 
> "/home/jenkins/workspace/system-test-kafka-0.11.0/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/tests/runner_client.py",
>  line 123, in run
> data = self.run_test()
>   File 
> "/home/jenkins/workspace/system-test-kafka-0.11.0/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/tests/runner_client.py",
>  line 176, in run_test
> return self.test_context.function(self.test)
>   File 
> "/home/jenkins/workspace/system-test-kafka-0.11.0/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/mark/_mark.py",
>  line 321, in wrapper
> return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File 
> "/home/jenkins/workspace/system-test-kafka-0.11.0/kafka/tests/kafkatest/tests/core/security_rolling_upgrade_test.py",
>  line 148, in test_rolling_upgrade_phase_two
> self.run_produce_consume_validate(self.roll_in_secured_settings, 
> client_protocol, broker_protocol)
>   File 
> "/home/jenkins/workspace/system-test-kafka-0.11.0/kafka/tests/kafkatest/tests/produce_consume_validate.py",
>  line 106, in run_produce_consume_validate
> self.start_producer_and_consumer()
>   File 
> "/home/jenkins/workspace/system-test-kafka-0.11.0/kafka/tests/kafkatest/tests/produce_consume_validate.py",
>  line 79, in start_producer_and_consumer
> self.consumer_start_timeout_sec)
>   File 
> "/home/jenkins/workspace/system-test-kafka-0.11.0/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.6.0-py2.7.egg/ducktape/utils/util.py",
>  line 36, in wait_until
> raise TimeoutError(err_msg)
> TimeoutError: Consumer failed to consume messages for 60s.
> {code}
> There does not appear to be anything wrong with the consumer in the logs, so 
> the timeout seems to be caused by the Jmx tool timeout.
> Possibly due to https://github.com/apache/kafka/pull/3447/files?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5579) SchemaBuilder.type(Schema.Type) should not allow null.

2017-07-17 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5579.
--
   Resolution: Fixed
Fix Version/s: 0.11.1.0

Issue resolved by pull request 3517
[https://github.com/apache/kafka/pull/3517]

> SchemaBuilder.type(Schema.Type) should not allow null.
> --
>
> Key: KAFKA-5579
> URL: https://issues.apache.org/jira/browse/KAFKA-5579
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0
>Reporter: Jeremy Custenborder
>Assignee: Jeremy Custenborder
>Priority: Minor
> Fix For: 0.11.1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5568) Transformations that mutate topic-partitions break sink connectors that manage their own configuration

2017-07-06 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-5568:


 Summary: Transformations that mutate topic-partitions break sink 
connectors that manage their own configuration
 Key: KAFKA-5568
 URL: https://issues.apache.org/jira/browse/KAFKA-5568
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 0.11.0.0, 0.10.2.1, 0.10.2.0
Reporter: Ewen Cheslack-Postava


KAFKA-5567 describes how offset commits for sink connectors are broken if a 
record's topic-partition is mutated by an SMT, e.g. RegexRouter or 
TimestampRouter.

This is also a problem for sink connectors that manage their own offsets, i.e. 
those that store offsets elsewhere and call SinkTaskContext.rewind(). In this 
case, the transformation has already been applied by the time the SinkTask sees 
it, so there is no way it could correctly track offsets and call rewind() with 
valid values. For example, the offset tracking that Confluent's HDFS connector 
does by encoding offsets in filenames would no longer work. Even if the offsets were 
stored separately in a file rather than in filenames, the connector still would 
never have had the correct offsets to write to that file.

There are a couple of options:

1. Decide that this is an acceptable consequence of combining SMTs with sink 
connectors and it's a limitation we accept. You can either transform the data 
via Kafka Streams instead or accept that you can't do these "routing" type 
operations in the sink connector unless it supports it natively. This *might* 
not be the wrong choice since we think there are very few connectors that track 
their own offsets. In the case of HDFS, we might rarely hit this issue because 
it supports its own file/directory partitioning schemes anyway so doing this 
via SMTs isn't as necessary there.
2. Try to expose the original record information to the sink connector via the 
records. I can think of 2 ways this could be done. The first is to attach the 
original record to each SinkRecord. The cost here is relatively high in terms 
of memory, especially for sink connectors that need to buffer data. The second 
is to add fields to SinkRecords for originalTopic() and originalPartition(). 
This feels a bit ugly to me but might be the least intrusive change API-wise 
and we can guarantee those fields aren't overwritten by not allowing public 
constructors to set them (a rough sketch of this shape follows the list).
3. Try to expose the original record information to the sink connector via a 
new pre-processing callback. The idea is similar to preCommit, but instead 
would happen before any processing occurs. Taken to its logical conclusion this 
turns into a sort of interceptor interface (preConversion, preTransformation, 
put, and preCommit).
4. Add something to the Context that allows the connector to get back at the 
original information. Maybe some sort of IdentityMap 
originalPutRecords() that would let you get a mapping back to the original 
records. One nice aspect of this is that the connector can hold onto the 
original only if it needs it.
5. A very intrusive change/extension to the SinkTask API that passes in pairs 
of <original, transformed> records. Accomplishes the same as 2 but requires 
what I think are more complicated changes. Mentioned for completeness.
6. Something else I haven't thought of?
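A rough sketch of the API shape option 2 describes (field and accessor names are
hypothetical, not an agreed design):
{code:java}
public class SinkRecordOrigin {
    private final String originalTopic;       // topic before any SMT ran
    private final Integer originalPartition;  // partition before any SMT ran

    public SinkRecordOrigin(String originalTopic, Integer originalPartition) {
        this.originalTopic = originalTopic;
        this.originalPartition = originalPartition;
    }

    // Read-only accessors and no public mutators, so transformations cannot
    // overwrite the pre-transformation topic-partition that offset-managing
    // sink connectors need for SinkTaskContext.rewind().
    public String originalTopic() { return originalTopic; }
    public Integer originalPartition() { return originalPartition; }
}
{code}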



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5548) SchemaBuilder does not validate input.

2017-07-01 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5548.
--
   Resolution: Fixed
Fix Version/s: 0.11.1.0

Issue resolved by pull request 3474
[https://github.com/apache/kafka/pull/3474]

> SchemaBuilder does not validate input.
> --
>
> Key: KAFKA-5548
> URL: https://issues.apache.org/jira/browse/KAFKA-5548
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Jeremy Custenborder
>Assignee: Jeremy Custenborder
>Priority: Minor
> Fix For: 0.11.1.0
>
>
> SchemaBuilder.map(), SchemaBuilder.array(), and SchemaBuilder.field() do not 
> validate input. This can cause weird NullPointerException exceptions later. 
> For example I mistakenly called field("somefield", null), then later 
> performed an operation against field.schema() which yielded a null. It would 
> be preferable to throw an exception stating the issue. We could throw an 
> NPE but state what is null - the schema is null in this case, for example.
> {code:java}
>   @Test(expected = NullPointerException.class)
>   public void fieldNameNull() {
> Schema schema = SchemaBuilder.struct()
> .field(null, Schema.STRING_SCHEMA)
> .build();
>   }
>   @Test(expected = NullPointerException.class)
>   public void fieldSchemaNull() {
> Schema schema = SchemaBuilder.struct()
> .field("fieldName", null)
> .build();
>   }
>   @Test(expected = NullPointerException.class)
>   public void arraySchemaNull() {
> Schema schema = SchemaBuilder.array(null)
> .build();
>   }
>   @Test(expected = NullPointerException.class)
>   public void mapKeySchemaNull() {
> Schema schema = SchemaBuilder.map(null, Schema.STRING_SCHEMA)
> .build();
>   }
>   @Test(expected = NullPointerException.class)
>   public void mapValueSchemaNull() {
> Schema schema = SchemaBuilder.map(Schema.STRING_SCHEMA, null)
> .build();
>   }
> {code}
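> A sketch of the kind of validation that would produce clearer errors (the message
> wording is illustrative):
> {code:java}
> public SchemaBuilder field(String fieldName, Schema fieldSchema) {
>     if (fieldName == null || fieldName.isEmpty())
>         throw new SchemaBuilderException("fieldName cannot be null or empty.");
>     if (fieldSchema == null)
>         throw new SchemaBuilderException("fieldSchema for field " + fieldName + " cannot be null.");
>     // ... existing logic to record the field goes here ...
>     return this;
> }
> {code}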



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5540) Deprecate and remove internal converter configs

2017-06-29 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-5540:


 Summary: Deprecate and remove internal converter configs
 Key: KAFKA-5540
 URL: https://issues.apache.org/jira/browse/KAFKA-5540
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 0.11.0.0
Reporter: Ewen Cheslack-Postava


The internal.key.converter and internal.value.converter were originally exposed 
as configs because a) they are actually pluggable and b) providing a default 
would require relying on the JsonConverter always being available, which, until 
we had classloader isolation, could not be guaranteed since it might be removed 
for compatibility reasons.

However, this has ultimately just caused a lot more trouble and confusion than 
it is worth. We should deprecate the configs, give them a default of 
JsonConverter (which is also kind of nice since it results in human-readable 
data in the internal topics), and then ultimately remove them in the next major 
version.

These are all public APIs so this will need a small KIP before we can make the 
change.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5484) Refactor kafkatest docker support

2017-06-29 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-5484.
--
   Resolution: Fixed
Fix Version/s: 0.11.0.1
   0.10.2.2
   0.11.1.0

Issue resolved by pull request 3389
[https://github.com/apache/kafka/pull/3389]

> Refactor kafkatest docker support
> -
>
> Key: KAFKA-5484
> URL: https://issues.apache.org/jira/browse/KAFKA-5484
> Project: Kafka
>  Issue Type: Bug
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
> Fix For: 0.11.1.0, 0.10.2.2, 0.11.0.1
>
>
> Refactor kafkatest docker support to fix some issues.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5498) Connect validation API stops returning recommendations for some fields after the right sequence of requests

2017-06-22 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-5498:


 Summary: Connect validation API stops returning recommendations 
for some fields after the right sequence of requests
 Key: KAFKA-5498
 URL: https://issues.apache.org/jira/browse/KAFKA-5498
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
 Fix For: 0.11.0.0


If you issue the right sequence of requests against this API, it starts 
behaving differently, omitting certain fields (at a minimum recommended 
values, which is how I noticed this). If you start with

{code}
$ curl -X PUT -H "Content-Type: application/json" --data '{"connector.class": 
"org.apache.kafka.connect.file.FileStreamSourceConnector", "name": "file", 
"transforms": "foo"}' 
http://localhost:8083/connector-plugins/org.apache.kafka.connect.file.FileStreamSourceConnector/config/validate
  | jq
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100  5845  100  5730  100   115  36642735 --:--:-- --:--:-- --:--:-- 36496
{
  "name": "org.apache.kafka.connect.file.FileStreamSourceConnector",
  "error_count": 4,
  "groups": [
"Common",
"Transforms",
"Transforms: foo"
  ],
  "configs": [
{
  "definition": {
"name": "name",
"type": "STRING",
"required": true,
"default_value": null,
"importance": "HIGH",
"documentation": "Globally unique name to use for this connector.",
"group": "Common",
"width": "MEDIUM",
"display_name": "Connector name",
"dependents": [],
"order": 1
  },
  "value": {
"name": "name",
"value": "file",
"recommended_values": [],
"errors": [],
"visible": true
  }
},
{
  "definition": {
"name": "connector.class",
"type": "STRING",
"required": true,
"default_value": null,
"importance": "HIGH",
"documentation": "Name or alias of the class for this connector. Must 
be a subclass of org.apache.kafka.connect.connector.Connector. If the connector 
is org.apache.kafka.connect.file.FileStreamSinkConnector, you can either 
specify this full name,  or use \"FileStreamSink\" or 
\"FileStreamSinkConnector\" to make the configuration a bit shorter",
"group": "Common",
"width": "LONG",
"display_name": "Connector class",
"dependents": [],
"order": 2
  },
  "value": {
"name": "connector.class",
"value": "org.apache.kafka.connect.file.FileStreamSourceConnector",
"recommended_values": [],
"errors": [],
"visible": true
  }
},
{
  "definition": {
"name": "tasks.max",
"type": "INT",
"required": false,
"default_value": "1",
"importance": "HIGH",
"documentation": "Maximum number of tasks to use for this connector.",
"group": "Common",
"width": "SHORT",
"display_name": "Tasks max",
"dependents": [],
"order": 3
  },
  "value": {
"name": "tasks.max",
"value": "1",
"recommended_values": [],
"errors": [],
"visible": true
  }
},
{
  "definition": {
"name": "key.converter",
"type": "CLASS",
"required": false,
"default_value": null,
"importance": "LOW",
"documentation": "Converter class used to convert between Kafka Connect 
format and the serialized form that is written to Kafka. This controls the 
format of the keys in messages written to or read from Kafka, and since this is 
independent of connectors it allows any connector to work with any 
serialization format. Examples of common formats include JSON and Avro.",
"group": "Common",
"width": "SHORT",
"display_name": "Key converter class",
"dependents": [],
"order": 4
  },
  "value": {
"name": "key.converter",
"value": null,
"recommended_values": [],
"errors": [],
"visible": true
  }
},
{
  "definition": {
"name": "value.converter",
"type": "CLASS",
"required": false,
"default_value": null,
"importance": "LOW",
"documentation": "Converter class used to convert between Kafka Connect 
format and the serialized form that is written to Kafka. This controls the 
format of the values in messages written to or read from Kafka, and since this 
is independent of connectors it allows any connector to work with any 
serialization format. Examples of common formats include JSON and Avro.",
"group": "Common",
"width": "SHORT",
"display_name": "Value converter 

[jira] [Created] (KAFKA-5475) Connector config validation REST API endpoint not including fields for transformations

2017-06-19 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-5475:


 Summary: Connector config validation REST API endpoint not 
including fields for transformations
 Key: KAFKA-5475
 URL: https://issues.apache.org/jira/browse/KAFKA-5475
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Ewen Cheslack-Postava
Priority: Blocker
 Fix For: 0.11.0.0


An issue with how embedded transformation configurations are included seems to 
have been introduced during 0.11.0.0. We are no longer seeing the 
`transforms.<name>.type` being included in the validation output.

{code}
 curl -X PUT -H "Content-Type: application/json" --data '{"connector.class": 
"org.apache.kafka.connect.file.FileStreamSourceConnector", "transforms": 
"foo,bar"}' 
http://localhost:8083/connector-plugins/org.apache.kafka.connect.file.FileStreamSourceConnector/config/validate
  | jq
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100  3428  100  3325  100   103   344k  10917 --:--:-- --:--:-- --:--:--  360k
{
  "name": "org.apache.kafka.connect.file.FileStreamSourceConnector",
  "error_count": 1,
  "groups": [
"Common",
"Transforms"
  ],
  "configs": [
{
  "definition": {
"name": "value.converter",
"type": "CLASS",
"required": false,
"default_value": null,
"importance": "LOW",
"documentation": "Converter class used to convert between Kafka Connect 
format and the serialized form that is written to Kafka. This controls the 
format of the values in messages written to or read from Kafka, and since this 
is independent of connectors it allows any connector to work with any 
serialization format. Examples of common formats include JSON and Avro.",
"group": "Common",
"width": "SHORT",
"display_name": "Value converter class",
"dependents": [],
"order": 5
  },
  "value": {
"name": "value.converter",
"value": null,
"recommended_values": [],
"errors": [],
"visible": true
  }
},
{
  "definition": {
"name": "name",
"type": "STRING",
"required": true,
"default_value": null,
"importance": "HIGH",
"documentation": "Globally unique name to use for this connector.",
"group": "Common",
"width": "MEDIUM",
"display_name": "Connector name",
"dependents": [],
"order": 1
  },
  "value": {
"name": "name",
"value": null,
"recommended_values": [],
"errors": [
  "Missing required configuration \"name\" which has no default value."
],
"visible": true
  }
},
{
  "definition": {
"name": "tasks.max",
"type": "INT",
"required": false,
"default_value": "1",
"importance": "HIGH",
"documentation": "Maximum number of tasks to use for this connector.",
"group": "Common",
"width": "SHORT",
"display_name": "Tasks max",
"dependents": [],
"order": 3
  },
  "value": {
"name": "tasks.max",
"value": "1",
"recommended_values": [],
"errors": [],
"visible": true
  }
},
{
  "definition": {
"name": "connector.class",
"type": "STRING",
"required": true,
"default_value": null,
"importance": "HIGH",
"documentation": "Name or alias of the class for this connector. Must 
be a subclass of org.apache.kafka.connect.connector.Connector. If the connector 
is org.apache.kafka.connect.file.FileStreamSinkConnector, you can either 
specify this full name,  or use \"FileStreamSink\" or 
\"FileStreamSinkConnector\" to make the configuration a bit shorter",
"group": "Common",
"width": "LONG",
"display_name": "Connector class",
"dependents": [],
"order": 2
  },
  "value": {
"name": "connector.class",
"value": "org.apache.kafka.connect.file.FileStreamSourceConnector",
"recommended_values": [],
"errors": [],
"visible": true
  }
},
{
  "definition": {
"name": "key.converter",
"type": "CLASS",
"required": false,
"default_value": null,
"importance": "LOW",
"documentation": "Converter class used to convert between Kafka Connect 
format and the serialized form that is written to Kafka. This controls the 
format of the keys in messages written to or read from Kafka, and since this is 
independent of connectors it allows any connector to work with any 
serialization format. Examples of common formats include JSON and Avro.",
"group": "Common",
"width": "SHORT",

[jira] [Updated] (KAFKA-5450) Scripts to startup Connect in system tests have too short a timeout

2017-06-14 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-5450:
-
   Resolution: Fixed
Fix Version/s: 0.11.0.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 3344
[https://github.com/apache/kafka/pull/3344]

> Scripts to startup Connect in system tests have too short a timeout
> ---
>
> Key: KAFKA-5450
> URL: https://issues.apache.org/jira/browse/KAFKA-5450
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 0.11.0.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
> Fix For: 0.11.0.0, 0.11.1.0
>
>
> When the system tests start up a Kafka Connect standalone or distributed 
> worker, the utility starts the process, and if the worker does not start up 
> within 30 seconds the utility considers it a failure and stops everything. 
> This is often sufficient when running the system tests against the source 
> code, as the CLASSPATH for Connect includes only the Kafka Connect runtime 
> JARs (in addition to all of the connector dirs). However, when running the 
> system tests against the packaged form of Kafka, the CLASSPATH for Connect 
> includes all of the Apache Kafka JARs (in addition to all of the connector 
> dirs). This increases the total number of JARs that have to be scanned by 
> almost 75% and increases the time required to scan all of the JARs nearly 
> doubles from ~14sec to ~26sec. (Some of the additional JARs are likely larger 
> and take longer to scan than those JARs in Connect or the connectors.)
> As a result, the 30 second timeout is often not quite sufficient for the 
> Connect system test utility and should be increased to 60 seconds. This 
> shouldn't noticeably increase the time of most system tests, since 30 seconds 
> was nearly sufficient anyway; it will increase the duration of the tests 
> where Connect does fail to start, but that ideally won't happen much. :-)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5448) TimestampConverter's "type" config conflicts with the basic Transformation "type" config

2017-06-14 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-5448:


 Summary: TimestampConverter's "type" config conflicts with the 
basic Transformation "type" config
 Key: KAFKA-5448
 URL: https://issues.apache.org/jira/browse/KAFKA-5448
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
Priority: Blocker
 Fix For: 0.11.0.0


[KIP-66|https://cwiki.apache.org/confluence/display/KAFKA/KIP-66%3A+Single+Message+Transforms+for+Kafka+Connect]
 defined one of the configs for TimestampConverter to be "type". However, all 
transformations are configured with the "type" config specifying the class that 
implements them.

We need to modify the naming of the configs so these don't conflict.
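A concrete illustration of the clash in a connector configuration (the alias "ts" is
arbitrary):
{code}
transforms=ts
# "type" is reserved by the framework for the transformation class itself:
transforms.ts.type=org.apache.kafka.connect.transforms.TimestampConverter$Value
# TimestampConverter's own "type" option (the target type to convert to) would
# map to exactly the same key, so it cannot be configured.
{code}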



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5382) Log recovery can fail if topic names contain one of the index suffixes

2017-06-13 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-5382:
-
Fix Version/s: (was: 0.11.0.0.)
   0.11.0.0

> Log recovery can fail if topic names contain one of the index suffixes
> --
>
> Key: KAFKA-5382
> URL: https://issues.apache.org/jira/browse/KAFKA-5382
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.2.0, 0.10.2.1
>Reporter: Jason Gustafson
> Fix For: 0.11.0.0
>
>
> Our log recovery logic fails in 0.10.2 and prior releases if the topic name 
> contains "index" or "timeindex." The issue is this snippet:
> {code}
> val logFile =
>   if (filename.endsWith(TimeIndexFileSuffix))
> new File(file.getAbsolutePath.replace(TimeIndexFileSuffix, 
> LogFileSuffix))
>   else
> new File(file.getAbsolutePath.replace(IndexFileSuffix, 
> LogFileSuffix))
> if(!logFile.exists) {
>   warn("Found an orphaned index file, %s, with no corresponding log 
> file.".format(file.getAbsolutePath))
>   file.delete()
> }
> {code}
> The {{replace}} is a global replace, so the substituted filename is incorrect 
> if the topic contains the index suffix.
> Note this is already fixed in trunk and 0.11.0.
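> A suffix-safe substitution only rewrites the end of the name (a Java sketch of the
> idea; the actual fix in trunk is structured differently):
> {code:java}
> static String replaceSuffix(String filename, String oldSuffix, String newSuffix) {
>     // Touch only the trailing suffix so a topic containing "index" in the
>     // middle of its name can never be mangled.
>     if (!filename.endsWith(oldSuffix))
>         throw new IllegalArgumentException(filename + " does not end with " + oldSuffix);
>     return filename.substring(0, filename.length() - oldSuffix.length()) + newSuffix;
> }
> {code}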



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-1044) change log4j to slf4j

2017-06-06 Thread Ewen Cheslack-Postava (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16040126#comment-16040126
 ] 

Ewen Cheslack-Postava commented on KAFKA-1044:
--

[~viktorsomogyi] I've added you as a contributor. I assigned this JIRA to you, 
but since you are a contributor now you should be able to assign any other 
JIRAs to yourself as well.

> change log4j to slf4j 
> --
>
> Key: KAFKA-1044
> URL: https://issues.apache.org/jira/browse/KAFKA-1044
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.0
>Reporter: shijinkui
>Assignee: Viktor Somogyi
>Priority: Minor
>  Labels: newbie
>
> Can you change log4j to slf4j? In my project I use logback, and it conflicts 
> with log4j.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-1044) change log4j to slf4j

2017-06-06 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava reassigned KAFKA-1044:


Assignee: Ewen Cheslack-Postava

> change log4j to slf4j 
> --
>
> Key: KAFKA-1044
> URL: https://issues.apache.org/jira/browse/KAFKA-1044
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.0
>Reporter: shijinkui
>Assignee: Ewen Cheslack-Postava
>Priority: Minor
>  Labels: newbie
>
> Can you change log4j to slf4j? In my project I use logback, and it conflicts 
> with log4j.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-1044) change log4j to slf4j

2017-06-06 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava reassigned KAFKA-1044:


Assignee: Viktor Somogyi  (was: Ewen Cheslack-Postava)

> change log4j to slf4j 
> --
>
> Key: KAFKA-1044
> URL: https://issues.apache.org/jira/browse/KAFKA-1044
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.8.0
>Reporter: shijinkui
>Assignee: Viktor Somogyi
>Priority: Minor
>  Labels: newbie
>
> Can you change log4j to slf4j? In my project I use logback, and it conflicts 
> with log4j.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (KAFKA-4942) Kafka Connect: Offset committing times out before expected

2017-06-06 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-4942.
--
   Resolution: Fixed
Fix Version/s: 0.11.1.0
   0.11.0.0

Issue resolved by pull request 2912
[https://github.com/apache/kafka/pull/2912]

> Kafka Connect: Offset committing times out before expected
> --
>
> Key: KAFKA-4942
> URL: https://issues.apache.org/jira/browse/KAFKA-4942
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Stephane Maarek
> Fix For: 0.11.0.0, 0.11.1.0
>
>
> On Kafka 0.10.2.0
> I run a connector that deals with a lot of data, in a kafka connect cluster
> When the offsets are getting committed, I get the following:
> {code}
> [2017-03-23 03:56:25,134] INFO WorkerSinkTask{id=MyConnector-1} Committing 
> offsets (org.apache.kafka.connect.runtime.WorkerSinkTask)
> [2017-03-23 03:56:25,135] WARN Commit of WorkerSinkTask{id=MyConnector-1} 
> offsets timed out (org.apache.kafka.connect.runtime.WorkerSinkTask)
> {code}
> If you look at the timestamps, they're 1 ms apart. My settings are the 
> following: 
> {code}
>   offset.flush.interval.ms = 120000
>   offset.flush.timeout.ms = 60000
>   offset.storage.topic = _connect_offsets
> {code}
> From the look of the logs, the offset flush timeout setting is completely 
> ignored. I would expect the timeout WARN to appear 60 seconds after the 
> commit offset INFO message, not 1 millisecond later.
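>
> A hypothetical, self-contained sketch (not Connect's actual code) of how a 
> deadline computed from a stale start timestamp would produce a WARN one 
> millisecond after the INFO:
> {code}
> public class CommitTimeoutSketch {
>     public static void main(String[] args) {
>         long flushTimeoutMs = 60000L;
>
>         // Stale value: the start of the *previous* commit, two minutes ago.
>         // If this is never reset when a new commit begins, the deadline
>         // below has already passed the instant the new commit starts.
>         long commitStarted = System.currentTimeMillis() - 120000L;
>
>         if (System.currentTimeMillis() >= commitStarted + flushTimeoutMs) {
>             System.out.println("Commit of offsets timed out");
>         }
>     }
> }
> {code}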



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5164) SetSchemaMetadata does not replace the schemas in structs correctly

2017-06-02 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-5164:
-
   Resolution: Fixed
Fix Version/s: 0.11.1.0
   0.11.0.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 3198
[https://github.com/apache/kafka/pull/3198]

> SetSchemaMetadata does not replace the schemas in structs correctly
> ---
>
> Key: KAFKA-5164
> URL: https://issues.apache.org/jira/browse/KAFKA-5164
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Randall Hauch
> Fix For: 0.11.0.0, 0.11.1.0
>
>
> In SetSchemaMetadataTest we verify that the name and version of the schema in 
> the record have been replaced:
> https://github.com/apache/kafka/blob/trunk/connect/transforms/src/test/java/org/apache/kafka/connect/transforms/SetSchemaMetadataTest.java#L62
> However, in the case of Structs, the schema will be attached to both the 
> record and the Struct itself. So we correctly rebuild the Record:
> https://github.com/apache/kafka/blob/trunk/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/SetSchemaMetadata.java#L77
> https://github.com/apache/kafka/blob/trunk/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/SetSchemaMetadata.java#L104
> https://github.com/apache/kafka/blob/trunk/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/SetSchemaMetadata.java#L119
> But if the key or value are a struct, they will still contain the old schema 
> embedded in the struct.
> Ultimately this can lead to validations in other code failing (even for very 
> simple changes like adjusting the name of a schema):
> {code}
> (org.apache.kafka.connect.runtime.WorkerTask:141)
> org.apache.kafka.connect.errors.DataException: Mismatching struct schema
> at io.confluent.connect.avro.AvroData.fromConnectData(AvroData.java:471)
> at io.confluent.connect.avro.AvroData.fromConnectData(AvroData.java:295)
> at 
> io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:73)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:196)
> at 
> org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:167)
> at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:139)
> at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:182)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The solution to this is probably to check whether we're dealing with a Struct 
> when we use a new schema and potentially copy/reallocate it.
> This particular issue would only appear if we don't modify the data, so I 
> think SetSchemaMetadata is currently the only transformation that would have 
> the issue.
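>
> A minimal sketch of the copy/reallocate approach, assuming the updated 
> schema keeps the same field names (helper name hypothetical):
> {code}
> import org.apache.kafka.connect.data.Field;
> import org.apache.kafka.connect.data.Schema;
> import org.apache.kafka.connect.data.Struct;
>
> public class StructReschemaSketch {
>     // Rebuild the struct against the updated schema so that the schema
>     // embedded in the Struct matches the one attached to the record.
>     static Struct copyWithSchema(Struct old, Schema updated) {
>         Struct fresh = new Struct(updated);
>         for (Field field : updated.fields()) {
>             fresh.put(field.name(), old.get(field.name()));
>         }
>         return fresh;
>     }
> }
> {code}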



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-5229) Reflections logs excessive warnings when scanning classpaths

2017-06-01 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-5229:
-
   Resolution: Fixed
Fix Version/s: 0.11.1.0
   0.11.0.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 3072
[https://github.com/apache/kafka/pull/3072]

> Reflections logs excessive warnings when scanning classpaths
> 
>
> Key: KAFKA-5229
> URL: https://issues.apache.org/jira/browse/KAFKA-5229
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.0.0, 0.10.0.1, 0.10.1.0, 0.10.1.1, 0.10.2.0, 
> 0.10.2.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Bharat Viswanadham
>  Labels: newbie
> Fix For: 0.11.0.0, 0.11.1.0
>
>
> We use Reflections to scan the classpath for available plugins (connectors, 
> converters, transformations), but when doing so Reflections tends to generate 
> a lot of log noise like this:
> {code}
> [2017-05-12 14:59:48,224] WARN could not get type for name 
> org.jboss.netty.channel.SimpleChannelHandler from any class loader 
> (org.reflections.Reflections:396)
> org.reflections.ReflectionsException: could not get type for name 
> org.jboss.netty.channel.SimpleChannelHandler
>   at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:390)
>   at org.reflections.Reflections.expandSuperTypes(Reflections.java:381)
>   at org.reflections.Reflections.<init>(Reflections.java:126)
>   at 
> org.apache.kafka.connect.runtime.PluginDiscovery.scanClasspathForPlugins(PluginDiscovery.java:68)
>   at 
> org.apache.kafka.connect.runtime.AbstractHerder$1.run(AbstractHerder.java:391)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ClassNotFoundException: 
> org.jboss.netty.channel.SimpleChannelHandler
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   at org.reflections.ReflectionUtils.forName(ReflectionUtils.java:388)
>   ... 5 more
> {code}
> Despite being benign, these warnings worry users, especially first time users.
> We should either a) see if we can get Reflections to turn off these specific 
> warnings via some config or b) make Reflections only log at > WARN by default 
> in our log4j config. (b) is probably safe since we should only be seeing 
> these at startup and I don't think I've seen any actual issue logged at WARN.
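>
> For option (b), a single logger entry in the log4j configuration would be 
> enough (illustrative; the exact file and level are a judgment call):
> {code}
> # e.g. in config/connect-log4j.properties: only surface Reflections errors
> log4j.logger.org.reflections=ERROR
> {code}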



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

