[jira] [Created] (KAFKA-8404) Authorization header is not passed in Connect when forwarding REST requests

2019-05-21 Thread Robert Yokota (JIRA)
Robert Yokota created KAFKA-8404:


 Summary: Authorization header is not passed in Connect when 
forwarding REST requests
 Key: KAFKA-8404
 URL: https://issues.apache.org/jira/browse/KAFKA-8404
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Robert Yokota
 Fix For: 2.3.0


When Connect forwards a REST request from one worker to another, the 
Authorization header is not forwarded.
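For illustration, a minimal stdlib-only sketch of the intended behavior (this helper and the worker URL are hypothetical; Connect's actual forwarding lives in its REST runtime): when a worker proxies a REST request to another worker, the incoming Authorization header should be copied onto the outgoing request:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.util.Map;

public class ForwardAuth {
    // Build the forwarded request, copying the caller's Authorization
    // header if one was present on the incoming request.
    static HttpRequest forward(URI targetWorker, Map<String, String> incomingHeaders) {
        HttpRequest.Builder b = HttpRequest.newBuilder(targetWorker).GET();
        String auth = incomingHeaders.get("Authorization");
        if (auth != null) {
            b.header("Authorization", auth); // the step the bug report says is missing
        }
        return b.build();
    }

    public static void main(String[] args) {
        HttpRequest req = forward(URI.create("http://worker2:8083/connectors"),
                Map.of("Authorization", "Basic dXNlcjpwYXNz"));
        System.out.println(req.headers().firstValue("Authorization").orElse("missing"));
    }
}
```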



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7512) java.lang.ClassCastException: java.util.Date cannot be cast to java.lang.Number

2018-10-16 Thread Robert Yokota (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Yokota resolved KAFKA-7512.
--
Resolution: Duplicate

> java.lang.ClassCastException: java.util.Date cannot be cast to 
> java.lang.Number
> ---
>
> Key: KAFKA-7512
> URL: https://issues.apache.org/jira/browse/KAFKA-7512
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.0.0
>Reporter: Rohit Kumar Gupta
>Priority: Blocker
> Attachments: connect.out
>
>
> Steps:
> ~~
> bash-4.2# kafka-avro-console-producer --broker-list localhost:9092 --topic 
> connect_10oct_03 --property schema.registry.url=http://localhost:8081 
> --property value.schema='{"type":"record","name":"myrecord","fields":[
> {"name":"f1","type":"string"}
> ,{"name":"f2","type":["null",
> {"type":"long","logicalType":"timestamp-millis","connect.version":1,"connect.name":"org.apache.kafka.connect.data.Timestamp"}
> ],"default":null}]}'
> {"f1": "value1","f2": {"null":null}}
> {"f1": "value1","f2": {"long":1022}}
>  
> bash-4.2# kafka-avro-console-producer --broker-list localhost:9092 --topic 
> connect_10oct_03 --property schema.registry.url=http://localhost:8081 
> --property value.schema='{"type":"record","name":"myrecord","fields":[
> {"name":"f1","type":"string"}
> ,{"name":"f2","type":["null",
> {"type":"long","logicalType":"timestamp-millis","connect.version":1,"connect.name":"org.apache.kafka.connect.data.Timestamp"}
> ],"default":null},{"name":"f3","type":"string","default":"green"}]}'
> {"f1": "value1","f2": {"null":null},"f3":"toto"}
> {"f1": "value1","f2": {"null":null},"f3":"toto"}
> {"f1": "value1","f2": {"null":null},"f3":"toto"}
> {"f1": "value1","f2": {"long":12343536},"f3":"tutu"}
>  
> bash-4.2# kafka-avro-console-producer --broker-list localhost:9092 --topic 
> connect_10oct_03 --property schema.registry.url=http://localhost:8081 
> --property value.schema='{"type":"record","name":"myrecord","fields":[
> {"name":"f1","type":"string"}
> ,{"name":"f2","type":["null",
> {"type":"long","logicalType":"timestamp-millis","connect.version":1,"connect.name":"org.apache.kafka.connect.data.Timestamp"}
> ],"default":null}]}'
> {"f1": "value1","f2": {"null":null}}
> {"f1": "value1","f2": {"long":1022}}
>  
> bash-4.2# curl -X POST -H "Accept: application/json" -H "Content-Type: 
> application/json" http://localhost:8083/connectors -d 
> '{"name":"hdfs-sink-connector-10oct-03", "config": 
> {"connector.class":"io.confluent.connect.hdfs.HdfsSinkConnector", 
> "tasks.max":"1", "topics":"connect_10oct_03", "hdfs.url": 
> "hdfs://localhost:8020/tmp/", "flush.size":"1", "hive.integration": "true", 
> "hive.metastore.uris": "thrift://localhost:9083", "hive.database": "rohit", 
> "schema.compatibility": "BACKWARD"}}'
> {"name":"hdfs-sink-connector-10oct-03","config":{"connector.class":"io.confluent.connect.hdfs.HdfsSinkConnector","tasks.max":"1","topics":"connect_10oct_03","hdfs.url":"hdfs://localhost:8020/tmp/","flush.size":"1","hive.integration":"true","hive.metastore.uris":"thrift://localhost:9083","hive.database":"rohit","schema.compatibility":"BACKWARD","name":"hdfs-sink-connector-10oct-03"},"tasks":[],"type":null}
> bash-4.2#
> bash-4.2#
>  
> bash-4.2# curl 
> http://localhost:8083/connectors/hdfs-sink-connector-10oct-03/status
> {"name":"hdfs-sink-connector-10oct-03","connector":{"state":"RUNNING","worker_id":"localhost:8083"}
> ,"tasks":[{"state":"FAILED","trace":"org.apache.kafka.connect.errors.ConnectException:
>  Exiting WorkerSinkTask due to unrecoverable exception.\n\tat 
> org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:586)\n\tat
>  
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)\n\tat
>  
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:225)\n\tat
>  
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)\n\tat
>  org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)\n\tat 
> org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)\n\tat 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
>  
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat
>  java.lang.Thread.run(Thread.java:748)\nCaused by: 
> java.lang.ClassCastException: java.util.Date cannot be cast to 
> java.lang.Number\n\tat 
> org.apache.kafka.connect.data.SchemaProjector.projectPrimitive(SchemaProjector.java:164)\n\tat
>  
> 

[jira] [Created] (KAFKA-7476) SchemaProjector is not properly handling Date-based logical types

2018-10-02 Thread Robert Yokota (JIRA)
Robert Yokota created KAFKA-7476:


 Summary: SchemaProjector is not properly handling Date-based 
logical types
 Key: KAFKA-7476
 URL: https://issues.apache.org/jira/browse/KAFKA-7476
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Robert Yokota
Assignee: Robert Yokota


SchemaProjector is not properly handling Date-based logical types.  An 
exception of the following form is thrown:  `Caused by: 
java.lang.ClassCastException: java.util.Date cannot be cast to java.lang.Number`
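The exception arises because Date-based logical values (Timestamp, Date, Time) surface as {{java.util.Date}}, while the projection path casts every primitive to {{java.lang.Number}}. A stdlib-only sketch of the guard (the method below is a hypothetical illustration; the real fix belongs in {{SchemaProjector.projectPrimitive}}):

```java
import java.util.Date;

public class ProjectPrimitiveSketch {
    // Project a primitive value, letting Date-based logical values
    // pass through instead of failing on the Number cast.
    static Object project(Object value) {
        if (value instanceof Date) {
            return value; // logical type: keep as-is rather than casting
        }
        return ((Number) value).longValue(); // plain numeric primitive
    }

    public static void main(String[] args) {
        System.out.println(project(42));                           // 42
        System.out.println(project(new Date(0)) instanceof Date);  // true
    }
}
```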





[jira] [Resolved] (KAFKA-7370) Enhance FileConfigProvider to read a directory

2018-09-17 Thread Robert Yokota (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Yokota resolved KAFKA-7370.
--
Resolution: Won't Do

> Enhance FileConfigProvider to read a directory
> --
>
> Key: KAFKA-7370
> URL: https://issues.apache.org/jira/browse/KAFKA-7370
> Project: Kafka
>  Issue Type: Improvement
>  Components: config
>Affects Versions: 2.0.0
>Reporter: Robert Yokota
>Assignee: Robert Yokota
>Priority: Minor
>
> Currently FileConfigProvider can read a Properties file as a set of key-value 
> pairs.  This enhancement is to augment FileConfigProvider so that it can also 
> read a directory, where the file names are the keys and the corresponding 
> file contents are the values.
> This will allow for easier integration with secret management systems where 
> each secret is often an individual file, such as in Docker and Kubernetes.





[jira] [Created] (KAFKA-7370) Enhance FileConfigProvider to read a directory

2018-08-31 Thread Robert Yokota (JIRA)
Robert Yokota created KAFKA-7370:


 Summary: Enhance FileConfigProvider to read a directory
 Key: KAFKA-7370
 URL: https://issues.apache.org/jira/browse/KAFKA-7370
 Project: Kafka
  Issue Type: Improvement
  Components: config
Affects Versions: 2.0.0
Reporter: Robert Yokota
Assignee: Robert Yokota


Currently FileConfigProvider can read a Properties file as a set of key-value 
pairs.  This enhancement is to augment FileConfigProvider so that it can also 
read a directory, where the file names are the keys and the corresponding file 
contents are the values.

This will allow for easier integration with secret management systems where 
each secret is often an individual file, such as in Docker and Kubernetes.
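The proposed behavior can be sketched with the standard library alone (class and method names below are illustrative, not the actual FileConfigProvider API):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

public class DirectoryConfigReader {
    // Read each regular file in a directory as one key-value pair:
    // file name -> file contents (trailing newline stripped).
    static Map<String, String> read(Path dir) throws IOException {
        Map<String, String> out = new HashMap<>();
        try (DirectoryStream<Path> files = Files.newDirectoryStream(dir)) {
            for (Path f : files) {
                if (Files.isRegularFile(f)) {
                    out.put(f.getFileName().toString(),
                            Files.readString(f).stripTrailing());
                }
            }
        }
        return out;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("secrets");
        Files.writeString(dir.resolve("db-password"), "hunter2\n");
        System.out.println(read(dir).get("db-password")); // hunter2
    }
}
```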





[jira] [Created] (KAFKA-6894) Cannot access GlobalKTable store from KStream.transform()

2018-05-10 Thread Robert Yokota (JIRA)
Robert Yokota created KAFKA-6894:


 Summary: Cannot access GlobalKTable store from KStream.transform()
 Key: KAFKA-6894
 URL: https://issues.apache.org/jira/browse/KAFKA-6894
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 1.1.0
Reporter: Robert Yokota
Assignee: Robert Yokota


I was trying to access a store from a {{GlobalKTable}} in 
{{KStream.transform()}}, but I got the following error:

{code}
org.apache.kafka.streams.errors.TopologyException: Invalid topology: StateStore 
globalStore is not added yet.
at 
org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.connectProcessorAndStateStore(InternalTopologyBuilder.java:716)
at 
org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.connectProcessorAndStateStores(InternalTopologyBuilder.java:615)
at 
org.apache.kafka.streams.kstream.internals.KStreamImpl.transform(KStreamImpl.java:521)
{code}

I was able to make a change to 
{{InternalTopologyBuilder.connectProcessorAndStateStore}} to allow me to access 
the global store from {{KStream.transform()}}.  I will submit a PR for review.





[jira] [Created] (KAFKA-6886) Externalize Secrets for Kafka Connect Configurations

2018-05-08 Thread Robert Yokota (JIRA)
Robert Yokota created KAFKA-6886:


 Summary: Externalize Secrets for Kafka Connect Configurations
 Key: KAFKA-6886
 URL: https://issues.apache.org/jira/browse/KAFKA-6886
 Project: Kafka
  Issue Type: New Feature
  Components: KafkaConnect
Reporter: Robert Yokota
Assignee: Robert Yokota
 Fix For: 2.0.0


Kafka Connect's connector configurations have plaintext passwords, and Connect 
stores these in cleartext either on the filesystem (for standalone mode) or in 
internal topics (for distributed mode). 

Connect should not store or transmit cleartext passwords in connector 
configurations. First, it should be possible to replace secrets in stored 
connector configurations with references to values kept in external secret 
management systems. Connect should provide an extension point for adding 
customized integrations, as well as a file-based extension as an example. 
Second, it should be possible to configure a Connect runtime to use one or 
more of these extensions, and to let connector configurations use placeholders 
that the runtime resolves before passing the complete configurations to 
connectors. Existing connectors will therefore see no difference in the 
configurations that Connect provides to them at startup. Third, Connect's API 
should be changed to allow a connector to obtain the latest connector 
configuration at any time.
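This work shipped in 2.0.0 as the ConfigProvider extension point (KIP-297). A sketch of how the pieces fit together (the file path and key name below are made up for illustration):

```properties
# Worker configuration: register a ConfigProvider under the alias "file"
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider

# Connector configuration: the runtime resolves the placeholder below
# against /opt/connect/secrets.properties before starting the connector
connection.password=${file:/opt/connect/secrets.properties:db_password}
```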







[jira] [Resolved] (KAFKA-6407) Sink task metrics are the same for all connectors

2018-02-09 Thread Robert Yokota (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Yokota resolved KAFKA-6407.
--
Resolution: Duplicate

> Sink task metrics are the same for all connectors
> -
>
> Key: KAFKA-6407
> URL: https://issues.apache.org/jira/browse/KAFKA-6407
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.0.0
>Reporter: Alexander Koval
>Priority: Minor
>
> I have a lot of sink connectors inside a distributed worker. When I tried to 
> save metrics to Graphite, I discovered that all task metrics are identical.
> {code}
> $>get -b 
> kafka.connect:type=sink-task-metrics,connector=prom-by-catalog-company,task=0 
> sink-record-read-total
> #mbean = 
> kafka.connect:type=sink-task-metrics,connector=prom-by-catalog-company,task=0:
> sink-record-read-total = 228744.0;
> $>get -b 
> kafka.connect:type=sink-task-metrics,connector=prom-kz-catalog-product,task=0 
> sink-record-read-total
> #mbean = 
> kafka.connect:type=sink-task-metrics,connector=prom-kz-catalog-product,task=0:
> sink-record-read-total = 228744.0;
> $>get -b 
> kafka.connect:type=sink-task-metrics,connector=prom-ru-catalog-company,task=0 
> sink-record-read-total
> #mbean = 
> kafka.connect:type=sink-task-metrics,connector=prom-ru-catalog-company,task=0:
> sink-record-read-total = 228744.0;
> {code}





[jira] [Resolved] (KAFKA-2925) NullPointerException if FileStreamSinkTask is stopped before initialization finishes

2018-02-02 Thread Robert Yokota (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Yokota resolved KAFKA-2925.
--
Resolution: Cannot Reproduce

I wasn't able to reproduce the NPE, and after reviewing the code it no longer 
seems possible.  Closing this as Cannot Reproduce.

> NullPointerException if FileStreamSinkTask is stopped before initialization 
> finishes
> 
>
> Key: KAFKA-2925
> URL: https://issues.apache.org/jira/browse/KAFKA-2925
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Robert Yokota
>Priority: Minor
>
> If a FileStreamSinkTask is stopped too quickly after a distributed herder 
> rebalances work, it can result in cleanup happening without start() ever 
> being called:
> {quote}
> Sink task org.apache.kafka.connect.runtime.WorkerSinkTask@f9ac651 was stopped 
> before completing join group. Task initialization and start is being skipped 
> (org.apache.kafka.connect.runtime.WorkerSinkTask:150)
> {quote}
> This is actually a bit weird since stop() is still called so that resources 
> allocated in the constructor can be cleaned up, but it may be unexpected that 
> stop() will be called without start() ever having been called.
> Because the code in FileStreamSinkTask's stop() method assumes start() has 
> been called, it can result in a NullPointerException because it assumes the 
> PrintStream is already initialized.
> The easy fix is to check for null before closing. However, we should 
> probably also consider whether the currently possible sequence of events is 
> confusing, and whether we should not invoke stop() and instead make it clear in 
> the SinkTask interface that you should only initialize things in the constructor 
> that won't need any manual cleanup later.
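The easy fix mentioned above can be sketched in a few lines (class and field names are illustrative, mirroring FileStreamSinkTask's shape rather than its actual code):

```java
import java.io.PrintStream;

public class FileStreamSinkTaskStop {
    private PrintStream outputStream; // still null if start() never ran

    // stop() can be invoked without start() ever having been called,
    // so guard against the uninitialized stream before closing it.
    public void stop() {
        if (outputStream != null) {
            outputStream.close();
        }
    }

    public static void main(String[] args) {
        new FileStreamSinkTaskStop().stop(); // no NPE despite start() never running
        System.out.println("stopped cleanly");
    }
}
```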


