[jira] [Commented] (KAFKA-3705) Support non-key joining in KTable

2019-09-13 Thread satyanarayan komandur (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929382#comment-16929382
 ] 

satyanarayan komandur commented on KAFKA-3705:
--

How is this change different from using a ValueJoiner on KStreams?

> Support non-key joining in KTable
> -
>
> Key: KAFKA-3705
> URL: https://issues.apache.org/jira/browse/KAFKA-3705
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Adam Bellemare
>Priority: Major
>  Labels: api, kip
>
> KIP-213: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-213+Support+non-key+joining+in+KTable]
> Today in Kafka Streams DSL, KTable joins are only based on keys. If users 
> want to join a KTable A by key {{a}} with another KTable B by key {{b}} but 
> with a "foreign key" {{a}}, and assuming they are read from two topics which 
> are partitioned on {{a}} and {{b}} respectively, they need to do the 
> following pattern:
> {code:java}
> tableB' = tableB.groupBy(/* select on field "a" */).agg(...); // now tableB' is partitioned on "a"
> tableA.join(tableB', joiner);
> {code}
> Even if these two tables are read from two topics which are already 
> partitioned on {{a}}, users still need to do the pre-aggregation in order to 
> bring the two joining streams onto the same key. This is a drawback in 
> programmability and we should fix it.
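The pre-aggregation workaround quoted above, and the foreign-key join shape that KIP-213 adds, can be sketched with plain Java maps. This is an illustrative model only (the class, method names, and record shapes here are assumptions, not the actual Kafka Streams `KTable` API); it shows why a key-based join alone cannot express the foreign-key case:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Plain-Java sketch of the two join shapes discussed above.
// Hypothetical helper class; the real KIP-213 API lives on Kafka Streams' KTable.
public class ForeignKeyJoinSketch {

    // Key-based join: rows match only when their keys are equal.
    public static <K, VA, VB, VR> Map<K, VR> keyJoin(
            Map<K, VA> tableA, Map<K, VB> tableB,
            java.util.function.BiFunction<VA, VB, VR> joiner) {
        Map<K, VR> out = new HashMap<>();
        for (Map.Entry<K, VA> e : tableA.entrySet()) {
            VB right = tableB.get(e.getKey());
            if (right != null) {
                out.put(e.getKey(), joiner.apply(e.getValue(), right));
            }
        }
        return out;
    }

    // Foreign-key join: each row of B points at a row of A via a field of its
    // *value*, extracted by foreignKeyExtractor -- the shape KIP-213 adds.
    public static <KA, VA, KB, VB, VR> Map<KB, VR> foreignKeyJoin(
            Map<KA, VA> tableA, Map<KB, VB> tableB,
            Function<VB, KA> foreignKeyExtractor,
            java.util.function.BiFunction<VB, VA, VR> joiner) {
        Map<KB, VR> out = new HashMap<>();
        for (Map.Entry<KB, VB> e : tableB.entrySet()) {
            VA left = tableA.get(foreignKeyExtractor.apply(e.getValue()));
            if (left != null) {
                out.put(e.getKey(), joiner.apply(e.getValue(), left));
            }
        }
        return out;
    }
}
```

A ValueJoiner by itself only combines two already-matched values; the foreign-key variant also changes *which* rows are matched, which is the point of the KIP.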



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Commented] (KAFKA-5505) Connect: Do not restart connector and existing tasks on task-set change

2018-09-10 Thread satyanarayan komandur (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16609666#comment-16609666
 ] 

satyanarayan komandur commented on KAFKA-5505:
--

Is there any update on the rebalance improvement? When I am running a single 
worker node (in distributed mode in a dev environment), I still see rebalancing 
going on, resulting in restarts of connectors/tasks. Isn't a rebalance really a 
scenario where multiple worker nodes are involved?

> Connect: Do not restart connector and existing tasks on task-set change
> ---
>
> Key: KAFKA-5505
> URL: https://issues.apache.org/jira/browse/KAFKA-5505
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.10.2.1
>Reporter: Per Steffensen
>Priority: Major
>
> I am writing a connector with a frequently changing task-set. It is really 
> not working very well, because the connector and all existing tasks are 
> restarted when the set of tasks changes. E.g. if the connector is running 
> with 10 tasks, and an additional task is needed, the connector itself and all 
> 10 existing tasks are restarted, just to make the 11th task run also. My 
> tasks have a fairly heavy initialization, making it extra annoying. I would 
> like to see a change, introducing a "mode", where only new/deleted tasks are 
> started/stopped when notifying the system that the set of tasks changed 
> (calling context.requestTaskReconfiguration() - or something similar).
> Discussed this issue a little on d...@kafka.apache.org in the thread "Kafka 
> Connect: To much restarting with a SourceConnector with dynamic set of tasks"
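The restart cost described above originates in the connector's task-config computation: the framework asks the connector for one config map per task, and today it restarts everything whenever that list changes. A minimal plain-Java sketch of how such a connector might split work across tasks (the `tables` property and round-robin assignment are illustrative assumptions, not the Connect API itself):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of a connector-style taskConfigs() computation: split a dynamic
// list of work items (here, hypothetical "tables") across at most maxTasks
// task configuration maps. When this list changes, the framework currently
// restarts the connector and all existing tasks, not just the delta.
public class TaskAssignmentSketch {

    public static List<Map<String, String>> taskConfigs(List<String> tables, int maxTasks) {
        int numTasks = Math.min(maxTasks, tables.size());
        List<Map<String, String>> configs = new ArrayList<>();
        for (int i = 0; i < numTasks; i++) {
            configs.add(new java.util.HashMap<>());
        }
        // Round-robin the work items across the task configs.
        for (int i = 0; i < tables.size(); i++) {
            Map<String, String> cfg = configs.get(i % numTasks);
            cfg.merge("tables", tables.get(i), (old, t) -> old + "," + t);
        }
        return configs;
    }
}
```

The "mode" requested in the issue would let the framework diff consecutive outputs of this computation and start/stop only the changed tasks.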



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7387) Kafka distributed worker reads config from backup while launching connector

2018-09-07 Thread satyanarayan komandur (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

satyanarayan komandur resolved KAFKA-7387.
--
Resolution: Not A Bug

The issue is with the JDBC connector and not with the Kafka Connect framework.

> Kafka distributed worker reads config from backup while launching connector
> ---
>
> Key: KAFKA-7387
> URL: https://issues.apache.org/jira/browse/KAFKA-7387
> Project: Kafka
>  Issue Type: Bug
>Reporter: satyanarayan komandur
>Priority: Minor
>
> While launching a Kafka connector using the REST API in distributed mode, the 
> Kafka worker uses the old configuration. Normally this is fine when we 
> relaunch the connector without changing its configuration.
> If the prior failure is related to a connector configuration issue and we are 
> relaunching, the worker still uses the old configuration the first time, 
> though it subsequently picks up the new configuration supplied. So 
> essentially we have to launch it twice in such cases. This behavior does not 
> change even if we update the config first and then launch the connector.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7239) Kafka Connect secret externalization not working

2018-09-07 Thread satyanarayan komandur (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16607395#comment-16607395
 ] 

satyanarayan komandur commented on KAFKA-7239:
--

I confirmed the issue is with the JDBC connector and not with the Kafka Connect 
framework. This issue can be closed.

> Kafka Connect secret externalization not working
> 
>
> Key: KAFKA-7239
> URL: https://issues.apache.org/jira/browse/KAFKA-7239
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: satyanarayan komandur
>Priority: Major
>
> I used the Kafka FileConfigProvider to externalize properties like 
> connection.user and connection.password for the JDBC source connector. I 
> noticed that the values in the connection properties are being replaced after 
> the connector has attempted to establish a connection with the original 
> (untransformed) key/value pairs. This results in a connection failure. I am 
> not sure if this issue belongs to the Kafka Connect framework or is an issue 
> with the JDBC Source Connector.
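For reference, the externalization setup being described looks roughly like this (the provider class name is Kafka's real FileConfigProvider; the file path and key names below are hypothetical examples):

```properties
# Worker properties: register the file config provider.
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider

# Connector config: reference keys from an external properties file
# instead of inlining secrets.
connection.user=${file:/etc/kafka/secrets.properties:connection.user}
connection.password=${file:/etc/kafka/secrets.properties:connection.password}
```

The framework is expected to resolve the `${file:...}` placeholders before the connector opens its connection; the bug report above is about the connector first seeing the unresolved values.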



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7387) Kafka distributed worker reads config from backup while launching connector

2018-09-07 Thread satyanarayan komandur (JIRA)
satyanarayan komandur created KAFKA-7387:


 Summary: Kafka distributed worker reads config from backup while 
launching connector
 Key: KAFKA-7387
 URL: https://issues.apache.org/jira/browse/KAFKA-7387
 Project: Kafka
  Issue Type: Bug
Reporter: satyanarayan komandur


While launching a Kafka connector using the REST API in distributed mode, the 
Kafka worker uses the old configuration. Normally this is fine when we relaunch 
the connector without changing its configuration.

If the prior failure is related to a connector configuration issue and we are 
relaunching, the worker still uses the old configuration the first time, though 
it subsequently picks up the new configuration supplied. So essentially we have 
to launch it twice in such cases. This behavior does not change even if we 
update the config first and then launch the connector.


--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-5117) Kafka Connect REST endpoints reveal Password typed values

2018-09-06 Thread satyanarayan komandur (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16606326#comment-16606326
 ] 

satyanarayan komandur commented on KAFKA-5117:
--

I would like to add a couple more points related to this KIP.

Currently I notice that even accessing the endpoint 
connectors/\{connector-name}/status hits the configuration. I think this 
endpoint need not gather config information.

> Kafka Connect REST endpoints reveal Password typed values
> -
>
> Key: KAFKA-5117
> URL: https://issues.apache.org/jira/browse/KAFKA-5117
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.10.2.0
>Reporter: Thomas Holmes
>Priority: Major
>  Labels: needs-kip
>
> A Kafka Connect connector can specify ConfigDef keys as type of Password. 
> This type was added to prevent logging the values (instead "[hidden]" is 
> logged).
> This change does not apply to the values returned by executing a GET on 
> {{connectors/\{connector-name\}}} and 
> {{connectors/\{connector-name\}/config}}. This creates an easily accessible 
> way for an attacker who has infiltrated your network to gain access to 
> potential secrets that should not be available.
> I have started on a code change that addresses this issue by parsing the 
> config values through the ConfigDef for the connector and returning their 
> output instead (which leads to the masking of Password typed configs as 
> [hidden]).
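The masking mechanism described above can be illustrated with a small stand-alone class. This mirrors the *idea* of Kafka's Password config type (whose toString() logs as "[hidden]"); it is a sketch, not the actual org.apache.kafka.common.config.types.Password:

```java
// Stand-alone sketch of a masked-secret wrapper, mirroring the behavior of
// Kafka's Password config type: anything that stringifies the object (logs,
// REST serialization built on toString) sees only the mask.
public class PasswordSketch {
    public static final String HIDDEN = "[hidden]";
    private final String value;

    public PasswordSketch(String value) {
        this.value = value;
    }

    // Logging or serializing the object shows only the mask...
    @Override
    public String toString() {
        return HIDDEN;
    }

    // ...while code that genuinely needs the secret must ask explicitly.
    public String value() {
        return value;
    }
}
```

The fix proposed in the issue amounts to running REST responses through the connector's ConfigDef so Password-typed values are rendered this way instead of as raw strings.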



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7239) Kafka Connect secret externalization not working

2018-08-02 Thread satyanarayan komandur (JIRA)
satyanarayan komandur created KAFKA-7239:


 Summary: Kafka Connect secret externalization not working
 Key: KAFKA-7239
 URL: https://issues.apache.org/jira/browse/KAFKA-7239
 Project: Kafka
  Issue Type: Bug
Reporter: satyanarayan komandur


I used the Kafka FileConfigProvider to externalize properties like 
connection.user and connection.password for the JDBC source connector. I noticed 
that the values in the connection properties are being replaced after the 
connector has attempted to establish a connection with the original 
(untransformed) key/value pairs. This results in a connection failure. I am not 
sure if this issue belongs to the Kafka Connect framework or is an issue with 
the JDBC Source Connector.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)