Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-04-06 Thread Christofer Dutz
Ok ... as I haven't heard any objections, I'll now push my updates to the 
Kafka connector versions.

Chris

Am 04.04.18, 17:52 schrieb "Christofer Dutz" :

Yeah, 

I think I'll just wait till Friday before committing that change, however, to 
give the others a chance to object ;-)

Chris






Re: Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-04-04 Thread Vino yang
Hi Chris,

Take it easy, I think the compile failure may not matter much. I am glad to 
hear that the higher Kafka client is well backward compatible.

Vino yang
Thanks.

Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-04-04 Thread Christofer Dutz
Hi Vino,

Yeah ... but I did it without an ASF header ... that's why the build was 
failing for 23 days :( (I am really ashamed about that)
I tried updating the two Kafka dependencies to the 1.1.0 version (and to Scala 
2.12) and that worked without any noticeable problems.
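
In Maven terms, the dependency bump described here would look roughly like the 
following. This is a hypothetical sketch — the artifact names are my assumption 
based on how Kafka publishes its artifacts; Edgent's actual POM may differ:

```xml
<!-- Hypothetical sketch: the two Kafka dependencies bumped to 1.1.0,
     with the broker-side artifact moved to the Scala 2.12 build -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>1.1.0</version>
</dependency>
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.12</artifactId>
  <version>1.1.0</version>
</dependency>
```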

Chris





Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-04-04 Thread vino yang
Hi Chris,

I rechecked the old mails between you and me. I misunderstood your message.
I thought you would create the annotation. In fact, you have already created
the annotation.

I will do this work soon, hold on.

Vino yang.
Thanks.



Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-04-04 Thread vino yang
Hi Chris,

I have not done this yet, and I will upgrade it soon.

Vino yang
Thanks!



Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-04-04 Thread Christofer Dutz
Hi,

So I updated the libs locally, built, and re-ran the example with this version, 
and it now worked without any problems.

Chris






Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-04-04 Thread Christofer Dutz
Hi all,

reporting back from my easter holidays :-)

Today I had to help a customer get a POC working that uses PLC4X and 
Edgent. Unfortunately it seems that in order to use the Kafka connector I can 
only use 0.x versions of Kafka. When connecting to 1.x versions I get 
stack overflows and OutOfMemory errors. A quick test updating the Kafka libs 
from the ancient 0.8.2.2 to 1.1.0 didn't seem to break anything ... I'll do 
some local tests with an updated Kafka client. 

@vino yang ... have you been working on adding the Annotations to the client?

@all others ... does anyone have objections to updating the Kafka client libs 
to 1.1.0? It shouldn't break anything, as it should be backward compatible. As 
we are currently not using anything above the API level of 0.8.2, there should 
also not be any exceptions (I don't know of any removed things which could be 
a problem).

Chris






Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-04-04 Thread Christofer Dutz
G ... 

I just noticed that my last commit broke the build and no one complained for 23 
days :(
I just fixed that (hopefully) ... 

Chris





Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-03-20 Thread vino yang
Hi Chris,

I will try to do this; if I have any questions, I will keep in touch!

Vino yang
Thanks.



Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-03-20 Thread Christofer Dutz
Ok,

So I just added a new Annotation type to the Kafka module. 

org.apache.edgent.connectors.kafka.annotations.KafkaVersion

It has a fromVersion and a toVersion attribute. Both should be optional, so just 
adding the annotation would have no effect (besides a few additional CPU 
operations). The annotation can be applied to methods or classes (every method 
then inherits it). I hope that's OK, because implementing this at the parameter 
level would make things extremely difficult.
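
Based on the description above, a minimal sketch of what such an annotation 
could look like. The attribute names come from this mail; the retention policy, 
targets, and defaults are my assumptions (the real class would live in 
org.apache.edgent.connectors.kafka.annotations):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical sketch of the KafkaVersion annotation described above.
// Runtime retention is assumed so an Aspect can read the bounds when
// intercepting a call; TYPE lets a class-level annotation cover all methods.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.TYPE})
@interface KafkaVersion {
    /** Lowest Kafka version the annotated element supports; "" = no lower bound. */
    String fromVersion() default "";

    /** Highest Kafka version the annotated element supports; "" = no upper bound. */
    String toVersion() default "";
}
```

With both attributes defaulting to empty, applying the bare annotation would 
indeed constrain nothing, matching the "no effect" behavior described above.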

@vino yang With this you should be able to provide Kafka version constraints to 
your code changes. Just tell me if something's missing or needs to be done 
differently

For now this annotation will have no effect as I haven't implemented the Aspect 
for doing the checks, but I'll start working on that as soon as you have 
annotated something.

Chris







Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-03-20 Thread Christofer Dutz
Ok ... maybe I should add the Annotation prior to continuing my work on the AWS 
connector ...


Chris

Am 04.03.18, 08:10 schrieb "vino yang" :

The reason is that Kafka 0.9+ provided a new consumer API which has more
features and better performance.

Just like Flink's implementation :
https://github.com/apache/flink/tree/master/flink-connectors.

vinoyang
Thanks.




Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-03-14 Thread vino yang
Hi Chris,

the version upgrades and the main features which affect the produce/consume
API are listed below:


   - 0.8.2.x -> 0.9+ : offsets and positions can be stored by the Kafka server
   itself; Kerberos and TLS authentication; no need to specify the ZooKeeper
   servers anymore
   - 0.9.x -> 0.10.x : lets the consumer control the maximum number of records
   per poll, via the new "max.poll.records" config item
   - 0.10.x -> 0.11.x : supports producer exactly-once semantics (with the
   *beginTransaction*/*commitTransaction* API)
   - 0.11.x -> 1.x : seemingly no specific feature which affects the API
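
As an illustration of the 0.9.x -> 0.10.x item, the new knob is just another 
consumer config entry. A hypothetical fragment — the broker address, group id, 
and record cap are placeholders:

```properties
bootstrap.servers=localhost:9092
group.id=edgent-example
# Introduced in 0.10.x: cap the number of records returned by a single poll()
max.poll.records=500
```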

Vino yang
Thanks.
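
The version gate discussed in this thread ultimately reduces to comparing the 
server's dotted version string against a [fromVersion, toVersion] range. A 
minimal, hypothetical sketch (not actual Edgent code) of the check an Aspect 
could run before an annotated call:

```java
// Hypothetical sketch of the runtime check a KafkaVersion-style annotation
// would drive: numeric, segment-by-segment comparison of dotted versions,
// failing fast when the server falls outside the supported range.
public class VersionGate {

    // Compare dotted version strings numerically, e.g. "0.8.2.2" < "0.10.0".
    static int compare(String a, String b) {
        String[] x = a.split("\\."), y = b.split("\\.");
        for (int i = 0; i < Math.max(x.length, y.length); i++) {
            int xi = i < x.length ? Integer.parseInt(x[i]) : 0;
            int yi = i < y.length ? Integer.parseInt(y[i]) : 0;
            if (xi != yi) return Integer.compare(xi, yi);
        }
        return 0;
    }

    // Empty bound = unconstrained, mirroring the annotation's optional attributes.
    static void check(String serverVersion, String from, String to) {
        if (!from.isEmpty() && compare(serverVersion, from) < 0)
            throw new IllegalStateException("Feature requires Kafka >= " + from);
        if (!to.isEmpty() && compare(serverVersion, to) > 0)
            throw new IllegalStateException("Feature requires Kafka <= " + to);
    }
}
```

Note that a plain string comparison would get this wrong ("0.10" sorts before 
"0.9" lexicographically), which is why the segments are compared as integers.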

2018-03-14 9:27 GMT+08:00 vino yang :

> Hi Chris,
>
> No objections about this approach. Good division of the work. I will
> provide the mapping of Kafka version and the specified feature later.
>
> Vino yang
> Thanks.
>
> 2018-03-13 20:11 GMT+08:00 Christofer Dutz :
>
>> Well I have implemented something like the Version checking before, so I
>> would opt to take care of that.
>>
>> I would define an Annotation with an optional "from" and "to" version ...
>> you could use that
>> I would need something that provides the version of the server from your
>> side.
>>
>> With this I would then implement an Aspect that intercepts these calls,
>> does the check, and if necessary throws an Exception with a message stating
>> the minimum or maximum version required for a feature.
>>
>> I would use a compile-time weaver as this does not add any more
>> dependencies or setup complexity to the construct.
>>
>> Any objections to this approach?
>>
>> Chris
>>
>>
>> Am 13.03.18, 03:06 schrieb "vino yang" :
>>
>> Hi Chris,
>>
>> It looks like a good idea. I think that to finish this job, we can split
>> it into three sub tasks:
>>
>>- upgrade the Kafka version to 1.x and test that it matches the 0.8.x
>>connector's function and behavior;
>>- catalogue and define the annotation which covers the different Kafka
>>versions and features;
>>- expose the new features' API to users and check it with the annotation
>>
>> What's your opinion?
>>
>>
>> 2018-03-12 21:00 GMT+08:00 Christofer Dutz > >:
>>
>> > Don't know if this would be an option:
>> >
>> > If we defined and used a Java annotation which defines what
>> Kafka-Version
>> > a feature is available from (or up to which version it is
>> supported) and
>> > then we could do quick checks that compare the current version with
>> the
>> > annotations on the methods we call. I think this type of check
>> should be
>> > quite easy to understand and we wouldn't have to build, maintain,
>> test,
>> > document etc. loads of separate modules.
>> >
>> > Chris
>> >
>> >
>> >
>> > Am 12.03.18, 13:30 schrieb "vino yang" :
>> >
>> > Hi Chris,
>> >
>> > OK, Hope for listening someone's opinion.
>> >
>> > Vino yang.
>> >
>> > 2018-03-12 20:23 GMT+08:00 Christofer Dutz <
>> christofer.d...@c-ware.de
>> > >:
>> >
>> > > Hi Vino,
>> > >
>> > > please don't interpret my opinion as some official project
>> decision.
>> > > For discussions like this I would definitely prefer to hear
>> the
>> > opinions
>> > > of others in the project.
>> > > Perhaps having a new client API and having compatibility
>> layers
>> > inside the
>> > > connector would be another option.
>> > > So per default the compatibility level of the Kafka client
>> lib would
>> > be
>> > > used but a developer could explicitly choose
>> > > older compatibility levels, where we have taken care of the
>> work to
>> > decide
>> > > what works and what doesn't.
>> > >
>> > > Chris
>> > >
>> > >
>> > >
>> > > Am 12.03.18, 13:07 schrieb "vino yang" > >:
>> > >
>> > > Hi Chris,
>> > >
>> > > In some ways, I agree with you. The Kafka API is backward
>> > > compatible. But
>> > >
>> > >
>> > >- old API + higher server version : this mode would
>> miss some
>> > key
>> > > new
>> > >feature.
>> > >- new API + older server version : this mode, users
>> are in a
>> > puzzle
>> > >about which feature they could use and which could
>> not. Also,
>> > new
>> > > API will
>> > >do more logic judgement and something else (which cause
>> > performance
>> > > cost)
>> > >for backward compatibility.
>> > >
>> > > I think it's the main reason that other framework split
>> > different kafka
>> > > connector with versions.
>> > >
>> > > Anyway, I will respect your decision. Can I claim this
>> task about
>> > > 

Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-03-13 Thread vino yang
Hi Chris,

No objections to this approach. Good division of the work. I will
provide the mapping between Kafka versions and their specific features later.

Vino yang
Thanks.

2018-03-13 20:11 GMT+08:00 Christofer Dutz :

> Well I have implemented something like the Version checking before, so I
> would opt to take care of that.
>
> I would define an Annotation with an optional "from" and "to" version ...
> you could use that
> I would need something that provides the version of the server from your
> side.
>
> With this I would then implement an Aspect that intercepts these calls,
> does the check and, where necessary, throws exceptions with a message stating
> the minimum or maximum version required for a feature.
>
> I would use a compile-time weaver as this does not add any more
> dependencies or setup complexity to the construct.
>
> Any objections to this approach?
>
> Chris
>
>

Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-03-13 Thread Christofer Dutz
Well, I have implemented something like this version checking before, so I would
opt to take care of that.

I would define an annotation with optional "from" and "to" versions ... you
could use that.
I would need something from your side that provides the version of the server.

With this I would then implement an Aspect that intercepts these calls, does
the check and, where necessary, throws exceptions with a message stating the
minimum or maximum version required for a feature.

I would use a compile-time weaver, as this does not add any more dependencies or
setup complexity to the construct.
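A minimal sketch of what this could look like (the class and method names are
illustrative assumptions, and a plain reflection-based check stands in for the
compile-time-woven aspect):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class KafkaVersionGuard {

    // Hypothetical annotation marking the broker version range a feature needs.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface KafkaVersion {
        String from() default "";
        String to() default "";
    }

    // Compare dotted version strings numerically, so "0.8.2.2" < "0.10.0".
    static int compare(String a, String b) {
        String[] pa = a.split("\\."), pb = b.split("\\.");
        for (int i = 0; i < Math.max(pa.length, pb.length); i++) {
            int na = i < pa.length ? Integer.parseInt(pa[i]) : 0;
            int nb = i < pb.length ? Integer.parseInt(pb[i]) : 0;
            if (na != nb) return Integer.compare(na, nb);
        }
        return 0;
    }

    // The check the aspect would run before each annotated method call.
    static void check(Method m, String brokerVersion) {
        KafkaVersion v = m.getAnnotation(KafkaVersion.class);
        if (v == null) return;
        if (!v.from().isEmpty() && compare(brokerVersion, v.from()) < 0)
            throw new UnsupportedOperationException(
                    m.getName() + " requires broker version >= " + v.from());
        if (!v.to().isEmpty() && compare(brokerVersion, v.to()) > 0)
            throw new UnsupportedOperationException(
                    m.getName() + " is only supported up to broker version " + v.to());
    }

    // Hypothetical connector feature only available from broker 0.10.0 on.
    @KafkaVersion(from = "0.10.0")
    static void publishWithTimestamp() { /* ... */ }

    public static void main(String[] args) throws Exception {
        Method m = KafkaVersionGuard.class.getDeclaredMethod("publishWithTimestamp");
        check(m, "1.1.0");        // passes silently
        try {
            check(m, "0.8.2.2");  // too old for the feature
        } catch (UnsupportedOperationException e) {
            System.out.println(e.getMessage());
            // -> publishWithTimestamp requires broker version >= 0.10.0
        }
    }
}
```

In the real connector the compile-time weaver (e.g. AspectJ) would insert this
check around annotated calls, so nobody invokes `check` by hand.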

Any objections to this approach?

Chris


Am 13.03.18, 03:06 schrieb "vino yang" :

Hi Chris,

It looks like a good idea. I think we can split this job into three
sub-tasks:

   - upgrade the Kafka client version to 1.x and test that it matches the
   0.8.x connector's functionality and behavior;
   - catalog and define the annotation that maps features to the Kafka
   versions they require;
   - expose the new features' APIs to users and guard them with the annotation.

What's your opinion?



Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-03-12 Thread vino yang
Hi Chris,

It looks like a good idea. I think we can split this job into three
sub-tasks:

   - upgrade the Kafka client version to 1.x and test that it matches the
   0.8.x connector's functionality and behavior;
   - catalog and define the annotation that maps features to the Kafka
   versions they require;
   - expose the new features' APIs to users and guard them with the annotation.

What's your opinion?


2018-03-12 21:00 GMT+08:00 Christofer Dutz :

> I don't know if this would be an option:
>
> If we defined and used a Java annotation stating the Kafka version a feature
> is available from (or up to which version it is supported), we could do quick
> checks that compare the current version against the annotations on the methods
> we call. I think this type of check should be quite easy to understand, and we
> wouldn't have to build, maintain, test, document etc. loads of separate modules.
>
> Chris
>

Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-03-12 Thread Christofer Dutz
I don't know if this would be an option:

If we defined and used a Java annotation stating the Kafka version a feature
is available from (or up to which version it is supported), we could do quick
checks that compare the current version against the annotations on the methods
we call. I think this type of check should be quite easy to understand, and we
wouldn't have to build, maintain, test, document etc. loads of separate modules.

Chris



Am 12.03.18, 13:30 schrieb "vino yang" :

Hi Chris,

OK, I hope to hear someone else's opinion.

Vino yang.





Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-03-12 Thread vino yang
Hi Chris,

OK, I hope to hear someone else's opinion.

Vino yang.

2018-03-12 20:23 GMT+08:00 Christofer Dutz :

> Hi Vino,
>
> please don't interpret my opinion as some official project decision.
> For discussions like this I would definitely prefer to hear the opinions
> of others in the project.
> Perhaps having a new client API and having compatibility layers inside the
> connector would be another option.
> So per default the compatibility level of the Kafka client lib would be
> used but a developer could explicitly choose
> older compatibility levels, where we have taken care of the work to decide
> what works and what doesn't.
>
> Chris
>
>
>


Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-03-12 Thread Christofer Dutz
Hi Vino,

please don't interpret my opinion as some official project decision.
For discussions like this I would definitely prefer to hear the opinions of
others in the project.
Perhaps having a new client API with compatibility layers inside the
connector would be another option.
So by default the compatibility level of the Kafka client lib would be used,
but a developer could explicitly choose
older compatibility levels, for which we would have done the work of deciding
what works and what doesn't.

Chris



Am 12.03.18, 13:07 schrieb "vino yang" :

Hi Chris,

In some ways I agree with you. Though the Kafka API is backward compatible,
consider:

   - old API + newer server version: this mode would miss some key new
   features.
   - new API + older server version: in this mode, users are puzzled about
   which features they can use and which they cannot. Also, the new API does
   extra logic and checks for backward compatibility (which costs performance).

I think that's the main reason other frameworks split their Kafka connectors
by version.

Anyway, I will respect your decision. Can I claim the task of upgrading the
Kafka client version to 1.x?






Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-03-12 Thread vino yang
Hi Chris,

In some ways I agree with you. Though the Kafka API is backward compatible,
consider:

   - old API + newer server version: this mode would miss some key new
   features.
   - new API + older server version: in this mode, users are puzzled about
   which features they can use and which they cannot. Also, the new API does
   extra logic and checks for backward compatibility (which costs performance).

I think that's the main reason other frameworks split their Kafka connectors
by version.

Anyway, I will respect your decision. Can I claim the task of upgrading the
Kafka client version to 1.x?


2018-03-12 16:30 GMT+08:00 Christofer Dutz :

> Hi Vino,
>
> I would rather go a different path. I talked to some Kafka pros and they
> sort of confirmed my gut-feeling.
> The greatest changes to Kafka have been in the layers behind the API
> itself. The API seems to have been designed with backward compatibility in
> mind.
> That means you can generally use a newer API with an older broker as well
> as use a new broker with an older API (This is probably even the safer way
> around). As soon as you try to do something with the API which your broker
> doesn't support, you get error messages.
>
> https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix
>
> I would rather update the existing connector to a newer Kafka version ...
> 0.8.2.2 is quite old and we should update to a version of at least 0.10.0
> (I would prefer a 1.x) and stick with that. I doubt many will be using an
> ancient 0.8.2 version (09.09.2015). And everything starting with 0.10.x
> should be interchangeable.
>
> I wouldn't like to have yet another project maintaining a Zoo of adapters
> for Kafka.
>
> Eventually a Kafka-Streams client would make sense though ... to sort of
> extend the Edgent streams from the edge to the Kafka cluster.
>
> Chris
>
>
>


Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-03-12 Thread Christofer Dutz
Hi Vino,

I would rather go a different path. I talked to some Kafka pros and they sort 
of confirmed my gut-feeling.
The greatest changes to Kafka have been in the layers behind the API itself. 
The API seems to have been designed with backward compatibility in mind.
That means you can generally use a newer API with an older broker, as well as
use a newer broker with an older API (the latter is probably even the safer
combination). As soon as you try to do something with the API which your broker
doesn't support, you get error messages.

https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix

I would rather update the existing connector to a newer Kafka version ... 
0.8.2.2 is quite old and we should update to a version of at least 0.10.0 (I 
would prefer a 1.x) and stick with that. I doubt many will be using an ancient 
0.8.2 version (09.09.2015). And everything starting with 0.10.x should be 
interchangeable.

I wouldn't like to have yet another project maintaining a Zoo of adapters for 
Kafka. 

Eventually a Kafka-Streams client would make sense though ... to sort of extend 
the Edgent streams from the edge to the Kafka cluster.

Chris



Am 12.03.18, 03:41 schrieb "vino yang" :

Hi guys,

What about this idea? I think we should support Kafka's new client API.





Re: [discuss] What about splitting the kafka Connector into kafka 0.8 and 0.9?

2018-03-11 Thread vino yang
Hi guys,

What about this idea? I think we should support Kafka's new client API.

2018-03-04 15:10 GMT+08:00 vino yang :

> The reason is that Kafka 0.9+ provided a new consumer API which has more
> features and better performance.
>
> Just like Flink's implementation:
> https://github.com/apache/flink/tree/master/flink-connectors
>
> vinoyang
> Thanks.
>
>