Ok ... as I haven't heard of any objections, I'll now push my updates to the
kafka connector versions.
Chris
Am 04.04.18, 17:52 schrieb "Christofer Dutz" :
Yeah,
I think I'll just wait till Friday before committing that change however to
give the others the chance to object ;-)
Chris
Am 04.04.18, 17:44 schrieb "Vino yang" :
Hi Chris, take it easy, I think the compile failure may not matter much. I am
glad to hear that the higher Kafka client is well backward compatible.
Vino yang
Thanks.
On 2018-04-04 23:21, Christofer Dutz wrote:
Hi Vino,
Yeah ... but I did it without an ASF header ... that's why the build was
failing for 23 days :( (I am really ashamed about that)
I tried updating the two Kafka dependencies to the 1.1.0 version (and to Scala
2.12) and that worked without any noticeable problems.
Chris
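For reference, the dependency bump Chris describes might look roughly like
this in the connector's pom; the coordinates below are the standard Kafka
artifacts, and the actual Edgent pom entries may differ:

```xml
<!-- Sketch of the described bump: Kafka clients 1.1.0 and the Scala 2.12
     broker artifact. Coordinates assumed, not taken from the Edgent pom. -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>1.1.0</version>
</dependency>
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.12</artifactId>
  <version>1.1.0</version>
</dependency>
```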
Am 04.04.18, 13:
Hi Chris,
I rechecked the old mails between you and me. I misunderstood your message:
I thought you would create the annotation, but in fact you have already
created it.
I will do this work soon, hold on.
Vino yang.
Thanks.
2018-04-04 19:32 GMT+08:00 vino yang :
Hi Chris,
I have not done this. And I would upgrade it soon.
Vino yang
Thanks!
2018-04-04 19:23 GMT+08:00 Christofer Dutz :
Hi,
so I updated the libs locally, built and re-ran the example with this version
and it now worked without any problems.
Chris
Am 04.04.18, 12:58 schrieb "Christofer Dutz" :
Hi all,
reporting back from my easter holidays :-)
Today I had to help a customer with getting a POC working that uses PLC4X and
Edgent. Unfortunately it seems that in order to use the kafka connector I can
only use 0.x versions of Kafka. When connecting to 1.x versions I get
stack-overflows a
G ...
I just noticed that my last commit broke the build and no one complained for 23
days :(
I just fixed that (hopefully) ...
Chris
Am 20.03.18, 10:33 schrieb "Christofer Dutz" :
Hi Chris,
I will try to do this, if I have any question, keep in touch!
Vino yang
Thanks.
2018-03-20 17:32 GMT+08:00 Christofer Dutz :
Ok,
So I just added a new Annotation type to the Kafka module.
org.apache.edgent.connectors.kafka.annotations.KafkaVersion
It has a fromVersion and a toVersion attribute. Both should be optional so just
adding the annotation would have no effect (besides a few additional CPU
operations). The
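A minimal sketch of an annotation like the one described, with optional
fromVersion/toVersion attributes that default to "no bound". The real one
lives in org.apache.edgent.connectors.kafka.annotations; everything beyond
the two attribute names is an assumption, and the demo class is hypothetical:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Sketch of the described KafkaVersion annotation; retention must be
// RUNTIME so version checks can read it reflectively.
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.TYPE})
@interface KafkaVersion {
    String fromVersion() default "";  // lowest Kafka version the feature needs, "" = unbounded
    String toVersion() default "";    // highest Kafka version supported, "" = unbounded
}

public class KafkaVersionDemo {
    // Hypothetical connector method only available from Kafka 0.10.0 on.
    @KafkaVersion(fromVersion = "0.10.0")
    static void newConsumerFeature() { }

    public static void main(String[] args) throws Exception {
        // Read the bounds back reflectively, as a version check would.
        KafkaVersion v = KafkaVersionDemo.class
                .getDeclaredMethod("newConsumerFeature")
                .getAnnotation(KafkaVersion.class);
        System.out.println(v.fromVersion() + "|" + v.toVersion());
    }
}
```

Because both attributes default to the empty string, annotating a method
without arguments leaves its behavior unchanged, matching the intent above.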
Ok ... maybe I should add the Annotation prior to continuing my work on the AWS
connector ...
Chris
Am 04.03.18, 08:10 schrieb "vino yang" :
Hi Chris,
the version upgrades and the main features which affect the produce/consume
API are listed below:
- 0.8.2.x -> 0.9+ : offsets/positions can be saved by the Kafka server
itself; Kerberos and TLS authentication; no need to specify the ZooKeeper servers
- 0.9.x -> 0.10.x : lets the consumer control the max re
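The 0.8 -> 0.9 shift vino describes shows up directly in the consumer
configuration; a sketch with placeholder host names (property names as in the
Kafka documentation):

```properties
# 0.8.x high-level consumer: offsets kept in ZooKeeper, ZK address required
zookeeper.connect=zk1:2181
group.id=edgent-demo

# 0.9+ new consumer: talks to brokers only, offsets stored by Kafka itself
bootstrap.servers=broker1:9092
group.id=edgent-demo
# Kerberos/TLS became possible from 0.9 onward, e.g.:
# security.protocol=SASL_SSL
```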
Hi Chris,
No objections about this approach. Good division of the work. I will
provide the mapping of Kafka version and the specified feature later.
Vino yang
Thanks.
2018-03-13 20:11 GMT+08:00 Christofer Dutz :
Well I have implemented something like the Version checking before, so I would
opt to take care of that.
I would define an Annotation with an optional "from" and "to" version ... you
could use that
I would need something that provides the version of the server from your side.
With this I would
Hi Chris,
It looks like a good idea. I think to finish this job, we can split it into
three sub tasks:
- upgrade the Kafka version to 1.x and test that it matches the 0.8.x
connector's function and behavior;
- sort out and define the annotation which contains the different Kafka
version and feat
Don't know if this would be an option:
If we defined and used a Java annotation which defines what Kafka version a
feature is available from (or up to which version it is supported), then we
could do quick checks that compare the current version with the annotations on
the methods we call. I
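The quick check described above could be sketched like this, assuming the
broker version and the annotation's bounds arrive as dotted version strings
(the class and method names here are hypothetical):

```java
// Hypothetical runtime check: is the server's version inside the
// [from, to] range declared for a feature? Comparison is simplified
// to dotted-integer ordering, which covers versions like "0.10.2".
public class VersionCheck {
    static int compare(String a, String b) {
        String[] x = a.split("\\."), y = b.split("\\.");
        for (int i = 0; i < Math.max(x.length, y.length); i++) {
            int xi = i < x.length ? Integer.parseInt(x[i]) : 0;
            int yi = i < y.length ? Integer.parseInt(y[i]) : 0;
            if (xi != yi) return Integer.compare(xi, yi);
        }
        return 0;
    }

    /** True if serverVersion lies within [from, to]; an empty bound is unbounded. */
    static boolean supported(String serverVersion, String from, String to) {
        if (!from.isEmpty() && compare(serverVersion, from) < 0) return false;
        if (!to.isEmpty() && compare(serverVersion, to) > 0) return false;
        return true;
    }

    public static void main(String[] args) {
        System.out.println(supported("1.1.0", "0.10.0", ""));  // true
        System.out.println(supported("0.8.2", "0.9.0", ""));   // false
    }
}
```

The missing piece, as noted above, is something on the connector side that
reports the server's version so the check has a value to compare against.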
Hi Chris,
OK, hoping to hear others' opinions.
Vino yang.
2018-03-12 20:23 GMT+08:00 Christofer Dutz :
Hi Vino,
please don't interpret my opinion as some official project decision.
For discussions like this I would definitely prefer to hear the opinions of
others in the project.
Perhaps having a new client API and having compatibility layers inside the
connector would be another option.
So per
Hi Chris,
In some ways, I agree with you. The Kafka API does have compatibility, but:
- old API + higher server version : this mode would miss some key new
features.
- new API + older server version : in this mode, users are puzzled
about which features they could use and which could
Hi Vino,
I would rather go a different path. I talked to some Kafka pros and they sort
of confirmed my gut-feeling.
The greatest changes to Kafka have been in the layers behind the API itself.
The API seems to have been designed with backward compatibility in mind.
That means you can generally u
Hi guys,
What about this idea? I think we should support Kafka's new client API.
2018-03-04 15:10 GMT+08:00 vino yang :
> The reason is that Kafka 0.9+ provided a new consumer API which has more
> features and better performance.
>
> Just like Flink's implementation : https://github.com/apache/
>