Well, I have implemented something like this version checking before, so I
would opt to take care of that.

I would define an annotation with optional "from" and "to" version attributes
that you could use. From your side, I would need something that provides the
version of the server.
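To make the idea concrete, here is a minimal sketch of such an annotation. The name `KafkaVersion`, the `from`/`to` attribute names, and the demo method are placeholders for illustration, not a settled API:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical annotation: an empty string means "no bound on that side".
@Retention(RetentionPolicy.RUNTIME) // must be visible at runtime for the check
@Target(ElementType.METHOD)
@interface KafkaVersion {
    String from() default ""; // minimum Kafka version, inclusive
    String to() default "";   // maximum Kafka version, inclusive
}

public class KafkaVersionDemo {

    // Example: a connector method only available from Kafka 0.10.0 on.
    @KafkaVersion(from = "0.10.0")
    void useTimestamps() { }

    // Reads the minimum version declared on a method, or null if unannotated.
    static String minVersionOf(String methodName) {
        try {
            KafkaVersion v = KafkaVersionDemo.class
                    .getDeclaredMethod(methodName)
                    .getAnnotation(KafkaVersion.class);
            return v == null ? null : v.from();
        } catch (NoSuchMethodException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(minVersionOf("useTimestamps"));
    }
}
```

An aspect (or any other interception mechanism) could then read these attributes off the intercepted method and compare them against the server version.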

With this I would then implement an aspect that intercepts these calls, does
the check, and throws an exception where needed, with a message stating the
minimum or maximum version required for the feature.

I would use a compile-time weaver, as this does not add any runtime
dependencies or setup complexity to the project.
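The check the aspect would perform can be sketched in plain Java. This is assumed logic, not a finished design; in the connector it would sit inside the aspect's advice (woven at compile time), with the server version supplied from the connector side as discussed above:

```java
// Sketch of the version check an aspect would run before an intercepted call.
// All names here are illustrative placeholders.
public class VersionGuard {

    // Compares dotted version strings numerically, e.g. "0.10.0" < "1.0.0".
    static int compare(String a, String b) {
        String[] pa = a.split("\\."), pb = b.split("\\.");
        int n = Math.max(pa.length, pb.length);
        for (int i = 0; i < n; i++) {
            int x = i < pa.length ? Integer.parseInt(pa[i]) : 0;
            int y = i < pb.length ? Integer.parseInt(pb[i]) : 0;
            if (x != y) {
                return Integer.compare(x, y);
            }
        }
        return 0;
    }

    // Throws if serverVersion is outside [from, to]; empty string = unbounded.
    static void check(String serverVersion, String from, String to) {
        if (!from.isEmpty() && compare(serverVersion, from) < 0) {
            throw new UnsupportedOperationException("Feature requires Kafka "
                    + from + " or newer, server is " + serverVersion);
        }
        if (!to.isEmpty() && compare(serverVersion, to) > 0) {
            throw new UnsupportedOperationException("Feature only supported up "
                    + "to Kafka " + to + ", server is " + serverVersion);
        }
    }

    public static void main(String[] args) {
        check("1.0.0", "0.10.0", ""); // passes: 1.0.0 >= 0.10.0
        try {
            check("0.8.2", "0.10.0", ""); // fails: 0.8.2 < 0.10.0
        } catch (UnsupportedOperationException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

In an AspectJ advice, the same `check` call would be made with the values read from the method's annotation before proceeding with the intercepted call.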

Any objections to this approach?

Chris


On 13.03.18, 03:06, "vino yang" <yanghua1...@gmail.com> wrote:

    Hi Chris,
    
    It looks like a good idea. I think to finish this job, we can split it into
    three sub-tasks:

       - upgrade the Kafka version to 1.x and test it to match the 0.8.x
       connector's function and behavior;
       - sort out and define the annotation that captures the different Kafka
       versions and features;
       - expose the new features' API to users and check it with the
       annotation.
    
    What's your opinion?
    
    
    2018-03-12 21:00 GMT+08:00 Christofer Dutz <christofer.d...@c-ware.de>:
    
    > Don't know if this would be an option:
    >
    > If we defined and used a Java annotation which defines what Kafka
    > version a feature is available from (or up to which version it is
    > supported), then we could do quick checks that compare the current
    > version with the annotations on the methods we call. I think this type
    > of check should be quite easy to understand, and we wouldn't have to
    > build, maintain, test, document etc. loads of separate modules.
    >
    > Chris
    >
    >
    >
    > On 12.03.18, 13:30, "vino yang" <yanghua1...@gmail.com> wrote:
    >
    >     Hi Chris,
    >
    >     OK, hoping to hear others' opinions.
    >
    >     Vino yang.
    >
    >     2018-03-12 20:23 GMT+08:00 Christofer Dutz <christofer.d...@c-ware.de>:
    >
    >     > Hi Vino,
    >     >
    >     > please don't interpret my opinion as some official project
    >     > decision. For discussions like this I would definitely prefer to
    >     > hear the opinions of others in the project.
    >     > Perhaps having a new client API and having compatibility layers
    >     > inside the connector would be another option.
    >     > So per default the compatibility level of the Kafka client lib
    >     > would be used, but a developer could explicitly choose older
    >     > compatibility levels, where we have taken care of the work to
    >     > decide what works and what doesn't.
    >     >
    >     > Chris
    >     >
    >     >
    >     >
    >     > On 12.03.18, 13:07, "vino yang" <yanghua1...@gmail.com> wrote:
    >     >
    >     >     Hi Chris,
    >     >
    >     >     In some ways, I agree with you. Though the Kafka API is
    >     >     backward compatible, consider:
    >     >
    >     >        - old API + higher server version: this mode would miss
    >     >        some key new features.
    >     >        - new API + older server version: in this mode, users are
    >     >        puzzled about which features they can use and which they
    >     >        cannot. Also, the new API does more logic checks and other
    >     >        work for backward compatibility (which causes a
    >     >        performance cost).
    >     >
    >     >     I think that is the main reason other frameworks split their
    >     >     Kafka connectors by version.
    >     >
    >     >     Anyway, I will respect your decision. Can I claim the task of
    >     >     upgrading the Kafka client's version to 1.x?
    >     >
    >     >
    >     >     2018-03-12 16:30 GMT+08:00 Christofer Dutz <christofer.d...@c-ware.de>:
    >     >
    >     >     > Hi Vino,
    >     >     >
    >     >     > I would rather go a different path. I talked to some Kafka
    >     >     > pros and they sort of confirmed my gut-feeling.
    >     >     > The greatest changes to Kafka have been in the layers behind
    >     >     > the API itself. The API seems to have been designed with
    >     >     > backward compatibility in mind.
    >     >     > That means you can generally use a newer API with an older
    >     >     > broker as well as use a new broker with an older API (this
    >     >     > is probably even the safer way around). As soon as you try
    >     >     > to do something with the API which your broker doesn't
    >     >     > support, you get error messages.
    >     >     >
    >     >     > https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix
    >     >     >
    >     >     > I would rather update the existing connector to a newer
    >     >     > Kafka version ... 0.8.2.2 is quite old and we should update
    >     >     > to a version of at least 0.10.0 (I would prefer a 1.x) and
    >     >     > stick with that. I doubt many will be using an ancient 0.8.2
    >     >     > version (09.09.2015). And everything starting with 0.10.x
    >     >     > should be interchangeable.
    >     >     >
    >     >     > I wouldn't like to have yet another project maintaining a
    >     >     > zoo of adapters for Kafka.
    >     >     >
    >     >     > Eventually a Kafka-Streams client would make sense though
    >     >     > ... to sort of extend the Edgent streams from the edge to
    >     >     > the Kafka cluster.
    >     >     >
    >     >     > Chris
    >     >     >
    >     >     >
    >     >     >
    >     >     > On 12.03.18, 03:41, "vino yang" <yanghua1...@gmail.com> wrote:
    >     >     >
    >     >     >     Hi guys,
    >     >     >
    >     >     >     What about this idea? I think we should support Kafka's
    >     >     >     new client API.
    >     >     >
    >     >     >     2018-03-04 15:10 GMT+08:00 vino yang <yanghua1...@gmail.com>:
    >     >     >
    >     >     >     > The reason is that Kafka 0.9+ provided a new consumer
    >     >     >     > API which has more features and better performance.
    >     >     >     >
    >     >     >     > Just like Flink's implementation:
    >     >     >     > https://github.com/apache/flink/tree/master/flink-connectors
    >     >     >     >
    >     >     >     > vinoyang
    >     >     >     > Thanks.
    >     >     >     >
    >     >     >     >
    >     >     >
    >     >     >
    >     >     >
    >     >
    >     >
    >     >
    >
    >
    >
    
