Sometimes I wake up because I dreamed that this had gone down:

https://cwiki.apache.org/confluence/display/KAFKA/Hierarchical+Topics

On 02.02.2017 19:07, Roger Vandusen wrote:
Ah, yes, I see your point and use case, thanks for the feedback.

On 2/2/17, 11:02 AM, "Damian Guy" <damian....@gmail.com> wrote:

     Hi Roger,
     The problem is that you can't do it async and still guarantee
     at-least-once delivery. For example, if your streams app looked
     something like this:

     builder.stream("input").mapValues(...).process(yourCustomProcessorSupplier);

     On the commit interval, Kafka Streams will commit the consumed offsets
     for the topic "input". Now if you do an async call in process(), there
     is no guarantee that the message has been delivered. The broker might
     fail, or there may be some other transient error. So you can end up
     dropping messages, as the consumer has committed the offset of the
     source topic, but the receiver has not actually received it.
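     To make that concrete, here is a rough, untested sketch against the
     0.10.x API (cluster addresses, topic names, and the transformation
     are just placeholders):

     import java.util.Properties;
     import org.apache.kafka.clients.producer.KafkaProducer;
     import org.apache.kafka.clients.producer.ProducerRecord;
     import org.apache.kafka.common.serialization.Serdes;
     import org.apache.kafka.common.serialization.StringSerializer;
     import org.apache.kafka.streams.KafkaStreams;
     import org.apache.kafka.streams.StreamsConfig;
     import org.apache.kafka.streams.kstream.KStream;
     import org.apache.kafka.streams.kstream.KStreamBuilder;
     import org.apache.kafka.streams.processor.AbstractProcessor;

     public class AsyncForwarderSketch {
         public static void main(String[] args) {
             // Plain producer pointed at the *other* (sink) cluster.
             Properties sinkProps = new Properties();
             sinkProps.put("bootstrap.servers", "sink-cluster:9092");
             sinkProps.put("key.serializer", StringSerializer.class.getName());
             sinkProps.put("value.serializer", StringSerializer.class.getName());
             final KafkaProducer<String, String> sinkProducer =
                     new KafkaProducer<>(sinkProps);

             KStreamBuilder builder = new KStreamBuilder();
             KStream<String, String> source = builder.stream("input");
             source.mapValues(String::toUpperCase) // stand-in transformation
                   .process(() -> new AbstractProcessor<String, String>() {
                       @Override
                       public void process(String key, String value) {
                           // Fire-and-forget: nobody checks the returned
                           // future, so a failed send can surface after the
                           // "input" offsets were already committed, and
                           // the record is silently lost.
                           sinkProducer.send(
                                   new ProducerRecord<>("sink-topic", key, value));
                       }
                   });

             Properties props = new Properties();
             props.put(StreamsConfig.APPLICATION_ID_CONFIG, "cross-cluster-forwarder");
             props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "source-cluster:9092");
             props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG,
                     Serdes.String().getClass().getName());
             props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG,
                     Serdes.String().getClass().getName());
             new KafkaStreams(builder, props).start();
         }
     }

     The unchecked future returned by send() is exactly the gap: a sink-side
     failure shows up after the source offsets are already committed.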
     Does that make sense?

     Thanks,
     Damian
     On Thu, 2 Feb 2017 at 17:56 Roger Vandusen <roger.vandu...@ticketmaster.com> wrote:
> Damian,
     >
     > We could lessen the producer.send(..).get() impact on throughput by
     > simply handing it off to another async worker component in our Spring
     > Boot app. Any feedback on that?
     >
     > -Roger
     >
     > On 2/2/17, 10:35 AM, "Damian Guy" <damian....@gmail.com> wrote:
     >
     >     Hi, yes you could attach a custom processor that writes to
     >     another Kafka cluster. The problem is going to be guaranteeing
     >     at-least-once delivery without impacting throughput. To guarantee
     >     at least once you would need to do a blocking send on every call
     >     to process, i.e., producer.send(..).get(). This is going to have
     >     an impact on throughput, but I can't currently think of another
     >     way of doing it (with the current framework) that will guarantee
     >     at-least-once delivery.
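     >
     >     In process() that would look something like this (sketch only;
     >     sinkProducer is a KafkaProducer configured for the sink cluster,
     >     and the topic name is a placeholder):
     >
     >     import java.util.concurrent.ExecutionException;
     >     import org.apache.kafka.clients.producer.ProducerRecord;
     >
     >     // Inside a custom AbstractProcessor<String, String>:
     >     @Override
     >     public void process(String key, String value) {
     >         try {
     >             // Block until the sink cluster acks the record, so the
     >             // source offsets are only committed after delivery has
     >             // actually succeeded.
     >             sinkProducer.send(new ProducerRecord<>("sink-topic", key, value))
     >                         .get();
     >         } catch (InterruptedException e) {
     >             Thread.currentThread().interrupt();
     >             throw new RuntimeException("interrupted while sending", e);
     >         } catch (ExecutionException e) {
     >             // Failing the stream thread means the app restarts and
     >             // reprocesses from the last committed offset.
     >             throw new RuntimeException("send to sink cluster failed", e);
     >         }
     >     }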
     >
     >     On Thu, 2 Feb 2017 at 17:26 Roger Vandusen <roger.vandu...@ticketmaster.com> wrote:
     >
     >     > Thanks for the quick reply Damian.
     >     >
     >     > So the work-around would be to configure our source topology
     >     > with a processor component that would use another app component
     >     > as a stand-alone KafkaProducer, let's say an injected Spring
     >     > bean, configured for the other (sink) cluster, and then publish
     >     > sink topic messages through this producer to the sink cluster?
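     >     >
     >     > Something like this (hypothetical Spring config; the broker
     >     > address and serializers are just examples):
     >     >
     >     > import java.util.Properties;
     >     > import org.apache.kafka.clients.producer.KafkaProducer;
     >     > import org.apache.kafka.common.serialization.StringSerializer;
     >     > import org.springframework.context.annotation.Bean;
     >     > import org.springframework.context.annotation.Configuration;
     >     >
     >     > @Configuration
     >     > public class SinkClusterConfig {
     >     >     @Bean(destroyMethod = "close")
     >     >     public KafkaProducer<String, String> sinkProducer() {
     >     >         Properties props = new Properties();
     >     >         // Bootstrap against the *sink* cluster, not the source.
     >     >         props.put("bootstrap.servers", "sink-cluster:9092");
     >     >         props.put("acks", "all"); // durability for at-least-once
     >     >         props.put("key.serializer", StringSerializer.class.getName());
     >     >         props.put("value.serializer", StringSerializer.class.getName());
     >     >         return new KafkaProducer<>(props);
     >     >     }
     >     > }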
     >     >
     >     > Sound like a solution? Have a better suggestion or any warnings
     >     > about this approach?
     >     >
     >     > -Roger
     >     >
     >     >
     >     > On 2/2/17, 10:10 AM, "Damian Guy" <damian....@gmail.com> wrote:
     >     >
     >     >     Hi Roger,
     >     >
     >     >     This is not currently supported and won't be available in
     >     >     0.10.2.0. This has been discussed, but it doesn't look like
     >     >     there is a JIRA for it yet.
     >     >
     >     >     Thanks,
     >     >     Damian
     >     >
     >     >     On Thu, 2 Feb 2017 at 16:51 Roger Vandusen <roger.vandu...@ticketmaster.com> wrote:
     >     >
     >     >     > We would like to source topics from one cluster and sink
     >     >     > them to a different cluster from the same topology.
     >     >     >
     >     >     > If this is not currently supported, then is there a KIP/JIRA
     >     >     > to track work to support this in the future? 0.10.2.0?
     >     >     >
     >     >     > -Roger
     >     >     >
     >     >     >
     >     >
     >     >
     >     >
     >
     >
     >
