[...] Ankit that this problem could actually be solved quite
elegantly with Flink's state. If you can ingest the product/account
information changes as a stream, you can keep the latest version of it in
Flink state by using a co-map function [1, 2] and use that state to enrich
each incoming event.

[1] https://ci.apache.org/projects/flink/flink-docs-master/dev/stream/operators/
[2] https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/streami
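
A rough sketch of what that could look like (untested; Position, Product and
EnrichedPosition are placeholder POJOs standing in for your actual types, and
productId is an assumed join key):

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.co.RichCoFlatMapFunction;
import org.apache.flink.util.Collector;

// Placeholder POJOs for the sketch.
class Position { public String productId; public double quantity; }
class Product { public String productId; public String name; }
class EnrichedPosition {
    public Position position; public Product product;
    public EnrichedPosition(Position pos, Product prod) { position = pos; product = prod; }
}

// Both streams are keyed by productId; the newest Product per key is kept in
// Flink state and used to enrich every Position that arrives for that key.
class ProductEnricher extends RichCoFlatMapFunction<Position, Product, EnrichedPosition> {

    private transient ValueState<Product> latestProduct;

    @Override
    public void open(Configuration parameters) {
        latestProduct = getRuntimeContext().getState(
                new ValueStateDescriptor<>("latest-product", Product.class));
    }

    @Override
    public void flatMap1(Position position, Collector<EnrichedPosition> out) throws Exception {
        Product product = latestProduct.value();
        if (product != null) {
            out.collect(new EnrichedPosition(position, product));
        }
        // else: decide whether to drop, buffer or emit the position un-enriched
    }

    @Override
    public void flatMap2(Product update, Collector<EnrichedPosition> out) throws Exception {
        latestProduct.update(update); // reference data changed: remember the newest version
    }
}

// Wiring:
// DataStream<EnrichedPosition> enriched = positions
//         .connect(productUpdates)
//         .keyBy(p -> p.productId, u -> u.productId)
//         .flatMap(new ProductEnricher());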
On Tue, Jul 24, 2018 at 2:15 PM Harshvardhan Agrawal <
harshvardhan.ag...@gmail.com> wrote:

> Hi,
>
> Thanks for your responses.
>
> There is no fixed interval for the data being updated. It’s more like [...]
> mandates that change will trigger the reference data to change.
>
> It’s not just the enrichment we are doing here. Once we have enriched the
> data we will be performing a bunch of aggregations using the enriched data.
>
> Which approach would you recommend?
>> How often is the product db updated? Based on that you can store
>> product metadata as state in Flink, maybe setup the state on cluster
>> startup and then update daily etc.
>>
>> Also, just based on this feature, flink doesn’t seem to add a lot of
>> value on top of Kafka. As Jorn said below, you can very well store all the
>> events in an external store and then periodically run a cron to enrich
>> later since your processing doesn’t seem to require absolute real time.
>>
>> Thanks
>>
>> Ankit
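
For reference, a minimal sketch of the "set up the state on startup and update
daily" variant described above (untested; it reuses the placeholder
Position/Product/EnrichedPosition types from the earlier sketch, and the
actual database query is left as a stub):

import java.util.HashMap;
import java.util.Map;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

// Loads the product reference data when the task starts and reloads it lazily
// once the cached copy is older than the refresh interval ("update daily etc.").
class StartupLoadedEnricher extends RichMapFunction<Position, EnrichedPosition> {

    private static final long REFRESH_INTERVAL_MS = 24L * 60 * 60 * 1000;

    private transient Map<String, Product> productsById;
    private transient long lastLoadedMs;

    @Override
    public void open(Configuration parameters) throws Exception {
        reload();
    }

    @Override
    public EnrichedPosition map(Position position) throws Exception {
        if (System.currentTimeMillis() - lastLoadedMs > REFRESH_INTERVAL_MS) {
            reload();
        }
        return new EnrichedPosition(position, productsById.get(position.productId));
    }

    private void reload() throws Exception {
        Map<String, Product> fresh = new HashMap<>();
        // ... run the product query against the database and fill `fresh` here ...
        productsById = fresh;
        lastLoadedMs = System.currentTimeMillis();
    }
}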
>>
>> *From: *Jörn Franke
>> *Date: *Monday, July 23, 2018 at 10:10 PM
>> *To: *Harshvardhan Agrawal
>> *Cc: *
>> *Subject: *Re: Implement Joins with Lookup Data
>>
>> For the first one (lookup of single entries) you could use a NoSQL db (eg key
>> value store) - a relational database will not scale.
>>
>> Depending on when you need to do the enrichment you could also first store the
>> data and enrich it later as part of a batch process.
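
One way to wire such per-record key-value lookups into Flink without blocking
the pipeline is the Async I/O operator; a rough sketch (untested; KvClient is
a hypothetical async client standing in for whatever your store provides, and
the placeholder POJOs from the first sketch are reused):

import java.util.Collections;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.datastream.AsyncDataStream;
import org.apache.flink.streaming.api.functions.async.ResultFuture;
import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

// Point lookups against an external key-value store, issued asynchronously so
// that lookup latency does not stall the stream. KvClient is hypothetical.
class KvLookupEnricher extends RichAsyncFunction<Position, EnrichedPosition> {

    private transient KvClient client;

    @Override
    public void open(Configuration parameters) {
        client = KvClient.connect("kv-host:6379"); // hypothetical connection call
    }

    @Override
    public void asyncInvoke(Position position, ResultFuture<EnrichedPosition> resultFuture) {
        CompletableFuture<Product> lookup = client.get(position.productId);
        lookup.thenAccept(product -> resultFuture.complete(
                Collections.singleton(new EnrichedPosition(position, product))));
    }
}

// Wiring: at most 100 lookups in flight, 1 second timeout per lookup.
// DataStream<EnrichedPosition> enriched = AsyncDataStream.orderedWait(
//         positions, new KvLookupEnricher(), 1, TimeUnit.SECONDS, 100);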
>> On 24. Jul 2018, at 05:25, Harshvardhan Agrawal wrote:
>>
>>> Hi,
>>>
>>> We are using Flink for financial data enrichment and aggregations. We have
>>> Positions data that we are currently receiving from Kafka. We want to
>>> enrich that data with reference data like Product and Account information
>>> that is present in a relational database. From my understanding of Flink [...]