Thanks Vinoth for the detailed explanation. I was about to reply that it
worked; I followed most of the steps you mentioned below.
I used foreachBatch() on the stream to process the batch data from Kafka,
then found the affected partitions using aggregate functions on the Kafka
Dataset, and then
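The approach described above (foreachBatch over a Kafka stream, plus an aggregate over the micro-batch to derive the touched partitions) might look roughly like the following sketch. The topic name, bootstrap servers, and the `partition_path` field in the payload are illustrative assumptions, not details from the original mail.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

// Hypothetical sketch only: assumes each Kafka record's value is JSON
// containing a `partition_path` field. Names are illustrative.
object KafkaBatchPartitionSketch {
  def run(spark: SparkSession): Unit = {
    val stream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092") // assumption
      .option("subscribe", "events")                       // assumption
      .load()

    stream.writeStream
      .foreachBatch { (batch: DataFrame, batchId: Long) =>
        // Aggregate over the micro-batch to find the set of
        // partitions this batch of Kafka records touches.
        val partitions = batch
          .selectExpr("CAST(value AS STRING) AS json")
          .select(get_json_object(col("json"), "$.partition_path")
            .as("partition_path"))
          .agg(collect_set("partition_path"))
          .first()
          .getSeq[String](0)
        // ... write the batch to the Hudi table, scoped to `partitions` ...
      }
      .start()
      .awaitTermination()
  }
}
```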
Thanks Sudha! This means master is now open for regular PRs. Thanks for
your patience, everyone.
On Fri, Aug 14, 2020 at 3:51 PM Bhavani Sudha
wrote:
> Hello all,
>
> We have cut the release branch -
> https://github.com/apache/hudi/tree/release-0.6.0 . Since it is already
> Friday, we will
Hello all,
We have cut the release branch -
https://github.com/apache/hudi/tree/release-0.6.0 . Since it is already
Friday, we will be sending the release candidate early next week (after
some testing).
Happy Friday!
Thanks,
Sudha
On Wed, Aug 12, 2020 at 3:56 PM vbal...@apache.org
wrote:
Hello,
I am Siva's colleague, and I am working on the problem below as well.
I would like to describe what we are trying to achieve with Hudi, as well as
our current way of working and our GDPR and "Right To Be Forgotten"
compliance policies.
Our requirements:
- We wish to apply a strict
+1, thanks leesf. I actually find these very useful when composing the
reports, too. :)
On Sun, Aug 9, 2020 at 5:32 PM vino yang wrote:
> Thanks to leesf for continuously updating Hudi weekly.
>
> It is great to see that more and more improvements are being proposed in
> the community.
>
> Best,
Hi,
On re-ingesting, do you mean you want to overwrite the table while not
surfacing the changes in the incremental query? This has not come up
before.
As you can imagine, it'd be a tricky scenario, where we would need some
special handling/action type introduced.
Yes, and yes on the next two questions.