https://cwiki.apache.org/confluence/display/HUDI/20200609+Weekly+Sync+Minutes
Thanks
Vinoth
Hey folks,
Wanted to start a thread on the status update for the 0.5.3 release.
After the first candidate was sent out for voting, we had to pull in 2 more
commits that had fixes for test flakiness. So, I cherry-picked them and
prepared the 2nd candidate. But the integration tests are failing
Thank you! Excited to contribute!
On Tue, Jun 9, 2020 at 5:51 PM Bhavani Sudha wrote:
> Done! Welcome to Hudi :)
Done! Welcome to Hudi :)
On Tue, Jun 9, 2020 at 3:45 PM Alan Chu wrote:
> My mistake, it's chualan. Thanks!
Hi Mario,
Can you please share your jira id?
Thanks,
Sudha
On Tue, Jun 9, 2020 at 3:29 AM Mario de Sá Vera wrote:
> hey Vinoth, I noticed you added this suggestion to the weekly log .. that
> is great ! just let me know if I am able to create a JIRA , as I tried to
> go to HUDI project in
My mistake, it's chualan. Thanks!
On Tue, Jun 9, 2020 at 5:44 PM Bhavani Sudha wrote:
> Hi Alan,
>
> Please share your jira id.
>
> Thanks,
> Sudha
Hi Alan,
Please share your jira id.
Thanks,
Sudha
On Tue, Jun 9, 2020 at 3:13 PM Alan Chu wrote:
> Hi,
>
> I'd love to contribute to Hudi, please add me to the contributor list if
> possible, thanks!
>
>
> Best,
> Alan Chu
>
Hi,
I'd love to contribute to Hudi, please add me to the contributor list if
possible, thanks!
Best,
Alan Chu
Hey Vinoth, I noticed you added this suggestion to the weekly log. That
is great! Just let me know if I am able to create a JIRA, as I tried to
go to the HUDI project in Apache and did not find a way to do it. I can bring
in a good description of the benefits, etc.
Thanks, Mario.
Hi,
I tried to ingest records into S3 with 2 runs (20K/50K partitions) with
bulk_insert mode and a COW table.
All the stages look reasonably OK except the last step, where we
finalize the writes (i.e. HoodieTable.finalizeWrite), as it needs to scan through
the whole directory
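For context on the setup being described, a bulk_insert write into a copy-on-write (COW) table is usually configured through Hudi's Spark datasource write options. The sketch below shows a minimal option map of that kind; the option keys are standard Hudi write configs, but the table name, record key field, partition path field, and S3 path are hypothetical placeholders, not details taken from the report above.

```python
# Sketch of Hudi Spark datasource options for a bulk_insert into a COW
# table. Key names are standard Hudi write configs; all values below are
# hypothetical placeholders for illustration only.
hudi_options = {
    "hoodie.table.name": "example_table",                        # hypothetical
    "hoodie.datasource.write.table.type": "COPY_ON_WRITE",
    "hoodie.datasource.write.operation": "bulk_insert",
    "hoodie.datasource.write.recordkey.field": "uuid",           # hypothetical
    "hoodie.datasource.write.partitionpath.field": "partition",  # hypothetical
}

# With a SparkSession and a DataFrame `df` in hand, the write itself
# would look like (commented out since it needs a live Spark + Hudi setup):
#
#   df.write.format("hudi").options(**hudi_options) \
#       .mode("append").save("s3://bucket/path")  # hypothetical S3 path
```

As the report itself notes, the final HoodieTable.finalizeWrite step scans the table directory, so a very large partition count (20K/50K) multiplies the S3 listing work done at that stage.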