We received a +1 from the following individuals:

- Riley Kuttruff
- Nga Chung
- Thomas Loubrieu
- Thomas Huang

I will go ahead and merge this PR (which has also been approved in GH). Thanks 
everyone for contributing and I look forward to getting this feature officially 
released!

- Stepheny

On 2023/08/18 18:55:09 Nga Chung wrote:
> Thank you, Stepheny!
> 
> The Jupyter notebook [1] that we've used to illustrate data matchup
> was updated to be compatible with async jobs and pagination, and it works
> as expected.
> 
> +1 (binding) from me.
> 
> Best,
> Nga
> 
> [1] 
> https://github.com/access-cdms/cdms-notebooks/blob/master/CDMS-Match-Up-Demo.ipynb
> 
> On Wed, Aug 16, 2023 at 7:13 AM Riley Kuttruff <r...@apache.org> wrote:
> >
> > These changes have been extensively tested and improved through the 
> > Cloud-Based Data Matchup Service project's SDAP deployment and have been 
> > working like a charm. Queries that were previously infeasible due to their 
> > size & computation time are now able to be handled with ease.
> > I will say that since we are now storing all matchup results, especially 
> > given this PR is focused on particularly large results, we should eventually 
> > look for a tool to remove old and failed results to free up space.
> >
> > +1 (binding) from me
> >
> > On 2023/08/16 00:33:19 Stepheny Perez wrote:
> > > Hi everyone,
> > >
> > > I opened a PR for a major change to SDAP here: 
> > > https://github.com/apache/incubator-sdap-nexus/pull/249
> > >
> > > Because this is a major change, I'll open a 72-hour VOTE thread to see if 
> > > there are any concerns. However you vote, please provide justification.
> > >
> > > This change introduces "async jobs" to SDAP. Currently, SDAP only 
> > > supports synchronous jobs, meaning the API call will hang until the 
> > > analysis is completed and results are returned to the user. This new 
> > > async feature will immediately return a job detail response to the user 
> > > (via a 300 redirect) which the user can then poll until the results are 
> > > ready. This is important because it adds support for larger jobs; the 
> > > jobs can take days or weeks if needed. Please be aware this change is 
> > > only enabled for the /match_spark endpoint -- no other algorithms are 
> > > impacted. In order to enable this feature for other algorithms, the 
> > > results would need to be persisted to Cassandra and the algorithm's 
> > > handler would need to inherit from "NexusCalcSparkTornadoHandler".
> > >
> > > The new endpoints utilize the OGC Coverages specification 
> > > (https://ogcapi.ogc.org/coverages/).
> > >
> > > Thanks,
> > > Stepheny
> > >
> 
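
For readers who want to try the workflow described in the quoted proposal
above, here is a minimal client-side sketch of the submit-and-poll pattern.
The deployment URL, query parameters, and JSON field names below are
illustrative assumptions, not the exact contract of the new /match_spark
endpoints.

    # Minimal sketch of the async submit-and-poll workflow (assumptions noted inline).
    import time

    import requests

    SDAP_HOST = "https://sdap.example.org"  # hypothetical deployment URL

    # Submit the matchup job. With async enabled, SDAP is described as returning
    # a redirect to a job-detail resource instead of blocking until the analysis
    # finishes, so automatic redirect-following is disabled to capture it.
    submit = requests.get(
        f"{SDAP_HOST}/match_spark",
        params={"primary": "PRIMARY_DATASET", "secondary": "SECONDARY_DATASET"},
        allow_redirects=False,
    )
    job_url = submit.headers["Location"]  # job-detail URL from the redirect
    if not job_url.startswith("http"):
        job_url = f"{SDAP_HOST}{job_url}"  # handle a relative Location header

    # Poll the job-detail resource until the job reports a terminal status.
    while True:
        job = requests.get(job_url).json()
        status = job.get("status")  # field name is an assumption
        if status in ("success", "failed"):
            break
        time.sleep(30)  # poll interval chosen arbitrarily for this sketch

    print(f"Job finished with status: {status}")

Once the job reports success, the job-detail response would presumably link to
the paginated results, which can then be fetched page by page with the same
client.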
