Thanks, I'll keep that in mind the next time I allocate memory to my server
On 14 June 2017 at 10:12, Dave Reynolds <[email protected]> wrote:

> Glad it worked. The limit on batch size depends on the memory you have
> allocated to your server.
>
> Dave
>
> On 14/06/17 09:06, Trevor Lazarus wrote:
>
>> Dave, I was even able to step it up to batches of 1000000, you really
>> bailed me out.
>>
>> Thanks,
>> Trevor.
>>
>> On 14 June 2017 at 09:51, Trevor Lazarus <[email protected]> wrote:
>>
>>> Hi Dave,
>>>
>>> That works, I'm able to DELETE in batches of 10000, thanks a lot.
>>>
>>> Best,
>>> Trevor.
>>>
>>> On 14 June 2017 at 09:37, Dave Reynolds <[email protected]> wrote:
>>>
>>>> Hi Trevor,
>>>>
>>>> To set a limit you need a subselect, and that's only possible with the
>>>> full DELETE syntax, not the shortened DELETE WHERE.
>>>>
>>>> So I think your query would look something like this (typed by hand,
>>>> untested):
>>>>
>>>> DELETE {
>>>>   ?s skos:closeMatch ?o
>>>> } WHERE {
>>>>   {
>>>>     SELECT ?s ?o WHERE {
>>>>       ?s skos:closeMatch ?o
>>>>     } LIMIT 10000
>>>>   }
>>>> }
>>>>
>>>> With a suitable PREFIX declaration for skos.
>>>>
>>>> Dave
>>>>
>>>> On 14/06/17 07:17, Trevor Lazarus wrote:
>>>>
>>>>> Hi Dave,
>>>>>
>>>>> I tried, but couldn't get that update query to run; it would be
>>>>> really helpful if you could show me an example.
>>>>>
>>>>> This is what I have so far:
>>>>>
>>>>> DELETE WHERE { ?s skos:closeMatch ?o }
>>>>>
>>>>> Best,
>>>>> Trevor.
>>>>>
>>>>> On 13 June 2017 at 09:54, Trevor Lazarus <[email protected]> wrote:
>>>>>
>>>>>>> One option is to use the SPARQL update but with a subselect that
>>>>>>> includes a LIMIT. Set that limit large enough to be useful but not
>>>>>>> so large that it times out.
>>>>>>>
>>>>>>> Then you can just keep applying that until there's no more left.
>>>>>>
>>>>>> Thanks, I'm going to try that.
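[Editor's note: Dave's batched DELETE can be driven from a small script that re-posts the same update until the store is clean. A minimal sketch in Python, assuming a local Fuseki 2 dataset whose update endpoint is http://localhost:3030/ds/update — the dataset name, port, and batch size are assumptions, not from the thread:]

```python
import urllib.request

# Assumed Fuseki update endpoint; adjust the dataset name ("ds") and port.
UPDATE_URL = "http://localhost:3030/ds/update"


def build_batch_delete(limit=10000):
    """Build the full-form DELETE with a LIMITed subselect from the thread,
    including the PREFIX declaration Dave mentions."""
    return f"""
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
DELETE {{
  ?s skos:closeMatch ?o
}} WHERE {{
  {{
    SELECT ?s ?o WHERE {{
      ?s skos:closeMatch ?o
    }} LIMIT {limit}
  }}
}}
"""


def post_update(query, url=UPDATE_URL):
    """POST one SPARQL Update to Fuseki and return the HTTP status."""
    req = urllib.request.Request(
        url,
        data=query.encode("utf-8"),
        headers={"Content-Type": "application/sparql-update"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Trevor's original mail mentions both skos:exactMatch and skos:closeMatch, so a second pass with the predicate swapped to skos:exactMatch would be needed to remove all the co-reference triples.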
>>>>>> On 13 June 2017 at 09:46, Dave Reynolds <[email protected]> wrote:
>>>>>>
>>>>>>> On 13/06/17 05:34, Trevor Lazarus wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> Sorry if this has been asked many times, but I'm in a bit of a
>>>>>>>> soup. I ran a bunch of co-reference resolution scripts and,
>>>>>>>> knowingly or unknowingly, put all the matches
>>>>>>>> (skos:[exactMatch|closeMatch]) into the default graph in Fuseki 2
>>>>>>>> along with the other data.
>>>>>>>>
>>>>>>>> I'm wondering if there's an easy way to just remove those triples,
>>>>>>>> using SOH with something like s-delete.
>>>>>>>> When I run a SPARQL query to delete these from Fuseki's front end,
>>>>>>>> it times out because there are way too many.
>>>>>>>> My only other option is to use OFFSET, apparently.
>>>>>>>
>>>>>>> One option is to use the SPARQL update but with a subselect that
>>>>>>> includes a LIMIT. Set that limit large enough to be useful but not
>>>>>>> so large that it times out.
>>>>>>>
>>>>>>> Then you can just keep applying that until there's no more left.
>>>>>>>
>>>>>>> Dave
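[Editor's note: "keep applying that until there's no more left" implies a termination check, and a SPARQL ASK is a cheap way to do it between batches. Another small sketch under the same assumptions (local dataset named "ds"; the query endpoint path is an assumption, not from the thread):]

```python
import json
import urllib.parse
import urllib.request

# Assumed Fuseki query endpoint; adjust the dataset name ("ds") and port.
QUERY_URL = "http://localhost:3030/ds/query"


def build_ask(predicate="skos:closeMatch"):
    """ASK whether any triples with the given predicate remain."""
    return (
        "PREFIX skos: <http://www.w3.org/2004/02/skos/core#>\n"
        f"ASK {{ ?s {predicate} ?o }}"
    )


def any_left(url=QUERY_URL, predicate="skos:closeMatch"):
    """Run the ASK against Fuseki and decode the JSON boolean result."""
    params = urllib.parse.urlencode({"query": build_ask(predicate)})
    req = urllib.request.Request(
        f"{url}?{params}",
        headers={"Accept": "application/sparql-results+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["boolean"]
```

A driver loop would then alternate: post one batched DELETE, then ASK; stop when the ASK returns false, and repeat the whole cycle for skos:exactMatch.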
