Re: [Virtuoso-users] Submit SPARQL query with POST method
Hi Luis, Roel,

The --data section should start with query=:

  curl -X POST -vH "Accept:application/sparql-results+xml" --data "query=select distinct ?Concept where {[] a ?Concept} LIMIT 100" http://localhost:8890/sparql

For the rest, using a response type other than HTML is recommended, e.g. the sparql-results+xml Accept header I added.

Regards,
Jerven

On 30/04/2021 15:29, Roel Janssen wrote:

Hi,

On Fri, 2021-04-30 at 13:24 +, Luís Moreira de Sousa via Virtuoso-users wrote:

Hi all,

I would like to submit queries to the SPARQL endpoint using the POST method, to avoid payload limits with GET. For some reason, Virtuoso is not able to identify the query in the message body. An example:

  $ curl -X POST -H "Accept:text/html" --data "select distinct ?Concept where {[] a ?Concept} LIMIT 100" http://localhost:8890/sparql

  Virtuoso 22023 Error The request does not contain text of SPARQL

Am I doing something wrong here? Or should I use a different method?

Try adding another header with: -H "Content-Type: application/sparql-update".

Kind regards,
Roel Janssen

___
Virtuoso-users mailing list
Virtuoso-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/virtuoso-users

--
*Jerven Tjalling Bolleman*
Principal Software Developer
*SIB | Swiss Institute of Bioinformatics*
1, rue Michel Servet - CH 1211 Geneva 4 - Switzerland
t +41 22 379 58 85
Jerven.Bolleman@sib.swiss - www.sib.swiss
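As a sketch of what curl's --data is doing in the working command: the POST body is an application/x-www-form-urlencoded form carrying the SPARQL text under the "query" key. The endpoint URL and query are the ones from this thread; building the body explicitly (as below) also protects queries containing characters like & or # in IRIs:

```python
from urllib.parse import urlencode

query = "select distinct ?Concept where {[] a ?Concept} LIMIT 100"

# The POST body Virtuoso expects: form-encoded, with the SPARQL text
# under the "query" key. Spaces become "+", "?" becomes "%3F", etc.
body = urlencode({"query": query})
print(body)

# With the standard library only, the request itself would look like:
#   import urllib.request
#   urllib.request.urlopen(urllib.request.Request(
#       "http://localhost:8890/sparql",
#       data=body.encode(),
#       headers={"Accept": "application/sparql-results+xml"}))
```

This mirrors what `--data "query=..."` sends, except curl does not percent-encode the value for you; for queries with special characters, curl's --data-urlencode option performs the same encoding shown here.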
Re: [Virtuoso-users] queries and CPU
Hi Roland,

Did you try changing the ThreadsPerQuery setting in your virtuoso.ini file to the number of CPUs in your machine? That seems to help in my case, although some queries remain IO-bound on my desktop machine and I don't always get 100% CPU usage.

Regards,
Jerven

On 19/02/14 17:46, Roland Cornelissen wrote:
> Hi,
>
> I have a question about the usage of CPU cores when querying a large
> graph: is it possible to assign _all_ available cores to a SPARQL query?
> I noticed that when loading a large dataset all available cores in the CPU
> are used.
> When a SPARQL query is issued on this dataset, not all but only 1 core is
> used by VOS7.
> Since this is a working database for only me, I would like to have all
> cores working on my query.
> Is this possible?
>
> Thanks,
> Roland

--
---
Jerven Bolleman    jerven.bolle...@isb-sib.ch
SIB Swiss Institute of Bioinformatics
Tel: +41 (0)22 379 58 85
Fax: +41 (0)22 379 58 58
CMU, rue Michel Servet 1
1211 Geneve 4, Switzerland
www.isb-sib.ch - www.uniprot.org
Follow us at https://twitter.com/#!/uniprot
---
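For reference, ThreadsPerQuery lives in the [Parameters] section of virtuoso.ini. The values below are illustrative for an 8-core box, not recommendations, and AsyncQueueMaxThreads is shown alongside it as a related knob (an assumption on my part; check the Virtuoso documentation for your version):

```ini
[Parameters]
; Threads a single query may use for intra-query parallelism;
; setting this near the core count lets one query use the whole machine.
ThreadsPerQuery      = 8
; Pool of worker threads available to serve such parallel query fragments.
AsyncQueueMaxThreads = 16
```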
Re: [Virtuoso-users] Crash of virtuoso 7 stable <- caused by OOM killer
I will run virtuoso in valgrind --leak-check=yes and see what comes out.

Regards,
Jerven

On 17/02/14 15:32, Hugh Williams wrote:
> Hi Jerven,
>
> Ok, if you could provide a compilable Sesame or JDBC program
> demonstrating this issue then we can attempt the recreation with it, as I
> have just been executing it from the isql command line tool?
>
> Best Regards
> Hugh Williams
> Professional Services
> OpenLink Software, Inc. // http://www.openlinksw.com/
> Weblog -- http://www.openlinksw.com/blogs/
> LinkedIn -- http://www.linkedin.com/company/openlink-software/
> Twitter -- http://twitter.com/OpenLink
> Google+ -- http://plus.google.com/100570109519069333827/
> Facebook -- http://www.facebook.com/OpenLinkSoftware
> Universal Data Access, Integration, and Management Technology Providers
>
> On 17 Feb 2014, at 13:04, Jerven Bolleman <jerven.bolle...@isb-sib.ch> wrote:
>
>> Hi Hugh,
>>
>> It is run via Sesame. I will try running it directly as a prepared
>> statement via JDBC and see if that makes a difference.
>>
>> Regards,
>> Jerven
>>
>> On 17/02/14 14:02, Hugh Williams wrote:
>>> Hi Jerven,
>>>
>>> Not a waste of time as you still have an apparent issue ...
>>>
>>> Going back to the original issue, I am downloading the go.rdf.gz dataset
>>> to load and run the test SPARQL insert query you provided against the
>>> Virtuoso instance. Can you confirm if this query is executed via Sesame,
>>> or can it be run via isql to see the same problem?
>>>
>>> Best Regards
>>> Hugh Williams
>>> Professional Services
>>> OpenLink Software, Inc. // http://www.openlinksw.com/
>>> Weblog -- http://www.openlinksw.com/blogs/
>>> LinkedIn -- http://www.linkedin.com/company/openlink-software/
>>> Twitter -- http://twitter.com/OpenLink
>>> Google+ -- http://plus.google.com/100570109519069333827/
>>> Facebook -- http://www.facebook.com/OpenLinkSoftware
>>> Universal Data Access, Integration, and Management Technology Providers
>>>
>>> On 17 Feb 2014, at 08:53, Jerven Bolleman <jerven.bolle...@isb-sib.ch> wrote:
>>>
>>>> Hi Hugh,
>>>>
>>>> Sorry for wasting your time. I finally got the cause down.
>>>>
>>>> When I saw this for the first time in gdb:
>>>>
>>>> Program terminated with signal SIGKILL, Killed.
>>>> The program no longer exists.
>>>>
>>>> And this in dmesg:
>>>>
>>>> Out of memory: Kill process 14738 (virtuoso-t) score 763 or sacrifice child
>>>> Killed process 14738, UID 21059, (virtuoso-t) total-vm:16179784kB,
>>>> anon-rss:12910420kB, file-rss:152kB
>>>>
>>>> The total-vm size in that dmesg line is surprising: it is nearly 16 GB
>>>> of memory use.
>>>>
>>>> Yet NumberOfBuffers is 904897 (about 8 GB assuming 9KB pages) and
>>>> MaxDirtyBuffers 500000. The odd size of the NumberOfBuffers is due to
>>>> auto-sizing this setting in the conf from the managing process, as
>>>> virtuoso is basically running in an embedded/lite mode.
>>>>
>>>> The server is in O_DIRECT mode (set to 1).
>>>>
>>>> What can I set so that this does not happen? I.e. why is the memory
>>>> consumption so large during this SPARQL update? Is it because of my max
>>>> result size?
>>>>
>>>> Regards,
>>>> Jerven
>>>>
>>>> On 13/02/14 20:57, Jerven Bolleman wrote:
>>>>> Hi Hugh,
>>>>>
>>>>> Yes, that should be the file (all those are the same).
>>>>> I switched back to the stable branch because the develop/7 branch
>>>>> would not build cleanly when I tried.
>>>>> This happened before and you guys normally fix it, and I was not in a
>>>>> hurry.
>>>>> I finally have the debug build working and ulimit set etc… so I think
>>>>> I will send the backtrace tomorrow.
>>>>>
>>>>> Regards,
>>>>> Jerven
>>>>>
>>>>> On 13 Feb 2014, at 17:52, Hugh Williams <hwilli...@openlinksw.com> wrote:
>>>>>
>>>>>> Hi Jerven,
Re: [Virtuoso-users] Crash of virtuoso 7 stable <- caused by OOM killer
Hi Hugh,

Running it as a preparedStatement does change the outcome; I see these again:

14:12:30 * Monitor: Low query memory limit, try to increase MaxQueryMem

Again, while running, I had a look at status(); and it gives a very long list of:

16384: IER 8
80: ISR NO OWNER
76: ISR NO OWNER
72: ISR NO OWNER
68: ISR NO OWNER
64: ISR NO OWNER
60: ISR NO OWNER
56: ISR NO OWNER
52: ISR NO OWNER
48: ISR NO OWNER
44: ISR NO OWNER
40: ISR NO OWNER
36: ISR NO OWNER
32: ISR NO OWNER
28: ISR NO OWNER
24: ISR NO OWNER
20: ISR NO OWNER
etc.. etc..

Yet in the end, using a prepared statement or not does not make a difference. Also using isql:

SQL> sparql PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>
INSERT { GRAPH <http://beta.sparql.uniprot.org/go/> {
  ?sub rdfs:subClassOf ?super}}
WHERE {
  SELECT DISTINCT ?super ?sub
  WHERE {GRAPH <http://beta.sparql.uniprot.org/go/> {
    ?sub rdfs:subClassOf [] .
    ?sub rdfs:subClassOf/rdfs:subClassOf+ ?super .
    [] rdfs:subClassOf ?super . }
  }
}
Type the rest of statement, end with a semicolon (;)>
Type the rest of statement, end with a semicolon (;)>
Type the rest of statement, end with a semicolon (;)>
Type the rest of statement, end with a semicolon (;)>
Type the rest of statement, end with a semicolon (;)>
Type the rest of statement, end with a semicolon (;)>
Type the rest of statement, end with a semicolon (;)>
Type the rest of statement, end with a semicolon (;)>
Type the rest of statement, end with a semicolon (;)>
Type the rest of statement, end with a semicolon (;)> ;

*** Error 08S01: [Virtuoso Driver]CL065: Lost connection to server in lines 2-12 of Top-Level:
#line 2 "(console)"
sparql PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#> INSERT { GRAPH <http://beta.sparql.uniprot.org/go/> { ?sub rdfs:subClassOf ?super}} WHERE { SELECT DISTINCT ?super ?sub WHERE {GRAPH <http://beta.sparql.uniprot.org/go/> { ?sub rdfs:subClassOf [] . ?sub rdfs:subClassOf/rdfs:subClassOf+ ?super . [] rdfs:subClassOf ?super . } } }

Regards,
Jerven

On 17/02/14 14:04, Jerven Bolleman wrote:
> Hi Hugh,
>
> It is run via Sesame. I will try running it directly as a prepared
> statement via JDBC and see if that makes a difference.
>
> Regards,
> Jerven
>
> On 17/02/14 14:02, Hugh Williams wrote:
>> Hi Jerven,
>>
>> Not a waste of time as you still have an apparent issue ...
>>
>> Going back to the original issue, I am downloading the go.rdf.gz dataset
>> to load and run the test SPARQL insert query you provided against the
>> Virtuoso instance. Can you confirm if this query is executed via Sesame,
>> or can it be run via isql to see the same problem?
>>
>> Best Regards
>> Hugh Williams
>> Professional Services
>> OpenLink Software, Inc. // http://www.openlinksw.com/
>> Weblog -- http://www.openlinksw.com/blogs/
>> LinkedIn -- http://www.linkedin.com/company/openlink-software/
>> Twitter -- http://twitter.com/OpenLink
>> Google+ -- http://plus.google.com/100570109519069333827/
>> Facebook -- http://www.facebook.com/OpenLinkSoftware
>> Universal Data Access, Integration, and Management Technology Providers
>>
>> On 17 Feb 2014, at 08:53, Jerven Bolleman <jerven.bolle...@isb-sib.ch> wrote:
>>
>>> Hi Hugh,
>>>
>>> Sorry for wasting your time. I finally got the cause down.
>>>
>>> When I saw this for the first time in gdb:
>>>
>>> Program terminated with signal SIGKILL, Killed.
>>> The program no longer exists.
>>>
>>> And this in dmesg:
>>>
>>> Out of memory: Kill process 14738 (virtuoso-t) score 763 or sacrifice child
>>> Killed process 14738, UID 21059, (virtuoso-t) total-vm:16179784kB,
>>> anon-rss:12910420kB, file-rss:152kB
>>>
>>> The total-vm size in that dmesg line is surprising: it is nearly 16 GB
>>> of memory use.
>>>
>>> Yet NumberOfBuffers is 904897 (about 8 GB assuming 9KB pages) and
>>> MaxDirtyBuffers 500000. The odd size of the NumberOfBuffers is due to
>>> auto-sizing this setting in the conf from the managing process, as
>>> virtuoso is basically running in an embedded/lite mode.
>>>
>>> The server is
Re: [Virtuoso-users] Crash of virtuoso 7 stable <- caused by OOM killer
Hi Hugh,

It is run via Sesame. I will try running it directly as a prepared statement via JDBC and see if that makes a difference.

Regards,
Jerven

On 17/02/14 14:02, Hugh Williams wrote:
> Hi Jerven,
>
> Not a waste of time as you still have an apparent issue ...
>
> Going back to the original issue, I am downloading the go.rdf.gz dataset
> to load and run the test SPARQL insert query you provided against the
> Virtuoso instance. Can you confirm if this query is executed via Sesame,
> or can it be run via isql to see the same problem?
>
> Best Regards
> Hugh Williams
> Professional Services
> OpenLink Software, Inc. // http://www.openlinksw.com/
> Weblog -- http://www.openlinksw.com/blogs/
> LinkedIn -- http://www.linkedin.com/company/openlink-software/
> Twitter -- http://twitter.com/OpenLink
> Google+ -- http://plus.google.com/100570109519069333827/
> Facebook -- http://www.facebook.com/OpenLinkSoftware
> Universal Data Access, Integration, and Management Technology Providers
>
> On 17 Feb 2014, at 08:53, Jerven Bolleman <jerven.bolle...@isb-sib.ch> wrote:
>
>> Hi Hugh,
>>
>> Sorry for wasting your time. I finally got the cause down.
>>
>> When I saw this for the first time in gdb:
>>
>> Program terminated with signal SIGKILL, Killed.
>> The program no longer exists.
>>
>> And this in dmesg:
>>
>> Out of memory: Kill process 14738 (virtuoso-t) score 763 or sacrifice child
>> Killed process 14738, UID 21059, (virtuoso-t) total-vm:16179784kB,
>> anon-rss:12910420kB, file-rss:152kB
>>
>> The total-vm size in that dmesg line is surprising: it is nearly 16 GB
>> of memory use.
>>
>> Yet NumberOfBuffers is 904897 (about 8 GB assuming 9KB pages) and
>> MaxDirtyBuffers 500000. The odd size of the NumberOfBuffers is due to
>> auto-sizing this setting in the conf from the managing process, as
>> virtuoso is basically running in an embedded/lite mode.
>>
>> The server is in O_DIRECT mode (set to 1).
>>
>> What can I set so that this does not happen? I.e. why is the memory
>> consumption so large during this SPARQL update? Is it because of my max
>> result size?
>>
>> Regards,
>> Jerven
>>
>> On 13/02/14 20:57, Jerven Bolleman wrote:
>>> Hi Hugh,
>>>
>>> Yes, that should be the file (all those are the same).
>>> I switched back to the stable branch because the develop/7 branch
>>> would not build cleanly when I tried.
>>> This happened before and you guys normally fix it, and I was not in a
>>> hurry.
>>> I finally have the debug build working and ulimit set etc… so I think
>>> I will send the backtrace tomorrow.
>>>
>>> Regards,
>>> Jerven
>>>
>>> On 13 Feb 2014, at 17:52, Hugh Williams <hwilli...@openlinksw.com> wrote:
>>>
>>>> Hi Jerven,
>>>>
>>>> From our point of view there is no real value in building from the
>>>> stable/7 branch from July last year, as it is about to be superseded
>>>> in the new VOS 7 release by develop/7 when finalised, which is what
>>>> we would be more interested in knowing of any problem being
>>>> encountered using.
>>>>
>>>> In your email below you mention loading the go.rdf.gz uniprot
>>>> dataset; do you have a URL to what you loaded, or is it one of:
>>>>
>>>> ftp://ftp.expasy.org/databases/uniprot/current_release/rdf/go.rdf.gz
>>>> ftp://ftp.ebi.ac.uk/pub/databases/uniprot/current_release/rdf/go.rdf.gz
>>>> ftp://ftp.uniprot.org/pub/databases/uniprot/current_release/rdf/go.rdf.gz
>>>>
>>>> so I can load here to try to recreate your problem with develop/7?
>>>>
>>>> Best Regards
>>>> Hugh Williams
>>>> Professional Services
>>>> OpenLink Software, Inc. // http://www.openlinksw.com/
>>>> Weblog -- http://www.openlinksw.com/blogs/
>>>> LinkedIn -- http://www.linkedin.com/company/openlink-software/
>>>> Twitter -- http://twitter.com/OpenLink
>>>> Google+ -- http://plus.google.com/100570109519069333827/
>>>> Facebook -- http://www.facebook.com/OpenLinkSoftware
>>>> Universal Data Access, Integration, and Management Technology Providers
>>>>
>>>> On 13 Feb 2014, at 10:14, Jerven Bol
Re: [Virtuoso-users] Crash of virtuoso 7 stable <- caused by OOM killer
Hi Hugh,

Sorry for wasting your time. I finally got the cause down.

When I saw this for the first time in gdb:

Program terminated with signal SIGKILL, Killed.
The program no longer exists.

And this in dmesg:

Out of memory: Kill process 14738 (virtuoso-t) score 763 or sacrifice child
Killed process 14738, UID 21059, (virtuoso-t) total-vm:16179784kB, anon-rss:12910420kB, file-rss:152kB

The total-vm size in that dmesg line is surprising: it is nearly 16 GB of memory use.

Yet NumberOfBuffers is 904897 (about 8 GB assuming 9KB pages) and MaxDirtyBuffers 500000. The odd size of the NumberOfBuffers is due to auto-sizing this setting in the conf from the managing process, as virtuoso is basically running in an embedded/lite mode.

The server is in O_DIRECT mode (set to 1).

What can I set so that this does not happen? I.e. why is the memory consumption so large during this SPARQL update? Is it because of my max result size?

Regards,
Jerven

On 13/02/14 20:57, Jerven Bolleman wrote:
> Hi Hugh,
>
> Yes, that should be the file (all those are the same).
> I switched back to the stable branch because the develop/7 branch
> would not build cleanly when I tried.
> This happened before and you guys normally fix it, and I was not in a hurry.
> I finally have the debug build working and ulimit set etc… so I think
> I will send the backtrace tomorrow.
>
> Regards,
> Jerven
>
> On 13 Feb 2014, at 17:52, Hugh Williams wrote:
>
>> Hi Jerven,
>>
>> From our point of view there is no real value in building from the
>> stable/7 branch from July last year, as it is about to be superseded
>> in the new VOS 7 release by develop/7 when finalised, which is what
>> we would be more interested in knowing of any problem being
>> encountered using.
>>
>> In your email below you mention loading the go.rdf.gz uniprot
>> dataset; do you have a URL to what you loaded, or is it one of:
>>
>> ftp://ftp.expasy.org/databases/uniprot/current_release/rdf/go.rdf.gz
>> ftp://ftp.ebi.ac.uk/pub/databases/uniprot/current_release/rdf/go.rdf.gz
>> ftp://ftp.uniprot.org/pub/databases/uniprot/current_release/rdf/go.rdf.gz
>>
>> so I can load here to try to recreate your problem with develop/7?
>>
>> Best Regards
>> Hugh Williams
>> Professional Services
>> OpenLink Software, Inc. // http://www.openlinksw.com/
>> Weblog -- http://www.openlinksw.com/blogs/
>> LinkedIn -- http://www.linkedin.com/company/openlink-software/
>> Twitter -- http://twitter.com/OpenLink
>> Google+ -- http://plus.google.com/100570109519069333827/
>> Facebook -- http://www.facebook.com/OpenLinkSoftware
>> Universal Data Access, Integration, and Management Technology Providers
>>
>> On 13 Feb 2014, at 10:14, Jerven Bolleman wrote:
>>
>>> That was premature, I just reran the experiment and now it crashed again :(
>>>
>>> Will try to see where that core dump went.
>>>
>>> Regards,
>>> Jerven
>>>
>>> On 13/02/14 09:25, Jerven Bolleman wrote:
>>>> Hi All,
>>>>
>>>> I have managed to build a debug build.
>>>> The funny thing is it now succeeds!
>>>> I do get this message once the next time I call a checkpoint:
>>>>
>>>> 18:02:22 suspect to miss a flush of L=100 in cpt, line 1008
>>>> 18:02:23 Buffer 0x7fc9f79516f8 occupied in cpt
>>>>
>>>> Regards,
>>>> Jerven
>>>>
>>>> On 11/02/14 16:06, Jerven Bolleman wrote:
>>>>> Hi Hugh,
>>>>>
>>>>> I don't understand how to build the debug release.
>>>>> I can't find the lines to patch, e.g.
>>>>>
>>>>> cd /scratch/virtuoso-opensource
>>>>> git pull; git status;
>>>>> grep CONFIGURE_ARGS Makefile
>>>>>
>>>>> remote: Counting objects: 298, done.
>>>>> remote: Compressing objects: 100% (298/298), done.
>>>>> remote: Total 298 (delta 174), reused 0 (delta 0)
>>>>> Receiving objects: 100% (298/298), 1.44 MiB | 974 KiB/s, done.
>>>>> Resolving deltas: 100% (174/174), done.
>>>>> From git://github.com/openlink/virtuoso-opensource
>>>>>    74ed20c..0b71fb6  develop/7 -> origin/develop/7
>>>>> Already up-to-date.
>>>>> # On branch stable/7
>>>>> nothing to commit (working directory clean)
>>>>>
>>>>> As you can notice, I am now trying out the stable branch,
>>>>> because when I tried building the develop/7 branch yesterday I had a failure.
>>>>>
>>>>> I do have a better idea of what is going on to trigger the bug.
>>>>> First of all I have rewritten the update statement to look like this:
>>>>>
>>>>> PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>
>>>>> INSERT { GRAPH <http://beta.sparql.uniprot.org/taxonomy> {
>>>>>   ?sub rdfs:subClassOf ?super}}
>>>>> WHERE {
>>>>>   SELECT DISTINCT ?super ?sub
>>>>>   WHERE {GRAPH <http://beta.sparql.uniprot.org/taxonomy> {
>>>>>     ?sub rdfs:subClassOf [] .
>>>>>     ?sub rdfs:subClassOf/rdfs:subClassOf+ ?super .
>>>>>     [] rdfs:subClassOf ?super . }
>>>>>   }
>>>>> }
>>>>>
>>>>> Which runs in about 25 minutes for the taxonomy data I have been using,
>>>>> once the TransactionAfterImageLimit is raised (in my case to about 1 GB,
>>>>> or 10^9 bytes).
>>>>>
>>>>> Then I try loading the go.rdf.gz data into a named graph, which works
>>>>> fine, and run the nearly identical query:
>>>>>
>>>>> PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>
>>>>> INSERT { GRAPH <http://beta.sparql.uniprot.org/go> {
>>>>>   ?sub rdfs:subClassOf ?super}}
>>>>> WHERE {
>>>>>   SELECT DISTINCT ?super ?sub
>>>>>   WHERE {GRAPH <http://beta.sparq
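The memory mismatch described above can be checked with a quick back-of-the-envelope calculation. Figures are taken from the dmesg line and ini values in the thread; the ~9 KB in-memory page size is the assumption stated in the message itself:

```python
number_of_buffers = 904_897
bytes_per_buffer = 9 * 1024  # ~9 KB per in-memory page, per the message

buffer_pool_gib = number_of_buffers * bytes_per_buffer / 1024**3
total_vm_gib = 16_179_784 / 1024**2  # dmesg reports total-vm in kB

print(f"buffer pool ~{buffer_pool_gib:.1f} GiB, total-vm ~{total_vm_gib:.1f} GiB")
```

So roughly half of the ~15.4 GiB address space lies outside the ~7.8 GiB buffer pool, which is what makes the OOM kill surprising here.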
Re: [Virtuoso-users] Crash of virtuoso 7 stable + better bulk loading from api's
Hi Hugh,

Yes, that should be the file (all those are the same).
I switched back to the stable branch because the develop/7 branch would not build cleanly when I tried. This happened before and you guys normally fix it, and I was not in a hurry.
I finally have the debug build working and ulimit set etc… so I think I will send the backtrace tomorrow.

Regards,
Jerven

On 13 Feb 2014, at 17:52, Hugh Williams wrote:
> Hi Jerven,
>
> From our point of view there is no real value in building from the stable/7
> branch from July last year, as it is about to be superseded in the new VOS 7
> release by develop/7 when finalised, which is what we would be more
> interested in knowing of any problem being encountered using.
>
> In your email below you mention loading the go.rdf.gz uniprot dataset;
> do you have a URL to what you loaded, or is it one of:
>
> ftp://ftp.expasy.org/databases/uniprot/current_release/rdf/go.rdf.gz
> ftp://ftp.ebi.ac.uk/pub/databases/uniprot/current_release/rdf/go.rdf.gz
> ftp://ftp.uniprot.org/pub/databases/uniprot/current_release/rdf/go.rdf.gz
>
> so I can load here to try to recreate your problem with develop/7?
>
> Best Regards
> Hugh Williams
> Professional Services
> OpenLink Software, Inc. // http://www.openlinksw.com/
> Weblog -- http://www.openlinksw.com/blogs/
> LinkedIn -- http://www.linkedin.com/company/openlink-software/
> Twitter -- http://twitter.com/OpenLink
> Google+ -- http://plus.google.com/100570109519069333827/
> Facebook -- http://www.facebook.com/OpenLinkSoftware
> Universal Data Access, Integration, and Management Technology Providers
>
> On 13 Feb 2014, at 10:14, Jerven Bolleman wrote:
>
>> That was premature, I just reran the experiment and now it crashed again :(
>>
>> Will try to see where that core dump went.
>>
>> Regards,
>> Jerven
>>
>> On 13/02/14 09:25, Jerven Bolleman wrote:
>>> Hi All,
>>>
>>> I have managed to build a debug build.
>>> The funny thing is it now succeeds!
>>> I do get this message once the next time I call a checkpoint:
>>>
>>> 18:02:22 suspect to miss a flush of L=100 in cpt, line 1008
>>> 18:02:23 Buffer 0x7fc9f79516f8 occupied in cpt
>>>
>>> Regards,
>>> Jerven
>>>
>>> On 11/02/14 16:06, Jerven Bolleman wrote:
>>>> Hi Hugh,
>>>>
>>>> I don't understand how to build the debug release.
>>>> I can't find the lines to patch, e.g.
>>>>
>>>> cd /scratch/virtuoso-opensource
>>>> git pull; git status;
>>>> grep CONFIGURE_ARGS Makefile
>>>>
>>>> remote: Counting objects: 298, done.
>>>> remote: Compressing objects: 100% (298/298), done.
>>>> remote: Total 298 (delta 174), reused 0 (delta 0)
>>>> Receiving objects: 100% (298/298), 1.44 MiB | 974 KiB/s, done.
>>>> Resolving deltas: 100% (174/174), done.
>>>> From git://github.com/openlink/virtuoso-opensource
>>>>    74ed20c..0b71fb6  develop/7 -> origin/develop/7
>>>> Already up-to-date.
>>>> # On branch stable/7
>>>> nothing to commit (working directory clean)
>>>>
>>>> As you can notice, I am now trying out the stable branch,
>>>> because when I tried building the develop/7 branch yesterday I had a
>>>> failure.
>>>>
>>>> I do have a better idea of what is going on to trigger the bug.
>>>> First of all I have rewritten the update statement to look like this:
>>>>
>>>> PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>
>>>> INSERT { GRAPH <http://beta.sparql.uniprot.org/taxonomy> {
>>>>   ?sub rdfs:subClassOf ?super}}
>>>> WHERE {
>>>>   SELECT DISTINCT ?super ?sub
>>>>   WHERE {GRAPH <http://beta.sparql.uniprot.org/taxonomy> {
>>>>     ?sub rdfs:subClassOf [] .
>>>>     ?sub rdfs:subClassOf/rdfs:subClassOf+ ?super .
>>>>     [] rdfs:subClassOf ?super . }
>>>>   }
>>>> }
>>>>
>>>> Which runs in about 25 minutes for the taxonomy data I have been using,
>>>> once the TransactionAfterImageLimit is raised (in my case to about 1 GB,
>>>> or 10^9 bytes).
>>>>
>>>> Then I try loading the go.rdf.gz data into a named graph, which works
>>>> fine, and run the nearly identical query:
>>>>
>>>> PREFIX rdfs:<http://w
Re: [Virtuoso-users] Crash of virtuoso 7 stable + better bulk loading from api's
That was premature, I just reran the experiment and now it crashed again :(

Will try to see where that core dump went.

Regards,
Jerven

On 13/02/14 09:25, Jerven Bolleman wrote:
> Hi All,
>
> I have managed to build a debug build.
> The funny thing is it now succeeds!
> I do get this message once the next time I call a checkpoint:
>
> 18:02:22 suspect to miss a flush of L=100 in cpt, line 1008
> 18:02:23 Buffer 0x7fc9f79516f8 occupied in cpt
>
> Regards,
> Jerven
>
> On 11/02/14 16:06, Jerven Bolleman wrote:
>> Hi Hugh,
>>
>> I don't understand how to build the debug release.
>> I can't find the lines to patch, e.g.
>>
>> cd /scratch/virtuoso-opensource
>> git pull; git status;
>> grep CONFIGURE_ARGS Makefile
>>
>> remote: Counting objects: 298, done.
>> remote: Compressing objects: 100% (298/298), done.
>> remote: Total 298 (delta 174), reused 0 (delta 0)
>> Receiving objects: 100% (298/298), 1.44 MiB | 974 KiB/s, done.
>> Resolving deltas: 100% (174/174), done.
>> From git://github.com/openlink/virtuoso-opensource
>>    74ed20c..0b71fb6  develop/7 -> origin/develop/7
>> Already up-to-date.
>> # On branch stable/7
>> nothing to commit (working directory clean)
>>
>> As you can notice, I am now trying out the stable branch,
>> because when I tried building the develop/7 branch yesterday I had a
>> failure.
>>
>> I do have a better idea of what is going on to trigger the bug.
>> First of all I have rewritten the update statement to look like this:
>>
>> PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>
>> INSERT { GRAPH <http://beta.sparql.uniprot.org/taxonomy> {
>>   ?sub rdfs:subClassOf ?super}}
>> WHERE {
>>   SELECT DISTINCT ?super ?sub
>>   WHERE {GRAPH <http://beta.sparql.uniprot.org/taxonomy> {
>>     ?sub rdfs:subClassOf [] .
>>     ?sub rdfs:subClassOf/rdfs:subClassOf+ ?super .
>>     [] rdfs:subClassOf ?super . }
>>   }
>> }
>>
>> Which runs in about 25 minutes for the taxonomy data I have been using,
>> once the TransactionAfterImageLimit is raised (in my case to about 1 GB,
>> or 10^9 bytes).
>>
>> Then I try loading the go.rdf.gz data into a named graph, which works
>> fine, and run the nearly identical query:
>>
>> PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>
>> INSERT { GRAPH <http://beta.sparql.uniprot.org/go> {
>>   ?sub rdfs:subClassOf ?super}}
>> WHERE {
>>   SELECT DISTINCT ?super ?sub
>>   WHERE {GRAPH <http://beta.sparql.uniprot.org/go> {
>>     ?sub rdfs:subClassOf [] .
>>     ?sub rdfs:subClassOf/rdfs:subClassOf+ ?super .
>>     [] rdfs:subClassOf ?super . }
>>   }
>> }
>>
>> At this time the database starts to die again :(
>>
>> ?sub rdfs:subClassOf [] .
>> and
>> [] rdfs:subClassOf ?super
>> are there to ground ?sub and ?super, or I get an error from the query
>> engine; without those the query is valid SPARQL as well.
>> I think this is fixed in the develop branch.
>>
>> As you might have noticed from yesterday's message about the clob/blob
>> API for Virtuoso JDBC, I was investigating better ways of loading the
>> data again.
>>
>> Currently I build a Turtle string that is passed to the Virtuoso side,
>> where it is loaded via TTLP or TTLP_MT. However, it is sad that I go
>> from structured data to a string to be parsed again on the other side.
>> I was hoping that there would be a better way to generate a data
>> structure that does not require as much parsing by Virtuoso.
>> On large datasets this might not be a noticeable overhead, but on the
>> small datasets that I am now testing I notice that we are running out of
>> CPU power before IO/MEM.
>>
>> I was wondering if an API where we can pass 4 (or 5) arrays to a
>> prepared statement, instead of a string to parse, could really improve
>> the loading speed. E.g.:
>>
>> one array of subjects
>> one array of predicates
>> one array of objects
>> optionally one of datatypes
>> one array of graphs
>>
>> If this does not match your needs then another feature request is
>> better blob/clob support in Virtuoso.
>> Currently I can figure out how to create them but not how to actually
>> put data into them without being in a virtuoso package.
>>
>> Regards,
>> Jerven
>>
>> On 28/01/
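The current loading path described in the message (client serializes to a Turtle string, server re-parses it via TTLP()/TTLP_MT()) can be sketched as below. This is an illustrative helper, not UniProt's actual code; it assumes the subject, predicate, and object terms arrive already written as valid Turtle tokens:

```python
def triples_to_turtle(triples):
    """Serialize (subject, predicate, object) terms, already formatted as
    valid Turtle tokens (IRIs in <>, literals quoted), into one Turtle
    document. This string is what gets shipped to the server and parsed
    again by TTLP()/TTLP_MT(); the parallel-arrays API proposed in the
    message above would skip this serialize/parse round trip."""
    return "".join(f"{s} {p} {o} .\n" for s, p, o in triples)

doc = triples_to_turtle([
    ("<http://example.org/a>",
     "<http://www.w3.org/2000/01/rdf-schema#subClassOf>",
     "<http://example.org/b>"),
])
print(doc)
```

The proposed alternative would hand the server the three (or five, with datatypes and graphs) columns directly, so the serialization step and the server-side parse both disappear.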
Re: [Virtuoso-users] Crash of virtuoso 7 stable + better bulk loading from api's
Hi All,

I have managed to build a debug build.
The funny thing is it now succeeds!
I do get this message once the next time I call a checkpoint:

18:02:22 suspect to miss a flush of L=100 in cpt, line 1008
18:02:23 Buffer 0x7fc9f79516f8 occupied in cpt

Regards,
Jerven

On 11/02/14 16:06, Jerven Bolleman wrote:
> Hi Hugh,
>
> I don't understand how to build the debug release.
> I can't find the lines to patch, e.g.
>
> cd /scratch/virtuoso-opensource
> git pull; git status;
> grep CONFIGURE_ARGS Makefile
>
> remote: Counting objects: 298, done.
> remote: Compressing objects: 100% (298/298), done.
> remote: Total 298 (delta 174), reused 0 (delta 0)
> Receiving objects: 100% (298/298), 1.44 MiB | 974 KiB/s, done.
> Resolving deltas: 100% (174/174), done.
> From git://github.com/openlink/virtuoso-opensource
>    74ed20c..0b71fb6  develop/7 -> origin/develop/7
> Already up-to-date.
> # On branch stable/7
> nothing to commit (working directory clean)
>
> As you can notice, I am now trying out the stable branch,
> because when I tried building the develop/7 branch yesterday I had a
> failure.
>
> I do have a better idea of what is going on to trigger the bug.
> First of all I have rewritten the update statement to look like this:
>
> PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>
> INSERT { GRAPH <http://beta.sparql.uniprot.org/taxonomy> {
>   ?sub rdfs:subClassOf ?super}}
> WHERE {
>   SELECT DISTINCT ?super ?sub
>   WHERE {GRAPH <http://beta.sparql.uniprot.org/taxonomy> {
>     ?sub rdfs:subClassOf [] .
>     ?sub rdfs:subClassOf/rdfs:subClassOf+ ?super .
>     [] rdfs:subClassOf ?super . }
>   }
> }
>
> Which runs in about 25 minutes for the taxonomy data I have been using,
> once the TransactionAfterImageLimit is raised (in my case to about 1 GB,
> or 10^9 bytes).
>
> Then I try loading the go.rdf.gz data into a named graph, which works
> fine, and run the nearly identical query:
>
> PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>
> INSERT { GRAPH <http://beta.sparql.uniprot.org/go> {
>   ?sub rdfs:subClassOf ?super}}
> WHERE {
>   SELECT DISTINCT ?super ?sub
>   WHERE {GRAPH <http://beta.sparql.uniprot.org/go> {
>     ?sub rdfs:subClassOf [] .
>     ?sub rdfs:subClassOf/rdfs:subClassOf+ ?super .
>     [] rdfs:subClassOf ?super . }
>   }
> }
>
> At this time the database starts to die again :(
>
> ?sub rdfs:subClassOf [] .
> and
> [] rdfs:subClassOf ?super
> are there to ground ?sub and ?super, or I get an error from the query
> engine; without those the query is valid SPARQL as well.
> I think this is fixed in the develop branch.
>
> As you might have noticed from yesterday's message about the clob/blob
> API for Virtuoso JDBC, I was investigating better ways of loading the
> data again.
>
> Currently I build a Turtle string that is passed to the Virtuoso side,
> where it is loaded via TTLP or TTLP_MT. However, it is sad that I go
> from structured data to a string to be parsed again on the other side.
> I was hoping that there would be a better way to generate a data
> structure that does not require as much parsing by Virtuoso.
> On large datasets this might not be a noticeable overhead, but on the
> small datasets that I am now testing I notice that we are running out of
> CPU power before IO/MEM.
>
> I was wondering if an API where we can pass 4 (or 5) arrays to a
> prepared statement, instead of a string to parse, could really improve
> the loading speed. E.g.:
>
> one array of subjects
> one array of predicates
> one array of objects
> optionally one of datatypes
> one array of graphs
>
> If this does not match your needs then another feature request is
> better blob/clob support in Virtuoso.
> Currently I can figure out how to create them but not how to actually
> put data into them without being in a virtuoso package.
>
> Regards,
> Jerven
>
> On 28/01/14 06:03, Hugh Williams wrote:
>> Hi Jerven,
>>
>> I have run the query many times, but as said I keep getting the "SR325:
>> Transaction aborted because it's log after image size went above the
>> limit" the first time the query is run, and have to increase
>> the "TransactionAfterImageLimit" param in the INI file. Thus have you
>> encountered this error yourself, or do you even have
>> the "TransactionAfterImageLimit" param set for your instance? I am
>> wondering if this might be the cause of the crash you are seeing on your
>> instance, and perhaps the pa
Re: [Virtuoso-users] Crash of virtuoso 7 stable + better bulk loading from api's
Hi Hugh,

I don't understand how to build the debug release. I can't find the lines to patch, e.g.

cd /scratch/virtuoso-opensource
git pull; git status;
grep CONFIGURE_ARGS Makefile

remote: Counting objects: 298, done.
remote: Compressing objects: 100% (298/298), done.
remote: Total 298 (delta 174), reused 0 (delta 0)
Receiving objects: 100% (298/298), 1.44 MiB | 974 KiB/s, done.
Resolving deltas: 100% (174/174), done.
From git://github.com/openlink/virtuoso-opensource
   74ed20c..0b71fb6  develop/7 -> origin/develop/7
Already up-to-date.
# On branch stable/7
nothing to commit (working directory clean)

As you can see, I am now trying out the stable branch, because when I tried building the develop/7 branch yesterday I had a failure.

I do have a better idea of what is going on to trigger the bug. First of all, I have rewritten the update statement to look like this:

PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>
INSERT { GRAPH <http://beta.sparql.uniprot.org/taxonomy> {
    ?sub rdfs:subClassOf ?super}}
WHERE {
  SELECT DISTINCT ?super ?sub
  WHERE { GRAPH <http://beta.sparql.uniprot.org/taxonomy> {
    ?sub rdfs:subClassOf [] .
    ?sub rdfs:subClassOf/rdfs:subClassOf+ ?super .
    [] rdfs:subClassOf ?super . }
  }
}

This runs in about 25 minutes for the taxonomy data I have been using, once the TransactionAfterImageLimit is raised (in my case to about 1GB, i.e. 10^9 bytes).

Then I load the go.rdf.gz data into a named graph, which works fine, and run the nearly identical query:

PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>
INSERT { GRAPH <http://beta.sparql.uniprot.org/go> {
    ?sub rdfs:subClassOf ?super}}
WHERE {
  SELECT DISTINCT ?super ?sub
  WHERE { GRAPH <http://beta.sparql.uniprot.org/go> {
    ?sub rdfs:subClassOf [] .
    ?sub rdfs:subClassOf/rdfs:subClassOf+ ?super .
    [] rdfs:subClassOf ?super . }
  }
}

At this point the database starts to die again :(

The patterns
?sub rdfs:subClassOf [] .
and
[] rdfs:subClassOf ?super
are there to ground ?sub and ?super, otherwise I get an error from the query engine; without them the query is still valid SPARQL. I think this is fixed in the develop branch.

As you might have noticed from yesterday's message about the clob/blob API for the Virtuoso JDBC driver, I was investigating better ways of loading the data again.

Currently I build a Turtle string that is passed to the Virtuoso side, where it is loaded via TTLP or TTLP_MT. However, it is a pity that I go from structured data to a string that has to be parsed again on the other side. I was hoping that there would be a better way to generate a data structure that does not require as much parsing by Virtuoso. On large datasets this might not be a noticeable overhead, but on the small datasets that I am now testing I notice that we run out of CPU power before IO/MEM.

I was wondering if an API where we can pass 4 (or 5) arrays to a prepared statement, instead of a string to parse, could really improve the loading speed, e.g.:

one array of subjects
one array of predicates
one array of objects
optionally one of datatypes
one array of graphs

If this does not match your needs, then another feature request is better blob/clob support in Virtuoso. Currently I can figure out how to create them, but not how to actually put data into them without being in a Virtuoso package.

Regards,
Jerven

On 28/01/14 06:03, Hugh Williams wrote:
> Hi Jerven,
>
> I have run the query many times, but as said I keep getting the "SR325:
> Transaction aborted because it's log after image size went above the
> limit" first time the query is run and have to increase
> the "TransactionAfterImageLimit" param in the INI file. Thus have you
> encountered this error yourself or do you even have
> the "TransactionAfterImageLimit" param set for your instance as I am
> wondering if this might be the cause of the crash you are seeing on your
> instance and perhaps the param needs to be increased for your instance ?
> > Best Regards > Hugh Williams > Professional Services > OpenLink Software, Inc. // http://www.openlinksw.com/ > Weblog -- http://www.openlinksw.com/blogs/ > LinkedIn -- http://www.linkedin.com/company/openlink-software/ > Twitter -- http://twitter.com/OpenLink > Google+ -- http://plus.google.com/100570109519069333827/ > Facebook -- http://www.facebook.com/OpenLinkSoftware > Universal Data Access, Integration, and Management Technology Providers > > On 27 Jan 2014, at 13:52, Jerven Bolleman <mailto:jerven.bolle...@isb-sib.ch>> wrote: > >> Hi Hugh, >> >> Sorry I won’t have time to get a memory dump for at least two weeks. >> However, I must have explained something badl
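The array-based bulk-loading API proposed above does not exist in the Virtuoso JDBC driver; the sketch below only illustrates the client-side shape such an API could take. The class and method names (QuadBatcher, addQuads) are invented for this example; a real driver would flush the queued rows through a prepared statement instead of serialising them to a Turtle string.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of the hypothetical array-based loading API proposed above:
 * instead of serialising triples to Turtle and re-parsing them on the
 * server, the client hands over parallel arrays of subjects, predicates,
 * objects and graphs.
 */
public class QuadBatcher {
    private final List<String[]> rows = new ArrayList<>();

    /** Queue one batch of quads given as parallel arrays. */
    public void addQuads(String[] s, String[] p, String[] o, String[] g) {
        if (s.length != p.length || s.length != o.length || s.length != g.length)
            throw new IllegalArgumentException("parallel arrays must have equal length");
        for (int i = 0; i < s.length; i++)
            rows.add(new String[] { s[i], p[i], o[i], g[i] });
    }

    /** Number of quads queued; a real driver would now ship these columns. */
    public int size() { return rows.size(); }

    public static void main(String[] args) {
        QuadBatcher b = new QuadBatcher();
        b.addQuads(
            new String[] { "http://example.org/s1", "http://example.org/s2" },
            new String[] { "http://www.w3.org/2000/01/rdf-schema#subClassOf",
                           "http://www.w3.org/2000/01/rdf-schema#subClassOf" },
            new String[] { "http://example.org/o1", "http://example.org/o2" },
            new String[] { "http://example.org/g", "http://example.org/g" });
        System.out.println(b.size()); // prints 2
    }
}
```

The point of the parallel-array form is that the columns could be shipped to the server without any Turtle parse on the other side.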
[Virtuoso-users] JDBC sqlexception thrown for not implemented methods instead of SQLFeatureNotSupportedException
Dear Virtuoso devs,

A small thing in the JDBC drivers shipping with Virtuoso: when a method is not supported by the JDBC driver, you often throw an exception like this:

public int setString(long pos, String str, int offset, int len) throws SQLException
{
    throw new VirtuosoException ("Not implemented function", VirtuosoException.NOTIMPLEMENTED);
}

e.g. see
https://github.com/openlink/virtuoso-opensource/blob/develop/7/libsrc/JDBCDriverType4/virtuoso/jdbc2/VirtuosoBlob.java#L785

Instead, such methods could throw a VirtuosoFNSException extending SQLFeatureNotSupportedException. A small string replacement in the code of VirtuosoBlob would fix most of these very small annoyances. It's nothing serious, and it is only a small piece of work.

Regards,
Jerven

--
---
Jerven Bolleman jerven.bolle...@isb-sib.ch
SIB Swiss Institute of Bioinformatics Tel: +41 (0)22 379 58 85
CMU, rue Michel Servet 1 Fax: +41 (0)22 379 58 58
1211 Geneve 4, Switzerland www.isb-sib.ch - www.uniprot.org
Follow us at https://twitter.com/#!/uniprot
---

___
Virtuoso-users mailing list
Virtuoso-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/virtuoso-users
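A minimal sketch of the suggested change: java.sql.SQLFeatureNotSupportedException (standard since JDBC 4.0) is itself a subclass of SQLException, so throwing it, or a VirtuosoFNSException derived from it, stays source-compatible with the existing "throws SQLException" signatures while letting callers distinguish "not supported" from a real failure. The demo class below is illustrative, not driver code.

```java
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;

/**
 * Illustrative only: an unimplemented JDBC method throwing the standard
 * SQLFeatureNotSupportedException instead of a generic SQLException, so
 * callers can catch the specific subclass.
 */
public class NotSupportedDemo {
    // Mirrors the VirtuosoBlob#setString signature quoted in the mail.
    public int setString(long pos, String str, int offset, int len) throws SQLException {
        throw new SQLFeatureNotSupportedException("setString not implemented");
    }

    public static void main(String[] args) {
        try {
            new NotSupportedDemo().setString(1L, "x", 0, 1);
        } catch (SQLFeatureNotSupportedException e) {
            // Callers can now react specifically to "feature not supported".
            System.out.println("feature-not-supported");
        } catch (SQLException e) {
            System.out.println("generic-sql-exception");
        }
    }
}
```

Because the new exception still is-a SQLException, existing catch blocks keep working unchanged.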
Re: [Virtuoso-users] Crash of virtuoso 7 dev after "Low query memory limit" message
; 3. To CONFIGURE_ENV prepend CC='cc -g’ > > 4. Then do "make clean all deinstall reinstall” to build a new debug > unstripped binary (virtuoso-t) > > 5. Start database with this new binary and force the crash condition again to > generate a new core file > > 6. Use gdb to load core file > > gdb virtuoso-t core > > 7. At the prompt, type "bt" or “backtrace” to backtrace through stack and > provide the output when top of stack is reached. > > Best Regards > Hugh Williams > Professional Services > OpenLink Software, Inc. // http://www.openlinksw.com/ > Weblog -- http://www.openlinksw.com/blogs/ > LinkedIn -- http://www.linkedin.com/company/openlink-software/ > Twitter -- http://twitter.com/OpenLink > Google+ -- http://plus.google.com/100570109519069333827/ > Facebook -- http://www.facebook.com/OpenLinkSoftware > Universal Data Access, Integration, and Management Technology Providers > > On 22 Jan 2014, at 11:08, Jerven Bolleman wrote: > >> Dear Virtuoso devs, >> >> On commit 2701d3f242a630562302471d168d20fec5ed2805 of the develop/7 branch. >> >> I load the file >> ftp://ftp.uniprot.org/pub/databases/uniprot/current_release/rdf/taxonomy.rdf.gz >> into the graph named >> http://beta.sparql.uniprot.org/taxonomy/. >> >> Then checkpoint. Then I the run the SPARUL query >> >> PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#> >> INSERT { GRAPH <http://beta.sparql.uniprot.org/taxonomy/> { >> ?sub rdfs:subClassOf ?super}} >> WHERE { GRAPH <http://beta.sparql.uniprot.org/taxonomy/> { >> ?sub rdfs:subClassOf/rdfs:subClassOf >> ?super . >> MINUS { ?sub rdfs:subClassOf ?super .}}} >> >> I get a message >> 17:27:41 * Monitor: Low query memory limit, try to increase MaxQueryMem >> Then I again run the same query >> PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#> >> INSERT { GRAPH <http://beta.sparql.uniprot.org/taxonomy/> { >> ?sub rdfs:subClassOf ?super}} >> WHERE { GRAPH <http://beta.sparql.uniprot.org/taxonomy/> { >> ?sub rdfs:subClassOf/rdfs:subClassOf >> ?super . 
>> MINUS { ?sub rdfs:subClassOf ?super .}}} >> >> And virtuoso crashes without leaving a trace in the logs. >> >> The query log shows the first execution but not the second. >> And that looks like this >> >> Precode: >> 0: { FOR UPDATE >> time -nan% fanout 1 input 1 rows >> time -nan% fanout 1 input 1 rows >> >> Precode: >> 0: vector := Call vector ( 1 , 0 , 3 , ##subClassOf , 1 , 1 ) >> 5: vector := Call vector (vector) >> 10: vector := Call vector () >> 15: __bft := Call __bft (http://beta.sparql.uniprot.org/taxonomy/>, >> 1 ) >> 20: BReturn 0 >> { fork >> time -nan% fanout 1.09875e+06 input 1 rows >> RDF_QUAD_POGS 2.4e+03 rows(s_6_4_t1.O, s_6_4_t1.S) >> inlined P = ##subClassOf G = #/ >> time -nan% fanout 0.65 input 1.09875e+06 rows >> RDF_QUAD 1 rows(s_6_7_t2.O) >> inlined P = ##subClassOf , S = k_s_6_4_t1.O G = #/ >> time -nan% fanout 0 input 1.09871e+06 rows >> END Node >> After test: >> 0: if ({ >> time -nan% fanout 1 input 1.09871e+06 rows >> time -nan% fanout 0 input 1.09871e+06 rows >> RDF_QUAD_POGS unq0.0011 rows () >> P = ##subClassOf , O = k_s_6_7_t2.O , S = k_s_6_4_t1.S , G = #/ >> time -nan% fanout 0 input 0 rows >> Subquery Select( ) >> } >> ) then 5 else 4 unkn 5 >> 4: BReturn 1 >> 5: BReturn 0 >> >> After code: >> 0: __ro2lo := Call __ro2lo (s_6_7_t2.O) >> 5: vector := Call vector (s_6_4_t1.S, __ro2lo) >> 10: if ($63 "user_aggr_notfirst" = 1 ) then 25 else 14 unkn 14 >> 14: $63 "user_aggr_notfirst" := := artm 1 >> 18: user_aggr_ret := Call DB.DBA.SPARQL_CONSTRUCT_INIT ($64 >> "user_aggr_env") >> 25: user_aggr_ret := Call DB.DBA.SPARQL_CONSTRUCT_ACC ($64 >> "user_aggr_env", vector, vector, vector, 0 ) >> 32: BReturn 0 >> } >> time -nan% fanout 1 input 1 rows >> skip node 1 set_ctr >> >> After code: >> 0: DB.DBA.SPARQL_CONSTRUCT_FIN := Call DB.DBA.SPARQL_CONSTR
[Virtuoso-users] Crash of virtuoso 7 dev after "Low query memory limit" message
END Node }

I would expect the next query to show up here, but it did not. The change to the checkpoint interval and the logging status show that the two statements before the SPARUL succeeded.

The exception caught from the java side is
Exception:virtuoso.jdbc4.VirtuosoException: Virtuoso Communications Link Failure (timeout) : Connection to the server lost
at virtuoso.sesame2.driver.VirtuosoRepositoryConnection.executeSPARUL(Unknown Source)
at virtuoso.sesame2.driver.VirtuosoRepositoryConnection$4.execute(Unknown Source)

Is there a way I can change the error log level to be more verbose/helpful?

Regards,
Jerven

--
---
Jerven Bolleman jerven.bolle...@isb-sib.ch
SIB Swiss Institute of Bioinformatics Tel: +41 (0)22 379 58 85
CMU, rue Michel Servet 1 Fax: +41 (0)22 379 58 58
1211 Geneve 4, Switzerland www.isb-sib.ch - www.uniprot.org
Follow us at https://twitter.com/#!/uniprot
---

;
;  virtuoso.ini
;
;  Database setup
;
[Database]
DatabaseFile = /home/jbollema/git/expasy4j-sparql/./data/virtuoso/virtuoso.db
ErrorLogFile = /home/jbollema/git/expasy4j-sparql/./data/virtuoso/virtuoso.log
LockFile = /home/jbollema/git/expasy4j-sparql/./data/virtuoso/virtuoso.lck
TransactionFile = /home/jbollema/git/expasy4j-sparql/./data/virtuoso/virtuoso.trx
xa_persistent_file = /home/jbollema/git/expasy4j-sparql/./data/virtuoso/virtuoso.pxa
ErrorLogLevel = 7
FileExtend = 2
MaxCheckpointRemap = 2
Striping = 0
TempStorage = TempDatabase

[TempDatabase]
DatabaseFile = /home/jbollema/git/expasy4j-sparql/./data/virtuoso/virtuoso-temp.db
TransactionFile = /home/jbollema/git/expasy4j-sparql/./data/virtuoso/virtuoso-temp.trx
MaxCheckpointRemap = 2000
Striping = 0

;
;  Server parameters
;
[Parameters]
ServerPort = 1112
LiteMode = 1
DisableUnixSocket = 0
DisableTcpSocket = 0
ServerThreads = 20
CheckpointInterval = 60
O_DIRECT = 1
CaseMode = 2
MaxStaticCursorRows = 5000
CheckpointAuditTrail = 0
AllowOSCalls = 0
SchedulerInterval = 10
DirsAllowed = ., /scratch/virtuoso//share/virtuoso/vad
ThreadCleanupInterval = 0
ThreadThreshold = 10
ResourcesCleanupInterval = 0
FreeTextBatchSize = 10
SingleCPU = 0
VADInstallDir = /scratch/virtuoso//share/virtuoso/vad/
PrefixResultNames = 0
RdfFreeTextRulesSize = 100
IndexTreeMaps = 256
MaxMemPoolSize = 0
PrefixResultNames = 0
MacSpotlight = 0
IndexTreeMaps = 64
TransactionAfterImageLimit = 5
ColumnStore = 1
QueryLog = /tmp/virtuoso-query.log
NumberOfBuffers = 68
MaxDirtyBuffers = 50
MaxQueryMem = 200M
VectorSize = 1
MaxVectorSize = 100
AdjustVectorSize = 1

[AutoRepair]
BadParentLinks = 0

[Client]
SQL_PREFETCH_ROWS = 100
SQL_PREFETCH_BYTES = 16000
SQL_QUERY_TIMEOUT = 0
SQL_TXN_TIMEOUT = 0

[VDB]
ArrayOptimization = 0
NumArrayParameters = 10
VDBDisconnectTimeout = 1000
KeepConnectionOnFixedThread = 0

[Replication]
ServerName = db-LIN-072
ServerEnable = 1
QueueMax = 5

;
;  Striping setup
;
;  These parameters have only effect when Striping is set to 1 in the
;  [Database] section, in which case the DatabaseFile parameter is ignored.
;
;  With striping, the database is spawned across multiple segments
;  where each segment can have multiple stripes.
;
;  Format of the lines below:
;    Segment = , [, .. ]
;
;  must be ordered from 1 up.
;
;  The is the total size of the segment which is equally divided
;  across all stripes forming the segment. Its specification can be in
;  gigabytes (g), megabytes (m), kilobytes (k) or in database blocks
;  (b, the default)
;
;  Note that the segment size must be a multiple of the database page size
;  which is currently 8k. Also, the segment size must be divisible by the
;  number of stripe files forming the segment.
;
;  The example below creates a 200 meg database striped on two segments
;  with two stripes of 50 meg and one of 100 meg.
;
;  You can always add m
Re: [Virtuoso-users] Support for reading from named pipes.
Hi Kingsley,

IMHO, I would put the effort into further SPARQL 1.1 support. (The main reason we are not using Virtuoso in production.)

Regards,
Jerven

On 21/01/14 03:25, Kingsley Idehen wrote:
> On 1/20/14 3:03 PM, Jerven Bolleman wrote:
>> Hi Hugh,
>>
>> No not really, just trying another knob to see if things could go faster…
>> From the perspective of the data warehousing set up we would use,
>> directly loading triples from a ssh & gunzip pipe into virtuoso
>> would be nice (as our data production and data hosting are in
>> different data centres).
>> But not something you should prioritise IMHO (it’s nice to have)
>>
>> Regards,
>> Jerven
>
> I don't mind having it implemented during this window of opportunity, so
> now is the time to spec what you seek :-)
>
> Kingsley
>> On 20 Jan 2014, at 19:17, Hugh Williams wrote:
>>
>>> Hi Jerven,
>>>
>>> In speaking to development the Virtuoso "file_io..." function were
>>> written for use with regular files and thus do not support named
>>> pipes. Do you have a specific need for this support for data loads ?
>>>
>>> Best Regards
>>> Hugh Williams
>>> Professional Services
>>> OpenLink Software, Inc. // http://www.openlinksw.com/
>>> Weblog -- http://www.openlinksw.com/blogs/
>>> LinkedIn -- http://www.linkedin.com/company/openlink-software/
>>> Twitter -- http://twitter.com/OpenLink
>>> Google+ -- http://plus.google.com/100570109519069333827/
>>> Facebook -- http://www.facebook.com/OpenLinkSoftware
>>> Universal Data Access, Integration, and Management Technology Providers
>>>
>>> On 17 Jan 2014, at 14:08, Jerven Bolleman
>>> wrote:
>>>
>>>> Hi Hugh,
>>>>
>>>> The virtuoso server log does not show any error messages.
>>>> The version is Version 7.0.1-dev.3207-pthreads as of Jan 16 2014
>>>>
>>>> If I use file_open instead of file_to_string_output I get a error
>>>> message (FA25 Seek error in file , which makes sense for a pipe).
>>>>
>>>> i.e.
DEBUG: Using script:DB.DBA.TTLP(file_open >>>> ('/tmp/virtuosoJavaCommunicationFifo7003142399157620167.ttl'), >>>> 'http://beta.sparql.uniprot.org/locations/','http://beta.sparql.uniprot.org/locations/') >>>> >>>> 2014-01-17 13:29:35 + >>>> org.expasy.sesame.virtuoso.VirtuosoBulkStatementTransaction >>>>ERROR: >>>> virtuoso.jdbc4.VirtuosoException: FA025: Seek error in file >>>> '/tmp/virtuosoJavaCommunicationFifo7003142399157620167.ttl', error : >>>> >>>> Then in my code writing to the named pipe fails on the 87th triple just >>>> as in the case with file_to_string_output. (this seems to be very close >>>> to 4096 bytes written before failing). >>>> >>>> However, with file_to_string_output I see the same broken pipe from the >>>> java side. But nothing in the virtuoso.log. Just the normal start up >>>> logging. The process also fails on the 87th triple. >>>> >>>> Secondly as a second trial, I speed up the writing into the turtle file >>>> then the process is successful. e.g. cat in to a named pipe or less >>>> checks on the java side. This makes me think that the >>>> file_to_string_output method does not correctly check if the named pipe >>>> is finished or if it just has no bytes available at this time. >>>> If that is the case then it would be a bug, but someone would need to >>>> check on a code level if my idea is correct. >>>> >>>> Regards, >>>> Jerven >>>> >>>> >>>> On 17/01/14 02:42, Hugh Williams wrote: >>>>> Hi Jerven, >>>>> >>>>> What is the version of Virtuoso being used, please provide the >>>>> output of >>>>> running: >>>>> >>>>> virtuoso-t -? >>>>> >>>>> What are the actual errors being reported on the server side, please >>>>> provide a copy or snippet of the virtuoso.log show these ? >>>>> >>>>> Are you able to provide a simple test case for recreation in-house ? >>>>> >>>>> Best Regards >>>>> Hugh Williams >>>>> Professional Services >>>>> OpenLink Software, Inc. // http://www.openlinksw.com/ >>>>>
Re: [Virtuoso-users] Support for reading from named pipes.
Hi Hugh,

No not really, just trying another knob to see if things could go faster…
From the perspective of the data warehousing set up we would use, directly loading triples from an ssh & gunzip pipe into Virtuoso would be nice (as our data production and data hosting are in different data centres). But not something you should prioritise IMHO (it’s nice to have).

Regards,
Jerven

On 20 Jan 2014, at 19:17, Hugh Williams wrote:
> Hi Jerven,
>
> In speaking to development the Virtuoso "file_io..." function were written
> for use with regular files and thus do not support named pipes. Do you have a
> specific need for this support for data loads ?
>
> Best Regards
> Hugh Williams
> Professional Services
> OpenLink Software, Inc. // http://www.openlinksw.com/
> Weblog -- http://www.openlinksw.com/blogs/
> LinkedIn -- http://www.linkedin.com/company/openlink-software/
> Twitter -- http://twitter.com/OpenLink
> Google+ -- http://plus.google.com/100570109519069333827/
> Facebook -- http://www.facebook.com/OpenLinkSoftware
> Universal Data Access, Integration, and Management Technology Providers
>
> On 17 Jan 2014, at 14:08, Jerven Bolleman wrote:
>
>> Hi Hugh,
>>
>> The virtuoso server log does not show any error messages.
>> The version is Version 7.0.1-dev.3207-pthreads as of Jan 16 2014
>>
>> If I use file_open instead of file_to_string_output I get a error
>> message (FA25 Seek error in file , which makes sense for a pipe).
>>
>> i.e.
DEBUG: Using script:DB.DBA.TTLP(file_open >> ('/tmp/virtuosoJavaCommunicationFifo7003142399157620167.ttl'), >> 'http://beta.sparql.uniprot.org/locations/','http://beta.sparql.uniprot.org/locations/') >> 2014-01-17 13:29:35 + >> org.expasy.sesame.virtuoso.VirtuosoBulkStatementTransaction >> ERROR: >> virtuoso.jdbc4.VirtuosoException: FA025: Seek error in file >> '/tmp/virtuosoJavaCommunicationFifo7003142399157620167.ttl', error : >> >> Then in my code writing to the named pipe fails on the 87th triple just >> as in the case with file_to_string_output. (this seems to be very close >> to 4096 bytes written before failing). >> >> However, with file_to_string_output I see the same broken pipe from the >> java side. But nothing in the virtuoso.log. Just the normal start up >> logging. The process also fails on the 87th triple. >> >> Secondly as a second trial, I speed up the writing into the turtle file >> then the process is successful. e.g. cat in to a named pipe or less >> checks on the java side. This makes me think that the >> file_to_string_output method does not correctly check if the named pipe >> is finished or if it just has no bytes available at this time. >> If that is the case then it would be a bug, but someone would need to >> check on a code level if my idea is correct. >> >> Regards, >> Jerven >> >> >> On 17/01/14 02:42, Hugh Williams wrote: >>> Hi Jerven, >>> >>> What is the version of Virtuoso being used, please provide the output of >>> running: >>> >>> virtuoso-t -? >>> >>> What are the actual errors being reported on the server side, please >>> provide a copy or snippet of the virtuoso.log show these ? >>> >>> Are you able to provide a simple test case for recreation in-house ? >>> >>> Best Regards >>> Hugh Williams >>> Professional Services >>> OpenLink Software, Inc. 
// http://www.openlinksw.com/ >>> Weblog -- http://www.openlinksw.com/blogs/ >>> LinkedIn -- http://www.linkedin.com/company/openlink-software/ >>> Twitter -- http://twitter.com/OpenLink >>> Google+ -- http://plus.google.com/100570109519069333827/ >>> Facebook -- http://www.facebook.com/OpenLinkSoftware >>> Universal Data Access, Integration, and Management Technology Providers >>> >>> On 16 Jan 2014, at 16:44, Jerven Bolleman >> <mailto:jerven.bolle...@isb-sib.ch>> wrote: >>> >>>> Hi Virtuoso Devs, >>>> >>>> I am trying to use file_to_string_output to read from a named pipe. >>>> However, this seems to break without log message from virtuoso >>>> >>>> e.g. >>>> >>>> DB.DBA.TTLP(file_to_string_output >>>> ('/tmp/virtuosoJavaCommunicationFifo7287643597025348653.ttl'), >>>> 'http://beta.sparql.uniprot.org/locations/','http://beta.sparql.uniprot.org/locations/') &g
Re: [Virtuoso-users] Support for reading from named pipes.
Hi Hugh,

The virtuoso server log does not show any error messages. The version is Version 7.0.1-dev.3207-pthreads as of Jan 16 2014.

If I use file_open instead of file_to_string_output I get an error message (FA025: Seek error in file, which makes sense for a pipe), i.e.:

DEBUG: Using script:DB.DBA.TTLP(file_open
('/tmp/virtuosoJavaCommunicationFifo7003142399157620167.ttl'),
'http://beta.sparql.uniprot.org/locations/','http://beta.sparql.uniprot.org/locations/')
2014-01-17 13:29:35 + org.expasy.sesame.virtuoso.VirtuosoBulkStatementTransaction ERROR:
virtuoso.jdbc4.VirtuosoException: FA025: Seek error in file
'/tmp/virtuosoJavaCommunicationFifo7003142399157620167.ttl', error :

Then, in my code, writing to the named pipe fails on the 87th triple, just as in the case with file_to_string_output (this seems to be very close to 4096 bytes written before failing).

However, with file_to_string_output I see the same broken pipe from the java side, but nothing in the virtuoso.log, just the normal start-up logging. The process also fails on the 87th triple.

As a second trial, if I speed up the writing into the turtle file then the process is successful, e.g. by cat-ing into the named pipe or doing fewer checks on the java side. This makes me think that the file_to_string_output method does not correctly check whether the named pipe is finished or whether it just has no bytes available at this time. If that is the case then it would be a bug, but someone would need to check on a code level whether my idea is correct.

Regards,
Jerven

On 17/01/14 02:42, Hugh Williams wrote:
> Hi Jerven,
>
> What is the version of Virtuoso being used, please provide the output of
> running:
>
> virtuoso-t -?
>
> What are the actual errors being reported on the server side, please
> provide a copy or snippet of the virtuoso.log show these ?
>
> Are you able to provide a simple test case for recreation in-house ?
>
> Best Regards
> Hugh Williams
> Professional Services
> OpenLink Software, Inc.
// http://www.openlinksw.com/ > Weblog -- http://www.openlinksw.com/blogs/ > LinkedIn -- http://www.linkedin.com/company/openlink-software/ > Twitter -- http://twitter.com/OpenLink > Google+ -- http://plus.google.com/100570109519069333827/ > Facebook -- http://www.facebook.com/OpenLinkSoftware > Universal Data Access, Integration, and Management Technology Providers > > On 16 Jan 2014, at 16:44, Jerven Bolleman <mailto:jerven.bolle...@isb-sib.ch>> wrote: > >> Hi Virtuoso Devs, >> >> I am trying to use file_to_string_output to read from a named pipe. >> However, this seems to break without log message from virtuoso >> >> e.g. >> >> DB.DBA.TTLP(file_to_string_output >> ('/tmp/virtuosoJavaCommunicationFifo7287643597025348653.ttl'), >> 'http://beta.sparql.uniprot.org/locations/','http://beta.sparql.uniprot.org/locations/') >> >> Which on the other side gives me Broken Pipe IOExceptions. >> >> Do you know if anyone else has tried this before or if this >> fundamentally will never work? >> >> The /tmp is in the DirsAllowed >> >> Regards, >> Jerven >> >> -- >> --- >> Jerven Bolleman jerven.bolle...@isb-sib.ch >> <mailto:jerven.bolle...@isb-sib.ch> >> SIB Swiss Institute of Bioinformatics Tel: +41 (0)22 379 58 85 >> CMU, rue Michel Servet 1 Fax: +41 (0)22 379 58 58 >> 1211 Geneve 4, >> Switzerland www.isb-sib.ch <http://www.isb-sib.ch> - www.uniprot.org >> <http://www.uniprot.org> >> Follow us at https://twitter.com/#!/uniprot >> --- >> >> -- >> CenturyLink Cloud: The Leader in Enterprise Cloud Services. >> Learn Why More Businesses Are Choosing CenturyLink Cloud For >> Critical Workloads, Development Environments & Everything In Between. >> Get a Quote or Start a Free Trial Today. 
>> http://pubads.g.doubleclick.net/gampad/clk?id=119420431&iu=/4140/ostg.clktrk >> ___ >> Virtuoso-users mailing list >> Virtuoso-users@lists.sourceforge.net >> https://lists.sourceforge.net/lists/listinfo/virtuoso-users > -- --- Jerven Bollemanjerven.bolle...@isb-sib.ch SIB Swiss Institute of Bioinformatics Tel: +41 (0)22 379 58 85 CMU, rue Michel Servet 1 Fax: +41 (0)22 379 58 58 1211 Geneve 4, Switzerland www.isb-sib.ch - www.un
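The suspected bug described above — treating "no bytes available right now" as end-of-file — can be illustrated in plain Java: on a quiet named pipe, available() may legitimately return 0 while read() would still deliver data, and only read() returning -1 signals true end-of-stream. The QuietStream wrapper below simulates that behaviour; it is a sketch of the failure mode, not Virtuoso code.

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

public class PipeEofDemo {
    /**
     * Stream that, like a momentarily quiet named pipe, reports 0 bytes
     * available even though more data will arrive on the next read().
     */
    static class QuietStream extends FilterInputStream {
        QuietStream(InputStream in) { super(in); }
        @Override public int available() { return 0; }
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new QuietStream(new ByteArrayInputStream("abc".getBytes()));
        // Wrong EOF check: available() == 0 does NOT mean end-of-stream.
        System.out.println("available=" + in.available());
        // Correct EOF check: read() returns -1 only at real end-of-stream.
        int n = 0;
        while (in.read() != -1) n++;
        System.out.println("bytesRead=" + n);
    }
}
```

A reader that stops as soon as available() hits 0 would truncate the input — consistent with the pipe load failing only when the writer is slow.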
[Virtuoso-users] Support for reading from named pipes.
Hi Virtuoso Devs,

I am trying to use file_to_string_output to read from a named pipe. However, this seems to break without a log message from Virtuoso, e.g.:

DB.DBA.TTLP(file_to_string_output
('/tmp/virtuosoJavaCommunicationFifo7287643597025348653.ttl'),
'http://beta.sparql.uniprot.org/locations/','http://beta.sparql.uniprot.org/locations/')

which on the other side gives me Broken Pipe IOExceptions.

Do you know if anyone else has tried this before, or if this fundamentally will never work? The /tmp directory is in DirsAllowed.

Regards,
Jerven

--
---
Jerven Bolleman jerven.bolle...@isb-sib.ch
SIB Swiss Institute of Bioinformatics Tel: +41 (0)22 379 58 85
CMU, rue Michel Servet 1 Fax: +41 (0)22 379 58 58
1211 Geneve 4, Switzerland www.isb-sib.ch - www.uniprot.org
Follow us at https://twitter.com/#!/uniprot
---
Re: [Virtuoso-users] {Disarmed} Re: Automatic loading of triples into virtuoso
> VirtModel.openDatabaseModel(graphURI, virtuosoURL, virtuosoUser, virtuosoPassword);
> model.begin();
> model.removeAll();
> model.add(memoryModel);
> model.commit();
> } catch (Exception e) {
>     if (model != null) model.abort();
> } finally {
>     if (model != null) model.close();
> }
>
> On Wed, Jan 15, 2014 at 3:02 PM, Andra Waagmeester wrote:
> Hi,
>
> I am looking into a solution to automatically load triples into our
> triples store. We use jena to convert our data into triples. Our current
> workflow is that we dump the triples in a file (e.g. content.ttl) after which
> this file is loaded through the isql command line:
>
> "DB.DBA.TTLP_MT (file_to_string_output
> ('wpContent_v0.0.73237_20140115.ttl'), '', 'http://rdf.wikipathways.org'); "
>
> Loading the file takes minutes, but it still requires some manual steps. I
> would very much like to make both the authoring and the subsequent loading in
> the triple store an automatic process.
> I have tried the following in java:
>
> InputStream in = new FileInputStream(args[1]);
> InputStream in = new FileInputStream("wpContent_v0.0.73237_20140115.ttl");
> String url = "jdbc:virtuoso://localhost:";
>
> /* STEP 1 */
> VirtGraph wpGraph = new VirtGraph ("http://rdf.wikipathways.org/", url, "dba", "");
>
> wpGraph.clear();
> VirtuosoUpdateRequest vur = VirtuosoUpdateFactory.read(in, wpGraph);
> vur.exec();
>
> or loading them triple by triple:
>
> VirtGraph wpGraph = new VirtGraph ("http://rdf.wikipathways.org/", url, "dba", "");
>
> //wpGraph.getTransactionHandler().begin();
> wpGraph.clear();
> StmtIterator iter = model.listStatements();
> while (iter.hasNext()) {
>     Statement stmt = iter.nextStatement(); // get next statement
>     wpGraph.add(new Triple(stmt.getSubject().asNode(),
>         stmt.getPredicate().asNode(), stmt.getObject().asNode()));
>     System.out.println( stmt.getSubject()+" - "+stmt.getPredicate()+" - "+stmt.getObject());
> }
>
> Both of these approaches work, but they take hours to complete.
>
> Can I get the minutes needed for loading through the isql command line in an
> automated pipeline process? Can I trigger DB.DBA.TTLP_MT through java or
> a shell script?
>
> Any guidance is much appreciated.
>
> Kind regards,
>
> Andra Waagmeester
>
> --
> Jim McCusker
>
> Data Scientist
> 5AM Solutions
> jmccus...@5amsolutions.com
> http://5amsolutions.com
>
> PhD Student
> Tetherless World Constellation
> Rensselaer Polytechnic Institute
> mcc...@cs.rpi.edu
> http://tw.rpi.edu

---
Jerven Bolleman jerven.bolle...@isb-sib.ch
SIB Swiss Institute of Bioinformatics Tel: +41 (0)22 379 58 85
CMU, rue Michel Servet 1 Fax: +41 (0)22 379 58 58
1211 Geneve 4, Switzerland www.isb-sib.ch - www.uniprot.org
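On the question of triggering DB.DBA.TTLP_MT from Java: the Virtuoso SQL functions shown in this thread can be sent as an ordinary statement over JDBC, which keeps the isql-speed load inside an automated pipeline. In the sketch below the JDBC URL, credentials and file name are placeholders, the file must live in a directory listed in DirsAllowed, and the helper only builds and prints the command unless connection arguments are supplied.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TtlpLoader {
    /** Build the same SQL command string that would be typed into isql. */
    static String ttlpCommand(String file, String graph) {
        return "DB.DBA.TTLP_MT(file_to_string_output('" + file + "'), '', '" + graph + "')";
    }

    public static void main(String[] args) throws Exception {
        // File and graph names taken from the thread above; adjust as needed.
        String sql = ttlpCommand("wpContent_v0.0.73237_20140115.ttl",
                                 "http://rdf.wikipathways.org");
        System.out.println(sql);
        // With "jdbc:virtuoso://host:port" user password on the command line,
        // actually execute the load (requires the Virtuoso JDBC driver on the
        // classpath and the file inside DirsAllowed on the server).
        if (args.length == 3) {
            try (Connection c = DriverManager.getConnection(args[0], args[1], args[2]);
                 Statement st = c.createStatement()) {
                st.execute(sql);
            }
        }
    }
}
```

This avoids shelling out to isql while reusing the exact server-side load path that takes minutes rather than hours.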
[Virtuoso-users] What is SPARQL VALID() please don't add new keywords to sparql.
Dear Virtuoso developers,

In your new 6.1.8 release you claim support for SPARQL VALID(). A search in the SPARQL standard docs does not seem to reveal such a function. Is this like your extension ASSUME()?

I understand the need for extending the standard, as well as Postel's law. However, please think of ways of adding such features that do not add new keywords to the SPARQL language. You could, for example, add this functionality in comments instead:

PREFIX bioh: <http://virtuoso/optimizerhints>
PREFIX dc: <http://purl.org/dc/elements/1.1/>
SELECT ?book ?title
WHERE {
  ?book dc:title ?title . #bioh:assume(?title, isIRI)
}

This is nicer for those of us who need to support multiple stores and still want to support Virtuoso as well as possible. Again, I understand the reasoning for adding new features, yet I urge all of you to think again about the exact way you have added and advertised these features.

Regards and thank you for your time,
Jerven

--
---
Jerven Bolleman jerven.bolle...@isb-sib.ch
SIB Swiss Institute of Bioinformatics Tel: +41 (0)22 379 58 85
CMU, rue Michel Servet 1 Fax: +41 (0)22 379 58 58
1211 Geneve 4, Switzerland www.isb-sib.ch - www.uniprot.org
Follow us at https://twitter.com/#!/uniprot
---
[Virtuoso-users] Commit d2117b6e08518c1ed7fc41b191969723dfa66c55 not passing tests on develop/7 branch from github
Hi All,

I am trying to build the current develop/7 branch on my CentOS release 6.4 machine. The build fails during one of the test cases.

$ git status
# On branch develop/7
# Untracked files:
#   (use "git add ..." to include in what will be committed)
#
#	binsrc/yacutia/vad.db
#	binsrc/yacutia/vad.trx
#	binsrc/yacutia/virtuoso.ini
#	binsrc/yacutia/virtuoso.tdb
#	binsrc/yacutia/xddl.xsd
#	binsrc/yacutia/xddl_diff.xsl
#	binsrc/yacutia/xddl_exec.xsl
#	binsrc/yacutia/xddl_procs.xsd
#	binsrc/yacutia/xddl_tables.xsd
#	binsrc/yacutia/xddl_views.xsd

--note-- The binsrc/yacutia/* files are not being cleaned up by make clean, which might be a source of the problems. --note--

rm -rf /scratch/virtuoso;   --note-- local scratch dir where I would like to see the resulting binaries --note--
autogen.sh;
./configure --help --prefix=/scratch/virtuoso/ --exec-prefix=/scratch/virtuoso;
make clean;
make -j 4;
make install

This failed at the point of creating "yacutia" (I think):

=
= CREATING VAD PACKAGE FOR VIRTUOSO CONDUCTOR (mkvad.sh)
= Tue Nov 5 13:50:54 CET 2013
=
VAD Sticker vad_fs.xml creation...
VAD Sticker vad_dav.xml creation...
*** Error S2801: [Virtuoso Driver]CL033: Connect failed to localhost: = localhost:. at line 0 of Top-Level:
Starting Virtuoso server ...
Virtuoso server started
***FAILED: execution of DB.DBA.VAD_PACK('vad_dav.xml', '.', 'conductor_dav.vad')
SQL ERROR - *** Error 42VAD: [Virtuoso Driver][Virtuoso Server]Inexistent file resource (./vad/vsp/conductor/folder_error.vspx) at line 0 of Top-Level: DB.DBA.VAD_PACK('vad_dav.xml', '.', 'conductor_dav.vad')
***FAILED: execution of DB.DBA.VAD_PACK('vad_fs.xml', '.', 'conductor_filesystem.vad')
SQL ERROR - *** Error 42VAD: [Virtuoso Driver][Virtuoso Server]Inexistent file resource (./vad/vsp/conductor/folder_error.vspx) at line 0 of Top-Level: DB.DBA.VAD_PACK('vad_fs.xml', '.', 'conductor_filesystem.vad')
PASSED: checkpoint
PASSED: shutdown
=
= Checking log file mkvad.output for statistics:
=
= Total number of tests PASSED  : 2
= Total number of tests FAILED  : 2
= Total number of tests ABORTED : 0
=
*** Not all tests completed successfully
*** Check the file mkvad.output for more information
+ egrep "\*\*.*FAILED:|\*\*.*ABORTED:" mkvad.output
***FAILED: execution of DB.DBA.VAD_PACK('vad_dav.xml', '.', 'conductor_dav.vad')
***FAILED: execution of DB.DBA.VAD_PACK('vad_fs.xml', '.', 'conductor_filesystem.vad')

Regards,
Jerven

PS. I was hoping to find out why I don't get a dev build.
Re: [Virtuoso-users] Path query crashes virtuoso 7.00.3203
Hi Hugh,

Changing the INI file settings fixes this issue. I got sidetracked by the build issue where it is supposed to generate a dev build but does not. I will let you know more about that when I figure out what is going on.

Regards,
Jerven

On 05/11/13 05:22, Hugh Williams wrote:
> Hi Jerven,
>
> Can you please confirm if this issue has been resolved by upgrading your
> Virtuoso Server binary and INI file settings?
>
> Best Regards
> Hugh Williams
> Professional Services
> OpenLink Software, Inc. // http://www.openlinksw.com/
> Weblog -- http://www.openlinksw.com/blogs/
> LinkedIn -- http://www.linkedin.com/company/openlink-software/
> Twitter -- http://twitter.com/OpenLink
> Google+ -- http://plus.google.com/100570109519069333827/
> Facebook -- http://www.facebook.com/OpenLinkSoftware
> Universal Data Access, Integration, and Management Technology Providers
>
> On 15 Oct 2013, at 16:47, Jerven Bolleman <mailto:jerven.bolle...@isb-sib.ch> wrote:
>
>> Hi Hugh,
>>
>> Thanks for your time spent looking into this.
>> On 15/10/13 04:35, Hugh Williams wrote:
>>> Hi Jerven,
>>>
>>> I have downloaded the /taxonomy.rdf.gz dataset and installed it, but have
>>> not been able to recreate this problem using a commercial or open source
>>> Virtuoso build:
>>>
>>> $ ./bin/virtuoso-t -?
>>> -bash: ./bin/virtuoso-t: No such file or directory
>>> Hughs-MacBook-Pro-355:database hwilliams$ ../bin/virtuoso-t -?
>>> Virtuoso Open Source Edition (Column Store) (multi threaded)
>>> Version 7.0.1-dev.3203-pthreads as of Oct 13 2013
>>> Compiled for Darwin (x86_64-apple-darwin12.3.0)
>>> Copyright (C) 1998-2013 OpenLink Software
>>>
>>> Two points:
>>>
>>> 1. Your build is labelled as version 07.00.3203, whereas mine above
>>> is 7.0.1-dev.3203, which is what I would expect from a develop/7
>>> build.
So it seems to me you have a stable/7 build, which can be
>>> confirmed by running the "git status" command on your build tree.
>> git status confirms develop/7.
>> However, I might have ended up with a build with a little bit of both
>> due to switching from one branch to the other without make clean.
>>>
>>> 2. Why did you set those Column Store INI file params to such low values,
>>> when the recommended defaults are:
>>>
>>> MaxQueryMem = 2G ; memory allocated to query processor
>>> VectorSize = 1000 ; initial parallel query vector (array of query operations) size
>>> MaxVectorSize = 100 ; query vector size threshold
>>> AdjustVectorSize = 0
>>> ThreadsPerQuery = 8
>>> AsyncQueueMaxThreads = 10
>> I took these settings from a recent mail to this list by Kingsley,
>> but will use the defaults instead (will restart from the Virtuoso 7
>> defaults from git).
>>>
>>> Although the only issue I encountered with your low settings was a
>>> warning indicating:
>>>
>>> Virtuoso 42000 Error FRVEC: array in for vectored over max vector length 10001 > 1
>>>
>>> as your MaxVectorSize was set to 1.
>>>
>>> Anyway, I think the key is to confirm which build you are running to
>>> ensure it is a develop/7 build ...
>> I am now rebuilding after a make clean.
>> Will let you know if I can rebuild.
>>
>> Regards,
>> Jerven
>>>
>>> Best Regards
>>> Hugh Williams
>>> Professional Services
>>> OpenLink Software, Inc. // http://www.openlinksw.com/
>>> Weblog -- http://www.openlinksw.com/blogs/
>>> LinkedIn -- http://www.linkedin.com/company/openlink-software/
>>> Twitter -- http://twitter.com/OpenLink
>>> Google+ -- http://plus.google.com/100570109519069333827/
>>> Facebook -- http://www.facebook.com/OpenLinkSoftware
>>> Universal Data Access, Integration, and Management Technology Providers
>>>
>>> On 14 Oct 2013, at 12:34, Jerven Bolleman <mailto:jerven.bolle...@isb-sib.ch>
>>> <mailto:je
Re: [Virtuoso-users] Path query crashes virtuoso 7.00.3203
Hi Hugh,

Thanks for your time spent looking into this.

On 15/10/13 04:35, Hugh Williams wrote:
> Hi Jerven,
>
> I have downloaded the /taxonomy.rdf.gz dataset and installed it, but have
> not been able to recreate this problem using a commercial or open source
> Virtuoso build:
>
> $ ./bin/virtuoso-t -?
> -bash: ./bin/virtuoso-t: No such file or directory
> Hughs-MacBook-Pro-355:database hwilliams$ ../bin/virtuoso-t -?
> Virtuoso Open Source Edition (Column Store) (multi threaded)
> Version 7.0.1-dev.3203-pthreads as of Oct 13 2013
> Compiled for Darwin (x86_64-apple-darwin12.3.0)
> Copyright (C) 1998-2013 OpenLink Software
>
> Two points:
>
> 1. Your build is labelled as version 07.00.3203, whereas mine above
> is 7.0.1-dev.3203, which is what I would expect from a develop/7
> build. So it seems to me you have a stable/7 build, which can be
> confirmed by running the "git status" command on your build tree.

git status confirms develop/7.
However, I might have ended up with a build with a little bit of both
due to switching from one branch to the other without make clean.

> 2. Why did you set those Column Store INI file params to such low values,
> when the recommended defaults are:
>
> MaxQueryMem = 2G ; memory allocated to query processor
> VectorSize = 1000 ; initial parallel query vector (array of query operations) size
> MaxVectorSize = 100 ; query vector size threshold
> AdjustVectorSize = 0
> ThreadsPerQuery = 8
> AsyncQueueMaxThreads = 10

I took these settings from a recent mail to this list by Kingsley,
but will use the defaults instead (will restart from the Virtuoso 7
defaults from git).

> Although the only issue I encountered with your low settings was a
> warning indicating:
>
> Virtuoso 42000 Error FRVEC: array in for vectored over max vector length 10001 > 1
>
> as your MaxVectorSize was set to 1.
>
> Anyway, I think the key is to confirm which build you are running to
> ensure it is a develop/7 build ...

I am now rebuilding after a make clean.
Will let you know if I can rebuild.

Regards,
Jerven

> Best Regards
> Hugh Williams
> Professional Services
> OpenLink Software, Inc. // http://www.openlinksw.com/
> Weblog -- http://www.openlinksw.com/blogs/
> LinkedIn -- http://www.linkedin.com/company/openlink-software/
> Twitter -- http://twitter.com/OpenLink
> Google+ -- http://plus.google.com/100570109519069333827/
> Facebook -- http://www.facebook.com/OpenLinkSoftware
> Universal Data Access, Integration, and Management Technology Providers
>
> On 14 Oct 2013, at 12:34, Jerven Bolleman <mailto:jerven.bolle...@isb-sib.ch> wrote:
>
>> The SPARQL update approach does not immediately crash if I change the
>> following settings.
>>
>> TransactionAfterImageLimit = 5
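For reference, the Column Store settings Hugh quotes belong in the [Parameters] section of virtuoso.ini. A sketch with the values exactly as quoted in this thread (whether they suit a given machine depends on its RAM and core count):

```
[Parameters]
MaxQueryMem          = 2G    ; memory allocated to the query processor
VectorSize           = 1000  ; initial parallel query vector size
MaxVectorSize        = 100   ; query vector size threshold (value as quoted above)
AdjustVectorSize     = 0
ThreadsPerQuery      = 8     ; threads used per query; match to core count
AsyncQueueMaxThreads = 10
```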
Re: [Virtuoso-users] Path query crashes virtuoso 7.00.3203
The SPARQL update approach does not immediately crash if I change the following settings.

TransactionAfterImageLimit = 5
[Virtuoso-users] Path query crashes virtuoso 7.00.3203
ssOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super}
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super}
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super}
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super}
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super}
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super}
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super}
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super}
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super}
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super}
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super}
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super}
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super}
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super}
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super}
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super}
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super}
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super}
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super}
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super}
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf ?super}}
Re: [Virtuoso-users] how to 'debug' VOS7 performance?
=
> ;Ucm1 =
> ;Ucm2 =
> ;...
>
>
> [Zero Config]
> ServerName = virtuoso (WS1)
> ;ServerDSN = ZDSN
> ;SSLServerName =
> ;SSLServerDSN =
>
>
> [Mono]
> ;MONO_TRACE = Off
> ;MONO_PATH =
> ;MONO_ROOT =
> ;MONO_CFG_DIR =
> ;virtclr.dll =
>
>
> [URIQA]
> DynamicLocal = 0
> DefaultHost = localhost:8890
>
>
> [SPARQL]
> DefaultGraph = http://linkedchemistry.info/chembl/
> ImmutableGraphs = http://linkedchemistry.info/chembl/
> ResultSetMaxRows = 100
> MaxQueryCostEstimationTime = 5000 ; in seconds
> MaxQueryExecutionTime = 80 ; in seconds
> DefaultQuery = select distinct * where {<http://linkedchemistry.info/chembl/molecule/m443> ?p ?o}
> ;ExternalQuerySource = 1
> ;ExternalXsltSource = 1
> DeferInferenceRulesInit = 0 ; controls inference rules loading
> ;PingService = http://rpc.pingthesemanticweb.com/
>
>
> [Plugins]
> LoadPath = /usr/local/virtuoso-opensource/lib/virtuoso/hosting
> Load1 = plain, wikiv
> Load2 = plain, mediawiki
> Load3 = plain, creolewiki
> Load4 = plain, im
> ;Load5 = plain, wbxml2
> ;Load6 = plain, hslookup
> ;Load7 = attach, libphp5.so
> ;Load8 = Hosting, hosting_php.so
> ;Load9 = Hosting, hosting_perl.so
> ;Load10 = Hosting, hosting_python.so
> ;Load11 = Hosting, hosting_ruby.so
> ;Load12 = msdtc, msdtc_sample
> ==
[Virtuoso-users] ASSUME ???
Dear Virtuoso devs,

What is the ASSUME variable that you added in 6.1.7?
http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/

I can't find any reference to this in the SPARQL 1.1 query standard. Is this a Virtuoso-specific extension?

Regards,
Jerven
Re: [Virtuoso-users] Sparql using delete insert problem
Hi Vishesh,

Wouldn't the following work?

WITH
DELETE { nao:lastModified ?mod . }
INSERT { nao:lastModified "2012-01-29T16:29:11Z"^^xsd:dateTime . }
WHERE  { OPTIONAL { nao:lastModified ?mod . } }

The first example you posted would not match in the case where the nao:lastModified ?mod triple did not yet exist. This form should match the update-or-insert behaviour you are looking for. But I might be wrong; I don't use SPARQL Update day to day.

Regards,
Jerven

On 15/05/13 17:20, Vishesh Handa wrote:
> Hey Ivan
>
> On Tue, May 14, 2013 at 8:44 PM, Ivan Mikhailov wrote:
>> Hello Vishesh,
>>
>> The query says:
>>
>> For every triple nao:lastModified ?mod found in the database (if any),
>>
>> prepare future removal of nao:lastModified ?mod
>> and
>> prepare insertion of nao:lastModified "2012-01-29T16:29:11Z"^^xsd:dateTime .
>>
>> When all (if any) operations are prepared, perform the actual edit of the database.
>>
>> No nao:lastModified ?mod at the very beginning --- no manipulations prepared --- no effect.
>>
>> The solution is to make a separate DELETE and then an unconditional INSERT.
>
> Thanks for the detailed explanation. I was hoping to optimize it by
> not having to call separate delete and insert operations.
>
> It would be nice to have something like an 'update or insert' like one has in SQL.
>
>>
>> Best Regards,
>>
>> Ivan Mikhailov
>> OpenLink Software
>> http://virtuoso.openlinksw.com
>>
>> On Tue, 2013-05-14 at 15:27 +0530, Vishesh Handa wrote:
>>> Hey guys
>>>
>>> I've been trying to update some statements in a graph, and I have been
>>> using the following query -
>>>
>>> WITH
>>> DELETE { nao:lastModified ?mod . }
>>> INSERT { nao:lastModified "2012-01-29T16:29:11Z"^^xsd:dateTime . }
>>> WHERE { nao:lastModified ?mod . }
>>>
>>> This works fine in the case when the nao:lastModified ?m triple
>>> already exists, but it does not insert the new statement if that
>>> triple does not exist.
>>>
>>> I was checking the SPARQL 1.1 Update Standard [1], and I found the
>>> following text -
>>>
>>> "Deleting triples that are not present, or from a graph that is not
>>> present will have no effect and will result in success."
>>>
>>> So in the case when the triple is not present the delete operation
>>> should result in a success and the insert operation should be carried
>>> out. But that does not seem to be the case.
>>>
>>> Could someone please confirm if this is just a bug or is that the
>>> intended behaviour?
>>>
>>> [1] http://www.w3.org/TR/2012/PR-sparql11-update-20121108/#deleteInsert
>>>
>>> --
>>> Vishesh Handa
>
> --
> Vishesh Handa
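Ivan's suggested workaround, a separate DELETE followed by an unconditional INSERT, would look roughly like this. The graph and resource IRIs below are placeholders, since the real ones were stripped from the archived mail, and the nao prefix IRI is assumed to be the NEPOMUK Annotation Ontology namespace:

```
PREFIX nao: <http://www.semanticdesktop.org/ontologies/2007/08/15/nao#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>

# <urn:example:res> and <urn:example:graph> are placeholder IRIs
WITH <urn:example:graph>
DELETE { <urn:example:res> nao:lastModified ?mod . }
WHERE  { <urn:example:res> nao:lastModified ?mod . } ;

INSERT DATA { GRAPH <urn:example:graph> {
  <urn:example:res> nao:lastModified "2012-01-29T16:29:11Z"^^xsd:dateTime .
} }
```

The DELETE matches nothing (and succeeds) when the triple is absent, and the unconditional INSERT DATA then adds the new value either way.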
Re: [Virtuoso-users] AVG does not follow SPARQL standard
On 15/05/13 16:44, Jerven Bolleman wrote:
> Hi Developers,
>
> This has probably been raised before but the AVG function in 07.00.3202
> does not behave as (I think) it should.
>
> Avg(M) = Sum(M) / Count(M), where Count(M) > 0
>
> Where / returns numeric; but xsd:decimal if both operands are xsd:integer
>
> This means that per my reading of the spec avg should return a
> xsd:decimal but it does not.
>
> This query
> PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
> SELECT AVG (?s)
> WHERE {
>   VALUES ?s {"1"^^xsd:integer "3"^^xsd:integer}
> }
>
> Should return "1.5"^^xsd:decimal but instead it returns "2"^^xsd:int

Should return "2.0"^^xsd:decimal but instead it returns "2"^^xsd:int.

OK, that was rather silly in my maths, but the point still stands:

PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT (AVG(?s) AS ?avg)
WHERE {
  VALUES (?s) {("1"^^xsd:integer) ("2"^^xsd:integer)}
}

returns 1 instead of 1.5 ...

>
> Regards,
> Jerven
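The promotion rule at issue — integer divided by integer yields xsd:decimal — can be sketched with Python's decimal module. This is only an illustration of the SPARQL/XPath arithmetic semantics, not Virtuoso code:

```python
from decimal import Decimal

def sparql_avg(values):
    """Avg(M) = Sum(M) / Count(M); integer operands promote to xsd:decimal."""
    total = sum(values, Decimal(0))
    return total / Decimal(len(values))

# avg of xsd:integer 1 and 2 should be the xsd:decimal 1.5, not the int 1
print(sparql_avg([Decimal(1), Decimal(2)]))  # prints 1.5
```

An engine that stays in integer arithmetic here (as the report suggests Virtuoso does) would truncate 1.5 down to 1.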
[Virtuoso-users] AVG does not follow SPARQL standard
Hi Developers,

This has probably been raised before, but the AVG function in 07.00.3202 does not behave as (I think) it should.

Avg(M) = Sum(M) / Count(M), where Count(M) > 0

where / returns numeric, but xsd:decimal if both operands are xsd:integer.

This means that, per my reading of the spec, AVG should return an xsd:decimal, but it does not.

This query:

PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT AVG (?s)
WHERE {
  VALUES ?s {"1"^^xsd:integer "3"^^xsd:integer}
}

Should return "1.5"^^xsd:decimal but instead it returns "2"^^xsd:int

Regards,
Jerven
Re: [Virtuoso-users] Path query with * -> transitive temp memory ran out
Ivan,

On 14/05/13 16:23, Ivan Mikhailov wrote:
> Jerven,
>
> Indeed it's a departure from the standard. We've described our SPARQL-BI
> implementation to the WG, we've warned them that SPARQL 1.1 path
> expressions will be implemented as syntax sugar over SQL transitive
> subqueries, that's all. If we invent a scalable way of handling
> transitive paths without restrictions on ends then we may implement it,
> but not immediately.

Ah, I did not know that. From my experiments so far, the current solution does not feel scalable.

> both ?sub rdfs:subClassOf [] and [] rdfs:subClassOf ?super are, formally
> speaking, bindings for ends. In the generated SQL they form table
> sources that will provide statistics for the SQL optimizer about ?sub
> and ?super and let the optimizer choose the direction. One of these
> sources will become the leading node of an execution plan, so the transitive
> node is always a dependent part and there's no need to implement
> support for a transitive node with both ends open as a leading part.
>
> I can add more syntax sugar and implicitly put { select distinct ?sub
> { ?sub [] [] }} at the front of a transitive path pattern, but that will
> masquerade the issue, not resolve it.

It would move you just a little bit closer to the standard, which means less work in porting queries from one solution to another. So if the cost in code, documentation and performance is not too large, please do so.

Thanks,
Jerven

> Best Regards,
>
> Ivan Mikhailov
> OpenLink Software
>
> On Tue, 2013-05-14 at 15:06 +0200, Jerven Bolleman wrote:
>> Hi Ivan,
>>
>> I understand your reasoning. But it is a departure from the standard,
>> which seems silly to me because this works:
>>
>> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
>>
>> CONSTRUCT
>>   { ?sub rdfs:subClassOf ?super .}
>> FROM <http://purl.uniprot.org/go/>
>> WHERE
>>   { ?sub rdfs:subClassOf [] .
>>     ?sub (rdfs:subClassOf)* ?super .
>>     [] rdfs:subClassOf ?super}
>>
>> Regards,
>> Jerven
>>
>> On 14/05/13 14:51, Ivan Mikhailov wrote:
>>> Hello Jerven,
>>>
>>> In Virtuoso, transitive queries should have some equality at one end of
>>> the chain (or at both ends). It may be relaxed in the future, but for now it
>>> may require some additional refinement on one of the variables. It's not a bad
>>> idea anyway; in most cases you don't want to get a result set with all
>>> subjects mentioned in a graph or in the whole storage.
>>>
>>> On LOD, 50B quads, the following works quite fast:
>>>
>>> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
>>>
>>> select ?sub ?super
>>> WHERE
>>> {
>>>   { select distinct ?sub where { { ?sub a rdfs:Class } union { ?sub
>>> rdfs:subClassOf|^rdfs:subClassOf|^rdfs:Domain|^rdfs:Range ?x } } }
>>>   ?sub (rdfs:subClassOf)* ?super }
>>>
>>> but it demonstrates that the database contains enough garbage in some
>>> graphs. Formally speaking, blank nodes are legal as types and they are
>>> found in LOD, but I've never seen types without IRIs in "my" ontologies
>>> and I'm egocentric enough to treat such types as garbage. The next version
>>> eliminates blank node supertypes:
>>>
>>> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
>>>
>>> select ?sub ?super
>>> WHERE
>>> {
>>>   { select distinct ?super where { { ?super a rdfs:Class } union { ?super
>>> rdfs:subClassOf|^rdfs:subClassOf|^rdfs:Domain|^rdfs:Range ?x } filter
>>> (isIRI(?super)) } }
>>>   ?sub (rdfs:subClassOf)* ?super }
>>>
>>> The common idea is self-evident. Some subquery or a triple pattern
>>> enumerates the starting or ending ends, and the transitive path deals with each
>>> end in turn. In data manipulations, it may be convenient to make one
>>> insert of the transitive closure per start or end, so a select query
>>> enumerates the starting or ending ends and a data manipulation statement is
>>> executed in a loop, once for each enumerated node.
Something like
>>>
>>> for (sparql select distinct ?end where { enumeration pattern } ) do
>>>   {
>>>     sparql insert { ?s p ?o } where { ?s p* ?o . filter (?s = ?:end) } ;
>>>   }
>>>
>>> inside a stored procedure. The advantage is that the DML statements in
>>> the loop body can be placed in an async_queue, providing better hardware
>>> utilization on multi-process
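What an unrestricted `?sub rdfs:subClassOf* ?super` pattern asks for is the transitive closure over every node in the graph, which is why the transitive temp memory fills up. A small Python sketch of that computation (an illustration of the semantics only, not of Virtuoso's internals):

```python
from collections import defaultdict, deque

def transitive_closure(edges):
    """All (sub, super) pairs reachable via zero or more subClassOf steps."""
    graph = defaultdict(set)
    nodes = set()
    for sub, sup in edges:
        graph[sub].add(sup)
        nodes.update((sub, sup))
    closure = set()
    for start in nodes:              # one BFS per node: O(V * E) work overall
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            closure.add((start, node))   # '*' includes the zero-step pair
            for nxt in graph[node] - seen:
                seen.add(nxt)
                queue.append(nxt)
    return closure

pairs = transitive_closure([("A", "B"), ("B", "C")])
```

The result set grows with the square of the hierarchy size in the worst case, which is what anchoring one end of the path (as Ivan's rewrites above do) avoids.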
Re: [Virtuoso-users] Path query with * -> transitive temp memory ran out
Hi Ivan,

I understand your reasoning. But it is a departure from the standard, which seems silly to me because this works:

PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

CONSTRUCT
  { ?sub rdfs:subClassOf ?super .}
FROM <http://purl.uniprot.org/go/>
WHERE
  { ?sub rdfs:subClassOf [] .
    ?sub (rdfs:subClassOf)* ?super .
    [] rdfs:subClassOf ?super}

Regards,
Jerven

On 14/05/13 14:51, Ivan Mikhailov wrote:
> Hello Jerven,
>
> In Virtuoso, transitive queries should have some equality at one end of
> the chain (or at both ends). It may be relaxed in the future, but for now it
> may require some additional refinement on one of the variables. It's not a bad
> idea anyway; in most cases you don't want to get a result set with all
> subjects mentioned in a graph or in the whole storage.
>
> On LOD, 50B quads, the following works quite fast:
>
> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
>
> select ?sub ?super
> WHERE
> {
>   { select distinct ?sub where { { ?sub a rdfs:Class } union { ?sub
> rdfs:subClassOf|^rdfs:subClassOf|^rdfs:Domain|^rdfs:Range ?x } } }
>   ?sub (rdfs:subClassOf)* ?super }
>
> but it demonstrates that the database contains enough garbage in some
> graphs. Formally speaking, blank nodes are legal as types and they are
> found in LOD, but I've never seen types without IRIs in "my" ontologies
> and I'm egocentric enough to treat such types as garbage. The next version
> eliminates blank node supertypes:
>
> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
>
> select ?sub ?super
> WHERE
> {
>   { select distinct ?super where { { ?super a rdfs:Class } union { ?super
> rdfs:subClassOf|^rdfs:subClassOf|^rdfs:Domain|^rdfs:Range ?x } filter
> (isIRI(?super)) } }
>   ?sub (rdfs:subClassOf)* ?super }
>
> The common idea is self-evident. Some subquery or a triple pattern
> enumerates the starting or ending ends, and the transitive path deals with each
> end in turn.
> For data manipulation, it may be convenient to make one insert of the
> transitive closure per start or end, so a select query enumerates the
> starting or ending nodes and a data-manipulation statement is executed
> in a loop, once for each enumerated node. Something like
>
> for (sparql select distinct ?end where { enumeration pattern } ) do
>   {
>     sparql insert { ?s p ?o } where { ?s p* ?o . filter (?s = ?:end) } ;
>   }
>
> inside a stored procedure. The advantage is that the DML statements in
> the loop body can be placed in an async_queue, providing better hardware
> utilization on a multi-processor box or on a cluster.
>
> Best Regards,
>
> Ivan Mikhailov
> OpenLink Software
> http://virtuoso.openlinksw.com
>
> On Tue, 2013-05-14 at 12:57 +0200, Jerven Bolleman wrote:
>> Hi Hugh,
>>
>> That is not a SPARQL 1.1 requirement. Are the developers aiming to
>> remove this constraint?
>>
>> Anyway, I then hit the next roadblock.
>>
>>   Exception:virtuoso.jdbc3.VirtuosoException: TN...: Exceeded 10
>> bytes in transitive temp memory. use t_distinct, t_max or more
>> T_MAX_memory options to limit the search or increase the pool
>>
>> From what I can find on Google, the only way to change this is in the
>> code. Is this still correct?
>>
>> Regards,
>> Jerven
>>
>> On 14/05/13 12:31, Hugh Williams wrote:
>>> Hi Jerven,
>>>
>>> Either ?sub or ?super (or both) should appear in some non-transitive
>>> triple pattern to specify at least one of the transitive ends.
>>>
>>> See the following property path examples:
>>>
>>> http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/VirtTipsAndTricksSPARQL11PropertyPaths
>>>
>>> Best Regards
>>> Hugh Williams
>>> Professional Services
>>> OpenLink Software, Inc. 
// http://www.openlinksw.com/ >>> Weblog -- http://www.openlinksw.com/blogs/ >>> LinkedIn -- http://www.linkedin.com/company/openlink-software/ >>> Twitter -- http://twitter.com/OpenLink >>> Google+ -- http://plus.google.com/100570109519069333827/ >>> Facebook -- http://www.facebook.com/OpenLinkSoftware >>> Universal Data Access, Integration, and Management Technology Providers >>> >>> On 14 May 2013, at 09:16, Jerven Bolleman >> <mailto:jerven.bolle...@isb-sib.ch>> wrote: >>> >>>> Hi All, >>>> >>>> When executing >>>> >>>> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> >>>> >>>> CONSTRUCT >>>>{ ?sub rdfs:subClassOf ?super .} >>>> FROM <http://purl.uniprot.or
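The stored-procedure loop Ivan sketches above (enumerate the ends once, then run one closure insert per end) can be mirrored in a host language, with the per-end work handed to a worker pool in the spirit of async_queue. A Python sketch under those assumptions (toy data; the helper names are made up, and a worker thread stands in for a queued DML statement):

```python
from concurrent.futures import ThreadPoolExecutor

# Toy edge list standing in for the p predicate (hypothetical data).
edges = {"a": ["b"], "b": ["c"], "c": [], "d": ["c"]}

def closure_from(end):
    """Rows the per-end 'insert ... where { ?s p* ?o . filter (?s = ?:end) }'
    would produce for one enumerated end."""
    seen, stack, rows = {end}, [end], []
    while stack:
        node = stack.pop()
        rows.append((end, node))
        for nxt in edges[node]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return rows

ends = sorted(edges)  # stands in for: select distinct ?end where { ... }
with ThreadPoolExecutor(max_workers=4) as pool:  # analogous to async_queue
    inserted = [row for rows in pool.map(closure_from, ends) for row in rows]
print(len(inserted))  # prints: 8
```

Each enumerated end gets its own bounded search, so no single transitive evaluation has to hold the whole closure in memory at once.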
Re: [Virtuoso-users] Path query with * -> transitive temp memory ran out
Hi Hugh,

That is not a SPARQL 1.1 requirement. Are the developers aiming to remove this constraint?

Anyway, I then hit the next roadblock.

  Exception:virtuoso.jdbc3.VirtuosoException: TN...: Exceeded 10
bytes in transitive temp memory. use t_distinct, t_max or more
T_MAX_memory options to limit the search or increase the pool

From what I can find on Google, the only way to change this is in the code. Is this still correct?

Regards,
Jerven

On 14/05/13 12:31, Hugh Williams wrote:
> Hi Jerven,
>
> Either ?sub or ?super (or both) should appear in some non-transitive
> triple pattern to specify at least one of the transitive ends.
>
> See the following property path examples:
>
> http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/VirtTipsAndTricksSPARQL11PropertyPaths
>
> Best Regards
> Hugh Williams
> Professional Services
> OpenLink Software, Inc. // http://www.openlinksw.com/
> Weblog -- http://www.openlinksw.com/blogs/
> LinkedIn -- http://www.linkedin.com/company/openlink-software/
> Twitter -- http://twitter.com/OpenLink
> Google+ -- http://plus.google.com/100570109519069333827/
> Facebook -- http://www.facebook.com/OpenLinkSoftware
> Universal Data Access, Integration, and Management Technology Providers
>
> On 14 May 2013, at 09:16, Jerven Bolleman <jerven.bolle...@isb-sib.ch
> <mailto:jerven.bolle...@isb-sib.ch>> wrote:
>
>> Hi All,
>>
>> When executing
>>
>> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
>>
>> CONSTRUCT
>>   { ?sub rdfs:subClassOf ?super .}
>> FROM <http://purl.uniprot.org/go/>
>> WHERE
>>   { ?sub (rdfs:subClassOf)* ?super }
>>
>> I get the following exception. 
>> >> Query evaluation failed:PREFIX >> rdfs:<http://www.w3.org/2000/01/rdf-schema#> CONSTRUCT {?sub >> rdfs:subClassOf ?super} FROM <http://purl.uniprot.org/go/> WHERE {?sub >> rdfs:subClassOf* ?super} >> org.openrdf.query.QueryEvaluationException: : SPARQL execute >> failed:[PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#> CONSTRUCT >> {?sub rdfs:subClassOf ?super} FROM <http://purl.uniprot.org/go/> WHERE >> {?sub rdfs:subClassOf* ?super}] >> Exception:virtuoso.jdbc3.VirtuosoException: TR...: transitive start >> not given >> at >> virtuoso.sesame2.driver.VirtuosoRepositoryConnection.executeSPARQLForHandler(Unknown >> >> Source) >> >> >> Regards, >> Jerven >> -- >> --- >> Jerven Bolleman jerven.bolle...@isb-sib.ch >> <mailto:jerven.bolle...@isb-sib.ch> >> SIB Swiss Institute of Bioinformatics Tel: +41 (0)22 379 58 85 >> CMU, rue Michel Servet 1 Fax: +41 (0)22 379 58 58 >> 1211 Geneve 4, >> Switzerland www.isb-sib.ch <http://www.isb-sib.ch> - www.uniprot.org >> <http://www.uniprot.org> >> Follow us at https://twitter.com/#!/uniprot >> --- >> >> -- >> AlienVault Unified Security Management (USM) platform delivers complete >> security visibility with the essential security capabilities. Easily and >> efficiently configure, manage, and operate all of your security controls >> from a single console and one unified framework. Download a free trial. >> http://p.sf.net/sfu/alienvault_d2d >> ___ >> Virtuoso-users mailing list >> Virtuoso-users@lists.sourceforge.net >> https://lists.sourceforge.net/lists/listinfo/virtuoso-users > -- --- Jerven Bollemanjerven.bolle...@isb-sib.ch SIB Swiss Institute of Bioinformatics Tel: +41 (0)22 379 58 85 CMU, rue Michel Servet 1 Fax: +41 (0)22 379 58 58 1211 Geneve 4, Switzerland www.isb-sib.ch - www.uniprot.org Follow us at https://twitter.com/#!/uniprot --- -- AlienVault Unified Security Management (USM) platform delivers complete security visibility with the essential security capabilities. 
Easily and efficiently configure, manage, and operate all of your security controls from a single console and one unified framework. Download a free trial. http://p.sf.net/sfu/alienvault_d2d ___ Virtuoso-users mailing list Virtuoso-users@lists.sourceforge.net https://lists.sourceforge.net/lists/listinfo/virtuoso-users
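The t_distinct / t_max hints in the "transitive temp memory" error above refer to Virtuoso's transitive-subquery extension, where such limits can be set per pattern via OPTION. A hedged sketch of what that might look like — the option names are taken from the error text and Virtuoso's documentation, and the exact spelling and support depend on the server version, so verify before relying on it:

```python
# Hedged sketch: per-pattern transitive options in Virtuoso's SPARQL
# extension. TRANSITIVE, t_distinct, t_in, t_out, and t_max are assumed
# from the error message and the Virtuoso docs, not confirmed in the thread.
query = """\
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?sub ?super
FROM <http://purl.uniprot.org/go/>
WHERE {
  ?sub rdfs:subClassOf ?super
  OPTION (TRANSITIVE, t_distinct, t_in(?sub), t_out(?super), t_max(20))
}
"""
print("t_max(20)" in query)
```

t_max bounds the path length and t_distinct prunes revisited nodes, which is what the error message suggests when the default transitive temp-memory pool is exhausted.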
Re: [Virtuoso-users] Path query with * not supported yet?
{ ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super }
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super }
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super }
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super }
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super }
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super }
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super }
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super }
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super }
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super }
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super }
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super }
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super }
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf/rdfs:subClassOf ?super }
UNION { ?sub rdfs:subClassOf/rdfs:subClassOf ?super }
UNION { ?sub rdfs:subClassOf ?super }
}

Regards,
Jerven

On 14/05/13 10:16, Jerven Bolleman wrote:
> Hi All,
>
> When executing
>
> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
>
> CONSTRUCT
>   { ?sub rdfs:subClassOf ?super .}
> FROM <http://purl.uniprot.org/go/>
> WHERE
>   { ?sub (rdfs:subClassOf)* ?super }
>
> I get the following exception. 
>
> Query evaluation failed:PREFIX
> rdfs:<http://www.w3.org/2000/01/rdf-schema#> CONSTRUCT {?sub
> rdfs:subClassOf ?super} FROM <http://purl.uniprot.org/go/> WHERE {?sub
> rdfs:subClassOf* ?super}
> org.openrdf.query.QueryEvaluationException: : SPARQL execute
> failed:[PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#> CONSTRUCT
> {?sub rdfs:subClassOf ?super} FROM <http://purl.uniprot.org/go/> WHERE
> {?sub rdfs:subClassOf* ?super}]
> Exception:virtuoso.jdbc3.VirtuosoException: TR...: transitive start
> not given
> at
> virtuoso.sesame2.driver.VirtuosoRepositoryConnection.executeSPARQLForHandler(Unknown
> Source)
>
> Regards,
> Jerven

--
---
Jerven Bolleman                        jerven.bolle...@isb-sib.ch
SIB Swiss Institute of Bioinformatics  Tel: +41 (0)22 379 58 85
CMU, rue Michel Servet 1               Fax: +41 (0)22 379 58 58
1211 Geneve 4, Switzerland             www.isb-sib.ch - www.uniprot.org
Follow us at https://twitter.com/#!/uniprot
---
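Rather than writing the bounded-UNION workaround above by hand, the branches can be generated programmatically. A small Python sketch (the function name is made up) that emits one branch per path length, longest first, in the same shape as the query in this thread:

```python
def union_of_paths(pred, max_len):
    """Build the body of a bounded-depth substitute for `pred*`:
    a UNION of fixed-length property-path chains, longest first."""
    branches = []
    for n in range(max_len, 0, -1):
        chain = "/".join([pred] * n)          # e.g. p/p/p for length 3
        branches.append("{ ?sub %s ?super }" % chain)
    return "\nUNION ".join(branches)

body = union_of_paths("rdfs:subClassOf", 15)
print(body.count("UNION"))  # 14 UNION keywords join 15 branches
```

Note this only approximates `*`: it misses paths longer than max_len and the zero-length path, so it is a workaround, not an equivalent query.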
[Virtuoso-users] Path query with * not supported yet?
Hi All,

When executing

PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

CONSTRUCT
  { ?sub rdfs:subClassOf ?super .}
FROM <http://purl.uniprot.org/go/>
WHERE
  { ?sub (rdfs:subClassOf)* ?super }

I get the following exception.

Query evaluation failed:PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#> CONSTRUCT {?sub rdfs:subClassOf ?super} FROM <http://purl.uniprot.org/go/> WHERE {?sub rdfs:subClassOf* ?super}
org.openrdf.query.QueryEvaluationException: : SPARQL execute failed:[PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#> CONSTRUCT {?sub rdfs:subClassOf ?super} FROM <http://purl.uniprot.org/go/> WHERE {?sub rdfs:subClassOf* ?super}]
Exception:virtuoso.jdbc3.VirtuosoException: TR...: transitive start not given
at virtuoso.sesame2.driver.VirtuosoRepositoryConnection.executeSPARQLForHandler(Unknown Source)

Regards,
Jerven

--
---
Jerven Bolleman                        jerven.bolle...@isb-sib.ch
SIB Swiss Institute of Bioinformatics  Tel: +41 (0)22 379 58 85
CMU, rue Michel Servet 1               Fax: +41 (0)22 379 58 58
1211 Geneve 4, Switzerland             www.isb-sib.ch - www.uniprot.org
Follow us at https://twitter.com/#!/uniprot
---
[Virtuoso-users] SR491: Too many open statements
Hi All,

This is using the virtjdbc3.jar with virt_sesame2. The version is

java -cp virtjdbc3.jar virtuoso.jdbc3.Driver
OpenLink Virtuoso(TM) Driver for JDBC(TM) Version 3.x [Build 3.62]

org.openrdf.repository.RepositoryException: virtuoso.jdbc3.VirtuosoException: SR491: Too many open statements
at virtuoso.sesame2.driver.VirtuosoRepositoryConnection.addToQuadStore(Unknown Source)
at virtuoso.sesame2.driver.VirtuosoRepositoryConnection.add(Unknown Source)
at virtuoso.sesame2.driver.VirtuosoRepositoryConnection.add(Unknown Source)
at org.expasy.sesame.StatementIntoSailTransaction.handle(StatementIntoSailTransaction.java:121)

By the way, why is the virt_sesame2 code built to depend on the type 3 driver instead of the type 4 one?

The C code was built today from the following git branch: develop/7, commit = 5e894d801309288cac1a38a4be21fda2b6736628.

Regards,
Jerven

--
---
Jerven Bolleman                        jerven.bolle...@isb-sib.ch
SIB Swiss Institute of Bioinformatics  Tel: +41 (0)22 379 58 85
CMU, rue Michel Servet 1               Fax: +41 (0)22 379 58 58
1211 Geneve 4, Switzerland             www.isb-sib.ch - www.uniprot.org
Follow us at https://twitter.com/#!/uniprot
---

--
Learn Graph Databases - Download FREE O'Reilly Book
"Graph Databases" is the definitive new guide to graph databases and their applications. This 200-page book is written by three acclaimed leaders in the field. The early access version is available now. Download your free book today!
http://p.sf.net/sfu/neotech_d2d_may
[Virtuoso-users] Setting log_enable etc from Java/Sesame/JDBC type3
Hi All,

I might be crazy, but I would like to wire up bulk loading directly from my source data into RDF in a Java/Sesame process, without intermediate files. So I am trying to call a few built-in functions directly from Java, e.g. log_enable and checkpoint.

I tried the following:

Connection vrc = ((VirtuosoRepositoryConnection) connection).getQuadStoreConnection();
CallableStatement loggingOff = vrc.prepareCall("log_enable (2);");
loggingOff.execute();

but this fails with the following exception:

virtuoso.jdbc3.VirtuosoException: SQ074: Line 1: syntax error
at virtuoso.jdbc3.VirtuosoResultSet.process_result(Unknown Source)
at virtuoso.jdbc3.VirtuosoResultSet.<init>(Unknown Source)
at virtuoso.jdbc3.VirtuosoPreparedStatement.<init>(Unknown Source)
at virtuoso.jdbc3.VirtuosoCallableStatement.<init>(Unknown Source)
at virtuoso.jdbc3.VirtuosoConnection.prepareCall(Unknown Source)
at virtuoso.jdbc3.VirtuosoConnection.prepareCall(Unknown Source)

Now, I am wondering: is this possible at all?

Regards,
Jerven
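One plausible cause of the SQ074 syntax error (an assumption, not confirmed in the thread) is the trailing ";" inside the statement text: prepareCall receives a single statement, not a ";"-terminated script, so the terminator itself can trip the parser. A small Python sketch of the sanitizing step one would apply before handing the text to the driver:

```python
def strip_terminator(stmt: str) -> str:
    # prepareCall/execute take one statement, not a ";"-terminated script;
    # strip a trailing semicolon (and surrounding whitespace) first.
    return stmt.strip().rstrip(";").rstrip()

print(strip_terminator("log_enable (2);"))  # prints: log_enable (2)
```

In the Java code above this would mean calling vrc.prepareCall("log_enable (2)") without the semicolon; if that still fails, the function may need to be invoked through a plain Statement instead.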