Bruce; Some responses inline. You can see more about training, including costs, at http://www.topquadrant.com/training/training_overview.html

On 3/8/12 9:32 PM, Bruce Whealton wrote:

> Scott (and others - I always feel like adding that statement to ensure others will jump in and comment as well),
>
> Referring to importing vocabularies from the Semantic Web, you wrote:
>
>> Yes, if you choose to Import from URL, the file will be downloaded when the containing (importing) file is opened. Subsequent queries and other actions are applied to the downloaded file.
>
> Let's take an example like BIO, which is not included with the sample ontologies that come with TBC. BIO is derived from FOAF (which is included with the vocabularies that come with TBC). Is there a way to import a remote file that you know you will use very often and have it stored locally? Every time I open certain files in my Workspace that use various web ontologies, it takes a long time importing each and every one - slowing down production.

Two points to make here:

1) Opening a file that includes remote imports suffers only from load latency. That is, the file will be downloaded and stored in memory. Once the data is cached locally, performance will be the same as for a local file - it is an in-memory operation. So production is not slowed down; load latency is a problem, but it occurs only once.

2) If the load latency is not acceptable, then save the file in your workspace.

> If it was downloaded once and stored in the project - not just in the project, but I think it has to be stored in the RDF file that I am using to run queries. This is from my reading above. In other words, I cannot have the BIO vocabulary stored in my project but not imported into my working file and still be able to run queries that use that vocabulary. Not sure if I am making sense... Example: my family tree file, brucewhealtonjr.rdf. I open that to work with it. I want to query for the value representing my bio:mother. So, here is my thinking.
> Since I will use this vocabulary often, I download a copy and save it in the project folder where I am working, then I import it into the working file brucewhealtonjr.rdf. Does that sound right?

Yes, that's basically right. Let me show a couple of ways to do this that may be instructive:

3) Importing to a file. I wasn't sure what file you were referring to, so let's call it BIO.ttl. Further, let's assume its base URI is http://bruceserver.org/BIO.ttl and that it is dereferenceable (i.e. the file actually exists at that location). You import this into the brucewhealtonjr.rdf file using the base URI and query that. If BIO.ttl is remote, it will be read into memory when you open brucewhealtonjr.rdf. If you save a copy in your workspace, keeping the base URI the same, then it will be loaded from the workspace. That is to say, when Composer (TopBraid Suite in general) sees a URI to import, it first checks the file registry to see if the file is local. If it is not, it uses the URI as a URL to go get the file.

4) Using GRAPH in SPARQL. You can specify that parts of a query are applied against a specific file (graph). Assume you've opened brucewhealtonjr.rdf and that the bio:mother information is in BIO.ttl. The following query gets bio:mother from BIO.ttl for all persons in brucewhealtonjr.rdf:

```sparql
SELECT ?person ?mom
WHERE {
  ?person a foaf:Person .
  GRAPH <http://bruceserver.org/BIO.ttl> {
    ?person bio:mother ?mom .
  }
}
```

Then you wrote:

>> If you want to query the file from its server, then stand up a SPARQL endpoint for the data.
>
> I didn't follow that. How does one do that?

The TQ blog post at http://topquadrantblog.blogspot.com/2010/05/how-to-publish-your-linked-data-with.html can give you some information on how to do this. Basically, a SPARQL endpoint is a service that takes a SPARQL query as input and returns a result set in a standard format.
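As a concrete illustration of that last point: under the standard SPARQL 1.1 Protocol, an endpoint is just an HTTP service that accepts the query as a URL parameter and returns the result set in a standard serialization such as SPARQL Results JSON or XML. A request to an imaginary endpoint at bruceserver.org (hypothetical; the endpoint path and host are made up for this sketch) might look like:

```http
GET /sparql?query=SELECT%20%2A%20WHERE%20%7B%20%3Fs%20%3Fp%20%3Fo%20%7D HTTP/1.1
Host: bruceserver.org
Accept: application/sparql-results+json
```

The `query` parameter is just the URL-encoded query text (here, `SELECT * WHERE { ?s ?p ?o }`). The client never loads the file; the server evaluates the query against its own copy of the data and sends back only the result bindings.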
TopBraid Live has an out-of-the-box SPARQL endpoint, as described by the blog, and you can use the sml:ReturnSPARQLResults module in a SPARQLMotion script to turn a Web service into a SPARQL endpoint.

-- Scott

> The training you recommended - where is it located, when is it, and how much does it cost?
>
> Thanks, Bruce

On Feb 26, 3:24 pm, Scott Henninger <[email protected]> wrote:

Bruce; Please see responses inline:

On 2/26/12 1:11 AM, Bruce Whealton wrote:

> Scott,
>
>> If you want to query the file from its server, then stand up a SPARQL endpoint for the data.
>
> I assume this must be required in order to have those properties in memory to use for running queries. Is that right?

This is the main difference between using a text serialization (.rdf, .ttl, .nt, .owl) and a connector file. With a text serialization, the entire file needs to be read into memory to have access to the triples. With a connector, which can connect to various RDF back-ends (TDB, SDB, Allegro, Sesame, Oracle RDF), relational data (via D2RQ), etc., the TopBraid Suite platform will query the data on demand and cache triples. So the answer differs depending on how your data is configured. In your use case, as I understand it, you are working with text serializations, and these will be read into memory when you open a file, be it local or remote. If you are really interested in learning more about semantic technologies and Web application development, I'd highly recommend looking into our training program.

-- Scott

Scott Henninger
Platform Product Manager, Senior Product and Training Engineer
TopQuadrant, Inc.
tel: 402-41-6029 / fax: 703 991-8192 / main: 703 299-9330

Training:
Introduction to Semantic Web Technologies - March
5-8, 2012, Washington, DC
Introduction to Semantic Web Technologies - April 24-26, 2012, New York, NY
TopBraid Advanced Products Training - May 21-24, 2012, Washington, DC

TQ Blog: Voyages of the Semantic Enterprise

> Thanks, Bruce

On Feb 4, 1:14 pm, Scott Henninger <[email protected]> wrote:

Hello Bruce;

It may be a good idea to take a look at the Getting Started Guide for TopBraid Composer. The first few chapters will give you some good guidance on basic operations with files. You can find the guide, and other guides and tutorials, at http://www.topquadrant.com/products/support.html.

The basic principle is that data is brought into memory for computation, just as with any other application I can think of. You can't run a query on a remote piece of data, but you can run a query against a service that wraps a piece of data - that's what a SPARQL endpoint does. owl:imports just makes it easy to specify what data needs to be brought into memory as part of the definition of a file. If fileA imports fileB, e.g. {<fileA> owl:imports <fileB>}, then when fileA is opened, Composer will use the base URI of fileB to open it and merge the data into the in-memory storage. fileB could be a file on the Web. That means that when opening fileA, Composer will use the base URI as a URL, download the data, and read it into memory. Depending on network latency and file size, this could take a while. If fileB changes frequently, you may need to keep it that way (i.e. leave the file on the Web). If it changes infrequently, then it may be best to create a copy and place it in Composer's workspace. This reduces the amount of time needed to open the file. In the case of FOAF, they use a versioning scheme so files are immutable - a change to the file will result in a new file name. Hence downloading to use a specific version is the anticipated use.

Regardless of how the file is stored, loading it into Composer involves converting the triples into a data structure (a Jena RDF model). The data structure implements an in-memory graph structure that optimizes access to triples. This in-memory model is what is used to display Composer's UI, fetch data, apply SPARQL queries, run inferences, etc. Merging an imported file is basically reading the file into this model.

Namespaces have little to do with any of this - it's a separate concept taken from XML. A namespace is the syntactic part of the URI before the name. So for http://example.org/mydata#myitem, the namespace is "http://example.org/mydata#". That's it. The namespace can appear in any file to define a fully qualified URI (namespace + local name). It's a "space" in that one can say there is a set of entities defined in the "space" "http://example.org/mydata#", one of which is "myitem" in this example.

So, back to your specific questions. To use FOAF in Composer, it has to be downloaded and read into memory. Then you can apply queries, etc.

> If I import an RDF file into another file, that is rdffile1.rdf into rdffile2.rdf, and then save the second file, the next time I open rdffile2.rdf should this file have saved the imported graph within the same file?

No, the imported graph will not be saved *in* rdffile2.rdf. If you specify an owl:imports of rdffile1.rdf into rdffile2.rdf, i.e. {rdffile2.rdf owl:imports rdffile1.rdf}, the data from rdffile1.rdf stays in rdffile1.rdf. What the imports statement means is "Whenever rdffile2.rdf is opened (read into memory), also open rdffile1.rdf and merge it into the same in-memory model." You can then perform browsing and query operations on the files as a single model.

Be clear about separating the concepts of namespaces and imports. Imports are basically an "include" statement. A namespace is a syntactic piece of a URI.

Hopefully that helps some. I'd really encourage taking a look at the Getting Started Guide and/or looking into our training offerings.
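To make the namespace-versus-imports distinction concrete, here is a minimal Turtle sketch (the file names and URIs are hypothetical, for illustration only):

```turtle
# rdffile2.ttl (hypothetical example)
@prefix ex:  <http://example.org/mydata#> .       # namespace declaration: pure syntax, loads nothing
@prefix owl: <http://www.w3.org/2002/07/owl#> .

<http://example.org/rdffile2>
    a owl:Ontology ;
    # the "include" statement: rdffile1 is opened and merged into the
    # same in-memory model whenever this file is opened
    owl:imports <http://example.org/rdffile1> .

ex:myitem a owl:Thing .   # ex:myitem abbreviates http://example.org/mydata#myitem
```

The @prefix lines only let you abbreviate full URIs; deleting them would not change which triples get loaded, whereas deleting the owl:imports triple would.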
-- Scott

Scott Henninger, PhD
Platform Product Manager, Senior Product and Training Engineer
TopQuadrant, Inc.
tel: 402-429-3751 / fax: 703 991-8192 / main: 703 299-9330

Training:
Introduction to Semantic Web Technologies - March 5-8, 2012, Washington, DC
TopBraid Advanced Products Training - April 9-12, 2012, Washington, DC
Introduction to Semantic Web Technologies - April 24-26, 2012, New York, NY

TQ Blog: Voyages of the Semantic Enterprise

On 2/3/12 7:05 PM, Bruce Whealton wrote:

> I was a bit unclear about this concept. Let's assume that TBC didn't have a copy of FOAF included with the download. So, is it possible to run SPARQL queries against FOAF classes and properties using just the namespace for FOAF on the web? Or does TBC need to import the FOAF ontology into the open file? That leads to my second question: if I had an ontology file and a data file, as discussed in the Learning SPARQL book, and I was going to query against the data using TBC, do I need to import the ontology into the data file? Or will TBC support SPARQL queries against files that are in the same project? Lastly, but relatedly, if I import an RDF file into another file, that is rdffile1.rdf into rdffile2.rdf, and then save the second file, the next time I open rdffile2.rdf should this file have saved the imported graph within the same file? That doesn't seem to be the ...

You received this message because you are subscribed to the Google Group "TopBraid Suite Users", the topics of which include Enterprise Vocabulary Network (EVN), TopBraid Composer, TopBraid Live, TopBraid Ensemble, SPARQLMotion and SPIN. To post to this group, send email to [email protected]. To unsubscribe from this group, send email to [email protected]. For more options, visit this group at http://groups.google.com/group/topbraid-users?hl=en
