Bruce; Please see responses inline:
On 2/26/12 1:11 AM, Bruce Whealton wrote:
Scott,
The Getting Started guide that I read didn't have this kind of
detail. Just for clarification: if I use drag and drop to import one
file, rdffile1.rdf, into another, rdffile2.rdf, is the statement
{rdffile2.rdf owl:imports rdffile1.rdf}
automatically added?
Yes. You can verify this on the Ontology Home page -
the House icon in the top row. The Form view will show all
owl:imports triples for the ontology.
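As a sketch, after the drag-and-drop the header of rdffile2.rdf would contain a triple along these lines (the base URIs here are hypothetical, only the file names come from the question):

```turtle
# Hypothetical header of rdffile2.rdf after the drag-and-drop import.
# The importing ontology points at the imported file's base URI.
@prefix owl: <http://www.w3.org/2002/07/owl#> .

<http://example.org/rdffile2>
    a owl:Ontology ;
    owl:imports <http://example.org/rdffile1> .
```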
Secondly, if I click on the Imports tab in the bottom center pane and
import from a URI, does that automatically create an owl:imports triple?
Yes. You can verify per the above.
Similarly, if I want to run queries against properties defined in a
vocabulary that is published on the web, do I need to store a local
copy - that is, download the ontology - so that queries that use
that vocabulary can run?
Yes. If you choose to Import from URL, the file will
be downloaded when the containing (importing) file is opened.
Subsequent queries and other actions are applied to the downloaded
file.
If you want to query the file on its server instead, then stand up a
SPARQL endpoint for the data.
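If the data is exposed through a SPARQL endpoint, a federated query can reach it without importing the file. A minimal sketch, assuming a hypothetical endpoint URL and FOAF data on the remote side:

```sparql
# Hypothetical: reach a remote endpoint via SPARQL 1.1 federation
# instead of downloading the file into the local model.
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

SELECT ?person ?name
WHERE {
  SERVICE <http://example.org/sparql> {
    ?person a foaf:Person ;
            foaf:name ?name .
  }
}
```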
I assume this must be required in order to have
those properties in memory to use for running queries. Is that right?
This is the main difference between using a text
serialization (.rdf, .ttl, .nt, .owl) and a connector file. With
a text serialization, the entire file needs to be read into memory
to have access to the triples. For a connector, which can connect
to various RDF back-ends (TDB, SDB, Allegro, Sesame, Oracle RDF),
relational data (via D2RQ), etc., the TopBraid Suite platform will
query the data on demand and cache triples. So the answer is a
bit different depending on how your data is configured. In your
use case, as I understand it, you are working with text
serializations and these will be read into memory when you open a
file, be it local or remote.
If you are really interested in learning more about semantic
technologies and Web application development, I'd highly recommend
looking into our training program.
-- Scott
----
Scott Henninger
Platform Product Manager, Senior Product and Training Engineer
TopQuadrant, Inc.
tel: 402-41-6029 / fax: 703 991-8192 / main: 703 299-9330
Training:
Introduction to Semantic Web Technologies - March 5-8, 2012, Washington, DC
Introduction to Semantic Web Technologies - April 24-26, 2012, New York, NY
TopBraid Advanced Products Training - May 21-24, 2012, Washington, DC
TQ Blog: Voyages of the Semantic Enterprise
Thanks,
Bruce
On Feb 4, 1:14 pm, Scott Henninger <[email protected]> wrote:
Hello Bruce; It may be a good idea to take a look at the Getting Started Guide for TopBraid Composer. The first few chapters will give you some good guidance on basic operations with files. You can find the guide, and other guides and tutorials, at http://www.topquadrant.com/products/support.html.
The basic principle is that data is brought into memory for computation, just as with any other application I can think of. You can't run a query on a remote piece of data. You could run a query against a service that wraps a piece of data. That's what a SPARQL Endpoint does.
owl:imports just makes it easy to specify what data needs to be brought into memory as part of the definition of a file. If fileA imports fileB, e.g. {<fileA> owl:imports <fileB>}, then when fileA is opened, Composer will use the base URI of fileB to open it and merge the data in the in-memory storage.
fileB could be a file on the Web. That means when opening fileA, Composer will use the base URI as a URL, download the data, and read it into memory. Depending on network latency and file size, this could take a while. If fileB changes frequently, you may need to keep it that way (i.e. leave the file on the Web). If it changes infrequently, then it may be best to create a copy and place it in Composer's workspace. This reduces the amount of time to open the file. In the case of FOAF, they use a versioning scheme so files are immutable - a change to the file will result in a new file name. Hence downloading to use a specific version is the anticipated use.
Regardless of how the file is stored, loading into Composer involves converting the triples into a data structure (Jena RDF). The data structure implements an in-memory graph structure that optimizes access to triples. This in-memory model is what is used to display Composer's UI, fetch data, apply SPARQL queries, run inferences, etc. Merging an imported file is basically reading the file into this model.
Namespaces have little to do with any of this - it's a separate concept taken from XML. A namespace is the syntactic part of the URI before the name. So for http://example.org/mydata#myitem, the namespace is "http://example.org/mydata#". That's it. The namespace can appear in any file to define a fully qualified URI (namespace + local name). It's a "space" in that one can say that there are a set of entities defined in the "space" "http://example.org/mydata#", one of which is "myitem" in this example.
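In Turtle syntax this split is made explicit by a prefix declaration; a small sketch using the example URI above (the class name is hypothetical):

```turtle
# A prefix binds a short label to the namespace; a qualified name
# then expands to namespace + local name.
@prefix mydata: <http://example.org/mydata#> .

# mydata:myitem expands to <http://example.org/mydata#myitem>
mydata:myitem a mydata:Thing .   # mydata:Thing is hypothetical
```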
So, back to your specific questions. To use FOAF in Composer, it has to be downloaded and read into memory. Then you can apply queries, etc.
<<if I import an RDF file into another file, that
is rdffile1.rdf into rdffile2.rdf, and then save the second file, the
next time I open rdffile2.rdf should this file have saved the
imported graph within the same file?>>
No, the imported graph will not be saved *in* rdffile2.rdf. If you specify an owl:imports of rdffile1.rdf into rdffile2.rdf, i.e. {rdffile2.rdf owl:imports rdffile1.rdf}, the data from rdffile1.rdf stays in rdffile1.rdf. What the imports statement means is "Whenever rdffile2.rdf is opened (read into memory), also open rdffile1.rdf and merge it into the same in-memory model." You can then perform browsing and query operations on the files as a single model.
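Once the two files are merged into one in-memory model, a single query sees triples from both without naming either file. A minimal sketch, assuming (hypothetically) that rdffile1.rdf defines FOAF-style person data that rdffile2.rdf builds on:

```sparql
# Runs against the merged in-memory model of rdffile2.rdf, so it can
# match triples that physically live in the imported rdffile1.rdf.
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

SELECT ?person ?friendName
WHERE {
  ?person foaf:knows ?friend .
  ?friend foaf:name ?friendName .
}
```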
Be clear about separating the concepts of namespaces and imports. An import is basically an "include" statement; a namespace is a syntactic piece of a URI.
Hopefully that helps some. I'd really encourage taking a look at the Getting Started Guide and/or looking into our training offerings.
-- Scott
Scott Henninger, PhD
Platform Product Manager, Senior Product and Training Engineer
TopQuadrant, Inc.
tel: 402-429-3751 / fax: 703 991-8192 / main: 703 299-9330
Training:
Introduction to Semantic Web Technologies - March 5-8, 2012, Washington, DC
TopBraid Advanced Products Training - April 9-12, 2012, Washington, DC
Introduction to Semantic Web Technologies - April 24-26, 2012, New York, NY
TQ Blog: Voyages of the Semantic Enterprise
On 2/3/12 7:05 PM, Bruce Whealton wrote:
I was a bit unclear about this concept. Let's assume that TBC didn't have a copy of FOAF included with the download. So, is it possible to run SPARQL queries against FOAF classes and properties using just the namespace for FOAF on the web? Or does TBC need to import the FOAF ontology into the open file?
That leads to my second question: if I had an ontology file and a data file, as discussed in the Learning SPARQL book, and I was going to query against the data using TBC, do I need to import the ontology into the data file? Or will TBC support SPARQL queries against files that are in the same project?
Lastly, but relatedly, if I import an RDF file into another file, that is rdffile1.rdf into rdffile2.rdf, and then save the second file, the next time I open rdffile2.rdf should this file have saved the imported graph within the same file? That doesn't seem to be the case for me. I have several namespaces that are defined in my foaf.rdf file. At one point I must have let TBC import the external namespaces because I have import statements in the file. But each time I open that file, it takes a long time because it has to go get the files off the web.
Thanks, Bruce
--
You received this message because you are subscribed to the Google Group "TopBraid Suite Users", the topics of which include Enterprise Vocabulary Network (EVN), TopBraid Composer, TopBraid Live, TopBraid Ensemble, SPARQLMotion and SPIN.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [email protected]
For more options, visit this group at http://groups.google.com/group/topbraid-users?hl=en