Bruce,

Semantic MediaWiki uses MySQL, but not as a triplestore per se (i.e. with
the Jena SDB interface), so the only way for TopBraid to use that data
directly would be with the D2RQ interface, and then you'd have to figure
out how the system models triples in MySQL.
http://semantic-mediawiki.org/wiki/Help:Using_SPARQL_and_RDF_stores
describes more about the potential relationship of a Semantic MediaWiki to
RDF triplestores.

Because RDF data in a Semantic MediaWiki can be accessed with a URL, it's
easy to incorporate it into a TopBraid application.
http://semantic-mediawiki.org/wiki/Help:RDF_export describes how to build
these URLs. For example, to get RDF about the User Manual page on that
site, you would use
http://semantic-mediawiki.org/wiki/Special:ExportRDF/User_manual . You can
use a URL like this in TopBraid to incorporate that data into an
application; for example, on the TBC Import view, you can click the "Import
from URL" button (with the + over a globe) and enter this URL, or a
SPARQLMotion script could retrieve the data with an "Import RDF from URL"
module.
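
To make the shape of those export URLs concrete, here is a small sketch in Python. The helper function name is my own invention, not anything from TopBraid or Semantic MediaWiki; the underscore-for-space substitution is standard MediaWiki page-name handling:

```python
from urllib.parse import quote

def export_rdf_url(wiki_base, page_title):
    # MediaWiki replaces spaces with underscores in page names, and
    # Special:ExportRDF takes the page name as the rest of the URL path.
    page = quote(page_title.replace(" ", "_"))
    return f"{wiki_base}/wiki/Special:ExportRDF/{page}"

url = export_rdf_url("http://semantic-mediawiki.org", "User manual")
# → http://semantic-mediawiki.org/wiki/Special:ExportRDF/User_manual
```

A URL built this way is exactly what you'd paste into the "Import from URL" dialog.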

To address your second point, a SPARQL endpoint is a running service that
accepts requests using the SPARQL Protocol (
http://www.w3.org/TR/2008/REC-rdf-sparql-protocol-20080115/). The actual
protocol for delivering a query to an endpoint is something for
applications like TopBraid Composer and ARQ to worry about. ARQ itself does
not act as a SPARQL endpoint, but reads local or remote data and runs the
query that you specify with the --query command-line parameter against it.
ARQ's and TopBraid's support of the SPARQL 1.1 SERVICE keyword is what lets
them talk to SPARQL endpoints, but talking to these endpoints is different
from reading a dataset (typically, a file) and running a query against it.
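
If it helps to see what "accepting requests using the SPARQL Protocol" means in practice, here is a minimal sketch of the protocol's simplest binding, an HTTP GET with the query text URL-encoded into a "query" parameter. The endpoint address is made up for illustration:

```python
from urllib.parse import urlencode

def sparql_get_url(endpoint, query):
    # Simplest SPARQL Protocol binding: an HTTP GET whose "query"
    # parameter carries the URL-encoded query text.
    return endpoint + "?" + urlencode({"query": query})

url = sparql_get_url("http://example.org/sparql",
                     "SELECT ?s WHERE { ?s ?p ?o } LIMIT 10")
```

An endpoint is whatever service answers a request like that; ARQ run from the command line never listens for such requests, which is why it isn't one.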

TBC also lets you enter queries to run against remote or local datasets
that are basically treated as files to read and not as services delivering
data according to a specific protocol. The TopBraid Live Personal Edition
that is included with TBC Maestro edition, on the other hand, can function
as a SPARQL endpoint service; Scott's blog entry at
http://topquadrantblog.blogspot.com/2010/05/how-to-publish-your-linked-data-with.html
describes how to pass queries to this service.
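
For what it's worth, the protocol also defines a POST binding, which endpoint services generally accept for longer queries (TopBraid Live included, as far as I know). A sketch, with a hypothetical endpoint URL standing in for whatever your installation uses:

```python
from urllib.parse import urlencode
from urllib.request import Request

def sparql_post_request(endpoint, query):
    # POST binding: the query travels form-encoded in the request body
    # rather than in the URL, which avoids URL length limits.
    body = urlencode({"query": query}).encode("ascii")
    return Request(endpoint, data=body, headers={
        "Content-Type": "application/x-www-form-urlencoded",
        "Accept": "application/sparql-results+xml",
    })

# Hypothetical local endpoint address; yours will depend on your setup.
req = sparql_post_request("http://localhost:8080/sparql",
                          "ASK { ?s ?p ?o }")
```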

Regarding your last question, for more general questions about the semantic
web, I recommend http://lists.w3.org/Archives/Public/semantic-web/ and
especially http://answers.semanticweb.com/, where Scott and I are both
regulars.

Bob


On Wed, Jan 4, 2012 at 6:31 PM, Bruce Whealton <[email protected]> wrote:

> Hello,
>         I seem to remember reading about using TBC or the Ensemble Suite
> with Semantic Wikis.  I'm not sure how that would work.  I use MediaWiki
> and the Semantic MediaWiki Bundle that does expose data as RDF or it can
> when configured correctly.  It will either produce an RDF output from a page
> or you can create an RDF dump of the entire wiki.  I have a Genealogy
> project I am working on and I'm curious how TBC could fit into the project
> and be used.  Of course, each tool uses different ways for running queries,
> presenting data, etc.  MediaWiki is based on a MySQL relational
> database.  So, I don't know if an application developed with TBC would
> exist separately and would use the data from the wiki, or from the wiki
> database or if there are ways to somehow integrate the two.  Has anyone
> done anything like this, using some kind of web based Semantic application
> and TBC?
>           I have been exploring the use of the Semantic Web for Genealogy
> as a separate topic and it would be nice if people that are doing anything
> in this area would share their data so that one could query the data that
> makes the Semantic Web different from previous approaches based on silos of
> data.  I am enjoying Bob's book on Learning SPARQL.  It is useful to see
> how one can discover the data that exists at a particular URI.  Of course,
> one must know where there might be semantic web data exposed as RDF
> triples.  That way my application could let users discover, or search for
> information across the entire global Semantic Web.
>            So, conceptually, the idea of a SPARQL endpoint is something
> for which I wanted to clarify my understanding.  Using the "Learning
> SPARQL" book we use ARQ, for running SPARQL queries from the local drive.
> If a SPARQL endpoint is defined as a source that will accept SPARQL
> queries, does that mean that ARQ is essentially creating a local SPARQL
> endpoint? I'm not sure if I am asking this right.  Similarly, TBC must act
> as a SPARQL endpoint because it allows SPARQL queries to be run.  I would
> like to understand what is required to establish a SPARQL endpoint at a
> particular URL.  I think MediaWiki has an extension for this and so does
> Drupal, so I could explore what they do and try to figure it out but maybe
> someone can clear this up for me.
>            Can anyone recommend some other forums or mailing lists that
> are good for discussing Semantic Web concepts in a broader sense?
> Thanks,
> Bruce
> P.S. The Semantic MediaWiki Genealogy site I am developing is at:
> http://my-family-lineage.com/wiki/Main_Page

-- 
You received this message because you are subscribed to the Google
Group "TopBraid Suite Users", the topics of which include TopBraid Composer,
TopBraid Live, TopBraid Ensemble, SPARQLMotion and SPIN.
To post to this group, send email to
[email protected]
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/topbraid-users?hl=en
