Hi Bruce,

It worked great for me. In TBC, I created an empty model, and then on the
Imports view I clicked the "Import from URL" icon (the one with the + over
a picture of the world).  I entered
http://my-family-lineage.com/wiki/Special:ExportRDF/Edward_Sean_Hayes(1961-%3F)
and clicked OK, and then I was able to run queries against it in TBC's
SPARQL view.
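
If you ever need to build these export URLs in a script rather than pasting them by
hand, the only subtle part is that the "?" in the open-ended dates has to be
percent-encoded as %3F while the parentheses stay literal. A quick Python sketch
(the helper name and the safe-character choice are my own, not any TopBraid API):

```python
from urllib.parse import quote

def export_rdf_url(page_title: str) -> str:
    """Build a Special:ExportRDF URL for a wiki page title.

    Parentheses are kept literal; reserved characters such as
    '?' are percent-encoded (so '?' becomes '%3F')."""
    base = "http://my-family-lineage.com/wiki/Special:ExportRDF/"
    return base + quote(page_title, safe="()")

print(export_rdf_url("Edward_Sean_Hayes(1961-?)"))
# http://my-family-lineage.com/wiki/Special:ExportRDF/Edward_Sean_Hayes(1961-%3F)
```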

Of course, the same model could import additional RDF files such as
ontologies for your vocabularies or other my-family-lineage data files.
Because they're imports into the local model, updates to the imported files
would be reflected in the imported version.

The Import RDF From URL module would let you do the same kind of import in
a SPARQLMotion script.

Bob



On Sat, Jan 21, 2012 at 12:46 AM, Bruce Whealton <[email protected]> wrote:

> Ok, I applied Bob's comment to use the Special:ExportRDF/myPage
> link and it looked right.
> Please check it out for me and see what you think:
> The site is http://my-family-lineage.com/wiki/Main_Page
> So, to get the RDF output of the listing for me, just go to
> http://my-family-lineage.com/wiki/Special:ExportRDF/BruceWhealton
> If you browse to some other names, you can look at some of the newest
> ones added.
> I had a Perl script created that takes a GEDCOM file and imports it into
> my wiki, creating a page for each individual.
> Here are some new pages created with that script:
> http://my-family-lineage.com/wiki/Edward_Sean_Hayes(1961-%3F)
> http://my-family-lineage.com/wiki/Tamsin_Clare_Hayes(1963-%3F)
> http://my-family-lineage.com/wiki/Alice_Rose_Hayes(1986-%3F)
> http://my-family-lineage.com/wiki/Megan_Christine_Hayes(1986-%3F)
>
> Please see the RDF output using this link for the first one:
>
> http://my-family-lineage.com/wiki/Special:ExportRDF/Edward_Sean_Hayes(1961-%3F)
> and similarly the others.
>
> It appears that all of these are creating RDF files that I could query
> using SPARQL in TBC and the 3 main vocabs: bio, rel, and foaf.
> Does anyone see any problems with this working?
> Thanks,
> Bruce
>
> On Jan 9, 8:50 am, Tim Smith <[email protected]> wrote:
> > Hi Bruce,
> >
> > A year or so ago, I was an active user of both TBC/L and SMW.  I wanted
> > to do much of what you indicate in your messages below.  Overall, I
> > wanted the standards compliance (SPARQL, Linked Data, triples, etc.) and
> > processing power of TBC/L behind an accepted wiki user interface like
> > MediaWiki/SMW.  The goal was to provide a low-threshold, familiar
> > interface to enable people to create semantic data, i.e. populate an
> > ontology without the complexity of many of today's ontology tools.
> >
> > However, I eventually moved away from SMW due to its lack of support
> > for the semantic web standards.  Here are a couple of the issues I faced:
> >
> > *Import/Export of Triples from SMW:*
> >
> > I attempted to build ontologies in TBC and load them into SMW.  However,
> > the importer had serious issues when my ontologies had overlap with the
> > SMW default ontologies.  SMW did not recognize the concept of named
> > graphs.
> >
> > Use of the Triple Store Connector and now the 4Store triple store does
> > allow you to access some of the triples externally.
> >
> > *SPARQL Support*
> >
> > There is an extension that enables SPARQL in place of ASK inside the
> > wiki.  This is pretty good overall, except that SMW's "Internal Objects"
> > (known to those on this list as simple bnodes) were not accessible via a
> > SPARQL query.  This was a deal breaker for me, not to mention very
> > surprising...
> >
> > *Everything is a Page!*
> >
> > The notion in SMW is that everything is a wiki page.  Thus I found
> > myself creating pages for instances such as "high", "medium", and "low"
> > in order to have these to pick from across multiple semantic forms.
> > This caused excessive wiki bloat and overall made things difficult to
> > manage.  In addition, all property assignments had to appear as wiki
> > text on the page, so you had to edit the page to add a property
> > relationship - not a big deal if you are doing it manually for a few
> > instances, but if you want to do it programmatically you have to parse
> > the page and insert the text - not exactly like a SPARQL
> > Insert/Construct!
> >
> > *Semantic Forms*
> >
> > SMW Semantic Forms is one of the better features.  The creator, Yaron
> > Koren, has done a tremendous amount of work and covers a huge range of
> > use cases with his forms system.  However, it is cumbersome to create
> > and maintain the forms.  I've been trying to get him to move his form
> > definition into an ontology so forms could be automatically created, but
> > he is not interested.  In addition, form assignment is done on a
> > page-by-page basis with no connection to the underlying ontology.  Thus
> > if you import a bunch of instances you will have to edit each page to
> > assign the form.  Try that for 2000 instances!
> >
> > The above issues pretty much ended my foray into using SMW as a
> > low-threshold front-end for creating semantic data/information graphs.
> >
> > However, I moved on to create a connector between TBC/L and the
> > Confluence wiki (by Atlassian).  This connector would create a wiki page
> > structure in Confluence based on an ontology (with instances) in TBC/L.
> > The page linkages in Confluence are live connections to TBL, so wiki
> > navigation is done via the ontology.  This is much more efficient since
> > you can have non-hierarchical relationships.  Unfortunately, all of this
> > work was shut down about a year ago.
> >
> > *If I had to do it again....*
> >
> > I still think a wiki/Facebook-like user experience, enhanced with
> > semantics, will provide superior capabilities and user satisfaction.
> > Thus, if I were to embark on the same journey today, I would build a
> > wiki application on top of TBL.  If you think about it, there are only a
> > couple of key capabilities missing.
> >
> > First, I'd find a good web-based text editor.  CKEditor comes to
> > mind. [1]
> > Second, I'd create an SWP-based form system for TBC/L.
> > Third, I'd find a way to build a file upload capability into TBC/L.
> > This could be a connector from TBL to SharePoint, Drupal, or other
> > content management systems, or it could be a generic upload directly to
> > the TBL server.
> >
> > And finally, I'd create the ontologies that will provide the structure
> > (and navigation/search) for the wiki.  You would declare what instances
> > and classes should have wiki pages.
> >
> > Maybe some day I will find the budget to pull this one together!
> >
> > Tim
> >
> > [1] http://ckeditor.com/
> >
> > On Sun, Jan 8, 2012 at 7:14 PM, Bruce Whealton <[email protected]> wrote:
> >
> > > Bob,
> > >      Thanks so much for your help and explanations.  I might have to
> > > refer some of these questions to the Semantic MediaWiki mailing list
> > > and "hope" someone responds.
> > > I don't always get feedback when I ask a question.
> > > My initial thought, though, was that to make the application
> > > attractive, we should have lots of data in the wiki.  Maybe if I
> > > create an application using TBC that uses the wiki data, then I can
> > > access other genealogy data from other triple stores - there is a big
> > > linked data collection for the Goodwin family, all in RDF - it is one
> > > of the big items on the linked data cloud diagram.  I'd like to see
> > > if my efforts could show up there.
> >
> > > SMW does have a maintenance function for exporting the entire wiki as
> > > RDF.  When I looked at a single page of it, I believe I got valid
> > > RDF/XML.  That being the case, I don't think I have to worry about
> > > how the system models triples in MySQL.
> >
> > > I believe the PHP library called ARC does provide a PHP-based RDF
> > > interface and SPARQL endpoint.
> >
> > > I remember a presentation I saw on publishing linked data that tried
> > > to sell everyone on publishing both HTML and RDF versions of each
> > > page on the web.  This leads into my second posting here, about the
> > > value for an average person in having their website in RDFa format -
> > > the idea that somehow this will benefit them.  Maybe we just need
> > > more data out there, and search engines that use Semantic Web data in
> > > conducting searches.
> > > Thanks,
> > > Bruce
> >
> > > On Jan 5, 12:42 pm, Bob DuCharme <[email protected]> wrote:
> > > > Bruce,
> >
> > > > Semantic MediaWiki uses MySQL, but not as a triplestore per se
> > > > (i.e. with the Jena SDB interface), so the only way for TopBraid to
> > > > use that data directly would be with the D2RQ interface, and then
> > > > you'd have to figure out how the system models triples in MySQL.
> > > > http://semantic-mediawiki.org/wiki/Help:Using_SPARQL_and_RDF_stores
> > > > describes more about the potential relationship of a Semantic
> > > > MediaWiki to RDF triplestores.
> >
> > > > Because RDF data in a semantic mediawiki can be accessed with a
> > > > URL, it's easy to incorporate it into a TopBraid application.
> > > > http://semantic-mediawiki.org/wiki/Help:RDF_export describes how to
> > > > build these URLs.  For example, to get RDF about the User Manual
> > > > page on that site, you would use
> > > > http://semantic-mediawiki.org/wiki/Special:ExportRDF/User_manual.
> > > > You can use a URL like this in TopBraid to incorporate that data
> > > > into an application; for example, on the TBC Import view, you can
> > > > click the "Import from URL" button (with the + over a globe) and
> > > > enter this URL, or a SPARQLMotion script could retrieve the data
> > > > with an "Import RDF from URL" module.
> >
> > > > To address your second point, a SPARQL endpoint is a running
> > > > service that accepts requests using the SPARQL Protocol
> > > > (http://www.w3.org/TR/2008/REC-rdf-sparql-protocol-20080115/).  The
> > > > actual protocol for delivering a query to an endpoint is something
> > > > for applications like TopBraid Composer and ARQ to worry about.
> > > > ARQ itself does not act as a SPARQL endpoint, but reads local or
> > > > remote data and runs the query that you specify with the --query
> > > > command line parameter against it.  ARQ and TopBraid's support of
> > > > the SPARQL 1.1 SERVICE keyword is what lets them talk to SPARQL
> > > > endpoints, but talking to these endpoints is different from reading
> > > > a dataset (typically, a file) and running a query against it.
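
To make the protocol point concrete: a SPARQL Protocol GET request is, at bottom,
just an HTTP request carrying the query text in a `query` parameter. A minimal
Python sketch, using a placeholder endpoint URL:

```python
from urllib.parse import urlencode

def sparql_query_url(endpoint: str, query: str) -> str:
    """Build a SPARQL Protocol GET request URL: the query text is
    percent-encoded into the 'query' parameter."""
    return endpoint + "?" + urlencode({"query": query})

# Placeholder endpoint URL; any SPARQL Protocol endpoint accepts this form.
url = sparql_query_url(
    "http://example.org/sparql",
    "SELECT ?s WHERE { ?s ?p ?o } LIMIT 10",
)
print(url)
```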
> >
> > > > TBC also lets you enter queries to run against remote or local
> > > > datasets that are basically treated as files to read and not as
> > > > services delivering data according to a specific protocol.  The
> > > > TopBraid Live Personal Edition that is included with the TBC
> > > > Maestro edition, on the other hand, can function as a SPARQL
> > > > endpoint service; Scott's blog entry at
> > > > http://topquadrantblog.blogspot.com/2010/05/how-to-publish-your-linke...
> > > > describes how to pass queries to this service.
> >
> > > > Regarding your last question, for more general questions about the
> > > > semantic web, I recommend
> > > > http://lists.w3.org/Archives/Public/semantic-web/ and especially
> > > > http://answers.semanticweb.com/, where Scott and I are both
> > > > regulars.
> >
> > > > Bob
> >
> > > > On Wed, Jan 4, 2012 at 6:31 PM, Bruce Whealton
> > > > <[email protected]> wrote:
> >
> > > > > Hello,
> > > > >         I seem to remember reading about using TBC or the
> > > > > Ensemble Suite with Semantic Wikis.  I'm not sure how that would
> > > > > work.  I use MediaWiki and the Semantic MediaWiki bundle, which
> > > > > can expose data as RDF when configured correctly.  It will either
> > > > > produce RDF output from a page, or you can create an RDF dump of
> > > > > the entire wiki.  I have a genealogy project I am working on, and
> > > > > I'm curious how TBC could fit into the project and be used.  Of
> > > > > course, each tool uses different ways of running queries,
> > > > > presenting data, etc.  MediaWiki is based on a MySQL relational
> > > > > database.  So, I don't know if an application developed with TBC
> > > > > would exist separately and use the data from the wiki or from the
> > > > > wiki database, or if there are ways to somehow integrate the two.
> > > > > Has anyone done anything like this, using some kind of web-based
> > > > > semantic application and TBC?
> > > > >           I have been exploring the use of the Semantic Web for
> > > > > genealogy as a separate topic, and it would be nice if people
> > > > > doing anything in this area would share their data so that one
> > > > > could query it - that is what makes the Semantic Web unique
> > > > > compared with previous approaches based on silos of data.  I
> >
> > ...
> >
>
> --
> You received this message because you are subscribed to the Google
> Group "TopBraid Suite Users", the topics of which include TopBraid
> Composer,
> TopBraid Live, TopBraid Ensemble, SPARQLMotion and SPIN.
> To post to this group, send email to
> [email protected]
> To unsubscribe from this group, send email to
> [email protected]
> For more options, visit this group at
> http://groups.google.com/group/topbraid-users?hl=en
>
