I understand what you mean about needing the funds or budget to do what you propose. The software involved seems not to be targeted toward the small-business web developer who wants to produce a great application.
I did have a problem with the "everything is a page" model in MediaWiki. When you use RDFa or something similar, you can mark up a section of a page and assign it to a class. With MW or SMW, by contrast, one must create a separate page for each resource. Even though I might be describing a Person in a genealogy application, if I want to record the marriage details using the biographical vocabulary, I have to create a second page for the marriage and then use an inline query to display that information on the Person's page.

I had no idea that there were these problems with SMW. I thought the RDF it created was standards compliant, even though it uses internal IDs and namespaces for resources. I also spent time developing forms with imported vocabularies: defining pages to describe properties, then creating a Template, and then a Form. I was paying someone to use a Perl script to convert a GEDCOM to RDF with the proper namespaces, using the vocabularies I chose. I thought that by importing many different individuals, or large GEDCOM files, this would become a more attractive application for people who are into genealogy. There is an RDFIO extension that is supposed to handle that import, but we haven't gotten it finished yet; I just have a successful transformation of GEDCOM to valid RDF.

So, what should I do? Would you have any suggestions? It will take a while to come up with the funds for the full Maestro edition of TBC, and even the standard version doesn't allow one to create rich interactive applications. In the meantime, I will have both a wiki populated with RDF data and the RDF file for each GEDCOM; I'm looking at that now in TBC. I looked at Drupal, but it seemed to lack something, maybe an easy way to output the entire site, or a range of pages/articles, as an RDF dump. MediaWiki forms do provide an easy, user-friendly way to let people enter information.
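For illustration, here is a minimal sketch of the kind of GEDCOM-to-RDF transformation I have in mind. The real conversion was done with a Perl script; this Python version, the example.org namespace, and the gedcom_to_ntriples() helper are just hypothetical stand-ins, handling only level-0 INDI records and their NAME lines:

```python
# Hypothetical sketch: turn a fragment of GEDCOM into N-Triples.
# The namespaces and helper name are illustrative, not the actual script.

FOAF = "http://xmlns.com/foaf/0.1/"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
BASE = "http://example.org/genealogy/"   # hypothetical base namespace

def gedcom_to_ntriples(gedcom_text):
    """Parse level-0 INDI records and their NAME lines into N-Triples."""
    triples = []
    subject = None
    for line in gedcom_text.splitlines():
        parts = line.strip().split(" ", 2)
        if len(parts) == 3 and parts[0] == "0" and parts[2] == "INDI":
            xref = parts[1].strip("@")        # e.g. "@I1@" -> "I1"
            subject = f"<{BASE}{xref}>"
            triples.append(f"{subject} <{RDF_TYPE}> <{FOAF}Person> .")
        elif subject and len(parts) == 3 and parts[0] == "1" and parts[1] == "NAME":
            # GEDCOM wraps the surname in slashes: "John /Smith/"
            name = parts[2].replace("/", "").strip()
            triples.append(f'{subject} <{FOAF}name> "{name}" .')
    return "\n".join(triples)

sample = """0 @I1@ INDI
1 NAME John /Whealton/
0 @I2@ INDI
1 NAME Mary /Smith/"""

print(gedcom_to_ntriples(sample))
```

A full conversion would of course also map events such as births and marriages (e.g. with the bio vocabulary), but that is the basic shape of it.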
Thanks,
Bruce

On Jan 9, 8:50 am, Tim Smith <[email protected]> wrote:
> Hi Bruce,
>
> A year or so ago, I was an active user of both TBC/L and SMW. I wanted to
> do much of what you indicate in your messages below. Overall, I wanted the
> standards compliance (SPARQL, Linked Data, triples, etc.) and processing
> power of TBC/L behind an accepted wiki user interface like MediaWiki/SMW.
> The goal was to provide a low-threshold, familiar interface to enable
> people to create semantic data, i.e. populate an ontology, without the
> complexity of many of today's ontology tools.
>
> However, I eventually moved away from SMW due to its lack of support for
> the semantic web standards. Here are a couple of the issues I faced:
>
> *Import/Export of Triples from SMW:*
>
> I attempted to build ontologies in TBC and load them into SMW. However,
> the importer had serious issues when my ontologies had overlap with the SMW
> default ontologies. SMW did not recognize the concept of named graphs.
>
> Use of the Triple Store Connector, and now the 4Store triple store, does
> allow you to access some of the triples externally.
>
> *SPARQL Support*
>
> There is an extension that enables SPARQL in place of ASK inside the wiki.
> This is pretty good overall, except that the SMW notion of "Internal
> Objects", known to those on this list as simple bnodes, was not accessible
> via a SPARQL query. This was a deal breaker for me, not to mention very
> surprising...
>
> *Everything is a Page!*
>
> The notion in SMW is that everything is a wiki page. Thus I found myself
> creating pages for instances such as "high", "medium", and "low" in order
> to have these to pick from across multiple semantic forms. This caused
> excessive wiki bloat and overall made things difficult to manage.
> In addition, all property assignments had to appear as wiki text on the page,
> so you had to edit the page to add a property relationship - not a big deal
> if you are doing it manually for a few instances, but if you want to do it
> programmatically you have to parse the page and insert the text - not
> exactly like a SPARQL INSERT/CONSTRUCT!
>
> *Semantic Forms*
>
> SMW Semantic Forms is one of the better features. The creator, Yaron
> Koren, has done a tremendous amount of work and covers a huge range of use
> cases with his forms system. However, it is cumbersome to create and
> maintain the forms. I've been trying to get him to move his form definition
> into an ontology so forms could be created automatically, but he is not
> interested. In addition, form assignment is done on a page-by-page basis
> with no connection to the underlying ontology. Thus if you import a bunch
> of instances you will have to edit each page to assign the form. Try that
> for 2000 instances!
>
> The above issues pretty much ended my foray into using SMW as a
> low-threshold front end for creating semantic data/information graphs.
>
> However, I moved on to create a connector between TBC/L and the Confluence
> wiki (by Atlassian). This connector would create a wiki page structure in
> Confluence based on an ontology (with instances) in TBC/L. The page
> linkages in Confluence are live connections to TBL, so wiki navigation is
> done via the ontology. This is much more efficient, since you can have
> non-hierarchical relationships. Unfortunately, all of this work was
> shut down about a year ago.
>
> *If I had to do it again...*
>
> I still think a wiki/Facebook-like user experience, enhanced with semantics,
> will provide superior capabilities and user satisfaction. Thus, if I were
> to embark on the same journey today, I would build a wiki application on
> top of TBL. If you think about it, there are only a couple of key
> capabilities missing.
> First, I'd find a good web-based text editor. CKEditor comes to mind. [1]
> Second, I'd create an SWP-based form system for TBC/L.
> Third, I'd find a way to build a file upload capability into TBC/L. This
> could be a connector from TBL to SharePoint, Drupal, or other content
> management systems, or it could be a generic upload directly to the TBL
> server.
>
> And finally, I'd create the ontologies that will provide the structure (and
> navigation/search) for the wiki. You would declare which instances and
> classes should have wiki pages.
>
> Maybe some day I will find the budget to pull this one together!
>
> Tim
>
> [1] http://ckeditor.com/
>
> On Sun, Jan 8, 2012 at 7:14 PM, Bruce Whealton <[email protected]> wrote:
>
> > Bob,
> > Thanks so much for your help and explanations. I might have to
> > refer some of these questions to the Semantic MediaWiki mailing list
> > and "hope" someone responds. I don't always get feedback when I ask
> > a question.
> > My initial thought, though, was that to make the application attractive,
> > we should have lots of data in the wiki. Maybe if I create an
> > application using TBC that uses the wiki data, then I can access other
> > genealogy data from other triple stores - there is a big linked data
> > collection for the Goodwin family, all in RDF - it is one of the big
> > items on the linked data cloud or diagram. I'd like to see if my
> > efforts could show up there.
> >
> > SMW does have a maintenance function for exporting the entire wiki as
> > RDF. When I looked at a single page of it, I believe I got valid
> > RDF/XML. That being the case, I don't have to worry about how the
> > system models triples in MySQL, I don't think.
> >
> > I believe the PHP library called ARC does provide a PHP-based RDF
> > interface and SPARQL endpoint.
> > I remember a presentation that I saw on publishing linked data that
> > tried to sell everyone on publishing both HTML and RDF versions of
> > each page on the web, but this leads into my second posting here,
> > which deals with the value for an average person in having their
> > website in RDFa format - that somehow this will benefit them - but
> > maybe we need more data out there and search engines that use
> > Semantic Web data in conducting searches.
> > Thanks,
> > Bruce
> >
> > On Jan 5, 12:42 pm, Bob DuCharme <[email protected]> wrote:
> > > Bruce,
> > >
> > > Semantic MediaWiki uses MySQL, but not as a triplestore per se (i.e. with
> > > the Jena SDB interface), so the only way for TopBraid to use that data
> > > directly would be with the D2RQ interface, and then you'd have to figure
> > > out how the system models triples in MySQL.
> > > http://semantic-mediawiki.org/wiki/Help:Using_SPARQL_and_RDF_stores describes
> > > more about the potential relationship of a Semantic MediaWiki to
> > > RDF triplestores.
> > >
> > > Because RDF data in a semantic mediawiki can be accessed with a URL, it's
> > > easy to incorporate it into a TopBraid application.
> > > http://semantic-mediawiki.org/wiki/Help:RDF_export describes how to build
> > > these URLs. For example, to get RDF about the User Manual page on that
> > > site, you would use
> > > http://semantic-mediawiki.org/wiki/Special:ExportRDF/User_manual. You can
> > > use a URL like this in TopBraid to incorporate that data into an
> > > application; for example, on the TBC Import view, you can click the
> > > "Import from URL" button (with the + over a globe) and enter this URL,
> > > or a SPARQLMotion script could retrieve the data with an "Import RDF
> > > from URL" module.
> > >
> > > To address your second point, a SPARQL endpoint is a running service that
> > > accepts requests using the SPARQL Protocol
> > > (http://www.w3.org/TR/2008/REC-rdf-sparql-protocol-20080115/).
> > > The actual
> > > protocol for delivering a query to an endpoint is something for
> > > applications like TopBraid Composer and ARQ to worry about. ARQ itself
> > > does not act as a SPARQL endpoint, but reads local or remote data and
> > > runs the query that you specify with the --query command line parameter
> > > against it. ARQ's and TopBraid's support of the SPARQL 1.1 SERVICE
> > > keyword is what lets them talk to SPARQL endpoints, but talking to
> > > these endpoints is different from reading a dataset (typically, a file)
> > > and running a query against it.
> > >
> > > TBC also lets you enter queries to run against remote or local datasets
> > > that are basically treated as files to read, not as services delivering
> > > data according to a specific protocol. The TopBraid Live Personal Edition
> > > that is included with the TBC Maestro edition, on the other hand, can
> > > function as a SPARQL endpoint service; Scott's blog entry at
> > > http://topquadrantblog.blogspot.com/2010/05/how-to-publish-your-linke...
> > > explains how to pass queries to this service.
> > >
> > > Regarding your last question, for more general questions about the
> > > semantic web, I recommend http://lists.w3.org/Archives/Public/semantic-web/
> > > and especially http://answers.semanticweb.com/, where Scott and I are
> > > both regulars.
> > >
> > > Bob
> > >
> > > On Wed, Jan 4, 2012 at 6:31 PM, Bruce Whealton <[email protected]> wrote:
> > > > Hello,
> > > > I seem to remember reading about using TBC or the Ensemble Suite
> > > > with Semantic Wikis. I'm not sure how that would work. I use MediaWiki
> > > > and the Semantic MediaWiki Bundle, which does expose data as RDF, or
> > > > it can when configured correctly. It will either produce RDF output
> > > > from a page, or you can create an RDF dump of the entire wiki. I have
> > > > a genealogy project I am working on, and I'm curious how TBC could
> > > > fit into the project and be used.
> > > > Of course, each tool uses different ways of running queries,
> > > > presenting data, etc. MediaWiki is based on a MySQL relational
> > > > database. So, I don't know if an application developed with TBC would
> > > > exist separately and use the data from the wiki, or from the wiki
> > > > database, or if there are ways to somehow integrate the two. Has
> > > > anyone done anything like this, using some kind of web-based semantic
> > > > application and TBC?
> > > > I have been exploring the use of the Semantic Web for genealogy
> > > > as a separate topic, and it would be nice if people who are doing
> > > > anything in this area would share their data, so that one could query
> > > > the data that makes the Semantic Web unique from previous approaches
> > > > based on silos of data. I
> > ...

--
You received this message because you are subscribed to the Google Group "TopBraid Suite Users", the topics of which include TopBraid Composer, TopBraid Live, TopBraid Ensemble, SPARQLMotion and SPIN.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [email protected]
For more options, visit this group at http://groups.google.com/group/topbraid-users?hl=en
