Re: Generate RDFa with Epiphany
Hi Benjamin,

Nice - can you create GoodRelations (http://purl.org/goodrelations/) patterns in RDFa for existing shop pages, e.g. identify price and product information? Just spotting the product name, description, EAN/UPC code, and price would already be very valuable.

Best
Martin

PS: Did you see the Amazon sponger data, e.g. at http://linkeddata.uriburner.com/about/html/http://linkeddata.uriburner.com/about/id/entity/http/www.amazon.com/exec/obidos/ASIN/0596518552

Benjamin Adrian wrote:
> Hi everyone!
>
> Let me introduce the RDFa annotator Epiphany: it uses configurable, domain-specific Linked Data to enrich web pages with RDFa annotations, automatically. These annotations link text passages to instances inside the Linked Data model. Hovering over an annotation with your mouse opens a lightbox with additional information from the RDF graph behind the instance's HTTP URI.
>
> Epiphany runs at: http://projects.dfki.uni-kl.de/epiphany/ On the top right you'll find an example. Under http://projects.dfki.uni-kl.de/epiphany/form you can write your own text and receive RDFa content.
>
> Currently, the underlying Linked Data model is a subset of DBpedia covering German politics. In later versions you will be able to upload or link your own Linked Data model to annotate web pages with your own domain-specific RDFa.
>
> Please don't hesitate to give me your comments :). The Twitter hashtag is #RDFEPIPHANY.
>
> Regards,
> Ben

--
martin hepp
e-business web science research group
universitaet der bundeswehr muenchen
e-mail: h...@ebusiness-unibw.org
phone: +49-(0)89-6004-4217
fax: +49-(0)89-6004-4620
www: http://www.unibw.de/ebusiness/ (group)
     http://www.heppnetz.de/ (personal)
skype: mfhepp
twitter: mfhepp

Check out GoodRelations for E-Commerce on the Web of Linked Data!
Project page: http://purl.org/goodrelations/
Resources for developers: http://www.ebusiness-unibw.org/wiki/GoodRelations
Webcasts:
  Overview - http://www.heppnetz.de/projects/goodrelations/webcast/
  How-to - http://vimeo.com/7583816
Recipe for Yahoo SearchMonkey: http://www.ebusiness-unibw.org/wiki/GoodRelations_and_Yahoo_SearchMonkey
Talk at the Semantic Technology Conference 2009: "Semantic Web-based E-Commerce: The GoodRelations Ontology"
  http://www.slideshare.net/mhepp/semantic-webbased-ecommerce-the-goodrelations-ontology-1535287
Overview article on Semantic Universe:
  http://www.semanticuniverse.com/articles-semantic-web-based-e-commerce-webmasters-get-ready.html
Tutorial materials: ISWC 2009 Tutorial "The Web of Data for E-Commerce in Brief: A Hands-on Introduction to the GoodRelations Ontology, RDFa, and Yahoo! SearchMonkey"
  http://www.ebusiness-unibw.org/wiki/Web_of_Data_for_E-Commerce_Tutorial_ISWC2009
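As a concrete illustration of the pattern Martin is asking for, here is a small sketch that renders a GoodRelations RDFa block for one shop offering (name, description, EAN, price). The property names (gr:name, gr:description, gr:hasEAN_UCC-13, gr:hasPriceSpecification, gr:hasCurrencyValue) come from the GoodRelations v1 vocabulary; the shop URI and all values are invented for the example.

```python
# Sketch: emit the kind of GoodRelations RDFa markup described above for an
# existing shop page. Property names follow GoodRelations v1; the offer URI
# and product data are hypothetical placeholders.

GR_TEMPLATE = """\
<div xmlns:gr="http://purl.org/goodrelations/v1#"
     xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
     about="{offer_uri}" typeof="gr:Offering">
  <span property="gr:name">{name}</span>
  <span property="gr:description">{description}</span>
  <span property="gr:hasEAN_UCC-13" content="{ean}"></span>
  <div rel="gr:hasPriceSpecification">
    <div typeof="gr:UnitPriceSpecification">
      <span property="gr:hasCurrency" content="{currency}"></span>
      <span property="gr:hasCurrencyValue" datatype="xsd:float">{price}</span>
    </div>
  </div>
</div>"""

def goodrelations_rdfa(offer_uri, name, description, ean, price, currency="EUR"):
    """Render a minimal RDFa block describing one shop offering."""
    return GR_TEMPLATE.format(offer_uri=offer_uri, name=name,
                              description=description, ean=ean,
                              price=price, currency=currency)

print(goodrelations_rdfa("http://example.org/shop/offer/42",
                         "Example Widget",
                         "A widget sold purely as an example.",
                         "4012345678901", "19.99"))
```

An RDFa-aware crawler reading a page containing this block would extract the offering's name, EAN, and price as triples, which is exactly the "spotting" Martin describes.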
Re: [pedantic-web] ANN: 20th Century Press Archives as ORE / Linked Data application - Technical Preview
On Mon, Dec 28, 2009 at 8:07 AM, Neubert Joachim <j.neub...@zbw.eu> wrote:
> Please feel invited to take a look at it - we would highly appreciate any feedback about our approach.

Thanks for announcing this, Joachim. It is great to see more linked data as RDFa getting out on the web. I'm particularly excited because of your use of the OAI-ORE vocabulary to make historic newspaper archives available, since we are doing something similar at the Library of Congress [1].

You must've done something right, because I just wrote a little naive crawler [2] in a matter of minutes to pull down what looks like all the RDFa you've put out there so far. It seems to have collected about 11,427 triples [3]. My rdfsum unix command line hack [4] came up with these rdf:type counts:

  1533 http://www.openarchives.org/ore/terms/AggregatedResource
   526 http://www.openarchives.org/ore/terms/ResourceMap
   526 http://www.openarchives.org/ore/terms/Aggregation
   336 http://zbw.eu/namespaces/skos-extensions/PmPage
   185 http://purl.org/ontology/bibo/Article
     2 http://zbw.eu/namespaces/skos-extensions/PmPersonFolder
     2 http://zbw.eu/namespaces/skos-extensions/PmCollection

Does that sound about right for this initial release?

I noticed that you have chosen to link to names in the German National Authority File like:

  <http://zbw.eu/beta/pm20/person/00012> dct:subject <http://d-nb.info/gnd/118646419> .

I seem to remember hearing at SWIB09 [5] that the Deutsche Nationalbibliothek was thinking about minting URIs for entries in the authority file that follow Linked Data best practices (hash or 303, etc.). Were you planning on modifying these appropriately when those URLs become available? Right now the d-nb URL returns 200 OK, and it isn't a hash URI. Theoretically it would be pretty easy to layer some RDFa into the page at d-nb that describes:

  http://d-nb.info/gnd/118646419#person

But I realize this is somewhat out of your control.
I guess it would also be possible to create a partial PURL [6] for http://d-nb.info/gnd/ that would redirect, since I think the new PURL software supports 303.

I was also interested to see that you have published some SKOS extensions [7] that are used to type each ore:Aggregation as a specialization of skos:Concept:

  <http://zbw.eu/beta/pm20/person/00012> a ore:Aggregation,
          <http://zbw.eu/namespaces/skos-extensions/PmPersonFolder> ;
      skos:prefLabel "Abbe, Ernst; 1840-1905 (PM20 Personenarchiv)"@de,
          "Abbe, Ernst; 1840-1905 (PM20 Persons Archives)"@en .

It looks like the RDF that comes back for your skos-extensions vocabulary (nice hack with the RDF validator, btw) doesn't define PmPersonFolder -- but perhaps I missed it? I'm guessing from the skos:prefLabel assertion that PmPersonFolder is a specialization of skos:Concept?

Would it be OK for me to experiment with pulling down the aggregated resource bitstreams (jpg, etc.) and storing them on disk? It would just be a single-threaded little script. Part of the rationale behind the ORE use at LC [1] is to foster LOCKSS [8] scenarios where digital objects are easier to meaningfully harvest.

Anyhow, I have rattled on enough for now, I suppose -- I mainly wanted to say how exciting it was to see your announcement, being from the digital library tribe in the linked data community :-)

//Ed

[1] http://chroniclingamerica.loc.gov
[2] http://inkdroid.org/bzr/ptolemy/crawl.py
[3] http://inkdroid.org/data/pm20.txt
[4] http://inkdroid.org/bzr/bin/rdfsum
[5] http://www.swib09.de/
[6] http://purl.org
[7] http://zbw.eu/namespaces/skos-extensions/
[8] http://en.wikipedia.org/wiki/LOCKSS
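The rdf:type tally Ed quotes is easy to reproduce. Here is a minimal sketch of what a tool like his rdfsum hack presumably does over an N-Triples dump: count how often each URI appears as the object of rdf:type. The parsing is deliberately naive (whitespace split), which is enough for all-URI rdf:type statements like the ones above; the sample data is taken from the email.

```python
# Tally rdf:type objects in N-Triples input, the way Ed's per-type counts
# above were produced. Naive whitespace parsing; fine for all-URI triples.
from collections import Counter

RDF_TYPE = "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>"

def type_counts(ntriples_lines):
    """Return a Counter mapping rdf:type object URIs to their frequency."""
    counts = Counter()
    for line in ntriples_lines:
        parts = line.split()
        if len(parts) >= 4 and parts[1] == RDF_TYPE:
            counts[parts[2].strip("<>")] += 1
    return counts

sample = [
    "<http://zbw.eu/beta/pm20/person/00012> "
    "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type> "
    "<http://www.openarchives.org/ore/terms/Aggregation> .",
    "<http://zbw.eu/beta/pm20/person/00012> "
    "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type> "
    "<http://zbw.eu/namespaces/skos-extensions/PmPersonFolder> .",
]
for uri, n in type_counts(sample).most_common():
    print(n, uri)
```

Run against the full 11,427-triple crawl, this yields exactly the kind of per-type breakdown shown in the message.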
Fwd: ISBNs, owl:sameAs, etc
Psst, Chris, Tobias - any chance of RDFBookMashup rendering 'owl:sameAs <urn:isbn:12434567>'? I might see if I can glue Freebase's 1.8 million or so ISBNs onto RDFBookMashup.

-- Forwarded message --
From: Daniel O'Connor <daniel.ocon...@gmail.com>
Date: Tue, Dec 29, 2009 at 2:12 PM
Subject: ISBNs, owl:sameAs, etc
To: Discussion list for Freebase Experts <freebase-expe...@freebase.com>

I don't suppose anyone wants to mint a whole bunch of URNs for ISBNs via a quick Acre application? I'm upset that

  http://sameas.org/html?uri=urn%3Aisbn%3A9780670063260&x=0&y=0

doesn't give me

  http://www.freebase.com/view/soft/isbn/9780670063260/best
  http://www.sandbox-freebase.com/view/soft/isbn/9780670063260/best

(or its RDF friends) :( WOE.
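The linkage Daniel is asking for is just one owl:sameAs triple per book. A small sketch, assuming the urn:isbn URN scheme (RFC 3187) and an invented book URI (the real RDFBookMashup/Freebase URI patterns differ); the ISBN-13 check-digit test guards against minting URNs for typos:

```python
# Sketch: mint an owl:sameAs N-Triples statement linking a book URI to its
# urn:isbn URN, as discussed above. The book URI here is illustrative only.

def isbn13_is_valid(isbn):
    """Check the ISBN-13 check digit (digit weights alternate 1, 3)."""
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    total = sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits))
    return total % 10 == 0

def sameas_triple(book_uri, isbn):
    """Render one owl:sameAs N-Triples statement for a valid ISBN-13."""
    if not isbn13_is_valid(isbn):
        raise ValueError("bad ISBN-13 check digit: %s" % isbn)
    return ("<%s> <http://www.w3.org/2002/07/owl#sameAs> <urn:isbn:%s> ."
            % (book_uri, isbn))

# The ISBN from the sameas.org URL above happens to have a valid check digit.
print(sameas_triple("http://example.org/books/9780670063260", "9780670063260"))
```

With sameas.org (or any co-reference service) aware of such triples, a lookup on the URN would surface the Freebase view pages Daniel mentions.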
Re: ISBNs, owl:sameAs, etc
On Tue, Dec 29, 2009 at 4:47 AM, Daniel O'Connor <daniel.ocon...@gmail.com> wrote:
> Psst, Chris, Tobias - any chance of RDFBookMashup rendering 'owl:sameAs <urn:isbn:12434567>'? I might see if I can glue Freebase's 1.8 million or so ISBNs onto RDFBookMashup.

It's probably common knowledge, but there are a few scripts here - http://wiki.foaf-project.org/w/DanBri/WikipediaISBNs - for extracting ISBNs from Wikipedia dumps. It found about half a million last time I tried.

Dan

-- Forwarded message --
From: Daniel O'Connor <daniel.ocon...@gmail.com>
Date: Tue, Dec 29, 2009 at 2:12 PM
Subject: ISBNs, owl:sameAs, etc
To: Discussion list for Freebase Experts <freebase-expe...@freebase.com>

I don't suppose anyone wants to mint a whole bunch of URNs for ISBNs via a quick Acre application? I'm upset that

  http://sameas.org/html?uri=urn%3Aisbn%3A9780670063260&x=0&y=0

doesn't give me

  http://www.freebase.com/view/soft/isbn/9780670063260/best

(or its RDF friends) :( WOE.
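Dan's wiki page isn't reproduced here, but the core of that kind of dump extraction is a regex pass over the wikitext: Wikipedia marks ISBNs with the "ISBN" magic word, so matching that prefix and stripping hyphens and spaces gets most of the way. A minimal sketch (no check-digit validation, so some false positives survive):

```python
# Sketch of ISBN extraction from raw wikitext, in the spirit of the scripts
# Dan links. Matches "ISBN 0-596-51855-2"-style magic-word usage; candidates
# are normalized but not checksum-validated.
import re

ISBN_RE = re.compile(r"ISBN[\s:]*([0-9][0-9Xx\- ]{8,16}[0-9Xx])")

def extract_isbns(text):
    """Return normalized (hyphen/space-free, uppercase) ISBN candidates."""
    return [re.sub(r"[\- ]", "", m).upper() for m in ISBN_RE.findall(text)]

sample = ("See ISBN 0-596-51855-2 (Programming the Semantic Web) "
          "and ISBN 978-0-670-06326-0.")
print(extract_isbns(sample))  # ['0596518552', '9780670063260']
```

Running something like this over a full dump, then feeding the survivors through the owl:sameAs minting step Daniel proposes, is one plausible route to the half-million-link dataset Dan mentions.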