Re: Metadata about single triples

2012-02-22 Thread Bob Ferris

Hi Carsten,

On 2/22/2012 12:02 PM, Carsten Keßler wrote:

Dear LODers,

we are currently working on a project for the United Nations Office
for the Coordination of Humanitarian Affairs (OCHA) in Geneva to
develop a Humanitarian Exchange Language (HXL). Some information about
the project is available at https://sites.google.com/site/hxlproject/.

One of the core components of HXL will be an RDF vocabulary to
annotate the data that are exchanged between humanitarian
organizations. The current draft is available at
http://hxl.humanitarianresponse.info. It is far from complete, but I
think it already shows where we want to go with this. Any feedback on
the vocabulary draft is very welcome, of course.


At first glance, your ontology looks very interesting and well designed.



The aspect we are currently working on is a metadata section that will
include classes and properties to state who has reported a certain
piece of information, when it was reported, whether it was approved
(and at which level), and so forth. The current idea is to create
named graphs that can be described by these metadata elements. I'd
like to hear your comments on this approach, since this will lead to a
situation where we can have the same triple in several named graphs.
For example, graph A with all data reported on January 20, 2012 by an
OCHA information officer in Sudan, graph B with all data approved by
the OCHA regional office on January 21, and graph C with all data
approved by OCHA in Geneva on January 22. The rationale is to be able
to query based on these metadata elements via SPARQL, e.g., give me
all figures about refugees in Sudan from January 2012 approved by OCHA
Geneva. Note that the regional office may only approve some of the
triples originally reported, and OCHA Geneva may only approve a subset
of those approved by the regional office. So basically we need to be
able to attach those metadata elements to every single triple.

We will probably run into a situation where we can have the same
triple in 10–20 graphs at the same time. Likewise, we will have a
pretty large number of named graphs in our store, and I'd like to know
whether you think this approach is problematic (e.g. in terms of query
performance), and whether you see an alternative approach?
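The metadata-driven selection described above can be pictured with a minimal in-memory sketch (plain Python, no triple store; all graph names, vocabulary terms, and figures are made-up illustrations):

```python
# Minimal sketch: each named graph carries a metadata record, and the
# very same triple may be a member of several graphs at once (here:
# reported in graph A, then approved in graphs B and C).
# Graph names, properties, and numbers are hypothetical.

refugee_triple = ("ex:sudanCamp1", "ex:refugeeCount", "12000")

graphs = {
    "graph:A": {"meta": {"reportedBy": "OCHA field officer, Sudan",
                         "date": "2012-01-20"},
                "triples": {refugee_triple}},
    "graph:B": {"meta": {"approvedBy": "OCHA regional office",
                         "date": "2012-01-21"},
                "triples": {refugee_triple}},
    "graph:C": {"meta": {"approvedBy": "OCHA Geneva",
                         "date": "2012-01-22"},
                "triples": {refugee_triple}},
}

def triples_approved_by(approver):
    """Collect triples from every graph whose metadata names this approver."""
    result = set()
    for graph in graphs.values():
        if graph["meta"].get("approvedBy") == approver:
            result |= graph["triples"]
    return result

# The same triple is reachable via the metadata of graphs B and C.
geneva_approved = triples_approved_by("OCHA Geneva")
```

In a real store the same selection would be a SPARQL query with a GRAPH pattern joined against graph-level metadata; the sketch only shows why the same triple ends up duplicated across many graphs.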


I have given some thought to this topic in the past as well. It is 
also a topic of the current RDF WG Graphs TF (see [1]).
I think you have pinpointed exactly the problems with duplicated triples 
and single-triple named graphs. So there might be the (rather old) need 
for statement identifiers, i.e., a URI (or maybe also a bnode) for 
identifying a single triple and for describing external context 
information. You can find my proposal at the RDF WG comments mailing 
list, see [2].


Cheers,


Bob


[1] http://www.w3.org/2011/rdf-wg/wiki/TF-Graphs
[2] 
http://lists.w3.org/Archives/Public/public-rdf-comments/2011Jan/0001.html







Re: Metadata about single triples

2012-02-22 Thread Bob Ferris

Hi Carsten,

On 2/22/2012 5:51 PM, Carsten Keßler wrote:

Hi Bob,


At first glance, your ontology looks very interesting and well designed.


thanks, we are doing our best ;)


So there might be the (rather old) need for
statement identifiers, i.e., a URI (or maybe also a bnode) for identifying a
  single triple and to be able to describe external context information. You
can find my proposal at the RDF WG comments mailing list, see [2].


Thanks for these pointers. These ideas all make sense; however, we are
also concerned about the implications in practice, i.e., we have to
make sure that whatever approach we pick is supported by triple
stores. I do see that your proposal of having an identifier makes
sense at a conceptual level, but in practice, does it matter if we end
up with named graphs that may only contain a single triple in some
cases? I don't think it does, but maybe I'm missing something.


I think it matters, because named graphs unnecessarily fragment complex 
descriptions into (very) small pieces due to their provenance 
descriptions*. So when you would like to query such a complex description 
at once, you may have to include many named graphs, which makes the 
SPARQL query rather complex.
A current workaround is to duplicate this fragmented knowledge into a 
default graph so that such complex descriptions can be queried easily 
(without their provenance information). However, this also increases the 
maintenance costs, and the (originally) related knowledge is now decoupled.
On the other hand, many triple store vendors already utilise 
statement identifiers internally. So why not utilise them externally 
as well, by introducing URIs instead of internal identifiers?

I really believe that would be a win-win situation for all participants.
Please also remember that my proposed utilisation of statement identifiers 
is optional, i.e., if someone does not need it, one does not have to 
utilise it.
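The contrast with per-statement identifiers can be sketched the same way (plain Python again; all statement IDs, terms, and approvers are hypothetical):

```python
# Sketch of the statement-identifier idea: each triple gets its own
# identifier, and provenance is attached per statement instead of per
# graph, so a complex description never has to be fragmented across
# many named graphs. All identifiers and vocabulary are hypothetical.

statements = {
    "stmt:1": ("ex:sudanCamp1", "ex:refugeeCount", "12000"),
    "stmt:2": ("ex:sudanCamp1", "ex:location", "ex:Sudan"),
}

provenance = {
    "stmt:1": {"approvedBy": ["OCHA regional office", "OCHA Geneva"]},
    "stmt:2": {"approvedBy": ["OCHA regional office"]},  # not approved in Geneva
}

def describe(subject):
    """Query the whole description at once - no graph boundaries involved."""
    return {sid: t for sid, t in statements.items() if t[0] == subject}

def approved_by(approver):
    return {sid for sid, meta in provenance.items()
            if approver in meta.get("approvedBy", [])}

# The full description and a provenance filter compose freely:
geneva_ok = {sid: statements[sid]
             for sid in describe("ex:sudanCamp1")
             if sid in approved_by("OCHA Geneva")}
```

The design point being illustrated: the description stays in one place, and provenance becomes ordinary per-statement annotation rather than a graph-partitioning scheme.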


Cheers,


Bob


*) Furthermore, you do not really have the chance to relate knowledge 
that is copied into another named graph, even though it is practically the 
same and belongs together, because every named graph forms its own 
enclosure (you can read many long discussions on the whole named graph 
topic on the official RDF WG mailing list).





Re: Facebook Linked Data

2011-09-23 Thread Bob Ferris

Hi all,

Generally, a huge +1 for implementing this issue @ Facebook!

On 9/23/2011 5:14 PM, Søren Roug wrote:

If you pull the schema http://graph.facebook.com/schema/user then you'll see 
they are thinking about making a lot more properties available than what's sent 
out now.



See also 
http://ontorule-project.eu/parrot/parrot?documentUri=http%3A%2F%2Fgraph.facebook.com%2Fschema%2Fuser%23&mimetype=default&profile=technical&language=en&customizeCssUrl=


Cheers,


Bob


PS: Parrot is really awesome btw





|| -Original Message-
|| From: public-lod-requ...@w3.org [mailto:public-lod-requ...@w3.org] On
|| Behalf Of Jesse Weaver
|| Sent: 23 September 2011 14:10
|| To: semantic-...@w3.org; public-lod@w3.org
|| Subject: Facebook Linked Data
||
|| APOLOGIES FOR CROSS-POSTING
||
|| I would like to bring to subscribers' attention that Facebook now
|| supports RDF with Linked Data URIs from its Graph API.  The RDF is in
|| Turtle syntax, and all of the HTTP(S) URIs in the RDF are dereferenceable
|| in accordance with httpRange-14.  Please take some time to check it out.
||
|| If you have a vanity URL (mine is jesserweaver), you can get RDF about you:
||
|| curl -H 'Accept: text/turtle' http://graph.facebook.com/vanity-url
|| curl -H 'Accept: text/turtle' http://graph.facebook.com/jesserweaver
|| If you don't have a vanity URL but know your Facebook ID, you can use
|| that instead (which is actually the fundamental method).
||
|| curl -H 'Accept: text/turtle' http://graph.facebook.com/facebook-id
|| curl -H 'Accept: text/turtle' http://graph.facebook.com/1340421292
|| From there, try dereferencing URIs in the Turtle.  Have fun!
||
|| Jesse Weaver
|| Ph.D. Student, Patroon Fellow
|| Tetherless World Constellation
|| Rensselaer Polytechnic Institute
|| http://www.cs.rpi.edu/~weavej3/





Re: Cost/Benefit Anyone? Re: Vote for my Semantic Web presentation at SXSW

2011-08-18 Thread Bob Ferris

Just an example from practice:

http://blog.seevl.net/2011/08/18/about-json-ld-and-content-negotiation/

near the end of this blog post:

... Then, we save costs. - that's it! ;)

Cheers,


Bob



Re: Squaring the HTTP-range-14 circle

2011-06-17 Thread Bob Ferris

Hi,

On 6/17/2011 4:11 PM, Leigh Dodds wrote:

Hi,

On 17 June 2011 14:04, Tim Berners-Lee ti...@w3.org wrote:


On 2011-06 -17, at 08:51, Ian Davis wrote:

...

Quite. When a facebook user clicks the Like button on an IMDB page
they are expressing an opinion about the movie, not the page.


BUT when they click a Like button on a blog they are expressing that they
like the blog, not the movie it is about.

AND when they click like on a facebook comment they are
saying they like the comment not the thing it is commenting on.

And on Amazon people say I found this review useful to
like the review on the product being reviewed, separately from
rating the product.
So there is a lot of use out there which involves people expressing
stuff in general about the message not its subject.


Well even that's debatable.

I just had to go and check whether Amazon reviews and Facebook
comments actually do have their own pages. That's because I've never
seen them presented as anything other than objects within another
container, either in a web page or a mobile app. So I think you could
argue that when people are linking and marking things as useful,
they're doing that on a more general abstraction, i.e. the Work (to
borrow FRBR terminology) not the particular web page.


Well, that is obviously the level where the (abstract) information 
resource is (or can be) located, right? ;)


Cheers,


Bob


PS: cf., e.g., 
http://odontomachus.wordpress.com/2011/02/13/frbr-and-the-web/ ;)




Re: create HTML based on RDF?

2011-05-06 Thread Bob Ferris

Hi Frans,

you can try the RDFa serializer plugin [1] of ARC [2] (written in PHP). 
However, the results of this basic serialization do not look really nice. 
Generally, you will need customized templates for a specific 
serialization, e.g., one for FOAF profiles.

You can check the ARC RDFa serializer plugin online, e.g., here [3].

Cheers,


Bob

On 5/6/2011 1:02 PM, Frans Knibbe wrote:

Hello,

I am continuing my efforts with publishing Linked Data. I am trying to
do that step by step. I have now managed to publish data in static RDF
files. Also, I have managed to configure my web server to do 303
redirection, returning either an HTML file or the RDF file, depending on
the client request. I understand that it is good practice to offer an
HTML representation of the data if the client is unable to handle RDF.

I notice that it would be really helpful if I could automatically
generate HTML files based on the RDF files. That way I can focus on just
keeping the RDF file in good shape. After creating or editing an RDF
file I could run something that makes a HTML representation.

Is anyone aware of software that can be used to automatically export an
RDF file to an HTML file that looks nice in an internet browser? Or isn't
this a common problem? I have to admit that I might be thinking in the
wrong way about this.

Regards,
Frans



[1] http://arc.semsol.org/download/plugins
[2] http://arc.semsol.org/home
[3] http://zazi.smiy.org/rdfaparser.html
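For comparison, the core of such a generator - grouping triples by subject and filling an HTML template - fits in a few lines. This is a plain-Python sketch with made-up data, not a substitute for real tooling like the ARC plugin above:

```python
# Tiny sketch of RDF-to-HTML generation: group triples by subject and
# render each group as a definition list. The triples here are
# hypothetical; a real setup would parse them from the published RDF.
from html import escape

triples = [
    ("http://example.org/frans", "foaf:name", "Frans Knibbe"),
    ("http://example.org/frans", "foaf:mbox", "mailto:frans@example.org"),
]

def to_html(triples):
    by_subject = {}
    for s, p, o in triples:
        by_subject.setdefault(s, []).append((p, o))
    parts = ["<html><body>"]
    for s, po in by_subject.items():
        parts.append(f"<h2>{escape(s)}</h2><dl>")
        for p, o in po:
            parts.append(f"<dt>{escape(p)}</dt><dd>{escape(o)}</dd>")
        parts.append("</dl>")
    parts.append("</body></html>")
    return "\n".join(parts)

page = to_html(triples)
```

Customized templates per vocabulary (e.g., a FOAF-specific layout) would replace the generic definition list here.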



Re: Labels separate from localnames (Was: Best Practice for Renaming OWL Vocabulary Elements)

2011-04-22 Thread Bob Ferris

Hi Martin,

On 4/22/2011 6:18 PM, Martin Hepp wrote:

So our only disagreement seems to be about having the cardinality info in the 
label, and I think that, at least for the moment, that is the better choice as 
compared to the alternatives.



I really don't understand why you need this cardinality description in 
the label of a universal. If a developer should be informed about such 
axioms on a term, then the documentation is a good place for this kind of 
knowledge.
For example, my SpecGen v6 fork [1,2] transforms such information 
directly from an RDF graph into readable HTML documentation that 
includes RDFa as well, i.e., you can even get the full RDF graph back 
out of an HTML+RDFa serialized specification document. A nice showcase 
term that illustrates the restrictions defined on such a universal is 
olo:Slot [3].


Cheers,


Bob


[1] 
http://sourceforge.net/projects/smiy/files/SpecGen/v6/specgen6.tar.gz/download
[2] http://smiy.svn.sourceforge.net/viewvc/smiy/specgen/trunk/ (this 
version is up-to-date)

[3] http://purl.org/ontology/olo/core#Slot




Re: REST and Linked Data (was Minting URIs)

2011-04-17 Thread Bob Ferris

Hi Ed,

this topic was recently discussed in the Semantic Web community [1] and 
in the REST community [2,3] as well. These threads might be an 
interesting read on the subject.


Cheers,


Bob


[1] 
http://answers.semanticweb.com/questions/2763/the-relation-of-linked-datasemantic-web-to-rest

[2] http://tech.groups.yahoo.com/group/rest-discuss/message/17242
[3] http://tech.groups.yahoo.com/group/rest-discuss/message/17281

On 4/17/2011 12:28 PM, Ed Summers wrote:

Hi Michael,

On Fri, Apr 15, 2011 at 1:57 PM, Michael Hausenblas
michael.hausenb...@deri.org  wrote:

I find [1] a very useful page from a pragmatic perspective. If you're more
into books and not only focusing on the data side (see 'REST and Linked
Data: a match made for domain driven development?' [2] for more details on
data vs. API), I can also recommend [3], which offers some more practical
guidance in terms of URI space management.


Thanks for the reference to the recent REST and Linked Data [1] paper,
I had missed it till now. I had some comments and questions, which I
figured couldn't hurt (too much?) to be discussed here:


Typically a REST service will assume the resource
being transferred in these representations can be considered
a document; in the terminology of the following section, an
‘information resource’.


I guess in some ways this is a minor quibble, but this seems to me to
be a fairly common misconception of REST in the Linked Data community.
In his dissertation Roy says this about resources [2]:


The key abstraction of information in REST is a resource. Any
information that can be named can be a resource: a document or image,
a temporal service (e.g. today's weather in Los Angeles), a
collection of other resources, a non-virtual object (e.g. a person),
and so on.


It is actually pretty common to use URLs to identify real world
things when designing a REST API for a domain model. However, if a
developer implements some server side functionality to allow (for
example) a product to be deleted with an HTTP DELETE to the right URL,
they would typically be more concerned with practical issues such as
authentication, than with the existential ramifications of deleting
the product (should all instances of the product be recalled and
destroyed?).

Section 3.3 of the REST and Linked Data paper goes on to say:


Linked Data services, in implementing the “HTTP range
issue 14” solution [12], add semantics to the content
negotiation to distinguish between URIs that are non-information
resources (identifiers for conceptual or real-world objects)
and URIs that are information resources (documents) that
describe the non-information resources. This is because
assertions in the RDF graph are usually relationships that
apply to the non-information resource, but Linked Data
overloads URI usage so that it is also a mechanism for retrieving
triples describing that resource (in a document, i.e. an
information resource). (This is a change in behaviour from
earlier use of HTTP URIs in RDF, when they were not
expected to be dereferenced.)


Was there really a chapter in the history of the Semantic Web where
HTTP URIs in RDF were not expected to resolve? Wouldn't that have
removed all the machinery for the Web from the Semantic Web? I
only really started paying attention to RDF when the Linked Data
effort (SWEO) picked up, so I'm just generally interested in this
pre-history, and other people's memory of it.

Minor quibbles aside, as a web developer I'm a big supporter of the
paper's conclusions, which promote REST's notion of Hypertext As The
Engine Of Application State (HATEOAS), and the semantics that the HTTP
Connector provides, in the Linked Data space:


3. Services that publish Linked Data resources should pay
careful consideration to HATEOAS as a viable alternative
to SPARQL, and identify resources to enable RESTful
use of the API.
4. RESTful methods should be developed for the
write-enabled Web of Data.


Thanks to the authors (cc'd here) for writing that down and presenting
it forcefully.

//Ed

[1] http://ws-rest.org/2011/proc/a5-page.pdf
[2] 
http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm#sec_5_2






Re: Minting URIs: how to deal with unknown data structures

2011-04-17 Thread Bob Ferris

Hi Frans,

re. URI design patterns, I would highly recommend having a look at 
a presentation that describes how they do it at the BBC [1]. 
Furthermore, I asked a question on SemanticOverflow (now 
answers.semanticweb.com) some time ago that deals with URI template 
specifications for Linked Data publishing [2]. Niklas Lindström 
recommended the CoIN Vocabulary [3] for that purpose. It looks quite 
interesting.


Cheers,


Bob


[1] http://www.slideshare.net/reduxd/beyond-the-polar-bear
[2] 
http://answers.semanticweb.com/questions/2858/uri-template-specifications-for-linked-data-publishing

[3] http://code.google.com/p/court/wiki/COIN

On 4/15/2011 2:48 PM, Frans Knibbe wrote:

Hello,

Some newbie questions here...

I have recently come in contact with the concept of Linked Data and I
have become enthusiastic. I would like to promote the idea within my
company (we specialize in geographical data) and within my country. I
have read the excellent Linked Data book (“Linked Data: Evolving the Web
into a Global Data Space”) and I think I am almost ready to start
publishing Linked Data. I understand that it is important to get the
URIs right, and not have to change them later. That is what my questions
are about.

I have acquired the first part (authority) of my URIs, let's say it is
lod.mycompany.com. Now I am faced with the question: How do I come up
with a URI scheme that will stand the test of time? I think I will start
with publishing some FOAF data of myself and co-workers. And then
hopefully more and more data will follow. At this moment I cannot
possibly imagine which types of data we will publish. They are likely to
have some kind of geographical component, but that is true for a lot of
data. I believe it is not possible to come up with any hierarchical
structure that will accommodate all types of data that might ever be
published.

So I think it is best to leave out any indication of data organization
in the path element of the URI (i.e. http://lod.mycompany.com/people is
a bad idea). In my understanding, I could use base URIs like
http://lod.mycompany.com/resource, http://lod.mycompany.com/page and
http://lod.mycompany.com/data, and then use unique identifiers for all
the things I want to publish something about. If I understand correctly,
I don't need the URI to describe the hierarchy of my data because all
Linked Data are self-describing. Nice.

But then I am faced with the problem: What method do I use to mint my
identifiers? Those identifiers need to be unique. Should I use a number
sequence, or a hash function? In those cases the URIs would be uniform
and give no indication of the type of data. But a number sequence seems
unsafe, and in the case of a hash function I would still need to make
some kind of structured choice of input values.
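The hash option mentioned above might be sketched like this (Python stdlib; the base URI and the key fields fed into the hash are illustrative assumptions, not a recommendation for a particular scheme):

```python
# Sketch of minting stable, opaque identifiers by hashing a structured
# key. The key fields (dataset name, local id) are hypothetical; the
# point is that the same input always yields the same URI, while
# different inputs practically never collide.
import hashlib

BASE = "http://lod.mycompany.com/resource/"

def mint_uri(dataset, local_id):
    key = f"{dataset}|{local_id}".encode("utf-8")
    digest = hashlib.sha1(key).hexdigest()[:16]  # truncated for readability
    return BASE + digest

# Deterministic: re-minting gives the same URI, so data can be
# re-exported without identifiers drifting.
uri = mint_uri("people", "42")
```

The structured key is exactly the "structured choice of input values" the question mentions; the hash merely makes the resulting path uniform and opaque.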

I would welcome any advice on this topic from people who have had some
more experience with publishing Linked Data.

Regards,
Frans Knibbe





Re: Take2: 15 Ways to Think About Data Quality (Just for a Start)

2011-04-15 Thread Bob Ferris

Hi Glenn,

thanks a lot for your insightful thoughts. I think I can fully agree 
with them. This topic reminds me a bit of a question I asked some time 
ago on SemanticOverflow (now answers.semanticweb.com):


When should I use explicit/anonymous defined inverse properties? [1]

(btw, this question is still not marked as answered ;) )

Cheers,


Bob


[1] 
http://answers.semanticweb.com/questions/1126/when-should-i-use-explicitanonymous-defined-inverse-properties 



On 4/15/2011 3:47 PM, glenn mcdonald wrote:

This reminds me to come back to the point about what I initially
called Directionality, and Dave improved to Modeling Consistency.

Dave is right, I think, that in terms of data quality, it is
consistency that matters, not directionality. That is, as long as we
know that a president was involved in a presidency, it doesn't matter
whether we know that because the president linked to the presidency,
or the presidency linked to the president. In fact, in a relational
database the president and the presidency and the link might even be
in three separate tables. From a data-mathematical perspective, it
doesn't matter. All of these are ways of expressing the same logical
construct. We just want it to be done the same way for all
presidents/presidencies/links.

But although directionality is immaterial for data *quality*, it
matters quite a bit for the usability of the system in which the data
reaches people. We know, for example, that in the real world
presidents have presidencies, and vice versa. But think about what it
takes to find out whether this information is represented in a given
dataset:

- In a classic SQL-style relational database we probably have to just
know the schema, as there's usually no exploratory way to find this
kind of thing out. The RDBMS formalism doesn't usually represent the
relationships between tables. You not only have to know it from
external sources, but you have to restate it in each SQL join-query.
This may be acceptable in a database with only a few tables, where the
field-headings are kept consistent by convention, but it's extremely
problematic when you're trying to combine formerly-separate datasets
into large ones with multiple dimensions and purposes. If the LOD
cloud were in relational tables, it would be awful. Arguably the main
point of the cloud is to get the data out of relational tables (where
most of it probably originates) into a graph where the connections are
actually represented instead of implied.

- But even in RDF, directionality poses a significant discovery
problem. In a minimal graph (let's say minimal graph means that each
relationship is asserted in only one direction, so there's no
relationship redundancy), you can't actually explore the data
navigationally. You can't go to a single known point of interest, like
a given president, and explore to find out everything the data holds
and how it connects. You can explore the *outward* relationships from
any given point, but to find out about the *inward* relationships you
have to keep doing new queries over the entire dataset. The same basic
issue applies to an XML representation of the data as a tree: you can
squirrel your way down, but only in the direction the original modeler
decided was down. If you need a different direction, you have to
hire a hypersquirrel.

- Of course, most RDF-presenting systems recognize this as a usability
problem, and address it by turning the minimal graph into a redundant
graph for UI purposes. Thus in a data-browser UI you usually see, for
a given node, lists of both outward and inward relationships. This is
better, but if this abstraction is done at the UI layer, you still
lose it once you drop down into the SPARQL realm. This makes the
SPARQL queries harder to write, because you can't write them the way
you logically think about the question, you have to write them the way
the data thinks about the question. And this skew from real logic to
directional logic can make them *much* harder to understand or
maintain, because the directionality obscures the purpose and reduces
the self-documenting nature of the query.


All of this is *much* better, in usability terms, if the data is
redundantly, bi-directionally connected all the way down to the level
of abstraction at which you're working. Now you can explore to figure
out what's there, and you can write your queries in the way that makes
the most human sense. The artificial skew between the logical
structure and the representational structure has been removed. This is
perfectly possible in an RDF-based system, of course, if the software
either generates or infers the missing inverses. We incur extra
machine overhead to reduce the human cognitive burden. I contend this
should be considered a nearly-mandatory best practice for linked data,
and that propagating inverses around the LOD cloud ought to be one of
the things that makes the LOD cloud *a thing*, rather than just a
collection of logical silos.
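Materializing the missing inverses, as proposed above, is mechanical once an inverse-property map is agreed on. A small sketch (the property names are hypothetical):

```python
# Sketch of materializing inverse triples: given a map from properties
# to their inverses, add the redundant back-links so the graph can be
# explored navigationally in both directions. Property names are
# hypothetical illustrations.

INVERSES = {"ex:hasPresidency": "ex:presidencyOf"}

def materialize_inverses(triples):
    extended = set(triples)
    for s, p, o in triples:
        if p in INVERSES:
            extended.add((o, INVERSES[p], s))
    return extended

graph = {("ex:Obama", "ex:hasPresidency", "ex:Presidency44")}
full = materialize_inverses(graph)
# full now also links the presidency back to the president.
```

In an RDF system the same effect could come from owl:inverseOf inference instead of explicit materialization; either way the extra machine work buys the bidirectional exploration argued for here.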





Re: LOV - Linked Open Vocabularies

2011-04-01 Thread Bob Ferris

Hi Bernard,

On 4/1/2011 3:59 PM, Bernard Vatant wrote:

Maybe I missed something, but can someone tell me what the URI of the
ontology of dbpedia is?


please have a look at http://wiki.dbpedia.org/Ontology

Cheers,


Bob




Re: data schema / vocabulary / ontology / repositories

2011-03-16 Thread Bob Ferris

Hi,

On 14.03.2011 22:42, Richard Cyganiak wrote:

Bob,

On 14 Mar 2011, at 10:47, Bob Ferris wrote:

Am 14.03.2011 11:13, schrieb Richard Cyganiak:

The abandoned PhD project type of ontology or vocabulary has no community 
around it. Therefore, one gains very little by re-using it.

...

I can only repeat myself: PhD-project-born ontologies do not have to be bad per 
se, right? Banning them a priori is a rather prejudiced approach in my mind.


How did you get from

“One gains very little from re-using an abandoned PhD project ontology”

to

“Ontologies created in PhD projects should be banned”?


Yes, sorry, maybe my interpretation was a bit harsh. However, please 
keep in mind:


We notice that most ontologies and Web vocabularies, especially the 
most popular ones, are developed by academic researchers (FOAF, SIOC, 
Good Relations, Music Ontology, etc.) (cited from [1])


I guess there are still some lurking jewels and rough diamonds out 
there. So we have to keep our eyes and ears open, right?


Cheers,


Bob


[1] Zimmermann, Antoine; Ontology Recommendation for the Data 
Publishers; ORES-2010; 2010; 
http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-596/paper-12.pdf




Re: data schema / vocabulary / ontology / repositories

2011-03-14 Thread Bob Ferris

Hello everybody,

Am 14.03.2011 09:28, schrieb Martin Hepp:

Hi Dieter:

There are several ontology repositories available on-line, but to my knowledge 
they all suffer from two serious limitations:

1. They do not rate ontologies by quality/relevance/popularity, so you do not 
get any hint whether foaf:Organization or foo:Organization will be the best way 
to expose your data.


I think we discussed this issue some time ago already. A conclusion (at 
least for me) was that it is quite difficult to make such a ranking 
objective over the very broad range of ontologies that are 
available. It often depends on the complexity of the knowledge 
representation (level of detail) a developer would like to achieve. This is 
the advantage of the Semantic Web: there will never be a single ontology 
for a specific domain that serves all of its use cases well.



2. The selection of ontologies listed is, to say the least, often biased or 
partly a random choice. I do not know any repository that
- has a broad coverage,
- includes the top 25 linked data ontologies and


I think people are looking for an ontology that fits their purpose, 
i.e., popularity is good, but in that case it is only a secondary 
metric*. A developer is primarily looking for an appropriate ontology. 
Only then will he/she invest further effort into a comparison 
of the available ones, if more than one appropriate ontology is 
available.



- lists more non-toy ontologies than abandoned PhD project prototypes.


I don't want to take a concrete position here; however, every ontology 
development has its starting point somewhere and is usually not very 
popular there. Nevertheless, the ontology design can still be a good one. 
For that reason, why should we abandon these approaches and brand them as evil?


I think we should really invest more energy in enhancements of, e.g., 
Schemapedia. This approach seems to be quite a good one (at least 
from my personal experience). On the other hand, something like 
ontology marketing/advertisement plays another important role. There 
are often quite good jewels out there that are poorly discoverable.



Cheers,


Bob


*) I guess the biology community wouldn't be quite satisfied when 
looking at the proposed ontology charts, right?




Re: data schema / vocabulary / ontology / repositories

2011-03-14 Thread Bob Ferris

Am 14.03.2011 11:13, schrieb Richard Cyganiak:

On 14 Mar 2011, at 09:15, Bob Ferris wrote:

2. The selection of ontologies listed is, to say the least, often biased or 
partly a random choice. I do not know any repository that
- lists more non-toy ontologies than abandoned PhD project prototypes.


I don't want to take a concrete position here; however, every ontology 
development has its starting point somewhere and is usually not very popular 
there. Nevertheless, the ontology design can still be a good one. For that 
reason, why should we abandon these approaches and brand them as evil?


The point in re-using a vocabulary or ontology is this: one joins a community 
of data publishers and re-users who have agreed on certain shared terms for 
shared concepts.

The abandoned PhD project type of ontology or vocabulary has no community 
around it. Therefore, one gains very little by re-using it.

This is why it's so important to involve multiple stakeholders from the start, 
and get feedback from real data owners and data users along the development 
process. That's the first and perhaps most important step in the process that 
you called “ontology marketing” elsewhere in this thread.


Yes, you are absolutely right. However, not every ontology designer has 
the power or reputation to get valuable stakeholders on board (I think 
I have made my own experience in that area* ;) ). So I can only repeat 
myself: PhD-project-born ontologies do not have to be bad per se, right? 
Banning them a priori is a rather prejudiced approach in my mind. When I 
have to choose an ontology, I try to initially review all available** 
ontologies, independent of whether they originate from a PhD project 
or were designed by a big industry consortium.
Bad design decisions can be made everywhere - in the small PhD 
project as well as in the one with a huge industry community behind it. I 
think every ontology has the chance to become somehow famous, right?
The ontology with a huge stakeholder community in the background is bound 
to get popular, and the small-project-born ontology has the 
freedom to get accepted somewhere and somehow.


Regarding ontology marketing, I especially try to address the following 
issues:


- the ontology shall be discoverable, even via fuzzy requests (that is 
why the tagging approach followed by Schemapedia is quite a good 
one) and via general-purpose search engines à la Google
- the ontology specification shall be provided in as many appropriate 
serialization formats as possible, e.g., RDF/N3, XHTML+RDFa, 
RDF/JSON, RDF/XML
- the ontology shall be published with good (interlinked) 
documentation, incl. illustrating examples, graphics of its structure, 
related ontologies, etc. (ideally everything at least available in 
XHTML+RDFa)
- the ontology shall be evolvable by a community, incl. issue trackers, 
mailing lists, etc.


Cheers,


Bob


*) No feedback is also a kind of feedback
**) every ontology I can find that might be somehow appropriate to 
fulfil my intended purpose




Re: data schema / vocabulary / ontology / repositories

2011-03-13 Thread Bob Ferris

Hi Dieter,

there are several threads on SemanticOverflow that deal with this 
topic, e.g., this one [1].


Cheers,


Bob

[1] 
http://www.semanticoverflow.com/questions/1039/where-can-i-find-useful-ontologies


Am 13.03.2011 17:15, schrieb Dieter Fensel:

Dear all,

for a number of projects I was searching for vocabularies/Ontologies
to describe linked data. Could you please recommend me places
where to look for them? I failed to find a convenient entrance point for
such
kind of information. I only found some scattered information here and
there?

Thanks,

Dieter




Re: data schema / vocabulary / ontology / repositories

2011-03-13 Thread Bob Ferris

Hello again,

an issue that is strongly related to the raised concern is ontology 
marketing:


I think personal advice is still the best option here. It's horrible to 
find appropriate ontologies only after months of intensive searching, 
because they are well hidden in our universal information space. I 
suggest working on a guide for ontology marketing or something like 
that, because it is crucial to establish shared understanding more 
easily. (quote from a comment on an answer on SemanticOverflow [1])


Cheers,


Bob


[1] 
http://www.semanticoverflow.com/questions/2623/ontology-to-use-for-querying/2630#2630




Re: The truth about SPARQL Endpoint availability

2011-02-28 Thread Bob Ferris

Congrats Pierre, well done!

This might hopefully become quite a useful resource. Any plans to 
publish this information itself as Semantic Web/Linked Data?


Cheers,


Bob

Am 28.02.2011 19:55, schrieb Pierre-Yves Vandenbussche:

Hello all,

Have you already encountered problems with SPARQL endpoint accessibility?
Do you feel frustrated that they are never available when you need them?
Do you develop an application using these services, but wonder whether it is
reliable?

Here is a tool [1]
that allows you to check the availability of public SPARQL endpoints
and monitor them over the last hours/days.
Stay informed of status changes of a particular endpoint (or all of them)
through RSS feeds.
All availability information generated by this tool is accessible through a 
SPARQL endpoint.

This tool fetches the list of public SPARQL endpoints from CKAN
(http://ckan.net/) open data.
From this list, it runs availability tests every hour.

[1] http://labs.mondeca.com/sparqlEndpointsStatus/index.html

Pierre-Yves Vandenbussche.




Re: The truth about SPARQL Endpoint availability

2011-02-28 Thread Bob Ferris

Oh sorry,

I overlooked this for some reason. What a pity. However, I was thinking 
more of some inline Semantic Web Linked Data in the feeds. Would that be 
an option?


Cheers,


Bob


PS: http://labs.mondeca.com/repositories/ENDPOINT_STATUS delivers me a 
"Missing parameter: query" error. So I guess I have to parametrize the 
request. Instructions for that might be useful then ;)



Am 28.02.2011 23:25, schrieb Pierre-Yves Vandenbussche:

Hello Robert,

All information produced by this service is stored in a SPARQL
endpoint:
http://labs.mondeca.com/sparqlEndpointsStatus/endpoint/endpoint.html
These open data are linked to the CKAN ones. You can already access them.

best,

Pierre-Yves Vandenbussche
Research  Development
Mondeca
3, cité Nollez 75018 Paris France
Tel. +33 (0)1 44 92 35 07 - fax +33 (0)1 44 92 02 59
Mail: pierre-yves.vandenbuss...@mondeca.com
mailto:pierre-yves.vandenbuss...@mondeca.com
Website: www.mondeca.com http://www.mondeca.com/
Blog: Leçons de choses http://mondeca.wordpress.com/


On Mon, Feb 28, 2011 at 10:45 PM, Bob Ferris z...@elbklang.net
mailto:z...@elbklang.net wrote:

Congrats Pierre, well done!

This might hopefully become a quite useful resource. Any plans to
publish this information itself as Semantic Web Linked Data?

Cheers,


Bob

Am 28.02.2011 19:55, schrieb Pierre-Yves Vandenbussche:

[snip]




Re: Proposal to assess the quality of Linked Data sources

2011-02-25 Thread Bob Ferris

Hi Annika,

Am 25.02.2011 23:19, schrieb Annika Flemming:

- no redefinition of existing vocabularies - sometimes it is necessary,
e.g., to achieve OWL DL compliance of a utilized vocabulary that
doesn't fulfil this requirement originally

Oh ok, I didn't know that, thanks!


See e.g. a related discussion on SemanticOverflow [1]



- any reason for being sometimes quite strict re. the selected
relations for specific indicators (e.g. 4.1)? I.e., SIOC is for online
communities and hence rather specific to that domain

First, I wanted to leave things like the interpretation of an
established vocabulary open to the reader. But as it is a diploma
thesis, I was asked to make clear definitions for the indicators which
wouldn't leave much room for interpretation.


Okay. Then it might be good to propose recommendations, as you already 
did for some issues.


Cheers,


Bob


[1] 
http://www.semanticoverflow.com/questions/1105/owl-dl-compliance-why-redefining-existing-concepts-propeties-in-own-ontology




Re: Is it best practices to use a rdfs:seeAlso link to a potentially multimegabyte PDF?, existing predicate for linking to PDF?

2011-01-13 Thread Bob Ferris

"be strict when sending and tolerant when receiving" [1]

I guess we shouldn't expect too much ;)

Cheers,


Bob


[1] http://tools.ietf.org/html/rfc1958



Re: Is vCard range restriction on org:siteAddress necessary?

2011-01-04 Thread Bob Ferris

Hi,

Am 04.01.2011 13:38, schrieb Alexander Dutton:


The vCard ontology doesn't give a general property for linking a thing
to its v:VCard, which suggests to me that the only way to discover
addresses in the general case is when properties in the vCard namespace
are applied directly to people, places, etc. (In other words the v:VCard
class simply means a thing to which addresses, phone numbers, etc are
attached.)


This is also a long-term FOAF issue (see [1]), with the proposal to 
connect FOAF and vCard objects using a property term businessCard (see 
[2]). Since the Organization Ontology is also based on the FOAF 
Vocabulary, this might be an option.
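For illustration, a minimal Turtle sketch of that proposal. Note that businessCard is only a proposed FOAF term (see [2]) and not part of the official FOAF namespace, so the foaf:businessCard property and the ex: resources below are assumptions:

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix v:    <http://www.w3.org/2006/vcard/ns#> .
@prefix ex:   <http://example.org/> .

# hypothetical property from the FOAF term_businessCard proposal
ex:alice a foaf:Person ;
    foaf:businessCard ex:aliceCard .

ex:aliceCard a v:VCard ;
    v:fn "Alice Example" .
```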


Cheers,


Bob


[1] http://wiki.foaf-project.org/w/FOAF_and_vCard
[2] http://wiki.foaf-project.org/w/term_businessCard



Re: Quality Criteria for Linked Data sources

2010-12-16 Thread Bob Ferris

Hi Annika,

Our aim was to decide on a set of criteria that represent the quality 
of a data source. In this document, we understand a data source as an 
access point for Linked Data in the Web.


Does this mean that you consider only information services that follow 
the Linked Data principles? Would you then exclude existing information 
services such as Wikipedia, MusicBrainz, Last.fm, Discogs and Echo Nest?
I think it is crucial to have information-service quality ratings 
especially for these kinds of information services, because they are 
currently the backbone of the existing Linked Data services that perform 
Linked Data information provision and integration tasks.


Cheers,


Bob


Am 15.12.2010 20:49, schrieb Annika Flemming:

Hi,
I'm a student at the Humboldt University of Berlin and I'm currently writing my 
diploma thesis under the supervision of Olaf Hartig. The aim of my thesis is to 
draw up a set of criteria to assess the quality of Linked Data sources. My 
findings include eleven criteria grouped into four categories. Each criterion 
includes a set of so-called indicators. These indicators constitute a 
measurable aspect of a criterion and, thus, allow for the assessment of the 
quality of a data source w.r.t the criteria.
I've written a summary of my findings, which can be accessed here:

http://sourceforge.net/apps/mediawiki/trdf/index.php?title=Quality_Criteria_for_Linked_Data_sources

To evaluate my findings, I decided to post this summary hoping to receive some 
feedback about the criteria and indicators I suggested. Moreover, I'd like to 
initiate a discussion about my findings, and about their applicability to a 
quality assessment of data sources.

Your comments might be included in my thesis, but I won't add any names.


OT: I think in a community it is quite natural to honour people 
somehow when they have contributed useful (!) feedback. You might 
consider this ;)




A further summary will follow shortly, describing a formalism based on these 
criteria and its application to several data sources.

Thanks to everyone participating,
Annika




Re: Is 303 really necessary?

2010-12-14 Thread Bob Ferris

Hi,

I found "Alternative to 303 response: Description-ID: header" [1] in the 
TAG mailing list archive.


Are there any parallels? ;)

Cheers,


Bob


PS: maybe someone has already mentioned this source here; however, I 
didn't find any reference



[1] http://lists.w3.org/Archives/Public/www-tag/2007Dec/0024.html




Re: Reification alternative

2010-10-14 Thread Bob Ferris

Hi Mirko,

Am 14.10.2010 15:08, schrieb Mirko:

Thank you all for your helpful comments. First, let me clarify my
intention. My question aimed not so much at the (internal) storage of
the data, but really on how to publish them as Linked Data, so that they
are useful for third parties (= easy to query and consume).

I use Virtuoso and want to publish data that are currently stored in SQL
tables as RDF Views over a SPARQL endpoint.

As I read here [1, 2], the problem with publishing reificated data is
two-fold: 1) they are cumbersome to query. 2) semantics are imprecise
for this use case, because what is described with reification is the rdf
triple. In my case, I want to describe the information, i.e. the
interest of a user for an item.

So, four solutions for my problem came up here. As I understand it, three
solutions - quads, named graphs, and publishing the data as several
files - do not really solve the problems. They are more for internal
storage than for publishing.

The suggestions of Bob and Leigh seem to be a better solution, because
the semantics are clear. I think the querying issue remains.

The trick is to express n-ary relations by defining a class for what was
a property before. In my example, instead of using the property
foaf:interest , I use a class like e-foaf:interest [3] or
cco:CognitiveCharacteristic [4]:

@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix cco: <http://purl.org/ontology/cco/core#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex: <http://example.org/> .

ex:AStmt
   a cco:CognitiveCharacteristic ;
   cco:agent ex:AUser ;
   cco:topic ex:AItem ;
   cco:characteristic cco:interest ;
   dcterms:modified "2010-10-13"^^xsd:date ;
   dcterms:publisher ex:AService .

ex:AUser a foaf:Agent .
ex:AItem a foaf:Document .

This is definitely a viable solution. However, the drawback of this
solution is that I need new vocabulary. I define classes for things that
are actually already defined by properties, which seems a bit odd to me.
The reason why I tried reification was that I wanted to re-use existing
vocabulary, FOAF in my case, as it is a recommended best-practice for LD
publishing [1].


Well, the Cognitive Characteristics Ontology is now also an existing 
vocabulary, which is, of course, intended for reuse.
It's simply the case that the interest relation in FOAF is a shortcut 
relation, where one needs further concept definitions (especially a 
reification class) and property definitions to describe that shortcut 
relation in more detail.
So, to my mind, this is not really a drawback; it is rather an elegant 
addition to existing vocabularies, which can be applied in line with them.
However, I would say the information you would like to add is more or 
less provenance information and hence 'external context' (after Tolle's 
definition, see [1]), as it can be attached to more or less every 
information resource. So one might like to apply quads or named graphs 
for this context type. The drawback of applying named graphs there, 
though, is the singleton graphs, which are not very efficient.

This issue is discussed, for instance, here [2] or here [3].

Cheers,


Bob


PS: BTW, the intended modelling of the e-foaf:interest vocabulary is 
also included in the Cognitive Characteristics Ontology (as a successor 
of the Weighted Interests Vocabulary 0.5 [4]), because their modelling is 
not really applicable due to invalidation (see [5]).


[1] 
http://www.dbis.informatik.uni-frankfurt.de/~tolle/Publications/2004/AISTA04.pdf
[2] 
http://www.semanticoverflow.com/questions/1643/named-graphs-and-multi-graph-documents-standardisation-and-directions

[3] http://lists.w3.org/Archives/Public/semantic-web/2010Sep/0175.html
[4] http://purl.org/ontology/wi/weightedinterests.html
[5] 
http://www.semanticoverflow.com/questions/1472/datatypeproperty-modeled-in-an-objectproperty-way




Re: Reification alternative

2010-10-13 Thread Bob Ferris

Hi Mirko,

well, the thing is, it wouldn't really work without a form of 
reification (to my mind). There are use cases where people prefer a 
simple knowledge representation of a semantic relation, and others 
where people would like a more detailed description of the semantic 
relation between two particulars. However, it is important to be able to 
semantically relate both of them.
When I designed the Cognitive Characteristics Ontology [1], I struggled 
(again) with the same issues. Hence, the Cognitive Characteristics 
Ontology includes two ways to model cognitive patterns.
The first one is the representation of cognitive characteristics by 
using the semantic relation cco:cognitive_characteristic, or better its 
more specialised sub-properties, e.g. cco:interest, to associate the 
topics of the cognitive patterns with the users. The second one is the 
object-oriented context reification of cco:cognitive_characteristic, 
cco:CognitiveCharacteristic, which is a general multi-purpose 
cognitive characteristic concept to describe cognitive patterns in more 
detail for a specific user or user group.
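As a rough Turtle sketch of the two opportunities (the ex: resources are invented; the cco: terms are taken from the spec [1]):

```turtle
@prefix cco: <http://purl.org/ontology/cco/core#> .
@prefix ex:  <http://example.org/> .

# Opportunity 1: the simple shortcut relation
ex:AUser cco:interest ex:Soccer .

# Opportunity 2: the reified form, which can carry further detail
ex:AStmt a cco:CognitiveCharacteristic ;
    cco:agent ex:AUser ;
    cco:topic ex:Soccer ;
    cco:characteristic cco:interest .
```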
However, to be able to model the semantic relation between the shortcut 
relation and its reification statement, one needs a further mechanism, 
which is included in the Property Reification Vocabulary [2]. This 
vocabulary should enable a reasoning engine to apply the implications 
between a shortcut relation and its reification statement, not 
directly on a single RDF statement, but for all possible statements that 
use the defined shortcut relation properties and reification classes 
(incl. their related properties).
In the case of the Cognitive Characteristics Ontology [3], it enables you 
to distinguish between a skill in playing soccer, an expertise in 
soccer, and an interest in watching soccer (see [4]).
Alternatively, you can apply named graphs; however, to my mind they are 
intended to represent 'external context' (especially provenance and 
trust), because their semantics are not really clear in this case. 
In my use case above, however, I would like to represent 'internal 
context', i.e. a detailed description of a shortcut relation.
Don't hesitate to ask further questions. This is all work in progress, 
and suggestions, comments and criticism are very welcome.


Cheers,


Bob

[1] http://purl.org/ontology/cco/cognitivecharacteristics.html
[2] http://purl.org/ontology/prv/propertyreification.html
[3] http://purl.org/ontology/prv/propertyreification.html#sec-cco-example
[4] 
http://purl.org/ontology/cco/cognitivecharacteristics.html#sec-soccer-example


Am 13.10.2010 15:02, schrieb Mirko:

Hi all,
I am trying to understand alternatives to reification for Linked Data
publishing, since reification is discouraged. For example, how could I
express the following without reification:

@prefix dc: <http://purl.org/dc/elements/1.1/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<http://ex.org/stmt>
   rdfs:label "Statement that describes user interest in a document"@en ;
   rdf:subject <http://ex.org/User> ;
   rdf:predicate foaf:interest ;
   rdf:object <http://ex.org/Item> ;
   dc:publisher <http://ex.org/Service> ;
   dc:created "2010-10-13"^^xsd:date ;
   dc:license <http://ex.org/License> .

<http://ex.org/User> rdf:type foaf:Agent .
<http://ex.org/Item> rdf:type foaf:Document .

Thanks,
Mirko




Re: New LOD Cloud

2010-09-24 Thread Bob Ferris

Am 24.09.2010 20:36, schrieb Richard Cyganiak:

Hi Bob,

On 23 Sep 2010, at 11:19, Bob Ferris wrote:

is there a legend for the coloured cloud that explains the
coloured clusters a bit, or did I simply miss it? (it would be nice if
this legend were directly included in the graphic)


Good idea. I added the color legend:
http://richard.cyganiak.de/2007/10/lod/lod-datasets_2010-09-22_colored.html

The assignment of some of the datasets to categories is perhaps
questionable -- if someone can come up with a better way of sorting
these 203 datasets into seven categories ... I'm open to suggestions.


That looks good, Richard! Well done!
Only a small thing: the contrast between Media, Cross Domain and 
Publications is a bit low, but okay ;)


Cheers,


Bob



Re: New LOD Cloud

2010-09-23 Thread Bob Ferris

Hi,

is there a legend for the coloured cloud that explains the coloured 
clusters a bit, or did I simply miss it? (it would be nice if this 
legend were directly included in the graphic)


Cheers,

Bob

Am 23.09.2010 10:09, schrieb Antoine Isaac:

Anja, Richard, (ccing the Library Linked Data list)

Really great work! Adding to Rinke's comment, I'm also happily surprised
by the coherence that you can still give to the various parts of the LOD
cloud: the colored version is really fascinating to see [1]. Our core
library linked data sector fits nicely between the A/V media
one and the scientific publishing one. We just have to create more
links between these now :-)

Thanks again,

Antoine

[1]
http://richard.cyganiak.de/2007/10/lod/lod-datasets_2010-09-22_colored.png




Re: Vocabulary for Search Results

2010-09-20 Thread Bob Ferris

Hi Bernhard,

the Recommendation Ontology [1] provides a basic concept to represent and 
describe recommendations at different levels of detail. As search can be 
seen as a specific kind of recommendation (this is especially my point 
of view; however, there have been some similar views, especially in the 
last few years, e.g. [2]), this ontology might fit your requirements.
There is a general recommendation concept, rec:Recommendation, which can 
be used to describe unordered recommendation results (search results). 
Furthermore, there is a ranked recommendation concept, 
rec:RankedRecommendation, which can be used to describe recommendation 
results (search results) with the help of a specific ordered list.
One could also think about extending this ontology explicitly to search 
results. However, in my opinion the modelling should be the other way 
around; that is, search results are specialized recommendation results.
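A rough Turtle sketch of a ranked result set, assuming the Ordered List Ontology as the underlying list model; the instance data are invented and the exact terms should be checked against the specification [1]:

```turtle
@prefix rec: <http://purl.org/ontology/rec/core#> .
@prefix olo: <http://purl.org/ontology/olo/core#> .
@prefix ex:  <http://example.org/> .

# hypothetical instance data for a two-item ranked result
ex:ASearchResult a rec:RankedRecommendation ;
    olo:length 2 ;
    olo:slot [ olo:index 1 ; olo:item ex:BestMatch ] ,
             [ olo:index 2 ; olo:item ex:SecondMatch ] .
```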


Cheers,


Bob


[1] http://purl.org/ontology/rec/recommendationontology.html
[2] http://technocalifornia.blogspot.com/2010/09/end-of-age-of-search.html

Am 20.09.2010 09:38, schrieb Bernhard Schandl:

Hi,

I am looking for an RDF vocabulary to represent result lists of search 
requests. It should be able to represent an ordered list of result items, 
optionally with a score, where each result item points to a separate RDF 
resource. So far I didn't manage to find a suitable vocabulary, so I'd be 
grateful for any pointers.

If there is no such vocabulary out there, I'd like to start developing one; if 
you would like to participate, please let me know.

Best regards
Bernhard







Re: Vocabulary for Search Results

2010-09-20 Thread Bob Ferris

Hi Renaud,

your search result format/ontology looks quite interesting. However, 
it would be useful if the URI of this ontology 
(http://sindice.com/vocab/search#) were dereferenceable and, 
furthermore, provided a specification document.


Cheers,


Bob

Am 20.09.2010 12:36, schrieb Renaud Delbru:

Hi Bernhard,

you can look at the Sindice API result formats [1], there is a RDF
representation defined.

[1] http://sindice.com/developers/api#SindicePublicAPI-Resultformats




You need it, you want it, you get it ;) - The Recommendation Ontology

2010-07-30 Thread Bob Ferris

Hello everybody,

today I would like to announce a first draft of the Recommendation 
Ontology [1,2,3]. As far as I can tell, there currently exists no 
ontology that addresses this purpose as I intend it.
With this ontology it should be possible to associate a recommendation 
with someone or something. The recommendation itself includes the 
recommendation objects and a relation to the recommender, e.g. an 
is:InfoService instance. The association and/or similarity statements, 
which describe the reasons for a recommendation, can be described 
on the basis of the sim:Association [4] concept, which can be related to 
the recommendation by the property ao:included_association [5].
The aim of the Recommendation Ontology is to serve recommendations from 
different information services to users (or something else), while also 
having the opportunity to describe the reasons for the recommendation 
and hence enable transparency. A use case example that follows the idea 
of transparent recommendations is dbrec.net [6].


Please let me know what you think about this modelling. Comments, 
criticism and suggestions are very welcome.


Cheers,


Bob


[1] http://smiy.sourceforge.net/rec/rdf/recommendationontology.n3
[2] http://smiy.sourceforge.net/rec/rdf/recommendationontology.owl
[3] http://smiy.sourceforge.net/rec/gfx/rec_-_recommendation.gif
[4] http://purl.org/ontology/similarity/Association
[5] 
http://purl.org/ontology/ao/associationontology.html#included_association

[6] http://dbrec.net/



Re: [ANN] Uberblic Search API

2010-07-22 Thread Bob Ferris
Hi Tom,

Am 21.07.2010 19:46, schrieb Tom Morris:

[snip]

 For developers that means: pick any URI that refers to the entity you mean 
 (any of the Scarlett Johanssons above) and you'll be fine.
 In practice, that is: if you're building a movie application, always pick 
 the uberblic entity from The Movie DB.
 
 How does one determine which types correlate best with which collections of
 URIs?  Is this information encoded in machine-readable
 format someplace, or is it just that humans know that a database called
 "movie" has got to be best for a type called "actor"?
 

For such an issue (information service selection), I designed the Info
Service Ontology [1]. This should, in the end, put the customer of a
knowledge base (a federated information service) in the position to choose
information services by information service profiles. Furthermore, such
profiles could include information service quality ratings (from
different information service quality rating agencies).
The information service/customer matching could be done automatically,
by matching a descriptive user profile (and/or the query itself) against
information service descriptions, or manually, by selecting the
information service(s) of one's own choice (by evaluating information
service profiles and/or quality ratings).

I know there are still some tasks to do to achieve this goal. However,
I believe that especially end users will benefit from such knowledge.

Cheers,


Bob

[1] http://purl.org/ontology/is/infoservice.html



Re: The Counter Ontology

2010-07-21 Thread Bob Ferris

Hi Toby,

Am 21.07.2010 13:48, schrieb Toby Inkster:

On Tue, 20 Jul 2010 14:56:05 +0200
Bob Ferrisz...@elbklang.net  wrote:


How can I make sure that the value of my counter concept is of the
type xsd:Integer?


co:Counter
    rdfs:subClassOf [
        a owl:Restriction ;
        owl:onProperty rdf:value ;
        owl:allValuesFrom xsd:integer
    ] ;
    # and to say that it's a functional property...
    rdfs:subClassOf [
        a owl:Restriction ;
        owl:onProperty rdf:value ;
        owl:cardinality 1
    ] .



co:count [1] is already an owl:FunctionalProperty, and the rdfs:range of 
this property is xsd:integer only. Hence, there should be no other type 
possible, should there? I think owl:someValuesFrom and owl:allValuesFrom 
should be used when there is an owl:unionOf range of a property that is 
in the domain of a concept.
Your second statement (... owl:cardinality 1 ...) restricts the 
existence of co:count; that means this property must exist for every 
co:Counter instance. I also thought about adding this restriction to 
co:Counter, because co:count is the necessary value of this concept 
(which is maybe also why Vasiliy suggested rdf:value).

Without this restriction, the cardinality of co:count is currently [0..1].

Finally, what do you think we should use now: rdf:value plus some 
restrictions on it for co:Counter, or co:count as it is already defined 
plus a cardinality restriction of 1 on co:Counter for co:count?
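For reference, the second option could look roughly like this in Turtle (a sketch, not the official definition; the co: namespace URI is an assumption, see the spec [1]):

```turtle
@prefix co:   <http://purl.org/ontology/co/core#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# every co:Counter instance must then carry exactly one co:count value
co:Counter rdfs:subClassOf [
    a owl:Restriction ;
    owl:onProperty co:count ;
    owl:cardinality 1
] .
```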


Cheers,


Bob

[1] http://purl.org/ontology/co/counterontology.html#count



Re: The Counter Ontology

2010-07-20 Thread Bob Ferris

Hi Vasiliy,


Am 20.07.2010 14:39, schrieb Vasiliy Faronov:

Bob Ferris wrote:

The second property of co:Counter is co:count, which is a simple
xsd:int based datatype property.


Any reasons for not using rdf:value[1]?

Not that it would make a lot of difference, but it seems like this property
was made exactly for such statements.



Good question; however, when I read through the description of 
rdf:value [1], I also found:


"...the principle that such simple values are often insufficient to 
adequately describe these values is an important one. In a global 
environment such as the Web, it is generally not safe to make the 
assumption that anyone accessing a property value will understand the 
units being used..."


How can I make sure that the value of my counter concept is of type 
xsd:integer? I think it works with the current definition:


co:count
  rdf:type rdf:Property , owl:FunctionalProperty ;
  rdfs:comment "Links a counter resource to the actual count"@en ;
  rdfs:domain co:Counter ;
  rdfs:isDefinedBy co: ;
  rdfs:label "has count"@en ;
  rdfs:range xsd:integer ;
  vs:term_status "stable"@en .

Cheers,


Bob


[1] http://www.w3.org/TR/rdf-primer/#rdfvalue







Updates, updates, updates: Ordered List Ontology, Counter Ontology, Info Service Ontology

2010-07-15 Thread Bob Ferris

Hello everybody,

Apologies for cross-posting ;)

I've updated the specifications of the Ordered List Ontology, the 
Counter Ontology and the Info Service Ontology. Furthermore, I created 
documentation for each one, with examples etc., and included in these 
files an XHTML+RDFa representation of the ontologies themselves 
(created with [1]).

You can access the specifications as follows:

- The Ordered List Ontology: 
http://purl.org/ontology/olo/orderedlistontology.html

- The Counter Ontology: http://purl.org/ontology/co/counterontology.html
- The Info Service Ontology: http://purl.org/ontology/is/infoservice.html

Please note that these documentations are still incomplete and some 
links are unresolved (especially those of the different versions). 
However, the specifications should cover more or less the key features 
of each ontology.


Comments, suggestions and criticism are very welcome (as always).

Cheers,


Bob

[1] http://smiy.svn.sourceforge.net/viewvc/smiy/specgen/trunk/



Re: Updates, updates, updates: Ordered List Ontology, Counter Ontology, Info Service Ontology

2010-07-15 Thread Bob Ferris

Hi Kingsley,

Am 16.07.2010 00:09, schrieb Kingsley Idehen:

Bob Ferris wrote:

[snip]


Bob,

Re. the Info Service Ontology, have you considered some links with the SIOC
Ontology? Net effect: rich modelling for Linked Data Spaces connected
via HTTP :-)



I requested sioc:Space rdfs:subClassOf is:InfoService [1]. Furthermore, I 
also requested interlinking with void:Dataset [2], bibo:Collection [3] and 
prv:DataProvidingService [4]. Recently, I have also thought a lot about 
interlinking with DC/DCTerms concepts and properties [5].


Cheers,


Bob


[1] 
http://groups.google.com/group/sioc-dev/browse_thread/thread/b362ec7064361d35

[2] http://vocab.deri.ie/void#Dataset
[3] 
http://bibotools.googlecode.com/svn/bibo-ontology/trunk/doc/classes/Collection___-1390642536.html
[4] 
http://sourceforge.net/mailarchive/forum.php?thread_name=4C231D04.5080301%40elbklang.netforum_name=trdf-prv-vocab-users
[5] 
https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=DC-ARCHITECTURE;dd8bf8c1.1007




Re: RDF and its discontents

2010-07-07 Thread Bob Ferris

Hi Paul,

thanks a lot for your very insightful experience report about the 
Semantic Web, RDF and DBpedia.


(more thoughts inline)

Am 02.07.2010 17:07, schrieb Paul Houle:

Here are some of my thoughts



[skip]



(4) I'm one of the people who got interested in semantic tech because of
DBPedia, but yet, I've also largely given up on DBPedia. One day I
realized that I could, with Freebase, do things in 20 minutes that
would take 2 weeks of data cleanup with DBPedia. DBPedia 3.5/3.5.1
seems to be a large step backwards, with major key integrity problems
that are completely invisible to 'open world' and OWL-paradigm systems.
I've wound up writing my own framework for extracting 'facts' from
Wikipedia because DBPedia isn't interested in extracting the things I
want. Every time I try to do something with DBpedia, I make shocking
discoveries (for instance, New York City, Berlin, Tokyo,
Washington, D.C. and Manchester, N.H. are not of rdf:type City).
The fact that I see so little complaining about this on the mailing
list seems to indicate that not a lot of people are trying to do real
work with it.


I ask myself all the time why DBpedia (and now also Uberblic) uses its 
own (very huge) ontology specification in the background. Of course, they 
sometimes re-use some pieces of (well-established) ontology 
specifications. However, I think this pattern should be strongly 
reinforced. There are some good (well-defined and well-established) 
domain-specific ontology specifications out there, e.g. the Music 
Ontology (for the music domain), which should be used instead of 
DBpedia's own concept and property definitions.
I know one could also say that we could apply ontology 
mapping/alignment here. However, that would blow up the whole knowledge 
base (with obsolete mappings), and it would slow down the reasoning 
process over it. I also know that everyone is free to say everything 
about everything. Still, I think it introduces a lot of redundancy if we 
define the same concepts and properties over and over again and use the 
same definitions to explain their meaning.
If we would like a huge distributed database on the Web, then we should 
at least agree on some important 'best practice' patterns (ontology 
reuse is one of them) to establish good interlinking between 
single datasets.



Cheers,


Bob



Re: destabilizing core technologies: was Re: An RDF wishlist

2010-07-02 Thread Bob Ferris

Hi Ian,

But now people are seeing some of
the data being made available in browsable form, e.g. at data.gov.uk
or dbpedia, and saying, "I want to make one of those."


I don't really believe that people would say, after browsing DBpedia, 'I 
want to make one of those'. That's not the user experience users expect 
to get. Please remember the recent Semantic-Web-UI discussion. 
People tend to expect richer visualisations of the 
data/knowledge/information in the background. Lately I often hear 
the term 'storytelling' - and that's the point, I think.


Cheers,


Bob





Re: Show me the money - (was Subjects as Literals)

2010-07-02 Thread Bob Ferris

Hi Richard,

 Such

work can not be realistically done within W3C for obvious reasons. It
has to be done outside W3C by the community.


I believe that's what the normal/standard web developers (I think 
Henry Story called them Web Monkeys ;) ) already do, no?


Cheers,


Bob



Re: Subjects as Literals

2010-07-01 Thread Bob Ferris

Hello everybody,

I think the main issues have already been discussed. Hence, here are some 
summarized notes of my thoughts:


1. We shouldn't dictate that a user (whether a machine or a human being) 
has to go this way and not the other one. Leaving this decision to the 
user leads to more user satisfaction (that's a natural point of view, in 
my mind).
That means an inverse relation should exist at all times. If an inverse 
relation carries a new meaning, e.g. 'child' as the inverse relation of 
'father', then we should define this property explicitly. If not, then we 
should at least define an anonymous inverse property (as also discussed 
here [1]).
The outcome is that an engine which processes statements into the 
knowledge base should always be able to resolve an incoming statement. If 
the statement isn't in the form in which it can be stored in the knowledge 
base (I think it is better not to store statements of an anonymous inverse 
property), then the engine has to transform it into the valid form 
(maybe it's even enough to store one direction and calculate the inverse 
relation(s)).
(If the machines don't have the computational power yet, then they will 
have it at least in the near future.)


2. We wouldn't write back literals if we didn't know their context; 
e.g., changing the name of a person wouldn't happen if we don't 
know the person (the identifier of that person). That means we always 
have a context.


3. I really don't understand the distinction between datatypes and 
individuals (and their disjointness, as Michael Schneider pointed out; 
maybe it's a bit of a naive point of view, or I don't have deep enough 
knowledge to really understand DL).
What about handling (datatyped) literals as built-in individuals? E.g. a 
string-typed literal would then be internally resolved to an ex:String 
individual. We could reuse the well-defined XSD datatypes etc.


4. Don't believe the JSON hype ;)
However, feel free to design a good semantic graph serialisation format 
based on JSON. JSON looks better than XML. N3 also looks better than 
XML. Currently, we already have a very good semantic graph 
serialisation format based on N3. Why not hype that one? ;)


Cheers,


Bob

[1] 
http://www.semanticoverflow.com/questions/1126/when-should-i-use-explicit-anonymous-defined-inverse-properties


On 01.07.2010 05:14, Pat Hayes wrote:


On Jun 30, 2010, at 8:14 PM, Ross Singer wrote:


I suppose my questions here would be:

1) What's the use case of a literal as subject statement (besides
being an academic exercise)?


A few off the top of my head.

1. Titles of books, music and other works might have properties such as
the date they were registered, who owns them, etc..
2. Dates may have significant properties such as being the day that
someone was shot or when war broke out.
3. Dates represented as character strings in some known date format
other than XSD can be asserted to be the same as a 'real' date by
writing things like

"01-02-1481" sameDateAs "01022010"^^xsd:date .
"01-02-1481" isDateIn :MuslimCalendar .

I am sure that you can think of many more. In general, allowing strings
as subjects opens the door to a wide range of uses of RDF to 'attach'
information to pieces of text. Another example which occurs to me: this
piece of text is the French translation of that piece of text, expressed
as a single RDF triple with two literals.
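Spelled out (with an illustrative property name; note that a literal in subject position is exactly what current RDF syntax forbids, which is the point under discussion):

```turtle
"Le chat est sur le tapis."@fr ex:frenchTranslationOf "The cat is on the mat."@en .
```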

4. It has been noted that one can map datatyping into RDF itself by
treating the datatypes as properties, and there are several use cases
for this. The natural way to do it involves having literals as subject,
since the datatype map goes from the string to the value:

"23" xsd:number "23"^^xsd:number .

5. Also, allowing this purely academically has the notable advantage
of simplifying RDF(S) inferencing, including making the forward-chaining
rules simpler. Right now, there is a strange oddity involving blank node
instantiations. One can say things like 'the number of my children is
prime' by using a blank node:

:PatHayes hasNumberOfKids _:x .
_:x :a :PrimeNumber .

But this legal RDF can't be instantiated in the obvious way:

:PatHayes hasNumberOfKids "3"^^xsd:number .
"3"^^xsd:number :a :PrimeNumber .

This trips up RDFS reasoners, which can often produce inferences by a
kind of sneaky use-a-bnode-instead maneuver even when the obvious
conclusion cannot be stated because of the restriction. (There are a few
examples in the RDF semantics document.) Removing the restriction would
enable reasoners to work more efficiently with a smaller set of rules.
(I gather that at least some of the RDFS rule engines out there already
do this, internally.)


2) Does literal as subject make sense in linked data (I ask mainly
from a follow your nose perspective) if blank nodes are considered
controversial?


Seems to me that from the linked data POV, anything that can be an
object should also be useable as a subject. Of course, that does allow
for the view that both of them should only ever be IRIs, I guess.

Pat 

Re: The Ordered List Ontology

2010-06-28 Thread Bob Ferris

On 28.06.2010 10:17, Barry Norton wrote:


Bob, I wrote a similar representation in WSML-Flight [1] a few years ago
[2], where it was possible to construct an axiom that for a list of
length n there should exist unique values for each of the indices 1-n,
and no others. I doubt that this is possible here (without RIF), is it?



Hi Barry,

as far as I can see, you used rdf:List/rdf:rest for the list modelling, 
is that right? Maybe you also followed the still-ongoing discussion 
about rdf:List [1].
One conclusion for me was that we need another concept which is 
independent of rdf:List (and rdf:Seq). I also thought about adding further 
properties, especially olo:previous and olo:next, to express a 
concatenated list.
These properties could get an owl:cardinality restriction of 1. However, 
it might then still be possible to define two slots that have the same 
index in the ordered list, and I currently don't know how to prevent this 
without using a rule which defines that there can be only one slot per 
index.
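A sketch of what this could look like with the draft OLO terms (olo:next and the cardinality restriction are exactly the parts still under discussion here, so names and axioms may change; the ex: resources are made up):

```turtle
@prefix olo:  <http://purl.org/ontology/olo/core#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:   <http://example.org/> .

ex:playlist a olo:OrderedList ;
    olo:slot ex:slot1 , ex:slot2 .

ex:slot1 a olo:Slot ;
    olo:index 1 ;
    olo:item  ex:trackA ;
    olo:next  ex:slot2 .    # optional iterator to the next slot

ex:slot2 a olo:Slot ;
    olo:index 2 ;
    olo:item  ex:trackB .

# At most one successor per slot - but note this alone cannot prevent
# two slots from sharing the same olo:index:
olo:Slot rdfs:subClassOf [
    a owl:Restriction ;
    owl:onProperty olo:next ;
    owl:maxCardinality "1"^^xsd:nonNegativeInteger ] .
```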


Cheers,


Bob

PS: Please also think about the naming a bit: olo:OrderedList vs. 
olo:Sequence - which one would you prefer? (/me +1 olo:OrderedList)


[1] 
http://old.nabble.com/What-is-it-that%27s-wrong-with-rdf%3AList-to28920391.html






Re: The Ordered List Ontology

2010-06-28 Thread Bob Ferris

Hi Aldo,
Hi Silvio,

Thanks a lot, Silvio, for the Collection Ontology. I somehow overlooked 
this ontology.


On 28.06.2010 16:29, Aldo Gangemi wrote:

Yes, I like the SWAN ontology ... I remember some time ago I wanted to
modularize it and submit the modules as design patterns :).

Consider that, besides the typing problem in OLO, there is a difference
between OLO and SWAN in that OLO allows for slots that enable a
designer to assign indexes to items directly, while SWAN does not have
indexes, although they can be inferred with a query over the
swan:nextItem property. SWAN has the advantage of making a clear
distinction between sets, bags and lists.


Yes, the initial and primary access method for single slots in an ordered 
list should be olo:index. The secondary access method is its (currently) 
optional iterator olo:next, as a shortcut to the next slot in the list.




In principle, with a RIF rule added to SWAN (or a SPARQL/SPIN add-on),
you can get the same results as in OLO, while being able to reason with
transitivity over a sequence relation in a list.

Considering sequencing, it'd be nice to decouple transitivity and
intransitivity (easier queries and rules), cf. the sequence design
pattern in ODP [3].


The transitivity re. the 'follows' issue is also very interesting. Maybe 
we could add it as well. However, I then foresee many triples for the 
transitive 'follows' properties, which implies a more complicated update 
mechanism. Maybe one has to evaluate the performance of the different 
approaches.


 However, why do you want to represent ordered lists, slots and items 
 as [ rdf:type owl:Class ] (or rdfs:Class)?


Because I like to use the most abstract concept of the meta model here. In 
the OWL world this is, for me, owl:Class or owl:Thing, and in the RDFS 
world it is rdfs:Resource (as the most abstract concept 
overall) and rdfs:Class.


 While a list is a set mathematically speaking, is there any advantage 
 in representing the lists you want to talk about as sets?


 This has some bad consequences. In your example, SexMachine and
 GoodFoot are inferred to be [ rdf:type owl:Class ], not only [
 rdf:type mo:Track ]. Therefore James Brown turns out to be the author
 (foaf:made) of an owl:Class (SexMachine), which is at least awkward
 :).

Thanks for that hint, Aldo. I removed the rdfs:range from olo:item in 
version 0.5 [1].


Feel free to add further comments, suggestions and criticism.

Cheers,


Bob

[1] 
http://motools.svn.sourceforge.net/viewvc/motools/orderedlistsonto/branches/orderedlistsonto_v03/rdf/orderedlistontology.n3




Re: 303 redirect to a fragment – what should a linked data client do?

2010-06-26 Thread Bob Ferris

Hi,

On 10.06.2010 14:34, Nathan wrote:

Christoph LANGE wrote:

2010-06-10 13:40 Christoph LANGE ch.la...@jacobs-university.de:

in our setup we are still somehow fighting with ill-conceived legacy
URIs
from the pre-LOD age. We heavily make use of hash URIs there, so it
could
happen that a client, requesting http://example.org/foo#bar (thus
actually
requesting http://example.org/foo) gets redirected to
http://example.org/baz#grr (note that I don't mean
http://example.org/baz%23grr here, but really the un-escaped hash). I
observed that when serving such a result as XHTML, the browser (at least
Firefox) scrolls to the #grr fragment of the resulting page.


Update for those who are interested (all tested on Linux, test with
http://kwarc.info/lodtest#misc --303--
http://kwarc.info/clange/publications.html#inproc for yourself):

* Firefox: #inproc
* Chromium: #inproc
* Konqueror: #inproc
* Opera: #misc

That given, what would an _RDFa_-compliant client have to do? I guess it
would have to do the same as an RDF client, i.e. look into @about
attributes
if in doubt.


As Michael pointed out, there's an open ticket related to this on HTTPBis.

First, I'd suggest that we don't need to worry about what's displayed by
the User Agents, it doesn't really have any bearing on the RDF contained
in the response (even with RDFa).

Second, as with my previous reply, what happens with the dereferencing
process is entirely orthogonal and abstracted from the RDF side of
things, thus I'd suggest that in all cases when you want to find the
description for a URI, you dereference it and consult the RDF
description you get back.

If you get no RDF then you don't have a description, if you do then
check the subject and object values of the triples to see if you can get
a description. Everything that happens between is of no concern to us :)


However, I think this is still the important gap we have to bridge 
between 'the old' existing web and 'the new' forthcoming web, which will 
hopefully provide a semantic graph knowledge/information representation 
behind every dereferenceable URI.


Cheers,

Bob






The Counter Ontology

2010-06-24 Thread Bob Ferris

Hello,

Apologies for cross-posting ;)

Here is the Counter Ontology [1], which includes a general multi-purpose 
counter concept. This concept can be used to associate any 
owl:Thing-typed resource with (a) co:Counter instance(s) via the property 
co:counter or a specific sub-property of it [2,3]. The second property 
of co:Counter is co:count, which is a simple xsd:int-based datatype 
property. That means you could use this concept for things like, for 
example, a play counter, a skip counter or a website hit counter.
Furthermore, this ontology already includes a predefined property to 
associate event-specific (event:Event [4]) counters with their related 
events (co:event_counter). This provides the opportunity to trace back 
all related events that are responsible for a specific count. Of course, 
this is also possible with all other owl:Thing-typed concepts ;)
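A usage sketch (the co: namespace URI and the ex: resources are placeholders here; see [1] for the actual definitions):

```turtle
@prefix co:  <http://purl.org/ontology/co/core#> .   # namespace assumed
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/> .

# A play counter attached to a track:
ex:someTrack co:counter ex:playCounter .

ex:playCounter a co:Counter ;
    co:count "42"^^xsd:int .
```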


So please let me know whether this concept is strong enough as a general 
multi-purpose counter concept. Feel free to add comments, suggestions 
and criticism.


Cheers,


Bob

[1] 
http://motools.svn.sourceforge.net/viewvc/motools/counteronto/trunk/rdf/counterontology.n3
[2] 
http://motools.svn.sourceforge.net/viewvc/motools/counteronto/trunk/gfx/co_-_Counter.gif
[3] 
http://motools.svn.sourceforge.net/viewvc/motools/counteronto/trunk/gfx/co_-_Counter_graph.gif

[4] http://purl.org/ontology/event.owl#Event



Re: Info Service Ontology - 1st draft

2010-06-22 Thread Bob Ferris

Hi,

Here is some news re. the Info Service Ontology:

It now has its own repository [1] + mailing list [2]. Feel free to join 
the development process or leave comments/suggestions/criticism. You are 
welcome ;)


Cheers,

Bob

PS: Please also note that there is a new revision of the ontology (v 
0.4) [3,4,5].


[1] http://sourceforge.net/projects/infoserviceonto/
[2] http://groups.google.com/group/info-service-ontology-specification-group
[3] 
http://infoserviceonto.svn.sourceforge.net/viewvc/infoserviceonto/infoservice/trunk/rdf/infoservice.n3
[4] 
http://infoserviceonto.svn.sourceforge.net/viewvc/infoserviceonto/infoservice/trunk/gfx/infoservice.gif
[5] 
http://infoserviceonto.svn.sourceforge.net/viewvc/infoserviceonto/infoservice/trunk/gfx/is_-_musicbrainz_example.gif 



On 19.06.2010 23:39, Bob Ferris wrote:





Info Service Ontology - 1st draft

2010-06-19 Thread Bob Ferris

Hello,

I thought this ontology might also be of interest for the lod mailing.
Initially a Music Ontology issue, later a FOAF Ontology issue and now 
even with a broader scope ;)


Yesterday I designed the Info Service Ontology [1,2]. The initial 
intention behind designing this ontology was to add some knowledge re. 
linked websites from different info services to semantic graphs [5], e.g.


<http://musicbrainz.org/artist/8a1fe33d-6029-462e-bcb7-08e0ebaba6dd.html>
    a foaf:Document ;
    is:info_service mo:musicbrainz .

The Info Service Ontology consists of a basic is:InfoService concept 
(which could perhaps be owl:equivalentClass of prv:DataProvidingService) 
and some additional ones for describing such an info service 
(is:InfoServiceQuality, is:InfoServiceType - the specific individuals 
are currently only proof-of-concept examples). The main hook re. 
specific websites from an info service is is:info_service, which 
associates an is:InfoService instance with a foaf:Document instance (a 
website link).
Furthermore, I defined some is:InfoService individuals, especially 
is:musicbrainz [3], as a proof-of-concept example. For that, I also used 
some category definitions from DBpedia.


Please feel free to add comments, criticism and suggestions re. which 
properties might be useful for describing an info service.


Planned extensions are:
- enabling multiple info service quality ratings, e.g. by using [4]
- defining an is:recommendation property
- maybe extending the domain of is:info_service to owl:Thing, to enable 
info service associations for data entities other than foaf:Document; 
this means all data such an info service can provide, e.g. 
semantic graphs
- adding further info service quality properties; this should maybe be 
done in a separate sub-ontology, because rating information quality can 
be somewhat complex and realized at different levels of complexity


With this property (is:info_service as a relation to an info service 
description) it should be possible, e.g., to give users the opportunity 
to choose their preferred info services as data sources for their 
knowledge base by evaluating the different properties of such an info 
service.
I am currently working on my Master's-like thesis, on the topic 
"Semantic Federation of Musical and Music-Related Information for 
Establishing a Personal Music Knowledge Base". One concern is that the 
user should be able to select music info services of their choice, or 
the application will choose the data sources automatically by 
evaluating the user profile and comparing it with the info service 
descriptions.


That's all for the moment ;)

Cheers,

Bob


[1] 
http://motools.svn.sourceforge.net/viewvc/motools/infoservice/trunk/rdf/infoservice.n3
[2] 
http://motools.svn.sourceforge.net/viewvc/motools/infoservice/trunk/gfx/infoservice.gif
[3] 
http://motools.svn.sourceforge.net/viewvc/motools/infoservice/trunk/gfx/is_-_musicbrainz_example.gif

[4] http://purl.org/stuff/rev#
[5] http://wiki.foaf-project.org/w/FOAF_and_InformationServices



'owl:Class and rdfs:Class' vs. 'owl:Class or rdfs:Class'

2010-06-16 Thread Bob Ferris

Hi,

does anyone know of an already-defined best practice re. using 
'owl:Class and rdfs:Class' vs. 'owl:Class or rdfs:Class' type definitions 
for concepts in ontologies? (I've searched ontologydesignpatterns.org 
for it, but didn't find anything.)
For example, the FOAF ontology uses both types in its ontology 
definition [1] (linked here for better reading ;) ). However, I think 
this is due to the evolution of the FOAF ontology, i.e. it was first 
defined using only rdfs:Class, and owl:Class was added later. On the 
other hand, for example, the Music Ontology [2] uses only owl:Class for 
its concept definitions (it was designed some years later).
The reason for supporting both is that RDFS-only systems are then also 
able to process semantic graphs from ontologies with rdfs:Class-typed 
concepts.
On the other hand, modern SPARQL engines, such as the one in the 
Virtuoso Server [3], are able to handle transitivity - a feature which 
is very important re. ontologies (I think).


Cheers,

Bob


[1] http://www1.inf.tu-dresden.de/~s9736463/ontologies/FOAF_-_20100101.n3
[2] http://motools.sourceforge.net/doc/musicontology.n3
[3] http://virtuoso.openlinksw.com/features-comparison-matrix/



Re: 'owl:Class and rdfs:Class' vs. 'owl:Class or rdfs:Class'

2010-06-16 Thread Bob Ferris
Well, I think we got to the point during a discussion in the #swig 
channel. The conclusion is:


- if one uses OWL features for modelling an ontology, define the 
concepts only with owl:Class, because RDFS systems wouldn't know how to 
handle these features anyway

- if not, feel free to include both types

I think the first case is more common in modern ontology modelling, 
because it is more powerful / one can express more.
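For the second case, dual typing looks like this in Turtle (ex:Agent is just an illustrative class):

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix ex:   <http://example.org/terms#> .

# Dual typing: RDFS-only systems see the rdfs:Class type,
# OWL systems see the owl:Class type.
ex:Agent a rdfs:Class , owl:Class ;
    rdfs:label "Agent" .
```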


Cheers,

Bob


On 16.06.2010 13:40, Antoine Zimmermann wrote:

I don't think there is an established best practice related to this
topic.  Moreover, your choice may depend on your application, use case,
practical needs, etc. However, as far as I can foresee, using both
rdfs:Class and owl:Class is perfectly safe wrt RDF/RDFS tools and
perfectly safe wrt OWL tools.

AZ

On 16/06/2010 12:08, Bob Ferris wrote:






Re: MuSim Ontology problems (was Re: Share, Like Ontology)

2010-06-14 Thread Bob Ferris

Hi Antoine,
Hi Kurt,
Hi at all from the different lists,

On 13.06.2010 22:13, Kurt J wrote:

Hi Antoine,

I'm very glad to have you review my ontology - apparently it had some
significant problems!



I have some comments on your ontology:
  1) related to OWL DL
  2) related to the use of owl:Class VS rdfs:Class, owl:*Property VS
rdf:Property
  3) related to musim:element, musim:subject and music:object
  4) related to musim:distance and musim:weight


=
1) OWL DL
=
Are you aware that your ontology is not in OWL DL and is it intentional?
Please notice that minor changes would make it a DL ontology.  I can give
you the details if you are interested.



I never made a conscious decision about this one way or the other.
I've spent some time working on it, and now the only thing that
doesn't validate as OWL DL seems to be the info about the ontology
itself.  I get an Untyped Individual:
http://purl.org/ontology/similarity/

do i need to move this info to a separate doc to get OWL DL?





=
2) owl:Class VS rdfs:Class; owl:*Property VS rdf:Property
=
All your classes and properties are declared using the OWL vocabulary.
It would be good to have, *in addition* to this, a declared type rdfs:Class
and rdf:Property like this:

:Similarity a owl:Class, rdfs:Class;
rdfs:label ...;
rdfs:subClassOf ... etc.


done, thnx!



I do not really understand the need for rdfs:Class:
owl:Class is already defined with rdfs:subClassOf rdfs:Class (same thing 
for the properties). So it is a transitivity issue, and it depends on 
the reasoner used to resolve that issue.





=
3) musim:element, musim:subject and musim:object
=
musim:element is defined as a FunctionalProperty, which means that a given
thing can have at most one element.  Moreover, musim:subject and
musim:object are subproperties of musim:element, so a subject is an element
and an object is also an element of a thing.  Now that means that it is not
possible to have a subject and an object which are different.
The following knowledge base is inconsistent wrt your ontology:

my:assoc :subject my:sub ;
 :object  my:obj .
my:sub owl:differentFrom my:obj .

This contradicts the description of the properties subject and object.
I imagine that you want to say that an association has one subject and one
object.  Why not use a class restriction like:


yes this is not what i meant at all!  None of these should be
FunctionalProperty and i'm not sure why they ever were.


:Association a owl:Class, rdfs:Class ;
    rdfs:subClassOf [
        a owl:Restriction ;
        owl:onProperty :subject ;
        owl:someValuesFrom owl:Thing ] ;
    rdfs:subClassOf [
        a owl:Restriction ;
        owl:onProperty :object ;
        owl:someValuesFrom owl:Thing ] .
If your data are likely to be processed by an OWL 2 RL reasoner, this would
not be a good solution since it is forbidden in the RL profile of OWL 2 (but
is allowed in EL, QL, DL and Full).


actually, i have come across applications where it would be desirable
to have one subject and multiple objects (or vice versa).  the
restriction conceptually should be something like: if there is a
subject, there must be at least one object, and if there is an object
there must be at least one subject - i'm not sure how to encode that,
so for now i'm removing all the restrictions.
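One way to encode exactly that constraint in OWL would be a general class-inclusion axiom between two anonymous restrictions - a sketch only (prefixes as in the snippets above), and not every reasoner accepts axioms of this shape:

```turtle
# Everything that has a :subject must also have at least one :object:
[ a owl:Restriction ;
  owl:onProperty :subject ;
  owl:someValuesFrom owl:Thing ]
    rdfs:subClassOf
[ a owl:Restriction ;
  owl:onProperty :object ;
  owl:someValuesFrom owl:Thing ] .

# ... plus the mirror axiom for :object implying :subject.
```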


Antoine's definition expresses the issue you are concerned about (at 
least I think so ;) ).





Moreover, :element, :subject and :object are AsymmetricProperties.  While
this obviously makes sense at the conceptual level, I don't think it fits
with your ontology.  Your ontology is very lightweight and little
constrained (which is good) except for these properties.  I don't think it
adds much to explicitly put such a constraint.


yes now that i consider this, it makes more sense to just leave it
unconstrained.



=
4) musim:distance and musim:weight
=
I notice that you are defining two datatype properties with multiple range
restriction:

:distance a owl:DatatypeProperty;
rdfs:range xsd:float;
rdfs:range xsd:int;
rdfs:range xsd:double .

and

:weight a owl:DatatypeProperty;
rdfs:range xsd:float;
rdfs:range xsd:int;
rdfs:range xsd:double .

I'm quite sure that it is not what you intend to mean and I imagine that you
would like to say that the weight or the distance can be either a float, a
double or an int.  Here you actually specify that the distance and the
weight of something is necessarily a float, an int and a double at the same
time.

Furthermore, the OWL spec [1] says that:

As specified in XML Schema [XML Schema Datatypes], the value spaces of
xsd:double, xsd:float, and xsd:decimal are pairwise disjoint.

This implies that :distance and :weight are in fact empty relations since it
is impossible to have a value which is both a float and a double.  Using
:distance or :weight in the predicate position of any triple would make the
knowledge base inconsistent.
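What was probably intended can be said with an OWL 2 datatype union (a sketch; tool support for datatype unions may vary):

```turtle
:distance a owl:DatatypeProperty ;
    rdfs:range [ a rdfs:Datatype ;
        owl:unionOf ( xsd:float xsd:int xsd:double ) ] .
```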

If you want to say that a 

Re: MuSim Ontology problems (was Re: Share, Like Ontology)

2010-06-14 Thread Bob Ferris

Hi,



I do not really understand the need for rdfs:Class:
owl:Class is already defined with rdfs:subClassOf rdfs:Class (same thing
for the properties). So it is a transitivity issue, and it depends on
the reasoner used to resolve that issue.


owl:Class is defined as a subclass of rdfs:Class *in the OWL
specifications*. The RDF/RDFS specification does not say anything about
owl:Class. So, from a pure RDFS perspective, owl:Class has as much
meaning as, e.g., xyz:abc. The fact that someone defines *somewhere*
that xyz:abc is a subclass of rdfs:Class is irrelevant from a pure RDFS
system point of view. As I said in my example, a SPARQL query would not
be able to retrieve the OWL classes or properties that are not directly
asserted as RDFS classes or properties (unless the SPARQL engine
implements part of the OWL spec, which is rarely the case).

Now, that's a small issue but there is no disadvantage of putting the
additional types, as far as I know.



This implies for me that I would have to add rdfs:Class to every 
owl:Class-based definition in every ontology. That makes no real sense 
to me, sorry. It's somehow a step backwards.




Yet, it's easy to make a programme that deals equally well with all
these values, whereas it is difficult to ensure that everybody will use
the three datatypes mentioned in the range assertion.


That's why we've defined that restriction: the use of these 
datatypes is well defined in XSD, and they are supported by standard 
programming languages, which can process them as this kind of datatype.




In the absence of range assertion, such values as:

ex:sim :distance "very similar" .
ex:sim :distance "+++"^^xsd:string .



These values are somehow unspecific, because they could only exist 
with a specific definition of their meaning, which I could at least map 
to a numeric value, e.g. a scale of 1-5. I think the numeric value is all 
I need, because the values of sim:distance and sim:weight are somehow 
technical and need an extra translation step for human reading. So I 
could define the specific human-readable terms in different languages 
(English, Spanish, French, ...).


Cheers,

Bob