Re: Vocabulary to describe software projects and their dependencies

2016-08-09 Thread Graham Klyne

On 08/08/2016 14:57, Martynas Jusevičius wrote:

DOAP vocabulary comes very close: https://github.com/edumbill/doap

Too bad it looks to be unmaintained. Strangely, the schema does not
seem to support relationships (dependencies) between projects.


Just a thought:  would DOAP + PROV fit the bill?
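To make the thought concrete, here is a hypothetical sketch of combining the two vocabularies in Turtle. The choice of prov:wasInfluencedBy to express "depends on" is illustrative only (PROV has no dedicated dependency property), and the ex: resources are invented:

```turtle
@prefix doap: <http://usefulinc.com/ns/doap#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix ex:   <http://example.org/projects/> .

ex:myapp a doap:Project ;
    doap:name "MyApp" ;
    # PROV's generic influence relation, pressed into service here to say
    # "depends on"; a dedicated subproperty would be clearer in practice.
    prov:wasInfluencedBy ex:somelib .

ex:somelib a doap:Project ;
    doap:name "SomeLib" .
```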

#g
--




Re: New LOD dataset for media types

2015-11-05 Thread Graham Klyne

Hi Silvio,

Nice work: this could be handy.  I have a couple of questions/comments:

1. I assume the machine formats are accessible by content negotiation?  It might 
be handy to also include direct links on the web page.  For example, I found I 
could change the .html suffix to .ttl or .json to get alternative 
representations, but that required guesswork on my part.


2. Are there any plans (and resources) in place to ensure longevity of the 
sparontologies.net domain?


3. I note that the HTML page indicates cc-by licensing of the content, but there 
is no such information in the machine-readable formats.


#g
--


On 04/11/2015 23:21, Silvio Peroni wrote:

Dear friends,

We are pleased to announce a new LOD dataset

http://www.sparontologies.net/mediatype

that makes available media types defined as proper resources in RDF, according 
to the SPAR Ontologies [1] and DCTerms [2].

A media type is an identifier (for example "text/turtle") for file formats on the Internet, 
composed of two parts: a registry ("text" in the example) and a record ("turtle" in the example). 
Media types are handled by the Internet Assigned Numbers Authority (IANA), the official authority 
for the standardisation and publication of these classifications.

The aforementioned web space, part of the SPAR Ontologies website, has been reserved for providing the RDF 
representation of each media type defined by IANA [3]. In particular, a media type is accessible by 
concatenating the URL "http://www.sparontologies.net/mediatype/" with its related identifier. For 
instance, "http://www.sparontologies.net/mediatype/text/turtle" allows one to access the specific 
resource for the "text/turtle" media type.
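The concatenation rule described above can be sketched in a few lines of Python. This is an illustrative helper, not part of the SPAR tooling; the base URL is the one given in the announcement (a later message in this thread moves the entities under w3id.org):

```python
# Sketch: build the SPAR media-type resource IRI for an IANA media type.
# Base URL taken from the announcement above; function name is invented.

BASE = "http://www.sparontologies.net/mediatype/"

def mediatype_iri(media_type: str) -> str:
    """Return the resource IRI for a media type such as 'text/turtle'.

    The identifier already contains the '/' separating registry and
    record, so it is appended to the base URL unchanged.
    """
    registry, _, record = media_type.partition("/")
    if not registry or not record:
        raise ValueError(f"not a registry/record pair: {media_type!r}")
    return BASE + media_type

print(mediatype_iri("text/turtle"))
# http://www.sparontologies.net/mediatype/text/turtle
```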

Each media type can be accompanied by agents who acted as contributors, the 
RFC documents defining it, its current status (official, deprecated or 
obsoleted), and direct links to Wikipedia pages and DBpedia resources 
related to the media type.

These media-type resources can thus be used to specify the particular formats 
(e.g., by means of the DCTerms property dcterms:format) that a certain entity, 
such as a book or a dataset, can have.
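For example, a dataset description could point at one of these resources as its format. This is an illustrative fragment (the ex: resource is invented; the media-type IRI follows the pattern above):

```turtle
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix dctype:  <http://purl.org/dc/dcmitype/> .
@prefix ex:      <http://example.org/> .

ex:mydataset a dctype:Dataset ;
    dcterms:title "An example dataset" ;
    # the SPAR media-type resource stands in as the format value
    dcterms:format <http://www.sparontologies.net/mediatype/text/turtle> .
```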

Please do not hesitate to contact us (sparontolog...@gmail.com) for questions 
and additional information about this LOD dataset.

Have a nice day :-)

S.



1. http://www.sparontologies.net
2. http://dublincore.org/documents/dcmi-terms/
3. http://www.iana.org/assignments/media-types/media-types.xml



Silvio Peroni, Ph.D.
Department of Computer Science and Engineering
University of Bologna, Bologna (Italy)
Tel: +39 051 2094871
E-mail: silvio.per...@unibo.it
Web: http://www.essepuntato.it
Twitter: essepuntato






Re: New LOD dataset for media types

2015-11-05 Thread Graham Klyne


On 05/11/2015 16:35, Silvio Peroni wrote:

Perhaps establish something like https://w3id.org/sparontologies/mediatype/ 
and redirect to your server. I believe you could have stronger adoption and use 
of your ontology if you adopt a good IRI design and persistence strategy up front.

That’s a good point, and I’ve just finished updating everything, since it was 
possible in a relatively short time :-)

In particular, I’ve just made a request to w3id.org for a space, and I’ve 
updated the URLs of all the entities. Now they are accessible using the



Hi Silvio,

Responding to your earlier question, this kind of community-underwritten hosting 
is the sort of thing I was contemplating.


Thanks!

#g
--



Re: Microsoft Access for RDF?

2015-02-25 Thread Graham Klyne

Hi Kingsley,

In https://lists.w3.org/Archives/Public/public-lod/2015Feb/0116.html
You said, re Annalist:

My enhancement requests would be that you consider supporting of at
least one of the following, in regards to storage I/O:

1. LDP
2. WebDAV
3. SPARQL Graph Protocol
4. SPARQL 1.1 Insert, Update, Delete.

As for Access Controls on the target storage destinations, don't worry
about that in the RDF editor itself, leave that to the storage provider
[1] that supports any combination of the protocols above.


Thanks for your comments and feedback - I've taken note of them.

My original (and current) plan is to provide HTTP access (GET/PUT/POST/etc) with 
a little bit of WebDAV to handle directory content enumeration, which I think 
is consistent with your suggestion (cf. [1]).  The other options you mention are 
not ruled out.
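For context, the WebDAV part of that plan amounts to a PROPFIND request with Depth: 1 to list a collection's members. A minimal sketch of assembling such a request (the collection URL is hypothetical, and the request is built but never sent):

```python
import urllib.request

# Body asking the server for all properties of the collection's members.
PROPFIND_BODY = b"""<?xml version="1.0" encoding="utf-8"?>
<propfind xmlns="DAV:"><allprop/></propfind>"""

def propfind_request(collection_url: str) -> urllib.request.Request:
    """Build (but do not send) a Depth: 1 PROPFIND request, which in
    WebDAV enumerates the immediate members of a collection."""
    return urllib.request.Request(
        collection_url,
        data=PROPFIND_BODY,
        method="PROPFIND",
        headers={"Depth": "1", "Content-Type": "application/xml"},
    )

req = propfind_request("https://example.org/annalist/coll/")
print(req.get_method())  # PROPFIND
```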


You say I shouldn't worry too much about access control, but leave that to the 
back-end store.  If by this you mean *just* access control, then that makes 
sense to me.


A challenge I face is to understand what authentication tokens are widely 
supported by existing HTTP stores.  Annalist itself uses OpenID Connect (a la 
Google+, etc.) as its main authentication mechanism, so I cannot assume that I 
have access to the original user credentials to construct arbitrary security tokens.


I had been thinking that something based on OAuth2 might be appropriate (I 
looked at UMA [2], had some problems with it as a total solution, but I might be 
able to use some of its elements).  I took a look at the link you provided, but 
there seem to be a lot of moving parts and I couldn't really figure out what you 
were describing there.


Thanks!

#g
--

[1] https://github.com/gklyne/annalist/issues/32

[2] http://en.wikipedia.org/wiki/User-Managed_Access, 
http://kantarainitiative.org/confluence/display/uma/Home






Re: Microsoft Access for RDF?

2015-02-20 Thread Graham Klyne

Hi Stian,

Thanks for the mention :)


Graham Klyne's Annalist is perhaps not quite what you are thinking of
(I don't think it can connect to an arbitrary SPARQL endpoint), but I
would consider it as falling under a similar category, as you have a
user interface to define record types and forms, browse and edit
records, with views defined for different record types. Under the
surface it is however all RDF and REST - so you are making a schema by
stealth.

http://annalist.net/
http://demo.annalist.net/


Annalist is still in its prototype phase, but it's available to play with if 
anyone wants to try stuff.  See also https://github.com/gklyne/annalist for 
source.  There's also a Dockerized version.


It's true that Annalist does not currently connect to a SPARQL endpoint, but I 
have recently been doing some RDF data wrangling and have started thinking about 
how to connect to public RDF (e.g. http://demo.annalist.net/annalist/c/CALMA_data/d/ 
is a first attempt at creating an editable version of some music data from your 
colleague Sean).  In this case, the record types and views have been created 
automatically from the raw data, and are pretty basic - but that automatic 
extraction can serve as a starting point for subsequent editing.  (The reverse 
of this, creating an actual schema from the defined types and views, is an 
exercise for the future, or maybe even for a reader :) )


Internally, the underlying data access is isolated in a single module, intended 
to facilitate connecting to alternative backends, which could be via SPARQL 
access.  (I'd also like to connect up with the linked data fragments work at 
some stage.)


If this looks like something that could be useful to anyone out there, about now 
might be a good time to offer feedback.  Once I have what I feel is a minimum 
viable product release, hopefully not too long now, I'm hoping to use feedback 
and collaborations to prioritize ongoing developments.


#g
--




Re: New Hewlett Packard patent may be a barrier to Semantic Web services adoption

2013-08-01 Thread Graham Klyne

I don't have a legal department to challenge the patent, but in case it helps:

  http://www.ninebynine.org/SWAD-E/Scenario-HomeNetwork/HomeNetworkConfig.html

This was work I did around 2002-03, and includes use of RDF technologies to set 
access controls on a Cisco IOS router, which appears to correspond to the first 
primary claim of the patent:


[[
1. An enforcement system for enforcing policies with regard to service requests 
comprising a processor-readable, non-transient medium storing code representing 
instructions that when executed at a processor cause the processor to implement: 
a plurality of enforcer agents adapted to enforce policies; at least one 
explorer agent adapted to evaluate policy enforcement capabilities available to 
the enforcement system; and a policy decision point adapted to identify the 
policies that need to be enforced for a service request and to pass this 
information to at least one enforcer agent to enforce the identified policies.

]]

I recall there was also an Internet draft published about this time that talked 
about using RDF in a network management control layer: see 
http://tools.ietf.org/html/draft-atarashi-netconfmodel-architecture-00.


#g
--


On 31/07/2013 20:15, Martin Hepp wrote:

Dear all:

Yesterday, Hewlett Packard was granted a patent on Policy Enforcement:

http://www.freepatentsonline.com/8498959.html
http://www.freepatentsonline.com/8498959.pdf

As far as I can see, it heavily constrains the commercial exploitation of 
research done in the Semantic Web / Semantic Web Services community from 
2001-2009.

So if you worked on policies in the context of Semantic Web Services or 
Semantic Business Process Management before November 2009, it may be worthwhile 
to check whether the patent claims inventions that you can prove to have been 
prior art at that time.

This may be particularly relevant for the organizers and contributors to the 
various policy workshops co-located with ISWC/ESWC conferences.

I am not familiar with the legal process, but if you feel this patent claims 
what was already publicly known / discussed at conferences back then, please 
ask your employer or legal department to challenge the patent. It may otherwise 
put the usage of SWS in business applications at risk.

Best wishes

Martin


martin hepp
e-business  web science research group
universitaet der bundeswehr muenchen

e-mail:  h...@ebusiness-unibw.org
phone:   +49-(0)89-6004-4217
fax: +49-(0)89-6004-4620
www: http://www.unibw.de/ebusiness/ (group)
  http://www.heppnetz.de/ (personal)
skype:   mfhepp
twitter: mfhepp








Re: Subjects as Literals, [was Re: The Ordered List Ontology]

2010-07-02 Thread Graham Klyne

[cc's trimmed]

I'm with Jeremy here, the problem's economic not technical.

If we could introduce subjects-as-literals in a way that:
(a) doesn't invalidate any existing RDF, and
(b) doesn't permit the generation of RDF/XML that existing applications cannot 
parse,


then I think there's a possible way forward.
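For context, the standard workaround today is to hang the literal off a blank node, with the direct form shown only as a comment since no current RDF syntax can express it (the ex: property names are illustrative):

```turtle
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ex:  <http://example.org/> .

# Today: a literal cannot be a subject, so a bnode carries the value ...
_:v rdf:value "42" ;
    ex:measuredBy ex:sensor1 .

# ... whereas subjects-as-literals would permit the direct statement
# (not valid RDF syntax at present):
# "42" ex:measuredBy ex:sensor1 .
```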

#g
--

BTW, which list is the most appropriate for this discussion?  I seem to be 
getting 4 copies of some messages!



Jeremy Carroll wrote:

Jiří Procházka wrote:


I wonder: when using owl:sameAs or similar to name literals, so as to be
able to say other useful things about them in normal triples (datatype,
language, etc.), does it break OWL DL

yes it does


(or any other formalism which is
base of some ontology extending RDF semantics)?


Not OWL full

 Or would it if
rdf:sameAs was introduced?
  


It would still break OWL DL

Best,
Jiri
  
OWL DL is orthogonal to this issue. The OWL DLers already prohibit 
certain RDF - specifically the workaround for not having literals as 
subjects. So they are neutral.
I reiterate that I agree whole-heartedly with the technical arguments 
for making this change; however the economic case is missing.


Jeremy









Re: The Ordered List Ontology

2010-06-30 Thread Graham Klyne

Bob,

A desired feature that led to the current rdf:List structure is the ability to 
close a list - so some separate sub-graph can't silently add properties not 
in the original.  Your pattern might allow this through addition of a 
maxSlotIndex property on olo:OrderedList (not suggesting this as a design, 
just an example).
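To illustrate, a closed two-slot list under that suggestion might look like the following. The olo: terms are from the ontology under discussion; maxSlotIndex is the hypothetical example property from above, not a real term:

```turtle
@prefix olo: <http://purl.org/ontology/olo/core#> .
@prefix ex:  <http://example.org/> .

ex:playlist a olo:OrderedList ;
    # hypothetical closing property, not part of OLO: a consumer could
    # reject any slot whose index exceeds this bound
    ex:maxSlotIndex 2 ;
    olo:slot [ a olo:Slot ; olo:index 1 ; olo:item ex:trackA ] ,
             [ a olo:Slot ; olo:index 2 ; olo:item ex:trackB ] .
```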


#g
--


Bob Ferris wrote:

Hello everybody,

in a longer discussion in the Music Ontology mailing list about how to 
model a playlist, Samer Abdallah came up with a very good proposal[1] of 
modelling a sequence/ordered list (as recently also discussed at RDFNext 
Workshop[2]) as semantic graph (in RDF).

So, here we go:

- specification[3] (please also note the anonymous inverse properties)
- concepts and relations in a graphic[4]
- funky playlist example[5,6]

Again, thanks a lot Samer Abdallah for that cool concept.
Comments, suggestions, critics are very welcome.

Cheers,


Bob

PS: it's all OWL-based ;) however, we could also downgrade the concept to 
the basis of rdfs:Class, if needed.




[1] 
http://groups.google.com/group/music-ontology-specification-group/msg/305a42362a1e4145 


[2] http://www.w3.org/2009/12/rdf-ws/slides/rdflist.pdf
[3] 
http://motools.svn.sourceforge.net/viewvc/motools/orderedlistsonto/trunk/rdf/orderedlistontology.n3 

[4] 
http://motools.svn.sourceforge.net/viewvc/motools/orderedlistsonto/trunk/gfx/olo_-_orderedlist.gif 

[5] 
http://motools.svn.sourceforge.net/viewvc/motools/orderedlistsonto/trunk/examples/orderedlist_-_example.n3 

[6] 
http://motools.svn.sourceforge.net/viewvc/motools/orderedlistsonto/trunk/gfx/olo_-_orderedlist_example.gif 










Re: how to consume linked data

2009-09-25 Thread Graham Klyne

Dan Brickley wrote:

This doc-typing idiom never got heavily used in FOAF, beyond the type
PersonalProfileDocument, which FOAF defines. Mostly we just linked
FOAF files together (initially with seeAlso and IFPs, lately using
URIs more explicitly).

I think there are many other reasons why characterising typical RDF
document patterns makes sense, related to the frustration of dealing
with documents when all you know is they have triples in them. We
don't have good mechanisms for doing so yet, ie. for characterising
these higher level patterns. But various folk are heading in same
direction, some using SPARQL, others OWL or XForms, or DC Application
Profile definitions

Without some hints about what we're pointing at with our links,
crawlers don't have much to go on. Merely knowing that the information
at the other end of the link is more RDF, or that it describes a
thing of a certain type, might not be enough. There are a lot of
things you might want to know about a person, or a place, and at many
different levels of detail. Apps, e.g. those running in a mobile/handheld
environment, can't afford to speculatively download everything.


Interesting...  I'm doing work at the moment with CIDOC-CRM 
(http://cidoc.ics.forth.gr/) and its expression in OWL 
(http://www8.informatik.uni-erlangen.de/IMMD8/Services/cidoc-crm/versions.html). 
Something I've noticed is that the extension/refinement mechanism provided by 
CIDOC-CRM is based on what they call Types (though I think it's more like 
skos:Concept), so that the core properties tend to be very predictable.  There
are some areas where I've used new properties to capture finer-grained 
information, but they tend to be at the margins (e.g. putting numeric values on 
date-ranges) rather than in the core (e.g. this object was made in this time 
period).


Maybe there's scope for using SKOS in a doc-typing idiom?
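As a speculative sketch of that idiom: a document could advertise the kind of description it carries via a SKOS concept from a shared scheme. All the ex: names here are invented, and dcterms:type is just one candidate linking property:

```turtle
@prefix skos:    <http://www.w3.org/2004/02/skos/core#> .
@prefix foaf:    <http://xmlns.com/foaf/0.1/> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix ex:      <http://example.org/doc-patterns/> .

ex:BiographicalSketch a skos:Concept ;
    skos:prefLabel "Biographical sketch"@en ;
    skos:broader ex:PersonDescription .

<http://example.org/people/alice.ttl>
    a foaf:Document ;
    # hint for crawlers about the description pattern found inside
    dcterms:type ex:BiographicalSketch .
```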

#g