Re: Vocabulary to describe software projects and their dependencies

2016-08-09 Thread Martynas Jusevičius
Thanks everyone!

DOAP + PROV sounds like a good idea actually.
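
For instance, a minimal Turtle sketch of what the combination might look
like (the prefixes are the standard ones; using prov:wasDerivedFrom for the
dependency link is an assumption, since DOAP itself has no dependency
property):

  @prefix doap: <http://usefulinc.com/ns/doap#> .
  @prefix prov: <http://www.w3.org/ns/prov#> .

  <#my-project> a doap:Project ;
    doap:name "My Project" ;
    # hypothetical dependency link -- DOAP has no such property,
    # so a generic PROV derivation term stands in for it
    prov:wasDerivedFrom <#some-library> .

  <#some-library> a doap:Project ;
    doap:name "Some Library" .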

On Tue, Aug 9, 2016 at 8:36 AM, Graham Klyne <g...@ninebynine.org> wrote:
> On 08/08/2016 14:57, Martynas Jusevičius wrote:
>>
>> DOAP vocabulary comes very close: https://github.com/edumbill/doap
>>
>> Too bad it looks to be unmaintained. Strangely, the schema does not
>> seem to support relationships (dependencies) between projects.
>
>
> Just a thought:  would DOAP + PROV fit the bill?
>
> #g
> --
>



Vocabulary to describe software projects and their dependencies

2016-08-08 Thread Martynas Jusevičius
Hey,

I am looking for a way to describe a software project in RDF, in detail.

DOAP vocabulary comes very close: https://github.com/edumbill/doap

Too bad it looks to be unmaintained. Strangely, the schema does not
seem to support relationships (dependencies) between projects.

Are there any other vocabularies I should know about?

Please do not suggest https://schema.org/SoftwareApplication as it is
not nearly expressive enough.


Martynas



Re: Dealing with distributed nature of Linked Data and SPARQL

2016-06-08 Thread Martynas Jusevičius
Mikel, a lot of them do, but they are not required to. Both
data sources work as expected; it is only when trying to combine
them that one runs into this situation.

I agree that each of the descriptions could go into separate named
graphs, where the graph name could be the source URI. That is why I
mentioned quads.
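
A minimal TriG sketch of that idea, using the Andy Seaborne resource from
the original mail (the triples themselves are invented for illustration):

  @prefix foaf: <http://xmlns.com/foaf/0.1/> .

  # graph named after the Linked Data document the triples came from
  <http://data.semanticweb.org/person/andy-seaborne/rdf> {
    <http://data.semanticweb.org/person/andy-seaborne> foaf:name "Andy Seaborne" .
  }

  # graph named after the SPARQL endpoint that answered the DESCRIBE
  <http://xmllondon.com/sparql> {
    <http://data.semanticweb.org/person/andy-seaborne> foaf:name "Andy Seaborne" ;
      foaf:homepage <http://example.org/andy> .  # placeholder extra triple
  }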

Alasdair, by provenance do you mean PROV? I'm afraid that it is not
available in the general case. HTTP headers could possibly be used to
extract Last-Modified dates etc. But according to RDF semantics, isn't
it the case that assertions are never removed? So I think it would be
wrong to ignore the "older" description -- or any "other" description
in general.

On Wed, Jun 8, 2016 at 2:31 PM, Mikel Egaña Aranguren
<mikel.egana.arangu...@gmail.com> wrote:
> Hi Martynas;
>
> I thought that the majority of Linked Data servers work like Pubby, i.e.,
> they serve Linked Data resources by doing a DESCRIBE on a triple store,
> therefore serving the same triples. But it seems like you have encountered
> the opposite (different triples served) in many systems. Do you have data on
> how prevalent this issue is?
>
> Cheers
>
> 2016-06-08 14:06 GMT+02:00 Martynas Jusevičius <marty...@graphity.org>:
>>
>> Hey all,
>>
>> we are developing software that consumes data both from Linked Data
>> and SPARQL endpoints.
>>
>> Most of the time, these technologies complement each other. We've come
>> across an issue though, which occurs in situations where RDF
>> description of the same resources is available using both of them.
>>
>> Let's take the resource http://data.semanticweb.org/person/andy-seaborne
>> as an example. Its RDF description is available in at least 2
>> locations:
>> - on a SPARQL endpoint:
>>
>> http://xmllondon.com/sparql?query=DESCRIBE%20%3Chttp%3A%2F%2Fdata.semanticweb.org%2Fperson%2Fandy-seaborne%3E
>> - as Linked Data: http://data.semanticweb.org/person/andy-seaborne/rdf
>>
>> These descriptions could be identical (I haven't checked), but it is
>> more likely than not that they're out of sync, complementary, or
>> possibly even contradicting each other, if reasoning is considered.
>>
>> If a software agent has access to both the SPARQL endpoint and Linked
>> Data resource, what should it consider as the resource description?
>> There are at least 3 options:
>> 1. prioritize SPARQL description over Linked Data
>> 2. prioritize Linked Data description over SPARQL
>> 3. merge both descriptions
>>
>> I am leaning towards #3 as the sensible solution. But then I think the
>> end-user should be informed which part of the description came from
>> which source. This would be problematic if the descriptions are
>> triples only, but should be doable with quads. That leads to another
>> problem, however: both LD and SPARQL responses are under-specified
>> in terms of quads.
>>
>> What do you think? Maybe this is a well-known issue, in which case
>> please enlighten me with some articles :)
>>
>>
>> Martynas
>> atomgraph.com
>> @atomgraphhq
>>
>
>
>
> --
> Mikel Egaña Aranguren, Ph.D.
>
> http://mikeleganaaranguren.com
>
>



Dealing with distributed nature of Linked Data and SPARQL

2016-06-08 Thread Martynas Jusevičius
Hey all,

we are developing software that consumes data both from Linked Data
and SPARQL endpoints.

Most of the time, these technologies complement each other. We've come
across an issue though, which occurs in situations where RDF
description of the same resources is available using both of them.

Let's take the resource http://data.semanticweb.org/person/andy-seaborne
as an example. Its RDF description is available in at least 2
locations:
- on a SPARQL endpoint:
http://xmllondon.com/sparql?query=DESCRIBE%20%3Chttp%3A%2F%2Fdata.semanticweb.org%2Fperson%2Fandy-seaborne%3E
- as Linked Data: http://data.semanticweb.org/person/andy-seaborne/rdf

These descriptions could be identical (I haven't checked), but it is
more likely than not that they're out of sync, complementary, or
possibly even contradicting each other, if reasoning is considered.

If a software agent has access to both the SPARQL endpoint and Linked
Data resource, what should it consider as the resource description?
There are at least 3 options:
1. prioritize SPARQL description over Linked Data
2. prioritize Linked Data description over SPARQL
3. merge both descriptions

I am leaning towards #3 as the sensible solution. But then I think the
end-user should be informed which part of the description came from
which source. This would be problematic if the descriptions are
triples only, but should be doable with quads. That leads to another
problem, however: both LD and SPARQL responses are under-specified
in terms of quads.
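
Assuming each fetched description is stored in a graph named after its
source, a query along these lines would let the end-user see which source
contributed which statements (a sketch, independent of any particular
store):

  SELECT ?source ?p ?o
  WHERE {
    GRAPH ?source {
      <http://data.semanticweb.org/person/andy-seaborne> ?p ?o .
    }
  }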

What do you think? Maybe this is a well-known issue, in which case
please enlighten me with some articles :)


Martynas
atomgraph.com
@atomgraphhq



Re: Deprecating owl:sameAs

2016-04-01 Thread Martynas Jusevičius
What about using SKOS instead, like the paper suggests?
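
For example, the SKOS mapping properties grade the strength of a link
instead of asserting full identity (a sketch with placeholder URIs; SKOS
intends these for concepts):

  @prefix skos: <http://www.w3.org/2004/02/skos/core#> .

  <http://example.org/a> skos:exactMatch <http://example.org/b> . # interchangeable
  <http://example.org/a> skos:closeMatch <http://example.org/c> . # similar, not identical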

On Fri, Apr 1, 2016 at 3:01 PM, Sarven Capadisli  wrote:
> There is overwhelming research [1, 2, 3] and I think it is evident at this
> point that owl:sameAs is used inarticulately in the LOD cloud.
>
> The research that I've done makes me conclude that we need to do a massive
> sweep of the LOD cloud and adopt owl:sameSameButDifferent.
>
> I think the terminology is human-friendly enough that there will be minimal
> confusion down the line, but for the pedants among us, we can define it
> along the lines of:
>
>
> The built-in OWL property owl:sameSameButDifferent links things to things.
> Such an owl:sameSameButDifferent statement indicates that two URI references
> actually refer to the same thing but may be different under some
> circumstances.
>
>
> Thoughts?
>
> [1] https://www.w3.org/2009/12/rdf-ws/papers/ws21
> [2] http://www.bbc.co.uk/ontologies/coreconcepts#terms_sameAs
> [3] http://schema.org/sameAs
>
> -Sarven
> http://csarven.ca/#i
>



Re: RASH version 0.5: less roles, LaTeX formulas, and ROCS

2016-02-17 Thread Martynas Jusevičius
You could simplify the  examples by doing
 and avoid
escaping the source code.

On Thu, Feb 18, 2016 at 12:29 AM, Silvio Peroni  wrote:
> Dear all,
>
> I'm pleased to announce the new version (0.5) of RASH, the Research Articles
> in Simplified HTML format, and of all the tools included in the RASH
> Framework, available at
>
> https://github.com/essepuntato/rash
>
> RASH is a markup language that restricts the use of HTML elements to only 32
> elements for writing academic research articles. It is also possible to
> include RDF statements as RDFa annotations and/or as Turtle, JSON-LD and
> RDF/XML triples by using the tag "script". The RASH documentation is
> available online [1] and documents RASH version 0.5, defined as a RelaxNG
> grammar [2].
>
> These are the new features that RASH 0.5 implements:
>
> - the elements "i" and "b" have been replaced by "em" and "strong"
> respectively (thanks Ruben Verborgh for this);
>
> - the non-standard roles specifiable for the element "figure", i.e.,
> "figurebox", "tablebox", "listingbox", and "formulabox", have been removed,
> while their correct visualisation and conversion is still guaranteed by
> looking at the actual elements such a "figure" element contains;
>
> - all the roles for internal references (i.e., "ref", "doc-noteref", and
> "doc-biblioref") have been removed and substituted by using an empty element
> "a" linking to the element one wants to refer to (e.g., a section, a figure,
> a footnote, a bibliographic reference);
>
> - the element "img" can have the role "math" specified if it actually
> represents a mathematical formula;
>
> - the element "span", with the attribute "role" set to "math", can be used
> to include LaTeX formulas within a RASH document;
>
> - added the support for MathJax so as to render correctly both LaTeX and
> MathML formulas in all browsers;
>
> - added the support for SVG (element "svg") for specifying images.
>
> Among the tools of the RASH Framework [3], with the release of this new
> version we have also made available the RASH Online Conversion Service
> (ROCS) [4,5], i.e., a Python web application based on web.py that allows one
> to convert an ODT document written according to simple guidelines [6] into
> RASH, and from RASH documents into LaTeX ones compliant with the Springer
> LNCS LaTeX class and ACM ICPS class. An online version of ROCS is also
> available at
>
> http://dasplab.cs.unibo.it/rocs
>
> A brief description of the whole RASH Framework has been presented during
> the poster and demo session of ISWC 2015 (http://iswc2015.semanticweb.org/),
> and the related article is freely available in RASH and PDF [7]. It is worth
> mentioning that RASH has been already proposed as one of the possible
> formats for HTML submissions in several academic events, listed in [8].
>
> I'm looking forward to having your comments about RASH and its Framework
> and, in case you are already an earlier adopter of it, please feel free to
> participate in a 10-minute survey about the use of RASH for writing
> academic papers, available at http://esurv.org/?u=rash-format.
>
> Of course, this work would not have been possible without the support and
> great people [9] who implemented several aspects of the RASH Framework. We,
> as developers, would also like to thank all the early adopters of RASH, and
> Ivan Herman who constructively provided new and effective insights into the
> RASH specification and who suggested several additions that have been
> implemented in the current version.
>
> Please don't hesitate to contact me (email: essepunt...@gmail.com) for
> comments, suggestions, and further questions.
>
> Have a nice day :-)
>
> S.
>
>
> # References
>
> 1. https://rawgit.com/essepuntato/rash/master/documentation/index.html
> 2. https://rawgit.com/essepuntato/rash/master/grammar/rash.rng
> 3. https://github.com/essepuntato/rash/blob/master/tools/
> 4. https://github.com/essepuntato/rash/blob/master/tools/rocs/
> 5. Di Iorio, A., Gonzalez-Beltran, A. G., Osborne, F., Peroni, S., Poggi,
> F., Vitali, F. (2016). It ROCS! The RASH Online Conversion Service. To
> appear in the Companion Volume of the Proceedings of the 25th International
> World Wide Web Conference (WWW 2016).
> https://rawgit.com/essepuntato/rash/master/papers/rash-poster-www2016.html
> 6. https://rawgit.com/essepuntato/rash/master/documentation/rash-in-odt.odt
> 7. Di Iorio, A., Nuzzolese, A. G., Osborne, F., Peroni, S., Poggi, F.,
> Smith, M., Vitali, F. Zhao, J. (2015). The RASH Framework: enabling HTML+RDF
> submissions in scholarly venues. In Proceedings of the poster and demo
> session of the 14th International Semantic Web Conference (ISWC 2015).
> https://rawgit.com/essepuntato/rash/master/papers/rash-demo-iswc2015.html
> 8.
> https://github.com/essepuntato/rash/#venues-that-have-adopted-rash-as-submission-format
> 9. https://github.com/essepuntato/rash/graphs/contributors
>
>
> 

Re: A parser for detecting W3C OWL 2 profiles

2016-01-15 Thread Martynas Jusevičius
I wonder if some SPARQL queries would be enough to do it?
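
For example, a single ASK per disallowed construct could flag an ontology
as falling outside a profile (a sketch; owl:unionOf is one construct that
is outside OWL 2 EL and QL):

  PREFIX owl: <http://www.w3.org/2002/07/owl#>

  ASK {
    # any union class expression disqualifies the EL and QL profiles
    ?class owl:unionOf ?list .
  }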
On Fri, 15 Jan 2016 at 10:05, Víctor Rodríguez Doncel wrote:

> Ghislain,
>
> We have running an OWL2 profiler at:
>
> http://owlprofiler.appspot.com/
>
> It is a mere wrapper of OWL-API powering the online service.
>
> Regards,
> Víctor
>
>
> El 14/01/2016 a las 11:58, Ghislain Atemezing escribió:
>
> Hi all,
>
> Happy New Year 2016!
>
> I was wondering if there is a tool or OWL parser that, given an OWL
> construct, can detect the OWL 2 profile according to the W3C spec at
> https://www.w3.org/TR/owl2-profiles/.
>
> Any pointer or hint is welcome.
>
> Best,
>
> Ghislain
> ---
> Ghislain A. Atemezing, Ph.D
> Mail: ghislain.atemez...@gmail.com
> Web: http://www.atemezing.org
> Twitter: @gatemezing
> About Me: https://about.me/ghislain.atemezing
>
>
>
>
>
>
>
>
>
> --
> Víctor Rodríguez-Doncel
> D3205 - Ontology Engineering Group (OEG)
> Departamento de Inteligencia Artificial
> ETS de Ingenieros Informáticos
> Universidad Politécnica de Madrid
>
> Campus de Montegancedo s/n
> Boadilla del Monte-28660 Madrid, Spain
> Tel. (+34) 91336 3753
> Skype: vroddon3
>
>


Re: Ontology to model access control

2015-12-16 Thread Martynas Jusevičius
Why not use W3C ACL ontology? http://www.w3.org/wiki/WebAccessControl
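
E.g., a Maintainer-style grant could look roughly like this in that
vocabulary (a sketch; the agent and dataset URIs are placeholders):

  @prefix acl: <http://www.w3.org/ns/auth/acl#> .

  [] a acl:Authorization ;
    acl:agent <http://example.org/people/alice#me> ;  # placeholder agent
    acl:accessTo <http://example.org/dataset/x> ;     # placeholder dataset
    acl:mode acl:Read, acl:Write .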

On Wed, Dec 16, 2015 at 11:25 AM, Sebastian Hellmann wrote:
> Dear all,
>
> To guide the integration of data into DBpedia+ effectively, we are working
> on a new release of DataID, i.e. its ontology [1,2] and an implementation
> of a repository to manage metadata effectively and distribute it to
> other venues like datahub.io via push.
>
> We are looking for an ontology that allows us to express access rights to
> model the following roles:
>
> for metadata editing: Guest, Creator, Maintainer, Contributor
> for datasets: Guest, Creator, Maintainer, Contributor, Contact, Publisher
>
> We are thinking about copying some ideas from here:
> https://hal-unice.archives-ouvertes.fr/hal-01202126/document
> e.g. to have something like:
>
> [] a dataid:AuthorityEntityContext ;
>dataid:authorizedFor :x ; # this is a prov:Entity (either a DataId, a
> Dataset or a distribution)
>dataid:authorityAgentRole dataid:Maintainer ;
>dataid:authorizedAgent  [ (insert FOAF here) ] .
>
>
> Any ideas or pointers?
> A detailed analysis of the problem has been published by our ALIGNED [4]
> partner SWC [3].
>
> All the best,
> Sebastian and Markus
>
> [1] previous version
> http://svn.aksw.org/papers/2014/Semantics_Dataid/public.pdf
> [2] current ontology version:
> https://raw.githubusercontent.com/dbpedia/dataid/testbranch/ontology/ontology.ttl
> in the testbranch of https://github.com/dbpedia/dataid
> [3]
> https://blog.semantic-web.at/2015/12/02/ready-to-connect-to-the-semantic-web-now-what/
> [4] http://aligned-project.eu/
> --
> Sebastian Hellmann
> AKSW/KILT research group at Leipzig University
> Institute for Applied Informatics (InfAI) at Leipzig University
> DBpedia Association
> Events:
> * Feb 8th, 2016 Submission Deadline, 5th Workshop on Linked Data in
> Linguistics
> Come to Germany as a PhD student: http://bis.informatik.uni-leipzig.de/csf
> Projects: http://dbpedia.org, http://nlp2rdf.org,
> http://linguistics.okfn.org, https://www.w3.org/community/ld4lt
> Homepage: http://aksw.org/SebastianHellmann
> Research Group: http://aksw.org
> Thesis:
> http://tinyurl.com/sh-thesis-summary
> http://tinyurl.com/sh-thesis



Re: What Happened to the Semantic Web?

2015-11-11 Thread Martynas Jusevičius
Wouter,

could you elaborate on the agent calculus bit?


Martynas
graphityhq.com

On Wed, Nov 11, 2015 at 11:56 PM, Wouter Beek  wrote:
> Hi Ruben, Kingsley, others,
> On Wed, Nov 11, 2015 at 9:49 PM, Ruben Verborgh wrote:
>>
>> Of course—but the emphasis in the community has mostly been on servers,
>
> The emphasis has been on servers and, as of late, on Web Services.
>>
>> whereas the SemWeb vision started from agents (clients) that would do
>> things (using those servers).
>
> Today we are nowhere near this vision.  In fact, we may be further removed
> from it today than we were in 2001.  If you look at the last ISWC there was
> particularly little work on (Web) agents.
>
>> Now, the Semantic Web is mostly a server thing, which the Google/CSE
>> example also shows.
>
> With the LOD Laundromat we had the experience that people really like it
> when we make publishing and consuming data very easy for them.  People
> generally find it easier to publish their data through a Web Service rather
> than having to use more capable data publishing software they have to
> configure locally.  We ended up with a highly centralized approach that
> works for many use cases.  It would have been much more difficult to build
> the same thing in a distributed fashion.
>
> I find it difficult to see why centralization will not be the end game for
> the SW as it has been for so many other aspects of computing (search, email,
> social networking, even simple things like text chat).  The WWW shows that
> the 'soft benefits' of privacy, democratic potential, and data ownership are
> not enough to make distributed solutions succeed.
>
> However, I believe that there are other benefits to decentralization that
> have not been articulated yet and that are to be found within the semantic
> realm.  An agent calculus is fundamentally different from a traditional
> model theory.
>
> ---
> Best regards,
> Wouter Beek.
>
> Email: w.g.j.b...@vu.nl
> WWW: wouterbeek.com
> Tel: +31647674624



Re: Please publish Turtle or JSON-LD instead of RDF/XML [was Re: Recommendation for transformation of RDF/XML to JSON-LD in a web browser?]

2015-09-07 Thread Martynas Jusevičius
Unless you drop the object-oriented domain model completely, and apply
the constraints directly on the RDF graph.

On Mon, Sep 7, 2015 at 3:51 PM, Eric Prud'hommeaux  wrote:
>
> On Sep 4, 2015 12:18 PM, "Stian Soiland-Reyes" wrote:
>>
>> One problem is that what many web developers like is JSON with a
>> structure. We already had RDF/JSON, which was a flat and verbose
>> "subject": { "uri": "http://example.com/" } style serialization that
>> nobody liked.
>>
>> What made JSON-LD popular is the @context - being able to simplify
>> namespaces and structures, but also that applications can give out a
>> consistent JSON structure that just happens to also be LD and have
>> clearly defined semantics of the links and properties.
>>
>>
>> This is easy enough if your data is stored in a relational or no-sql
>> database, and you generate the JSON with a template.
>>
>> However, if your data is stored natively in a triple/quad store, then
>> to produce a consistent JSON structure you would currently have to use
>> hard-coded templates and custom code (which sounds silly, converting
>> from RDF to RDF manually), or use JSON-LD Framing, which has not been
>> fully standardized, and has many missing features and bugs. I think
>> we need to work more on the Framing, so that RDF can be more than just
>> a publication format.
>
> I believe any model-sensitive serialization will always be more appealing to
> consumers, usually at the cost of having programmer brains in the loop. You
> effectively have to parse your domain model out of the graph and take
> advantage of structural constraints to sensibly normalize program
> interfaces. I'm interested in existing template/grammar-based tools for
> this. Pointers?
>
>> JSON-LD Framing was also meant as a way for applications to receive
>> arbitrary JSON-LD content, and then frame it and apply a new @context
>> to shape/select the particular bits of the data the application is
>> interested in.
>>
>> (Mandatory XSLT warning applies)
>>
>>
>> On 3 September 2015 at 22:34, Paul Houle  wrote:
>> > Bernadette,
>> >
>> > It is not just perception, it is reality.
>> >
>> > People find JSON-LD easy to work with, and often it is a simple
>> > lossless model-driven transformation from an RDF graph to a JSON graph
>> > that people can do what they want with.
>> >
>> > Ultimately RDF is a universal data model, and it is the data model that
>> > is important, NOT the specific implementations. For instance you can
>> > do a model-driven transformation of data from RDF to JSON-LD and then any
>> > JSON user can access it with few hangups even if they are unaware of
>> > JSON-LD. Add some JSON-LD tooling and you've got JSON++.
>> >
>> > We can use relational-logical-graphical methods to process
>> > data, and we can accept and publish JSON with the greatest of ease.
>> >
>> > On Thu, Sep 3, 2015 at 5:18 PM, Bernadette Hyland
>> > 
>> > wrote:
>> >>
>> >> +1 David, well said.
>> >>
>> >> Amazing how much the mention of JSON (in the phrase JSON-LD) puts people
>> >> at ease vs. RDF. JSON-LD as a Recommendation has helped lower the
>> >> defenses of many who used to get their hackles up and say 'RDF is too
>> >> hard'.
>> >>
>> >> Perception counts for a lot, even for highly technical people including
>> >> Web developers.
>> >>
>> >> Cheers,
>> >>
>> >> Bernadette Hyland
>> >> CEO, 3 Round Stones, Inc.
>> >>
>> >> http://3roundstones.com  || http://about.me/bernadettehyland
>> >>
>> >>
>> >> On Sep 3, 2015, at 1:03 PM, David Booth  wrote:
>> >>
>> >> Side note: RDF/XML was the first RDF serialization standardized, over 15
>> >> years ago, at a time when XML was all the buzz. Since then other
>> >> serializations have been standardized that are far more human friendly
>> >> to read and write, and easier for programmers to use, such as Turtle and
>> >> JSON-LD.
>> >>
>> >> However, even beyond ease of use, one of the biggest problems with
>> >> RDF/XML that I and others have seen over the years is that it misleads
>> >> people into thinking that RDF is a dialect of XML, and it is not.  I'm
>> >> sure this misconception was reinforced by the unfortunate depiction of
>> >> XML in the foundation of the (now infamous) semantic web layer cake of
>> >> 2001, which in hindsight is just plain wrong:
>> >> http://www.w3.org/2001/09/06-ecdl/slide17-0.html
>> >> (Admittedly JSON-LD may run a similar risk, but I think that risk is
>> >> mitigated now by the fact that RDF is already more established in its
>> >> own right.)
>> >>
>> >> I encourage all RDF publishers to use one of the other standard RDF
>> >> formats such as Turtle or JSON-LD.  All commonly used RDF tools now
>> >> support Turtle, and many or most already support JSON-LD.
>> >>
>> >> RDF/XML is not officially deprecated, but I personally hope that in the
>> >> next round of RDF updates, we will quietly thank RDF/XML for its faithful
>> >> service and mark it as deprecated.

Re: Recommendation for transformation of RDF/XML to JSON-LD in a web browser?

2015-09-03 Thread Martynas Jusevičius
Frans,

you can use an XSLT stylesheet which does exactly that:
https://github.com/Graphity/graphity-client/blob/master/src/main/webapp/static/org/graphity/client/xsl/rdfxml2json-ld.xsl

There's a bug but it shouldn't be hard to fix:
https://github.com/Graphity/graphity-client/issues/62

You will need an XSLT 2.0 processor in the browser - fortunately there
is Saxon-CE:
http://www.saxonica.com/ce/index.xml

Hope this helps.

Martynas
graphityhq.com

On Thu, Sep 3, 2015 at 4:19 PM, Frans Knibbe  wrote:
> Hello,
>
> In a web application that is working with RDF data I would like to have all
> data available as JSON-LD, because I believe it is the easiest RDF format to
> process in a web application. At the moment I am particularly looking at
> processing vocabulary data. I think I can assume that such data will at
> least be available as RDF/XML. So I am looking for a way to transform
> RDF/XML to JSON-LD in a web browser.
>
> What would be the best or easiest way to do this? Attempt the transformation
> in the browser, using jsonld.js plus something else? Or use a server side
> component? And in the case of a server side component, which programming
> environment could be recommended? Python? Node.js? Any general or specific
> advice would be welcome.
>
> Greetings,
> Frans
>
> --
> Frans Knibbe
> Geodan
> President Kennedylaan 1
> 1079 MB Amsterdam (NL)
>
> T +31 (0)20 - 5711 347
> E frans.kni...@geodan.nl
> www.geodan.nl
> disclaimer
>



Re: Please publish Turtle or JSON-LD instead of RDF/XML [was Re: Recommendation for transformation of RDF/XML to JSON-LD in a web browser?]

2015-09-03 Thread Martynas Jusevičius
With due respect, I think it would be foolish to burn the bridges to
XML. The XML standards and infrastructure are very well developed,
much more so than JSON-LD's. We use XSLT extensively on RDF/XML.

Martynas
graphityhq.com

On Thu, Sep 3, 2015 at 8:03 PM, David Booth  wrote:
> Side note: RDF/XML was the first RDF serialization standardized, over 15
> years ago, at a time when XML was all the buzz. Since then other
> serializations have been standardized that are far more human friendly to
> read and write, and easier for programmers to use, such as Turtle and
> JSON-LD.
>
> However, even beyond ease of use, one of the biggest problems with RDF/XML
> that I and others have seen over the years is that it misleads people into
> thinking that RDF is a dialect of XML, and it is not.  I'm sure this
> misconception was reinforced by the unfortunate depiction of XML in the
> foundation of the (now infamous) semantic web layer cake of 2001, which in
> hindsight is just plain wrong:
> http://www.w3.org/2001/09/06-ecdl/slide17-0.html
> (Admittedly JSON-LD may run a similar risk, but I think that risk is
> mitigated now by the fact that RDF is already more established in its own
> right.)
>
> I encourage all RDF publishers to use one of the other standard RDF formats
> such as Turtle or JSON-LD.  All commonly used RDF tools now support Turtle,
> and many or most already support JSON-LD.
>
> RDF/XML is not officially deprecated, but I personally hope that in the next
> round of RDF updates, we will quietly thank RDF/XML for its faithful service
> and mark it as deprecated.
>
> David Booth
>



Re: Scholarly paper in HTML+RDF through RASH

2015-05-22 Thread Martynas Jusevičius
Silvio, nice work!

A couple of remarks regarding the HTML:
<p class="code"> could be <pre><code>
http://www.w3.org/TR/html401/struct/text.html#edef-CODE
<p class="quote"> could be <blockquote>
http://www.w3.org/TR/html401/struct/text.html#edef-BLOCKQUOTE

I think that would be more semantic :)

BTW, shouldn't the JSON-LD media type in section #9 be <script
type="application/ld+json"> ?
http://www.w3.org/TR/json-ld/#h3_interpreting-json-as-json-ld

Martynas

On Fri, May 22, 2015 at 11:52 PM, Silvio Peroni silvio.per...@unibo.it wrote:
 Dear all,

 Considering the several posts about this topic, I would like to share with 
 you my personal experience in using HTML(+RDF) as a format for 
 preparing/submitting/processing papers in scientific events.

 In the past months, I (together with several people in my research group
 at the University of Bologna plus other interested researchers from other
 institutions) have released a format for writing academic articles called
 RASH, i.e., Research Articles in Simplified HTML. RASH is a markup language
 that restricts the use of HTML elements to only 25 elements for writing
 academic research articles. It is possible to also include RDFa annotations
 within any element of the language and other RDF statements in Turtle and
 JSON-LD format by using the appropriate tag "script". The RASH documentation
 is available online at [1] and documents RASH version 0.3.5, defined as a
 RelaxNG grammar [2].

 RASH is the core component of a larger framework that includes a set of 
 specifications and writing/conversion/extraction tools for academic articles. 
 All the sources (released with Open Source and Creative Commons Licences) are 
 available on GitHub [3] and have been developed by a group of several people 
 so far. An internal note [4] provides a complete overview of the RASH 
 Framework - please find attached the structured abstract of such note at the 
 end of this email, for your convenience.

 Currently, the RASH Framework includes the following tools:

 - a script to enable RASH users to check their documents simultaneously both 
 against the specific requirements in the RASH RelaxNG grammar and also 
 against the full set of HTML checks that the W3C Nu HTML Checker (a.k.a., 
 HTML5 validator) does for all HTML documents (by checking all requirements 
 given in the HTML specification);

 - javascript scripts (based on Bootstrap and JQuery) and CSS stylesheets 
 (partially based on Linked Research [5] CSSs) implementing the visualisation 
 of RASH documents in the browser. Such scripts also include into RASH papers 
 a footbar with statistics about the paper (i.e., number of words, figures, 
 tables and formulas), a menu to change the actual layout of the page, the 
 automatic reordering of footnotes and references, the visualisation of the 
 metadata of the paper, etc.;

 - XSLT 2.0 files for converting RASH documents into LaTeX according to the 
 ACM ICPS [6] and Springer LNCS [7] styles (other styles to come soon);

 - an XSLT 2.0 file to perform conversions from OpenOffice documents into RASH 
 documents;

 - a Java application called SPAR Xtractor suite that takes a RASH document as 
 input and returns a new RASH document where all its markup elements have been 
 annotated with their actual (structural) semantics according to the Document 
 Components Ontology (DoCO) [8].

 In order to experiment with the use of RASH in official venues, it has been 
 already proposed among the possible submission formats in three academic 
 events, i.e., the Semantic Publishing Challenge 2015 [9] (that will be held 
 during ESWC 2015), and the workshops SAVE-SD 2015 [10] (held during WWW
 2015) and Linking in the Cloud 2015 [11] (that will be held during Hypertext
 2015).

 In particular, six papers were actually submitted in RASH in the SAVE-SD 2015 
 Workshop [10] (which I have co-organised) - the sources of such papers are 
 available in the workshop program webpage [12]. All the RASH papers also 
 include RDF statements (for a total of about 1300 RDF triples) concerning 
 article metadata, basic article structures (mainly based on DoCO [9]), 
 citation functions (based on CiTO [13]), and even semantic descriptions of 
 figures as in the case of the SAVE-SD 2015 Best RASH Paper [14].

 It is worth mentioning that the conversion of the RASH submissions into the 
 ACM format requested by Sheridan publisher (responsible for the publications 
 of all WWW proceedings including the workshop proceedings) was handled by us, 
 the workshop organisers, through a semi-automatic process. In particular, we 
 used the aforementioned XSLT files to convert RASH papers into LaTeX files 
 compliant with the official ACM format requested [6], and then we fixed only
 a few layout misalignments.

 I hope that the RASH Framework (together with others, e.g., Linked Research 
 [5] and Scholarly Markdown [15]) and the related initiatives and adoption in 
 academic events can be considered a first concrete 

Re: Profiles in Linked Data

2015-05-18 Thread Martynas Jusevičius
Lars,

what you describe here is a classic case of data quality control. You
don't want any data to enter your system that does not validate
against your constraints.

As mentioned before, SPARQL and SPIN have been used for this purpose
for a long time. There are readily available constraint libraries:
http://semwebquality.org/ontologies/dq-constraints. But you can easily
create custom ones since they're just SPARQL queries. Constraints can
be (de)referenced from remote systems as well.
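
A custom constraint is then just a query attached to a class, along these
lines (a sketch following SPIN conventions; the violation report is
simplified, and the query string assumes the document prefixes are shared):

  @prefix spin: <http://spinrdf.org/spin#> .
  @prefix sp:   <http://spinrdf.org/sp#> .
  @prefix foaf: <http://xmlns.com/foaf/0.1/> .

  foaf:Person spin:constraint [
    a sp:Construct ;
    # flags every foaf:Person instance (?this) that lacks a foaf:name
    sp:text """CONSTRUCT { _:v a spin:ConstraintViolation ;
                               spin:violationRoot ?this . }
               WHERE { FILTER NOT EXISTS { ?this foaf:name ?name } }"""
  ] .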

Our Graphity Linked Data platform provides a SPIN validator which
checks every incoming RDF request:
http://graphityhq.com/technology/graphity-processor#features

Martynas

On Mon, May 18, 2015 at 3:02 PM, Svensson, Lars l.svens...@dnb.de wrote:
 Kingsley,

 On Tuesday, May 12, 2015 2:58 PM, Kingsley Idehen wrote:

  We have to be careful here. RDF Language sentences/statements have a
  defined syntax as per RDF Abstract Syntax i.e., 3-tuples organized in
  subject, predicate, object based structure. RDF Shapes (as far as I know)
  has nothing to do with the subject, predicate, object structural syntax of
  an RDF statement/sentence. Basically, it's supposed to provide a mechanism
  for constraining the entity type (class instances) of RDF statement's
  subject and object, when creating RDF statements/sentences in documents.
  Think of this as having more to do with what's regarded as data-entry
  validation and control, in other RDBMS quarters.
  The charter of the data shapes WG [1] says that the product of the RDF Data
 Shapes WG will enable the definition of graph topologies for interface
 specification, code development, and data verification, so it's not _only_
 about validation etc. My understanding is that it's somewhat similar to XML
 schema and thus is essentially a description of the graph structure. As such,
 it can of course be used for validation, but that is only one purpose.

 Terms from a vocabulary or ontology do not change the topology of an RDF
 statement represented as graph pictorial. Neither do additional
 statements that provide constraints on the subjects and objects of a
 predicate. It is still going to be an RDF 3-tuple (or triple).

 Well yes, but that's not my point. I'm not talking about clients and servers
 negotiating the triple structure of the RDF data. I'm talking about servers
 needing a way to describe which vocabularies they use to describe their
 data, including things like cardinalities (a person will have the type
 foaf:Person and will have exactly one foaf:birthday and maximum one
 foaf:dnaChecksum). When we describe persons in our linked data service, we
 will say that a person will have the type gndo:DifferentiatedPerson, one or
 more gndo:dateOfBirth (there are cases where researchers dispute the exact
 birth date so we list all of them...), exactly one
 gndo:preferredNameForThePerson etc. For lack of a better term I have
 chosen to call those descriptions "profiles". Perhaps "shapes" is a better
 choice. It has nothing to do with triple structure.

   The function of the profile I believe you (and others that support this)
   are seeking has more to do with enabling clients and servers (that don't
   necessarily understand or care about RDF's implicit semantics) exchange
   hints about the nature of RDF document content (e.g., does it conform to
   Linked Data principles re. entity naming [denotation + connotation]).
   No, my use of profile is really a shape in the sense of the data shapes
   wg. Some of their motivations are what I'm envisioning, too, e.g.

   * Developers of each data-consuming application could define the shapes
  their software needs to find in each feed, in order to work properly, with
  optional elements it can use to work better.
   * Developers of data-providing systems can read the shape definitions (and
  possibly related RDF Vocabulary definitions) to learn what they need to
  provide.
 
   Cut long story short, a profile hint is about the nature of the RDF
   content (in regards to entity names and name interpretation), not its
   shape (which is defined by RDF syntax).
   OK, I stand corrected: My question is: How can clients and servers
   negotiate shape information?

 RDF data has one shape. Use of terms from a vocabulary or ontology don't
 change the shape of RDF document content.

 Profiles are a means of representing preferences. Seeking terms from a
 specific vocabulary or ontology in regards to RDF document content is an
 example of a preference.

 You can use rel=profile as a preference indicator via HTTP message
 exchanges between clients and servers.

 OK. So here we agree. The use of the Link header with rel=profile was one of 
 my original suggestions...

 Best,

 Lars




Re: Profiles in Linked Data

2015-05-18 Thread Martynas Jusevičius
Hey Lars,

yes, SPIN is a machine-readable way to describe RDF constraints.

What I still don't understand is why the client gets to choose the
constraint profile. Isn't it the responsibility of the data receiver,
in this case, the Linked Data server?

Using your previous FOAF/GNDO example, could you illustrate what
constraints would go into profile A and what into profile B?

If for example profile A says that foaf:Person instances must have
mandatory foaf:familyName and foaf:givenName while profile B does not
include this constraint, then you have a potentially conflicting model
of your data.
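
To make the conflict concrete: the extra requirement of profile A could be
written as a query that flags violations, which profile B simply would not
carry (a sketch; the prefix is the usual FOAF one):

  PREFIX foaf: <http://xmlns.com/foaf/0.1/>

  # profile A only: a foaf:Person without both names is a violation
  ASK {
    ?p a foaf:Person .
    FILTER NOT EXISTS { ?p foaf:familyName ?fn . ?p foaf:givenName ?gn }
  }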

Martynas

On Mon, May 18, 2015 at 4:56 PM, Svensson, Lars l.svens...@dnb.de wrote:
 Martynas,

 On Monday, May 18, 2015 3:14 PM, Martynas Jusevičius wrote:

 what you describe here is a classic case of data quality control. You
 don't want any data to enter your system that does not validate
 against your constraints.

 Yes, that is one use case.

 As mentioned before, SPARQL and SPIN has been used for this purpose
 for a long time. There are readily available constraint libraries:
 http://semwebquality.org/ontologies/dq-constraints. But you can easily
 create custom ones since they're just SPARQL queries. Constrains can
 be (de)referenced from remote systems as well.

 OK, what I haven't understood yet is how a client and a server can negotiate 
 the constraints the client wants the data to meet.

 Given a server that has no SPARQL endpoint but is capable of serving RDF
 conforming to two profiles/shapes/preferences, profile:A and profile:B
 (possibly identified by the URIs http://example.com/profiles/A and 
 http://example.com/profiles/B). When a client wants data adhering to 
 profile:B in text/turtle, what would the HTTP GET request look like, and what
 would you get when you dereference http://example.com/profiles/B with 
 Accept: text/turtle?

 Our Graphity Linked Data platform provides a SPIN validator which
 checks every incoming RDF request:
 http://graphityhq.com/technology/graphity-processor#features

 Nice, but my case is not only about validation. It's also about having a way 
 to describe the constraints in a fashion that clients and servers can 
 understand. If I understand correctly, you say that SPIN is the best way of 
 doing that.

 Best,

 Lars



Re: Profiles in Linked Data

2015-05-13 Thread Martynas Jusevičius
Lars,

first of all, a SPARQL query can be converted to an RDF graph using SPIN
syntax: http://spinrdf.org/sp.html
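
In its simplest form the conversion just wraps the query text in an RDF
resource (a sketch; SPIN can also decompose the query into a full triple
structure, and the foaf prefix inside the string is assumed to be declared):

  @prefix sp: <http://spinrdf.org/sp#> .

  <#shape> a sp:Ask ;
    sp:text "ASK { ?person a foaf:Person ; foaf:name ?name }" .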

In my mind the RDF Shapes WG is about RDF validation, and hopefully
will also be based on SPIN. I'm not interested in the part about
non-SPARQL shapes as this is mostly politics at play. If you want to
do practical development, SPARQL is all you need.

Moreover, the Shapes WG is very new while SPARQL has been around for 10
years. You wrote "not all clients want to talk SPARQL" -- but somehow
those clients will want to talk Shapes? Makes no sense to me.

I'm still of the opinion that you are looking in the wrong places.
Have you actually tried SPARQL for this? What did not work?

Martynas

On Wed, May 13, 2015 at 12:42 PM, Svensson, Lars l.svens...@dnb.de wrote:
 Martynas,

 this is a very simple answer that I have given you before: a shape of
 RDF data is defined as a SPARQL query. There are no two ways about it.

 Hmm, the list of deliverables of the data shapes wg [1] mentions an RDF 
 vocabulary to describe shapes, a set of semantics _possibly_ defined as 
 SPARQL operations, etc. It says that one possibility is to use SPARQL queries 
 to evaluate shapes against RDF graphs. At least to me, that doesn't mean that 
 the shape is defined as a SPARQL query, but as an RDF graph.

 [1] http://www.w3.org/2014/data-shapes/charter#deliverables

 Best,

 Lars

 On Tue, May 12, 2015 at 2:18 PM, Svensson, Lars l.svens...@dnb.de wrote:
  Kingsley,
 
  On Monday, May 11, 2015 9:00 PM, Kingsley Idehen wrote:
 
  We have to be careful here. RDF Language sentences/statements have a
  defined syntax as per RDF Abstract Syntax i.e., 3-tuples organized in
  subject, predicate, object based structure. RDF Shapes (as far as I know)
  has nothing to do with the subject, predicate, object structural syntax of
  an RDF statement/sentence. Basically, it's supposed to provide a mechanism
  for constraining the entity type (class instances) of RDF statement's
  subject and object, when creating RDF statements/sentences in documents.
  Think of this as having more to do with what's regarded as data-entry
  validation and control, in other RDBMS quarters.

  The charter of the data shapes WG [1] says that the product of the RDF Data
 Shapes WG will enable the definition of graph topologies for interface
 specification, code development, and data verification, so it's not _only_
 about validation etc. My understanding is that it's somewhat similar to XML
 schema and thus is essentially a description of the graph structure. As such,
 it can of course be used for validation, but that is only one purpose.

  The function of the profile I believe you (and others that support this)
  are seeking has more to do with enabling clients and servers (that don't
  necessarily understand or care about RDF's implicit semantics) exchange
  hints about the nature of RDF document content (e.g., does it conform to
  Linked Data principles re. entity naming [denotation + connotation]).

  No, my use of profile is really a shape in the sense of the data shapes
 wg. Some of their motivations are what I'm envisioning, too, e.g.

  * Developers of each data-consuming application could define the shapes
 their software needs to find in each feed, in order to work properly, with
 optional elements it can use to work better.
  * Developers of data-providing systems can read the shape definitions (and
 possibly related RDF Vocabulary definitions) to learn what they need to
 provide.

  Cut long story short, a profile hint is about the nature of the RDF
  content (in regards to entity names and name interpretation), not its
  shape (which is defined by RDF syntax).

  OK, I stand corrected: My question is: How can clients and servers
 negotiate shape information?
 
  Best,
 
  Lars



Re: Profiles in Linked Data

2015-05-13 Thread Martynas Jusevičius
Phil,

I'm talking from a developer perspective. I can prove with source code
that SPARQL and SPIN are enough to implement a read-write Linked Data
life-cycle.

If you know a client that (currently) accepts Shapes but not SPARQL,
please point me to it. Because I don't think it exists.

Martynas

On Wed, May 13, 2015 at 1:42 PM, Phil Archer ph...@w3.org wrote:


 On 13/05/2015 12:12, Martynas Jusevičius wrote:

 Lars,

 first of all, a SPARQL query can be converted to RDF graph using SPIN
 syntax: http://spinrdf.org/sp.html

 In my mind the RDF Shapes WG is about RDF validation, and hopefully
 will also be based on SPIN. I'm not interested in the part about
 non-SPARQL shapes as this is mostly politics at play. If you want to
 do practical development, SPARQL is all you need.


 That is a political statement Martynas and therefore denies your previous
 sentence.


 Moreover, Shapes WG is very new while SPARQL has been around for 10
 years. You wrote not all clients want to talk sparql -- but somehow
 those clients will want to talk Shapes? Makes no sense to me.


 But it does to others.


 I'm still of the opinion that you are looking in the wrong places.
 Have you actually tried SPARQL for this? What did not work?


 I don't think it is reasonable to expect data portals that harvest metadata
 from other portals to include a SPARQL engine just to check that data
 conforms to a profile, like DCAT-AP. That should be possible without having
 to build or include a SPARQL engine which would be overkill for what is
 essentially a pretty simple task.

 SPIN does a really good job and in many circumstances it is the right tool.
 But not all. IMHO we need a more flexible approach, one that can handle
 simple cases without SPARQL. Now, whether the SHACL work is the answer is,
 mercifully, a question others are answering.

 Phil.




 Martynas

 On Wed, May 13, 2015 at 12:42 PM, Svensson, Lars l.svens...@dnb.de
 wrote:

 Martynas,

  this is a very simple answer that I have given you before: a shape of
  RDF data is defined as a SPARQL query. There are no two ways about it.


 Hmm, the list of deliverables of the data shapes wg [1] mentions an RDF
 vocabulary to describe shapes, a set of semantics _possibly_ defined as
 SPARQL operations, etc. It says that one possibility is to use SPARQL
 queries to evaluate shapes against RDF graphs. At least to me, that doesn't
 mean that the shape is defined as a SPARQL query, but as an RDF graph.

 [1] http://www.w3.org/2014/data-shapes/charter#deliverables

 Best,

 Lars

 On Tue, May 12, 2015 at 2:18 PM, Svensson, Lars l.svens...@dnb.de
 wrote:

 Kingsley,

 On Monday, May 11, 2015 9:00 PM, Kingsley Idehen wrote:

  We have to be careful here. RDF Language sentences/statements have a
  defined syntax as per RDF Abstract Syntax i.e., 3-tuples organized in
  subject, predicate, object based structure. RDF Shapes (as far as I know)
  has nothing to do with the subject, predicate, object structural syntax of
  an RDF statement/sentence. Basically, it's supposed to provide a mechanism
  for constraining the entity type (class instances) of RDF statement's
  subject and object, when creating RDF statements/sentences in documents.
  Think of this as having more to do with what's regarded as data-entry
  validation and control, in other RDBMS quarters.

  The charter of the data shapes WG [1] says that the product of the RDF Data
  Shapes WG will enable the definition of graph topologies for interface
  specification, code development, and data verification, so it's not _only_
  about validation etc. My understanding is that it's somewhat similar to XML
  schema and thus is essentially a description of the graph structure. As
  such, it can of course be used for validation, but that is only one purpose.

  The function of the profile I believe you (and others that support this)
  are seeking has more to do with enabling clients and servers (that don't
  necessarily understand or care about RDF's implicit semantics) exchange
  hints about the nature of RDF document content (e.g., does it conform to
  Linked Data principles re. entity naming [denotation + connotation]).

  No, my use of profile is really a shape in the sense of the data shapes
  wg. Some of their motivations are what I'm envisioning, too, e.g.

  * Developers of each data-consuming application could define the shapes
  their software needs to find in each feed, in order to work properly, with
  optional elements it can use to work better.
  * Developers of data-providing systems can read the shape definitions (and
  possibly related RDF Vocabulary definitions) to learn what they need to
  provide.

  Cut long story short, a profile hint is about the nature of the RDF
  content (in regards to entity names and name interpretation), not its
  shape (which is defined by RDF syntax).

  OK, I stand corrected: My question is: How can clients and servers
  negotiate shape information?


 Best,

 Lars




 --


 Phil

Re: Profiles in Linked Data

2015-05-12 Thread Martynas Jusevičius
Lars,

this is a very simple answer that I have given you before: a shape of
RDF data is defined as a SPARQL query. There are no two ways about it.

Martynas

On Tue, May 12, 2015 at 2:18 PM, Svensson, Lars l.svens...@dnb.de wrote:
 Kingsley,

 On Monday, May 11, 2015 9:00 PM, Kingsley Idehen wrote:

 We have to be careful here. RDF Language sentences/statements have a
 defined syntax as per RDF Abstract Syntax i.e., 3-tuples organized in
 subject, predicate, object based structure. RDF Shapes (as far as I know)
 has nothing to do with the subject, predicate, object structural syntax of
 an RDF statement/sentence. Basically, it's supposed to provide a mechanism
 for constraining the entity type (class instances) of RDF statement's
 subject and object, when creating RDF statements/sentences in documents.
 Think of this as having more to do with what's regarded as data-entry
 validation and control, in other RDBMS quarters.

 The charter of the data shapes WG [1] says that the product of the RDF Data 
 Shapes WG will enable the definition of graph topologies for interface 
 specification, code development, and data verification, so it's not _only_ 
 about validation etc. My understanding is that it's somewhat similar to XML 
 schema and thus is essentially a description of the graph structure. As such, 
 it can of course be used for validation, but that is only one purpose.

 The function of the profile I believe you (and others that support this)
 are seeking has more to do with enabling clients and servers (that don't
 necessarily understand or care about RDF's implicit semantics) exchange
 hints about the nature of RDF document content (e.g., does it conform to
 Linked Data principles re. entity naming [denotation + connotation]).

 No, my use of profile is really a shape in the sense of the data shapes 
 wg. Some of their motivations are what I'm envisioning, too, e.g.

 * Developers of each data-consuming application could define the shapes their 
 software needs to find in each feed, in order to work properly, with optional 
 elements it can use to work better.
 * Developers of data-providing systems can read the shape definitions (and 
 possibly related RDF Vocabulary definitions) to learn what they need to 
 provide

 Cut long story short, a profile hint is about the nature of the RDF
 content (in regards to entity names and name interpretation), not its
 shape (which is defined by RDF syntax).

 OK, I stand corrected: My question is: How can clients and servers negotiate 
 shape information?

 Best,

 Lars



Re: Profiles in Linked Data

2015-05-08 Thread Martynas Jusevičius
I think foaf:primaryTopic/foaf:isPrimaryTopicOf is a good convention
for linking abstract concepts/physical things to documents about them.
We use it extensively in our datasets. For example:

  <some/resource#this> a bibo:Book ;
    foaf:isPrimaryTopicOf <some/resource/dcat> , <some/resource/premis> .

  <some/resource/dcat> a foaf:Document ;
    foaf:primaryTopic <some/resource#this> .

  <some/resource/premis> a foaf:Document ;
    foaf:primaryTopic <some/resource#this> .

Hope this helps.

Martynas
graphityhq.com

On Fri, May 8, 2015 at 5:47 PM, Svensson, Lars l.svens...@dnb.de wrote:
 Kingsley,

 Hope this live example helps, in regards to understanding the issue at
 hand. Basically, what a document describes is distinct from the shape
 and form of its content.

 We're totally on the same page here, but I need a way to negotiate the shape 
 and the form of the description and that must in some way refer back to the 
 entity it describes.

 Lars

 Still trying to understand




Re: Profiles in Linked Data

2015-05-07 Thread Martynas Jusevičius
So why don't you include both DCAT and PREMIS in the description and
let the client figure it out?

I haven't yet encountered a use case where profiles would be necessary.

WebArch only talks about representations (descriptions) that differ in
terms of media type:
http://www.w3.org/TR/webarch/#dereference-details

On Thu, May 7, 2015 at 1:34 PM, Svensson, Lars l.svens...@dnb.de wrote:
 Martynas,

 As you wrote, media type is orthogonal to profiles. To retrieve
 RDF/XML, you would use content negotiation (Accept header).

 You would need to run the Graphity processor that would match URI
 templates and execute SPARQL queries from the sitemap ontology.

 Sure, instead of query strings

 OK. But that would require the client to re-write the resource URI to put in 
 the correct query string.

 you could use Accept-Profile/Profile or
 similar headers to advertise profiles and their preference. It's just
 that the uptake for new custom HTTP headers will be slow, so there's
 not much practical advantage.

 On the other hand, it seems like you want different descriptions of a
 resource -- so it seems to me that these should in fact be different
 resources? That could be split into
 http://example.org/some/resource/dcat and
 http://example.org/some/resource/premis, for example.

 Well, at least to me it is two descriptions of the same resource (much as a 
 mobile-optimised website is the same resource as the real website, but sort 
 of minimalised). Particularly when I refer to concepts, e. g. Semantic Web 
 [1], or persons, e. g. Tim Berners-Lee [2], the URI references the RWO in 
 no particular format. When a client actually wants to _do_ something with
 that information, the client and the server need to negotiate a way to find 
 the best description. That is where profiles (or shapes) enter the equation.

 [1] http://d-nb.info/gnd/4688372-1
 [2] http://d-nb.info/gnd/121649091

 Best,

 Lars



Re: Profiles in Linked Data

2015-05-07 Thread Martynas Jusevičius
Lars,

you could define a default profile using Graphity. It would be an
OWL class with annotations, e.g.:

<#SomeResource> a owl:Class ;
  gp:uriTemplate "/some/resource" ;
  gp:query <#ConstructDCAT> .

<#ConstructDCAT> a sp:Construct ;
  sp:text "CONSTRUCT ..." . # uses DCAT

You could implement a little extra logic to override the gp:query
value using a ?query= parameter and specify e.g. #ConstructPREMIS
instead. See more here:
https://github.com/Graphity/graphity-processor/wiki/Templates

No new HTTP headers are necessary.
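
The request for the non-default profile would then be ordinary content
negotiation plus a query parameter (a sketch of the hypothetical override
described above; %23 encodes the '#'):

  GET /some/resource?query=%23ConstructPREMIS HTTP/1.1
  Accept: text/turtle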


Martynas
graphityhq.com

On Wed, May 6, 2015 at 5:04 PM, Svensson, Lars l.svens...@dnb.de wrote:
 All,

 I am looking for a way to specify a profile when requesting a (linked data) 
 resource. A profile in this case is orthogonal to the mime-type and is 
 intended to specify e.g. the use of a specific RDF vocabulary to describe
 the data (I ask a repository for a list of datasets, specify that I want the
 data in Turtle and also that I want the data dictionary described with DCAT
 and not with PREMIS). This is adding a new dimension to the traditional 
 content-negotiation (mime-type, language, etc.).

 I have not found a best practice for doing this but the following 
 possibilities have crossed my mind:

 1) Using the Link-Header to specify a profile
 This uses profile as specified in RFC 6906 [1]

 Request:
 GET /some/resource HTTP 1.1
 Accept: application/rdf+xml
 Link: <http://example.org/dcat-profile>; rel="profile"

 The server would then either return the data in the requested profile, answer 
 with 406 (not acceptable), or return the data in a default profile (and set 
 the Link-header to tell the client what profile the server used...)


 2) Register new http headers Accept-Profile and Profile

 Request:
 GET /some/resource HTTP 1.1
 Accept: application/rdf+xml
 Accept-Profile: http://example.org/dcat-profile

 The server would then either return the data in the requested profile, answer 
 with 406 (not acceptable), or return the data in a default profile. If the 
 answer is a 200 OK, the server needs to set the Profile header to let the 
 client know which profile was used. This is consistent with the use of the 
 Accept header.

 3) Use the Accept-Features and Features headers
 RFC 2295 §6 [2] defines so-called features as a further dimension of content 
 negotiation.

 Request:
 GET /some/resource HTTP 1.1
 Accept: application/rdf+xml
 Accept-Features: profile=http://example.org/dcat-profile

 The server would then either return the data in the requested 
 profile/feature, answer with 406 (not acceptable), or return the data in a 
 default profile/feature. If the answer is a 200 OK, the server needs to set 
 the Feature header to let the client know which profile was used. This is 
 consistent with the use of the Accept header.

 Discussion
 The problem I have with the Accept-Features/Features header is that I feel 
 that the provision of a specific (application) profile is not the same as a 
 feature of the requested resource, at least not if I look at the examples 
 they provide in RFC 2295 which includes tables, fonts, screenwidth and 
 colordepth, but perhaps I'm overly picky.

 The registration of Accept-Profile/Profile headers is appealing since their 
 semantics can be clearly defined and that their naming show the similarities 
 to other Accept-* headers. OTOH the process of getting those headers 
 registered with IETF can be fairly heavy.

 Lastly, the use of RFC 6906 profiles has the advantage that no extra work has 
 to be done, the Link header is in place and so is the profile relation type.

 Any feedback would be greatly appreciated.

 [1] http://tools.ietf.org/html/rfc6906
 [2] http://tools.ietf.org/html/rfc2295#section-6

 Best,

 Lars




Re: Profiles in Linked Data

2015-05-07 Thread Martynas Jusevičius
As you wrote, media type is orthogonal to profiles. To retrieve
RDF/XML, you would use content negotiation (Accept header).

You would need to run the Graphity processor that would match URI
templates and execute SPARQL queries from the sitemap ontology.

Sure, instead of query strings you could use Accept-Profile/Profile or
similar headers to advertise profiles and their preference. It's just
that the uptake for new custom HTTP headers will be slow, so there's
not much practical advantage.

On the other hand, it seems like you want different descriptions of a
resource -- so it seems to me that these should in fact be different
resources? That could be split into
http://example.org/some/resource/dcat and
http://example.org/some/resource/premis, for example.
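
The generic resource could then point to its profile-specific
descriptions, for example (a sketch; dct:hasFormat is just one candidate
property for the link):

<http://example.org/some/resource> dct:hasFormat
    <http://example.org/some/resource/dcat> ,
    <http://example.org/some/resource/premis> .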

On Thu, May 7, 2015 at 12:56 PM, Svensson, Lars l.svens...@dnb.de wrote:
 Martynas,

 you could define a default profile using Graphity. It would be an
 OWL class with annotations, e.g.:

 <#SomeResource> a owl:Class ;
   gp:uriTemplate "/some/resource" ;
   gp:query <#ConstructDCAT> .

 <#ConstructDCAT> a sp:Construct ;
   sp:text "CONSTRUCT ..." . # uses DCAT

 You could implement a little extra logic to override the gp:query
 value using a ?query= parameter and specify e.g. #ConstructPREMIS
 instead. See more here:
 https://github.com/Graphity/graphity-processor/wiki/Templates

 No new HTTP headers are necessary.

 I'm afraid I don't quite understand how this is supposed to work. When a 
 client calls http://example.org/some/resource, how does it tell the server it 
 wants RDF/XML in the dcat-profile? And how does the server tell the client 
 that it only supports premis?

 Best,

 Lars


 Martynas
 graphityhq.com

 On Wed, May 6, 2015 at 5:04 PM, Svensson, Lars l.svens...@dnb.de wrote:
  All,
 
  I am looking for a way to specify a profile when requesting a (linked data)
 resource. A profile in this case is orthogonal to the mime-type and is 
 intended
 to specify e. g. the use of a specific RDF vocabulary to describe the data 
 (I ask a
 repository for a list of datasets, specify that I want the data in turtle 
 and also
 that I want the data dictionary described with DCAT and not with PREMIS). 
 This
 is adding a new dimension to the traditional content-negotiation (mime-type,
 language, etc.).
 
  I have not found a best practice for doing this but the following 
  possibilities
 have crossed my mind:
 
  1) Using the Link-Header to specify a profile
  This uses profile as specified in RFC 6906 [1]
 
  Request:
  GET /some/resource HTTP 1.1
  Accept: application/rdf+xml
  Link: <http://example.org/dcat-profile>; rel="profile"
 
  The server would then either return the data in the requested profile, 
  answer
 with 406 (not acceptable), or return the data in a default profile (and set 
 the
 Link-header to tell the client what profile the server used...)
 
 
  2) Register new http headers Accept-Profile and Profile
 
  Request:
  GET /some/resource HTTP 1.1
  Accept: application/rdf+xml
  Accept-Profile: http://example.org/dcat-profile
 
  The server would then either return the data in the requested profile, 
  answer
 with 406 (not acceptable), or return the data in a default profile. If the 
 answer
 is a 200 OK, the server needs to set the Profile header to let the client 
 know
 which profile was used. This is consistent with the use of the Accept header.
 
  3) Use the Accept-Features and Features headers
  RFC 2295 §6 [2] defines so-called features as a further dimension of 
  content
 negotiation.
 
  Request:
  GET /some/resource HTTP 1.1
  Accept: application/rdf+xml
  Accept-Features: profile=http://example.org/dcat-profile
 
  The server would then either return the data in the requested 
  profile/feature,
 answer with 406 (not acceptable), or return the data in a default
 profile/feature. If the answer is a 200 OK, the server needs to set the 
 Feature
 header to let the client know which profile was used. This is consistent 
 with the
 use of the Accept header.
 
  Discussion
  The problem I have with the Accept-Features/Features header is that I feel
 that the provision of a specific (application) profile is not the same as a 
 feature
 of the requested resource, at least not if I look at the examples they 
 provide in
 RFC 2295 which includes tables, fonts, screenwidth and colordepth, 
 but
 perhaps I'm overly picky.
 
  The registration of Accept-Profile/Profile headers is appealing since their
 semantics can be clearly defined and that their naming show the similarities 
 to
 other Accept-* headers. OTOH the process of getting those headers registered
 with IETF can be fairly heavy.
 
  Lastly, the use of RFC 6906 profiles has the advantage that no extra work 
  has
 to be done, the Link header is in place and so is the profile relation type.
 
  Any feedback would be greatly appreciated.
 
  [1] http://tools.ietf.org/html/rfc6906
  [2] http://tools.ietf.org/html/rfc2295#section-6
 
  Best,
 
  Lars
 



Re: Profiles in Linked Data

2015-05-07 Thread Martynas Jusevičius
To my understanding, in a resource-centric model resources have a
description containing statements available about them.

When you try to split it into parts, you involve documents or graphs
and go beyond the resource-centric model.

On Thu, May 7, 2015 at 5:25 PM, Svensson, Lars l.svens...@dnb.de wrote:
 Martynas,

 I am not convinced your use case requires a whole new concept (and
 following implementations) of Linked Data profiles.

 I have outlined practical solutions you already can use now:
 1. use a single description including all vocabularies

 I have real customers who already say that this solution is not
 acceptable to them.

 2. make separate resources with separate descriptions

 Could work, but I prefer a resource-centric model where I simply deliver 
 different descriptions about the same resource.

 3. give the client SPARQL access

 Not all clients want to talk sparql.

 It seems to me that you have a hypothetical solution and are looking
 for a problem.

 Might be. I'm still not convinced of the opposite being true, though.

 Best,

 Lars



Re: Profiles in Linked Data

2015-05-07 Thread Martynas Jusevičius
Lars,

I am not convinced your use case requires a whole new concept (and
following implementations) of Linked Data profiles.

I have outlined practical solutions you already can use now:
1. use a single description including all vocabularies
2. make separate resources with separate descriptions
3. give the client SPARQL access

It seems to me that you have a hypothetical solution and are looking
for a problem.

On Thu, May 7, 2015 at 3:24 PM, Svensson, Lars l.svens...@dnb.de wrote:
 So why don't you include both DCAT and PREMIS in the description and
 let the client figure it out?

 Because that would mean that my payload would be at least twice as large (or 
 more, depending on how many profiles I want to support). Further, a client 
 that actually wants json but asks for json-ld (because that is the 
 content-type the server supports) has no way to figure out which keys to 
 evaluate and which not. Also too much information can be a constraint when we 
 deal with clients with limited computing capabilities. Lastly I might want to 
 specifically constrain my response to a specific profile in order to be 
 consistent with a certain rdf shape.

 I haven't yet encountered a use case where profiles would be necessary.

 WebArch only talks about representations (descriptions) that differ in
 terms of media type:
 http://www.w3.org/TR/webarch/#dereference-details

 Yes, but I still see a necessity for negotiating profiles, too, not only 
 media types.

 Best,

 Lars

 On Thu, May 7, 2015 at 1:34 PM, Svensson, Lars l.svens...@dnb.de wrote:
  Martynas,
 
  As you wrote, media type is orthogonal to profiles. To retrieve
  RDF/XML, you would use content negotiation (Accept header).
 
  You would need to run the Graphity processor that would match URI
  templates and execute SPARQL queries from the sitemap ontology.
 
  Sure, instead of query strings
 
  OK. But that would require the client to re-write the resource URI to put 
  in
 the correct query string.
 
  you could use Accept-Profile/Profile or
  similar headers to advertise profiles and their preference. It's just
  that the uptake for new custom HTTP headers will be slow, so there's
  not much practical advantage.
 
  On the other hand, it seems like you want different descriptions of a
  resource -- so it seems to me that these should in fact be different
  resources? That could be split into
  http://example.org/some/resource/dcat and
  http://example.org/some/resource/premis, for example.
 
  Well, at least to me it is two descriptions of the same resource (much as a
 mobile-optimised website is the same resource as the real website, but sort
 of minimalised). Particularly when I refer to concepts, e. g. Semantic Web 
 [1],
 or persons, e. g. Tim Berners-Lee [2], the URI references the RWO in no
 particular format. When a client actually wants to _do_ something with that
 information, the client and the server need to negotiate a way to find the 
 best
 description. That is where profiles (or shapes) enter the equation.
 
  [1] http://d-nb.info/gnd/4688372-1
  [2] http://d-nb.info/gnd/121649091
 
  Best,
 
  Lars



Re: Survey on Faceted Browsers for RDF data ?

2015-04-27 Thread Martynas Jusevičius
Hey Christian,

Graphity Platform implements faceted search purely using SPARQL on a
standard triplestore. It manipulates FILTERs on the fly based on the
filters selected by the user. Here's a working example:
http://dedanskeaviser.dk/newspapers
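
To give a flavour of the approach, here is a sketch of the kind of query
such a UI might generate (the vocabulary and the language facet are made
up for illustration):

SELECT DISTINCT ?newspaper WHERE {
    ?newspaper a ex:Newspaper ;
        ex:language ?language .
    FILTER (?language = "da") # injected when the user selects the "Danish" facet
}

Deselecting the facet simply removes the FILTER and the query is re-run.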

Martynas
graphityhq.com

On Mon, Apr 27, 2015 at 3:02 PM, Christian Morbidoni
christian.morbid...@gmail.com wrote:
 Dear Bernadette, all

 I can surely share my list, and I'll do so as soon as I find some time to give
 it some structure and write it in proper English...

 Honestly, I got a bit stuck asking myself "What exactly am I looking for?" In
 other words: what exactly is a faceted browser for RDF data? Does it mean
 that it has to query a SPARQL endpoint with no intermediaries, in real time?
 In fact, this approach is not the best in my opinion... one probably needs to
 materialize the data in some other, more facet-friendly system (e.g. Solr,
 Elasticsearch) to get good performance (I might be wrong, but this is what
 my - limited - experience has told me). Then I started asking myself... if you
 put a SPARQL connector in front, then every existing faceted browser can be
 an RDF data faceted browser...
 So... what kind of tools do you think should go in this list? All
 existing systems that provide faceted browsing functionality? And what kind
 of features should a comparison take into account? Maybe how easy it is to
 connect a tool to a SPARQL endpoint? Not sure...

 best,

 Christian

 On Wed, Apr 22, 2015 at 7:17 PM, Bernadette Hyland
 bhyl...@3roundstones.com wrote:

 Hi Christian,
 If you produce a list of platforms/browsers for RDF, you'd create a
 valuable resource for the open data / Web of data community.  Thanks in
 advance.  Please share with the list when completed.

 Please add the Callimachus Project to your list.[1]  Callimachus is an
 Open Source web application server. It's an actively supported Open Source
 project that commercial companies, including 3 Round Stones support.
 Callimachus is on GitHub.[2]

 Used by government agencies, healthcare organizations and scientific
 researchers, Callimachus is used to rapidly build and visualize data from
 the public Web or behind the firewall. It uses a Linked Data approach and is
 based on W3C data standards, including RDF.

 Developers use a range of JavaScript libraries for visualizations,
 including D3 and Google Charts. Here is a sampling of apps that use
 Callimachus -- I share these to show it goes well beyond faceted browsing.

 Open Data Directory - Simple app - A crowdsourced community run directory
 of organizations using Linked Data for projects, see the W3C Open Data
 Directory [3]

 GeoHealth US - In beta. GeoHealth.us generates hundreds of millions of
 data-driven pages, including visualizations (heat maps, pollution reports,
 etc) related to environmental exposure and related diseases. It will be
 launched at the upcoming National Health Datapalooza in Washington DC in
 early June.[4]

 ORGpedia - A research project funded by the Alfred P. Sloan Foundation and
 led by New York University Professor Beth Noveck's team at the Wagner School
 of Public Policy.[5]

 WeatherHealth - A pilot for Sentara Healthcare. It combines data from
 multiple government open data sites to demonstrate  the power of patient
 education for better health.[6]

 Linked Data Books website.[7] This community run site publishes resources
 for developers, executives and academics. It's open to anyone who wishes to
 add a publication to the list.  If during your research you identify some
 good books to add, please send us an email. The Linked Data Books website
 was created using the Callimachus Project.

 Lastly, Callimachus served as a reference implementation for the Linked
 Data Platform.[8]

 Cheers,

 Bernadette Hyland
 CEO, 3 Round Stones, Inc.

 http://3roundstones.com  || http://about.me/bernadettehyland

 
 [1] http://callimachusproject.org

 [2] https://github.com/3-Round-Stones/callimachus/

 [3] The Open Data Directory - see http://dir.w3.org

 [4] Environmental exposures  diseases mapper - see http://geohealth.us

 [5] ORGpedia - see http://3RoundStones.com/orgpedia

 [6] Sentara WeatherHealth pilot - see http://3RoundStones.com/sentara

 [7] Linked Data Developer site - see http://linkeddatadeveloper.com

 [8] W3C Linked Data Platform - see http://www.w3.org/2012/ldp/charter

 On Jan 23, 2015, at 6:42 AM, Christian Morbidoni
 christian.morbid...@gmail.com wrote:

 Hi all,

 I'm doing some research to get a comprehensive (as much as possible) view
 on what faceted browsers are out there today for RDF data and what features
 they offer.
 I collected a lot of links to papers, web sites and demos... but I found
 very few comparison/survey papers about this specific topic. [1] contains a
 section on faceted browsers, but not so exhaustive, [2] mentions some
 interesting systems but is a bit outdated.

 So, my questions are:
 1) Does someone know a better paper/resource I can look at for a survey?
 2) Is someone 

Re: Best practices on how to publish SPARQL queries?

2015-04-26 Thread Martynas Jusevičius
Hey Niklas,

we're using SPIN SPARQL syntax for this: http://spinrdf.org/sp.html

Here's an example vocabulary with embedded queries:
https://github.com/Graphity/graphity-processor/blob/master/src/main/resources/org/graphity/processor/vocabulary/gp.ttl
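
In miniature, an embedded query in that style looks something like this (a
sketch with made-up names):

<#AllItems> a sp:Select ;
    rdfs:comment "Lists all items" ;
    sp:text """SELECT ?item WHERE { ?item a <http://example.org/Item> }""" .

Since the query is just RDF, it can be published, fetched and queried like
any other part of the vocabulary.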

Martynas
graphityhq.com

On Sun, Apr 26, 2015 at 1:01 PM, Niklas Petersen
peter...@cs.uni-bonn.de wrote:
 Hi all,

 I am currently developing a vocabulary which has typical queries related
 to it. I am wondering if there exist any best practices to publish them
 together with the vocabulary?

 The best practices on publishing Linked Data [1] only focuses on the
 endpoints, but not on the queries.

 Has anyone else been in that situation?


 Best regards,
 Niklas Petersen

 [1] http://www.w3.org/TR/ld-bp/#MACHINE

 --
 Niklas Petersen,
 Organized Knowledge Group @Fraunhofer IAIS,
 Enterprise Information Systems Group @University of Bonn.




Re: Microsoft Access for RDF?

2015-02-21 Thread Martynas Jusevičius
On Fri, Feb 20, 2015 at 6:41 PM, Kingsley  Idehen
kide...@openlinksw.com wrote:
 On 2/20/15 12:04 PM, Martynas Jusevičius wrote:


 Not to criticize, but to seek clarity:

 What does the term resources refer to, in your usage context?

 In a world of Relations (this is what RDF is about, fundamentally) it's hard
 for me to understand what you mean by "grouped by resources". What is the
 "resource" etc.?

Well, RDF stands for Resource Description Framework after all, so
I'll cite its spec:
RDF graphs are sets of subject-predicate-object triples, where the
elements may be IRIs, blank nodes, or datatyped literals. They are
used to express descriptions of resources.

More to the point, RDF serializations often group triples by subject
URI. You can look at such a group as a "resource" which has
"properties".
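
For example, in Turtle (a sketch):

<http://resource> a <http://type> ;
    a:property "value" ;
    b:property "smth" .

All three triples share the subject, and the serialization groups them
into a single block.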



   Within a resource block, properties are sorted
 alphabetically by their rdfs:labels retrieved from respective
 vocabularies.


 How do you handle the integrity of multi-user updates, without killing
 concurrency, using this method of grouping (which in and of itself is
 unclear due to the use of the "resources" term)?

 How do you minimize the user interaction space i.e., reduce clutter --
 especially if you have a lot of relations in scope or the possibility that
 such becomes the reality over time?


I don't think concurrent updates are related to resources or specific
to our editor. The Linked Data platform (whatever it is) and its HTTP
logic has to deal with ETags and 409 Conflict etc.
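
For illustration, a conditional update in that style might look like this
(a sketch; the graph URI and ETag value are made up):

PUT /graphs/some-resource HTTP/1.1
If-Match: "abc123"
Content-Type: text/turtle

<http://example.org/some/resource> <http://purl.org/dc/terms/title> "New title" .

If another user changed the graph in the meantime, the ETag no longer
matches and the server can answer 412 Precondition Failed (or 409
Conflict) instead of silently losing one of the edits.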

I was wondering if this logic should be part of specifications such as
the Graph Store Protocol:
https://twitter.com/pumba_lt/status/545206095783145472
But I haven't an answer. Maybe it's an oversight on the W3C side?

We scope the description edited either by a) SPARQL query or b) named
graph content.

 Kingsley


 On Fri, Feb 20, 2015 at 4:59 PM, Michael Brunnbauer bru...@netestate.de
 wrote:

 Hello Martynas,

 sorry! You mean this one?


 http://linkeddatahub.com/ldh?mode=http%3A%2F%2Fgraphity.org%2Fgc%23EditMode

 Nice! Looks like a template but you still may have the triple object
 ordering
 problem. Do you? If yes, how did you address it?

 Regards,

 Michael Brunnbauer

 On Fri, Feb 20, 2015 at 04:23:14PM +0100, Martynas Jusevičius wrote:

 I find it funny that people on this list and semweb lists in general
 like discussing abstractions, ideas, desires, prejudices etc.

 However when a concrete example is shown, which solves the issue
 discussed or at least comes close to that, it receives no response.

 So please continue discussing the ideal RDF environment and its
 potential problems while we continue improving our editor for users
 who manage RDF already now.

 Have a nice weekend everyone!

 On Fri, Feb 20, 2015 at 4:09 PM, Paul Houle ontolo...@gmail.com wrote:

 So some thoughts here.

 OWL,  so far as inference is concerned,  is a failure and it is time to
 move
 on.  It is like RDF/XML.

 As a way of documenting types and properties it is tolerable.  If I
 write
 down something in production rules I can generally explain to an
 average
 joe what they mean.  If I try to use OWL it is easy for a few things,
 hard
 for a few things,  then there are a few things Kendall Clark can do,
 and
 then there is a lot you just can't do.

 On paper OWL has good scaling properties but in practice production
 rules
 win because you can infer the things you care about and not have to
 generate
 the large number of trivial or otherwise uninteresting conclusions you
 get
 from OWL.

 As a data integration language OWL points in an interesting direction
 but it
 is insufficient in a number of ways.  For instance,  it can't convert
 data
 types (canonicalize mailto:j...@example.com and j...@example.com),
 deal
 with trash dates (have you ever seen an enterprise system that didn't
 have
 trash dates?) or convert units.  It also can't reject facts that don't
 matter and so far as both timespace and accuracy you do much easier if
 you
 can cook things down to the smallest correct database.

 

 The other one is that as Kingsley points out,  the ordered collections
 do
 need some real work to square the circle between the abstract graph
 representation and things that are actually practical.

 I am building an app right now where I call an API and get back chunks
 of
 JSON which I cache,  and the primary scenario is that I look them up by
 primary key and get back something with a 1:1 correspondence to what I
 got.
 Being able to do other kind of queries and such is sugar on top,  but
 being
 able to reconstruct an original record,  ordered collections and all,
 is an
 absolute requirement.

 So far my infovore framework based on Hadoop has avoided collections,
 containers and all that because these are not used in DBpedia and
 Freebase,
 at least not in the A-Box.  The simple representation that each triple
 is a
 record does not work so well in this case because if I just turn blank
 nodes
 into UUIDs and spray them across the cluster

Re: Microsoft Access for RDF?

2015-02-21 Thread Martynas Jusevičius
Kingsley,

I don't need a lecture from you each time you disagree.

Please explain what you think Resource means in Resource
Description Framework.

In any case, I think you know well what I mean.

A grouped RDF/XML output would be smth like this:

<rdf:Description rdf:about="http://resource">
  <rdf:type rdf:resource="http://type"/>
  <a:property>value</a:property>
  <b:property>smth</b:property>
</rdf:Description>

What would you call this? I call it a "resource description". But the
name does not matter much; the fact is that we use it and it works.


On Sat, Feb 21, 2015 at 7:01 PM, Kingsley  Idehen
kide...@openlinksw.com wrote:
 On 2/21/15 9:48 AM, Martynas Jusevičius wrote:

 On Fri, Feb 20, 2015 at 6:41 PM, Kingsley  Idehen
 kide...@openlinksw.com wrote:

 On 2/20/15 12:04 PM, Martynas Jusevičius wrote:


 Not to criticize, but to seek clarity:

 What does the term resources refer to, in your usage context?

 In a world of Relations (this is what RDF is about, fundamentally) it's hard
 for me to understand what you mean by "grouped by resources". What is the
 "resource" etc.?

 Well, RDF stands for Resource Description Framework after all, so
 I'll cite its spec:
 RDF graphs are sets of subject-predicate-object triples, where the
 elements may be IRIs, blank nodes, or datatyped literals. They are
 used to express descriptions of resources.

 More to the point, RDF serializations often group triples by subject
 URI.


 The claim often group triples by subject  isn't consistent with the nature
 of an RDF Relation [1].

 A predicate is a sentence-forming relation. Each tuple in the relation is a
 finite, ordered sequence of objects. The fact that a particular tuple is an
 element of a predicate is denoted by '(*predicate* arg_1 arg_2 .. arg_n)',
 where the arg_i are the objects so related. In the case of binary
 predicates, the fact can be read as `arg_1 is *predicate* arg_2' or `a
 *predicate* of arg_1 is arg_2'.)  [1] .

 RDF's specs are consistent with what's described above, and inconsistent
 with the subject ordering claims you are making.

 RDF statements (which represent relations) have sources such as documents
 which are accessible over a network and/or documents managed by some RDBMS
 e.g., Named Graphs in the case of a SPARQL compliant RDBMS .

 In RDF you are always working with a set of tuples (s,p,o 3-tuples
 specifically) grouped by predicate .

 Also note, I never used the phrase RDF Graph in any of the sentences
 above, and deliberately so, because that overloaded phrase is yet another
 source of unnecessary confusion.

 Links:

 [1]
 http://54.183.42.206:8080/sigma/Browse.jsp?lang=EnglishLanguage&flang=SUO-KIF&kb=SUMO&term=Predicate

 Kingsley


   Within a resource block, properties are sorted
 alphabetically by their rdfs:labels retrieved from respective
 vocabularies.

 How do you handle the integrity of multi-user updates, without killing
 concurrency, using this method of grouping (which in and of itself is
 unclear due to the use of the "resources" term)?

 How do you minimize the user interaction space i.e., reduce clutter --
 especially if you have a lot of relations in scope or the possibility that
 such becomes the reality over time?

 I don't think concurrent updates are related to resources or specific
 to our editor. The Linked Data platform (whatever it is) and its HTTP
 logic has to deal with ETags and 409 Conflict etc.

 I was wondering if this logic should be part of specifications such as
 the Graph Store Protocol:
 https://twitter.com/pumba_lt/status/545206095783145472
 But I haven't an answer. Maybe it's an oversight on the W3C side?

 We scope the description edited either by a) SPARQL query or b) named
 graph content.

 Kingsley

 On Fri, Feb 20, 2015 at 4:59 PM, Michael Brunnbauer bru...@netestate.de
 wrote:

 Hello Martynas,

 sorry! You mean this one?


 http://linkeddatahub.com/ldh?mode=http%3A%2F%2Fgraphity.org%2Fgc%23EditMode

 Nice! Looks like a template but you still may have the triple object
 ordering
 problem. Do you? If yes, how did you address it?

 Regards,

 Michael Brunnbauer

 On Fri, Feb 20, 2015 at 04:23:14PM +0100, Martynas Jusevičius wrote:

 I find it funny that people on this list and semweb lists in general
 like discussing abstractions, ideas, desires, prejudices etc.

 However when a concrete example is shown, which solves the issue
 discussed or at least comes close to that, it receives no response.

 So please continue discussing the ideal RDF environment and its
 potential problems while we continue improving our editor for users
 who manage RDF already now.

 Have a nice weekend everyone!

 On Fri, Feb 20, 2015 at 4:09 PM, Paul Houle ontolo...@gmail.com wrote:

 So some thoughts here.

 OWL,  so far as inference is concerned,  is a failure and it is time to
 move
 on.  It is like RDF/XML.

 As a way of documenting types and properties it is tolerable.  If I
 write
 down something in production rules I can generally explain to an
 average
 joe what they mean.  If I try

Re: Microsoft Access for RDF?

2015-02-21 Thread Martynas Jusevičius
Hey Michael,

the resource description being edited is loaded with a SPARQL query.
The updated description is inserted using SPARQL update.

Although often it is useful to edit and update full content from a
named graph using Graph Store Protocol instead. So view and update
mechanisms are not necessarily symmetrical.
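
A sketch of the two halves (the resource and graph URIs are made up):

# load the description to be edited
DESCRIBE <http://example.org/some/resource>

# write the edited description back into its named graph
DELETE WHERE { GRAPH <http://example.org/graphs/some-resource> { ?s ?p ?o } } ;
INSERT DATA { GRAPH <http://example.org/graphs/some-resource> {
    <http://example.org/some/resource> <http://purl.org/dc/terms/title> "New title"
} }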

Triple objects can also be ordered alphabetically. We use ordering of
resources by properties (e.g. container items) a lot, but it is more
relevant in the view case than in the edit case.

Does that answer your question? If not, I'm not exactly sure what kind
of ordering you are referring to, or why it is relevant.

On Fri, Feb 20, 2015 at 8:49 PM, Michael Brunnbauer bru...@netestate.de wrote:

 Hello Martynas,

 On Fri, Feb 20, 2015 at 06:04:49PM +0100, Martynas Jusevičius wrote:
 The layout is generated with XSLT from RDF/XML. The triples are
 grouped by resources. Within a resource block, properties are sorted
 alphabetically by their rdfs:labels retrieved from respective
 vocabularies.

 So the ordering of triple objects cannot be controlled by the user and this
 ordering is not relevant for your use case?

 Regards,

 Michael Brunnbauer

 --
 ++  Michael Brunnbauer
 ++  netEstate GmbH
 ++  Geisenhausener Straße 11a
 ++  81379 München
 ++  Tel +49 89 32 19 77 80
 ++  Fax +49 89 32 19 77 89
 ++  E-Mail bru...@netestate.de
 ++  http://www.netestate.de/
 ++
 ++  Sitz: München, HRB Nr.142452 (Handelsregister B München)
 ++  USt-IdNr. DE221033342
 ++  Geschäftsführer: Michael Brunnbauer, Franz Brunnbauer
 ++  Prokurist: Dipl. Kfm. (Univ.) Markus Hendel



Re: Microsoft Access for RDF?

2015-02-21 Thread Martynas Jusevičius
Kingsley,

I am fully aware of the distinction between RDF as a data model and
its serializations. That's why I wrote: RDF *serializations* often
group triples by subject URI.

What I tweeted recently was that, despite having conceptual models and
abstractions in our heads, when pipelining data and writing software
we are dealing with concrete serializations. Am I not right?

So what I mean with it works is that our RDF/POST-based user
interface is simply a generic function of the RDF graph behind it, in
the form of XSLT transforming the RDF/XML serialization.

I commented on concurrency in the previous email, but you haven't
replied to that.

On Sat, Feb 21, 2015 at 8:43 PM, Kingsley  Idehen
kide...@openlinksw.com wrote:
 On 2/21/15 1:34 PM, Martynas Jusevičius wrote:

 Kingsley,

 I don't need a lecture from you each time you disagree.


 I am not lecturing you. I am trying to make the conversation clearer. Can't
 you see that?


 Please explain what you think Resource means in Resource
 Description Framework.

 In any case, I think you know well what I mean.

 A grouped RDF/XML output would be smth like this:

  <rdf:Description rdf:about="http://resource">
    <rdf:type rdf:resource="http://type"/>
    <a:property>value</a:property>
    <b:property>smth</b:property>
  </rdf:Description>


 You spoke about RDF not RDF/XML (as you know, they are not the same thing).
 You said or implied RDF datasets are usually organized by subject.

 RDF is an abstract Language (system of signs, syntax, and semantics). Thus,
 why are you presenting me with an RDF/XML statement notation based response,
 when we are debating/discussing the nature of an RDF relation?

 Can't you see that the more we speak about RDF in overloaded-form the more
 confusing it remains?

 RDF isn't a mystery. It doesn't have to be some unsolvable riddle. Sadly,
 that's its general perception because we talk about it using common terms in
 an overloaded manner.


 How would you call this? I call it a resource description.


 See my comments above. RDF is a Language. You can create RDF Statements in a
 document using a variety of Notations. Thus, when speaking of RDF I am not
 thinking about RDF/XML, TURTLE, or any other notation. I am thinking about a
 language that systematically leverages signs, syntax, and semantics as a
 mechanism for encoding and decoding information [data in some context].

 But the
 name does not matter much, the fact is that we use it and it works.


 Just works in what sense? That's why I asked you questions about how you
 catered to integrity and concurrency.

 If you have more than one person editing sentences, paragraphs, or a page in
 a book, wouldn't you think handling issues such as the activity frequency,
 user count, and content volume are important? That's all I was seeking an
 insight from you about, in regards to your work.


 Kingsley



 On Sat, Feb 21, 2015 at 7:01 PM, Kingsley  Idehen
 kide...@openlinksw.com wrote:

 On 2/21/15 9:48 AM, Martynas Jusevičius wrote:

 On Fri, Feb 20, 2015 at 6:41 PM, Kingsley  Idehen
 kide...@openlinksw.com wrote:

 On 2/20/15 12:04 PM, Martynas Jusevičius wrote:


 Not to criticize, but to seek clarity:

 What does the term resources refer to, in your usage context?

 In a world of Relations (this is what RDF is about, fundamentally) it's hard
 for me to understand what you mean by "grouped by resources". What is the
 "resource" etc.?

 Well, RDF stands for Resource Description Framework after all, so
 I'll cite its spec:
 RDF graphs are sets of subject-predicate-object triples, where the
 elements may be IRIs, blank nodes, or datatyped literals. They are
 used to express descriptions of resources.

 More to the point, RDF serializations often group triples by subject
 URI.


 The claim often group triples by subject  isn't consistent with the
 nature
 of an RDF Relation [1].

 A predicate is a sentence-forming relation. Each tuple in the relation
 is a
 finite, ordered sequence of objects. The fact that a particular tuple is
 an
 element of a predicate is denoted by '(*predicate* arg_1 arg_2 ..
 arg_n)',
 where the arg_i are the objects so related. In the case of binary
 predicates, the fact can be read as `arg_1 is *predicate* arg_2' or `a
 *predicate* of arg_1 is arg_2'.)  [1] .

 RDF's specs are consistent with what's described above, and inconsistent
 with the subject ordering claims you are making.

 RDF statements (which represent relations) have sources such as documents
 which are accessible over a network and/or documents managed by some
 RDBMS
 e.g., Named Graphs in the case of a SPARQL compliant RDBMS .

 In RDF you are always working with a set of tuples (s,p,o 3-tuples
 specifically) grouped by predicate .

 Also note, I never used the phrase RDF Graph in any of the sentences
 above, and deliberately so, because that overloaded phrase is yet another
 source of unnecessary confusion.

 Links:

 [1]

  http://54.183.42.206:8080/sigma/Browse.jsp?lang=EnglishLanguage&flang=SUO-KIF&kb=SUMO&term

Re: Microsoft Access for RDF?

2015-02-20 Thread Martynas Jusevičius
I find it funny that people on this list and semweb lists in general
like discussing abstractions, ideas, desires, prejudices etc.

However when a concrete example is shown, which solves the issue
discussed or at least comes close to that, it receives no response.

So please continue discussing the ideal RDF environment and its
potential problems while we continue improving our editor for users
who manage RDF already now.

Have a nice weekend everyone!

On Fri, Feb 20, 2015 at 4:09 PM, Paul Houle ontolo...@gmail.com wrote:
 So some thoughts here.

 OWL,  so far as inference is concerned,  is a failure and it is time to move
 on.  It is like RDF/XML.

 As a way of documenting types and properties it is tolerable.  If I write
 down something in production rules I can generally explain to an average
 joe what they mean.  If I try to use OWL it is easy for a few things,  hard
 for a few things,  then there are a few things Kendall Clark can do,  and
 then there is a lot you just can't do.

 On paper OWL has good scaling properties but in practice production rules
 win because you can infer the things you care about and not have to generate
 the large number of trivial or otherwise uninteresting conclusions you get
 from OWL.

 As a data integration language OWL points in an interesting direction but it
 is insufficient in a number of ways.  For instance,  it can't convert data
 types (canonicalize mailto:j...@example.com and j...@example.com),  deal
 with trash dates (have you ever seen an enterprise system that didn't have
 trash dates?) or convert units.  It also can't reject facts that don't
 matter and so far as both timespace and accuracy you do much easier if you
 can cook things down to the smallest correct database.

 

 The other one is that as Kingsley points out,  the ordered collections do
 need some real work to square the circle between the abstract graph
 representation and things that are actually practical.

 I am building an app right now where I call an API and get back chunks of
 JSON which I cache,  and the primary scenario is that I look them up by
 primary key and get back something with a 1:1 correspondence to what I got.
 Being able to do other kind of queries and such is sugar on top,  but being
 able to reconstruct an original record,  ordered collections and all,  is an
 absolute requirement.

 So far my infovore framework based on Hadoop has avoided collections,
 containers and all that because these are not used in DBpedia and Freebase,
 at least not in the A-Box.  The simple representation that each triple is a
 record does not work so well in this case because if I just turn blank nodes
 into UUIDs and spray them across the cluster,  the act of reconstituting a
 container would require an unbounded number of passes,  which is no fun at
 all with Hadoop.  (At first I though the # of passes was the same as the
 length of the largest collection but now that I think about it I think I can
 do better than that)  I don't feel so bad about most recursive structures
 because I don't think they will get that deep but I think LISP-Lists are
 evil at least when it comes to external memory and modern memory
 hierarchies.





Re: Microsoft Access for RDF?

2015-02-18 Thread Martynas Jusevičius
Hey Paul,

we have the editing interface, but ontologies are not of much help
here. The question is, how and where to draw the boundary of the
description that you want to edit, because millions of triples on one
page will not work.

Fine-grained named graphs and/or SPARQL queries/updates are 2
solutions for this.

Martynas
graphityhq.com

On Wed, Feb 18, 2015 at 9:08 PM, Paul Houle ontolo...@gmail.com wrote:
 I am looking at some cases where I have databases that are similar to
 Dbpedia and Freebase in character,  sometimes that big (ok,  those
 particular databases),   sometimes smaller.  Right now there are no blank
 nodes,  perhaps there are things like the compound value types from
 Freebase which are sorta like blank nodes but they have names,

 Sometimes I want to manually edit a few records.  Perhaps I want to delete a
 triple or add a few triples (possibly introducing a new subject.)

 It seems to me there could be some kind of system which points at a SPARQL
 protocol endpoint (so I can keep my data in my favorite triple store) and
 given an RDFS or OWL schema,  automatically generates the forms so I can
 easily edit the data.

 Is there something out there?

 --
 Paul Houle
 Expert on Freebase, DBpedia, Hadoop and RDF
 (607) 539 6254paul.houle on Skype   ontolo...@gmail.com
 http://legalentityidentifier.info/lei/lookup



Re: Microsoft Access for RDF?

2015-02-18 Thread Martynas Jusevičius
Why do you assume I assume something?

I simply stated what should be considered. When Paul replies, we'll
know what kind of dataset it is, and whether these issues apply.

He mentions DBPedia as an example, which returns around 20 named
graphs for its triples. How are those supposed to be edited?

select distinct ?g where { graph ?g { ?s ?p ?o } }

count() didn't work for me BTW.
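
For reference, the SPARQL 1.1 aggregate form would be the following;
whether it works depends on the endpoint:

SELECT (COUNT(DISTINCT ?g) AS ?graphs) WHERE { GRAPH ?g { ?s ?p ?o } }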

On Wed, Feb 18, 2015 at 10:51 PM, Kingsley  Idehen
kide...@openlinksw.com wrote:
 On 2/18/15 4:01 PM, Martynas Jusevičius wrote:

 we have the editing interface, but ontologies are not of much help
 here. The question is, how and where to draw the boundary of the
 description that you want to edit, because millions of triples on one
 page will not work.


 Why do you assume that the target of an edit is a massive dataset presented
 in a single page?



 --
 Regards,

 Kingsley Idehen
 Founder  CEO
 OpenLink Software
 Company Web: http://www.openlinksw.com
 Personal Weblog 1: http://kidehen.blogspot.com
 Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
 Twitter Profile: https://twitter.com/kidehen
 Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
 LinkedIn Profile: http://www.linkedin.com/in/kidehen
 Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this





Re: Microsoft Access for RDF?

2015-02-18 Thread Martynas Jusevičius
Paul,

does this look something like the interface you could use?

http://linkeddatahub.com/ldh?mode=http%3A%2F%2Fgraphity.org%2Fgc%23EditMode


Martynas
graphityhq.com

On Wed, Feb 18, 2015 at 11:59 PM, Paul Houle ontolo...@gmail.com wrote:
 Well here is my user story.

 I am looking at a page that looks like this

 http://dbpedia.org/page/Albert_Einstein

 it drives me up the wall that the following facts are in there:

 :Albert_Einstein
dbpedia-owl:childOf :EinsteinFamily ;
dbpedia-owl:parentOf :EinsteinFamily .

 which is just awful in a whole lot of ways.  Case (1) is that I click an X
 next to those two triples and they are gone,  Case (2) is that I can create
 new records to fill in his Family tree and that will involve many other user
 stories such as (3) user creates literal field and so forth.

 Freebase and Wikidata have good enough user interfaces that revolve around
 entities,  see

 https://www.freebase.com/m/0jcx  (while you still can)
 http://www.wikidata.org/wiki/Q937

 but neither of those is RDF-centric.  It seems to me that an alternative
 semantics could be defined for RDF and OWL that work work like so:

 * we say :Albert_Einstein is a :Person
 * we then see some form with fields people can fill, or alternately there
 is a dropdown list with predicates that have this as a known domain (a
 sketch of such a schema query follows below); the range can also be used
 backwards so that we expect a langstring or integer or a link to another
 :Person
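
 A sketch of the kind of schema query that could populate such a dropdown
 (assuming the usual rdfs: prefix; :Person stands for the class being edited):

 SELECT ?p ?range WHERE {
     ?p rdfs:domain :Person .
     OPTIONAL { ?p rdfs:range ?range }
 }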

 It's important that this be some tool that somebody who knows little about
 RDF can enter data and edit it with a little bit of task-oriented (as
 opposed to concept-oriented training.)

 The idea here is that the structures and vocabulary are constrained so that
 the structures are not complex;  both DBpedia and Freebase are so
 constrained.  You might want to say things like

  [ a sp:Ask ;
    rdfs:comment "must be at least 18 years old"^^xsd:string ;
    sp:where ( [ sp:object sp:_age ;
                 sp:predicate my:age ;
                 sp:subject spin:_this ]
               [ a sp:Filter ;
                 sp:expression [ sp:arg1 sp:_age ;
                                 sp:arg2 18 ;
                                 a sp:lt ] ] )
  ]


 and that is cool,  but I have no idea how to make that simple for a muggle
 to use and I'm interested in these things that are similar in character to a
 relational database,  so I'd say that is out-of-scope for now.  I think this
 tool could probably edit RDFS schemas (treating them as instance data) but
 not be able to edit OWL schemas (if you need that use an OWL editor)

 Now if I was really trying to construct family trees I'd have to address
 cycles like that with some algorithms and heuristics because it probably
 take a long time to pluck them out by hand,  but some things you'll want to
 edit by hand and that process will be easier if you are working with a
 smaller data set,  which you can easily find.

 If you have decent type data,  as does Freebase,  it is not hard to pick out
 pieces of the WikiWorld,  such as ski areas or navy ships and the
 project of improving that kind of database with hand tools is much more
 tractable.

 For small projects you don't need access controls,  provenance and that kind
 of thing,  but if you were trying to run something like Freebase and
 Wikidata, where you know what the algebra is, the obvious thing to do is use
 RDF* and SPARQL*.









Re: linked open data and PDF

2015-01-20 Thread Martynas Jusevičius
Thanks for clarifying that Michael. Well, then I give up :) XMP looks
ridiculous indeed. If normal RDF/XML was allowed, it could be actually
useful.
On Jan 20, 2015 9:28 AM, Michael Brunnbauer bru...@netestate.de wrote:


 Hello Martynas,

  On Tue, Jan 20, 2015 at 02:34:23AM +0100, Martynas Jusevičius wrote:
  IMO mixing RDF/XML with JSON doesn't make sense.

 This is not RDF/XML. This is XMP. The last time I looked, XMP was a
 ridiculously crippled version of RDF/XML:


 http://partners.adobe.com/public/developer/en/xmp/sdk/XMPspecification.pdf

  The rdf:about attribute on the rdf:Description element is a required
 attribute that may
  be used to identify the resource whose metadata this XMP describes. The
 value of this
  attribute should generally be empty.

  The XMP storage model does not use the rdf:about attribute to identify
 the
  resource. The value will be preserved, but is not meaningful to XMP.

  All rdf:Description elements within an rdf:RDF element must have the
 same value for their rdf:about attributes.

  The elements immediately within rdf:RDF must be rdf:Description
 elements.

  The rdf:ID and rdf:nodeID attributes are ignored.

  Top-level RDF typed nodes are not supported.

 I guess this is why people here think about using a serialized RDF document
 as value of a property - which multiplies the ridiculousness created by
 Adobe.

 I would not recommend such stunts if we want to be taken seriously.

 Regards,

 Michael Brunnbauer

 --
 ++  Michael Brunnbauer
 ++  netEstate GmbH
 ++  Geisenhausener Straße 11a
 ++  81379 München
 ++  Tel +49 89 32 19 77 80
 ++  Fax +49 89 32 19 77 89
 ++  E-Mail bru...@netestate.de
 ++  http://www.netestate.de/
 ++
 ++  Sitz: München, HRB Nr.142452 (Handelsregister B München)
 ++  USt-IdNr. DE221033342
 ++  Geschäftsführer: Michael Brunnbauer, Franz Brunnbauer
 ++  Prokurist: Dipl. Kfm. (Univ.) Markus Hendel



Re: linked open data and PDF

2015-01-19 Thread Martynas Jusevičius
Hey all,

I think APIs for common languages like Java and C# to extract XMP RDF
from PDF Files/Streams would be much more helpful than standalone
tools such as Paul mentions.

I've looked at Adobe PDF Library SDK but none of the features mention metadata:
http://www.adobe.com/devnet/pdf/library.html


Martynas
graphityhq.com

On Mon, Jan 19, 2015 at 11:24 PM, Paul Houle ontolo...@gmail.com wrote:
 I just used Acrobat Pro to look at the XMP metadata for a standards document
 (extra credit if you know which one) and saw something like this

 https://raw.githubusercontent.com/paulhoule/images/master/MetadataSample.PNG

 in this particular case this is fine RDF,  just very little of it because
 nobody made an effort to fill it in.  The lack of a title is particularly
 annoying when I am reading this document at the gym because it gets lost in
 a maze of twisty filenames that all look the same,

 I looked at some financial statements and found that some were very well
 annotated and some not at all.  Acrobat Pro has a tool that outputs the data
 in RDF/XML;  I can't imagine it is hard to get this data out with third
 party tools in most cases.


 On Mon, Jan 19, 2015 at 2:36 PM, Larry Masinter masin...@adobe.com wrote:

 I just joined this list. I’m looking to help improve the story for Linked
 Open Data in PDF, to lift PDF (and other formats) from one-star to five,
 perhaps using XMP. I’ve found a few hints in the mailing list archive here.
 http://lists.w3.org/Archives/Public/public-lod/2014Oct/0169.html
 but I’m still looking. Any clues, problem statements, sample sites?

 Larry
 --
 http://larry.masinter.net




 --
 Paul Houle
 Expert on Freebase, DBpedia, Hadoop and RDF
 (607) 539 6254paul.houle on Skype   ontolo...@gmail.com
 http://legalentityidentifier.info/lei/lookup



Re: linked open data and PDF

2015-01-19 Thread Martynas Jusevičius
PDFBox includes metadata API, but does not mention RDF:
https://pdfbox.apache.org/1.8/cookbook/workingwithmetadata.html
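
For what it's worth, a minimal sketch of such an API in Java, combining
PDFBox (1.8) with Jena; this assumes the XMP packet parses as RDF/XML once
the x:xmpmeta wrapper is tolerated or stripped, and omits error handling:

import java.io.File;
import java.io.InputStream;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.common.PDMetadata;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;

public class XmpExtractor {

    // Extracts the XMP packet from a PDF and parses it into a Jena model.
    public static Model extract(String path) throws Exception {
        PDDocument doc = PDDocument.load(new File(path));
        try {
            Model model = ModelFactory.createDefaultModel();
            PDMetadata meta = doc.getDocumentCatalog().getMetadata();
            if (meta != null) {
                InputStream in = meta.createInputStream(); // raw XMP: RDF/XML inside x:xmpmeta
                model.read(in, null); // base URI unknown; XMP typically uses rdf:about=""
            }
            return model;
        } finally {
            doc.close();
        }
    }
}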

On Mon, Jan 19, 2015 at 11:31 PM, Martynas Jusevičius
marty...@graphity.org wrote:
 Hey all,

 I think APIs for common languages like Java and C# to extract XMP RDF
 from PDF Files/Streams would be much more helpful than standalone
 tools such as Paul mentions.

 I've looked at Adobe PDF Library SDK but none of the features mention 
 metadata:
 http://www.adobe.com/devnet/pdf/library.html


 Martynas
 graphityhq.com

 On Mon, Jan 19, 2015 at 11:24 PM, Paul Houle ontolo...@gmail.com wrote:
 I just used Acrobat Pro to look at the XMP metadata for a standards document
 (extra credit if you know which one) and saw something like this

 https://raw.githubusercontent.com/paulhoule/images/master/MetadataSample.PNG

 in this particular case this is fine RDF,  just very little of it because
 nobody made an effort to fill it in.  The lack of a title is particularly
 annoying when I am reading this document at the gym because it gets lost in
 a maze of twisty filenames that all look the same,

 I looked at some financial statements and found that some were very well
 annotated and some not at all.  Acrobat Pro has a tool that outputs the data
 in RDF/XML;  I can't imagine it is hard to get this data out with third
 party tools in most cases.


 On Mon, Jan 19, 2015 at 2:36 PM, Larry Masinter masin...@adobe.com wrote:

 I just joined this list. I’m looking to help improve the story for Linked
 Open Data in PDF, to lift PDF (and other formats) from one-star to five,
 perhaps using XMP. I’ve found a few hints in the mailing list archive here.
 http://lists.w3.org/Archives/Public/public-lod/2014Oct/0169.html
 but I’m still looking. Any clues, problem statements, sample sites?

 Larry
 --
 http://larry.masinter.net




 --
 Paul Houle
 Expert on Freebase, DBpedia, Hadoop and RDF
 (607) 539 6254paul.houle on Skype   ontolo...@gmail.com
 http://legalentityidentifier.info/lei/lookup



Re: linked open data and PDF

2015-01-19 Thread Martynas Jusevičius
Larry,

IMO mixing RDF/XML with JSON doesn't make sense. Why not keep it
RDF/XML? Like this (not tested):

<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
      xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
      xmlns:pdf="http://ns.adobe.com/pdf/1.3/"
      xmlns:lod="http://5stardata.info/ns/linked-data"
      xmlns:xmp="http://ns.adobe.com/xap/1.0/"
      xmlns:dc="http://purl.org/dc/elements/1.1/"
      xmlns:owl="http://www.w3.org/2002/07/owl#"
      xmlns:meteo="http://purl.org/ns/meteo#"
      xmlns:xmpMM="http://ns.adobe.com/xap/1.0/mm/">
   <rdf:Description rdf:about="">
    <pdf:Producer>Acrobat editing after Mac OS X 10.5.8 Quartz PDFContext</pdf:Producer>
    <xmp:CreateDate>2012-01-22T15:00:00-08:00</xmp:CreateDate>
    <dc:title>Temperature forecast for Galway, Ireland</dc:title>
    <dc:creator>http://sw-app.org/mic.xhtml#i</dc:creator>
    <xmpMM:DocumentID>uuid:80f41db9-628f-48aa-9b7a-b7f5c2629304</xmpMM:DocumentID>
   </rdf:Description>
   <rdf:Description rdf:about="#Galway">
    <rdf:type rdf:resource="http://purl.org/ns/meteo#Place"/>
    <owl:sameAs rdf:resource="http://dbpedia.org/resource/Galway"/>
    <meteo:forecast rdf:resource="#forecast20101113"/>
    <meteo:forecast rdf:resource="#forecast20101114"/>
    <meteo:forecast rdf:resource="#forecast20101115"/>
   </rdf:Description>
   <rdf:Description rdf:about="#temp">
    <rdfs:seeAlso rdf:resource="http://dbpedia.org/resource/Temperature"/>
    <owl:sameAs rdf:resource="http://dbpedia.org/resource/Celsius"/>
   </rdf:Description>
   <rdf:Description rdf:about="#forecast20101113">
    <meteo:predicted rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2010-11-13T00:00:00Z</meteo:predicted>
    <meteo:celsius rdf:datatype="http://www.w3.org/2001/XMLSchema#double">2</meteo:celsius>
   </rdf:Description>
   <rdf:Description rdf:about="#forecast20101114">
    <meteo:predicted rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2010-11-14T00:00:00Z</meteo:predicted>
    <meteo:celsius rdf:datatype="http://www.w3.org/2001/XMLSchema#double">4</meteo:celsius>
   </rdf:Description>
   <rdf:Description rdf:about="#forecast20101115">
    <meteo:predicted rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2010-11-15T00:00:00Z</meteo:predicted>
    <meteo:celsius rdf:datatype="http://www.w3.org/2001/XMLSchema#double">7</meteo:celsius>
   </rdf:Description>
  </rdf:RDF>
</x:xmpmeta>

Martynas

On Tue, Jan 20, 2015 at 2:04 AM, Larry Masinter masin...@adobe.com wrote:
 I made a little example PDF based on the example at http://5stardata.info/  
 where document metadata is in the XMP itself, but document data is just a 
 string value (this example uses JSON).

 The same data from http://5stardata.info/gtd-5.html should be available in 
 the attached, starting from http://5stardata.info/gtd-1.pdf.

 I thought a concrete example would help


  <x:xmpmeta xmlns:x="adobe:ns:meta/">
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
     <rdf:Description rdf:about=""
        xmlns:pdf="http://ns.adobe.com/pdf/1.3/"
        xmlns:lod="http://5stardata.info/ns/linked-data"
        xmlns:xmp="http://ns.adobe.com/xap/1.0/"
        xmlns:dc="http://purl.org/dc/elements/1.1/"
        xmlns:xmpMM="http://ns.adobe.com/xap/1.0/mm/">
      <pdf:Producer>Acrobat editing after Mac OS X 10.5.8 Quartz PDFContext</pdf:Producer>
      <lod:JSON-Data>{"#Galway":{"rdf:type":["http://purl.org/ns/meteo#Place"],
        "http://www.w3.org/2002/07/owl#sameAs":["http://dbpedia.org/resource/Galway"],
        "http://purl.org/ns/meteo#forecast":["#forecast20101113","#forecast20101114","#forecast20101115"]},
        "#temp":{"http://www.w3.org/2000/01/rdf-schema#seeAlso":["http://dbpedia.org/resource/Temperature"],
        "http://www.w3.org/2002/07/owl#sameAs":["http://dbpedia.org/resource/Celsius"]},
        "#forecast20101113":{"http://purl.org/ns/meteo#predicted":["2010-11-13T00:00:00Z"],
        "http://purl.org/ns/meteo#temperature":["#temp20101113"]},
        "#temp20101113":{"http://purl.org/ns/meteo#celsius":["2"]},
        "#forecast20101114":{"http://purl.org/ns/meteo#predicted":["2010-11-14T00:00:00Z"],
        "http://purl.org/ns/meteo#temperature":["#temp20101114"]},
        "#temp20101114":{"http://purl.org/ns/meteo#celsius":["4"]},
        "#forecast20101115":{"http://purl.org/ns/meteo#predicted":["2010-11-15T00:00:00Z"],
        "http://purl.org/ns/meteo#temperature":["#temp20101115"]},
        "#temp20101115":{"http://purl.org/ns/meteo#celsius":["7"]}}</lod:JSON-Data>
      <xmp:CreateDate>2012-01-22T15:00:00-08:00</xmp:CreateDate>
      <dc:title>Temperature forecast for Galway, Ireland</dc:title>
      <dc:creator>http://sw-app.org/mic.xhtml#i</dc:creator>
      <xmpMM:DocumentID>uuid:80f41db9-628f-48aa-9b7a-b7f5c2629304</xmpMM:DocumentID>
     </rdf:Description>
    </rdf:RDF>
  </x:xmpmeta>




Re: scientific publishing process (was Re: Cost and access)

2014-10-06 Thread Martynas Jusevičius
Dear Peter,

please show me how to query PDFs with SPARQL. Then I'll believe there
are no benefits of XHTML+RDFa over PDF.

Addressing the issue from the reviewer perspective only is too narrow,
don't you think?


Martynas

On Mon, Oct 6, 2014 at 6:08 PM, Peter F. Patel-Schneider
pfpschnei...@gmail.com wrote:


 On 10/06/2014 08:38 AM, Phillip Lord wrote:

 Peter F. Patel-Schneider pfpschnei...@gmail.com writes:

 I would be totally astonished if using htlatex as the main way to produce
 conference papers were as simple as this.

 I just tried htlatex on my ISWC paper, and the result was, to put it
 mildly,
 horrible.  (One of my AAAI papers was about the same, the other one
 caused an
 undefined control sequence and only produced one page of output.)
 Several
 parts of the paper were rendered in fixed-width fonts.  There was no
 attempt
 to limit line length.  Footnotes were in separate files.



 The footnote thing is pretty strange, I have to agree. Although
  footnotes are a fairly alien concept wrt the web. Probably hover-overs
  would be a reasonable presentation for this.


 Many non-scalable images were included, even for simple math.


 It does MathML I think, which is then rendered client side. Or you could
 drop math-mode straight through and render client side with mathjax.


 Well, somehow png files are being produced for some math, which is a
 failure.  I don't know what the way to do this right would be, I just know
 that the version of htlatex for Fedora 20 fails to reasonably handle the
 math in this paper.

 My carefully designed layout for examples was modified in ways that
 made the examples harder to understand.


 Perhaps this is a key difference between us. I don't care about the
 layout, and want someone to do it for me; it's one of the reasons I use
 latex as well.


 There are many cases where line breaks and indentation are important for
 understanding.  Getting this sort of presentation right in latex is a pain
 for starters, but when it has been done, having the htlatex toolchain mess
 it up is a failure.

 That said, the result was better than I expected.  If someone upgrades
 htlatex
 to work well I'm quite willing to use it, but I expect that a lot of work
 is
 going to be needed.


 Which gets us back to the chicken and egg situation. I would probably do
 this; but, at the moment, ESWC and ISWC won't let me submit it. So, I'll
 end up with the PDF output anyway.


 Well, I'm with ESWC and ISWC here.  The review process should be designed to
 make reviewing easy for reviewers.  Until viewing HTML output is as
 trouble-free as viewing PDF output, then PDF should be the required format.

 This is why it is important that web conferences allow HTML, which is
 where the argument started. If you want something that prints just
 right, PDF is the thing for you. If you you want to read your papers in
 the bath, likewise, PDF is the thing for you. And that's fine by me (so
 long as you don't mind me reading your papers in the bath!). But it
 needs to not be the only option.


 Why?  What are the benefits of HTML reviewing, right now?  What are the
 benefits of HTML publishing, right now?  If there were HTML-based tools that
 worked well for preparing, reviewing, and reading scientific papers, then
 maybe conferences would use them.  However, conference organizers and
 reviewers have limited time, and are thus going for the simplest solution
 that works well.

 If some group thinks that a good HTML-based solution is possible, then let
 them produce this solution.  If the group can get pre-approval of some
 conference, then more power to them.  However, I'm not going to vote for any
 pre-approval of some future solution when the current situation is
 satisficing.

 Phil


 peter





Re: scientific publishing process (was Re: Cost and access)

2014-10-06 Thread Martynas Jusevičius
Following the same logic, we could still be using paper submissions.
All you have to do is scan them to turn them into PDFs.

It's been a while since I was at university, but wasn't
dissemination an important part of science? What about dogfooding,
after all?


Martynas

On Mon, Oct 6, 2014 at 6:48 PM, Peter F. Patel-Schneider
pfpschnei...@gmail.com wrote:
 It's not hard to query PDFs with SPARQL.  All you have to do is extract the
 metadata from the document and turn it into RDF, if needed.  Lots of
 programs extract and display this metadata already.

 No, I don't think that viewing this issue from the reviewer perspective is
 too narrow.  Reviewers form  a vital part of the scientific publishing
 process. Anything that makes their jobs harder or the results that they
 produce worse is going to have to have very large benefits over the current
 setup.  In any case, I haven't been looking at the reviewer perspective
 only, even in the message quoted below.

 peter

 PS:  This is *not* to say that I think that the reviewing process is
 anywhere near ideal.  On the contrary, I think that the reviewing process
 has many problems, particularly as it is performed in CS conferences.



 On 10/06/2014 09:19 AM, Martynas Jusevičius wrote:

 Dear Peter,

 please show me how to query PDFs with SPARQL. Then I'll believe there
 are no benefits of XHTML+RDFa over PDF.

 Addressing the issue from the reviewer perspective only is too narrow,
 don't you think?


 Martynas

 On Mon, Oct 6, 2014 at 6:08 PM, Peter F. Patel-Schneider
 pfpschnei...@gmail.com wrote:



 On 10/06/2014 08:38 AM, Phillip Lord wrote:


 Peter F. Patel-Schneider pfpschnei...@gmail.com writes:


 I would be totally astonished if using htlatex as the main way to
 produce
 conference papers were as simple as this.

 I just tried htlatex on my ISWC paper, and the result was, to put it
 mildly,
 horrible.  (One of my AAAI papers was about the same, the other one
 caused an
 undefined control sequence and only produced one page of output.)
 Several
 parts of the paper were rendered in fixed-width fonts.  There was no
 attempt
 to limit line length.  Footnotes were in separate files.




 The footnote thing is pretty strange, I have to agree. Although
 footnotes are a fairly alien concept wrt to the web. Probably hover
 overs would be a reasonable presentation for this.



Re: Technical challenges (Was Re: [ESWC 2015] First Call for Paper)

2014-10-01 Thread Martynas Jusevičius
Hey all,

is there any established and/or widely supported LaTeX XML schema?

I have found several projects, but not sure how much they're used:
- http://dlmf.nist.gov/LaTeXML/
- http://getfo.org/texml/
- http://www-sop.inria.fr/marelle/tralics/

If there were an agreed XML schema, it would be trivial to provide
templates for different styles using XSLT+CSS.


Martynas
graphity.org

On Thu, Oct 2, 2014 at 12:42 AM, Ali SH asaegyn+...@gmail.com wrote:
 Sarven, great work! We definitely need more initiatives like yours.

 It seems to me a big hindrance to this adoption is more sociological than
 technological.

 A quick suggestion - in the current thread we have people saying HTML/RDFa
 --> LaTeX - why not the other way around?

 Having LaTeX --> HTML/RDFa would bridge the gap. People who are writing
 papers can continue writing them in LaTeX, and when they're done, they
 simply publish as HTML/RDFa using a LaTeX plug-in?

 This helps reduce the activation energy for the shift to more Linked Data
 friendly formats, as people don't really need to change their writing
 practices (at least the LaTeX people), and would immediately generate a
 Linked Data ready format.


 On Wed, Oct 1, 2014 at 5:48 PM, Sarven Capadisli i...@csarven.ca wrote:

 On 2014-10-01 22:32, Pablo N. Mendes wrote:


 It may help to preemptively address concerns here. Does anyone have a
 HTML+CSS(+RDFa) template that looks exactly like the LNCS-formatted
 PDFs? Can we show that papers using this template:
 - look consistent with each other (follow the LNCS typesetting
 instructions)
 - look the same as the PDF counterparts
 - look the same in any reader
 - look the same on screen and printed
 - can be read both online and offline
 - have the same or smaller file size
 - make it easy to share with others (all in one file?)

 Can LaTeX to HTML be achieved easily with this template? Or at least is
 it as easy to write this HTML as it is to write in LaTeX?

 I feel like this thread warrants a manifesto with a backing github
 repo where everybody interested can chip in.


 The core of your concerns was addressed over the past few years in
 different ways on this mailing list. When some posed the situation as a
 technological problem, I've created some templates and LNCS and ACM
 styles:

 https://github.com/csarven/linked-research

 Reached out to OCs, supervisors, and authors. They all have a part in
 this. Even wrote manifestos:

 * http://csarven.ca/linked-research

 * http://csarven.ca/call-for-linked-research


 How about we try to solve a different problem? The one that I've posed:
 will SW/LD conferences encourage the community to eat their own dogfood for
 papers? We can certainly improve on whatever needs to be improved over
 time. The problem is that, if SW/LD technologies are not even welcome to
 share scientific knowledge at these conferences, it is irrelevant to worry
 about the technological comparisons.

 We have a Social Problem 101. Period.

 -Sarven
 http://csarven.ca/#i







Re: Technical challenges (Was Re: [ESWC 2015] First Call for Paper)

2014-10-01 Thread Martynas Jusevičius
Actually LaTeXML seems to do pretty much what I was thinking about:
http://dlmf.nist.gov/LaTeXML/manual/usage/usage.single.html#SS0.SSS0.P7

Could be packaged in a more user-friendly way though.

On Thu, Oct 2, 2014 at 12:56 AM, Martynas Jusevičius
marty...@graphity.org wrote:
 Hey all,

 is there any established and/or widely supported LaTeX XML schema?

 I have found several projects, but not sure how much they're used:
 - http://dlmf.nist.gov/LaTeXML/
 - http://getfo.org/texml/
 - http://www-sop.inria.fr/marelle/tralics/

 If there were an agreed XML schema, it would be trivial to provide
 templates for different styles using XSLT+CSS.


 Martynas
 graphity.org




Re: URIs within URIs

2014-08-24 Thread Martynas Jusevičius
Hey all,

the only specification I know that actually solves practical RDF input
problems is RDF/POST encoding:
http://www.lsrn.org/semweb/rdfpost.html

It can be easily incorporated into XSLT stylesheets to provide
reusable layout templates, and with SPIN constraints to provide
validation:
https://github.com/Graphity/graphity-browser/blob/master/src/main/webapp/static/org/graphity/client/xsl/layout.xsl

Best,

Martynas
graphityhq.com

On Sun, Aug 24, 2014 at 7:16 PM, Mark Baker dist...@acm.org wrote:

 On Aug 22, 2014 12:23 PM, Ruben Verborgh ruben.verbo...@ugent.be wrote:
 This gets us to a deeper difference between (current) Linked Data and the
 rest of the Web:
 Linked Data uses only links as hypermedia controls,
 whereas the remainder of the Web uses links *and forms*.
 Forms are a much more powerful mechanism to discover information.

 Indeed. Interestingly, this use case was the first one I published as an
 example of RDF Forms;

 http://www.markbaker.ca/2003/10/UriProxy/



Re: URIs within URIs

2014-08-22 Thread Martynas Jusevičius
Hey all,

Graphity Client uses the same ?uri= convention:
http://semanticreports.com/?uri=http%3A%2F%2Fwww4.wiwiss.fu-berlin.de%2Feurostat%2Fresource%2Fcountries%2FDanmark
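
Functionally, such a ?uri= lookup boils down to asking the local dataset
what it says about the given URI, inbound links included. A minimal SPARQL
sketch of that question, reusing Luca's example URI (prefixes and the
choice of endpoint are immaterial here):

  CONSTRUCT {
    <http://foo.com/alice> ?p ?o .
    ?s ?q <http://foo.com/alice> .
  }
  WHERE {
    { <http://foo.com/alice> ?p ?o }
    UNION
    { ?s ?q <http://foo.com/alice> }
  }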

Martynas
graphityhq.com

On Fri, Aug 22, 2014 at 5:56 PM, Bill Roberts b...@swirrl.com wrote:
 Hi Luca

 We certainly find a need for that kind of feature (as do many other linked 
 data publishers) and our choice in our PublishMyData platform has been the 
 URL pattern {domain}/resource?uri={url-encoded external URI} to expose info 
 in our databases about URIs in other domains.

 If there was a standard URL route for this scenario, we'd be glad to 
 implement it

 Best regards

 Bill

 On 22 Aug 2014, at 16:44, Luca Matteis lmatt...@gmail.com wrote:

 Dear LOD community,

 I'm wondering whether there has been any research regarding the idea
 of having URIs contain an actual URI, that would then resolve
 information about what the linked dataset states about the input URI.

 Example:

 http://foo.com/alice - returns data about what foo.com has regarding alice

 http://bar.com/endpoint?uri=http%3A%2F%2Ffoo.com%2Falice - doesn't
 just resolve the alice URI above, but returns what bar.com wants to
 say about the alice URI

 For that matter http://bar.com/?uri=http%3A%2F%2Ffoo.com%2Falice could 
 return:

  <http://bar.com/?uri=http%3A%2F%2Ffoo.com%2Falice> a void:Dataset .
  <http://foo.com/alice> #some #data .

 I know SPARQL endpoints already have this functionality, but was
  wondering whether any formal research was done in this direction
 rather than a full-blown SPARQL endpoint.

 The reason I'm looking for this sort of thing is because I simply need
 to ask certain third-party datasets whether they have data about a URI
 (inbound links).

 Best,
 Luca






Re: What happened to my trusted Turtle validator and converter?

2014-05-06 Thread Martynas Jusevičius
Phil,

AFAIK, none of the validators on that list accept large enough user
input in both RDF/XML and Turtle, as rdfabout.com used to. That is why
it's sad to see it go.

I think it is also safe to say that W3C's own validator is pretty
outdated and useless for the same reason:
http://www.w3.org/RDF/Validator/


Martynas
graphityhq.com

On Tue, May 6, 2014 at 5:05 PM, Phil Archer ph...@w3.org wrote:
 I should have said this in reply to Frans' original mail - there's a list of
 validators on the W3C Sem Web wiki, see
 http://www.w3.org/2001/sw/wiki/Category:Validator

 That's pulled from the more general list of tools
 http://www.w3.org/2001/sw/wiki/Tools

 There are recent entries on that wiki so I dare to think that it is still
 doing its job of providing useful info.

 HTH

 Phil.



 On 05/05/2014 08:58, Frans Knibbe | Geodan wrote:

 On 2014-04-28 22:58, Ghislain Atemezing wrote:

 Hi Frans,
 According to the creator of the tool on Twitter
 (https://twitter.com/JoshData/), he does not have any plan at the

 moment to fix the issue.

 Yes, after writing him (Josh Tauberer) he explained that the validator
 broke after an upgrade of the host OS, and that he does not have time to
 look into it.

 This is a great list! Already three alternatives for the Turtle
 validator and converter are noted. Thanks a lot for developing and
 sharing those. I will try them all out.


 However, he kindly shared the code at
 https://gist.github.com/JoshData/11370152.
 So, maybe you might be interested in hosting it ? (or even someone in
 this list) ?.
 I agree, it was/is a nice tool that deserves a « second life » ;)


 That is a good idea. I will try to see if I can find an open time window
 somewhere.

 Regards,
 Frans


 Best,
 Ghislain




 

 Frans Knibbe
 Geodan
 President Kennedylaan 1
 1079 MB Amsterdam (NL)

 T +31 (0)20 - 5711 347
 E frans.kni...@geodan.nl
 www.geodan.nl http://www.geodan.nl | disclaimer
 http://www.geodan.nl/disclaimer
 


 --


 Phil Archer
 W3C Data Activity Lead
 http://www.w3.org/2013/data/

 http://philarcher.org
 +44 (0)7887 767755
 @philarcher1




Re: What happened to my trusted Turtle validator and converter?

2014-04-27 Thread Martynas Jusevičius
Oh noes :( I was using it all the time as well.

Martynas
graphityhq.com
On Apr 27, 2014 5:23 PM, Frans Knibbe | Geodan frans.kni...@geodan.nl
wrote:

  Hi!

 I really liked the RDF validator and converter at
 http://www.rdfabout.com/demo/validator/. I used it often to validate the
 Turtle I wrote and sometimes I used it to convert Turtle to RDF/XML. So I
 was on the verge of being shocked when I noticed that it was gone from the
 web. Yesterday I got a HTTP error and today I am redirected to a blog about
 RDF.

 What has happened to the validator?
 Will it be back?
 Is there an alternative somewhere?

 Regards,
 Frans

  --
 Frans Knibbe
 Geodan
 President Kennedylaan 1
 1079 MB Amsterdam (NL)

 T +31 (0)20 - 5711 347
 E frans.kni...@geodan.nl
 www.geodan.nl | disclaimer http://www.geodan.nl/disclaimer
 --



Re: How to avoid that collections break relationships

2014-03-25 Thread Martynas Jusevičius
Vuk,

If URIs like /alice identify documents, then your example treats
documents as persons. In other words, you conflate information resources
and real-world resources.
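
One conventional way to keep the two apart is to mint a separate URI for
the person and link the document to it -- a minimal sketch using hash
URIs (the names are illustrative, and a 303 setup would do equally well):

  </alice> a foaf:PersonalProfileDocument ;
      foaf:primaryTopic </alice#person> .
  </alice#person> a schema:Person .
  </markus#person> schema:knows </alice#person> .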

Martynas
graphityhq.com
On Mar 25, 2014 11:35 AM, Vuk Milicic vuk.mili...@eurecom.fr wrote:

 Hi Markus,

 How about this:

 </markus/friends/> rdfs:subClassOf schema:Person .
 </alice> a </markus/friends/> .
 </markus> schema:knows </alice> .


 Vuk Milicic
 @faviki


 On 24 Mar 2014, at 16:24, Markus Lanthaler markus.lantha...@gmx.net
 wrote:

  Hi all,
 
  We have an interesting discussion in the Hydra W3C Community Group [1]
  regarding collections and would like to hear more opinions and ideas. I'm
  sure this is an issue a lot of Linked Data applications face in practice.
 
  Let's assume we want to build a Web API that exposes information about
  persons and their friends. Using schema.org, your data would look
 somewhat
  like this:
 
   </markus> a schema:Person ;
     schema:knows </alice> ;
     ...
     schema:knows </zorro> .
 
  All this information would be available in the document at /markus
 (please
  let's not talk about hash URLs etc. here, ok?). Depending on the number
 of
  friends, the document however may grow too large. Web APIs typically
 solve
  that by introducing an intermediary (paged) resource such as
  /markus/friends/. In Schema.org we have ItemList to do so:
 
   </markus> a schema:Person ;
     schema:knows </markus/friends/> .
 
   </markus/friends/> a schema:ItemList ;
     schema:itemListElement </alice> ;
     ...
     schema:itemListElement </zorro> .
 
  This works, but has two problems:
   1) it breaks the /markus --[knows]-- /alice relationship
   2) it says that /markus --[knows]-- /markus/friends
 
  While 1) can easily be fixed, 2) is much trickier--especially if we
 consider
  cases that don't use schema.org with its weak semantics but a
 vocabulary
  that uses rdfs:range, such as FOAF. In that case, the statement
 
   </markus> foaf:knows </markus/friends/> .
 
  and the fact that
 
   foaf:knows rdfs:range foaf:Person .
 
  would yield the wrong inference that </markus/friends/> is a
 foaf:Person.
 
  How do you deal with such cases?
 
  How is schema.org intended to be used in cases like these? Is the above
 use
  of ItemList sensible or is this something that should better be avoided?
 
 
  Thanks,
  Markus
 
 
  P.S.: I'm aware of how LDP handles this issue, but, while I generally
 like
  the approach it takes, I don't like that fact that it imposes a specific
  interaction model.
 
 
  [1] http://bit.ly/HydraCG
 
 
 
  --
  Markus Lanthaler
  @markuslanthaler
 
 
 
 
 






Re: Is the same video but in different encodings the owl:sameAs?

2013-12-05 Thread Martynas Jusevičius
Thomas,

then the HTML5 controls don't make sense, in my opinion, as they defeat
the purpose of content negotiation. It's hard for me to take that
specification seriously anymore.

You could have a single video resource URI and choose the
representation format based on Accept headers. You need to have
explicit media type metadata in your example, then it should be enough
to implement this approach:

  <http://videos.example.org/#video> a ma:MediaResource ;
      ma:title "Sample Video" ;
      ma:description "Sample Description" ;
      ma:locator <http://ex.org/video> ;
      dct:hasVersion <http://videos.example.org/#video.mp4> ,
          <http://videos.example.org/#video.ogv> .

  <http://videos.example.org/#video.mp4> ma:format
      <http://mediatypes.appspot.com/video/mp4> .
  <http://videos.example.org/#video.ogv> ma:format
      <http://mediatypes.appspot.com/video/ogg> .


Martynas
graphityhq.com

On Thu, Dec 5, 2013 at 8:31 PM, Thomas Steiner to...@google.com wrote:
 Hi Milorad,

 Unfortunately content negotiation in the HTTP sense is not an option with
 HTML5 video, as the content negotiation in a sense is taken care of by the
 browser. In my initial mail, check the link on the @currentSrc attribute. If
 you want to support all browsers on all platforms and device types, you have
 to specify several sources in different encodings. So you almost always end
 up with several media resources for the same video, hence my original
 question…

 Cheers,
 Tom


 --
 Thomas Steiner, Employee, Google Inc.
 http://blog.tomayac.com, http://twitter.com/tomayac

 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.12 (GNU/Linux)

 iFy0uwAntT0bE3xtRa5AfeCheCkthAtTh3reSabiGbl0ck0fjumBl3DCharaCTersAttH3b0ttom.hTtP5://xKcd.c0m/1181/
 -END PGP SIGNATURE-



Re: DOM for RDF?

2013-12-02 Thread Martynas Jusevičius
Richard,

I think Graphity is close to what you're looking for:
https://github.com/Graphity/graphity-browser
It supports SPARQL Protocol and Graph Store Protocol, content
negotiation, XSLT transformations of RDF output and RDF input user
interface using RDF/POST (http://www.lsrn.org/semweb/rdfpost.html).

We are also working on a multi-tenant solution, which turns Graphity
into application platform, on which Linked Data sites can be
configured declaratively (via user interface) over SPARQL endpoints.

Martynas
graphityhq.com

On Mon, Dec 2, 2013 at 2:21 PM, Richard Light rich...@light.demon.co.uk wrote:

 On 02/12/2013 11:10, Alfredo Serafini wrote:

 Hi Richard

 from my point of view the DOM-like approach already exists, by way of
 SPARQL and LDPath. What are they lacking? Do you feel there should be an
 object-oriented approach, as in the Jena model or Sesame's internal Graph
 representation? If this is the case, an approach similar to the current
 implementation of the Sail interface could be interesting from my point of view.

 Alfredo,

 I think a query language on its own isn't enough.  I would like to see an
 environment in which I can create an in-memory graph, load it with one or
 more graphs and/or query results, create, delete and edit graph nodes and
 triples within it, query it, and serialize all or part of the result to any
 of the popular serialization formats, plus (X)HTML, ideally using a tool as
 powerful as XSLT. :-)

 Looking up LDPath I came across Marmotta [1], which seems rather closer to
 what I have in mind.

 Richard

 [1] http://marmotta.apache.org/


 My 2 cents

 Alfredo Serafini


 2013/12/2 Richard Light rich...@light.demon.co.uk

 Hi,

 I'm sure this has been discussed many times and/or ages ago, but I am
 struck by the absence of a DOM-like W3C framework for RDF. By this, I mean
 "an application programming interface (API) for [RDF graphs], which will be
 a standard programming interface that can be used in a wide variety of
 environments and applications. The [RDF] DOM is designed to be used with any
 programming language." (Quotes taken from [1])

 A quick search turns up a number of PHP-based libraries, and the odd one
 for javascript, Delphi, Python and Ruby, but as far as I can see there is
 little, or no, commonality of approach or functionality amongst these
 offerings.  This means that a programmer (a) has to decide which of these
 widely varying approaches to adopt, (b) only gets whatever documentation
 each chooses to provide and (c) is faced with a complete rewrite, should
 they decide to switch RDF platform.

 Might this situation be a significant factor in the slow take-up of RDF by
 mainstream developers?

 Richard

 [1] http://www.w3.org/TR/REC-DOM-Level-1/introduction.html

 --
 Richard Light



 --
 Richard Light



Re: representing hypermedia controls in RDF

2013-11-22 Thread Martynas Jusevičius
Markus,

in the Linked Data context, what is the difference between
"identifier" and "hyperlink"? Last time I checked, URIs were opaque
and there was no such distinction.

Martynas

On Thu, Nov 21, 2013 at 6:43 PM, Markus Lanthaler
markus.lantha...@gmx.net wrote:
 +public-hydra since there are a couple of things which we should look at
 there as well

 On Thursday, November 21, 2013 3:03 PM, Ruben Verborgh wrote:
  - representing hyperlinks in RDF (in addition to subject/object
URLs)
 
  hydra:Resource along with hydra:Link covers that:
  http://bit.ly/1b9IK32

 And it does it the way I like: resource-oriented!

 Yet, the semantic gap I need to bridge is on the level of predicates.
 None of the Hydra properties [1] have hydra:Link as range or domain.
 So how to connect a link to a resource?

 Right, nothing has hydra:Link as range because it's a specialization of
 rdf:Property. The range would need to be hydra:Resource.

 Typically you would define your link relations, aka predicates, to express
 the semantic relationship between the two resources. If you want such a
 predicate to become a hypermedia affordance, you type it as hydra:Link
 instead of rdf:Property:

   vocab:homepage a hydra:Link .

   <a> vocab:homepage <b> .

 That way even clients that know nothing about vocab:homepage will be able to
 find out that it is a link relation and that <b> is expected to be
 dereferenceable. The spec doesn't include any entailments yet (to not scare
 too many people away at this stage) but they would be

   xxx a hydra:Link .
   aaa xxx bbb .

   ==> bbb a hydra:Resource .


 More or less like:
 <a href="http://en.wikipedia.org/wiki/Daft_Punk">http://dbpedia.org/resource/Daft_Punk</a>
 On the human Web, we do this all the time:
 <a href="http://en.wikipedia.org/wiki/Daft_Punk">Daft Punk</a>
 The difference of course with the Semantic Web is that the identifiers
 need to be URIs, not labels.

 Right. Hydra provides a framework to define such things. It (currently)
 doesn't define a predicate for untyped hyperlinks because I believe they
 generally don't make much sense in an m2m context. Nevertheless, it may make
 sense to define something for very simple use cases (crawlers) or to be able
 to express, e.g., links extracted from an HTML page.

 I raised ISSUE-15 [1] to keep track of this.


 I guess a seeAlso would do (but then again, seeAlso probably applies
 to anything):
 dbpedia:Daft_Punk rdfs:seeAlso wikipedia:Daft_Punk.

 However, I really want something stronger here.
 But perhaps usual hyperlinks are not interesting enough,
 as they can actually be represented as predicates (= typed links):

 dbpedia:Daft_Punk :hasArticle wikipedia:Daft_Punk.

 Right, but nothing here tells you whether wikipedia:Daft_Punk is just an
 identifier or a hyperlink. If you type wikipedia:Daft_Punk as hydra:Resource
 or :hasArticle as hydra:Link it becomes explicit.



  - representing URI templates [2]
  It's covered by hydra:IriTemplate: http://bit.ly/1e2z2NW

 Now this case is much more interesting than simple links :-)

 Same semantic problem for me though:
 what predicates do I use to connect them to my resource?
 For instance:

 </users> :membersHaveTemplate :UsersTemplate.
 :UsersTemplate a hydra:IriTemplate;
  hydra:template "/users/{userid}".

 Well, same thing in principle

   :membersHaveTemplate a hydra:TemplatedLink .

 Of course Hydra can't know what :membersHaveTemplate really *means*, i.e.,
 in which relationship the two resources stand. It just allows you to tell a
 client that its values are link templates and that you can use them to
 construct URLs which lead you to things that are of type hydra:Resource


 So what I actually need is the equivalent of hydra:members,
 but then with a template as range.
 Should we discuss take this to the Hydra list? I'd be interested!

 Definitely.. I already CCed public-hydra. We should move the discussion
 there as it is quite specific to hydra.


 (Also, have you considered hydra:template's range as something more
 specific than xsd:string?)

 No, I'm not aware of any type representing RFC6570 IRI templates and didn't
 see the need to complicate Hydra by defining one :-)


  - representing forms (in the HTML sense)
  In Hydra this is done by a combination of hydra:Operation,
 hydra:expects and
  hydra:supportedProperties, see http://bit.ly/17t9ecB

 I like example 10 in that regard, but I'm stuck at predicates again:
 how to connect the Link to the resource it applies to?

 I'm not sure I understand your question. Example 10 defines the property
 http://api.example.com/doc/#comments. You can then simply use it in your data:

   @prefix api: <http://api.example.com/doc/#> .

   </> api:comments </comments/> .

 The client can look up what api:comments is. It will find out that it is a
 hydra:Link, so it represents a hyperlink. Furthermore, it will see that it
 can create new comments by POSTing a api:Comment to /comments/.

 Have you seen the issue tracker demo on my homepage?

 

Re: representing hypermedia controls in RDF

2013-11-22 Thread Martynas Jusevičius
Hey Ruben,

regarding RFC6570, I'm not planning to adopt it, since the
specification is better suited for building URIs, not matching them
(section 1.4, Limitations): "In general, regular expression languages are
better suited for variable matching."
I'm using JAX-RS syntax since it can be used for matching and
building, and has utility classes that help you do that.

I don't like how Linked Data API uses endpoints either. I reuse
api:ListEndpoint in some of Graphity ontologies, but give it no
special meaning. If I understand right what you want, I am using SIOC
to define the parent/child relationship between resources (which is
usually represented by a slash in the URI):
http://rdfs.org/sioc/spec/#term_Container

In your case, it could be:

</topics> a sioc:Container .
</topics/Global_Warming> sioc:has_space </topics> .

Cheers,

Martynas
graphityhq.com

On Wed, Nov 20, 2013 at 4:52 PM, Ruben Verborgh ruben.verbo...@ugent.be wrote:
 Hi Martynas,

 - URI templates: Linked Data API vocabulary
 https://code.google.com/p/linked-data-api/wiki/API_Vocabulary

 Cool, I do like that. Have you thought about extending to RFC6570?
 Do you know about usage of this vocabulary?

 The one thing that I like less is the notion of endpoints.
 While this is perfect for SPARQL, which is indeed an endpoint
 or “data-handling process” that expects a “block of data” [1],
 it does not work well in resource-oriented environments.

 I’m looking for predicates that work with groups of resources, such as:
 </topics/Global_Warming> :belongsTo </topics> .
 </topics> a :ResourceList;
   api:itemTemplate "/topics/{topicID}" .
 That is, I don't consider there to be a topics endpoint;
 instead, there is a topics resource which lists several topics,
 and individual topics can be accessed by ID.
 The reason I would still need the template is because /topics/ is not 
 exhaustive,
 and new elements can be created by following the template.
 This would be equivalent to a HTML GET form.

 - HTML forms: RDF/POST encoding http://www.lsrn.org/semweb/rdfpost.html

 Interesting, thanks!

 Best,

 Ruben

 [1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.5



Re: representing hypermedia controls in RDF

2013-11-22 Thread Martynas Jusevičius
Mike,

so if an RDF representation includes a triple such as

  <http://example.com/x> a foaf:Image .

is that an affordance? Because that gives me enough information to
render it as <img src="http://example.com/x"/>.

By the way, nothing stops me from having <a href="isbn:343-224122">
either. It will probably be clickable, but won't work.

Martynas

On Fri, Nov 22, 2013 at 4:42 PM, mike amundsen mam...@yahoo.com wrote:
 snip
 A browser for example doesn't render the string
 http://example.com/343-224122 as a clickable link unless you mark it up as
 one using the <a> tag.
 /snip

 Yep, the A element is the thing that _affords_ clicking. It is the A element
 which is the affordance.

 Affordances don't just supply addresses, they supply information about what
 you can _do_ with that address (navigate, transclude, send arguments, write
 data, remove data, etc.). The appearance of a URL alone provides very little
 affordance.

 For example:
 - http://example.com/x
 - http://example.com/y
 one of the two URLs points to a blog page to which the user can navigate,
 the other points to a logo which should be displayed inline. which is which?

 Now this:
 - <a href="...">blog</a>
 - <img href="..." />
 one of the two URLs points to a blog page, the other points to a logo. which
 is which?

 Note it is not the URL that provides the information (which is for
 navigation, which is for transclusion), but the element in which the URL
 appears. The element is the affordance. These are HTML affordances. There
 are a couple more hypermedia affordances in HTML. Other message models
 (media types) contain their own affordances.

 It is the appearance of affordances within the response representation that
 is a key characteristic of hypermedia messages.



 mamund
 +1.859.757.1449
 skype: mca.amundsen
 http://amundsen.com/blog/
 http://twitter.com/mamund
 https://github.com/mamund
 http://www.linkedin.com/in/mamund


 On Fri, Nov 22, 2013 at 10:13 AM, Markus Lanthaler
 markus.lantha...@gmx.net wrote:

 Hi Martynas,

 On Friday, November 22, 2013 3:12 PM, Martynas Jusevičius wrote:
  Markus,
 
  in the Linked Data context, what is the difference between
  identifier and hyperlink? Last time I checked, URIs were opaque
  and there was no such distinction.

 These things quickly turn into philosophical discussions but simply
 speaking
 the difference lies in the expectations of a client. In XML for example,
 namespaces are just identifiers. There's no expectation that you can go
 and
 dereference that namespace identifier (even though in most cases they use
 HTTP URIs). The same is true about RDF. All URIs are just identifiers.
 From
 an RDF point of view, there's no difference between isbn:343-224122 and
 http://example.com/343-224122. As you say, they are opaque.

 But if you build applications, it is important to distinguish between
 identifiers and hyperlinks. A browser for example doesn't render the string
 http://example.com/343-224122 as a clickable link unless you mark it up as
 one using the <a> tag.

 Linked Data advocates that all URIs are dereferenceable. But that's
 communicated out of band. Apart from JSON-LD, which states that URIs
 SHOULD
 be dereferenceable, no other RDF media type makes such a statement. Thus
 you
 need to use constructs such as hydra:Link and hydra:Resource to make the
 distinction explicit.

 Hope this helps. If not, let me know.


 --
 Markus Lanthaler
 @markuslanthaler






Re: representing hypermedia controls in RDF

2013-11-20 Thread Martynas Jusevičius
Ruben,

2 things I'm aware of and have implemented:

- URI templates: Linked Data API vocabulary
https://code.google.com/p/linked-data-api/wiki/API_Vocabulary
Graphity reuses api:uriTemplate and api:itemTemplate to match request
URIs against ontology classes. The actual template syntax is reused
from JAX-RS:
http://docs.oracle.com/cd/E19798-01/821-1841/ginpw/

- HTML forms: RDF/POST encoding http://www.lsrn.org/semweb/rdfpost.html
Graphity includes a Jena-based RDF/POST parser:
https://github.com/Graphity/graphity-browser/blob/master/src/main/java/org/graphity/client/reader/RDFPostReader.java

Hope that helps.

Martynas
graphityhq.com

On Wed, Nov 20, 2013 at 12:23 PM, Ruben Verborgh
ruben.verbo...@ugent.be wrote:
 Dear all,

 Do we have other approaches besides RDF Forms [1] to represent hypermedia 
 controls in RDF?

 Basically, I’m looking for any of the following:
 - representing hyperlinks in RDF (in addition to subject/object URLs)
 - representing URI templates [2]
 - representing forms (in the HTML sense)

 I’m aware of CoIN, which describes URI construction [3]. Is it used?

 Pointers to vocabularies or examples would be very much appreciated!

 Thanks,

 Ruben

 [1] http://www.markbaker.ca/2003/05/RDF-Forms/
 [2] http://tools.ietf.org/html/rfc6570
 [3] http://court.googlecode.com/hg/resources/docs/coin/spec.html



Re: Which datatype to use for time intervals

2013-11-12 Thread Martynas Jusevičius
Lars,

I'm using the Time ontology for this purpose: http://www.w3.org/TR/owl-time/
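
For the record, a minimal sketch of Lars's example interval in that
ontology (prefixes omitted, resource name illustrative):

  ex:interval a time:Interval ;
      time:hasBeginning [ a time:Instant ;
          time:inXSDDateTime "2013-11-13T00:00:00Z"^^xsd:dateTime ] ;
      time:hasEnd [ a time:Instant ;
          time:inXSDDateTime "2013-11-14T00:00:00Z"^^xsd:dateTime ] .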

Martynas
graphityhq.com

On Tue, Nov 12, 2013 at 3:47 PM, Svensson, Lars l.svens...@dnb.de wrote:
 Is there a standard (recommended) datatype to use when I want to specify a 
 time interval (e. g. 2013-11-13--2013-11-14)? The XML Schema types [1] don't 
 include a time interval format (unless you want to encode it as starting time 
 + duration). There seems to be a way to encode it using ISO 8601, the 
 Wikipedia says that intervals can be expressed as 'Start and end, such as 
 2007-03-01T13:00:00Z/2008-05-11T15:30:00Z' [2], but I haven't found a 
 formally defined datatype to use with RDF data.

 [1] www.w3.org/TR/xmlschema-2/
 [2] http://en.wikipedia.org/wiki/ISO_8601#Time_intervals

 Thanks for any help,

 Lars



Re: Beginning work on an official Web Access Control spec.

2013-10-17 Thread Martynas Jusevičius
Andrei,

I would be interested. I have worked on ACL a lot recently, with a
goal to produce a transparent JAX-RS authorization filter for our
Graphity platform: http://graphity.org.

I have successfully implemented the filter using W3C ACL ontology and
plain SPARQL 1.1, but the code is unfortunately closed-source so far.
A single SPARQL query checks access for a specific foaf:Agent instance
(or foaf:Agent class in case of public access) and uses federation if
necessary. The main issues I've encountered were mostly related to
distributing ACL data across repositories and/or named graphs and
attaching them to user accounts.
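
Roughly, the check is a single ASK along these lines -- a sketch, with
placeholder URIs and prefixes omitted, not the literal production query:

  ASK {
    ?authorization a acl:Authorization ;
        acl:accessTo <requested-resource> ;
        acl:mode acl:Read .
    { ?authorization acl:agent <agent-webid> }
    UNION
    { ?authorization acl:agentClass foaf:Agent }
  }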

Martynas
graphityhq.com

On Thu, Oct 17, 2013 at 3:05 PM, Andrei Sambra andrei.sam...@gmail.com wrote:
 Dear all,

 For those of you who know me, please skip this paragraph. For the others, I
 would first like to introduce myself. My name is Andrei Sambra and for the
 past three years I have been involved in different W3C groups, such as
 WebID, LDP and RWW (co-chair). As an advocate of Semantic Web technologies,
 especially those taking user privacy into consideration, I am currently
 working on two projects, MyProfile [1] (WebID provider / social network) and
 RWW.IO [2], the later including support for WebID, LDP and WAC [3]. RWW.IO
 is a Read/Write Web-based personal data store.

 Over the past few years, we have noticed that Linked Data is no longer a
 technology limited to the public space, finding its way into consumer
 applications. As a consequence, it becomes increasingly important to be able
 to protect access to private/sensitive resources. To this regard, the Web
 Access Control (WAC) ontology [3] has been put together by Tim Berners-Lee,
 offering the basic means to set up ACLs. Due to its nature (i.e. an
 ontology) however, it does not provide the formalism necessary to implement
 it in order to achieve interoperability, nor does it provide an organized
 space where it can be discussed and improved.

 The reason behind writing the email is that I would like to know how many
 people are interested in participating in the standardization process of a
 Web Access Control spec.

 The Read Write Web community group has so far been the host of inquiries
 regarding the WAC ontology. However, being a community group, it does not
 have access to W3C's teleconference system, nor to the issue tracking
 system. Depending on your interest in a WAC spec, and the preliminary
 discussions we might have, we may very well have to create a dedicated
 working group. For now however, I suggest we use the public RWW list
 (public-...@w3.org) in order to coordinate the efforts on this subject.

 Please let me know how you stand on this subject and perhaps suggest a way
 to count who is interested in participating (doodle, something else maybe?).

 Best wishes,
 Andrei

 [1] https://my-profile.eu/
 [2] https://rww.io/
 [3] http://www.w3.org/wiki/WebAccessControl



Re: WebID Frustration

2013-08-06 Thread Martynas Jusevičius
Was following the thread, decided to jump in :) So how do I create a
certificate? I have a FOAF profile and want to add it there.
As Hugh pointed out,
http://webid.myxwiki.org/xwiki/bin/view/WebId/CreateCert doesn't work.
Is it still maintained?

Martynas
graphityhq.com

On Tue, Aug 6, 2013 at 11:17 PM, Hugh Glaser h...@ecs.soton.ac.uk wrote:
 Thank you all very much - all really helpful.
 I think it turned out I had succeeded, but the WebID login on the site wasn't 
 working.
 Or something. :-)

 So then I thought I would try again, by clearing out things - I had at least 
 two certs by now. :-)
 Typical errors I seemed to hit from pages were things like:
 http://data.turnguard.com/java/1.6.0_29/com/turnguard/webid/exceptions/ModulusMismatchException
  is thrown by http://www.glasers.org/hugh.rdf#me 
 (https://webid.turnguard.com/WebIDTestServer/onlywithcert)
 Failed to execute the [velocity] macro (http://www.w3.org/wiki/WebID - 
 http://webid.myxwiki.org/xwiki/bin/view/WebId/CreateCert)

 Because I am on a mac, I knew that the right way to do things is Keychain.
 So I followed Kingsley's excellent instructions, which gave me the crucial 
 parameters.
 I needed to find the Public Key, which I eventually found in the info, by 
  actually clicking on it, even though it doesn't look like a link.

 *Conclusion*
 So, as a mac user, the pages I found most useful were
 https://plus.google.com/112399767740508618350/posts/62pFBxAm7Ev
 to generate the cert
 and
 https://webid.turnguard.com/WebIDTestServer/debug
 to check I had it right.
 Also, http://www.w3.org/wiki/WebID told me what the RDF should look like.

  It still seems to me that this is not a technology that is very usable - it 
 really shouldn't have taken so many messages to help me!
 I was thinking of setting up for my users to use WebID on a little social 
 networking site I have, but I think I will give it a miss for the moment!

 And yes, now I am logged in at RWW.IO!

 So thank you all for the time and very detailed messages - they all 
 contributed to my success!
 Hugh

 On 6 Aug 2013, at 16:54, Norman Gray nor...@astro.gla.ac.uk
  wrote:


 Hugh and Kingsley, hello.

 On 2013 Aug 6, at 14:27, Kingsley Idehen wrote:

 In reality though, for your particular user profile I would encourage you 
  to simply manually insert the relations required by the WebID+TLS 
 protocol into your existing profile, after you've generated an X.509 
 certificate using in-built OS utilities [1].

 I've just done this, prompted by your message, Hugh, and it was oddly easy, 
 _with_ Kingsley's hints.  The following fills in a couple of elided steps.

 1. Create a Profile Document -- this gets you a Personal HTTP URI (or 
 WebID) that denotes entity You

 I already have a FOAF file http://nxg.me.uk/norman/.  Tick!

 2. Generate an X.509 Certificate -- as part of the process, place your 
 WebID in the SAN (Subject Alternative Name) slot

 I did that, using Kingsley's walkthrough of the OS X Certificate Assistant 
 (within Keychain Access) at 
 https://plus.google.com/112399767740508618350/posts/62pFBxAm7Ev.

 This took two goes, because I decided that I should create a certificate 
 with CN Norman Gray (WebID), adding the (WebID) to avoid confusing 
 myself.

 3. Add a relation to your Profile Document that associates your WebID with 
  the Public Key (exponent and modulus) from the Cert. generated in step #2.

 If you use OS X Keychain Access, then 'Get Info' on the certificate will 
 show the exponent and modulus.  The wrinkle here is that the Get Info 
 display names the modulus as 'Public Key' (which I suppose one could quibble 
 with).

 If you want to do it the hard way (as I had to do, to work out that that 
 _was_ what they meant by 'Public Key'), then export the certificate as a 
 .cer file, and

  % openssl x509 -inform DER -modulus -noout -in ~/Desktop/norman-webid.cer

 I added this to my FOAF file with:

cert:key [
cert:exponent 65537;
 cert:modulus 
  "B1CF550703951EE7DFAC2E32DF1FDF8986F17B1167FFB2780109DD7D77C109F37BB558E67F031C41BD224B98CFA04F6265F02FB88C9F392CAC6C02A712B0091C63267ACDD155CCE4631EA0B177023F9C3DD898A7EEA14F72CACC4A5F64677566F36C3D98BF9492691711E1BA181667D159AEBD8B02DDBCAAD8E80451F41F9D389185533D9A6FB5316039A21494EDBE4A71DA212F91C57D66B8307E395605E02017BF3398132383928F0F36D1BC6EE9F68F03BE9C38A52180937F868869DF0FBEF1FEB8A5D799C67CCEE70C4DA7458CB9B9B73BE2614B922E2747CA6FEBB1519328C2CCEA8355873AC6790624C3A05922797319F55E146F76EEE2230FFBD46147"^^xsd:hexBinary;
];

 I got the details of that from http://www.w3.org/wiki/WebID.

 Then I put it on the web.

 4. Verify your WebID

 I went to http://webid.turnguard.com/WebIDTestServer/ and clicked on 
 'OnlyWithCert'.  I was asked to trust the server (because its certificate 
 wasn't signed by a CA), and to choose which certificate to use, and ... it 
 worked.  That was with both Chrome and Safari.
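
 Behind the scenes, the essence of that check is an ASK against the
 dereferenced profile, comparing the key found there with the one from the
 TLS handshake -- a sketch with placeholder values, not the server's actual
 query:

   ASK {
     <webid-from-SAN> cert:key ?key .
     ?key cert:exponent 65537 ;
          cert:modulus "modulus-from-cert"^^xsd:hexBinary .
   }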

 5. Start authenticating against apps and services that support WebID+TLS 
 based 

Re: Rendering RDF in interactive html forms

2013-06-26 Thread Martynas Jusevičius
RDF/POST Encoding for RDF: http://www.lsrn.org/semweb/rdfpost.html

Graphity includes a Jena-based RDF/POST parser.

Martynas
graphityhq.com

On Wed, Jun 26, 2013 at 1:46 PM, Dominic Oldman do...@oldman.me.uk wrote:

 I am aware of RForms (https://code.google.com/p/rforms/) but are there also
 other RDF template systems that support normal HTML form features?

 Thanks,

 Dominic



Re: RDF's challenge

2013-06-11 Thread Martynas Jusevičius
I disagree completely that RDF is not Web-native. Read-write RDF Linked
Data is the way the Web was supposed to be, in my opinion.

Martynas
On Jun 11, 2013 5:33 PM, Alvaro Graves alv...@graves.cl wrote:

 When talking to web developers, they tell me they find little benefit in
 using RDF. This is due to two main reasons, in my opinion (there may be
 others, for sure):

 - Lack of usable tools: How many good, stable tools for managing data in
 RDF are available out there? How many are for CSV? Even an array of arrays
 is good enough sometimes.
 - Lack of usable data: In the case of Open Government Data, there are tons
 of CSV documents available. Modeling data as RDF requires an extra effort,
 which most people won't take, since they already have the data available.

 If you add the fact that tabular data is in many cases easier to
 understand (or at least we are more used to it) I can understand why many
 developers don't like RDF. The cherry on top is the the fact that URIs are
 not human-friendly (ok, CURIEs makes it easier, I admit it), so the
 Semantic Web does not look very attractive to web developers.

 I do believe however that RDF is a great data model. For example, features
 of SPARQL 1.1 (I'm thinking of property paths here) and the use of
 inference can give you a powerful workbench to work with. I tend to agree
 with Rufus re. the diagnosis (RDF is not web native), but I differ in the
 solution. For me, instead of getting rid of a nice data model such as RDF,
 what we need is to provide usable tools, usable for developers at least. I know
 there are many efforts on this regard, but there are many opportunities we
 haven't considered. We need easier ways to take data and convert it, manage
 it and use it, and the tools for that should be at least as simple as other
 common tools.
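
 As one concrete illustration of the property-path point: a single SPARQL
 1.1 query that follows foaf:knows chains of arbitrary length, something
 that is painful to express over plain tables (the starting URI and data
 are hypothetical):

   SELECT DISTINCT ?reachable WHERE {
     <http://example.org/people#alice> foaf:knows+ ?reachable .
   }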

 I need to bring up David Karger's article (based on his keynote at ESWC) at
 http://groups.csail.mit.edu/haystack/blog/2013/06/10/keynote-at-eswc-part-3-whats-wrong-with-semantic-web-research-and-some-ideas-to-fix-it/.
 I think he expresses with great clarity some of the problems of the SemWeb
 community and RDF in particular.



 Alvaro Graves-Fuenzalida
 Web: http://graves.cl - Twitter: @alvarograves


 On Tue, Jun 11, 2013 at 1:49 PM, Phil Archer ph...@w3.org wrote:

 Thanks for picking this up Kingsley.

 I'd just like to highlight the end of the report [1] where I've described
 what we're proposing to our members on this, namely a new WG that will look
 specifically at CSV and the metadata needed to easily transform it into RDF
 or any other format. Jeni's work and others are inputs to that group. All
 being well it'll be chartered in the early autumn but we have hoops to go
 through first.

 I gave a talk on this at SemTech last week and made a slidecast version
 [2]. It sets out a bunch of things we're doing or proposing to do at W3C in
 the imminent future.

 Cheers

 Phil.

  [1] http://www.w3.org/2013/04/odw/report#next
  [2] http://philarcher.org/diary/#semtech

 On 11/06/2013 14:00, Kingsley Idehen wrote:

 All,

  "RDF isn't natural --- and therefore is barely used --- by the average
  Web developer or data wrangler. CSV, by contrast, is. And you are going
  to need to win the hearts and minds of those folks for whatever approach
  is proposed." -- Rufus Pollock (OKFN) [1][2].


  RDF is actually natural.  Unfortunately, narratives around it have now
  created the illusion that it's unnatural. We observe our world using
  patterns much closer to RDF (entity relationship graphs) than CSV (when
  used as a mechanism for tabular representation of entity relationships).

  SPARQL enables one to expose RDF based data in a myriad of ways while
  also enabling easy to comprehend Linked Data utility (i.e., HTTP URI
  based super keys that specifically resolve to documents that describe a
  URI's referent).

 Following the Open Data meeting I stumbled across a CSV browser [3]
 developed by @JeniIT . I took a quick look and realized it could provide
  the foundation for addressing some of the confusion around Open Data, RDF,
 and Linked Data. Thus, I had one of our interns simply tweak the CSV
 browser such that on receipt of SPARQL-FED protocol URLs that resolve to
 CSV formatted data you end up with a Linked Data browser.

 The simple example above basically showcases how Linked Data aids data
 discovery using the Web's basic follow-your-nose exploration pattern by
 leveraging what CSV has to offer i.e., using a format that many (users
 and developers) are already familiar with as a bridge builder en route
 to showcasing the virtues of RDF, SPARQL, and Linked Data.

 Links:

  [1] http://www.w3.org/2013/04/odw/report -- Open Data Report.
  [2] http://blog.okfn.org/2013/04/24/frictionless-data-making-

Re: It is bad practice to consume Linked Data and not publish Linked Data

2013-04-04 Thread Martynas Jusevičius
Hey Harry,

HeltNormalt (http://heltnormalt.dk) is a Danish entertainment
content-publishing site built entirely on Linked Data principles, using
the Dydra triplestore (http://dydra.com) and the Graphity Linked Data
platform (http://graphity.org).

Content negotiation was not implemented for caching reasons (there
is quite a lot of traffic), but RDF is accessible using a query parameter:
http://heltnormalt.dk/striben/2011/03/09?view=rdf

We presented a paper about its architecture at the W3C LEDP workshop:
http://www.w3.org/2011/09/LinkedData/ledp2011_submission_1.pdf

Martynas
graphity.org


On Thu, Apr 4, 2013 at 2:25 PM, Harry Halpin hhal...@ibiblio.org wrote:



 On Thu, Apr 4, 2013 at 3:25 AM, Kingsley Idehen kide...@openlinksw.comwrote:

  On 4/3/13 6:06 PM, Harry Halpin wrote:

 Is there a list of WebApps that actually consume Linked Data?

 Does anyone know of any sites that actually use RDF as a backend?

 Crossing fingers.


 There's been a whole thread in the last month to which most responses
 have included URLs to Linked Data consumer apps [1] . I would start there
 :-)

 Link: http://lists.w3.org/Archives/Public/public-lod/2013Mar/0152.html .




 These are all about visualization. I'm not sure if that's a real problem
 with a concrete Web App. Linked Data visualization is only a problem if you
 first believe Linked Data is the solution to your problem. I'm looking for
 apps where Linked Data provides a concrete benefit over, say, just using
 SQL or attribute-value pairs on the backend.

 It would be great if a list of these Linked Data (AJAR I remember TimBL
 saying) were kept on a wiki page somewhere!

  Kingsley


 On Wed, Apr 3, 2013 at 11:47 PM, Kingsley Idehen 
 kide...@openlinksw.comwrote:

  On 4/3/13 5:32 PM, Hugh Glaser wrote:

 Because it is. :-)

 Along with Kingsley's Crime #2 Against Linked Data, I think this is
 Crime #1 Against Linked Data.
 (It is the other side of what Kingsley would possibly call the value
 chain.)

 Someone spent a lot of effort creating and publishing the data you are
 consuming.
 And went to the effort of making it easy for you by publishing it as
 Linked Data.
 OK, if you are just doing a bit of republishing, maybe there isn't much
 point, but if you have done anything of interest, and especially if you
 have added any knowledge, let other people consume the fruits of your
 labours as easily as the people you got the stuff from made it for you.
 You clearly know about Linked Data, because you are consuming it, so it
 shouldn't be that hard for you (OK, maybe we need to make it easier!).

 And never think that the stuff you were publishing isn't interesting
 for someone else to consume!
 If everyone thought like that we wouldn't have any Linked Data at all.

 Crime #3 Against Linked Data?
 Using a string to identify a resource, because nobody would want to
 make a statement about that.

 Cheers
 Hugh



  Amen!!

 --

 Regards,

 Kingsley Idehen
 Founder  CEO
 OpenLink Software
 Company Web: http://www.openlinksw.com
 Personal Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca handle: @kidehen
 Google+ Profile: https://plus.google.com/112399767740508618350/about
 LinkedIn Profile: http://www.linkedin.com/in/kidehen








 --

 Regards,

 Kingsley Idehen  
 Founder  CEO
 OpenLink Software
 Company Web: http://www.openlinksw.com
 Personal Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca handle: @kidehen
 Google+ Profile: https://plus.google.com/112399767740508618350/about
 LinkedIn Profile: http://www.linkedin.com/in/kidehen







Re: It is bad practice to consume Linked Data and not publish Linked Data

2013-04-04 Thread Martynas Jusevičius
NXP Semiconductors are building their product information hub on Linked
Data:
http://blog.nxp.com/is-linked-data-the-future-of-data-integration-in-the-enterprise/

BBC has quite a few blogposts about semantic publishing and their use of
Linked Data, for example:
http://www.bbc.co.uk/blogs/internet/posts/Linked-Data-Connecting-together-the-BBCs-Online-Content

Martynas
graphity.org



On Thu, Apr 4, 2013 at 3:04 PM, Harry Halpin hhal...@ibiblio.org wrote:



 On Thu, Apr 4, 2013 at 1:40 PM, Martynas Jusevičius marty...@graphity.org
  wrote:

 Hey Harry,

 HeltNormalt (http://heltnormalt.dk) is a danish entertainment
 content-publishing site built entirely on Linked Data principles, using
 Dydra triplestore (http://dydra.com) and Graphity Linked Data platform (
 http://graphity.org).

 Content negotiation was not implemented because of caching reasons (there
 is quite a high traffic), but RDF is accessible using a query parameter:
 http://heltnormalt.dk/striben/2011/03/09?view=rdf

 We presented a paper about its architecture at the W3C LEDP workshop:
 http://www.w3.org/2011/09/LinkedData/ledp2011_submission_1.pdf


 That's exactly the type of example I'm looking for. Any others?


 Martynas
 graphity.org


 On Thu, Apr 4, 2013 at 2:25 PM, Harry Halpin hhal...@ibiblio.org wrote:



 On Thu, Apr 4, 2013 at 3:25 AM, Kingsley Idehen 
 kide...@openlinksw.comwrote:

  On 4/3/13 6:06 PM, Harry Halpin wrote:

 Is there a list of WebApps that actually consume Linked Data?

 Does anyone know of any sites that actually use RDF as a backend?

 Crossing fingers.


 There's been a whole thread in the last month to which most responses
 have included URLs to Linked Data consumer apps [1] . I would start there
 :-)

 Link: http://lists.w3.org/Archives/Public/public-lod/2013Mar/0152.html.




 These are all about visualization. I'm not sure if that's a real problem
 with a concrete Web App. Linked Data visualization is only a problem if you
 first believe Linked Data is the solution to your problem. I'm looking for
 apps where Linked Data provides a concrete benefit over, say, just using
 SQL or attribute-value pairs on the backend.

 It would be great if a list of these Linked Data (AJAR I remember TimBL
 saying) were kept on a wiki page somewhere!

  Kingsley


 On Wed, Apr 3, 2013 at 11:47 PM, Kingsley Idehen 
 kide...@openlinksw.com wrote:

  On 4/3/13 5:32 PM, Hugh Glaser wrote:

 Because it is. :-)

 Along with Kingsley's Crime #2 Against Linked Data, I think this is
 Crime #1 Against Linked Data.
 (It is the other side of what Kingsley would possibly call the value
 chain.)

 Someone spent a lot of effort creating and publishing the data you
 are consuming.
 And went to the effort of making it easy for you by publishing it as
 Linked Data.
 OK, if you are just doing a bit of republishing, maybe there isn't
 much point, but if you have done anything of interest, and especially if
 you have added any knowledge, let other people consume the fruits of your
 labours as easily as the people you got the stuff from made it for you.
 You clearly know about Linked Data, because you are consuming it, so
 it shouldn't be that hard for you (OK, maybe we need to make it easier!).

 And never think that the stuff you were publishing isn't interesting
 for someone else to consume!
 If everyone thought like that we wouldn't have any Linked Data at all.

 Crime #3 Against Linked Data?
 Using a string to identify a resource, because nobody would want to
 make a statement about that.

 Cheers
 Hugh



  Amen!!

 --

 Regards,

 Kingsley Idehen
 Founder & CEO
 OpenLink Software
 Company Web: http://www.openlinksw.com
 Personal Weblog: http://www.openlinksw.com/blog/~kidehen
 Twitter/Identi.ca handle: @kidehen
 Google+ Profile: https://plus.google.com/112399767740508618350/about
 LinkedIn Profile: http://www.linkedin.com/in/kidehen









Re: uri for uri

2013-04-01 Thread Martynas Jusevičius
Shouldn't the path component of the URIs be percent-encoded? That is,

  http://uri4uri.net/uri/http%3A%2F%2Fdbpedia.org%2Fresource%2FCopenhagen

instead of

  http://uri4uri.net/uri/http://dbpedia.org/resource/Copenhagen
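
Incidentally, SPARQL 1.1 has a built-in function for exactly this kind
of escaping -- a quick sketch, assuming a SPARQL 1.1 processor:

  # percent-encode a URI so it can be used as a path segment
  SELECT (ENCODE_FOR_URI("http://dbpedia.org/resource/Copenhagen") AS ?encoded)
  WHERE { }
  # ?encoded = "http%3A%2F%2Fdbpedia.org%2Fresource%2FCopenhagen"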

Martynas
graphity.org

On Mon, Apr 1, 2013 at 11:37 AM, Christopher Gutteridge
c...@ecs.soton.ac.uk wrote:
 Well if I've understood correctly, uri4uri is an extreme version of
 reification. RDF reification gave a way to describe a triple in triples,
 but it still related resources together, not the identifiers for those
 resources. That makes it impossible to make statements about, say, what
 authority assigned the URI and when.



 On 01/04/2013 08:49, Michael Brunnbauer wrote:

 Hello Chris,

 what a great step forward! Now if the RDF WG would adopt this proposal,
 LOD and RDF would really be ready to save the world!

 http://www.brunni.de/extending_the_rdf_triple_model.html

 Regards,

 Michael Brunnbauer

 On Mon, Apr 01, 2013 at 12:13:19AM +0100, Christopher Gutteridge wrote:

 Apparently http://uri4uri.net/ launched today and claims to solve many
 of the problems of Linked Data. It looks promising...

 --
 Christopher Gutteridge -- http://users.ecs.soton.ac.uk/cjg

 University of Southampton Open Data Service:
 http://data.southampton.ac.uk/
 You should read the ECS Web Team blog:
 http://blogs.ecs.soton.ac.uk/webteam/


 --
 Christopher Gutteridge -- http://users.ecs.soton.ac.uk/cjg

 University of Southampton Open Data Service: http://data.southampton.ac.uk/
 You should read the ECS Web Team blog: http://blogs.ecs.soton.ac.uk/webteam/





Re: How can I express containment/composition?

2013-02-21 Thread Martynas Jusevičius
Hey Frans,

Dublin Core Terms has some general properties for this:
dct:hasPart http://dublincore.org/documents/dcmi-terms/#terms-hasPart
dct:isPartOf http://dublincore.org/documents/dcmi-terms/#terms-isPartOf
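
For example, the country/province/municipality case could then look
like this in Turtle (hypothetical ex: URIs):

@prefix dct: <http://purl.org/dc/terms/> .
@prefix ex:  <http://example.org/regions/> .

ex:Netherlands  dct:hasPart  ex:NorthHolland .
ex:NorthHolland dct:isPartOf ex:Netherlands ;
                dct:hasPart  ex:Amsterdam .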

Martynas
graphity.org

On Thu, Feb 21, 2013 at 2:47 PM, Frans Knibbe | Geodan
frans.kni...@geodan.nl wrote:
 Hello,

 I would like to express a composition relationship. Something like:
 A Country consist of Provinces
 A Province consists of Municipalities

 I thought this should be straightforward because this is a common and
 logical kind of relationship, but I could not find a vocabulary which allows
 me to make this kind of statement. Perhaps I am bad at searching, or maybe I
 did not use the right words.

 I did find this document:
 http://www.w3.org/2001/sw/BestPractices/OEP/SimplePartWhole/ (Simple
 part-whole relations in OWL Ontologies). It explains that OWL has no direct
 support for this kind of relationship and it goes on to give examples on how
 one can create ontologies that do support the relationship in one way or the
 other.

 Is there a ready to use ontology/vocabulary out there that can help me
 express containment/composition?

 Thanks in advance,
 Frans





Re: annotations and RDF

2013-02-07 Thread Martynas Jusevičius
Matteo,

if you want annotations interleaved with the text, maybe you could use
RDFa? Here's one of the first Google hits for "RDFa annotations":
http://www.aclweb.org/anthology-new/R/R11/R11-2008.pdf
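
A rough sketch of what an inline annotation could look like in RDFa 1.1
(the ex: vocabulary and URIs are hypothetical):

<p prefix="ex: http://example.org/annotation#"
   about="http://example.org/lotr/book1/line1">
  Three Rings for the
  <span property="ex:mentionsCharacter"
        resource="http://example.org/lotr/ElvenKings">Elven-kings</span>
  under the sky
</p>

This yields the triple "line1 ex:mentionsCharacter ElvenKings" without
duplicating any of the text.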

Martynas
graphity.org

On Thu, Feb 7, 2013 at 3:43 PM, Matteo Casu mattec...@gmail.com wrote:
 Thank you Robert!

 I've just seen what I think is the new draft (February 5th). I will go 
 through it! In the meantime, I'm wondering what you think on the problem of 
 keeping all the annotations of a text in RDF vs. keeping them in a separate 
 store and binding them to entities in the RDF.

 The use case I have in mind is: imagine a book, say The Lord of the Rings. 
 Assume we want to annotate domain information in RDF (characters, actions, 
 etc..) as well as linguistic (or librarian)-oriented annotations: 
 paragraphs, lines, pages (in order to make citations..), down to lemmas and 
 so on..

 We could follow the FRBR model and keep in an RDF graph the domain 
 information AND some librarian information. But what about the annotations on 
 text as -- say -- links between a character and the lines on which they 
 appear? Should these be RDF statements? What about the problem of text 
 duplications in annotations which are not independent (e.g. lemmas and 
 sentences)?
 Do you (as a community) have a definite idea on this issue, or is it perhaps 
 something which is still under observation?




 On 4 Feb 2013, at 21:37, Robert Sanderson azarot...@gmail.com 
 wrote:

 Hi Matteo,

 The Annotation Ontology has merged with Open Annotation Collaboration
 in the W3C community group:
  http://www.w3.org/community/openannotation/

 And Paolo is co-chair along with myself.

 We're *just* about to release the next version of the Community Group
 draft, so your interest comes at a great time.
 The NIF folk are also part of the Community Group, and we of course
 would encourage your participation as well!

 Many thanks,

 Rob Sanderson
 (Open Annotation Community Group co-chair)


 On Mon, Feb 4, 2013 at 2:55 AM, Matteo Casu mattec...@gmail.com wrote:
 Hi everybody,

 [my apologies for cross posting -- possibly of interest for both 
 communities]

 could anybody point me to the major pros and cons of using the 
 Annotation Ontology [0] [1] vs. the NLP Interchange Format [2] in the context 
 of annotating (portions of) literary texts? My impression is that when 
 someone is using UIMA, the integration of AO with Clerezza-UIMA could give 
 more comfort wrt NIF.

 [0] http://code.google.com/p/annotation-ontology/
 [1] http://www.annotationframework.org/
 [2] http://nlp2rdf.org/about






Re: Content negotiation for Turtle files

2013-02-06 Thread Martynas Jusevičius
JSON is not a silver bullet. By only providing JSON, you cut off
access for the whole XML toolchain.
My related post on HackerNews: http://news.ycombinator.com/item?id=4417111

Martynas
graphity.org

On Wed, Feb 6, 2013 at 2:23 PM, William Waites w...@styx.org wrote:
 On Wed, 06 Feb 2013 11:45:10 +, Richard Light rich...@light.demon.co.uk 
 said:

  In a web development context, JSON would probably come second
  for me as a practical proposition, in that it ties in nicely
  with widely-supported JavaScript utilities.

 If it were up to me, XML with all the pointy brackets that make my
 eyes bleed would be deprecated everywhere. Most or all modern
 programming languages have good support for JSON, the web browsers do
 natively as well, and it's much easier to work with since it mostly
 maps directly to built-in datatypes.

  To me, Turtle is symptomatic of a world in which people are
  still writing far too many Linked Data examples and resources by
  hand, and want something that is easier to hand-write than
  RDF/XML.  I don't really see how that fits in with the promotion
  of the idea of machine-processible web-based data.

 Kind of agree. Turtle is a relic of trying to make a machine readable
 quasi-prose representation of data, which is suitable for both
 machines and people. But it's not general enough -- you can only use
 it to write RDF, which means you need specialised tools. It's
 saddening because (especially with some of the N3 enhancements) it's
 quite an elegant approach.

 Cheers,
 -w








Re: Querying different SPARQL endpoints

2012-12-19 Thread Martynas Jusevičius
Hey Vishal,

knowledge of structure can be useful but is not necessary. Probably
the most common pitfall is not to check if named graphs were used.
You can try this query that handles both cases (default and named graphs):

PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX owl:  <http://www.w3.org/2002/07/owl#>
PREFIX xsd:  <http://www.w3.org/2001/XMLSchema#>

SELECT DISTINCT *
WHERE
{
  { ?s ?p ?o }
  UNION
  {
    GRAPH ?g
    { ?s ?p ?o }
  }
}
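
To get a first impression of what an unknown dataset contains, a
follow-up sketch that lists the classes in use (assuming the endpoint
supports SPARQL 1.1 aggregates; add a GRAPH branch as above if named
graphs are involved):

SELECT ?class (COUNT(?s) AS ?count)
WHERE { ?s a ?class }
GROUP BY ?class
ORDER BY DESC(?count)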

Martynas
graphity.org

On Wed, Dec 19, 2012 at 8:28 PM, Vishal Sinha vishal.sinha...@yahoo.com wrote:
 There are many public SPARQL endpoints available.
 For example the cultural linked data:
 http://cultura.linkeddata.es/sparql

 How can I know what type of information is available in this dataset?
 Based on what assumptions can I query it?
 Do I need to know any structure beforehand?

 Vishal




Re: Temporal analysis or RDF graph data

2012-10-31 Thread Martynas Jusevičius
Hey Vishal,

if you mean a triple store with built-in provenance metadata, I'm not
aware of any that provide this out-of-the-box.

However it should be possible to implement using SPARQL named graphs
as this paper shows:
http://static.usenix.org/event/tapp11/tech/final_files/Halpin.pdf
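
A minimal sketch of that approach, assuming one named graph per
snapshot date and SPARQL 1.1 support (the graph and node URIs are
hypothetical, and the two queries are run separately):

# snapshot of a node on 20th July 2012
SELECT ?p ?o
WHERE
{
  GRAPH <http://example.org/snapshot/2012-07-20>
  { <http://example.org/node/1> ?p ?o }
}

# statements about the node added between 20th and 30th October 2012
SELECT ?p ?o
WHERE
{
  GRAPH <http://example.org/snapshot/2012-10-30>
  { <http://example.org/node/1> ?p ?o }
  FILTER NOT EXISTS
  {
    GRAPH <http://example.org/snapshot/2012-10-20>
    { <http://example.org/node/1> ?p ?o }
  }
}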

This is not a triplestore but it uses a similar approach: http://www.datomic.com

Martynas
graphity.org

On Mon, Oct 29, 2012 at 9:00 PM, Vishal Sinha vishal.sinha...@yahoo.com wrote:
 Hi,

 I want to model an RDF graph such that I can query by temporal factors later,
 for example:
 - how the graph changed between 20th October 2012 and 30th October 2012. I
 want to see all updates.
 - Snapshot of a particular node on 20th July 2012, 25th July 2012, etc.

 How can I model such a graph, or how can I store such requirements in an
 RDF store providing such
 functionality?

 Thanks,
 Vishal



Re: LD browser rot

2012-09-22 Thread Martynas Jusevičius
Hey Sebastian,

could you please try Graphity Browser:
http://semanticreports.com/?uri=http%3A%2F%2Fdbpedia.org%2Fresource%2FLady_Gaga

I'll send an email to Denny to add it to the list.

Martynas
http://graphity.org

On Sat, Sep 22, 2012 at 12:34 PM, Sebastian Hellmann
hellm...@informatik.uni-leipzig.de wrote:
 Hi all,
 I was looking for simple linked data browsers and started at:
 http://browse.semanticweb.org/?uri=http%3A%2F%2Fdbpedia.org%2Fresource%2FLady_Gaga
 Here is the sad story:

 First, I had to switch to Chrome as browse.semanticweb.org didn't work in my
 Firefox (could be the fault of my Firefox customization)

 URIs in order as given by http://browse.semanticweb.org

 Fail -
 http://dig.csail.mit.edu/2005/ajar/release/tabulator/0.8/tab?uri=http%3A%2F%2Fdbpedia.org%2Fresource%2FLady_Gaga
 Good:
 http://iwb.fluidops.com/resource/?uri=http%3A%2F%2Fdbpedia.org%2Fresource%2FLady_Gaga
 Fail -
 http://visinav.deri.org/detail?focus=http%3A%2F%2Fdbpedia.org%2Fresource%2FLady_Gaga
 Fail -
 http://www5.wiwiss.fu-berlin.de/marbles?lang=en&uri=http://dbpedia.org/resource/Lady_Gaga
 Fail -
 http://dataviewer.zitgist.com/?uri=http%3A%2F%2Fdbpedia.org%2Fresource%2FLady_Gaga
 Good -
 http://demo.openlinksw.com/rdfbrowser2/?uri=http%3A%2F%2Fdbpedia.org%2Fresource%2FLady_Gaga
 Fail -
 http://www4.wiwiss.fu-berlin.de/rdf_browser/?browse_uri=http%3A%2F%2Fdbpedia.org%2Fresource%2FLady_Gaga
 Good -
 http://graphite.ecs.soton.ac.uk/browser/?uri=http://dbpedia.org/resource/Lady_Gaga
 ?? -
 http://139.82.71.26:3001/explorator/index?url=http%3A%2F%2Fdbpedia.org%2Fresource%2FLady_Gaga
 ?? - Triplr - putting non-information resources in Triplr doesn't make sense:
 http://triplr.org/turtle/dbpedia.org/resource/Lady_Gaga

 DBpedia seems to be working fine:
 curl -IL -H "Accept: application/rdf+xml"
 http://dbpedia.org/resource/Lady_Gaga

 I would have added this issue to individual trackers, but I think it is something
 a community should solve together.

 All the best,
 Sebastian


 --
 Dipl. Inf. Sebastian Hellmann
 Department of Computer Science, University of Leipzig
 Events:
 * http://sabre2012.infai.org/mlode (Leipzig, Sept. 23-24-25, 2012)
 * http://wole2012.eurecom.fr (*Deadline: July 31st 2012*)
 Projects: http://nlp2rdf.org , http://dbpedia.org
 Homepage: http://bis.informatik.uni-leipzig.de/SebastianHellmann
 Research Group: http://aksw.org



Re: Linked Data publishing (LAMP) tool

2012-09-11 Thread Martynas Jusevičius
Hey Dimitris,

Graphity Browser has a UI, SPARQL endpoint and supports content
negotiation: https://github.com/Graphity/graphity-browser
Demo instance here: http://semanticreports.com

It does not support data import yet, but it works out-of-the-box
on SPARQL endpoints, so the researcher could import his/her data into
a service like Dydra (http://dydra.com) and then deploy the browser on
top of it.

There is also a LAMP version of Graphity, but it does not include the UI:
https://github.com/Graphity/graphity-core

Martynas
graphity.org

On Sat, Sep 8, 2012 at 5:49 PM, Dimitris Kontokostas
kontokos...@informatik.uni-leipzig.de wrote:
 Hi,

 I was wondering what is the simplest (LAMP) tool for someone to publish
 Linked Data.
 The tool should have a GUI, support uploading of data, content negotiation
 and a SPARQL endpoint.

 The scenario behind this is a researcher who owns some data. He/she manages
 to convert it to RDF (somehow) and now needs to publish it.
 His/her technical background is limited to setting up/administrating a
 WordPress website.
 So, do we have anything similar to WordPress for Linked Data?

 This question will be discussed in the mlode workshop and this mail will be
 used to summarize all the available tools.

 Best,
 Dimitris

 --
 Dimitris Kontokostas
 Department of Computer Science, University of Leipzig
 Research Group: http://aksw.org
 Homepage:http://aksw.org/DimitrisKontokostas



[ANN] Graphity - making rapid development of Linked Data webapps easy

2012-07-24 Thread Martynas Jusevičius
Hey all,

I'm pleased to announce Graphity -- a fully extensible generic Linked
Data platform for building end-user Web applications.

Rationale
==

Graphity started as a JAX-RS [JAX] and Jena-compatible framework, since
there was a lack of RDF and Linked Data tools in PHP. It was created
while developing http://heltnormalt.dk (one of the biggest online
entertainment sites in Scandinavia and another successful Linked Data
business case), later open-sourced and presented at the W3C Linked
Enterprise Data Patterns workshop [LEDP].

Since then the development has moved to Java and focused on
application components for building Linked Data consumer webapps. The
LOD cloud is growing exponentially [LOD], which means publishing is
not an issue anymore as we're entering LD consumption phase.

Design
=

Graphity was designed for compatibility with established standards and
APIs such as Jena, JAX-RS, SPIN [SPIN], and RDF/POST [RP]. It tries
to create as few new conventions as possible.

The code is generic and offers extensibility and flexibility without
the bloat often found in other tools. This is possible because Graphity
does not have any object model above the RDF level and can do a full
read/write data roundtrip in RDF. This eliminates the data model
mismatches (as in object/relational/XML/RDF) and conversions that
cost development time and cause bugs (think ORM).

The current Web 2.0 API many-to-many integration approach can be
replaced by pivotal integration, using RDF as the central data model
and Graphity as the RDF gateway. As the data volumes grow, this has
linear costs instead of exponential ones.

Usage
=

The code is open-source under the GPL license and can be found on GitHub
along with more documentation:
https://github.com/Graphity/graphity-browser
This release bundles the core packages with a JAX-RS webapp built on
them -- a Linked Data browser, which is also a basic implementation of
the Linked Data Platform [LDP]. Download the .jar to include Graphity in
your project, or the .war to run the standalone webapp.

A browser demo can be seen here (note: DBpedia can be slow, and the
server is not production-grade):
http://semanticreports.com/?uri=http%3A%2F%2Fdbpedia.org%2Fresource%2FCopenhagen

3rd party webapps can be built on Graphity Browser by simply extending
JAX-RS resources and overriding the necessary methods, as well as
including the XSLT stylesheets and overriding the templates -- or
using a custom web application and/or XSLT layout altogether.
In this way, Graphity can be used to build triplestore-backed data
management systems and publishing portals, semantic CMS, Linked Data
analytics and visualizations, tools for integration of Web 2.0 APIs as
Linked Data etc.

Users of Apache Clerezza, IKS, Elda, Callimachus, Information
Workbench and similar tools might find Graphity interesting.

The core package is also available in PHP (and includes probably the
first basic implementation of JAX-RS in PHP) under the Apache license:
https://github.com/Graphity/graphity-core

Call for collaboration
==

Feedback is greatly appreciated. Bugs and feature/improvement
suggestions can be filed here:
https://github.com/Graphity/graphity-browser/issues
We can also be found on Twitter:
https://twitter.com/graphityhq

We welcome suggestions and open-source contributions, for the Java
packages as well as XSLT stylesheets, for example new GRDDL converters
or user experience improvements for specific datasources and/or
vocabularies.

References


[LEDP] http://www.w3.org/2011/09/LinkedData/ledp2011_submission_1.pdf
[LOD] http://richard.cyganiak.de/2007/10/lod/
[JAX] http://jcp.org/en/jsr/detail?id=311
[SPIN] http://topbraid.org/spin/api/
[RP] http://www.lsrn.org/semweb/rdfpost.html
[LDP] http://www.w3.org/2012/ldp/charter.html

Martynas Jusevicius
graphity.org
semantic-web.dk



Re: Linked Data Demand Discussion Culture on this List, WAS: Introducing Semgel, a semantic database app for gathering analyzing data from websites

2012-07-22 Thread Martynas Jusevičius
Hey all,

speaking of (business) use cases for Linked Data, there are a number of
them on the W3C site:
http://www.w3.org/2001/sw/sweo/public/UseCases/

However I needed to present a few cases as a minimal slide deck, so
here it is -- maybe it will be helpful to someone:
http://www.slideshare.net/graphity/linked-data-success-stories
(Disclaimer: my project is mentioned in the end)

Martynas
graphity.org

On Sun, Jul 22, 2012 at 6:26 AM, Harish Kumar M. har...@semgel.com wrote:
 Hi,

 Thank you all for your observations on Semgel. I was really delighted to see
 Sebastian taking it upon himself to articulate in some detail how
 Semgel aligns with the Linked Data vision. Much appreciated!

 It's also been great to see some of the interesting thoughts and pointers
 that have been shared in this thread. I would like to offer (albeit with
 the risk of rehashing prior discussions in this group) clarifications and
 observations on a few points.

 - The need for Linked Data-consuming apps to publish Linked Data URIs
 (Kingsley's suggestion that served as a trigger for this thread!)
 - Balancing idealism (i.e. dogma) and pragmatism (i.e. market-driven) in
 realizing the vision of the Semantic Web (amplifying Bergman & Giovanni)
 - The need for robust Linked Data use cases which can logically be shown to
 be superior to other/traditional approaches (amplifying Sebastian)

 ---
 Linked Data-consuming apps should publish Linked Data URIs

 First off, I want to clarify that I considered Kingsley's queries and
 suggestions to be perfectly reasonable and did not perceive them in any way
 to be negative. I just happened to disagree with him about priorities. And
 if the cut and thrust of argument can lead to a discussion like this, we
 don't have much to complain about!

 Getting back to the point, Semgel's involvement with linked data is a
 strategic decision - it's a leap of faith. So, in no way am I trying to
 debate whether there is a market for linked data - after investing a bunch of
 time and effort, I and most of us in this group are well past that point!

 However, we would like our tactical decisions to be market-driven. I saw
 Kingsley's suggestion that linked-data consuming apps too should publish
 Linked Data URIs as something that should be market-driven.

 Somewhere in the thread, Kingsley elegantly articulated the technical
 rationale for doing this:

 "... the application ingests structured data but emits HTML pages (reports)
 where the actual data keys (URIs) for the data are now dislocated from the
 value chain? If you consume Linked Data there's no reason to obscure access
 to those data sources in a solution. There are a number of best practice
 patterns for keeping URIs accessible and discoverable to user agents"

 How could the geek in me not agree with this! However, wearing the business
 hat, I need to silence the geek and recognize that this cannot be a priority
 when we are still trying to firmly establish a basic ecosystem of
 linked-data publishing and consuming apps.

 Kingsley reached out to me privately (very gracious of him!) and indicated
 there is indeed a business case for Semgel to do this. I intend to engage
 with him with an open mind to better understand his point of view.

 --
 Balancing idealism and pragmatism in realizing the vision of the Semantic
 Web.

 Semweb has always had more than its fair share of idealism and dogma
 associated with it. However, at the risk of stating the obvious, we do need
 to balance it with an appropriate amount of pragmatism. We just don't want to
 go down the path of becoming architectural astronauts!
 (http://bit.ly/bFnrDG)

 When Bergman speaks about seeing linked data as "a useful and often
 desirable technique, but not a means", and Giovanni bemoans the fact that
 "features are neglected because they do not fit with the pure original
 visions" and insists that "The community must honestly assess where semantic
 technologies don't fit and on the other hand which features of the semantic
 web stack make some sense and bring value to the scenarios that have
 (bring) economic value", I could not agree more!

 We want to focus on the value we deliver, not on how we deliver it. A user
 of the Semgel app for instance is never made aware of its semweb roots -
 although some of them do wonder why some simple ops are sometimes so very
 slow :)

 Given Semgel's focus on linked-data consumption in general and UI in
 particular, we have primarily drawn our inspiration from the work done by
 the MIT/Simile folks. What makes them stand out for me is their pragmatism.
 Exhibit, Potluck, Parallax and Refine all have pioneered fundamental ideas
 without necessarily embracing the full semweb stack. This is what we would
 like to emulate.

 We also have the brilliant sig.ma from Sindice (which does explicitly expose
 the underlying URIs) and I am very much looking forward to exploring
 Martynas's graphity (discovered through this thread!)

 --
 The need for 

Re: Linked Data Demand Discussion Culture on this List, WAS: Introducing Semgel, a semantic database app for gathering analyzing data from websites

2012-07-20 Thread Martynas Jusevičius
Sebastian, all,

I'm on your side here. But regarding Linked Data, consider the
following points that slow down its adoption:
- data-heavy players such as Facebook and Google might not be
interested in adopting a new open, even if superior, data approach,
since it is in their interest to keep as much control over the data as
possible
- in the corporate world, big vendors like Microsoft and Oracle have
created a lock-in, and big companies and organizations are hesitating
to invest in new long-term solutions
- the long term is where Linked Data really shines, because while the
global data interconnectedness increases, it provides linear
integration costs instead of exponential as in the Web 2.0 API-to-API
approach
- RDF and Linked Data are quietly doing their job at research
institutes and innovative organizations like BBC and are not receiving
the marketing dollars thrown at NoSQL solutions such as MongoDB.
However when it comes to production use, NoSQL is no less problematic
than triplestores (I have some experience in the startup world), while
RDF is the only standardized NoSQL/graph data model, which even has a
query language and quite a few tools.
- RDF and Linked Data are taught at very few schools. Even in computer
science studies, web application development is often stuck at
PHP+MySQL level, or Web 2.0 and RESTful APIs at best.

So I would say Linked Data is like electric vehicles -- most who
understand the technology would find it superior, but there are a lot
of different agendas and interests that do not necessarily result in what
is better for the public. And then there is ignorance as well.

When it comes to Linked Data applications, I'm about to release
open-source code which I hope will make it easier.

Martynas
graphity.org

On Fri, Jul 20, 2012 at 5:48 PM, Sebastian Schaffert
sebastian.schaff...@salzburgresearch.at wrote:
 Kingsley,

 I am trying to respond to your factual arguments inline. But let me first
 point out that the central problem for me is exactly what Mike pointed out:
 "In your enthusiasm and cheerleading you as often turn people off as inspire
 them. You too frequently take it upon yourself to speak for the community.
 Semgel is a nice contribution being contributed by a new, enthusiastic
 contributor. I think this is to be applauded, not lectured or scolded. Semgel
 is certainly as much on topic as most of the posts to this forum."

 The message you should hear is that many people are frustrated by the way the 
 discussions in this forum are carried out and have already stopped 
 contributing or even reading. And this is a very bad development for a 
 community. The topic we are discussing right now is only a symptom. Please 
 think about it.

 On 20.07.2012 at 16:43, Kingsley Idehen wrote:

 On 7/20/12 4:06 AM, Sebastian Schaffert wrote:
 On 19.07.2012 at 20:50, Kingsley Idehen wrote:

 I completely understand and appreciate your desire (which I share) to see 
 a mature landscape with a range of linked data sources. I can also 
 understand how a database or spreadsheet can potentially offer 
 fine-grained data access - your examples do illustrate the point very 
 well indeed!

 However, if we want to build a sustainable business, the decision to 
 build these features needs to be demand driven.
 I disagree.
 Note, I responded because I assumed this was a new Linked Data service. 
 But it clearly isn't. Thus, I don't want to open up a debate about Linked 
 Data virtues if you incorrectly assume they should be *demand driven*.

 Remember, this is the Linked Open Data (LOD) forum. We've long past the 
 issue of *demand driven* over here, re. Linked Data.
 But I agree. A technology that is not able to fire proof its usefulness in 
 a demand driven / problem driven environment is maybe interesting from an 
 academic standpoint but otherwise not really useful.

 So are you claiming that Linked Data hasn't fire proofed its usefulness in a 
 demand driven / problem driven environment?


 Indeed. This is my right as much as yours is to claim the opposite.

 My claim is founded in the many discussions I have when going to the CTOs of 
 *real* companies (big ones, outside the research business) out there and 
 trying to convince them that they should build on Semantic Web technologies 
 (because I believe they are superior). Believe me, even though I strongly 
 believe in the technology, this is a very tough job without a good reference 
 example that convinces them they will save X millions of Euros or improve the 
 life of their employees or society in the short- to medium term.

 Random sample answer from this week (I could bring many): "So this Linked
 Data is a possibility for data integration. Tell me, why should I convince my
 engineers to throw away their proven integration solutions? Why is Linked
 Data so superior to existing solutions? Where is it already in enterprise
 use?"

 The big datasets always sold as a success story in the Linked Data Cloud are 
 

Re: best practice RDF in HTML

2012-06-13 Thread Martynas Jusevičius
Hey Sebastian,

can't you simply use <link rel="alternate" type="application/rdf+xml"
href="recshop.rdf"/> ?
It's not embedding per se, but it's one of the patterns. More info here:
http://linkeddatabook.com/editions/1.0/#htoc66 (section "Making RDF
Discoverable from HTML")

Martynas
graphity.org

On Tue, Jun 12, 2012 at 7:22 PM, Sebastian Hellmann
hellm...@informatik.uni-leipzig.de wrote:
 Dear Mark,
 my main concerns are:
 1. What are the best practices to include invisible RDFa in an HTML
 document. I think Keith answered that. Maybe at the end of the body would be
 the most unobtrusive way. The same question was raised here:
 http://answers.semanticweb.com/questions/10161/is-visually-hidden-rdfa-an-anti-pattern
 I just wanted to reassure that hidden RDFa is not contradicting the
 intention of RDFa.
 Are there any practical disadvantages (besides the obvious increase in byte
 size)?

 2. Are there any alternatives to RDFa to include RDF in HTML ?

 All the best,
 Sebastian



 On 06/12/2012 05:52 PM, Mark Birbeck wrote:

 Hi Sebastian,

 It's not clear to me whether you are saying that you don't want to use
 RDFa because:

 * you don't like it, or;

 * you think that it needs to have some user-oriented manifestation.

 There is no requirement that the RDFa in a document is displayed to
 the user in any way, or that the triples somehow 'double-up'. This
 means that your example could also be marked up like this:

   <div
     xmlns:cd="http://www.recshop.fake/cd#"
     about="http://www.recshop.fake/cd/Empire_Burlesque"
   >
     <span property="cd:artist" content="Bob Dylan"></span>
     <span property="cd:dbpedia"
       resource="http://dbpedia.org/resource/Empire_Burlesque"></span>
   </div>

 Regards,

 Mark


 On Tue, Jun 12, 2012 at 3:02 PM, Sebastian Hellmann
 hellm...@informatik.uni-leipzig.de  wrote:

 Dear list,
 What are the best practices to include a set of RDF triples in HTML?
 *Please note*: I am not looking for the RDFa way to include triples. I
 just
 want to add a set of triples somewhere in an HTML document. They are not
 supposed to show up like "Wikinomics, Don Tapscott" in the following
 example:

 <div xmlns:dc="http://purl.org/dc/elements/1.1/"
  about="http://www.example.com/books/wikinomics">
  <span property="dc:title">Wikinomics</span>
  <span property="dc:creator">Don Tapscott</span>
  <span property="dc:date">2006-10-01</span>
 </div>

 I don't want to use the strings in the HTML document as objects in the
 triples. My use case is that I just have a large set of triples, e.g.
 1000
 that I want to include in bulk somewhere and ship along with the HTML.
 Which way is the best? Do the examples below work?
 All the best,
 Sebastian

 ***
 Include in head
 **
 <html>
 <head>
 <script type="application/rdf+xml">
 <rdf:RDF
 xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
 xmlns:cd="http://www.recshop.fake/cd#">

 <rdf:Description
 rdf:about="http://www.recshop.fake/cd/Empire Burlesque">
 <cd:artist>Bob Dylan</cd:artist>
 <cd:dbpedia rdf:resource="http://dbpedia.org/resource/Empire_Burlesque"/>
 </rdf:Description>
 </rdf:RDF>
 </script>
 </head>
 <body>
 </body>
 </html>
 **
 attach after </html>
 *
 <html>
 <head>
 </head>
 <body>
 </body>
 </html>
 <rdf:RDF
 xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
 xmlns:cd="http://www.recshop.fake/cd#">

 <rdf:Description
 rdf:about="http://www.recshop.fake/cd/Empire Burlesque">
 <cd:artist>Bob Dylan</cd:artist>
 <cd:dbpedia rdf:resource="http://dbpedia.org/resource/Empire_Burlesque"/>
 </rdf:Description>
 </rdf:RDF>


 --
 Dipl. Inf. Sebastian Hellmann
 Department of Computer Science, University of Leipzig
 Projects: http://nlp2rdf.org , http://dbpedia.org
 Homepage: http://bis.informatik.uni-leipzig.de/SebastianHellmann
 Research Group: http://aksw.org





 --
 Dipl. Inf. Sebastian Hellmann
 Department of Computer Science, University of Leipzig
 Projects: http://nlp2rdf.org , http://dbpedia.org
 Homepage: http://bis.informatik.uni-leipzig.de/SebastianHellmann
 Research Group: http://aksw.org