On 2/21/15 2:57 PM, Martynas Jusevičius wrote:
Kingsley,

I am fully aware of the distinction between RDF as a data model and
its serializations. That's why I wrote: "RDF *serializations* often
group triples by subject URI."

What I tweeted recently was that, despite having concept models and
abstractions in our heads, when pipelining data and writing software
we are dealing with concrete serializations. Am I not right?

The trouble here is that we have a terminology problem. If I disagree with you (about terminology), it is presumed that I am lecturing.

FWIW -- serialization formats, notations, and languages are not the same thing. Unfortunately, all of these subtly distinct items are conflated in RDF-land.


So what I mean with "it works" is that our RDF/POST-based user
interface is simply a generic function of the RDF graph behind it, in
the form of XSLT transforming the RDF/XML serialization.

I commented on concurrency in the previous email, but you haven't
replied to that.

I'll go find that comment, and respond if need be. Otherwise, I would rather just make time to use your RDF editing tool and provide specific usage feedback, in regards to the issues of concern and interest to me.

Kingsley

On Sat, Feb 21, 2015 at 8:43 PM, Kingsley  Idehen
<[email protected]> wrote:
On 2/21/15 1:34 PM, Martynas Jusevičius wrote:
Kingsley,

I don't need a lecture from you each time you disagree.

I am not lecturing you. I am trying to make the conversation clearer. Can't
you see that?

Please explain what you think "Resource" means in "Resource
Description Framework".

In any case, I think you know well what I mean.

A "grouped" RDF/XML output would be something like this:

<rdf:Description rdf:about="http://resource">
    <rdf:type rdf:resource="http://type"/>
    <a:property>value</a:property>
    <b:property>smth</b:property>
</rdf:Description>

You spoke about RDF not RDF/XML (as you know, they are not the same thing).
You said or implied "RDF datasets are usually organized by subject".

RDF is an abstract Language (system of signs, syntax, and semantics). Thus,
why are you presenting me with an RDF/XML statement notation based response,
when we are debating/discussing the nature of an RDF relation?

Can't you see that the more we speak about RDF in overloaded form, the more
confusing it remains?

RDF isn't a mystery. It doesn't have to be some unsolvable riddle. Sadly,
that's its general perception because we talk about it using common terms in
an overloaded manner.

How would you call this? I call it a "resource description".

See my comments above. RDF is a Language. You can create RDF Statements in a
document using a variety of Notations. Thus, when speaking of RDF I am not
thinking about RDF/XML, TURTLE, or any other notation. I am thinking about a
language that systematically leverages signs, syntax, and semantics as a
mechanism for encoding and decoding information [data in some context].

But the
name does not matter much, the fact is that we use it and it works.

Just works in what sense? That's why I asked you questions about how you
catered to integrity and concurrency.

If you have more than one person editing sentences, paragraphs, or a page in
a book, wouldn't you think handling issues such as the activity frequency,
user count, and content volume are important? That's all I was seeking an
insight from you about, in regards to your work.


Kingsley


On Sat, Feb 21, 2015 at 7:01 PM, Kingsley  Idehen
<[email protected]> wrote:
On 2/21/15 9:48 AM, Martynas Jusevičius wrote:

On Fri, Feb 20, 2015 at 6:41 PM, Kingsley  Idehen
<[email protected]> wrote:

On 2/20/15 12:04 PM, Martynas Jusevičius wrote:


Not to criticize, but to seek clarity:

What does the term "resources" refer to, in your usage context?

In a world of Relations (this is what RDF is about, fundamentally) it's hard
for me to understand what you mean by "grouped by resources". What is the
"resource" etc?

Well, RDF stands for "Resource Description Framework" after all, so
I'll cite its spec:
"RDF graphs are sets of subject-predicate-object triples, where the
elements may be IRIs, blank nodes, or datatyped literals. They are
used to express descriptions of resources."

More to the point, RDF serializations often group triples by subject
URI.
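The "group by subject" habit is easy to see if a graph is modeled as a bare set of 3-tuples. A minimal sketch (all URIs and property names below are invented for illustration):

```python
# A toy RDF graph modeled as a bare set of (subject, predicate, object)
# 3-tuples. The URIs are invented for illustration only.
graph = {
    ("http://resource", "rdf:type", "http://type"),
    ("http://resource", "a:property", "value"),
    ("http://other", "a:property", "smth"),
}

# Serializers that "group by subject" effectively do this: bucket the
# triple set by its first element, then emit one block per bucket.
by_subject = {}
for s, p, o in sorted(graph):
    by_subject.setdefault(s, []).append((p, o))

for s, pairs in by_subject.items():
    print(s)
    for p, o in pairs:
        print("    %s -> %s" % (p, o))
```

The grouping is purely a presentation choice made by the writer; the underlying triple set is unordered.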


The claim "often group triples by subject" isn't consistent with the
nature
of an RDF Relation [1].

"A predicate is a sentence-forming relation. Each tuple in the relation
is a
finite, ordered sequence of objects. The fact that a particular tuple is
an
element of a predicate is denoted by '(*predicate* arg_1 arg_2 ..
arg_n)',
where the arg_i are the objects so related. In the case of binary
predicates, the fact can be read as `arg_1 is *predicate* arg_2' or `a
*predicate* of arg_1 is arg_2'." [1]

RDF's specs are consistent with what's described above, and inconsistent
with the subject ordering claims you are making.

RDF statements (which represent relations) have sources such as documents
which are accessible over a network and/or documents managed by some
RDBMS
e.g., Named Graphs in the case of a SPARQL-compliant RDBMS.

In RDF you are always working with a set of tuples (s,p,o 3-tuples
specifically) grouped by predicate.
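The predicate-centric reading can be sketched the same way: each predicate names a relation, i.e. a set of (arg_1, arg_2) pairs, per the SUMO definition quoted above. A minimal illustration (the triples are made up for the example):

```python
# The same kind of triple set, read predicate-first: each predicate
# names a binary relation, i.e. a set of (arg_1, arg_2) tuples.
triples = {
    ("http://resource", "rdf:type", "http://type"),
    ("http://resource", "a:property", "value"),
    ("http://other", "a:property", "smth"),
}

relations = {}
for s, p, o in triples:
    relations.setdefault(p, set()).add((s, o))

# 'a:property' is now a relation containing two tuples; the fact
# (a:property http://resource value) reads as
# "a a:property of http://resource is value".
```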

Also note, I never used the phrase "RDF Graph" in any of the sentences
above, and deliberately so, because that overloaded phrase is yet another
source of unnecessary confusion.

Links:

[1]

http://54.183.42.206:8080/sigma/Browse.jsp?lang=EnglishLanguage&flang=SUO-KIF&kb=SUMO&term=Predicate

Kingsley


Within a resource block, properties are sorted
alphabetically by their rdfs:labels retrieved from respective
vocabularies.

How do you handle the integrity of multi-user updates, without killing
concurrency, using this method of grouping (which in and of itself is
unclear due to the use of the "resources" term)?

How do you minimize the user interaction space i.e., reduce clutter --
especially if you have a lot of relations in scope or the possibility
that
such becomes the reality over time?

I don't think concurrent updates are related to "resources" or specific
to our editor. The Linked Data platform (whatever it is) and its HTTP
logic has to deal with ETags, 409 Conflict, etc.
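The ETag dance can be sketched in a few lines. This is a hypothetical in-memory store following the thread's ETag/409 framing, not any particular platform's API:

```python
import hashlib

# Hypothetical in-memory named-graph store with optimistic concurrency:
# every graph carries an ETag derived from its content, and an update
# must present the ETag it last saw (an If-Match header, in HTTP terms).
store = {}  # graph name -> (etag, content)

def etag_of(content):
    return hashlib.sha256(content.encode()).hexdigest()[:16]

def put(name, content, if_match=None):
    current = store.get(name)
    if current is not None and if_match != current[0]:
        return 409  # Conflict: another client updated the graph first
    store[name] = (etag_of(content), content)
    return 200 if current else 201

created = put("g", "<s> <p> <o> .")   # first write creates the graph
tag, _ = store["g"]                   # the ETag a client would have seen
stale = put("g", "<s> <p> <o2> .", if_match="stale-tag")  # rejected
fresh = put("g", "<s> <p> <o2> .", if_match=tag)          # accepted
```

Whether this belongs in the server, the client, or a spec like the Graph Store Protocol is exactly the open question raised here.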

I was wondering if this logic should be part of specifications such as
the Graph Store Protocol:
https://twitter.com/pumba_lt/status/545206095783145472
But I don't have an answer. Maybe it's an oversight on the W3C side?

We scope the description edited either by a) SPARQL query or b) named
graph content.

Kingsley

On Fri, Feb 20, 2015 at 4:59 PM, Michael Brunnbauer <[email protected]>
wrote:

Hello Martynas,

sorry! You mean this one?



http://linkeddatahub.com/ldh?mode=http%3A%2F%2Fgraphity.org%2Fgc%23EditMode

Nice! Looks like a template but you still may have the triple object
ordering
problem. Do you? If yes, how did you address it?

Regards,

Michael Brunnbauer

On Fri, Feb 20, 2015 at 04:23:14PM +0100, Martynas Jusevičius wrote:

I find it funny that people on this list and semweb lists in general
like discussing abstractions, ideas, desires, prejudices etc.

However when a concrete example is shown, which solves the issue
discussed or at least comes close to that, it receives no response.

So please continue discussing the ideal RDF environment and its
potential problems while we continue improving our editor for users
who manage RDF already now.

Have a nice weekend everyone!

On Fri, Feb 20, 2015 at 4:09 PM, Paul Houle <[email protected]> wrote:

So some thoughts here.

OWL,  so far as inference is concerned,  is a failure and it is time to
move
on.  It is like RDF/XML.

As a way of documenting types and properties it is tolerable.  If I
write
down something in production rules I can generally explain to an
"average
joe" what they mean.  If I try to use OWL it is easy for a few things,
hard
for a few things,  then there are a few things Kendall Clark can do,
and
then there is a lot you just can't do.

On paper OWL has good scaling properties but in practice production
rules
win because you can infer the things you care about and not have to
generate
the large number of trivial or otherwise uninteresting conclusions you
get
from OWL.
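A hedged illustration of that point: a production-rule engine materializes only the conclusions its rules actually mention, rather than the full entailment closure a reasoner computes. A sketch (the facts and the rule are invented):

```python
# Minimal forward-chaining production-rule sketch: derive only the
# conclusions the rules mention, instead of the many trivial
# consequences a full OWL closure would also entail.
facts = {("alice", "worksFor", "acme"), ("acme", "locatedIn", "berlin")}

def rule_based_in(fs):
    # (x worksFor y) AND (y locatedIn z)  =>  (x basedIn z)
    out = set()
    for x, p1, y in fs:
        if p1 != "worksFor":
            continue
        for y2, p2, z in fs:
            if p2 == "locatedIn" and y2 == y:
                out.add((x, "basedIn", z))
    return out

# Run to fixpoint: stop when a pass derives nothing new.
while True:
    new = rule_based_in(facts) - facts
    if not new:
        break
    facts |= new
```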

As a data integration language OWL points in an interesting direction
but it
is insufficient in a number of ways.  For instance,  it can't convert
data
types (canonicalize <mailto:[email protected]> and "[email protected]"),
deal
with trash dates (have you ever seen an enterprise system that didn't
have
trash dates?) or convert units.  It also can't reject facts that don't
matter; and in terms of both time & space and accuracy, you do much
better if you can cook things down to the smallest correct database.

----

The other one is that as Kingsley points out,  the ordered collections
do
need some real work to square the circle between the abstract graph
representation and things that are actually practical.

I am building an app right now where I call an API and get back chunks
of
JSON which I cache,  and the primary scenario is that I look them up by
primary key and get back something with a 1:1 correspondence to what I
got.
Being able to do other kind of queries and such is sugar on top,  but
being
able to reconstruct an original record,  ordered collections and all,
is an
absolute requirement.

So far my infovore framework based on Hadoop has avoided collections,
containers and all that because these are not used in DBpedia and
Freebase,
at least not in the A-Box.  The simple representation that each triple
is a
record does not work so well in this case because if I just turn blank
nodes
into UUIDs and spray them across the cluster,  the act of
reconstituting a
container would require an unbounded number of passes,  which is no fun
at
all with Hadoop.  (At first I thought the # of passes was the same as
the
length of the largest collection but now that I think about it I think
I can
do better than that)  I don't feel so bad about most recursive
structures
because I don't think they will get that deep but I think LISP-Lists
are
evil at least when it comes to external memory and modern memory
hierarchies.
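For what it's worth, once all the triples for a graph sit on one node, an RDF collection can be reconstituted in a single pass over an in-memory index, regardless of chain length; the multi-pass problem only bites when the blank nodes are sprayed across a cluster. A hypothetical single-node sketch:

```python
# Reconstitute an RDF collection (an rdf:first / rdf:rest chain) from a
# local triple set. With an in-memory index this is O(n) in the list
# length; it is only in a distributed setting, where each hop may need
# another pass, that chain length starts to hurt.
triples = {
    ("_:b1", "rdf:first", "a"), ("_:b1", "rdf:rest", "_:b2"),
    ("_:b2", "rdf:first", "b"), ("_:b2", "rdf:rest", "_:b3"),
    ("_:b3", "rdf:first", "c"), ("_:b3", "rdf:rest", "rdf:nil"),
}

index = {}  # node -> {predicate: object}
for s, p, o in triples:
    index.setdefault(s, {})[p] = o

def read_list(head):
    items = []
    while head != "rdf:nil":
        node = index[head]
        items.append(node["rdf:first"])
        head = node["rdf:rest"]
    return items
```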


--
++  Michael Brunnbauer
++  netEstate GmbH
++  Geisenhausener Straße 11a
++  81379 München
++  Tel +49 89 32 19 77 80
++  Fax +49 89 32 19 77 89
++  E-Mail [email protected]
++  http://www.netestate.de/
++
++  Sitz: München, HRB Nr.142452 (Handelsregister B München)
++  USt-IdNr. DE221033342
++  Geschäftsführer: Michael Brunnbauer, Franz Brunnbauer
++  Prokurist: Dipl. Kfm. (Univ.) Markus Hendel



--
Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this













