On 2/20/15 12:04 PM, Martynas Jusevičius wrote:
Hey Michael,

this one indeed.

The layout is generated with XSLT from RDF/XML. The triples are
grouped by resources.

Not to criticize, but to seek clarity:

What does the term "resources" refer to, in your usage context?

In a world of Relations (this is what RDF is about, fundamentally), it's hard for me to understand what you mean by "grouped by resources". What is the "resource" here?


  Within a resource block, properties are sorted
alphabetically by their rdfs:labels retrieved from respective
vocabularies.
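
If I had to guess at the mechanics, the grouping would amount to something
like the sketch below (Python with rdflib; the file names, and the assumption
that the vocabulary labels sit in a locally loaded graph, are mine, not
necessarily a description of your implementation):

from rdflib import Graph, RDFS

g = Graph()
g.parse("data.rdf", format="xml")       # hypothetical RDF/XML input
vocab = Graph()
vocab.parse("vocab.rdf", format="xml")  # hypothetical vocabulary with rdfs:labels

def label(p):
    # fall back to the property URI when the vocabulary has no rdfs:label
    return str(vocab.value(p, RDFS.label) or p)

# one block per resource (i.e. per subject), properties sorted by label
for subject in sorted(set(g.subjects()), key=str):
    print(subject)
    for s, p, o in sorted(g.triples((subject, None, None)),
                          key=lambda t: label(t[1])):
        print("   ", label(p), o)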

How do you handle the integrity of multi-user updates, without killing concurrency, using this method of grouping (which in and of itself is unclear, due to the use of the term "resources")?

How do you minimize the user interaction space, i.e., reduce clutter -- especially if you have a lot of relations in scope, or the possibility that such becomes the reality over time?

Kingsley

On Fri, Feb 20, 2015 at 4:59 PM, Michael Brunnbauer <[email protected]> wrote:
Hello Martynas,

sorry! You mean this one?

http://linkeddatahub.com/ldh?mode=http%3A%2F%2Fgraphity.org%2Fgc%23EditMode

Nice! Looks like a template, but you may still have the triple object ordering
problem. Do you? If so, how did you address it?

Regards,

Michael Brunnbauer

On Fri, Feb 20, 2015 at 04:23:14PM +0100, Martynas Jusevičius wrote:
I find it funny that people on this list and semweb lists in general
like discussing abstractions, ideas, desires, prejudices, etc.

However, when a concrete example is shown that solves the issue under
discussion, or at least comes close, it receives no response.

So please continue discussing the ideal RDF environment and its
potential problems, while we continue improving our editor for users
who are managing RDF right now.

Have a nice weekend everyone!

On Fri, Feb 20, 2015 at 4:09 PM, Paul Houle <[email protected]> wrote:
So some thoughts here.

OWL, so far as inference is concerned, is a failure, and it is time to move
on. It is like RDF/XML.

As a way of documenting types and properties it is tolerable. If I write
down something in production rules, I can generally explain to an "average
Joe" what it means. If I try to use OWL, it is easy for a few things, hard
for a few things, then there are a few things Kendall Clark can do, and
then there is a lot you just can't do.

On paper OWL has good scaling properties, but in practice production rules
win because you can infer the things you care about and not have to generate
the large number of trivial or otherwise uninteresting conclusions you get
from OWL.
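
To make "production rules" concrete: here is what a single rule looks like
when written as a SPARQL INSERT and run with rdflib in Python (the ex:
vocabulary and file name are invented for illustration). The rule derives
exactly the conclusion you care about and nothing else:

from rdflib import Graph

g = Graph()
g.parse("people.ttl")  # hypothetical input data

# one rule: parent of parent -> grandparent; no other entailments
g.update("""
    PREFIX ex: <http://example.org/>
    INSERT { ?x ex:grandparent ?z }
    WHERE  { ?x ex:parent ?y . ?y ex:parent ?z }
""")

An OWL 2 reasoner could get you the same triple via a property chain axiom,
but only together with every other entailment the ontology licenses.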

As a data integration language OWL points in an interesting direction, but it
is insufficient in a number of ways. For instance, it can't convert data
types (canonicalize <mailto:[email protected]> and "[email protected]"), deal
with trash dates (have you ever seen an enterprise system that didn't have
trash dates?), or convert units. It also can't reject facts that don't
matter, and as far as both time/space and accuracy are concerned, you do much
better if you can cook things down to the smallest correct database.
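
The kind of cleanup I mean looks roughly like this in Python (the cutoffs
for what counts as a plausible date are my guess at a policy, not any
standard):

from datetime import date

def canonical_mailbox(value):
    # "mailto:[email protected]" and "[email protected]" should compare equal
    s = str(value).strip().lower()
    return s[len("mailto:"):] if s.startswith("mailto:") else s

def plausible_date(d, lo=date(1900, 1, 1), hi=date(2100, 1, 1)):
    # reject sentinel "trash dates" like 0001-01-01 or 9999-12-31
    return lo <= d <= hi

assert canonical_mailbox("mailto:[email protected]") == canonical_mailbox("[email protected]")
assert not plausible_date(date(9999, 12, 31))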

----

The other one, as Kingsley points out, is that ordered collections do
need some real work to square the circle between the abstract graph
representation and things that are actually practical.

I am building an app right now where I call an API and get back chunks of
JSON, which I cache; the primary scenario is that I look them up by
primary key and get back something with a 1:1 correspondence to what I got.
Being able to do other kinds of queries and such is sugar on top, but being
able to reconstruct an original record, ordered collections and all, is an
absolute requirement.
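
Round-tripping the order through the graph model, just to show the shape of
the problem, looks roughly like this with rdflib in Python (the record URI
and the items property are invented):

from rdflib import BNode, Graph, Literal, URIRef
from rdflib.collection import Collection

EX = "http://example.org/"
g = Graph()
record = URIRef(EX + "record/42")              # hypothetical primary key
items = [Literal(x) for x in ["a", "b", "c"]]  # the ordered JSON array

head = BNode()
Collection(g, head, items)                     # builds the rdf:first/rdf:rest chain
g.add((record, URIRef(EX + "items"), head))

# reconstruction: resolve the key, walk the list, get the array back in order
head = g.value(record, URIRef(EX + "items"))
assert [str(x) for x in Collection(g, head)] == ["a", "b", "c"]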

So far my Infovore framework, based on Hadoop, has avoided collections,
containers and all that, because these are not used in DBpedia and Freebase,
at least not in the A-Box. The simple representation that each triple is a
record does not work so well in this case, because if I just turn blank nodes
into UUIDs and spray them across the cluster, the act of reconstituting a
container would require an unbounded number of passes, which is no fun at
all with Hadoop. (At first I thought the number of passes was the same as the
length of the largest collection, but now that I think about it, I believe I
can do better than that; see the sketch below.) I don't feel so bad about most
recursive structures, because I don't think they will get that deep, but I
think LISP-style lists are evil, at least when it comes to external memory and
modern memory hierarchies.
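
A toy model of the better bound, in plain Python rather than Hadoop: pointer
doubling. Each iteration of the loop below stands in for one cluster pass in
which every node learns its successor's successor, so a chain of length n
resolves in about log2(n) passes instead of n. Purely illustrative, not
Infovore code:

def passes_to_resolve(rest):
    # rest maps each list node to its rdf:rest successor (None at the tail)
    jump = dict(rest)
    npasses = 0
    while any(v is not None for v in jump.values()):
        # after this pass, every node points twice as far down the chain
        jump = {n: (jump[v] if v is not None else None)
                for n, v in jump.items()}
        npasses += 1
    return npasses

chain = {i: i + 1 for i in range(999)}  # a 1000-node chain: i -> i+1
chain[999] = None
print(passes_to_resolve(chain))         # 10, i.e. ceil(log2(1000))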


--
++  Michael Brunnbauer
++  netEstate GmbH
++  Geisenhausener Straße 11a
++  81379 München
++  Tel +49 89 32 19 77 80
++  Fax +49 89 32 19 77 89
++  E-Mail [email protected]
++  http://www.netestate.de/
++
++  Sitz: München, HRB Nr.142452 (Handelsregister B München)
++  USt-IdNr. DE221033342
++  Geschäftsführer: Michael Brunnbauer, Franz Brunnbauer
++  Prokurist: Dipl. Kfm. (Univ.) Markus Hendel




--
Regards,

Kingsley Idehen 
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog 1: http://kidehen.blogspot.com
Personal Weblog 2: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen
Personal WebID: http://kingsley.idehen.net/dataspace/person/kidehen#this


