Mike Bergman wrote:
Hi David,
David Huynh wrote:
Hi Mike,
Well put! It's an accurate account of what's going on. As I
explicitly said in my original post, this was more or less a tease
and shameless self-promotion :-), but hopefully beneficial, as there
are, in my opinion, some interesting ideas from this query editor
that can be adopted / adapted. In fact, if you watch the screencast
again, there might be hints of "data driving interfaces", albeit only
for developers, not end-users. What other major wins at no cost are
you seeing?
OK; I'm game. Here is a list of "major wins" that Fred and I have not
published before:
1. Context-specific autocompletion (see the sketch after this list)
2. Context-specific dropdown lists
3. Instance record previews
4. Class (concept) previews
5. Instance record details
6. Class (concept) details
7. CRUD actions on all items (depending on authorization/access rights)
8. Label viewing/selection vs. IRIs
9. Item look-up/search (contextual, via label or IRIs)
10. Synonym/alias matching ("semsets") or lookup
11. Easy internationalization (viz labels)
12. Automatic inferencing
13. What's related lookup
14. More specific/more generic lookups
15. Local graph visualization
16. Full linkage navigation (linked data)
17. Contextual, directed workflows (query building, template
building/selection, analysis, etc.)
18. Contextual tooltips/popups
19. Contextual help
20. Dataset-level access and scope.
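To make items 1 and 9 concrete (the sketch promised above), here is a
minimal SPARQL lookup of the kind that can drive a context-specific
autocompletion widget. Everything in it is hypothetical for illustration:
the graph IRI, the ex:Musician class standing in for whatever class is
currently in context, and the typed prefix "mil":

# Hypothetical sketch: suggest only labels of instances of the class
# currently in context that match the prefix the user has typed so far.
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ex:   <http://example.org/schema#>

SELECT DISTINCT ?item ?label
FROM <http://example.org/data>
WHERE {
  ?item a ex:Musician ;
        rdfs:label ?label .
  FILTER regex(str(?label), "^mil", "i")
}
ORDER BY ?label
LIMIT 10

Because both the class and the labels come from the data and ontology
rather than the application, swapping the ontology changes what gets
suggested with no UI code changes.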
I saw a few of the above in your system. I especially like your syntax
tolerance, interactive tree ("one-by-one"), and "inside out" ideas.
Very clever.
There are many reasons why we think RDF is the superior choice for all of
these purposes [1].
In our own work, we also take great care to separate instances and their
records (ABox) from the conceptual schema (TBox), which, when combined
with the same techniques above, also provides highly useful contextuality
and targeted selections. For example, as a teaser, all instance types can
be classified into discrete, disjoint "supertypes" that greatly aid
disambiguation or display template selection [2].
These reasons are why we describe our stuff as "data-driven
applications". Generic tools incorporating the ideas above can be
contextually driven through simple swap-outs of ontologies and
domains. Indeed, the ontology(ies) are what *actually* drive the system.
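As a minimal sketch of what that ABox/TBox split plus "supertype" tagging
can look like in practice (the graph names, the ex:superType property,
and all IRIs below are made up purely for illustration, not our actual
vocabulary): given an instance, fetch its asserted class from the
instance data and the disjoint supertype recorded for that class in the
schema, which is enough to pick a display template or disambiguate:

# Hypothetical sketch: class of an instance (ABox) plus the disjoint
# "supertype" assigned to that class (TBox), e.g. for template selection.
PREFIX ex: <http://example.org/schema#>

SELECT ?class ?superType
WHERE {
  GRAPH <http://example.org/abox> {        # instance records
    <http://example.org/id/Springfield> a ?class .
  }
  GRAPH <http://example.org/tbox> {        # conceptual schema
    ?class ex:superType ?superType .       # e.g. ex:Place vs. ex:Person
  }
}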
IMO, from a consumer perspective, Fred's query builder when at Zitgist
was the best I have seen [3]. That was two years ago, and we never
commercialized it, but have taken some of those ideas forward with our
current stuff.
We have no issue with JSON as a naive authoring environment or for
data exchange. Indeed, we view it as often superior to straight RDF.
But our internal, canonical representations are RDF, structured and
organized as above, because we get all of these data-driven interface
advantages "for free". We can also avoid all of the SEO problems
of client-side JSON, a concern of mine going back to early work with
the deep Web.
I'd be interested in your or others' similar ideas.
Mike
[1] http://www.mkbergman.com/?p=483
[2] For example, see references to BBN or Sekine in
http://en.wikipedia.org/wiki/Named_Entity_Recognition
[3] http://www.zitgist.com/products/query_builder.html
Mike,
A little re-org based on the functionality partitioning theme of my last
post:
View (Browsers/Explorers that may be local or remote clients to a data
server):
- Context-specific autocompletion
- Context-specific dropdown lists
- Instance record previews
- Class (concept) previews
- Instance record details
- Class (concept) details
- Label viewing/selection v IRIs
- Item look-up/search (contextual, via label or IRIs)
- Synonym/alias matching ("semsets") or lookup
- Easy internationalization (viz. labels; also done using a combination
of inference rules, user agent locale deduction, and content negotiation)
- Local graph visualization
- Contextual, directed work flows (query building, template
building/selection, analysis, etc)
- Contextual help
- Contextual tooltips/popups
Controller (external or data server hosted app. logic):
- More specific/more generic lookups
- Automatic inferencing (e.g., easy internationalization viz. labels;
also done using a combination of inference rules, user agent locale
deduction, and transparent content negotiation QoS algorithms)
Model (expressed via data server hosted linked data graph):
- More specific/more generic lookups
- What's related lookup
- Full linkage navigation (linked data, and how you make "what's related"
work by taking the grunt work away from the client)
- Automatic inferencing (e.g., easy internationalization viz. labels;
also done using a combination of inference rules, user agent locale
deduction, and transparent content negotiation QoS algorithms; see the
label-selection sketch after this list)
- Dataset-level access and scope (foaf+ssl is the only practical option
here).
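To make the internationalization items concrete (the label-selection
sketch referenced in the Model list), here is a minimal, hypothetical
example of the final step: once the user agent's locale has been deduced
(say, French), the same generic query simply filters labels by language
tag, so no UI strings need to be baked into the application:

# Hypothetical sketch: select the label matching the deduced locale ("fr").
# Fallback to another language when no French label exists is omitted.
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?item ?label
WHERE {
  ?item rdfs:label ?label .
  FILTER langMatches( lang(?label), "fr" )
}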
BTW - did you know that the Faceted Browser Service at
<http://lod.openlinksw.com> actually covers a lot of the above? Anyway,
talks are under way with David and others to mesh Parallax with the
Faceted Browser Service. This service was developed with the
aforementioned in mind. Making the above work atop 1 million triples is
challenging; we do it atop 4.5+ billion (and counting) triples.
The path to a powerful UI that ignites Linked Data comprehension and
appreciation outside the Semantic Web and LOD communities really isn't
that far away. What remains mercurial is our collective ability to align
skills and execute.
Anyway, the list from you and Fred is a good start; my re-org
should hopefully add some clarity.
David: Please comment :-)
Kingsley
David
Mike Bergman wrote:
This has been a classic case of Cool Hand Luke and a failure to
communicate. Indeed, it happens all of the time in this forum.
David comes from a perspective of usability and user interfaces,
granted with a JS bias. Most all of us have recognized his genius
for quite some time, and he is a leading innovator in such data
presentation.
Kingsley has been a passionate advocate for data connectivity and
overcoming all things "silo". Middleware is his game (and OL's).
Data and manipulating data is his perspective, and we know the
superior infrastructure that his personal and then corporate
commitments to these issues have brought.
Benjamin notes today the difference in perspective. Does it begin
with the user experience, or does it begin with the data?
The answer, of course, is Yes.
David might be criticized for JSON and MQL and other Freebase things. As
he knows, I have done so personally, offline and directly.
Kingsley might be criticized for facile hand-waving at UI and
usability questions; he, too, knows I have made those points privately.
I truly don't know what our "community" really is or if, indeed, we
even have one. But I do know this:
All of us work on these issues because we believe in them and have
passion. So, I have a simple suggestion:
Keep looking outward. We need to talk and speak to the
"unaffiliated". In that regard, David has the upper hand because
presentation and flash will always be easier to understand for the
non-cognoscenti. But, David, you know this too: your job is easier
if the nature of the data and its structure drives your display.
There are HUGE, HUGE advantages of data driving interfaces and
usability that neither of you are discussing. Let's next turn our
attention there and gain some major wins at no cost.
Mike
David Huynh wrote:
Kingsley,
Thanks for the resources and the detailed explanation! Looks like
all the pieces are there!
David
Kingsley Idehen wrote:
David Huynh wrote:
Thanks for the link, Juan.
Just curious, even if I know SPARQL, how do I (as a new user)
know which properties and types there are in the data? And what
URIs to use for what?
David,
Not speaking for Juan, but seeking to answer the question you posed.
Our iSPARQL interface takes the view that:
1. You look up vocabularies and ontologies of interest before
constructing triple patterns, since the terms need to come from
somewhere
2. You then use the ontology/vocabulary tree to drag and drop
classes over Subject and Object nodes
3. Do the same thing re. properties by selecting them and dropping
them over the connectors (predicates)
4. Repeat the above until you've completely painted an SPO graph
of what you seek (a sketch of the resulting query follows below).
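To give a feel for what steps 2-4 produce (the sketch mentioned in step
4), here is an illustrative result using FOAF; the vocabulary is my own
choice for the example, not necessarily what iSPARQL would emit. A class
has been dropped on the Subject and Object nodes, and a property on the
connector between them:

# Illustrative painted S-P-O graph, expressed as an ordinary SPARQL query.
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

SELECT ?person ?friend
WHERE {
  ?person a foaf:Person ;
          foaf:knows ?friend .
  ?friend a foaf:Person .
}
LIMIT 25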
BTW - the pattern in steps 2-4 above originated with RDF Author,
and we simply adopted it for SPARQL (following a Skype session I
had with Danbri years ago re. the need for SPARQL QBE). Note: RDF
Author allowed you to write triples directly into RDF information
resources via their URLs, which means the same UI works fine for
SPARUL (writing to the Quad Store via its internal graph IRI or Web
information resource URL).
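For readers less familiar with the write side, such an update amounts to
something like the following (shown here in the later-standardized SPARQL
1.1 Update syntax rather than the original SPARUL form; the graph IRI and
triple are hypothetical):

# Hypothetical sketch: write one triple into a named graph in the Quad Store.
PREFIX dcterms: <http://purl.org/dc/terms/>

INSERT DATA {
  GRAPH <http://example.org/mygraph> {
    <http://example.org/doc/1> dcterms:title "An example document" .
  }
}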
Links:
1. http://rdfweb.org/people/damian/RDFAuthor/Tutorial/ -- RDF Author
Kingsley
David
Juan Sequeda wrote:
You may want to check out a tool that we are working on: SQUIN
http://squin.informatik.hu-berlin.de/SQUIN/
Juan Sequeda, Ph.D Student
Dept. of Computer Sciences
The University of Texas at Austin
www.juansequeda.com
www.semanticwebaustin.org
On Wed, Apr 22, 2009 at 9:18 PM, David Huynh
<[email protected]> wrote:
Hi all,
Admittedly this is somewhat of a tease and shameless
self-promotion :-) but I think there are a few interesting
concepts in the query editor for Freebase that I've been
working
on that can be very useful for querying and consuming LOD
data sets:
http://www.freebase.com/app/queryeditor/about
Or maybe I missed it totally--is there anything similar for
writing SPARQL queries over LOD?
Cheers,
David
--
Regards,
Kingsley Idehen Weblog: http://www.openlinksw.com/blog/~kidehen
President & CEO
OpenLink Software Web: http://www.openlinksw.com