Re: [Ann] WebVOWL 0.3 - Visualize your ontology on the web

2015-01-12 Thread Steffen Lohmann

Hi John,

On 09.01.2015 14:08, John Walker wrote:
  On January 9, 2015 at 1:15 PM Steffen Lohmann 
steffen.lohm...@vis.uni-stuttgart.de wrote:

On 09.01.2015 11:59, John Walker wrote:
I see under "Selection Details" that there is a count of instances 
when a class is selected.
Is there any option to show or otherwise enumerate the instances 
of a class?
Maybe showing the instances in the graph might clutter things up 
(could add a filter for this), but simply adding a list of links 
under "Selection Details" would be a good start.


Good point. It is already on our list of issues and also defined in 
the VOWL 2 spec: http://vowl.visualdataweb.org/v2/#individuals
We will not go for the VOWL 1 representation in WebVOWL for several 
reasons, but use the recommended implementation in the sidebar (as 
you also proposed). We may, however, not list all individuals but 
only a subset (if there are many), as VOWL focuses on the TBox and as 
VOWL-JSON files could become quite large if we included all 
individuals. We may use separate JSON files for TBox and ABox in some 
future version of WebVOWL, but this requires some major structural 
changes. 
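
To give a rough idea of the trade-off, here is a purely hypothetical 
sketch (in JavaScript, since WebVOWL itself is JavaScript-based) of how 
a class entry in a VOWL-JSON file might carry an individual count plus 
a bounded sample for the sidebar; the field names and sample IRIs are 
made up and do not reflect the actual VOWL-JSON schema:

// Hypothetical structure and sample values only - not the real VOWL-JSON schema.
const classEntry = {
  id: "class42",
  type: "owl:Class",
  iri: "http://gs1.org/voc/FasteningTypeCode",
  individualCount: 57,                    // full ABox size
  individuals: [                          // only a bounded sample is embedded
    "http://gs1.org/voc/FasteningTypeCode-SCREW",
    "http://gs1.org/voc/FasteningTypeCode-ZIPPER"
  ]
};

// Sidebar rendering: list the sample as links and note the remainder.
function renderIndividuals(entry, maxShown) {
  maxShown = maxShown || 10;
  const shown = entry.individuals.slice(0, maxShown);
  const links = shown.map(function (iri) {
    return '<a href="' + iri + '">' + iri.split(/[#\/]/).pop() + '</a>';
  });
  const more = entry.individualCount - shown.length;
  return links.join(", ") + (more > 0 ? " ... and " + more + " more" : "");
}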


In those cases where those individuals are part of the 
ontology/vocabulary, I would consider them part of the terminology 
and useful to include in the visualization somehow. Of course you 
wouldn't want to start visualizing all resources with type foaf:Person :)




That's right (as there is no strict separation between TBox and ABox in 
OWL). We will see what we can do here. We plan to list individuals in 
WebVOWL in the future, at least up to a certain number.




This would be useful if the ontology contains, for example, code lists.


Do you have a good example of such an ontology? Or of any other 
ontology that contains many individuals?


I was trying it out with the current draft GS1 Vocabulary:

http://vowl.visualdataweb.org/webvowl/#iri=http://dydra.com/nlv0/gs1.ttl

Something like GoodRelations also contains predefined/enumerated lists 
of values:


http://vowl.visualdataweb.org/webvowl/#iri=http://purl.org/goodrelations/v1



Thank you. These are indeed good examples of ontologies with 
individuals. We will use them as test cases.



In the GS1 case I notice many of the classes are shown in darker blue 
as they are not defined to be an owl:Class; however, some of the 
classes like http://gs1.org/voc/FasteningTypeCode are shown in lighter 
blue with type owl:Class even though this is not stated in 
the source data. Any ideas why?




Dark blue is the recommended color for external elements, i.e. elements 
whose base URI differs from that of the visualized ontology - see
http://vowl.visualdataweb.org/v2/#externalElements and 
http://vowl.visualdataweb.org/v2/#colorExternal


You are right that the coloring is partly wrong for the above 
ontologies. This seems to be a bug in our OWL2VOWL converter, resulting 
from an internal comparison of the ontology IRI with the element IRI. We 
will fix it in the next release.
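
For illustration only (this is not the actual OWL2VOWL code), a small 
JavaScript sketch of the kind of base-IRI comparison involved, and of 
why trailing '#' or '/' characters need to be normalized before the 
ontology IRI and the element IRI are compared:

// Sketch of an external-element check, assuming simple string handling.
function isExternal(elementIri, ontologyIri) {
  const base = elementIri.replace(/[#\/][^#\/]*$/, "");  // strip the local name
  return base !== ontologyIri.replace(/[#\/]$/, "");     // normalize trailing # or /
}

isExternal("http://gs1.org/voc/FasteningTypeCode", "http://gs1.org/voc/");
// -> false: internal element, light blue
isExternal("http://www.w3.org/2004/02/skos/core#Concept", "http://gs1.org/voc/");
// -> true: external element, dark blue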


Thanks again for your feedback,
Steffen

--
Dr. Steffen Lohmann . Visualization and Interactive Systems (VIS)
University of Stuttgart . Universitaetstrasse 38 . D-70569 Stuttgart
Phone: +49 711 685-88438 . http://www.vis.uni-stuttgart.de/~lohmansn



Re: [Ann] WebVOWL 0.3 - Visualize your ontology on the web

2015-01-09 Thread Steffen Lohmann

On 05.01.2015 16:29, Stian Soiland-Reyes wrote:

This is great stuff!


Thank you, Stian. We are glad to hear that.


I tried it with my ontology PAV - and it seems it is struggling a bit
because PAV (deliberately) doesn't have defined domains and ranges on
object properties:

http://vowl.visualdataweb.org/webvowl/#iri=http://purl.org/pav/

Hence everything goes from and to Thing in the centre, which makes it
a bit clotted - I could not use the "Gravity" setting to space out the
properties.

In our manually made diagram I show this using additional vague
resource boxes - perhaps each unbound property could just get an
empty dotted box on the outside instead of going back to Thing?

http://pav-ontology.googlecode.com/svn/trunk/images/pav-overview.png


If there is a large set of properties without domain and range axioms in 
an ontology, the current WebVOWL rendering is indeed not perfect. Your 
PAV ontology is a good example of this. We will think about possible 
alternatives to adapt or extend the splitting rules of VOWL 2 - 
http://vowl.visualdataweb.org/v2/#splittingRules .
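
As a rough sketch of one possible adaptation (not an implemented 
WebVOWL behavior), each property end without a domain or range axiom 
could get its own placeholder owl:Thing node instead of all sharing the 
central one, which is close to your "empty dotted box" idea; the graph 
model used here is hypothetical:

// Hypothetical graph-building helper: one fresh owl:Thing per unbound end.
const placeholderNodes = [];
let placeholderCount = 0;

function resolveEnd(classId) {
  if (classId !== undefined) return classId;       // explicit domain/range axiom
  const id = "thing-" + placeholderCount++;        // fresh placeholder node
  placeholderNodes.push({ id: id, type: "owl:Thing", placeholder: true });
  return id;
}

function toEdge(property) {                        // property = { label, domain?, range? }
  return { label: property.label,
           source: resolveEnd(property.domain),
           target: resolveEnd(property.range) };
}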



Here's a nice view that I liked, using PROV-O

http://vowl.visualdataweb.org/webvowl/#iri=http://www.w3.org/ns/prov-o


The visualization is nice but not 100% complete yet, as the current 
WebVOWL implementation does not properly render properties with 
multiple domain or range axioms (e.g., the property hadActivity 
has more than one domain). We are working on that issue and it will 
likely be resolved in the next WebVOWL release.



Is it possible to turn off the "Subclass of" label and only show the line?


Several people asked for that already, so we plan to include some kind 
of expert mode with a reduced notation in the next WebVOWL release. 
However, we do not want to remove that label from the initial 
visualization, as it clarifies the direction of the subclass relation 
and is important to make VOWL understandable to casual users, as we 
found out in our evaluations - see our papers on VOWL linked at 
http://vowl.visualdataweb.org/v2/#references



Would it be possible to save a link to a particular view (without
having to save the SVG)?  That navigation state fits much better in
the # parameters than the iri=, I would believe.


This is a tricky question, as it would require saving the node 
positions of the VOWL graph in an annotated JSON or the like. The 
current VOWL-JSON is not optimized for that, but we may incorporate such 
a feature in one of the next releases (though I cannot promise it).
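
To illustrate the idea (the "state" parameter and the JSON layout are 
assumptions for this sketch, not an existing WebVOWL feature), the node 
positions could be serialized into the URL fragment and read back:

// Hypothetical sketch: encode/decode a shareable view in the URL fragment.
function encodeView(iri, nodes) {
  const positions = nodes.map(function (n) {
    return { id: n.id, x: Math.round(n.x), y: Math.round(n.y) };
  });
  return "#iri=" + encodeURIComponent(iri) +
         "&state=" + encodeURIComponent(btoa(JSON.stringify(positions)));
}

function decodeView(hash) {
  const params = new URLSearchParams(hash.replace(/^#/, ""));
  const state = params.get("state");
  return {
    iri: params.get("iri"),
    positions: state ? JSON.parse(atob(state)) : []
  };
}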



I know it's not a proper ontology - yet people still like it for some
reason. Here, however, it fails with "Ontology could not be loaded. 
Conversion failed."

http://vowl.visualdataweb.org/webvowl/#iri=http://purl.org/dc/terms/


This is because the vocabulary lacks some OWL constructs that are 
expected by WebVOWL (e.g., an ontology IRI). We will try to fix it and 
also allow the parsing of such RDFS vocabularies in the next WebVOWL 
release.


Many thanks again for your feedback - very useful,
Steffen



On 19 December 2014 at 15:49, Steffen Lohmann
steffen.lohm...@vis.uni-stuttgart.de  wrote:

Hi all,

we are glad to announce the release of WebVOWL 0.3, which integrates our
OWL2VOWL converter now. WebVOWL works in modern web browsers without any
installation so that ontologies can be instantly visualized. Check it out
at: http://vowl.visualdataweb.org/webvowl.html

To the best of our knowledge, WebVOWL is the first comprehensive ontology
visualization completely based on open web standards (HTML, SVG, CSS,
JavaScript). It implements VOWL 2, which has been designed in a
user-oriented process and is clearly specified at
http://vowl.visualdataweb.org  (incl. references to scientific papers).

Please note that:
- WebVOWL is a tool for ontology visualization, not for ontology modeling.
- VOWL considers many language constructs of OWL but not all of them yet.
- VOWL focuses on the visualization of the TBox of small to medium-size
ontologies but does not sufficiently support the visualization of very large
ontologies and detailed ABox information for the time being.
- WebVOWL 0.3 implements the VOWL 2 specification nearly completely, but the
current version of the OWL2VOWL converter does not.
These issues are subject to future work.

Have fun with it!

On behalf of the VOWL team,
Steffen

--
Dr. Steffen Lohmann . Visualization and Interactive Systems (VIS)
University of Stuttgart . Universitaetstrasse 38 . D-70569 Stuttgart
Phone: +49 711 685-88438 . http://www.vis.uni-stuttgart.de/~lohmansn






Re: [Ann] WebVOWL 0.3 - Visualize your ontology on the web

2015-01-09 Thread Steffen Lohmann

Hi John,

On 09.01.2015 11:59, John Walker wrote:
I see under "Selection Details" that there is a count of instances when 
a class is selected.
Is there any option to show or otherwise enumerate the instances of 
a class?
Maybe showing the instances in the graph might clutter things up 
(could add a filter for this), but simply adding a list of links under 
"Selection Details" would be a good start.


Good point. It is already on our list of issues and also defined in the 
VOWL 2 spec: http://vowl.visualdataweb.org/v2/#individuals
We will not go for the VOWL 1 representation in WebVOWL for several 
reasons, but use the recommended implementation in the sidebar (as you 
also proposed). We may, however, not list all individuals but only a 
subset (if there are many), as VOWL focuses on the TBox and as VOWL-JSON 
files could become quite large if we included all individuals. We 
may use separate JSON files for TBox and ABox in some future version of 
WebVOWL, but this requires some major structural changes.




This would be useful if the ontology contains, for example, code lists.


Do you have a good example of such an ontology? Or of any other 
ontology that contains many individuals?


Cheers,
Steffen

--
Dr. Steffen Lohmann . Visualization and Interactive Systems (VIS)
University of Stuttgart . Universitaetstrasse 38 . D-70569 Stuttgart
Phone: +49 711 685-88438 . http://www.vis.uni-stuttgart.de/~lohmansn



[CfP] VOILA @ ISWC 2015 - Visualizations and User Interfaces for Ontologies and Linked Data

2015-04-27 Thread Steffen Lohmann
Notification: July 30, 2015
Camera-ready: August 14, 2015


Attendance
==

Note that workshop attendees cannot register for the workshop only, but 
need to register for the main conference, as well.



Organizers
==

Valentina Ivanova, Linköping University, Sweden
Patrick Lambrix, Linköping University, Sweden
Steffen Lohmann, University of Stuttgart, Germany
Catia Pesquita, University of Lisbon, Portugal





[Ann] QueryVOWL

2015-06-11 Thread Steffen Lohmann

Hi all,

last week we presented a prototype at ESWC that implements our 
VOWL-based visual query language (QueryVOWL) for SPARQL-based Linked 
Data querying. Check it out at: http://queryvowl.visualdataweb.org


Note that it has mainly been developed to demonstrate the QueryVOWL 
approach and should not be considered a mature tool (e.g., it contains 
some known bugs). It does also not implement all current features of the 
visual language that are described at 
http://vowl.visualdataweb.org/queryvowl/v1/index.html


The web demo is configured for the DBpedia endpoint. It can only be used 
if the DBpedia endpoint is available and may slow down if many people 
access it simultaneously. For these cases, we also provide a short 
screencast of the tool.


That being said, enjoy the demo (or video)!

On behalf of the QueryVOWL team,
Steffen

--
Dr. Steffen Lohmann . Visualization and Interactive Systems (VIS)
University of Stuttgart . Universitaetstrasse 38 . D-70569 Stuttgart
Phone: +49 711 685-88438 . http://www.vis.uni-stuttgart.de/~lohmansn





[CfP] VOILA @ ISWC 2015 - Visualizations and User Interfaces for Ontologies and Linked Data

2015-06-11 Thread Steffen Lohmann
Notification: July 30, 2015
Camera-ready: August 14, 2015


Attendance
==

Note that workshop attendees cannot register for the workshop only, but need to 
register for the main conference, as well.


Organizers
==

Valentina Ivanova, Linköping University, Sweden
Patrick Lambrix, Linköping University, Sweden
Steffen Lohmann, University of Stuttgart, Germany
Catia Pesquita, University of Lisbon, Portugal






Job Openings at Fraunhofer IAIS in Bonn: Project Manager, Business Developer, and Software Engineers

2015-11-12 Thread Steffen Lohmann
Fraunhofer IAIS is currently seeking talented project managers, 
business developers, and software engineers for a couple of Semantic Web 
/ Linked Data related projects that will start soon. We are particularly 
looking for people with German language skills this time. Therefore, the 
job descriptions are only available in German:


Project Manager: http://www.iais.fraunhofer.de/6322.html
Business Developer: http://www.iais.fraunhofer.de/6323.html
Software Engineer: http://www.iais.fraunhofer.de/6324.html

If you have any questions regarding these jobs, do not hesitate to 
contact me or any other member of the EIS team: 
http://eis.iai.uni-bonn.de/Team.html





CFP: Visualization and Interaction for Ontologies and Linked Data (VOILA) at ISWC 2016

2016-05-03 Thread Steffen Lohmann

CALL FOR PAPERS

VOILA 2016 - Visualization and Interaction for Ontologies and Linked Data

2nd International Workshop at ISWC 2016, 15th International Semantic Web 
Conference
October 17 or 18, 2016, Kobe, Japan

http://voila2016.visualdataweb.org

--
Abstracts Deadline: June 27, 2016
Submission Deadline: July 1, 2016
--


Motivation and Objectives
==

'A picture is worth a thousand words', we often say, yet many areas are in 
need of sophisticated visualization techniques, and the Semantic Web is no 
exception. The size and complexity of ontologies and Linked Data in the 
Semantic Web constantly grow, and the diverse backgrounds of the users and 
application areas multiply at the same time. Providing users with visual 
representations and intuitive interaction techniques can significantly aid the 
exploration and understanding of the domains and knowledge represented by 
ontologies and Linked Data.

Ontology visualization is not a new topic and a number of approaches have 
become available in recent years, some of them already well established, 
particularly in the field of ontology modeling. In other areas of ontology 
engineering, such as ontology alignment and debugging, although several tools 
have recently been developed, few provide a graphical user interface, not to 
mention navigational aids or comprehensive visualization and interaction 
techniques.

In the presence of a huge network of interconnected resources, one of the 
challenges faced by the Linked Data community is the visualization of 
multidimensional datasets to provide for efficient overview, exploration and 
querying tasks, to mention just a few. With the focus shifting from a Web of 
Documents to a Web of Data, changes in the interaction paradigms are in demand 
as well. Novel approaches also need to take into consideration the 
technological challenges and opportunities given by new interaction contexts, 
ranging from mobile, touch, and gesture interaction to visualizations on large 
displays, and encompassing highly responsive web applications.

There is no one-size-fits-all solution; different use cases demand different 
visualization and interaction techniques. Ultimately, providing better user 
interfaces, visual representations and interaction techniques will foster user 
engagement, likely lead to higher-quality results in the different applications 
employing ontologies, and promote the consumption of Linked Data.


Topics of Interest
==

Topics, subjects, and contexts of interest include (but are not limited to):

* Topics:
- visualizations
- user interfaces
- visual analytics
- requirements analysis
- case studies
- user evaluations
- cognitive aspects

* Subjects:
- ontologies
- linked data
- ontology engineering (development, collaboration, ontology design 
patterns, alignment, debugging, evolution, provenance, etc.)

* Contexts:
- classical interaction contexts (desktop, keyboard, mouse, etc.)
- novel interaction contexts (mobile, touch, gesture, etc.)
- special settings (large, high-resolution, and multiple displays, etc.)
- specific user groups and needs (people with disabilities, domain 
experts, etc.)


Submission Guidelines
==

Paper submission and reviewing for this workshop will be electronic via 
EasyChair. The papers should be written in English, following the Springer LNCS 
format, and be submitted in PDF on or before July 1, 2016. Paper abstracts are 
due by June 27, 2016.

The following types of contributions are welcome. The recommended page length 
is given in brackets. There is no strict page limit but the length of a paper 
should be commensurate with its contribution.

Full research papers (8-12 pages);
Experience papers (8-12 pages);
Position papers (6-8 pages);
Short research papers (4-6 pages);
System papers (4-6 pages).

Accepted papers will again be published as a volume in the CEUR Workshop 
Proceedings series.


Special Issue in JWS
==

We are preparing a special issue on the workshop topic for the Journal of Web 
Semantics. More information about it will be announced soon.


Important Dates
==

Abstracts: June 27, 2016
Submission: July 1, 2016
Notification: July 29, 2016
Camera-ready: August 12, 2016


Organizers
==

Valentina Ivanova, Linköping University, Sweden
Patrick Lambrix, Linköping University, Sweden
Steffen Lohmann, Fraunhofer IAIS, Germany
Catia Pesquita, University of Lisbon, Portugal





Last CfP: Interacting with Multimedia Content in the Social Semantic Web

2008-09-09 Thread Steffen Lohmann


Submission Deadline this Friday!!



  IMC-SSW 2008 - CALL FOR PAPERS

International Workshop on Interacting with Multimedia Content
 in the Social Semantic Web
  (IMC-SSW 2008)

http://aksw.org/Events/2008/IMCSSW

in conjunction with the 3rd International Conference on Semantic
and Digital Media Technologies (SAMT 2008)
Koblenz, Germany, December 2008

 http://samt2008.uni-koblenz.de/

  Submission Deadline:
  14th September 2008


Media sharing and social networking websites have attracted many
millions of users, resulting in vast collections of user-generated
content. This content is typically poorly structured and spread over
several platforms, each supporting specific media types. With the
increasing growth and diversity of these websites, new ways to access
and manage the content are required - both within and across web
platforms.


TOPICS OF INTEREST

This workshop focuses on the interaction with this multimedia content.
We are particularly interested in contributions that follow Web 2.0
principles of simplicity and/or social navigation in combination with
the representation, annotation, and linking power of the Semantic Web.
Topics of interest include, but are not limited to:

* Navigating and visualizing semantically annotated multimedia content
* Social semantic tagging of multimedia content
* Social semantic recommendation and search of multimedia content
* Semantic mashups of multimedia content across websites
* Semantic annotation of multimedia content in Wikis and Weblogs
* Interaction with multimedia content on the semantic desktop
* Semantic user interfaces for media sharing and social networking websites
* Experiences and use cases regarding the interaction with multimedia
content on the Social Semantic Web


TARGET AUDIENCE

* Researchers of the Semantic Web and Knowledge Representation
communities interested in interaction aspects
* Researchers of the Multimedia and Human-Computer-Interaction
communities interested in semantic aspects
* Developers of web applications, interaction designers, and
information architects


SUBMISSION TYPES

* Full papers (8-12 Pages)
* Position papers (4-8 Pages)
* Poster and Demo papers (3-4 Pages)


IMPORTANT DATES

* 14th September: Submission Deadline
* 5th October: Author Notification
* 26th October: Camera-ready
* 3rd December: Workshop


COMMITTEES

Organizers:
Sören Auer, University of Leipzig, Germany
Sebastian Dietzold, University of Leipzig, Germany
Steffen Lohmann, University of Duisburg, Germany
Jürgen Ziegler, University of Duisburg, Germany

Program Committee:
Scott Bateman, University of Saskatchewan, Canada
Simone Braun, FZI Research Center for IT, Germany
Davide Eynard, Politecnico di Milano, Italy
Michael Hausenblas, Johanneum Research, Austria
Andreas Hess, Lycos Europe GmbH, Germany
Knud Möller, DERI Galway, Ireland
Jasminko Novak, University of Zurich, Switzerland
Alexandre Passant, Université Paris-Sorbonne, France
Yves Raimond, Queen Mary University of London, UK
Harald Sack, University of Potsdam, Germany
Sebastian Schaffert, Salzburg Research, Austria
Andreas Schmidt, FZI Research Center for IT, Germany


-- Event resources ---

RSS-Feed: http://blog.aksw.org/feed/atom/?tag=imcssw
iCal Calendar: 
http://www.google.com/calendar/ical/l0m29bt3mp79jhobmf3u8ufu88%40group.calendar.google.com/public/basic.ics 

RDF: 
http://demo.ontowiki.net/resource/export/?r=http%3A%2F%2F3ba.se%2Fconferences%2FIMCSSW2008&m=http%3A%2F%2F3ba.se%2Fconferences%2F&output=xml&file=imc-ssw-2008.rdf 

OntoWiki-View: 
http://demo.ontowiki.net/resource/view/IMCSSW2008?m=http%3A%2F%2F3ba.se%2Fconferences%2F 



---




RelFinder - Version 1.0 released

2010-03-15 Thread Steffen Lohmann

Hi all,

we are happy to announce the release of version 1.0 of the RelFinder.

The RelFinder is a tool that extracts and visualizes relationships 
between given objects in Semantic Web datasets and makes these 
relationships interactively explorable.
It advances the idea of the DBpedia Relationship Finder by offering 
improved visualization and exploration techniques and working with any 
dataset that provides SPARQL access. Some key features are:


- relationships even between more than two given objects (all visualized 
in one 'relationship graph')
- easy configurability of the accessed dataset and search parameters 
(via settings menu or config file)
- aggregations and global filters (based on relationship length, class, 
link type, connectivity)
- highly interactive visualization (highlighting, red thread, pick & pin, 
details on demand, animations)
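
For those curious what "SPARQL access" boils down to, here is a minimal 
JavaScript sketch of the kind of query a relationship finder can send to 
an endpoint to get length-2 paths between two given objects; it is 
illustrative only and not the RelFinder's actual query generation:

// Assumed endpoint and example resources; any SPARQL endpoint would do.
const endpoint = "http://dbpedia.org/sparql";
const query =
  "SELECT ?p1 ?middle ?p2 WHERE { " +
  "  <http://dbpedia.org/resource/Google> ?p1 ?middle . " +
  "  ?middle ?p2 <http://dbpedia.org/resource/Apple> . " +
  "} LIMIT 25";

fetch(endpoint + "?query=" + encodeURIComponent(query),
      { headers: { Accept: "application/sparql-results+json" } })
  .then(function (r) { return r.json(); })
  .then(function (data) {
    data.results.bindings.forEach(function (b) {
      console.log(b.p1.value, b.middle.value, b.p2.value);  // one length-2 path
    });
  });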


The RelFinder is implemented in Adobe Flex and requires only a web 
browser with the Flash Player installed. Give it a try at 
http://relfinder.semanticweb.org and discover relationships that you 
have not been aware of before ;-)


Thanks go to Jens Lehmann, Jürgen Ziegler, Lena Tetzlaff, Laurent 
Alquier & Sebastian Hellmann.


Best regards,
Philipp, Timo & Steffen



Re: RelFinder - Version 1.0 released

2010-03-15 Thread Steffen Lohmann

Thanks Kingsley,

I quickly added URIBurner as a dataset but cannot see the added value 
w.r.t. the RelFinder - your Google-Apple example produces mainly 
seeAlso links, which are not that helpful for discovering new 
relationships. Here is the link with the parameters:

http://relfinder.semanticweb.org/RelFinder.swf?obj1=R29vZ2xlfGh0dHA6Ly9kYnBlZGlhLm9yZy9yZXNvdXJjZS9Hb29nbGU=obj2=QXBwbGV8aHR0cDovL2RicGVkaWEub3JnL3Jlc291cmNlL0FwcGxlname=VVJJQnVybmVyabbreviation=YnVybmVydescription=QSBzZXJ2aWNlIHRoYXQgZGVsaXZlcnMgUkRGLWJhc2VkIHN0cnVjdHVyZWQgZGVzY3JpcHRpb25zIG9mIFdlYiBhZGRyZXNzYWJsZSByZXNvdXJjZXMgKGRvY3VtZW50cyBvciByZWFsIHdvcmxkIG9iamVjdHMpIGluIGEgdmFyaWV0eSBvZiBmb3JtYXRzIHRocm91Z2ggR2VuZXJpYyBIVFRQIFVSSXMuendpointURI=aHR0cDovL3VyaWJ1cm5lci5jb20vc3BhcnFsisVirtuoso=dHJ1ZQ==useProxy=dHJ1ZQ==autocompleteURIs=aHR0cDovL3d3dy53My5vcmcvMjAwMC8wMS9yZGYtc2NoZW1hI2xhYmVsimageURIs=aHR0cDovL3htbG5zLmNvbS9mb2FmLzAuMS9kZXBpY3Rpb24=linkURIs=aHR0cDovL3htbG5zLmNvbS9mb2FmLzAuMS9wYWdlmaxRelationLegth=Mg==
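
For reference, the parameters in this link are Base64-encoded "label|URI" 
pairs, which can be inspected or rebuilt in a browser console (a 
convenience sketch derived from the link above, not from the RelFinder 
source code):

atob("R29vZ2xlfGh0dHA6Ly9kYnBlZGlhLm9yZy9yZXNvdXJjZS9Hb29nbGU=");
// -> "Google|http://dbpedia.org/resource/Google"   (label|URI)

function relFinderObjectParam(label, uri) {
  return btoa(label + "|" + uri);   // inverse of the decoding above
}

relFinderObjectParam("Apple", "http://dbpedia.org/resource/Apple");
// -> "QXBwbGV8aHR0cDovL2RicGVkaWEub3JnL3Jlc291cmNlL0FwcGxl"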

Once again, this demonstrates that the variety of link types is a big 
challenge when automatically generating RDF data (link variety is indeed 
important for the RelFinder).


Anyway - thanks for the pointer. We are generally interested in 
integrating further datasets into the RelFinder's default installation (as 
long as they produce valuable relationships). Further ideas and links 
are welcome!


Steffen

--
Kingsley Idehen wrote:

Steffen,

Very cool!

Please add: http://uriburner.com/sparql to the default list of SPARQL 
endpoints. The effect of doing this ups the implications of this tool 
exponentially! Try it yourself e.g. Google to Apple (use the DBpedia 
URIs).


The density of the graph and the response time provide quite an 
experience for the user (especially a Linked Data neophyte).


Notes about URIBurner:

1. Quad Store is populated progressively with contributions by anyone 
that uses the service to seek structured descriptions of an HTTP 
accessible resource via ODE bookmarklets, browser extensions, or 
SPARQL FROM Clause that references external Data Sources
2. As part of the graph construction process it not only performs RDF 
model transformation (re. non RDF data sources); it also performs LOD 
cloud lookups and joins, ditto across 30+ Web 2.0 APIs
3. Anyone with a Virtuoso installation that enables the RDF Mapper VAD 
(which is how the Sponger Middleware is packaged) ends up with their 
own personal or service specific variant of URIBurner.


Again, great stuff - this tool is going to simplify the message a lot 
re. Linked Data and its penchant for serendipitous discovery of 
relevant things.







ICWE 2011: Call for Tutorials

2010-12-01 Thread Steffen Lohmann
Since a special focus of ICWE 2011 will be on Web Data Engineering, it 
would be great to also have an LOD-related tutorial next year.


-
ICWE 2011: CALL FOR TUTORIALS

11th International Conference on Web Engineering (ICWE 2011)

June 20-24, 2011, Paphos, Cyprus

http://icwe2011.webengineering.org/

Submission Deadline: February 14, 2011
-

The International Conference on Web Engineering (ICWE) aims at promoting 
scientific and practical excellence on Web Engineering, and at bringing 
together researchers and practitioners working in technologies, 
methodologies, tools, and techniques used to develop and maintain 
Web-based applications leading to better systems, and thus to enabling 
and improving the dissemination and use of content and services through 
the Web.



*** Call for Tutorials ***

ICWE 2011 invites proposals for tutorials that will provide conference 
participants with the opportunity to gain knowledge and insights in a 
broad range of Web Engineering areas. Participants at the tutorials 
include scientists and practitioners, who are seeking to gain a better 
understanding of the technologies, methodologies, tools, and techniques 
used to develop and maintain Web-based applications.


Proposed tutorials can be either half-day long (3 hours) or full-day 
long (6 hours). Tutorial proposals should address issues related to the 
topics of the conference. Topics not mentioned in the topics of interest 
list are also encouraged as long as they are of special relevance to 
ICWE 2011.


Tutorial presenters will receive an honorarium depending on the number 
of attendees and one free registration for ICWE 2011. The precise amount 
of the honorarium will be determined after the early registration 
deadline. This amount is per tutorial, not per speaker. Travel and 
accommodation arrangements are up to the speakers.


Tutorials that have fewer than 8 early registrants will face the risk of 
cancellation.



*** Submission instructions ***

Tutorial proposals must clearly identify the intended audience and its 
assumed background. Proposals must be no more than 3 pages and must 
provide a sense of both the scope of the tutorial and depth within the 
scope. The intended length of the tutorial (3 or 6 hours) should also be 
indicated, together with justification that a high-quality presentation 
will be achieved within the chosen time frame. Proposals should also 
include contact information and a brief bio of the presenters. If the 
tutorial has been given previously, the proposal should include where 
the tutorial has been given and how it will be modified for ICWE 2011.

In summary, tutorial proposals must include the following information:

 * Tutorial title.
 * Tutorial abstract: 1-2 paragraphs describing the goals and contents 
of the tutorial (will be included in the conference registration materials).

 * Intended Length: half-day (3 hours) or full-day (6 hours).
 * Description of the tutorial providing its scope (general topic area) 
and depth, as well as its aims and learning outcomes.

 * Intended audience and assumed background knowledge.
 * Presenter(s) contact information: name, affiliation, email, mailing 
address, phone, fax.
 * Presenter(s) short biography demonstrating that she/he is an expert 
on the subject of the tutorial (will be included in the conference 
registration materials).

 * Relevant references.
 * Indication if the proposed tutorial has been given previously.
 * Sample slides of the tutorial, if available.

Proposals must be in PDF format and submitted electronically to the 
tutorial chairs (tutorials [at] icwe2011.webengineering.org) no later 
than the submission deadline.


Presenters of accepted tutorials will be required to prepare a one-page 
summary of the tutorial by the camera-ready copy deadline to be included 
in the Springer LNCS main conference proceedings. Presenters should also 
make the slides available to the ICWE 2011 participants, both in 
hardcopy and online.



*** Important Dates ***

 * Submission deadline: February 14, 2011
 * Notification of acceptance: April 1, 2011
 * Camera-ready version: April 28, 2011


*** Tutorial Chairs ***

 * Cesare Pautasso, University of Lugano, Switzerland
 * Steffen Lohmann, Universidad Carlos III de Madrid, Spain


*** Contact Information ***

In case of inquiries, please contact the tutorial chairs at: tutorials 
[at] icwe2011.webengineering.org






ANN: Modular Unified Tagging Ontology (MUTO)

2011-11-17 Thread Steffen Lohmann

Hi all,

we are pleased to announce the release of version 1.0 of the Modular 
Unified Tagging Ontology (MUTO). MUTO is an attempt to unify the core concepts 
of existing tagging ontologies (such as TAGS, TagOnt, MOAT, etc.) in one 
consistent schema. We reviewed available ontologies and created a 
compact core ontology (in OWL Lite) that supports different forms of 
tagging, such as private, group, automatic, and semantic tagging. MUTO 
should thus not be considered as yet another tagging ontology but as a 
unification of existing approaches. We hope it will be useful to the 
community and a helpful starting point for future extensions.


For more information, please see the specification at 
http://purl.org/muto/core# (redirect to http://muto.socialtagging.org)


Best,
Steffen

--
Steffen Lohmann - DEI Lab
Computer Science Department, Universidad Carlos III de Madrid
Avda de la Universidad 30, 28911 Leganés, Madrid (Spain), Office: 22A20
Phone: +34 916 24-9419, http://www.dei.inf.uc3m.es/slohmann/





Re: ANN: Modular Unified Tagging Ontology (MUTO)

2011-11-18 Thread Steffen Lohmann

On 17.11.2011 17:35, Kingsley Idehen wrote:

On 11/17/11 9:34 AM, Steffen Lohmann wrote:

Hi all,

we are pleased to announce the release of version 1.0 of the Modular 
Unified Tagging Ontology (MUTO). MUTO is an attempt to unify the 
core concepts of existing tagging ontologies (such as TAGS, TagOnt, 
MOAT, etc.) in one consistent schema. We reviewed available 
ontologies and created a compact core ontology (in OWL Lite) that 
supports different forms of tagging, such as private, group, 
automatic, and semantic tagging. MUTO should thus not be considered 
as yet another tagging ontology but as a unification of existing 
approaches. We hope it will be useful to the community and a helpful 
starting point for future extensions.


For more information, please see the specification at 
http://purl.org/muto/core# (redirect to http://muto.socialtagging.org)


Best,
Steffen



Great stuff!

Wondering if you could add rdfs:isDefinedBy relations to this 
ontology? Doing so makes it much more navigable.


Sure, I just forgot to add it. Thanks for pointing me to this, Kingsley. The 
rdfs:isDefinedBy statements are now included.




Here's the current state of affairs showcasing limited TBox navigability:

1. 
http://linkeddata.uriburner.com/describe/?url=http://purl.org/muto/core
2. 
http://linkeddata.uriburner.com/describe/?url=http%3A%2F%2Fpurl.org%2Fmuto%2Fcore%23tagMeaning&urilookup=1


Once you add the isDefinedBy relations, the ontology description page 
becomes the launch pad for powerful FYN exploration across both the 
ontology TBox and associated ABox.




Looks very nice.

Thanks,
Steffen

--
Steffen Lohmann - DEI Lab
Computer Science Department, Universidad Carlos III de Madrid
Avda de la Universidad 30, 28911 Leganés, Madrid (Spain), Office: 22A20
Phone: +34 916 24-9419, http://www.dei.inf.uc3m.es/slohmann/





Re: ANN: Modular Unified Tagging Ontology (MUTO)

2011-11-18 Thread Steffen Lohmann

On 17.11.2011 20:03, Richard Cyganiak wrote:

Hi Steffen,

On 17 Nov 2011, at 14:34, Steffen Lohmann wrote:

MUTO should thus not be considered as yet another tagging ontology but as a 
unification of existing approaches.

I'm curious why you decided not to include mappings (equivalentClass, 
subProperty etc) to the existing approaches.


Good point, Richard. I thought about it but finally decided to separate 
these alignments from the core ontology - hence the MUTO Mappings 
Module (http://muto.socialtagging.org/core/v1.html#Modules).


SIOC and SKOS can be nicely reused but aligning MUTO with the nine 
reviewed tagging ontologies is challenging and would result in a number 
of inconsistencies. This is mainly due to a different conceptual 
understanding of tagging and folksonomies in the various ontologies. To 
give some examples:


- Are tags with the same labels merged in the ontology (i.e. are they one 
instance)?

- Is the number of tags per tagging limited to one or not?
- In case of semantic tagging: Are single tags or complete taggings 
disambiguated?

- How are the creators of taggings linked?
- Are tags from private taggings visible to other users or not?

Apart from that, I would run the risk that MUTO is no longer OWL Lite/DL, 
which I consider important for a tagging ontology (reasoning over 
folksonomies).


The current version of the MUTO Mappings Module provides alignments to 
Newman's popular TAGS ontology (mainly for compatibility reasons). Have 
a look at it and you'll get an idea of the difficulties in correctly 
aligning MUTO with existing tagging ontologies.


Best,
Steffen

--
Steffen Lohmann - DEI Lab
Computer Science Department, Universidad Carlos III de Madrid
Avda de la Universidad 30, 28911 Leganés, Madrid (Spain), Office: 22A20
Phone: +34 916 24-9419, http://www.dei.inf.uc3m.es/slohmann/





Ann: VOWL 2, ProtegeVOWL, WebVOWL

2014-05-12 Thread Steffen Lohmann

Hi all,

we are glad to announce that version 2.0 of the Visual Notation for OWL 
Ontologies (VOWL) was published a few weeks ago. Along with the 
specification, we released a Protégé plugin and a web tool that 
implement large parts of VOWL. The works may be of interest to those of 
you who would like to visualize (smaller) ontologies in an intuitive 
way. They can be tested online at: http://vowl.visualdataweb.org


We are currently working on an improved JSON structure and additional 
functionality for the web tool (WebVOWL) to allow for an easier 
transformation of OWL to JSON and to provide a more compact 
visualization for larger ontologies. We are also planning to integrate 
further visual elements from the spec into the Protégé plugin 
(ProtégéVOWL). VOWL itself will also be advanced in the future to 
consider further OWL elements (particularly from OWL 2) and additional 
cases.


Note that the focus of VOWL 2 is on the ontology schema (i.e. the 
classes, properties and datatypes, sometimes called TBox), while the 
focus of VOWL 1 was on the integrated representation of classes and 
individuals (TBox and ABox) - which is, however, still possible with VOWL 2.


On behalf of the VOWL team,
Steffen

--
Steffen Lohmann . Institute for Visualization and Interactive Systems
University of Stuttgart . Universitaetstrasse 38 . D-70569 Stuttgart
Phone: +49 711 685-88438 . http://www.vis.uni-stuttgart.de/~lohmansn





Re: Ann: VOWL 2, ProtegeVOWL, WebVOWL

2014-05-13 Thread Steffen Lohmann

Thanks for your comments, Kingsley, Alfredo.

A live service where you can upload an OWL ontology and get it 
visualized with WebVOWL would indeed be very nice - but we are not there 
yet. The next step is a new JSON structure that makes it easier to 
transform ontologies into the WebVOWL format and to integrate WebVOWL 
with other projects. In the meantime, the VOWL plugin for Protégé 
(ProtégéVOWL) can be used to visualize and explore arbitrary ontologies: 
http://vowl.visualdataweb.org/protegevowl.html (note that it does not 
implement the complete VOWL spec and works with Java 1.7 at the moment).


Steffen

--
On 12.05.2014 20:11, Kingsley Idehen wrote:

On 5/12/14 1:28 PM, Alfredo Serafini wrote:
do you plan to release it for general usage and integration on live 
services?
Well that's for Steffen Lohmann and Co. to answer, I simply indicated 
(by my reply) that this project has produced a very useful tool :)


Kingsley



2014-05-12 19:27 GMT+02:00 Alfredo Serafini ser...@gmail.com 
mailto:ser...@gmail.com:


wonderful!! :-)


2014-05-12 17:30 GMT+02:00 Kingsley Idehen
kide...@openlinksw.com mailto:kide...@openlinksw.com:

On 5/12/14 11:04 AM, Steffen Lohmann wrote:

Hi all,

we are glad to announce that version 2.0 of the Visual
Notation for OWL Ontologies (VOWL) has been published a
few weeks ago. Along with the specification, we released
a Protégé plugin and a web tool that implement large
parts of VOWL. The works may be of interest to those of
you who would like to visualize (smaller) ontologies in
an intuitive way. They can be tested online at:
http://vowl.visualdataweb.org

We are currently working on an improved JSON structure
and additional functionality for the web tool (WebVOWL)
to allow for an easier transformation of OWL to JSON and
to provide a more compact visualization for larger
ontologies. We are also planning to integrate further
visual elements from the spec into the Protégé plugin
(ProtégéVOWL). VOWL itself will also be advanced in the
future to consider further OWL elements (particularly
from OWL 2) and additional cases.

Note that the focus of VOWL 2 is on the ontology schema
(i.e. the classes, properties and datatypes, sometimes
called TBox), while the focus of VOWL 1 was on the
integrated representation of classes and individuals
(TBox and ABox) - which is, however, still possible with
VOWL 2.

On behalf of the VOWL team,
Steffen

-- 
Steffen Lohmann . Institute for Visualization and

Interactive Systems
University of Stuttgart . Universitaetstrasse 38 .
D-70569 Stuttgart
Phone: +49 711 685-88438 . http://www.vis.uni-stuttgart.de/~lohmansn


Great Job!!

-- 


Regards,

Kingsley Idehen
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen










--

Regards,

Kingsley Idehen 
Founder & CEO
OpenLink Software
Company Web: http://www.openlinksw.com
Personal Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter Profile: https://twitter.com/kidehen
Google+ Profile: https://plus.google.com/+KingsleyIdehen/about
LinkedIn Profile: http://www.linkedin.com/in/kidehen







--
Steffen Lohmann . Institute for Visualization and Interactive Systems
University of Stuttgart . Universitaetstrasse 38 . D-70569 Stuttgart
Phone: +49 711 685-88438 . http://www.vis.uni-stuttgart.de/~lohmansn



Add: Ann: VOWL 2, ProtegeVOWL, WebVOWL

2014-05-14 Thread Steffen Lohmann
As there were several requests: we recompiled the VOWL plugin for 
Protégé so that it now also works with the older Java version 1.6 (which 
is the Java version bundled with the Protégé 4.3 installation). The new 
JAR file (for those who got errors with the old one) is 
4.3). The new JAR file (for those who got errors with the old one) is 
available at: http://vowl.visualdataweb.org/protegevowl.html


Steffen

--
On 12.05.2014 17:04, Steffen Lohmann wrote:

Hi all,

we are glad to announce that version 2.0 of the Visual Notation for 
OWL Ontologies (VOWL) was published a few weeks ago. Along with 
the specification, we released a Protégé plugin and a web tool that 
implement large parts of VOWL. The works may be of interest to those 
of you who would like to visualize (smaller) ontologies in an 
intuitive way. They can be tested online at: 
http://vowl.visualdataweb.org


We are currently working on an improved JSON structure and additional 
functionality for the web tool (WebVOWL) to allow for an easier 
transformation of OWL to JSON and to provide a more compact 
visualization for larger ontologies. We are also planning to integrate 
further visual elements from the spec into the Protégé plugin 
(ProtégéVOWL). VOWL itself will also be advanced in the future to 
consider further OWL elements (particularly from OWL 2) and additional 
cases.


Note that the focus of VOWL 2 is on the ontology schema (i.e. the 
classes, properties and datatypes, sometimes called TBox), while the 
focus of VOWL 1 was on the integrated representation of classes and 
individuals (TBox and ABox) - which is, however, still possible with 
VOWL 2.


On behalf of the VOWL team,
Steffen




--
Steffen Lohmann . Institute for Visualization and Interactive Systems
University of Stuttgart . Universitaetstrasse 38 . D-70569 Stuttgart
Phone: +49 711 685-88438 . http://www.vis.uni-stuttgart.de/~lohmansn





CfP: VISUAL workshop @ EKAW

2014-09-04 Thread Steffen Lohmann
- (semi-)automatic hypothesis generation
- augmented human reasoning
- novel visualizations of data and metadata
- visual approaches for semantic similarity measurement
- exploratory information visualization
- domain-specific visual analytics
- interactive systems in business intelligence
- cognition and sensemaking in visual contexts
- evaluation of interactive systems


Submission Guidelines
==

Paper submission and reviewing for this workshop will be electronic via 
EasyChair. The papers should be written in English, following Springer 
LNCS format, and be submitted in PDF.


The following types of contributions are welcome:

- Full research papers (8-12 pages);
- Experience papers (8-12 pages);
- Position papers (6-8 pages);
- Short research papers (4-6 pages);
- System papers (4-6 pages).

Accepted papers will be published as a volume in the CEUR Workshop 
Proceedings series.



Important Dates
==

- Submission: September 19, 2014
- Notification: October 17, 2014
- Camera-ready: November 7, 2014
- Workshop: November 24/25, 2014


Organizers
==

- Valentina Ivanova, Linköping University, Sweden
- Tomi Kauppinen, Aalto University, Finland, and University of Bremen, 
Germany

- Steffen Lohmann, University of Stuttgart, Germany
- Suvodeep Mazumdar, The University of Sheffield, UK
- Catia Pesquita, University of Lisbon, Portugal
- Toomas Timpka, Linköping University, Sweden
- Kai Xu, Middlesex University, UK





Re: CfP: VISUAL workshop @ EKAW

2014-09-04 Thread Steffen Lohmann

Sarven,

I am forwarding your questions to Tomi, who is responsible for 
linkedscience.org. We just use that domain to host the web page of our 
workshop. The page could also have been hosted on any other domain, 
since I do not see a direct relation between linkedscience.org and our 
workshop (but maybe Tomi does).


Best,
Steffen

--
On 04.09.2014 11:51, Sarven Capadisli wrote:

On 2014-09-04 11:36, Steffen Lohmann wrote:

Submission Guidelines
==

Paper submission and reviewing for this workshop will be electronic via
EasyChair. The papers should be written in English, following Springer
LNCS format, and be submitted in PDF.


I am struggling to understand the role that PDF plays towards Linked 
Science.


Would you mind helping me understand:

* What is Linked Science?

* How does PDF (better?) contribute towards fulfilling the Linked 
Science promise in comparison to the alternatives methods?


* At what granularity is the information in the papers that's 
submitted to this Linked Science workshop preserved? Which information 
is not? And, most importantly, which information should be preserved 
for future research(ers)? What was your decision process?



Thanks,

-Sarven
http://csarven.ca/#i




--
Steffen Lohmann . Institute for Visualization and Interactive Systems
University of Stuttgart . Universitaetstrasse 38 . D-70569 Stuttgart
Phone: +49 711 685-88438 . http://www.vis.uni-stuttgart.de/~lohmansn





Updated CfP: Visualizations and User Interfaces for Knowledge Engineering and Linked Data Analytics

2014-09-15 Thread Steffen Lohmann
Topics of interest include (but are not limited to):

- interactive semantic systems
- design of interactive systems
- visual pattern discovery
- (semi-)automatic hypothesis generation
- augmented human reasoning
- novel visualizations of data and metadata
- visual approaches for semantic similarity measurement
- exploratory information visualization
- domain-specific visual analytics
- interactive systems in business intelligence
- cognition and sensemaking in visual contexts
- evaluation of interactive systems


Submission Guidelines
==

Paper submission and reviewing for this workshop will be electronic via 
EasyChair. The papers should be written in English, following Springer 
LNCS format, and be submitted in PDF.


The following types of contributions are welcome:

- Full research papers (8-12 pages);
- Experience papers (8-12 pages);
- Position papers (6-8 pages);
- Short research papers (4-6 pages);
- System papers (4-6 pages).

Accepted papers will be published as a volume in the CEUR Workshop 
Proceedings series.



Important Dates
==

- Submission: September 30, 2014
- Notification: October 21, 2014
- Camera-ready: November 11, 2014
- Workshop: November 24 or 25, 2014


Organizers
==

- Valentina Ivanova, Linköping University, Sweden
- Tomi Kauppinen, Aalto University, Finland, and University of Bremen, 
Germany

- Steffen Lohmann, University of Stuttgart, Germany
- Suvodeep Mazumdar, The University of Sheffield, UK
- Catia Pesquita, University of Lisbon, Portugal
- Toomas Timpka, Linköping University, Sweden
- Kai Xu, Middlesex University, UK

--
Steffen Lohmann . Institute for Visualization and Interactive Systems
University of Stuttgart . Universitaetstrasse 38 . D-70569 Stuttgart
Phone: +49 711 685-88438 . http://www.vis.uni-stuttgart.de/~lohmansn






[Ann] WebVOWL 0.3 - Visualize your ontology on the web

2014-12-19 Thread Steffen Lohmann

Hi all,

we are glad to announce the release of WebVOWL 0.3, which integrates our 
OWL2VOWL converter now. WebVOWL works in modern web browsers without any 
installation so that ontologies can be instantly visualized. Check it 
out at: http://vowl.visualdataweb.org/webvowl.html


To the best of our knowledge, WebVOWL is the first comprehensive 
ontology visualization completely based on open web standards (HTML, 
SVG, CSS, JavaScript). It implements VOWL 2, which has been designed in 
a user-oriented process and is clearly specified at 
http://vowl.visualdataweb.org (incl. references to scientific papers).


Please note that:
- WebVOWL is a tool for ontology visualization, not for ontology modeling.
- VOWL considers many language constructs of OWL but not all of them yet.
- VOWL focuses on the visualization of the TBox of small to medium-size 
ontologies but does not sufficiently support the visualization of very 
large ontologies and detailed ABox information for the time being.
- WebVOWL 0.3 implements the VOWL 2 specification nearly completely, but 
the current version of the OWL2VOWL converter does not.

These issues are subject to future work.

Have fun with it!

On behalf of the VOWL team,
Steffen

--
Dr. Steffen Lohmann . Visualization and Interactive Systems (VIS)
University of Stuttgart . Universitaetstrasse 38 . D-70569 Stuttgart
Phone: +49 711 685-88438 . http://www.vis.uni-stuttgart.de/~lohmansn





Re: [Ann] WebVOWL 0.3 - Visualize your ontology on the web

2014-12-23 Thread Steffen Lohmann
Thank you, Colin. I am glad to hear that. Your RDFS ontology looks 
indeed quite nice in WebVOWL.


Have fun with it,
Steffen

--
On 22.12.2014 13:48, Colin Maudry wrote:

Great job!

I'm particularly happy because it shows good support for an RDFS 
ontology 
(http://vowl.visualdataweb.org/webvowl/#iri=http://purl.org/dita/ns).


Lately I see new tools that mostly support OWL ontologies, so thanks a 
lot!


Colin Maudry
@CMaudry



--
Dr. Steffen Lohmann . Visualization and Interactive Systems (VIS)
University of Stuttgart . Universitaetstrasse 38 . D-70569 Stuttgart
Phone: +49 711 685-88438 . http://www.vis.uni-stuttgart.de/~lohmansn



Re: [Ann] WebVOWL 0.3 - Visualize your ontology on the web

2014-12-23 Thread Steffen Lohmann

Kingsley, Timothy, Sarven, Melvin, Ali,

On 22.12.2014 16:20, Kingsley Idehen wrote:
I just want the URI of the current node in the graph to be a live link, 
i.e., an exit point from the visualization tool to the actual source. 
You offer something close to this in the side panel, but it's scoped to 
the entire ontology rather than a selected term.


I am suggesting you make the selected node text, e.g., "Tagging", an 
HTTP URI (hyperlink) via <a href="http://purl.org/muto/core#Tagging">Tagging</a>.


[1] http://susepaste.org/36507989 -- screenshot showing what's 
currently offered 


The URI and link are already there! The labels in the "Selection Details" 
(e.g., "Tagging") are hyperlinks that you can click to go to the 
actual URIs. As this does not seem to be clear enough (and the hyperlink 
URI may not be properly shown in all web browsers), we have already 
discussed adding further tooltips with the URIs to the GUI.
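
As a quick illustration of that tooltip idea (a hypothetical helper and 
container id, not WebVOWL's actual sidebar code):

function selectionDetailsLink(iri, label) {
  const a = document.createElement("a");
  a.href = iri;               // exit point to the actual source
  a.textContent = label;      // e.g. "Tagging"
  a.title = iri;              // tooltip showing the full URI
  a.target = "_blank";        // keep the visualization open
  return a;
}

// "#selection-details" is a made-up container id for this sketch:
document.querySelector("#selection-details")
  .appendChild(selectionDetailsLink("http://purl.org/muto/core#Tagging", "Tagging"));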



On 22.12.2014 17:26, Timothy W. Cook wrote:
You call it the label, Protégé calls it the Description, and in RDF/XML 
it is the URI fragment after the # symbol in the rdf:about attribute. 
So, I am not exactly sure what it is supposed to be called; I call it 
the 'name' for what shows up in the tooltip, which is exactly the 
same thing as what is in the circle, rectangle, etc. on the page.


We display the rdfs:label of the elements in the language that is 
selected in the sidebar. If "IRI-based" is selected as the language, the 
label is generated from the last part of the URI. The tooltips with the 
full label are helpful in cases where long labels are abbreviated in the 
visualization.
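
A rough JavaScript sketch of that behavior (not the actual WebVOWL code; 
the element structure and example values are made up):

function nodeLabel(element, language, maxLength) {
  maxLength = maxLength || 20;
  const full = (language === "IRI-based" || !element.labels || !element.labels[language])
    ? element.iri.split(/[#\/]/).filter(Boolean).pop()   // last part of the IRI
    : element.labels[language];                          // rdfs:label for the language
  const shown = full.length > maxLength ? full.slice(0, maxLength - 1) + "..." : full;
  return { shown: shown, tooltip: full };                // tooltip keeps the full label
}

nodeLabel({ iri: "http://purl.org/muto/core#Tagging",
            labels: { en: "Tagging" } }, "IRI-based");
// -> { shown: "Tagging", tooltip: "Tagging" }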




In the sidebar 'Description' I do have a dc:description inside 
the owl:Ontology definition.  However, it doesn't display in WebVOWL.


Usually, the dc:description annotation for the ontology is shown in the 
sidebar. Here is an example where it works:

http://vowl.visualdataweb.org/webvowl/#iri=http://purl.org/muto/core



But my question was about the possibility of displaying (in tooltip or 
sidebar) other Dublin Core metadata for each class and property.  This 
would be really great documentation about the ontology being viewed.


We plan to add additional elements to the selection details. Dublin Core 
is a candidate here, even though we cannot consider all possible 
vocabularies (remember that VOWL has mainly been designed for OWL and 
not for Dublin Core, SKOS, etc.). We will try to find a more generic 
approach to considering metadata in the future.



On 23.12.2014 00:17, Sarven Capadisli wrote:

I would suggest that you either use '?' and let the server trigger everything
(which is IMO the right thing to do here, and with simpler/better
caching possibilities), or stick to '#' and let JavaScript manage it all
(as it is now).

On 23.12.2014 02:53, Melvin Carvalho wrote:
The standard '?' is a way of creating a cool URI that can be shared, 
bookmarked, etc.


The # character in HTTP is unfortunately overloaded to do a few 
things, which often causes confusion.  Primarily linked data people 
should be aware that the # character is a mechanism to point to linked 
data inside a document (frag ids).  It can be used in a few other ways 
sure, but I think in this case the motivation for hiding the query 
from the server is not high.


You can even let the server ignore the query string in this case and 
just have the split function detect ('#') or ('?')


Thanks for your comments on that. We actually call the server in the 
background to process the ontology files, as we use our OWL2VOWL 
converter, which is based on Java and the OWL API.


Using '?' for the requested ontology IRI and '#' for a part of it (e.g., 
a selected class) sounds quite canonical to me. This will be an issue for 
the next WebVOWL version (but not within the next couple of days ;-) ).
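
A small sketch of how that convention could be handled on the client 
(assumed behavior for a future version, not shipped WebVOWL code):

function parseViewUrl(url) {
  const u = new URL(url);
  return {
    ontologyIri: u.searchParams.get("iri"),             // ?iri=... is seen by the server
    selectedElement: u.hash.replace(/^#/, "") || null   // #Class is handled client-side only
  };
}

parseViewUrl("http://vowl.visualdataweb.org/webvowl/?iri=http://purl.org/muto/core#Tagging");
// -> { ontologyIri: "http://purl.org/muto/core", selectedElement: "Tagging" }
// Note: an ontology IRI that itself ends in '#' would need to be percent-encoded.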



On 22.12.2014 14:01, Ali ABBASSENE wrote:

Is there any open-source version of WebVOWL? Or any stencil of SVG files for 
the VOWL graphical representation? Because I am planning to implement a 
VOWL-enabled editor under the versatile modeler Oryx-editor 
(https://code.google.com/p/oryx-editor/).


I am glad to hear that. Feel free to use VOWL for your editor; the 
notation comes with a Creative Commons license.


WebVOWL is open source! It is released under the MIT license. The files 
are available at http://vowl.visualdataweb.org/webvowl.html#installation


The visual language itself (VOWL 2) is specified at 
http://vowl.visualdataweb.org/v2 . The specification contains the SVG 
code of all VOWL elements (+ style information in a separate CSS).
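
To give a taste of what a VOWL class node boils down to in SVG, here is 
a minimal JavaScript sketch that builds a labelled circle; the class 
name "vowl-class" is made up, and the authoritative shapes and styles 
are the SVG and CSS in the VOWL 2 specification:

const SVG_NS = "http://www.w3.org/2000/svg";

function vowlClassNode(label, x, y, radius) {
  radius = radius || 50;
  const g = document.createElementNS(SVG_NS, "g");
  g.setAttribute("transform", "translate(" + x + "," + y + ")");

  const circle = document.createElementNS(SVG_NS, "circle");
  circle.setAttribute("r", radius);
  circle.setAttribute("class", "vowl-class");   // made-up class name; real styling is in the spec's CSS

  const text = document.createElementNS(SVG_NS, "text");
  text.setAttribute("text-anchor", "middle");
  text.setAttribute("dy", "0.35em");            // vertically center the label
  text.textContent = label;

  g.appendChild(circle);
  g.appendChild(text);
  return g;
}

document.querySelector("svg").appendChild(vowlClassNode("Person", 120, 80));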


Just go ahead! I am looking forward to the result.

Cheers,
Steffen

--
Dr. Steffen Lohmann . Visualization and Interactive Systems (VIS)
University of Stuttgart . Universitaetstrasse 38 . D-70569 Stuttgart
Phone: +49 711 685-88438 . http://www.vis.uni-stuttgart.de/~lohmansn