Call for local organizers of European Data Forum 2017 in UK or Malta

2016-03-13 Thread Sören Auer
Call for Expressions of Interest to host EDF 2017 in the United Kingdom or Malta

The European Data Forum, http://data-forum.eu/, is a meeting place for
industry, research, policymakers and community initiatives to discuss
the challenges of the emerging Data Economy (Linked Data, Big
Data, Open Data, respective business models, etc.).

The European Data Forum Steering Committee solicits Expressions of
Interest (EoI) to act as local host/organizer of the European Data Forum
in 2017.

The European Data Forum follows the country of the EU Presidency, i.e.
Copenhagen (2012), Dublin (2013), Athens (2014), Luxembourg (2015) and
Eindhoven (2016), cf. http://data-forum.eu

In 2017, the United Kingdom and Malta will hold the EU Presidency.
Hence, we are looking for local organizers from these two countries.

The requirements and duties of a local organizer include:
* Arrangement of an attractive and accessible venue, catering, social
event/dinner
* Outreach and engagement of local communities, including politics and
industry, etc.
* Management of the event budget

As the local host/organizer of the EDF 2017, you would be uniquely
positioned in the European Data community with many collaboration
opportunities arising from this extremely important community service.

Interested persons/institutions should submit an EoI including
background information about the key people in the local organization
team, organizational support, event organization experience and local
community outreach capabilities. Please submit your EoI by April 15,
2016, to a...@cs.uni-bonn.de.

We very much look forward to receiving your Expression of Interest to
become the EDF 2017 local host.

Sören Auer
on behalf of the European Data Forum Steering Committee



CfP: WWW2016 workshop on Linked Data on the Web (LDOW2016)

2015-12-17 Thread Sören Auer
Hi all,

In case you don't yet know what to do during your X-mas holidays, why not
prepare a submission for the WWW2016 workshop on Linked Data on the
Web (LDOW2016) in Montreal, Canada ;-) The paper submission deadline for
the workshop is 24 January, 2016. Please find the call for papers below.

BTW: LDOW now also accepts HTML5+RDFa submissions according to the
Linked Research principles (https://github.com/csarven/linked-research),
with embedded semantic and interactive content.

Looking forward to seeing you at LDOW2016 in Montreal!

Cheers,

Sören, Chris, Tim, and Tom




  Call for Papers: 9th Workshop on Linked Data on the Web (LDOW2016)


 Co-located with the 25th International World Wide Web Conference
April 11 to 15, 2016 in Montreal, Canada


   http://events.linkeddata.org/ldow2016/



The Web is developing from a medium for publishing textual documents
into a medium for sharing structured data. This trend is fueled on the
one hand by the adoption of the Linked Data principles by a growing
number of data providers. On the other hand, large numbers of websites
have started to semantically mark up the content of their HTML pages and
thus also contribute to the wealth of structured data available on the Web.

The 9th Workshop on Linked Data on the Web (LDOW2016) aims to stimulate
discussion and further research into the challenges of publishing,
consuming, and integrating structured data from the Web as well as
mining knowledge from the global Web of Data. The special focus of
this year’s LDOW workshop will be Web Data Quality Assessment and Web
Data Cleansing.


*Important Dates*

* Submission deadline: 24 January, 2016 (23:59 Pacific Time)
* Notification of acceptance: 10 February, 2016
* Camera-ready versions of accepted papers: 1 March, 2016
* Workshop date: 11-13 April, 2016


*Topics of Interest*

Topics of interest for the workshop include, but are not limited to, the
following:

Web Data Quality Assessment
* methods for evaluating the quality and trustworthiness of web data
* tracking the provenance of web data
* profiling and change tracking of web data sources
* cost and benefits of web data quality assessment
* web data quality assessment benchmarks

Web Data Cleansing
* methods for cleansing web data
* data fusion and truth discovery
* conflict resolution using semantic knowledge
* human-in-the-loop and crowdsourcing for data cleansing
* cost and benefits of web data cleansing
* web data cleansing benchmarks

Integrating Web Data from Large Numbers of Data Sources
* linking algorithms and heuristics, identity resolution
* schema matching and clustering
* evaluation of linking and schema matching methods

Mining the Web of Data
* large-scale derivation of implicit knowledge from the Web of Data
* using the Web of Data as background knowledge in data mining
* techniques and methodologies for Linked Data mining and analytics

Linked Data Applications
* application showcases including Web data browsers and search engines
* marketplaces, aggregators and indexes for Web Data
* security, access control, and licensing issues of Linked Data
* role of Linked Data within enterprise applications (e.g. ERP, SCM, CRM)
* Linked Data applications for life-sciences, digital humanities, social
sciences etc.


*Submissions*

We seek two kinds of submissions:

  1. Full scientific papers: up to 10 pages in ACM format
  2. Short scientific and position papers: up to 5 pages in ACM format

Submissions must be formatted using the ACM SIG template available at
http://www.acm.org/sigs/publications/proceedings-templates or in HTML5
e.g. according to the Linked Research
(https://github.com/csarven/linked-research) principles.

For authoring submissions according to the Linked Research principles,
authors can use dokieli (https://github.com/linkeddata/dokieli), a
decentralized authoring and annotation tool. HTML5 papers can be
submitted either by providing a URL to the paper (in HTML+RDFa, CSS,
JavaScript, etc.) together with supporting files, or by providing an
archived ZIP file including all the material.

Accepted papers will be presented at the workshop and included in the
CEUR workshop proceedings. At least one author of each paper must
register for the workshop and present the paper.


*Organizing Committee*

 Christian Bizer, University of Mannheim, Germany
 Tom Heath, Open Data Institute, UK
 Sören Auer, University of Bonn and Fraunhofer IAIS, Germany
 Tim Berners-Lee, W3C/MIT, USA


*Contact Information*

For further information about the workshop, please contact the workshops
chairs at:  ldow2...@events.linkeddata.org


-- 
Enterprise Information Systems, Computer Science, University of Bonn
http://eis.iai.uni-bonn.de/SoerenAuer

Fraunhofer-Institute Intelligent Analysis & Information Systems (IAIS)
Organized Knowledge -- http://www.iais.fraunhofer.de/Auer.html

Skype: soerenauer, Mobile +4915784988949

http://linkedin.com/in/soerenauer
https://twitter.com/SoerenAuer



Last CfP BISE Journal special issue on "Linked Data in Business"

2015-09-07 Thread Sören Auer
Dear all,

I would like to draw your attention again to the BISE Journal special
issue on "Linked Data in Business":

There is a common misunderstanding concerning enterprise data – linked
does not necessarily mean open. Internal company data linked to open data
can still remain private. In this way, enterprises gain additional value
by extending, enhancing and verifying their own data against external
sources. In this special issue on “Linked Data in Business” we would
like to focus on research that studies the exploitation of linked data
in business, economics and management. Enterprises can integrate data
and discover new insights more easily, which can lead to the emergence
of new products and services. They will also be able to solve business
challenges in new ways. For this to come true, linked data exploration
seems to be the next big step. Through the integration of private data
and linked open data, as well as through the combination of structured
and originally unstructured data, value-added chains can be established.

In the context of the above, the following topics are of special interest:
* data extraction, mapping, publishing and linking methods
* data cataloguing
* datasets retrieval
* language technologies for linked data
* business vocabularies
* geographical linked data
* enterprise data integration
* linked data mining and analytics

For details please visit: http://www.bise-journal.com/?p=974

Submission deadline is *1 November 2015*

Schedule:
* Paper submission deadline: 1 November 2015
* Author notification: 10 January 2016
* Revision due: 28 February 2016
* Second revision: 23 May 2016
* Planned publication: October 2016

Best,

Sören

-- 
Big Data Europe: http://big-data-europe.eu

Enterprise Information Systems, Computer Science, University of Bonn
http://eis.iai.uni-bonn.de

Fraunhofer Institute Intelligent Analysis & Information Systems (IAIS)
Organized Knowledge -- http://www.iais.fraunhofer.de

Skype: soerenauer, Mobile +4915784988949

http://linkedin.com/in/soerenauer
https://twitter.com/SoerenAuer



Linked and Web Data Science aficionados sought

2015-05-20 Thread Sören Auer
Dear all,

At the EIS research group and Fraunhofer IAIS in Bonn, we still have two
fully-funded PhD positions open in the context of the Marie
Skłodowska-Curie ITN WDAqua (Answering Questions using Web Data):

http://eis.iai.uni-bonn.de/Jobs.html#wdaqua

The deadline to apply is the end of this month. If you are ambitious about
research and passionate about technology, don't hesitate to get in touch
with us.

Best,

Sören

-- 
Project: BigDataEurope Empowering Communities with Big Data technologies
http://big-data-europe.eu

Enterprise Information Systems, Computer Science, University of Bonn
http://eis.iai.uni-bonn.de/SoerenAuer

Fraunhofer-Institute Intelligent Analysis & Information Systems (IAIS)
Organized Knowledge -- http://www.iais.fraunhofer.de/Auer.html

Skype: soerenauer, Mobile +4915784988949

http://linkedin.com/in/soerenauer
https://twitter.com/SoerenAuer



CfP BISE Journal special issue on Linked Data in Business

2015-03-22 Thread Sören Auer
Dear all,

I would like to draw your attention to the BISE Journal special issue on
Linked Data in Business:

There is a common misunderstanding concerning enterprise data – linked
does not necessarily mean open. Internal company data linked to open data
can still remain private. In this way, enterprises gain additional value
by extending, enhancing and verifying their own data against external
sources. In this special issue on “Linked Data in Business” we would
like to focus on research that studies the exploitation of linked data
in business, economics and management. Enterprises can integrate data
and discover new insights more easily, which can lead to the emergence
of new products and services. They will also be able to solve business
challenges in new ways. For this to come true, linked data exploration
seems to be the next big step. Through the integration of private data
and linked open data, as well as through the combination of structured
and originally unstructured data, value-added chains can be established.

In the context of the above, the following topics are of special interest:
* data extraction, mapping, publishing and linking methods
* data cataloguing
* datasets retrieval
* language technologies for linked data
* business vocabularies
* geographical linked data
* enterprise data integration
* linked data mining and analytics

For details please visit: http://www.bise-journal.com/?p=974

Submission deadline is *1 November 2015*

Schedule:
* Paper submission deadline: 1 November 2015
* Author notification: 10 January 2016
* Revision due: 28 February 2016
* Second revision: 23 May 2016
* Planned publication: October 2016

Best,

Sören

-- 
New H2020 project Big Data Europe: http://big-data-europe.eu

Enterprise Information Systems, Computer Science, University of Bonn
http://eis.iai.uni-bonn.de

Fraunhofer Institute Intelligent Analysis & Information Systems (IAIS)
Organized Knowledge -- http://www.iais.fraunhofer.de

Skype: soerenauer, Mobile +4915784988949

http://linkedin.com/in/soerenauer
https://twitter.com/SoerenAuer



CfP: 8th Workshop on Linked Data on the Web (LDOW2015) at WWW2015 in Florence, Italy

2015-02-01 Thread Sören Auer
Dear Linked Data aficionados,

The 8th edition of the Linked Data on the Web workshop will take place
at WWW2015 in Florence, Italy. Papers are due March 15th, 2015. Please
find the call for papers below.

We are looking forward to having another exciting LDOW workshop and to
seeing many of you in Florence.

Best,

Sören, Chris, Tom, Tim



*** Call for Papers ***

8th Workshop on Linked Data on the Web (LDOW2015)

Co-located with the 24th International World Wide Web Conference
18-22 May 2015, Florence, Italy

http://events.linkeddata.org/ldow2015/


The Web is continuing to develop from a medium for publishing textual
documents into a medium for sharing structured data. In 2014, the Web of
Linked Data grew to a size of about 1000 datasets with contributions
coming from companies, governments and other public sector bodies such
as libraries, statistical agencies or research institutions. In parallel,
the schema.org initiative has found increasing adoption with large
numbers of websites semantically marking up the content of their HTML pages.

The 8th Workshop on Linked Data on the Web (LDOW2015) aims to stimulate
discussion and further research into the challenges of publishing,
consuming, and integrating structured data from the Web as well as
mining knowledge from the global Web of Data. In addition to its
traditional focus on open web data, the special focus of this year’s
LDOW workshop will be the application of Linked Data technologies in
enterprise settings as well as the potentials of interlinking closed
enterprise data with open data from the Web.


*Important Dates*

* Submission deadline:  15 March, 2015
* Notification of acceptance: 6 April, 2015
* Camera-ready versions of accepted papers: 20 April, 2015
* Workshop date: 19 May, 2015


*Topics of Interest*

Topics of interest for the workshop include, but are not limited to, the
following:

Linked Enterprise Data
* role of Linked Data within enterprise applications (e.g. ERP, SCM, CRM)
* integration of SOA and Linked Data approaches in joint frameworks
* authentication, security and access control approaches for Linked
Enterprise Data
* use cases combining closed enterprise data with open data from the Web

Mining the Web of Data
* large-scale approaches to deriving implicit knowledge from the Web of Data
* using the Web of Data as background knowledge for data mining

Integrating Data from Large Numbers of Web Data Sources
* crawling, caching and querying Web data
* identity resolution, linking algorithms and heuristics
* schema matching and clustering
* data fusion
* evaluation of linking, schema matching and data fusion methods

Quality Assessment, Provenance Tracking and Licensing
* evaluating quality and trustworthiness of Web data
* tracking provenance and usage of Web data
* licensing issues in Web data publishing and integration
* profiling and change tracking of Web data sources

Linked Data Applications
* application showcases including browsers and search engines
* marketplaces, aggregators and indexes for Web data
* Linked Data applications for life-sciences, digital humanities, social
sciences etc.
* business models for Linked Data publishing and consumption


*Submissions*

We seek two kinds of submissions:

1. Full scientific papers: up to 10 pages in ACM format
2. Short scientific and position papers: up to 5 pages in ACM format

Submissions must be formatted using the ACM SIG template available at
http://www.acm.org/sigs/publications/proceedings-templates. Accepted
papers will be presented at the workshop and included in the CEUR
workshop proceedings.


*Organizing Committee*

Christian Bizer, University of Mannheim, Germany
Tom Heath, Open Data Institute, UK
Sören Auer, University of Bonn and Fraunhofer IAIS, Germany
Tim Berners-Lee, W3C/MIT, USA




-- 
Enterprise Information Systems, Computer Science, University of Bonn
http://eis.iai.uni-bonn.de

Fraunhofer Institute Intelligent Analysis & Information Systems (IAIS)
Organized Knowledge -- http://www.iais.fraunhofer.de

Skype: soerenauer, Mobile +4915784988949

http://linkedin.com/in/soerenauer
https://twitter.com/SoerenAuer



Interested in hosting EDF 2015 in Latvia or Luxembourg?

2014-01-25 Thread Sören Auer
Call for Expressions of Interest to host EDF 2015 in Latvia or Luxembourg

The European Data Forum, http://data-forum.eu/, is a meeting place for
industry, research, policymakers and community initiatives to discuss
the challenges of the emerging Data Economy (Linked Open Data, Big
Data, business models, etc.).

The European Data Forum Steering Committee solicits Expressions of
Interest (EoI) to act as local host/organizer of the European Data Forum
in 2015.

The European Data Forum follows the country of the EU Presidency:
* EDF 2012 in Copenhagen: http://2012.data-forum.eu
* EDF 2013 in Dublin: http://2013.data-forum.eu
* EDF 2014 in Athens: http://2014.data-forum.eu

In 2015, Latvia and Luxembourg will hold the EU Presidency. Hence, we
are looking for local organizers from these two countries.

The requirements and duties of a local host/organizer include (but are
not limited to):
* Arrangement of an attractive and accessible venue, catering, social
event/dinner
* Outreach and engagement of local communities including politicians
and industry representatives, etc.
* Management of the event budget

As the local host/organizer of the EDF 2015, you would be uniquely
positioned in the European Data community with hopefully many
collaboration opportunities arising from this extremely important
community service.

Interested persons/institutions should submit an EoI including
background information about yourself, your organization, your event
organization experience and your local community outreach
capabilities. Please submit your EoI by 31 February 2014 to
a...@cs.uni-bonn.de.

We very much look forward to receiving your Expression of Interest to
become the EDF 2015 local host.

Dieter Fensel and Sören Auer
(On behalf of the European Data Forum Steering Committee)



GeoKnow survey on linked geo data stakeholders' needs

2013-11-13 Thread Sören Auer
Dear all,

The GeoKnow FP7 project is currently conducting a survey to identify the
needs of linked geo data stakeholders in order to shape its research and
development efforts:

http://survey.geoknow.eu/index.php/747189

If you are a geo data user or aficionado, please participate.

Best,

Sören


Here is the full announcement:

How can geospatial Linked (Open) Data help you (and your business or
research)?

The GeoKnow FP7 project aims to facilitate the exploitation of the Web
as a platform for geospatial knowledge integration as well as for
exploration of geographic information. In order to identify the needs of
our potential users, we have created a survey to find out how we can
help your research or business most. If you work with geospatial data,
your participation would be greatly appreciated. The survey will take no
longer than 10 minutes to complete. There is a little incentive as well:
If you leave your email address, you will be automatically entered to
win one of three Amazon vouchers worth 50 Euro.
Survey: http://survey.geoknow.eu/index.php/747189



Fellowships for PostDocs from developing countries

2013-10-27 Thread Sören Auer
Dear all,

The German Humboldt Foundation sponsors fellowships for post-doctoral
researchers from developing countries [1]:

http://www.humboldt-foundation.de/georgforster

The topics pursued during such a fellowship should also be of relevance
to the future development of the fellow's country of origin.

I think Linked Data and semantic technologies have the potential to
address societal challenges (in developing countries).

If you are, or know, someone from a developing country working on (or
interested in) a topic of relevance for development, please point them to
the announcement! We, and I am sure many other German Semantic Web
research groups as well, would be happy to host such fellows.

Best,

Sören


[1]
http://www.humboldt-foundation.de/pls/web/wt_show.text_page?p_text_id=1513


-- 
Enterprise Information Systems, Computer Science, University of Bonn
http://eis.iai.uni-bonn.de

Fraunhofer Institute Intelligent Analysis & Information Systems (IAIS)
Organized Knowledge -- http://www.iais.fraunhofer.de

Skype: soerenauer, Mobile +4915784988949

http://linkedin.com/in/soerenauer
https://twitter.com/SoerenAuer



Re: UNMOOC on Web Science to start!

2013-10-25 Thread Sören Auer
Steffen, all,

I agree that Wikiversity is an interesting platform and of course it's
great that you started this project there. However, reuse is also one of
the main rationales behind SlideWiki:

* all content published in SlideWiki has exactly the same licensing
conditions as in Wikipedia (hence it is easy to reuse content on either of
the two platforms)

* in SlideWiki, we optimized the courseware content representation for
both in-person courses held by a teacher and individual online/offline
learning, even from mobile devices

* the SlideWiki content is highly structured - a course is actually a
tree of sub-modules, individual slides and self-assessment questions,
which can easily be reused and repurposed

* SlideWiki also aims to facilitate multilingual reuse by offering
semi-automated and crowdsourced translation support and facilities for
syncing different language editions of the same course

There are still many things around SlideWiki we want to add and improve,
and everyone is invited to get involved - it's completely open-source and
open-content.

It is great to see more open courseware initiatives such as UNMOOC. I
think the availability and accessibility of high-quality, multilingual
open courseware is still a major issue.

Best,

Sören


Am 25.10.2013 10:30, schrieb Steffen Staab:
 Dear Sören,
 
 thanks for your pointer! This is  very interesting!
 
 However, we favor a slightly different approach.
 We are interested in facilitating reuse beyond the creation of material
 for a course,
 hence our contents are put into the Wikimedia Foundation space
 where they are also easily available, e.g., for Wikipedia (where
 appropriate).
 
 For instance, the wikipedia entry
 https://en.wikipedia.org/wiki/Ethernet_frame now reuses our video
 https://commons.wikimedia.org/w/index.php?title=File%3AHow_to_build_an_Ethernet_Frame.webm
 
 which is part of our course
 https://en.wikiversity.org/wiki/Topic:Web_Science/Part1:_Foundations_of_the_web/Internet_Architecture/Ethernet/Ethernet_Header
 
 
 This has already (at this very early stage) generated useful feedback from
 people that would never have considered to look at course material
 and it reaches an audience that would otherwise not benefit
 from our course material.
 
 Cheers,
 Steffen
 
 
 
 
 
 Am 25.10.13 09:17, schrieb Sören Auer:
 Dear Steffen, all,

 This seems to be indeed a very interesting endeavor. In particular the
 creation of open and evolving content with interactive,
 community-oriented feedback sessions seems an interesting concept. Have
 you seen our OpenCourseWare authoring platform SlideWiki
 (http://slidewiki.org)? It's now open-source and used for dozens of courses
 and several hundred students. There are btw. also comprehensive lecture
 series on Semantic Web, Information Retrieval and Intelligent Systems. The
 Semantic Web lecture (700 slides, 100 self-assessment questions) is
 available in 12 languages (Thai, French, Portuguese, Spanish, Italian,
 Hindi, Greek, Persian, Arabic, Russian, German, English). We are really
 interested in building larger user/author communities around SlideWiki
 content, and any ideas in that regard are highly appreciated.

 Best,

 Sören

 Am 25.10.2013 08:47, schrieb Steffen Staab:
 Institute WeST is about to start an UNMOOC -
 an UNusual Massive Open Online Course - on *Web Science*

 A MOOC is an online course aimed at large-scale interactive
 participation and open access via the Web (quote from Wikipedia).
 Usual MOOCs combine canned content (text, videos, etc.) with
 interactive, community-oriented feedback sessions.
 In contrast, our unusual MOOC targets the creation of open and evolving
 content with interactive, community-oriented feedback sessions.

 Find more basic information at our starting page:
 http://studywebscience.org/

 The course itself will be hosted on wikiversity and wikimedia commons,
 two free and open siblings of the Wikipedia platform:
 https://en.wikiversity.org/wiki/Topic:Web_Science

 There are three models for certification
 1. Joining exams in Koblenz and earning ECTS credits
 2. Examination by an outside institution.
 If you are an outside institution and want to award
 credits based on participation in the MOOC exam, please contact us.
 3. Informal certification (no earning of ECTS credits)
 by remote participation in the MOOC exam.

 Rene Pickhardt & Steffen Staab
 Institute for Web Science and Technologies
 University of Koblenz-Landau

 http://west.uni-koblenz.de



 
 




Re: Semantic Web content management systems, written in PHP

2013-10-24 Thread Sören Auer
Christoph,

That's exactly a use case for OntoWiki (http://ontowiki.net). We use
OntoWiki as a CMS, for example, on http://lod2.eu, http://aksw.org and
http://geoknow.eu. These sites are driven by the site extension for
OntoWiki.

You can either install the OntoWiki+Virtuoso open-source versions alone,
install the complete LOD2 Stack (where both are included along with lots
of other useful Linked Data tools [1]), or obtain an enterprise-grade,
commercially supported version from our partner company Eccenca.com.

Best,

Sören

[1] http://stack.lod2.eu

Am 23.10.2013 23:26, schrieb Christoph Seelus:
 Hello there,
 
 I'm a student, currently working with the 'Information Systems and
 Data Center' at the 'Helmholtz-Centre Potsdam (GFZ - German Research
 Centre for Geosciences)', to evaluate different frameworks/tools for
 the implementation of a semantic web based content management system.
 
 The final goal is to use our OWL-based ontology
 (http://rz-vm30.gfz-potsdam.de/ontology/isdc_1.4.owl) as a knowledge
 foundation in a content management system, which would enable us to
 enrich available data with Linked Open Data.
 
 Currently we are focussing on evaluating frameworks that are based on
 PHP. So far we tried Drupal with minor success, the only other CMS
 currently on our radar is Ximdex.
 
 Any other suggestions, regarding a Semantic web centered content
 management system, written in PHP, would be kindly appreciated.
 
 
 Best regards
 
 Christoph
 
 




ERCIM News 96 on Linked Open Data - Call for contributions

2013-10-21 Thread Sören Auer
Call for short article contributions (cf. http://ercim-news.ercim.eu/call)


ERCIM News No. 96 (January 2014)

DEADLINE: Thursday 21 November 2013

Please read the guidelines below before submitting an article

The sections of ERCIM News 96 are:

  * Joint ERCIM Actions
  * Special theme: *Linked Open Data*
  * *Research and Innovation*
  * Events
  * In Brief

The *Special Theme* and the *Research and Innovation* sections contain
articles presenting a panorama of European research activities. The
Special Theme focuses on a sector which has been selected by the editors
from a short list of currently hot topics, whereas the Research and
Innovation section contains articles describing scientific activities,
research results, and technology transfer endeavours in any sector of
ICT, telecommunications or applied mathematics. Submissions to the
Special Theme section are subject to an external review process
coordinated by invited guest editors, whereas submissions to the Research
and Innovation section are checked and approved by the ERCIM News
editorial board.

Special Theme: *Linked Open Data*

Guest editors:

  * Sören Auer: University of Bonn and Fraunhofer IAIS
  * Irini Fundulaki: Institute of Computer Science, FORTH

This ERCIM News special theme invites short articles on new approaches,
achievements and applications in the area of Linked Data. The W3C
Linking Open Data Initiative has boosted the publication and
interlinkage of a large number of datasets on the web resulting in the
emergence of a Web of Data. The dynamic growth of Linked Data stimulates
new research in various areas such as data management, semantic web and
web engineering. In this special issue, we aim to provide an overview on
a wide spectrum of state-of-the-art and newly emerging approaches
related to Linked Data. Technical articles are solicited on topics
related to all stages of the Linked Data management life cycle. Since
the Linked Data principles can be applied not only to openly published
data on the Web, but also to data published within an organization's
intranet, we also invite contributions regarding the deployment of
Linked Data technologies in intranet settings.

Topics include but are not limited to:

  * Searching the Web of Data
  * Transformation, mapping and publishing of Linked Data
  * Storage, query processing and optimization for (distributed) RDF
  * Architectures and applications for consuming Linked Data
  * Indexing and crawling the Web of Data
  * Linked Data evolution, enrichment, repair, change management
  * Web Data quality assessment and trustworthiness
  * Linked Data integration (entity resolution, instance and schema
matching)
  * Reasoning on Linked Open Data
  * Benchmarking RDF and graph database engines
  * Linked Data summarization
  * Visualization of Linked Open Data
  * Linked Data applications (e.g. life sciences, enterprise data
integration, digital humanities)
  * Management of Licenses for Linked Open Data

Articles have to be sent to the local editor for your country (see About
ERCIM News, http://ercim-news.ercim.eu/about-ercim-news) or to the
central editor: peter.k...@ercim.eu

Reviewing:
Articles submitted to the special theme are subject to a review process.


  Guidelines for ERCIM News articles

*Style:* ERCIM News is read by a large variety of people. Keeping this
in mind, the article should be descriptive (emphasizing the 'what' more
than the 'how'), without too much technical detail, together with an
illustration if possible.

Contributions in ERCIM News are normally presented without formulas. One
can get a long way with careful phrasing, although it is not always wise
to avoid formulas altogether. In cases where authors feel that the use
of formulas is necessary to clarify matters, this should be done in a
separate box (to be treated as an illustration). However, formulas and
symbols scattered through the text must be avoided as much as possible.

*Length:* Keep the article short, i.e. 700-800 words.

*Format:* Submissions preferably in ASCII text or MS Word.
Pictures/Illustrations must be submitted as separate files (not embedded
in a MS Word file) in a resolution/quality suitable for printing.

*Structure of the article:*
The emphasis in ERCIM News is on 'NEWS'. This should be reflected in
both title and lead ('teaser').
Also: NO REVIEW ARTICLES!

  * *Title*
  * *Author *(full name, max. two or three authors)
  * *Teaser:*
a few words about the project/topic. Printed in bold face, this part
is intended to raise interest (keep it short).
  * *Details describing*:
what the project/product is
which institutions are involved
where it takes place
why the research is being done
when it was started/completed
the aim of the project
the techniques employed
the orientation of the project
future activities
other institutes involved in this project
co-operation with other ERCIM members in this field


Re: New DBpedia Overview Article Available

2013-06-24 Thread Sören Auer
Am 24.06.2013 18:28, schrieb Ghislain Atemezing:
 Hi Kingsley,
 we are pleased to announce that a new overview article for DBpedia is
 available: http://svn.aksw.org/papers/2013/SWJ_DBpedia/public.pdf
 Ummm.. Is this link (URL) really public?

The official submission is available here:

http://semantic-web-journal.net/content/dbpedia-large-scale-multilingual-knowledge-base-extracted-wikipedia

Best,

Sören



Job: Post-doctoral Researchers / Research Group Leaders at Uni Bonn / Fraunhofer IAIS

2013-06-24 Thread Sören Auer

  ***Post-doctoral Researchers / Research Group Leaders***
   at Uni Bonn / Fraunhofer IAIS


The department of Enterprise Information Systems (EIS) [1] at the Institute
for Applied Computer Science [2] at the University of Bonn [3] and the
Fraunhofer Institute for Intelligent Analysis and Information Systems
(IAIS) [4] is currently being established.

We are looking for candidates willing to take on the challenge of
contributing to building up an international research and innovation group
in the area of enterprise information systems and semantic technologies.

The ideal candidate holds a doctoral degree in Computer Science or a
related field and is able to combine theoretical and practical aspects
in her work. The candidate is expected to build up her own research
group and should ideally have experience with: publications in renowned
venues, software engineering, supervision of students, collaboration
with other research groups, industry, NGOs as well as open-source and
community initiatives, competing for funding, transfer and
commercialization of research results.

All details can be found at: http://eis.iai.uni-bonn.de/

We provide a scientifically and intellectually inspiring environment
with an entrepreneurial mindset embedded in a world-leading university
and one of the largest research organizations (Fraunhofer). Our primary
aim is to provide the environment and resources to make the research
group leaders successful in their field.

Bonn, the former German capital on the banks of the Rhine, located right
next to Germany's fourth-largest city, Cologne, offers an outstanding
quality of life, has developed into a hub of international cooperation,
and is within easy reach of many European metropolises (e.g.
Amsterdam, Brussels, Paris and Frankfurt).

Please indicate your willingness to apply as soon as possible with a
short email to a...@cs.uni-bonn.de

[1] http://eis.iai.uni-bonn.de/
[2] http://www.iai.uni-bonn.de/
[3] http://www.uni-bonn.de/
[4] http://www.iais.fraunhofer.de/



Open Positions for PostDocs, PhD Students and Developers

2013-04-11 Thread Sören Auer
Dear all,

We have several open positions for postdoctoral researchers, doctoral
students and software developers at AKSW research group
(http://aksw.org/) in Leipzig. More information can be found here:

http://wiki.aksw.org/Jobs

Applicants will become part of a very international, innovative and
social team, be embedded in an entrepreneurial environment, and work
on successful industry, research, community and open-source projects
such as DBpedia, OntoWiki, LIMES, SlideWiki or DL-Learner.

Leipzig, the home of AKSW, is the largest city in eastern Germany and
combines a high standard of living with the lowest living expenses among
major German cities. Leipzig is well connected internationally
(international airport and ICE high-speed rail hub), and has rich and
lively cultural, scientific and economic scenes.

Please forward this to prospective candidates you might know.

Sören

-- 
Working on LOD2 - from Linked Data 2 Knowledge: http://lod2.eu

--
Sören Auer, AKSW/Computer Science Dept., University of Leipzig
http://www.informatik.uni-leipzig.de/~auer,  Skype: soerenauer



Re: [Ann] Data Web Lecture at SlideWiki.org

2013-02-23 Thread Sören Auer
BTW: We also have a preliminary Linked Data/RDF interface using Triplify
exposing all content from SlideWiki as described here:

http://slidewiki.org/documentation/#tree-618-slide-17294-2-view

The content is currently represented using the FOAF, SIOC, CC and Dublin
Core vocabularies. Support for W3C PROV and a SPARQL endpoint via
SparqlMap are planned...

Best,

Sören

http://slidewiki.org

Am 21.02.2013 10:17, schrieb ali khalili:
 Dear all,
 
 In the last months we were working on the collaborative educational
 content authoring platform http://SlideWiki.org
 SlideWiki allows you to create richly structured presentations comprising
 slides, self-test questionnaires, illustrations etc.
 
 SlideWiki *features* include:
 
 * WYSIWYG slide authoring
 * Logical slide and deck representation
 * LaTeX/MathML integration
 * Multilingual decks / semi-automatic translation in 50+ languages
 * PowerPoint/HTML import
 * Source code highlighting within slides
 * Dynamic CSS themability and transitions
 * Social networking activities
 * Full revisioning and branching of slides and decks
 * E-Learning with self-assessment questionnaires
 
 Together with our colleagues at AKSW we now started to create a
 comprehensive *lecture series on the Semantic Data Web*:
 
 http://slidewiki.org/item/deck/750
 
 We have now almost completed the first lectures on RDF and RDF Schema and aim
 to complete the whole series by May. We are also working on translating
 this to different languages (e.g. Russian, Persian, Arabic, Portuguese,
 Italian, German, Greek, cf. Persian version at:
 http://slidewiki.org/deck/870).
 
 Please feel invited to contribute to these and other lectures. With
 SlideWiki, we hope to make educational material (on Semantic Web
 technologies and in general) much more interactive, multilingual and
 accessible.
 
 More information can be found at:
 * Documentation: http://slidewiki.org/documentation
 * Mailinglist: https://groups.google.com/d/forum/slidewiki
 * Paper: CrowdLearn: Crowd-sourcing the Creation of Highly-structured
 E-Learning Content
 (http://www.bibsonomy.org/bibtex/2d6735d1e8ca41e72ba1cd2be64aca72e/aksw)
 
 On behalf of the SlideWiki development team and AKSW (http://aksw.org),
 
 Ali, Darya, Sören and the rest of AKSW
 
 
 PS:  There are also lecture series on Semantic Web Services
 (http://slidewiki.org/deck/964) and Intelligent Systems
 (http://slidewiki.org/deck/1002) in preparation - please let us know if
 you have ideas for further content.




Re: [Ann] Data Web Lecture at SlideWiki.org

2013-02-23 Thread Sören Auer
Am 23.02.2013 17:49, schrieb Kingsley Idehen:
 On 2/23/13 7:10 AM, Sören Auer wrote:
 BTW: We also have a preliminary Linked Data/RDF interface using Triplify
 exposing all content from SlideWiki as described here:

 http://slidewiki.org/documentation/#tree-618-slide-17294-2-view

 The content is currently represented using to FOAF, SIOC, CC, Dublin
 Core vocabularies. Support for W3C PROV and SPARQL endpoint via
 SparqlMap is planned...

 Best,

 Sören

 http://slidewiki.org
 
 
 Very nice!
 
 Are you exposing a graph that describes these lecture collections? If
 so, how does one discover it?

Exactly, that's what the graph exhibits - it basically consists of
decks, as well as sub-decks and slides associated with those decks. Each
slide contains the full HTML source.
Currently, we publish the RDF via slidewiki.org/triplify.
As mentioned before, this is still a quick hack - we plan to deploy
SparqlMap [1], which will also allow SPARQL querying and add proper RDF
discovery.
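
A minimal sketch of how one could consume that interface with Python and
rdflib (the serialization returned by the endpoint and the use of the
Dublin Core title property are assumptions for illustration; only the
slidewiki.org/triplify URL and the FOAF/SIOC/Dublin Core vocabularies are
taken from the description above):

  # Sketch: fetch the Triplify output and list everything that carries a
  # Dublin Core title, e.g. decks and slides.
  from rdflib import Graph
  from rdflib.namespace import DC, DCTERMS

  g = Graph()
  # rdflib tries to auto-detect the serialization; pass format=... if needed
  g.parse("http://slidewiki.org/triplify")

  titled = list(g.subject_objects(DC.title)) + \
           list(g.subject_objects(DCTERMS.title))
  for resource, title in titled:
      print(resource, title)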

Best,

Sören

[1] http://aksw.org/Projects/SparqlMap.html



[CfP] 6th Workshop on Linked Data on the Web (LDOW2013)

2013-01-17 Thread Sören Auer
  Call for Papers: 6th Workshop on Linked Data on the Web (LDOW2013)
 Co-located with the 22nd International World Wide Web Conference
14 May 2013, Rio de Janeiro, Brazil
   http://events.linkeddata.org/ldow2013/


Linked Data is a set of best practices for publishing structured data on
the Web which focuses on setting hyperlinks between data items provided
by different web servers. These hyperlinks connect the data from all
servers into a single global data graph - the Web of Linked Data.
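
As a minimal illustration (the concrete URIs are only examples, not part
of this call), such a hyperlink between data items hosted on two different
servers can be expressed as a single RDF triple:

  # Sketch: one owl:sameAs link connecting resources on two servers.
  from rdflib import Graph, URIRef
  from rdflib.namespace import OWL

  g = Graph()
  g.add((URIRef("http://dbpedia.org/resource/Berlin"),
         OWL.sameAs,
         URIRef("http://sws.geonames.org/2950159/")))
  print(g.serialize(format="turtle"))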

The 6th Workshop on Linked Data on the Web (LDOW2013) aims to stimulate
further research into exploiting this global data graph to deliver
transformative applications to large user bases, as well as to mine the
graph for implicit knowledge. Inevitably the challenges associated with
Linked Data range from lower level 'plumbing' issues over large-scale
data processing and mining, to higher level conceptual questions of
value propositions and business models. LDOW2013 will provide a forum
for exposing novel, high quality research and applications in all of
these areas. In addition, by bringing together researchers in the field,
the workshop will further shape the ongoing Linked Data research agenda.


*Important Dates*

* Submission deadline: 10 March, 2013
* Notification of acceptance: 30 March, 2013
* Camera-ready versions of accepted papers: 15 April, 2013
* Workshop date: 14 May, 2013


*Topics of Interest*

Mining the Web of Linked Data
* large-scale derivation of implicit knowledge from the Web of Data
* using the Web of Linked Data as background knowledge in data mining

Linking and Fusion
* linking algorithms and heuristics, identity resolution
* increasing the value of Schema.org/OpenGraphProtocol through linking
* Web data integration and fusion
* performance of linking infrastructures/algorithms on Web data

Quality, Trust, Provenance and Licensing in Linked Data
* profiling and change tracking of Linked Data sources
* tracking provenance and usage of Linked Data
* evaluating quality and trustworthiness of Linked Data
* licensing issues in Linked Data publishing

Linked Data Applications and Business Models
* Linked Data browsers and search engines
* Linked Data as data integration technology within corporate contexts
* marketplaces, aggregators and indexes for Linked Data
* interface and interaction paradigms for Linked Data applications
* business models for Linked Data publishing and consumption
* Linked Data applications for life-sciences, digital humanities, social
sciences etc.


*Submissions*

We seek two kinds of submissions:
  1. Full scientific papers: up to 10 pages in ACM format
  2. Short scientific and position papers: up to 5 pages in ACM format
Submissions must be formatted using the ACM SIG template available at
http://www.acm.org/sigs/publications/proceedings-templates. Accepted
papers will be presented at the workshop and included in the CEUR
workshop proceedings. At least one author of each paper has to register
for the workshop and to present the paper. Please submit papers via
EasyChair at: https://www.easychair.org/conferences/?conf=ldow2013


Christian Bizer, Tom Heath, Tim Berners-Lee, Michael Hausenblas and
Sören Auer

LDOW2013 Workshop chairs



ANN: 3rd PUBLINK Linked Data publishing and tooling support action

2012-11-23 Thread Sören Auer
Dear all,

After more than a dozen small PUBLINK projects have been completed in
the last two years [1], we are now launching a third round, which also
includes specific support for Linked Data tool developers who want to
integrate their tools with the Debian-based LOD2 Stack [2]. The PUBLINK
Linked Open Data Consultancy is backed by the LOD2 project [3].

In order to lower the entrance barrier for potential data publishers and
to improve the integration and interoperation of tools, we offer the
*free* PUBLINK Linked Open Data Consultancy to up to five selected
organizations, supporting their data publishing or tool integration
projects with an overall effort of 10-20 days each, comprising support
from highly skilled Linked Data professionals.

More information about PUBLINK and instructions on how to apply can be
found at:

http://lod2.eu/Article/Publink.html

PUBLINK aims to support Linked Data tool developers as well as
organizations (e.g. governmental agencies, data providers, public
administrations) which are interested in publishing large amounts of
structured information of potentially high public interest.
Brief applications from interested organizations are being accepted until
December 31st, 2012.

Please forward this announcement to any potential stakeholders in this
domain you might know.

On behalf of the LOD2 consortium,

Sören

[1]
http://lod2.eu/BlogPost/1353-publink-linked-data-starter-service-call-2013.html
[2] http://stack.lod2.eu
[3] http://lod2.eu


--
Sören Auer, AKSW/Computer Science Dept., University of Leipzig
http://www.informatik.uni-leipzig.de/~auer,  Skype: soerenauer



Re: DBpedia Data Quality Evaluation Campaign

2012-11-15 Thread Sören Auer
Am 15.11.2012 19:12, schrieb Giovanni Tummarello:
 Am i really supposed to know if any of the fact below is wrong?
 really?

It's not about factual correctness, but about correct extraction and
representation. If Wikipedia contains false information, DBpedia will,
too, so we cannot change this (at that point). What we want to improve,
however, is the quality of the extraction.
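
To make this concrete, here is a minimal sketch (not part of the
evaluation tool) of inspecting how values and datatypes were extracted
for a single resource with Python and rdflib; the /data/<name>.ntriples
URL pattern and the resource name are assumptions for illustration:

  # Sketch: list each literal together with its datatype, which helps to
  # spot representation problems (wrong datatypes, unit mix-ups, etc.).
  from rdflib import Graph, Literal

  g = Graph()
  g.parse("http://dbpedia.org/data/La_Chapelle-Saint-Laud.ntriples",
          format="nt")

  for s, p, o in g:
      if isinstance(o, Literal):
          print(p, o, o.datatype)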

Best,

Sören

 dbp-owl:PopulatedPlace/area
 10.63 (@type = http://dbpedia.org/datatype/squareKilometre)
 dbp-owl:abstract
 La Chapelle-Saint-Laud is a commune in the Maine-et-Loire department
 of western France. (@lang = en)
 dbp-owl:area
 1.063e+07 (@type = http://www.w3.org/2001/XMLSchema#double)
 dbp-owl:canton
 dbpedia:Canton_of_Seiches-sur-le-Loir
 dbp-owl:country
 dbpedia:France
 dbp-owl:department
 dbpedia:Maine-et-Loire
 dbp-owl:elevation
 85.0 (@type = http://www.w3.org/2001/XMLSchema#double)
 dbp-owl:intercommunality
 dbpedia:Pays_Loire-Angers
 dbp-owl:intercommunality
 dbpedia:Communauté_de_communes_du_Loir
 dbp-owl:maximumElevation
 98.0 (@type = http://www.w3.org/2001/XMLSchema#double)
 dbp-owl:minimumElevation
 28.0 (@type = http://www.w3.org/2001/XMLSchema#double)
 dbp-owl:populationTotal
 583 (@type = http://www.w3.org/2001/XMLSchema#integer)
 dbp-owl:postalCode
 49140 (@lang = en)
 dbp-owl:region
 dbpedia:Pays_de_la_Loire
 dbp-prop:areaKm
 11 (@type = http://www.w3.org/2001/XMLSchema#integer)
 dbp-prop:arrondissement
 Angers (@lang = en)
 dbp-prop:canton
 dbpedia:Canton_of_Seiches-sur-le-Loir
 dbp-prop:demonym
 Capellaudain, Capellaudaine (@lang = en)
 dbp-prop:department
 dbpedia:Maine-et-Loire
 dbp-prop:elevationM
 85 (@type = http://www.w3.org/2001/XMLSchema#integer)
 dbp-prop:elevationMaxM
 98 (@type = http://www.w3.org/2001/XMLSchema#integer)
 dbp-prop:elevationMinM
 28 (@type = http://www.w3.org/2001/XMLSchema#integer)
 dbp-prop:insee
 49076 (@type = http://www.w3.org/2001/XMLSchema#integer)
 dbp-prop:intercommunality
 dbpedia:Pays_Loire-Angers
 dbp-prop:intercommunality
 dbpedia:Communauté_de_communes_du_Loir
 
 On Thu, Nov 15, 2012 at 4:58 PM,  zav...@informatik.uni-leipzig.de wrote:
 Dear all,

 As we all know, DBpedia is an important dataset in Linked Data as it is not
 only connected to and from numerous other datasets, but it also is relied
 upon for useful information. However, quality problems are inherent in
 DBpedia be it in terms of incorrectly extracted values or datatype problems
 since it contains information extracted from crowd-sourced content.

 However, not all the data quality problems are automatically detectable.
 Thus, we aim at crowd-sourcing the quality assessment of the dataset. In
 order to perform this assessment, we have developed a tool whereby a user
 can evaluate a random resource by analyzing each triple individually and
 store the results. Therefore, we would like to request you to help us by
 using the tool and evaluating a minimum of 3 resources. Here is the link to
 the tool: http://nl.dbpedia.org:8080/TripleCheckMate/, which also includes
 details on how to use it.

 In order to thank you for your contributions, a lucky winner will win either
 a Samsung Galaxy Tab 2 or an Amazon voucher worth 300 Euro. So, go ahead,
 start evaluating now !! Deadline for submitting your evaluations is 9th
 December, 2012.

 If you have any questions or comments, please do not hesitate to contact us
 at dbpedia-data-qual...@googlegroups.com.

 Thank you very much for your time.

 Regards,
 DBpedia Data Quality Evaluation Team.
 https://groups.google.com/d/forum/dbpedia-data-quality

 
 This message was sent using IMP, the Internet Messaging Program.




 
 




Re: DBpedia Data Quality Evaluation Campaign

2012-11-15 Thread Sören Auer
Am 15.11.2012 19:44, schrieb Giovanni Tummarello:
 i understand. Anyway also wrt to wrong extractions it might be of use
 to consider supporting the users e.g. proposing only suspicious cases
 and not any resource.

In our experience, almost every resource (still) contains some problems
or issues. Once we have reduced the number of problems significantly, you
are perfectly right that we should look for bad smells...

Best,

Sören



Re: [Ann] LODStats - Real-time Data Web Statistics

2012-06-22 Thread Sören Auer
Am 22.06.2012 11:30, schrieb Denny Vrandecic:
 According to your definition, then LODStats is misnamed.
 It should be LOD Datasets Stats.
 
 Or am I misunderstanding something?

Maybe you are right, Denny, but there is never a perfect name.
Actually, LODStats is both a tool and a service. The open-source tool
(https://github.com/AKSW/LODStats) can be used for analysing anything.
If you are not happy with the selection criteria in our service, you can
run your own LODStats installation, put a crawler in front and analyse
all the datasets you want. It's just that our service at stats.lod2.eu is
a little selective ;-)

Best,

Sören

 On 22 Jun 2012, at 01:30, Sören Auer wrote:
 
 Am 21.06.2012 17:08, schrieb Hugh Glaser:
 Hi.
 On 21 Jun 2012, at 11:40, Sören Auer wrote:

 Am 21.06.2012 12:03, schrieb Hugh Glaser:
 Interesting question from Denny.
 I guess you don't do http://thedatahub.org/dataset/sameas-org
 for the same reason.
 And
 http://thedatahub.org/dataset/dbpedia-lite
 (Or at least I couldn't find them.)

 I'm not sure you should claim all LOD datasets registered on CKAN

 Depends on the definition of dataset - for me a dataset is something
 available in bulk and not a pointer to a large space of URLs containing
 some data fragments requiring extensive crawling.
 I can't agree with this.
 To rule out Linked Data that only provides Linked Data without SPARQL or 
 dump and say it is not a LOD Dataset seems to be terribly restrictive.

 I would distinguish between Linked Data and a LOD dataset:

 For me (and I would assume most people) /dataset/ means a set of data,
 i.e. a downloadable dump or bulk data access (e.g. via SPARQL) to a data
 repository.

 When the data adheres to the RDF data model and dereferenceable IRIs are
 used, it's a /Linked Data dataset/.

 When licensed under an open license (according to the open definition),
 it's a /Linked Open Data (LOD) dataset/.

 I agree that /Linked Data/ also comprises individual data resources
 (either independently or integrated into HTML as RDFa), but I would not
 call these datasets then, and also not open (if not licensed according to
 the open definition). BTW: The open definition also requires bulk data
 access! So we already have two reasons why the concept of an LOD dataset
 should imply availability of bulk data. This is also what we mention
 everywhere when describing LODStats.

 When you are interested in statistics about arbitrary Linked Data
 Sindice provides probably the better statistics.

 For example, the eprints (eprints.org) Open Archives have upwards of 100M 
 triples of pretty interesting (to some people) Linked Data.

 Maybe interesting, but if I have to crawl it in order to make use of it
 the burden is way too high for most users.

 It is mostly not in thedatahub, but even if it was you would ignore it.
 In fact, anything that is a wrapper around things like dbpedia, twitter, 
 Facebook, or even Facebook itself is ignored, I am assuming from what you 
 say.

 For DBpedia you don't need a wrapper - the whole dataset is available in
 bulk. All others are, from my point of view, neither datasets nor open.
 Maybe you can call them data services, where you can obtain an
 individual data item at a time. And why would you want to call a wrapper
 a dataset? Fundamental requirements for datasets would be, from my point
 of view, that you can apply set operations like merging, joining etc. You
 cannot do that with wrappers, so why should we call them datasets?

 To publish statistics that claims to collect statistics from all LOD 
 datasets using a method that ignores such resources is to seriously 
 underreport the LOD activity (not a Good Thing), and also is to publish 
 what I can only say is misleading statistical reports about LOD in general.
 I leave aside that you also fail to collect statistics from more than half 
 of the datasets you claim to be collecting.

 I agree, that our figures are quite pessimistic, but in a way, they
 reflect, what people really see -- if there is no link to the dump in
 thedatahub the dataset is difficult to find obviously, if
 confusing/non-standard file extensions or dataset package formats are
 used this makes it also very difficult for people to actually use this
 data. So I think its better, to be a little more pessimistic in this
 case instead of reporting skyrocking numbers all the time.

 Sören

 
 




Re: [Ann] LODStats - Real-time Data Web Statistics

2012-06-21 Thread Sören Auer
Am 21.06.2012 11:33, schrieb Denny Vrandecic:
 This is really cool.
 
 On 2 Feb 2012, at 12:04, Sören Auer wrote:
 A demo installation collecting statistics from all LOD datasets
 registered on CKAN is available from:

 http://stats.lod2.eu
 
 
 
 Are you missing this one?
 
 http://thedatahub.org/dataset/linked-open-numbers
 
 Since you say all LOD datasets registered on CKAN, why is LON excluded? :)

Because there doesn't seem to be a dump and/or SPARQL endpoint available;
we don't do Linked Data crawling. Also, there seems to be a problem with
your alternate links:

<link rel="alternate" type="application/rdf+xml"
      href="http://km.aifb.kit.edu/projects/numbers/data/n" />

http://km.aifb.kit.edu/projects/numbers/data/n gives a 404.

Best,

Sören




Re: [Ann] LODStats - Real-time Data Web Statistics

2012-06-21 Thread Sören Auer
 I am starting to use LODStats and I think it is a very useful tool.
 Actually I would be interested in using it over SPARQL endpoints but I
 don't know how to do that. Does anybody know whether it is possible?

We don't have a SPARQL endpoint available (yet), but
you can obtain a complete dump of all VoID descriptions from

http://stats.lod2.eu/rdfdocs/void
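
A small sketch (not an official LODStats client) of how that dump could be
used locally: load it into an rdflib graph and query it with SPARQL. The
serialization of the dump is an assumption here; pass an explicit format
to parse() if rdflib cannot guess it.

    # Load the VoID dump into a local graph and list the ten largest datasets.
    from rdflib import Graph

    g = Graph()
    g.parse("http://stats.lod2.eu/rdfdocs/void")  # e.g. format="xml" if needed

    q = """
    PREFIX void: <http://rdfs.org/ns/void#>
    SELECT ?ds ?triples WHERE {
        ?ds a void:Dataset ;
            void:triples ?triples .
    } ORDER BY DESC(?triples) LIMIT 10
    """
    for row in g.query(q):
        print(row.ds, row.triples)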

Best,

Sören



Re: [Ann] LODStats - Real-time Data Web Statistics

2012-06-21 Thread Sören Auer
Am 21.06.2012 12:03, schrieb Hugh Glaser:
 Interesting question from Denny.
 I guess you don't do http://thedatahub.org/dataset/sameas-org
 for the same reason.
 And
 http://thedatahub.org/dataset/dbpedia-lite
 (Or at least I couldn't find them.)
 
 I'm not sure you should claim all LOD datasets registered on CKAN

That depends on the definition of dataset - for me, a dataset is something
available in bulk, not a pointer to a large space of URLs containing
some data fragments that require extensive crawling.

I understand why Linked Open Numbers is not available as a dump - how
would you package a countably infinite number of resources ;-)

 if you don't have dbpedialite, for example.

Does a dump exist for dbpedialite? A link to a dump does not seem to be
registered at thedatahub.

Sören



Re: [Ann] LODStats - Real-time Data Web Statistics

2012-06-21 Thread Sören Auer
Am 21.06.2012 17:08, schrieb Hugh Glaser:
 Hi.
 On 21 Jun 2012, at 11:40, Sören Auer wrote:
 
 Am 21.06.2012 12:03, schrieb Hugh Glaser:
 Interesting question from Denny.
 I guess you don't do http://thedatahub.org/dataset/sameas-org
 for the same reason.
 And
 http://thedatahub.org/dataset/dbpedia-lite
 (Or at least I couldn't find them.)

 I'm not sure you should claim all LOD datasets registered on CKAN

 Depends on the definition of dataset - for me a dataset is something
 available in bulk and not a pointer to a large space of URLs containing
 some data fragments requiring extensive crawling.
 I can't agree with this.
 To rule out data that is only published as Linked Data, without a SPARQL
 endpoint or a dump, and to say it is not a LOD dataset seems terribly restrictive.

I would distinguish between Linked Data and a LOD dataset:

For me (and I would assume most people) /dataset/ means a set of data,
i.e. a downloadable dump or bulk data access (e.g. via SPARQL) to a data
repository.

When the data adheres to the RDF data model and dereferenceable IRIs are
used, it's a /Linked Data dataset/.

When licensed under an open license (according to the open definition),
it's a /Linked Open Data (LOD) dataset/.

I agree that /Linked Data/ also comprises individual data resources
(either standalone or integrated into HTML as RDFa), but I would not
call these datasets then, and also not open (if not licensed according to
the open definition). BTW: the open definition also requires bulk data
access! So we already have two reasons why the concept LOD dataset
should imply the availability of bulk data. This is also what we mention
everywhere when describing LODStats.

If you are interested in statistics about arbitrary Linked Data,
Sindice probably provides better statistics.

 For example, the eprints (eprints.org) Open Archives have upwards of 100M 
 triples of pretty interesting (to some people) Linked Data.

Maybe interesting, but if I have to crawl it in order to make use of it,
the burden is way too high for most users.

 It is mostly not in thedatahub, but even if it was you would ignore it.
 In fact, anything that is a wrapper around things like dbpedia, twitter, 
 Facebook, or even Facebook itself is ignored, I am assuming from what you say.

For DBpedia you don't need a wrapper - the whole dataset is available in
bulk. All others are, from my point of view, neither datasets nor open.
Maybe you can call them data services, where you can obtain one
individual data item at a time. And why would you want to call a wrapper
a dataset? A fundamental requirement for a dataset is, from my point of
view, that you can apply set operations such as merging and joining. You
cannot do that with wrappers, so why should we call them datasets?
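
To make the set-operation point concrete, a toy illustration (the file names
are placeholders, and any RDF serialization that rdflib can parse would do):
because a dataset is just a set of triples, merging two datasets is a plain
union and their overlap is an ordinary intersection.

    # Toy illustration: RDF datasets as sets of triples.
    from rdflib import Graph

    g1, g2 = Graph(), Graph()
    g1.parse("dataset-a.nt", format="nt")   # placeholder dumps
    g2.parse("dataset-b.nt", format="nt")

    union = Graph()
    for triple in g1:            # iterating a Graph yields (s, p, o) triples
        union.add(triple)
    for triple in g2:
        union.add(triple)

    shared = set(g1) & set(g2)   # triples asserted in both datasets

    print(len(g1), len(g2), len(union), len(shared))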

 To publish statistics that claim to cover all LOD datasets, using a method
 that ignores such resources, is to seriously underreport the LOD activity
 (not a Good Thing), and also to publish what I can only describe as
 misleading statistical reports about LOD in general.
 I leave aside that you also fail to collect statistics from more than half
 of the datasets you claim to be collecting.

I agree that our figures are quite pessimistic, but in a way they
reflect what people really see -- if there is no link to a dump in
thedatahub, the dataset is obviously difficult to find, and if
confusing or non-standard file extensions or dataset package formats are
used, this also makes it very difficult for people to actually use the
data. So I think it's better to be a little more pessimistic in this
case instead of reporting skyrocketing numbers all the time.

Sören



Re: [Ann] LODStats - Real-time Data Web Statistics

2012-02-21 Thread Sören Auer
Am 21.02.2012 15:38, schrieb Rinke Hoekstra:
 However... is it me, or isn't the 'almost 2B triples' a very
 disappointing number? If you go through all datasets advertised on the
 Data Hub, the advertised number of triples is over 40B ! This means
 that only one out of 20 triples in the linked 'open' data cloud is
 publicly accessible.

It certainly is, and this is one of the reasons we developed this tool: to
get a better picture of the LOD cloud. Of course this difference is
partially caused by invalid links in CKAN and by some issues we still have
with very large datasets, but real users might run into these issues as
well.

 Another thing... it seems as if LODStats is merely checking whether a
 SPARQL endpoint is 'up', and not whether the endpoint actually contains the
 data that has been advertised on the Data Hub. For instance, my very
 own bubble is listed without problems, but I know for a fact that the
 triple store no longer contains the data (sorry!). Do you have any
 thoughts/ideas on how to detect such problems?

We currently don't delete our stats when an endpoint is unavailable
once, but try to check back later. Of course, after a certain number of
check-backs and timeouts the stats should be invalidated. Can you point
me to your endpoint? We will have a look at what the problem is there.

Best,

Sören



[Ann] LODStats - Real-time Data Web Statistics

2012-02-02 Thread Sören Auer
Dear all,

We are happy to announce the first public *release of LODStats*.

LODStats is a statement-stream-based approach for gathering
comprehensive statistics about datasets adhering to the Resource
Description Framework (RDF). LODStats was implemented in Python and
integrated into the CKAN dataset metadata registry [1]. Thus it helps to
obtain a comprehensive picture of the current state of the Data Web.
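
A minimal sketch of the statement-stream idea (illustrative only, not the
actual LODStats implementation): statistics are updated triple by triple
while streaming over an N-Triples dump, so the dataset never has to be held
in memory as a whole. The file name and the selected statistics are
assumptions for the example.

    # Stream over an N-Triples file and update counters per statement.
    # Assumes single spaces as term separators, as produced by most dump tools.
    from collections import Counter

    RDF_TYPE = "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>"

    def stream_stats(path):
        triples = 0
        properties = Counter()
        classes = Counter()
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                # Subjects and predicates contain no spaces in N-Triples,
                # so splitting twice leaves the object (plus " .") in `rest`.
                s, p, rest = line.split(" ", 2)
                obj = rest.rsplit(" ", 1)[0]   # drop the trailing " ."
                triples += 1
                properties[p] += 1
                if p == RDF_TYPE:
                    classes[obj] += 1
        return {"triples": triples,
                "top properties": properties.most_common(5),
                "top classes": classes.most_common(5)}

    print(stream_stats("dataset.nt"))   # path is a placeholder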

More information about LODStats (including its open-source
implementation) is available from:

http://aksw.org/projects/LODStats

A demo installation collecting statistics from all LOD datasets
registered on CKAN is available from:

http://stats.lod2.eu

We would like to thank the AKSW research group [2] and LOD2 project [3]
members for their suggestions. The development of LODStats was supported
by the FP7 project LOD2 (GA no. 257943).

On behalf of the LODStats team,

Sören Auer, Jan Demter, Michael Martin, Jens Lehmann

[1] http://ckan.net
[2] http://aksw.org
[3] http://lod2.eu



Re: [Ann] LODStats - Real-time Data Web Statistics

2012-02-02 Thread Sören Auer
Am 02.02.2012 12:18, schrieb Michael Hausenblas:
 We are happy to announce the first public *release of LODStats*.
 
 Very nice! Does it output VoID [1]? Didn't find it skimming the source ...

It does; it might not be directly linked yet, but we will add the links soon.
However, not all LODStats statistics can be represented using VoID, which
is why we suggest adding another property to VoID that allows attaching
DataCubes to a VoID description.
You can find the details in our technical report - it would be great if
such a property found its way into the next revision of DataCube ;-)
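
Purely to illustrate the idea (the linking property below is hypothetical
and stands in for whatever property the technical report actually proposes):
a VoID dataset description could point to a DataCube dataset that carries
the statistics VoID itself cannot express.

    # Illustration only: attach a DataCube dataset to a VoID description via
    # a *hypothetical* linking property (ex:observations is made up here).
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    VOID = Namespace("http://rdfs.org/ns/void#")
    QB = Namespace("http://purl.org/linked-data/cube#")
    EX = Namespace("http://example.org/ns#")

    g = Graph()
    ds = URIRef("http://example.org/dataset/foo")
    cube = URIRef("http://example.org/dataset/foo/stats")

    g.add((ds, RDF.type, VOID.Dataset))
    g.add((ds, VOID.triples, Literal(123456)))
    g.add((ds, EX.observations, cube))      # the hypothetical attachment point
    g.add((cube, RDF.type, QB.DataSet))

    print(g.serialize(format="turtle"))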

Thanks for the encouraging comments,

Sören



Re: [Ann] LODStats - Real-time Data Web Statistics

2012-02-02 Thread Sören Auer
Am 02.02.2012 12:18, schrieb Michael Hausenblas:
 We are happy to announce the first public *release of LODStats*.
 
 
 Very nice! Does it output VoID [1]? Didn't find it skimming the source ...

I have to correct myself: the VoID is already there, see for example:

http://stats.lod2.eu/rdfdoc/view/195

Can be displayed inline or downloaded as a separate file.

Cheers,

Sören



Re: [Ann] LODStats - Real-time Data Web Statistics

2012-02-02 Thread Sören Auer
Am 02.02.2012 12:32, schrieb Richard Cyganiak:
 Congrats, this is awesome.

Thanks Richard, we are happy you like it ;-)

 So you're automatically harvesting 200+ datasets by starting with the LOD 
 Cloud metadata we're collecting on the Data Hub (ex CKAN), leading to a total 
 of almost 2B triples.

Exactly.

 Also fascinating is the list of 250 datasets that couldn't be automatically 
 harvested due to SPARQL errors or errors in the RDF dumps:
 http://stats.lod2.eu/rdfdoc/?errors=1
 This is an excellent interoperability testbed and should be closely studied 
 by anyone who's interested in the state of actual interoperability on the web 
 of linked data (hence a CC to the Pedantic Web Group).

Yes, having an interoperability testbed and a timely view on the current
state was one of the primary reasons for developing LODStats. Some
problems might, however, also be related to incorrect CKAN metadata or
some glitches in LODStats itself - we will try to iron them out as much
as possible in the next weeks.

 One request: on http://stats.lod2.eu/stats it shows top 5 lists of various 
 sorts (top vocabularies, classes, languages etc). Would it be possible to 
 allow drill-down to see longer lists, let's say top 100 or top 1000? These 
 lists are great, but the really interesting stuff often happens in the 
 midfield.

Indeed, that's a great suggestion and it will be implemented soon.

 I see VoID summaries for each individual dataset. Are they aggregated 
 somewhere into a single file that I could SPARQL?

Not yet, but that's planned. For now it should be relatively easy to
crawl and concatenate the VoID files, but we will make it more convenient ;-)

 Also, how do I cite your work in publications? Is there a paper (or at least 
 tech report) yet?

We submitted a paper, which you can cite:

Jan Demter, Sören Auer, Michael Martin, Jens Lehmann: LODStats – An
Extensible Framework for High-performance Dataset Analytics, submitted
to ESWC2012

http://svn.aksw.org/papers/2011/RDFStats/public.pdf

Best,

Sören



Re: [Ann] LODStats - Real-time Data Web Statistics

2012-02-02 Thread Sören Auer
Richard,

These are all great suggestions, which we will try to implement in the
next days.
The LODStats logo in the header was supposed to serve as a link to the
About page (http://aksw.org/projects/LODStats), but I guess we should
place that link more prominently.

Thanks for your valuable feedback,

Sören

Am 02.02.2012 12:42, schrieb Richard Cyganiak:
 
 On 2 Feb 2012, at 11:04, Sören Auer wrote:
 A demo installation collecting statistics from all LOD datasets
 registered on CKAN is available from:

 http://stats.lod2.eu
 
 One more thing. Can I search for the stats for a particular datasets somehow?
 
 Let's say I want to see the stats for the prefix-cc dataset (or rather, check 
 if LODStats was able to produce stats at all or whether there was an error). 
 Looks like currently I have to manually page through all packages to find it.
 
 Hacking the URL also doesn't work as you're not using Data Hub IDs in your 
 URLs but your own numeric identifiers for the datasets.
 
 It would be great if you had URLs like stats.lod2.eu/rdfdoc/view/prefix-cc as 
 redirects/aliases for http://stats.lod2.eu/rdfdoc/view/119 because that would 
 make it possible to link to this statistics page from other places, like 
 directly from CKAN, or from an alternative version of the LOD Cloud diagram 
 that colors datasets according to their interoperability.
 
 Finally, the stats.lod2.eu site lacks an About page that explains the purpose 
 of the site, sketches the process that is used to generate the stats, states 
 the authors/credits, and states where I'm supposed to send my feature 
 requests ;-)
 
 Best,
 Richard
 




Fwd: Panton Fellowships

2012-01-25 Thread Sören Auer
Dear LODers,

Thought this could be interesting for some of us:

Funded by the Open Society Institute, two Panton Fellowships will be
awarded by Open Knowledge Foundation to scientists who actively promote
open data in science. The Fellowships are open to all, and would
particularly suit graduate students and early-stage career scientists.

See attached email from OKFN's Laura.

Best,

Sören
---BeginMessage---
Dear all,

The OKFN is delighted to announce the launch of the Panton Fellowships!

Funded by the Open Society Institute, two Panton Fellowships will be
awarded to scientists who actively promote open data in science.

The Fellowships are open to all, and would particularly suit graduate
students and early-stage career scientists. Fellows will have the freedom
to undertake a range of activities, which should ideally complement their
existing work. Panton Fellows may wish to explore solutions for making data
open, facilitate discussion, and catalyse the open science community.

Fellows will receive £8k p.a. Prospective applicants should send a CV and
covering letter to jobs[@]okfn.org by Friday 24th February.

Full details can be found at [Panton Principles](
http://pantonprinciples.org/panton-fellowships/). You can also see our
[blog post](http://blog.okfn.org/2012/01/25/panton-fellowships-apply-now/).

Please do feel free to circulate these details to interested individuals
and appropriate mailing lists!

Kind regards,
Laura


-- 
Laura Newman
Community Coordinator
Open Knowledge Foundation
http://okfn.org/
Skype: lauranewmanonskype
___
open-science mailing list
open-scie...@lists.okfn.org
http://lists.okfn.org/mailman/listinfo/open-science
---End Message---


ANN: 2nd PUBLINK Linked Data publishing and tooling support action

2011-11-28 Thread Sören Auer
Dear all,

After the successful completion of the first PUBLINK iteration [1], we
are now launching a second round, which also includes specific support
for Linked Data tool developers who want to integrate their tools with
the Debian-based LOD2 Stack [2]. The PUBLINK Linked Open Data
Consultancy is backed by the consortia of the EU-FP7 LOD2 [3] and LATC
[4] projects.

In order to lower the entrance barrier for potential data publishers and
to improve the integration and interoperation of tools, we offer the
*free* PUBLINK Linked Open Data Consultancy to up to five selected
organizations, supporting their data publishing or tool integration
projects with an overall effort of 10-20 days each, including support
from highly skilled Linked Data professionals.

More information about PUBLINK and instructions on how to apply can be
found at:

http://lod2.eu/Article/Publink.html

PUBLINK aims to support Linked Data tool developers as well as
organizations (e.g. governmental agencies, data providers, public
administrations) that are interested in publishing large amounts of
structured information of potentially high public interest.
Applications from interested organizations are being accepted until
December 31st, 2011.

Please forward this announcement to any potential stakeholders in this
domain you might know.

On behalf of the LOD2 and LATC consortia,

Sören

[1] http://lod2.eu/Article/Results_2010.html
[2] http://lod2.eu/Article/BlogPost/677-first-release-of-the-lod2-stack.html
[3] http://lod2.eu
[4] http://latc-project.eu


--
Sören Auer, AKSW/Computer Science Dept., University of Leipzig
http://www.informatik.uni-leipzig.de/~auer,  Skype: soerenauer



[CfP] WWW 2012 - Semantic Web Track - ***Abstract due tomorrow***

2011-10-30 Thread Sören Auer
LAST CALL FOR PAPERS

21st INTERNATIONAL WORLD WIDE WEB CONFERENCE
  (WWW 2012 - http://www2012.org)

 April 16-20, 2012
   Lyon, France

Abstracts for papers due: *Monday, November 1st, 2011*
Papers due: November 7th, 2011


SEMANTIC WEB TRACK

One of the biggest challenges in Computer Science is the exploitation of
the Web as a global platform for data and information integration as
well as for intelligent search and querying. Semantic Web technologies
and particularly the Linked Data paradigm have evolved as powerful
enablers for the enrichment of the current document-oriented Web with a
Web of interlinked data and, ultimately, the Semantic Web. To
facilitate this transition, many aspects of distributed data and
information management need to be adapted, advanced and integrated.


TOPICS

We invite original contributions on topics related to the Semantic Web,
including (but not limited to):

* Linked Data on the Web as well as in the enterprise
* RDF stores and repositories
* RDF data publishing and access
* Querying and searching Semantic Web Data, including combinations with
  statistics, natural language, soft computing and distributed
  approaches
* Methods for linking, integrating and federating Data on the Web
* Semantic annotation and metadata
* Community and social mechanisms for the definition of semantics of
  data, and metadata and ontology creation
* Ontologies, Reasoning and representation languages (such as OWL), as
  they pertain to Web needs
* Re-purposing of data, information, and multimedia using semantics
* Applications of Semantic Web formats for enterprises, learning and
  science
* Other novel applications that exploit structured Web data sources
* Blogs, wikis, browsers, crawlers, harvesters, content management
  systems, search engines and other applications that produce and
  consume Semantic Web Data
* Mobile and ubiquitous applications exploiting semantics
* User interfaces for interacting with Semantic Web Data
* Methodologies for the engineering of Semantic Web applications

See also: http://www2012.org/?page_id=1569


IMPORTANT DATES

November 1st, 2011  Abstracts for papers due
November 7th, 2011  Papers due
January 30th, 2012  Paper notifications out
February 28th, 2012 Camera ready papers due
April 16th, 2012Conference begins


TRACK CHAIRS

* Sören Auer, Universität Leipzig, Germany
* Axel Polleres, Siemens AG, Austria



[Ann] First public release of the LOD2 Stack

2011-10-06 Thread Sören Auer
Dear all,

The LOD2 consortium [1] is happy to announce the first release of the
LOD2 stack available at: http://stack.lod2.eu

The LOD2 stack is an integrated distribution of aligned tools which
support the Linked Data life-cycle from extraction and authoring over
enrichment, interlinking and fusing to visualization. The stack comprises
new and substantially extended tools from LOD2 members and 3rd parties.
The LOD2 stack is organized as a Debian package repository, making the
tool stack easy to install on any Debian-based system (e.g. Ubuntu).

A quick look at the stack and its components is available via the online
demo at: http://demo.lod2.eu/lod2demo

For more thorough experimentation a virtual machine image (VMware or
VirtualBox) with pre-installed LOD2 Stack can be downloaded from:
http://stack.lod2.eu/VirtualMachines/

More details and the instructions on installing the LOD2 Stack locally
are available in the HOWTO Start document [2]. This first release of the
LOD2 stack contains the following components:

   * LOD2 demonstrator, the root package (TenForce/LOD2)
   * Virtuoso, RDF storage and data management platform (Openlink)
   * OntoWiki, semantic data wiki authoring tool (ULEI)
   * SigmaEE, multi-source exploration tool (DERI)
   * D2R, RDF wrapper for SQL databases (FUB)
   * Silk, interlinking engine (FUB)
   * ORE, ontology repair and enrichment toolkit (ULEI)

PoolParty (taxonomy manager by SWCG), Spotlight (annotating texts wrt.
DBpedia by FUB) and CKAN/thedatahub.org were integrated as online
services. A selection of datasets has been packaged and is available in
the LOD2 Stack repository.

The LOD2 stack is an open platform for Linked Data components. We are
happy to welcome new components. Detailed instructions on how to integrate
your component into the LOD2 Stack as a Debian package are available in
the HOWTO Contribute [3]. From now on we will regularly release improved
and extended versions of the LOD2 Stack. Major releases are expected for
Fall 2012 and 2013. For assistance or any questions related to the
LOD2 stack, contact support-st...@lod2.eu

Special thanks for their substantial contributions to this release go to
Bert van Nuffelen, Sebastian Tramp, Robert Isele, Hugh Williams, and
Jens Lehmann.

On behalf of the LOD2 consortium,

Sören Auer

[1] http://lod2.eu
[2] http://lod2-stack.googlecode.com/svn/trunk/documents/HowToStart.pdf
[3] http://lod2-stack.googlecode.com/svn/trunk/documents/HowToContribute.pdf


--
Working on LOD2 - from Linked Data 2 Knowledge: http://lod2.eu
--
Sören Auer, AKSW/Computer Science Dept., University of Leipzig
http://www.informatik.uni-leipzig.de/~auer,  Skype: soerenauer



[Ann] Indian-summer school on Linked Data (ISSLOD 2011)

2011-07-18 Thread Sören Auer
  *Indian-Summer School on Linked Data*
Leipzig, Sep 12-18, 2011
 http://isslod.lod2.eu/

ISSLOD takes place in late summer with hopefully still a lot of Indian
Summer (i.e. Altweibersommer / Бабье лето) sunshine rays.

The Linked Data methodology is a light-weight approach to facilitate the
transition from the document Web to the Web of Data and ultimately a
Semantic Web. With the wide availability of Linked Data tools and
knowledge bases, a steadily growing R&D community and industrial
applications, the Linked Data paradigm has already become a crucial
building block of the Web architecture.

ISSLOD is primarily intended for postgraduate (PhD or MSc) students,
postdocs, and other young researchers investigating aspects related to
the Semantic Data Web. The Summer School will also be open to senior
researchers wishing to learn about Semantic Web issues related to their
own fields of research.

For further details please visit: http://isslod.lod2.eu

ISSLOD is organized by the EU-FP7 project LOD2 - Creating Knowledge out
of Interlinked Data. Lecturers comprise distinguished experts from LOD2
member organizations as well as invited speakers, the majority of whom
will - apart from their lectures - also be present for the duration of
the school to interact with students. Interaction with senior
researchers and establishing contacts among young researchers is a main
focus of the school, which will be supported through social activities
and an interactive, amicable atmosphere.

   ISSLOD Application Deadline: 30 July 2011
   Notifications:5 August 2011
   ISSLOD:   12-18 September 2011

There will be a limited number of student grants available. Details of
the registration process will be announced on the Web site, after the
application deadline. We will keep the registration fee low (175 EUR)
and provide reasonable accommodation packages (less than 40 EUR per
night) for students.

On behalf of the LOD2 project and AKSW research group

Sören



Indian-summer school on Linked Data (ISSLOD 2011)

2011-06-07 Thread Sören Auer
  *Indian-Summer School on Linked Data*
Leipzig, Sep 12-18, 2011
 http://lod2.eu/ISSLOD

ISSLOD takes place in late summer with hopefully still a lot of Indian
Summer (i.e. Altweibersommer / Бабье лето) sunshine rays.

The Linked Data methodology is a light-weight approach to facilitate the
transition from the document Web to the Web of Data and ultimately a
Semantic Web. With the wide availability of Linked Data tools and
knowledge bases, a steadily growing R&D community and industrial
applications, the Linked Data paradigm has already become a crucial
building block of the Web architecture.

ISSLOD is primarily intended for postgraduate (PhD or MSc) students,
postdocs, and other young researchers investigating aspects related to
the Semantic Data Web. The Summer School will also be open to senior
researchers wishing to learn about Semantic Web issues related to their
own fields of research.

For further details please visit: http://lod2.eu/ISSLOD/

ISSLOD is organized by the EU-FP7 project LOD2 - Creating Knowledge out
of Interlinked Data. Lecturers comprise distinguished experts from LOD2
member organizations as well as invited speakers, the majority of whom
will - apart from their lectures - also be present for the duration of
the school to interact with students. Interaction with senior
researchers and establishing contacts among young researchers is a main
focus of the school, which will be supported through social activities
and an interactive, amicable atmosphere.

   ISSLOD Application Deadline: 30 July 2011
   Notifications:5 August 2011
   ISSLOD:   12-18 September 2011

There will be a limited number of student grants available. Details of
the registration process will be announced on the Web site, after the
application deadline. We will keep the registration fee low (175 EUR)
and provide reasonable accommodation packages (less than 40 EUR per
night) for students.




CfP 2nd Workshop on Web Science (WSW2011) “Open Data and Open Communities”

2011-04-06 Thread Sören Auer
** 2nd Workshop on Web Science “Open Data and Open Communities” **

   Co-located with INFORMATIK2011, KI2011, MATES 2011
   *4th-10th of Oct 2011, Berlin, Germany*

   http://sites.google.com/site/webscienceworkshop2011/

In this 2nd Web Science Workshop (WSW) we aim to bring researchers,
developers and practitioners together to discuss the current state of
Web Science and individual aspects of this growing research area. In
particular, we aim to broaden the discussion by considering recently
emerging topics such as Open Data, ICT support for policy analysis and
modelling, as well as Web-based citizen involvement in E-Government. In
addition, with this workshop we specifically address the areas of
Computational Network Analysis, Web Intelligence, Social Computing and
the Semantic Web.


*TOPICS OF INTEREST*

* Integrating computational network analysis and semantic web
  techniques, for example to enhance the mainly structure-based network
  analysis by semantic information
* Novel visualization techniques for topic related data
* Information diffusion on the Web
* Web and Web application governance
* Open Governmental Data - employing Data Web technologies to bring
  citizens and governments closer together
* Technology support for policy modeling, e.g. spatial planning
* Web-based citizen involvement in E-Government
* Case studies of communities such as Wikipedia, Facebook, Twitter,
  World of Warcraft, open source software as well as empirical findings
  in social computing-related applications


*IMPORTANT DATES*

* April 30, 2011: Submission Date
* May 31, 2011: Notification Date
* July 1, 2011: Submission final version of accepted papers


*SUBMISSIONS*

Submitted papers should not exceed the maximum length of eight (8) pages
and should be uploaded in PDF format using EasyChair:
http://www.easychair.org/conferences/?conf=informatik2011. Papers will be
evaluated anonymously by two independent evaluators from the program
committee. Accepted papers are published in the proceedings of the GI
conference. The paper layout should conform to the guidelines provided by
the "Lecture Notes in Informatics" (LNI) series
(http://www.gi-ev.de/service/publikationen/lni/). Submissions will be
accepted in German and English (preferred language).


*ORGANIZER*

* Claudia Müller-Birn, NBI AG, FU Berlin (main contact)
* Sören Auer, AKSW, Universität Leipzig
* Daniel Dietrich, TU Berlin department of computer science and society
* York Sure, GESIS - Leibniz Institute for the Social Sciences



ICWE 2011: Last Call for Papers (special focus Web Data Engineering)

2011-02-08 Thread Sören Auer

  11th International Conference on Web Engineering (ICWE 2011)

   http://icwe2011.webengineering.org
June 20-24, 2011, Paphos, Cyprus

 *** Last Call for Papers ***

 *Submission deadline extended to February 21, 2011 (23:59 Hawaii Time)*

The International Conference on Web Engineering (ICWE) aims at promoting
scientific and practical excellence on Web Engineering, and at bringing
together researchers and practitioners working in technologies,
methodologies, tools, and techniques used to develop and maintain
Web-based applications leading to better systems, and thus to enabling
and improving the dissemination and use of content and services through
the Web. A special focus of ICWE 2011 will be Web Data Engineering.


*Topics of Interest*

The conference fosters original submissions covering, but not restricted
to the following topics of interest:

Web application engineering
   * Processes and methods for Web application development
   * Conceptual modeling of Web applications
   * Model-driven Web application development
   * Domain-specific languages for Web application development
   * Component-based Web application development
   * Web application architectures and frameworks
   * Rich Internet Applications
   * Mashup development and end user Web programming
   * Patterns for Web application development and pattern mining
   * Web content management and data-intensive Web applications
   * Web usability and accessibility
   * I18N of Web applications and multi-lingual development
   * Testing and evaluation of Web applications
   * Deployment and usage analysis of Web applications
   * Performance modeling, monitoring, and evaluation
   * Empirical Web engineering
   * Web quality and Web metrics
   * Adaptive, contextualized and personalized Web applications
   * Mobile Web applications and device-independent delivery

Web service engineering
   * Web service engineering methodologies
   * Web Service-oriented Architectures
   * Semantic Web services
   * Web service-based architectures and applications
   * Quality of service and its metrics for Web applications
   * Inter-organizational Web applications
   * Ubiquity and pervasiveness
   * Linked Data Services

Web data engineering
   * Semantic Web engineering
   * Web 2.0 technologies
   * Social Web applications
   * Web mining and information extraction
   * Linked Data
   * Web data linking, fusion
   * Information quality assessment
   * Data repair strategies
   * Dataset dynamics
   * Dataset introspection
   * Linked Data consumption, visualisation and exploration
   * Deep Web
   * Web science and Future Internet applications


*Submission instructions*

Authors of the research and industrial papers track must explain the
relationship of their work to the Web Engineering discipline in their
submissions. Research papers must comprise substantial innovative
discussion with respect to the related work and must be well motivated
and presented.

   * Extension: Papers must not be longer than 15 (fifteen) pages.
   * Format: according to the LNCS guidelines.
   * Submission: http://www.easychair.org/conferences/?conf=icwe2011


*Publishing of accepted works*

The conference proceedings will be published by Springer-Verlag as an
LNCS volume. Official proceedings will include: full papers (15 pages),
demonstration papers (4 pages) and posters (4 pages). Workshop
papers and contributions to the doctoral consortium will be published
separately. Final versions of accepted papers must strictly adhere to
the LNCS guidelines and must include a printable file of the
camera-ready version, as well as all source files thereof. No changes
to such formatting rules are permitted. Authors of accepted papers
must also download and sign a copyright form that will be made
available on the Web site of the conference. Each paper requires at
least one full registration to the main conference. Selected papers
will be invited to submit an extended version to a special issue of the
JCR-indexed Journal Of Web Engineering (pending agreement).


*Important Dates*

   * Submission deadline: February 21, 2011 (23:59 Hawaii Time)
   * Notification of acceptance: April 14, 2011
   * Camera-ready version: April 28, 2011


*Program Chairs*

   * Oscar Diaz, University of the Basque Country, Spain
   * Sören Auer, Universität Leipzig, Germany

In case of inquiries, please contact the program chairs at:
pcchairs [at] icwe2011.webengineering.org


*Conference Committee*

General Chair
   * George A. Papadopoulos, University of Cyprus, Cyprus
Industrial Track Chair
   * Andreas Doms, SAP Research, Germany
Workshop Chairs:
   * Nora Koch, LMU and Cirquent GmbH, Germany
   * Andreas Harth, KIT, Germany
Tutorial Chairs
   * Cesare Pautasso, University of Lugano, Switzerland
Demo & Poster Chairs
   * Axel Ngonga, Universität Leipzig
   * Pelechano Vicente, Universidad Politécnica de Valencia
Doctoral Consortium Chairs

ICWE 2011: Second Call for Papers

2011-01-17 Thread Sören Auer

  11th International Conference on Web Engineering (ICWE 2011)

   http://icwe2011.webengineering.org
June 20-24, 2011, Paphos, Cyprus

 *** Second Call for Papers ***


The International Conference on Web Engineering (ICWE) aims at promoting
scientific and practical excellence on Web Engineering, and at bringing
together researchers and practitioners working in technologies,
methodologies, tools, and techniques used to develop and maintain
Web-based applications leading to better systems, and thus to enabling
and improving the dissemination and use of content and services through
the Web. A special focus of ICWE 2011 will be Web Data Engineering.


*Topics of Interest*

The conference fosters original submissions covering, but not restricted
to the following topics of interest:

Web application engineering
   * Processes and methods for Web application development
   * Conceptual modeling of Web applications
   * Model-driven Web application development
   * Domain-specific languages for Web application development
   * Component-based Web application development
   * Web application architectures and frameworks
   * Rich Internet Applications
   * Mashup development and end user Web programming
   * Patterns for Web application development and pattern mining
   * Web content management and data-intensive Web applications
   * Web usability and accessibility
   * I18N of Web applications and multi-lingual development
   * Testing and evaluation of Web applications
   * Deployment and usage analysis of Web applications
   * Performance modeling, monitoring, and evaluation
   * Empirical Web engineering
   * Web quality and Web metrics
   * Adaptive, contextualized and personalized Web applications
   * Mobile Web applications and device-independent delivery

Web service engineering
   * Web service engineering methodologies
   * Web Service-oriented Architectures
   * Semantic Web services
   * Web service-based architectures and applications
   * Quality of service and its metrics for Web applications
   * Inter-organizational Web applications
   * Ubiquity and pervasiveness
   * Linked Data Services

Web data engineering
   * Semantic Web engineering
   * Web 2.0 technologies
   * Social Web applications
   * Web mining and information extraction
   * Linked Data
   * Web data linking, fusion
   * Information quality assessment
   * Data repair strategies
   * Dataset dynamics
   * Dataset introspection
   * Linked Data consumption, visualisation and exploration
   * Deep Web
   * Web science and Future Internet applications


*Submission instructions*

Authors of the research and industrial papers track must explain the
relationship of their work to the Web Engineering discipline in their
submissions. Research papers must comprise substantial innovative
discussion with respect to the related work and must be well motivated
and presented.

   * Extension: Papers must not be longer than 15 (fifteen) pages.
   * Format: according to the LNCS guidelines.
   * Submission: http://www.easychair.org/conferences/?conf=icwe2011


*Publishing of accepted works*

The conference proceedings will be published by Springer-Verlag as an
LNCS volume. Official proceedings will include: full papers (15 pages),
demonstration papers (4 pages) and posters (4 pages). Workshop
papers and contributions to the doctoral consortium will be published
separately. Final versions of accepted papers must strictly adhere to
the LNCS guidelines and must include a printable file of the
camera-ready version, as well as all source files thereof. No changes
to such formatting rules are permitted. Authors of accepted papers
must also download and sign a copyright form that will be made
available on the Web site of the conference. Each paper requires at
least one full registration to the main conference. Selected papers
will be invited to submit an extended version to a special issue of the
JCR-indexed Journal Of Web Engineering (pending agreement).


*Important Dates*

   * Submission deadline: February 14, 2011 (23:59 Hawaii Time)
   * Notification of acceptance: April 14, 2011
   * Camera-ready version: April 28, 2011


*Program Chairs*

   * Oscar Diaz, University of the Basque Country, Spain
   * Sören Auer, Universität Leipzig, Germany

In case of inquiries, please contact the program chairs at:
pcchairs [at] icwe2011.webengineering.org


*Conference Committee*

General Chair
   * George A. Papadopoulos, University of Cyprus, Cyprus
Industrial Track Chair
   * Andreas Doms, SAP Research, Germany
Workshop Chairs:
   * Nora Koch, LMU and Cirquent GmbH, Germany
   * Andreas Harth, KIT, Germany
Tutorial Chairs
   * Cesare Pautasso, University of Lugano, Switzerland
Demo & Poster Chairs
   * Axel Ngonga, Universität Leipzig
   * Pelechano Vicente, Universidad Politécnica de Valencia
Doctoral Consortium Chairs
   * Peter Dolog, Aalborg University, Denmark,
   * Bernhard Haslhofer

Re: simple LOD browser for a a demo system

2011-01-11 Thread Sören Auer

Am 11.01.2011 18:23, schrieb Tim Finin:

Can anyone recommend software to stand up a simple linked data browser
for a demonstration system we plan on hacking together next week?
What we need is something very simple, much like the code that
produces the HTML for http://dbpedia.org/page/Baltimore.


The code behind the DBpedia resource pages is based on Disco:

http://www4.wiwiss.fu-berlin.de/bizer/ng4j/disco/

If you want something a little less puristic you might also try OntoWiki:

http://ontowiki.net/

It has a class-hierarchy browser, faceted browsing, map views and
customized views, as well as various filter options built in. It also
supports the full range of LOD best practices, from content negotiation
over a SPARQL endpoint and Semantic Pingback to OpenID and FOAF+SSL.
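
For readers new to those best practices, a tiny client-side sketch of content
negotiation (the resource URI is just an example): the same URI serves HTML
to browsers and RDF to tools, depending on the Accept header.

    # Dereference a Linked Data URI and ask for RDF/XML instead of HTML.
    import urllib.request

    uri = "http://dbpedia.org/resource/Baltimore"
    req = urllib.request.Request(uri, headers={"Accept": "application/rdf+xml"})
    with urllib.request.urlopen(req) as resp:       # follows the 303 redirect
        print(resp.geturl())                        # document URL after redirect
        print(resp.headers.get("Content-Type"))     # should be an RDF media type
        data = resp.read()
    print(len(data), "bytes of RDF")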


Best,

Sören



CfP: Position Papers for Linked Data Session at Future Internet Assembly

2010-11-25 Thread Sören Auer

  Linked Data session at the Future Internet Assembly (FIA)

  16th of December, Ghent, Belgium

http://semanticweb.org/wiki/LinkedDataFIA2010 (this call)
http://www.fi-ghent.eu(the FIA in Ghent event)
http://www.future-internet.eu (the general initiative)


Short position papers (1-10 pages LNCS) are due on *30th November 2010*.
(Attendance is appreciated, but authors are *not* required to attend the event.)


The Future Internet has sparked the interest of many different communities.
All of these communities develop specific parts of the infrastructure, which
at some point in time need to be able to interoperate. Unfortunately,
the current Future Internet architecture does not include means to
achieve interoperability at the data level. At the same time, Linked Data
is becoming an accepted best practice for exchanging information in an
interoperable and reusable fashion. Many different communities on the
Internet use Linked Data standards to provide and exchange interoperable
information. This is strikingly confirmed by the dramatically growing
Linked Data cloud and the currently more than 25 billion facts
represented and interconnected therein, with exponential growth rates
both in terms of data sets and contained data.


The ISO/OSI 7-layer architecture is a conceptual view on networking
architectures. One possible view is to regard Linked Data as an
independent layer in the Internet architecture, on top of the networking
layer but below the application layers, since it provides a common data
model for all applications. This session investigates this view, what
implications it imposes on the Future Internet Architecture, and also how
future architectures and system developments can benefit from this new
layer.


We are looking for position papers regarding the use of Linked Data in
the Future Internet. These can be either concrete current use cases or
envisioned usages for the topics relevant to the Future Internet
(examples include: Internet of Things, embedded systems, FIRE, services,
smart cities, Open Government Data, Future Internet Architecture and
others). The papers provide input for the ongoing discussion on the
role of Linked Data for the Future Internet.



*Submission*

Your position paper should have between 1 and 10 pages. We encourage 
authors to comply with the Springer LNCS format.


Position papers can be submitted until 30th November 2010 (in HTML or 
PDF) via email to futureinter...@semanticweb.org.



*Selection*

The session’s organizers reserve the right to do a relevance check of
submitted position papers and to reject papers that are clearly not
relevant to the topic outlined above.



*Publication*

Submitted position papers will be published on a website related to the 
Future Internet Assembly Linked Data Session and may influence further 
developments in the Future Internet space.


*Session Organiser*

* Sören Auer, a...@informatik.uni-leipzig.de
* Stefan Decker (main contact), stefan.dec...@deri.org
* Manfred Hauswirth, manfred.hauswi...@deri.org



1st CfP: 11th International Conference on Web Engineering (ICWE 2011)

2010-11-15 Thread Sören Auer

  11th *International Conference on Web Engineering* (ICWE 2011)
  --
 http://icwe2011.webengineering.org
  June 20-24, 2011, Paphos, Cyprus


The International Conference on Web Engineering (ICWE) aims at promoting 
scientific and practical excellence on Web Engineering, and at bringing 
together researchers and practitioners working in technologies, 
methodologies, tools, and techniques used to develop and maintain 
Web-based applications leading to better systems, and thus to enabling 
and improving the dissemination and use of content and services through 
the Web. A special focus of ICWE 2011 will be Web Data Engineering.



*Topics of Interest*

The conference fosters original submissions covering, but not restricted 
to the following topics of interest:


Web application engineering
* Processes and methods for Web application development
* Conceptual modeling of Web applications
* Model-driven Web application development
* Domain-specific languages for Web application development
* Component-based Web application development
* Web application architectures and frameworks
* Rich Internet Applications
* Mashup development and end user Web programming
* Patterns for Web application development and pattern mining
* Web content management and data-intensive Web applications
* Web usability and accessibility
* I18N of Web applications and multi-lingual development
* Testing and evaluation of Web applications
* Deployment and usage analysis of Web applications
* Performance modeling, monitoring, and evaluation
* Empirical Web engineering
* Web quality and Web metrics
* Adaptive, contextualized and personalized Web applications
* Mobile Web applications and device-independent delivery

Web service engineering
* Web service engineering methodologies
* Web Service-oriented Architectures
* Semantic Web services
* Web service-based architectures and applications
* Quality of service and its metrics for Web applications
* Inter-organizational Web applications
* Ubiquity and pervasiveness
* Linked Data Services

Web data engineering
* Semantic Web engineering
* Web 2.0 technologies
* Social Web applications
* Web mining and information extraction
* Linked Data
* Web data linking, fusion
* Information quality assessment
* Data repair strategies
* Dataset dynamics
* Dataset introspection
* Linked Data consumption, visualisation and exploration
* Deep Web
* Web science and Future Internet applications


*Submission instructions*

Authors of the research and industrial papers track must explain the 
relationship of their work to the Web Engineering discipline in their 
submissions. Research papers must comprise substantial innovative 
discussion with respect to the related work and must be well motivated 
and presented.


* Extension: Papers must not be longer than 15 (fifteen) pages.
* Format: according to the LNCS guidelines.
* Submission: http://www.easychair.org/conferences/?conf=icwe2011


*Publishing of accepted works*

The conference proceedings will be published by Springer-Verlag as an 
LNCS volume. Official proceedings will include: full papers (15 pages), 
demonstration papers (4 pages) and posters (4 pages). Workshop papers 
and contributions to the doctoral consortium will be published separately.
Final versions of accepted papers must strictly adhere to the LNCS 
guidelines and must include a printable file of the camera-ready 
version, as well as all source files thereof. No changes to such 
formatting rules are permitted. Authors of accepted papers must also 
download and sign a copyright form that will be made available on the 
Web site of the conference. Each paper requires at least one full 
registration to the main conference.
Selected papers will be invited to submit an extended version to a 
special issue of the JCR-indexed Journal Of Web Engineering (pending 
agreement).



*Important Dates*

* Submission deadline: February 14, 2011 (23:59 Hawaii Time)
* Notification of acceptance: April 14, 2011
* Camera-ready version: April 28, 2011


*Program Chairs*

* Oscar Diaz, University of the Basque Country, Spain
* Sören Auer, Universität Leipzig, Germany

In case of inquiries, please contact the program chairs at: pcchairs 
[at] icwe2011.webengineering.org



*Conference Committee*

General Chair
* George A. Papadopoulos, University of Cyprus, Cyprus
Industrial Track Chair
* Andreas Doms, SAP Research, Germany
Workshop Chairs:
* Nora Koch, LMU and Cirquent GmbH, Germany
* Andreas Harth, KIT, Germany
Tutorial Chairs
* Cesare Pautasso, University of Lugano, Switzerland
Demo & Poster Chairs
* Axel Ngonga, Universität Leipzig
* Pelechano Vicente, Universidad Politécnica de Valencia
Doctoral Consortium

Job offer: Data Web developer/content architect at Wolters Kluwer

2010-10-13 Thread Sören Auer

Dear all,

The German branch of the international publisher Wolters Kluwer [1] is
looking for support in the Content Architecture Team in Munich.


The new position will mainly work on the FP7-ICT LOD2 project [2], but
also on other internal and international content projects.


A background in information and/or computer science, familiarity with 
Web 2.0 and Semantic Data Web technologies as well as proper German and 
English language skills are mandatory.


For further information please have a look at the job offer profile (in 
German) available from:


http://lod2.eu/BlogPost/?p=73

Please pass this job offer on to people you consider qualified and 
interested.


--Sören

[1] http://www.wolterskluwer.de
[2] http://lod2.eu



Re: PUBLINK Linked Data Consultancy

2010-10-07 Thread Sören Auer

On 07.10.2010 9:57, Dave Reynolds wrote:

Insofar PUBLINK rather clears the way for commercial linked data service
providers.


But is not working with any breadth of such providers.

I share Georgi's reservations, seems like an odd direction for EU
framework projects to take.


It's not really a fundamental change of direction; our main focus is
research, but we also want to evaluate our results on real data and give
something back to the citizens, which is why we aim to get in touch
with data owners of high public interest and help them a little to move
in the right (i.e. LOD) direction ;-)


If commercial linked data service providers beyond the LOD2/LATC consortia
want to get involved in PUBLINK, we are more than happy about that. Let
me know if you have suggestions for how this could best be implemented.


Sören

PS: Please also keep in mind that PUBLINK is very limited (max. 3-5
data-owning organizations, with ca. 10 person-days of support for each).




Ann: PUBLINK Linked Data Consultancy

2010-10-06 Thread Sören Auer

Dear all,

We are pleased to announce the PUBLINK Linked Open Data Consultancy 
backed by the consortia of the EU-FP7 LOD2 [1] and LATC projects [2].


In order to lower the entrance barrier for potential data publishers, the
LOD2 and LATC consortia offer the *free* PUBLINK Linked Open Data
Consultancy to up to five selected organizations, supporting them with
the publishing of Linked Open Data with an overall effort of 10-20 days
each, including support from highly skilled Linked Data professionals.


More information about PUBLINK can be found at:

http://lod2.eu/Article/Publink.html

With PUBLINK we aim to particularly support organizations (such as
governmental agencies, commercial data providers, public
administrations) that are interested in publishing large amounts of
structured information of potentially high public interest.
Applications from interested organizations are being accepted until
December 20th.


Please forward this announcement to any potential stakeholders in this 
domain you might know.


On behalf of the LOD2 and LATC consortia,

Sören

[1] http://lod2.eu
[2] http://latc-project.eu


--
Sören Auer, AKSW/Computer Science Dept., University of Leipzig
http://www.informatik.uni-leipzig.de/~auer,  Skype: soerenauer



Re: PUBLINK Linked Data Consultancy

2010-10-06 Thread Sören Auer

On 07.10.2010 1:13, Georgi Kobilarov wrote:

So, now the EU also takes that burden off the small linked data
consultancies and businesses.


Not at all! PUBLINK is not aimed at organizations which already
know precisely what they want and are willing to pay for it.


It is aimed more at people in organizations who want to persuade their
decision makers, or at decision makers who need more information or a
showcase in order to ultimately get involved.


Insofar PUBLINK rather clears the way for commercial linked data service 
providers.


Sören



GI INFORMATIK 2010 Workshop on Web Science

2010-04-20 Thread Sören Auer

**Apologies if you receive multiple copies of this CFP**


==INFORMATIK 2010 Workshop on Web Science==
co-located with GI-Jahrestagung 2010

http://aksw.org/WebScienceWorkshop

Web science is often referred to as the science of decentralized
information systems.
While novel technologies such as the Semantic Web, Web services, and cloud
computing are germane to the broad proliferation of Web technologies, we
also need to understand phenomena of the Web in the small as well as in
the large, in order to retain its usefulness and benefit to people. This
is at the center of attention of Web science and includes, besides the
mentioned technological approaches, research related to online
communities, information diffusion on the Web, Web governance, global
network structures beyond the individual communities on the Web, growth
analysis, and incentive and monetization systems.


This workshop provides a platform for researchers and practitioners to 
exchange preliminary results, new concepts and methodologies in this area.


===Topics of interest ===

* Web Governance incl. Provenance, Licensing, Data Security, Access
  Control
* Open Knowledge ecosystems on the Web, such as Open Governmental Data,
  Open Scientific Data
* Information quality assessment
* Quality, coherence and user interaction on the Linked Data Web
* Social computing applications such as collaborative filtering,
  community-based information retrieval and recommendation,
  collaborative bookmarking, tagging and multi-agent systems
* Static and dynamic models of Web structure and Web growth
* Analysis of network structures within and beyond individual
  communities on the Web
* Incentive and monetization systems
* Information diffusion on the Web,
* Web and Web application governance,
* Novel visualisation techniques for Web related data
* Integrating computational network analysis and semantic web
  techniques, for example to enhance the mainly structure-based network
  analysis by semantic information
* Case studies of communities such as Wikipedia, Facebook, Twitter,
  World of Warcraft, open source software as well as empirical findings
  in social computing-related applications

In particular, we aim at collecting a set of requirements, architectural
styles and metaphors for Web science.
The aim of such terminology and figures is to build bridges of
understanding between the different communities, serving the need to
appropriately //analyse// and //develop// social Web applications.


===Important Dates===
* 07.05.2010 **Paper Submission**
* 24.05.2010 **Acceptance Notification**
* 30.06.2010 **Final paper version due**
* 28.09.2010 **Workshop in conjunction with the GI-Jahrestagung**

===Contact and Organisation===

Sören Auer, AKSW, University of Leipzig
Claudia Müller-Birn, Software Research, Carnegie Mellon University
Steffen Staab, WeST, University of Koblenz-Landau

===Website===
http://aksw.org/WebScienceWorkshop




Re: RDF Dataset Notifications

2010-04-16 Thread Sören Auer

On 16.04.2010 10:19, Leigh Dodds wrote:

There's been a fair bit of discussion, and more than a few papers around
dataset notifications recently. I've written up a blog post and a quick
survey of technologies to start to classify the available approaches:


Looks interesting. Can you add links to the individual approaches to the 
post and/or the spreadsheet?

Do you mean Semantic Pingback [1] by ping in the spreadsheet?

--Sören

[1] http://aksw.org/Projects/SemanticPingBack



Re: KIT releases 14 billion triples to the Linked Open Data cloud

2010-04-01 Thread Sören Auer

Hi Denny,

Interesting project.
Why didn't you publish 140 billion triples, by publishing 10 Billion 
numbers, or 1.4 Trillion or 14 Trillion or ...?


Looks like you stopped at 1 Billion:

http://km.aifb.kit.edu/projects/numbers/index.php?number=9
http://km.aifb.kit.edu/projects/numbers/index.php?number=10

I think if we go public with something like this we should stress the 
value for people instead of the sheer size.


Happy Easter to everybody,

Sören


--
*Leipziger Semantic Web Tag* am 6. Mai: http://aksw.org/LSWT

--
Sören Auer, AKSW/Computer Science Dept., University of Leipzig
http://www.informatik.uni-leipzig.de/~auer,  Skype: soerenauer



Re: KIT releases 14 billion triples to the Linked Open Data cloud

2010-04-01 Thread Sören Auer

On 01.04.2010 12:35, Sören Auer wrote:

I think if we go public with something like this we should stress the
value for people instead of the sheer size.


But as an April Fool's joke the value is indeed clear ;-)

Sören



Semantic Pingback

2010-03-15 Thread Sören Auer

Hi all,

Sebastian announced the Semantic Pingback approach and 
implementations last week, unfortunately with a missing subject header. I 
would like to draw your attention to Semantic Pingback again, since I 
consider it a crucial building block for the Linked Data Web:


http://aksw.org/Projects/SemanticPingBack

* it is downward compatible with the conventional blogosphere Pingback
* it helps data publishers to keep their data updated and interlinked
* it gives direct benefit to data publishers (e.g. usage notifications)

Please consider adding Semantic Pingback support to your published 
datasets and semantic web tools.
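
Since the approach is downward compatible with the conventional Pingback
protocol, a plain XML-RPC pingback.ping call is already enough to notify a
Semantic Pingback server. A minimal Python sketch (the endpoint and the
two resource URIs are placeholders, not actual services):

  # Notify a (Semantic) Pingback server that SOURCE now links to TARGET,
  # using the conventional XML-RPC Pingback interface.
  import xmlrpc.client

  PINGBACK_ENDPOINT = "http://example.org/pingback/xmlrpc"  # assumed endpoint
  SOURCE = "http://mydata.example.org/resource/alice"       # linking resource
  TARGET = "http://otherdata.example.org/resource/bob"      # linked-to resource

  server = xmlrpc.client.ServerProxy(PINGBACK_ENDPOINT)
  try:
      # pingback.ping(sourceURI, targetURI) as defined by the Pingback spec
      reply = server.pingback.ping(SOURCE, TARGET)
      print("Pingback accepted:", reply)
  except xmlrpc.client.Fault as fault:
      print("Pingback rejected:", fault.faultCode, fault.faultString)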


Best,

Sören

--
*Leipziger Semantic Web Tag* am 6. Mai: http://aksw.org/LSWT

--
Sören Auer, AKSW/Computer Science Dept., University of Leipzig
http://www.informatik.uni-leipzig.de/~auer,  Skype: soerenauer



[JOB] 2 Doctorate and 1 PostDoc position at AKSW / Uni Leipzig

2010-02-03 Thread Sören Auer
For collaborative research projects in the area of Linked Data 
technologies and the Semantic Web, the research group Agile Knowledge 
Engineering and Semantic Web (AKSW) at Universität Leipzig opens the 
following positions:



 *1 Postdoctoral Researcher (TV-L E13/14)*

The ideal candidate holds a doctoral degree in Computer Science or a 
related field and is able to combine theoretical and practical aspects 
in her/his work. The candidate is expected to build up a small team by 
successfully competing for funding, supervising doctoral students, and 
collaborating with industry. Fluent English communication and software 
technology skills are fundamental requirements. The candidate should 
have a background in at least one of the following fields:


* semantic web technologies and linked data
* knowledge representations and ontology engineering
* database technologies and data integration
* HCI and user interface design for Web/multimedia content

The position starts as soon as possible, is open until filled and will 
initially be granted for two years, with the possibility of extension.



 *2 Doctoral Students (50% TV-L E13 or equivalent stipend)*

The ideal candidate holds an MS degree in Computer Science or a related 
field and is able to consider both theoretical and practical
implementation aspects in her/his work. Fluent English communication and 
programming skills are fundamental requirements. The candidate should 
have experience and commitment to work on a doctoral thesis in one of 
the following fields:


* semantic web technologies and linked data
* knowledge representations and ontology engineering
* database technologies and data integration
* HCI and user interface design for Web/multimedia content

The position starts as soon as possible and will initially be granted 
for one year, with an extension to three years overall.



HOW TO APPLY

Excellent candidates are invited to apply with:
* Curriculum vitae and copies of degree certificates/transcripts,
* Writing samples/copies of relevant scientific papers (e.g. thesis),
* Letters of recommendation.

Further information can also be found at: http://aksw.org/Jobs

Please send your application in PDF format indicating in the subject
'Application for PhD/PostDoc position' to a...@uni-leipzig.de.



--
Sören Auer - University of Leipzig - Dept. of Computer Science
http://www.informatik.uni-leipzig.de/~auer, +49 (341) 97-32323



Re: [Ann] LESS - Content Syndication based on Linked Data

2010-01-21 Thread Sören Auer

On 21.01.2010 10:10, Pierre-Antoine Champin wrote:

You may be interested in having a look at


We did ;-)  - e.g. it is referenced in the related work section of our 
report on LESS [1].


Indeed the aims of T4R and LESS are very similar. However, LESS also 
focuses on sharing and collaboration on templates, and it is very much
aligned with the Linked Data paradigm (e.g. LESS dynamically 
dereferences additional resources).
We were actually thinking about supporting different template languages 
(in addition to our LeTL) at a later stage and T4R might be an 
interesting candidate.


--Sören

[1] http://www.informatik.uni-leipzig.de/~auer/publication/semtem.pdf



[Ann] LESS - Content Syndication based on Linked Data

2010-01-20 Thread Sören Auer

Hi all,

On behalf of the AKSW research group [1] and Netresearch GmbH [2] I'm 
very pleased to announce LESS - an end-to-end approach for the 
syndication and use of linked data based on the definition of 
visualization templates for linked data resources and SPARQL query results.


Such syndication templates are edited, published and shared by using 
LESS' collaborative Web platform. Templates for common types of entities 
can then be combined with specific, linked data resources or SPARQL 
query results and integrated into a wide range of applications, such as 
personal homepages, blogs/wikis, mobile widgets etc.
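
To give a rough impression of the underlying idea, here is a generic
Python sketch that fetches SPARQL query results and fills them into a
trivial textual template. Note that this is not LESS' own LeTL template
language; endpoint and query are merely examples:

  # Fetch SPARQL SELECT results (JSON format) and render them with a
  # simple string template.
  import json, urllib.parse, urllib.request

  ENDPOINT = "http://dbpedia.org/sparql"   # example SPARQL endpoint
  QUERY = """
  PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
  SELECT ?label WHERE {
    <http://dbpedia.org/resource/Leipzig> rdfs:label ?label .
    FILTER (lang(?label) = 'en')
  }
  """

  url = ENDPOINT + "?" + urllib.parse.urlencode({"query": QUERY})
  req = urllib.request.Request(url,
      headers={"Accept": "application/sparql-results+json"})
  with urllib.request.urlopen(req) as resp:
      results = json.load(resp)

  # the "template" is just a format string here
  template = "<li>{label}</li>"
  for binding in results["results"]["bindings"]:
      print(template.format(label=binding["label"]["value"]))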


LESS and further information and documentation can be found at:

http://less.aksw.org

Particular thanks go to Raphael Doering (Netresearch) who performed most 
of the development work and to Sebastian Dietzold (AKSW) for 
contributing in various ways.


Cheers,

Sören Auer


[1] http://aksw.org
[2] http://netresearch.de
--

--
Sören Auer, AKSW/Computer Science Dept., University of Leipzig
http://www.informatik.uni-leipzig.de/~auer,  Skype: soerenauer



[cfp] 5th Open Knowledge Conference (OKCon 2010)

2009-12-03 Thread Sören Auer
OKCon, now in its fifth year, is the interdisciplinary conference that 
brings together individuals from across the open knowledge spectrum for 
a day of presentations and workshops.


Open knowledge (http://opendefinition.org) promises significant social 
and economic benefits in a wide range of areas from governance to 
science, culture to technology. Opening up access to content and data 
can radically increase access and reuse, improving transparency, 
fostering innovation and increasing societal welfare.


In addition to high profile initiatives such as Wikipedia, OpenStreetMap 
and the Human Genome Project, there is enormous growth among open 
knowledge projects and communities at all levels. Moreover, in the last 
year, many governments across the world have begun opening up their data.


And it doesn't stop there. In academia, open access to both publications 
and data has been gathering momentum, and similar calls to open up 
learning materials have been heard in education. Furthermore, this 
gathering flood of open data and content is the creator and driver of 
massive technological change. How can we make this data available, how 
can we connect it together, how can we use it to collaborate and share our 
work?


 * where: London, UK
 * when: Saturday 24th April, 2010
 * www: http://www.okfn.org/okcon/
 * last year: http://www.okfn.org/okcon/2009/
 * cfp: http://www.okfn.org/okcon/cfp/ (deadline: Jan 31st 2010)
 * hashtag: #okcon2010


TOPICS

We welcome proposals on any aspect of creating, publishing or reusing 
content or data that is open in accordance with 
http://opendefinition.org. Topics include but are not limited to:


Technology

* Semantic Web and Linked Data in relation to open knowledge
* Platforms, methods and tools for creating, sharing and curating
  open knowledge
* Light-weight, adaptive interaction models
* Open, decentralized social network applications
* Open geospatial data

Law, Society and Democracy

* Open Licensing, Legal Tools and the Public Domain
* Open government data and content (public sector information)
* Open knowledge and international development
* Opening up access to the law

Culture and Education

* Open educational tools and resources
* Business models for open content
* Incentives and rewards for open-knowledge contributors
* Open textbooks
* Public domain digitisation initiatives

Science and Research

* Opening up scientific data
* Supporting scientific workflows with open knowledge models
* Open models for scientific innovation, funding and publication
* Tools for analysing and visualizing open data
* Open knowledge in the humanities


IMPORTANT DATES

 * Submission deadline: January 31st 2010
 * Notification of acceptance: March 1st
 * Camera-ready papers due: March 31st
 * OKCon: April 24th 2010


SUBMISSION DETAIL

We are accepting three types of submissions:

1. Full papers of 5-10 pages describing novel strategies, tools, 
services or best-practices related to open knowledge
2. Extended talk abstracts of 2-4 pages focusing on novel ideas, ongoing 
work and upcoming research challenges
3. Proposals for short talks and demonstrations

OKCon will implement an open submission and reviewing process. To make a 
submission visit:


* http://www.okfn.org/okcon/submit/

Depending on the assessment of the submissions by the programme 
committee and external reviewers, submissions will be accepted either as 
full, short or lightning/poster presentations.


Proceedings of OKCon will be published at http://ceur-ws.org. If you 
want your submission to be included in the conference proceedings, you 
have to prepare a manuscript of your submission according to the LNCS style.





Re: RDF Update Feeds

2009-11-17 Thread Sören Auer

Damian Steer wrote:

There have been a few suggestions over the years. [1] immediately jumps
to mind, for example.


We also integrated functionality for publishing Linked Data updates in 
Triplify [1]. It is similar to Talis' changeset approach, but works more 
like publishing a hierarchically structured update log as linked data 
itself. Details can be found here:


http://triplify.org/vocabulary/update

Sören

[1] http://triplify.org/

--

--
Sören Auer, AKSW/Computer Science Dept., University of Leipzig
http://www.informatik.uni-leipzig.de/~auer,  Skype: soerenauer



[Ann] Triplify 0.7 released

2009-10-30 Thread Sören Auer

Hi all,

On behalf of AKSW (http://aksw.org), I'm pleased to announce the 
release of Triplify version 0.7.


Triplify (http://triplify.org) is a light-weight tool for publishing 
relational databases as RDF and Linked Data.


This release includes in particular:

* a *metadata extension* created by Olaf Hartig for generating and
  representing provenance information

* support for *Extract-Transform-Load (ETL) cycles*, since Triplify can
  be called directly from the command line.

* extension of the default behavior for mapping URIs to SQL queries by
  using *regular expressions to match request URLs*.

More information can also be found on the documentation page:

http://triplify.org/Documentation

Special thanks go to Sebastian Dietzold and Soren Roug.


Sören

--
Sören Auer, AKSW Research Group, InfAI / University of Leipzig
http://www.informatik.uni-leipzig.de/~auer,  Skype: soerenauer



[Ann] Triplification Challenge 2009 Winners

2009-09-07 Thread Sören Auer

Hi all,

Last Friday, the winners of this year's Triplification Challenge were
announced at I-Semantics 2009 in Graz. The winners are:


* 1st prize: Anja Jentzsch, Jun Zhao, Oktie Hassanzadeh, Kei-Hoi Cheung, 
Matthias Samwald, Bo Andersson with *Linking Open Drug Data*


* 2nd prize: Bernhard Schandl with *TripFS*: Exposing File Systems as 
Linked Data


* 3rd prize: Matthias Quasthoff, Sebastian Hellmann, Konrad Höffner 
with Standardized *Multilingual Language Resources* for the Web of Data: 
http://corpora.uni-leipzig.de/rdf


We received a number of very high quality submissions and decided to 
award two honorable mentions to:


* Danh Le Phuoc with *SensorMasher*: publishing and building mashup of 
sensor data
* Andreas Koller with SKOS Thesaurus Management based on Linked Data 
with *Poolparty*


Links to the submissions are available from:

http://blog.aksw.org/2009/triplification-challenge-2009-winners/

We would like to thank the Triplification Challenge 2009 sponsors *Ontos AG* 
(http://www.ontos.com/), *Punkt.NetServices* 
(http://poolparty.punkt.at/) and *DERI* (http://www.deri.ie/) for their 
kind support.


On behalf of the challenge organizers,

Sören

--

--
Sören Auer, AKSW/Computer Science Dept., University of Leipzig
http://www.informatik.uni-leipzig.de/~auer,  Skype: soerenauer



Re: [Ann] LinkedGeoData.org

2009-07-19 Thread Sören Auer

Ian Davis wrote:

Very nice. How long do you think it will take for the entire dataset
to be available?


The complete OSM dataset, amounting to roughly 3 billion triples, is now available from:

http://linkedgeodata.org/Datasets

As already noted earlier, however, for most use cases the LGD Elements 
dataset might be the more interesting and manageable one.


--Sören



[Ann] LinkedGeoData.org

2009-07-08 Thread Sören Auer

Dear Colleagues,

On behalf of the AKSW research group [1] I'm pleased to announce the 
first public version of the LinkedGeoData.org datasets and services.


LinkedGeoData is a comprehensive dataset derived from the OpenStreetMap 
database covering RDF descriptions of more than 350 million spatial 
features (i.e. nodes, ways, relations).


LinkedGeoData currently comprises RDF dumps, Linked Data and REST 
interfaces, links to DBpedia as well as a prototypical user interface 
for linked-geo-data browsing and authoring.
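
As a quick illustration of the Linked Data interface, every resource URI
can be dereferenced with content negotiation to obtain its RDF
description. A small Python sketch (the node URI below is only a
placeholder; please check the website for the actual URI scheme):

  # Dereference a LinkedGeoData resource and ask for RDF/XML.
  import urllib.request

  resource = "http://linkedgeodata.org/triplify/node1"   # placeholder URI
  req = urllib.request.Request(resource,
      headers={"Accept": "application/rdf+xml"})
  with urllib.request.urlopen(req) as resp:
      print(resp.headers.get("Content-Type"))
      print(resp.read().decode("utf-8")[:500])   # first part of the description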


More information can be found at: http://linkedgeodata.org

Best,

Sören Auer


[1] http://aksw.org

--
Sören Auer, AKSW/Computer Science Dept., University of Leipzig
http://www.informatik.uni-leipzig.de/~auer,  Skype: soerenauer



Re: [Ann] LinkedGeoData.org

2009-07-08 Thread Sören Auer

Ian Davis wrote:
 Very nice. How long do you think it will take for the entire dataset
 to be available?

That might take another week or so, but for most use cases the elements 
data set should be sufficient, since it contains the most interesting 
information.
I guess the complete dataset will be a real challenge for most triple 
stores - not that they won't be able to store the data, but efficient 
querying will be very challenging and I even have some doubts that it is 
reasonable to use this data with a triple store at all. But we will try 
to make it available anyway ;-)


  Open streetmap are voting soon on whether to adopt the open data
 commons sharealike database license. If they adopt it will you also
 adopt it for this data?

Sure!


--Sören



Re: Fusion Tables: Google's approach to sharing data on the Web

2009-07-03 Thread Sören Auer

Chris Bizer wrote:
I’m regularly following Alon Halevy blog as I really like his thoughts 
on dataspaces [1].


I have the impression that's pretty much what DabbleDB [1] and others 
have already been doing for ages, even better than Google. Or am I wrong?


--Sören

[1] http://dabbledb.com/



Re: Keeping crawlers up-to-date

2009-04-28 Thread Sören Auer

Hi Yves, all,

We envisioned publishing updates of LOD sources via a special LOD 
resource space on the LOD endpoint.
The basic idea is to publish nested sets of updates as linked data for 
years, months, days, hours, minutes, seconds.
This allows crawlers to only update resources which were recently 
changed. The idea is implemented and described for Triplify at:


http://triplify.org/vocabulary/update
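
A small sketch of how a crawler might exploit such a nested update log;
the path layout used here is an assumption for illustration only, the
actual scheme is defined by the update vocabulary linked above:

  # Re-fetch only the parts of the update log that are newer than the last
  # crawl, instead of re-crawling the whole dataset. (Day granularity only;
  # the same idea applies to hours, minutes and seconds.)
  import datetime, urllib.error, urllib.request

  BASE = "http://example.org/triplify/update"   # hypothetical update-log root
  last_crawl = datetime.datetime(2009, 4, 27)

  def fetch(url):
      try:
          with urllib.request.urlopen(url) as resp:
              return resp.read().decode("utf-8")
      except urllib.error.HTTPError:
          return None   # no updates published for this period

  now = datetime.datetime.utcnow()
  for offset in range((now - last_crawl).days + 1):
      day = (last_crawl + datetime.timedelta(days=offset)).date()
      url = "%s/%d/%02d/%02d" % (BASE, day.year, day.month, day.day)
      updates = fetch(url)
      if updates:
          print("changed resources listed at", url)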

There is also a section on that in the paper:

Triplify - Lightweight Linked Data Publication from Relational 
Databases. Proceedings of WWW 2009.

http://www.informatik.uni-leipzig.de/~auer/publication/triplify.pdf
http://www.slideshare.net/soeren1611/triplify-1341084

Cheers,

Sören


--

--
Sören Auer, AKSW/Computer Science Dept., University of Leipzig
http://www.informatik.uni-leipzig.de/~auer,  Skype: soerenauer



Re: [Ann] OSM Linked Geo Data extraction, browser editor

2008-12-12 Thread Sören Auer


Bernard Vatant wrote:

Have you any plans to link those data with geonames.org data?


Yes and no ;-)
Of course we want to interlink the OSM data with as many other data 
sources as possible. Candidates are Geonames, Wiki/DBpedia, 
Worldfactbook, OpenResearch [1] etc.
Such interlinkages also fit very well into the OSM data model (which is 
very close to RDF in many aspects). Basically everybody can start 
creating links between the data sources by either using 
http://LinkedGeoData.org/browser or via OSM's REST API (any other OSM 
editor will do as well). Right now we are busy creating a complete RDF 
dump and setting up the bi-directional live synchronization between 
LinkedGeoData.org and OSM. So feel free to create mappings and store 
them in OSM's DB via its API. They will then also show up automatically 
on LinkedGeoData.org.


Best,

Sören

[1] http://OpenResearch.org


--

--
Sören Auer, AKSW/Computer Science Dept., University of Leipzig
http://www.informatik.uni-leipzig.de/~auer,  Skype: soerenauer



[Ann] OSM Linked Geo Data extraction, browser editor

2008-12-10 Thread Sören Auer


Hi all,

We have been working over the last weeks on bringing geo data derived from 
the marvelous OpenStreetMap project [1] to the data web. This work in 
progress is still far from finished; however, we would like to 
share some first preliminary results:


* A *vast number of point-of-interest descriptions* were extracted from 
OSM and published as Linked Data at http://linkedgeodata.org


* The *Linked Geo Data browser and editor* (available at 
http://linkedgeodata.org/browser) is a facet-based browser for geo 
content, which uses an OLAP-inspired hypercube for quickly retrieving 
aggregated information about any user-selected area on earth (a rough 
sketch of the idea follows below).
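
The rough idea behind the hypercube, sketched conceptually in Python (this
is only an illustration, not the actual LinkedGeoData implementation):
points of interest are bucketed into coarse latitude/longitude/type cells,
so that aggregate counts for a selected area can be answered by summing a
few cells instead of scanning all points:

  from collections import defaultdict

  pois = [              # (lat, lon, type), made-up sample data
      (51.34, 12.37, "pub"),
      (51.35, 12.38, "museum"),
      (51.36, 12.41, "pub"),
  ]

  CELL = 0.05           # cell size in degrees
  cube = defaultdict(int)
  for lat, lon, kind in pois:
      cube[(int(lat / CELL), int(lon / CELL), kind)] += 1

  def count(kind, lat_min, lat_max, lon_min, lon_max):
      # sum the pre-aggregated cells that fall into the selected area
      return sum(n for (la, lo, k), n in cube.items()
                 if k == kind
                 and lat_min <= la * CELL < lat_max
                 and lon_min <= lo * CELL < lon_max)

  print(count("pub", 51.3, 51.4, 12.3, 12.5))   # -> 2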


Further information can also be found in our AKSW project description:

http://aksw.org/Projects/LinkedGeoData

Thanks go to Sebastian Dietzold, Jens Lehmann, Sebastian Hellmann, David 
Aumueller and other members of the AKSW team for their contributions.


Merry Christmas to everybody from Leipzig

Sören


[1] http://openstreetmap.org

--

--
Sören Auer, AKSW/Computer Science Dept., University of Leipzig
http://www.informatik.uni-leipzig.de/~auer,  Skype: soerenauer



Re: [Ann] OSM Linked Geo Data extraction, browser editor

2008-12-10 Thread Sören Auer


Simon Reinhardt wrote:

Sören Auer wrote:
We were working in the last weeks on bringing geo data derived from 
the marvelous OpenStreetMap project [1] to the data web.


Nice work, you beat me to it. :-)
But since you take a slightly different approach, I think I'll publish 
my OSM Wrapper anyway.



Sure, the more the merrier ;-)
Maybe we can have some more discussions about the conceptual differences 
and similarities and ultimately maybe even join efforts?


Best,

Sören



Re: Pushing back into Wikipedia? Re: ANN: DBpedia 3.2 release, including DBpedia Ontology and RDF links to Freebase

2008-11-18 Thread Sören Auer


Tim Berners-Lee wrote:
 Now that there has been so much clean-up work which has been done, has
 there been any discussion of pushing back the cleanliness into the
 wikipedia pages themselves, so that the wikipedia gains in consistency?

Yes, we have been thinking about this for quite a while. The first step will be to 
set up some kind of live synchronization between Wikipedia and DBpedia. 
For this we already got access to the live stream of Wikipedia updates 
from Wikimedia's Brion Vibber. As a second step the DBpedia additions 
will be integrated back as annotations into Wikipedia pages. As a result, 
some kind of round-trip engineering between both would be possible: 
if people see an error or mistake, they can correct it in Wikipedia and the 
correction will show up in DBpedia. However, we have to be careful not 
to overstrain Wikipedians, since they are usually more interested in 
texts than structure ;-)


Best,

Sören



Triplification Challenge 2008 Winners

2008-09-07 Thread Sören Auer


Dear all,

We are very pleased to announce that the winners of this year's 
Triplification Challenge were awarded on September 5th at I-Semantics 
2008. The winners are:


1st prize (Macbook Air): *Linked Movie Data Base*
by Oktie Hassanzadeh, Mariano Consens
http://www.linkedmdb.org

2nd prize (eeePC): *DBTune* by Yves Raimond
http://dbtune.org

3rd prize (iPod): *Semantic Web Pipes* by Danh Le Phuoc
http://pipes.deri.org

Further information can be found on the Challenge homepage at:
http://triplify.org/Challenge

Some impressions from the award ceremony at I-Semantics after Tom 
Heath's keynote "Humans and the Web of Data" are available at:

http://www.flickr.com/photos/[EMAIL 
PROTECTED]/tags/triplificationchallenge2008

We are very thankful to those nominees who did not win this year. We are 
also grateful to the Challenge sponsors OpenLink, Punkt.NetServices and 
InfAI, without whom the Challenge would not have been possible.


On behalf of the Triplification Challenge organizers

Sören

--

--
Sören Auer, AKSW/Computer Science Dept., University of Leipzig
http://www.informatik.uni-leipzig.de/~auer,  Skype: soerenauer



LOD Triplification Challenge Nominations

2008-08-05 Thread Sören Auer


Hi all,

The following submissions were nominated for the prizes of the LOD 
Triplification Challenge (http://triplify.org/Challenge):


1. Automatic CMS Generation from OWL by Alastair Burt, Brigitte Jörg

2. Django-Triplify Integration and Discover Some Math by Martin Czygan

3. Linked Movie Data Base by Oktie Hassanzadeh, Mariano Consens

4. Interlinking Multimedia Data by Michael Hausenblas, Wolfgang Halb

5. RDF syndication in Joomla! by Danh Le Phuoc, Nur Aini Rakhmawati

6. Semantic Web Pipes Demo by Danh Le Phuoc

7. DBTune by Yves Raimond

8. Triplification of osCommerce by Elias Theodorou

You will find links to the demos and short descriptions at:

http://triplify.org/Challenge/Nominations

The final decision about the winners of the challenge will be made by 
the organizing committee and invited judges. The prizes will be awarded 
at I-SEMANTICS 2008, 3–5 September 2008 in Graz, Austria.


Please vote for your personal favorite on the nominations page and 
invite your friends and colleagues. Although this will not have a formal 
influence on the award decision, the winner of the community vote will at 
least earn an honorable mention.


Best,

Sören



Re: Linked Movie DataBase

2008-08-01 Thread Sören Auer


Hi Oktie, all,

Great work you have done with LinkedMDB.org!

I would like to point you to another open movie data related project: 
the Open Movie Database (http://omdb.org), which contains very high quality 
data and is completely free and open.


As a small exercise and to test Triplify with a larger dataset I created 
a triplification for OMDB, which is accessible at:


http://triplify.org/omdb/triplify/

The used Triplify configuration is available at [1].

Best,

Sören

[1] http://triplify.org/Configuration/OMDB


--

--
Sören Auer, AKSW/Computer Science Dept., University of Leipzig
http://www.informatik.uni-leipzig.de/~auer,  Skype: soerenauer



[Ann] Triplify 0.4

2008-07-31 Thread Sören Auer


Hi all,

we just released version 0.4 of the Triplify script. After the initial
release several months ago we made quite a few additions and bug fixes,
the most important of which are:

 * *update log functionality* added - allows Semantic Web crawlers to
   get incremental updates, see
  http://triplify.org/vocabulary/update
 * linked data publication now also works without Apache's mod_rewrite
 * Syntax for indicating objectProperties added (see the sketch after
   this list), e.g.:
 SELECT id,user_id 'sioc:has_creator-user'
 * Additional metadata can now be added via $triplify['metadata']
 * The configuration variable $triplify['CallbackFunctions'] allows
   programmatic post processing of DB content
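
To roughly illustrate what the objectProperty annotation expresses, here
is a conceptual Python sketch (not the actual PHP implementation; base
URI, table and prefix handling are made up): an annotated column is
emitted as an object property pointing to another resource instead of a
literal:

  BASE = "http://example.org/triplify/"   # hypothetical instance base URI
  rows = [{"id": 7, "user_id": 3}]        # result of the annotated query on a "post" table

  for row in rows:
      subject = "<%spost/%s>" % (BASE, row["id"])
      # a plain column becomes a literal triple
      print('%s <%svocab/id> "%s" .' % (subject, BASE, row["id"]))
      # the column annotated as user_id 'sioc:has_creator-user' becomes an
      # object property pointing to the corresponding user resource
      # (sioc: refers to the SIOC vocabulary)
      print("%s sioc:has_creator <%suser/%s> ." % (subject, BASE, row["user_id"]))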

Thanks to everybody contributing bug fixes or comments (especially
Sebastian Hellmann, Danh Le Phuoc, Rolf Strathewerd, Elias Theodorou).

On behalf of the AKSW team [1]

Sören Auer


[1] http://aksw.org

--

--
Sören Auer, AKSW/Computer Science Dept., University of Leipzig
http://www.informatik.uni-leipzig.de/~auer,  Skype: soerenauer




3 more weeks: LOD Triplification Challenge

2008-06-11 Thread Sören Auer


Hi all,

Together with this year's I-Semantics conference [1] we are organizing a
Linking Open Data Triplification Challenge. Submission deadline is in 
three weeks (30th of June).


The challenge aims at expediting the process of revealing and exposing
structured (relational) representations, which already back most of the
existing Web sites, as well as raising awareness in the Web Developer
community and showcasing best practices.

The challenge awards attractive prizes (MacBook Air, EeePC, iPod) to the
most innovative and promising semantifications. The prizes are kindly
sponsored by OpenLink Software [2], Punkt.NetServices [3] and InfAI [4].

More Information about the challenge can be found at:

http://triplify.org/Challenge

I think outreach to the Web developer communities (as intended with the
challenge) is really crucial right now to expedite Semantic Web
deployment, and I would be very excited if you supported this effort, e.g.
by spreading the word and/or submitting to the challenge.

Best,

Sören

[1] http://www.i-semantics.at/
[2] http://www.openlinksw.com/
[3] http://www.punkt.at/
[4] http://infai.org/

--
Sören Auer, AKSW/Computer Science Dept., University of Leipzig
http://www.informatik.uni-leipzig.de/~auer,  Skype: soerenauer