Third Call for Papers: Replication, Benchmark, Data and Software Track

==================================================================

The 13th International Semantic Web Conference

http://iswc2014.semanticweb.org/

19-23 October 2014, Riva del Garda, Trento, Italy

==================================================================

Science, to a large degree, is like building a house of ideas on the foundations 
laid by others. In some scientific disciplines, laying these foundations is 
explicitly seen as part of the discipline's innovative activity. In the Semantic 
Web and Linked Data field, however, there has been a bias towards presenting 
novel ideas in research papers. The goal of the Replication, Benchmark, Data and 
Software Track is to tailor the reviewing and paper evaluation process to the 
specific needs of papers that make useful contributions without proving a 
hypothesis or making a novel contribution. Specifically, this track seeks work 
in the following areas: 

Replication papers focus on replicating a previously published approach in 
order to shed light on some important, possibly overlooked aspect. Replicating 
a result, or failing to, is a useful contribution to our collective knowledge, 
and good replication papers will challenge a previously accepted truism, expose 
a limitation in the assumptions or confounds chosen, or confirm (or question) 
the internal validity of the results. For example: 
  * Jens Dittrich, Lukas Blunschi, and Marcos Antonio Vaz Salles. Dwarfs in 
the Rearview Mirror: How Big Are They Really? Proc. VLDB Endow. 1(2), August 
2008. [http://www.diku.dk/~vmarcos/pubs/DBS08.pdf] 
Review Criteria: Is the replicated work significant? Has the replication been 
done before? Was there an important or relevant lesson to be learned from the 
replication? Were hidden assumptions of the original experiment exposed? 

Benchmark papers make available to the community a new class of resources, 
metrics, or software that can be used to measure the performance of systems 
along some dimension. Any data and software should be made public through a 
reasonable access mechanism so that the community can use it. Ideally, a 
benchmark paper will also provide a baseline of performance, or may further 
serve the community by surveying the performance of existing systems on the 
benchmark. The key requirement is that the systems evaluated are not themselves 
presented as the paper's contribution. For example: 
  * Yuanbo Guo, Jeff Heflin, and Zhengxiang Pan. Benchmarking DAML+OIL 
Repositories. ISWC 2003. [http://swat.cse.lehigh.edu/pubs/guo03a.pdf]
Review Criteria: Does the benchmark measure something significant (is it 
relevant and sufficiently general)? Are the proposed performance metrics 
sufficiently broad and relevant? Is there already a similar benchmark (if so, 
how does this one differ)? Can others use the data and software? How can the 
benchmark advance the state of the art? If a survey was done, was the coverage 
of systems reasonable, or were any obvious choices missing? 

Data papers introduce an important data set to the community. This highly 
important work is often difficult to publish, as its main contribution lies in 
providing others with the means to accomplish their own goals. Even though 
DBpedia and WordNet are among the most valuable and widely used resources in 
our community, and have made an invaluable contribution to our science, they 
were very difficult to publish as papers. For example: 
  * Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard 
Cyganiak, and Zachary Ives. DBpedia: A Nucleus for a Web of Open Data. ISWC 
2007. [http://158.130.69.163/~zives/research/dbpedia.pdf]
Review Criteria: Is there a similar data source? Is the source of interest to 
the Semantic Web community (and to society in general)? Is the source semantic, 
linked, etc.? Does it use URIs? Is it available to the community? Was the data 
used for something scientific, practical, etc.? Is the data likely to be 
repurposed for other uses? 

We encourage authors to carefully read the calls for the other tracks (the 
Research Track, the In-Use Track, and the Industry Track) and to submit to the 
most appropriate one. Multiple submissions of the same paper to different 
tracks are not acceptable.
 

TOPICS OF INTEREST
-------------------

All topics addressed in any of the other ISWC tracks are of interest to this 
track, provided the work is presented as a Replication, Benchmark, or Data 
study rather than as a paper with a clear hypothesis or novelty claim. 


 
SUBMISSION
-----------

Pre-submission of abstracts is a strict requirement. All papers and abstracts 
must be submitted electronically via the EasyChair conference submission system 
at https://www.easychair.org/conferences/?conf=iswc2014rdb .
 
All submissions must be in English and no longer than 16 pages. Papers that 
exceed this limit will be rejected without review. Submissions must be in PDF, 
formatted in the style of the Springer Lecture Notes in Computer Science (LNCS) 
series. For details on the LNCS style, see Springer’s Author Instructions. 
ISWC 2014 submissions are not anonymous.

Authors of accepted papers will be required to provide semantic annotations for 
the abstract of their submission, which will be made available on the 
conference web site. Details will be provided at the time of acceptance. 

Accepted papers will be distributed to conference attendees and also published 
by Springer in the printed conference proceedings, as part of the Lecture Notes 
in Computer Science series. At least one author of each accepted paper must 
register for the conference and present the paper there.
 


PRIOR PUBLICATION AND MULTIPLE SUBMISSIONS
-------------------------------------------

ISWC 2014 will not accept research papers that, at the time of submission, are 
under review for or have already been published in or accepted for publication 
in a journal or another conference. 
The conference organizers may share information on submissions with other 
venues to ensure that this rule is not violated.


 
SUBMISSION OF A POSTER OR DEMO TOGETHER WITH YOUR ACCEPTED PAPER
--------------------------------------------------------------------------

Authors of accepted papers are also invited to submit a poster or a demo to the 
Posters and Demos Track. The submission format is the same as for regular 
poster and demo submissions, but the submission must cite the corresponding 
accepted paper.
 


IMPORTANT DATES
----------------

  * Abstracts: May 1, 2014
  * Full Paper Submission: May 9, 2014
  * Author Rebuttals: June 9-11, 2014 
  * Notifications: July 3, 2014 
  * Camera-Ready Versions: August 1, 2014 
  * Conference: October 19-23, 2014  

All deadlines are Hawaii time.
 


RDB TRACK CHAIRS
-------------------
 
Abraham Bernstein, University of Zurich, Switzerland
Chris Welty, IBM Research, USA
