[Apologies for multiple postings.]

* * * * * * *
Overview
* * * * * * * 

The ESWC organizers are glad to announce that the Challenges Track will be 
included again in the program of ESWC 2018. Five challenges were held last year 
[1] and allowed the ESWC2017 conference to attract a broader audience beyond 
the Semantic Web community, reaching into adjacent disciplines such as 
Recommender Systems and Knowledge Extraction. 

For the 2018 edition, this call for challenges is open in order to select the 
challenges that will be held at the conference. The purpose of the challenges 
is to showcase the maturity of state-of-the-art methods and tools on tasks 
common to the Semantic Web community and adjacent disciplines, in a controlled 
setting involving rigorous evaluation.
Semantic Web Challenges are an official track of the conference, ensuring 
significant visibility for both the challenges and their participants. 
Challenge participants are asked to present their submissions as well as to 
provide a paper describing their work. These papers must undergo peer review 
by experts relevant to the challenge task and will be published in the 
challenge proceedings.
In addition to the publication of proceedings, challenges at ESWC2018 will 
benefit from high visibility and direct access to the ESWC audience and 
community.

[1] https://2017.eswc-conferences.org/call-challenges 

* * * * * * * * * * * * * * *
Challenge Proposals
* * * * * * * * * * * * * * *

Challenge organizers are encouraged to submit proposals adhering to the 
following criteria:

- At least one task involving semantics in data. 
The task(s) should be well defined and related to the Semantic Web but not 
necessarily confined to it. Organizers are highly encouraged to consider tasks 
that involve other, closely related communities, such as NLP, Recommender 
Systems, Machine Learning or Information Retrieval. If multiple tasks are 
provided, they should be independent so that participants may choose which to 
participate in.

- Task descriptions likely to be interesting to a wider audience. 
We encourage challenge organizers to propose at least one basic task that can 
be addressed by a larger audience from their community. Engaging with your 
challenge audience and obtaining feedback from your target group on the task 
design can help shape the task and ensure a sufficient number of participants.

- Clear and rigorous definition of the tasks. 
For each task, you should define a deterministic and objective way to verify 
whether the goal of the task has been achieved and, if applicable, to what 
extent. The best way is usually to provide detailed examples of input data and 
expected output (a purely illustrative example-pair sketch follows this list). 
The examples should cover all situations that can occur while performing the 
task and should leave no room for ambiguity about whether, in a particular 
case, the task has been done or not.

- Valid dataset (if applicable). 
If accepted, you should find or create a dataset that will be used for the 
challenge. In any case, you must specify the provenance of the dataset (if it 
contains human annotations, how they were obtained). You must make sure you 
have the right to use and publish the dataset, and clearly state the license 
for its use within the challenge. The dataset should be split into at least 
two parts: a training part and an evaluation part. The training part contains 
the data together with the results that should be obtained when performing the 
task. For the evaluation part, you should publish only the data and make sure 
that the correct results have not previously been available to the 
participants (a minimal split sketch follows this list). When proposing the 
challenge, you must provide details on the dataset and on the way it is or 
will be created; the dataset itself can be made available later.

- Challenge Committee. 
The committee should be composed of at least three respected researchers with 
experience in the tasks of the challenge. Its members help evaluate the papers 
submitted by the participants and also validate the evaluation procedure.

- Evaluation metrics and procedure. 
For each task there must be at least two objective criteria (metrics), e.g. 
precision and recall. The evaluation procedure and the way in which the 
metrics will be calculated must be clearly specified and made transparent to 
participants; making the evaluation scripts available on the challenge website 
is good practice (a minimal evaluation-script sketch follows this list).

Among the selection criteria for choosing the supported challenges are:
-- Potential number of interested participants
-- Rigor and transparency of the evaluation procedure
-- Relevance for the Semantic Web community
-- Endorsements (from researchers working on the task, from industry players 
interested in the results, from future participants)
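
As a purely illustrative sketch (the task, sentence and URIs below are 
hypothetical and not part of this call), an example pair for, say, an 
entity-linking style task could be documented as follows, leaving no ambiguity 
about whether a submission handles this particular case correctly:

# Hypothetical input/expected-output pair for an entity-linking style task.
example = {
    "input": "Berlin is the capital of Germany.",
    "expected_output": [
        {"mention": "Berlin",  "start": 0,  "end": 6,
         "uri": "http://dbpedia.org/resource/Berlin"},
        {"mention": "Germany", "start": 25, "end": 32,
         "uri": "http://dbpedia.org/resource/Germany"},
    ],
}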
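
Similarly, a minimal sketch of the training/evaluation split described above, 
assuming a hypothetical dataset.csv with id, text and label columns (all file 
names and columns are illustrative): participants receive train.csv with gold 
labels and test.csv without them, while gold_test.csv stays with the 
organizers.

# Split a labelled dataset so that gold labels are published only for training.
import csv
import random

with open("dataset.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

random.seed(42)                      # reproducible split
random.shuffle(rows)
cut = int(0.8 * len(rows))
train, test = rows[:cut], rows[cut:]

def write(path, data, fields):
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        writer.writerows({k: r[k] for k in fields} for r in data)

write("train.csv", train, ["id", "text", "label"])   # data + expected results
write("test.csv", test, ["id", "text"])              # data only, for participants
write("gold_test.csv", test, ["id", "label"])        # withheld gold, for evaluation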
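
Finally, a minimal evaluation-script sketch in the same spirit (file names and 
the exact metrics are assumptions, not a prescribed format): it scores a 
participant's submission against the withheld gold labels and reports 
precision, recall and F1. Publishing such a script on the challenge website 
lets participants reproduce the official scoring on the training data.

# Compare predicted (id, label) pairs against the withheld gold labels.
import csv

def load(path):
    # Read a CSV with 'id' and 'label' columns into a set of (id, label) pairs.
    with open(path, newline="", encoding="utf-8") as f:
        return {(row["id"], row["label"]) for row in csv.DictReader(f)}

gold = load("gold_test.csv")    # kept private by the organizers
pred = load("submission.csv")   # uploaded by a participant

true_positives = len(gold & pred)
precision = true_positives / len(pred) if pred else 0.0
recall = true_positives / len(gold) if gold else 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print("precision=%.3f recall=%.3f f1=%.3f" % (precision, recall, f1))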

* * * * * * * * * * *
Important Dates
* * * * * * * * * * *

-       Challenge proposals due: Friday December 22nd, 2017 - 23:59 Hawaii Time
-       Challenges chosen/merged – notification to organizers sent: Friday 
December 29th, 2017
-       Training data ready and challenge Calls for Papers published: Friday 
January 12th, 2018
-       Challenge paper submission deadline (5-page document): Friday March 
9th, 2018
-       Challenge paper reviews: Thursday April 5th, 2018
-       Notifications sent to participants and invitations to submit task 
results: Monday April 9th, 2018
-       Test data (and other participation tools) published: Monday April 
16th, 2018
-       Camera-ready papers for the conference (5-page document): Monday April 
23rd, 2018
-       Submission of challenge results: free choice of organizers
-       Proclamation of winners: During ESWC2018 closing ceremony
-       Camera-ready papers for the challenge post-proceedings (15-page 
document): Friday July 6th, 2018 (tentative deadline)

* * * * * * * * * * * * * *
Submission Details
* * * * * * * * * * * * * *

Challenge proposals should contain at least the following elements:
-       A summary description of the challenge and tasks
-       How the training/testing data will be built and/or procured
-       The evaluation methodology to be used, including clear evaluation 
criteria and the exact way in which they will be measured. Who will perform the 
evaluation and how will transparency be assured?
-       The anticipated availability of the necessary resources to the 
participants
-       The resources required to prepare the tasks (computation and annotation 
time, costs of annotations, etc.)
-       The list of challenge committee members who will evaluate the challenge 
papers (please indicate which of the listed members already accepted the role)

In case of doubt, feel free to send us your challenge proposal drafts as early 
as possible – the challenge chairs will provide you with feedback and answers 
to any questions you may have.
Please submit proposals via EasyChair at 
https://easychair.org/conferences/?conf=challengeeswc2018 as soon as possible 
and no later than *22 December 2017*.