## ACM SIGIR Artifact Badges ##

The ACM Special Interest Group on Information Retrieval (SIGIR) adheres to and 
implements the ACM policies for "Artifact Review and Badging" 
(https://www.acm.org/publications/policies/artifact-review-and-badging-current).

Artifact badging is intended not only to further improve our experimental 
practices, but especially to highlight and recognize the outstanding efforts 
of those who go the extra mile to make their experiments’ code and data 
not only available online, but also easy to use, fully functional, and 
reproducible.

Overall, the initiative promotes the reproducibility of research results and 
allows scientists and practitioners to benefit immediately from state-of-the-art 
research results, without spending months re-implementing the proposed 
algorithms, searching for the right parameter values, creating datasets, or 
running intensive user-oriented evaluations. We also hope that it will 
indirectly foster scientific progress, since it allows researchers to reliably 
compare with and build upon existing techniques, knowing that they are using 
exactly the same implementation.

Badge Types:

**Artifacts Evaluated – Functional** The artifacts associated with the 
research are found to be documented, consistent, complete, exercisable, and 
include appropriate evidence of verification and validation.

**Artifacts Evaluated – Reusable and Available** The artifacts associated 
with the paper are of a quality that significantly exceeds minimal 
functionality. That is, they have all the qualities of the Artifacts Evaluated 
– Functional level, but, in addition, they are very carefully documented and 
well-structured to the extent that reuse and repurposing are facilitated. In 
particular, the norms and standards of the research community for artifacts of 
this type are strictly adhered to. This badge is applied to papers in which 
associated artifacts have been made permanently available for retrieval.

**Results Reproduced** The main results of the paper have been obtained in a 
subsequent study by a person or team other than the authors, using, in part, 
artifacts provided by the authors.

**Results Replicated** The main results of the paper have been independently 
obtained in a subsequent study by a person or team other than the authors, 
without the use of author-supplied artifacts.

The different types of ACM stamps are not meant to be a measure of the 
scientific quality of the papers themselves or of the usefulness of the presented 
algorithms, which are assessed by means of the traditional peer-review 
processes and by adoption/impact in the research and industry community. 
Rather, they are a recognition of the service provided by authors to the 
community by releasing the code and/or data and they are an endorsement of the 
replicability and/or reproducibility of the results presented in the paper. The 
stamps also alert users of the ACM Digital Library about the presence and 
location of these artifacts in the ACM DL:

Datasets – https://dl.acm.org/artifacts/dataset 
Software – https://dl.acm.org/artifacts/software 

In this way, each artifact will be assigned its own DOI, will be directly 
citable, and will be linked to its corresponding paper.


## Artifact Submission ##

ACM SIGIR Artifact Badging applies to artifacts complementing papers accepted in 
one of the following venues:

ACM Transactions on Information Systems (TOIS)
Annual International ACM SIGIR Conference on Research and Development in 
Information Retrieval (SIGIR)
ACM Conference on Human Information Interaction and Retrieval (CHIIR)
ACM SIGIR International Conference on the Theory of Information Retrieval 
(ICTIR)

Submission is always open, and authors are welcome to apply for badges as 
soon as their papers are accepted in one of the above venues.

Irrespective of the nature of the artifacts, authors should create a single Web 
page (whether on their site or a third-party repository service) that contains 
the artifact, the paper, and all necessary instructions.

Where appropriate, it is also helpful to provide a self-contained bundle 
(including instructions) as a single file (.tgz or .zip) for convenient 
offline use.
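Such a bundle can be produced with standard Unix tools. A minimal sketch (all file and directory names here are placeholders, not required by SIGIR):

```shell
# Collect the artifact into one directory (placeholder contents):
mkdir -p artifact
printf 'See the paper website for setup steps.\n' > artifact/INSTRUCTIONS.md
# ... copy your code, data, and the paper into artifact/ ...

# Pack everything into a single self-contained .tgz:
tar -czf artifact.tgz artifact

# List the archive contents to verify the bundle is complete:
tar -tzf artifact.tgz
```

Reviewers should be able to unpack the file offline and find the instructions at the top level, so it is worth checking the listing before submission.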

The artifact submission thus consists of just the URL and any credentials 
required to access the files, entered into the submission system: 

https://openreview.net/group?id=ACM.org/SIGIR/Badging 


## Artifact Preparation Guidelines and Review Procedure ##

You can find more information about the ACM SIGIR Artifact Badges at:

https://sigir.org/general-information/acm-sigir-artifact-badging/ 

There you can also find detailed instructions and suggestions about how to 
prepare your artifacts for each type of badge and the reviewing criteria for 
each of them.

Each artifact will be reviewed by a senior and a junior member of the Artifact 
Evaluation Committee (AEC).

For any questions or clarifications, please contact us at:

[email protected] 


## ACM SIGIR Artifact Evaluation Committee (AEC) ##

Chair and Vice-chair
Nicola Ferro, University of Padua, Italy [chair]
Johanne Trippas, RMIT University, Australia [vice-chair]

Senior Members
Alessandro Benedetti, Sease, UK
Rob Capra, University of North Carolina at Chapel Hill, USA
Diego Ceccarelli, Bloomberg, UK
Anita Crescenzi, University of North Carolina at Chapel Hill, USA
Charles L. A. Clarke, University of Waterloo, Canada
Yi Fang, Santa Clara University, USA
Norbert Fuhr, University of Duisburg-Essen, Germany
Claudia Hauff, Delft University of Technology, The Netherlands
Jiqun Liu, University of Oklahoma, USA
Maria Maistro, University of Copenhagen, Denmark
Miguel Martinez, Signal AI, UK
Parth Mehta, Parmonic, USA
Martin Potthast, Leipzig University, Germany
Tetsuya Sakai, Waseda University, Japan
Ian Soboroff, National Institute of Standards and Technology (NIST), USA
Paul Thomas, Microsoft, Australia
Andrew Trotman, University of Otago, New Zealand
Min Zhang, Tsinghua University, China

Junior Members
Valeriia Baranova, RMIT University, Australia
Arthur Barbosa Câmara, Delft University of Technology, The Netherlands
Hamed Bonab, University of Massachusetts Amherst, USA
Kathy Brennan, Google, USA
Timo Breuer, TH Köln, Germany
Guglielmo Faggioli, University of Padua, Italy
Alexander Frummet, University of Regensburg, Germany
Darío Garigliotti, Aalborg University, Denmark
Chris Kamphuis, Radboud University, The Netherlands
Johannes Kiesel, Bauhaus-Universität Weimar, Germany
Yuan Li, University of North Carolina at Chapel Hill, USA
Joel Mackenzie, University of Melbourne, Australia
Antonio Mallia, New York University, USA
David Maxwell, Delft University of Technology, The Netherlands
Felipe Moraes, Delft University of Technology, The Netherlands
Ahmed Mourad, University of Queensland, Australia
Zuzana Pinkosova, University of Strathclyde, UK
Chen Qu, University of Massachusetts Amherst, USA
Anna Ruggero, Sease, UK
Svitlana Vakulenko, University of Amsterdam, The Netherlands
Sasha Vtyurina, KIRA systems, Canada
Oleg Zendel, RMIT University, Australia
Steven Zimmerman, University of Essex, UK


_______________________________________________
Corpora mailing list -- [email protected]
https://list.elra.info/mailman3/postorius/lists/corpora.list.elra.info/
To unsubscribe send an email to [email protected]
