https://www2023.thewebconf.org/calls/research-tracks/crowdsourcing-hc/

We invite research contributions to the Crowdsourcing and Human Computation
track at the 32nd edition of The Web Conference series (formerly known as
WWW), to be hosted in Austin, TX, US, on April 30 - May 4, 2023
(https://www2023.thewebconf.org/)

Fifteen years ago, a 2007 WWW paper entitled “Internet-Scale Collection of
Human-Reviewed Data <https://dl.acm.org/doi/abs/10.1145/1242572.1242604>”
was one of several forerunners to signal a new, emerging area of research
on Human Computation and Crowdsourcing (HCOMP). Growing excitement and work
in this new area would eventually lead to four years of HCOMP workshops
across KDD and AAAI (2009-2012), a new annual AAAI HCOMP conference
<https://www.humancomputation.com/> (2013 onward), and a new, annual HCOMP
track at The Web Conference (2014 onward).

Today, the world and the research landscape look remarkably different than
they did in 2007, with the Web playing a central role in orchestrating these
advances.
advances. Of particular note, modern neural models have transformed AI
capabilities, along with far greater ubiquity and significance of AI
systems now in practical deployment around the world. As one effect of
this, the commoditization and democratization of AI models today has also
brought a new focus to “data-centric AI” in which AI models can succeed or
fail based on the quality of underlying data and human annotations. The
nature of human-AI interactions is also continually evolving in response
to AI advances, posing an ever-changing frontier of new challenges for
researchers and practitioners. Furthermore, the growth of AI power has
brought a commensurate recognition of the need for responsible AI systems
that are fair, accountable, transparent, and trustworthy – across diverse,
global communities of human stakeholders who interact with or are impacted
by AI systems. Given the central role of HCOMP in AI (creating reliable
training and benchmark annotations, as well as enabling hybrid,
human-in-the-loop systems), continuing innovation in HCOMP remains a key
challenge for the further advancement of AI. HCOMP itself has made
tremendous strides forward in the past fifteen years, yet many research
challenges remain.

*We invite AI, HCI, and related contributions that advance the broad
spectrum of crowdsourcing and human computation (HCOMP) in the scope of the
Web*:

   - algorithms, analysis, applications, methods, systems, and techniques
   - conceptual, empirical, theoretical, and mixed-methods
   - spanning fields (e.g., psychology, sociology, economics, ethics)
   - system-centered, human-centered, and hybrid

More specifically, we invite work addressing contemporary HCOMP challenges
including (but not limited to) the following Web-related themes:

   - Fundamental research challenges in Web-based HCOMP
      - *Data collection, generation, labeling, and cleaning*: data-centric
      AI; human and AI-assisted annotation; annotator agreement, aggregation,
      and modeling; annotation subjectivity and ambiguity; data excellence;
      human-in-the-loop data augmentation, generation, and adversarial attacks;
      label noise and bias detection and reduction; task decomposition; task
      and workflow design; novel modalities for input acquisition; etc.
      - *Human-centered explainability*: algorithmic/model explanations,
      interpretability, and transparency to enhance human success in using AI
      in decision-making, model and data debugging, task performance, trust in
      AI systems, appropriate reliance, etc. (please also read the CFP of the
      “Fairness, Accountability, Transparency and Ethics” track)
      - *Human-centered studies*: collaborative systems, computer-supported
      cooperative work, human-computer interaction, human factors, interaction
      design, usability, user experience, etc.
      - *Resources, benchmarking, reproducibility*: new resources for the
      community (e.g., datasets, open-source toolkits), benchmarking studies
      comparing state-of-the-art methods, and/or reproducibility studies of
      prior work.
      - *Addressing bias and diversity in annotation and human computation*:
      methods and algorithms to identify and mitigate biases in annotations;
      bias-aware annotation workflows; diversity in annotators and workers,
      data labeling, and hybrid, human-in-the-loop systems; downstream effects
      of annotator diversity on bias and fairness measures; impact on the
      evaluation of various systems (e.g., information retrieval systems,
      recommender systems); ethics and fairness of HCOMP practices
   - Underlying workforce powering Web-based HCOMP
      - *Social and economic impacts of human computation and crowdsourcing*:
      societal and methodological challenges around crowdsourcing labor and
      workforces; inequalities in access and representation in crowdsourcing
      workforces; platform affordances and economic impact
      - *Supporting HCOMP workers*: collective action; design activism; fair
      work <https://fair.work/en/fw/homepage/>; ghost work, heteromation, and
      invisible work; human computation, digital colonialism, and the global
      South; impact sourcing <https://en.wikipedia.org/wiki/Impact_sourcing>
      and responsible sourcing
      <https://partnershiponai.org/workstream/responsible-sourcing/>;
      regulation; worker empowerment, organization, protection, and wellness;
      and workforce diversity, equity, and inclusion, etc.
      - *Future of work*: AI-assisted human coordination, team formation
      and work, distributed work, freelancer economy, hybrid, human+AI work and
      complementarity, etc.
   - Web-based HCOMP systems, frameworks, or architectures
      - *Crowd-powered systems*: data management, marketplace design and
      sustainability, platforms, scalability, security, privacy, programming
      languages, real-time crowdsourcing, etc.
      - *Human-in-the-loop architectures*: decision support; human-AI
      collaboration, interaction, and teaming; hybrid systems; mixed-initiative
      design, etc.
      - *Crowdsourcing*: citizen science, collective intelligence, crowd
      computing, crowd creativity, crowdfunding, crowd ideation, crowd
      intelligence, crowd sensing, crowdsourcing contests, crowd phenomena,
      crowd science, incentive schemes, gamification, human flesh search, open
      innovation, peer production, prediction markets, reputation systems,
      social web, wisdom of crowds, etc.
      - *Human computation*: decision-theoretic and game-theoretic design,
      design patterns, human algorithm design and complexity, mechanism and
      incentive design, etc.
   - Web-based Applications of HCOMP
      - *Machine learning for HCOMP*: aggregation, answer fusion, annotator
      and user modeling, quality assurance, optimization, task assignment and
      recommendation, truth inference, etc.
      - *New Applications and Services*: delivering beyond state-of-the-art
      AI capabilities and enhanced services through human computation and
      human-in-the-loop systems.

Authors should consult the conference’s main Research Track CFP
<https://www2023.thewebconf.org/calls/research-tracks/> to ensure their
submissions are aligned with broader conference expectations, scope, and
theme: “Web Research with Openness, Fairness and Reproducibility”. The CFP
also details submission guidelines, relevant dates, and important
policies. Review
criteria <https://www.humancomputation.com/2016/review-criteria.html> will
include considerations typical of those in past years of this track and the
AAAI HCOMP conference.

Submissions that are out of scope or unresponsive to the call above will be
rejected early during the reviewing process (“desk rejected”) with minimal
feedback. This includes submissions that:

   - merely apply HCOMP methods in standard, previously known ways, without
   novel contributions to advance the methodology itself;
   - do not relate to the web or web-based human computation platforms,
   methods, or applications.

If you have doubts about whether your paper fits the scope of this track,
please contact the track chairs at [email protected]

Important dates

   - Abstract submission: October 6, 2022. This is compulsory for all
   papers.
   - Full papers submission: October 13, 2022
   - Rebuttal: December 15 - 22, 2022
   - Notification: January 25, 2023

Track chairs:

   - Ujwal Gadiraju <http://ujwalgadiraju.com/> (Delft University of
   Technology)
   - Matthew Lease <https://www.ischool.utexas.edu/~ml/> (University of
   Texas at Austin and Amazon)
   - Besmira Nushi <https://besmiranushi.com/> (Microsoft Research)

*Senior Program Committee & Program Committee*: Stay Tuned!


-- 
Matt Lease
Professor
School of Information
University of Texas at Austin
Voice: (512) 471-9350 · Fax: (512) 471-3971 · Office: UTA 5.536
http://www.ischool.utexas.edu/~ml