Dear Sender,

I am currently out of the office and will not be checking email regularly. I 
will return on September 9 and will respond to your message as soon as 
possible after that date.

Best regards,
Charlott Jakob

On 10 Aug 2024, at 05:33, Aditya Nandkishore Chichani via Corpora 
<[email protected]> wrote:

============================================================================================================================
We are pleased to announce an extension of the submission deadline for CIKM 
MMSR 24. The new deadline is August 16, 2024.
============================================================================================================================

Workshop on Multimodal Search and Recommendations (CIKM MMSR '24)

Date: October 25, 2024 (Full day workshop)
Venue: ACM CIKM 2024 (Boise, Idaho, United States)
Website: https://cikm-mmsr.github.io/
Organizers: Aditya Chichani, Surya Kallumadi, Tracy Holloway King, Andrei 
Lopatenko
Keynote Speakers: Vamsi Salaka, Yubin Kim
Paper submission deadline: August 16, 2024 (23:59 GMT)
https://openreview.net/group?id=ACM.org/CIKM/2024/Workshop/MMSR

Overview:
The advent of multimodal LLMs like GPT-4o and Gemini has significantly boosted 
the potential for multimodal search and recommendations. Traditional search 
engines rely mainly on textual queries, supplemented by session and 
geographical data. In contrast, multimodal systems create a shared embedding 
space for text, images, audio, and more, enabling next-gen customer 
experiences. These advancements lead to more accurate and personalized 
recommendations, enhancing user satisfaction and engagement.
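To make the shared embedding space concrete, here is a minimal illustrative
sketch (not part of the CFP itself) of cross-modal matching with OpenAI's CLIP
via the Hugging Face Transformers library; the model checkpoint, the file name
product.jpg, and the candidate queries are assumptions chosen only for the
example:

    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    # Load a pre-trained vision-language model that maps images and text
    # into a single shared embedding space.
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("product.jpg")  # hypothetical catalogue image
    queries = ["a red running shoe", "a leather office chair"]

    inputs = processor(text=queries, images=image,
                       return_tensors="pt", padding=True)
    outputs = model(**inputs)

    # Similarity logits between the image and each text query; the
    # best-scoring query is the closest match in the shared space.
    scores = outputs.logits_per_image.softmax(dim=1)
    print(scores)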

Topics of interest include, but are not limited to:

Cross-modal retrieval techniques
    Strategies for efficiently indexing and retrieving multimodal data.
    Approaches to ensure cross-modal retrieval systems can handle large-scale data.
    Development of metrics to measure similarity across different data modalities.
Applications of Multimodal Search and Recommendations to Verticals (e.g., e-commerce, real estate)
    Implementing and optimizing image-based product searches.
    Creating multimodal conversational systems to enhance user experience and make search more accessible.
    Utilizing AR to enhance product discovery and user interaction.
    Leveraging multimodal search for efficient customer service and support.
User-centric design principles for multimodal search interfaces
    Best practices for designing user-friendly interfaces that support multimodal search.
    Methods for evaluating the usability of multimodal search interfaces.
    Personalizing multimodal search interfaces to individual user preferences.
    Ensuring multimodal search interfaces are accessible to users with disabilities.
Ethical Considerations and Privacy Implications of Multimodal Search and Recommendations
    Strategies for ensuring user data privacy in multimodal applications.
    Identifying and mitigating biases in multimodal algorithms.
    Ensuring transparency in how multimodal results are generated and presented.
    Approaches for obtaining and managing user consent for using their data.
Modeling for Multimodal Search and Discovery
    Multimodal representation learning.
    Utilizing GPT-4o, Gemini, and other advanced pre-trained multimodal LLMs.
    Dimensionality reduction techniques to reduce the complexity of multimodal data.
    Techniques for fine-tuning pre-trained vision-language models.
    Developing and standardizing metrics to evaluate the performance of vision-language models in multimodal search.

Submission Instructions:
All papers will be peer-reviewed by the program committee and judged based on 
their relevance to the workshop and their potential to generate discussion. 
Submissions must be in PDF format, following the latest CEUR single-column 
format. For instructions and LaTeX/Overleaf/docx templates, refer to CEUR’s 
submission guidelines (https://ceur-ws.org/HOWTOSUBMIT.html#CEURART), reading 
up to and including the “License footnote in paper PDFs” section. Paper titles 
must use CEUR’s Emphasizing Capitalized Style.

Submissions must describe original work not previously published, not accepted 
for publication, and not under review elsewhere. All submissions must be in 
English. The workshop follows a single-blind review process and does not accept 
anonymous submissions. At least one author of each accepted paper must register 
for the workshop and present the paper.

Long paper limit: 15 pages.
Short paper limit: 8 pages.
References are not counted in the page limit.

Contact: Aditya Chichani
E-mail: [email protected]
_______________________________________________
Corpora mailing list -- [email protected]
https://list.elra.info/mailman3/postorius/lists/corpora.list.elra.info/
To unsubscribe send an email to [email protected]