Dear colleagues and friends,
Dear colleagues and friends,

We have released features for each video in the *video corpus*, extracted with a 3D ConvNet-based I3D model pre-trained on the Kinetics dataset. The video corpus <https://drive.google.com/drive/folders/1EfzNJqQDsCFtBZgKZUaE-FnYX1L-qpI_> and video features <https://bionlp.nlm.nih.gov/TRECVID-VideoFeatures.zip> are available for download. The test sets will be released by July 14, 2023. We look forward to your submissions.

Join our Google Group <https://groups.google.com/g/trecvid-medvidqa2023> for important updates! If you have any questions, ask in our Google Group or email <[email protected]> us.

Thanks,
MedVidQA 2023 Organizers

On Tue, May 30, 2023 at 1:31 AM Deepak Gupta <[email protected]> wrote:

> Dear colleagues and friends,
>
> This year, we are organizing the MedVidQA <https://medvidqa.github.io/> challenge with TRECVID 2023 <https://www-nlpir.nist.gov/projects/tv2023/index.html>.
> This challenge aims at developing models for (1) retrieving relevant videos and locating the visual answer within them for a medical or health-related question, and (2) generating medical instructional questions from video segments. Following the success of the 1st MedVidQA shared task <https://aclanthology.org/2022.bionlp-1.25/>, MedVidQA at TRECVID 2023 expands the tasks and introduces a new track on language-video understanding and generation. The track comprises two main tasks: Video Corpus Visual Answer Localization (VCVAL) and Medical Instructional Question Generation (MIQG).
>
> For more details, please visit the challenge website (https://medvidqa.github.io/) and the TRECVID 2023 website (https://www-nlpir.nist.gov/projects/tv2023/index.html).
>
> Submission links:
>
> - Task 1 (VCVAL): https://codalab.lisn.upsaclay.fr/competitions/13445
> - Task 2 (MIQG): https://codalab.lisn.upsaclay.fr/competitions/13546
>
> *Important Dates*
>
> - *Release of the training and validation datasets:* April 30, 2023
> - *Release of the video corpus:* May 12, 2023
> - *Release of the test sets:* July 14, 2023
> - *Run submission deadline:* August 4, 2023
> - *Release of the official results:* September 29, 2023
>
> We look forward to your participation in MedVidQA at TRECVID 2023.
>
> Join our Google Group <https://groups.google.com/g/trecvid-medvidqa2023> for important updates! If you have any questions, ask in our Google Group or email <[email protected]> us.
>
> Thank you,
>
> MedVidQA 2023 Organizers
>
> --
> Thanking You,
> Deepak Gupta
> Web: https://deepaknlp.github.io/
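[Editor's sketch, not part of the original announcement: the archive's internal layout is not documented in the email, so this minimal example only illustrates one common way participants use clip-level I3D features — mean-pooling them into a single video-level vector. The feature dimensionality and file organization are assumptions; consult the released data for the actual format.]

```python
def mean_pool(clip_features):
    """Average a list of clip-level feature vectors into one
    video-level vector (pure-Python, no external dependencies)."""
    if not clip_features:
        raise ValueError("no clip features to pool")
    dim = len(clip_features[0])
    pooled = [0.0] * dim
    for vec in clip_features:
        if len(vec) != dim:
            raise ValueError("inconsistent feature dimensions")
        for i, value in enumerate(vec):
            pooled[i] += value
    n = len(clip_features)
    return [v / n for v in pooled]

# Tiny illustration with 4-d vectors (real I3D features are much larger):
clips = [[1.0, 2.0, 3.0, 4.0], [3.0, 2.0, 1.0, 0.0]]
print(mean_pool(clips))  # → [2.0, 2.0, 2.0, 2.0]
```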
_______________________________________________
Corpora mailing list -- [email protected]
https://list.elra.info/mailman3/postorius/lists/corpora.list.elra.info/
To unsubscribe send an email to [email protected]
