Can you force a single-layer transformer with attention to emit a token given 
an input sequence?
If you can, YOU are the right person to work with us! Drop me an e-mail.
We are hiring up to two PostDocs at a salary that is competitive by Italian 
PostDoc standards (assegno di ricerca, 4 fascia).
Here's the challenge:
UnboxingChallenge.pdf<https://uniroma2-my.sharepoint.com/:b:/g/personal/fabio_zanzotto_uniroma2_eu/ESLOjk21TFRLjvpIOVth97sBu-3ghcJwXySGCVLa7OZFlQ?e=pJc6FJ>

Here's the supporting Excel file simulating a single-layer transformer with 
attention:
Challenge_1.xlsx<https://uniroma2-my.sharepoint.com/:x:/g/personal/fabio_zanzotto_uniroma2_eu/EbXpk65gX3hLn-3-U7HeTbgBW4e9OD56OcCxvfzRO66sCA?e=AE70Yu>
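For readers who prefer code to spreadsheets, here is a minimal NumPy sketch of what a single-layer transformer with (single-head) attention computes before emitting a token. All dimensions, weight matrices, and the greedy argmax readout are illustrative assumptions, not the actual setup in the Excel file or the challenge PDF.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d = 10, 8                      # toy vocabulary size and model width

E  = rng.standard_normal((vocab, d))  # token embedding matrix
Wq = rng.standard_normal((d, d))      # query projection
Wk = rng.standard_normal((d, d))      # key projection
Wv = rng.standard_normal((d, d))      # value projection
Wo = rng.standard_normal((d, vocab))  # output (unembedding) projection

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def emit_token(tokens):
    """Run one attention layer over the input ids and greedily emit one token."""
    X = E[tokens]                         # (seq, d) embedded input
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(d))     # (seq, seq) attention weights
    H = A @ V                             # attended representations
    logits = H[-1] @ Wo                   # read out from the last position
    return int(np.argmax(logits))         # id of the emitted token

print(emit_token([1, 2, 3]))
```

The challenge, roughly, is the inverse problem: given this forward pass, can you choose the weights (or the input) so that the model is forced to emit a particular token?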

Research Group: Human-centric ART
Institution: University of Rome Tor Vergata
Location: Rome
Required: a PhD in CS or a competitive publication track record
Desired: willingness to work in a team


To stay up-to-date:
X:  https://x.com/HumanCentricArt
LinkedIn: http://www.linkedin.com/in/fabio-massimo-zanzotto-b027831
Check out what we do: Fabio Massimo Zanzotto - Google 
Scholar<https://scholar.google.com/citations?user=azv7Qr4AAAAJ&hl=en>


Prof. Fabio Massimo Zanzotto
Dipartimento di Ingegneria dell'Impresa "Mario Lucertini"
University of Rome Tor Vergata
_______________________________________________
Corpora mailing list -- [email protected]
https://list.elra.info/mailman3/postorius/lists/corpora.list.elra.info/
To unsubscribe send an email to [email protected]