Somewhat interesting: a paper from a conference in Italy a couple of months ago:

http://discovery.dundee.ac.uk/portal/en/research/oh-thats-what-you-meant(20b8923c-28da-49ed-bc78-fcc741db3187).html

I anticipated old news about misunderstandings based on presentation 
differences at the level of water gun vs. realistic pistol renderings of 
the same code point. But the paper focuses on subtleties in the emotional 
reactions that different users associate with different smileys. E.g., how does 
U+1F624 “😤” compare with U+1F62C “😬”? A given user may perceive the two 
differently, and for either one a given user’s perception may differ when 
evaluating the depiction used in one app/platform versus another. The authors 
suggest that, if users characterized their reactions to the emoji on a given 
platform (e.g., degree of emotion, how positive or negative), then an 
automated system could substitute, when displaying one user's message to a 
second user, an emoji that more closely reflects the emotion the first user 
intended.
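That translation idea can be sketched as a nearest-neighbor lookup in an emotion space. Everything below is hypothetical, not from the paper: the two rating scales (valence and intensity), the function name, and the sample values are all made up for illustration.

```python
# Hypothetical sketch: each user rates emoji on two scales
# (valence: -1 negative .. +1 positive; intensity: 0 weak .. 1 strong).
# A message is "translated" by picking, for the receiver, the emoji whose
# ratings sit closest to the sender's ratings of the emoji actually sent.

def translate_emoji(emoji, sender_ratings, receiver_ratings):
    """Return the receiver-side emoji closest, by Euclidean distance in the
    (valence, intensity) plane, to the sender's perception of `emoji`."""
    target = sender_ratings[emoji]
    return min(
        receiver_ratings,
        key=lambda e: sum((a - b) ** 2 for a, b in zip(receiver_ratings[e], target)),
    )

# Made-up per-user ratings (valence, intensity).
sender = {"\U0001F624": (-0.6, 0.8), "\U0001F62C": (-0.2, 0.5)}
receiver = {
    "\U0001F624": (-0.1, 0.4),
    "\U0001F62C": (-0.6, 0.7),
    "\U0001F620": (-0.8, 0.9),
}

# The sender perceives U+1F624 as fairly negative and intense; of the
# receiver's emoji, U+1F62C is rated closest to that, so it is shown instead.
print(translate_emoji("\U0001F624", sender, receiver))  # → U+1F62C
```

In practice the ratings would have to come from the per-platform survey data the paper envisions, and the distance metric and number of scales are design choices.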

Peter
