On Sun, March 11, 2018 8:18 pm, Trajce Nikolov NICK wrote:
> Hello Community,
> I have carried this idea with me since my university days (about 20
> years ago), and until recently it remained just an idea. At one point it
> even came close to getting some funding, but that did not happen.
> The idea is to have a virtual 3D avatar that can "speak" sign language
> from text or voice input. Recent technology from Google has made the
> language-translation part of this doable.
> So I am here to ask whether someone might be interested in actually
> making this happen. I work with an artist (a friend of decades, with
> whom I have collaborated on various projects), and we want to do this as
> open source, based on OSG.
> All suggestions, brainstorms, hints, anything, are highly welcome!
I did some work on signing avatars 10 to 20 years ago, in an academic
context. In fact, several of the first page of hits for the Google search
that Jan Ciger posted refer to that project (Virtual Humans at UEA).
My own role in that was to create software to turn signing transcriptions
written in an avatar-independent notation into animation data for driving
any humanoid avatar, in real time. Feel free to email me for more
information if this sounds relevant to what you have in mind. My
publications on this can be found through ResearchGate or Google Scholar.
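As a rough illustration of the kind of pipeline described above, a converter of this sort maps transcription tokens to per-joint keyframes that any humanoid rig can interpolate in real time. The toy notation, lexicon, and joint names below are invented for illustration and are not the actual UEA system or any real signing notation:

```python
# Toy sketch: translate an avatar-independent sign transcription into
# generic per-joint keyframes. The notation, joint names, and poses are
# hypothetical; a real system driven by a HamNoSys-like notation would
# be far richer.

# A tiny "lexicon": each token names a hand shape or location as target
# joint rotations (joint name -> (x, y, z) Euler angles in degrees).
LEXICON = {
    "FLAT_HAND": {"r_wrist": (0.0, 0.0, 0.0)},
    "FIST":      {"r_wrist": (0.0, 0.0, 90.0)},
    "CHEST":     {"r_shoulder": (45.0, 0.0, 0.0),
                  "r_elbow": (90.0, 0.0, 0.0)},
}

def transcription_to_keyframes(tokens, seconds_per_token=0.5):
    """Turn a token sequence into (time, joint, rotation) keyframes.

    Successive tokens are spaced evenly in time; a real-time player
    would interpolate between these targets on the avatar's skeleton,
    so the output is independent of any particular avatar.
    """
    keyframes = []
    t = 0.0
    for token in tokens:
        pose = LEXICON[token]
        for joint, rotation in sorted(pose.items()):
            keyframes.append((t, joint, rotation))
        t += seconds_per_token
    return keyframes

frames = transcription_to_keyframes(["CHEST", "FLAT_HAND", "FIST"])
```

The point of the avatar-independent layer is that only the final interpolation step needs to know the target skeleton; the transcription and keyframe stages stay reusable across avatars.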
As Jan says, there has been a large amount of work on this by other
groups as well, although it seems to me that none of it has really taken
off, perhaps because it is a very niche application. The demand would come
primarily from the pre-lingually deaf, i.e., those for whom signing is
their first language. With all respect to that population, it is a very
small market.
Personally, I would be more interested in extending my work to do
procedural animation of more general sorts of movement, but I haven't
done anything substantial on that.
John Innes Centre and University of East Anglia
osg-users mailing list