I am in the process of completing a
proof-of-concept for a voice-enabled rule-based system: it uses voice
recognition (with a command-and-control grammar) to drive the interaction with
a rules-based intelligent agent, and voice synthesis for the agent to respond
verbally to the user.
So far, the results have been encouraging and
demonstrate the feasibility of man-to-machine communication using
commercial off-the-shelf (COTS) software. The next step is to integrate a FIPA Agent
Communication Language (ACL) to express a knowledge representation to the agent
so that it can learn (from the user) by "hearing" what the user
knows.
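To make the ACL step concrete, here is a minimal sketch of how a recognized utterance might be packaged as a FIPA ACL "inform" message in the standard string (s-expression) encoding. The agent names, content expression, and ontology below are illustrative placeholders I have invented, not details of the actual system:

```python
# Minimal sketch: wrap a recognized utterance as a FIPA ACL "inform"
# message in the standard string encoding. All names here
# (user-proxy, rule-agent, the content expression) are hypothetical.

def make_inform(sender: str, receiver: str, content: str,
                language: str = "fipa-sl",
                ontology: str = "user-knowledge") -> str:
    """Build a FIPA ACL inform message as an s-expression string."""
    return (
        "(inform\n"
        f"  :sender (agent-identifier :name {sender})\n"
        f"  :receiver (set (agent-identifier :name {receiver}))\n"
        f'  :content "{content}"\n'
        f"  :language {language}\n"
        f"  :ontology {ontology})"
    )

# e.g. after the recognizer matches "the printer is offline":
msg = make_inform("user-proxy", "rule-agent", "(offline printer1)")
print(msg)
```

The agent side would then parse the `:content` expression and assert it into its rule base, which is where the "learning by hearing" would happen.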
Has anyone out there been doing anything like this?
