Hi all,
I thought I would let you know that I have completed the
voice-enabled, rule-based proof-of-concept that I mentioned
a week or so ago.
Essentially, I capture voice input (speech text) with voice-recognition
software and send it into a rules engine as an object, which then routes it
to a YACC parser. The parser interprets the input in exactly the same way
as the voice-recognition grammar, but adds the Yacc "actions", which
formulate an "event" object for the rules engine to process. This rules
engine (where the event object is sent) functions as a router, so the event
can be forwarded to other rules engines that function as service providers.
The service-provider engine(s) may then use discrete/fuzzy logic and/or
hybrid neural nets to accomplish their mission. When they have finished,
they can communicate back to the user via voice synthesis, completing the
round trip of control and interaction.
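To make the round trip concrete, here is a minimal sketch of the event flow described above. This is not the author's actual code: the class and function names (`Event`, `RouterEngine`, `lighting_provider`) and the toy two-word grammar are all illustrative assumptions, with a simple string split standing in for the YACC parser.

```python
# Hypothetical sketch of the round trip: speech text is parsed into an
# "event" object, a router engine forwards it to a service-provider
# engine, and the provider's reply goes back out for voice synthesis.
from dataclasses import dataclass


@dataclass
class Event:
    """What the parser's actions would build from recognized speech."""
    action: str
    target: str


def parse(speech_text: str) -> Event:
    # Stand-in for the YACC parser: the real grammar mirrors the
    # voice-recognition grammar, and its actions construct the event.
    verb, noun = speech_text.lower().split(maxsplit=1)
    return Event(action=verb, target=noun)


class RouterEngine:
    """Rules engine acting as a router to service-provider engines."""

    def __init__(self):
        self.providers = {}

    def register(self, action: str, provider) -> None:
        self.providers[action] = provider

    def route(self, event: Event) -> str:
        return self.providers[event.action](event)


def lighting_provider(event: Event) -> str:
    # A service-provider engine; in the real system this is where
    # discrete/fuzzy logic or hybrid neural nets would do the work.
    return f"Turning on the {event.target}."


router = RouterEngine()
router.register("activate", lighting_provider)

reply = router.route(parse("Activate kitchen lights"))
# `reply` would then be handed to voice synthesis to close the loop.
```

The key design point the sketch preserves is that the router knows nothing about the providers' internals; it only dispatches events, so new service-provider engines can be registered without touching the parser.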
My next step is to incorporate (a higher level of)
knowledge representation into the language used to communicate with the system.
So, it is now evident that rules engines will play
an important role in the future of voice-enabled systems.
Rich Halsey
