I need some help from a win32/rich text edit control wizard. I'm looking for a
mentoring/code template generation relationship. In a nutshell, I have RSI-crippled
hands, which means writing code is a challenge, and I'm totally
unschooled in the ways of windowing systems. I'm working on techniques to make
it possible to write code using speech recognition and I could use some help
with the GUI aspect so I can focus on the speech user interface component.
The core idea is pairing a speech user interface with an application via an
accessibility adjunct program. This differs from the current model in that it
moves the speech user interface outside of the application so that it is no
longer bound by conventions of the application or the application's GUI
structure. The reason for this change comes from a hunch that it would make it
easier to create good speech user interfaces faster than the current in-situ UI
techniques do.
The transition between the application and the speech UI starts when an utterance
is detected and ends when the utterance ends. The transition consists of changing
focus from the application to the paired speech UI.
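To make the hand-off concrete, here is a minimal sketch of that utterance-driven focus switch. On Windows the actual activation would go through pywin32 (win32gui.GetForegroundWindow / win32gui.SetForegroundWindow); in this sketch those calls are injected as plain callables, which is an assumption on my part, so the state machine itself can be exercised anywhere.

```python
# Sketch of the utterance-driven focus hand-off: when an utterance starts,
# remember which application had focus and switch to the speech UI; when the
# utterance ends, hand focus back. The get/set callables stand in for
# win32gui.GetForegroundWindow / win32gui.SetForegroundWindow.

class FocusSwitcher:
    def __init__(self, get_foreground, set_foreground, speech_ui_hwnd):
        self._get = get_foreground        # e.g. win32gui.GetForegroundWindow
        self._set = set_foreground        # e.g. win32gui.SetForegroundWindow
        self._speech_ui = speech_ui_hwnd  # window handle of the speech UI
        self._paired_app = None           # handle saved at utterance start

    def utterance_started(self):
        # Save the paired application's handle, then focus the speech UI.
        self._paired_app = self._get()
        self._set(self._speech_ui)

    def utterance_ended(self):
        # Restore focus to whichever application was active before.
        if self._paired_app is not None:
            self._set(self._paired_app)
            self._paired_app = None
```

Keeping the win32 calls behind injected callables also means the speech UI logic never needs to know whether the paired application is Skype, an editor, or anything else.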
The speech UI component also has a GUI for accepting and presenting data. In
most applications, the data is processed incrementally, with intermediary results
visible within the window. The incrementally processed results are usually hints
as to what to say next. For example, if you are changing the directory and specify
a partial pathname, the window would show the completions you could say next.
In my mind's eye, I imagine that the window would contain a grid like a
spreadsheet or table. Only one cell at a time would be active for data input, and
the active cell would receive all dictation input and commands. Any number of
cells could be used for data input, but only one would be active at a time.
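Separating that rule from the widgets, the grid model might look like the sketch below. The class and method names are my own invention; the actual table widget (win32 or otherwise) is out of scope, and this only captures the invariant that every cell can hold data but exactly one receives dictation at a time.

```python
# Model of the single-active-cell grid: any cell may hold data, but all
# dictation and commands land in the one active cell.

class SpeechGrid:
    def __init__(self, rows, cols):
        self.cells = {(r, c): "" for r in range(rows) for c in range(cols)}
        self.active = (0, 0)  # exactly one active cell at any time

    def activate(self, row, col):
        # Move the single input focus to another cell.
        if (row, col) not in self.cells:
            raise KeyError("no such cell")
        self.active = (row, col)

    def dictate(self, text):
        # All dictation input is appended to the active cell.
        self.cells[self.active] += text

    def take(self):
        # Return the active cell's text to the paired application
        # (e.g. send it to Skype on command) and clear the cell.
        text, self.cells[self.active] = self.cells[self.active], ""
        return text
```

The Skype use case below would be a two-cell instance of this: one cell for composing, one for the history of previously sent text.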
How would this be used? For example, the Skype IM window is atrocious for
disabled users. If one associated a two-cell speech UI window with Skype, the
user could dictate, edit, retrain, etc. within the speech UI program. On
command, the speech UI program would return the text to Skype. The window could
also contain a small history so that, like Skype, you could go back and edit a
previously sent message.
A second use case is found on my blog http://blog.esjworks.com, which is the tool
to create mangled codenames from speech. I think the grid model works better
than the individual text fields I originally wrote because it allows a
better presentation of a cache of previously generated symbols.
Anyway, if anybody can help with the windowing side, I would really appreciate
some assistance.
--- eric
_______________________________________________
python-win32 mailing list
python-win32@python.org
http://mail.python.org/mailman/listinfo/python-win32