Hi,
I've been thinking that this kind of demonstration might be most efficiently achieved with a vector-driven internal representation. That is, Novamente's internal construction and representation of such a "demonstration model" might be done with vectors ("animated" by schema procedures), using pixels only in the final rendering (unless of course a native vector display were used, but I doubt one would be more practical than a pixel translation).
This is easy to conceptualize with the running-man model: the idea of a "man running" might be conveyed with only a small number of vectors (perhaps as few as 10 or 14, given the major points and lines involved for the arms, legs, and torso) driven by a compound of simple algorithms that repeat in a cycle. Interactive fine-tuning with an operator also seems a very tractable problem for combo-BOA, since the entire cycling compound action model can be represented as a single CombinatorTree.
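To make the "handful of vectors plus a cyclic schema" idea concrete, here is a minimal sketch (all names, numbers, and the sinusoidal gait formula are my own illustrative assumptions, not Novamente's actual representation) of a stick-figure runner expressed as ten line segments whose limb swings are simple functions of a repeating gait phase:

```python
import math

def running_man_vectors(phase, stride=0.8, hip=(0.0, 1.0)):
    """Return a small set of line segments ((x1, y1), (x2, y2)) describing
    a stick figure mid-run at the given gait phase (0..1).

    Joint swings are simple sinusoids of the phase -- a crude cyclic
    "schema" animating the vector model.
    """
    hx, hy = hip
    segments = []

    # Torso: hip up to the shoulders, leaning slightly forward.
    sx, sy = hx + 0.1, hy + 0.6
    segments.append(((hx, hy), (sx, sy)))

    # Head: a short segment above the shoulders.
    segments.append(((sx, sy), (sx + 0.05, sy + 0.25)))

    # Legs: thigh + shin per side, the two legs in antiphase.
    for side in (0.0, 0.5):                      # half-cycle offset
        swing = stride * math.sin(2 * math.pi * (phase + side))
        kx, ky = hx + 0.3 * swing, hy - 0.45     # knee
        fx, fy = kx + 0.2 * swing, ky - 0.45     # foot
        segments.append(((hx, hy), (kx, ky)))    # thigh
        segments.append(((kx, ky), (fx, fy)))    # shin

    # Arms: upper + lower arm per side, swinging opposite the legs.
    for side in (0.5, 0.0):
        swing = 0.6 * stride * math.sin(2 * math.pi * (phase + side))
        ex, ey = sx + 0.25 * swing, sy - 0.35    # elbow
        wx, wy = ex + 0.2 * swing, ey - 0.25     # wrist
        segments.append(((sx, sy), (ex, ey)))    # upper arm
        segments.append(((ex, ey), (wx, wy)))    # forearm

    return segments
```

Stepping the phase through 0..1 and redrawing the ten segments each frame gives the running cycle; only the final rasterization step ever touches pixels.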
Vector models are the basis for virtually all the complex CGI we see in film, particularly where motion is concerned (e.g. Gollum); shape and texture filling are added to the vector model afterward.
-dave
Ben Goertzel wrote:
hey -- good idea!!
In fact, we already have a beta user interface that does something like this, in a limited context. You can see certain Novamente productions in both English form and "internal node and link representation" form. However, this is really only useful for simple productions; otherwise there are far too many nodes and links involved.
However, you also seem to be suggesting something different -- having Novamente make visual productions in parallel with English productions. This is also possible, and a good idea, but we don't have anything like this right now...
Ben
-----Original Message----- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of Erik Nilsson Sent: Thursday, October 14, 2004 8:57 PM To: [EMAIL PROTECTED] Subject: [agi] To communicate with an AGI
Hi,
Will Novamente, in communicating with humans, be able to show and tell? That is, in addition to text, will it be able to produce as output a model that shows its human counterpart what it means? This would seem to make it a bit easier to understand.

One could also imagine an advantage in directly manipulating the implementation of what the AGI means to convey, as a way of giving it feedback. For example, if the output were a model implementing what the AGI considered to be the essence of a running man, and the man seemed to the human observer to be walking, one could directly manipulate this output model to portray a running man and feed it back to the AGI.

Presumably this kind of interaction would be easier if the interface gave direct access to what the AGI considered to be the component dimensions of its output -- akin to a computer game where, in manipulating the appearance of a humanoid, one does not edit it pixel by pixel but instead changes, say, height with a simple slider. In this case, if step frequency were a component dimension for the AGI, one could simply adjust it with a slider to better reflect what running is to the human counterpart. If step frequency were not among the AGI's component dimensions, the ability to define such dimensions on the fly and feed them back to the AGI might be useful, presuming that were deemed expedient in illustrating the difference between walking and running.
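The slider idea might be sketched roughly as follows (everything here -- class name, methods, the particular dimensions and values -- is a hypothetical illustration of exposing named component dimensions for feedback, rather than any actual Novamente interface):

```python
class DemonstrationModel:
    """A toy output model whose named component dimensions an operator
    can adjust and feed back, instead of editing the rendering
    pixel by pixel."""

    def __init__(self, **dimensions):
        # e.g. height=1.8, step_frequency=1.2
        self.dimensions = dict(dimensions)

    def adjust(self, name, value):
        """Slider-style edit of one existing component dimension."""
        if name not in self.dimensions:
            raise KeyError(f"{name!r} is not a component dimension")
        self.dimensions[name] = value

    def define_dimension(self, name, value):
        """Define a new dimension on the fly (e.g. step_frequency,
        if the AGI had not already treated it as one), so the
        correction can be fed back in the operator's own terms."""
        self.dimensions.setdefault(name, value)


# The AGI's output looks like walking; the operator drags a slider
# until it reads as running, then feeds the model back.
model = DemonstrationModel(height=1.8, step_frequency=1.2)
model.adjust("step_frequency", 2.8)
```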
Regards,
Erik Nilsson