[agi] Primates using tools.

2008-01-30 Thread Robert Wensman
This could perhaps be relevant to understanding human level intelligence.
One interpretation here is that the brain of primates considers tools as
part of their body, which makes them good at using them:

http://sciencenow.sciencemag.org/cgi/content/full/2008/128/2

This of course, still leaves the question of how a generally intelligent
system uses its body in the first place, and what special hardware there is
to deal with this problem. :-).

Personally, I believe that a general intelligence, such as the human mind,
still has some specialized processors to deal with very common situations.

Another thing that I guess could use some special hardware is the ability
to feel empathy and understand other human beings or animals. Understanding
other intelligent beings is so important for humans, yet if done in a
general way it seems incredibly expensive and difficult. Also, a human is
in many ways very similar to the intelligent beings it tries to simulate, so
it is my firm belief that a human uses parts of its own cognitive process to
simulate other intelligent beings. I think that a social AGI system needs to
be able to instantiate its own cognitive process in a kind of role-play:
assume that I know this, that I want this, and that I am in this kind of
situation - what would I do? This role-playing can then be used to assess
others' actions.
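
To make the role-play idea a bit more concrete, here is a minimal sketch
(purely my own toy illustration - every name and structure in it is invented,
and it is not anyone's actual design): the agent re-runs its *own* decision
procedure on another agent's presumed beliefs and goal in order to predict
what that agent will do.

from dataclasses import dataclass

@dataclass
class MentalState:
    beliefs: dict   # what the (real or imagined) agent takes to be true
    goal: str       # what it currently wants

def decide(state: MentalState) -> str:
    """My own decision procedure: pick an action that serves the current goal."""
    if state.goal == "eat" and state.beliefs.get("food_location"):
        return "go to " + state.beliefs["food_location"]
    return "explore"

def predict_other(their_beliefs: dict, their_goal: str) -> str:
    """Role-play: run *my own* decision procedure on *their* presumed mental state."""
    return decide(MentalState(beliefs=their_beliefs, goal=their_goal))

# If I assume the other agent is hungry and believes food is in the kitchen,
# I predict it will head for the kitchen.
print(predict_other({"food_location": "kitchen"}, "eat"))   # -> go to kitchen

The point is only that no second cognitive machinery is needed: the same
decision function is reused with substituted beliefs and goals.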

The fact that empathy seems to be more strongly connected to biological
heritage than to social influence could indicate that the ability to
feel empathy needs special hardware in our brain. I think I heard of a
study that showed a very strong correlation between the empathic
abilities of identical twins, which would suggest that their social
upbringing has less influence on this particular ability. However, I
don't remember the source of that information.

/Robert Wensman


Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-01-30 Thread Kaj Sotala
On Jan 29, 2008 6:52 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 Okay, sorry to hit you with incomprehensible technical detail, but maybe
 there is a chance that my garbled version of the real picture will
 strike a chord.

 The message to take home from all of this is that:

 1) There are *huge* differences between the way that a system would
 behave if it had a single GS, or even a group of conflicting GS modules
 (which is the way you interpreted my proposal, above) and the kind of
 MES system I just described:  the difference would come from the type of
 influence exerted, because the vector field is operating on a completely
 different level than the symbol processing.

 2) The effect of the MES is to bias the system, but this bias amounts
 to the following system imperative:  [Make your goals consistent with
 this *massive* set of constraints], where the "massive set of
 constraints" is a set of ideas built up throughout the entire
 development of the system.  Rephrasing that in terms of an example:  if
 the system gets an idea that it should take a certain course of action
 because it seems to satisfy an immediate goal, the implications of that
 action will be quickly checked against a vast range of constraints, and
 if there is any hint of an inconsistency with the value system, this
 will pull the thoughts of the AGI toward that issue, whereupon it will
 start to elaborate the issue in more detail and try to impose an even
 wider net of constraints, finally making a decision based on the broadest
 possible set of considerations.  This takes care of all the dumb
 examples where people suggest that an AGI could start with the goal
 "Increase global happiness" and then finally decide that this would be
 accomplished by tiling the universe with smiley faces.  Another way to
 say this:  there is no such thing as a single utility function in this
 type of system, nor is there a small set of utility functions -- there
 is a massive-dimensional set of utility functions (as many as there are
 concepts or connections in the system), and this diffuse utility
 function is what gives the system its stability.
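
A toy sketch of the diffuse-utility idea described above (purely illustrative
- this is not the actual MES design, and every name and number in it is made
up): a candidate action is scored against thousands of weak, learned
constraints rather than one utility function, and any constraint that objects
strongly is flagged for further elaboration before a decision is made.

import random

random.seed(0)
NUM_CONSTRAINTS = 10_000       # "as many as there are concepts or connections"
FEATURES = 5                   # features describing a candidate action

# Each constraint is just a random weak linear test here; in the idea sketched
# above they would be ideas accumulated over the system's whole development.
constraints = [[random.uniform(-1.0, 1.0) for _ in range(FEATURES)]
               for _ in range(NUM_CONSTRAINTS)]

def objection(constraint, action):
    """How strongly one constraint objects to the action (0 = no objection)."""
    score = sum(w * f for w, f in zip(constraint, action))
    return max(0.0, -score)

def evaluate(action, alarm=0.5):
    """Diffuse evaluation: aggregate many weak objections, and count the
    strong ones that would pull the system's attention toward elaboration."""
    objections = [objection(c, action) for c in constraints]
    flagged = sum(1 for o in objections if o > alarm)
    return sum(objections) / len(objections), flagged

mean_obj, needs_attention = evaluate([0.2, -0.1, 0.5, 0.0, 0.3])
print(f"mean objection {mean_obj:.3f}; {needs_attention} constraints demand attention")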

I got the general gist of that, I think.

You've previously expressed that you don't think a seriously
unfriendly AGI will be likely, apparently because you assume the
motivational-system AGI will be the kind that'll be constructed and
not, for instance, a goal stack-driven one. Now, what makes you so
certain that people will build this kind of AGI? Even if we assume
that this sort of architecture would be the most viable one, a lot
seems to depend on how tight the constraints on its behavior are, and
what kind they are - you say that they are "a set of ideas built up
throughout the entire development of the system." The ethics and
values of humans are the result of a long, long period of evolution,
and our ethical system is pretty much of a mess. What makes it likely
that it really will build up a set of ideas and constraints that we humans
would *want* it to build? Could it not just as well pick up ones that
are seriously unfriendly, especially if its designers or the ones
raising it are in the least bit careless?

Even among humans, there exist radical philosophers whose ideas of a
perfect society are repulsive to the vast majority of the populace,
and a countless number of disagreements about ethics. If we humans
have such disagreements - we who all share the same evolutionary
origin biasing us to develop our moral systems in a certain direction
- what makes it plausible to assume that the first AGIs put together
(probably while our understanding of our own workings is still
incomplete) will develop a morality we'll like?



-- 
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/



Re: [agi] Primates using tools.

2008-01-30 Thread Bob Mottram
On 30/01/2008, Robert Wensman [EMAIL PROTECTED] wrote:
 Another thing that I guess could use some special hardware is the ability
 to feel empathy and understand other human beings or animals. Understanding
 other intelligent beings is so important for humans, yet if done in a
 general way it seems incredibly expensive and difficult. Also, a human is
 in many ways very similar to the intelligent beings it tries to simulate, so
 it is my firm belief that a human uses parts of its own cognitive process to
 simulate other intelligent beings. I think that a social AGI system needs to
 be able to instantiate its own cognitive process in a kind of role-play:
 assume that I know this, that I want this, and that I am in this kind of
 situation - what would I do? This role-playing can then be used to assess
 others' actions.


Yes.  This is a kind of bootstrapping process.  First you need to just
play around and start learning about how your own system interacts
with the environment to develop a primitive theory of self.  Here
"system" and "system interactions" could mean a physical body, or they
could also apply to a disembodied intelligence living within an
abstract domain such as the internet.

In robotics, general tool use means picking up the tool (assuming you
know what procedure is appropriate to grab it); then, using cameras, the
robot can observe the end of the object as it waves it randomly
around.  Once the principal axis and length of the object have been
determined, the tool can be integrated into the kinematic model for the
arm as if it were part of the robot.
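
As a minimal sketch of that last step (my own illustration, assuming a planar
two-link arm and numpy - nothing here describes an actual robot): fit the
tool's principal axis and length to the observed points, then append the tool
as one more rigid link on the end of the arm's kinematic chain.

import numpy as np

def fit_tool(points):
    """Estimate a hand-held tool's principal axis and length from observed 2-D points."""
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    axis = vt[0]                          # direction of greatest spread
    length = np.ptp(centred @ axis)       # extent of the points along that axis
    return axis, length

def forward_kinematics(link_lengths, joint_angles):
    """End point of a planar serial arm: each link adds its length at the summed angle."""
    x = y = angle = 0.0
    for link, theta in zip(link_lengths, joint_angles):
        angle += theta
        x += link * np.cos(angle)
        y += link * np.sin(angle)
    return x, y

# Noisy observations of a tool roughly 0.3 units long, held at some fixed angle.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 0.3, size=50)
tool_points = np.c_[t * np.cos(0.7), t * np.sin(0.7)] + rng.normal(0, 0.005, (50, 2))

_, tool_length = fit_tool(tool_points)
arm_links = [0.4, 0.3]                     # the robot's own two links
arm_with_tool = arm_links + [tool_length]  # the tool becomes one more rigid link
print(forward_kinematics(arm_with_tool, [0.3, -0.2, 0.0]))

Once the tool is just another link, all the existing planning and control for
the arm applies to the tool tip unchanged.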

In my opinion the development of a primitive theory of self - and here
I'm not referring to higher-level social constructs - is the
starting point for many other abilities.  If you can learn to model
yourself, then it's possible to do things such as identify and
compensate for damage, and to create multiple instances of your model
which are then applied to other beings (the theory of mind).



Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-01-30 Thread Stan Nilsen

Kaj Sotala wrote:

On Jan 29, 2008 6:52 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

Okay, sorry to hit you with incomprehensible technical detail, but maybe
there is a chance that my garbled version of the real picture will
strike a chord.

The message to take home from all of this is that:

1) There are *huge* differences between the way that a system would
behave if it had a single GS, or even a group of conflicting GS modules
(which is the way you interpreted my proposal, above) and the kind of
MES system I just described:  the difference would come from the type of
influence exerted, because the vector field is operating on a completely
different level than the symbol processing.

2) The effect of the MES is to bias the system, but this bias amounts
to the following system imperative:  [Make your goals consistent with
this *massive* set of constraints], where the "massive set of
constraints" is a set of ideas built up throughout the entire
development of the system.  Rephrasing that in terms of an example:  if
the system gets an idea that it should take a certain course of action
because it seems to satisfy an immediate goal, the implications of that
action will be quickly checked against a vast range of constraints, and
if there is any hint of an inconsistency with the value system, this
will pull the thoughts of the AGI toward that issue, whereupon it will
start to elaborate the issue in more detail and try to impose an even
wider net of constraints, finally making a decision based on the broadest
possible set of considerations.  This takes care of all the dumb
examples where people suggest that an AGI could start with the goal
"Increase global happiness" and then finally decide that this would be
accomplished by tiling the universe with smiley faces.  Another way to
say this:  there is no such thing as a single utility function in this
type of system, nor is there a small set of utility functions -- there
is a massive-dimensional set of utility functions (as many as there are
concepts or connections in the system), and this diffuse utility
function is what gives the system its stability.


I got the general gist of that, I think.

You've previously expressed that you don't think a seriously
unfriendly AGI will be likely, apparently because you assume the
motivational-system AGI will be the kind that'll be constructed and
not, for instance, a goal stack-driven one. Now, what makes you so
certain that people will build this kind of AGI? Even if we assume
that this sort of architecture would be the most viable one, a lot
seems to depend on how tight the constraints on its behavior are, and
what kind they are - you say that they are "a set of ideas built up
throughout the entire development of the system." The ethics and
values of humans are the result of a long, long period of evolution,
and our ethical system is pretty much of a mess. What makes it likely
that it really will build up a set of ideas and constraints that we humans
would *want* it to build? Could it not just as well pick up ones that
are seriously unfriendly, especially if its designers or the ones
raising it are in the least bit careless?

Even among humans, there exist radical philosophers whose ideas of a
perfect society are repulsive to the vast majority of the populace,
and a countless number of disagreements about ethics. If we humans
have such disagreements - we who all share the same evolutionary
origin biasing us to develop our moral systems in a certain direction
- what makes it plausible to assume that the first AGIs put together
(probably while our understanding of our own workings is still
incomplete) will develop a morality we'll like?



Perhaps we make too much of the ideas of "moral" and "ethical".  As noted,
this leads to endless debate.  The alternative is to use law, even
though it may be arbitrary and haphazard in formulation.


The importance of law is that it establishes risk.  As humans, we
understand risk.  Will an AI understand risk?  Or should we rephrase
this to read "will there be a risk for an AI?"


Examples of what an AI might risk:

1. Banishment - not allowed to run; no loading into hardware.
2. Isolation - prevention of access to published material or experimentation.
3. Imprisonment - similar to isolation, with more access than isolation.
4. Close supervision - imposing control through close supervision,
constant oversight, actions subject to approval...
5. Economic sanction - not allowed to negotiate any deals or take
control of resources.
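
As a toy illustration of how law-as-risk could enter an AI's decision making
(entirely a sketch with invented action names, probabilities and penalties -
not a proposal): weigh each action's expected gain against the probability of
being caught times the cost of the sanction.

# All numbers are invented purely for illustration.
actions = {
    # action:                        (expected gain, P(caught), cost of sanction)
    "negotiate openly":              (5.0, 0.0, 0.0),
    "quietly grab resources":        (9.0, 0.6, 20.0),    # risks economic sanction
    "self-copy without permission": (12.0, 0.9, 100.0),   # risks banishment
}

def expected_value(gain, p_caught, penalty):
    """Gain minus the expected cost of the legal sanction."""
    return gain - p_caught * penalty

for name, params in actions.items():
    print(f"{name:30s} expected value = {expected_value(*params):6.1f}")
print("chosen:", max(actions, key=lambda n: expected_value(*actions[n])))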


I expect Matt Mahoney to point out that "resistance is futile" -- the AIs
will outsmart us.  Does that mean that criminals will ultimately be
smarter than non-criminals?  Maybe the AIs of the future will want a
level playing field and be motivated to enforce laws.


I see Richard's design as easily being able to implement risk factors
that could lead to intelligent and legal behavior.  I'm impressed by the
design.  Thanks for the explanation.


Stan Nilsen



[agi] Request for Help

2008-01-30 Thread Mike Tintner
Remember that mathematical test/experiment you all hated - the one where
you doodle on this site -


http://www.imagination3.com

and it records your actual stream of drawing in time as well as the finished 
product?


Well, a reasonably eminent scientist liked it, and wants to set it up. But
he's having problems contacting the site - they don't reply to emails -
and there's no way of accessing the time and
space coordinates of the drawings, unless there's a
possibility to read them from the Flash animation.

Can you suggest either a) a way round this or b) an alternative site/method 
to faithfully record the time and space coordinates of the drawings?


Cheeky request perhaps, but I would be v. grateful for any help.





Re: [agi] Request for Help

2008-01-30 Thread Mark Waser

I know that you can do stuff like this with Microsoft's new Silverlight.

For example, http://www.devx.com/dotnet/Article/36544









Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-30 Thread Vladimir Nesov
On Jan 29, 2008 10:28 PM, Mark Waser [EMAIL PROTECTED] wrote:

 Ethics only becomes snarled when one is unwilling to decide/declare what the
 goal of life is.

 Extrapolated Volition comes down to a homunculus depending upon the
 definition of "wiser" or "saner."

 Evolution has decided what the goal of life is . . . . but most are
 unwilling to accept it (in part because most do not see it as anything other
 than "nature, red in tooth and claw").

 The "goal" in life is simply continuation and continuity.  Evolution goes
 for continuation of species -- which has an immediate subgoal of
 continuation of individuals (and sex and protection of offspring).
 Continuation of individuals is best served by the construction of and
 continuation of society.

 If we're smart, we should decide that the goal of ethics is the continuation
 of society with an immediate subgoal of the will of individuals (for a large
 variety of reasons -- but the most obvious and easily justified is to
 prevent the defection of said individuals).

 If an AGI is considered a willed individual and a member of society and has
 the same ethics, life will be much easier and there will be a lot less
 chance of the Eliezer-scenario.  There is no enslavement of Jupiter-brains
 and no elimination/suppression of lesser individuals in favor of greater
 individuals -- just a realization that society must promote individuals and
 individuals must promote society.

 Oh, and contrary to popular belief -- ethics has absolutely nothing to do
 with pleasure or pain and *any* ethics based on such are doomed to failure.
 Pleasure is evolution's reward to us when we do something that promotes
 evolution's goals.  Pain is evolution's punishment when we do something
 (or have something done) that is contrary to survival, etc.  And while both
 can be subverted so that they don't properly indicate guidance -- in
 reality, that is all that they are -- guideposts towards other goals.
 Pleasure is a BAD goal because it can interfere with other goals.  Avoidance
 of pain (or infliction of pain) is only a good goal in that it furthers
 other goals.

Mark,

Nature doesn't even have survival as its 'goal', what matters is only
survival in the past, not in the future, yet you start to describe
strategies for future survival. Yes, survival in the future is one
likely accidental property of structures that survived in the past,
but so are other properties of specific living organisms. Nature is
stupid, so design choices left to it are biased towards keeping much
of the historical baggage and resorting to unsystematic hacks, and as
a result its products are not simply optimal survivors.

When we are talking about choice of conditions for humans to live in
(rules of society, morality), we are trying to understand what *we*
would like to choose. We are doing it for ourselves. Better
understanding of *human* nature can help us to estimate how we will
appreciate various conditions. And humans are very complicated things,
with a large burden of reinforcers that push us in different
directions based on idiosyncratic criteria. These reinforcers used to
line up to support survival in the past, but so what?

-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-30 Thread Mark Waser

Nature doesn't even have survival as its 'goal', what matters is only
survival in the past, not in the future, yet you start to describe
strategies for future survival.


"Goal" was in quotes for a reason.  In the future, the same tautological
forces will apply.  Evolution will favor those things that are adapted to 
survive/thrive.



Nature is
stupid, so design choices left to it are biased towards keeping much
of the historical baggage and resorting to unsystematic hacks, and as
a result its products are not simply optimal survivors.


Yes, everything is co-evolving so fast that evolution cannot keep up and
produce optimum solutions.  But are you stupid enough to try to fight
nature and the laws of probability and physics?  We can improve on nature --  
but you're never going to successfully go in a totally opposite direction.



When we are talking about choice of conditions for humans to live in
(rules of society, morality), we are trying to understand what *we*
would like to choose.


What we like (including what we like to choose) was formed by evolution. 
Some of what we like has been overtaken by events and is no longer 
pro-survival, but *everything* that we like has served a pro-survival purpose
in the past (survival meaning survival of offspring and the species -- so 
altruism *IS* an evolutionarily-created like as well).



Better
understanding of *human* nature can help us to estimate how we will
appreciate various conditions.


Not if we can program our own appreciations.  And what do we want our AGI to 
appreciate?



humans are very complicated things,
with a large burden of reinforcers that push us in different
directions based on idiosyncratic criteria.


Very true.  So don't you want a simpler, clearer, non-contradictory set of
reinforcers for your AGI (one that will lead to both it and you being happy)?



These reinforcers used to
line up to support survival in the past, but so what?


So . . . I'd like to create reinforcers to support my survival and freedom 
and that of the descendants of the human race.  Don't you?





Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-01-30 Thread Richard Loosemore

Kaj Sotala wrote:

On Jan 29, 2008 6:52 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

Okay, sorry to hit you with incomprehensible technical detail, but maybe
there is a chance that my garbled version of the real picture will
strike a chord.

The message to take home from all of this is that:

1) There are *huge* differences between the way that a system would
behave if it had a single GS, or even a group of conflicting GS modules
(which is the way you interpreted my proposal, above) and the kind of
MES system I just described:  the difference would come from the type of
influence exerted, because the vector field is operating on a completely
different level than the symbol processing.

2) The effect of the MES is to bias the system, but this bias amounts
to the following system imperative:  [Make your goals consistent with
this *massive* set of constraints], where the "massive set of
constraints" is a set of ideas built up throughout the entire
development of the system.  Rephrasing that in terms of an example:  if
the system gets an idea that it should take a certain course of action
because it seems to satisfy an immediate goal, the implications of that
action will be quickly checked against a vast range of constraints, and
if there is any hint of an inconsistency with the value system, this
will pull the thoughts of the AGI toward that issue, whereupon it will
start to elaborate the issue in more detail and try to impose an even
wider net of constraints, finally making a decision based on the broadest
possible set of considerations.  This takes care of all the dumb
examples where people suggest that an AGI could start with the goal
"Increase global happiness" and then finally decide that this would be
accomplished by tiling the universe with smiley faces.  Another way to
say this:  there is no such thing as a single utility function in this
type of system, nor is there a small set of utility functions -- there
is a massive-dimensional set of utility functions (as many as there are
concepts or connections in the system), and this diffuse utility
function is what gives the system its stability.


I got the general gist of that, I think.

You've previously expressed that you don't think a seriously
unfriendly AGI will be likely, apparently because you assume the
motivational-system AGI will be the kind that'll be constructed and
not, for instance, a goal stack-driven one. Now, what makes you so
certain that people will build this kind of AGI?


Kaj,

[This is just a preliminary answer:  I am composing a full essay now, 
which will appear in my blog.  This is such a complex debate that it 
needs to be unpacked in a lot more detail than is possible here.  Richard].



The answer is a mixture of factors.

The most important reason that I think this type will win out over a
goal-stack system is that I really think the latter cannot be made to
work in a form that allows substantial learning.  A goal-stack control
system relies on a two-step process:  build your stack using goals that
are represented in some kind of propositional form, and then (when you
are ready to pursue a goal) *interpret* the meaning of the proposition
on the top of the stack so you can start breaking it up into subgoals.

The problem with this two-step process is that the interpretation of
each goal is only easy when you are down at the lower levels of the
stack - "Pick up the red block" is easy to interpret, but "Make humans
happy" is a profoundly abstract statement that has a million different
interpretations.
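
Here is a deliberately crude sketch of that two-step process (my own
illustration, not anyone's actual system): goals are pushed as propositional
strings, and pursuing one means popping it and looking up an interpretation
that breaks it into subgoals - which works only as long as an unambiguous
interpretation exists.

# Step 1: goals live on a stack as propositional strings.
goal_stack = ["Make humans happy", "Pick up the red block"]

# Step 2: pursuing a goal means interpreting it into subgoals.  Low-level goals
# have obvious decompositions; abstract ones do not.
interpretations = {
    "Pick up the red block": ["locate red block", "move gripper to block", "close gripper"],
    "locate red block": [],        # primitive: can be executed directly
    "move gripper to block": [],
    "close gripper": [],
}

def run(stack):
    while stack:
        goal = stack.pop()
        subgoals = interpretations.get(goal)
        if subgoals is None:
            print("no unambiguous interpretation for:", repr(goal))
        elif not subgoals:
            print("executing primitive:", repr(goal))
        else:
            stack.extend(reversed(subgoals))   # expand, keeping the intended order

run(goal_stack)

The block goal decomposes cleanly; the abstract goal falls straight through to
the "no unambiguous interpretation" case, which is exactly the problem.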

This is one reason why nobody has built an AGI.  To make a completely
autonomous system that can do such things as learn by engaging in
exploratory behavior, you have to be able to insert goals like "Do some
playing", and there is no clear way to break that statement down into
unambiguous subgoals.  The result is that if you really did try to build
an AGI with a goal like that, the actual behavior of the system would be
wildly unpredictable, and probably not good for the system itself.

Further:  if the system is to acquire its own knowledge independently
from a child-like state (something that, for separate reasons, I think
is going to be another prerequisite for true AGI), then the child system
cannot possibly have goals built into it that contain statements like
"Engage in an empathic relationship with your parents", because it does
not have the knowledge base built up yet, and cannot understand such a
proposition!

These technical reasons seem to imply that the first AGI that is
successful will, in fact, have a motivational-emotional system.  Anyone
else trying to build a goal-stack system will simply never get there.

But beyond this technical reason, I also believe that when people start
to make a serious effort to build AGI systems - i.e. when it is talked
about in government budget speeches across the world - there will be
questions about safety, and the safety features of the two types of AGI
will be examined.  I believe that at 

RE: [agi] Request for Help

2008-01-30 Thread Benjamin Johnston

Hi Mike,

When the Flash code on your machine contacts the server, I assume it would
use a fairly straightforward communications format (XML over HTTP maybe?).
If you install a program to monitor the communications, you might be able to
figure out how to get at the data without using Flash. There are plenty of
options for such monitoring: do a web search for HTTP sniffer, HTTP protocol
analyzer or HTTP spy or something along those lines. Such a tool installs on your own
computer and tells you all the communications that the Flash applet is
making with the server. I don't know about the legality of mining the
underlying web services and data with your own client (rather than the
supplied Flash applet). 

If you or your collaborator is currently based at a university, another
possibility would be to recruit an undergraduate student to implement a
similar Flash applet of your own. A simple drawing recorder without all
the fancy extras and graphic design of the imagination3 site should actually
be a fairly small project. You can probably find a student to create a
similar web service for you in a few days for less than a couple hundred
dollars.
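
For what it's worth, a bare-bones recorder really is tiny.  Here is a sketch
using Python's standard tkinter instead of Flash (my own illustration of the
idea, not production code): every stroke is logged as a list of
(timestamp, x, y) points while it is drawn.

import time
import tkinter as tk

strokes = []          # each stroke is a list of (timestamp, x, y) points

root = tk.Tk()
canvas = tk.Canvas(root, width=600, height=400, bg="white")
canvas.pack()

def start_stroke(event):
    strokes.append([(time.time(), event.x, event.y)])

def extend_stroke(event):
    _, x0, y0 = strokes[-1][-1]
    canvas.create_line(x0, y0, event.x, event.y)          # draw the new segment
    strokes[-1].append((time.time(), event.x, event.y))   # record space-time point

canvas.bind("<ButtonPress-1>", start_stroke)
canvas.bind("<B1-Motion>", extend_stroke)
root.mainloop()

# After the window is closed, the full space-time record is available.
for i, stroke in enumerate(strokes):
    print(f"stroke {i}: {len(stroke)} points, "
          f"{stroke[-1][0] - stroke[0][0]:.2f} seconds")

A web version would of course need a browser front end, but the data model -
timestamped coordinate lists per stroke - is the same either way.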

-Ben

