On Fri, Feb 02, 2007 at 04:06:08PM +0900, chris wrote:
> For time, with the python-ode example, I did not see much divergence
> until about 8000.

Thank you; at which step size?
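
If you want to reproduce the measurement, here is a minimal sketch
(assuming the standard PyODE API; the scene, a single sphere in free
fall with no contacts, is made up purely for illustration). It runs
the same simulation at two step sizes and prints how far the positions
drift apart at the same simulated time:

import ode

def simulate(dt, t_end):
    # One sphere falling under gravity. ODE integrates with
    # semi-implicit Euler, so the trajectory depends on dt.
    world = ode.World()
    world.setGravity((0.0, -9.81, 0.0))
    body = ode.Body(world)
    mass = ode.Mass()
    mass.setSphere(2500.0, 0.05)  # density, radius
    body.setMass(mass)
    body.setPosition((0.0, 0.0, 0.0))
    for _ in range(int(t_end / dt)):
        world.step(dt)
    return body.getPosition()

# Same scene, same simulated time: 8000 steps at dt=0.01 versus
# 80000 steps at dt=0.001.
pa = simulate(0.01, 80.0)
pb = simulate(0.001, 80.0)
drift = sum((a - b) ** 2 for a, b in zip(pa, pb)) ** 0.5
print("position difference after 80 s:", drift)

For pure free fall the drift is just discretization error; it grows
with both the simulated time and the step size (a few metres here).
Once contacts and friction enter, the divergence gets much less
predictable.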

> To me, the scary thing is that people tend to assume

When people start to assume, bad things always start to happen. :)

> that a computer
> simulation, programmed with high precision and all, is going to be
> accurate and reliable. Consider the case where a military simulation
> is used to generate images that they expect a sensor should "see".
> These images are compared to "ground truth" images and the result is
> used to calibrate a sensor, which is then used in a craft or weapon.
> If there is unknown positional error affecting the simulated image
> (and most practitioners are unaware of the effect of
> spatial/positional error on rendered images) then the sensor gets
> miscalibrated.
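
The registration error doesn't even need to be big. Here's a toy
sketch (plain numpy; the least-squares gain calibration and all the
numbers are made up for illustration, not any real sensor pipeline)
where a one-pixel misalignment between the simulated reference and
the sensor image already skews the estimated gain badly:

import numpy as np

rng = np.random.default_rng(1)

truth = rng.random((64, 64))         # "ground truth" scene
sensor = 2.0 * truth                 # the sensor's true gain is 2.0

aligned = truth                      # simulation perfectly registered
shifted = np.roll(truth, 1, axis=1)  # simulation off by one pixel

for name, ref in [("aligned", aligned), ("shifted", shifted)]:
    # Least-squares estimate of the gain from reference vs. sensor.
    gain = (ref * sensor).sum() / (ref * ref).sum()
    print(name, "estimated gain:", round(gain, 3))

The aligned case recovers the gain exactly; the shifted case comes
out roughly 25% low, and nothing in the output tells you why.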

There's an anecdote in university circles about an AI trained to find
camouflaged tanks. It used a neural net which could classify pictures
shown to it as "tank present" or "no tank present" and was trained by
being fed images together with the information whether a tank was
present. In the lab it worked great. In the field it was completely
useless; it never worked.
In the postmortem analysis, someone found that all the pictures of
tankless landscape had been taken when the light was best (daytime),
but the tank pictures were taken when the camouflage was best (dawn
and dusk). Meditate on what the AI's mind was like. :)
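
The failure mode is easy to reproduce in miniature. A toy sketch
(plain numpy, completely synthetic "images", purely illustrative):
when every tank shot is dark and every tankless shot is bright, the
most rewarding thing a classifier can learn is a brightness threshold:

import numpy as np

rng = np.random.default_rng(0)

def fake_images(n, brightness):
    # 8x8 "photos": just noise around a mean brightness level.
    return rng.normal(brightness, 0.1, size=(n, 8, 8))

# Biased training set: tanks only at dusk (dark),
# empty scenes only at noon (bright).
tanks_dusk = fake_images(100, 0.3)
empty_noon = fake_images(100, 0.7)

# "Training": the best single feature is a brightness threshold.
threshold = (tanks_dusk.mean() + empty_noon.mean()) / 2

def predict_tank(img):
    return img.mean() < threshold  # learned rule: dark == tank

lab = [predict_tank(i) for i in tanks_dusk] + \
      [not predict_tank(i) for i in empty_noon]
print("lab accuracy:", np.mean(lab))          # ~1.0

tanks_noon = fake_images(100, 0.7)            # a tank in daylight
print("field accuracy:",
      np.mean([predict_tank(i) for i in tanks_noon]))  # ~0.0

It learned the lighting, not the tank.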

Kind regards,
Sebastian Hoffmann
-- 
"Glücklich zu sein ist oberste Bürgerpflicht."
  -- Paranoia, West End Games
"Oh, look at the time, 1984 already."
  -- Daria (MTV)

