Sergio Pissanetzky wrote:
> I'll use a camera instead of the retina. When light hits a pixel in that
> camera, an electric signal is produced and travels to the brain, I mean
> the computer. That's it, that's the causal relation, light + pixel (and
> the pixel has a position, which is how spatial information gets encoded)
> cause signal. Multiply that by 1 million pixels, and you have a big
> causal set. From the signals alone, you can't tell that the camera is
> looking at your mother's face (in Hofstadter's words). But if you
> display the signals on a screen, your brain will immediately recognize
> the image. That's EI. I did it on a small scale on my PC, and I now
> want to do it on a larger scale.

Do you have ANY idea how cameras work?

For every pixel, for every scan interval, the sensor will be affected by
tens of thousands to millions of photons...

What you get is a number, typically between 0 and 255. We can abstract
that to a floating-point value between 0 and 1, where 0 is almost no
light and 1 is sensor saturation.

You are given a matrix of these. We will assume perfect pixels that are
vertically aligned and that there are no sensor artifacts.
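A minimal sketch of that abstraction, with made-up example values (the
3x3 frame and the variable names are mine, purely for illustration):

```python
# Each pixel yields an 8-bit reading (0-255); normalize to a float in
# [0, 1], where 0 is almost no light and 1 is sensor saturation.
# The 3x3 raw_frame values below are invented for illustration.

SATURATION = 255  # maximum 8-bit sensor value

raw_frame = [
    [0, 128, 255],
    [64, 192, 32],
    [255, 0, 16],
]

# Normalize every reading in the matrix.
normalized = [[value / SATURATION for value in row] for row in raw_frame]

for row in normalized:
    print(["%.3f" % v for v in row])
```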

Your AI must find and encode the simplest possible theory of what
objects must exist out in the world to have excited the sensors on your
camera in such a way.

That is visual perception.
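A toy sketch of that "simplest theory" criterion (my own illustration,
not anyone's actual method): score each candidate model of the world by
its misfit to the sensor data plus a penalty per parameter, a crude
minimum-description-length flavor of Occam's razor. The scanline
values, candidate models, and the trade-off constant are all invented.

```python
# Made-up 1D scanline of normalized sensor readings.
readings = [0.1, 0.1, 0.1, 0.9, 0.9, 0.9]

def misfit(model, data):
    """Sum of squared differences between model prediction and data."""
    return sum((m - d) ** 2 for m, d in zip(model, data))

# Candidate "theories" of what is out in the world, each with a
# parameter count: (predicted readings, number of parameters).
candidates = {
    "uniform surface": ([0.5] * 6, 1),              # one brightness level
    "edge at pixel 3": ([0.1] * 3 + [0.9] * 3, 3),  # position + two levels
}

LAMBDA = 0.1  # arbitrary cost per parameter (complexity penalty)

def cost(name):
    model, n_params = candidates[name]
    return misfit(model, readings) + LAMBDA * n_params

best = min(candidates, key=cost)
print(best)  # the edge model fits perfectly and wins despite more params
```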

I'm getting sick and tired of reading this ignorant crap from you. I
expect it from the list clown, but you should be smarter than this;
that's why I'm so disappointed in you. =(

-- 
E T F
N H E
D E D

Powers are not rights.


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now