'ello,
Yes, my previous post seemed a bit too whimsical to respond to, but the
problem I'm working on is rather important.
Our beady little eyeballs ( . . ) actually rather suck by all
accounts. They're almost OK in the fovea, where the cones aren't as
obscured by ganglion cells and blood vessels. Everywhere else, they're
The Suck.
Why is it that normal and successfully corrected vision looks so much
like ultra high-definition TV?
The only possible answer is that we have a serious amount of
interpolation going on. I tend to discuss it in terms of the
imagination: that, quite literally, the imagination IS the engine of
perception.
But that leaves us with an even more problematic conundrum....
How is it that the brain can internalize almost any scene, almost
perfectly, almost immediately, to the point where it can generate and
overlay the perception-imagination thingy it does without anyone
noticing? There are definitely artifacts of this process, like people
failing to even perceive things they can't understand, but we can
treat it as if it works Pretty Well(tm), for the sake of discussion
at least.
My own (pathetic) attempts at image processing have resulted in
pronounced artifacts. Any experience with digital video will show that
the "compression artifacts" are quite plain. Yet the artifacts in human
perception are extremely obscure. Part of this can be written off as an
inherent blindness of the mechanisms of perception to their own flaws,
but only part.
If we propose that perception <==> imagination, then we have to propose
a mechanism of imagination that is equal to the task of showing us
everything that we have ever seen. Raster graphics are obviously flawed;
raytracing is an interesting approach, but it is still not satisfying
because it doesn't answer the question of how we deal with figures drawn
on paper, for example.
On top of that, there are the other four modalities and many
sub-modalities that would seem to demand a general solution; we are
after AGI, after all...
My current state of mind is that it might be profitable to look at all
stimuli as a massive n-dimensional flood of bits, especially in the
temporal dimension. Let's say we crank the number of bits all the way up
to eleven, well past all previous imagining: let's say we process an
image not as discrete color values but as a sea of bits, dithered all
the way down to monochrome but at a resolution ten thousand times
greater. You wouldn't actually compute all of those bits; rather, you
would get yourself some way to obtain the value of any bit you cared
about, or an average value over any arbitrary sub-region. The point is
to try to access and analyze the source image. It is imperative to get
past any trace of pixelation and null out all artifacts of the device
you're using.
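
Here is a minimal sketch, in Python, of what I mean by lazy access to a
sea of bits, assuming a greyscale source image held as a NumPy float
array in [0, 1]. The names (intensity_at, bit_at, mean_over) and the
random-threshold dithering are just illustrative assumptions on my part;
the point is that no pixel grid is ever materialized and you only query
the bits you care about.

# Sketch only: lazy, on-demand "bits" over a continuous view of an image.
import numpy as np

def intensity_at(img, x, y):
    """Bilinearly interpolated intensity at continuous (x, y) in [0, 1]^2.

    Coordinates are normalized across the image, so the virtual
    resolution is unbounded; nothing is precomputed.
    """
    h, w = img.shape
    fx, fy = x * (w - 1), y * (h - 1)
    x0, y0 = int(fx), int(fy)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    dx, dy = fx - x0, fy - y0
    top = img[y0, x0] * (1 - dx) + img[y0, x1] * dx
    bot = img[y1, x0] * (1 - dx) + img[y1, x1] * dx
    return top * (1 - dy) + bot * dy

def bit_at(img, x, y, seed=0):
    """One monochrome 'bit' at (x, y): intensity vs. a pseudo-random threshold."""
    rng = np.random.default_rng(hash((round(x * 1e6), round(y * 1e6), seed)) & 0xFFFFFFFF)
    return int(intensity_at(img, x, y) > rng.random())

def mean_over(img, x0, y0, x1, y1, samples=1000, seed=0):
    """Estimate the average bit value over an arbitrary sub-region by sampling."""
    rng = np.random.default_rng(seed)
    xs = rng.uniform(x0, x1, samples)
    ys = rng.uniform(y0, y1, samples)
    return float(np.mean([bit_at(img, x, y, seed) for x, y in zip(xs, ys)]))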
What this would do is reduce all perception (and action too, even) to
a DSP problem, and imagination would become a problem of finding and
buffering the components of the signal... which is equivalent to, but
possibly more refined than, what I've been saying all along.
http://www.scholarpedia.org/article/Neuronal_Code
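
To make the "find and buffer the components of the signal" part a
little more concrete, here is a rough sketch, again in Python with
NumPy, of one possible reading: keep only the strongest Fourier
components of a 1-D stimulus stream and resynthesize the signal from
them. The FFT and the component count are illustrative assumptions of
mine, not a claim about what the brain actually does.

# Sketch only: "buffer the components, then resynthesize" as a DSP toy.
import numpy as np

def buffer_components(signal, n_keep=8):
    """Return a spectrum containing only the n_keep strongest components."""
    spectrum = np.fft.rfft(signal)
    strongest = np.argsort(np.abs(spectrum))[-n_keep:]
    kept = np.zeros_like(spectrum)
    kept[strongest] = spectrum[strongest]
    return kept

def resynthesize(components, length):
    """Regenerate ('imagine') the signal from the buffered components."""
    return np.fft.irfft(components, n=length)

# Example: a noisy stimulus, reduced to a handful of components and replayed.
t = np.linspace(0, 1, 512, endpoint=False)
stimulus = (np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 13 * t)
            + 0.2 * np.random.default_rng(0).normal(size=t.size))
replay = resynthesize(buffer_components(stimulus), stimulus.size)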
It's much more likely, however, that I'm just an idiot who isn't worth
discussing this issue with...
--
E T F
N H E
D E D
Powers are not rights.