> "You would require visual intelligence to build these nanobots."

Not necessarily visual, but spatial. They are not synonymous.

> "It is impossible to bootstrap perceptual grounding from a purely symbolic
AGI. It does not know how to build 3D robots."

Ah-ah-ah... be careful here. Remember the long discussion that transpired a
few months back? If you go down far enough, pretty much anything can be
represented with a symbol. The fact that a word's meaning is relative is
often neglected. My entire design rests on this principle: as long as your
symbolic ontology permits the representation of data in a way that is
CONSISTENT with how spatial data should be treated, you're set. Since my
design hinges on this, I am obviously biased towards this line of thinking.
Though I can't prove it is possible this early in my research, you surely
can't prove it isn't.
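To make the point concrete: here's a toy sketch (emphatically not my actual KR language, just a generic illustration) of spatial facts living as plain symbolic triples, with an inference rule that keeps the ontology consistent with how the spatial relation actually behaves. The relation name `left_of` and the object names are made up for the example.

```python
# Toy illustration: spatial facts as symbolic triples, plus a rule
# (transitivity of "left_of") that makes the symbol system behave
# consistently with the spatial relation it denotes.

facts = {("cup", "left_of", "plate"), ("plate", "left_of", "fork")}

def close_left_of(kb):
    """Saturate the knowledge base under transitivity of left_of."""
    kb = set(kb)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(kb):
            for (c, r2, d) in list(kb):
                if r1 == r2 == "left_of" and b == c \
                        and (a, "left_of", d) not in kb:
                    kb.add((a, "left_of", d))  # derive a new spatial fact
                    changed = True
    return kb

kb = close_left_of(facts)
print(("cup", "left_of", "fork") in kb)  # derived purely symbolically
```

Nothing in there is "visual" -- the spatial content is carried entirely by symbols plus the rules the ontology imposes on them.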

The KR language I've been working on for the past couple of years is built
around the ability to encapsulate spatial knowledge, with everything else as
a side dish... so I find that statement of yours especially unfounded. I
have yet to develop a working POC, but this particular problem is what I
spent the majority of my high school days working on.

> "Purely symbolic ontologies can produce unsatisfying results."

Generalizations... yum. Remember, you can take symbols as far down the
rabbit hole as you fancy.

- Regards, Joe

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com
