Jarrad,
As a self-declared novice in the field, you don't seem to understand
something that should be a health warning on every "AGI curriculum" -
NOTHING WORKS.
Nothing in what is called the field of AGI has ever shown the slightest
promise, or produced the slightest results that would justify calling it
"real AGI." Nothing. Nothing can even provide an empirical *argument* as to
how it might work.
No current system or method or program can perceive the world, or generalize
or create, let alone understand language. Strong AI has 50 years of
consistent, total failure ... and consistently failed predictions.
So I'm speculating about what can't work and what may work. And that
requires *new* ideas not old ones.
One small aspect of what may work is that humans really do think in, and
solve problems by using, truly vague, waffly terms like "pretty likely" or
"let's do *something* here" - without knowing or caring precisely what they
mean.
No current approach envisages such thinking, even though linguistics shows
that "vague language" abounds in normal human language use. And vague
language is actually useful, because it enables great flexibility and
freedom of thought.
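(Purely as an illustration of what "computing with vague terms" might even mean - this is my own toy sketch, not anything Mike or any existing system proposes, and the term table, `conjoin`, and `describe` are invented for the example - one naive option is to treat vague quantifiers as probability *intervals* rather than point values, so reasoning can proceed without anyone committing to a precise number:)

```python
# Toy sketch: vague probability terms as intervals (lower, upper bound).
# The terms and bounds below are arbitrary illustrative assumptions.
VAGUE_TERMS = {
    "almost certain": (0.90, 0.99),
    "pretty likely":  (0.65, 0.85),
    "maybe":          (0.35, 0.65),
    "unlikely":       (0.10, 0.30),
}

def conjoin(term_a, term_b):
    """Rough bounds for 'A and B', treating A and B as independent:
    multiply the lower bounds and the upper bounds."""
    lo_a, hi_a = VAGUE_TERMS[term_a]
    lo_b, hi_b = VAGUE_TERMS[term_b]
    return (lo_a * lo_b, hi_a * hi_b)

def describe(interval):
    """Map an interval back to the vague term whose midpoint is nearest."""
    mid = sum(interval) / 2
    return min(VAGUE_TERMS,
               key=lambda t: abs(sum(VAGUE_TERMS[t]) / 2 - mid))

# Two "pretty likely" steps in a row drop the conjunction down to "maybe" -
# the system never needed an exact probability at any point.
print(describe(conjoin("pretty likely", "pretty likely")))
```

Note this is only a gesture at the idea: the interesting (and unsolved) part is a system that can *act* on "maybe" without ever resolving it to a number, which nothing above does.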
Similarly, human creative problem-solving is clearly non-algorithmic - it
involves *projects* as opposed to *processes.* You start a creative project
(including every AGI project) with an "idea" or two, not an algorithmic,
fully-finished plan of action - you have to take it one step at a time,
start somewhere, and then, when you've taken a step or two, see where you've
got and think about what you're going to do next. That, crudely, is
irrefutably the reality of how creative projects are pursued. No one can or
will show any algorithmic structure to the billions of creative projects, in
all fields, that humans (and animals) undertake.
So, in general, I'm thinking about how computers/robots could be made to
think similarly - which is the opposite of Turing machines doing fully
pre-planned and fully informed computations.
You're in the very unusual position of starting in a field which has no
substantive foundations whatsoever (and is crying out for *new* ideas).
And people who do not tell you that upfront, and every day, are
irresponsibly and immorally leading you astray.
--------------------------------------------------
From: "Jarrad Hope" <[email protected]>
Sent: Friday, June 08, 2012 10:41 AM
To: "AGI" <[email protected]>
Subject: Re: [agi] The Visual Alphabet
Sorry, I don't follow you...
Can you link to some papers that go into detail about "quick
*shape*-matching and guesswork" - or explain to me how this works
exactly?
Furthermore, you state that your shape matching is going to match water
(you even use a probability of "pretty likely", which negates pretty
much everything you've said so far) - I could get water and urine
making similar shapes, and, with more difficulty, semen too, so how
do your quick shape-matching and guesswork algorithms choose between
them?
I feel like you're just trolling now.
On Fri, Jun 8, 2012 at 5:12 PM, Mike Tintner <[email protected]>
wrote:
P.S. When I say "quick SHAPE-matching..." - I think that is the *main*
approach, but the brain is obviously sensitive to other dimensions like
colour and texture, etc. Come to think of it, evolution probably shows
rough shape-matching to be prior in history, and the other dimensions of
sophisticated human and higher animal vision to come later, no?
-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed:
https://www.listbox.com/member/archive/rss/303/22581513-9fe46a1c
Modify Your Subscription:
https://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com