Mike, Steve,

You've misinterpreted my point; I had missed some of the earlier messages,
too:

Steve> The world has FAR more than a million "monkeys" typing on their
> keyboards, to stumble onto inductive solutions to every problem. The
> challenge is NOT in finding the solutions to the world's problems.
> Instead, the challenge is in getting enough "traction" to even have a
> decision maker see and recognize a solution when one is proposed.
> What we REALLY need is an "AGI recognizer" that recognizes and catalogs
> prospective solutions from the Internet. This would do FAR more than any
> suggested AGI.

Todor: What world problems...

*Not About Politics...*

The human factors involved in masses of people are fairly stochastic. If a
solution depends on the agreement of certain big VIPs - the president of
..., the King, the prime minister, the director of the TV - or if you have
to persuade a particular audience to "buy a product", and those people have
a given average intelligence, cultural background, needs etc., then that's
not AGI in the above sense. That's sociology, and more a matter of
politics, propaganda and marketing (which is itself a sort of "politics"
and propaganda). Not that the creative AGI I'm talking about wouldn't help
here as well - to analyze the situation faster and write the papers or
propose solutions. But the *ACCEPTANCE* of the suggestions is out of the
machine's/AGI's control; if that fails, you should blame the people.

Such results can be inconsistent and random, no matter the intelligence.
The immediate AGI goals I'm talking about are in fields and applications
where intelligence and creativity are independent and give consistent
results, and whose success doesn't depend on tricking, manipulating or
exploiting people, being corrupt, stealing, or abusing power and position
etc. - "qualities" found in many humans.

...

You probably haven't read a satirical short story by the Bulgarian SF
writer Lyuben Dilov, in which a peaceful alien civilization meets humanity
and leaves an ambassador on Earth. He is supposed to educate people and
transfer more advanced technologies to them. One of the proposed
technologies was an AGI - a wise machine meant to take over political
decisions and let people live in peace: no nuclear wars, no wars at all -
peace and love.

However, what happened? Well, humans didn't want it. The politicians who
hold those positions would never agree to adopt such a technology - of
course, they would justify it by citing scenarios like the one you talk
about, or perhaps "The Terminator"...

[[[ Note: Without understanding the real message - see my previous one -
humans made Skynet a military computer; humans are threatening each other
with nuclear rockets. If humans were peaceful, if they lived in
brotherhood, they wouldn't have used nuclear weapons or needed a military
at all, and Skynet wouldn't have been able to launch those rockets. If
humans hadn't connected it to the military computers in order to gain a
strategic advantage over their enemy - the USSR - the machine wouldn't
have started the war. The same goes for the "WarGames" movies, and for
"2001: A Space Odyssey", where the "bad" thinking machines are actually
doing what humans taught them is appropriate for those situations; humans
put them in that situation, and then point out: "look, AI is dangerous!"

And something else about all those catastrophic stories - there's a stupid
bias towards disasters. Humans - or maybe the authors, and also the media -
believe that humans want to watch disasters, destruction, death, crimes
etc. All "bad stuff" - that's sick.

They are actually exploiting the primitive fears of the average human; it
seems that adrenaline and those kinds of emotions are a stronger driver.

Also, generally a movie or a story must have a dramatic setting: there must
be a protagonist and an antagonist, some kind of battle etc., otherwise
it's not interesting.
]]]

[Back to the short story]

People didn't want peace and brotherhood between nations, however.
Politicians wanted to stay on top themselves: to shuttle around the world
as VIPs, go to UN conferences, run their corruption schemes - which spread
from the local political organizations up to the highest representatives.
If an AGI made the decisions, all that hierarchy would become useless, and
all those people who are working "for the good of the people" would lose
their privileges - from the minor activists in each block, to the
presidents.

Humans didn't adopt the medical robots either. They could have stood on
every corner and healed people for free (if I remember correctly) - but if
they had been adopted, what would the medics and the nurses do? The
syndicates would have ruled it out! :)

The only invention that humans adopted and found useful was the micro-book,
which allowed a book to fit into a bead-sized grain...

Sadly, the poor alien ambassador became... alienated, fell into depression
and committed suicide - because he had failed to serve his purpose of
advancing human technology; his intellectual and technological superiority
was useless.

...

You should also watch the movie "Idiocracy".

Mike> And real world intelligence is *embodied, *massively condensed,
> *imaginative (whereas symbols are merely labels on imaginative boxes) and
> *concept/idea-based.
> All these things:
> *creative
> *embodied
> *imaginative
> *condensed
> *concept/idea-based

Todor:

Mike, will you ever realize that I agree with that, and that my approach is
like that?

I am a universal artist: I enjoy practising and improving my skills and
knowledge in all kinds of arts, and in all kinds of sciences and
technologies - hard sciences, soft sciences, technical fields, whatever -
and I improvise all the time.

"All" means involving all kinds of sensory and motor modalities, all kinds
of resolutions/abstractions, from specialized tasks and precise ones (such
as juggling or programming, the latter is abstract and concrete in the same
time) to abstract and broad such as interdisciplinary socio-linguistics,
sociology, national psychology, philosophy of mind,
neuroscience-to-philosophy-to-psycholinguistics-... Whatever.

Mike> This represents a field-wide massive failure to understand the
> nature of intelligence. I repeat: nobody in the foreseeable future is
> going to produce a brain-in-the-box that can be *creative about
> *anything* - the simplest thing, like what do you do if one shoelace is
> torn? How can you put toy blocks on top of each other higgledy-piggledy,
> as an infant does, as opposed to brick wall style? Let alone any
> political, psychological, medical etc etc

Todor:

I remember that we discussed the toy bricks and I gave you scenarios for
how a system could do it. Your notion of "creative" is not well defined -
what exactly is hard or creative about all that? If the system has the
senses and memories that you/a human had in such circumstances, it would
choose something appropriate. What can you do with a shoelace? Or what can
you do with blocks? If you have hands that can pick, rotate, translate
etc., and if you want your shoes not to fall off your feet, then depending
on the environment (whether you have new shoelaces around, or other shoes,
etc.) you'll synthesize and choose one of those options, or you'll just
recall something you've done before and re-apply it.

In fact, humans' very short memory and poor introspection make their own
activities look "magical" to themselves. You cannot explain to yourself how
your imagination is driven, or why you did what you did, and generalize it.

To make a contrast: anyone who can draw decently can also do 3D
reconstruction from a single picture by hand. If you can draw in
perspective, the recovered 3D coordinates are obvious to you; you just need
to input them somehow into a system that can render them, or you can do the
transformations mentally and render the result yourself on paper.

If you also do 3D graphics and programming the mathematical way, there's no
reason not to automate this process. It's a matter of engineering. All the
data that is needed is obvious - it's in the picture - and by generalizing
the border cases (of which there are not many) you may even make it learn
from examples.
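To make the geometric core concrete, here is a minimal sketch of my own (an illustration, not any existing system): under a pinhole camera model, an image point with a known or hypothesized depth back-projects to a unique 3D point. The focal length and principal point values are made up for the example.

```python
import numpy as np

def backproject(u, v, depth, f, cx, cy):
    """Back-project image pixel (u, v) to camera-space 3D coordinates,
    assuming a pinhole camera with focal length f (in pixels), principal
    point (cx, cy), and a known depth along the optical axis."""
    x = (u - cx) * depth / f
    y = (v - cy) * depth / f
    return np.array([x, y, depth])

# A point at the principal point lies on the optical axis:
p = backproject(320.0, 240.0, 5.0, f=500.0, cx=320.0, cy=240.0)
# p -> [0.0, 0.0, 5.0]
```

The hard part, of course, is supplying the depth - or an equivalent constraint such as a ground plane or symmetry - which is exactly the knowledge someone who draws in perspective applies implicitly.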

My implementation in code still awaits the completion of the rest of the
general cognitive infrastructure, so that it becomes self-coded; but in a
non-general framework, for the sake of this specific problem, it's actually
solved in practical systems - for example this one, though it still
requires some manual selection:

3-Sweep: Extracting Editable Objects from a Single Photo, SIGGRAPH
http://youtu.be/Oie1ZXWceqM

Regarding the "all kinds of problems" and this list context, I mean for
example all kinds that we've discussed here and you've given them as
challenges. For example that includes the basic tasks:

- Recognition and synthesis of buildings, chairs, caricatures; the skill is
  developed through learning, play and observation. That involves:
   -- 3D structure and light reconstruction, possible even from a single
      picture, which also uses prior experience to fill in missing parts
   -- Generalization and concept forming; in this context it's about
      sensory and motor resolution variation, and clustering into classes
      of minimal structures (I explained this in the emails back then -
      the minimum models)
   -- Building towers from toy bricks :), playing with Lego and other 3D
      puzzles
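As a toy illustration of the "clustering into classes" step - a generic sketch with made-up data, not the actual minimum-model method - grouping simple shape descriptors with plain k-means already separates size classes:

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Plain k-means: cluster feature vectors into k classes.
    Deterministic init: centers spread over the input order."""
    centers = points[np.linspace(0, len(points) - 1, k, dtype=int)].copy()
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# Toy descriptors: (width, height) of observed toy blocks.
blocks = np.array([[1.0, 1.1], [0.9, 1.0], [1.1, 0.9],
                   [3.0, 3.2], [2.9, 3.1], [3.1, 2.9]])
labels, centers = kmeans(blocks, k=2)  # two size classes emerge
```

In a real system the descriptors would come from the reconstruction step above, and the number and granularity of classes would themselves be learned rather than fixed.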

-- 
=== Todor "Tosh" Arnaudov ===

.... Twenkid Research: http://research.twenkid.com

.... Author of the world's first University courses in AGI (2010, 2011):
http://artificial-mind.blogspot.com/2010/04/universal-artificial-intelligence.html

.... Todor Arnaudov's Research Blog:
http://artificial-mind.blogspot.com



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now