So the prophets scored One:

https://web.archive.org/web/20160205202950/http://www.futuretech.ox.ac.uk/sites/futuretech.ox.ac.uk/files/The_Future_of_Employment_OMS_Working_Paper_1.pdf


On 20.06.2019 17:38, Stefan Reich via AGI wrote:
Prednet develops consciousness?

On Wed, Jun 19, 2019, 06:51 Alan Grimes via AGI <[email protected]> wrote:

Yay, it seems peeps are finally ready to talk about this!! =P

Let's see if I can fool anyone into thinking I'm actually making sense by
starting with a first-principles approach...

To a first approximation, yer computer's memory is just a giant string of
characters. Since everything you have ever done on your computer has been
encoded into this string, we can be reasonably confident that this is a
universal way to encode anything, so we go ahead and say that all the
modalities (sight, sound, etc.) are mapped to this memory and continue on.
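To make that concrete, here's a toy sketch (entirely my own; the post gives no code) of how two different modalities flatten into the same byte string that makes up main memory:

```python
# Toy illustration: any modality can be serialized into the one big
# character string that is main memory. All names here are my own.
import struct

def encode_audio(samples):
    # 16-bit little-endian packing of a list of integer samples
    return b"".join(struct.pack("<h", s) for s in samples)

def encode_image(pixels):
    # grayscale pixels 0-255, one byte each
    return bytes(pixels)

memory = encode_audio([0, 1000, -1000]) + encode_image([255, 0, 128])
print(len(memory))  # 9 bytes: 6 for the audio, 3 for the image
```

The point is only that both encoders end in the same representation, so anything said about operations on "the string" applies to every modality at once.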

[I'm maximally overtired right now; I didn't really get any sleep last
night. I should wait until at least tomorrow to write this, but I want to
get a head start...]

If you prefer, you can think in terms of a multi-tape Turing machine, where
the program and working variables are stored on other tapes and the mind
file is in main memory. The point is that we presume there exists a
computation that does intelligence using the main memory, which we will say
is the complete state of the mind.

So what we are looking for is a set of principles and basic operations
that, when applied to the main memory, make intelligent behavior and
subjective experience emerge.

[next evening]

The most important thing to realize is that the data is NOT
RANDOM/stochastic/whatever, it is STRUCTURED. It has patterns, it has
types, it may have meaning. This is true of all modalities.

Therefore, in order to create a mind, we need the system to be able to
capture that structure and use it in computations. Let's say we had some
identified structure in the data with modest but non-trivial K-complexity.
Replacing that data with the K-complexity description, even if it were only
a "good enough" approximation, would satisfy most definitions of
understanding that structure, and creating some way to point to that
description and incorporate it in other descriptions would satisfy the
definition of abstraction.
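True K-complexity is uncomputable, but the property the argument relies on can be illustrated with an ordinary compressor standing in as a computable upper bound (my substitution, not the post's):

```python
# Hedged sketch: a general-purpose compressor as a computable stand-in for
# Kolmogorov complexity. Structured data admits a much shorter
# "description" than random data, which is the property the paragraph uses.
import os
import zlib

structured = b"abcd" * 256        # 1024 bytes with an obvious pattern
random_ish = os.urandom(1024)     # 1024 incompressible-looking bytes

c_structured = len(zlib.compress(structured, 9))
c_random = len(zlib.compress(random_ish, 9))

print(c_structured, c_random)
assert c_structured < c_random    # the structured description is far shorter
```

Replacing the 1024 structured bytes with their few-dozen-byte compressed description is, in miniature, the "understanding" move described above.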

I hope you are all boned up on your study of non-standard computational
models. What I'm getting at is that the expression of the K-complexity is a
structured object just like the structured object it expands to. So we can
treat it the same way and approximate it using the same basic functions we
use to manipulate abstractions.

So if you had a string/object that basically meant "make big" and another
that expressed a shape, then you only need a way to apply one to the other
to make a big shape. It is important that things are done this way because
it is how general intelligence is achieved: it allows the operation, as
well as the object, to be learned by essentially the same mechanism.
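A minimal sketch of that idea, with all names invented by me: the operation and the object are both plain data structures, and one generic function applies one to the other, so the same machinery could in principle learn either.

```python
# Both the object (a shape) and the operation ("make big") are ordinary
# structured data; apply_op is the only piece of fixed machinery.
shape = {"kind": "square", "size": 1}

# the operation is itself just a structured object: a rule plus a parameter
make_big = {"op": "scale", "factor": 3}

def apply_op(op, obj):
    if op["op"] == "scale":
        return {**obj, "size": obj["size"] * op["factor"]}
    return obj

big_shape = apply_op(make_big, shape)
print(big_shape)  # {'kind': 'square', 'size': 3}
```

Because `make_big` is data rather than code, whatever mechanism learns shapes can, in principle, learn operations too.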

Now, let me shift gears here and go into a thread I'd been meaning to start
for the past week or so. DEEP LEARNING IS DEAD! If we don't have a viable
replacement for it in the next 18 months or so, we're at risk of another AI
winter. =0 There are certainly some good and useful machine learning
algorithms out there, some of which are just rediscoveries of techniques
from ages past, but that's OK too.

When you get down to it, DL is the realization of ideas dating back a very
long time, only made feasible by gpGPU techniques; see Nvidia's website for
a recounting of the history.

[few days later]

Have a look at this and count up the number of problems this field of
study has from an AGI perspective:

https://scholar.google.com/scholar?q=neural+architecture+search&hl=en&as_sdt=0&as_vis=1&oi=scholart

Neural nets are brittle in that you can only select their topology when you
are creating them. (IIRC)
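A hedged toy example of that brittleness (a hand-rolled MLP of my own, not any particular library's API): the layer widths are frozen at construction, so a differently sized input simply cannot be processed without rebuilding and retraining the whole network.

```python
# Minimal feed-forward net whose topology (layer widths) is baked in at
# construction time, illustrating the fixed-topology claim.
import numpy as np

rng = np.random.default_rng(0)

class FixedMLP:
    def __init__(self, sizes):  # topology is chosen once, right here
        self.weights = [rng.standard_normal((m, n))
                        for m, n in zip(sizes[:-1], sizes[1:])]

    def forward(self, x):
        for w in self.weights:
            x = np.tanh(x @ w)
        return x

net = FixedMLP([4, 8, 2])
print(net.forward(np.ones(4)).shape)   # (2,)

try:
    net.forward(np.ones(5))            # wrong input width
except ValueError:
    print("topology is fixed: a 5-wide input cannot be processed")
```

The brain's plasticity, as described next, is exactly what this construction lacks.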

The brain, by contrast, is quite infamous for exhibiting plasticity in the
face of damage and/or changing requirements. For example, London taxi
drivers develop an enlarged hippocampus to help deal with the maze-like
roads of that city.

https://www.youtube.com/watch?v=UoJf_tXU2Zk

Ok, that video was amusing and little else. ;)

Also, consider what a "golden network" would look like. A golden network is
a network that, like a universal Turing machine, exhibits general AI
instead of some special AI.

The bottom line is that while we have come a long way since 2012, we are
reaching the top of the S-curve with respect to deep learning. Once the RoI
falls below some critical level, we will enter an AI winter. =|

We do not have time for another winter.

Deep learning is also hitting a computational complexity wall, but I
couldn't find good references for that, so I left it out of this post.

Here's something else to think about.

Consider the near-photorealistic graphics that you got with Assassin's
Creed on the PlayStation 3.

Consider the amount of power people are throwing at autonomous driving and
at just recognizing the general outlines of visual scenes.

One of the reasons I flog PredNet so much,

https://coxlab.github.io/prednet/

is not just because it yields consciousness, but because it reduces
perception to something a PS3 could do.

You render what you think you are seeing, the LGN of the thalamus computes
the error signal, and then you spend your expensive, complex computations
on just that error signal!
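A tiny numpy sketch of that loop as I read it (my illustration of the predictive-coding idea, not PredNet's actual code): render a cheap prediction, subtract it from the actual input, and spend further computation only where the residual error is large.

```python
# Predictive-coding toy: prediction minus reality yields a sparse error
# signal, and only the "surprising" pixels need expensive processing.
import numpy as np

actual = np.zeros((8, 8))
actual[3:5, 3:5] = 1.0         # something unexpected appears here

predicted = np.zeros((8, 8))   # the rendered expectation: an empty scene

error = actual - predicted     # the error signal (cf. the LGN in the post)
surprising = np.argwhere(np.abs(error) > 0.5)

print(len(surprising))         # only 4 of 64 pixels need a second look
```

When the prediction is good, the error image is nearly empty, which is exactly the computational saving being claimed.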

[midnight a day or two later, just jamming random related topics into
the post to make myself look smart...]

Here's another fascinating area of study:
https://scholar.google.com/scholar?q=explainable+neural+networks&hl=en&as_sdt=0&as_vis=1&oi=scholart

To me, it's just proof that deep learning, despite what can be accomplished
with it, is NOT a form of intelligence. When you think about it, it doesn't
learn in any true sense, in that it cannot recite facts or recall events;
all it can do, over many thousands of iterations, is *adapt*.

The result is a system that essentially operates as mindlessly as a reflex;
the apparent complexity of the behavior does not change that fact.

Going to a recurrent model does give the network a legit memory, after a
fashion. Still, the prospects for further progress are dim.

[next night around midnight again]

Consider, by contrast, a system that works by analyzing structure. There is
no reason why it can't simply continually fill its memory: because each
thing is recorded as a representation of its structure, the storage
requirement for even millennia of ordinary life experience is not at all
implausible.

Ok, the magic sauce here is an ability to organize and catalog
representations. I think there exists a geometric technique that can yield
pattern recognition. This would involve a way to normalize the input and
then search the database, with the transformation used to achieve the
normalization itself being a relevant structure that is analyzed in the
same manner.
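Here's one way to sketch that normalize-then-search scheme; every name, and the choice of zero-mean/unit-scale as the canonical form, is my own assumption:

```python
# Normalize the input to a canonical form, search a database of known
# patterns, and keep the normalizing transform itself as a learned object.
import numpy as np

def normalize(x):
    # canonical form: zero mean, unit scale; also return the transform
    mu, sigma = x.mean(), x.std()
    return (x - mu) / sigma, (mu, sigma)

database = {"ramp": normalize(np.array([0.0, 1.0, 2.0, 3.0]))[0]}

query = np.array([10.0, 20.0, 30.0, 40.0])   # same pattern, other units
canonical, transform = normalize(query)

best = min(database, key=lambda k: np.linalg.norm(database[k] - canonical))
print(best, transform)  # matches 'ramp'; transform is the (mean, scale)
```

The returned `transform` pair is the part the paragraph says should itself be cataloged and analyzed like any other structure.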

Other tricks, such as the space-time rotation I've described a number of
times, are also important techniques for detecting spatial patterns. I
think that approach can be much more powerful than "convolution", which is
just scanning a larger matrix with a smaller one... I'm talking about
taking the visual scene and turning a scan of it into a temporal signal,
like a sound. Just as a speech recognizer doesn't care when the speaking
actually begins, spatial signals treated this way can work the same way.
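A hedged reading of that in code (my interpretation of the "space-time rotation", not the author's exact method): raster-scan the scene into a 1-D signal, then match with cross-correlation, which finds the pattern wherever the scan encounters it.

```python
# Turn a 2-D scene into a 1-D "temporal" signal by raster scan, then use
# cross-correlation for a shift-invariant match, like speech recognition.
import numpy as np

scene = np.zeros((4, 8))
scene[2, 3:6] = [1.0, 2.0, 1.0]        # a small spatial pattern

signal = scene.ravel()                 # the scan, now a 1-D signal
template = np.array([1.0, 2.0, 1.0])

scores = np.correlate(signal, template, mode="valid")
print(int(scores.argmax()))  # 19 == row 2 * width 8 + column 3
```

The best-matching offset falls out of the correlation no matter where in the scan the pattern sits, which is the when-it-begins invariance being borrowed from speech.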

Anyway, what was this post supposed to be about again? bleh ->
hitting
send.

--
Please report bounces from this address [email protected]

Powers are not rights.

*Artificial General Intelligence List
<https://agi.topicbox.com/latest>* / AGI / see discussions
<https://agi.topicbox.com/groups/agi> + participants
<https://agi.topicbox.com/groups/agi/members> + delivery options
<https://agi.topicbox.com/groups/agi/subscription> Permalink
<https://agi.topicbox.com/groups/agi/T395236743964cb4b-M686d9fcf7662ad8dc2fc1130>
