About why it is difficult to create a complex system that balances several
functions, as opposed to a system with just a single function (assuming
the former is important for AGI): to find a reason for this difficulty, I
would like to point in a different direction, namely at the scientific
method itself as it is currently applied.

It is my understanding that contemporary computer science works along a
certain pattern, one that I believe is largely inspired by mathematical
research. A researcher in computer science is expected to write papers that
follow a pattern such as:

"Present a formally well specified hypothesis, prove formally that the
hypothesis is correct."

At this point, many would probably look at me strangely and say "So what?
That is how science is supposed to work, and for good reasons!". But if we
examine this pattern more carefully, we can see that it causes some
problems when it comes to a subject such as AGI.

First of all, if we assume general intelligence to be a vague concept, it is
difficult to formulate any well-specified hypothesis about how it is
supposed to work. There are many reasons to assume general intelligence can
never be anything more than a vague concept; one is complexity. If a concept
is complex enough, its description becomes too lengthy to be consistently
reproduced in scientific communication.

Secondly, if we cannot present a well-formed hypothesis, attempting a formal
proof of its correctness is futile. Even experiments would be of little use,
as there is no well-defined hypothesis to compare against. What about
conclusions such as:

"I designed my system in this specified way, and now after some
experimenting with it, we judge it to be a little bit smarter than our
previous design"

There is no well-specified hypothesis, and there is no formal proof that the
hypothesis is correct. And because such research doesn't fit the standard
pattern of good research, I guess scientists are forced onto narrower paths
where provability comes more easily, such as:

"Using adaption algorithm A is proven by experiments to be 30% faster than
adaptation algorithm B when trying to learn a function F"

I guess this boils down to an age-old conflict within computer science:
whether computer science is about finding eternal facts, as in mathematics,
or about engineering and finding methods that seem to work better, whether
or not they can be proven formally.

I believe one reason computer science is so good at digging holes is that
computer scientists look with envy at mathematicians and want to be like
them. Digging out eternal truths, so true that they would hold in any
logically possible world, seems like a very noble prospect. Since computer
scientists have the freedom to invent the systems they study, it is tempting
to think that much of what they invent has the single purpose of being easy
to state truths about, rather than being useful or meaningful.

I have seen something that I think is an example of this. I don't know how
many hundreds of papers have been written about bottom-up parsing for
compiling programming languages. Judge my surprise when I started working at
my current company and found out that they used nothing but recursive
descent for their massive system. Guess what? It worked lightning fast, and
in fact, recursive descent was necessary for some syntax extension features
their language had; the language had no such thing as "reserved words". So
from my experience I see no use at all for all these papers about bottom-up
parsing, and it makes me wonder why they were written. Was it because the
world really needed bottom-up parsing these last decades, or because it was
easy to build theories on the subject? Maybe there will be protests about
this specific example, but isn't it likely that a lot of the systems
computer science designs are designed so as to make good theories?
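
For readers who have not seen the technique: below is a minimal sketch of a
recursive descent parser in Python, for a toy arithmetic grammar I invented
for illustration. It is emphatically not the parser from my company's
system. The point is only that each grammar rule becomes one ordinary
function you can step through and extend by hand, which is what makes the
technique so convenient for syntax extensions.

import re

# Toy grammar, invented for this example:
#   expr   -> term (('+' | '-') term)*
#   term   -> factor (('*' | '/') factor)*
#   factor -> NUMBER | '(' expr ')'

TOKEN = re.compile(r"\s*(?:(\d+)|(.))")  # integers, or single characters

def tokenize(text):
    tokens = []
    for number, op in TOKEN.findall(text):
        if number:
            tokens.append(("NUM", int(number)))
        elif not op.isspace():
            tokens.append(("OP", op))
    tokens.append(("EOF", None))
    return tokens

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos]

    def eat(self, kind, value=None):
        # Consume the next token, checking it is what the grammar expects.
        tok = self.tokens[self.pos]
        if tok[0] != kind or (value is not None and tok[1] != value):
            raise SyntaxError("expected %s, got %r" % (value or kind, tok))
        self.pos += 1
        return tok

    def expr(self):
        # expr -> term (('+' | '-') term)*
        value = self.term()
        while self.peek() in (("OP", "+"), ("OP", "-")):
            op = self.eat("OP")[1]
            rhs = self.term()
            value = value + rhs if op == "+" else value - rhs
        return value

    def term(self):
        # term -> factor (('*' | '/') factor)*
        value = self.factor()
        while self.peek() in (("OP", "*"), ("OP", "/")):
            op = self.eat("OP")[1]
            rhs = self.factor()
            value = value * rhs if op == "*" else value / rhs
        return value

    def factor(self):
        # factor -> NUMBER | '(' expr ')'
        if self.peek()[0] == "NUM":
            return self.eat("NUM")[1]
        self.eat("OP", "(")
        value = self.expr()
        self.eat("OP", ")")
        return value

print(Parser(tokenize("2 * (3 + 4)")).expr())  # prints 14

Adding a new construct is just a matter of writing one more function and
calling it from the right place, and because a word is only interpreted as a
keyword by the rule function that asks for it, such a parser can get by
without globally reserved words.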

Another example is how the scientific community has moved away from
programming language design. Designing a programming language is similar to
building an AGI in one abstract sense: it is about making a balanced
combination of different pieces of functionality in order to obtain some
vaguely defined improvement, such as readability or structural beauty. That
does not make for good research, in the current understanding of the word.

This is also why I put little faith in the current computer science
community when it comes to vague research such as AGI. Sure enough, some
professors who have reached a certain status might have enough freedom to do
what they want. But PhD students who enter the community will feel a lot of
pressure to produce "solid proven facts" rather than "interesting
speculations" or "things that seem to work better". This forces the
community as a whole down the road towards narrow AI.

Maybe there is hope if computer scientists try to be a little less like
mathematicians and dare to let a little psychological vagueness into their
paper-writing jargon. By that I do not mean to encourage any kind of
Freud-like incoherent crackpot theories, but just the kind of vagueness that
accompanies any complex engineering, like "this system seems to be better
than that system" or "this design seems to benefit a certain capability",
etc. Maybe an increased focus on AGI would encourage such a development.

/Robert Wensman



> These are not clearly separable things.  One of the reasons many
> people do the system synthesis and balanced approximations so badly
> is because they tend to use minor variations of the same function
> representations they would use when playing with those functions in
> isolation.  The assumption that a particular set of functions are
> only expressible as a particular narrow form can frequently make it
> impossible to synthesize a useful system because the selected form
> imposes limits and tradeoffs specific to its form in practice that
> are not required to achieve equivalent function.
>
> A lot of computer science tends to be like this in practice (e.g. the
> ever ubiquitous balanced tree).
>
> Cheers,
>
> J. Andrew Rogers
>
