An Open Letter to the AGI list

Matt, firstly: well said.

Thanks for that perspective. To add, I would like to light a candle for 
theoretical researchers who are also designers. This is as opposed to 
researchers who jump straight into coding modular tests, or designers who do 
purely academic work and publish with the help of universities and other 
sponsors. In other words, independent researcher/designers such as myself and a 
few others here. In general, we find ourselves out on an intellectual-property 
limb.

In my view, contributors on the list tend to focus either on vague definitions 
or on hard experimentation, albeit toward niche development of one version of 
AGI. And so often, one sees the "argument" either collapse, bog down, or go 
full circle to a re-examination of the term AGI. In retail services, this might 
even be referred to as "spinning".

I would caution any established researcher against discounting, or seeming to 
minimize, the research of others. That is not part of the constructivist 
research approach, which is what I hope we are all trying to achieve on this 
list. After all, no one here has found THE answer to an AGI architecture yet. I 
stand open to correction.

This is where holistic researchers such as myself come into the picture. My 
research focuses on developing an architectural model of an AGI platform 
(covering all AGI possibilities). In theory, such a holistic platform would be 
able to position, and seamlessly integrate with, all other AGI endeavours and 
niche achievements. As such, my work enhances, and is enhanced by, the work of 
others. Your output serves as my input.

I often do not understand the content of what some of you are sharing. My field 
of study is not everything either. With some of your postings I even need to 
turn to a dictionary, and often have to read far more than I have time for. But 
I do, because I am passionate about this topic.

Part of my AGI journey is to try to bridge the semantic and experiential 
differences I encounter. Once the architectural principles of any of your 
output are sufficiently clear to me, I can use my own frame of reference and 
start integrating them with the logical whole in my mind.

To make sense of such an approach, I need to document my progress and test it 
against existing and emerging theory. In addition, I need to "future-proof" all 
my designs by attempting to peer 40+ years into the future. By comparison, when 
a researcher is swallowed up by the acute detail of developing an AGI 
computational model, my perspective may seem an encumbrance.

However, bear in mind that whatever any of you produce will, at some point, 
have to be seamlessly integrated with a holistic AGI architecture. I think such 
a mindset is critical for the progress of AGI development in the world.

In my experience, the greatest hindrance to the global development of AGI is 
our own minds, a view recently stated by another contributor. We try to make 
copies of AGI from our own minds, as if we were the superintelligence, the only 
role model. This may not be the case at all. I have learned that, as soon as I 
reach a boundary in my research, a doorway opens that either extends the 
boundary or reveals another one.

I think research in AGI is a work in progress. Such research requires a 
holistic framework, which we already have in computational-engineering 
approaches. It also requires a holistic architecture, which probably exists for 
some, but may be of such strategic value that it is definitely not going to be 
shared.

I plan to focus my research more pertinently from now on, and to start 
integrating the work here that I understand into the architecture I am 
evolving. I still have many papers to go through, and would appreciate a fellow 
holistic architect joining me in this goal.

I also intend to share my results more progressively and sensibly with the 
community. Surely there are lurkers here whose sole intent is to grab what 
others are doing for selfish purposes, but one finds those at any conference in 
any case. That should not stop AGI-networking possibilities on this list.

May I then request, for the list, a sense of professional courtesy in offering 
a degree of professional credit to individual contributors. Many of us have 
spent decades, and tens of thousands of dollars, on this research.

We do not wish simply to give it away, but we do wish to share in the interest 
of progressing AGI. Please try to be fair in referencing contributors where 
possible, to quote accurately, correctly and in full, and to give 
intellectual-property credit when the work of contributors is being used. 
Otherwise this would have to turn into an academic site, forcing researchers to 
first spend a year writing and submitting white papers for every little step of 
progress. The list would prove less useful and we would probably never get 
anywhere fast.

If this is indeed feasible, I'd be happy to share my body of research openly 
for what it is worth. I'd also appreciate all critical reviews and comments on 
my contributions.

What say you?

Sincerely

Robert Benjamin

________________________________
From: Matt Mahoney via AGI <agi@agi.topicbox.com>
Sent: Wednesday, 13 June 2018 9:02 PM
To: agi@agi.topicbox.com
Subject: Re: [agi] Anyone interested in sharing your projects / data models

Among the many AGI designs and proposals mentioned in this thread, it was 
refreshing to see some actual results from Peter Voss's Aigo. (Also 
entertaining as my Alexa was listening and answering back while I played the 
demo videos). Experimental results are a lot more work to obtain than ideas, 
which is why most publishers and reviewers require them. I realize this is 
difficult for AGI, which I guess is why 85% of the papers accepted to the AGI 
conference still lacked a results section the last time I looked.

My last 20 years of research can be summarized as finding experimental evidence 
(not proof) supporting the following hypotheses:

1. The best language models are based on neural networks.
2. Intelligence grows logarithmically with CPU time and memory.
3. Automating all human labor with AGI will probably cost $1 quadrillion.

We recently learned that the best vision models are neural networks. My work 
suggests this is true of language too. It is based on testing thousands of 
versions of 200 compression programs since 2006 on a 1 GB text benchmark, found 
at http://mattmahoney.net/dc/text.html  Text compression measures text 
prediction or modeling by adding a coder, which is a solved problem. The top 
models use dictionary preprocessing to convert words into tokens followed by 
PAQ style compression predicting one bit at a time using ad-hoc context 
features and shallow neural networks. They implement essentially toddler level 
language models with hard-coded lexical features, proximity based semantics and 
flat (n-gram) simple grammars and dictionaries sorted by grammatical role (i.e. 
grouping "monday" with "tuesday" or "brother" with "sister"). The models so far 
lack advanced grammars necessary to understand math, software, or complex 
sentences.
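
[Editorial aside: the bit-at-a-time prediction with logistic mixing that Matt describes can be sketched as follows. This is a minimal illustrative toy, not PAQ's actual code; all class and variable names here are my own invention.]

```python
import math

# Illustrative sketch of PAQ-style bit prediction (not the real PAQ code):
# several simple context models each predict the next bit; a logistic mixer
# combines their stretched probabilities with adaptively trained weights.

def stretch(p):
    return math.log(p / (1 - p))

def squash(x):
    return 1 / (1 + math.exp(-x))

class CountModel:
    """Predicts the next bit from counts seen in a fixed-order bit context."""
    def __init__(self, order):
        self.order = order          # number of previous bits used as context
        self.counts = {}            # context -> [zero count, one count]

    def predict(self, history):
        ctx = tuple(history[-self.order:]) if self.order else ()
        n0, n1 = self.counts.get(ctx, [0, 0])
        return (n1 + 1) / (n0 + n1 + 2)   # Laplace-smoothed P(next bit = 1)

    def update(self, history, bit):
        ctx = tuple(history[-self.order:]) if self.order else ()
        self.counts.setdefault(ctx, [0, 0])[bit] += 1

class LogisticMixer:
    """Mixes model probabilities in the stretched (logit) domain."""
    def __init__(self, n, lr=0.02):
        self.w = [0.3] * n
        self.lr = lr

    def mix(self, probs):
        self.x = [stretch(p) for p in probs]
        self.p = squash(sum(w * xi for w, xi in zip(self.w, self.x)))
        return self.p

    def update(self, bit):
        err = bit - self.p          # gradient of the coding (log) loss
        self.w = [w + self.lr * err * xi for w, xi in zip(self.w, self.x)]

models = [CountModel(0), CountModel(2), CountModel(4)]
mixer = LogisticMixer(len(models))
history, loss = [], 0.0
bits = [int(b) for b in "0101010101010101" * 8]   # a highly regular stream
for bit in bits:
    p = mixer.mix([m.predict(history) for m in models])
    loss += -math.log2(p if bit else 1 - p)       # ideal code length in bits
    mixer.update(bit)
    for m in models:
        m.update(history, bit)
    history.append(bit)
print(f"{loss / len(bits):.3f} bits per input bit")
```

On a regular stream like this, the average code length drops well below 1 bit per bit, which is exactly the sense in which prediction quality and compression are the same measurement.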

Prior to my work on PAQ based compression, the best models were PPM (prediction 
by partial match) until about 2003. PPM predicts bytes rather than bits using 
the longest matching contexts. I started work on neural based compression in 
1998, 5 years before achieving this result.
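
[Editorial aside: the longest-matching-context idea behind PPM can be sketched in a few lines. Again purely illustrative, with escape handling reduced to a simple fallback; real PPM blends the context orders probabilistically.]

```python
from collections import defaultdict

class ToyPPM:
    """Toy PPM-like predictor: use the longest context with observed counts,
    falling back ("escaping") to shorter contexts when it has none."""
    def __init__(self, max_order=3):
        self.max_order = max_order
        # tables[k] maps a length-k context string to symbol counts
        self.tables = [defaultdict(lambda: defaultdict(int))
                       for _ in range(max_order + 1)]

    def predict(self, history):
        for k in range(min(self.max_order, len(history)), -1, -1):
            ctx = history[-k:] if k else ""
            counts = self.tables[k].get(ctx)
            if counts:
                return max(counts, key=counts.get)  # most likely next symbol
        return None                                 # nothing seen yet

    def update(self, history, symbol):
        for k in range(min(self.max_order, len(history)) + 1):
            ctx = history[-k:] if k else ""
            self.tables[k][ctx][symbol] += 1

model = ToyPPM()
text = "abracadabra abracadabra"
correct = 0
for i, ch in enumerate(text):
    if model.predict(text[:i]) == ch:
        correct += 1
    model.update(text[:i], ch)
print(f"{correct}/{len(text)} symbols guessed correctly")
```

The second occurrence of the word is predicted almost perfectly from the first, which is why repetitive text compresses so well under this family of models.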

The second hypothesis has several caveats. By intelligence, I mean text 
prediction accuracy. I show that human level prediction (which we have not yet 
achieved) implies passing the Turing test. Not everyone accepts the Turing test 
as general intelligence since it lacks non-text based processing like vision, 
music, and robotics, all requirements for AGI or automating labor. Also, my 
tests (with the same benchmark) only show a logarithmic trend over the range of 
a few bytes up to 32 GB and 1 to 10^6 operations per byte. If we assume that 
10% of the human brain is used to process language, then the goal figure is 
10^13 bits of memory and 10^14 operations per character.
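
[Editorial aside: taking the quoted figures at face value, a quick calculation shows how far the tested range sits from the stated goal. The numbers below are only the ones given in the paragraph above; the gap estimate is mine.]

```python
import math

# How many orders of magnitude separate the tested range from the goal,
# using only the figures quoted above (32 GB tested vs. 10^13 bits goal
# memory; 10^6 tested vs. 10^14 goal operations per character).
tested_memory_bytes = 32e9
goal_memory_bytes = 1e13 / 8      # 10^13 bits expressed in bytes
tested_ops = 1e6
goal_ops = 1e14

mem_gap = math.log10(goal_memory_bytes / tested_memory_bytes)
ops_gap = math.log10(goal_ops / tested_ops)
print(f"memory gap:  {mem_gap:.1f} orders of magnitude")   # ~1.6
print(f"compute gap: {ops_gap:.1f} orders of magnitude")   # 8.0
```

Under these numbers, memory is within about a factor of forty of the goal, while compute per character is short by a full eight orders of magnitude, which is where the logarithmic-growth caveat bites hardest.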

For my third hypothesis, please note I am estimating the cost of several 
billion human level intelligences, not just one human level AGI. The two pieces 
of evidence I produced in support of my claim are:

3A. My 1998 masters thesis where I showed the scalability and robustness of 
distributed indexing using computer simulations. Distributed indexing is an 
essential feature of an AGI design consisting of lots of independently 
developed and competing narrow AI such as my 2008 proposal. (The thesis is 
here: https://cs.fit.edu/~mmahoney/thesis.html ).

3B. I showed that recursive self improvement in a closed environment (boxed AI, 
sometimes proposed as a shortcut to AGI or a singularity) is impossible. 
http://mattmahoney.net/rsi.pdf

Of course none of this disproves the possibility of other, less expensive 
routes to AGI. But logic based AI is probably not one of them (per my first 
result) and early progress does not predict success (per my second result).

--
-- Matt Mahoney, mattmahone...@gmail.com<mailto:mattmahone...@gmail.com>
Artificial General Intelligence List<https://agi.topicbox.com/latest> / AGI / 
see discussions<https://agi.topicbox.com/groups/agi> + 
participants<https://agi.topicbox.com/groups/agi/members> + delivery 
options<https://agi.topicbox.com/groups> 
Permalink<https://agi.topicbox.com/groups/agi/T731509cdd81e3f5f-Mda6e59327c21a47a77423b17>
