Re: [agi] Re: Fun with ChatGPT. How smart is it really?

2022-12-12 Thread Matt Mahoney via AGI
It is interesting how many times I've seen examples of ChatGPT getting something wrong but defending its answers with plausible arguments. In one example it gives a "proof" that all odd numbers are prime. It requires some thought to find the mistake. In another thread I saw on Twitter the user
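
The claim is easy to refute mechanically; a minimal sketch in Python (the specific flawed "proof" from the thread is not reproduced here):

    def is_prime(n):
        # trial division up to sqrt(n)
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    # counterexamples to "all odd numbers are prime"
    print([n for n in range(3, 30, 2) if not is_prime(n)])  # [9, 15, 21, 25, 27]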

Re: [agi] Want 2 PF?

2018-11-30 Thread Matt Mahoney via AGI
Empirically, neural networks are the best known solutions to vision and natural language. 2 petaflops is close to enough for a human-brain-sized neural network, but the system would need a lot more memory than 2 TB; 1 PB would be closer. GPU arrays are also poorly suited for sparsely connected
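
For scale, a back-of-the-envelope sketch (assuming roughly 10^14 synapses, about one byte per synapse, and ~10 updates per synapse per second; all round numbers):

    synapses = 1e14          # rough human estimate
    memory = synapses * 1    # ~1e14 bytes = 0.1 PB; training state pushes this toward 1 PB
    compute = synapses * 10  # ~1e15 ops/s, i.e. petaflop scale
    print(f"{memory / 1e15} PB, {compute / 1e15} PFLOPS")  # 0.1 PB, 1.0 PFLOPS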

Re: [agi] Please, read this plan and tell me what you guys think about it

2018-11-22 Thread Matt Mahoney via AGI
s of consciousness? > But then again, even if our society collapses ... every individuated piece > of consciousness will ultimately graduate ... it usually just takes many, > many experiences and iterations (reincarnations) in order for the average, > contemporary consciousness to trans

Re: [agi] Please, read this plan and tell me what you guys think about it

2018-11-21 Thread Matt Mahoney via AGI
lear that reproduction and physical survival is >> meaningless. Just as the survival of Super Mario is meaningless for the kid >> temporarily identified with the game character. >> >> The real evolution isn't biological or physical ... it happens on a meta >> level ... it's

Re: Re: Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-11-21 Thread Matt Mahoney via AGI
"sucking data from internet"), from a 1PB executable > compiled from manually built source code? > I don't see how the latter can be classified as complex while the former > is classified as simple. > > On 2018-11-20 01:15:10,

Re: [agi] Please, read this plan and tell me what you guys think about it

2018-11-20 Thread Matt Mahoney via AGI
ake AGI in order for hundreds of people to realize this > ... and AGI will - quite by definition - be aware of all of this too. > > On 18.11.2018 at 22:09, Matt Mahoney via AGI wrote: > > Self-replicating nanotechnology is subject to the laws of evolution just > like DNA bas

Re: [agi] Please, read this plan and tell me what you guys think about it

2018-11-17 Thread Matt Mahoney via AGI
It's not that nobody cares. Automating labor with AGI is worth $1 quadrillion (about 15 years of world GDP). If you can't raise $1 billion, it means investors give your ideas less than a 1 in a million chance of working. Do you have a design for a 10 petaflop, 1 petabyte computer to run a human-brain-sized
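
The implied-odds arithmetic, spelled out (a sketch; the $75 trillion world GDP figure appears later in this archive):

    world_gdp = 75e12                  # USD/year, assumed
    prize = 15 * world_gdp             # ~1.1e15, i.e. ~$1 quadrillion
    implied_probability = 1e9 / prize  # what failing to raise $1B implies
    print(prize, implied_probability)  # ~1e15, ~1e-06 (about 1 in a million)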

Re: [agi] Compressed Algorithms that can work on compressed data.

2018-10-13 Thread Matt Mahoney via AGI
On Sat, Oct 13, 2018 at 6:12 AM John Rose wrote: > > It takes kT ln 2 joules (k ln 2 ≈ 9.57 x 10^-24 joules per kelvin) to retrieve (and > > copy) a bit of information. > > > > Interesting! That's an average I bet. When there are many bits intelligence > would optimize the sum? Actually, no. That is the
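
Plugging in numbers (a sketch; T = 300 K is an assumed room temperature, not from the thread):

    import math

    k = 1.380649e-23              # Boltzmann constant, J/K
    per_kelvin = k * math.log(2)  # ~9.57e-24 J/K, the figure quoted above
    T = 300.0                     # assumed temperature, K
    print(per_kelvin, per_kelvin * T)  # ~9.57e-24 J/K, ~2.87e-21 J per bit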

Re: [agi] Compressed Algorithms that can work on compressed data.

2018-10-08 Thread Matt Mahoney via AGI
On Mon, Oct 8, 2018, 9:44 AM Stefan Reich via AGI wrote: > > > Matt Mahoney via AGI wrote on Sun, Oct 7, 2018 at > 03:25: > >> I understand the desire to understand what an AGI knows. But that makes >> you smarter than the AGI. I don't think you want that. >>

Re: [agi] Compressed Algorithms that can work on compressed data.

2018-10-06 Thread Matt Mahoney via AGI
I understand the desire to understand what an AGI knows. But that makes you smarter than the AGI. I don't think you want that. A neural network learner compresses its training data lossily. It is lossy because the training data information content can exceed the neural network's memory capacity
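
A toy comparison of the two quantities (all numbers hypothetical, chosen only to show why the inequality forces lossy compression):

    capacity_bits = 1e9 * 32  # hypothetical 1B-parameter network at 32 bits per parameter
    data_bits = 1e12 * 1.0    # hypothetical corpus: 1e12 tokens at ~1 bit/token entropy
    print(data_bits > capacity_bits)  # True: the network cannot store it all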

Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-27 Thread Matt Mahoney via AGI
Gravity and other laws of physics are explained by the anthropic principle. The simplest explanation by Occam's Razor is that all possible universes exist and we necessarily observe one where intelligent life is possible. On Thu, Sep 27, 2018, 5:32 AM Jim Bromer via AGI wrote: > Science

Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-24 Thread Matt Mahoney via AGI
- > > From: Matt Mahoney via AGI > > > > I was applying John's definition of qualia, not agreeing with it. My > definition is > > qualia is what perception feels like. Perception and feelings are both > > computable. But the feelings condition you to believing ther

Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-23 Thread Matt Mahoney via AGI
ence has already explained Chalmers's Hard > Problem of Consciousness. He just got it wrong? Is that what you are > saying? > Jim Bromer > > > On Sat, Sep 22, 2018 at 11:07 AM Matt Mahoney via AGI < > agi@agi.topicbox.com> wrote: > > I was applying John's defin

Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-21 Thread Matt Mahoney via AGI
John answered the question. Qualia = sensory input compressed for communication. A thermostat has qualia because it compresses its input to one bit (too hot/too cold) and communicates it to the heater. On Fri, Sep 21, 2018, 2:00 PM Jim Bromer via AGI wrote: > > From: Matt Mahoney v
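
That definition is literal enough to code; a minimal sketch (the 21 C setpoint is an assumption):

    SETPOINT = 21.0  # degrees C, assumed

    def qualia(temperature):
        """Compress a continuous sensor reading to one bit: too hot?"""
        return temperature > SETPOINT

    def heater(too_hot):
        """The one-bit message is the only thing the heater ever sees."""
        return "off" if too_hot else "on"

    print(heater(qualia(19.5)))  # "on"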

Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-13 Thread Matt Mahoney via AGI
On Thu, Sep 13, 2018, 12:12 PM John Rose wrote: > > -Original Message- > > From: Matt Mahoney via AGI > > > > We could say that everything is conscious. That has the same meaning as > > nothing is conscious. But all we are doing is avoiding defining >

Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-13 Thread Matt Mahoney via AGI
On Thu, Sep 13, 2018, 4:15 PM wrote: > On Thursday, September 13, 2018, at 3:10 PM, Jim Bromer wrote: > > I don't even think that stuff is relevant. > > > Jim, > > It's relevant if consciousness is the secret sauce, and if it applies to > the complexity problem. > Jim is right. I don't believe

Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-13 Thread Matt Mahoney via AGI
will. These are the things that make life better than death, which is good for reproductive fitness. On Wed, Sep 12, 2018, 9:21 AM John Rose wrote: > > -Original Message- > > From: Matt Mahoney via AGI > > > > I don't believe that my thermostat is conscious. Or let me taboo w

Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-11 Thread Matt Mahoney via AGI
On Mon, Sep 10, 2018 at 3:45 PM wrote: > You believe! Showing signs of communication protocol with future AGI :) an > aspect of CONSCIOUSNESS? My thermostat believes the house is too hot. It wants to keep the house cooler, but it feels warm and decides to turn on the air conditioner. I

Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-10 Thread Matt Mahoney via AGI
On Mon, Sep 10, 2018 at 8:10 AM wrote: > Why is there no single general compression algorithm? Same reason as general > intelligence; thus multi-agent, thus inter-agent communication, thus > protocol, and thus consciousness. Legg proved that there are no simple, general theories of

Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-09 Thread Matt Mahoney via AGI
AGI is the very hard engineering problem of making machines do all the things that people can do. Consciousness is not the magic ingredient that makes the problem easy. On Sep 9, 2018 10:08 PM, wrote: Basically, if you look at all of life (Earth only for this example) over the past 4.5 billion

Re: [agi] E=mc^2 Morphism Musings... (Intelligence=math*consciousness^2 ?)

2018-09-09 Thread Matt Mahoney via AGI
Recipe for jargon salad. Two cups of computer science. One cup mathematics. One cup electrical engineering. One cup neuroscience. One half cup information theory. Four tablespoons quantum mechanics. Two teaspoons computational biology. A dash of philosophy. Mix all ingredients in a large bowl.

Re: [agi] Re: French startup saying they found AGI, what do you think ?

2018-09-05 Thread Matt Mahoney via AGI
On Wed, Sep 5, 2018 at 8:25 AM Logan Streondj via AGI wrote: > Luna AI is real: > scam-to-raise-funds> Answered by the creator of Luna. I would be more impressed by an online demo we could test, but none exists AFAIK.

Re: [agi] Re: French startup saying they found AGI, what do you think ?

2018-09-04 Thread Matt Mahoney via AGI
I wouldn't say it's a scam. I would say it's naive. We all know what is going to happen after $350K and 9 months and it won't be the birth of AGI. On Tue, Sep 4, 2018, 3:58 PM Mark Nuzz via AGI wrote: > Looks like a scam to me as they seem to be bare bones with content but > claim they can

Re: [agi] Can you predict the future with AGI or AI?

2018-08-07 Thread Matt Mahoney via AGI
Prediction in sports is big business, as is betting on stocks and other investments. The best predictor is the market price. If there were an algorithm that could do better, everyone would use it. On average your wins and losses will cancel out no matter what. For AGI to beat the

Re: [agi] Reality

2018-08-02 Thread Matt Mahoney via AGI
On Thu, Aug 2, 2018 at 3:43 PM Steve Richfield via AGI wrote: > Should we be: > > 1. hiring otherwise-homeless people to drive cars, or > 2. have computers drive our cars and tax the computers to support the > homeless, or > 3. ignore what technology is doing to our society and just let

Re: [agi] Reality

2018-08-02 Thread Matt Mahoney via AGI
I disagree with most of this. On Wed, Aug 1, 2018 at 7:31 PM Steve Richfield via AGI wrote: > My AGI-related interest here springs from my observation that nearly > everything people expect from an AGI: > 1. Is well within human problem solving ability. No. Machines already do many things

Re: [agi] Reality

2018-08-01 Thread Matt Mahoney via AGI
Does anyone here still want to discuss AGI? Or would we rather talk about onions and politics? I realize AGI is a really hard problem. The only ones making any real progress are companies with 12-figure market caps, and it's incremental at best. So don't feel bad if 20 years of your work can be

Re: [agi] New Paper - Temporal Singularity and the Fermi Paradox

2018-06-25 Thread Matt Mahoney via AGI
Recursive self-improvement in a closed environment is not possible because intelligence depends on knowledge and computing power, which can only come from outside the simulation. Nor can any simulation model the outside world exactly, because Wolpert's theorem prohibits two computers from mutually

Re: [agi] The Singularity Forum

2018-06-16 Thread Matt Mahoney via AGI
> Or, perhaps, I have simply missed a VERY important article? > > Steve > > On 10:29AM, Fri, Jun 15, 2018 Matt Mahoney via AGI > wrote: >> >> On Thu, Jun 14, 2018 at 10:40 PM Steve Richfield via AGI >> wrote: >> > >> > In the space of real

Re: [agi] The Singularity Forum

2018-06-15 Thread Matt Mahoney via AGI
On Thu, Jun 14, 2018 at 8:04 PM Mark Nuzz via AGI wrote: > > The Singularity analogy was never intended to imply infinite power. Rather it > represents a point at which understanding and predictability breaks down and > becomes impossible. Agreed. Vinge called it an "event horizon" on our

Re: [agi] The Singularity Forum

2018-06-15 Thread Matt Mahoney via AGI
On Thu, Jun 14, 2018 at 10:40 PM Steve Richfield via AGI wrote: > > In the space of real world "problems", I suspect the distribution of > difficulty follows the Zipf function, like pretty much everything else does. A Zipf distribution is a power law distribution. The reason that power law
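
Concretely, Zipf with exponent s gives P(r) proportional to r^(-s); a small sketch (s = 1 and N = 1000 are arbitrary choices):

    s, N = 1.0, 1000
    weights = [r ** -s for r in range(1, N + 1)]
    Z = sum(weights)          # normalizing constant (generalized harmonic number)
    pmf = [w / Z for w in weights]
    print(pmf[0] / pmf[1])    # 2.0: rank 1 is twice as likely as rank 2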

Re: [agi] The Singularity Forum

2018-06-14 Thread Matt Mahoney via AGI
Sure, a silicon solution might eventually be > faster, but why simply wait until then? > > Apparently, I failed to successfully make this point to the people who > were paying Singularity's bills. > > *Steve* > > On Thu, Jun 14, 2018 at 12:47 PM, Matt Mahoney

Re: [agi] The Singularity Forum

2018-06-14 Thread Matt Mahoney via AGI
The singularity list (and SL4) died years ago. The singularity has been 30 years away for decades now. I guess we got tired of talking about it.

Re: [agi] Anyone interested in sharing your projects / data models

2018-06-13 Thread Matt Mahoney via AGI
Among the many AGI designs and proposals mentioned in this thread, it was refreshing to see some actual results from Peter Voss's Aigo. (Also entertaining as my Alexa was listening and answering back while I played the demo videos). Experimental results are a lot more work to obtain than ideas,

Re: [agi] Anyone interested in sharing your projects / data models

2018-06-09 Thread Matt Mahoney via AGI
Like everyone else on this list, I do not have a working AGI. It is easy to underestimate the sheer scale of the problem. The most obvious application of AGI is to automate human labor. Globally this is a USD $75 trillion per year problem. A working solution would have an ROI of world GDP divided by
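
One way to finish that arithmetic (a sketch; the $1 trillion development cost is entirely hypothetical):

    annual_value = 75e12            # USD/year from automating labor, as above
    dev_cost = 1e12                 # hypothetical development cost
    print(annual_value / dev_cost)  # 75.0: payback many times over, every year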