Re: [agi] test

2019-06-30 Thread Rob Freeman
Korrelan, good. I'm interested to talk to you about this; there's a lot I agree with. But let me just pick some specific points. On Sun, Jun 30, 2019 at 5:00 PM korrelan wrote: > ... > > The external sensory cortex re-encodes incoming sensory streams by > applying spatiotemporal compression > OK.
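Korrelan doesn't spell out what that re-encoding looks like, so purely as a minimal sketch of spatiotemporal compression, assuming nothing more than block-averaging a (time, height, width) sensory stream (the factors and array shape are illustrative assumptions, not korrelan's design):

```python
import numpy as np

def spatiotemporal_compress(stream, t_factor=4, s_factor=2):
    """Block-average a (time, height, width) sensory stream.

    Illustrative only: averages over t_factor frames and
    s_factor x s_factor pixel blocks.
    """
    t, h, w = stream.shape
    t, h, w = t - t % t_factor, h - h % s_factor, w - w % s_factor
    s = stream[:t, :h, :w]
    s = s.reshape(t // t_factor, t_factor,
                  h // s_factor, s_factor,
                  w // s_factor, s_factor)
    return s.mean(axis=(1, 3, 5))

compressed = spatiotemporal_compress(np.random.rand(64, 32, 32))
print(compressed.shape)  # (16, 16, 16)
```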

Re: [agi] test

2019-06-30 Thread Nanograte Knowledge Technologies
ogical map as your compass. Would you say this was correct enough? Robert Benjamin From: korrelan Sent: Sunday, 30 June 2019 11:00 To: AGI Subject: Re: [agi] test Any level of complexity creates a natural wall behind it, where the previous level cannot be constr

Re: [agi] test

2019-06-30 Thread korrelan
Any level of complexity creates a natural wall behind it, where the previous level cannot be constructed from the same concepts as the current level; this is just a law of reality. Ergo, any human-devised construct, such as language, cannot be used to create a true human-level AGI.  Language, set

Re: [agi] test

2019-06-29 Thread Costi Dumitrescu
Machines that can be recognized by cats... On 26.06.2019 12:57, korrelan wrote: I posted because there seems to be a general disillusionment regarding AGI, to show that the field of AGI research is varied and very much alive and well, if I can build this in my shed as a hobby, with no funding

Re: [agi] test

2019-06-27 Thread Alan Grimes via AGI
Nanograte Knowledge Technologies wrote: Alan, seriously? Are you utterly convinced the learning system for a machine would only comprise a single level of complexity? Yes. First, disregard everything you need to generate machine-code instructions for your CPU and GPU; that's not relevant; don't

Re: [agi] test

2019-06-26 Thread Alan Grimes via AGI
Nanograte Knowledge Technologies wrote: Agreed with your comment on the many levels of abstraction. Some of us refer to those as the levels of complexity. How many levels would you theorize would be required for AGI version 1.0? We only need to worry about the one that does the learning.

Re: [agi] test

2019-06-26 Thread Basile Starynkevitch
On 6/26/19 6:03 PM, Basile Starynkevitch wrote: On 6/26/19 5:21 PM, Stefan Reich via AGI wrote: > Trying to build an AGI using symbolism, spoken language or C++ code for example, IMO is like trying to build a car from cars. (I seem to have a car theme going). I beg to differ. I aim to

Re: [agi] test

2019-06-26 Thread Basile Starynkevitch
On 6/26/19 5:21 PM, Stefan Reich via AGI wrote: > Trying to build an AGI using symbolism, spoken language or C++ code for example, IMO is like trying to build a car from cars. (I seem to have a car theme going). I beg to differ. I aim to create the "75% AGI" (everyday thought, excluding just

Re: [agi] test

2019-06-26 Thread Stefan Reich via AGI
> Trying to build an AGI using symbolism, spoken language or C++ code for example, IMO is like trying to build a car from cars. (I seem to have a car theme going). I beg to differ. I aim to create the "75% AGI" (everyday thought, excluding just the highest levels of creativity) using symbolism.

Re: [agi] test

2019-06-26 Thread korrelan
Abstraction in its main sense is a conceptual process where general rules and concepts are derived from the usage and classification of specific examples, literal ("real" or

Re: [agi] test

2019-06-26 Thread Matt Mahoney
Korrelan, that is interesting work. I think brain simulation will lead to the discovery of important algorithms and optimizations for difficult problems like vision, language, and robotics, where neural networks are already the best known solutions. Have you published any papers on your research?

Re: [agi] test

2019-06-26 Thread korrelan
I require a frequency-modulated speech/phoneme engine; none exists, so I’ve just started building my own.  https://www.youtube.com/watch?v=ffY37q44O4E  :)
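The post doesn't describe the engine's internals, so the following is only a minimal sketch of the underlying idea of frequency modulation applied to a vowel-like phoneme (every parameter name and value here is an assumption, not korrelan's design):

```python
import numpy as np

def fm_phoneme(carrier_hz=120.0, formant_hz=700.0, mod_index=2.0,
               duration_s=0.3, sample_rate=16000):
    """Generate a crude vowel-like tone by frequency-modulating a carrier.

    carrier_hz stands in for the glottal pitch, formant_hz for a single
    formant; mod_index controls how strongly the formant shapes the tone.
    """
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    # Carrier phase is modulated by a sinusoid at the formant frequency.
    phase = 2 * np.pi * carrier_hz * t + mod_index * np.sin(2 * np.pi * formant_hz * t)
    return np.sin(phase)

samples = fm_phoneme()  # ~0.3 s of audio at 16 kHz
print(samples.shape, samples.min(), samples.max())
```

A real phoneme engine would presumably layer several formants and vary them over time; this only shows the single-carrier, single-modulator case.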

Re: [agi] test

2019-06-26 Thread Nanograte Knowledge Technologies
Fascinating! Exciting! I'm most curious: what "language" does this entity speak, and can it converse beyond its own adaptive boundary? From: korrelan Sent: Wednesday, 26 June 2019 09:53 To: AGI Subject: Re: [agi] test For the past 20 years I’ve be

Re: [agi] test

2019-06-26 Thread Nanograte Knowledge Technologies
At least we're beginning to make an effort to think like an AGI entity. Plato would surely commend us on our philosophical efforts. Descartes might warn us off. From: Colin Hales Sent: Wednesday, 26 June 2019 01:35 To: AGI Subject: Re: [agi] test On Tue., 25

Re: [agi] test

2019-06-26 Thread korrelan
For the past 20 years I’ve been working on a compromise: a massively parallel electro-chemical simulation/emulation of the human brain that has to exist in 3D volumetric space and time.  There are no standard/common neural nets, weights, biases, sigmoid functions or even back propagation.
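Korrelan gives no implementation details, so the following is only a generic sketch of the kind of model being described, not korrelan's actual system: leaky integrate-and-fire neurons placed in a 3D volume, purely local connectivity, distance-dependent spike delays, and no back propagation. Every constant is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
pos = rng.uniform(0.0, 1.0, size=(N, 3))    # neuron positions in a unit cube
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
delay = (dist * 10).astype(int) + 1         # spike propagation delay in timesteps
connected = (dist < 0.3) & (dist > 0)       # local connectivity by proximity

v = np.zeros(N)                             # membrane potentials
in_flight = []                              # (arrival_time, target) spike events

for t in range(200):
    v *= 0.95                               # leak
    v += rng.normal(0.0, 0.05, N)           # background drive
    for arrival, target in [e for e in in_flight if e[0] == t]:
        v[target] += 0.3                    # deliver spikes whose delay elapsed
    in_flight = [e for e in in_flight if e[0] > t]
    fired = np.flatnonzero(v > 0.5)         # threshold crossing
    for src in fired:
        for dst in np.flatnonzero(connected[src]):
            in_flight.append((t + delay[src, dst], dst))
    v[fired] = 0.0                          # reset fired neurons
```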

Re: [agi] test

2019-06-24 Thread Colin Hales
Meanwhile, the actual science of artificial general intelligence languishes, untended, malformed and impotent. Those of us who actually want to do the real science of an artificial version of natural general intelligence ... have to stand back and watch. I've been deeply immersed in this full

Re: [agi] test

2019-06-24 Thread Costi Dumitrescu
There is a quote from one of the replies to NIST's RFI: "Our 200-year old paradigm of education may provide a baseline of knowledge, but [...]". It should stand around references to Turing's article in 'Mind' or when challenging other methods. On 25.06.2019 03:12, Jim Bromer wrote: In

Re: [agi] test

2019-06-24 Thread Jim Bromer
In fact, falsifiability is NOT the driving force behind EVERY scientific revolution. I think many scientific revolutions are driven either by new technology that creates new insights, or by a lack of achievement in some particular field that makes peripheral achievements more important.

Re: [agi] test

2019-06-24 Thread Costi Dumitrescu
Game theory? On 24.06.2019 17:31, Matt Mahoney wrote: The Turing test is a criterion for AI. It is not the test of a theory. On Sun, Jun 23, 2019, 9:26 PM Costi Dumitrescu wrote:

Re: [agi] test

2019-06-24 Thread Matt Mahoney
The Turing test is a criterion for AI. It is not the test of a theory. On Sun, Jun 23, 2019, 9:26 PM Costi Dumitrescu wrote: > > https://medium.com/@kengz/falsifiability-and-general-turing-test-a0a5d059f5b4 > > > On 23.06.2019 18:14, Ben Goertzel wrote: > > Failed ;( > > > > On Sun, Jun 23, 2019

Re: [agi] test

2019-06-23 Thread Costi Dumitrescu
https://medium.com/@kengz/falsifiability-and-general-turing-test-a0a5d059f5b4 On 23.06.2019 18:14, Ben Goertzel wrote: Failed ;( On Sun, Jun 23, 2019 at 11:14 PM Matt Mahoney wrote: Turing? On Sun, Jun 23, 2019, 10:47 AM Alan Grimes via AGI wrote: test -- Please report bounces from this

Re: [agi] test

2019-06-23 Thread Ben Goertzel
Failed ;( On Sun, Jun 23, 2019 at 11:14 PM Matt Mahoney wrote: > > Turing? > > On Sun, Jun 23, 2019, 10:47 AM Alan Grimes via AGI > wrote: >> >> test >> >> -- >> Please report bounces from this address to a...@numentics.com >> >> Powers are not rights. >> > > Artificial General

Re: [agi] test

2019-06-23 Thread Matt Mahoney
Turing? On Sun, Jun 23, 2019, 10:47 AM Alan Grimes via AGI wrote: > test > > -- > Please report bounces from this address to a...@numentics.com > > Powers are not rights. >

Re: [agi] test driving your system on captchas

2019-06-12 Thread Steve Richfield
Years ago I designed an early font-independent OCR system - much the same as recognizing captchas. It worked in reverse - evaluating the confidence that each letter was NOT each of the possible letters, and then choosing the letter with the LOWEST confidence of NOT being the letter in question.
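The post only describes the selection rule, not the scoring, so here is a minimal sketch of that inverted decision: score each candidate letter by the evidence that the glyph is NOT that letter, then pick the letter with the lowest such score. The pixel-mismatch score and the templates structure are assumptions for illustration, not Steve's original design.

```python
import numpy as np

def classify_glyph(glyph, templates):
    """Pick the letter with the LOWEST confidence of NOT being the glyph.

    glyph is a binary image patch; templates maps each letter to a
    reference patch of the same shape.
    """
    not_confidence = {
        letter: np.mean(glyph != template)   # fraction of disagreeing pixels
        for letter, template in templates.items()
    }
    # Inverse of the usual rule: minimise the evidence *against* each letter.
    return min(not_confidence, key=not_confidence.get)
```

With a single mismatch score the least-contradicted letter and the best-matching letter coincide; the distinction the post draws matters when the NOT-confidence is assembled from separate per-feature evidence rather than one matching score.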