Re: [agi] Controlled AI

2019-08-01 Thread rouncer81
Does the government have a stranglehold on nuclear weapons - you have to try to
learn it to find out whether they lock you up in a 6-foot cube or not.
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tdd7cd3380dc9f5a9-M8aeab79361a33a9f0aaa805e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] My paper in AGI-19

2019-08-01 Thread Matt Mahoney
The obvious application of AGI is automating the $80 trillion per year that we
currently pay people for work that machines aren't smart enough to do. That
means solving hard problems in language, vision, robotics, art, and
modeling human behavior. I listed the requirements in more detail in my
paper. The solution is going to require decades of global effort. The best
that individuals can do is make small steps towards a solution.
http://mattmahoney.net/costofai.pdf

On Thu, Aug 1, 2019, 9:14 PM Mohammadreza Alidoust 
wrote:

> Thank you. I really enjoy and appreciate your comments.
>
> There is no universal problem solver. So for the purpose of building a
> real AGI, how many problems should our model be able to solve? How big is
> our problem space?
>
>
> On Thu, Aug 1, 2019, 8:22 AM Matt Mahoney  wrote:
>
>> The human brain cannot solve every problem. There is no requirement for
>> AGI to do so either. Hutter and Legg proved that there is no such thing as
>> a universal problem solver or predictor.
>>
>> It feels like you could solve any problem given enough effort, but that
>> is an illusion. In reality you can't read a 20 digit number and recite it
>> back. The human brain is good at solving problems that improve reproductive
>> fitness, and that's only because it is very complex with thousands of
>> specialized structures and a billion bits of inherited knowledge.
>>
>> On Wed, Jul 31, 2019, 10:58 PM Mohammadreza Alidoust <
>> class.alido...@gmail.com> wrote:
>>
>>> I may not call the model "a reinforcement learning neural network",
>>> because nothing is going to be reinforced there. I would rather call it
>>> "model-based decision making", where the model of the world will be
>>> incrementally completed and made more accurate, which then helps in better
>>> decision making.
>>>
>>> The model is in its early stages and must be tested in heavier tasks
>>> like the ones you mentioned. However, I believe that AGI is an infinite
>>> problem-space and a real AGI must be able to solve everything. This
>>> requires further implementations, modifications, time, teamwork, financial
>>> support, etc.
>>>
>>> On Thu, Aug 1, 2019 at 1:34 AM Matt Mahoney 
>>> wrote:
>>>
 Not understanding the math is the reader's problem. The math is necessary
 to describe the theory and the experiments and shouldn't be omitted.

 The paper describes 3 phases of training a reinforcement learning
 neural network. The first phase is experimenting with random actions. The
 next two phases choose the action estimated to maximize reward. They differ
 in that they use explicit and then implicit memory, although the paper
 didn't explain these or other details of the learner.

 I like that the paper has an experimental results section, which most
 papers on AGI lack. But I think calling it an "AGI brain" is a stretch. It
 learns in highly abstract models of chemical manufacturing or cattle
 grazing. It doesn't demonstrate actual AGI or solve any major components
 like language or vision.

 On Wed, Jul 31, 2019, 8:01 AM Manuel Korfmann 
 wrote:

> I guess he meant: It’s difficult to understand all these mathematical
> equations. Visualizations are better at conveying ideas in a way that
> almost everyone can understand easily.
>
> On 31. Jul 2019, at 13:46, Mohammadreza Alidoust <
> class.alido...@gmail.com> wrote:
>
> Thank you for reading my paper. I wish you success too.
>
> Could you please explain more about the readership? I am afraid I did
> not get the point.
>
> Best regards,
> Mohammadreza Alidoust
>
>
> On Tue, Jul 30, 2019, 2:14 PM Stefan Reich via AGI <
> agi@agi.topicbox.com> wrote:
>
>> If someone paid me to go, I'd go... :-)
>>
>> > http://agi-conf.org/2019/wp-content/uploads/2019/07/paper_21.pdf
>>
>> I like the stages you define in your paper (infancy, decision making,
>> expert). Sounds reasonable.
>>
>> I pretty much erased mathematical formulas from my brain though, even
>> though I have studied those things. These days I prefer to think in natural
>> language or code. Increases the readership exponentially too. :-)
>>
>> Many greetings and best wishes to you
>>
>>
>> On Tue, 30 Jul 2019 at 02:13, Mohammadreza Alidoust <
>> class.alido...@gmail.com> wrote:
>>
>>> Dear Stefan Reich,
>>>
>>> Thank you. I do not know whether submitting my paper before official
>>> publication by Springer is against their copyrights or not. I am not sure
>>> about their rules. I will ask the authorities when I arrive in Shenzhen
>>> and inform you.
>>>
>>> However, I recommend not missing AGI-19.
>>> http://agi-conf.org/2019/
>>>
>>>
>>> Best regards,
>>> Mohammadreza Alidoust
>>>
>>
>>
>> --
>> Stefan Reich
>> BotCompany.de // Java-based o

Re: [agi] Narrow AGI

2019-08-01 Thread rouncer81
Machines today only want what we make them want; only we truly want things.
So if you want AGI, you need to create something with a true purpose, not an
artificial one.

I think anything more than narrow AI is blowing it out of proportion, and we
need something more important to do with our time than just making all of
mankind defunct, for the benefit of our egos?
--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1ff21f8b11c8c9ae-M5852e37bbe9079f8a4fe84a0
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] My paper in AGI-19

2019-08-01 Thread Mohammadreza Alidoust
Thank you. I really enjoy and appreciate your comments.

There is no universal problem solver. So for the purpose of building a real
AGI, how many problems should our model be able to solve? How big is our
problem space?


On Thu, Aug 1, 2019, 8:22 AM Matt Mahoney  wrote:

> The human brain cannot solve every problem. There is no requirement for
> AGI to do so either. Hutter and Legg proved that there is no such thing as
> a universal problem solver or predictor.
>
> It feels like you could solve any problem given enough effort, but that is
> an illusion. In reality you can't read a 20 digit number and recite it
> back. The human brain is good at solving problems that improve reproductive
> fitness, and that's only because it is very complex with thousands of
> specialized structures and a billion bits of inherited knowledge.
>
> On Wed, Jul 31, 2019, 10:58 PM Mohammadreza Alidoust <
> class.alido...@gmail.com> wrote:
>
>> I may not call the model "a reinforcement learning neural network",
>> because nothing is going to be reinforced there. I would rather call it
>> "model-based decision making", where the model of the world will be
>> incrementally completed and made more accurate, which then helps in better
>> decision making.
>>
>> The model is in its early stages and must be tested in heavier tasks like
>> the ones you mentioned. However, I believe that AGI is an infinite
>> problem-space and a real AGI must be able to solve everything. This
>> requires further implementations, modifications, time, teamwork, financial
>> support, etc.
>>
>> On Thu, Aug 1, 2019 at 1:34 AM Matt Mahoney 
>> wrote:
>>
>>> Not understanding the math is the reader's problem. The math is necessary
>>> to describe the theory and the experiments and shouldn't be omitted.
>>>
>>> The paper describes 3 phases of training a reinforcement learning neural
>>> network. The first phase is experimenting with random actions. The next two
>>> phases choose the action estimated to maximize reward. They differ in that
>>> they use explicit and then implicit memory, although the paper didn't
>>> explain these or other details of the learner.
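
The three phases described above (random exploration, then choosing the
estimated-best action) resemble a standard explore-then-exploit scheme. A
minimal sketch in Python follows; the bandit-style environment and every name
in it are illustrative assumptions, not the algorithm from the paper:

```python
import random

# Toy 3-armed bandit standing in for the paper's abstract environments;
# the reward probabilities are made-up numbers.
TRUE_REWARDS = {0: 0.2, 1: 0.8, 2: 0.5}

def pull(action):
    """Return a stochastic 0/1 reward for an action (hypothetical environment)."""
    return 1.0 if random.random() < TRUE_REWARDS[action] else 0.0

def best_estimate(totals, counts):
    """Action with the highest estimated mean reward so far."""
    return max(totals, key=lambda a: totals[a] / max(counts[a], 1))

def run(phase1_steps=100, phase2_steps=400):
    totals = {a: 0.0 for a in TRUE_REWARDS}
    counts = {a: 0 for a in TRUE_REWARDS}

    # Phase 1: experiment with random actions to build reward estimates.
    for _ in range(phase1_steps):
        a = random.choice(list(TRUE_REWARDS))
        totals[a] += pull(a)
        counts[a] += 1

    # Phases 2-3: choose the action estimated to maximize reward.
    for _ in range(phase2_steps):
        a = best_estimate(totals, counts)
        totals[a] += pull(a)
        counts[a] += 1

    return best_estimate(totals, counts)

random.seed(0)
print(run())
```

With enough steps the greedy phase settles on the highest-reward action; the
explicit/implicit memory distinction from the paper is not modeled here.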
>>>
>>> I like that the paper has an experimental results section, which most
>>> papers on AGI lack. But I think calling it an "AGI brain" is a stretch. It
>>> learns in highly abstract models of chemical manufacturing or cattle
>>> grazing. It doesn't demonstrate actual AGI or solve any major components
>>> like language or vision.
>>>
>>> On Wed, Jul 31, 2019, 8:01 AM Manuel Korfmann 
>>> wrote:
>>>
 I guess he meant: It’s difficult to understand all these mathematical
 equations. Visualizations are better at conveying ideas in a way that
 almost everyone can understand easily.

 On 31. Jul 2019, at 13:46, Mohammadreza Alidoust <
 class.alido...@gmail.com> wrote:

 Thank you for reading my paper. I wish you success too.

 Could you please explain more about the readership? I am afraid I did
 not get the point.

 Best regards,
 Mohammadreza Alidoust


 On Tue, Jul 30, 2019, 2:14 PM Stefan Reich via AGI <
 agi@agi.topicbox.com> wrote:

> If someone paid me to go, I'd go... :-)
>
> > http://agi-conf.org/2019/wp-content/uploads/2019/07/paper_21.pdf
>
> I like the stages you define in your paper (infancy, decision making,
> expert). Sounds reasonable.
>
> I pretty much erased mathematical formulas from my brain though, even
> though I have studied those things. These days I prefer to think in natural
> language or code. Increases the readership exponentially too. :-)
>
> Many greetings and best wishes to you
>
>
> On Tue, 30 Jul 2019 at 02:13, Mohammadreza Alidoust <
> class.alido...@gmail.com> wrote:
>
>> Dear Stefan Reich,
>>
>> Thank you. I do not know whether submitting my paper before official
>> publication by Springer is against their copyrights or not. I am not sure
>> about their rules. I will ask the authorities when I arrive in Shenzhen and
>> inform you.
>>
>> However, I recommend not missing AGI-19.
>> http://agi-conf.org/2019/
>>
>>
>> Best regards,
>> Mohammadreza Alidoust
>>
>
>
> --
> Stefan Reich
> BotCompany.de // Java-based operating systems
>


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tf27122c71ce3b240-Mc82055636d6abd2a8d971995
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] My paper in AGI-19

2019-08-01 Thread Mohammadreza Alidoust
Thank you for your email. You know, it is not about time management and
whether it is worth my time. I am here to learn, and I appreciate your
comments and criticism.

What approach would you suggest that would never lead to undefined states?


On Thu, Aug 1, 2019, 3:25 PM Jim Bromer  wrote:

> Mohammadreza said, "I think "intelligence" means optimization. So, if it
> is true, how can we tell an AGI agent to act optimally? e.g. with IF-THEN
> rules? definitely Not! These rules may lead to unforeseen states."
>
> If-Then rules are not the only possible application of discrete reasoning.
> In fact, when you talk about "optimization" you are talking about
> using mathematics to describe a discrete 'kind of thing'. Mathematical
> formulas can lead to unforeseen states when they are applied to
> computational issues. Turing's Halting Problem is an example (I am
> assuming that 'undefined' has a strong relation to 'unforeseen' as you used
> it). You need to apply the mathematics to a 'kind of situation', and the
> idea that your mathematical formula might not lead to 'unforeseen states'
> when it is actually being used is naïve.
>
> Multiplication of an integer product has an uneven compressibility rate.
> OK, maybe I am talking about division. Division not only has an uneven
> compressibility rate, it has an uneven deterministic rate. This has nothing
> to do with your paper. So now the choice you have is: Do you take the time
> to understand what I am talking about? Do you take the time to understand
> how this might apply to your interest in AI / AGI? These are not trivial
> problems for you to solve. How do you come to a conclusion about whether you
> should take the time to try to understand my criticism (and how it might be
> relevant to you) if I cannot make it easy for you to understand in a few
> minutes of reading? The conference is just about to start. Is it really
> worth your time to think about what I am trying to say? Right now it is not
> worth your time to respond. In a few years it will probably be very
> relevant to what you would like to do.
> Jim Bromer
>
>
> On Wed, Jul 31, 2019 at 10:09 PM Mohammadreza Alidoust <
> class.alido...@gmail.com> wrote:
>
>> Thank you. Sure, visualizations help in better understanding. However I
>> do not believe that the model contains difficult mathematics. BSc students
>> of control engineering in their third or fourth year study state-space
>> representation in their Modern Control Engineering course.
>>
>> Anyway, I think AGI is NOT POSSIBLE without mathematics.
>> I think "intelligence" means optimization. So, if it is true, how can we
>> tell an AGI agent to act optimally? e.g. with IF-THEN rules? definitely
>> Not! These rules may lead to unforeseen states.
>> All of the AI algorithms have a mathematical formulation behind them. Can
>> anyone name an AI algorithm which has no mathematical background?
>>
>> I think if the hypothesis "intelligence is optimization" is true, we have
>> to first devise an optimization framework for our problem space. That
>> optimization framework enables our agent to act intelligently in that space.
>> AGI is, in my view, an infinite problem-space. So, the question is: What
>> is able to cover the infinite better than mathematics?
>>
>> On Wed, Jul 31, 2019 at 4:31 PM Manuel Korfmann 
>> wrote:
>>
>>> I guess he meant: It’s difficult to understand all these mathematical
>>> equations. Visualizations are better at conveying ideas in a way that
>>> almost everyone can understand easily.
>>>
>>> On 31. Jul 2019, at 13:46, Mohammadreza Alidoust <
>>> class.alido...@gmail.com> wrote:
>>>
>>> Thank you for reading my paper. I wish you success too.
>>>
>>> Could you please explain more about the readership? I am afraid I did
>>> not get the point.
>>>
>>> Best regards,
>>> Mohammadreza Alidoust
>>>
>>>
>>> On Tue, Jul 30, 2019, 2:14 PM Stefan Reich via AGI 
>>> wrote:
>>>
 If someone paid me to go, I'd go... :-)

 > http://agi-conf.org/2019/wp-content/uploads/2019/07/paper_21.pdf

 I like the stages you define in your paper (infancy, decision making,
 expert). Sounds reasonable.

 I pretty much erased mathematical formulas from my brain though, even
 though I have studied those things. These days I prefer to think in natural
 language or code. Increases the readership exponentially too. :-)

 Many greetings and best wishes to you


 On Tue, 30 Jul 2019 at 02:13, Mohammadreza Alidoust <
 class.alido...@gmail.com> wrote:

> Dear Stefan Reich,
>
> Thank you. I do not know whether submitting my paper before official
> publication by Springer is against their copyrights or not. I am not sure
> about their rules. I will ask the authorities when I arrive in Shenzhen and
> inform you.
>
> However, I recommend not missing AGI-19.
> http://agi-conf.org/2019/
>
>
> Best regards,
> Mohammadreza Alidoust
>


 --
 Stefan Reich

Re: [agi] Narrow AGI

2019-08-01 Thread Mike Archbold
And importantly, as Ben predicts, 6) the ability of a narrow AGI to
utilize multiple sub-AGIs seamlessly within a functional area group
increases.

On 8/1/19, Mike Archbold  wrote:
> I like this editorial but I'm not sure "Narrow AGI" is the best label.
> At the moment I don't have a better name for it though. I mean, I
> agree in principle but it's like somebody saying "X is a liberal
> conservative." X might really be so, but it might be that... oh hell,
> why don't we just call it "AI"?
>
> Really, all technology performs some function. A function is kind of
> intrinsically narrow. Real estate sales, radio advertising, wire
> transfer, musical composition...In that light, all technology is
> narrow for its function.
>
> The difficulty with AGI is: it doesn't understand, reason, and judge
> as a human can, at a human level. But I think that a narrow AGI app is
> still a narrow function! Thus narrow AGI is what is going on, a narrow
> function because all technology is basically narrow, we need it to do
> something specific. What narrow AI is, is really just much better
> good old-fashioned programs that do specific things at a human level.
>
> My opinion is a "narrow AGI" would need:
>
> 1) increased common sense, the ability to form rudimentary
> understanding, reasoning, and judging, pushing the boundary toward
> human level
> 2) can perform some function, some narrow function (all functions are
> narrow it seems) very well, continually approaching human-level
> competence
> 3) Can handle wide variations in cases (DL level fuzzy pattern
> matching, patternism)
> 4) USES A COMMON BASE WITH OTHER NARROW AGIs which gets more competent
> 5) Becomes increasingly easier to specialize
>
>
> Mike A
>
> On 8/1/19, Costi Dumitrescu  wrote:
>> So Mars gets conquered by AI robots. What Tensor Flaw is so intelligent
>> about surgery or proving math theorems?
>>
>> Bias?
>>
>>
>> On 01.08.2019 13:16, Ben Goertzel wrote:
>>> https://blog.singularitynet.io/from-narrow-ai-to-agi-via-narrow-agi-9618e6ccf2ce
>>>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1ff21f8b11c8c9ae-Mbbbf42e3f57b1cd8cb64f489
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Narrow AGI

2019-08-01 Thread Mike Archbold
I like this editorial but I'm not sure "Narrow AGI" is the best label.
At the moment I don't have a better name for it though. I mean, I
agree in principle but it's like somebody saying "X is a liberal
conservative." X might really be so, but it might be that... oh hell,
why don't we just call it "AI"?

Really, all technology performs some function. A function is kind of
intrinsically narrow. Real estate sales, radio advertising, wire
transfer, musical composition...In that light, all technology is
narrow for its function.

The difficulty with AGI is: it doesn't understand, reason, and judge
as a human can, at a human level. But I think that a narrow AGI app is
still a narrow function! Thus narrow AGI is what is going on, a narrow
function because all technology is basically narrow, we need it to do
something specific. What narrow AI is, is really just much better
good old-fashioned programs that do specific things at a human level.

My opinion is a "narrow AGI" would need:

1) increased common sense, the ability to form rudimentary
understanding, reasoning, and judging, pushing the boundary toward
human level
2) can perform some function, some narrow function (all functions are
narrow it seems) very well, continually approaching human-level
competence
3) Can handle wide variations in cases (DL level fuzzy pattern
matching, patternism)
4) USES A COMMON BASE WITH OTHER NARROW AGIs which gets more competent
5) Becomes increasingly easier to specialize


Mike A

On 8/1/19, Costi Dumitrescu  wrote:
> So Mars gets conquered by AI robots. What Tensor Flaw is so intelligent
> about surgery or proving math theorems?
>
> Bias?
>
>
> On 01.08.2019 13:16, Ben Goertzel wrote:
>> https://blog.singularitynet.io/from-narrow-ai-to-agi-via-narrow-agi-9618e6ccf2ce
>>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1ff21f8b11c8c9ae-M3aff1fc5fb106c331c3ce13e
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] AGI Python library

2019-08-01 Thread Danko Nikolic
Sorry. I didn't know.



On Thu, Aug 1, 2019 at 8:25 PM Matt Mahoney  wrote:

> You let it out of the box?!?!? WE'RE DOOMED!!!
>
> On Thu, Aug 1, 2019, 7:10 AM Danko Nikolic 
> wrote:
>
>> Hi everyone,
>>
>> I just tried the new agi library for Python. This is so exciting! But it
>> does not work really well for me. It is not responding any more. Where am I
>> making the mistake? Please see below a screenshot of my code.
>>
>> Thanks for any help.
>>
>> Danko
>>
>> [image: capture.PNG]
>>

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tf98655bbfa70364b-M62cb153fa21d2ab2ed065a7d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] AGI Python library

2019-08-01 Thread Matt Mahoney
You let it out of the box?!?!? WE'RE DOOMED!!!

On Thu, Aug 1, 2019, 7:10 AM Danko Nikolic  wrote:

> Hi everyone,
>
> I just tried the new agi library for Python. This is so exciting! But it
> does not work really well for me. It is not responding any more. Where am I
> making the mistake? Please see below a screenshot of my code.
>
> Thanks for any help.
>
> Danko
>
> [image: capture.PNG]

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tf98655bbfa70364b-M9cf5d330408ccfc983e8375d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Narrow AGI

2019-08-01 Thread Costi Dumitrescu

So Mars gets conquered by AI robots. What Tensor Flaw is so intelligent
about surgery or proving math theorems?

Bias?


On 01.08.2019 13:16, Ben Goertzel wrote:

https://blog.singularitynet.io/from-narrow-ai-to-agi-via-narrow-agi-9618e6ccf2ce



--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1ff21f8b11c8c9ae-M5cab5fc1752bd3696d86bed7
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] The brain: wholistically.

2019-08-01 Thread Alan Grimes via AGI
The cortical algorithm is interesting. Cortical columns are pretty sexy 
too because they're an obvious target for finding a high level algorithm 
that does the same or better.


But regardless of how you implement cortex, the next thing you MUST do 
to achieve a fully functioning mind is to implement the actual structure 
for sequential thought, not just a logic network (like a deep learning 
model), but actual sequential thinking. It also seems that the brain can 
do basic task switching, in that multiple symbols can be fit into the 
pipeline and operate in sequence. I think the synchronization and 
buffering are done by the basal ganglia.


In the brain, that structure is the cortico-thalamo-cortical loop. Any 
functioning AGI architecture will have something similar.


There are several things going on. One is basic feedback: feedback 
applied to an input circuit will squelch out the parts of the input that 
the brain has already perceived, so that new information is highlighted.
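
That squelching can be sketched as subtracting the predicted (already
perceived) input from the actual input; the function name and numbers below
are made up for illustration, not taken from any neural model:

```python
def highlight_new(input_signal, predicted):
    """Feedback subtracts what was already perceived; only novelty remains."""
    return [max(0.0, x - p) for x, p in zip(input_signal, predicted)]

scene = [1.0, 1.0, 0.0, 2.0]              # current input to the circuit
already_perceived = [1.0, 1.0, 0.0, 0.0]  # feedback from higher areas

print(highlight_new(scene, already_perceived))  # [0.0, 0.0, 0.0, 2.0]
```

Only the last element, the new information, survives the subtraction.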


Sometimes brain regions generate activity spontaneously and information 
is encoded by inhibiting some of those spontaneous signals. So yeah, 
there is a lot of information to grok but it's all important if you want 
to successfully make a functioning AGI.


Another thing is disinhibition: a brain region might have inhibitory 
projections, but if it is itself inhibited, that removes its inhibition 
of the ultimate target, which becomes disinhibited and available for 
either spontaneous or stimulated firing.


(many neurons have a base rate of spontaneous firing that can either be 
excited or inhibited.)
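
The disinhibition described above can be illustrated with a toy two-neuron
chain; all rates here are made-up numbers for illustration, not measured
values:

```python
def firing_rate(base_rate, inhibition):
    """Spontaneous firing rate reduced by inhibitory input; never negative."""
    return max(0.0, base_rate - inhibition)

# A tonically active inhibitory neuron fully suppresses its target.
inhibitor = firing_rate(base_rate=20.0, inhibition=0.0)     # fires at 20.0
target = firing_rate(base_rate=15.0, inhibition=inhibitor)  # squelched to 0.0

# Inhibit the inhibitor: the target is disinhibited, and its spontaneous
# (or stimulated) firing can come through again.
inhibitor = firing_rate(base_rate=20.0, inhibition=20.0)    # silenced: 0.0
target = firing_rate(base_rate=15.0, inhibition=inhibitor)
print(target)  # 15.0
```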



Quote from page on Thalamus linked below:
Recent research suggests that the mediodorsal thalamus may play a 
broader role in cognition. Specifically, the mediodorsal thalamus may 
"amplify the connectivity (signaling strength) of just the circuits in 
the cortex appropriate for the current context and thereby contribute to 
the flexibility (of the mammalian brain) to make complex decisions by 
wiring the many associations on which decisions depend into weakly 
connected cortical circuits."[31] Researchers found that "enhancing MD 
activity magnified the ability of mice to 'think,'[31] driving down by 
more than 25 percent their error rate in deciding which conflicting 
sensory stimuli to follow to find the reward."[32]


REALLY?!??!!??! YOU THINK?!??!?!!?? I mean like wow, you must be like 
Sherlock Holmes and Albert Einstein rolled together. I mean, who would 
have thought a piece of anatomy that was basically wired into just about 
every part of the brain and was part of the major signal/information 
flows into and through the brain could have such a role.


Seriously though, this is how the brain selects which networks it needs 
to accomplish specific functions and implements multiple behaviors (i.e. 
general intelligence) instead of just mastering one simple domain such 
as an Atari game or something.



https://en.wikipedia.org/wiki/Cortico-basal_ganglia-thalamo-cortical_loop

https://en.wikipedia.org/wiki/Thalamus

https://en.wikipedia.org/wiki/Basal_ganglia

Anyway, this post is part of a series I'm doing on neural anatomy. I 
already have my topic picked out for tomorrow, but I'm not sure what 
else I should cover, so feel free to make requests...


--
Clowns feed off of funny money;
Funny money comes from the FED
so NO FED -> NO CLOWNS!!!

Powers are not rights.


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5561f5c999dffad2-Mc4922c2892b6f1c05149e6b3
Delivery options: https://agi.topicbox.com/groups/agi/subscription


RE: [agi] Narrow AGI

2019-08-01 Thread peter
Narrow elements within AGI development can serve as scaffolding, but unless the 
project philosophy is inherently General, it's likely to fall into the Narrow AI 
trap.

https://towardsdatascience.com/no-you-cant-get-from-narrow-ai-to-agi-eedc70e36e50

From External to Internal Intelligence:
https://medium.com/intuitionmachine/from-narrow-to-general-ai-e21b568155b9

-Original Message-
From: Ben Goertzel  
Sent: Thursday, August 1, 2019 3:16 AM
To: AGI 
Subject: [agi] Narrow AGI

https://blog.singularitynet.io/from-narrow-ai-to-agi-via-narrow-agi-9618e6ccf2ce

--

Ben Goertzel, PhD

  http://goertzel.org

 

“The only people for me are the mad ones, the ones who are mad to live, mad to 
talk, mad to be saved, desirous of everything at the same time, the ones who 
never yawn or say a commonplace thing, but burn, burn, burn like fabulous 
yellow roman candles exploding like spiders across the stars.” -- Jack Kerouac

 

--

Artificial General Intelligence List: AGI

Permalink: 
https://agi.topicbox.com/groups/agi/T1ff21f8b11c8c9ae-M8e01c66a1aa1c2bc0bfa700f
Delivery options: https://agi.topicbox.com/groups/agi/subscription


--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1ff21f8b11c8c9ae-M7f336187d21463d1d3fa831d
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] Narrow AGI

2019-08-01 Thread Duncan Murray
That was a good article - I generally agree with it, but am a little
skeptical that different industries will share knowledge openly enough for
it to be completely effective.

It most likely will turn out to be a lot of paywalls / walled gardens.

On Thu, Aug 1, 2019 at 7:47 PM Ben Goertzel  wrote:

> 
> https://blog.singularitynet.io/from-narrow-ai-to-agi-via-narrow-agi-9618e6ccf2ce
> 
> --
> Ben Goertzel, PhD
> http://goertzel.org
> 
> “The only people for me are the mad ones, the ones who are mad to
> live, mad to talk, mad to be saved, desirous of everything at the same
> time, the ones who never yawn or say a commonplace thing, but burn,
> burn, burn like fabulous yellow roman candles exploding like spiders
> across the stars.” -- Jack Kerouac

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1ff21f8b11c8c9ae-M1b6afb4fa533cbf946795aa5
Delivery options: https://agi.topicbox.com/groups/agi/subscription


[agi] AGI Python library

2019-08-01 Thread Danko Nikolic
Hi everyone,

I just tried the new agi library for Python. This is so exciting! But it
does not work really well for me. It is not responding any more. Where am I
making the mistake? Please see below a screenshot of my code.

Thanks for any help.

Danko

[image: capture.PNG]

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tf98655bbfa70364b-M68758c5af1193d59137e62fa
Delivery options: https://agi.topicbox.com/groups/agi/subscription


Re: [agi] My paper in AGI-19

2019-08-01 Thread Jim Bromer
Mohammadreza said, "I think "intelligence" means optimization. So, if it is
true, how can we tell an AGI agent to act optimally? e.g. with IF-THEN
rules? definitely Not! These rules may lead to unforeseen states."

If-Then rules are not the only possible application of discrete reasoning.
In fact, when you talk about "optimization" you are talking about
using mathematics to describe a discrete 'kind of thing'. Mathematical
formulas can lead to unforeseen states when they are applied to
computational issues. Turing's Halting Problem is an example (I am
assuming that 'undefined' has a strong relation to 'unforeseen' as you used
it). You need to apply the mathematics to a 'kind of situation', and the
idea that your mathematical formula might not lead to 'unforeseen states'
when it is actually being used is naïve.

Multiplication of an integer product has an uneven compressibility rate.
OK, maybe I am talking about division. Division not only has an uneven
compressibility rate, it has an uneven deterministic rate. This has nothing
to do with your paper. So now the choice you have is: Do you take the time
to understand what I am talking about? Do you take the time to understand
how this might apply to your interest in AI / AGI? These are not trivial
problems for you to solve. How do you come to a conclusion about whether you
should take the time to try to understand my criticism (and how it might be
relevant to you) if I cannot make it easy for you to understand in a few
minutes of reading? The conference is just about to start. Is it really
worth your time to think about what I am trying to say? Right now it is not
worth your time to respond. In a few years it will probably be very
relevant to what you would like to do.
Jim Bromer


On Wed, Jul 31, 2019 at 10:09 PM Mohammadreza Alidoust <
class.alido...@gmail.com> wrote:

> Thank you. Sure, visualizations help in better understanding. However I do
> not believe that the model contains difficult mathematics. BSc students of
> control engineering in their third or fourth year study state-space
> representation in their Modern Control Engineering course.
>
> Anyway, I think AGI is NOT POSSIBLE without mathematics.
> I think "intelligence" means optimization. So, if it is true, how can we
> tell an AGI agent to act optimally? e.g. with IF-THEN rules? definitely
> Not! These rules may lead to unforeseen states.
> All of the AI algorithms have a mathematical formulation behind them. Can
> anyone name an AI algorithm which has no mathematical background?
>
> I think if the hypothesis "intelligence is optimization" is true, we have
> to first devise an optimization framework for our problem space. That
> optimization framework enables our agent to act intelligently in that space.
> AGI is, in my view, an infinite problem-space. So, the question is: What
> is able to cover the infinite better than mathematics?
>
> On Wed, Jul 31, 2019 at 4:31 PM Manuel Korfmann 
> wrote:
>
>> I guess he meant: It’s difficult to understand all these mathematical
>> equations. Visualizations are better at conveying ideas in a way that
>> almost everyone can understand easily.
>>
>> On 31. Jul 2019, at 13:46, Mohammadreza Alidoust <
>> class.alido...@gmail.com> wrote:
>>
>> Thank you for reading my paper. I wish you success too.
>>
>> Could you please explain more about the readership? I am afraid I did not
>> get the point.
>>
>> Best regards,
>> Mohammadreza Alidoust
>>
>>
>> On Tue, Jul 30, 2019, 2:14 PM Stefan Reich via AGI 
>> wrote:
>>
>>> If someone paid me to go, I'd go... :-)
>>>
>>> > http://agi-conf.org/2019/wp-content/uploads/2019/07/paper_21.pdf
>>>
>>> I like the stages you define in your paper (infancy, decision making,
>>> expert). Sounds reasonable.
>>>
>>> I pretty much erased mathematical formulas from my brain though, even
>>> though I have studied those things. These days I prefer to think in natural
>>> language or code. Increases the readership exponentially too. :-)
>>>
>>> Many greetings and best wishes to you
>>>
>>>
>>> On Tue, 30 Jul 2019 at 02:13, Mohammadreza Alidoust <
>>> class.alido...@gmail.com> wrote:
>>>
 Dear Stefan Reich,

 Thank you. I do not know whether submitting my paper before official
 publication by Springer is against their copyrights or not. I am not sure
 about their rules. I will ask the authorities when I arrive in Shenzhen and
 inform you.

 However, I recommend not missing AGI-19.
 http://agi-conf.org/2019/


 Best regards,
 Mohammadreza Alidoust

>>>
>>>
>>> --
>>> Stefan Reich
>>> BotCompany.de // Java-based operating systems
>>>
>>

[agi] Narrow AGI

2019-08-01 Thread Ben Goertzel
https://blog.singularitynet.io/from-narrow-ai-to-agi-via-narrow-agi-9618e6ccf2ce

-- 
Ben Goertzel, PhD
http://goertzel.org

“The only people for me are the mad ones, the ones who are mad to
live, mad to talk, mad to be saved, desirous of everything at the same
time, the ones who never yawn or say a commonplace thing, but burn,
burn, burn like fabulous yellow roman candles exploding like spiders
across the stars.” -- Jack Kerouac

--
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1ff21f8b11c8c9ae-M8e01c66a1aa1c2bc0bfa700f
Delivery options: https://agi.topicbox.com/groups/agi/subscription