I don't think the problem is as simple as just the ones you mentioned. However big they might seem, other big problems need to be solved too.
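Since Stefan says below that he prefers natural language or code to formulas, here is a rough Python sketch of how I read the three-phase scheme Matt summarizes further down the thread (a random "infancy" phase, then reward-greedy choices from an explicit memory, then from an implicit one). Everything in it, including the class name, the step counts, and the moving average standing in for "implicit memory", is my own assumption rather than the paper's actual algorithm:

import random
from collections import defaultdict

class ThreePhaseLearner:
    """Toy agent: random infancy phase, then greedy decision making from an
    explicit (state, action) reward table, then an 'expert' phase using an
    implicit per-action moving average. Purely illustrative."""

    def __init__(self, actions, explore_steps=100, explicit_steps=100):
        self.actions = list(actions)
        self.explore_steps = explore_steps
        self.explicit_steps = explicit_steps
        self.step = 0
        # Explicit memory: running average reward per (state, action).
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)
        # Implicit memory: smoothed reward estimate per action.
        self.smoothed = defaultdict(float)

    def act(self, state):
        self.step += 1
        if self.step <= self.explore_steps:
            # Phase 1 (infancy): random exploration.
            return random.choice(self.actions)
        if self.step <= self.explore_steps + self.explicit_steps:
            # Phase 2 (decision making): best explicit average so far.
            return max(self.actions, key=lambda a: self._explicit(state, a))
        # Phase 3 (expert): best implicit, smoothed estimate.
        return max(self.actions, key=lambda a: self.smoothed[a])

    def _explicit(self, state, action):
        n = self.counts[(state, action)]
        return self.totals[(state, action)] / n if n else 0.0

    def learn(self, state, action, reward):
        self.totals[(state, action)] += reward
        self.counts[(state, action)] += 1
        # Exponential moving average standing in for "implicit memory".
        self.smoothed[action] += 0.1 * (reward - self.smoothed[action])

# Made-up usage on a toy task; the names are mine, not the paper's.
learner = ThreePhaseLearner(["irrigate", "wait"], explore_steps=10, explicit_steps=10)
for t in range(30):
    a = learner.act("pasture")
    learner.learn("pasture", a, reward=1.0 if a == "irrigate" else 0.0)

The moving average is only one plausible reading of "implicit memory"; as Matt notes, the paper doesn't spell out those details.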
On Sat, Aug 3, 2019, 8:00 PM Secretary of Trades <[email protected]> wrote:

> Metabolism is the primary biological process. But reading and writing
> chemical memories is not primary.
>
> While reading and writing electrical memories is primary, it's also
> critical in distinguishing between intelligent actions and automated
> processes.
>
> To interrupt a primary process with uncertain items such as cloud
> connectivity links is a fallacy.
>
> Vision and hearing should be enough to learn skills like seeing, drawing,
> reading, writing, listening, talking, entity recognition and other
> cognitive (not behavioral) tasks such as, of course, Kung fu. For it's
> quite intelligent to have one's enemy kicking and screaming during one's
> pillow peace times.
>
> https://www.youtube.com/watch?v=RC7ZNXclWWY
>
> Pride could be the third problem, for it runs smooth on Prejudice.
>
> On 02.08.2019 23:45, Mohammadreza Alidoust wrote:
> > Vision and hearing... And?
> >
> > On Fri, Aug 2, 2019 at 11:58 PM Secretary of Trades <[email protected]> wrote:
> > > Vision and hearing.
> > >
> > > On 02.08.2019 04:12, Mohammadreza Alidoust wrote:
> > > > Thank you. I really enjoy and appreciate your comments.
> > > >
> > > > There is no universal problem solver. So for the purpose of building
> > > > a real AGI, how many problems should our model be able to solve? How
> > > > big is our problem space?
> > > >
> > > > On Thu, Aug 1, 2019, 8:22 AM Matt Mahoney <[email protected]> wrote:
> > > > > The human brain cannot solve every problem. There is no requirement
> > > > > for AGI to do so either. Hutter and Legg proved that there is no
> > > > > such thing as a universal problem solver or predictor.
> > > > >
> > > > > It feels like you could solve any problem given enough effort, but
> > > > > that is an illusion. In reality you can't read a 20-digit number
> > > > > and recite it back. The human brain is good at solving problems
> > > > > that improve reproductive fitness, and that's only because it is
> > > > > very complex, with thousands of specialized structures and a
> > > > > billion bits of inherited knowledge.
> > > > >
> > > > > On Wed, Jul 31, 2019, 10:58 PM Mohammadreza Alidoust <[email protected]> wrote:
> > > > > > I may not call the model "a reinforcement learning neural
> > > > > > network", because nothing is going to be reinforced there. I
> > > > > > would rather call it "model-based decision making", where the
> > > > > > model of the world is incrementally completed and made more
> > > > > > accurate, which then helps in better decision making.
> > > > > >
> > > > > > The model is in its early stages and must be tested on heavier
> > > > > > tasks like the ones you mentioned. However, I believe that AGI
> > > > > > is an infinite problem space and a real AGI must be able to
> > > > > > solve everything. This requires further implementations,
> > > > > > modifications, time, teamwork, financial support, etc.
> > > > > >
> > > > > > On Thu, Aug 1, 2019 at 1:34 AM Matt Mahoney <[email protected]> wrote:
> > > > > > > Not understanding the math is the reader's problem. It is
> > > > > > > necessary to describe the theory and the experiments, and it
> > > > > > > shouldn't be omitted.
> > > > > > > The paper describes three phases of training a reinforcement
> > > > > > > learning neural network. The first phase is experimenting with
> > > > > > > random actions. The next two phases choose the action estimated
> > > > > > > to maximize reward. They differ in that they use explicit and
> > > > > > > then implicit memory, although the paper didn't explain these
> > > > > > > or other details of the learner.
> > > > > > >
> > > > > > > I like that the paper has an experimental results section,
> > > > > > > which most papers on AGI lack. But I think calling it an "AGI
> > > > > > > brain" is a stretch. It learns in highly abstract models of
> > > > > > > chemical manufacturing or cattle grazing. It doesn't
> > > > > > > demonstrate actual AGI or solve any major components like
> > > > > > > language or vision.
> > > > > > >
> > > > > > > On Wed, Jul 31, 2019, 8:01 AM Manuel Korfmann <[email protected]> wrote:
> > > > > > > > I guess he meant: it's difficult to understand all these
> > > > > > > > mathematical equations. Visualizations are better at
> > > > > > > > conveying ideas in a way that almost everyone can
> > > > > > > > understand easily.
> > > > > > > >
> > > > > > > > On 31. Jul 2019, at 13:46, Mohammadreza Alidoust <[email protected]> wrote:
> > > > > > > > > Thank you for reading my paper. I wish you success too.
> > > > > > > > >
> > > > > > > > > Could you please explain more about the readership? I am
> > > > > > > > > afraid I did not get the point.
> > > > > > > > >
> > > > > > > > > Best regards,
> > > > > > > > > Mohammadreza Alidoust
> > > > > > > > >
> > > > > > > > > On Tue, Jul 30, 2019, 2:14 PM Stefan Reich via AGI <[email protected]> wrote:
> > > > > > > > > > If someone paid me to go, I'd go... :-)
> > > > > > > > > >
> > > > > > > > > > http://agi-conf.org/2019/wp-content/uploads/2019/07/paper_21.pdf
> > > > > > > > > >
> > > > > > > > > > I like the stages you define in your paper (infancy,
> > > > > > > > > > decision making, expert). Sounds reasonable.
> > > > > > > > > >
> > > > > > > > > > I pretty much erased mathematical formulas from my brain
> > > > > > > > > > though, even though I have studied those things. These
> > > > > > > > > > days I prefer to think in natural language or code.
> > > > > > > > > > Increases the readership exponentially too. :-)
> > > > > > > > > >
> > > > > > > > > > Many greetings and best wishes to you
> > > > > > > > > >
> > > > > > > > > > On Tue, 30 Jul 2019 at 02:13, Mohammadreza Alidoust <[email protected]> wrote:
> > > > > > > > > > > Dear Stefan Reich,
> > > > > > > > > > >
> > > > > > > > > > > Thank you. I do not know whether submitting my paper
> > > > > > > > > > > before official publication by Springer is against
> > > > > > > > > > > their copyright or not. I am not sure about their
> > > > > > > > > > > rules. I will ask the authorities when I arrive in
> > > > > > > > > > > Shenzhen and inform you.
> > > > > > > > > > >
> > > > > > > > > > > However, I recommend not missing AGI-19.
> > > > > > > > > > > http://agi-conf.org/2019/
> > > > > > > > > > >
> > > > > > > > > > > Best regards,
> > > > > > > > > > > Mohammadreza Alidoust
> > > > > > > > > >
> > > > > > > > > > --
> > > > > > > > > > Stefan Reich
> > > > > > > > > > BotCompany.de // Java-based operating systems
