I've always found that supplied code did not work quite the way I wanted it to. A little like the new strings that cannot be changed. I mean, it is not that difficult to use an alternative, or even to build a simple function, but if you want to change a string in place you pretty much have to use some antique. That is OK, but then worrying about different string definition types becomes an issue, and so does converting a string into a char array. I am sure it must be the same with any high-level AI/AGI code, so an AGI hub would have to be associated with a lot of simpler algorithms just to make it flexible. I once discovered that by referring to a simple array of 2 or 3 different types as a class, the speed of the simplest tasks, like array initialization, became many times slower. (It was not a direct array: each item was a handle, and the handle did not point to the data object but to the class, so every access was a double indirect stuporific.) The double indirect stuporific is probably the standard these days because operating systems earned their position by being so superior at multitasking, and handles like these are part of how they manage it.
Jim Bromer
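A minimal sketch of both complaints, in Java (assumed here only because its String type is a well-known immutable string; the names IndirectionSketch, Item, and Handle are illustrative, not from anyone's codebase). Part 1 shows that an immutable string has to round-trip through a char array to be edited in place; part 2 initializes the same data directly and through a handle-to-wrapper layout, which typically comes out many times slower:

    public class IndirectionSketch {
        // Jim's "double indirect" layout: the array holds handles, each handle
        // points at a wrapper object, and only the wrapper holds the data.
        static final class Item   { int value; }
        static final class Handle { Item item; }

        public static void main(String[] args) {
            // Part 1: a string that cannot be changed in place.
            String s = "hello";
            // s.charAt(0) = 'H';         // will not compile: String is immutable
            char[] buf = s.toCharArray(); // copy into a mutable char array
            buf[0] = 'H';                 // now an in-place edit is allowed
            s = new String(buf);          // rebuild a String when one is needed
            System.out.println(s);        // prints "Hello"

            // Part 2: direct array vs. array of handles.
            final int N = 1_000_000;

            long t0 = System.nanoTime();
            int[] direct = new int[N];    // one contiguous allocation
            for (int i = 0; i < N; i++) direct[i] = i;
            long t1 = System.nanoTime();

            Handle[] handles = new Handle[N];
            for (int i = 0; i < N; i++) {
                Handle h = new Handle();  // two allocations and two pointer
                h.item = new Item();      // hops per element, instead of none
                h.item.value = i;
                handles[i] = h;
            }
            long t2 = System.nanoTime();

            System.out.printf("direct: %d us, double-indirect: %d us%n",
                    (t1 - t0) / 1_000, (t2 - t1) / 1_000);
        }
    }

The exact ratio depends on the runtime and allocator, but the point survives: the handle version pays for extra allocations and pointer chasing on even the simplest task.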
On Sun, Feb 24, 2019 at 12:50 PM Linas Vepstas <[email protected]> wrote:

> Jim, I tried to make the github://opencog/atomspace a collection of things that seem to be generically useful for anyone working on a broad range of agi-ideas. If they are not actually useful for someone's particular theory, I cannot help that.
>
> By contrast, the github://opencog/opencog is a collection of half-finished experiments that subscribe to one particular "theory", or maybe "a general collection of related theoretical inspirations". The quality, success, and usability of the code there is highly variable.
>
> But if you just want to "try your own ideas", you have to start somewhere. You're not gonna start by writing assembly code and inventing a compiler. So you look for some collection of high-level things your theory wants, and promptly discover that there aren't any. Or the ones that exist are incompatible, unmaintained, buggy, or hopeless for reasons x, y, z. I made a list, ten years ago, of every open source project that might provide something useful for AGI. It proved to be a lost cause. Here's the list (pre-neural-net): https://linas.org/agi.html It's horrible and it's ugly, and not because I'm stupid, but because the state of agi-directed software is terrible.
>
> The atomspace was a response to that. Ben's singularity-net vision is a very different response: build a marketplace of software services -- a Chinese menu of Shenzhen tinker-toy parts from which you can easily assemble your favorite AGI theory. (Although such a marketplace would be good or even excellent, I'm not convinced that it will provide the parts that I want.)
>
> -- Linas
>
> On Sun, Feb 24, 2019 at 11:28 AM Jim Bromer <[email protected]> wrote:
>
>> I wonder if there could be an open source effort where people or teams might be able to try their own ideas without being persuaded to pursue someone else's overarching theory?
>> Jim Bromer
>>
>> On Sun, Feb 24, 2019 at 12:09 PM Mike Archbold <[email protected]> wrote:
>>
>>> On 2/24/19, Matt Mahoney <[email protected]> wrote:
>>>
>>>> Colin, I think the source of our disagreement is that we have very different ideas about what we mean by AGI. To you, AGI is an autonomous agent that learns on its own by doing science (experiments). It is completely general and can work in any field like a real human. To me, AGI means automating anything we might otherwise have to pay a human to do.
>>>
>>> It seems like you are both right: AGI should be an autonomous agent that learns on its own by doing science (experiments) AND AGI should automate anything we might otherwise have to pay a human to do.
>>>
>>> (Now all that remains is the trivial task of development .... :)
>>>
>>>> You believe (I think) that AGI is not even possible in conventional computers. I tend to agree. First, transistors use too much power: about a megawatt for a human-brain-sized neural network. We might achieve tens of kilowatts using neuromorphic computing at the physical limits of miniaturization.
>>>>
>>>> Second, the brain is optimized for reproductive fitness, not universal learning, something Legg proved is not even mathematically possible. Instead we are born with 10^9 bits of knowledge encoded in our DNA. That is half of what we know as adults, and that knowledge took 3 billion years to program at the rate of one bit per generation.
>>>> For example, you cannot learn to remember a 20-digit permutation on a screen and immediately recall it back no matter how much you practice, which is something a gorilla can do because its DNA is different. [A back-of-envelope check of the megawatt and DNA figures appears after the thread.]
>>>>
>>>> Third, we do not even want autonomy. Then we would have to deal with human limitations like emotions and the need to sleep, take vacations, and get paid. We work around these limitations and train humans to specialize in a million different fields, because that is how an organization gets work done.
>>>>
>>>> I don't expect AGI to look anything like a human. We already automate 99% of work using specialized machines that are vastly better at their jobs than humans could ever be. We don't want autonomy. We want to be in control. We want to asymptotically approach 100% automation as the cost of the remaining human portion rises at 3-4% per year, as it has for centuries. AGI is not a robot revolution. AGI is more productivity with less effort, using machines that can see and understand language, sense but not feel, know what we want without wanting, and recognize and predict human emotions without having any.
>>>>
>>>> On Sun, Feb 24, 2019, 1:56 AM Colin Hales <[email protected]> wrote:
>>>>
>>>>> Matt:
>>>>>
>>>>> "When you put millions of these specialists together you have AGI."
>>>>>
>>>>> No you don't!!!! Says who? (1) Where's the proof? (2) Where's the principle that suggests it? You have neither of these things. Even if you had both, you'd still have to build AGI and test it assuming they are false in order to do the science properly.
>>>>>
>>>>> And you do not get to 'define it' this way.
>>>>>
>>>>> This is SCIENCE. You have an artificial version of a natural general intelligence when you've built one and it (yes, the AGI itself, not your or anyone else's blessing) AUTONOMOUSLY proves it. Like fire, flight, and a million other things.
>>>>>
>>>>> I can think of 1000 things that such a specialist-narrow-AI collection doesn't cover (like everything that science does not know, but could find out) and that a natural general intelligence can autonomously learn (find out). The collection lacks them, along with the ability to _autonomously_ learn them, which is also something natural general intelligence (yes, us) can do. And even worse: such specialist collections have ZERO intelligence, not because they don't know something, but because they lack the _autonomous_ bit. A real AGI can know absolutely NOTHING and yet have non-zero intelligence, because it includes a means to autonomously find out. Like us.
>>>>>
>>>>> So that collection cannot be an "artificial version of a natural general intelligence".
>>>>>
>>>>> End of story.
>>>>>
>>>>> Real AGI is defined by how it autonomously handles what it doesn't know (its ignorance!), not by what we bestow on it. If we bestow a means for learning X, that too does not make it AGI. A real but artificial version of a natural general intelligence has to _autonomously_ learn how to learn something it does not know, like us. To prove it, you first prove it does NOT know it.
>>>>> Then, later, after learning, when it proves it does, autonomously, it gets a stab at the AGI gold medal.
>>>>>
>>>>> Meanwhile? Nothing wrong with what's being done. Powerful. Useful. Impressive. All good. Rah rah rah, carry on. BUT: it's AUTOMATION based on natural general intelligence, not AGI, and it has ZERO intellect.
>>>>>
>>>>> [image: AGI.JPG]
>>>>>
>>>>> The entire AGI project has foundered on this basic fact of the science.
>>>>>
>>>>> And all we ever get here is the endless echo chamber of "if only we can program enough computers" (= AUTOMATION) and AGI will magically appear. Rubbish. It was rubbish 65 years ago, and we've done nothing but prove more completely that it is still rubbish.
>>>>>
>>>>> Enough.
>>>>>
>>>>> I know you'll never face it. Forget it.
>>>>>
>>>>> On Sun., 24 Feb. 2019, 11:08 am Matt Mahoney <[email protected]> wrote:
>>>>>
>>>>>> OpenCog is one open source effort. But real progress in AI like Google, Siri, Alexa, etc. is not just software. It's hundreds of petabytes of data from the 4 billion people on the internet and the millions of CPUs needed to process it. It's not just something you could download and run.
>>>>>>
>>>>>> I realize it's not AGI yet. We are still spending USD 83 trillion per year for work that machines can't do yet. There are still incompletely solved problems in vision, language, robotics, art, and modelling human behavior. That's going to take lots more data and computing power. The theoretical work is mostly done, although we still lack good models of humor and music and much of our own DNA.
>>>>>>
>>>>>> If you want to make progress, choose a narrow AI problem. When you put millions of these specialists together you have AGI. Don't try to do it all yourself. You can't.
>>>>>>
>>>>>> On Sat, Feb 23, 2019, 12:33 PM Ed Pell <[email protected]> wrote:
>>>>>>
>>>>>>> All, why is there no open source AGI effort?
>>>>>>>
>>>>>>> Ed Pell
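As an aside on Matt's numbers: the megawatt figure and the one-bit-per-generation figure both fall out of short arithmetic. A back-of-envelope sketch in Java, with every input an assumed round figure rather than a number from the thread:

    public class EnvelopeEstimates {
        public static void main(String[] args) {
            // Power: ~10^14 synapses, ~10 events per synapse per second, and
            // ~1 nJ per synaptic event on conventional hardware (all assumed).
            double synapses = 1e14;
            double eventsPerSecond = 10;
            double joulesPerEvent = 1e-9;
            double watts = synapses * eventsPerSecond * joulesPerEvent;
            System.out.printf("power: ~%.1f MW%n", watts / 1e6);   // ~1.0 MW

            // DNA: 10^9 bits over ~3 billion years is one bit every ~3 years,
            // so "one bit per generation" corresponds to an average generation
            // time of about 3 years across the whole lineage.
            double bits = 1e9;
            double years = 3e9;
            System.out.printf("years per bit: ~%.0f%n", years / bits); // ~3
        }
    }

Change any input by an order of magnitude and the conclusion shifts accordingly; the sketch only shows that the quoted figures are internally consistent, not that they are right.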
