Ben,

Your discussion here has been EXTREMELY valuable. I believe that you and Babbage made a mistake that I am carefully avoiding. This is a POV thing, so my words may not accurately communicate my thoughts.
The whole idea of project planning is to chop big projects into bite-sized pieces, then execute them so as to minimize risks and the critical path, etc. It appears to me that your and Babbage's bites were too large and/or failed to touch important milestones, e.g. proving principles.

Note that you don't have to tell me the answers to the following questions, but answering them for yourself might prove valuable. I have no more "spare cycles" to expend on my projects than you do. So, for example:

1. I studied potential improvements to Dr. Eliza, but wrote NO code.

2. I made a deal with a patent attorney to patent important things, so I could sell IP to fund the rest of the project.

3. I am also getting other projects (like the hearing aid I mentioned in past postings) up to a fundable level, without expending lots of effort productizing them.

I don't know enough to give you concrete advice, but I can take a shot at vague advice...

1. Can you roll your ideas into a patent and make some new IP that might form the nucleus for a fundable entity?

2. What is the simplest valuable application of your technology that could be sold, e.g. semi-intelligent avatars? Self-adapting street lights? Etc.? If you could produce a patent aimed at such an application, you might be able to turn that into R&D money to create a better product, and use that income to fund greater developments.

3. OpenCog, etc., might be a valuable foil around which you could form a think tank to at least identify the bounds around the "secret sauce", identify alternative approaches, identify approaches to make the structure more or less impervious to new developments, etc. With this, you might be able to get the DoD, etc., to chip in some money.

4. IMHO you can't possibly succeed alone. What you are doing is WAY too big for one person, just as Babbage couldn't build his machine by himself. You need to accumulate some human resources, e.g. a cooperative patent attorney, a PR guy, a CEO type, an angel investor to pay for patent applications, etc., who are willing to work for a piece of the action. This also means that YOU need to give away some of the future action. You might be surprised how cooperative some of the naysayers become when they realize that if they turn their negativity into positive help, they might actually get a percent of the result.

I hope this helps. I can really relate to your pain and frustration. I have been there myself. I can almost hear the *Rocky* theme as I rise above the same things that are now pulling you down. You now need to do the same.

Steve

==============

On Tue, Dec 25, 2012 at 6:05 PM, Ben Goertzel <[email protected]> wrote:
> Oops, I clicked SEND prematurely...
>
> I suggest you review the history of Babbage's Analytical Engine:
>
> http://en.wikipedia.org/wiki/Analytical_Engine
>
> > "The Analytical Engine was a proposed mechanical general-purpose
> > computer designed by English mathematician Charles Babbage.[2]
> > It was first described in 1837 as the successor to Babbage's
> > Difference Engine, a design for a mechanical computer.
> > The Analytical Engine incorporated an arithmetic logic unit, control
> > flow in the form of conditional branching and loops, and integrated
> > memory, making it the first design for a general-purpose computer
> > that could be described in modern terms as Turing-complete.[3][4]
> > Babbage was never able to complete construction of any of his
> > machines due to conflicts with his chief engineer and inadequate
> > funding.[5][6] It was not until the 1940s that the first
> > general-purpose computers were actually built."
>
> Note here that Babbage's core ideas were not only correct, but now seem
> OBVIOUS to nearly anyone with a university education in computer
> science...
>
> Note that he failed to get the Analytical Engine built during his
> lifetime, not because of any core problem with the ideas or the design,
> but simply because it was tricky to accomplish using the component
> technologies available, and he ran into various human-management and
> funding problems...
>
> Note this page, titled "Babbage's Analytical Engine, 1834-1871. (Trial
> model)":
>
> http://www.sciencemuseum.org.uk/objects/computing_and_data_processing/1878-3.aspx
>
> If you had asked Babbage in 1840, six years after he conceived the
> Analytical Engine, when he would have a complete machine, what would he
> have said? Maybe he would have projected completion by 1845 or 1850. He
> would not have predicted that the thing would still remain incomplete
> at the time of his death in 1871....
>
> But nevertheless, his failure to correctly foresee the pragmatic,
> non-scientific obstacles he would run into in trying to get his
> Analytical Engine created tells you NOTHING about the validity of his
> underlying design -- which is now considered obvious and trivial.
>
> In my view, the situation with OpenCog is similar. The basic validity
> of the design will look obvious and almost trivial to any AI
> undergraduate of 2050. Given the code libraries and hardware of that
> time, the implementation of something like OpenCog will be an undergrad
> course project..... Given the code libraries and hardware available
> NOW, it's a lot of work to get something like OpenCog implemented and
> tested.... This is the sort of practical problem frequently confronted
> by people with ideas that are "ahead of the times" relative to the
> available technical infrastructure.
>
> One thing that is different now than in Babbage's time, though, is that
> the exponential advancement of technology is further along the curve.
> When five years of Babbage's life passed, the underlying technologies
> needed to support his Analytical Engine advanced only a little. When
> five years of my life pass, the underlying tech needed to support
> OpenCog advances quite a bit ;-) ...
>
> To counterbalance your mocking of my prior, conditional positive
> predictions, I'd like to remind you of the long list of incorrect
> negative predictions made in the past regarding various incipient
> technologies:
>
> http://www.merkle.com/badPredictions.html
>
> The folks making these incorrect negative predictions were just as
> superior-sounding, self-confident and high-handed as you are in their
> dismissal of various technologies and approaches that now seem obvious.
> Generically speaking, humans aren't great at either positive or
> negative prediction, and we need to consider each case carefully rather
> than evaluating situations glibly.
> In the case of AGI, I prefer to evaluate someone's AGI approach by
> actually looking at the conceptual, scientific, and technical ideas
> underlying their work, rather than based on shallow considerations such
> as the ones you are applying to OpenCog....
>
> Your solution to the difficulty of achieving adequate funding for AGI
> R&D is to work on narrow AI and count on it gradually becoming more and
> more AGI-ish.... You have repeated this message dozens, perhaps
> hundreds of times during the years I've been interacting with you
> online. (I am actually amazed at your patience for repeating
> essentially the same arguments in different words, month after month
> and year after year.) You believe that by incrementally improving a
> variety of narrow-AI products like text compressors, it will be
> possible to eventually achieve human-level AGI. I doubt this will work.
> I understand how convenient it would be if this WERE a workable path,
> because of course it's easier to leverage resources toward practical
> projects with near-term, high-probability commercial payoffs for
> investors. But I'm not going to modify my scientific/conceptual
> understanding of intelligence based on criteria of economic
> convenience. You will proceed with your R&D according to your own
> understanding, and I'll proceed with my R&D according to mine. Research
> is always risky, because it's always based in part on uncertain
> knowledge and intuition.
>
> Success in AGI requires a number of things to go right: the right core
> ideas, the right technical implementation choices, and the right
> practical situation (team/funding/etc.). Failure in AGI requires only
> one of these things to go wrong.... And to make AGI work, you have to
> do all these difficult things right, in the midst of a bunch of trolls
> and nay-sayers screeching annoyingly in your ear "You might fail!! You
> might fail!! You may be wasting your time!! Are you sure you shouldn't
> just get a garden-variety job and have an easier life??" ..... But
> somehow, I manage to enjoy myself working on AGI anyway -- I guess
> mainly because the subject matter is just SO damn important and
> fascinating ;-) ...
>
> Merry Christmas ;)
> Ben G

--
Full employment can be had with the stroke of a pen. Simply institute a
six-hour workday. That will easily create enough new jobs to bring back
full employment.
