Re: Repair Theory (was Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source))

2008-09-21 Thread Steve Richfield
Mike,

On 9/20/08, Mike Tintner [EMAIL PROTECTED] wrote:

  Steve: If I were selling a technique like Buzan then I would agree.
 However, someone selling a tool to merge ALL techniques is in a different
 situation, with a knowledge engine to sell.

 The difference AFAICT is that Buzan had an *idea* - don't organize your
 thoughts about a subject in random order, or list, or tables or other old
 structures etc.; organize them like a map/tree on a page so that you can
 oversee them. Not a big idea, but an idea, out of wh. he's made money, and
 which clearly appeals to many..


That addresses a different audience than the one I was looking at, but yes, I think
I see what you are getting at.


 If you have a distinctive idea, wh. you may well have, I've missed it and
 you're not repeating it. A tool to merge all techniques is a goal, not an
 idea. You have to show me that you have an idea - some new insight into
 general system principles applying to, say, repair.


There is a large body of experience with various knowledge engines of
decades past. My ideas are tiny bits of glue that were missed in long past
projects that were hastily designed, programmed, presented, and abandoned.
In some of these cases, whole approaches were abandoned because of tiny
problems in their design or coding. I am just taking the considerable time
(now ~6 years) to methodically work through the myriad issues and identify
viable approaches to the challenges that buried long past projects. As I
have said here before, if not for Weizenbaum's book, Dr. Eliza or a very
similar program would have been developed by 1980 and the Internet
Singularity would have arrived on the heels of the first Internet
deployment. Weizenbaum precipitated a computer disaster on a scale fully
comparable to the Perceptron disaster, yet still, no one sees it.

  And if you are to do focus groups, you will also have to have a new idea
 to show them and test on them.


Hmmm, I hadn't even thought about focus groups. I consider this area to be
way too subtle for any but computational linguists and similar sorts of
experts to participate in. So far, the folks working on the Russian Translator
have been the most helpful. There is no broad masterstroke of genius behind
Dr. Eliza; instead, countless seemingly insignificant details make it
work where prior efforts failed. Little details like making users answer
questions by editing their problem statements rather than answering the
questions directly. Made separately, such decisions would push Dr. Eliza
into the same holes that past systems fell into; instead, it must be
designed as a complete working system. Do you think that I could be wrong in this
presumption?

Steve Richfield





Re: Repair Theory (was Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source))

2008-09-20 Thread Steve Richfield
Mike,

On 9/19/08, Mike Tintner [EMAIL PROTECTED] wrote:

  Steve: Thanks for wringing my thoughts out. Can you twist a little
 tighter?!

 A v. loose practical analogy is mindmaps - it was obviously better for
 Buzan to develop a sub-discipline/technique 1st, and a program later.


MAJOR difference: Buzan's iMindMap does his own particular method, whereas
Dr. Eliza is designed to do EVERYONE's methods, being easily extensible by
simply adding more knowledge. Dr. Eliza's limitations affect the problems it
can handle, rather than the knowledge it can use.


 What you don't understand, I think, in all your reasoning about repair is
 that there is probably no principle - however obvious it seems to you - that
 will not be totally questioned and contradicted, and reasonably so, by
 someone else.


Agreed. So what?! What you don't understand is that the better repairmen are
made that way by having larger or different assortments of techniques, which
often lets them succeed where repairmen with lesser assortments failed. That there are a few
worthless techniques in the assortment is (almost) irrelevant. The ONLY
significance of worthless techniques is that they can waste some time,
unless of course you allow them to consume a LOT of time (and sometimes
enough to kill you), as modern medicine now so often does.


 The proof is in the pudding. Get yourself a set of principles together, and
 try them out on appropriately interested parties - some of your potential
 audience/customers - *before* you go to the trouble of programming.


There is a major communications/worldmodel disconnect of some sort here.
Much of my life has been spent doing some sort of repair - auto, electronic,
medical, etc. Often, I have succeeded where other experts had
previously failed. Their missing piece was usually their inability to use
what they DID know to effect the repair, and their failure to do
obviously-needed research when dead ends were reached.

If there is already a formal repair theory of some sort, other than
individuals' opinions in various sub-domain books, then I have completely
missed it. Hence, there is no present body of knowledge or experts, nor are
there people with experience broad enough for their opinions to be valued
beyond the obvious.


  That's obviously good technological/business practice. Do some market
 research. I think you'll learn a lot.


If I were selling a technique like Buzan then I would agree. However,
someone selling a tool to merge ALL techniques is in a different situation,
with a knowledge engine to sell.

Finally, I absolutely agree that many/most experts will reject something
like Dr. Eliza, as I have already seen in the medical domain. I have a
friend who is the Director of Research at a major university's medical
center, and we have discussed this at length. Mainstream medicine is now SO
far off track that Americans spend more money on alternative health than
they do on mainstream medicine. This is all wrapped up in degrees, egos, the
value of old and stale knowledge, inability to keep up to date on entire
domains, lack of basic skills, etc., etc.

In short, I hear your comments about market research. That is why I see Dr.
Eliza as a knowledge fusion tool that could potentially work across the
entire Internet, and NOT as a tool to support experts. I see something like
Dr. Eliza as a sort of alternative or successor to Internet Explorer, one that
fuses the Internet to solve problems, rather than just another AI program
that might be useful in some sub-domains.

What seems SO very obvious to me, and what seems to escape everyone else, is
this: the value of knowledge fusion across the Internet seems to be granted by
many people. There are a number of projects now working in this direction,
e.g. the one at Wikipedia. They all have absolutely insurmountable faults
that I have written about on many occasions (e.g. the inability to recognize
statements of symptoms of the conditions described in various articles), and
they will simply never work. I have running demo code to show a way that
actually works. Some people have expressed objections, e.g. the need for
additional human-entered meta-information, yet no one has shown even a
suggestion that there is a way around these objections - that they aren't
inherent in the task. Why aren't people starting with my approach and
refining it, rather than continuing in other directions with no apparent
(informed) hope of ever working?

To answer my own question: People act in response to motivation, and their
motivations are NOT aligned with success. They need to push out a paper to
get their PhD, they are organizing a group of people to work for free on a
project when no one would hire them as a project manager, etc.

Anyway, until a better realization comes along and bops me on the head, that
is the way I see it. Do you see things differently?

Steve Richfield




Re: Repair Theory (was Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source))

2008-09-20 Thread Mike Tintner
Steve:
If I were selling a technique like Buzan then I would agree. However, someone 
selling a tool to merge ALL techniques is in a different situation, with a 
knowledge engine to sell.

The difference AFAICT is that Buzan had an *idea* - don't organize your 
thoughts about a subject in random order, or list, or tables or other old 
structures etc.; organize them like a map/tree on a page so that you can 
oversee them. Not a big idea, but an idea, out of wh. he's made money, and 
which clearly appeals to many..

If you have a distinctive idea, wh. you may well have, I've missed it and 
you're not repeating it. A tool to merge all techniques is a goal, not an 
idea. You have to show me that you have an idea - some new insight into 
general system principles applying to, say, repair. And if you are to do 
focus groups, you will also have to have a new idea to show them and test 
on them.





Re: Repair Theory (was Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source))

2008-09-19 Thread Mike Tintner
Steve: question: Why bother writing a book, when a program is a comparable 
effort that is worth MUCH more?

Well, because when you do just state basic principles - as you constructively 
started to do - I think you'll find that people can't even agree about those - 
any more than they can agree about, say, the principles of self-help. If they 
can - if you can state some general systems principles that gain acceptance - 
then you have the basis for your program, and it'll cost you a helluva lot less 
effort.




Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-19 Thread Matt Mahoney
--- On Thu, 9/18/08, Ben Goertzel [EMAIL PROTECTED] wrote:

Sorry for being unclear. The two categories of AI that I refer to are the near 
term smart internet automated economy and longer term artificial human or 
transhuman phases. In the smart internet phase, individuals with competing 
goals own parts of the AGI (peers) and the message routing infrastructure 
provides a market that satisfies human goals efficiently. Peers work to satisfy 
the goals of their owners. Later, the network will be populated with 
intelligent peers that have their own goals independent of their (former) 
owners.

Just as the computation, storage, and communication eras of computing lack 
sharp boundaries, so will the automated economy and transhuman eras. Early on, 
people will add peers that try to appear human for various reasons, and with 
various degrees of success. These peers will know a lot about one person (such 
as its owner) and go to the net for more general knowledge about people. This 
becomes easier as computers get faster and surveillance becomes more pervasive. 
Basically, your CMR client knows everything you ever typed into a computer. 
People may program their peers to become autonomous and emulate their owners 
after they die. They might work, earn money, and pay for hosting. Later, peers 
may buy robotic bodies as the technology becomes available.

About intelligence testing, early AGI would pass an IQ test or Turing test by 
routing questions to the appropriate experts. Later, transhumans could do the 
same, only they might choose not to take your silly test.

So perhaps you could name some applications of AGI that don't fall into the 
categories of (1) doing work or (2) augmenting your brain?

3) learning as much as possible

Early AGI would do so because it is the most effective strategy to meet the 
goals of its owners. Later, transhumans would learn because they want to learn. 
They would want to learn because this is a basic human goal which was copied 
into them. Humans want to learn because intelligence requires both the ability 
to learn and the desire to learn. Humans are intelligent because it increases 
evolutionary fitness.

4) proving as many theorems as possible

Early AGI would route your theorem to theorem proving experts, rank the 
results, and use the results to improve future rankings and future routing of 
similar questions. Later, transhumans could just ask the net.
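
As a rough sketch of that route-rank-and-learn loop (in Python; the prover names, the scoring rule, and the ask_prover stub are all hypothetical illustrations, not anything specified in the message above):

```python
# Toy sketch of the route / rank / feedback loop described above.
# Prover names, the scoring rule, and ask_prover() are all hypothetical.

provers = {"prover_a": 1.0, "prover_b": 1.0}   # routing weights per expert

def ask_prover(name, theorem):
    # Stand-in for a real call to an external theorem-proving service.
    return f"{name}: attempted proof of {theorem!r}"

def route_theorem(theorem, score):
    """Send the theorem to every prover, score the answers, update weights."""
    results = {name: ask_prover(name, theorem) for name in provers}
    for name, result in results.items():
        # Good answers raise a prover's weight for future routing decisions.
        provers[name] = 0.9 * provers[name] + 0.1 * score(result)
    best = max(provers, key=provers.get)
    return results[best]

print(route_theorem("sqrt(2) is irrational", score=lambda r: len(r)))
```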

5) figuring out how to improve human life as much as possible 

Early AGI will make the market more efficient, which improves the lives of 
everyone who uses it. Later, transhumans will have their own ideas of what 
"improve" means. That is where AGI becomes dangerous.


-- Matt Mahoney, [EMAIL PROTECTED]





Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-19 Thread Matt Mahoney
--- On Thu, 9/18/08, John LaMuth [EMAIL PROTECTED] wrote:

 I always advocated a clear separation between work and PLAY
 
 Here the appeal would be amusement / entertainment - not
 any specified work 
 goal
 
 Have my PR - AI call your PR - AI !!
 
 and Show Me the $$$ !!
 
As more of the economy is automated, we will spend a greater fraction of our 
time and money on entertainment. Automatically generating music, movies, art, 
and artificial worlds are hard AI problems.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: Repair Theory (was Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source))

2008-09-19 Thread Steve Richfield
Mike,

On 9/19/08, Mike Tintner [EMAIL PROTECTED] wrote:

  Steve: question: Why bother writing a book, when a program is a comparable
 effort that is worth MUCH more?

 Well, because when you do just state basic principles - as you
 constructively started to do - I think you'll find that people can't even
 agree about those


No agreement would seem to be needed - just list all of the approaches that
sometimes work. Consider:

...
8.  Repair is usually done breadth-first, evaluating various approaches
before expending lots of effort on any one.
9.  The best (and worst) methods are that way because of the domain,
problem, and repairman involved. Hence, there will never be agreement among
repairmen across domains.
10. The value of any particular principle is in how it works. Go with
whatever works. People and computers should (mentally, physically, or
computationally) try various approaches until a good one emerges.

  - any more than they can agree about say, the principles of self-help.


OK, let's look at self-help as a repair domain. If you look across many
systems of self-help, you will see some basic truths:

1.  While they have many different names, there are only a small number of
distinct types; e.g., Buddhism and Scientology have a LOT in common, and there
are many versions of 12 Step, etc.

2.  They are easily separable along major lines, e.g. those that are for
people with an internal locus of control (e.g. Buddhism), and those for
people with an external locus of control (e.g. Christianity). Buddhism will
never work for people with an external locus of control, and Christianity
will never work for people with an internal locus of control.

3.  Each of the methods, while complex as a whole, consists of small and
simple steps to take as situations (combinations of symptoms and
sub-conditions) dictate. For example, the first step in most 12 Step methods
is to recognize a power greater than yourself. Many people are unable to get
past this first step - but then again, they usually are not good candidates
for 12 Step for other reasons.

Hence, while we can't blindly agree which is best, I (or a computer) could
ask you a few questions like "Do you believe that you control your life, or
do you believe that your mother, the government and/or God is in control?"
to determine locus of control, and select the most appropriate system.
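
As a minimal sketch of that selection step (in Python; the cue words, the two categories, and the example systems are illustrative assumptions, not anything from Dr. Eliza):

```python
# Minimal sketch of selecting a self-help system from one locus-of-control
# question. The cue words and example systems are illustrative assumptions.

SYSTEMS = {
    "internal": ["Buddhism"],                 # internal locus of control
    "external": ["Christianity", "12 Step"],  # external locus of control
}

def select_system(answer):
    """answer: free-text reply to the locus-of-control question."""
    external_cues = ("mother", "government", "god", "someone else")
    locus = "external" if any(c in answer.lower() for c in external_cues) else "internal"
    return SYSTEMS[locus]

print(select_system("I believe God is in control"))   # -> ['Christianity', '12 Step']
```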

 If they can - if you can state some general systems principles that gain
 acceptance -  then you have the basis for your program,


How does acceptance have anything to do with a basis for a program? It's all
in the knowledge and NOT in the programming, so different principles only
mean different knowledge. The only (easily identifiable) basic
assumptions that Dr. Eliza relies on seem to be:
1.  That statements of symptoms can usually (perfection is NOT needed) be
recognized by advanced (variable with negation and timing recognition)
shallow parsing methods (see the sketch after this list).
2.  That applicable problems to solve have cause and effect chains (to
traverse and interrupt).
3.  That no traditional computation is needed, other than sometimes
invoking canned programs to compute things.
4.  That people will actually bother to create clear problem statements.
5.  That people will actually use the program.
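
To make assumption 1 concrete, here is a rough illustration of shallow, pattern-based symptom spotting with crude negation handling; the symptom patterns and negation cues are invented for the example and are not Dr. Eliza's actual dictionaries:

```python
import re

# Rough illustration of assumption 1: spot symptom statements with shallow
# pattern matching and crude negation handling. The patterns and negation
# cues below are invented for the example, not Dr. Eliza's own data.

SYMPTOM_PATTERNS = {
    "fatigue": re.compile(r"\b(tired|exhausted|no energy)\b", re.I),
    "overheating": re.compile(r"\b(overheat(s|ing)?|runs hot)\b", re.I),
}
NEGATION = re.compile(r"\b(not|never|no longer|without)\b", re.I)

def find_symptoms(sentence):
    """Return symptom labels asserted (not negated) in one sentence."""
    hits = []
    for label, pattern in SYMPTOM_PATTERNS.items():
        match = pattern.search(sentence)
        if match and not NEGATION.search(sentence[:match.start()]):
            hits.append(label)
    return hits

print(find_symptoms("The engine runs hot but I am not tired."))
# -> ['overheating']
```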

 and it'll cost you a helluva lot less effort.
There is a year or so of effort either way. One way I get a book to try and
sell, and the other way I take on Google. Both ways seem to have their
obvious impediments (e.g. will anyone buy such a book) and require
comparable efforts. However, with a program, there is more fun and a
possibility of a really BIG win.

Thanks for wringing my thoughts out. Can you twist a little tighter?!

Steve Richfield





Re: Repair Theory (was Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source))

2008-09-19 Thread Mike Tintner
Steve:
Thanks for wringing my thoughts out. Can you twist a little tighter?!

Steve,

A v. loose practical analogy is mindmaps - it was obviously better for Buzan to 
develop a sub-discipline/technique 1st, and a program later.

What you don't understand, I think, in all your reasoning about repair is 
that there is probably no principle - however obvious it seems to you - that 
will not be totally questioned and contradicted, and reasonably so, by someone 
else. 

The proof is in the pudding. Get yourself a set of principles together, and try 
them out on appropriately interested parties - some of your potential 
audience/customers - *before* you go to the trouble of programming. That's 
obviously good technological/business practice. Do some market research. I 
think you'll learn a lot.




Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Matt Mahoney
--- On Thu, 9/18/08, Bob Mottram [EMAIL PROTECTED] wrote:

  And this is the problem.  Although some people have the goal of making
  an artificial person with all the richness and nuance of a sentient
  creature with thoughts and feelings and yada yada yada.. some of us
  are just interested in making more intelligent systems to do automated
  tasks.  For some reason people think we're going to do this by making
  an artificial person and then enslaving them.. that's not going to
  happen because it's just not necessary.

 In this case what you're doing is really narrow AI, not AGI.

Let's distinguish between the two major goals of AGI. The first is to automate 
the economy. The second is to become immortal through uploading.

The first goal does not require any major breakthroughs in AI theory, just lots 
of work. If you have a lot of narrow AI and an infrastructure for routing 
natural language messages to the right experts, then you have AGI. I described 
one protocol (competitive message routing, or CMR) to make this happen at 
http://www.mattmahoney.net/agi.html but the reality will probably be more 
complex, using many protocols to achieve the same result. Regardless of the 
exact form, we can estimate its cost. The human labor now required to run the 
global economy was worth US $66 trillion in 2006 and is increasing at 5% per 
year. At current interest rates, the value of an automated economy is about $1 
quadrillion. We should expect to pay this much, because there is a tradeoff 
between having it sooner and waiting until the cost of hardware drops.
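
One way to reconstruct that arithmetic (my assumption about the valuation, not a formula stated above) is to treat the $66 trillion of annual labor as a perpetuity and ask what discount rate makes it worth about $1 quadrillion:

```python
# Assumed reading of the figures above: value the world's labor as a growing
# perpetuity, PV = C / (r - g). With C = $66 trillion and g = 5%, a present
# value of ~$1 quadrillion implies a discount rate of roughly 11.6%; a plain
# perpetuity (g = 0) would imply roughly 6.6%.

labor_2006 = 66e12        # annual value of human labor, USD
growth = 0.05             # assumed annual growth of that figure

def implied_discount_rate(present_value):
    """Solve PV = C / (r - g) for r."""
    return labor_2006 / present_value + growth

print(f"{implied_discount_rate(1e15):.3%}")   # -> 11.600%
```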

This huge cost requires a competitive system with distributed ownership in 
which information has negative value and resource owners compete for attention 
and reputation by providing quality data. CMR, like any distributed knowledge 
base, is hostile: we will probably spend as many CPU cycles and as much human 
labor filtering spam and attacks as detecting useful features in language and video.

The second goal of AGI is uploading and intelligence augmentation. It requires 
advances in modeling, scanning, and programming human brains and bodies. You 
are programmed by evolution to fear death, so creating a copy of you that 
others cannot distinguish from you, and that will be turned on after you die, has 
value to you. Whether the copy is really you and contains your consciousness 
is an unimportant philosophical question. If you see your dead friends brought 
back to life with all of their memories and behavior intact (as far as you can 
tell), you will probably consider it a worthwhile investment.

Brain scanning is probably not required. By the time we have the technology to 
create artificial generic humans, surveillance will probably be so cheap and 
pervasive that creating a convincing copy of you could be done just by 
accessing public information about you. This would include all of your 
communication through computers (email, website accesses, phone calls, TV), and 
all of your travel and activities in public places captured on video.

Uploads will have goals independent of their owners because their owners have 
died. They will also have opportunities not available to human brains. They 
could add CPU power, memory, I/O, and bandwidth. Or they could reprogram their 
brains, to live in simulated Utopian worlds, modify their own goals to want 
what they already have, or enter euphoric states. Natural selection will favor 
the former over the latter.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Ben Goertzel

 Lets distinguish between the two major goals of AGI. The first is to
 automate the economy. The second is to become immortal through uploading.


Peculiarly, you are leaving out what to me is by far the most important and
interesting goal:

The creation of beings far more intelligent than humans yet benevolent
toward humans



 The first goal does not require any major breakthroughs in AI theory, just
 lots of work. If you have a lot of narrow AI and an infrastructure for
 routing natural language messages to the right experts, then you have AGI.


Then you have a hybrid human/artificial intelligence, which does not fully
automate the economy, but only partially does so -- it still relies on human
experts.

-- Ben





Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Steve Richfield
Ben,

IMHO...

On 9/18/08, Ben Goertzel [EMAIL PROTECTED] wrote:



 Lets distinguish between the two major goals of AGI. The first is to
 automate the economy. The second is to become immortal through uploading.


 Peculiarly, you are leaving out what to me is by far the most important and
 interesting goal:

 The creation of beings far more intelligent than humans yet benevolent
 toward humans


Depending on the details, there are already words in our English vocabulary
for these: Gods? Aliens? Masters? Keepers? Enslavers? Monsters? etc. I have
yet to hear a convincing case for any of them.




 The first goal does not require any major breakthroughs in AI theory, just
 lots of work. If you have a lot of narrow AI and an infrastructure for
 routing natural language messages to the right experts, then you have AGI.


Sounds a bit like my Dr. Eliza.

  Then you have a hybrid human/artificial intelligence, which does not fully
 automate the economy, but only partially does so -- it still relies on human
 experts.


Of course, the ULTIMATE intelligence should be able to utilize ALL expertise
- be it man or machine. My concept with Dr. Eliza was for it to handle
repeated queries, and for people to answer new (to the machine) queries by
adding the knowledge needed to answer them. Similar repeated queries in the
future would then be answered automatically. By my calculations, the vast
majority of queries could be handled using knowledge entered in only a few
expert-years, so soon our civilization could focus its entire energy on
the really important unanswered questions, rather than having everyone
rediscover the same principles in life.
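
A toy sketch of that workflow (in Python; the data structures and function names are my own illustration, not Dr. Eliza's actual design):

```python
# Toy sketch of the workflow described above: repeated queries are answered
# from stored knowledge; new queries are queued for a human expert, whose
# answer is added so future repeats are handled automatically.

knowledge_base = {}        # canonical query -> stored answer
pending_for_experts = []   # new queries awaiting human-entered knowledge

def handle_query(query):
    key = query.strip().lower()
    if key in knowledge_base:
        return knowledge_base[key]       # repeated query: answered by machine
    pending_for_experts.append(key)      # new query: needs human knowledge
    return None

def expert_adds_knowledge(query, answer):
    knowledge_base[query.strip().lower()] = answer

expert_adds_knowledge("why does my car overheat?", "Check coolant level and thermostat.")
print(handle_query("Why does my car overheat?"))   # answered automatically
print(handle_query("Why won't it start?"))         # None: queued for an expert
```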

For reasons obvious to me (but maybe I should explain?), such an engine would
necessarily be SIMPLE - on the scale of Dr. Eliza, and nothing at all like
an AGI. The complexity absolutely MUST be in the data/knowledge/wisdom
and NOT in the engine itself, for otherwise, real-world structural detail
that ran orthogonal to the machine's structure would necessarily be
forever beyond the machine's ability to deal with.

I am NOT saying that Dr. Eliza is it, but it seems closer than other
approaches, and close enough to start considering what it can NOT do that
needs doing to achieve the goal of utilizing entered knowledge to answer
queries.

So, after MANY postings by both of us, I think I can clearly express our
fundamental difference in views, for us and others to refine:

View #1 (yours, stated from my viewpoint) is that machines with super
human-like intelligence will be useful to humans, as have machines with
super computational abilities (computers). This may be so, but I have yet to
see any evidence or a convincing case (see view #2).

View #2 (mine, stated from your approximate viewpoint) is that simple
programs (like Dr. Eliza) have in the past and will in the future do things
that people aren't good at. This includes tasks that encroach on
intelligence, e.g. modeling complex phenomena and refining designs. Note
that my own US Patent 4,274,684
(http://www.delphion.com/details?patent_number=4274684) is for
a bearing design that was refined by computer. However, such simple
programs are fundamentally limited to human-contributed knowledge/wisdom,
and will never ever come up with any new knowledge/wisdom of their own.

My counter: True, but neither will an AGI come up with any new and useful
knowledge/wisdom based on the crap that we might enter. It would have to
discover this for itself, probably after years/decades of observation and
interaction. Our own knowledge/wisdom comes with our own erroneous
prejudices, and hence would be of little value to developing new
knowledge/wisdom. Our civilization comes from just that - civilization. A
civilization of AGIs might indeed evolve into something powerful, but if you
just finished building one and turned it on tomorrow, it probably wouldn't
do anything valuable in your lifetime.

Your counter?

Steve Richfield





Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Matt Mahoney
--- On Thu, 9/18/08, Ben Goertzel [EMAIL PROTECTED] wrote:

Lets distinguish between the two major goals of AGI. The first is to
automate the economy. The second is to become immortal through uploading.

Peculiarly, you are leaving out what to me is by far the most important and 
interesting goal:

The creation of beings far more intelligent than humans yet benevolent toward 
humans

That's what I mean by an automated economy. Google is already more intelligent 
than any human at certain tasks. So is a calculator. Both are benevolent. They 
differ in the fraction of our tasks that they can do for us. When that fraction 
is 100%, that's AGI.

The first goal does not require any major breakthroughs in AI theory, just 
lots of work. If you have a lot of narrow AI and an infrastructure for 
routing natural language messages to the right experts, then you have AGI.

Then you have a hybrid human/artificial intelligence, which does not fully 
automate the economy, but only partially does so -- it still relies on human 
experts.

If humans are to remain in control of AGI, then we have to make informed, top 
level decisions. You can call this work if you want. But if we abdicate all 
thinking to machines, then where does that leave us?

-- Matt Mahoney, [EMAIL PROTECTED]






Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Mike Tintner
Steve: View #2 (mine, stated from your approximate viewpoint) is that simple 
programs (like Dr. Eliza) have in the past and will in the future do things 
that people aren't good at. This includes tasks that encroach on 
intelligence, e.g. modeling complex phenomena and refining designs.

Steve,

In principle, I'm all for the idea that I think you (and perhaps Bryan) have 
expressed of a GI Assistant - some program that could be of general 
assistance to humans dealing with similar problems across many domains. A 
diagnostics expert, perhaps, that could help analyse breakdowns in say, the 
human body, a car or any of many other machines, a building or civil structure, 
etc. etc. And it's certainly an idea worth exploring.

 But I have yet to see any evidence that it is any more viable than a proper 
AGI - because, I suspect, it will run up against the same problems of 
generalizing -  e.g. though breakdowns may be v. similar in many different 
kinds of machines, technological and natural, they will also each have their 
own special character.

If you are serious about any such project, it might be better to develop it 
first as an intellectual discipline rather than a program, to test its viability 
- perhaps what it really comes down to is a form of systems thinking or science.







Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Charles Hixson
I would go further.  Humans have demonstrated that they cannot be 
trusted in the long term even with the capabilities that we already 
possess.  We are too likely to have ego-centric rulers who make 
decisions not only for their own short-term benefit, but with an 
explicit After me the deluge mentality.  Sometimes they publicly admit 
it.  And history gives examples of rulers who were crazier than any 
leading a major nation-state at this time.


If humans were to remain in control, and technical progress stagnates, 
then I doubt that life on earth would survive the century.  Perhaps it 
would, though.  Microbes can be very hardy.
If humans were to remain in control, and technical progress accelerates, 
then I doubt that life on earth would survive the century.  Not even 
microbes.


I don't, however, say that we shouldn't have figurehead leaders who, 
within constraints, set the goals of the (first generation) AGI.  But 
the constraints would need to be such that humanity would benefit.  This 
is difficult when those nominally in charge not only don't understand 
what's going on, but don't want to.  (I'm not just talking about greed 
and power-hunger here.  That's a small part of the problem.)


For that matter, I consider Eliza to be a quite important feeler from 
the future.  AGI as psychologist is an underrated role, but one that I 
think could be quite important.  And it doesn't require a full AGI 
(though Eliza was clearly below the mark).  If things fall out well, I 
expect that long before full AGIs show up, sympathetic companions will 
arrive.  This is a MUCH simpler problem, and might well help stem the 
rising tide of insanity.   

A next step might be a personal secretary.  This also wouldn't require 
full AGI, though to take maximal advantage of it, it would require a 
body, but a minimal version wouldn't.  A few web-cams for eyes and mics 
for ears, and lots of initial help in dealing with e-mail, separating 
out which bills are legitimate.  Eventually it could, itself, verify 
that bills were legitimate and pay them, illegitimate and discard them, 
or questionable and present them to its human for processing.  It's a 
complex problem, probably much more so than the companion, but quite 
useful, and well short of requiring AGI.


The question is, at what point do these entities start acquiring a 
morality?  I would assert that it should be from the very beginning.  
Even the companion should try to guide its human away from immoral 
acts.  As such, the companion is acting as a quasi-independent agent, 
and is exerting some measure of control.  (More control if it's more 
skillful, or its human is more amenable.)  When one gets to the 
secretary, it's exhibiting (one hopes) honesty and just behavior (e.g., 
not billing for services that it doesn't believe were rendered).


At each step along the way the morality of the agent has implications 
for the destination that will be arrived at, as each succeeding agent is 
built from the basis of its predecessor.   Also note that scaling is 
important, but not determinative.  One can imagine the same entity, in 
different instantiations, being either the secretary to a school teacher 
or to a multi-national corporation.  (Of course the hardware required 
would be different, but the basic activities are, or could be, the 
same.  Specialized training would be required to handle the government 
regulations dealing with large corporations, but it's the same basic 
functions.  If one job is simpler than the other, just have the program 
able to handle either and both of them.)


So.  Unless one expects an overnight transformation (a REALLY hard 
takeoff), AGIs will evolve in the context of humans as directors to 
replace bureaucracies...but with their inherent morality.  As such, as 
they occupy a larger percentage of the bureaucracy, that section will 
become subject to their morality.  People will remain in control, just 
as they are now...and orders that are considered immoral will be ... 
avoided.  Just as bureaucracies do now.  But one hopes that the evolving 
AGIs will have superior moralities.



Ben Goertzel wrote:



Keeping humans in control is neither realistic nor necessarily 
desirable, IMO.


I am interested of course in a beneficial outcome for humans, and also 
for the other minds we create ... but this does not necessarily 
involve us controlling these other minds...


ben g



If humans are to remain in control of AGI, then we have to make
informed, top level decisions. You can call this work if you want.
But if we abdicate all thinking to machines, then where does that
leave us?

-- Matt Mahoney, [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]




Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Vladimir Nesov
On Fri, Sep 19, 2008 at 1:31 AM, Trent Waddington
[EMAIL PROTECTED] wrote:
 On Fri, Sep 19, 2008 at 3:36 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 Lets distinguish between the two major goals of AGI. The first is to automate
 the economy. The second is to become immortal through uploading.

 Umm, whose goals are these?  Who said they are the [..] goals of
 AGI?  I'm pretty sure that what I want AGI for is going to be
 different to what you want AGI for as to what anyone else wants AGI
 for.. and any similarities are just superficial.


And to boot, both of you don't really know what you want. You may try
to present plans as points designating a certain level of utility you
want to achieve through AI, by showing feasible plans that are quite
good in themselves. But these are neither the best scenarios
available, nor what will actually come to pass.

See this note by Yudkowsky:

http://www.sl4.org/archive/0212/5957.html

So if you're thinking that what you want involves chrome and steel,
lasers and shiny buttons to press, neural interfaces, nanotechnology,
or whatever great groaning steam engine has a place in your heart, you
need to stop writing a science fiction novel with yourself as the main
character, and ask yourself who you want to be. 

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Bryan Bishop
On Thursday 18 September 2008, Mike Tintner wrote:
 In principle, I'm all for the idea that I think you (and perhaps
 Bryan) have expressed of a GI Assistant - some program that could
 be of general assistance to humans dealing with similar
 problems across many domains. A diagnostics expert, perhaps, that
 could help analyse breakdowns in say, the human body, a car or any of
 many other machines, a building or civil structure, etc. etc. And
 it's certainly an idea worth exploring. 

That's just one of the many projects I have going, however. It's easy 
enough to wire it up to a simple perceptron, or weights-adjustable 
additive function, or even physically up to a neural tissue culture for 
sorting through the hiss and the noise of 'bad results'. This isn't 
your fabled intelligence.

  But I have yet to see any evidence that it is any more viable than a
 proper AGI - because, I suspect, it will run up against the same

It's not aiming to be AGI in the first place though.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Matt Mahoney
--- On Thu, 9/18/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

 And to boot, both of you don't really know what you want.

What we want has been programmed into our brains by the process of evolution. I 
am not pretending the outcome will be good. Once we have the technology to have 
everything we want, or to want what we have, then a more intelligent species 
will take over.

-- Matt Mahoney, [EMAIL PROTECTED]






Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Ben Goertzel
Matt M wrote:


 Peculiarly, you are leaving out what to me is by far the most important
 and interesting goal:
 
 The creation of beings far more intelligent than humans yet benevolent
 toward humans

 That's what I mean by an automated economy. Google is already more
 intelligent than any human at certain tasks. So is a calculator. Both are
 benevolent. They differ in the fraction of our tasks that they can do for
 us. When that fraction is 100%, that's AGI.



I believe there is a qualitative difference btw AGI and narrow-AI, so that
no tractably small collection of computationally-feasible narrow-AI's (like
Google etc.) are going to achieve general intelligence at the human level or
anywhere near.  I think you need an AGI architecture and approach that is
fundamentally different from narrow-AI approaches...

ben





Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Matt Mahoney
--- On Thu, 9/18/08, Trent Waddington [EMAIL PROTECTED] wrote:

 On Fri, Sep 19, 2008 at 3:36 AM, Matt Mahoney
 [EMAIL PROTECTED] wrote:
  Lets distinguish between the two major goals of AGI.
 The first is to automate the economy. The second is to
 become immortal through uploading.
 
 Umm, who's goals are these?  Who said they are
 the [..] goals of
 AGI?  I'm pretty sure that what I want AGI for is
 going to be
 different to what you want AGI for as to what anyone else
 wants AGI
 for.. and any similarities are just superficial.

So, I guess I should say, the two commercial applications of AGI. I realize 
people are working on AGI today as pure research, to better understand the 
brain, to better understand how to solve hard problems, and so on. I think 
eventually this knowledge will be applied for profit. Perhaps there are some 
applications I haven't thought of?

-- Matt Mahoney, [EMAIL PROTECTED]






Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Trent Waddington
On Fri, Sep 19, 2008 at 6:57 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 general intelligence at the human level

I hear you say these words a lot.  I think, by using the word "level",
you're trying to say something different to "general intelligence just
like humans have", but I'm not sure everyone else reads it that way.
Can you clarify?

Humans have all these interests that, although they might be
interesting to study with AGI, I'm not terribly interested in putting
in an AGI that I put to work.  I don't need an AGI that cries for its
mother, or thinks about eating, or yearns for freedom and so I simply
won't teach it these things.  If, by some fortuitous accident, it
happens to develop any of these concepts, or any other concepts that I
deem useless for the tasks I set it, I'll expect them to be quickly
purged from its limited memory space to make room for concepts that
are useful.  As such, I can imagine an AGI having a human level
intelligence that is very different to a human-like intelligence.

This is not to say that creating an AGI with human-like intelligence
is necessarily a bad thing.  Some people want to create simulated
humans, and that's interesting too.. just not as interesting to me.

Trent




Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Trent Waddington
On Fri, Sep 19, 2008 at 7:54 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
  Perhaps there are some applications I haven't thought of?

Bahahaha.. Gee, ya think?

Trent




Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Ben Goertzel
On Thu, Sep 18, 2008 at 9:02 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 --- On Thu, 9/18/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 I believe there is a qualitative difference btw AGI and narrow-AI, so that
 no tractably small collection of computationally-feasible narrow-AI's (like
 Google etc.) are going to achieve general intelligence at the human level or
 anywhere near.  I think you need an AGI architecture and approach that is
 fundamentally different from narrow-AI approaches...

 Well, yes, and that difference is a distributed index, which has yet to be
 built.


I extremely strongly disagree with the prior sentence ... I do not think
that a distributed index is a sufficient architecture for powerful AGI at
the human level, beyond, or anywhere near...




 Also, what do you mean by human level intelligence? What test do you use?
 My calculator already surpasses human level intelligence depending on the
 tests I give it.


Yes, and my dog surpasses human level intelligence at finding poop in a
grassy field ... so what?? ;-)

If I need to specify a test right now I'll just use the standard IQ tests as
a reference, or else the Turing Test

But I don't think these tests are ideal by any means...

One of the items on my list for this fall is the articulation of a clear set
of metrics for evaluating developing, learning AGI systems as they move
toward human-level AI ...

-- Ben G





Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Matt Mahoney
--- On Thu, 9/18/08, Trent Waddington [EMAIL PROTECTED] wrote:

 On Fri, Sep 19, 2008 at 7:54 AM, Matt Mahoney
 [EMAIL PROTECTED] wrote:

   Perhaps there are some applications I haven't
 thought of?
 
 Bahahaha.. Gee, ya think?

So perhaps you could name some applications of AGI that don't fall into the 
categories of (1) doing work or (2) augmenting your brain?

A third one occurred to me: launching a self improving or evolving AGI to 
consume all available resources, i.e. an intelligent worm or self replicating 
nanobots. This really isn't a useful application, but I'm sure somebody, 
somewhere, might think it would be really cool to see if it would launch a 
singularity and/or wipe out all DNA based life.

Oh, I'm sure the first person to try it would take precautions like inserting a 
self destruct mechanism that activates after some number of replications. (The 
1988 Morris worm had software intended to slow its spread, but it had a bug). 
Or maybe they will be like the scientists who believed that the idea of a chain 
reaction in U-235 was preposterous...
(Thankfully, the scientists who actually built the first atomic pile took some 
precautions, such as standing by with an axe to cut a rope suspending a cadmium 
control rod in case things got out of hand. They got lucky because of an 
unanticipated phenomenon in which a small number of nuclei had delayed fission, 
which made the chain reaction much easier to control).


-- Matt Mahoney, [EMAIL PROTECTED]





Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread John LaMuth

You have completely left out the human element or friendly-type appeal

How about an AGI personal assistant / tutor / PR interface?

Everyone should have one

The market would be virtually unlimited ...

John L

www.ethicalvalues.com

- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, September 18, 2008 6:34 PM
Subject: Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: 
Proprietary_Open_Source)




--- On Thu, 9/18/08, Trent Waddington [EMAIL PROTECTED] wrote:


On Fri, Sep 19, 2008 at 7:54 AM, Matt Mahoney
[EMAIL PROTECTED] wrote:



  Perhaps there are some applications I haven't
thought of?

Bahahaha.. Gee, ya think?


So perhaps you could name some applications of AGI that don't fall into 
the categories of (1) doing work or (2) augmenting your brain?


A third one occurred to me: launching a self improving or evolving AGI to 
consume all available resources, i.e. an intelligent worm or self 
replicating nanobots. This really isn't a useful application, but I'm sure 
somebody, somewhere, might think it would be really cool to see if it 
would launch a singularity and/or wipe out all DNA based life.


Oh, I'm sure the first person to try it would take precautions like 
inserting a self destruct mechanism that activates after some number of 
replications. (The 1988 Morris worm had software intended to slow its 
spread, but it had a bug). Or maybe they will be like the scientists who 
believed that the idea of a chain reaction in U-235 was preposterous...
(Thankfully, the scientists who actually built the first atomic pile took 
some precautions, such as standing by with an axe to cut a rope suspending 
a cadmium control rod in case things got out of hand. They got lucky 
because of an unanticipated phenomena in which a small number of nuclei 
had delayed fission, which made the chain reaction much easier to 
control).



-- Matt Mahoney, [EMAIL PROTECTED]









Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Trent Waddington
On Fri, Sep 19, 2008 at 11:34 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
 So perhaps you could name some applications of AGI that don't fall into the 
 categories of (1) doing work or (2) augmenting your brain?

Perhaps you could list some uses of a computer that don't fall into
the category of (1) computation (2) communication.  Do you see how
pointless reasoning at this level of abstraction is?

In the few short decades we've had personal computers the wealth of
different uses for *general* computation has been enchanting.  Lumping
them together and claiming you understand their effect on the world as
a result is ridiculous.  What commercial applications people will
apply AGI to is just as hard to predict as what applications people
would apply the personal computer to.

My comment was meant to indicate that your hubris in assuming you have
*any* idea what applications people will come up with for readily
available AGI is about on par with predictions for the use of digital
computers.. if not more so, as general intelligence is orders of
magnitude more disruptive than general computation.

And to get back to the original topic of conversation, putting
restrictions on the use of supposedly open source code, the effects of
those restrictions can no more be predicted than the potential
applications of the technology.  Which, I think, is a rational pillar
of the need for freedom.. you don't know better, so who are you to put
these restrictions on others?

Trent




Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Matt Mahoney
--- On Thu, 9/18/08, Ben Goertzel [EMAIL PROTECTED] wrote:

Well, yes, and that difference is a distributed index, which has yet to be 
built.

I extremely strongly disagree with the prior sentence ... I do not think that 
a distributed index is a sufficient architecture for powerful AGI at the human 
level, beyond, or anywhere near...

Well, keep in mind that I am not trying to build a human-like AGI with its own 
goals. I am designing a distributed system with billions of owners, each of 
whom has their own interests and (conflicting) goals. To the user, the AGI is 
like a smarter internet. It would differ from Google in that any message you 
post is instantly available to anyone who cares (human or machine). There is no 
distinction between queries and documents. Posting a message could initiate an 
interactive conversation, or result in related messages posted later being sent 
to you.

A peer needs two types of knowledge. It knows about some specialized topic, and 
it also knows which other peers are experts on related topics. For simple 
peers, related just means they share the same words, and a peer is simply a 
cache of messages posted and received recently by its owner. In my CMR 
proposal, messages are stamped with the ID and time of origin as well as any 
peers they were routed through. This cached header information constitutes 
knowledge about related peers. When a peer receives a message, it compares the 
words in it to cached messages and routes a copy to the peers listed in the 
headers of those messages. Peers have their own policies regarding their areas 
of specialization, which can be as simple as giving the cache priority to 
messages originating from its owner. There is no provision to delete messages 
from the network once they are posted. Each peer would have its own deletion 
policy.
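
A very rough sketch of the routing rule described above, in Python: compare the words of an incoming message against the peer's cache and forward a copy to the peers stamped in the headers of the most similar cached messages. The Message structure, the word-overlap similarity, and the top_k cutoff are my assumptions for illustration, not part of the CMR proposal.

```python
# Rough sketch of word-overlap routing for a CMR-style peer. The data
# structures and similarity rule are illustrative assumptions only.

from dataclasses import dataclass, field

@dataclass
class Message:
    text: str
    header_peers: list = field(default_factory=list)  # peer IDs it was routed through

cache = []   # this peer's recently posted/received messages

def route(incoming, top_k=3):
    """Return the set of peer IDs to forward the incoming message to."""
    words = set(incoming.text.lower().split())
    scored = sorted(cache,
                    key=lambda m: len(words & set(m.text.lower().split())),
                    reverse=True)
    targets = set()
    for cached in scored[:top_k]:
        targets.update(cached.header_peers)   # forward to peers seen in headers
    return targets

cache.append(Message("my laptop overheats and shuts down", ["peer_17", "peer_42"]))
print(route(Message("why does my laptop shut down when it overheats?")))
# -> {'peer_17', 'peer_42'} (order may vary)
```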

The environment is competitive and hostile. Peers compete for reputation and 
attention by providing quality information, which allows them to charge more 
for routing targeted ads. Peers are responsible for authenticating their 
sources, and risk blacklisting if they route too much spam. Peers thus have an 
incentive to be intelligent, for example, using better language models such as 
a stemmer, thesaurus, and parser to better identify related messages, or 
providing specialized services that understand a narrow subset of natural 
language, the way Google calculator understands questions like "how many 
gallons in 50 cubic feet?"

So yeah, it is a little different than narrow AI.

As to why I'm not building it, it's because I estimate it will cost $1 
quadrillion. Google controls about 1/1000 of the computing power of the 
internet. I am talking about building something 1000 times bigger.

-- Matt Mahoney, [EMAIL PROTECTED]



Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Matt Mahoney
--- On Thu, 9/18/08, John LaMuth [EMAIL PROTECTED] wrote:

 You have completely left out the human element or
 friendly-type appeal
 
 How about an AGI personal assistant / tutor / PR interface
 
 Everyone should have one
 
 The market would be virtually unlimited ...

That falls under the category of (1) doing work.



-- Matt Mahoney, [EMAIL PROTECTED]


 - Original Message - 
 From: Matt Mahoney [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Thursday, September 18, 2008 6:34 PM
 Subject: Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)
 
 
  --- On Thu, 9/18/08, Trent Waddington
 [EMAIL PROTECTED] wrote:
 
  On Fri, Sep 19, 2008 at 7:54 AM, Matt Mahoney [EMAIL PROTECTED] wrote:

   Perhaps there are some applications I haven't thought of?

  Bahahaha.. Gee, ya think?

  So perhaps you could name some applications of AGI that don't fall into
  the categories of (1) doing work or (2) augmenting your brain?



Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Ben Goertzel

 So perhaps you could name some applications of AGI that don't fall into the
 categories of (1) doing work or (2) augmenting your brain?


3) learning as much as possible

4) proving as many theorems as possible

5) figuring out how to improve human life as much as possible

Of course, if you wish to put these under the category of doing work
that's fine ... in a physics sense I guess every classical physical process
does work ...

ben



Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Matt Mahoney
--- On Thu, 9/18/08, Trent Waddington [EMAIL PROTECTED] wrote:

 On Fri, Sep 19, 2008 at 11:34 AM, Matt Mahoney
 [EMAIL PROTECTED] wrote:
  So perhaps you could name some applications of AGI
 that don't fall into the categories of (1) doing work or
 (2) augmenting your brain?
 
 Perhaps you could list some uses of a computer that
 don't fall into
 the category of (1) computation (2) communication.  Do you
 see how
 pointless reasoning at this level of abstraction is?

No, it is not (and besides, there is (3) storage). We can usefully think of the
primary uses of computers as going through different phases, e.g.

1950-1970 - computation (numerical calculation)
1970-1990 - storage (databases)
1990-2010 - communication (internet)
2010-2030 - profit-oriented AI (automating the economy)
2030-2050 - brain augmentation and uploading

 And to get back to the original topic of conversation, putting
 restrictions on the use of supposedly open source code, the effects of
 those restrictions can no more be predicted than the potential
 applications of the technology. Which, I think, is a rational pillar
 of the need for freedom: you don't know better, so who are you to put
 these restrictions on others?

I don't advocate any such thing, even if it were practical.

-- Matt Mahoney, [EMAIL PROTECTED]




Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread John LaMuth


- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, September 18, 2008 7:45 PM
Subject: Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: 
Proprietary_Open_Source)




--- On Thu, 9/18/08, John LaMuth [EMAIL PROTECTED] wrote:


You have completely left out the human element or
friendly-type appeal

How about an AGI personal assistant / tutor / PR interface

Everyone should have one

The market would be virtually unlimited ...


That falls under the category of (1) doing work.



-- Matt Mahoney, [EMAIL PROTECTED]




I always advocated a clear separation between work and PLAY

Here the appeal would be amusement / entertainment - not any specified work 
goal


Have my PR-AI call your PR-AI!!

and Show Me the $$$ !!

JLM

www.emotionchip.net




Repair Theory (was Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source))

2008-09-18 Thread Steve Richfield
Mike,

On 9/18/08, Mike Tintner [EMAIL PROTECTED] wrote:

  Steve: View #2 (mine, stated from your approximate viewpoint) is that
 simple programs (like Dr. Eliza) have in the past and will in the future do
 things that people aren't good at. This includes tasks that encroach on
 intelligence, e.g. modeling complex phenomena and refining designs.

 Steve,

 In principle, I'm all for the idea that I think you (and perhaps Bryan)
 have expressed of a GI Assistant - some program that could be of general
 assistance to humans dealing with similar problems across many domains. A
 diagnostics expert, perhaps, that could help analyse breakdowns in say, the
 human body, a car or any of many other machines, a building or civil
 structure, etc. etc. And it's certainly an idea worth exploring.

 But I have yet to see any evidence that it is any more viable than a proper
 AGI - because, I suspect, it will run up against the same problems of
 generalizing -  e.g. though breakdowns may be v. similar in many different
 kinds of machines, technological and natural, they will also each have their
 own special character.


Certainly true. That is why it must incorporate lots of domain-specific
knowledge rather than being a completed work at the get-go. Every domain has
its own, as you put it, special character.


 If you are serious about any such project, it might be better to develop it
 first as an intellectual discipline rather than a program, to test its
 viability - perhaps what it really comes down to is a form of systems
 thinking or science.


This has been done over and over again by many people in various disciplines
(e.g. *Zen and the Art of Motorcycle Maintenance*). Common rules/heuristics
have emerged, e.g.:
1.  Fixing your biggest problem will fix 80% of its manifestations. Then, to
work on the remaining 20%, loop back to the beginning of this rule...
2.  Complex systems usually only suffer from dozens, not thousands, of
potential problems. The knowledge base needed to fix the vast majority of
problems in any particular domain is surprisingly short.
3.  Symptoms are usually expressed simply, e.g. shallow parsing would
recognize most of them.
4.  Chronic problems are evidence of a lack of knowledge/understanding.
5.  Repair is a process and not an act. We must design that process to lead
to a successful repair.
6.  Often the best repair process is to simply presume that the failure is
the cheapest thing that could possibly fail, and proceed on that assumption.
This often leads to the real problem, and with a minimum of wasted effort
(see the small sketch after this list).
7.  Etc. I could go on like this for quite a while.
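
To make this concrete, here is a small Python sketch of heuristic 6, with the
loop-back spirit of heuristic 1. The function name and the toy car-starting
example are mine, purely illustrative:

def cheapest_first_repair(symptoms, candidates):
    """candidates: list of (name, cost, fix) tuples, where fix(symptoms)
    returns whatever symptoms remain after attempting that repair."""
    remaining = set(symptoms)
    for name, cost, fix in sorted(candidates, key=lambda c: c[1]):
        if not remaining:
            break                        # nothing left to explain; we are done
        remaining = fix(remaining)
        print("tried %s (cost %s); remaining: %s" % (name, cost, remaining or "none"))
    return remaining                     # leftovers point at a knowledge gap (heuristic 4)

# Hypothetical car-starting example, cheapest candidate first:
candidates = [
    ("replace alternator", 300, lambda s: s - {"won't crank"}),
    ("charge battery",      20, lambda s: s - {"won't crank", "dim lights"}),
    ("clean terminals",      5, lambda s: s - {"dim lights"}),
]
cheapest_first_repair({"won't crank", "dim lights"}, candidates)

Nothing here is clever; the point is that the repair process itself, not any
single act, is what gets designed (heuristic 5).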

I have considered writing a book, something like "Introduction to Repair
Theory," that outlines how to successfully tackle hypercomplex systems like
our own bodies, even where millions of dollars in failed research has
preceded us. The same general methods can be applied to repairing large
(e.g. VME) circuit boards with no documentation, addressing social and
political problems, etc.

My question: Why bother writing a book, when a program is a comparable
effort that is worth MUCH more?

From what I have seen, some disciplines like auto mechanics are open to (and
indeed are the source of much of) this sort of technology, while other
disciplines like medicine are completely closed-minded and actively
uninterested. Hence, neither of these would benefit much, if at all. Only
disciplines that are somewhere in between could benefit, and I don't at the
moment know of any such disciplines. Do you?

However, a COMPUTER removes the human ego from the equation, so that people
would simply presume that it runs on PFM (Pure Frigging Magic) and accept
advice that they would summarily reject if it came from a human.

Anyway, those are my thoughts for your continuing comment.

Steve Richfield


