On 02/12/2007, Ed Porter [EMAIL PROTECTED] wrote:
I currently think there are some human-level intelligences who know
how to build most of an AGI, at least enough to get systems up and running
that would solve many aspects of the AGI problem and help us better
understand what, if any
On Sunday 02 December 2007, John G. Rose wrote:
Building up parse trees and word sense models, let's say that would
be a first step. And then say after a while this was accomplished and
running on some peers. What would the next theoretical step be?
I am not sure what the next step would be.
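[A minimal sketch of the "word sense models" part of that first step, in Python: pick a sense by counting word overlap between the sentence and a hand-written gloss (a simplified Lesk heuristic). The sense inventory and glosses below are invented placeholders, not anything proposed in the thread.]

# Toy word-sense model: choose the sense whose gloss best overlaps the context.
GLOSSES = {
    "bank": {
        "finance": "institution money deposit loan account",
        "river": "land slope edge water river side",
    },
}

def pick_sense(word, sentence):
    """Return the sense whose gloss shares the most words with the sentence."""
    context = set(sentence.lower().split())
    scores = {
        sense: len(context & set(gloss.split()))
        for sense, gloss in GLOSSES[word].items()
    }
    return max(scores, key=scores.get)

print(pick_sense("bank", "He rowed the boat to the river bank near the water"))
# -> 'river'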
Yesterday I heard the phrase Artificial General Intelligence on the
radio for the first time ever:
http://www.npr.org/templates/story/story.php?storyId=16816185
Weekend Edition Sunday, December 2, 2007 · The idea of what Artificial
Ed Porter wrote:
Once you build up good models for parsing and word sense, then you read
large amounts of text and start building up a model of the realities described
and generalizations from them.
Assuming this is a continuation of the discussion of an AGI-at-home P2P
system, you are going to be very limited by
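[One toy reading of the pipeline Ed describes, assuming a parsing stage that already emits (subject, verb, object) triples; the triples and the "generalization" step below are invented for illustration.]

from collections import Counter

# Output of an assumed upstream parse of some text corpus.
triples = [
    ("dog", "chase", "cat"),
    ("dog", "chase", "ball"),
    ("cat", "chase", "mouse"),
    ("dog", "eat", "food"),
]

# Generalize over the described realities: which actions does each entity take?
model = Counter((subj, verb) for subj, verb, obj in triples)
for (subj, verb), n in model.most_common():
    print(f"{subj} -> {verb}: seen {n}x")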
Bob Mottram wrote:
Perhaps a good word of warning is that it will be really easy to
satirise/lampoon/misrepresent AGI and its proponents until such time
as one is actually created.
The problem is that these two activities - denigrating AGI, and actually
building one - are not two independent
On 03/12/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
Yesterday I heard the phrase Artificial General Intelligence on the
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
The reason it reminds me of this episode is that you are calmly talking
here about the high dimensional problem of seeking to understand the
meaning of text, which often involves multiple levels of implication,
which would normally be
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Meanwhile, unfortunately, solving all those other issues like making
parsers and trying to do word-sense disambiguation would not help one
whit to get the real theoretical task done.
I agree. AI has a long history of doing the easy part of the
Matt: The whole point of using massive parallel computation is to do the
hard part of the problem.
I get it: you and most other AI-ers are equating hard with very, very
complex, right? But you don't seriously think that the human mind
successfully deals with language by massive parallel
From: Bryan Bishop [mailto:[EMAIL PROTECTED]
I am not sure what the next step would be. The first step might be
enough for the moment. When you have the network functioning at all,
expose an API so that other programmers can come in and try to utilize
sentence analysis (and other functions)
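[A minimal sketch of "expose an API" using only the Python standard library. The analysis function is a placeholder where a real node would call into its actual sentence-analysis machinery; the port and JSON shape are assumptions.]

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def analyze(sentence):
    # Placeholder analysis: token list plus a trivial length feature.
    tokens = sentence.split()
    return {"tokens": tokens, "num_tokens": len(tokens)}

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        sentence = self.rfile.read(length).decode("utf-8")
        body = json.dumps(analyze(sentence)).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Try it with: curl -d "the dog chased the cat" localhost:8080
    HTTPServer(("localhost", 8080), Handler).serve_forever()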
From: Ed Porter [mailto:[EMAIL PROTECTED]
Once you build up good models for parsing and word sense, then you read
large amounts of text and start building up a model of the realities described
and generalizations from them.
Assuming this is a continuation of the discussion of an AGI-at-home
John G. Rose wrote:
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
[snip]
I am not being negative; I am just relaying the standard understanding
of priorities in the AGI field as a whole. Send complaints addressed to
AGI Community, not to me, please.
You are being negative! And since
Mike Tintner wrote:
Matt: The whole point of using massive parallel computation is to do the
hard part of the problem.
I get it: you and most other AI-ers are equating hard with very,
very complex, right? But you don't seriously think that the human mind
successfully deals with language
Ed Porter wrote:
Richard,
It is false to imply that knowledge of how to draw implications from a
series of statements by some sort of search mechanism is as unknown as
that of how to make an anti-gravity drive -- if by anti-gravity drive you
mean some totally unknown form of physics,
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
It is easy for a research field to agree that certain problems are
really serious and unsolved.
A hundred years ago, the results of the Michelson-Morley experiments
were a big unsolved problem, and pretty serious for the foundations of
John G. Rose wrote:
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
It is easy for a research field to agree that certain problems are
really serious and unsolved.
A hundred years ago, the results of the Michelson-Morley experiments
were a big unsolved problem, and pretty serious for the
My suggestion, criticized below (criticism can be valuable), was for just
one of many possible uses of an open-source P2P AGI-at-home type system. I
am totally willing to hear other proposals. Considering how little time I
spent coming up with the one being criticized, I have a relatively low
On 03/12/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
it is truly
astonishing to hear people talking about issues being more or less
solved, bar the shouting.
You'll usually find that such people never trouble themselves with
implementational details. Intuitive notions about how easy
MIKE TINTNER Isn't it obvious that the brain is able to understand the
wealth of language by relatively few computations - quite intricate,
hierarchical, multi-levelled processing,
ED PORTER How do you find the right set of relatively few computations
and/or models that are appropriate in a
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
I think this is a very important issue in AGI, which is why I felt
compelled to say something.
As you know, I keep trying to get meaningful debate to happen on the
subject of *methodology* in AGI. That is what my claims about the
complex
For some lucky cable folks the BW is getting ready to increase soon:
http://arstechnica.com/news.ars/post/20071130-docsis-3-0-possible-100mbps-speeds-coming-to-some-comcast-users-in-2008.html
I have yet to fully understand the limitations of a P2P-based AGI design or the
augmentational ability of
On Dec 3, 2007, at 12:52 PM, John G. Rose wrote:
For some lucky cable folks the BW is getting ready to increase soon:
http://arstechnica.com/news.ars/post/20071130-docsis-3-0-possible-100mbps-speeds-coming-to-some-comcast-users-in-2008.html
I have yet to fully understand the limitations of a
From: J. Andrew Rogers [mailto:[EMAIL PROTECTED]
Distributed algorithms tend to be far more sensitive to latency than
bandwidth, except to the extent that low bandwidth induces latency.
As a practical matter, the latency floor of P2P is so high that most
algorithms would run far faster on
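[Rogers' point fits in one line of arithmetic: transfer time is roughly latency + size/bandwidth, so for the small messages such a system would exchange, latency dominates. All figures below are assumed, not measured.]

def transfer_time(msg_bytes, latency_s, bandwidth_bps):
    return latency_s + (msg_bytes * 8) / bandwidth_bps

lan = transfer_time(1_000, 0.0001, 1e9)  # ~0.1 ms latency, 1 Gbit/s link
p2p = transfer_time(1_000, 0.100, 1e6)   # ~100 ms latency, 1 Mbit/s link

print(f"LAN: {lan*1000:.2f} ms, P2P: {p2p*1000:.2f} ms")
# LAN: 0.11 ms, P2P: 108.00 ms -- latency, not bandwidth, sets the floor.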
--- Ed Porter [EMAIL PROTECTED] wrote:
And (2) with regard to the order of NL learning, I think a child actually
learns semantics first
Actually Jusczyk showed that babies learn the rules for segmenting continuous
speech at 7-10 months. I did some experiments in 1999 following the work of
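[For flavor, a toy version of that kind of segmentation experiment: posit a word boundary wherever the letter-to-letter transition probability drops below a threshold. The corpus and threshold are invented; this is not the 1999 experiment itself.]

from collections import Counter

corpus = "thedogsawthecatthedograntothecat"
pairs = Counter(zip(corpus, corpus[1:]))
firsts = Counter(corpus[:-1])

def p(a, b):
    """P(next letter is b | current letter is a), estimated from the corpus."""
    return pairs[(a, b)] / firsts[a]

segmented = corpus[0]
for a, b in zip(corpus, corpus[1:]):
    if p(a, b) < 0.5:      # low transition probability -> likely word boundary
        segmented += " "
    segmented += b
print(segmented)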
On Dec 3, 2007 5:07 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
When a user asks a question or posts information, the message would be
broadcast to many nodes, which could choose to ignore it or relay it to
other nodes that they believe would find the message more relevant. Eventually
the
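[A sketch of that routing rule, with word overlap standing in for a real relevance model; the threshold, fan-out, and hop limit are all assumptions.]

def relevance(message, interests):
    words = set(message.lower().split())
    return len(words & interests) / max(len(words), 1)

def handle(message, node, peers, ttl=5):
    """Deliver a message to a node, which ignores it or selectively relays it."""
    if ttl == 0 or message in node["inbox"]:
        return  # hop limit reached, or this node has already seen it
    if relevance(message, node["interests"]) < 0.1:
        return  # node chooses to ignore the message
    node["inbox"].append(message)
    ranked = sorted(peers, key=lambda p: relevance(message, p["interests"]),
                    reverse=True)
    for peer in ranked[:2]:  # relay to the two most relevant peers
        handle(message, peer, peers, ttl - 1)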
John G. Rose wrote:
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
I think this is a very important issue in AGI, which is why I felt
compelled to say something.
As you know, I keep trying to get meaningful debate to happen on the
subject of *methodology* in AGI. That is what my claims
On Dec 3, 2007 12:12 PM, Mike Tintner [EMAIL PROTECTED] wrote:
I get it: you and most other AI-ers are equating hard with very, very
complex, right? But you don't seriously think that the human mind
successfully deals with language by massive parallel computation, do you?
Very very complex
RL: One thing that can be easily measured is the activation of lexical
items related in various ways to a presented word (e.g. show the subject
the word Doctor and test to see if the word Nurse gets activated).
It turns out that within an extremely short time of the first word being
seen, a very
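[A toy spreading-activation sketch of the priming effect RL describes: activating "doctor" pushes some activation to its neighbors, so "nurse" becomes partially active. The network, weights, and decay are invented.]

network = {
    "doctor": {"nurse": 0.8, "hospital": 0.6},
    "nurse": {"doctor": 0.8, "hospital": 0.5},
    "hospital": {"doctor": 0.6, "nurse": 0.5},
}

def activate(word, strength=1.0, decay=0.5, depth=2, activation=None):
    """Spread decaying activation outward from a word, keeping the max seen."""
    activation = {} if activation is None else activation
    activation[word] = max(activation.get(word, 0.0), strength)
    if depth > 0:
        for neighbor, weight in network[word].items():
            activate(neighbor, strength * weight * decay, decay,
                     depth - 1, activation)
    return activation

print(activate("doctor"))
# {'doctor': 1.0, 'nurse': 0.4, 'hospital': 0.3} -- "nurse" is now primed.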
--- Mike Tintner [EMAIL PROTECTED] wrote:
On the one hand, we can perhaps agree that one of the brain's glories is
that it can very rapidly draw analogies - that I can quickly produce a
string of associations like, say, snake, rope, chain, spaghetti
strand - and you may quickly be able to
RICHARD LOOSEMORE I cannot even begin to do justice, here, to the issues
involved in solving the high dimensional problem of seeking to understand
the meaning of text, which often involves multiple levels of implication,
which would normally be accomplished by some sort of search of a large
Mike Tintner wrote:
RL: One thing that can be easily measured is the activation of lexical
items related in various ways to a presented word (e.g. show the subject
the word Doctor and test to see if the word Nurse gets activated).
It turns out that within an extremely short time of the first
MIKE TINTNER Isn't it obvious that the brain is able to understand the
wealth of language by relatively few computations - quite intricate,
hierarchical, multi-levelled processing,
ED PORTER How do you find the right set of relatively few computations
and/or models that are appropriate in a
Ed Porter wrote:
RICHARD LOOSEMORE I cannot even begin to do justice, here, to the issues
involved in solving the high dimensional problem of seeking to understand
the meaning of text, which often involves multiple levels of implication,
which would normally be accomplished by some sort of search
Matt: Semantic models learn associations by proximity in the training text.
The degree to which you associate snake and rope depends on how often these
words appear near each other
Correct me - but it's the old, old problem here, isn't it? Those semantic
models/programs won't be able to form
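[The mechanism Matt describes takes only a few lines: count how often two words fall within a small window of each other and normalize by their frequencies (a PMI-like score). The corpus and window size are toy assumptions.]

from collections import Counter

corpus = ("the snake looked like a rope "
          "a rope and a chain hung on the door "
          "the snake slid under the chain").split()

WINDOW = 4
cooc = Counter()
for i, w in enumerate(corpus):
    for v in corpus[i + 1 : i + 1 + WINDOW]:
        cooc[frozenset((w, v))] += 1  # unordered pair within the window

freq = Counter(corpus)

def association(a, b):
    """Co-occurrence count normalized by individual word frequencies."""
    return cooc[frozenset((a, b))] / (freq[a] * freq[b])

print(association("snake", "rope"))  # 0.25 -- they occur near each other
print(association("rope", "door"))   # 0.0  -- they never share a window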
--- Ed Porter [EMAIL PROTECTED] wrote:
We do not know the number and width of the spreading activation that is
necessary for human level reasoning over world knowledge. Thus, we really
don't know how much interconnect is needed and thus how large a P2P net
would be needed for impressive
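[Ed's unknowns can at least be parameterized: every activated node at each hop costs roughly one network message, so the "width" of the activation drives the interconnect requirement. All figures below are assumptions made purely for the arithmetic.]

fanout = 50         # links followed per activated concept (assumed)
hops = 3            # depth of the activation wave (assumed)
queries_per_s = 10  # concurrent reasoning operations (assumed)

messages = sum(fanout ** h for h in range(1, hops + 1)) * queries_per_s
print(f"{messages:,} messages/s across the net")  # 1,275,500 messages/s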
--- Mike Tintner [EMAIL PROTECTED] wrote:
Matt: Semantic models learn associations by proximity in the training text.
The degree to which you associate snake and rope depends on how often these
words appear near each other
Correct me - but it's the old, old problem here, isn't it? Those
Mike
-----Original Message-----
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Sent: Monday, December 03, 2007 8:25 PM
To: agi@v2.listbox.com
Subject: Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]
MIKE TINTNER Isn't it obvious that the brain is able to understand the wealth
RICHARD LOOSEMORE= I'm sorry, but this is not addressing the actual
issues involved.
You are implicitly assuming a certain framework for solving the problem
of representing knowledge ... and then all your discussion is about
whether or not it is feasible to implement that framework (to
Richard Loosemore= None of the above is relevant. The issue is not
whether toy problems
set within the current paradigm can be done with this or that search
algorithm, it is whether the current paradigm can be made to converge at
all for non-toy problems.
Ed Porter= Richard, I
Matt,
In my Mon 12/3/2007 8:17 PM post to John Rose, from which you are probably
quoting below, I discussed the bandwidth issues. I am assuming nodes
directly talk to each other, which is probably overly optimistic, but still
are limited by the fact that each node can only receive somewhere
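[The receive-side ceiling Ed alludes to is easy to bound: whatever the topology, a node can take in at most its downstream bandwidth. The figures are assumed 2007-era values.]

down_bps = 1_500_000  # ~1.5 Mbit/s cable downstream (assumed)
msg_bytes = 500       # assumed average message size

msgs_per_s = down_bps / (msg_bytes * 8)
print(f"{msgs_per_s:.0f} messages/s per node")  # 375 messages/s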
Matt,
In addition to my last email, I don't understand what you were saying below
about complexity. Are you saying that as a system becomes bigger it
naturally becomes unstable, or what?
Ed Porter
-----Original Message-----
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Monday, December
On Thursday 29 November 2007, Ed Porter wrote:
Somebody (I think it was David Hart) told me there is a shareware
distributed web crawler already available, but I don't know the
details, such as how good or fast it is.
http://grub.org/
Previous owner went by the name of 'kordless'. I found him
Ed Porter wrote:
Richard Loosemore= None of the above is relevant. The issue is not
whether toy problems
set within the current paradigm can be done with this or that search
algorithm, it is whether the current paradigm can be made to converge at
all for non-toy problems.
Ed
On Monday 03 December 2007, Mike Dougherty wrote:
I believe the next step of such a system is to become an abstraction
between the user and the network they're using. So if you can hook
into your P2P network via a Firefox extension (consider StumbleUpon
or Greasemonkey), it (the agent) can
Ed Porter wrote:
RICHARD LOOSEMORE= I'm sorry, but this is not addressing the actual
issues involved.
You are implicitly assuming a certain framework for solving the problem
of representing knowledge ... and then all your discussion is about
whether or not it is feasible to implement
RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]
ED Yes, but there are a lot of types of thinking that cannot be done by shape alone,
and shape is actually much more complicated than it sounds. There is shape, and
shape distorted by perspective, and shape changed by bending, and
Ed,
Well it'd be nice having a supercomputer but P2P is a poor man's
supercomputer and beggars can't be choosers.
Honestly the type of AGI that I have been formulating in my mind has not
been at all closely related to simulating neural activity through
orchestrating partial and mass activations at
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Top three? I don't know if anyone ranks them.
Try:
1) Grounding Problem (the *real* one, not the cheap substitute that
everyone usually thinks of as the symbol grounding problem).
2) The problem of designing an inference control engine