http://dilbert.com/strips/comic/2008-11-12/
What's the worst thing that could happen?
http://dilbert.com/strips/comic/2008-11-11/
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Bryan,
To my taste, testing with clueless judges is the more appropriate
approach. It makes the test less biased.
How can they judge when they don't know what they are judging? Surely,
when they hang out for some cyberlovin', they are not scanning for
intelligence. Our mostly in-bred stupidity is
http://blog.pmarca.com/2007/12/checking-in-on.html
===
If CyberLover works as described, it will qualify as one of the first
computer programs ever written that is actually passing the Turing Test.
===
-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change
Matt,
You can feed it text. Then the AGI would simply parse the text [and,
optionally, Google it].
No need for massive computational capabilities.
Not when you can just use Google's 10^6 CPU cluster and its database with 10^9
human contributors.
That's one of my points: our current
Bryan,
If CyberLover works as described, it will qualify as one of the first
computer programs ever written that is actually passing the Turing
Test.
I thought the Turing Test involved fooling/convincing judges, not
clueless men hoping to get some action?
To my taste, testing with
Mike,
What you describe is a set of AGI nodes.
An AGI prototype is just one such node.
An AGI researcher doesn't have to develop the whole set at once. It's quite
sufficient to develop only one AGI node. Such a node will be able to
run on a single PC.
I believe Matt's proposal is not as much about the
Richard,
Did you know, for example, that certain kinds of brain damage can leave
a person with the ability to name a visually presented object, but then
be unable to pick the object up and move it through space in a way that
is consistent with the object's normal use ... and that another
Derek,
Low level design is not critical for AGI. Instead we observe high level brain
patterns and try to implement them on top of our own, more understandable,
low level design.
I am curious what you mean by high level brain patterns
though. Could you give an example?
1) All
Richard,
Let's save us both some time and wait until somebody else reads this
Cognitive Science book and comes here to discuss it.
:-)
Though interesting, interpreting brain-damage experiments is not the
most important thing for AGI development.
In both cases the vision module works well.
Richard,
This could be called a communication problem, but it is internal, and in
the AGI case it is not as simple as just miscalculated numbers.
Communication between subsystems is still communication.
So I suggest calling it a communication problem.
So here is a revised version of the
Matt,
No, my proposal requires lots of regular PCs with regular network connections.
A properly connected set of regular PCs would usually have far more
power than a single regular PC.
That makes your hardware request special.
My point is that AGI can successfully run on a single regular PC.
Special hardware
Matt,
Matt: AGI research needs
special hardware with massive computational capabilities.
Could you give an example or two of the kind of problems that your AGI
system(s) will need such massive capabilities to solve? It's so good - in
fact, I would argue, essential - to ground these
Mike,
1. Bush walks like a cowboy, doesn't he?
The only way a human - or a machine - can make sense of sentence 1 is by
referring to a mental image/movie of Bush walking.
That's not the only way to make sense of the saying.
There are many other ways: chat with other people, or look it up on Google:
Richard,
the instance nodes are such an
important mechanism that everything depends on the details of how they
are handled.
Correct.
So, to consider one or two of the details that you mention. You would
like there to be only a one-way connection between the generic node (do
you call
Richard,
It's a neural network -- a set of nodes (concepts), where every node can be
connected to a set of other nodes. Every connection has its own
weight.
Some nodes are connected with external devices.
For example, one node can be connected with one word in text
dictionary (that is an
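The node-and-weights structure described above can be sketched in a few lines (a toy illustration of my own; the class and names are not from any actual AGI codebase):

```python
# Toy sketch of the network described above: nodes (concepts) joined by
# weighted connections; some nodes are bound to external words.
class Node:
    def __init__(self, label=None):
        self.label = label   # e.g. a word from a text dictionary, or None
        self.links = {}      # maps other Node -> connection weight

    def connect(self, other, weight):
        # store the connection in both directions with its own weight
        self.links[other] = weight
        other.links[self] = weight

cat = Node("cat")        # node connected to an external word
animal = Node()          # purely internal concept node
cat.connect(animal, 0.8)

print(cat.links[animal])   # -> 0.8
```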
Mike,
Matt:: The whole point of using massive parallel computation is to do the
hard part of the problem.
The whole idea of massive parallel computation here, surely has to be wrong.
And yet none of you seem able to face this to my mind obvious truth.
Whom do you mean by "you" in this
Richard,
1) Grounding Problem (the *real* one, not the cheap substitute that
everyone usually thinks of as the symbol grounding problem).
Could you describe what the *real* grounding problem is?
It would be nice to consider an example.
Say, we are trying to build AGI for the purpose of running
Richard,
3) A way to represent things - and in particular, uncertainty - without
getting buried up to the eyeballs in (e.g.) temporal logics that nobody
believes in.
Conceptually, the way of representing things is described very well.
It's a neural network -- a set of nodes (concepts), where every
John,
If you look at nanotechnology one of the goals is to build machines that
build machines. Couldn't software based AGI be similar?
Eventually AGIs will be able to build other AGIs, but first AGI models
won't be able to build any software.
Matt,
Using pointers saves memory but sacrifices speed. Random memory access is
slow due to cache misses. By using a matrix, you can perform vector
operations very fast in parallel using SSE2 instructions on modern processors,
or a GPU.
I doubt it.
http://en.wikipedia.org/wiki/SSE2 -
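Matt's matrix-versus-pointers point can be illustrated with a toy propagation step (a sketch of mine; the network and numbers are made up, and it does not measure the speed claim itself):

```python
# The same weighted network stored two ways. The dense-matrix form turns
# a whole propagation step into one vectorized multiply -- the kind of
# contiguous operation SSE2/GPU hardware accelerates -- while the
# pointer (adjacency-dict) form chases links node by node.
import numpy as np

adj = {0: {1: 0.5}, 1: {0: 0.5, 2: 0.25}, 2: {1: 0.25}}  # node -> {neighbor: weight}

W = np.zeros((3, 3))                 # dense-matrix form of the same network
for i, nbrs in adj.items():
    for j, w in nbrs.items():
        W[i, j] = w

activation = np.array([1.0, 0.0, 0.0])

# pointer style: per-node link chasing
ptr_result = [sum(w * activation[j] for j, w in adj[i].items()) for i in range(3)]

# matrix style: a single vectorized product
mat_result = W @ activation

print(ptr_result)             # -> [0.0, 0.5, 0.0]
print(mat_result.tolist())    # -> [0.0, 0.5, 0.0]
```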
Benjamin,
Obviously, most researchers who have developed useful narrow-AI
components have not gotten rich from it.
My example is the Google founders, who developed a narrow-AI
component (Google).
What is your example of useful narrow-AI component developers who
have not gotten rich from it?
The
Benjamin,
E.g.: Google, computer languages, network protocols, databases.
These are tools that are useful for AGI RD but so are computer
monitors, silicon chips, and desk chairs.
1) Yes, creating the monitor contributed a lot to AGI too.
2) Technologies that I mentioned above are useful on
John,
Note that the compiler doesn't build the application.
The programmer does (using the compiler as a tool).
Very true. So then, is the programmer + compiler more complex than the AGI
ever will be?
No.
I don't even see how it relates to what I wrote above ...
Or at some point does the AGI build and
John,
Example - When we create software applications we use compilers. When the
applications get more complex we have to improve the compilers (otherwise
AutoCad 2007 could be built with QBasic). For AGI do we need to improve the
compilers to the point where they actually write the source
or more IO modules.
I'd say that text IO is the most useful one.
Visual/Sound/Touch stuff is not critical.
Friday, November 30, 2007, 2:13:14 AM, you wrote:
On 30/11/2007, Dennis Gorelik [EMAIL PROTECTED] wrote:
For example, mouse has strong image and sound recognition ability.
AGI doesn't
be required, and I assumed this included access to computer science and
computer technology sources, to which the peasants of the Middle Ages would not
have had access.
So I don't understand your problem.
-Original Message-
From: Dennis Gorelik [mailto:[EMAIL PROTECTED]
Sent: Friday
Benjamin,
That proves my point [that AGI project can be successfully split
into smaller narrow AI subprojects], right?
Yes, but it's a largely irrelevant point. Because building a narrow-AI
system in an AGI-compatible way is HARDER than building that same
narrow-AI component in a
Ed,
At the current stages this may be true, but it should be remembered that
building a human-level AGI would be creating a machine that would itself,
with the appropriate reading and training, be able to design and program
AGIs.
No.
AGI is not necessarily that capable. In fact first
Matt,
And some of the Blue Brain research suggests it is even worse. A mouse
cortical column of 10^5 neurons is about 10% connected,
What does "10% connected" mean?
How many connections does the average mouse neuron have?
1?
but the neurons are arranged such that connections can be formed
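For scale, if "10% connected" means each neuron synapses onto roughly 10% of the others (my reading of the figure, not a quoted calculation), the counts are easy to work out:

```python
# Back-of-the-envelope count for a 10^5-neuron cortical column at 10%
# connectivity: each neuron reaches ~10^4 others, ~10^9 connections total.
neurons = 10**5
per_neuron = neurons // 10        # 10% of the other neurons, roughly 10^4
total = neurons * per_neuron      # roughly 10^9 connections in the column
print(per_neuron, total)          # -> 10000 1000000000
```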
Benjamin,
Nearly any AGI component can be used within a narrow AI,
That proves my point [that AGI project can be successfully split
into smaller narrow AI subprojects], right?
but, the problem is, it's usually a bunch easier to make narrow AI's
using components that don't have any AGI
Matt,
--- Dennis Gorelik [EMAIL PROTECTED] wrote:
Could you describe a piece of technology that simultaneously:
- Is required for AGI.
- Cannot be a required part of any useful narrow AI.
A one million CPU cluster.
Are you claiming that the computational power of the human brain is equivalent
to one
Matt,
As for the analogies, my point is that AGI will quickly evolve to
invisibility from a human-level intelligence.
I think you underestimate how quickly performance deteriorates with the
growth of complexity.
AGI systems would have lots of performance problems in spite of fast
Mike,
I think you underestimate how quickly performance deteriorates with
the growth of complexity.
Dennis, you are stating what could be potentially an extremely important
principle.
It has been a very important principle for [hundreds of] years already.
Take a look at business. You can
Edward,
It seems that Cassimatis architects his AGI system as an assembly of
several modules.
That's the primary approach in designing any complex system.
I agree with such a modular architecture approach, but my "path to AGI"
statement was not exactly about such architecture.
My claim is that it's
Richard,
I had something very specific in mind when I said that,
because I was meaning that in a complex systems AGI project, there is
a need to do a massive, parallel search of a space of algorithms. This
is what you might call a data collection phase. It is because of the
need for this
Jiri,
I'm professionally working on a top secret military project that supports
the war on terror and I can see there is lots of data that, if
processed in smarter ways could make a huge difference in the world.
This is not really a single domain narrow AI task (though the related
projects -
Benjamin,
Do you have any success stories of such research funding in the last
20 years?
Something that resulted in useful accomplishments.
Are you asking for success stories regarding research funding in any domain,
or regarding research funding in AGI?
Any domain, please.
There were
Richard,
specific technical analysis of the AGI problem that I have made
indicates that nothing like a 'prototype' is even possible until
after a massive amount of up-front effort.
I probably misunderstood you the first time.
I thought you meant that this massive amount of up-front effort must
Russell,
The reason I didn't comment is because I
don't have a solution - that is, I know how to write software that can
draw certain kinds of analogies in certain contexts, but I don't know
how to write software that can do it anywhere near as generally as
humans can.
I just want to note,
Matt,
http://www.mattmahoney.net/singularity.html
Could you allow comments under your article? That might be useful.
I expect my remarks to be controversial and most people will disagree with
parts of it,
Exactly. That's the major reason to have comments in the first place.
As for the
Andrew,
If you cannot solve interesting computer science problems that are
likely to be simpler, then it is improbable that you'll ever be able
to solve really hard interesting problems like AGI (or worse,
Friendly AGI). I don't mean to disparage anyone doing AGI research,
but if they are
Benjamin,
Are you asking for success stories regarding research funding in any domain,
or regarding research funding in AGI?
Any domain, please.
OK, so your suggestion is that research funding, in itself, is worthless in
any domain?
No.
My point is that massive funding without having a
Jiri,
AGI is IMO possible now but requires very different approach than narrow AI.
AGI requires properly tuning some existing narrow-AI technologies,
combining them together, and maybe adding a couple more.
That's a massive amount of work, but most AGI research and development
can be shared with
Benjamin,
That's massive amount of work, but most AGI research and development
can be shared with narrow AI research and development.
There is plenty overlap btw AGI and narrow AI but not as much as you
suggest...
That's only because some narrow-AI products are not there yet.
Could
Jiri,
To DARPA, but some spending rules should go with it. In collaboration
with universities and the AGI community, they IMO should:
1) Develop framework(s) for AGI testing.
DARPA cares about technology that helps improve the military within a few
years. At this time that may be weak AI, partially
Jiri,
Give $1 for the research to whom?
A research team can easily eat millions of $$$ without producing any useful
results.
If you just randomly pick researchers for investment, your chances of
getting any useful outcome from the project are close to zero.
The best investing practice is to invest only
Matt,
You are right that AGI may seriously weaken human civilization just by
giving humans what they want. Lots of individuals can succumb to some
form of pleasure machine.
On the other hand -- why would you worry about human civilization or
any civilization at all if you personally get what you
Eliezer,
You asked that very personal question yourself and now you blame
Jiri for asking the same?
:-)
OK, let's take a look at your answer.
You said that you prefer to be transported into a randomly selected
anime.
To my taste, Jiri's "endless AGI-supervised pleasure" is a much wiser
choice
Jiri,
You assume that when we are 100% done -- we will get what we
ultimately want.
But that's not exactly true.
The fittest species (whether computers, humans, or androids) will dominate
the world.
Let's talk about the set of supergoals that such fittest species will
have.
I think this set
Matt,
Your algorithm is too complex.
What's the point of doing step 1?
Step 2 is sufficient.
Saturday, November 3, 2007, 8:01:45 PM, you wrote:
So we can dispense with the complex steps of making a detailed copy of your
brain and then have it transition into a degenerate state, and just skip
Richard,
Although this seems like a reasonable stance, I don't think it is a
strategy that will lead the world to the fast development (or perhaps
any development) of a real AGI.
Nothing would lead to the fast development of a real AGI.
The development would be slow. It would be about
/development goals would be quite
helpful.
Saturday, November 17, 2007, 3:19:37 PM, you wrote:
On Nov 18, 2007 3:05 AM, Dennis Gorelik [EMAIL PROTECTED] wrote:
You assume that when we are 100% done -- we will get what we
ultimately want.
But that's not exactly true.
The fittest species
Matt,
On the other hand -- why would you worry about human civilization or
any civilization at all if you personally get what you want?
That is exactly the problem. I wouldn't worry about reducing my own fitness.
Why do you worry about reducing your own fitness now?
However I don't think
is
considerably longer and way more abstract than that.
Saturday, November 17, 2007, 11:51:13 PM, you wrote:
On Nov 18, 2007 2:30 PM, Dennis Gorelik [EMAIL PROTECTED] wrote:
Stefan,
Could you please explain, how could I apply your research paper:
http://rationalmorality.info/wp-content/uploads/2007/11
William,
It is very simple and I wouldn't apply it to everything that
behaviourists would (we don't get direct rewards for solving crossword
puzzles).
How do you know that we don't get direct rewards for solving crossword
puzzles (or any other mental task)?
Chances are that under certain
William,
1) I agree that direct reward has to be in-built
(into brain / AI system).
2) I don't see why direct reward cannot be used for rewarding mental
achievements. I think that this direct-rewarding mechanism is
preprogrammed in the genes and cannot be used directly by the mind.
This mechanism
Ben,
What exactly can Novamente do right now?
(What's the input and what's the output of this meaning-extraction feature?
Can I test it?)
Wednesday, March 16, 2005, 8:40:57 AM, you wrote:
Google's crawler does exactly that.
It examines written pages and grasps the meaning of those pages.
Unfortunately
Ben,
By "direct knowledge" here I mean knowledge of the meaning of every
particular word and phrase in NL text.
That is, instead of remembering linguistic rules (like "in a statement, the
verb goes after the noun"), the AI should remember that the word "cat" is used in
phrases like "cat catches", "black cat", "cat jumps", "my cat",
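That "remember usage, not rules" idea can be sketched as a phrase-memory table (a toy of my own; the data structure is only illustrative):

```python
# Toy "direct knowledge" store: instead of grammar rules, remember the
# phrases each word has actually been observed in.
from collections import defaultdict

phrase_memory = defaultdict(set)   # word -> set of phrases containing it

def observe(phrase):
    for word in phrase.split():
        phrase_memory[word].add(phrase)

for p in ["cat catches", "black cat", "cat jumps", "my cat"]:
    observe(p)

# "cat" is now characterized by its observed contexts, not by a rule
print(sorted(phrase_memory["cat"]))
# -> ['black cat', 'cat catches', 'cat jumps', 'my cat']
```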
Lukasz,
I don't see any practical use for Start systems.
Do you?
The second reference doesn't work.
Monday, March 14, 2005, 6:04:53 AM, you wrote:
Hi.
I don't need detailed counter-arguments. Just give me one good example
of a strong AI implementation in LISP.
What is the functionality of this
Ben,
Let me clarify my question:
what is the input and what is the output of this converter?
This reference:
http://www.goertzel.org/new_research/Lojban_AI.pdf
doesn't work...
Monday, March 14, 2005, 6:35:35 AM, you wrote:
Robin Lee Powell's complete PEG grammar for Lojban is here:
Ben,
Imagine that strong AI is already implemented.
And software developers have easy to use tool to implement human level
functionality.
Would you claim that humans don't have general intelligence?
:-)
Monday, March 14, 2005, 6:35:35 AM, you wrote:
Well, the point of distinguishing
Ben,
1) CYC --- I don't see why you consider CYC an intelligent
application.
From my point of view, CYC is on the same level of intelligence as
MS Word. Well, probably MS Word is even more intelligent.
At least MS Word works and produces nice, intelligent results (not
super-intelligent though).
Ben,
What's your definition of reading?
What about this:
-
http://dictionary.reference.com/search?q=read
15. Computer Science. To obtain (data) from a storage medium, such as a
magnetic disk.
-
Do you have any doubts now that Google can read?
But wait, let's consider
Ben,
You don't need many rules to process Natural Language.
If you have more than 100 rules, then probably your NL-processing model
is wrong.
These less than 100 rules include rules for finding paragraphs, statements,
words and
phrases in the input text.
Plus a few more rules like that.
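A few of those structural rules can be written down directly (a minimal sketch with rules of my own choosing, just to show the flavor, not the full rule set):

```python
# Three of the structural rules mentioned above: blank lines separate
# paragraphs, sentence punctuation separates statements, and words are
# runs of letters/apostrophes.
import re

def parse(text):
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    return [
        [re.findall(r"[A-Za-z']+", s) for s in re.split(r"[.!?]+", p) if s.strip()]
        for p in paragraphs
    ]

doc = "Cats jump. Cats catch mice.\n\nMy cat sleeps."
result = parse(doc)
print(len(result))     # -> 2 (paragraphs)
print(result[0][1])    # -> ['Cats', 'catch', 'mice']
```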
Ben,
I think language should initially be taught
in the context of interaction with the learner in a shared environment, not
via analysis of texts.
Reading of texts is important and must be learned, but AFTER language is
learned in an experiential-interaction context.
The optimal way of
Ben,
1) You need to apply Occam's razor principle:
why Lojban if you can do the same with English?
2) From a maintenance standpoint, massive reading is far less expensive
than interactive education.
In addition, massive reading is ~10...1000 times faster than
interaction.
Of course we cannot
Ben,
Lojban syntax is completely formally specified by a known set of rules;
English syntax is not.
I'm pretty sure that live Lojban has a lot of exceptions to the
rules and cannot be formalized.
Humans introduce new rules into a living language and simply make errors.
Therefore you cannot rely on
If you think Google is strong AI then we have really different definitions
of that term, what can I say...
Google is narrow AI, if it's AI at all. It's great, of course.. but ...
Ok, let's see:
1) Google reads Natural Language. Every natural language.
Google writes Natural Language.
2)
Ben,
Hard-coding Lojban syntax from what source is a solved problem?
I mean there's a complete formal grammar for Lojban, see e.g.
http://www.digitalkingdom.org/~rlpowell/hobbies/lojban/grammar/index.html
I see several converters from something to something.
What exactly would you recommend
Cobol is an industrial software-development language too (not a modern one,
though). And LISP is not an industrial language.
That's why I think that more good AI applications were developed in COBOL
than in LISP. Let me know if I'm wrong about that.
Of course you're wrong, but this statement is so silly
Ben,
The English analogues of tanru are just more complicated, that's all...
What makes an English statement more complex than a Lojban tanru?
Why do you want to search for verb-argument relations and
similar linguistic stuff which is irrelevant to basic NL
understanding???
How can you say that the subcategorization frames of verbs are irrelevant to
basic NL understanding? Nothing could be more essential...
Any reason why
deering,
It seems that I agree with you ~70% of the time :-)
Let's focus on 30% differences and compare our understanding of
sub-goals and super-goals.
1) What came first: sub-goals or super-goals?
Super-goals are primary goals, aren't they?
SUPERGOAL 1: take actions which will aid the
Deering,
I strongly disagree.
Humans have preprogrammed super-goals.
Humans don't have the ability to update their super-goals.
And humans are intelligent creatures, aren't they?
Moreover, a system which can easily redefine its super-goals is very
unstable.
At the same time an intelligent system has
Eugen,
Yes? Can you show them in the brain coredump? Do you have such a coredump?
There is no coredump.
But we can observe human behavior.
Humans don't have the ability to update their super-goals.
What, precisely, is a supergoal, in an animal context?
There are many supergoals.
They are:
1) All supergoals are implemented in the form of reinforcers.
Not all reinforcers constitute supergoals.
Some reinforcers are created as sub-goal implementations.
For instance: unconditional reflexes are supergoal reinforcers;
conditional reflexes are sub-goal reinforcers.
2) You are telling
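The supergoal/sub-goal reinforcer split in point 1 could be sketched as follows (the class and names are mine, purely to make the distinction concrete):

```python
# Sketch of point 1 above: supergoals are in-built reinforcers
# (unconditional reflexes) that the system cannot redefine; sub-goal
# reinforcers (conditional reflexes) are created during its lifetime.
class Reinforcer:
    def __init__(self, name, inborn):
        self.name = name
        self.inborn = inborn   # True -> supergoal reinforcer

supergoals = [Reinforcer("food", True), Reinforcer("safety", True)]
subgoals = []

def learn_subgoal(name):
    # conditional reflex: a new reinforcer built on top of the supergoals
    subgoals.append(Reinforcer(name, False))

learn_subgoal("solve crossword")
print([r.name for r in supergoals])   # -> ['food', 'safety']
print([r.name for r in subgoals])     # -> ['solve crossword']
```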
Ben,
1) Could you describe the architecture of your INLINK
interactive framework?
How is it going to handle natural language?
2) I doubt that it's possible to communicate in natural language
completely unambiguously. There will always be some uncertainty.
The intelligent system itself will