From: Brad Paulsen [mailto:[EMAIL PROTECTED]
John wrote:
A rock is either conscious or not conscious.
Excluding the middle, are we?
Conscious, not conscious or null?
I don't want to put words into Ben & company's mouths, but I think what
they are trying to do with PLN is to
From: Ed Porter [mailto:[EMAIL PROTECTED]
ED PORTER
I am not an expert at computational efficiency, but I think graph structures like semantic nets are probably close to as efficient as possible, given the type of connectionism they are representing and the type of computing
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Subject: Are rocks conscious? (was RE: [agi] Did this message get
completely lost?)
--- On Tue, 6/3/08, John G. Rose [EMAIL PROTECTED] wrote:
Actually on further thought about this conscious rock, I
want to take that particular rock and put
From: j.k. [mailto:[EMAIL PROTECTED]
On 06/01/2008 09:29 PM, John G. Rose wrote:
From: j.k. [mailto:[EMAIL PROTECTED]
On 06/01/2008 03:42 PM, John G. Rose wrote:
A rock is conscious.
Okay, I'll bite. How are rocks conscious under Josh's definition or
any
other non-LSD-tripping
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
On Monday 02 June 2008 03:00:24 pm, John G. Rose wrote:
A rock is either conscious or not conscious. Is it less intellectually
sloppy to declare it not conscious?
A rock is not conscious. I'll stake my scientific reputation
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Just a quick thought not fully formulated. My model is in fact helpful
here.
Consciousness is an iworld-movie - a self watching and directing a movie
of
the world. How do you know if an agent is conscious - if it directs its
movie - if it tracks
From: j.k. [mailto:[EMAIL PROTECTED]
On 06/02/2008 12:00 PM, John G. Rose wrote:
A rock is either conscious or not conscious. Is it less intellectually
sloppy to declare it not conscious?
John
A rock is either conscious or not conscious (if consciousness is a boolean at all)
--- something that is not very dissimilar from Novamente's hypergraphs.
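The semantic-net and hypergraph ideas floated in these fragments can be made concrete with a small sketch. This is an illustrative data structure only, not Novamente's actual implementation; all class and method names here are my own invention:

```python
# Illustrative hypergraph-style knowledge store, loosely in the spirit of
# semantic nets / Novamente's hypergraphs. Names are hypothetical.

class Node:
    def __init__(self, name):
        self.name = name

class Link:
    # A hyperedge may connect any number of nodes, not just two.
    def __init__(self, link_type, targets, strength=1.0):
        self.link_type = link_type
        self.targets = targets      # list of Node objects
        self.strength = strength    # crude truth-value placeholder

class Hypergraph:
    def __init__(self):
        self.nodes = {}
        self.links = []

    def add_node(self, name):
        return self.nodes.setdefault(name, Node(name))

    def add_link(self, link_type, names, strength=1.0):
        link = Link(link_type, [self.add_node(n) for n in names], strength)
        self.links.append(link)
        return link

    def links_of(self, name):
        return [l for l in self.links
                if any(t.name == name for t in l.targets)]

g = Hypergraph()
g.add_link("Inheritance", ["cat", "animal"], 0.95)
g.add_link("Evaluation", ["chases", "cat", "mouse"], 0.8)  # 3-ary hyperedge
print([l.link_type for l in g.links_of("cat")])  # ['Inheritance', 'Evaluation']
```

The point of the hyperedge (3-ary link) is exactly the efficiency Porter mentions: one link can represent a relation that a plain binary semantic net would need several nodes and edges to encode.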
-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED]
Sent: Sunday, June 01, 2008 12:36 PM
To: agi@v2.listbox.com
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Unfortunately AI will make CAPTCHAs useless against spammers. We will
need to figure out other methods. I expect that when we have AI, most
of the world's computing power is going to be directed at attacking
other computers and defending against
From: Mike Tintner [mailto:[EMAIL PROTECTED]
You are - if I've understood you - talking about the machinery and
programming that produce and help to process the movie of
consciousness.
I'm not in any way denying all that or its complexity. But the first
thing
is to define and model
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
--- On Sun, 6/1/08, John G. Rose [EMAIL PROTECTED] wrote:
AI has a long way to go to thwart CAPTCHAs altogether.
There are math CAPTCHAs (MAPTCHAs), 3-D CAPTCHAs, image rec CAPTCHAs,
audio and I can think
of some that are quite difficult
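As a toy illustration of the math-CAPTCHA ("MAPTCHA") idea, here is a minimal sketch. It is purely illustrative; a real CAPTCHA would need heavy obfuscation (rendering the question as a distorted image, for instance) to resist even trivial automated attack:

```python
# Toy "MAPTCHA" generator: poses a small arithmetic question and checks
# the reply. Hypothetical sketch, not a deployable CAPTCHA.
import random

def make_maptcha(rng=None):
    rng = rng or random.Random()
    a, b = rng.randint(2, 9), rng.randint(2, 9)
    op = rng.choice(["+", "*"])
    question = f"What is {a} {op} {b}?"
    answer = a + b if op == "+" else a * b
    return question, answer

def check(answer, reply):
    # Accept the reply only if it matches the expected answer exactly.
    return str(answer) == reply.strip()

q, ans = make_maptcha(random.Random(42))
print(q)
print(check(ans, str(ans)))  # True for the correct reply
```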
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Why do I believe anyone besides me is conscious? Because they are made
of
meat? No, it's because they claim to be conscious, and answer questions
about
their consciousness the same way I would, given my own conscious
experience -- and they
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
--- On Sun, 6/1/08, John G. Rose [EMAIL PROTECTED] wrote:
OK, how about this: a CAPTCHA that combines human audio and
visual illusion that evokes a realtime reaction only in a conscious
physical human. Can audio-visual illusion be used as a test
From: j.k. [mailto:[EMAIL PROTECTED]
On 06/01/2008 03:42 PM, John G. Rose wrote:
A rock is conscious.
Okay, I'll bite. How are rocks conscious under Josh's definition or any
other non-LSD-tripping-or-batshit-crazy definition?
The way you phrase your question indicates your knuckle
From: Mike Tintner [mailto:[EMAIL PROTECTED]
You guys are seriously irritating me.
You are talking such rubbish. But it's collective rubbish - the
collective *non-sense* of AI. And it occurs partly because our culture
doesn't offer a simple definition of consciousness. So let me have a
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
What many people call consciousness is qualia, that which distinguishes
you from a philosophical zombie, http://en.wikipedia.org/wiki/P-zombie
There is no test for consciousness in this sense, but humans universally
believe that they are
From: Ben Goertzel [mailto:[EMAIL PROTECTED]
If by conscious you mean having a humanlike subjective experience,
I suppose that in future we will infer this about intelligent agents
via a combination of observation of their behavior, and inspection of
their internal construction and
From: Mike Tintner [mailto:[EMAIL PROTECTED]
That's correct. The model of consciousness should be the self [brain-body] watching and physically interacting with the movie [that is in a sense an open movie - rather than on a closed screen - projected all over the world outside, and on the
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
--- On Sat, 5/31/08, John G. Rose [EMAIL PROTECTED] wrote:
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
I don't believe you are conscious. I believe you
are a zombie. Prove me wrong.
I am a zombie. Prove to me that I am
From: Mike Tintner [mailto:[EMAIL PROTECTED]
you utterly refused to answer my question re: what is your model? It's
not a
hard question to start answering - i.e. either you do have some kind of
model or you don't. You simply avoided it. Again.
I have some models that I feel confident that
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
--- John G. Rose [EMAIL PROTECTED] wrote:
Consciousness with minimal intelligence may be easier to build than
general
intelligence. General intelligence is the one that takes the
resources.
A general consciousness algorithm, one that creates
This doesn't distinguish Apache on Windows like in WAMP vs. LAMP but that is
probably a small percentage.
Um, what I've noticed with C# is that you hit some performance and resource
issues when the app gets big. But that is the tradeoff, I guess, and it can
be worked around. Also, VS2008 is buggy. It's
With all this lovely chit-chat about .NET, I have been wondering if anyone
was entertaining the possibility of doing a port of NARS from Java to C#.
Not that I have seriously considered working myself on it, just that before
someone would undertake such an effort it would be beneficial to share
The environmental complexities are different. NYC has been there for
hundreds of years. The human brain has been in nature for hundreds of
thousands of years. A manmade environment for AGI is custom made in the
beginning; we don't just throw it out on the street or into the jungle. It
can start off in
From: Mike Tintner [mailto:[EMAIL PROTECTED]
John: The synchronous melodies of the crickets strumming their legs change
harmony as the wind moves warmth. The reeds vibrate; the birds, fearing the
snake, break their rhythmic falsetto polyphonies and flutter away to new
pastures.
From: Joseph Gentle [mailto:[EMAIL PROTECTED]
There are two interesting points here.
The first is that (in my opinion) pattern matching must come first. I
agree that understanding the patterns (the /why/) is important; but
seeing (even unjustified) patterns is crucial. The benchmark I
Which actual world, a natural or a manmade one? And is there plenty of
expensive electronic memory for all the nodes in that rather large graph?
It's a feat of efficiency management to trim it down as much as you can in
order for it to have a chance of developing a subset of that rich
understanding.
From: Mike Tintner [mailto:[EMAIL PROTECTED]
It's actually obvious if you care to listen, that music involves a
combination of pattern fitting/extrapolation and pattern BREAKING. The
whole
point of a pop song is that it involves a creative idea - a *twist* on
existing patterns. That's why
From: Brad Paulsen [mailto:[EMAIL PROTECTED]
Hey, Gang!
Hmmm. Sure is quiet around here lately. Maybe people are actually
getting
some (more) work done? Catching up on their respective AI reading
lists?
;-) I know I am. Reading Josh's book (Beyond AI - Creating the
Conscience
of
RPI has for decades been working on various cutting-edge AI projects. And IBM
is a long-time sponsor.
Better than Second Life would be a virtual world modeled after the real world,
like Google Earth with its model buildings as it gets better, with AI
entities serving useful purposes. So a virtual
From: Jey Kottalam [mailto:[EMAIL PROTECTED]
Sent: Saturday, May 17, 2008 11:30 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Porting MindForth AI into JavaScript Mind.html
On Sat, May 17, 2008 at 10:09 AM, Bob Mottram [EMAIL PROTECTED] wrote:
I think Yudkowsky once said that AI remains at
From: BillK [mailto:[EMAIL PROTECTED]
On Sat, May 17, 2008 at 8:37 PM, John G. Rose wrote:
It's hard to refute something vague or irrelevant, with 10 volumes of
Astrology background behind it, and it's not necessary, as nonsense
is
usually self-evident. What's needed is merciless
Mike,
It's not all geometric. Patterns need not be defined by vector lines, or
only by magnitudes of image properties. The same recognition mechanisms in the
brain are emulatable by mathematical, indexable, categorizable, recognizable
and systematic, engineered processes. Even images of
So many overloads - pattern, complexity, atoms - can't we come up with new
terms, like schfinkledorfs? But a very interesting question is: given an
image of W x H pixels of 1-bit depth (on or off), one frame, how many
patterns exist within this grid? When you think about it, it becomes an
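Some rough numbers for the grid question can be computed directly. Assuming "patterns" means either whole-grid configurations or axis-aligned rectangular subwindows (one possible reading of the question; the count explodes further if arbitrary pixel subsets count as patterns), a short sketch:

```python
# Back-of-envelope counts for the W x H, 1-bit-depth grid question.

def possible_images(w, h):
    # Each pixel is independently on or off, so 2^(W*H) distinct frames.
    return 2 ** (w * h)

def subrectangles(w, h):
    # Axis-aligned rectangular subwindows: choose two vertical and two
    # horizontal boundaries -> W(W+1)/2 * H(H+1)/2 windows.
    return (w * (w + 1) // 2) * (h * (h + 1) // 2)

print(possible_images(3, 3))   # 512 distinct 3x3 binary images
print(subrectangles(3, 3))     # 36 rectangular subwindows
print(possible_images(32, 32)) # already astronomically large
```

Even a tiny 32 x 32 grid admits 2^1024 configurations, which is the sense in which the question "becomes an" interesting one: almost all of the combinatorial space can never be enumerated, only pattern-matched.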
I kind of disagree with this attitude, too conformist and over-assuming.
I've seen too many flaked-out freakazoids have tiny grains of absolute
brilliance sprinkled throughout their time-wasting mass of obtuse utterings.
Yeah, you can't waste too much time and have to gain something from those
grains of absolute brilliance.
Thanks for your comment.
Steve Richfield
==
On 4/15/08, John G. Rose [EMAIL PROTECTED] wrote:
I kind of disagree with this attitude, too conformist and over-assuming.
I've seen too many flaked-out freakazoids have tiny grains of absolute
brilliance sprinkled
From: Ben Goertzel [mailto:[EMAIL PROTECTED]
Peruse the video:
http://www.youtube.com/watch?v=W1czBcnX1Ww&feature=related
Of course, they are only showing the best stuff. And I am sure there
is plenty of work left to do. But from the variety of behaviors that
are displayed, I would
To have an intuitive grasp of cultural information flow dynamics, and to
understand how intelligent agents evolve individually in relation to group
knowledge and cultural evolution - to pin that down with a
semi-in-touch-with-reality mathematical model is, to say the least, a bit
daunting, in my
Meet me halfway here and I am always willing to expand on anything I
have written.
One must be fully in touch with Global-Local Disconnect (GLD) to get the
gist of the paper.
john
---
agi
Archives: http://www.listbox.com/member/archive/303/=now
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Charles: I don't think a General Intelligence could be built entirely out of
narrow AI components, but it might well be a relatively trivial add-on.
Just consider how much of human intelligence is demonstrably narrow AI
(well, not
From: Mike Tintner [mailto:[EMAIL PROTECTED]
I'm developing this argument more fully elsewhere, so I'll just give a
partial gist. What I'm saying - and I stand to be corrected - is that I
suspect that literally no one in AI and AGI (and perhaps philosophy)
present
or past understands the
From: Charles D Hixson [mailto:[EMAIL PROTECTED]
I don't think a General Intelligence could be built entirely out of
narrow AI components, but it might well be a relatively trivial add-on.
Just consider how much of human intelligence is demonstrably narrow AI
(well, not artificial, but you
I see the pattern as much more of the same. You now have Microsoft SQL
Server, Microsoft Internet Information Server, Microsoft Exchange Server and
then you'll have Microsoft Intelligence Server or Microsoft Cognitive
Server. It'll be limited by licenses, resources and features. The cool part
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
However, I think you are right that there could be an intermediate
period when proto-AGI systems are a nuisance. But these proto-AGI systems
will really only be souped-up Narrow-AI systems, so I believe their
potential for mischief
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
My take on this is completely different.
When I say Narrow AI I am specifically referring to something that is
so limited that it has virtually no chance of becoming a general
intelligence. There is more to general intelligence than just
From: Samantha Atkins [mailto:[EMAIL PROTECTED]
On Dec 28, 2007, at 5:34 AM, John G. Rose wrote:
Well, I shouldn't berate the poor dude... The subject of rationality is
pertinent, though, as the way that humans deal with the unknown involves
irrationality, especially in relation to deitical
But the traditional gods didn't represent the unknowns, but rather the
knowns. A sun god rose every day and set every night in a regular
pattern. Other things which also happened in this same regular pattern
were adjunct characteristics of the sun god. Or look at some of their
names,
On Dec 10, 2007 6:59 AM, John G. Rose [EMAIL PROTECTED] wrote:
Dawkins trivializes religion from his comfortable first-world perspective,
ignoring the way of life of hundreds of millions of people, and offers
little substitute for what religion does and has done for civilization
From: Samantha Atkins [mailto:[EMAIL PROTECTED]
Indeed. Some form of instantaneous information transfer would be
required for unlimited growth. If it also turned out that true time
travel was possible then things would get really spooky. Alpha and
Omega. Mind without end.
I think that
From: Joshua Cowan [mailto:[EMAIL PROTECTED]
It's interesting that the field of memetics is moribund (e.g., the Journal
of Memetics hasn't published in two years) but the meme of memetics is
alive and well. I wonder, do any of the AGI researchers find the concept of
memes useful in
From: Charles D Hixson [mailto:[EMAIL PROTECTED]
The evidence in favor of an external god of any traditional form is,
frankly, a bit worse than unimpressive. It's lots worse. This doesn't
mean that gods don't exist, merely that they (probably) don't exist in
the hardware of the universe. I
-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED]
Sent: Tuesday, December 11, 2007 12:43 PM
To: agi@v2.listbox.com
Subject: RE: [agi] AGI and Deity
From: Charles D Hixson [mailto
and collective human spirits. I also believe it has the
power to threaten the well-being of those spirits.
I hope we as a species will have the wisdom to make it do more of the former
and less of the latter.
Ed Porter
-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED]
Sent
Is an AGI really going to feel pain or is it just going to be some numbers?
I guess that doesn't have a simple answer. The pain has to be engineered
well for it to REALLY understand it.
As for AGI behavior related to its survival: its pain is non-existence;
does it care about being non-existent?
Well, he did come up with the meme concept, or at least he coined the term;
others before him, I'm sure, have worked on that. Memetics is a valuable study.
John
From: Mike Dougherty [mailto:[EMAIL PROTECTED]
On Dec 10, 2007 6:59 AM, John G. Rose [EMAIL PROTECTED] wrote:
Dawkins trivializes
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Which memetics work would you consider valuable?
Specific works? I have no idea. But the whole way of thinking is valuable.
We are just these hosts for informational entities on a meme network that
have all kinds of properties. Honestly though I don't
From: Mike Tintner [mailto:[EMAIL PROTECTED]
John: Are you familiar with some works that you may recommend?
That's the point. Never heard of anything from memetics - which Dennett
conceded had not yet fulfilled its promise.
Well in particular, applied memetics. I believe that certain
If you took an AGI, before it went singulatarinistic [sic], and tortured it...
a lot, ripping into it in every conceivable hellish way, do you think at
some point it would start praying somehow? I'm not talking about a forced
conversion medieval-style, I'm just talking hypothetically, if it would
http://www.nytimes.com/2007/03/04/magazine/04evolution.t.html?ref=magazine&pagewanted=print
Ed Porter
-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED]
Sent: Sunday, December 09, 2007 1:50 PM
To: agi@v2
It'd be interesting - I kind of wonder about this sometimes - if an AGI,
especially one that is heavily complex-systems based, would independently
come up with the existence of some form of a deity. Different human cultures
come up with deity(s) for many reasons; I'm just wondering if it is like
some
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
My design would use most of the Internet (10^9 P2P nodes). Messages
would be
natural language text strings, making no distinction between documents,
queries, and responses. Each message would have a header indicating the
ID
and time stamp of
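A minimal sketch of such a message format, with field names that are my own guesses rather than Mahoney's actual specification:

```python
# Hypothetical sketch of a P2P message: a plain natural-language text
# payload plus a header carrying the originating ID and a timestamp.
# Documents, queries and responses all share this one format.
import time, json, uuid

def make_message(text, node_id=None):
    return {
        "id": node_id or uuid.uuid4().hex,  # originating node / message ID
        "timestamp": time.time(),
        "text": text,
    }

msg = make_message("what is the latency floor of p2p?", node_id="node-17")
wire = json.dumps(msg)          # serialized for transmission
print(json.loads(wire)["id"])   # node-17
```

The design choice worth noting is that there is no query/response distinction in the envelope itself; any node can treat any incoming text as something to store, answer, or forward.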
From: Ed Porter [mailto:[EMAIL PROTECTED]
John,
I am sure there is interesting stuff that can be done. It would be
interesting just to see what sort of an agi could be made on a PC.
Yes it would be interesting to see what could be done on a small cluster of
modern server grade computers. I
happens there :)
So that's it without getting too into details. Very primitive still ...
John
-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED]
Sent: Tuesday, December 04, 2007 2:17 PM
To: agi@v2.listbox.com
Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding
From: Benjamin Goertzel [mailto:[EMAIL PROTECTED]
As an example of a creative leap (that is speculative and may be wrong,
but is
certainly creative), check out my hypothesis of emergent social-
psychological
intelligence as related to mirror neurons and octonion algebras:
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
The reason it reminds me of this episode is that you are calmly talking
here about the high dimensional problem of seeking to understand the
meaning of text, which often involve multiple levels of implication,
which would normally be
From: Bryan Bishop [mailto:[EMAIL PROTECTED]
I am not sure what the next step would be. The first step might be
enough for the moment. When you have the network functioning at all,
expose an API so that other programmers can come in and try to utilize
sentence analysis (and other functions)
From: Ed Porter [mailto:[EMAIL PROTECTED]
Once you build up good models for parsing and word sense, then you read
large amounts of text and start building up models of the realities
described and generalizations from them.
Assuming this is a continuation of the discussion of an AGI-at-home
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
It is easy for a research field to agree that certain problems are
really serious and unsolved.
A hundred years ago, the results of the Michelson-Morley experiments
were a big unsolved problem, and pretty serious for the foundations of
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
I think this is a very important issue in AGI, which is why I felt
compelled to say something.
As you know, I keep trying to get meaningful debate to happen on the
subject of *methodology* in AGI. That is what my claims about the
complex
For some lucky cable folks the BW is getting ready to increase soon:
http://arstechnica.com/news.ars/post/20071130-docsis-3-0-possible-100mbps-speeds-coming-to-some-comcast-users-in-2008.html
I've yet to fully understand the limitations of a P2P-based AGI design or the
augmentational ability of
From: J. Andrew Rogers [mailto:[EMAIL PROTECTED]
Distributed algorithms tend to be far more sensitive to latency than to
bandwidth, except to the extent that low bandwidth induces latency.
As a practical matter, the latency floor of P2P is so high that most
algorithms would run far faster on
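The latency-versus-bandwidth point can be made concrete with a simple one-hop transfer-time model (the numbers are illustrative, not measured):

```python
# Simple transfer-time model: total time per message is latency plus
# size/bandwidth, so small messages are latency-dominated.

def transfer_time(size_bytes, latency_s, bandwidth_bps):
    """Seconds to move one message of size_bytes over one hop."""
    return latency_s + (size_bytes * 8) / bandwidth_bps

# Illustrative numbers: a 1 KB message at 10 Mbps, with P2P-ish 100 ms
# latency versus LAN-ish 0.1 ms latency.
p2p = transfer_time(1024, 0.100, 10e6)
lan = transfer_time(1024, 0.0001, 10e6)
print(p2p, lan)  # the P2P hop is roughly 100x slower, almost all latency
```

At these sizes the serialization term (about 0.8 ms) is negligible next to the 100 ms P2P latency, which is why chatty distributed algorithms suffer so badly over P2P regardless of bandwidth.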
-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED]
Sent: Monday, December 03, 2007 12:37 PM
To: agi@v2.listbox.com
Subject: RE: Hacker intelligence level [WAS Re: [agi]
Funding AGI
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Top three? I don't know if anyone ranks them.
Try:
1) Grounding Problem (the *real* one, not the cheap substitute that
everyone usually thinks of as the symbol grounding problem).
2) The problem of designing an inference control engine
up
with pretty good word sense models.
Ed Porter
-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED]
Sent: Friday, November 30, 2007 2:55 PM
To: agi@v2.listbox.com
Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI
research]
Ed,
That is probably
From: Dennis Gorelik [mailto:[EMAIL PROTECTED]
John,
If you look at nanotechnology, one of the goals is to build machines that
build machines. Couldn't software-based AGI be similar?
Eventually AGIs will be able to build other AGIs, but first AGI models
won't be able to build any
From: Dennis Gorelik [mailto:[EMAIL PROTECTED]
There are programs that already write source code.
The trick is to write working and useful apps.
Many of the apps that write code basically take data and statically convert
it to a source code representation. So a code generator may allow you to
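A minimal example of the data-to-source-code style of generator described here (function and table names are hypothetical):

```python
# A tiny data-to-code generator: takes a table of data and emits Python
# source that embeds it as a lookup function, then compiles and runs it.

def generate_lookup(name, table):
    lines = [
        f"def {name}(key):",
        f"    _data = {table!r}",   # data statically baked into the source
        "    return _data[key]",
    ]
    return "\n".join(lines)

src = generate_lookup("country_code", {"US": 1, "UK": 44})
namespace = {}
exec(src, namespace)                     # compile the generated source
print(namespace["country_code"]("UK"))   # 44
```

This is exactly the "static conversion" the fragment describes: the generator writes working code, but all the intelligence is in the template and the data, not in the generated program.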
1Kmsg/sec rate as the client-to-server upload you described?
Ed Porter
-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED]
Sent: Thursday, November 29, 2007 11:40 PM
To: agi@v2.listbox.com
Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI
research]
OK
-
From: John G. Rose [mailto:[EMAIL PROTECTED]
Sent: Friday, November 30, 2007 12:33 PM
To: agi@v2.listbox.com
Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI
research]
Hi Ed,
If the peer is not running other apps utilizing the network it could do
the
same. Typically
From: Dennis Gorelik [mailto:[EMAIL PROTECTED]
John,
Note, that compiler doesn't build application.
Programmer does (using compiler as a tool).
Very true. So then, is the programmer + compiler more complex than the AGI
ever will be?
No.
I don't even see how it relates to what I
From: BillK [mailto:[EMAIL PROTECTED]
This discussion is a bit out of date. Nowadays no hackers (except for
script kiddies) are interested in wiping hard disks or damaging your
pc. Hackers want to *use* your pc and the data on it. Mostly the
general public don't even notice their pc is
From: Bob Mottram [mailto:[EMAIL PROTECTED]
There have been a few attempts to use the internet for data collection
which might be used to build AIs, or for teaching chatbots such as
jabberwacky, but you're right that as yet nobody has really made use
of the internet as a basis for
in conjunction with a distributed web crawler.
Ed Porter
-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED]
Sent: Thursday, November 29, 2007 7:27 PM
To: agi@v2.listbox.com
Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI
research]
From: Ed Porter
-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED]
Sent: Thursday, November 29, 2007 8:31 PM
To: agi@v2.listbox.com
Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI
research]
Ed,
That is the HTTP protocol; it is a client-server request/response
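For reference, the client-server request/response exchange looks like this on the wire (HTTP/1.1, with illustrative host and headers):

```python
# The raw text a client and server exchange in one HTTP/1.1 round trip.
# Each exchange is initiated by the client; the server only answers.

request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "\r\n"
)
response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "Content-Length: 13\r\n"
    "\r\n"
    "<html></html>"
)

status_line = response.split("\r\n")[0]
print(status_line)  # HTTP/1.1 200 OK
```

The request/response shape is why plain HTTP is awkward for the peer-to-peer messaging discussed in this thread: a server cannot push a message to a client that hasn't asked for one.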
From: Dennis Gorelik [mailto:[EMAIL PROTECTED]
John,
Is building the compiler more complex than building
any application it can build?
Note, that compiler doesn't build application.
Programmer does (using compiler as a tool).
Very true. So then, is the programmer + compiler more
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Mike: To be fair, ask this same question but replace women with any
other
'minority' and see if it's still a problem.
I think women are the majority, aren't they? Anyway, yes, women are
remarkably absent here. You will find them in fair
From: Bob Mottram [mailto:[EMAIL PROTECTED]
I don't think we yet know enough about how DNA works to be able to
call it a conglomerated mess, but you're probably right that the same
principle applies to any information system adapting over time.
Similarly the thinking of teenagers or young
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
I can answer this for you, because I was once an anti-virus developer,
so I have seen the internal code of more viruses than I care to think
about.
The answer is NO. Malicious hackers are among the world's most stupid
programmers.
We
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
What are the best current examples of (to any extent) self-building
software?
So far, most of the effort has been concentrated on acquiring the
necessary computing power. http://en.wikipedia.org/wiki/Storm_botnet
Just think of the
From: Benjamin Goertzel [mailto:[EMAIL PROTECTED]
State of the art is:
-- Just barely, researchers have recently gotten automated program
learning to synthesize an n log n sorting algorithm based on the goal
of sorting a large set of lists as rapidly as possible...
-- OTOH, automatic
From: Mike Tintner [mailto:[EMAIL PROTECTED]
John: I kind of like the idea of building software that then builds AGI.
What are the best current examples of (to any extent) self-building
software?
Microsoft includes a facility in .NET called Reflection that allows code
to inspect itself
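Reflection is a .NET facility, but the same idea can be sketched with Python's introspection API as an analogue (the `Greeter` class is invented for illustration):

```python
# Runtime type inspection, the core of what .NET Reflection provides,
# sketched via Python's closest analogue: the inspect module and getattr.
import inspect

class Greeter:
    def hello(self, name: str) -> str:
        return f"hello, {name}"

# Enumerate the methods of a type at runtime, as Reflection would.
methods = [m for m, _ in inspect.getmembers(Greeter, inspect.isfunction)]
print(methods)  # ['hello']

# Look up and invoke a method by name -- code "inspecting itself".
fn = getattr(Greeter(), "hello")
print(fn("world"))  # hello, world
```

In .NET the equivalents are `Type.GetMethods()` and `MethodInfo.Invoke()`; either way, this is introspection (reading code at runtime), which is a long way from the self-building software the question asks about.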
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
It amazes me that a crime of this scale can go on for a year and we are
powerless to stop it either through law enforcement or technology. The
Storm
botnet already controls enough computing power to simulate a neural
network
the size of several
From: Dennis Gorelik [mailto:[EMAIL PROTECTED]
John,
I kind of like the idea of building software that then builds AGI.
Sorry, but building AGI is less complex than building software that
is able to build AGI.
It totally depends on the design. When you write your narrow AI app you use
So far, they have not succeeded in killing it. An intelligent,
self-improving worm would be even harder to kill. Once every computer was
infected, it would be impossible.
Well, it's like: who really owns your computer's resources? There is so much
stuff going on there I don't think it is
My claim is that it's possible [and necessary] to split the massive amount
of work that has to be done for AGI into smaller narrow-AI chunks in
such a way that every narrow-AI chunk has its own business meaning
and can pay for itself.
You have not addressed my claim, which has massive
Yeah - because weak AI is so simple. Why not just make some
run-of-the-mill narrow AI with the single goal of "Build AGI"? You can
just relax while it does all the work.
I kind of like the idea of building software that then builds AGI. But you
could say that that software is part of the AGI
From: Jef Allbright [mailto:[EMAIL PROTECTED]
On 11/12/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
I read it more as if it were a very highbrow sort of poetry ;-)
Same here. At first I was disappointed and irritated by the lack of
meaningful content (or was it all content, but
From: Jef Allbright [mailto:[EMAIL PROTECTED]
No real argument there, but it reminds me STRONGLY of my experience as
a poetic genius about 30 years ago during one of three scientific
LSD trips. Brilliant, I tell you!
On a more practical note, intelligence is not so much about making
Here is a stimulating read available online about emergent meta-systems and
Holonomics... it ties a lot of things together; very rich reading.
http://www.scribd.com/doc/10456/Reflexive-Autopoietic-Dissipative-Speical-Systems-Theory-PalmerKD-2007vZ
-
This list is sponsored by AGIRI:
Bravo, this is great. I like the part about Marx's labor theory of value. AGI
economics - isn't that a world that lends itself to making a compelling
financial pitch?
John
From: Edward W. Porter [mailto:[EMAIL PROTECTED]
Robin,
I am an evangelist for the fact that the time for
Yes, this is true. Sometimes, though, I think that we need to build AGI weapons
ASAP. Why? The human race needs to protect itself from other potentially
aggressive beings. Humans treat animals pretty badly, as an example. The earth
is a sitting duck. How do we defend ourselves? Clumsy nukes? Not good