Hi,
Just for kicks - let's assume that AIXItl yields 1% more intelligent
results when provided 10^6 times the computational resources when
compared to another algorithm X. Let's further assume that today the
cost associated with X for reaching a benefit of 1 will be 1 compared
to a cost of
Thanks Ben, Russel et al for being so patient with me ;-) To
summarize: AIXItl's inefficiencies are so large and the additional
benefit it provides is so small that it will likely never be a logical
choice over other more efficient, less optimal algorithms.
Stefan
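To make the arithmetic above concrete, here is a minimal sketch in Python,
using the hypothetical figures from the post (1% more intelligence for 10^6
times the resources) and assuming cost scales linearly with compute:

# Hypothetical figures from the post: AIXItl yields 1% more
# intelligent results at 10^6 times the computational cost of
# algorithm X. Assumption: cost scales linearly with resources.
benefit_x, cost_x = 1.00, 1.0
benefit_aixitl, cost_aixitl = 1.01, 1.0e6

# Benefit delivered per unit of cost.
efficiency_x = benefit_x / cost_x                  # 1.0
efficiency_aixitl = benefit_aixitl / cost_aixitl   # ~1.01e-6

# X is roughly a million times more cost-effective, which is why
# AIXItl is essentially never the rational choice on these numbers.
print(efficiency_x / efficiency_aixitl)            # ~990099.0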
Subject: Please fwd to Singularity list
To: Ben Goertzel [EMAIL PROTECTED]
Ben, please forward this to your Singularity list.
** Excerpts from a work in progress follow. **
Imagine that I'm visiting a distant city, and a local friend volunteers
to drive me to the airport. I don't know
Peter Voss wrote:
I have a more fundamental question though: Why in particular would we want
to convince people that the Singularity is coming? I see many disadvantages
to widely promoting these ideas prematurely.
If one's plan is to launch a Singularity quickly, before anyone else
notices,
Hi,
On 10/9/06, Bruce LaDuke [EMAIL PROTECTED] wrote:
Just a sidebar on the whole 2012 topic.
It's quite possible that singularity is **already here** as new knowledge
and that the only barrier is social acceptance. Radical new knowledge is
historically created long before it is accepted by
Japan, despite a lot of interest back in 5th Generation computer days, seems to have a difficult time innovating in advanced software. I am not sure why.
I talked recently, at an academic conference, with the guy who directs robotics research labs within ATR, the primary Japanese government research lab. He said that at the
Hi, I know you must be frustrated with fund raising, but investor
reluctance is understandable from the perspective that for decades now there has always been someone who said we're N years from full-blown AI, and then N years passed with nothing but narrow AI progress. Of course, someone will end
Though I have remained often-publicly opposed to emergence and 'fuzzy' design since first realising what the true
consequences (of the heavily enhanced-GA-based system I was working on at the time) were, as far as I know I haven't made that particular mistake again. Whereas, my view is that it is
Loosemore wrote:
The motivational system of some types of AI (the types you would
classify as tainted by complexity) can be made so reliable that the
likelihood of them becoming unfriendly would be similar to the
likelihood of the molecules of an Ideal Gas suddenly deciding to split
into
Right - for the record, when I use words like "loony" in this sort of
context I'm not commenting on how someone might come across face to face
(never having met him), nor on what a psychiatrist's report would read (not
being a psychiatrist) - I'm using the word in exactly the same way that I
would
Hi,
About hybrid/integrative architectures, Michael Wilson said:
I'd agree that it looks good when you first start attacking the problem.
Classic ANNs have some demonstrated competencies, classic symbolic
AI has some different demonstrated competencies, as do humans and
existing non-AI software.
...
-- Ben G
On 10/25/06, Richard Loosemore [EMAIL PROTECTED] wrote:
Ben Goertzel wrote:
Loosemore wrote:
The motivational system of some types of AI (the types you would
classify as tainted by complexity) can be made so reliable that the
likelihood of them becoming unfriendly would
Hi,
Do most in the field believe that only a war can advance technology to
the point of singularity-level events?
Any opinions would be helpful.
My view is that for technologies involving large investment in
manufacturing infrastructure, the US military is one very likely
source of funds.
Hi,
The problem, Ben, is that your response amounts to "I don't see why that
would work," but without any details.
The problem, Richard, is that you did not give any details as to why
you think your proposal will work (in the sense of delivering a
system whose Friendliness can be very
Hi,
There is something about the gist of your response that seemed strange
to me, but I think I have put my finger on it: I am proposing a general
*class* of architectures for an AI-with-motivational-system. I am not
saying that this is a specific instance (with all the details nailed
down)
FYI
-- Forwarded message --
From: Eliezer S. Yudkowsky [EMAIL PROTECTED]
Date: Oct 30, 2006 12:14 AM
Subject: After Life by Simon Funk
To: [EMAIL PROTECTED]
http://interstice.com/~simon/AfterLife/index.html
An online novella, with hardcopy purchasable from Lulu.
Theme:
Hi Richard,
Let me go back to start of this dialogue...
Ben Goertzel wrote:
Loosemore wrote:
The motivational system of some types of AI (the types you would
classify as tainted by complexity) can be made so reliable that the
likelihood of them becoming unfriendly would be similar
Me, interviewed by R.U. Sirius, on AGI, the Singularity, philosophy of
mind/emotion/immortality and so forth:
http://mondoglobo.net/neofiles/?p=78
Audio only...
-- Ben
Hi,
For anyone who is curious about the talk "Ten Years to the Singularity
(if we Really Really Try)" that I gave at Transvision 2006 last
summer, I have finally gotten around to putting the text of the speech
online:
http://www.goertzel.org/papers/tenyears.htm
The video presentation has been
comes across in the talk.
Yours,
Joshua
2006/12/11, Ben Goertzel [EMAIL PROTECTED]:
Hi,
For anyone who is curious about the talk "Ten Years to the Singularity
(if we Really Really Try)" that I gave at Transvision 2006 last
summer, I have finally gotten around to putting the text
not documented or easily
digestible, but it seems like one of the most efficient ways to attack
the software development problem.
Bo
On Mon, 11 Dec 2006, Ben Goertzel wrote:
) Hi Joshua,
)
) Thanks for the comments
)
) Indeed, the creation of a thinking machine is not a typical VC type
or divided by
the population size?
-Chuck
On 12/11/06, Ben Goertzel [EMAIL PROTECTED] wrote:
Hi,
For anyone who is curious about the talk "Ten Years to the Singularity
(if we Really Really Try)" that I gave at Transvision 2006 last
summer, I have finally gotten around to putting the text
Hi,
You mention intermediate steps to AI, but the question is whether these
are narrow-AI applications (the bane of AGI projects) or some sort of
(incomplete) AGI.
According to the approach I have charted out (the only one I understand),
the true path to AGI does not really involve commercially
BTW Ben, for the love of God, can you please tell me when your AGI book is
coming out? It's been in my Amazon shopping cart for 6 months now!
The publisher finally mailed me a copy of the book last week!
Ben
Well, the requirements to **design** an AGI on the high level are much
steeper than the requirements to contribute (as part of a team) to the
**implementation** (and working out of design details) of AGI.
I dare say that anyone with a good knowledge of C++, Linux, and
undergraduate computer
Yes, this is one of the things we are working towards with Novamente.
Unfortunately, meeting this low barrier based on a genuine AGI
architecture is a lot more work than doing so in a more bogus way
based on an architecture without growth potential...
ben
On 12/20/06, Joshua Fox [EMAIL
This post is a brief comment on PJ Manney's interesting essay,
http://www.pj-manney.com/empathy.html
Her point (among others) is that, in humans, storytelling is closely
tied with empathy, and is a way of building empathic feelings and
relationships. Mirror neurons and other related mechanisms
Joshua Fox wrote:
Any comments on this: http://news.com.com/2100-11395_3-6160372.html
Google has been mentioned in the context of AGI, simply because they
have money, parallel processing power, excellent people, an
orientation towards technological innovation, and important narrow AI
Richard, I long ago proposed a working definition of intelligence as
"Achieving complex goals in complex environments." I then went
through a bunch of trouble to precisely define all the component
terms of that definition; you can consult the Appendix to my 2006
book The Hidden Pattern
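One plausible way to formalize that slogan in LaTeX (an illustrative
reconstruction, not necessarily the exact formula from the book's Appendix):

\mathrm{Int}(S) \;=\; \sum_{g,\,e} w(g,e)\, P(S \text{ achieves goal } g \text{ in environment } e)

where the weights w(g,e) grow with the complexity of goal g and environment
e, so a system counts as more intelligent the more complex the goals it can
achieve across a wider range of complex environments.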
Alas, that was not quite the question at issue...
In the proof of AIXI's ability to solve the IQ test, is AIXI *allowed*
to go so far as to simulate most of the functionality of a human brain
in order to acquire its ability?
I am not asking you to make a judgment call on whether or not it
"AIXI is valueless."
Well, I agree that AIXI provides zero useful practical guidance to those of us
working on practical AGI systems.
However, as I clarified in a prior longer post, saying that mathematics is
valueless is always a risky proposition. Statements of this nature have been
On Jan 20, 2008 1:54 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Hi Natasha
After discussions with you and others in 2005, I created a revised
version of the essay,
which may not address all your complaints, but hopefully addressed some of
them.
http://www.goertzel.org/Chapter12_aug16_05
that was really refreshing!!!)
ben
Mike,
I certainly would like to see discussion of how species generally may be
artificially altered, (including how brains and therefore intelligence may
be altered) - and I'm disappointed, more particularly, that Natasha and any
other transhumanists haven't put forward some half-way
Craig Venter co-creating a new genome -
Just to be clear: They did not create a new genome, rather they are re-creating
a subset of a previously existing one...
is an example of the genetic keyboard
playing on itself, i.e. one genome [Craig Venter] has played with another
genome and will
On Jan 27, 2008 5:26 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Jan 27, 2008 9:29 PM, John K Clark [EMAIL PROTECTED] wrote:
Ben Goertzel [EMAIL PROTECTED]
we can think about a multi-multiverse, i.e. a collection of multiverses,
with a certain probability distribution over them
Nesov wrote:
Exactly. It needs stressing that probability is a tool for
decision-making and it has no semantics when no decision enters the
picture.
...
What's it good for if it can't be used (= advance knowledge)? For
other purposes we'd be better off with specially designed random
) notation.
--
Vladimir Nesov mailto:[EMAIL PROTECTED]
OK, but why can't they all be dumped in a single 'normal' multiverse?
If traveling between them is accommodated by 'decisions', there is a
finite number of them for any given time, so it shouldn't pose
structural problems.
The whacko, speculative SF hypothesis is that lateral movement between
current model of the universe is in many ways wrong ... it seems
interesting to me to speculate about what a broader, richer, deeper model
might look like
-- Ben Goertzel
(list owner, plus the guy who started this thread ;-)
On Feb 2, 2008 3:54 AM, Samantha Atkins [EMAIL PROTECTED] wrote:
WTF does
This article
http://www.physorg.com/news120735315.html
made me think of Johnjoe McFadden's theory
that quantum nonlocality plays a role in protein-folding
http://www.surrey.ac.uk/qe/quantumevolution.htm
H...
ben
...
thanks
Ben Goertzel
List Owner
On Feb 5, 2008 4:36 PM, Bruno Frandemiche [EMAIL PROTECTED] wrote:
hello, to me (stop me if you have the truth, I am very open)
http://www.spaceandmotion.com/wave-structure-matter-theorists.htm
cordially yours,
bruno
- Original message
From: Bruno
Hi Bruno,
effectively, my commentary is very short, so excuse me (I drive my PC with my
eyes because I have A.L.S. with tracheo and gastro, and I was a speaker, not a
writer, and it's difficult)
Well that is certainly a good reason for your commentaries being short!
hello ben
ok, I stop, no
http://www.codeplex.com/singularity
If the concept behind Novamente is truly compelling enough, it
should be no problem to make a successful pitch.
Eric B. Ramsay
Gee ... you mean, I could pitch the idea of funding Novamente to
people with money?? I never thought of that!! Thanks for the
advice ;-pp
Evidently, the concept
On Sun, Apr 6, 2008 at 12:21 PM, Eric B. Ramsay [EMAIL PROTECTED] wrote:
Ben:
I may be mistaken, but it seems to me that AGI today in 2008 is in the air
again after 50 years.
Yes
You are not trying to present a completely novel and
unheard of idea and with today's crowd of sophisticated
On Sun, Apr 6, 2008 at 4:42 PM, Derek Zahn [EMAIL PROTECTED] wrote:
I would think an investor would want a believable specific answer to the
following question:
When and how will I get my money back?
It can be uncertain (risk is part of the game), but you can't just wave
your hands
--
Ben
Of course what I imagine emerging from the Internet bears little resemblance
to Novamente. It is simply too big to invest in directly, but it will
present
many opportunities.
But the emergence of superhuman AGIs, like what a Novamente may eventually
become, will both dramatically alter the
Samantha,
You know, I am getting pretty tired of hearing this poor mouth crap. This
is not that huge a sum to raise or get financed. Hell, there are some very
futuristic rich geeks who could finance this single-handed and would not
really care that much whether they could somehow monetize
I don't think any reasonable person in AI or AGI will claim any of these
have been solved. They may want to claim their method has promise, but not
that it has actually solved any of them.
Yes -- it is true, we have not created a human-level AGI yet. No serious
researcher disagrees. So why
Hi,
Just my personal opinion...but it appears that the exponential technology
growth chart, which is used in many of the briefings, does not include
AI/AGI. It is processing-centric. When you include AI/AGI the exponential
technology curve flattens out in the coming years (5-7) and becomes
Brain-scan accuracy is a very crude proxy for understanding of brain
function; yet a much better proxy than anything existing for the case
of AGI...
On Sun, Apr 13, 2008 at 11:37 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Ben Goertzel wrote:
Hi,
Just my personal opinion