Linas Vepstas said:
To amplify: the rules for GoL are simple. Finding what they imply is not.
The rules for gravity are simple. Finding what they imply is not.
And I would argue that the rules of Friendliness are simple, and finding
what they imply is not.
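A minimal sketch of the GoL half of the analogy, assuming only the standard B3/S23 rules: the complete rule set fits in a few lines of Python, yet what a pattern implies (e.g., that a glider translates across the grid forever) can in general only be discovered by running it.

# Conway's Game of Life: the complete rule set in a few lines.
# Predicting what a pattern implies generally requires simulating it --
# the rules are simple, their consequences are not.
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next generation if it has exactly 3 live neighbors,
    # or 2 live neighbors and is already live (B3/S23).
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: five cells whose long-run behavior (perpetual diagonal
# translation) is nowhere stated in the rules above.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the original shape, shifted by (1, 1)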
Andrew Babian said:
Honestly, it seems pretty clear to me that whatever Richard's thing is with
complexity being the secret sauce for intelligence, and therefore everyone
having it wrong, is just foolishness. I've quit paying him any mind. Everyone
has his own foolishness. We just wait for
Simple. Unambiguous. Impossible to implement. (And not my proposal)
- Original Message -
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, October 04, 2007 7:26 PM
Subject: **SPAM** Re: [agi] Religion-free technical content
--- Mark Waser [EMAIL PROTECTED
Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, October 05, 2007 10:40 AM
Subject: **SPAM** Re: [agi] Religion-free technical content
--- Mark Waser [EMAIL PROTECTED] wrote:
Then state the base principles or the algorithm that generates them, without
ambiguity and without
I mean that ethics or friendliness is an algorithmically complex function,
like our legal system. It can't be simplified.
The determination of whether a given action is friendly or ethical or not is
certainly complicated but the base principles are actually pretty darn simple.
However, I
Matt Mahoney pontificated:
The probability distribution of language coming out through the mouth is the
same as the distribution coming in through the ears.
Wrong.
My goal is not to compress text but to be able to compute its probability
distribution. That problem is AI-hard.
Wrong
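For concreteness, a toy illustration (mine, not anything from the thread) of what "computing the probability distribution" of text means: a character-bigram model that assigns any string a log-probability. Under a model like this, compression and probability estimation are two views of the same problem (an arithmetic coder would spend about -log2(p) bits on the string); the AI-hard part is making the model's probabilities match actual human language use.

# Toy character-bigram language model: assigns any string a probability.
# The first character's probability is ignored for brevity.
import math
from collections import Counter

def train(corpus):
    pairs = Counter(zip(corpus, corpus[1:]))   # bigram counts
    contexts = Counter(corpus)                 # unigram (context) counts
    return pairs, contexts

def log2_prob(text, pairs, contexts, vocab_size=256, alpha=1.0):
    """Add-alpha smoothed log2 probability of `text` under the model."""
    lp = 0.0
    for a, b in zip(text, text[1:]):
        num = pairs[(a, b)] + alpha
        den = contexts[a] + alpha * vocab_size
        lp += math.log2(num / den)
    return lp

pairs, contexts = train("the cat sat on the mat. the dog sat on the log.")
for s in ("the cat", "the xqz"):
    print(s, log2_prob(s, pairs, contexts))  # familiar text scores higher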
: Thursday, October 04, 2007 4:42 PM
Subject: Re: [agi] Language and compression
--- Mark Waser [EMAIL PROTECTED] wrote:
Matt Mahoney pontificated:
The probability distribution of language coming out through the mouth is the
same as the distribution coming in through the ears.
Wrong.
Could
So do you claim that there are universal moral truths that can be applied
unambiguously in every situation?
What a stupid question. *Anything* can be ambiguous if you're clueless.
The moral truth of "Thou shalt not destroy the universe" is universal. The
ability to interpret it and apply it
A quick question: do people agree with the scenario where, once a
non-super-strong RSI AI becomes mainstream, it will replace the OS as the
lowest level of software?
For the system that it is running itself on? Yes, eventually. For most/all
other machines? No. For the initial version of the
communication and refuse to act on
it.
On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote:
Interesting. I believe that we have a fundamental disagreement. I
would argue that the semantics *don't* have to be distributed. My
argument/proof would be that I believe that *anything* can be described
PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, October 02, 2007 9:49 AM
Subject: **SPAM** Distributed Semantics [WAS Re: [agi] Religion-free
technical content]
Mark Waser wrote:
Interesting. I believe that we have a fundamental disagreement. I
would argue that the semantics *don't* have
that intelligent processes independent of it would not take over).
On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote:
The intelligence and goal system should be robust enough that a single or
small number of sources should not be able to alter the AGI's goals;
however, it will not do
Okay, I'm going to wave the white flag and say that what we should do is
all get together a few days early for the conference next March, in
Memphis, and discuss all these issues in high-bandwidth mode!
Definitely. I'm not sure that we're at all in disagreement except that I'm
still trying
So how do I get to be an assessor and decide?
- Original Message -
From: Jef Allbright [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, October 02, 2007 12:55 PM
Subject: **SPAM** Re: [agi] Religion-free technical content
On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote
] Religion-free technical content
On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote:
Effective deciding of these should questions has two major elements:
(1) understanding of the evaluation-function of the assessors with
respect to these specified ends, and (2) understanding of principles
Allbright [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, October 02, 2007 2:53 PM
Subject: **SPAM** Re: [agi] Religion-free technical content
On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote:
Wrong. There *are* some absolute answers. There are some obvious universal
"Thou shalt nots"
content
On 10/2/07, Mark Waser [EMAIL PROTECTED] wrote:
Do you really think you can show an example of a true moral universal?
Thou shalt not destroy the universe.
Thou shalt not kill every living and/or sentient being including yourself.
Thou shalt not kill every living and/or sentient except
, October 02, 2007 7:12 PM
Subject: **SPAM** Re: [agi] Religion-free technical content
--- Mark Waser [EMAIL PROTECTED] wrote:
Do you really think you can show an example of a true moral universal?
Thou shalt not destroy the universe.
Thou shalt not kill every living and/or sentient being
Matt,
Is there any particular reason why you're being so obnoxious?
His proposal said *nothing* of the sort and your sarcasm has buried any
value your post might have had.
- Original Message -
From: Matt Mahoney [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, October
So the real question is what is the minimal amount of intelligence needed
for a system to self-engineer improvements to itself?
Some folks might argue that humans are just below that
threshold.
Humans are only below the threshold because our internal systems are so
convoluted and difficult to
Mark Waser wrote:
3) The system would actually be driven by a very smart, flexible,
subtle sense of 'empathy' and would not force us to do painful things
that were good for us, for the simple reason that this kind of
nannying would be the antithesis of really intelligent
Answer in this case: (1) such elemental things as protection from
diseases could always be engineered so as not to involve painful
injections (we are assuming superintelligent AGI, after all),
:-) First of all, I'm not willing to concede an AGI superintelligent
enough to solve all the
An interesting article on 'Mirror touch' synaesthesia where people actually
feel a touch on their own skin when they watch someone else being touched.
Should be relevant to all of the pain discussions recently.
http://www.nature.com/news/2007/070611/full/070611-14.html
I would recommend removing statements like "Our team currently has 70
members, among them are several professors, many PhD and master students, and
programmers." from your website. The last thing you need is credibility
problems.
Well, if one of us becomes extremely successful biz-wise, but the other has
made some deep AI success, the one can always buy the other's company ;-)
Hey! If I become both extremely successful biz-wise *and* make some deep AI
success, can I give you the company and just make you pay me some
in making a particular quale pleasant
vs unpleasant?
Regards,
Jiri Jelinek
On 6/11/07, Mark Waser [EMAIL PROTECTED] wrote:
Hi Jiri,
A VNA, given sufficient time, can simulate *any* substrate. Therefore,
if *any* substrate is capable of simulating you (and thus pain), then a VNA
I was just playing with some thoughts on
potential security implications associated with the speculation of
qualia being produced as a side-effect of certain algorithmic
complexity on VNA.
Which is, in many ways, pretty similar to my assumption that consciousness
will be produced as a
I hardly think that matters given that it's truly a Singularity-class AI.
Do you sit around calculating which of your grandparents deserves the most
credit for bringing you into being? No, you take care of them as they need
it.
Thank you too, Josh -- maybe I was too cynical in thinking
A successful AI could do a superior job of dividing up the credit from
available historical records. (Anyone who doesn't spot this is not
thinking recursively.)
Yay! Thank you!
( . . . and to think that last night I decided to give up on the topic. But
don't worry, I'll still punt on it.)
YKY,
I think that I'm going to take this opportunity to give up on this
conversation for the following reasons:
Come on, there're no obvious reasons for this complex issue.
I have to disagree. There *ARE* certain things that really should be obvious
if you get it.
To put it another
It's a white-list. Click on the button and it will stop harassing you.
- Original Message -
From: David Orban
To: agi@v2.listbox.com
Sent: Wednesday, June 13, 2007 2:01 AM
Subject: Re: [agi] META: spam? ZONEALARM CHALLENGE
R. Schwall set up the filter on the incoming
http://www.gmu.edu/thinklearn/decade-mind-videos.html
I particularly recommend
Giulio Tononi, PhD, MD
Consciousness and the Brain
Dharmendra Modha, PhD
Towards Engineering the Mind by Reverse Engineering the Brain
and, even though I disliked the first 15 minutes . . . .
Vernon Smith, PhD,
Board members will be nominated and elected by the entire group, and
hopefully we can find some academics who have reputations in certain areas of
AI and are not contributors themselves. I tend to think that they will be
more judicious than other types of people.
Again, how is that
Has anyone tried a test of something as simple as per line of code /
function?
My first official programming course was a Master's level course at an
Ivy League college. The course project was a full-up LISP interpreter. My
program was ~800-900 lines and passed all testing with flying colors.
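For scale on the lines-of-code question (a sketch of mine, not the course project itself): the bare core of a Lisp evaluator fits in a few dozen lines of Python; the other hundreds of lines of a full-up interpreter are the reader, error handling, data types, and library, which is one reason raw per-line metrics are hard to interpret.

# A toy s-expression evaluator, for scale only: the evaluation core of
# Lisp is tiny; a "full-up" interpreter is mostly everything else.
import operator

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # drop the closing ")"
        return expr
    try:
        return int(tok)   # numeric literal
    except ValueError:
        return tok        # symbol

ENV = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(x, env=ENV):
    if isinstance(x, str):           # symbol lookup
        return env[x]
    if isinstance(x, int):           # literal
        return x
    if x[0] == "if":                 # (if test then else) special form
        _, test, then, alt = x
        return evaluate(then if evaluate(test, env) else alt, env)
    fn, *args = [evaluate(e, env) for e in x]
    return fn(*args)

print(evaluate(parse(tokenize("(+ 1 (* 2 3))"))))  # -> 7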
For feelings - like pain - there is a problem. But I don't feel like
spending much time explaining it little by little through many emails.
There are books and articles on this topic.
Indeed there are and they are entirely unconvincing. Anyone who writes
something can get it published.
If
Josh: If you want to understand why existing approaches to AI haven't
worked, try Beyond AI by yours truly
Any major point or points worth raising here?
Yo, troll,
If you're really interested, then go get the book and stop wasting
bandwidth.
If you had any clue about AGI, you'd
YKY: Think: if you have contributed something, it'd be in your best interest
to give accurate estimates rather than exaggerate or depreciate them
MW: Why wouldn't it be to my advantage to exaggerate my contributions?
YKY: But your peers in the network won't allow that.
That is an entirely
: Sunday, June 10, 2007 4:13 PM
Subject: Re: [agi] AGI Consortium
On 6/10/07, Mark Waser [EMAIL PROTECTED] wrote:
YKY: Think: if you have contributed something, it'd be in your best
interest to give accurate estimates rather than exaggerate or depreciate them
MW: Why wouldn't
Subject: Re: [agi] AGI Consortium
On 6/11/07, Mark Waser [EMAIL PROTECTED] wrote:
I'm going to temporarily ignore my doubts about accurate assessments to
try to get my initial question answered yet again.
Why wouldn't it be to my advantage to exaggerate my contributions
Think: if you have contributed something, it'd be in your best interest to
give accurate estimates rather than exaggerate or depreciate them
Why wouldn't it be to my advantage to exaggerate my contributions?
The problem of logical reasoning in natural language is a pattern recognition
problem (like natural language recognition in general). For example:
- Frogs are green. Kermit is a frog. Therefore Kermit is green.
- Cities have tall buildings. New York is a city. Therefore New York has
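A minimal sketch of the pattern-recognition framing (my illustration of the idea, not a proposal from the thread): the Kermit syllogism handled as surface string patterns rather than formal logic. Real language would need a large inventory of such patterns plus exception handling, which is exactly where it gets hard.

# Treating a textbook syllogism as string-pattern matching:
# "Xs are Y." + "Z is a X." licenses "Therefore Z is Y."
import re

def conclude(premise1, premise2):
    """Apply the pattern 'Xs are Y' / 'Z is a X' => 'Z is Y', if it fits."""
    m1 = re.fullmatch(r"(\w+)s are (\w+)\.", premise1)
    m2 = re.fullmatch(r"(\w+) is a (\w+)\.", premise2)
    if m1 and m2 and m1.group(1).lower() == m2.group(2).lower():
        return f"Therefore {m2.group(1)} is {m1.group(2)}."
    return None  # the pattern doesn't fit; a real system needs many more

print(conclude("Frogs are green.", "Kermit is a frog."))
# -> Therefore Kermit is green.
# Note: "Cities have tall buildings" already needs a different pattern.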
On 6/8/07, Mark Waser [EMAIL PROTECTED] wrote:
Actually, it should be On 6/8/07, Mark Waser [EMAIL PROTECTED] quoted
someone else as saying:
I don't agree with Sterling's indictment of Wikipedia since I don't believe
that a relatively unified vision is necessary for it. I do, however
Which exact aspect are you relying on and how are you implementing it?
Wow. That would take a long time to explain . . . . soon (I hope)
The main thing is the restriction on domain; all of his scripts were very
limiting, i.e., if you used a restaurant script and anything out of the
ordinary
This is the kind of control freak tendency that makes many startup
ventures untenable; if you cannot give up some control (and I will grant
such tendencies are not natural), you might not be the best person to be
running such a startup venture.
Yup, my suggestion of giving control to five
This absolutely never happened. I absolutely do not say such things, even
as a joke.
Your recollection is *very* different from mine. My recollection is
that you certainly did say it as a joke but that I was *rather* surprised
that you would say such a thing even as a joke. If anyone
Your brain can be simulated on a large/fast enough von Neumann architecture.
From the behavioral perspective (which is good enough for AGI) - yes,
but that's not the whole story when it comes to the human brain. In our
brains, information not only is and moves but also feels.
It's my
I did a deeper scan of my mind, and found that the only memory I actually
have is that someone at the conference said that they saw I wasn't in the
room that morning, and then looked around to see if there was a bomb.
My memory probably was incorrect in terms of substituting "fire" for "bomb"
Actually, information theory would argue that if the compactness was driven
by having less information due to a low transmission speed/bandwidth, then
you would likely have more ambiguity (i.e., less information on the
receiving side), not less.
Also, there have been numerous studies
My guess is that *after* people see and discuss each other's ideas, they'll
be more likely to change their views
Like Ben and Pei and Peter and Eliezer and Sam and Richard and . . . . ? What
are you basing your guess on?
I think we'll maintain a tree and linked-list hybrid data structure.
AGI would be at the root. Then we allow users to add nodes like
Novamente's breakdown of AGI modules into A, B, C,... and YKY's breakdown
of AGI modules... etc. Also some nodes may be temporally linked, i.e., task
A can
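A sketch of the described hybrid, with illustrative names only (Node and temporal_next are my inventions, not anything specified in the thread): tree edges carry the module breakdowns under the AGI root, and an optional linked-list pointer chains task nodes in execution order.

# Tree / linked-list hybrid: a tree of module breakdowns rooted at "AGI",
# plus optional temporal links chaining task nodes in order.
class Node:
    def __init__(self, label):
        self.label = label
        self.children = []         # tree edges (breakdown into modules)
        self.temporal_next = None  # linked-list edge (task ordering)

    def add_child(self, label):
        child = Node(label)
        self.children.append(child)
        return child

root = Node("AGI")
nm = root.add_child("Novamente breakdown")
a, b, c = (nm.add_child(x) for x in "ABC")
yky = root.add_child("YKY breakdown")

# Temporally link task nodes: A precedes B precedes C.
a.temporal_next = b
b.temporal_next = c

def walk(node, depth=0):
    order = f" -> {node.temporal_next.label}" if node.temporal_next else ""
    print("  " * depth + node.label + order)
    for child in node.children:
        walk(child, depth + 1)

walk(root)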
It will be very hard at that point to hold up in court, given that the AGI
must choose who gets what, because there sure ain't no precedent for a
non-legal entity like an AI making legal decisions.
Will have to have it declared a person first.
There is nothing necessary to hold up in
I think a system can get arbitrarily complex without being conscious --
consciousness is a specific kind of model-based, summarizing,
self-monitoring
architecture.
Yes. That is a good clarification of what I meant rather than what I said.
That said, I think consciousness is necessary
but
But instead, someday real soon now, you're going to realize that such a
credit attribution structure *is* fundamentally isomorphic to AGI.
... which is why it makes sense to look at architectures with a market as
one of their key mechanisms -- see my book and Eric Baum's.
Huh. I was doing
What distinguishes this venture from the hundreds of other ones that
are frankly indistinguishable from yours? What is that killer thing that you
can convincingly demonstrate you have that no one else can? Without
that, your chances are poor on many different levels.
I'm trying to find
That sounds like a contributor lawsuit waiting to happen outside of the
contributors contractually agreeing to have zero rights, and who would
want to sign such a contract?
And there's the rub. We've gotten into a situation where it's almost
literally impossible to honestly set up a
list readers should check old discredited approaches first
Would you really call Schank discredited or is it just that his line of
research petered out?
Isn't it indisputable that agency is necessarily on behalf of some
perceived entity (a self) and that assessment of the morality of any
decision is always only relative to a subjective model of rightness?
I'm not sure that I should dive into this but I'm not the brightest
sometimes . . . . :-)
http://www.the-scientist.com/article/home/53231/
it be friendly, but in the end, taking out those restrictions is an order of
magnitude easier than putting them in place.
James Ratcliff
Mark Waser [EMAIL PROTECTED] wrote:
What distinguishes this venture from the hundreds of other ones that
are frankly indistinguishable from yours? What
I do think it's a misuse of agency to ascribe moral agency to what is
effectively only a tool. Even a human, operating under duress, i.e.
as a tool for another, should be considered as having diminished or no
moral agency, in my opinion.
So, effectively, it sounds like agency requires both
:-) A lot of the reason why I was asking is because I'm effectively
somewhat (how's that for a pair of conditionals? :-) relying on Schank's
approach not having any showstoppers that I'm not aware of -- so if anyone
else is aware of any surprise show-stoppers in his work, I'd love to have
? Or are they not moral since they're not conscious decisions at the time
of choice? :-)
Mark
- Original Message -
From: Jef Allbright [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, June 05, 2007 5:45 PM
Subject: Re: [agi] Pure reason is a disease.
On 6/5/07, Mark Waser
Just a gentle suggestion: If you're planning to unveil a major AGI
initiative next month, focus on that at the moment.
I think that morality (aka Friendliness) is directly on-topic for *any* AGI
initiative; however, it's actually even more apropos for the approach that I'm
taking.
As I
Decisions are seen as increasingly moral to the extent that they enact
principles assessed as promoting an increasing context of increasingly
coherent values over increasing scope of consequences.
Or another question . . . . if I'm analyzing an action based upon the criteria
specified above
You may be assuming more flexibility in the securities and tax regulations
than actually exists now. They've tightened things up quite a bit over
the last ten years.
I don't think so. I'm pretty aware of the current conditions.
Equity and pseudo-equity (like incentive stock options -- ISOs)
provided that I thought they weren't just going to take my code and apply
some licence which meant I could no longer use it in the future...
I suspect that I wasn't clear about this . . . . You can always take what is
truly your code and do anything you want with it . . . . The problems
start
But how do you add more contributors without a lot of very contentious
work? Think of all the hassles that you've had with just the close-knit
Novamente folk (and I don't mean to disparage them or you at all) and then
increase it by some number (further complicated by distance, difference
Mark, have you looked at phantom stock plans?
Keith,
I have not since I was unaware of them. Thank you very much for the
pointer. I will investigate. (Now this is why I spend so much time
on-line -- If only there were some almost-all-knowing being that could take
what you're trying to
but I'm not very convinced that the singularity *will* automatically happen.
{IMHO I think the nature of intelligence implies it is not amenable to
simple linear scaling - likely not even log-linear
I share that guess/semi-informed opinion; however, while that means that I
am less
One possible method of becoming an AGI tycoon might be to have the
main core of code as conventional open source under some suitable
licence, but then charge customers for the service of having that core
system customised to solve particular tasks.
Uh, I don't think you're getting this. Any
The difference is significant: the real return between the best and worst
can easily be 2x.
Given that this is effectively a venture capital moon-shot as opposed to a
normal savings plan type investment, a variance of 2x is not as much as it
initially seems (and we would, of course, do
Using a non-existent AGI to rate contributions... is not a realistic idea.
Ok, I'll bite. Why not?
OK. I'm confused. You said both
let's say we don't program beliefs in consciousness or free will . . . .
The AGI will look at these concepts rationally. It will conclude that
they do not exist because human behavior can be explained without their
existence.
AND
I do believe in
Hi Jean-Paul,
I'm not sure that I understand your point but let me try to answer it
anyways (and you'll tell me if I missed :-).
I qualify as one of those mid-lifers but, due to impending college
expenses, I NEED my current non-AGI income stream. I'm not hugely motivated
by money
My approach is to accept the conflicting evidence and not attempt to
resolve it.
Yes, indeed, that does explain much.
important -- 6 which would necessarily include 8 and 9
potentially important -- 10 (average level is a poor gauge; if there are
sufficient highly-expert/superstar people, you can afford an equal number of
relatively non-expert people; if you don't have any real superstars, you're
dead in the
as the profound flaws in my suggestion?
(And TIA if you're willing to do so)
Mark
- Original Message -
From: Benjamin Goertzel
To: agi@v2.listbox.com
Sent: Sunday, June 03, 2007 1:57 PM
Subject: Re: [agi] Open AGI Consortium
YKY and Mark Waser ...
About
your suggestion is basically a dictatorship by you ;-)
Oh! I am horribly offended. :-o
That reaction is basically why I was planning on grabbing a bunch of other
trustworthy people to serve as joint owners (as previously mentioned).
without any clear promise of compensation in future
No
So, the share allocation is left undetermined, to be determined by the AGI
someday?
That's what I'm saying currently. The reality is that my project actually has
a clear intermediate product that would cleanly allow all current contributors
to determine an intermediate distribution -- but
You might get rich by writing a general software engine to make this
consortium idea work -- and it will take software, some very complex and
secure software to track and value the contributions of lots of people.
where
people or companies can form *any* sort of idea consortium they like
Well my feeling is that the odd compensation scheme, even if very clearly
presented, would turn off a VC or even an angel investor ...
The only thing that is odd about the compensation scheme is how you're
determining the allocation of the non-VC/investor shares/profits.
Why
Creating an entire, shippable product from scratch? Examples?
Entire product that does something? Absolutely. From scratch? Heck no --
and precisely my point. None of us should be doing entire projects from
scratch. If that's what you do, then you are not serving your clients and you
I've been doing a lot of the same thought process for what I'm trying to
set up. Here are the conclusions that I've come to (some of which are very
close to yours and some which vary tremendously).
1. People post their ideas into some layered set of systems that records them
permanently
You are anthropomorphising. Machines are not human. There is nothing wrong
with programming an AGI to behave as a willing slave whose goal is to obey
humans.
I disagree. Programming an AGI to behave as a willing slave is unsafe
unless you can *absolutely* guarantee that it will *always*
DotNetNuke seems to be very sophisticated... unless I could find someone to
write the modifications.
Don't let the size and amount of code intimidate you -- writing DotNetNuke
modules and modifications is easy (i.e. it is probably one of the most easily
hacked systems -- assuming that by
I'm willing to use what's available and useful. If I get a shot at it, I'm
thinking in terms of probably using Java, primarily because of the amount of
functionality thereby available off the shelf. But that only goes a small
part of the way.
(obligatory IMO) Java is a good language but a
Interesting setup. I fear that this and YKY's project will have difficulty
attracting contributors, as AGI folk appear to be rather cranky
individualists, but I hope it works out for you! Even though this
discussion (and the spinoff software engineering vs algorithms pissing
contest) is
(so you'd have
to figure out how to convert those to Struts or something home-grown).
- Original Message -
From: Russell Wallace
To: agi@v2.listbox.com
Sent: Saturday, June 02, 2007 10:40 AM
Subject: Re: [agi] Opensource Business Model
On 6/2/07, Mark Waser [EMAIL
How are you going to estimate the worth of contributions *before* we have
AGI? I mean, people need to get paid in the interim.
For my project, don't count on getting paid in the short-term interim. Where's
the money going to come from? Do you expect your project to pay people in the
Belief in consciousness and belief in free will are parts of the human
brain's programming. If we want an AGI to obey us, then we should not
program these beliefs into it.
Are we positive that we can avoid doing so? Can we prevent others from
doing so?
Would there be technical
What component do you have that can't exist in
a von Neumann architecture?
Brain :)
Your brain can be simulated on a large/fast enough von Neumann architecture.
Agreed, your PC cannot feel pain. Are you sure, however, that an entity
hosted/simulated on your PC doesn't/can't?
If the
But programming a belief in consciousness or free will seems to be a hard
problem that has no practical benefit anyway. It seems to be easier to build
machines without them. We do it all the time.
But we aren't programming AGI all the time. And you shouldn't be
hard-coding beliefs in
Yes, I believe there're people capable of producing income-generating stuff
in the interim. I can't predict how the project would evolve, but am
optimistic.
Ask Ben about how much that affects a project . . . .
If you flexibly enter contracts with partners on an individual basis, that's
But let's say we don't program beliefs in consciousness or free will (not that
we should). The AGI will look at these concepts rationally. It will conclude
that they do not exist because human behavior can be explained without their
existence. It will recognize that the human belief in a little
figure out a new sex move that is only effective in really humid climates ;-)
Or the shower . . . .
. . . . and now you've got me curious . . . . :-)
A week, however, is definitely nowhere near enough to create a useful
product.
Nope. I've made *a lot* of money consulting on sub-one-week
projects/products.
So you think the people who created products like Windows, Excel and Firefox
shouldn't be writing software?
No. I just
be detected
that accounts for a feeling people have, it must have been hard-wired by
evolution. Why can't morality be a learned behavior?
On 5/28/07, Mark Waser [EMAIL PROTECTED] wrote:
http://www.msnbc.msn.com/id/18899688
If Google came along and offered you $10 million for your AGI, would you
give it to them?
No, I would sell services.
:-) No. That wouldn't be an option. $10 million or nothing (and they'll
go off and develop it themselves).
How about the Russian mob for $1M and your life and the lives of
I think it is a serious mistake for anyone to say that machines cannot in
principle experience real feelings.
We are complex machines, so yes, machines can, but my PC cannot, even
though it can power AGI.
Agreed, your PC cannot feel pain. Are you sure, however, that an