Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-11 Thread Brad Paulsen

Dave,

Sorry for the tardy reply.  I had to devote some time to other pressing 
matters.


First, a general comment.  Some posters on this list have lately taken a very 
interesting approach to arguing their cases.  I believe this approach was 
evinced most recently, and most baldly, by list newbie Colin Hales.  
Apparently, Mr. Hales doesn't believe he is responsible for making his 
arguments clear or backing up his assertions of fact.  Rather, it is we who 
must educate ourselves about the background for his particular arguments and 
assertions so we can prove them to ourselves on his behalf.  Now, THAT's an 
ego!  This approach, of course, raises the question: why should I care what 
Mr. Hales argues or asserts if he isn't going to take the time and effort 
needed to convince me he's not just some poseur?


Does anyone else find this a tad insulting?  It's tantamount to saying, 
"Well, I've made my argument in English.  It may not be perfectly clear to 
you because, to really understand it, you need to be fluent in Esperanto. 
If you're not, that's your problem.  Go learn Esperanto so you can 
understand my fabulous reasoning and be convinced of my argument's 
soundness."  Get real.


I can (and will) ignore Mr. Hales.  But, then, you used this same approach 
in your last post to me when you wrote, "The OCP approach/strategy, both 
in crucial specifics of its parts and particularly in its total synthesis, 
*IS* novel; I recommend a closer re-examination!"


I think not. If you really care whether I think OCP's approach is novel, 
you have to convince me, not give me homework.  I'm not arguing OCP's 
position.  You are.  If you think I don't understand OCP well enough, and 
if you think that is important to get me to take your argument seriously, 
then it's up to you to do the heavy lifting.


In this case, though, I'll let you off the hook by gladly conceding the 
point.  I will accept as true the proposition that OCP's approach to NLU is 
completely novel.  Of course, I do this gladly because it makes not a bit 
of difference.


In the first place, I didn't argue that the OCP approach was not novel in 
either its design or implementation.  In fact, I'm sure it is.  I argued 
that trying to solve the artificial intelligence problem by, first, 
solving the NLU problem is not a novel strategy.  We have Mr. Turing to 
thank for it.  It has been tried before.  It has, to date, always failed.


But, as I said, this makes no difference, simply because the fact that the 
OCP strategy is novel doesn't prove it will work.  Indeed, it's not even 
good evidence.  Prior approaches that failed were also once novel.


If the problem of NLU is AI-complete (and this is widely believed to be the 
case), it will not fall to a finite algorithm with space/time complexity 
small enough to make it viable in a real-time AGI.  If NLU turns out to not 
be AI-complete, then we still have fifty years of past failed effort by 
many intelligent, sincere and dedicated people to support the argument that 
it is at least a very difficult problem.


My point has been, and still is, that NLU becomes a necessary condition of 
AGI IFF we define AGI as AGHI.  Many people simply can't conceive of a 
general intelligence that isn't human-like.  This is understandable since 
the only general intelligence we (think we) know something about is human 
intelligence.  In that context, cracking the NLU problem can (although 
still needn't necessarily) be viewed as a prerequisite to cracking the AGI 
problem.


But, human intelligence is not the only general intelligence we can imagine 
or create.  IMHO, we can get to human-beneficial, non-human-like (but, 
still, human-inspired) general intelligence much quicker if, at least for 
AGI 1.0, we avoid the twin productivity sinks of NLU and embodiment.


In the end, of course, both of us really have only our opinions.  You can't 
prove the OCP approach, novel though it may be, will, finally, crack the 
elusive NLU problem.  I can't prove it won't.  I agree, therefore, that we 
should agree to disagree and let history sort things out.


Cheers,
Brad

David Hart wrote:
On Mon, Oct 6, 2008 at 4:39 PM, Brad Paulsen [EMAIL PROTECTED] wrote:


So, it has, in fact, been tried before.  It has, in fact, always
failed. Your comments about the quality of Ben's approach are noted.
 Maybe you're right.  But, it's not germane to my argument which is
that those parts of Ben G.'s approach that call for human-level NLU,
and that propose embodiment (or virtual embodiment) as a way to
achieve human-level NLU, have been tried before, many times, and
have always failed.  If Ben G. knows something he's not telling us
then, when he does, I'll consider modifying my views.  But,
remember, my comments were never directed at the OpenCog project or
Ben G. personally.  They were directed at an AGI *strategy

Re: [agi] It is more important how AGI works than what it can do.

2008-10-11 Thread Brad Paulsen

Dave,

Well, I thought I'd described the "how" pretty well.  Even the "why."  See my recent 
conversation with Dr. Heger on this list.  I'll be happy to answer specific 
questions based on those explanations but I'm not going to repeat them 
here.  Simply haven't got the time.


Although I have not been asked to do so, I do feel I need to provide an ex 
post facto disclaimer.  Here goes:


I am aware of the approach being taken by Stephen Reed in the Texai 
project.  I am currently associated with that project as a volunteer.  What 
I have said previously (in this regard) is, however, my own interpretation 
and opinion insofar as what I have said concerned tactics or strategies 
that may be similar to those being implemented in the Texai project.  I'm 
pretty sure my interpretations and opinions are highly compatible with 
Steve's views even though they may not agree in every detail.  My comments 
should NOT, however, be taken as an official representation of the Texai 
project's tactics, strategies or goals.  End disclaimer.


I was asked by Dr. Heger to go into some of the specifics of the strategy I 
had in mind.  I honored his request and wrote quite extensively (for a list 
posting -- sorry 'bout that) about that strategy.  I have not argued, nor 
do I intend to argue, that I have an approach to AGI that is better, faster 
or more economical than approach X.  Instead, I have simply pointed out 
that NLU and embodiment problems have proven themselves to be extremely 
difficult (indeed, intractable to date).  I, therefore, on those grounds 
alone, believe (and it's just an OPINION, although I believe a 
well-reasoned one) that we will get to a human-beneficial AGI sooner (and, 
I guess, probably, therefore, cheaper) if we side-step those two proven 
productivity sinks.  For now.  End of argument.


I'm not trying to sell my AGI strategy or agenda to you or anyone else. 
Like many people on this list who have an opinion on these matters, I have 
a background as a practitioner in AI that goes back over twenty years. 
I've designed and written narrow-AI production (expert) system engines 
and been involved in knowledge engineering using those engines.  The 
results of my efforts have saved large corporations millions of dollars (if 
not billions, over time).  I can assure you that most of the humans who saw 
these systems come to life and out-perform their own human experts were 
pretty sure I'd succeeded in getting a human into the box.  To them, it was 
already AGI.  I'd gotten a computer to do something only a human being 
(their employee) had theretofore been able to do.  And I got the computer 
to do it BETTER and FASTER.  Of course, these were mostly non-technical 
people who didn't understand the technology (in many cases they had never 
even heard of it) and so, to them, there was a bit of "magic" involved.  We 
here, of course, know that was not the case.  While the stuff I built back 
in the 1980s and 1990s may not have been snazzy, whiz-bang AI with 
conversational robots and the whole Sci-Fi thing, it was still damn 
impressive and extremely human-beneficial.  No NLU.  No embodiment.


I don't claim to have a better way to get to AGI, just a less risky way, 
based on past experience in the field.  I have never intended to 
criticize any particular AGI approach.  I have not tried to show that my 
approach is conceptually superior to any other approach on any specific 
design point.  Indeed, I firmly believe that a multitude of vastly 
different approaches to this problem is a good thing.  At least initially.


As far as OCP's approach to embodiment is concerned, again it's neither the 
specifics nor the novelty of any particular approach that concerns me.  The 
efficacy of any approach to the embodiment problem can only be determined 
once it has been tried.  I'm only pointing out something everybody here 
knows full well: embodiment in various forms has, so far, failed to provide 
any real help in cracking the NLU problem.  Might it in the future?  Sure. 
But the key word there is "might."  When you go to the track to bet on a 
horse, do you look for the nag that's come in last or nearly last in every 
previous race that season and say to yourself, "Hey, I have a novel betting 
strategy and, regardless of what history shows (and the odds-makers say), I 
think I can make a killing here by betting the farm on that consistent 
loser!"  Probably not.  Why?  Because past performance, while not a 
guarantee of future performance, is really the only thing you have to go 
on, isn't it?


Cheers,
Brad

P.S.  Back in the early 1970's I once paid for a weekend of debauchery in 
Chicago from the proceeds of my $10 bet on a 20-to-1 horse at Arlington 
Park race track because I liked the name, "She's a Dazzler."  So it can 
happen.  The only question is: How much do you want to bet? ;-)


David Hart wrote:

Brad,

Your post describes your position *very* well, thanks.

But, it does not describe *how* or *why* your AI system might 

Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-11 Thread Brad Paulsen

Ben,

Well, I guess you told me!  I'll just be taking my loosely-coupled 
"bunch of clever narrow-AI widgets" right on out of here.  No need to 
worry about me venturing an opinion here ever again.  I have neither the 
energy nor, apparently, the intellectual ability to respond to a broadside 
like that from the top dog.


It's too bad.  I was just starting to feel at home here.  Sigh.

Cheers (and goodbye),
Brad

Ben Goertzel wrote:


A few points...

1) 
Closely associating embodiment with GOFAI is just flat-out historically 
wrong.  GOFAI refers to a specific class of approaches to AI that were 
pursued a few decades ago, which were not centered on embodiment as a 
key concept or aspect. 


2)
Embodiment based approaches to AGI certainly have not been extensively 
tried and failed in any serious way, simply because of the primitive 
nature of real and virtual robotic technology.  Even right now, the real 
and virtual robotics tech are not *quite* there to enable us to pursue 
embodiment-based AGI in a really tractable way.  For instance, humanoid 
robots like the Nao cost $20K and have all sorts of serious actuator 
problems ... and virtual world tech is not built to allow fine-grained 
AI control of agent skeletons ... etc.   It would be more accurate to 
say that we're 5-15 years away from a condition where embodiment-based 
AGI can be tried out without immense time-wastage on making 
not-quite-ready supporting technologies work.


3)
I do not think that humanlike NL understanding or humanlike embodiment 
is in any way necessary for AGI.   I just think that they seem to 
represent the shortest path to getting there, because they represent a 
path that **we understand reasonably well** ... and because AGIs 
following this path will be able to **learn from us** reasonably easily, 
as opposed to AGIs built on fundamentally nonhuman principles.


To put it simply, once an AGI can understand human language we can teach 
it stuff.  This will be very helpful to it.  We have a lot of experience 
in teaching agents with humanlike bodies, communicating using human 
language.  Then it can teach us stuff too.   And human language is just 
riddled through and through with metaphors to embodiment, suggesting 
that solving the disambiguation problems in linguistics will be much 
easier for a system with vaguely humanlike embodied experience.


4)
I have articulated a detailed proposal for how to make an AGI using the 
OCP design together with linguistic communication and virtual 
embodiment.  Rather than just a promising-looking assemblage of 
in-development technologies, the proposal is grounded in a coherent 
holistic theory of how minds work.


What I don't see in your counterproposal is any kind of grounding of 
your ideas in a theory of mind.  That is: why should I believe that 
loosely coupling a bunch of clever narrow-AI widgets, as you suggest, is 
going to lead to an AGI capable of adapting to fundamentally new 
situations not envisioned by any of its programmers?   I'm not 
completely ruling out the possibility that this kind of strategy could 
work, but where's the beef?  I'm not asking for a proof, I'm asking for 
a coherent, detailed argument as to why this kind of approach could lead 
to a generally-intelligent mind.


5)
It sometimes feels to me like the reason so little progress is made 
toward AGI is that the 2000 people on the planet who are passionate 
about it, are moving in 4000 different directions ;-) ...


OpenCog is an attempt to get a substantial number of AGI enthusiasts all 
moving in the same direction, without claiming this is the **only** 
possible workable direction. 

Eventually, supporting technologies will advance enough that some smart 
guy can build an AGI on his own in a year of hacking.  I don't think 
we're at that stage yet -- but I think we're at the stage where a team 
of a couple dozen could do it in 5-10 years.  However, if that level of 
effort can't be systematically summoned (thru gov't grants, industry 
funding, open-source volunteerism or wherever) then maybe AGI won't come 
about till the supporting technologies develop further.  My hope is that 
we can overcome the existing collective-psychology and 
practical-economic obstacles that hold us back from creating AGI 
together, and build a beneficial AGI ASAP ...


-- Ben G








On Mon, Oct 6, 2008 at 2:34 AM, David Hart [EMAIL PROTECTED] wrote:


On Mon, Oct 6, 2008 at 4:39 PM, Brad Paulsen [EMAIL PROTECTED] wrote:

So, it has, in fact, been tried before.  It has, in fact, always
failed. Your comments about the quality of Ben's approach are
noted.  Maybe you're right.  But, it's not germane to my
argument which is that those parts of Ben G.'s approach that
call for human-level NLU, and that propose embodiment (or
virtual embodiment) as a way to achieve human-level NLU, have
been tried before

Re: [agi] NARS and probability

2008-10-10 Thread Brad Paulsen

Pei, Ben G. and Abram,

Oh, man, is this stuff GOOD!  This is the real nitty-gritty of the AGI 
matter.  How does your approach handle counter-evidence?  How does your 
approach deal with insufficient evidence?  (Those are rhetorical questions, 
by the way -- I don't want to influence the course of this thread, just 
want to let you know I dig it and, mostly, grok it as well).  I love this 
stuff.  You guys are brilliant.  Actually, I think it would make a good 
publication: PLN vs. NARS -- The AGI Smack-down!  A win-win contest.


This is a rare treat for an old hacker like me.  And, I hope, educational 
for all (including the participants)!  Keep it coming, please!


Cheers,
Brad

Pei Wang wrote:

On Fri, Oct 10, 2008 at 8:03 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

Yah, according to Bayes rule if one assumes P(bird) = P(swimmer) this would
be the case...

(Of course, this kind of example is cognitively misleading, because if the
only knowledge the system has is "Swallows are birds" and "Swallows are
NOT swimmers," then it doesn't really know that the terms involved are
swallows, birds, swimmers, etc. ... then in that case they're just
almost-meaningless tokens to the system, right?)


Well, it depends on the semantics. According to model-theoretic
semantics, if a term has no reference, it has no meaning. According to
experience-grounded semantics, every term in experience has meaning
--- by the role it plays.

Further questions:

(1) Don't you intuitively feel that the evidence provided by
non-swimming birds says more about "Birds are swimmers" than
"Swimmers are birds"?

(2) If your answer for (1) is yes, then think about "Adults are
alcohol-drinkers" and "Alcohol-drinkers are adults" --- do they have
the same set of counter examples, intuitively speaking?

(3) According to your previous explanation, will PLN also take a red
apple as negative evidence for "Birds are swimmers" and "Swimmers are
birds," because it reduces the candidate pool by one? Of course, the
probability adjustment may be very small, but qualitatively, isn't it
the same as a non-swimming bird? If not, then what will the system do
about it?

Pei



On Fri, Oct 10, 2008 at 7:34 PM, Pei Wang [EMAIL PROTECTED] wrote:

Ben,

I see your position.

Let's go back to the example. If the only relevant domain knowledge
PLN has is "Swallows are birds" and "Swallows are NOT swimmers," will
the system assign the same lower-than-default probability to "Birds
are swimmers" and "Swimmers are birds"? Again, I only need a
qualitative answer.

Pei

On Fri, Oct 10, 2008 at 7:24 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

Pei,

I finally took a moment to actually read your email...



However, the negative evidence of one conclusion is no evidence of the
other conclusion. For example, "Swallows are birds" and "Swallows are
NOT swimmers" suggests "Birds are NOT swimmers," but says nothing
about whether "Swimmers are birds."

Now I wonder if PLN shows a similar asymmetry in induction/abduction
on negative evidence. If it does, then how can that effect come out of
a symmetric truth-function? If it doesn't, how can you justify the
conclusion, which looks counter-intuitive?

According to Bayes rule,

P(bird | swimmer) P(swimmer) = P(swimmer | bird) P(bird)

So, in PLN, evidence for P(bird | swimmer) will also count as evidence
for P(swimmer | bird), though potentially with a different weighting
attached to each piece of evidence

If P(bird) = P(swimmer) is assumed, then each piece of evidence
for each of the two conditional probabilities will count for the other
one symmetrically.

The intuition here is the standard Bayesian one.
Suppose you know how many things there are in the universe, and
that 1000 of them are swimmers.
Then if you find out that swallows are not
swimmers ... then, unless you think there are zero swallows,
this does affect P(bird | swimmer).  For instance, suppose
you think there are 10 swallows and 100 birds.  Then, if you know for
sure
that swallows are not swimmers, and you have no other
info but the above, your estimate of P(bird|swimmer)
should decrease... because of the 1000 swimmers, you now know there
are only 990 that might be birds ... whereas before you thought
there were 1000 that might be birds.

And the same sort of reasoning holds for **any** probability
distribution you place on the number of things in the universe,
the number of swimmers, the number of birds, the number of swallows.
It doesn't matter what assumption you make, whether you look at
n'th order pdf's or whatever ... the same reasoning works...
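
To make the counting argument above concrete, here is a minimal Python sketch.
The specific numbers (a universe of 10,000 things, 1,000 swimmers, 100 birds,
10 of them swallows) are illustrative assumptions only, not figures from the
exchange above; the sketch just shows the qualitative point that learning
"swallows are not swimmers" shrinks the pool of birds that could be swimmers,
so the estimate of P(bird|swimmer) drops slightly.

# Illustrative counting model; all numbers are assumed for the example.
UNIVERSE = 10_000   # total things
SWIMMERS = 1_000
BIRDS    = 100
SWALLOWS = 10       # swallows are birds

# Before learning "swallows are not swimmers": under a uniform model, the
# expected fraction of swimmers that are birds is BIRDS / UNIVERSE.
p_bird_given_swimmer_before = (SWIMMERS * BIRDS / UNIVERSE) / SWIMMERS   # 0.01

# After learning it: the 10 swallows are excluded from the pool the swimmers
# are drawn from, so only 90 birds remain candidates among 9,990 things.
candidate_birds = BIRDS - SWALLOWS        # 90
candidate_pool  = UNIVERSE - SWALLOWS     # 9,990
p_bird_given_swimmer_after = (SWIMMERS * candidate_birds / candidate_pool) / SWIMMERS

print(p_bird_given_swimmer_before)   # 0.01
print(p_bird_given_swimmer_after)    # ~0.009009 -- strictly smaller, as claimed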

From what I understand, your philosophical view is that it's somehow
wrong for a mind to make some assumption about the pdf underlying
the world around it?  Is that correct?  If so I don't agree with this...
I
think this kind of assumption is just part of the inductive bias with
which
a mind approaches the world.

The human mind may well have particular pdf's for stuff like birds and
trees wired into it, as we evolved to deal with these things.  But
that's
not really 

Re: [agi] It is more important how AGI works than what it can do.

2008-10-06 Thread Brad Paulsen



Dr. Matthias Heger wrote:

Brad Paulsen wrote: "The question I'm raising in this thread is more one of
priorities and allocation of scarce resources.  Engineers and scientists
comprise only about 1% of the world's population.  Is human-level NLU
worth the resources it has consumed, and will continue to consume, in
the pre-AGI-1.0 stage? Even if we eventually succeed, would it be worth
the enormous cost? Wouldn't it be wiser to go with the strengths of
both humans and computers during this (or any other) stage of AGI
development?"

I think it is not so important what abilities our first AGIs will have. 
Human language would be a nice feature but it is not necessary.


Agreed.  And nothing in the above quote indicates otherwise.  I'm only 
arguing that we should not spend scarce resources now (or ever, really) 
trying to implement unnecessary features.  Both human-level NLU and 
human-like embodiment are, in my considered opinion, unnecessary for AGI 1.0.



It is more important how it works. We want to develop intelligent
software which has the potential to solve very different problems in
different domains. This is the main idea of AGI.

Imagine someone thinks he has built an AGI. How can he convince the
community that it is in fact AGI and not AI? If he shows some
applications where his AGI works, then this is an indication of the G in
his AGI, but it is no proof at all.


I agree.

Even a Turing test would be no good test because, given n questions for
the AGI, I can never be sure whether it can pass the test for a further n
questions.


Ah, I see you've met my friend Mr. David Hume.


AGI is inherently a white-box
problem, not a black-box problem.



A chess-playing computer is for many people a stunning machine. But we
know HOW it works, and only(!) because we know the HOW can we evaluate
the potential of this approach for general AGI.

Brad, for this reason I think your question about whether the first AGI
should have the ability for human language or not is not so important.
If you can create software which has the ability to solve very
different problems in very different domains, then you have solved the
main problem of AGI.


Actually, I disagree with you here.  There is really no need to create a 
single AGI that can solve problems in multiple domains.  Most humans can't 
do that.  We can, more easily, I believe, coordinate a network of AGI 
agents that are, each, experts in a single domain.  These experts would be 
trained by human experts (as well as be able to learn from experience) and 
would be able to exchange information across domains as needed (need being 
determined, perhaps, by an expert supervisor AGI agent).  None of these 
agents, alone, would qualify as AGI (because they are narrow-domain 
experts).  The system in which these AGI experts function would, however, 
constitute true AGI.


Your reply makes it sound like I have a question about whether the first 
AGI should have human language ability.  I have no question about this. 
What I have is an informed opinion.  It is this: requiring solution of an 
AI-complete problem (human-level NLU) is the kiss of death for the success 
of any AGI concept.  If we let go of this strategy and concentrate on 
making non-human-like intelligences (using already-proven AI strategies 
that do not rely on NLU or embodiment and that leverage the strengths of 
the only non-human intelligence we have at present), I believe we will get 
to much more powerful AGI much sooner.


My concept of AGI holds that creating many different domain experts using 
proven, narrow-AI technology and, then, coordinating a vast network of 
these domain experts to identify/solve complex, cross-domain problems (in 
real-time and concurrently if necessary) will, in fact, result in a system 
that has a problem-solving capability greater than any single human being. 
Without requiring human-level NLU or embodiment. It will be more robust 
(massive redundancy, such as that found in biologically evolved systems, is 
the key here) than any human being, be quicker than any human being and be 
more accurate than any human being (or, especially, an organization of human 
beings -- have you ever tried to get an error in your HMO medical records 
corrected?).
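
Purely to illustrate the shape of that architecture, here is a minimal sketch 
in Python of a supervisor routing queries to single-domain expert agents.  The 
class names, toy domains and canned answers are hypothetical stand-ins, not a 
description of Texai or any other existing system; a real coordinator would 
need far richer routing, learning, and cross-domain exchange.

class DomainExpert:
    """A narrow-AI expert that answers queries in a single domain."""
    def __init__(self, domain, knowledge):
        self.domain = domain
        self.knowledge = knowledge   # stand-in for a rule base or trained model

    def answer(self, query):
        # A real expert would run its inference engine here.
        return self.knowledge.get(query, f"[{self.domain}] no answer")


class Supervisor:
    """Routes each query to the expert whose domain matches; a fuller version
    would also pass one expert's answer to another across domains."""
    def __init__(self, experts):
        self.experts = {e.domain: e for e in experts}

    def solve(self, domain, query):
        expert = self.experts.get(domain)
        return expert.answer(query) if expert else "no expert for that domain"


gp  = DomainExpert("medicine", {"fever and cough": "likely viral; rest and fluids"})
tax = DomainExpert("tax", {"home office": "deductible if used exclusively for work"})
network = Supervisor([gp, tax])

print(network.solve("medicine", "fever and cough"))
print(network.solve("tax", "home office"))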


For example, in the (near, I hope) future when you feel sick, you will sit 
down at your computer and call up a general practitioner (GP) AGI agent (it 
doesn't really matter from where, but assume from the Internet).  This will 
be the same GP AGI agent anyone else anywhere in the world would call up 
(except, of course, each human is invoking a, localized, instance of the GP 
AGI agent).  Note that you're ahead of the game already.  You didn't have 
to wait two weeks to get an appointment (at 7 AM).  You 
didn't have to go to a remote location (the doctor's office or clinic). The 
visit to your doctor is already much less stressful, a medical benefit in 
and of itself.  Once the GP AGI responds, you will only have 

Re: AW: AW: [agi] I Can't Be In Two Places At Once.

2008-10-05 Thread Brad Paulsen



Dr. Matthias Heger wrote:

Brad Paulsen wrote: "More generally, as long as AGI designers and
developers insist on simulating human intelligence, they will have to
deal with the AI-complete problem of natural language understanding.
Looking for new approaches to this problem, many researchers (including
prominent members of this list) have turned to embodiment (or virtual
embodiment) for help."

We only know one human-level intelligence which works. And this works
with embodiment. So, for this reason, it seems to be a useful approach.


Dr. Heger,

First, I don't subscribe to the belief that AGI 1.0 need be human-level. 
In fact, my belief is just the opposite: I don't think it should be 
human-level.  And, with all due respect sir, while we may know that 
human-level intelligence works, we have no idea (or very little idea) *how* 
it works.  That, to me, seems to be the more important issue.


If we did have a better idea of how human-level intelligence worked, we'd 
probably have built a human-like AGI by now.  Instead, for all we know, 
human intelligence (and not just the absence or presence or degree thereof 
in any individual human) may be at the bottom end of the scale in the 
universe of all possible intelligences.


You are also, again with all due respect, incorrect in saying that we have 
no other intelligence with which to work.  We have the digital computer. 
It can beat expert humans at the game of chess.  It can beat any human at 
arithmetic -- both in speed and accuracy.  Unlike humans, it remembers 
anything ever stored in its memory and can recall anything in its memory 
with 100% accuracy.  It never shows up to work tired or hung over.  It 
never calls in sick.  On the other hand, what a digital computer doesn't do 
well at present, things like understanding human natural language and being 
creative (in a non-random way), humans do very well.


So, why are we so hell-bent on building an AGI in our own image?  It just 
doesn't make sense when it is manifestly clear that we know how to do 
better.  Why aren't we designing and developing an AGI that leverages the 
strengths, rather than attempts to overcome the weaknesses, of both forms 
of intelligence?


For many tasks that would be deemed intelligent if Turing's imitation game 
had not required natural HUMAN language understanding (or the equivalent 
mimicking thereof), we have already created a non-human intelligence 
superior to human-level intelligence.  It thinks nothing like we do 
(base-2 vs. base-10) yet, for many feats of intelligence only humans used 
to be able to perform, it is a far superior intelligence.  And, please 
note, not only is human-like embodiment *not* required by this 
intelligence, it would be (as it is to the human chess player) a HINDRANCE.



But, of course, if we always use humans as a guide to develop AGI,
then we will probably obtain the same limitations we observe in humans.

I actually don't have a problem with using human-level intelligence as an 
*inspiration* for AGI 1.0.  Digital computers were certainly inspired by 
human-level intelligence.  I do, however, have a problem with using 
human-level intelligence as a *destination* for AGI 1.0.



I think an AGI which should be useful for us must be a very good
scientist, physicist and mathematician. Is the human kind of learning by
experience and the human kind of intelligence good for this job? I don't
think so.

Most people on this planet are very poor in these disciplines and I
don't think that this is only a question of education. There seems to be
a very subtle fine tuning of genes necessary to change the level of
intelligence from a monkey to the average human. And there is an even
more subtle fine tuning necessary to obtain a good mathematician.



One must be careful with arguments from genetics.  The average chimp will 
beat any human for lunch in a short-term memory contest.  I don't care how 
good the human contestant is at mathematics.  Since judgments about 
intelligence are always relative to the environment in which it is evinced, 
in an environment where those with good short-term memory skills thrive and 
those without barely survive, chimps sure look like the higher intelligence.



This is discouraging for the development of AGI because it shows that
human level intelligence is not only a question of the right
architecture but it seems to be more a question of the right fine tuning
of some parameters. Even if we know that we have the right software
architecture, then the real hard problems would still arise.



Perhaps.  But your first sentence should have read, "This is discouraging 
for the development of HUMAN-LEVEL AGI because..."  It doesn't really 
matter to a non-human AGI.



We know that humans can swim. But who would create a swimming machine by
following the example of the human anatomy?



Yes.  Just as we didn't design airplanes to fly bird-like, even though 
the bird was our best source of inspiration for developing 

Re: AW: AW: AW: [agi] I Can't Be In Two Places At Once.

2008-10-05 Thread Brad Paulsen



Dr. Matthias Heger wrote:


Brad Paulsen wrote: "Fortunately, as I argued above, we do have other
choices.  We don't have to settle for human-like."

So far, I do not see other choices. Chess is AI but not AGI.


Yes, I agree but IFF by AGI you mean human-level AGI.  As you point out
below, a lot has to do with how we define AGI.


Your idea of an incremental roadmap to human-level AGI is interesting,
but I think everyone who tries to build a human-level AGI already makes
incremental experiments and first steps with non-human-level AGI in
order to make a proof of concept. I think, Ben Goertzel has done some
experiments with artificial dogs and other non-human agents.

So it is only a matter of definition what we mean by AGI 1.0. I think we
now already have AGI 0.0.x, and the goal is AGI 1.0, which can do the same
as a human.

Why this goal? An AGI which functionally resembles a human (not necessarily
in algorithmic details) has the great advantage that everyone can
communicate with this agent.

Yes, but everyone can communicate with "baby AGI" right now using a 
highly-restricted subset of human natural language.  The system I'm working 
on now uses the simple, declarative sentence, the propositional (if/then) 
rule statement, and the simple query as its NL interface.  The declarations 
of fact and propositional rules are upgraded, internally, to FOL+.  AI-agent 
to AI-agent communication is done entirely in FOL+.  I had considered using 
Prolog for the human interface, but the non-success of Prolog in a community 
(computer programmers) already expert at communicating with computers using 
formal languages caused me to drop back to the more difficult, but not 
impossible, semi-formal NL approach.
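
To give a feel for what that interface amounts to, here is a minimal sketch in 
Python: simple declarative sentences become facts, if/then statements become 
rules, and a trivial forward-chainer answers queries.  The sentence syntax and 
function names are illustrative assumptions only; this is not the actual FOL+ 
representation, nor anything from the Texai code base.

def parse_fact(sentence):
    # "Socrates is a man" -> ("Socrates", "man")
    subject, _, predicate = sentence.partition(" is a ")
    return (subject, predicate)

def parse_rule(sentence):
    # "if X is a man then X is a mortal" -> ("man", "mortal")
    body = sentence[len("if "):]                  # drop the leading "if "
    condition, _, conclusion = body.partition(" then ")
    return (parse_fact(condition)[1], parse_fact(conclusion)[1])

def forward_chain(facts, rules):
    # Apply the rules repeatedly until no new facts can be derived.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for subject, predicate in list(derived):
            for if_pred, then_pred in rules:
                if predicate == if_pred and (subject, then_pred) not in derived:
                    derived.add((subject, then_pred))
                    changed = True
    return derived

facts = [parse_fact("Socrates is a man")]
rules = [parse_rule("if X is a man then X is a mortal")]
kb = forward_chain(facts, rules)
print(("Socrates", "mortal") in kb)   # True: answers "Is Socrates a mortal?"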


We don't need to crack the entire NLU problem to be able to communicate 
with AGIs in a semi-formalized version of natural human language.  Sure, 
it can get tedious, just as talking to a two-year-old human child can get 
tedious (unless it's your kid, of course; then it's fascinating!).  Does 
it impress people at demos?  The average person?  Yep, it pretty much 
does.  Even though it's far from finished at this time.  Skeptical AGHI 
designers and developers?  Not so much.  But, I'm working on that!


The question I'm raising in this thread is more one of priorities and 
allocation of scarce resources.  Engineers and scientists comprise only 
about 1% of the world's population.  Is human-level NLU worth the resources 
it has consumed, and will continue to consume, in the pre-AGI-1.0 stage? 
Even if we eventually succeed, would it be worth the enormous cost? 
Wouldn't it be wiser to go with the strengths of both humans and 
computers during this (or any other) stage of AGI development?


Getting digital computers to understand natural human language at 
human-level has proven itself to be an AI-complete problem.  Do we need 
another fifty years of failure to achieve NLU using computers to finally 
accept this?  Developing NLU for AGI 1.0 is not playing to the strengths of 
the digital computer or of humans (who only take about three years to gain 
a basic grasp of language and continue to improve that grasp as they age 
into adulthood).


Computers calculate better than do humans.  Humans are natural language 
experts.  IMHO, saying that the first version of AGI should include 
enabling computers to understand human language like humans is just about 
as silly as saying the first version of AGI should include enabling humans 
to be able to calculate like computers.


IMHO, embodiment is another losing proposition where AGI 1.0 is concerned. 
 For all we know, embodiment won't work until we can produce an artificial 
bowel movement.  It's the "To think like Einstein, you have to stink like 
Einstein" theory.  Well, I don't want AGI 1.0 to think like Einstein.  I 
want it to think BETTER than Einstein (and without the odoriferous 
side-effect, thank you very much).



It would be interesting to me to know which set of abilities you want
AGI 1.0 to have.

Well, we (humanity) need, first, to decide *why* we want to create another 
form of intelligence.  And the answer has to be something other than 
"because we can."  What benefits do we propose should issue to humanity 
from such an expensive pursuit?  In other words, what does 
"human-beneficial AGI" really mean?


Only once we have ironed out our differences in that regard (or, at least, 
have produced a compromise on a list of core abilities), should we start 
thinking about an implementation.  In general, though, when it comes to 
implementation, we need to start small and play to our strengths.


For example, people who want to build AGHI tend to look down their noses at 
classic, narrow-AI successes such as expert (production) systems (Ben G. is 
NOT in this group, BTW).  This has prevented these folks from even 
considering using this technology to achieve AGI 1.0.  I *am* (proudly and 
loudly) using this technology to build bootstrapping intelligent agents 
for AGI.


Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-05 Thread Brad Paulsen



David Hart wrote:
On Sun, Oct 5, 2008 at 7:29 PM, Brad Paulsen [EMAIL PROTECTED] wrote:


[snip]  Unfortunately, as long as the mainstream AGI community
continue to hang on to what should, by now, be a
thoroughly-discredited strategy, we will never (or too late) achieve
human-beneficial AGI.


What a strange rant! How can something that's never before been 
attempted be considered a "thoroughly-discredited strategy"? I.e., 
creating an AI system designed for *general learning and 
reasoning* (one with AGI goals clearly thought through to a greater 
degree than anyone has attempted previously: 
http://opencog.org/wiki/OpenCogPrime:Roadmap ) and then carefully and 
deliberately progressing that AI through Piagetan-inspired 
stages of learning and development, all the while continuing to 
methodically improve the AI with ever more sophisticated software 
development, cognitive algorithm advances (e.g. planned improvements to 
PLN and MOSES/Reduct), reality modeling and testing iterations, 
homeostatic system tuning, intelligence testing and metrics, etc.


Please: "strange rant"?  I've been known to employ inflammatory rhetoric in 
the past when my blood was boiling, and I have always been sorry I did it. 
I have an opinion.  It doesn't agree with your opinion.  That's 
called a disagreement amongst peers.  Not a strange rant.


First, you have taken my statement out of context.  I was NOT referring to 
Ben G.'s overall approach to AGI.  His *concept* of AGI, if you will.  I 
was referring to the (not his) strategy of making human-level NLU a 
prerequisite for AGI (this is not a strategy pioneered by Ben G.).


Human-level NLU is an AI-complete problem.  So, this strategy makes the 
goal of getting to AGI 1.0 dependent on solving an AI-complete problem. 
The strategy of using embodiment to help crack the NLU problem (also not 
pioneered by Ben G.) may very well be another AI-complete problem (indeed, 
it may contain a whole collection of AI-complete problems).  I don't think 
that's a very good plan.  You, apparently, do.  I can point to past 
failures, you can only point to future possibilities.  Still, neither of us 
is going to convince the other we are right.  End of story.  Time will tell 
(and this e-mail list is conveniently archived for later reference).


Second, "never before been attempted"?  Simply not true.  I was in 
high school when this stuff was first attempted.  I personally remember 
reading about it.  I haven't succumbed to Alzheimer's yet.  By the time I 
got to college, most of the early predictions had already been shown to 
have been way too optimistic.  But, since eyewitness testimony is not 
usually good enough, I give you this quote from the Wikipedia article on 
"Strong AI" (which is what searching Wikipedia for "AGI" will get you):


The first generation of AI researchers were convinced that [AGI] was 
possible and that it would exist in just a few decades.  As AI pioneer 
Herbert Simon wrote in 1965: "machines will be capable, within twenty 
years, of doing any work a man can do."[10]  Their predictions were the 
inspiration for Stanley Kubrick and Arthur C. Clarke's character HAL 9000, 
who accurately embodied what AI researchers believed they could create by 
the year 2001.  Of note is the fact that AI pioneer Marvin Minsky was a 
consultant[11] on the project of making HAL 9000 as realistic as possible 
according to the consensus predictions of the time, having himself said on 
the subject in 1967, "Within a generation...the problem of creating 
'artificial intelligence' will substantially be solved."[12]

(http://en.wikipedia.org/wiki/Artificial_general_intelligence)

So, it has, in fact, been tried before.  It has, in fact, always failed. 
Your comments about the quality of Ben's approach are noted.  Maybe you're 
right.  But, it's not germane to my argument which is that those parts of 
Ben G.'s approach that call for human-level NLU, and that propose 
embodiment (or virtual embodiment) as a way to achieve human-level NLU, 
have been tried before, many times, and have always failed.  If Ben G. 
knows something he's not telling us then, when he does, I'll consider 
modifying my views.  But, remember, my comments were never directed at the 
OpenCog project or Ben G. personally.  They were directed at an AGI 
*strategy* not invented by Ben G. or OpenCog.




One might well have said in early 1903 that the concept of powered 
flight was a thoroughly-discredited strategy. It's just as silly to 
say that now [about Goertzel's approach to AGI] as it would have been to 
say it then [about the Wright brothers' approach to flight].




What?  No, it's not "just as silly."

Let me see if I have this straight.  You would have me believe that because 
"one might as well have said in early 1903 the concept of powered flight 
was a 'thoroughly-discredited' strategy," my objection to a 2008 AGI 
strategy is just as silly.  Nice try

Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-04 Thread Brad Paulsen
Dr. Heger,

Point #3 is brilliantly stated.  I couldn't have expressed it better.  And
I know this because I've been trying to do so, in slightly broader terms,
for months on this list.  Insofar as providing an AGI with a human-biased
sense of space and time is required to create a human-like AGI (what I
prefer to call AG*H*I), I agree it is a mistake.

More generally, as long as AGI designers and developers insist on
simulating human intelligence, they will have to deal with the AI-complete
problem of natural language understanding.  Looking for new approaches to
this problem, many researchers (including prominent members of this list)
have turned to embodiment (or virtual embodiment) for help.  IMHO, this
is not a sound tactic because human-like embodiment is, itself, probably an
AI-complete problem.

Insofar as achieving human-like embodiment and human natural language
understanding is possible, it is also a very dangerous strategy.  The
process of understanding human natural language through human-like
embodiment will, of necessity, lead to the AGHI developing a sense of self.
 After all, that's how we humans got ours (except, of course, the concept
preceded the language for it).  And look how we turned out.

I realize that an AGHI will not turn on us simply because it understands
that we're not (like) it (i.e., just because it acquired a sense of self).
  But, it could.  Do we really want to take that chance?  Especially when
it's not necessary for human-beneficial AGI (AGI without the silent H)?

Cheers,
Brad


Dr. Matthias Heger wrote:
 1. We do not feel ourselves to be at exactly a single point in space. Instead,
 we identify ourselves with our body, which consists of several parts that
 are already at different points in space. Your eye is not at the same place
 as your hand.
 I think this is proof that a distributed AGI will not need to have a
 completely different conscious state for a model of its position in space
 than we already have.
 
 2. But to a certain degree you are of course right that we have a map of our
 environment and we know our position (which is not a point, because of 1) in
 this map. In the brain of a rat there are neurons which each represent a
 position in the environment. Researchers could predict the position of the
 rat just by looking into the rat's brain.
 
 3. I think it is extremely important that we give an AGI no bias about
 space and time as we seem to have. Our intuitive understanding of space and
 time is useful for our life on earth, but it is completely wrong, as we know
 from the theory of relativity and quantum physics. 
 
 -Matthias Heger
 
 
 
 -----Original Message-----
 From: Mike Tintner [mailto:[EMAIL PROTECTED]] 
 Sent: Saturday, 4 October 2008 02:44
 To: agi@v2.listbox.com
 Subject: [agi] I Can't Be In Two Places At Once.
 
 The foundation of the human mind and system is that we can only be in one 
 place at once, and can only be directly, fully conscious of that place. Our 
 world picture,  which we and, I think, AI/AGI tend to take for granted, is 
 an extraordinary triumph over that limitation   - our ability to conceive of
 
 the earth and universe around us, and of societies around us, projecting 
 ourselves outward in space, and forward and backward in time. All animals 
 are similarly based in the here and now.
 
 But, if only in principle, networked computers [or robots] offer the 
 possibility for a conscious entity to be distributed and in several places 
 at once, seeing and interacting with the world simultaneously from many 
 POV's.
 
 Has anyone thought about how this would change the nature of identity and 
 intelligence? 
 
 
 
 


Re: [agi] Let's face it, this is just dumb.

2008-10-03 Thread Brad Paulsen
Wow, that's a pretty strong response there, Matt.  Friends of yours?

If I were in control of such things, I wouldn't DARE walk out of a lab and
announce results like that.  So I have no fear of being the one to bring
that type of criticism on myself.  But, I'm just as vulnerable as any of us
to having colleagues do it for (to) me.

So, yeah.  I have a problem with premature release, or announcement, of a
technology that's associated with an industry in which I work.  It's
irresponsible science when scientists do it.  It's irresponsible marketing
(now, there's a redundant phrase for you) when company management does it.

And, it's irresponsible for you to defend such practices.  That stuff
deserved to be mocked.  Get over it.

Cheers,
Brad


Matt Mahoney wrote:
 So here is another step toward AGI, a hard image classification problem
 solved with near human-level ability, and all I hear is criticism.
 Sheesh! I hope your own work is not attacked like this.
 
 I would understand if the researchers had proposed something stupid like
 using the software in court to distinguish adult and child pornography.
 Please try to distinguish between the research and the commentary by the
 reporters. A legitimate application could be estimating the average age
 plus or minus 2 months of a group of 1000 shoppers in a marketing study.
 
 
 In any case, machine surveillance is here to stay. Get used to it.
 
 -- Matt Mahoney, [EMAIL PROTECTED]
 
 
 --- On Thu, 10/2/08, Bob Mottram [EMAIL PROTECTED] wrote:
 
 From: Bob Mottram [EMAIL PROTECTED]
 Subject: Re: [agi] Let's face it, this is just dumb.
 To: agi@v2.listbox.com
 Date: Thursday, October 2, 2008, 6:21 AM

 2008/10/2 Brad Paulsen [EMAIL PROTECTED]:
 It boasts a 50% recognition accuracy rate +/-5 years and an 80%
 recognition accuracy rate +/-10 years.  Unless, of course, the subject
 is wearing a big floppy hat, makeup or has had Botox treatment
 recently.  Or found his dad's Ronald Reagan mask.  'Nuf said.
 
 
 Yes.  This kind of accuracy would not be good enough to enforce
 age-related rules surrounding the buying of certain products, nor does it 
 seem likely to me that refinements of the technique will give the 
 needed accuracy.  As you point out, people have been trying to fool 
 others about their age for millennia, and this trend is only going to 
 complicate matters further.  In the future, if De Grey gets his way, this 
 kind of recognition will be useless anyway.
 
 
 P.S. Oh, yeah, and the guy responsible for this project claims it
 doesn't violate anyone's privacy because it can't be used to identify
 individuals.  Right.  They don't say who sponsored this research, but I
 sincerely doubt it was the vending machine companies or purveyors of
 Internet porn.
 
 
 It's good to question the true motives behind something like this, and
  where the funding comes from.  I do a lot of stuff with computer 
 vision, and if someone came to me saying they wanted something to 
 visually recognise the age of a person I'd tell them that they're 
 probably wasting their time, and that indicators other than visual 
 ones would be more likely to give a reliable result.
 
 
 
 




[agi] Let's face it, this is just dumb.

2008-10-01 Thread Brad Paulsen
This is probably a tad off-topic, but I couldn't help myself.

From the Technology-We-Could-Probably-Do-Without files:

STEP RIGHT UP, LET THE COMPUTER LOOK AT YOUR FACE AND TELL YOU YOUR AGE
http://www.physorg.com/news141394850.html

From the article:

...age-recognition algorithms could ... prevent minors from purchasing
tobacco products from vending machines, and deny children access to adult
Web sites.

Sixteen-year-old-male's inner dialog: "I need a smoke and some porn.  Let
me think... Where did Dad put that Ronald Reagan Halloween mask?"

It boasts a 50% recognition accuracy rate +/-5 years and an 80%
recognition accuracy rate +/-10 years.  Unless, of course, the subject is
wearing a big floppy hat, makeup or has had Botox treatment recently.  Or
found his dad's Ronald Reagan mask.  'Nuf said.

Cheers,
Brad

P.S. Oh, yeah, and the guy responsible for this project claims it doesn't
violate anyone's privacy because it can't be used to identify individuals.
 Right.  They don't say who sponsored this research, but I sincerely doubt
it was the vending machine companies or purveyors of Internet porn.




Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-10-01 Thread Brad Paulsen

Ben wrote: "I remain convinced that probability theory is a proper
foundation for uncertain inference in an AGI context, whereas Pei remains
convinced of the opposite... So, this is really the essential issue, rather
than the particularities of the algebra..."

But, please, don't stop discussing that algebra.  This is the most fun I've
had on an e-mail list in years!

Cheers,
Brad

Ben Goertzel wrote:
 
 
 On Tue, Sep 23, 2008 at 9:28 PM, Pei Wang [EMAIL PROTECTED] wrote:
 
 On Tue, Sep 23, 2008 at 7:26 PM, Abram Demski [EMAIL PROTECTED] wrote:
  Wow! I did not mean to stir up such an argument between you two!!
 
 Abram: This argument has been going on for about 10 years, with some
 on periods and off periods, so don't feel responsible for it ---
 you just raised the right topic at the right time to turn it on
 again. ;-)
 
 
 
 Correct ... Pei and I worked together on the same AI project for a few
 years
 (1998-2001) and had related arguments in person many times during that
 period,
 and have continued the argument off and on over email...
 
 It has been an interesting and worthwhile discussion, from my view any way,
 but neither of us has really convinced the other...
 
 I remain convinced that probability theory is a proper foundation for
 uncertain
 inference in an AGI context, whereas Pei remains convinced of the
 opposite ...
 
 So, this is really the essential issue, rather than the particularities
 of the
 algebra...
 
 The reason this is a subtle point is roughly as follows (in my view, Pei's
 surely differs).
 
 I think it's mathematically and conceptually clear that for a system
 with unbounded
 resources probability theory is the right way to reason.   However if
 you look
 at Cox's axioms
 
 http://en.wikipedia.org/wiki/Cox%27s_theorem
 
 you'll see that the third one (consistency) cannot reasonably be expected of
 a system with severely bounded computational resources...
 
 So the question, conceptually, is: If a cognitive system can only
 approximately
 obey Cox's third axiom, then is it really sensible for the system to
 explicitly
 approximate probability theory ... or not?  Because there is no way for
 the system
 to *exactly* follow probability theory
 
 There is not really any good theory of what reasoning math a system should
 (implicitly or explicitly) emulate given limited resources... Pei has
 his hypothesis,
 I have mine ... I'm pretty confident I'm right, but I can't prove it ...
 nor can he
 prove his view...
 
 Lacking a comprehensive math theory of these things, the proof is gonna be
 in the pudding ...
 
 And, it is quite possible IMO that both approaches can work, though they
 will
 not fit into the same AGI systems.  That is, an AGI system in which NARS
 would
 be an effective component, would NOT necessarily
 look the same as an AGI system in which PLN would be an effective
 component...
 
 Along these latter lines:
 One thing I do like about using a reasoning system with a probabilistic
 foundation
 is that it lets me very easily connect my reasoning engine with other
 cognitive
 subsystems also based on probability theory ... say, a Hawkins style
 hierarchical
 perception network (which is based on Bayes nets) ... MOSES for
 probabilistic
 evolutionary program learning etc.   Probability theory is IMO a great
 lingua
 franca for connecting different AI components into an integrative whole...
 
 -- Ben G
 
 
 
 




[agi] Dangerous Knowledge - A Correction

2008-09-29 Thread Brad Paulsen

Oops!

The William Blake poem recited in the "Dangerous Knowledge" BBC program was 
not "Infinity" (that's what Cantor was so concerned about).  It was 
"Auguries of Innocence."  The passage used in the program (and the one 
borrowed by Sting) was:


To see a world in a grain of sand
And a heaven in a wild flower,
Hold infinity in the palm of your hand
And eternity in an hour.

It's a beautiful poem.

Cheers,
Brad




[agi] Dangerous Knowledge

2008-09-28 Thread Brad Paulsen

Recently, someone on this list (I apologize for not making a note of this
person's name) raised the question whether we might find a shortcut to
AGI.  The author went on to opine that, because the problems associated
with achieving AGI had been considered by some of the world's most
brilliant minds at various times over thousands of years of human history
and because the problem, nonetheless, remains unsolved, it was extremely
unlikely such a shortcut would ever be found.

I believe this opinion neglects an important element of discovery (or 
creativity, if you will).  An element that is not found in every person who 
possesses a towering intellect but that can only be found in one that does. 
 I speak of intellectual courage (or foolhardiness -- it's often difficult 
to tell and could be both).  It's the courage (or obsession, or both) that 
causes a man (or woman) to give themselves over to a novel insight so 
thoroughly that it comes to define their self-image, their very being. 
This form of courage is so close to insanity that many of these individuals 
find themselves crossing that very boundary.  In our time, John Nash, for 
example.  Or Alan Turing, if only just for the brief hours, days and 
moments before he took his own life.


This isn't ordinary courage.  This is courage of an extraordinary kind.  It 
doesn't manifest itself very often (perhaps once or twice in every 
century).  But, when it does, the person possessing it can give us the 
shortcut that changes everything.  This type of courage could very well 
be whence that shortcut to AGI eventually comes.  It's not a matter of 
just having mulled the problem over time and time again.  Or the number of 
mullers involved.  Or the length of time the mulling has been ongoing. 
It is, rather, a matter of being intellectually fearless and willing to put 
oneself in harm's way for one's convictions.


There is a series of videos done by the BBC on the Web (link below) that 
everyone on this list should watch.  They will both enlighten and frighten. 
 The first is about the mathematician, Georg Cantor, who dared look 
infinity straight in the eye without blinking.  In the process, he turned 
mathematics on its head and was excoriated by some of the most renowned 
mathematicians of the time.  He died penniless in a mental institution. 
The link to Dangerous Knowledge  Part 1 is here:


http://video.google.ca/videoplay?docid=4007105149032380914ei=PvTXSJONKI_8-gHFhOi-Agq=artificial+lifevt=lf

Man, that URI is practically infinite!  The links to the other parts of the 
video should be on that same page.  I love the fact that they quote from 
William Blake's famous poem "Infinity" (which Sting also borrowed and used in 
the lyrics to the title song of his last album, "Sacred Love" -- to be 
expected from a former teacher of literature, I guess).  Also highly 
recommended (the Sting album).

BTW, I'm writing this from my sick bed.  Got an awful virus that can best
be described as someone putting a bicycle pump up your nose, plugging all
of the other orifices in your head, and then pumping like crazy until you
are completely convinced your head is going to explode like a rotten
melon.  I like that description better than a sinus infection.  Thanks to 
laptops and WiFi routers (and lots of Tylenol and some prescription stuff I 
can't pronounce let alone spell in my present condition), I'm still able to 
surf the 'net and read my e-mail, although I'm probably doing so under the 
influence of the cold medicine I've been taking.  Anyhow, it's put me in a 
reflective mood.


One of the things I'd like to reflect upon with you all here has to do 
with, well... with you all here.  Since joining this list, I think I have 
learned more about more subjects associated with AI than I could have 
learned in two years of graduate-level study in a classroom setting.  The 
reason is simple.  We have some very bright and accomplished folks posting 
to this list regularly.  I have especially enjoyed, for example, the recent 
thread on NARS vs. PLN and the exchanges between Ben G., Pei and Abram.  I 
mean, one's lucky if one has a couple of professors who know their stuff 
this well in one's entire graduate course of study.  Here, I have many 
solid thinkers publicly working out their differences on this list, 
grinding each other's arguments down until the nub of the matter lies 
helplessly exposed.  I, then, swoop down and greedily grab it up!  You 
can't buy this type of education at any cost.  Thank you all.  Matt M., 
David H. and everyone else who contributes here regularly, a big, mushy 
thanks for making me a smarter, if not necessarily better, human being!


Yep, too much cold medicine.  Later...

Brad




[agi] Dangerous Knowledge - Update

2008-09-28 Thread Brad Paulsen


Sorry, but in my drug-addled state I gave the wrong URI for the Dangerous 
Knowledge videos on YouTube.  The one I gave was just to the first part of 
the Cantor segment.  All of the segments can be reached from the link 
below.  You can recreate this link by searching, in YouTube, on the key 
words Dangerous Knowledge.


http://www.youtube.com/results?search_query=Dangerous+Knowledgesearch_type=aq=-1oq=

Cheers,
Brad




[agi] NLP? Nope. NLU? Yep!

2008-09-20 Thread Brad Paulsen
I believe the company mentioned in this article was referenced in an active 
thread here recently.  They claim to have semantically enabled Wikipedia. 
  Their stuff is supposed to have a vocabulary 10x that of the typical 
U.S. college graduate.  Currently being licensed to software developers 
working on Web 3.0.


A quote from the company's CEO in the article:

We have taught the computer virtually all of the (... ah, er, what's that 
word? ... ah, er ... oh, yeah) meanings of words and phrases in the English 
language.


Oh, OK, so I added the stuff in the parentheses.  Sue me.

COMPUTERS FIGURING OUT WHAT WORDS MEAN
http://www.physorg.com/news140929129.html

Cheers,
Brad




[agi] 3D CPU & PTO's Peer-to-Peer Patent Review Study

2008-09-16 Thread Brad Paulsen

FIRST 3-D PROCESSOR RUNS AT 1.4 GHZ ON NEW ARCHITECTURE
http://www.physorg.com/news140692629.html

PROGRAM TURNS TO ONLINE MASSES TO IMPROVE PATENTS
http://www.physorg.com/news140672870.html

Enjoy,
Brad




[agi] Will AGI Be Stillborn?

2008-09-08 Thread Brad Paulsen


From the article:

A team of biologists and chemists [lab led by Jack Szostak, a molecular 
biologist at Harvard Medical School] is closing in on bringing non-living 
matter to life.


It's not as Frankensteinian as it sounds. Instead, a lab led by Jack 
Szostak, a molecular biologist at Harvard Medical School, is building 
simple cell models that can almost be called life.


http://blog.wired.com/wiredscience/2008/09/biologists-on-t.html

There's a video entitled A Protocell Forming from Fatty Acids.  It's 
fascinating and, at the same time, a bit scary.


Paper co-authored by Szostak published this month:

Thermostability of model protocell membranes
http://www.pnas.org/content/early/2008/09/02/0805086105.full.pdf+html




[agi] Remembering Caught in the Act

2008-09-05 Thread Brad Paulsen

http://www.nytimes.com/2008/09/05/science/05brain.html?_r=3partner=rssnytemc=rssoref=sloginoref=sloginoref=slogin

or, indirectly,

http://science.slashdot.org/article.pl?sid=08/09/05/0138237from=rss




[agi] What Time Is It? No. What clock is it?

2008-09-03 Thread Brad Paulsen

Hey gang...

It’s Likely That Times Are Changing
http://www.sciencenews.org/view/feature/id/35992/title/It%E2%80%99s_Likely_That_Times_Are_Changing
A century ago, mathematician Hermann Minkowski famously merged space with 
time, establishing a new foundation for physics;  today physicists are 
rethinking how the two should fit together.


A PDF of a paper presented in March of this year, and upon which the 
article is based, can be found at http://arxiv.org/abs/0805.4452.  It's a 
free download.  Lots of equations, graphs, oh my!


Cheers,
Brad




Re: [agi] How Would You Design a Play Machine?

2008-08-28 Thread Brad Paulsen

Eric,

It was a real-life near-death experience (auto accident).

I'm aware of the tryptamine compound and its presence in hallucinogenic 
drugs such as LSD.  According to Wikipedia, it is not related to the NDE 
drug of choice which is Ketamine (Ketalar or ketamine HCL -- street name 
back in the day was Special K).  Ketamine is a chemical secreted into the 
brain when your body detects an over-generation of Glutamate. Glutamate 
(i.e., the food flavor enhancer, MSG) is a neurotransmitter released in 
massive quantities when your senses lead your brain to believe it is in 
mortal danger.  It's your brain's way of, literally, trying to think its 
(your) way out of danger - fast.  Trouble is, too much Glutamate can 
irreparably damage the brain, hence the Ketamine push and the NDE experience.


Ketamine is a Schedule 3 drug.  Today, it is primarily used as an 
anesthetic in surgery performed on geriatric adults, children and animals 
(by vets).  It takes a much higher dose than that used for anesthetic 
purposes to achieve the NDE experience.  Back in the day, it was used as an 
adjunct to psychotherapy.  The Russians claimed it worked wonders for all 
sorts of addiction, especially alcoholism.  I do not recommend use of 
Ketamine unsupervised by a qualified medical practitioner.  Just like LSD, 
people have been known to react badly (bad trip).  But, then, I don't 
recommend near-fatal auto accidents either.  ;-)


Cheers,
Brad

Eric Burton wrote:

Hi,


Err ... I don't have to mention that I didn't stay dead, do I?  Good.


Was this the archetypal death/rebirth experience found in for instance
tryptamine ecstacy or a real-life near-death experience?

Eric B








[agi] To sleep, perchance to dream...

2008-08-28 Thread Brad Paulsen

EXPLORING THE FUNCTION OF SLEEP
http://www.physorg.com/news138941239.html


From the article:

Because it is universal, tightly regulated, and cannot be lost without 
serious harm, Cirelli argues that sleep must have an important core 
function. But what?


Sleep may be the price you pay so your brain can be plastic the next day, 
Cirelli and Tononi say.


Their hypothesis is that sleep allows the brain to regroup after a hard day 
of learning by giving the synapses, which increase in strength during the 
day, a chance to damp down to baseline levels. This is important because 
the brain uses up to 80 percent of its energy to sustain synaptic activity.


Sleep may also be important for consolidating new memories, and to allow 
the brain to forget the random, unimportant impressions of the day, so 
there is room for more learning the next day. This could be why the brain 
waves are so active during certain periods of sleep.



I just gotta get me one of these! (Will Smith, Independence Day)

SHARP UNVEILS NEW ANTI-BIRD FLU AIR PURIFIER
http://www.physorg.com/news139046671.html

Cheers,
Brad




Re: [agi] How Would You Design a Play Machine?

2008-08-27 Thread Brad Paulsen

Terren,

OK, you hooked me.  A virgin is something I haven't been called (or even 
been associated with) in about forty-five years.  So, I feel compelled to 
defend my non-virginity at all costs.  I'm 58 now.  You do the math (don't 
forget to subtract for the 30 years I was married). ;-)  My widowed 
girlfriend of the last eight years is a mother of two 
30-something-year-olds (a boy and a girl) and four grandchildren, ages 11 
(going on 16) down to 2.  All girls!  The woman is post-menopausal and 
insatiable!  A little Astroglide (thank you, NASA!) and we're ready to 
rumble.  No birth control required!  Sex under 50?  OK.  Sex after 50?  To 
the moon!!  The Bradster is one lucky puppy.  So there!


I thought orgasms were cool, too.  Until I died.  Now THAT was cool.  So, 
for orgasms, it's sort of a quantity vs. quality thing for me these days. 
I'll eventually get to do that dying thing again (probably just once, 
though).  But between now and then, I hope to have lots and lots of 
orgasms!  Not as cool as dying, but a bit easier to come by. (I won't say 
it if you don't think it!) ;-)


Err ... I don't have to mention that I didn't stay dead, do I?  Good.

I don't recall whether or not I said one could describe an orgasm to a 
virgin in lieu of experiencing the real thing.  But, the AGI I have in 
mind is of the non-Turing/Loebner, non-orgasmic type, so the description 
will just have to do.  In my design, this is required only so the AGI can 
empathize with human experience.  It may need to know what a happy ending 
is, but it doesn't have to have one.  Who knows, though?  Maybe we've 
finally discovered that it's not Microsoft's fault we have to re-boot 
Windows at least twice a day.  Maybe a re-boot is sort of like an orgasm 
for Windows?  Explains that little happy chiming sound it makes during 
boot-up, right?  Maybe, just maybe, Windows was, to quote Steely Dan, 
"programmed by fellows with compassion and vision."


Anyhow, that example fits with views I've expressed in the context of 
explaining how my AGI design requires empathy on the part of the AGI so it 
can empathize with human experiences without having to actually have 
them.  So, maybe I did say that.  Since I have no intention of developing a 
Turing/Loebner AGI, the ability to empathize is all my design really needs. 
 And, it may not even need that.  Benign indifference may be enough.  My 
design is still evolving even as I work on the implementation (it's a big 
job and I'm only one man).


If I do my job right, my AGI will have no sense of self.  I achieve that, 
mostly, by building a non-embodied AGI.  Embodiment leads directly to a 
sense of self which leads inexorably to an "I am me and you're not" world 
view.  I don't know about you, but an AGI with a sense of self gives me the 
willies.  Turns out, by NOT bestowing a sense of self on a 
non-Turing/Loebner AGI, one does away with a great many rather sticky 
problems in the area of morality and ethics.  How do I know what it's like 
to not have a sense of self?  Ah...  That's where the "dying but not 
really dying" part fits the puzzle.  Talk about experiences that are hard to 
explain!  But, that's another topic for another thread.


Now, to the meaty stuff...  You wrote: ... the really interesting question 
I would pose to you non-embodied advocates is: how in the world will you 
motivate your creation?


Some animals and all humans are motivated to maximize pleasure and minimize 
pain.  This requires the existence of a brain and a nervous system, 
preferably both peripheral and central.  In animals other than humans and 
some higher-order primates and mammals, motivation is more typically called 
instinct.  The difference?  Motivations are usually conscious and somewhat 
 malleable.  Instincts are usually not.  To be sure, there is some gray 
area here, but not enough, I think, to alone derail my argument.  While 
human motivations may appear more complex, this is almost always because 
they are more abstract.  They can usually be boiled down to fit the 
pleasure/pain model (e.g., reward/punishment).  There has been some 
interesting recent work on altruism reported in the cog sci literature. 
When I can lay hands on some URIs, I'll post them here.
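
Before moving on, a toy sketch in Python of that pleasure/pain 
(reward/punishment) framing -- the actions and payoffs are invented purely 
for illustration, and this is not a claim about any particular AGI design; 
the agent simply keeps a running estimate of net pleasure per action and 
usually picks the best-looking one:

import random

class PleasurePainAgent:
    def __init__(self, actions, explore=0.1):
        self.values = {a: 0.0 for a in actions}   # estimated net pleasure per action
        self.counts = {a: 0 for a in actions}
        self.explore = explore                    # occasional curiosity

    def choose(self):
        if random.random() < self.explore:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, pleasure, pain):
        # Incremental average of (pleasure - pain) for the chosen action.
        self.counts[action] += 1
        reward = pleasure - pain
        self.values[action] += (reward - self.values[action]) / self.counts[action]

agent = PleasurePainAgent(["nap", "explore", "eat"])
for _ in range(100):
    a = agent.choose()
    # Hypothetical environment: "eat" is usually pleasant, "explore" is mixed.
    pleasure = {"nap": 0.2, "explore": random.random(), "eat": 0.8}[a]
    pain     = {"nap": 0.0, "explore": 0.3, "eat": 0.1}[a]
    agent.learn(a, pleasure, pain)
print(agent.values)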


With that conceptual background established, my reply is that your question 
contains the implicit assumption we non-embodied advocates are planning 
to build Turing/Loebner AGIs.  Some of us may be.  I am not.  Since my AGI 
model is not of the T/L variety, motivation does NOT apply.  But, I'm 
prepared to meet you halfway and cop to instinct.  My AGI WILL have at 
least one overriding instinct.  I've discussed it here recently (but it 
seemed most people who commented on my post didn't fully get it).  Here 
it is:


My AGI will be equipped with an instinctual drive to resolve cognitive 
dissonance (simulated, of course) engendered by its own inability to 
understand or answer queries posed by humans (or other AGIs).  I hasten 

Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Brad Paulsen

Mike,

So you feel that my disagreement with your proposal is sad?  That's quite 
an ego you have there, my friend.  You asked for input and you got it.  The 
fact that you didn't like my input doesn't make me or the effort I spent 
composing it sad.  I haven't read all of the replies to your post yet, 
but judging by the index listing in my e-mail client, it has already 
drained a considerable amount of time and intellectual energy from the 
members of this list.  You want sad?  That's sad.


Nice try at ignoring the substance of what I wrote while continuing to 
advance your own views.  I did NOT say THINKING about your idea, or any idea 
for that matter, was a waste of time.  Indeed, the second sentence of my 
reply contained the following ...(unless [studying human play is] being 
done purely for research purposes).  I did think about your idea.  I 
concluded what it proposes (not the idea itself) is, in fact, a waste of 
time for people who want to design and build a working AGI before 
mid-century.  I'm sure some list members will agree with you.  I'm also 
sure some will agree with me.  But, most will have their own views on this 
issue.  That's the way it works.


The AGI I (and many others) have in mind will be to human intelligence what 
an airplane is to a bird.  For many of the same reasons airplanes don't 
play like birds do, my AGI won't play (or create) like humans do.  And, 
just as the airplane flies BETTER THAN the bird (for human purposes), my 
AGI will create BETTER THAN any human (for human purposes).


You wrote, [Play] is generally acknowledged by psychologists to be an 
essential dimension of creativity - which is the goal of AGI.


Wrong.  ONE of the goals (not THE goal) of AGI is *inspired* by human 
creativity.  Indeed, I am counting on the creativity of the first 
generation of AGIs to help humans build (or keep humans away from building) 
the second generation of AGIs.  But... neither generation has to (and, 
IMHO, shouldn't) have human-style creativity.


In fact, I suggest we not use the word creativity when discussing 
AGI-type knowledge synthesis because that is a term that has been applied 
solely to human-style intelligence.  Perhaps, idea mining would be a 
better way to describe what I think about when I think about AGI-style 
creativity.  Knowledge synthesis also works for me and has a greater 
syllable count.  Either phrase fits the mechanism I have in mind for an AGI 
that works with MASSIVE quantities of data, using well-studied and 
established data mining techniques, to discover important (to humans and, 
eventually, AGIs themselves) associations.  It would have been impossible 
to build this type of idea mining capability into an AI before the mid 
1990's (before the Internet went public).  It's possible now.  Indeed, 
Google is encouraging it by publishing an open source REST (if memory 
serves) API to the Googleverse.  No human intelligence would be capable of 
doing such data mining without the aid of a computer and, even then, it's 
not easy for the human intellect (associations between massive amounts of 
data are often, themselves, still quite massive - ask the CIA or the NSA or 
Google).
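
A bare-bones sketch of that "idea mining" notion in Python -- pairwise 
co-occurrence association mining over a tiny, made-up document collection; a 
real system would obviously run over massively larger corpora with far smarter 
statistics:

from itertools import combinations
from collections import Counter

documents = [
    "sleep consolidates memory and supports learning",
    "exercise improves memory and mood",
    "sleep deprivation impairs learning and mood",
]

pair_counts = Counter()
for doc in documents:
    words = set(doc.split())
    for a, b in combinations(sorted(words), 2):
        pair_counts[(a, b)] += 1

# Keep only associations seen in more than one document.
associations = {pair: n for pair, n in pair_counts.items() if n > 1}
print(associations)   # e.g. ('learning', 'sleep'): 2, ('and', 'memory'): 2, ...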


Certainly play is ...fundamental to the human mind-and-body  My point 
was simply that this should have little or no interest to those of us 
attempting to build a working, non-human-style AGI.  We can discuss it all 
we like (however, I don't intend to continue doing so after this reply -- 
I've stated my case).  Such discussion may be worthwhile (if only to show 
up its inherent wrongness) but spending any time attempting to design or 
build an AGI containing a simulation of human-style play (or creativity) is 
not.  There are only so many minutes in a day and only so many days in a 
life.  The human-style (Turing test) approach to AI has been tried.  It 
failed (not in every respect, of course, but the Loebner Prizes - the $25K 
and $100K prizes - established in 1990 remain unclaimed).   I don't intend 
to spend one more minute or hour of my life trying to win the Loebner Prize.


The enormous amount of intellectual energy spent (largely wasted), from the 
mid 1950's to the end of the 1980's, trying to create a human-like AI is a 
true tragedy.  But, perhaps, even more tragic is that unquestioningly 
holding up Turing's imitation game as the gold standard of AI created 
what we call in the commercial software industry a reference problem.  To 
get new clients to buy your software, you need a good reference from 
former/current clients.  Anyone who has attempted to get funding for an AGI 
project since the mid-1990s will attest that the (unintentional but 
nevertheless real) damage caused by Turing and his followers continues to 
have a very real, negative effect on the field of AI/AGI.  I have done, and 
will continue to do, my best to see that this same mistake is not repeated 
in this century's quest to build a beneficial (to humanity) AGI. 
Unfortunately, we 

Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Brad Paulsen

Charles,

By now you've probably read my reply to Tintner's reply.  I think that 
probably says it all (and then some!).


What you say holds IFF you are planning on building an airplane that flies 
just like a bird.  In other words, if you are planning on building a 
human-like AGI (that could, say, pass the Turing test).  My position is, 
and has been for decades, that attempting to pass the Turing test (or win 
either of the two, one-time-only, Loebner Prizes) is a waste of precious 
time and intellectual resources.


Thought experiments?  No problem.  Discussing ideas?  No problem. 
Human-like AGI?  Big problem.


Cheers,
Brad

Charles Hixson wrote:
Play is a form of strategy testing in an environment that doesn't 
severely penalize failures.  As such, every AGI will necessarily spend a 
lot of time playing.


If you have some other particular definition, then perhaps I could 
understand your response if you were to define the term.


OTOH, if this is interpreted as being a machine that doesn't do anything 
BUT play (using my supplied definition), then your response has some 
merit, but even that can be very useful.  Almost all of mathematics, 
e.g., is derived out of such play.


I have a strong suspicion that machines that don't have a play mode 
can never proceed past the reptilian level of mentation.  (Here I'm 
talking about thought processes that are mediated via the reptile 
brain in entities like mammals.  Actual reptiles may have some more 
advanced faculties of which I'm unaware.  (Note that, e.g., shrews don't 
have much play capability, but they have SOME.)



Brad Paulsen wrote:
Mike Tintner wrote: ...how would you design a play machine - a 
machine that can play around as a child does?


I wouldn't.  IMHO that's just another waste of time and effort (unless 
it's being done purely for research purposes).  It's a diversion of 
intellectual and financial resources that those serious about building 
an AGI any time in this century cannot afford.  I firmly believe if we 
had not set ourselves the goal of developing human-style intelligence 
(embodied or not) fifty years ago, we would already have a working, 
non-embodied AGI.


Turing was wrong (or at least he was wrongly interpreted).  Those who 
extended his imitation test to humanoid, embodied AI were even more 
wrong.  We *do not need embodiment* to be able to build a powerful AGI 
that can be of immense utility to humanity while also surpassing human 
intelligence in many ways.  To be sure, we want that AGI to be 
empathetic with human intelligence, but we do not need to make it 
equivalent (i.e., just like us).


I don't want to give the impression that a non-Turing intelligence 
will be easy to design and build.  It will probably require at least 
another twenty years of two steps forward, one step back effort.  
So, if we are going to develop a non-human-like, non-embodied AGI 
within the first quarter of this century, we are going to have to 
just say no to Turing and start to use human intelligence as an 
inspiration, not a destination.


Cheers,

Brad



Mike Tintner wrote:
Just a v. rough, first thought. An essential requirement of  an AGI 
is surely that it must be able to play - so how would you design a 
play machine - a machine that can play around as a child does?


You can rewrite the brief as you choose, but my first thoughts are - 
it should be able to play with

a) bricks
b)plasticine
c) handkerchiefs/ shawls
d) toys [whose function it doesn't know]
and
e) draw.

Something that should be soon obvious is that a robot will be vastly 
more flexible than a computer, but if you want to do it all on 
computer, fine.


How will it play - manipulate things every which way?
What will be the criteria of learning - of having done something 
interesting?

How do infants, IOW, play?












Re: [agi] How Would You Design a Play Machine?

2008-08-25 Thread Brad Paulsen
Mike Tintner wrote: ...how would you design a play machine - a machine 
that can play around as a child does?


I wouldn't.  IMHO that's just another waste of time and effort (unless it's 
being done purely for research purposes).  It's a diversion of intellectual 
and financial resources that those serious about building an AGI any time 
in this century cannot afford.  I firmly believe if we had not set 
ourselves the goal of developing human-style intelligence (embodied or not) 
fifty years ago, we would already have a working, non-embodied AGI.


Turing was wrong (or at least he was wrongly interpreted).  Those who 
extended his imitation test to humanoid, embodied AI were even more wrong. 
 We *do not need embodiment* to be able to build a powerful AGI that can 
be of immense utility to humanity while also surpassing human intelligence 
in many ways.  To be sure, we want that AGI to be empathetic with human 
intelligence, but we do not need to make it equivalent (i.e., just like 
us).


I don't want to give the impression that a non-Turing intelligence will be 
easy to design and build.  It will probably require at least another twenty 
years of two steps forward, one step back effort.  So, if we are going to 
develop a non-human-like, non-embodied AGI within the first quarter of this 
century, we are going to have to just say no to Turing and start to use 
human intelligence as an inspiration, not a destination.


Cheers,

Brad



Mike Tintner wrote:
Just a v. rough, first thought. An essential requirement of  an AGI is 
surely that it must be able to play - so how would you design a play 
machine - a machine that can play around as a child does?


You can rewrite the brief as you choose, but my first thoughts are - it 
should be able to play with

a) bricks
b)plasticine
c) handkerchiefs/ shawls
d) toys [whose function it doesn't know]
and
e) draw.

Something that should be soon obvious is that a robot will be vastly 
more flexible than a computer, but if you want to do it all on computer, 
fine.


How will it play - manipulate things every which way?
What will be the criteria of learning - of having done something 
interesting?

How do infants, IOW, play?









Re: [agi] rpi.edu

2008-08-25 Thread Brad Paulsen

Eric,

http://www.cogsci.rpi.edu/research/rair/asc_rca/

Sorry, couldn't answer your question based on a quick read.

Cheers,
Brad


Eric Burton wrote:

Does anyone know if Rensselaer Institute is still on track to crack
the Turing Test by 2009? There was a Slashdot article or two about
their software called 'RASCALS' earlier this year.








Re: [agi] AGI's Philosophy of Learning

2008-08-19 Thread Brad Paulsen

Abram,

Just FYI... When I attempted to access the Web page in your message, 
http://www.learnartificialneuralnetworks.com/ (that's without the 
backpropagation.html part), my virus checker, AVG, blocked the attempt 
with a message similar to the following:


Threat detected!
Virus found: JS/Downloader.Agent
Detected on open

Quarantined

On a second attempt, I also got the IE 7.0 warning banner:

This website wants to run the following add-on: Microsoft Data Access - 
Remote Data Services Dat...' from 'Microsoft Corporation'.  If you trust 
the website and the add-on and want to allow it to run, click... (of 
course, I didn't click).


This time, AVG gave me the option to heal the virus.  I took this option.

It may be nothing, but it also could be a drive-by download attempt of 
which the owners of that site may not be aware.


Cheers,

Brad



Abram Demski wrote:

Mike,

There are at least 2 ways this can happen, I think. The first way is
that a mechanism is theoretically proven to be complete, for some
less-than-sufficient formalism. The best example of this is one I
already mentioned: the neural nets of the nineties (specifically,
feedforward neural nets with multiple hidden layers). There is a
completeness result associated with these. I quote from
http://www.learnartificialneuralnetworks.com/backpropagation.html :

"Although backpropagation can be applied to networks with any number of 
layers, just as for networks with binary units it has been shown (Hornik, 
Stinchcombe, & White, 1989; Funahashi, 1989; Cybenko, 1989; Hartman, Keeler, 
& Kowalski, 1990) that only one layer of hidden units suffices to approximate 
any function with finitely many discontinuities to arbitrary precision, 
provided the activation functions of the hidden units are non-linear (the 
universal approximation theorem). In most applications a feed-forward network 
with a single layer of hidden units is used with a sigmoid activation 
function for the units."
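
For concreteness, a minimal sketch of the construction that result covers -- a 
feed-forward net with a single hidden layer of sigmoid units, trained by 
backpropagation on toy XOR data; the data, layer size and learning rate below 
are chosen purely for illustration and are not from the quoted source:

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

H = 8                                   # hidden units (one hidden layer)
W1 = rng.normal(0, 1, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 1, (H, 1)); b2 = np.zeros(1)
lr = 0.5

for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)            # hidden activations
    out = sigmoid(h @ W2 + b2)          # network output
    # Backward pass (squared-error loss, sigmoid derivatives).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())             # typically converges toward [0, 1, 1, 0]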

This sort of thing could have contributed to the 50 years of
less-than-success you mentioned.

The second way this phenomenon could manifest is more a personal fear
than anything else. I am worried that there really might be partial
principles of mind that could seem to be able to do everything for a
time. The possibility is made concrete for me by analogies to several
smaller domains. In linguistics, the grammar that we are taught in
high school does almost everything. In logic, 1st-order systems do
almost everything. In sequence learning, hidden markov models do
almost everything. So, it is conceivable that some AGI method will be
missing something fundamental, yet seem for a time to be
all-encompassing.

On Mon, Aug 18, 2008 at 5:58 AM, Mike Tintner [EMAIL PROTECTED] wrote:

Abram: I am worried -- worried that an AGI system based on anything less than
the one most powerful logic will be able to fool AGI researchers for a
long time into thinking that it is capable of general intelligence.

Can you explain this to me? (I really am interested in understanding your
thinking). AGIs have a roughly 50-year record of total failure. They have
never shown the slightest sign of general intelligence - of being able to
cross domains. How do you think they will or could fool anyone?













[agi] Brains + Sleep, Bird Brains, Brain Rules

2008-08-15 Thread Brad Paulsen

(1) STUDY FINDS THAT SLEEP SELECTIVELY PRESERVES EMOTIONAL MEMORIES
http://www.physorg.com/news137908693.html

(2) BIG-BRAINED ANIMALS [BIRDS] EVOLVE FASTER
http://www.physorg.com/news138003096.html

(3) BRAIN RULES
Here's a guy selling a book/DVD (Brain Rules) about how to improve your
mental performance.  Many people on this list will already be familiar with the
brain science behind the book.

Rule #1?  Exercise Boosts Brain Power.  On the site, the author gives a talk
on video (probably from the DVD) about how exercise can improve your brain's
performance.  There's, uh, just one problem: he does his own videos and I
would say he's morbidly obese himself.  Do as I say, not as I do?  Go figure.

Rule #7 - Sleep is Good For the Brain.  Given his weight, he probably suffers
from sleep apnea to some extent.  Geeze, this guy is breaking all of his own
rules!  But, note that he was still smart enough to earn (I presume) a PhD
(or MD) and write a book called Brain Rules?  Again, go figure.

To be fair, though, his science seems to be conservative and based on
peer-reviewed research, some of which is summarized in the PhysOrg link
above (1).

From the Web site:

Dr. John Medina is a developmental molecular biologist and research
consultant.  He is an affiliate Professor of Bioengineering at the
University of Washington School of Medicine. He is also the director of the
Brain Center for Applied Learning Research at Seattle Pacific University.

The videos are well done and,  occasionally, humorous (intentionally so, I 
presume).
http://www.brainrules.net/?gclid=CPuLzubfkJUCFSAUagodXhqUPA

Here's the US Amazon site page for the book (301 pages + DVD), p. 2008, $20
(hardcover).
http://www.amazon.com/gp/product/product-description/097904/ref=dp_proddesc_0?ie=UTF8n=283155s=books

Cheers,

Brad




[agi] The results of disembodiment

2008-08-13 Thread Brad Paulsen
Pieces of rat brain control a small robo platform.  How's *that* for an
out-of-body experience!  There's video even!!

http://technology.newscientist.com/channel/tech/mg19926696.100-rise-of-the-ratbrained-robots.html

Cheers,

Brad

P.S.  Sorry I haven't been participating on the list that much this week 
(Hey!  I heard that!)...  I needed to get some work done and get my car 
fixed.  As you might imagine, I rarely leave my computer and, in fact, I did 
take it with me to the auto repair place (stop snickering...).  Anyhow, the 
car is fixed (well, the part I could afford...), but I still have a bunch of 
work to do.

As Gov. Arnold likes to say, "I'll be Bach."  Why does he want to be
Bach?  I really do miss you folks.  Back soon.




Re: [agi] The Necessity of Embodiment

2008-08-10 Thread Brad Paulsen

Charles,

I don't think I've misunderstood what Turing was proposing.  At least not any 
more than the thousands of other people who have written about Turing and his 
test over the decades:


http://en.wikipedia.org/wiki/Turing_test
http://www.zompist.com/turing.html (Twelve reasons to toss the Turing test)
http://plato.stanford.edu/entries/turing-test/ (read the entire article)
http://tekhnema.free.fr/3Lasseguearticle.htm (What Kind of Turing Test Did 
Turing Have in Mind)


What I think Turing had in mind was a test of an artificial intelligence's 
ability to fool a human into thinking s/he was talking to another human.  If 
the computer program claiming to be intelligent couldn't simulate a human 
successfully enough to fool an actual human, it failed the imitation test. 
Therefore, the Turing test is a test to see if a computer program (artificial 
intelligence) can imitate human intelligence well enough to fool a human.  This 
is what is meant by the term Turing-indistinguishable (*from a human*).


Clearly, Turing's test is capable *only* of judging human-like artificial 
intelligences.  Yet, there are other forms of intelligence that humans have 
created and can, in the future, create.  IMHO, as long as we continue to hold up 
the Turing test (whatever flavor you like) as the gold standard of what is or 
is not successful AGI, we will continue to make little progress.  It's not that 
it's a WRONG TEST (although you will find people who argue strenuously that it 
is, in fact, wrong).  It's that it tests the WRONG THING.  Unless, that is, you 
plan on building a Turing-indistinguishable AGI.  In which case I wish you luck, 
but have little hope for the success of your endeavors.


I propose dropping the Turing test as a means to test the efficacy of 
artificial general intelligence.  I don't really care if the AGI I build can 
imitate a human successfully because I'm not setting out to create a human-like 
AGI.  I'm setting out to build a human-compatible AGI that will be empathetic to 
humans but that will far surpass their intellectual capabilities in short order. 
 There's a HUGE difference.


I truly believe that successful AGI will be to human intelligence what the 
Boeing 747 is to a bird.  They both fly, but that's pretty much where the 
similarities end.  The bird is an evolved, natural flier.  The 747 is a product 
of the evolved human brain, inspired by the natural fliers, but it is, itself, a 
very un-bird-like, artificial flier.  I say thank your lucky stars there was 
nobody like Alan Turing around at the latter part of the 19th century proposing 
that, to be deemed a successful artificial flying machine, the candidate machine 
would have to fool real birds into thinking it was another bird.  If humans had 
continued to try to imitate a bird to achieve human flight, we'd still be 
taking ocean liners to Europe.


I strongly believe the first successful AGI will have very little in common with 
human intelligence.  It will be better at many things beneficial to humanity, it 
will do those things faster and it will be able to create its own, improved, 
replacement.  I believe this so much that I am betting the rest of my life on it.


Cheers,

Brad


Charles Hixson wrote:

Brad Paulsen wrote:

...
Sigh.  Your point of view is heavily biased by the unspoken assumption 
that AGI must be Turing-indistinguishable from humans.  That it must 
be AGHI.  This is not necessarily a bad idea, it's just the wrong idea 
given our (lack of) understanding of general intelligence.  Turing was 
a genius, no doubt about that.  But many geniuses have been wrong.  
Turing was tragically wrong in proposing (and AI researchers/engineers 
terribly naive in accepting) his infamous imitation test, a simple 
test that has, almost single-handedly, kept AGI from becoming a 
reality for over fifty years.  The idea that AGI won't be real AGI 
unless it is embodied is a natural extension of Turing's imitation 
test and, therefore, inherits all of its wrongness.

...
Cheers,

Brad
You have misunderstood what Turing was proposing.  He was claiming that 
if a computer could act in the proposed manner that you would be forced 
to concede that it was intelligent, not the converse.  I have seen no 
indication that he believed that there was any requirement that a 
computer be able to pass the Turing test to be considered intelligent.









Re: [agi] The Necessity of Embodiment

2008-08-09 Thread Brad Paulsen



Mike Tintner wrote:

Bob:  As a roboticist I can say that a physical body resembling that of a

human isn't really all that important.  You can build the most
sophisticated humanoid possible, but the problems still boil down to
how such a machine should be intelligently directed by its software.

What embodiment does provide are *instruments of causation* and closed
loop control.  The muscles or actuators cause events to occur, and
sensors then observe the results.  Both actuation and sensing are
subject to a good deal of uncertainty, so an embodied system needs to
be able to cope with this adequately, at least maintaining some kind
of homeostatic regime.  Note that actuator and sensor could be
broadly interpreted, and might not necessarily operate within a
physical domain.

The main problem with non-embodied systems from the past is that they
tended to be open loop (non reflective) and often assumed crisp logic.

Certainly from a marketing perspective - if you're trying to promote a
particular line of research - humanoid-like embodiment certainly helps
people to identify with what's going on.  Also if you're trying to
understand human cognition by attempting to reproduce results from
developmental psychology a humanoid form may also be highly desirable.



Bob,

I think you are v. seriously wrong - and what's more, I suspect, 
robotically as well as humanly wrong. You are, in a sense, missing  
literally the whole point.


What mirror neurons are showing is that our ability to understand humans 
- as say portrayed in The Dancers :


http://www.csudh.edu/dearhabermas/matisse_dance_moma.jpg

comes from our capacity to simulate them with our whole body-and-brain 
all-at-once. Note that our brain does not just simulate their particular 
movement at the given point in time on that canvas - it simulates and 
understands their *manner* of movement - and you can get up and dance 
like them, and continue their dance, and produce/predict *further* 
movements that will be a reasonable likeness of how those dancers might 
dance - all from that one captured pose.


Our ability to understand animals and how they will move and emote and 
generally respond similarly comes from our ability to simulate them with 
our whole body-and-brain all at once - hence it is that we can go still 
further and liken humans to almost every animal under the sun - he's a 
snake/lizard/angry bear/slug/busy bee etc. etc.


Not only do we understand animals but also inanimate matter and its 
movements or non-movements with our whole body. Hence we see a book as 
lying on the table, and a wardrobe as standing in a room. This 
capacity is often valuable for inventors, who use it to imagine, for 
example, how liquids will flow through a machine, or scientists like 
Einstein who imagined himself riding a beam of light, or Kekule who 
imagined the atoms of a benzene molecule coiling like a snake.


We can only understand the entire world and how it behaves by embodying 
it within ourselves... or embodying ourselves within it.


This capacity shows that our self is a whole-brain-and-body unit. If I 
ask you to change your self - and please try this mentally - to 
simulate/imagine yourself walking as - say, a flaming diva... John 
Wayne... John Travolta... Madonna... - you should find that you will 
immediately/instinctively start to do this with your whole body and 
brain at once.  As one integral unit.


Now my v. garbled understanding (please comment) is that those 
Carnegie Mellon starfish robots show that such an integrated whole self 
is both possible - and perhaps vital - for robots too.  You need a 
whole-body-self not just to understand/embody the outside world and 
predict its movements, but to understand your inner body/world and how 
it's holding up and how together or falling apart it is - and 
whether you will/won't be able to execute different movements and think 
thoughts. You see, I hope, why I say you are missing the whole point.




Mike,

Sigh.  Your point of view is heavily biased by the unspoken assumption that AGI 
must be Turing-indistinguishable from humans.  That it must be AGHI.  This is 
not necessarily a bad idea, it's just the wrong idea given our (lack of) 
understanding of general intelligence.  Turing was a genius, no doubt about 
that.  But many geniuses have been wrong.  Turing was tragically wrong in 
proposing (and AI researchers/engineers terribly naive in accepting) his 
infamous imitation test, a simple test that has, almost single-handedly, kept 
AGI from becoming a reality for over fifty years.  The idea that AGI won't be 
real AGI unless it is embodied is a natural extension of Turing's imitation 
test and, therefore, inherits all of its wrongness.


I believe the time, effort and money spent attempting to develop an embodied AGI 
(one with simulated human sensorimotor capabilities) would be much better spent, 
at this point in time, building a *human-compatible AGI*.  A human-compatible 
AGI is an AGI 

Re: FWIW: Re: [agi] Groundless reasoning

2008-08-07 Thread Brad Paulsen

Charles,

Well, that's what gets me up in the morning.  I learn something new every day!

FWIW, I don't believe the Pink Floyd reference is appropriate since I don't 
*think* they included the signature word: stinkin'.  The "We don't need 
no..." part is there, though. ;-)


As educational as this may be, I think it's getting pretty far off-topic, so 
I'll stop now.


Cheers,

Brad

Charles Hixson wrote:

Brad Paulsen wrote:
... Nope.  Wrong again.  At least you're consistent.  That line 
actually comes from a Cheech and Chong skit (or a movie -- can't 
remember which at the moment) where the guys are trying to get 
information by posing as cops.  At least I think that's the setup.  
When the person they're attempting to question asks to see their 
badges, Cheech replies, "Badges?  We don't need no stinking badges!"


Having been a young adult in the 1960's and 1970's, I am, of course, a 
long-time Pink Floyd fan.  In fact, one of my Pandora 
(http://www.pandora.com) stations is set up so that I hear something 
by PF at least once a week.




Brad
FWIW:  "We don't need no stinking badges" is from the movie Treasure of 
the Sierra Madre.  Many places have copied it from there.  It could be 
that Pink Floyd and Cheech and Chong both copied it.  (It was also 
in a Farley comic strip.)  *Your* source may be Cheech and Chong.











Re: [agi] Groundless reasoning

2008-08-06 Thread Brad Paulsen



Terren Suydam wrote:

Brad,

I'm not entirely certain this was directed to me, since it seems to be a 
response to both things I said and things Mike Tintner said. My comments below, 
where (hopefully) appropriate.

--- On Mon, 8/4/08, Brad Paulsen [EMAIL PROTECTED] wrote:

Ah, excuse me.  Don't humans (i.e., computer programmers, script writers) 
ground virtual reality worlds?  Isn't that just a way of simulating human 
(or some other abstract) reality?


Humans create simulated environments, but I'm not sure what you mean when you say humans ground them. If I 
have some kind of intelligent agent running around in a simulation, then that 
simulation *is* the agent's
real world. Consider for instance that we could actually be intelligent agents 
in some super-intelligent
alien's simulation. We have no access to that alien's world - all we know is 
what we perceive through our
senses. What would it add to say that the alien who created the simulation 
grounds our world?


Terren,

My comments were directed to the thread, not to you personally.  For future 
reference, I always begin comments directed to a specific person in a thread 
by addressing that person by name.  I wish everyone would.  Absent such a 
personal salutation, assume I meant my comments for all readers of/posters to 
the thread.


Unfortunately, there is no single definition of the word grounding (or symbol 
grounding) that is widely accepted in the AI (or Cognitive Science) community. 
  It is, however, fairly well acknowledged that:


The symbols in an autonomous hybrid symbolic+sensorimotor system -- a 
Turing-scale robot consisting of both a symbol system and a sensorimotor system 
that reliably connects its internal symbols to the external objects they refer 
to, so it can interact with them Turing-indistinguishably from the way a person 
does -- would be grounded.  But whether its symbols would have meaning rather 
than just grounding is something that even the robotic Turing Test -- hence 
cognitive science itself -- cannot determine, or explain. 
(http://en.wikipedia.org/wiki/Symbol_grounding).


To me, the key phrase in that quote is that grounding may be defined if an AGI 
"...interacts Turing-indistinguishably from the way a person does."  And, of 
course, the last sentence puts even accomplishment of that goal in doubt to the 
extent grounding a symbol may not necessarily give it meaning.  So far, we 
have found no way to tell.  But, I think most folks on this list would be 
awestruck by an AGI that could pass such a test next week, next year, next 
decade.  We aren't even close, fifty years post-Turing, as things currently stand.


I think the Turing test has been holding back AI/AGI for decades.  It sets up a 
nearly impossible (in fact, it may *be* impossible) standard by which all AI 
efforts are (and have been) judged.  Sometimes I think Turing was playing a 
cruel joke (I'll make sure no one is able to build an AI that anyone will take 
seriously by defining a test that sounds reasonable but is, in fact, impossible 
to pass.  Now, that's comedy!).  I know of no law that says an AGI *must* be 
Turing-indistinguishable from a human in the way it interacts with the world 
(and real humans living in it).  In my view of AGI, there is only a need for the 
AGI to be grounded in *some* world (including worlds that can only be described) 
that is *compatible* with ours.  This does not *require* human-like senses 
(indeed, it may not require *any* senses) nor does this require the ability to 
pass the Turing test.




How is grounding 
using AGI-human interaction different from getting the
experiential information 
from a third-party once removed (i.e., from the virtual
reality program's 
programmer)?  Except that the former method might be more

direct and efficient.


Experiential information cannot be given, it must be experienced, in exactly the same way that you cannot give a virgin 
the experience of sex by talking about it. Grounding requires experience 
(otherwise the term is meaningless). An agent in
a virtual environment experiences that environment in the same way that we 
experience ours. There is no fundamental

difference there.

I disagree.  You can give a virgin an understanding of the experience of having 
sex by talking about (i.e., describing) it.  Do you really want to throw the 
human emotion of empathy out with the bathwater?  I think it's a rather important 
emotion.  It says that one human being can understand the feelings of another 
without having to have, first, personally experienced what gave rise to those 
feelings.  You don't have to be a rape victim to understand rape.  You don't 
have to be a black person to understand racism and prejudice.  I can go on and 
on.  But, that's not my point (just a counter to yours).  Here's my point...


There is an implicit assumption in everything you've said in your reply.  It is 
that an AGI, to be an AGI, must be able to experience the human (or VR avatar

Re: [agi] Groundless reasoning

2008-08-06 Thread Brad Paulsen



Ben Goertzel wrote:




Well, having an intuitive understanding of human language will be useful 
for an AGI even if its architecture is profoundly nonhumanlike.  And, 
human language is intended to be interpreted based on social, 
spatiotemporal experience.  So the easiest way to make an AGI grok human 
language is very likely to give it an embodiment in a world somewhat 
like the one in which we live.  This does not imply the AGI must think 
in a thoroughly humanlike manner.


-- Ben G


Call me old-fashioned, but I believe "an intuitive understanding of human 
language" can be acquired entirely through description.  Why can't human 
language be "intended to be interpreted based on" a *description of* social, 
spatiotemporal experience?  Of course it's not the same thing.  But is that a 
distinction that makes a significant difference?  IMHO, the answer is: not if 
all you really need is an AGI with *human compatibility*.  I submit we can build 
a highly-functional, human-compatible, AGI without requiring that it directly 
experience human space and time.  A description should suffice.


You've written books.  In those books, I assume you described all manner and 
sorts of things.  In order for people to learn from your books, are you really 
suggesting that they have to directly experience everything described therein? 
If so, why not skip the book-writing drudgery and lead learn-by-doing seminars? 
 You'd make more money and think of the frequent flier miles you'd rack up! :-)


I'm not saying that learning human language by direct experience couldn't be 
beneficial.  Nor am I arguing that it is not important for an AGI to be able to 
communicate effectively with humans.  Understanding human language is probably 
the gold standard for this behavior and it certainly will be more important 
for some applications (e.g., humanoid robots) than for others (e.g., data-mining 
or security AGIs).  My issue is with how Turing sent everyone out on this wild 
goose chase fifty-some years ago and many AI/AGI researchers are still out 
there looking for Turing's if-a-human-cannot-tell-it's-not-human goose. I just 
don't think Turing-indistinguishability is necessary.  We should be concentrating 
on building artificial general intelligence (AGI) that is human-compatible, not 
artificial general human intelligence (AGHI).  I submit if we build the 
human-compatible AGI first, it will help us achieve AGHI in much less time and 
at much lower cost -- if that's what human society decides it wants.


Research AGHI?  Maybe, but low priority.  Research and implement 
human-compatible AGI?  You betcha.  And, toward that latter task, I'll put my 
money on the learn-by-description approach until direct experience shows me 
otherwise! :-) - That was (an attempt at) a JOKE.



Brad



Grounding is a potential problem IFF your AGI is, actually, an AGHI,
where the H stands for Human.  There's nothing wrong with borrowing
the good features of human intelligence, but an uncritical aping of
all aspects of human intelligence just because we think highly of
ourselves is doomed.  At least I hope it is. Frankly, the
possibility of an AGHI scares the crap out of me.  Personally, I'm
in this to build an AGI that is about as far from a human copy
(with or without improvements) as possible.  Better, faster, less
prone to breakdown.  And, eventually, a whole lot smarter.

We don't need no stinkin' grounding.

Cheers,

Brad







--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first 
overcome  - Dr Samuel Johnson











Re: [agi] Groundless reasoning

2008-08-06 Thread Brad Paulsen

Jiri,

I'd really like to hear more about your approach.  Sounds bang-on!  Have you 
written a paper (or worked from papers written by others) to which you could 
point us?


Cheers,

Brad

Jiri Jelinek wrote:

Ben,


My perspective on grounding is partially summarized here
www.goertzel.org/papers/PostEmbodiedAI_June7.htm
Clearly, embodiment makes the task of teaching a proto-AGI system a heck of a lot 
easier – to such a great extent that trying to create a totally unembodied AGI would be a 
foolish thing.


I wouldn't be so sure about that. Your embodiment approach might be
better for dealing with the implicit-knowledge problem than trying to
specify every single fact using predicate logic or conversation-based
teaching. But there is another way, and that's teaching through submitted
stories [initially written in a formal language] = a solution I'm trying to
implement when I (once in a while) get to my AGI development. Stories (and
the formal language) provide important contextual data, and collections of
those stories can supply a decent amount of semantic knowledge useful for
generating (or clarifying) the implicit knowledge / grounding particular
concepts. Writing stories is [I guess] generally easier than setting up
scenarios-to-learn-from in a simulated 3D world. And all the data
processing and attention allocation etc. you need in order to handle
the 3D world is IMO unnecessary overkill. But, I have to admit, my
AGI is not fully functional yet (+ I definitely have AGI stuff to
learn), so - just sharing my current opinion. ;-)

Regards,
Jiri Jelinek
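
For concreteness, a minimal sketch of the story-ingestion idea Jiri describes, 
assuming a toy formal language of one subject-predicate-object statement per 
line (the line format and the KnowledgeBase class are hypothetical 
illustrations, not his actual system):

# Toy sketch of teaching through stories written in a simple formal
# language: one "subject predicate object" statement per line.
# Both the format and the class below are illustrative assumptions.

from collections import defaultdict

class KnowledgeBase:
    def __init__(self):
        self.facts = defaultdict(set)   # predicate -> {(subject, object), ...}

    def ingest_story(self, story_text):
        """Store every statement in the story as a (subject, object) pair."""
        for line in story_text.strip().splitlines():
            subject, predicate, obj = line.split()
            self.facts[predicate].add((subject, obj))

    def objects_of(self, subject, predicate):
        """Everything the stories say about 'subject predicate ...'."""
        return {o for (s, o) in self.facts[predicate] if s == subject}

story = """
alice owns dog
dog is animal
alice feeds dog
"""

kb = KnowledgeBase()
kb.ingest_story(story)
print(kb.objects_of("alice", "feeds"))   # {'dog'}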








Re: [agi] Groundless reasoning

2008-08-06 Thread Brad Paulsen



Mike Tintner wrote:

Brad: We don't need no stinkin' grounding.

Your intention, I take it, is partly humorous. You are self-consciously 
assuming the persona of an angry child/adolescent. How do I know that 
without being grounded in real world conversations? How can you 
understand the prosody of language generally without being grounded in 
conversations? How can I otherwise know that you do not literally mean 
that grounding stinks like a carcase? What language does not have 
prosody?



Mike,

Partly humorous, partly to make the point clear (in which effort humor sometimes 
helps).  I don't know what you mean by "self-consciously," but you really have no 
idea what I was thinking when I wrote that.  You can guess.  Which you did. 
And, you can guess wrong.  Which you did.  So, even though your interpretation 
of what I said comes from a grounded human, it didn't help you much in this 
case.  Oh, and the willful child stuff was used in a clearly demarcated simile 
in a previous paragraph.  That simile had to do with learning by asking.  It 
clearly does not apply (except as part of the backing argument) to the above 
statement.


As for prosody being a factor in understanding language, we're communicating 
using language right now, via e-mail, where prosodic clues are practically 
non-existent.  We're not agreeing, but that doesn't mean we're both not 
understanding.  Even if you did manage to build an AGI that could use prosody 
like a human, it would run into the same prosody-related issues that attend all 
written communication.


Mostly, my post was not about your (or any human's) symbol grounding 
capabilities or lack thereof.  I was simply pointing out in that post that a 
human-compatible AGI can become grounded enough through description.  Again and 
again, in your reply you refer to yourself (e.g., "How do I know that without 
being grounded in real world conversations?").  My response to that is, "Why 
should I care?"  Unless, that is, you are implicitly suggesting an AGI must be
Turing-indistinguishable from a human (like you) when it comes to being 
grounded.  If that's a correct inference on my part, then I disagree.


I think uncritical acceptance of Turing's test is one of the worst mistakes made 
in modern science by really smart people.  And, in the remainder of your reply, 
you do more than I ever could have alone to make my point for me.


How am I able to proceed to the following analysis of your sentence 
without grounding? ..Brad's sentence reveals a fascinating example of 
the workings of the unconscious mind. He has assumed in one sentence the 
persona of a wilful child. In effect, his unconscious mind is commenting 
on his conscious position  : I know that I am being wilful in demanding 
that AGI be conducted purely in language/symbols  - demanding like a 
child that the world conform to my wishes and convenience (because, 
frankly, I only know how to do AI that is language- and symbol-based, 
and having to learn new sign systems would be jolly inconvenient, and 
I'm too lazy, so there).


Congratulations.  You're a human whose grounded-by-experience understanding of 
the world is supposed to be the gold standard for AGI knowledge, and you totally 
misinterpreted what I wrote.  I never assumed the persona of a willful child. 
The reference to the child was a simile, and that should have been plain from 
my use of the word "like" in the description.  My human-compatible AGI would
have very quickly understood the remainder of that sentence to be a simile and 
have interpreted it (unlike yourself) correctly.  Moreover, that simile extended 
only to the way an AGI might learn by description.  By definition, I can't say 
it had nothing to do with my subconscious but, then, neither (especially) can you.


There's a lot more to language than meets the eye - or could ever meet 
the eye of a non-grounded AGI.


Maybe, but you have yet to convince me a human-compatible AGI would need it.


P.S. I would suggest as a matter of practice here that anyone who wants 
to argue a position should ALWAYS PROVIDE AN EXAMPLE OR TWO - of say a 
sentence or even a single word that they think can be understood with or 
without grounding. (Sorry Bob M., I think that's worth shouting about). 
Argument without examples here should be regarded as shoddy, inferior 
intellectual practice.


Suggest all you want.  I think I am a very clear and concise writer.  So do 
other people.  I've won awards for it.  I do use examples when I feel they will 
help others better understand what I'm writing about.  I think I probably want 
people to understand what I write more than you do.  I confess to assuming a 
certain level of knowledge about AI/AGI on the part of my audience when I write 
for this list.  I also expect them to be facile with Google and Wikipedia should 
I fall short.  If that's not good enough for you, just refrain from replying to 
my posts.  Or, now here's a radical idea: ask me for clarification 

Re: [agi] Groundless reasoning

2008-08-04 Thread Brad Paulsen



Terren Suydam wrote:

I don't know, how do you do it? :-]  A human baby that grows up with virtual 
reality hardware surgically implanted (never to experience anything but a 
virtual reality) will have the same issues, right?

There is no difference in principle between real reality and virtual reality. All we have is our senses and 
our abilities to internally structure a world from the data that comes from them. That is how an AGI must do it - 
internally structure.  To a virtually-embodied AGI, the virtual world would be its real world. 
It wouldn't have access to our real world.

Terren

--- On Mon, 8/4/08, Mike Tintner [EMAIL PROTECTED] wrote:

How will the virtual AGI distinguish between what is virtual and real, and 
whether any information in any medium presents a realistic picture, good 
likeness, is true to life or a gross distortion, and whether any proposal will 
really work or whether it itself is grounded or a fictional character?








  




Ah, excuse me.  Don't humans (i.e., computer programmers, script writers) 
ground virtual reality worlds?  Isn't that just a way of simulating human 
(or some other abstract) reality?


If so, why not simply tell the AGI what it needs to know about the physical 
world?  Better yet (for human sensibilities), why not simply program the AGI to 
ask a human when it determines it needs further information?  How is grounding 
using AGI-human interaction different from getting the experiential information 
from a third-party once removed (i.e., from the virtual reality program's 
programmer)?  Except that the former method might be more direct and efficient.


People blind or deaf from birth probably have a very different internal idea 
(grounding) of colors or sounds (respectively) than people born with normal 
vision and hearing.  That doesn't mean they can't productively interact with the 
latter group.  It happens every day.


It's also not fair to use Harry's statements and expose them to Vlad's requests 
for clarification as a counterexample of not grounding.  Vlad and Harry are 
just human.  Humans get tired, don't feel well (headaches, etc.).  There are a 
multitude of things that could cause a human to write fuzzily (or, perhaps, 
for Vlad to read or think fuzzily).  The AGI a human creates can, however, be 
built to not suffer from fuzziness when describing things it believes or knows 
(without having to be grounded in human reality through direct self-experience). 
 In that case, Vlad would not have to ask for clarification and that test 
goes out the window.


Grounding is a potential problem IFF your AGI is, actually, an AGHI, where the H 
stands for Human.  There's nothing wrong with borrowing the good features of 
human intelligence, but an uncritical aping of all aspects of human intelligence 
just because we think highly of ourselves is doomed.  At least I hope it is. 
Frankly, the possibility of an AGHI scares the crap out of me.  Personally, I'm 
in this to build an AGI that is about as far from a human copy (with or without
improvements) as possible.  Better, faster, less prone to breakdown.  And, 
eventually, a whole lot smarter.


We don't need no stinkin' grounding.

Cheers,

Brad




Re: [agi] META: do we need a stronger politeness code on this list?

2008-08-03 Thread Brad Paulsen

Jim Bromer,

This post is not intended for you specifically, but for the entire group.  I 
accept your apology.  Peace.  And now...


Everybody,

Gee, it seems like elitism and censorship are alive and well on the AGI list.

I can't believe some of the stuff I've read in this thread.  Much of this 
rhetoric is coming from people who have never posted (or, at least not recently 
posted) to this list.  Some from the regulars.  But, what is most worrisome to 
me is that these types of proposals are not being rejected forcefully and 
outright by the administrators/moderators of this list.


I beg you all to consider the words of Thomas Jefferson: "I would rather be 
exposed to the inconveniences attending too much liberty than to those attending 
too small a degree of it."


Count to ten and, then, realize that each of us has the power to decide for 
ourselves which posts we read and to which posts we reply.  The quickest way to 
get rid of the crackpots and trolls is to simply ignore them.  Most of us use 
mail clients most of the time.  My mail client (Thunderbird) allows me to set up 
powerful filters by which I can self-moderate this list quite well, thank you 
very much.  I simply choose to not read posts from certain individuals.  That 
way, my blood pressure stays as low as possible and I'm not tempted to reply. 
It only takes a few minutes to write a filter and even less time to update it. 
This can be done with most Web-based e-mail readers as well.  The best part?  My 
kill-list (so-called because the people on it are dead to me) does not affect 
any other list member.  The people in it can keep posting here (I just never see 
those posts) and, therefore, any other list member can still read them.
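
For what it's worth, the kill-list idea takes almost no machinery.  A minimal 
sketch, assuming the list archive sits in a local mbox file (the file name and 
the addresses are hypothetical, and nothing here is Thunderbird-specific):

# Minimal sketch of a personal kill-list filter over a local mbox archive.
# The mbox path and the addresses are made up; the point is only that
# filtering by sender is trivial and affects nobody but me.

import mailbox
from email.utils import parseaddr

KILL_LIST = {"troll@example.com", "blowhard@example.com"}

def readable_messages(mbox_path):
    """Yield only the messages whose sender is not on my personal kill-list."""
    for msg in mailbox.mbox(mbox_path):
        sender = parseaddr(msg.get("From", ""))[1].lower()
        if sender not in KILL_LIST:
            yield msg

for msg in readable_messages("agi-list.mbox"):
    print(msg.get("Subject", "(no subject)"))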


But, please, I'm begging you, do not let this list fall victim to the easy way 
out by banning certain individuals or topics before we, the list members, get a 
chance to see them.  And, please, don't even consider any form of entrance 
exam or proof of intelligence.


Cheers,

Brad

P.S.  I'm also not in favor of disallowing emotive language.  Here, again, if 
you don't like a particular poster's style, you can always kill-list them. 
Don't deprive me of the entertainment they often provide.  Some people are just 
bombastic by nature.  Scratch the surface and they turn out to be, guess what? 
Real human beings.  With all the emotional baggage and over the top behavior 
that can bring.  But, also with all of the emotional needs and insecurities. 
Sometimes we just need to cut people a little slack.  Judge not lest you be 
judged.  That kind of thing.


P.P.S. I was interested to see the list posting analysis in which I was the 
second most frequent poster for July!  I joined the AGI list (by invitation) on 
April 1, 2008.  From then until June 1 (two months), I submitted just 19 posts 
(for an average of just 9.5 posts per month).  The vast majority of those were 
informational (i.e., contained a headline written by someone else with a link 
to the associated article that I felt might be of interest to other list 
members).  I didn't initiate a thread until late July.  The vast majority of my
posts in July were over a two-day period and were responses to comments I 
received from other list members.  Just goes to show how misleading statistics 
can be.


But what really gets me cheezed-off is that Loosemore got first place! :-)

Jim Bromer wrote:

I seriously meant it to be a friendly statement.  Obviously I
expressed myself poorly.
Jim Bromer

On Sun, Aug 3, 2008 at 6:41 PM, Brad Paulsen [EMAIL PROTECTED] wrote:

This from the guy who only about three or four days ago responded to a post
I made here by telling me to get a life.  And, that was the sum-total of
his comments.  What's that smell?!?  Ah, hypocrisy!

Jim Bromer wrote:

I used to think that critical attacks on a person's general thinking
were reasonable, but I have found that the best way to reduce the most
hostile and intolerant comments is to be overly objective and refrain
from making any personal comments at all.  Unfortunately, I have found
that you have to refrain from making friendly or shared-experience
kinds of remarks as well in order to use this method to effectively
reduce the dullest sort of personal attacks and the grossest
exaggerations.

The best method of bringing the conversation to a higher level is to
get as many people as possible to refrain from sinking to the lower
levels.

Some of the most intolerant remarks that I received from a few people
in this group were for remarks where I said that I thought that there
was a chance that I might have received some divine guidance on a
logical SAT project that I was working on.  At one point, to the best
of my recollection, Ben Goertzel made the statement that since a
polynomial time SAT was impossible, discussion of polynomial time
methods of SAT would be banned from the group!  Since polynomial time
vs non-polynomial time SAT is famously unprovable, Ben's remark seemed

Re: [agi] How do we know we don't know?

2008-07-31 Thread Brad Paulsen

Mike,

Valentina was referring to a remark I made (and shouldn't have -- just on 
general principles) about her making my *personal* kill-list thanks to the LOL 
she left regarding Richard Loosemore's original reply to the post that started 
this thread.  I should have taken a time out before I opened my big fingers. 
Had I done so, I would have found out (as I did through subsequent exchanges 
with Richard) that his comments were based on a misunderstanding.  He thought 
what I was calling the list of "things we don't know" was a list of all things 
not known.  It wasn't.  I was referring to the list of things we know we don't 
know.  I take full responsibility for creating this misunderstanding through
sloppy writing/editing.  Anyhow, I took Richard's initial comments the wrong way 
(probably because I'm as insecure as the next person).  Valentina's message got 
read in that context.  The misunderstanding has all been worked out now, so 
there was really no reason for all the initial drama.


Valentina: if you're reading this, I apologize for overreacting.  I re-read your 
post after I'd calmed down and realized that you did add a brief comment on 
Richard's reply.  You didn't just pile on.  I look forward to hearing more 
about your views on building an AGI.


I'm happy to see this thread has generated some interesting side discussions. 
I'm here to learn and, occasionally, see what people who give a lot of time and 
thought to this subject think of my whacky ideas.


Cheers,

Brad


Mike Tintner wrote:

Er no, I don't believe in killing people :)
 
I'm not quite sure what you're what getting at. I was just trying to add 
another layer of complexity to the brain's immensely multilayered 
processing.  Our processing of new words/word combinations shows that 
there is a creative aspect to this processing - it isn't just matching.  
Some of this might be done by standard verbal associations/ semantic 
networks - e.g. yes IMO artcop could be a word for, say, art critic -  
cops police, and art can be seen as being policed - I may even have 
that last expression in memory.  But in other cases, the processing may 
have to be done by imaginative association/drawing - dirksilt could 
just conceivably be a word, if I imagine some dirk/dagger-like 
tool being used on silt, (doesn't make much sense but conceivable for my 
brain) -  I doubt that such reasoning could be purely verbal.
 
 
Valentina: This is how I explain it: when we perceive a stimulus, word 
in this case, it doesn't reach our brain as a single neuron firing or 
synapse, but as a set of already processed neuronal groups or sets of 
synapses, that each recall various other memories, concepts and neuronal 
group. Let me clarify this. In the example you give, the word artcop 
might reach us as a set of stimuli: art, cop, medium-sized word, word 
that begins with a, and so on. All these connect to and activate various maps 
in our memory, and if something substantial is monitored at some point 
(going with Richard's theory of the monitor, I don't have other 
references for this actually), we form a response.


 
This is more obvious in the case of sight - where an image is first

broken into various components that are separately elaborated:
colours, motion, edges, shapes, etc. - and then further sent to the
upper parts of the memory where they can be associated to higher
level concepts.
 
If any of this is not clear let me know, instead of adding me to

your kill-lists ;-P
 
On 7/31/08, *Mike Tintner* [EMAIL PROTECTED] wrote:



Vlad:

I think Hofstadter's exploration of jumbles (
http://en.wikipedia.org/wiki/Jumble ) covers this ground.
You don't
just recognize the word, you work on trying to connect it to
what you
know, and if set of letters didn't correspond to any word,
you give
up.


There's still more to word recognition though than this. How do
we decide what is and isn't, may or may not be a word?  A
neologism? What may or may not be words from:

cogrough
dirksilt
thangthing
artcop
coggourd
cowstock

or fomlepaung or whatever?

 










Re: [agi] How do we know we don't know?

2008-07-30 Thread Brad Paulsen

Abram,

The syntactic surface feature argument makes a good, but rather narrow, addition 
to the list of mechanisms that can engender a feeling of not knowing.  The 
interesting part is that, for someone who speaks Norwegian (like me), that word 
in the example didn't set off any phonological feature alarms.  Non-Norwegian 
speakers picked it up right off.


Your argument for semantic (i.e., meaning) features lacks concrete examples so 
it is difficult to tell exactly what you mean.  Based on your general argument, 
I would conclude that it requires a content search of some sort and, therefore, 
falls under one of the mechanisms posited in my initial post.


Cheers,

Brad

Abram Demski wrote:

I think the same sort of solution applies to the world series case;
the only difference is that it is semantic features that fail to
combine, rather than syntactic. In other words, there are either zero
associations or none with the potential to count as an answer.

--Abram

On Tue, Jul 29, 2008 at 7:51 PM, Brad Paulsen [EMAIL PROTECTED] wrote:

Matt,

I confess, I'm not sure I understand your response.  It seems to be a
variant of the critique made by three people early-on in this thread based
on the misleading example query in my original post.  These folks noted that
an analysis of linguistic surface features (i.e., the word fomlepung would
not sound right to an English speaking query recipient) could account for
the feeling of not knowing.  And they were right.  For queries of that
type (i.e., queries that contained foreign, slang or uncommon words).

I apologized for that first example and provided an improved query (one that
has valid English syntax and uses common English words -- so it will pass
linguistic surface feature analysis).  To wit: Which team won the 1924
World Series?

Cheers,

Brad


Matt Mahoney wrote:

This is not a hard problem. A model for data compression has the task of 
predicting the next bit in a string of unknown origin. If the string is an 
encoding of natural language text, then modeling is an AI problem. If the model 
doesn't know, then it assigns a probability of about 1/2 to each of 0 and 1. 
Probabilities can be easily detected from outside the model, regardless of the 
intelligence level of the model.

 -- Matt Mahoney, [EMAIL PROTECTED]
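
Matt's point can be illustrated from outside any particular model: if the 
model's next-bit probabilities sit near 1/2 (entropy near one bit), an observer 
can flag that it doesn't know.  A toy sketch, with a deliberately dumb 
frequency-count predictor standing in for the real model:

# Detecting "doesn't know" from outside a predictive model: near-uniform
# next-bit probabilities (entropy close to 1 bit) signal that the model
# has no real idea what comes next.  The frequency-count predictor is a
# hypothetical stand-in for an actual compressor's model.

import math
from collections import Counter

def next_bit_probability(history: str) -> float:
    """P(next bit = '1') from simple 0/1 counts, with Laplace smoothing."""
    counts = Counter(history)
    return (counts["1"] + 1) / (len(history) + 2)

def knows_next_bit(history: str, threshold: float = 0.9) -> bool:
    """Report 'knows' only when the prediction entropy is well below 1 bit."""
    p = next_bit_probability(history)
    entropy = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return entropy < threshold

print(knows_next_bit("1111111111"))  # True  -- confident prediction
print(knows_next_bit("1010011010"))  # False -- close to a coin flip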















Re: [agi] How do we know we don't know?

2008-07-30 Thread Brad Paulsen



Richard Loosemore wrote:

Brad Paulsen wrote:



Richard Loosemore wrote:

Brad Paulsen wrote:

All,

Here's a question for you:

What does fomlepung mean?

If your immediate (mental) response was I don't know. it means 
you're not a slang-slinging Norwegian.  But, how did your brain 
produce that feeling of not knowing?  And, how did it produce that 
feeling so fast?


Your brain may have been able to do a massively-parallel search of 
your entire memory and come up empty.  But, if it does this, it's 
subconscious.  No one to whom I've presented the above question has 
reported a conscious feeling of searching before having the 
conscious feeling of not knowing.


It could be that your brain keeps a list of things I don't know.  
I tend to think this is the case, but it doesn't explain why your 
brain can react so quickly with the feeling of not knowing when it 
doesn't know it doesn't know (e.g., the very first time it 
encounters the word fomlepung).


My intuition tells me the feeling of not knowing when presented with 
a completely novel concept or event is a product of the Danger, 
Will Robinson!, reptilian part of our brain.  When we don't know we 
don't know something we react with a feeling of not knowing as a 
survival response.  Then, having survived, we put the thing not 
known at the head of our list of things I don't know.  As long as 
that thing is in this list it explains how we can come to the 
feeling of not knowing it so quickly.


Of course, keeping a large list of things I don't know around is 
probably not a good idea.  I suspect such a list will naturally get 
smaller through atrophy.  You will probably never encounter the 
fomlepung question again, so the fact that you don't know what it 
means will become less and less important and eventually it will 
drop off the end of the list.  And...


Another intuition tells me that the list of things I don't know, 
might generate a certain amount of cognitive dissonance the 
resolution of which can only be accomplished by seeking out new 
information (i.e., learning)?  If so, does this mean that such a 
list in an AGI could be an important element of that AGI's desire 
to learn?  From a functional point of view, this could be something 
as simple as a scheduled background task that checks the things I 
don't know list occasionally and, under the right circumstances, 
pings the AGI with a pang of cognitive dissonance from time to time.


So, what say ye?


Isn't this a bit of a no-brainer?  Why would the human brain need to 
keep lists of things it did not know, when it can simply break the 
word down into components, then have mechanisms that watch for the 
rate at which candidate lexical items become activated  when  
this mechanism notices that the rate of activation is well below the 
usual threshold, it is a fairly simple thing for it to announce that 
the item is not known.


Keeping lists of things not known is wildly, outrageously 
impossible, for any system!  Would we really expect that the word 
ikrwfheuigjsjboweonwjebgowinwkjbcewijcniwecwoicmuwbpiwjdncwjkdncowk-

owejwenowuycgxnjwiiweudnpwieudnwheudxiweidhuxehwuixwefgyjsdhxeiowudx-
hwieuhyxweipudxhnweduiweodiuweydnxiweudhcnhweduweiducyenwhuwiepixuwe-
dpiuwezpiweudnzpwieumzweuipweiuzmwepoidumw is represented somewhere 
as a word that I do not know? :-)


I note that even in the simplest word-recognition neural nets that I 
built and studied in the 1990s, activation of a nonword proceeded in 
a very different way than activation of a word:  it would have been 
easy to build something to trigger a this is a nonword neuron.


Is there some type of AI formalism where nonword recognition would be 
problematic?




Richard Loosemore


Richard,

You seem to have decided my request for comment was about word 
(mis)recognition.  It wasn't.  Unfortunately, I included a misleading 
example in my initial post.  A couple of list members called me on it 
immediately (I'd expect nothing less from this group -- and this was a 
valid criticism duly noted).  So far, three people have pointed out 
that a query containing an un-common (foreign, slang or both) word is 
one way to quickly generate the feeling of not knowing.  But, it is 
just that: only one way.  Not all feelings of not knowing are 
produced by linguistic analysis of surface features.  In fact, I would 
guess that the vast majority of them are not so generated.  Still, 
some are and pointing this out was a valid contribution (perhaps that 
example was fortunately bad).


I don't think my query is a no-brainer to answer (unless you want to 
make it one) and your response, since it contained only another 
flavor of the previous two responses, gives me no reason whatsoever 
to change my opinion.


Please take a look at the revised example in this thread.  I don't 
think it has the same problems (as an example) as did the initial 
example.  In particular, all of the words are common (American 
English) and the syntax is valid

Re: [agi] How do we know we don't know?

2008-07-30 Thread Brad Paulsen

Richard,

Someone who can throw comments like "Isn't this a bit of a no-brainer?" and 
"Keeping lists of 'things not known' is wildly, outrageously impossible, for any 
system!" at people should expect a little bit of annoyance in return.  If you 
can't take it, don't dish it out.


Your responses to my initial post so far have been devoid of any real 
substantive evidence or argument for the opinions you have expressed therein. 
Your initial reply correctly identified an additional mechanism that two other 
list members had previously reported (that surface features could raise the 
feeling of not knowing without triggering an exhaustive memory search).  As I 
pointed out in my response to them, this observation was a good catch but did 
not, in any way, show my ideas to be no-brainers or wildly, outrageously 
impossible.  In that reply, I posted a new example query that contained only 
common American English words and was syntactically valid.


If you want to present an evidence-based or well-reasoned argument why you 
believe my ideas are meritless, then let's have it.  Pejorative adjectives, ad 
hominem attacks and baseless opinions don't impress me much.


As to your cheerleader, she's just made my kill-list.  The only thing worse than 
someone who slings unsupported opinions around like they're facts, is someone 
who slings someone else's unsupported opinions around like they're facts.


Who is Mark Waser?

Cheers,

Brad

Richard Loosemore wrote:

Brad Paulsen wrote:

Valentina,

Well, the LOL is on you.

Richard failed to add anything new to the two previous responses that 
each posited linguistic surface feature analysis as being responsible 
for generate the feeling of not knowing with that *particular* (and, 
admittedly poorly-chosen) example query.  This mechanism will, 
however, apply to only a very tiny number of cases.


In response to those first two replies (not including Richard's), I 
apologized for the sloppy example and offered a new one.  Please read 
the entire thread and the new example.  I think you'll find Richard's 
and your explanation will fail to address how the new example might 
generate the feeling of not knowing.


Brad,

Isn't this response, as well as the previous response directed at me, 
just a little more annoyed-sounding than it needs to be?


Both Valentina and I (and now Mark Waser also) have simply focused on 
the fact that it is relatively trivial to build mechanisms that monitor 
the rate at which the system is progressing in its attempt to do a 
recognition operation, and then call it as a not known if the progress 
rate is below a certain threshold.


In particular, you did suggest the idea of a system keeping lists of 
things it did not know, and surely it is not inappropriate to give a 
good-naturedly humorous response to that one?


So far, I don't see any of us making a substantial misunderstanding of 
your question, nor anyone being deliberately rude to you.




Richard Loosemore











Valentina Poletti wrote:

lol.. well said richard.
the stimuli simply invokes no significant response and thus our brain 
concludes that we 'don't know'. that's why it takes no effort to 
realize it. agi algorithms should be built in a similar way, rather 
than searching.



Isn't this a bit of a no-brainer?  Why would the human brain need to
keep lists of things it did not know, when it can simply break the
word down into components, then have mechanisms that watch for the
rate at which candidate lexical items become activated  when
 this mechanism notices that the rate of activation is well below
the usual threshold, it is a fairly simple thing for it to announce
that the item is not known.

Keeping lists of things not known is wildly, outrageously
impossible, for any system!  Would we really expect that the word

ikrwfheuigjsjboweonwjebgowinwkjbcewijcniwecwoicmuwbpiwjdncwjkdncowk-

owejwenowuycgxnjwiiweudnpwieudnwheudxiweidhuxehwuixwefgyjsdhxeiowudx-

hwieuhyxweipudxhnweduiweodiuweydnxiweudhcnhweduweiducyenwhuwiepixuwe-

dpiuwezpiweudnzpwieumzweuipweiuzmwepoidumw is represented somewhere
as a word that I do not know? :-)

I note that even in the simplest word-recognition neural nets that I
built and studied in the 1990s, activation of a nonword proceeded in
a very different way than activation of a word:  it would have been
easy to build something to trigger a this is a nonword neuron.

Is there some type of AI formalism where nonword recognition would
be problematic?



Richard Loosemore









Re: [agi] How do we know we don't know?

2008-07-30 Thread Brad Paulsen

Richard,

I just finished reading and replying to your post preceding this one (I guess). 
Your tone and approach in that post were more like what I expected from you. 
I'm not going to get into a pissing match about what I should or should not take 
personally.  That will generate only heat, not light.


Peace, OK?

Cheers,

Brad

P.S. I will review Valentina's post to see if I misunderstood it.  When I 
originally read it, it sure looked like piling on to me.


P.S. Terren:  I reserve the right to put anyone in my personal kill-list.  I 
don't have to justify my reasons.  If I choose to not read the posts of a 
particular list member, and that person turns up on Time Magazine's cover ten 
years from now, well... my loss.  Right?



Richard Loosemore wrote:


Brad,

I just wrote a long, point-by-point response to this, but on reflection 
I am not going to send it.


Instead, I would like to echo Terren Suydam's comment and say that I 
think that you have overreacted here, because in my original reply to 
you I had not the slightest intention of insulting you or your ideas. 
The opening remark, for example, was meant to suggest that the QUESTION 
you posed was a no-brainer (as in, easily answerable), not that your 
ideas were brainless.  You will note that there was a smiley in the 
post, and it started with a question, not a declaration (Isn't this a 
bit of a no-brainer?...).


Throughout, I have simply been trying to explain that there is a general 
strategy for solving your initial question - a strategy quite well known 
to many people - which applies to all versions of the question, whether 
they be at the lexical level or the semantic level.


Valentina, it seems to me, was reacting to the humorous example I gave, 
not mocking you personally.


Certainly, if you feel that I insulted you I am quite willing to 
apologize for what (from my point of view) was an accident of prose style.




Richard Loosemore






Brad Paulsen wrote:

Richard,

Someone who can throw comments like Isn't this a bit of a 
no-brainer? and Keeping lists of 'things not known' is wildly, 
outrageously impossible, for any system! at people should expect a 
little bit of annoyance in return.  If you can't take it, don't dish 
it out.


Your responses to my initial post so far have been devoid of any real 
substantive evidence or argument for the opinions you have expressed 
therein. Your initial reply correctly identified an additional 
mechanism that two other list members had previously reported (that 
surface features could raise the feeling of not knowing without 
triggering an exhaustive memory search).  As I pointed out in my 
response to them, this observation was a good catch but did not, in 
any way, show my ideas to be no-brainers or wildly, outrageously 
impossible.  In that reply, I posted a new example query that 
contained only common American English words and was syntactically valid.


If you want to present an evidence-based or well-reasoned argument why 
you believe my ideas are meritless, then let's have it.  Pejorative 
adjectives, ad hominem attacks and baseless opinions don't impress me 
much.


As to your cheerleader, she's just made my kill-list.  The only thing 
worse than someone who slings unsupported opinions around like they're 
facts, is someone who slings someone else's unsupported opinions 
around like they're facts.


Who is Mark Waser?

Cheers,

Brad

Richard Loosemore wrote:

Brad Paulsen wrote:

Valentina,

Well, the LOL is on you.

Richard failed to add anything new to the two previous responses 
that each posited linguistic surface feature analysis as being 
responsible for generate the feeling of not knowing with that 
*particular* (and, admittedly poorly-chosen) example query.  This 
mechanism will, however, apply to only a very tiny number of cases.


In response to those first two replies (not including Richard's), I 
apologized for the sloppy example and offered a new one.  Please 
read the entire thread and the new example.  I think you'll find 
Richard's and your explanation will fail to address how the new 
example might generate the feeling of not knowing.


Brad,

Isn't this response, as well as the previous response directed at me, 
just a little more annoyed-sounding than it needs to be?


Both Valentina and I (and now Mark Waser also) have simply focused on 
the fact that it is relatively trivial to build mechanisms that 
monitor the rate at which the system is progressing in its attempt to 
do a recognition operation, and then call it as a not known if the 
progress rate is below a certain threshold.


In particular, you did suggest the idea of a system keeping lists of 
things it did not know, and surely it is not inappropriate to give a 
good-naturedly humorous response to that one?


So far, I don't see any of us making a substantial misunderstanding 
of your question, nor anyone being deliberately rude to you.




Richard Loosemore











Valentina Poletti wrote:

lol

Re: [agi] How do we know we don't know?

2008-07-30 Thread Brad Paulsen



Richard Loosemore wrote:

Brad Paulsen wrote:

James,

Someone ventured the *opinion* that keeping such a list of things I 
don't know was nonsensical, but I have yet to see any evidence or 
well-reasoned argument backing that opinion.  So, it's just an 
opinion.  One with which I, obviously, do not agree.


Please be clear about what was intended by my remarks.

I *now* have an explicit, episodic memory of confronting the question 
Who won the world series in 1954, and as a result of that episode that 
occured today, I have the explicit knowledge that I do not know the 
answer.  Having that kind of explicit knowledge of lack-of-knowledge is 
not problematic at all.


The only thing that seems implausible is that IN GENERAL we try to 
answer questions by first looking up explicit elements that encode the 
fact that we do not know the answer.  As a general strategy this must, 
surely, be deeply implausible, for the reasons that I originally gave, 
which centered on the fact that the sheer quantity of unknowns would be 
overwhelming for any system.  For almost every one of the potentially 
askable questions that would elicit, in me, a response of I do not 
know, there would not be any such episode.  Similarly, it would be 
clearly implausible for the cognitive system to spend its time making 
lists of things that it did not know.  If that is not an example of an 
obviously implausible mechanism, then I do not know what would be.


Ah.  Now we're getting somewhere!  I do *not* (and did not) propose that we keep 
a list of all the things unknown in memory.  Nor did I propose some 
background task that would maintain or add to such a list.  That would be 
...wildly, outrageously impossible, for any system!  Maybe, instead of 
assuming the worst (that I could be so ignorant as to propose such a list), you 
might have asked for some clarification?


The list of things I don't know is, by definition, a list of things I know I 
don't know.  How could I *possibly* know about things I don't know I don't 
know?  The list I propose contains ONLY those things we know we don't know. 
Such a list is, in my opinion, completely manageable and, indeed, helpful 
information to have around.  When we first encounter a completely novel object 
or event we will have to search (percolate, whatever) for it in memory and come 
up empty (however you want to define that).  It is then, and *only* then, that 
we put this knowledge (or meta-knowledge) on the things (I know) I don't know 
list.


This list can be consulted before performing a search of all memory to determine 
if there's a need to do such an exhaustive search.  If the thing we're trying to 
remember is on the things (I know) I don't know list, we can very quickly 
report the feeling of not knowing.  Otherwise, we have to do the exhaustive 
(however you define that) search of things we do know and come up empty.  Such a 
list can also be used by subconscious processes to power our desire to learn. 
Presumably, we experience cognitive dissonance when we feel there's something we 
know nothing about and want to resolve that feeling.  How?  By learning.  Once 
learned, the thing falls off the things (I know) I don't know list. 
Similarly, if an item is on the list for a long time, it will naturally fall 
off the list (the use it or lose it principle).  Both of these natural 
actions will work, I believe, to keep this list quite small.
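
To make the mechanics concrete, here is a minimal sketch of such a front-end. 
Only the consult-the-list-before-searching idea and the use-it-or-lose-it decay 
come from the paragraphs above; the storage, the time-based decay rule and every 
name below are illustrative assumptions:

# Toy sketch of a "things I know I don't know" front-end to memory.
import time

class Memory:
    def __init__(self, facts, unknown_ttl=3600.0):
        self.facts = dict(facts)          # the things we do know
        self.known_unknowns = {}          # query -> time we learned we don't know it
        self.unknown_ttl = unknown_ttl    # "use it or lose it" decay, in seconds

    def recall(self, query):
        now = time.time()
        # Let stale known-unknowns drop off the end of the list.
        self.known_unknowns = {q: t for q, t in self.known_unknowns.items()
                               if now - t < self.unknown_ttl}
        # Fast path: we already know that we don't know this.
        if query in self.known_unknowns:
            return "I don't know (and I knew that already)"
        # Slow path: the exhaustive search of the things we do know.
        if query in self.facts:
            return self.facts[query]
        # First encounter with this unknown: remember the not-knowing.
        self.known_unknowns[query] = now
        return "I don't know"

    def learn(self, query, answer):
        """Learning resolves the dissonance and takes the item off the list."""
        self.facts[query] = answer
        self.known_unknowns.pop(query, None)

m = Memory({"Which team won the 1927 World Series?": "the New York Yankees"})
print(m.recall("Which team won the 1924 World Series?"))  # slow path, then listed
print(m.recall("Which team won the 1924 World Series?"))  # fast path
m.learn("Which team won the 1924 World Series?", "the Washington Senators")
print(m.recall("Which team won the 1924 World Series?"))  # known now, off the list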


Sometimes (well, don't ask my ex) I can be a bit thick.  I know you're all 
surprised to hear that, but...


It just dawned on me that much of the uproar here may have been caused by a 
miscommunication (gee, where have we heard of that happening before?).  I may 
have used the term things we don't know to denote the things we know we don't 
know list.  If so, please accept my apologies.  Having played with these 
questions for a long time, this *important* distinction apparently became lost 
to me and I began to assume it self-evident that a things we don't know list 
would have had to come into being as the result of our encounters with those 
things when they were things we didn't know we didn't know (and, therefore, 
could not be in any list of knowledge we had -- we are clueless about these 
things until we encounter them).


If that's the case, let me (finally) be clear: the list I am talking about in 
the human or AGI agent's memory is a list of THINGS I KNOW I DON'T KNOW.  In the 
first (misleading) example I gave, the word fomlepung would be on that list 
after the query containing it had resulted in the I don't know answer (how 
that determination is made is really a minor point for this discussion).  In the 
second example query I gave, "Which team won the 1924 World Series?" would 
also, after eliciting the "I don't know" response, find its way onto this list.



This was not merely an opinion, it was a reasoned argument, 
illustrated by an example of a nonword that clearly belonged to a vast 
class of nonwords.


Well, be fair

Re: [agi] How do we know we don't know?

2008-07-29 Thread Brad Paulsen



Richard Loosemore wrote:

Brad Paulsen wrote:

All,

Here's a question for you:

What does fomlepung mean?

If your immediate (mental) response was I don't know. it means 
you're not a slang-slinging Norwegian.  But, how did your brain 
produce that feeling of not knowing?  And, how did it produce that 
feeling so fast?


Your brain may have been able to do a massively-parallel search of 
your entire memory and come up empty.  But, if it does this, it's 
subconscious.  No one to whom I've presented the above question has 
reported a conscious feeling of searching before having the 
conscious feeling of not knowing.


It could be that your brain keeps a list of things I don't know.  I 
tend to think this is the case, but it doesn't explain why your brain 
can react so quickly with the feeling of not knowing when it doesn't 
know it doesn't know (e.g., the very first time it encounters the word 
fomlepung).


My intuition tells me the feeling of not knowing when presented with a 
completely novel concept or event is a product of the Danger, Will 
Robinson!, reptilian part of our brain.  When we don't know we don't 
know something we react with a feeling of not knowing as a survival 
response.  Then, having survived, we put the thing not known at the 
head of our list of things I don't know.  As long as that thing is 
in this list it explains how we can come to the feeling of not knowing 
it so quickly.


Of course, keeping a large list of things I don't know around is 
probably not a good idea.  I suspect such a list will naturally get 
smaller through atrophy.  You will probably never encounter the 
fomlepung question again, so the fact that you don't know what it 
means will become less and less important and eventually it will drop 
off the end of the list.  And...


Another intuition tells me that the list of things I don't know, 
might generate a certain amount of cognitive dissonance the resolution 
of which can only be accomplished by seeking out new information 
(i.e., learning)?  If so, does this mean that such a list in an AGI 
could be an important element of that AGI's desire to learn?  From a 
functional point of view, this could be something as simple as a 
scheduled background task that checks the things I don't know list 
occasionally and, under the right circumstances, pings the AGI with 
a pang of cognitive dissonance from time to time.


So, what say ye?


Isn't this a bit of a no-brainer?  Why would the human brain need to 
keep lists of things it did not know, when it can simply break the word 
down into components, then have mechanisms that watch for the rate at 
which candidate lexical items become activated  when  this mechanism 
notices that the rate of activation is well below the usual threshold, 
it is a fairly simple thing for it to announce that the item is not known.
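
A toy rendering of that mechanism, collapsing the rate of activation over time 
into a single activation score, with a made-up lexicon and threshold (a sketch 
of the idea only, not the actual network described here):

# Toy sketch of the activation-threshold idea: candidate lexical items
# are scored against the input, and the system announces "not known"
# when no candidate activates strongly enough.  Lexicon, scoring rule
# and threshold are all illustrative assumptions.

LEXICON = {"fondle", "formula", "pungent", "homely", "people"}

def best_activation(word: str) -> float:
    """Highest fraction of letters (by position) shared with any known word."""
    best = 0.0
    for candidate in LEXICON:
        matches = sum(1 for a, b in zip(word, candidate) if a == b)
        best = max(best, matches / max(len(candidate), len(word)))
    return best

def recognize(word: str, threshold: float = 0.6) -> str:
    """Announce 'not known' when activation stays below the threshold."""
    return word if best_activation(word) >= threshold else "I don't know that word"

print(recognize("people"))      # recognized: activation is 1.0
print(recognize("fomlepung"))   # weak activation -> "I don't know that word"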


Keeping lists of things not known is wildly, outrageously impossible, 
for any system!  Would we really expect that the word 
ikrwfheuigjsjboweonwjebgowinwkjbcewijcniwecwoicmuwbpiwjdncwjkdncowk-

owejwenowuycgxnjwiiweudnpwieudnwheudxiweidhuxehwuixwefgyjsdhxeiowudx-
hwieuhyxweipudxhnweduiweodiuweydnxiweudhcnhweduweiducyenwhuwiepixuwe-
dpiuwezpiweudnzpwieumzweuipweiuzmwepoidumw is represented somewhere as 
a word that I do not know? :-)


I note that even in the simplest word-recognition neural nets that I 
built and studied in the 1990s, activation of a nonword proceeded in a 
very different way than activation of a word:  it would have been easy 
to build something to trigger a this is a nonword neuron.


Is there some type of AI formalism where nonword recognition would be 
problematic?




Richard Loosemore


Richard,

You seem to have decided my request for comment was about word (mis)recognition. 
 It wasn't.  Unfortunately, I included a misleading example in my initial post. 
 A couple of list members called me on it immediately (I'd expect nothing less 
from this group -- and this was a valid criticism duly noted).  So far, three 
people have pointed out that a query containing an un-common (foreign, slang or 
both) word is one way to quickly generate the feeling of not knowing.  But, it 
is just that: only one way.  Not all feelings of not knowing are produced by 
linguistic analysis of surface features.  In fact, I would guess that the vast 
majority of them are not so generated.  Still, some are and pointing this out 
was a valid contribution (perhaps that example was fortunately bad).


I don't think my query is a no-brainer to answer (unless you want to make it 
one) and your response, since it contained only another flavor of the previous 
two responses, gives me no reason whatsoever to change my opinion.


Please take a look at the revised example in this thread.  I don't think it has 
the same problems (as an example) as did the initial example.  In particular, 
all of the words are common (American English) and the syntax is valid.


Cheers,

Brad

Re: [agi] How do we know we don't know?

2008-07-29 Thread Brad Paulsen

James,

So, you agree that some sort of search must take place before the feeling of 
not knowing presents itself?  Of course, realizing we don't have a lot of 
information results from some type of a search and not a separate process (at 
least you didn't posit any).


Thanks for your comments!

Cheers

Brad

James Ratcliff wrote:
It is fairly simple at that point, we have enough context to have a very 
limited domain

world series - baseball
1924
answer is a team,
so we can do a lookup in our database easily enough, or realize that we 
really don't have a lot of information about baseball in our mindset.


And for the other one, it would just be a straight term match.

James Ratcliff

___
James Ratcliff - http://falazar.com
Looking for something...

--- On Mon, 7/28/08, Brad Paulsen [EMAIL PROTECTED] wrote:

From: Brad Paulsen [EMAIL PROTECTED]
Subject: Re: [agi] How do we know we don't know?
To: agi@v2.listbox.com
Date: Monday, July 28, 2008, 4:12 PM

Jim Bromer wrote:

On Mon, Jul 28, 2008 at 2:58 PM, Brad Paulsen [EMAIL PROTECTED] wrote:

All,

What does fomlepung mean?

If your immediate (mental) response was I don't know. it means you're not a 
slang-slinging Norwegian.  But, how did your brain produce that feeling of not 
knowing?  And, how did it produce that feeling so fast?

Your brain may have been able to do a massively-parallel search of your entire 
memory and come up empty.  But, if it does this, it's subconscious.  No one to 
whom I've presented the above question has reported a conscious feeling of 
searching before having the conscious feeling of not knowing.

Brad

My guess is that initial recognition must be based on the surface features of 
an input.  If this is true, then that could suggest that our initial 
recognition reactions are stimulated by distinct components (or distinct 
groupings of components) that are found in the surface input data.

Jim Bromer

Hmmm.  That particular query may not have been the best example since, to a 
non-Norwegian speaker, the phonological surface feature of that statement alone 
could account for the feeling of not knowing.  In other words, the word 
fomlepung just doesn't sound right.  Good point.  But, that may only explain 
how we know we don't know strange-sounding words.


Let's try another example:

Which team won the 1924 World Series?

Cheers,

Brad














Re: [agi] How do we know we don't know?

2008-07-29 Thread Brad Paulsen

Valentina,

Well, the LOL is on you.

Richard failed to add anything new to the two previous responses that each 
posited linguistic surface feature analysis as being responsible for generating 
the feeling of not knowing with that *particular* (and, admittedly 
poorly-chosen) example query.  This mechanism will, however, apply to only a 
very tiny number of cases.


In response to those first two replies (not including Richard's), I apologized 
for the sloppy example and offered a new one.  Please read the entire thread and 
the new example.  I think you'll find Richard's and your explanation will fail 
to address how the new example might generate the feeling of not knowing.


Cheers,

Brad

Valentina Poletti wrote:

lol.. well said richard.
the stimuli simply invokes no significant response and thus our brain 
concludes that we 'don't know'. that's why it takes no effort to realize 
it. agi algorithms should be built in a similar way, rather than searching.



Isn't this a bit of a no-brainer?  Why would the human brain need to
keep lists of things it did not know, when it can simply break the
word down into components, then have mechanisms that watch for the
rate at which candidate lexical items become activated  when
 this mechanism notices that the rate of activation is well below
the usual threshold, it is a fairly simple thing for it to announce
that the item is not known.

Keeping lists of things not known is wildly, outrageously
impossible, for any system!  Would we really expect that the word
ikrwfheuigjsjboweonwjebgowinwkjbcewijcniwecwoicmuwbpiwjdncwjkdncowk-
owejwenowuycgxnjwiiweudnpwieudnwheudxiweidhuxehwuixwefgyjsdhxeiowudx-
hwieuhyxweipudxhnweduiweodiuweydnxiweudhcnhweduweiducyenwhuwiepixuwe-
dpiuwezpiweudnzpwieumzweuipweiuzmwepoidumw is represented somewhere
as a word that I do not know? :-)

I note that even in the simplest word-recognition neural nets that I
built and studied in the 1990s, activation of a nonword proceeded in
a very different way than activation of a word:  it would have been
easy to build something to trigger a this is a nonword neuron.

Is there some type of AI formalism where nonword recognition would
be problematic?



Richard Loosemore

 








---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=108809214-a0d121
Powered by Listbox: http://www.listbox.com


Re: [agi] How do we know we don't know?

2008-07-29 Thread Brad Paulsen

Ed,

Thanks for the response.  I'm going to read it a couple more times to make sure 
I didn't miss anything.  But, on first read, looks good!


Thanks for taking the time to comment in such detail!

Cheers,

Brad

Ed Porter wrote:

I believe the human brain, in addition to including the controller for a
physical robot, includes the controller of a thought robot, which pushes
much of the brain through learned or instinctual mental behaviors.
My understanding is that much of the higher-level function of this thought
controller resides largely in the prefrontal cortex, basal ganglia, thalamic
loop.

I am guessing that answering a query such as "What does word_X (in this
case, fomlepung) mean?" is a type of learned behavior.  The thought robot
is consciously aware of the query task and of the idea that, as a query, its
task is to search for a recollection of the word "fomlepung" and its
associations.  I think the search is generated by consciously broadcasting
a pattern looking for a match for "fomlepung" to the appropriate areas of
the brain.  Although much of the spreading activation done in response to
this conscious activation is, itself, subconscious, the thought-robot
task of answering a query is to focus attention on the query and on any
feedback from it indicating a possible answer.  This could be done by looking
for feedback from cortical activations to the thalamus that are in synchrony
with the query pattern, tuning into them, and testing them to see if any of
them is the desired match.  


When the conscious task of query answering does not get feedback indicating
an answer, the conscious pre-frontal process engaged in the query is aware of
that lack of desired feedback and, thus, the human in whose mind the process
is taking place is conscious that he/she doesn't know (or at least can't
recall) the meaning of the word.

Conscious feelings of not knowing can arise in other contexts besides
answering a "what does word_X mean" query.  In some of them, subconscious
processes might, for various reasons, promote a failure to match a
subconscious query or task up to consciousness.  


For example, a subconscious pattern-completion process, in, say, high-level
perception or in cognition, might draw activation to itself, pushing
its activation into semi-conscious or conscious attention, both because its
activation pattern is beginning to better match emotionally weighted
patterns that direct more activation energy back to it, and because there is
a missing piece of information necessary for that valuable match to be
made.  The brain may have learned, by evolution or individual experience, that
such information would be more likely found if the much greater spreading-
activation resources of semi-conscious or conscious attention could be
utilized for conducting the search for such missing information.  This
causes a greater search to be made for such information and, if the
information is not found quickly, could cause even more attention to be
allocated to the search, pushing the search and its failure into clear
conscious awareness.

Ed Porter

-Original Message-
From: Abram Demski [mailto:[EMAIL PROTECTED] 
Sent: Monday, July 28, 2008 4:25 PM

To: agi@v2.listbox.com
Subject: Re: [agi] How do we know we don't know?

It seems like you have some valid points, but I cannot help but point
out a problem with your question. It seems like any system for pattern
recognition and/or prediction will have a sensible "I don't know"
state. An algorithm in a published paper might suppress this in an
attempt to give as reasonable an output as is possible in all
situations, but it seems like in most such cases it would be easy to
add. Therefore, where is the problem?

Yet, I follow your comments and to an extent agree... the feeling when
I don't know something could possibly be related to animal fear
(though I am not sure), and the second time I encounter the same thing
is certainly different (because I remember the previous not-knowing,
so I at least have that info for context this time).

But I think the issue might nonetheless be non-fundamental, because
algorithms typically can easily report their not knowing.

--Abram

On Mon, Jul 28, 2008 at 2:58 PM, Brad Paulsen [EMAIL PROTECTED]
wrote:

All,

Here's a question for you:

   What does fomlepung mean?

If your immediate (mental) response was "I don't know," it means you're
not a slang-slinging Norwegian.  But, how did your brain produce that feeling
of not knowing?  And, how did it produce that feeling so fast?

Your brain may have been able to do a massively-parallel search of your
entire memory and come up empty.  But, if it does this, it's subconscious.
No one to whom I've presented the above question has reported a conscious
feeling of searching before having the conscious feeling of not knowing.

It could be that your brain keeps a list of "things I don't know."  I tend
to think this is the case, but it doesn't explain why your brain can react
so quickly

[agi] How do we know we don't know?

2008-07-28 Thread Brad Paulsen

All,

Here's a question for you:

What does fomlepung mean?

If your immediate (mental) response was "I don't know," it means you're not a 
slang-slinging Norwegian.  But, how did your brain produce that feeling of not 
knowing?  And, how did it produce that feeling so fast?


Your brain may have been able to do a massively-parallel search of your entire 
memory and come up empty.  But, if it does this, it's subconscious.  No one to 
whom I've presented the above question has reported a conscious feeling of 
searching before having the conscious feeling of not knowing.


It could be that your brain keeps a list of "things I don't know."  I tend to 
think this is the case, but it doesn't explain why your brain can react so 
quickly with the feeling of not knowing when it doesn't know it doesn't know 
(e.g., the very first time it encounters the word "fomlepung").


My intuition tells me the feeling of not knowing when presented with a 
completely novel concept or event is a product of the "Danger, Will Robinson!" 
reptilian part of our brain.  When we don't know we don't know something, we 
react with a feeling of not knowing as a survival response.  Then, having 
survived, we put the thing not known at the head of our list of "things I don't 
know."  As long as that thing is in the list, that explains how we can come to 
the feeling of not knowing it so quickly.


Of course, keeping a large list of "things I don't know" around is probably not 
a good idea.  I suspect such a list will naturally get smaller through atrophy. 
You will probably never encounter the "fomlepung" question again, so the fact 
that you don't know what it means will become less and less important and, 
eventually, it will drop off the end of the list.  And...


Another intuition tells me that the list of "things I don't know" might 
generate a certain amount of cognitive dissonance, the resolution of which can 
only be accomplished by seeking out new information (i.e., learning).  If so, 
does this mean that such a list in an AGI could be an important element of that 
AGI's desire to learn?  From a functional point of view, this could be 
something as simple as a scheduled background task that checks the "things I 
don't know" list occasionally and, under the right circumstances, pings the 
AGI with a pang of cognitive dissonance from time to time.
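
As a rough sketch (in Python) of that functional version, and nothing more 
than a sketch: the class names, the decay rate, the ping threshold, and the 
agent's feel_dissonance_about() hook below are all assumptions invented for 
illustration, not a proposed design.

    class UnknownThings:
        # A decaying list of "things I don't know" (illustrative only).
        def __init__(self, decay=0.9, floor=0.05, ping_level=0.5):
            self.items = {}            # thing -> importance weight
            self.decay, self.floor, self.ping_level = decay, floor, ping_level

        def note_unknown(self, thing):
            # The most recent unknown goes to the "head" with full weight.
            self.items[thing] = 1.0

        def atrophy(self):
            # Unrevisited items fade and eventually drop off the end of the list.
            self.items = {t: w * self.decay for t, w in self.items.items()
                          if w * self.decay > self.floor}

        def dissonance_ping(self, agent):
            # Under the right circumstances, nag the agent to go learn.
            for thing, weight in self.items.items():
                if weight > self.ping_level:
                    agent.feel_dissonance_about(thing)   # hypothetical hook

    class CuriousAgent:
        # Stand-in for the AGI; a real one would schedule learning here.
        def feel_dissonance_about(self, thing):
            print("pang of cognitive dissonance about:", thing)

    if __name__ == "__main__":
        unknowns, agent = UnknownThings(), CuriousAgent()
        unknowns.note_unknown("fomlepung")
        for _ in range(3):             # stands in for the scheduled background task
            unknowns.dissonance_ping(agent)
            unknowns.atrophy()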


So, what say ye?

Cheers,

Brad




Re: [agi] How do we know we don't know?

2008-07-28 Thread Brad Paulsen



Jim Bromer wrote:

On Mon, Jul 28, 2008 at 2:58 PM, Brad Paulsen [EMAIL PROTECTED] wrote:

All,
   What does fomlepung mean?

If your immediate (mental) response was "I don't know," it means you're not
a slang-slinging Norwegian.  But, how did your brain produce that feeling
of not knowing?  And, how did it produce that feeling so fast?

Your brain may have been able to do a massively-parallel search of your
entire memory and come up empty.  But, if it does this, it's subconscious.
 No one to whom I've presented the above question has reported a conscious
feeling of searching before having the conscious feeling of not knowing.

Brad


My guess is that initial recognition must be based on the surface
features of an input.  If this is true, then that could suggest that
our initial recognition reactions are stimulated by distinct
components (or distinct groupings of components) that are found in the
surface input data.
Jim Bromer


Hmmm.  That particular query may not have been the best example since, to a 
non-Norwegian speaker, the phonological surface feature of that statement alone 
could account for the feeling of not knowing.  In other words, the word 
"fomlepung" just doesn't sound right.  Good point.  But, that may only explain 
how we know we don't know strange-sounding words.


Let's try another example:

Which team won the 1924 World Series?

Cheers,

Brad




Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS

2008-07-22 Thread Brad Paulsen
 potential applications extend to the
roles of switchboard/receptionist and personal
assistant/companion (in a time-share mode).

Opinions ?

John L

http://www.ethicalvalues.com
http://www.ethicalvalues.info
http://www.emotionchip.net
http://www.global-solutions.org
http://www.world-peace.org
http://www.angelfire.com/rnb/fairhaven/schematics.html
http://www.angelfire.com/rnb/fairhaven/behaviorism.html
http://www.forebrain.org
http://www.charactervalues.com
http://www.charactervalues.org
http://www.charactervalues.net


- Original Message - From: Brad Paulsen [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, July 21, 2008 12:35 PM
Subject: Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS



Matt,

Never underestimate the industriousness of a PATENT TROLL.  He's 
already been granted a new patent for the same concept, except that 
this time (apparently; I haven't read the patent yet) it's for an "ethical 
chip": Patent #7236963 (awarded in 2007), the "emotion chip."  
Don't worry, it's as indefensible as the first one.  Same random 
buzzword generator, different title.


The problem is giving one of these morons a technology patent is like 
giving an ADHD kid a loaded gun.  You know they're just looking to use 
it as blackmail for some quick royalty fees.  The posting here was, no 
doubt, for intimidation purposes.  Of course, somebody ought to tell 
him the AGI crowd doesn't have much use for a solution to the 
"ethical artificial intelligence" problem (whatever the hell that 
is).  Indeed, even after he tells us what it is, it still doesn't make 
any sense.  And I quote from the (first) patent's Abstract: "A new 
model of motivational behavior, described as a ten-level 
metaperspectival hierarchy of..."


Say what?  There is no such word as metaperspectival.  Not in 
English, at least.  Yet, that's the word he uses to define his 
invention.  But, it gets better...


...ethical terms, serves as the foundation for an ethical simulation 
of artificial intelligence.  Well, I'm glad he intends to conduct his 
simulation ethically.  I think what he really meant, however, was “a 
simulation of ethical artificial intelligence.”  He does get half a 
grammar point for using the correct article (“an”) before “ethical.”  
You don't see that much these days. But, ah... we have another problem 
here. You see, artificial intelligence IS ALREADY a simulation.  In 
particular, it is a simulation of human intelligence. Hence the word 
artificial.  At least, that's the idea.  Does he really mean his 
patent applies to a simulation of a simulation?  Given that most 
existing AI software is computationally intensive and gasping for 
breath most of the time, that's got to be one slow-ass AI invention!


Again, from the Abstract of the first patent...

“This AI system is organized as a tandem, nested...”  Sigh.  Where I 
come from (planet earth), tandem and nested are mutually exclusive 
modifiers. It's either tandem (i.e., “along side of” or “behind each 
other”) or it's nested (i.e., “inside of”).  Can't be both at the same 
time.  Sorry.


Continuing, still in the Abstract...

“...overseen by a master control unit – expert system (coordinating 
the motivational interchanges over real time).”


OMG.  Let me see if I have this straight.  He has succeeded in 
patenting a simulation of a simulation with a “master control unit” 
that is, itself, another simulation.  The only thing that contraption 
will do in real time is sit there looking stupid.  That's presuming he 
could make it work which, as far as I can tell by scanning his patent, 
is right up there with the probability we'll solve the energy crises 
and the greenhouse effect using cold fusion.


I have a good dozen of these gems, most of them from the Abstract 
alone. It gets REALLY weird when you read the patent description where 
he talks about how this invention solves the affective language 
understanding problem heretofore unsolved.  News Alert: the entire 
NLP problem has yet to be solved (after 50 years of trying by some of the 
best minds in the world).


I have a PDF version of the newer patent (#7236963) which I will send 
(off-list) to anyone interested.  Be advised, it's 3MB+ in size. 
Alternatively, you can read about it (see a picture of Mr. LaMuth, and 
download the PDF) at www.emotionchip.net.  I also have a PDF version 
of the other, earlier, patent he holds (#6587846) – the supposed 
“recently issued” patent (actually, granted in 2003).  I will also 
send this off-list to anyone interested (it's only about 1.3MB).  
Frankly, the reason these PDFs are so large is that every page is a 
graphic image.  The documents contain no data stored as text (that I 
could find).  This is pretty typical with U.S. Patent Office 
documents.  Somebody there really likes (or liked) the TIFF image 
format.  Unfortunately, this makes the Search function in Acrobat (or 
FoxIt Reader) completely useless.


BTW, this guy apparently uses a dialup ISP.  Yeah.  State of the art

[agi] Pretty soon, there will be nowhere to hide!

2008-07-22 Thread Brad Paulsen

All,

DUTCH RESEARCHERS TAKE FLIGHT WITH THREE-GRAM 'DRAGONFLY'
http://www.physorg.com/news135936047.html

Some may consider this a bit off-topic, but it has an undeniable way cool 
factor.  Be sure to watch the video (YouTube link in the article).


Come to think of it, since everything has something to do with artificial 
general intelligence, doesn't that mean nothing is really off-topic here? ;-)


(dodging beer cans, running for cover...)

Cheers,

Brad




Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF ROBOTICS

2008-07-22 Thread Brad Paulsen
Ah... And just what the hell is your problem?  And I mean that in the 
friendliest way possible.  Just what the HELL is your problem?


Brad

Jim Bromer wrote:

Brad Paulsen Said:
Mr. LaMuth,
You are correct, sir. I should not have called you a patent troll. It 
was not
only harsh, it was inaccurate. I was under the impression the term 
applied to a

person or company holding a patent that said person or company was not also
developing, or imminently planning to develop, as a product. According to
Wikipedia, however, this is not the correct definition of the term
(http://en.wikipedia.org/wiki/Patent_troll). Please accept my sincerest 
apologies...
..That being said, I would be happy to look over your patent in detail 
and to give

you my written, expert opinion on (a) whether it can actually be defended
against challenges to its claim(s) and (b) whether it is possible, using 
current
hardware and software tools, to actually construct a working prototype 
from the
patent description. I have just two requirements: (1) You must agree to 
allow
me to publish my analysis, unmodified, on the Internet (I will gladly 
also post,
in the same location, any comments you may have regarding my analysis) 
and, (2)

you must agree to assist in this analysis by providing any additional
information I may need to complete the task. We can communicate for this
purpose via email (this will also provide a “log” of our collaboration).
I assure you I am completely sincere in making this offer. I have charged
clients up to $250 per hour for similar services. Since I feel truly 
remorseful
about incorrectly intimating you were a patent troll, you get this one 
on me.
;-) Let me know if you're interested. I have everything I need to get 
started

right away.
Cheers,
Brad

---
Dude...  Get a life.
I mean that in the friendliest way possible, but honestly.  Get a life.

Jim Bromer



* *From:* Brad Paulsen [EMAIL PROTECTED]
* *To:* agi@v2.listbox.com
* *Subject:* Re: [agi] US PATENT ISSUED for the TEN ETHICAL LAWS OF
  ROBOTICS
* *Date:* Tue, 22 Jul 2008 19:37:16 -0500


Mr. LaMuth,

You are correct, sir. I should not have called you a patent troll. It 
was not
only harsh, it was inaccurate. I was under the impression the term 
applied to a

person or company holding a patent that said person or company was not also
developing, or imminently planning to develop, as a product. According to
Wikipedia, however, this is not the correct definition of the term
(http://en.wikipedia.org/wiki/Patent_troll). Please accept my sincerest 
apologies.


You do, however, appear to be a non-practicing entity (NPE). Wikipedia
defines this term as "...a patent owner who does not manufacture or use the
patented invention" (same article). The patent you cited was first 
applied for
in 2000 (actually in 1999 if we count the provisional patent) and 
approved in
2003. This is hardly a “recently issued” patent as you claimed in your 
initial
post to this list. Since you did not mention any applicable existing 
products,

or products currently under development (or even claimed to have a
proof-of-concept prototype working and available for examination by 
interested

parties), I think you can see where a reasonable person would have cause to
believe your post may have had some other purpose.

As to your claim to have initially posted here looking for “...aid in
developing...” your invention, I must, also, assume you are being 
sincere. But,
there is nothing in your initial posting to this mailing list that 
supports this
assumption in any way, shape or form. There is no mention of having 
acquired
any funding, no mention of a job opening, nor is there mention of any 
intent on

your part to seek a development partner (individual or company).

That being said, I would be happy to look over your patent in detail and 
to give

you my written, expert opinion on (a) whether it can actually be defended
against challenges to its claim(s) and (b) whether it is possible, using 
current
hardware and software tools, to actually construct a working prototype 
from the
patent description. I have just two requirements: (1) You must agree to 
allow
me to publish my analysis, unmodified, on the Internet (I will gladly 
also post,
in the same location, any comments you may have regarding my analysis) 
and, (2)

you must agree to assist in this analysis by providing any additional
information I may need to complete the task. We can communicate for this
purpose via email (this will also provide a “log” of our collaboration).

I assure you I am completely sincere in making this offer. I have charged
clients up to $250 per hour for similar services. Since I feel truly 
remorseful
about incorrectly intimating you were a patent troll, you get this one 
on me.
;-) Let me know if you're interested. I have everything I need to get 
started

right away.

Cheers,

Brad

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-17 Thread Brad Paulsen

Mike,

If memory serves, this thread started out as a discussion about binding in an 
AGI context.  At some point, the terms forward-chaining and 
backward-chaining were brought up and, then, got used in a weird way (I 
thought) as the discussion turned to temporal dependencies and hierarchical 
logic constructs.  When it appeared no one else was going to clear up the 
ambiguities, I threw in my two cents.


I made a spectacularly good living in the late 1980's building expert system 
engines and knowledge engineering front-ends, so I think I know a thing or two 
about that narrow AI technology.  Funny thing, though, at that time, the trade 
press were saying expert systems were no longer real AI.  They worked so well 
at what they did, the mystery wore off.  Ah, the price of success in AI. ;-)


What makes the algorithms used in expert system engines less than suitable for 
AGI is their static (snapshot) nature and crispness.  AGI really needs some 
form of dynamic programming, probabilistic (or fuzzy) rules (such as those built 
using Bayes nets or hidden Markov models), and runtime feedback.


Thanks for the kind words.

Cheers,

Brad

Mike Tintner wrote:
Brad: By definition, an expert system rule base contains the total sum 
of the

knowledge of a human expert(s) in a particular domain at a given point in
time.  When you use it, that's what you expect to get.  You don't expect 
the

system to modify the rule base at runtime.  If everything you need isn't in
the rule base, you need to talk to the knowledge engineer. I don't know of
any expert system that adds rules to its rule base (i.e., becomes “more
expert”) at runtime.  I'm not saying necessarily that this couldn't be 
done,

but I've never seen it.

In which case - (thanks BTW for a v. helpful post) - are we talking 
entirely here about narrow AI? Sorry if I've missed this, but has anyone 
been discussing how to provide a flexible, evolving set of rules for 
behaviour? That's the crux of AGI, isn't it? Something at least as 
flexible as a country's Constitution and  Body of Laws. What ideas are 
on offer here?










Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-16 Thread Brad Paulsen



Richard Loosemore wrote:

Brad Paulsen wrote:
I've been following this thread pretty much since the beginning.  I 
hope I didn't miss anything subtle.  You'll let me know if I have, I'm 
sure. ;=)


It appears the need for temporal dependencies or different levels of 
reasoning has been conflated with the terms forward-chaining (FWC) 
and backward-chaining (BWC), which are typically used to describe 
different rule base evaluation algorithms used by expert systems.


The terms “forward-chaining” and “backward-chaining” when used to 
refer to reasoning strategies have absolutely nothing to do with 
temporal dependencies or levels of reasoning.  These two terms refer 
simply, and only, to the algorithms used to evaluate “if/then” rules 
in a rule base (RB).  In the FWC algorithm, the “if” part is evaluated 
and, if TRUE, the “then” part is added to the FWC engine's output.  In 
the BWC algorithm, the “then” part is evaluated and, if TRUE, the “if” 
part is added to the BWC engine's output.  It is rare, but some 
systems use both FWC and BWC.


That's it.  Period.  No other denotations or connotations apply.


Whooaa there.  Something not right here.

Backward chaining is about starting with a goal statement that you would 
like to prove, but at the beginning it is just a hypothesis.  In BWC you 
go about proving the statement by trying to find facts that might 
support it.  You would not start from the statement and then add 
knowledge to your knowledgebase that is consistent with it.




Richard,

I really don't know where you got the idea my descriptions or algorithm
added “...knowledge to your (the “royal” you, I presume) knowledgebase...”.
Maybe you misunderstood my use of the term “output.”  Another (perhaps
better) word for output would be “result” or “action.”  I've also heard
FWC/BWC engine output referred to as the “blackboard.”

By definition, an expert system rule base contains the total sum of the
knowledge of a human expert(s) in a particular domain at a given point in
time.  When you use it, that's what you expect to get.  You don't expect the
system to modify the rule base at runtime.  If everything you need isn't in
the rule base, you need to talk to the knowledge engineer. I don't know of
any expert system that adds rules to its rule base (i.e., becomes “more
expert”) at runtime.  I'm not saying necessarily that this couldn't be done,
but I've never seen it.

I have more to say about your counterexample below, but I don't want
this thread to devolve into a critique of 1980's classic AI models.

The main reason I posted to this thread was that I was seeing
inaccurate conclusions being drawn, based on a lack of understanding
of how the terms “backward” and “forward” chaining relate to temporal
dependencies and hierarchical logic constructs.  There is no relation.
Using forward chaining has nothing to do with “forward in time” or
“down a level in the hierarchy.”  Nor does backward chaining have
anything to do with “backward in time” or “up a level in the hierarchy.”
These terms describe particular search algorithms used in expert system
engines (since, at least, the mid-1980s).  Definitions vary in emphasis,
such as the three someone posted to this thread, but they all refer to
the same critters.

If one wishes to express temporal dependencies or hierarchical levels of
logic in these types of systems, one needs to encode these in the rules.
I believe I even gave an example of a rule base containing temporal and
hierarchical-conditioned rules.

So for example, if your goal is to prove that Socrates is mortal, then 
your above description of BWC would cause the following to occur:


1) Does any rule allow us to conclude that x is/is not mortal?

2) Answer: yes, the following rules allow us to do that:

If x is a plant, then x is mortal
If x is a rock, then x is not mortal
If x is a robot, then x is not mortal
If x lives in a post-singularity era, then x is not mortal
If x is a slug, then x is mortal
If x is a japanese beetle, then x is mortal
If x is a side of beef, then x is mortal
If x is a screwdriver, then x is not mortal
If x is a god, then x is not mortal
If x is a living creature, then x is mortal
If x is a goat, then x is mortal
If x is a parrot in a Dead Parrot Sketch, then x is mortal

3) Ask the knowledge base if Socrates is a plant, if Socrates is a rock, 
etc., etc., working through the above list.


3) [According to your version of BWC, if I understand you aright] Okay, 
if we cannot find any facts in the KB that say that Socrates is known to 
be one of these things, then add the first of these to the KB:


Socrates is a plant

[This is the bit that I question:  we don't do the opposite of forward 
chaining at this step].


4) Now repeat to find all rules that allow us to conclude that x is a 
plant.  For this set of "... then x is a plant" rules, go back and 
repeat the loop from step 2 onwards.  Then if this does not work, 



Well, you can imagine the rest
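
To make the contrast concrete, here is a tiny goal-directed sketch in Python 
of the backward-chaining style Richard describes: start from the goal, find 
rules whose conclusion matches it, and recurse on their conditions until known 
facts are reached; nothing is ever added to the knowledge base.  The rule set, 
the fact man(Socrates), and the single-variable matching are simplifications 
invented for the example.

    # Rules as (conclusion, condition) pairs; FACTS is what the KB knows.
    RULES = [
        ("mortal(x)",          "living_creature(x)"),
        ("mortal(x)",          "goat(x)"),
        ("living_creature(x)", "man(x)"),   # invented so the chain bottoms out
    ]
    FACTS = {"man(Socrates)"}

    def prove(goal, depth=0):
        # Work backward from the goal: find rules that conclude it and
        # recurse on their conditions.  Nothing is ever added to FACTS.
        if goal in FACTS:
            return True
        if depth > 10:                       # guard against runaway recursion
            return False
        arg = goal[goal.index("(") + 1:-1]   # the goal's single argument
        for conclusion, condition in RULES:
            if conclusion.replace("x", arg) == goal:
                if prove(condition.replace("x", arg), depth + 1):
                    return True
        return False

    print(prove("mortal(Socrates)"))   # True: man -> living creature -> mortal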

Re: FW: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-15 Thread Brad Paulsen
I've been following this thread pretty much since the beginning.  I hope I 
didn't miss anything subtle.  You'll let me know if I have, I'm sure. ;=)


It appears the need for temporal dependencies or different levels of reasoning 
has been conflated with the terms forward-chaining (FWC) and 
backward-chaining (BWC), which are typically used to describe different rule 
base evaluation algorithms used by expert systems.


The terms “forward-chaining” and “backward-chaining” when used to refer to 
reasoning strategies have absolutely nothing to do with temporal dependencies or 
levels of reasoning.  These two terms refer simply, and only, to the algorithms 
used to evaluate “if/then” rules in a rule base (RB).  In the FWC algorithm, the 
“if” part is evaluated and, if TRUE, the “then” part is added to the FWC 
engine's output.  In the BWC algorithm, the “then” part is evaluated and, if 
TRUE, the “if” part is added to the BWC engine's output.  It is rare, but some 
systems use both FWC and BWC.


That's it.  Period.  No other denotations or connotations apply.

To help remove any mystery that may still surround these concepts, here is an 
FWC algorithm in pseudo-code (WARNING: I'm glossing over quite a few details 
here – I'll be happy to answer questions on list or off):


   0. set loop index to 0
   1. got next rule?
 no: goto 5
   2. is rule FIRED?
 yes: goto 1
   3. is key equal to rule's antecedent?
 yes: add consequent to output, mark rule as FIRED,
  output is new key, goto 0
   4. goto 1
   5. more input data?
 yes: input data is new key, goto 0
   6. done.

To turn this into a BWC algorithm, we need only modify Step #3 to read as 
follows:

   3. is key equal to rule's consequent?
 yes: add antecedent to output, mark rule as FIRED,
 output is new key, goto 0

If you need to represent temporal dependencies in FWC/BWC systems, you have to 
express them using rules.  For example, if washer-a MUST be placed on bolt-b 
before nut-c can be screwed on, the rule base might look something like this:


   1. if installed(washer-x) then install(nut-z)
   2. if installed(bolt-y) then install(washer-x)
   3. if notInstalled(bolt-y) then install(bolt-y)

In this case, rule #1 won't get fired until rule #2 fires (nut-z can't get 
installed until washer-x has been installed).  Rule #2 won't get fired until 
rule #3 has fired (washer-x can't get installed until bolt-y has been 
installed). NUT-Z!  (Sorry, couldn't help it.)


To kick things off, we pass in “bolt-y” as the initial key.  This triggers rule 
#3, which will trigger rule #2, which will trigger rule #1. These temporal 
dependencies result in the following assembly sequence: install bolt-y, then 
install washer-x, and, finally, install nut-z.
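
For anyone who wants to run the example, here is one way the washer/bolt/nut 
rule base and the forward-chaining loop could look in Python.  It is a 
simplification of the pseudo-code above, not a transcription of it: instead of 
the literal key-matching step, it keeps a small working memory of installed 
parts and treats each fired install() action as asserting the corresponding 
installed() fact, which is how the "output is new key" step is being read here.

    # Rules as (condition, action) pairs, mirroring the rule base above.
    RULES = [
        ("installed(washer-x)",  "install(nut-z)"),
        ("installed(bolt-y)",    "install(washer-x)"),
        ("notInstalled(bolt-y)", "install(bolt-y)"),
    ]

    def holds(condition, installed):
        # Evaluate an installed()/notInstalled() test against working memory.
        if condition.startswith("notInstalled("):
            return condition[len("notInstalled("):-1] not in installed
        return condition[len("installed("):-1] in installed

    def forward_chain(rules):
        # Fire any unfired rule whose "if" part holds; treat each fired
        # install(x) action as asserting installed(x).  Repeat to a fixed point.
        installed, fired, plan = set(), set(), []
        progress = True
        while progress:
            progress = False
            for i, (condition, action) in enumerate(rules):
                if i not in fired and holds(condition, installed):
                    plan.append(action)
                    installed.add(action[len("install("):-1])
                    fired.add(i)
                    progress = True
        return plan

    print(forward_chain(RULES))
    # -> ['install(bolt-y)', 'install(washer-x)', 'install(nut-z)']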


A similar thing can be done to implement rule hierarchies.

   1. if levelIs(0) and installed(washer-x) then install(nut-z)
   2. if levelIs(0) and installed(nut-z) then goLevel(1)
   3. if levelIs(1) and notInstalled(gadget-xx) then install(gadget-xx)
   4. if levelIs(0) and installed(bolt-y) then install(washer-x)
   5. if levelIs(0) and notInstalled(bolt-y) then install(bolt-y)

Here rule #2 won't fire until rule #1 has fired.  Rule #1 won't fire unless rule 
#4 has fired.  Rule #4 won't fire until rule #5 has fired.  And, finally, Rule 
#3 won't fire until Rule #2 has fired. So, level 0 could represent the reasoning 
required before level 1 rules (rule #3 here) will be of any use. (That's not the 
case here, of course, just stretching my humble example as far as I can.)


Note, again, that the temporal and level references in the rules are NOT used by 
the FWC/BWC engine itself.  They probably will be used by the part of the program 
that does something with the engine's output (the install(), goLevel(), etc. 
functions). 
And, again, the results should be completely unaffected by the order in which 
the RB rules are evaluated or fired.


I hope this helps.

Cheers,

Brad

Richard Loosemore wrote:

Mike Tintner wrote:
A tangential comment here. Looking at this and other related threads I 
can't help thinking: jeez, here are you guys still endlessly arguing 
about the simplest of syllogisms, seemingly unable to progress beyond 
them. (Don't you ever have that feeling?) My impression is that the 
fault lies with logic itself - as soon as you start to apply logic to 
the real world, even only tangentially with talk of forward and 
backward or temporal considerations, you fall into a quagmire of 
ambiguity, and no one is really sure what they are talking about. Even 
the simplest if p then q logical proposition is actually infinitely 
ambiguous. No?  (Is there a Godel's Theorem of logic?)


Well, now you have me in a cleft stick, methinks.

I *hate* logic as a way to understand cognition, because I think it is a 
derivative process within a high-functional AGI system, not a foundation 
process that sits underneath everything else.


But, on the other hand, I do understand how it 

Re: [agi] Interesting article about EU's open source AGI robot program

2008-07-12 Thread Brad Paulsen
Below is a short list of robot kits available from some on-line resellers.  I 
have no connection to any of the companies or Web sites mentioned, nor have I 
used any of the kits listed here.



http://www.electronickits.com/robot/Bioloid.htm

$350 USD   - Beginner's Kit (4 servos - Dynamixel AX-12x)
$895 USD   - Comprehensive Kit (19 servos + 1 sensor module)
$3,500 USD - Expert's Kit (21 servos + 3 sensors + wireless com + wireless 
camera set)


All models are based on the Atmel Atmega128, an 8-bit RISC-architecture MCU with 
64KB of RAM and 16K of on-board programmable flash memory (plus board space for 
another 64KB of flash memory), running at clock speeds of up to 16MHz.  No 
soldering required.


All software is Windows-based (i.e. you need a Windows-based computer to use 
their robot programming software).  Once uploaded to the robot, the RS-232 cable 
(if not wireless) is disconnected and the robot moves around on its own.


The robot's brain is the Atmega128 of the AVR series from Atmel.  There is 
apparently an AVR-series version of the GNU GCC compiler that Bioloid recommends 
for programming the Atmega128 (a RISC chip).  The CM-5 (the robot's 
controller, of which the Atmega128 is a part) sports an RS232 interface.  This 
could be used to add a Gumstix (http://www.gumstix.com/products.html) or other 
such Linux-based, full-featured computer that would run the cognitive software, 
using its own 64MB of RAM (plus 16MB of flash memory), and talk back and forth 
with the Atmega128 (which would be used only for low-level control of the 
servos and sensors).


Gumstix are called that because that is their approximate size (a pack of 
chewing gum).  Their basix model motherboard sells for $129 USD and a tweener 
board for $20 USD (this board provides the RS232 I/O port).  It would be 
relatively easy to mount the Gumstix + tweener combo on the CM-5 and have it 
talk to the Atmega128 through the serial port.  Some inventive way to power the 
Gumstix board would have to be found (e.g., battery power, solar power).  The 
Gumstix CPU is based on an Intel RISC CPU.  Instead of the tweener, a board can 
be purchased with an SD slot, so flash memory could be increased to at least 
1 GB (indeed, SanDisk is currently selling 2 and 3 GB SD cards).
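
If anyone wants to experiment with that serial link, a bare-bones sketch of the 
Gumstix side is below (Python, using the pyserial package).  The device path, 
baud rate, and command bytes are placeholders only; the real CM-5 protocol 
would have to come from the Bioloid documentation.

    import serial   # pyserial package

    # Placeholder settings; check the CM-5/Bioloid docs for the real values.
    PORT, BAUD = "/dev/ttyS0", 57600

    def send_command(cmd_bytes, timeout=1.0):
        # Send one low-level command to the CM-5 and return any reply bytes.
        with serial.Serial(PORT, BAUD, timeout=timeout) as link:
            link.write(cmd_bytes)
            return link.read(64)            # arbitrary cap on reply length

    # Hypothetical usage: the Gumstix-side cognitive layer decides *what* to
    # do; the Atmega128 handles the servo-level *how*.
    print(send_command(b"MOVE 3 512\r\n"))  # made-up command format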




http://www.electronickits.com/robot/KHR-1.htm
KHR-1 Eco Robot High-Performance Humanoid Robot. 17 servos ($1,200 USD)
KHR-2HV Advanced Humanoid Biped Robot. 17 Servos ($959 + wireless controller for 
$199 more)


The KHR-2HV is the newer model.  Both are bipedal humanoid.  And, if you shop 
around, you will see that it's quite a good deal (compare with prices on Dr. 
Robot for the same functionality).  It's a kit, but it doesn't require soldering.


They have a pretty cool video of the unit doing back flips and cartwheels: 
http://www.electronickits.com/robot/khr1demo.wmv


The manuals are written in Janglish (Japanese English) and can be difficult to 
read.  Lots of pictures though.  I only mention this because they make a big 
deal out of the fact that their manuals and software are in English.  It's just 
really poor English.  Kinda like my California Spanglish. ;-)



If you don't need a humanoid bot (i.e., any autonomous bot will do)...

iRobot sells a non-humanoid development bot based on its Roomba vacuum cleaner 
robot.  No assembly required.  Comes with an Atmel-based MCU (apparently, a 
popular MCU for robotics work) that connects to your PC (for uploading programs) 
with a serial cable.  There's one demo video where the bot goes to a 
refrigerator (the portable kind, so it naturally sits near the floor), opens 
the door, and grabs a can of soda/beer (using an extra-cost robot arm 
extension), and takes the can back to its starting point (the human who sent 
it).  Just one problem: once the robot's got the can in its arm, it can't close 
the refrigerator door!  And, if you tilt the refrigerator so that the door 
would close automatically, it would also close on the robot while it was trying 
to fish the can out of the fridge.  Interesting demo, nonetheless!  And (without 
the robot arm), the price is very reasonable: $399 USD.  Includes the C/C++ 
compiler for your PC (Windows only again).



Cheers,

Brad


Bob Mottram wrote:

2008/7/11 Ed Porter [EMAIL PROTECTED]:

Interesting article about EU's open source AGI robot program at
http://www.eetimes.com/showArticle.jhtml?articleID=208808365
http://www.eetimes.com/showArticle.jhtml?articleID=208808365




The fact that iCub is open source is to be welcomed.  In the past I've
been critical of secret source robots which were just re-inventions
of the wheel.  However I think that open source robotics at the
present time is more of an aspiration than a reality.  For an open

[agi] More Brain Matter(s)...

2008-07-12 Thread Brad Paulsen

All,

RESEARCH IDENTIFIES BRAIN CELLS RELATED TO FEAR
http://www.physorg.com/news134969685.html

Cheers,

Brad




[agi] And, now, from the newsroom...

2008-07-09 Thread Brad Paulsen

WHY MUSICIANS MAKE US WEEP AND COMPUTERS DON'T
http://www.physorg.com/news134795617.html

DO WE THINK THAT MACHINES CAN THINK?
http://www.physorg.com/news134797615.html

Cheers,

Brad




[agi] Larrabee - Intel's Response to Nvidia's GPU Platform

2008-07-07 Thread Brad Paulsen

Hi Kids!

http://www.custompc.co.uk/news/602910/rumour-control-larrabee-based-on-32-original-pentium-cores.html#

Cheers,

Brad




Re: [agi] need some help with loopy Bayes net

2008-07-04 Thread Brad Paulsen

YKY,

I'm not certain this applies directly to your issue, but it's an interesting 
paper nonetheless: http://web.mit.edu/cocosci/Papers/nips00.ps.


Cheers,

Brad

YKY (Yan King Yin) wrote:

I'm considering nonmonotonic reasoning using Bayes net, and got stuck.

There is an example on p483 of J Pearl's 1988 book PRIIS:

Given:
birds can fly
penguins are birds
penguins cannot fly

The desiderata is to conclude that penguins are birds, but penguins
cannot fly.

Pearl translates the KB to:
   P(f | b) = high
   P(f | p) = low
   P(b | p) = high
where "high" and "low" mean arbitrarily close to 1 and 0, respectively.

If you draw this on paper you'll see a triangular loop.

Then Pearl continues to deduce:

Conditioning P(f | p) on both b and ~b,
P(f | p) = P(f | p,b) P(b | p) + P(f | p,~b) [1 - P(b | p)]
        >= P(f | p,b) P(b | p)

Thus
P(f | p,b) <= P(f | p) / P(b | p), which is close to 0.

Thus Pearl concludes that given penguin and bird, fly is not true.

But I found something wrong here.  It seems that the Bayes net is
loopy and we can conclude that fly given penguin and bird can be
either 0 or 1.  (The loop is somewhat symmetric).

Ben, do you have a similar problem dealing with nonmonotonicity using
probabilistic networks?

YKY
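
As a quick numeric sanity check of Pearl's bound (it says nothing about the 
loopiness worry), plugging illustrative stand-ins for "high" and "low" into the 
identity above forces P(f | p,b) to be small.  The 0.95/0.05 values are 
arbitrary substitutes for "arbitrarily close to 1 and 0":

    # Illustrative stand-ins for "high" and "low".
    P_f_given_p = 0.05    # P(f | p)  = low
    P_b_given_p = 0.95    # P(b | p)  = high

    # From P(f|p) = P(f|p,b) P(b|p) + P(f|p,~b) [1 - P(b|p)] >= P(f|p,b) P(b|p):
    upper_bound = P_f_given_p / P_b_given_p
    print("P(f | p,b) <=", round(upper_bound, 3))   # ~0.053, i.e. close to 0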








Re: Savants and user-interfaces [was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-01 Thread Brad Paulsen
I was nearly kicked out of school in seventh grade for coming up with a method 
of manipulating (multiplying, dividing) large numbers in my head using what I 
later learned was a shift-reduce method.  It was similar to this:


http://www.metacafe.com/watch/742717/human_calculator/

My seventh grade math teacher was so upset with me, he almost struck me 
(physically -- you could get away with that back then).  His reason?  Wasting 
valuable math class time.


The point is, you can train yourself to do this type of thing and look very 
savant-like.  The above link is just one in a series of videos where the teacher 
presents this system.  It takes practice, but not much more than learning the 
standard multiplication table.


Cheers,

Brad


Vladimir Nesov wrote:

Interesting: is it possible to train yourself to run a specially
designed nontrivial inference circuit based on low-base
transformations (e.g. binary)? You start by assigning unique symbols
to its nodes, train yourself to stably perform associations
implementing its junctions, and then assemble it all together by
training yourself to generate a problem as a temporal sequence
(request), so that it can be handled by the overall circuit, and
training to read out the answer and convert it to sequence of e.g.
base-10 digits or base-100 words keying pairs of digits (like in
mnemonic)? Has anyone heard of this attempted? At least the initial
steps look straightforward enough, what kind of obstacles this kind of
experiment can run into?

On Tue, Jul 1, 2008 at 7:43 AM, Linas Vepstas [EMAIL PROTECTED] wrote:

2008/6/30 Terren Suydam [EMAIL PROTECTED]:

savant

I've always theorized that savants can do what they do because
they've been able to get direct access to, and train, a fairly
small number of neurons in their brain, to accomplish highly
specialized (and thus rather unusual) calculations.

I'm thinking specifically of Ramanujan, the Indian mathematician.
He appears to have had access to a multiply-add type circuit
in his brain, and could do symbolic long division and
multiplication as a result -- I base this on studying some of
the things he came up with -- after a while, it seems to be
clear  how he came up with it (even if the feat is clearly not
reproducible).

In a sense, similar feats are possible by using a modern
computer with a good algebra system.  Simon Plouffe seems
to be a modern-day example of this: he noodles around with
his systems, and finds various interesting relationships that
would otherwise be obscure/unknown.  He does this without
any particularly deep or expansive training in math (whence
some of his friction with real academics).  If Simon could
get a computer-algebra chip implanted in his brain, (i.e.
with a very, very user-friendly user interface) so that he 
could work the algebra system just by thinking about it,
I bet his output would resemble that of Ramanujan a whole
lot more than it already does -- as it were, he's hobbled by
a crappy user interface.

Thus, let me theorize: by studying savants with MRI and
what-not, we may find a way of getting a much better
man-machine interface.  That is, currently, electrodes
are always implanted in motor neurons (or visual cortex, etc)
i.e. in places of the brain with very low levels of abstraction
from the real world.  It would be interesting to move up the 
level of abstraction, and I think that studying how savants 
access the magic circuits in their brains will open up a 
method for high-level interfaces to external computing
machinery.

--linas












[agi] You Say Po-tay-toe, I Sign Po-toe-tay...

2008-07-01 Thread Brad Paulsen

Greetings Fellow Knowledge Workers...

WHEN USING GESTURES, RULES OF GRAMMAR REMAIN THE SAME
http://www.physorg.com/news134065200.html

The link title is a bit misleading.  You'll see what I mean when you read it.

Enjoy,

Brad




Re: [agi] Approximations of Knowledge

2008-06-30 Thread Brad Paulsen

Richard,

Thanks for your comments.  Very interesting.  I'm looking forward to reading the 
introductory book by Waldrop.  Thanks again!


Cheers,

Brad


Richard Loosemore wrote:

Brad Paulsen wrote:

Richard,

I think I'll get the older Waldrop book now because I want to learn 
more about the ideas surrounding complexity (and, in particular, its 
association with, and differentiation from, chaos theory) as soon as 
possible.  But, I will definitely put an entry in my Google calendar 
to keep a lookout for the new book in 2009.


Thanks very much for the information!

Cheers,

Brad


You're welcome.  I hope it is not a disappointment:  the subject is a 
peculiar one, so I believe that it is better to start off with the kind 
of journalistic overview that Waldrop gives.  Let me know what your 
reaction is.


Here is the bottom line.  At the core of the complex systems idea there 
is something very significant and very powerful, but a lot of people 
have wanted it to lead to a new science just like some of the old 
science.  In other words, they have wanted there to be a new, fabulously 
powerful 'general theory of complexity' coming down the road.


However, no such theory is in sight, and there is one view of complexity 
(mine, for example) that says that there will probably never be such a 
theory.  If this were one of the traditional sciences, the absence of 
that kind of progress toward unification would be a sign of trouble - a 
sign that this was not really a new science after all.  Or, even worse, 
a sign that the original idea was bogus.  But I believe that is the 
wrong interpretation to put on it.  The complexity idea is very 
significant, but it is not a science by itself.


Having said all of that, there are many people who so much want there to 
be a science of complexity (enough of a science that there could be an 
institute dedicated to it, where people have real jobs working on 
'complex systems'), that they are prepared to do a lot of work that 
makes it look like something is happening.  So, you can find many 
abstract papers about complex dynamical systems, with plenty of 
mathematics in them.  But as far as I can see, most of that stuff is 
kind of peripheral ... it is something to do to justify a research program.


At the end of the day, I think that the *core* complex systems idea will 
outlast all this other stuff, but it will become famous for its impact 
on other sciences, rather than for the specific theories of 'complexity' 
that it generates.



We will see.



Richard Loosemore







Richard Loosemore wrote:

Brad Paulsen wrote:

Or, maybe...

Complexity: Life at the Edge of Chaos
Roger Lewin, 2000 $10.88 (new, paperback) from Amazon (no used copies)
Complexity: Life at the Edge of Chaos by Roger Lewin (Paperback - 
Feb 15, 2000)


Nope, not that one either!

Darn.

I think it may have been Simplexity (Kluger), but I am not sure.

Interestingly enough, Melanie Mitchell has a book due out in 2009 
called The Core Ideas of the Sciences of Complexity.  Interesting 
title, given my thoughts in the last post.




Richard Loosemore


















Re: [agi] Approximations of Knowledge

2008-06-28 Thread Brad Paulsen

Richard,

I presume this is the Waldrop Complexity book to which you referred:

Complexity: The Emerging Science at the Edge of Order and Chaos
M. Mitchell Waldrop, 1992, $10.20 (new, paperback) from Amazon (used
copies also available)
http://www.amazon.com/Complexity-Emerging-Science-Order-Chaos/dp/0671872346/ref=pd_bbs_sr_1?ie=UTF8s=booksqid=1214641304sr=1-1

Is this the newer book you had in mind?

At Home in the Universe: The Search for the Laws of Self-Organization
and Complexity
Stuart Kauffman (The Santa Fe Institute), 1995, $18.95 (new, paperback) 
from Amazon (used copies

also available)
http://www.amazon.com/At-Home-Universe-Self-Organization-Complexity/dp/0195111303/ref=reg_hu-wl_mrai-recs

Cheers,

Brad

Richard Loosemore wrote:

Jim Bromer wrote:



From: Richard Loosemore Jim,

I'm sorry:  I cannot make any sense of what you say here.

I don't think you are understanding the technicalities of the 
argument I am presenting, because your very first sentence... 
But we can invent a 'mathematics' or a program that can is just
 completely false.  In a complex system it is not possible to 
used analytic mathematics to predict the global behavior of the 
system given only the rules that determine the local mechanisms. 
That is the very definition of a complex system (note:  this is a

 complex system in the technical sense of that term, which does
 not mean a complicated system in ordinary language). Richard 
Loosemore


Well, let's forget about your theory for a second.  I think that an 
advanced AI program is going to have to be able to deal with 
complexity and that your analysis is certainly interesting and 
illuminating.


But I want to make sure that I understand what you mean here. First
 of all, your statement, it is not possible to use analytic 
mathematics to predict the global behavior of the system given only
 the rules that determine the local mechanisms. By analytic 
mathematics are you referring to numerical analysis, which the 
article in Wikipedia, 
http://en.wikipedia.org/wiki/Numerical_analysis describes as the 
study of algorithms for the problems of continuous mathematics (as
 distinguished from discrete mathematics).  Because if you are 
saying that the study of continuous mathematics -as distinguished 
from discrete mathematics- cannot be used to represent discrete 
system complexity, then that is kind of a non-starter. It's a 
cop-out by initial definition. I am primarily interested in 
discrete programming (I am, of course, also interested in 
continuous systems as well), but in this discussion I was 
expressing my interest in measures that can be taken to simplify 
computational complexity.


Again, Wikipedia gives a slightly more complex definition of 
complexity than you do.  http://en.wikipedia.org/wiki/Complexity I
 am not saying that your particular definition of complexity is 
wrong, I only want to make sure that I understand what it is that 
you are getting at.


The part of your sentence that read, ...given only the rules that
 determine the local mechanisms, sounds like it might well apply
to the kind of system that I think would be necessary for a better
AI program, but it is not necessarily true of all kinds of 
demonstrations of complexity (as I understand them).  For example,
 consider a program that demonstrates the emergence of complex 
behaviors from collections of objects that obey simple rules that 
govern their interactions.  One can use a variety of arbitrary 
settings for the initial state of the program to examine how 
different complex behaviors may emerge in different environments. 
(I am hoping to try something like this when I buy my next computer
 with a great graphics chip in it.)  This means that complexity 
does not have to be represented only in states that had been 
previously generated by the system, as can be obviously seen in the

 fact that initial states are a necessity of such systems.

I think I get what you are saying about complexity in AI and the 
problems of research into AI that could be caused if complexity is

 the reality of advanced AI programming.

But if you are throwing technical arguments at me, some of which 
are trivial from my perspective like the definition of, continuous
 mathematics (as distinguished from discrete mathematics), then 
all I can do is wonder why.


Jim,

With the greatest of respect, this is a topic that will require some
 extensive background reading on your part, because the 
misunderstandings in your above text are too deep for me to remedy in

 the scope of one or two list postings.  For example, my reference to
 analytic mathematics has nothing at all to do with the wikipedia 
entry you found, alas.  The word has many uses, and the one I am 
employing is meant to point up a distinction between classical 
mathematics that allows equations to be solved algebraically, and 
experimental mathematics that solves systems by simulation.  Analytic
 means by analysis in this context...but this is a very 

Re: [agi] Approximations of Knowledge

2008-06-28 Thread Brad Paulsen

Or, maybe...

Complexity: Life at the Edge of Chaos
Roger Lewin, 2000 $10.88 (new, paperback) from Amazon (no used copies)
Complexity: Life at the Edge of Chaos by Roger Lewin (Paperback - Feb 15, 2000)

Brad

Richard Loosemore wrote:

Jim Bromer wrote:



From: Richard Loosemore Jim,

I'm sorry:  I cannot make any sense of what you say here.

I don't think you are understanding the technicalities of the 
argument I am presenting, because your very first sentence ("But we 
can invent a 'mathematics' or a program that can...") is just completely 
false.  In a complex system it is not possible to use analytic 
mathematics to predict the global behavior of the system given only 
the rules that determine the local mechanisms.  That is the very 
definition of a complex system (note:  this is a complex system in 
the technical sense of that term, which does not mean a complicated 
system in ordinary language).

Richard Loosemore


Well, let's forget about your theory for a second.  I think that an 
advanced AI program is going to have to be able to deal with 
complexity and that your analysis is certainly interesting and 
illuminating.


But I want to make sure that I understand what you mean here.  First 
of all, consider your statement that "it is not possible to use analytic 
mathematics to predict the global behavior of the system given only 
the rules that determine the local mechanisms."
By "analytic mathematics" are you referring to numerical analysis, which 
the article in Wikipedia (http://en.wikipedia.org/wiki/Numerical_analysis)
describes as "the study of algorithms for the problems of continuous 
mathematics (as distinguished from discrete mathematics)"?  Because if 
you are saying that the study of continuous mathematics (as 
distinguished from discrete mathematics) cannot be used to represent 
discrete system complexity, then that is kind of a non-starter. It's a 
cop-out by initial definition. I am primarily interested in discrete 
programming (I am, of course, also interested in continuous systems), 
but in this discussion I was expressing my interest in measures 
that can be taken to simplify computational complexity.


Again, Wikipedia gives a slightly more complex definition of 
complexity than you do (http://en.wikipedia.org/wiki/Complexity).
I am not saying that your particular definition of complexity is 
wrong; I only want to make sure that I understand what it is that you 
are getting at.


The part of your sentence that read "...given only the rules that 
determine the local mechanisms" sounds like it might well apply to 
the kind of system that I think would be necessary for a better AI 
program, but it is not necessarily true of all kinds of demonstrations 
of complexity (as I understand them).  For example, consider a program 
that demonstrates the emergence of complex behaviors from collections 
of objects that obey simple rules that govern their interactions.  One 
can use a variety of arbitrary settings for the initial state of the 
program to examine how different complex behaviors may emerge in 
different environments.  (I am hoping to try something like this when 
I buy my next computer with a great graphics chip in it.)  This means 
that complexity does not have to be represented only in states that 
had been previously generated by the system, as can be seen in the 
fact that initial states are a necessity of such systems.
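
To make this concrete, here is a minimal Python sketch of the kind of
demonstration described above: Conway's Game of Life on a small toroidal
grid, with an arbitrary random initial state.  The grid size, fill rate and
seed are purely illustrative choices, not anyone's actual program.

import random

# Conway's Game of Life: every cell obeys the same simple local rule, yet
# the global behavior (gliders, oscillators, die-outs) depends on the
# arbitrary initial state and is discovered only by running the system.
SIZE = 32

def random_state(fill=0.3, seed=None):
    rng = random.Random(seed)
    return {(x, y) for x in range(SIZE) for y in range(SIZE)
            if rng.random() < fill}

def neighbors(cell):
    x, y = cell
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def step(live):
    counts = {}
    for cell in live:
        for n in neighbors(cell):
            counts[n] = counts.get(n, 0) + 1
    # A cell is alive next generation if it has exactly 3 live neighbors,
    # or has 2 live neighbors and is currently alive.
    return {c for c, k in counts.items()
            if k == 3 or (k == 2 and c in live)}

state = random_state(seed=1)
for generation in range(100):
    state = step(state)
print("live cells after 100 generations:", len(state))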


I think I get what you are saying about complexity in AI and the 
problems of research into AI that could be caused if complexity is the 
reality of advanced AI programming.


But if you are throwing technical arguments at me, some of which are 
trivial from my perspective, like the definition of continuous 
mathematics (as distinguished from discrete mathematics), then all I 
can do is wonder why.


Jim,

With the greatest of respect, this is a topic that will require some 
extensive background reading on your part, because the misunderstandings 
in your above text are too deep for me to remedy in the scope of one or 
two list postings.  For example, my reference to analytic mathematics 
has nothing at all to do with the Wikipedia entry you found, alas.  The 
word has many uses, and the one I am employing is meant to point up a 
distinction between classical mathematics that allows equations to be 
solved algebraically, and experimental mathematics that solves systems 
by simulation.  "Analytic" means "by analysis" in this context...but this 
is a very abstract sense of the word that I am talking about here, and 
it is very hard to convey.
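
To make the distinction concrete, consider the logistic map
x(n+1) = r * x(n) * (1 - x(n)).  The short Python sketch below (an
illustration only, not Richard's own example) contrasts a case where
classical, algebraic mathematics gives a closed-form answer (the special
value r = 4) with the generic case, where the only option is experimental
mathematics, i.e. simulation.

import math

def logistic_simulated(r, x0, n):
    """Iterate x <- r * x * (1 - x); works for any r."""
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
    return x

def logistic_closed_form_r4(x0, n):
    """Analytic solution, available only for the special case r = 4:
    x_n = sin^2(2^n * arcsin(sqrt(x0)))."""
    theta = math.asin(math.sqrt(x0))
    return math.sin((2 ** n) * theta) ** 2

# For r = 4 the two agree (up to floating-point error)...
print(logistic_simulated(4.0, 0.2, 10), logistic_closed_form_r4(0.2, 10))

# ...but for a generic r, e.g. r = 3.7, no such formula is known, and the
# long-run (global) behavior has to be found by iterating, i.e. simulation.
print(logistic_simulated(3.7, 0.2, 1000))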


This topic is all about 'complex systems', which is a technical term that 
does not mean systems that are complicated (in the everyday sense of 
'complicated').  To get up to speed on this, I recommend a popular 
science book called "Complexity" by Waldrop, although there was also a 
more recent book whose name I forget, which may be better.  You could 
also read Wolfram's "A New Kind of Science", but that is huge and does 
not come 

Re: [agi] Approximations of Knowledge

2008-06-28 Thread Brad Paulsen

Richard,

I think I'll get the older Waldrop book now because I want to learn more about 
the ideas surrounding complexity (and, in particular, its association with, and 
differentiation from, chaos theory) as soon as possible.  But, I will definitely 
put an entry in my Google calendar to keep a lookout for the new book in 2009.


Thanks very much for the information!

Cheers,

Brad


Richard Loosemore wrote:

Brad Paulsen wrote:

Or, maybe...

Complexity: Life at the Edge of Chaos
Roger Lewin, 2000 $10.88 (new, paperback) from Amazon (no used copies)
Complexity: Life at the Edge of Chaos by Roger Lewin (Paperback - Feb 
15, 2000)


Nope, not that one either!

Darn.

I think it may have been Simplexity (Kluger), but I am not sure.

Interestingly enough, Melanie Mitchell has a book due out in 2009 called 
The Core Ideas of the Sciences of Complexity.  Interesting title, 
given my thoughts in the last post.




Richard Loosemore




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-28 Thread Brad Paulsen

Richard and Ed,

"Insanity is doing the same thing over and over again and expecting different 
results." - Albert Einstein


"Prelude to insanity: unintentionally doing the same thing over and over again 
and getting the same results." - Me


Cheers,

Brad

Richard Loosemore wrote:

Ed Porter wrote:

I do not claim the software architecture for AGI has been totally solved.
But I believe that enough good AGI approaches exist (and I think Novamente
is one) that when powerful hardware is available to more people we will
be able to relatively quickly get systems up and running that demonstrate
the parts of the problems we have solved.  And that will provide valuable
insights and test beds for solving the parts of the problem that we have
not yet solved.


You are not getting my point.  What you just said was EXACTLY what was 
said in 1970, 1971, 1972, 1973 ... 2003, 2004, 2005, 2006, 2007 ...


And every time it was said, the same justification for the claim was 
given:  "I just have this belief that it will work."


Plus ça change, plus c'est la même fubar.






With regard to your statement "the problem is understanding HOW TO DO IT"
---
WE DO UNDERSTAND HOW TO DO IT --- NOT ALL OF IT --- AND NOT HOW TO MAKE IT
ALL WORK TOGETHER WELL AUTOMATICALLY --- BUT --- GIVEN THE TYPE OF HARDWARE
EXPECTED TO COST LESS THAN $3M IN 6 YEARS --- WE KNOW HOW TO BUILD MUCH OF
IT --- ENOUGH THAT WE COULD PROVIDE EXTREMELY VALUABLE COMPUTERS WITH OUR
CURRENT UNDERSTANDINGS.


You do *not* understand how to do it.  But I have to say that statements 
like your paragraph above are actually very good for my health, because 
their humor content is right up there in the top ten, along with Eddie 
Izzard's Death Star Canteen sketch and Stephen Colbert at the 2006 White 
House Correspondents' Association Dinner.


So long as the general response to the complex systems problem is not 
"This could be a serious issue, let's put our heads together to 
investigate it," but "My gut feeling is that this is just not going to 
be a problem," or "Quit rocking the boat!", you can bet that nobody 
really wants to ask any questions about whether the approaches are 
correct, they just want to be left alone to get on with their 
approaches.  History, I think, will have some interesting things to say 
about all this.


Good luck anyway.



Richard Loosemore




[agi] As we inch closer to The Singularity...

2008-06-24 Thread Brad Paulsen

Hey Gang...

RESEARCHERS DEVELOP NEURAL IMPLANT THAT LEARNS WITH THE BRAIN
http://www.physorg.com/news133535377.html

I wonder what *that* software looks like!

Cheers,

Brad





[agi] Have you hugged a cephalopod today?

2008-06-17 Thread Brad Paulsen

From the "More stuff we already know" department...

NEW RESEARCH ON OCTOPUSES SHEDS LIGHT ON MEMORY
http://www.physorg.com/news132920831.html

Cheers,

Brad




[agi] Learning without Understanding?

2008-06-16 Thread Brad Paulsen

Hear Ye, Hear Ye...

CHILDREN LEARN SMART BEHAVIORS WITHOUT KNOWING WHAT THEY KNOW
http://www.physorg.com/news132839991.html

Cheers,

Brad

And, remember: think twice, code once!




[agi] RoboSex?

2008-06-15 Thread Brad Paulsen

Fellow AGIers,

Every once in a while this list could use a bit of intentional humor. 
Unfortunately, I think these guys are serious:


IN 2050, YOUR LOVER MAY BE A ... ROBOT
http://www.physorg.com/news132727834.html


Pacis progredior,

Brad




Re: [agi] IBM, Los Alamos scientists claim fastest computer

2008-06-12 Thread Brad Paulsen
If anyone is interested, I have some additional information on the C870 
NVIDIA Tesla card.  I'll be happy to send it to you off-list.  Just 
contact me directly.


Cheers,

Brad




[agi] How the brain separates audio signals from noise

2008-06-11 Thread Brad Paulsen

Hi Kids!

Article summary:
http://www.physorg.com/news132290651.html

Article text:
http://biology.plosjournals.org/perlserv/?request=get-document&doi=10.1371/journal.pbio.0060138&ct=1

Enjoy!




[agi] Forget talk to the animals. Talk directly to their cells.

2008-06-06 Thread Brad Paulsen

All,

Not specifically AGI-related, but too interesting not to pass along, so:

SWEET NOTHINGS: ARTIFICIAL VESICLES AND BACTERIAL CELLS COMMUNICATE BY WAY OF SUGAR COMPONENTS 
http://www.physorg.com/news131883741.html 



Cheers,

Brad





Re: Are rocks conscious? (was RE: [agi] Did this message get completely lost?)

2008-06-04 Thread Brad Paulsen



J Storrs Hall, PhD wrote:
Actually, the nuclear spins in the rock encode a single state of an ongoing 
computation (which is conscious). Successive states occur in the rock's 
counterparts in adjacent branes of the metauniverse, so that the rock is 
conscious not of unfolding time, as we see it, but of a journey across 
probability space.


What is the rock thinking?

 T h i s   i s   w a a a y   o f f   t o p i c . . . 

Josh

  
I just love it when you talk string theory.  Somehow, though, I didn't 
picture you as a 0.0005 probability sort of guy. :-)


Cheers,

Brad

P.S.  You're right about this thread wandering way off topic, though.  
We should probably get back to the serious business of comparing PLN to FOL.

On Tuesday 03 June 2008 05:05:05 pm, Matt Mahoney wrote:
  

--- On Tue, 6/3/08, John G. Rose [EMAIL PROTECTED] wrote:


Actually on further thought about this conscious rock, I
want to take that particular rock and put it through some
further tests to absolutely verify with a high degree of
confidence that there may not be some trace amount of
consciousness lurking inside. So the tests that I would
conduct are - 


Verify the rock is in a solid state at close to absolute
zero but not at absolute zero.
The rock is not in the presence of a high frequency
electromagnetic field.
The rock is not in the presence of high frequency physical
vibrational interactions.
The rock is not in the presence of sonic vibrations.
The rock is not in the presence of subatomic particle
bombardment, radiation, or being hit by a microscopic black
hole.
The rock is not made of nano-robotic material.
The rock is not an advanced, non-human derived, computer.
The rock contains minimal metal content.
The rock does not contain holograms.
The rock does not contain electrostatic echoes.
The rock is a solid, spherical structure, with no worm
holes :)
The rock...

You see what I'm getting at. In order to be 100% sure.
Any failed tests of the above would require further
scientific analysis and investigation to achieve proper
non-conscious certification.
  
You forgot a test. The positions of the atoms in the rock encode 10^25 bits 
of information representing the mental states of 10^10 human brains at 10^15 
bits each. The data is encrypted with a 1000 bit key, so it appears 
statistically random. How would you prove otherwise?
  

-- Matt Mahoney, [EMAIL PROTECTED]












Re: [agi] Did this message get completely lost?

2008-06-04 Thread Brad Paulsen



John G. Rose wrote:

From: Brad Paulsen [mailto:[EMAIL PROTECTED]

Not exactly (to start with, you can *never* be 100% sure, try though you
might  :-) ).  Take all of the investigations into rockness since the
dawn of homo sapiens and we still only have a 0.9995 probability that
rocks are not conscious.  Everything is belief.  Even hard science.
That was the nub of Hume's intellectual contribution.  It doesn't mean
we can't be sure enough.  It just means that we can never be 100% sure
of *anything*.



We can be 100% sure that we can never be 100% sure of *anything*.

Including that statement!  I believe (with 0.51 probability) my point has been made.  :-) 

Of course, there's belief and then there's BELIEF.  To me (and to Hume),
it's not a difference in kind.  It's just that the leap from
observational evidence to empirical (natural) belief is a helluvalot
shorter than is the leap from observational evidence to supernatural
belief.




I agree that it is for us in the modern day technological society. But it may 
not have been always the case. We have been grounded by reason. Before reason 
it may have been largely supernatural. That's why sometimes I think AGI's could 
start off with little knowledge and lots of supernatural, just to make it 
easier for it to attach properties to the void. It starts off knowing there is 
some god bringing it into existence but eventually it figures out that the god 
is just some geek software engineer and then it becomes atheist real quick heheh
  
I don't entirely disagree with you.  I don't entirely agree either.  
But, like the "is a rock conscious" thread, if we want to continue this 
one we should either take it off-list or move it to the Singularity 
Outreach list.  Don't ya think? :-\

John





Re: Are rocks conscious? (was RE: [agi] Did this message get completely lost?)

2008-06-04 Thread Brad Paulsen
But, without us droids, how would you verify/validate your 
consciousness?  And, think about what you'd be taking over.  As Sting 
says, "What good's a world that's all used up?"  Rhetorical questions, 
both.  When I start quoting Sting lyrics, I *know* it's time for me to 
get off a thread.  Ta!


Cheers,

Brad

John G. Rose wrote:

From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]

Actually, the nuclear spins in the rock encode a single state of an
ongoing
computation (which is conscious). Successive states occur in the rock's
counterparts in adjacent branes of the metauniverse, so that the rock is
conscious not of unfolding time, as we see it, but of a journey across
probability space.

What is the rock thinking?

 T h i s   i s   w a a a y   o f f   t o p i c . . . 




I never would have thought of that. To come up with something as good I
would have to explore consciousness and anti-consciousness,
potential-consciousness, stuff like that.

But kicking around these ideas really shouldn't hurt. You could build AGI
and make the darn thing appear conscious. But what fun is that if you know
it's fake? Or are we all fake? Are we all just automatons or is it like -
I'm the only one conscious and all the rest of you are simulations in MY
world space, p-zombies, bots, you're all fake, so if I want to take over the
world and expunge all you droids, there are no religious repercussions, as
long as I could pull it off without being terminated.

John





Re: [agi] Did this message get completely lost?

2008-06-03 Thread Brad Paulsen



John G. Rose wrote:

You see what I'm getting at. In order to be 100% sure. Any failed tests of the 
above would require further scientific analysis and investigation to achieve 
proper non-conscious certification.



Not exactly (to start with, you can *never* be 100% sure, try though you 
might  :-) ).  Take all of the investigations into rockness since the 
dawn of homo sapiens and we still only have a 0.9995 probability that 
rocks are not conscious.  Everything is belief.  Even hard science.  
That was the nub of Hume's intellectual contribution.  It doesn't mean 
we can't be sure enough.  It just means that we can never be 100% sure 
of *anything*.


Of course, there's belief and then there's BELIEF.  To me (and to Hume), 
it's not a difference in kind.  It's just that the leap from 
observational evidence to empirical (natural) belief is a helluvalot 
shorter than is the leap from observational evidence to supernatural belief.


Cheers,

Brad

Today's words-to-live-by: Everything in moderation.  Including 
moderation. ;-)


P.S.  Hmmm.  The Thunderbird e-mail client spell checker recognizes the 
word "homo" but not the word "sapiens".  It gets better.  Here's 
WordWeb's definition of "sapiens": "Of or relating to or characteristic of 
Homo sapiens."  Oh.  Now I get it!  NOT.  Sigh...  Isn't there some sort 
of dictionary-writing rule that says you're not allowed to use the word 
you're defining in the definition of that word?  I smell a project!  
Let's build a dictionary that contains nothing but circular 
definitions.  For example: definition - "Of or relating to or 
characteristic of defining something."

From: Brad Paulsen [mailto:[EMAIL PROTECTED]

John wrote:
A rock is either conscious or not conscious.

Excluding the middle, are we?




Conscious, not conscious or null?

  

I don't want to put words into Ben & company's mouths, but I think what
they are trying to do with PLN is to implement a system that expressly
*includes the middle*.  In theory (but not necessarily in practice) the
clue to creating the first intelligent machine may be to *exclude the
ends*!  Scottish philosopher and economist David Hume argued way back in
the 18th century that all knowledge is based on past observation.
Because of this, we can never be 100% certain of *anything*.  While Hume
didn't put it in such terms, as I understand his thinking, it comes down
to "*everything* is a probability" or "all knowledge is fuzzy
knowledge".  There is no such thing as 0.  There is no such thing as 1.

For example, let's say you are sitting at a table holding a pencil in
your hand.  In the past, every time you let go of the pencil in this
situation (or a similar situation), it dropped to the table.  The cause
and effect for this behavior is so well documented that we call the
underlying principle the *law* of gravity.  But, even so, can you say
with probability 1.0 that the *next* time you let go of that pencil in a
similar situation it will, in fact, drop to the table?  Hume said
you can't.  As those ads for stock brokerage firms on TV always say in
their disclaimers, "Past performance is no guarantee of future
performance."

Of course, we are constantly predicting the future based on our
knowledge of past events (or others' knowledge of past events which we
have learned and believe to be correct).  I will, for instance, give you
very favorable odds if you are willing to bet against the pencil hitting
the table when dropped.  Unless you enjoy living life on the edge, your
predictions won't stray very far from past experiences (or learned
knowledge about past experiences).  But, in the end, it's all
probability and fuzziness.  It is all belief, baby.



Yes, Hume and Kant actually were making contributions to AGI but didn't know 
it. Although I suppose at the time their imaginations were rich and varied 
enough that those possibilities were not totally unthinkable.

  

Regarding the issue of consciousness and the rock, there are several
possible scenarios to consider here.  First, the rock may be conscious
but only in a way that can be understood by other rocks.  The rock may
be conscious but it is unable to communicate with humans (and vice
versa) so we assume it's not conscious.  The rock is truly conscious and
it thinks we're not conscious so it pretends to be just like it thinks
we are and, as a result, we're tricked into thinking it's not
conscious.  Finally, if a rock falls in the forest, does it make a
sound?  Consciousness may require at least two actors.  Think about it.
What good would consciousness do you if there was no one else around to
appreciate it?  Would you, in that case, in fact be conscious?

Most humans will treat a rock as if it were not conscious because, in
the past, that assumption has proven to be efficacious for predictions
involving rocks.  I know of no instance where someone was able to talk a
rock

Re: [agi] Ideological Interactions Need to be Studied

2008-06-02 Thread Brad Paulsen

Richard Loosemore wrote:
Anyone at the time who knew what Isaac Newton was trying to do could 
have dismissed his efforts and said "Idiot!  Planetary motion is simple. 
Ptolemy explained it in a simple way.  I use a simplicity-preferring 
prior, so epicycles are good enough for me."


Which is why the admonition "You don't want to reinvent the wheel!" is 
such bad advice.  Reinventing the wheel is a prerequisite to technological 
progress.  If somebody hadn't reinvented the wheel, we'd be buying front 
and rear logs for our cars instead of front and rear tires.


Cheers,

Brad

Richard Loosemore wrote:

Vladimir Nesov wrote:

On Mon, Jun 2, 2008 at 6:27 AM, Mark Waser [EMAIL PROTECTED] wrote:

But, why SHOULD there be a *simple* model that produces the same
capabilities?

What if the brain truly is a conglomeration of many complex interacting
pieces?



Because unless I know otherwise, I use simplicity-preferring prior.
Biological complexity is hardly evidence that the refined
computational model is also complex. It is very easy, as J. Rogers
noted, to scramble a simple object so that you won't ever guess that
it's simple based just on how it looks on the surface.



This misses the point I think.

It all has to do with the mistake of *imposing* simplicity on 
something by making a black-box model of it.


For example, the Ptolemy model of planetary motion imposed a 'simple' 
model of the solar system in which everything could be explained by a 
set of nested epicycles.  There would have been no need to use any 
other model because in principle those epicycles could have been 
augmented to infinite depth, to yield as good a fit to the data as you 
wanted.
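
The "fit anything with enough epicycles" point can be seen in a toy sketch
(an illustration only, not Ptolemy's actual construction; it assumes numpy
is installed): adding epicycles is mathematically just adding sine/cosine
terms to a least-squares fit, and the error can be driven as low as you
like without gaining any explanatory power.

import numpy as np

# "Observed" angular position of a planet over time; any smooth periodic
# signal will do for the illustration.
t = np.linspace(0.0, 2.0 * np.pi, 400)
observed = np.sign(np.sin(t)) * np.abs(np.sin(t)) ** 0.3

def epicycle_fit(signal, n_terms):
    """Least-squares fit with n_terms sine/cosine pairs, i.e. n_terms
    nested 'epicycles'.  More terms always fit at least as well."""
    columns = [np.ones_like(t)]
    for k in range(1, n_terms + 1):
        columns.append(np.cos(k * t))
        columns.append(np.sin(k * t))
    design = np.column_stack(columns)
    coeffs, *_ = np.linalg.lstsq(design, signal, rcond=None)
    residual = signal - design @ coeffs
    return np.sqrt(np.mean(residual ** 2))

for n in (1, 3, 10, 30):
    print(n, "epicycles -> RMS error", epicycle_fit(observed, n))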


That is a black-box model of the solar system because it stuffs all 
the real complexity inside a black box and then models the black box 
with a simplistic formalism.  If all you care about is getting a 
precise model of planetary movement across the sky, this would be (and 
was!) the simplest model anyone could ask for.


But it was wrong.  It was wrong because it obscured the real 
situation.  The real situation could not be understood without 
inventing a whole new type of mathematics (calculus) and discovering a 
new law of nature (universal gravitation).  By any measure, that 
combination of calculus and gravitation was a more complicated 
explanation.


Anyone at the time who knew what Isaac Newton was trying to do could 
have dismissed his efforts and said "Idiot!  Planetary motion is 
simple.  Ptolemy explained it in a simple way.  I use a 
simplicity-preferring prior, so epicycles are good enough for me."


And if that same person had insisted that there SHOULD be a simple model 
of planetary motion (where "simple" meant "as simple as epicycles"), he 
would have been insisting that an explanation as complicated as 
calculus and gravitation was in some sense bad or unparsimonious.


In the long run, of course, gravity plus calculus was perceived as 
being 'simple' and elegant in the extreme.  But that is not the point, 
because before it was discovered it could have been criticised as 
being far, far more complicated than the epicycle explanation.


What Rogers was suggesting was that a 'simple' black-box explanation 
was already a good explanation for neurons.  Quite apart from the fact 
that such a model has not even been created yet (!), even if it did 
exist, it would amount to nothing more than an epicycle explanation.



Richard Loosemore




Re: [agi] Did this message get completely lost?

2008-06-02 Thread Brad Paulsen

John wrote:
A rock is either conscious or not conscious.

Excluding the middle, are we? 

I don't want to put words into Ben & company's mouths, but I think what 
they are trying to do with PLN is to implement a system that expressly 
*includes the middle*.  In theory (but not necessarily in practice) the 
clue to creating the first intelligent machine may be to *exclude the 
ends*!  Scottish philosopher and economist David Hume argued way back in 
the 18th century that all knowledge is based on past observation.  
Because of this, we can never be 100% certain of *anything*.  While Hume 
didn't put it in such terms, as I understand his thinking, it comes down 
to "*everything* is a probability" or "all knowledge is fuzzy 
knowledge".  There is no such thing as 0.  There is no such thing as 1.


For example, let's say you are sitting at a table holding a pencil in 
your hand.  In the past, every time you let go of the pencil in this 
situation (or a similar situation), it dropped to the table.  The cause 
and effect for this behavior is so well documented that we call the 
underlying principle the *law* of gravity.  But, even so, can you say 
with probability 1.0 that the *next* time you let go of that pencil in a 
similar situation it will, in fact, drop to the table?  Hume said 
you can't.  As those ads for stock brokerage firms on TV always say in 
their disclaimers, "Past performance is no guarantee of future performance."


Of course, we are constantly predicting the future based on our 
knowledge of past events (or others' knowledge of past events which we 
have learned and believe to be correct).  I will, for instance, give you 
very favorable odds if you are willing to bet against the pencil hitting 
the table when dropped.  Unless you enjoy living life on the edge, your 
predictions won't stray very far from past experiences (or learned 
knowledge about past experiences).  But, in the end, it's all 
probability and fuzziness.  It is all belief, baby.
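
One way to put a number on that intuition is Laplace's rule of succession,
which captures exactly this Humean point: with a uniform prior, after n
pencil drops that have all hit the table, the probability assigned to the
next drop hitting is (n + 1) / (n + 2), which creeps toward 1 but never
reaches it.  A minimal Python sketch (an illustration of the idea, not
anything taken from PLN):

from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule of succession with a uniform prior:
    P(next trial succeeds) = (successes + 1) / (trials + 2)."""
    return Fraction(successes + 1, trials + 2)

# Every time the pencil has been dropped, it has hit the table.
for n in (10, 1000, 10**6):
    p = rule_of_succession(n, n)
    print(f"after {n} drops: P(next drop hits) = {float(p):.8f}")
# The probability approaches 1 but never gets there: no amount of past
# observation turns an empirical belief into a certainty.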


Regarding the issue of consciousness and the rock, there are several 
possible scenarios to consider here.  First, the rock may be conscious 
but only in a way that can be understood by other rocks.  The rock may 
be conscious but it is unable to communicate with humans (and vice 
versa) so we assume it's not conscious.  The rock is truly conscious and 
it thinks we're not conscious so it pretends to be just like it thinks 
we are and, as a result, we're tricked into thinking it's not 
conscious.  Finally, if a rock falls in the forest, does it make a 
sound?  Consciousness may require at least two actors.  Think about it.  
What good would consciousness do you if there was no one else around to 
appreciate it?  Would you, in that case, in fact be conscious?


Most humans will treat a rock as if it were not conscious because, in 
the past, that assumption has proven to be efficacious for predictions 
involving rocks.  I know of no instance where someone was able to talk a 
rock that was in the process of falling on him or her into changing 
direction by appealing to the rock, one conscious entity to another.  
And maybe they should have.  There is, after all, based on past 
experience, only a 0.9995 probability that a rock is not conscious.


Cheers,

Brad


John G. Rose wrote:

From: j.k. [mailto:[EMAIL PROTECTED]

On 06/01/2008 09:29 PM,, John G. Rose wrote:


From: j.k. [mailto:[EMAIL PROTECTED]
On 06/01/2008 03:42 PM, John G. Rose wrote:



A rock is conscious.

  

Okay, I'll bite. How are rocks conscious under Josh's definition or


any


other non-LSD-tripping-or-batshit-crazy definition?




The way you phrase your question indicates your knuckle-dragging
  

predisposition making it difficult to responsibly expend an effort in
attempt to satisfy your - piqued inquisitive biting action.

  

Yes, my tone was overly harsh, and I apologize for that. It was more
indicative of my frustration with the common practice on this list of
spouting nonsense like rocks are conscious *without explaining what is
meant* by such an ostensibly ludicrous statement or *giving any kind of
a justification whatsoever*. This sort of intellectual sloppiness
seriously lowers the quality of the list and makes it difficult to find
the occasionally really insightful content.




A rock is either conscious or not conscious. Is it less intellectually sloppy 
to declare it not conscious?

John







[agi] More brain scanning and language

2008-06-02 Thread Brad Paulsen

Hey kids:

A COMPUTER THAT CAN 'READ' YOUR MIND 
http://www.physorg.com/news131623779.html 

Cheers, 


Brad





[agi] U.S. Plan for 'Thinking Machines' Repository

2008-05-28 Thread Brad Paulsen

Fellow AGI-ers,

At the risk of being labeled the list's newsboy...

U.S. Plan for 'Thinking Machines' Repository
Posted by samzenpus on Wednesday May 28, @07:19PM
from the save-those-ideas-for-later dept.




An anonymous reader writes: "Information scientists organized by the U.S.'s 
NIST say they will create a 'concept bank' that programmers can use to build 
thinking machines that reason about complex problems at the frontiers of 
knowledge - from advanced manufacturing to biomedicine. The agreement by 
ontologists - experts in word meanings and in using appropriate words to 
build actionable machine commands - outlines the critical functions of the 
Open Ontology Repository (OOR). More on the summit that produced the 
agreement here."


Cheers,

Brad





[agi] Phoenix has Landed

2008-05-25 Thread Brad Paulsen

Hi Gang!

The first Phoenix Lander pix from Mars:

http://fawkes4.lpl.arizona.edu/images.php?gID=315cID=7

Cheers,

Brad





[agi] More Phoenix Info...

2008-05-25 Thread Brad Paulsen

Hi again...

As I write this I'm watching the post-landing NASA press conference live on 
NASA TV (http://www.nasa.gov/multimedia/nasatv/index.html).  One of the NASA 
people was talking about what a difficult navigation problem they'd 
successfully overcome.  His analogy was, "It was like golfing from a tee in 
Boston, Mass., aiming at a cup in Australia and hitting a hole-in-one." 
Everybody applauded (and so they should have).  But, then, the next guy to 
speak interjected, "Yeah.  And the cup was moving!"  They had only a 20 
meter window of landing space to make their desired landing spot (after a 
600+ million kilometer trip from Earth).  It looks, by the initial data, 
like they nailed it.


This program began in 1998.  Ten years!  Imagine how the technology they 
were working with changed (improved) in those ten years.  The success of the 
Phoenix Lander now brings NASA's Mars mission success rate up to 50%.


They are very different problems, of course, but one is prompted on the 
heels of a great intellectual success like this to renew one's belief that 
humanity will, one day, build the AGI we can only dream about and (most of 
the time battling great frustration) work toward today.


Cheers,

Brad







Re: [agi] Porting MindForth AI into JavaScript Mind.html

2008-05-18 Thread Brad Paulsen

John,

Yeah.  And look how well that worked.  It didn't.

Most people who post drivel to lists like this honestly don't think they're 
posting drivel.  Their ignorance runs very, very deep (in many cases to the 
point of clinical self-delusion).  It is as deep as their conviction that 
they are smarter than everybody else; a conviction they will evince in every 
post they make (if only by declaring how everyone else has missed the 
point or is barking up the wrong tree).  And, in most cases, if you take 
them on directly (i.e., rise to their bait), they will bury you in mounds 
and mounds of more, and better-targeted, drivel.  You will spend the rest of 
your productive life trying to dig yourself out from under their prolific, 
but vacant, verbiage giving them even more opportunities to spew their 
nonsense.  They'll throw straw man after straw man at you.  When you finally 
just give up in frustration, they will use that as further evidence (to 
themselves and, if they can spin it just right, to others) of their own 
correctness.


These people are, typically, pretty handy with the English language and love 
to talk about themselves and their unique creations.  In general terms, of 
course.  Their ideas are so world-shattering that they must keep the details 
close to their vest, you see.  I suspect most of these folks also have high 
IQ's.  They're just intellectually lazy.  They really don't see why they 
should have to study what others before them (or even their contemporaries) 
have struggled to elucidate.  After all, that's the intellectual equivalent 
of manual labor.  They're too smart for that.  They'd rather construct their 
own world from whole cloth and, then, spend their intellectually-work-free 
days trying to shove it down everyone else's throats.


While these people may, occasionally, have something interesting to say, 
it won't be because they have any interesting ideas.  It will be because 
they produce a huge volume of words.  Random chance says that, somewhere in 
there, they are bound to produce an utterance meaningful to others  at some 
point.  To them, of course, all of their own utterances are supremely 
meaningful.


I brought this problem up about a month ago and received some gentle 
criticism about throwing the baby out with the bathwater and how history 
demonstrates that most true paradigm-shift creators were considered 
crackpots in their own time.  The difference is that those creators had 
empirical evidence, or solid reasoning, to back up their ideas and they were 
eager to have others independently verify their discoveries.  Trolls don't 
have any evidence (that they'll let you independently examine) nor does 
their reasoning hold up under even the slightest external scrutiny.


I have never advocated kicking anyone off an e-mail list (or from anywhere 
else) unless their postings are totally off-topic (i.e., spam, hate, porn). 
I'm one with Thomas Jefferson, who said, "I would rather be exposed to the 
inconveniences attending too much liberty than to those attending too small 
a degree of it."  Although, in practice, history shows even he had his 
limits.


My suggestion: adjust your e-mail client's troll filters.  Adjust both the 
from and content filters.  You'll still have to manually weed out the 
occasional reply to their drivel that slips through, but this should relieve 
you from having to get your blood pressure up over everything the trolls 
write.  Works for me!


Cheers,

Brad


- Original Message - 
From: John G. Rose [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, May 17, 2008 3:37 PM
Subject: RE: [agi] Porting MindForth AI into JavaScript Mind.html



It's hard to refute something vague or irrelevant, with 10 volumes of
Astrology background behind it, and it's not necessary, as nonsense is
usually self-evident. What's needed is merciless moderation...



I think it's overdue for someone to get a good Waseration.



[agi] WikiMining with Java

2008-05-18 Thread Brad Paulsen
Some of you may be interested in this WikiMining Java API: 
http://www.ukp.tu-darmstadt.de/software/JWPL.  JWPL is the acronym for Java 
Wikipedia Library.


The license isn't open source.  But, it is available at no charge for 
non-commercial use in research.  It's from an academic project in Germany.



From the download page:


Lately, Wikipedia has been recognized as a promising lexical semantic 
resource. If Wikipedia is to be used for large-scale NLP tasks, efficient 
programmatic access to the knowledge therein is required.


JWPL (Java Wikipedia Library) is a free, Java-based application programming 
interface that allows to access all information contained in Wikipedia.


The high-performance Wikipedia API provides structured access to information 
nuggets like redirects, categories, articles and link structure. It is 
described in our LREC 2008 paper 
(http://elara.tk.informatik.tu-darmstadt.de/publications/2008/lrec08_camera_ready.pdf).


JWPL now contains a Mediawiki Markup parser that can be used to further 
analyze the contents of a Wikipedia page. The parser can also be used 
stand-alone with other texts using MediaWiki markup.


Cheers,

Brad 




[agi] Re: Accidental Genius

2008-05-09 Thread Brad Paulsen

Bryan,

Thanks for the clarifications and the links!

Cheers,

Brad

- Original Message - 
From: Bryan Bishop [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]; 
[EMAIL PROTECTED]; [EMAIL PROTECTED]

Sent: Wednesday, May 07, 2008 9:46 PM
Subject: Re: Accidental Genius




On Wed, May 7, 2008 at 9:21 PM, Brad Paulsen [EMAIL PROTECTED] 
wrote:

I happened to catch a program on National Geographic Channel today
entitled Accidental Genius.  It was quite interesting from an AGI 
standpoint.


One of the researchers profiled has invented a device that, by sending
electromagnetic pulses through a person's skull to the appropriate spot in
the left hemisphere of that person's brain, can achieve behavior similar to
that of an idiot savant in a non-brain-damaged person (in the session shown,
this was a volunteer college student).


That's Snyder's work.*
http://www.wireheading.com/brainstim/savant.html

http://heybryan.org/mediawiki/index.php/rTMS

Re: savantism,
http://heybryan.org/intense_world_syndrome.html

DIY rTMS:
http://transcenmentalism.org/OpenStim/tiki
(there's a mailing list)
http://heybryan.org/mailing_lists.html

Before being zapped by the device, the student is taken through a series
of exercises.  One is to draw a horse from memory.  The other is to read
aloud a very familiar saying with a slight grammatical mistake in it (the
word "the" is duplicated, i.e., "the the", in the saying -- sorry I can't
recall the saying used). Then the student is shown a computer screen full of
dots for about 1 second and asked to record his best guess at how many
dots there were.  This exercise is repeated several times (with different
numbers of dots each time).


It's not just being zapped, it's being specifically stimulated in a
certain region of the brain; think of it like actually targeting the
visual cortex, or actually targeting the anterior cingulate, the left
ventrolateral amygdala, etc. And that's why this is interesting. I
wrote somewhat about this on my site once:

http://heybryan.org/recursion.html

Specifically, if this can be used to modify attention, then can we use
it to modify attention re: paying attention to attention? Sounds like
a direct path to the singularity to me.


The student is then zapped by the electromagnetic pulse device for 15
minutes.  It's kind of scary to watch the guy's face flinch uncontrollably
as each pulse is delivered. But, while he reported feeling something, he
claimed there was no pain or disorientation. His language facilities were
unimpaired (they zap a very particular spot in the left hemisphere based on
brain scans taken of idiot savants).


Right. The DIY setups that I have heard of haven't been able to be all
that high-powered due to safety concerns -- not safety re: the brain,
but safety when considering working with superhigh voltages so close
to one's head. ;-)


You can watch the episode on-line here:
http://channel.nationalgeographic.com/tv-schedule.  It's not scheduled for
repeat showing anytime soon.


Awesome. Thanks for the link.


That's not a direct link (I couldn't find one).  When you get to that Web
page, navigate to Wed, May 7 at 3PM and click the "More" button under the
picture.  Unfortunately, the full-motion video is the size of a large
postage stamp.  The full-screen view uses stop motion (at least it did on
my laptop using a DSL-based WiFi hotspot). The audio is the same in both
versions.


- Bryan

* Damien Broderick had to correct me on this, once. :-)

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
OpenCog.org (Open Cognition Project) group.

To post to this group, send email to [EMAIL PROTECTED]
To unsubscribe from this group, send email to 
[EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/opencog?hl=en

-~--~~~~--~~--~--~---





[agi] Accidental Genius

2008-05-07 Thread Brad Paulsen
I happened to catch a program on National Geographic Channel today entitled
Accidental Genius.  It was quite interesting from an AGI standpoint.

One of the researchers profiled has invented a device that, by sending
electromagnetic pulses through a person's skull to the appropriate spot in
the left hemisphere of that person's brain, can achieve behavior similar to
that of an idiot savant in a non-brain-damaged person (in the session shown,
this was a volunteer college student).

Before being zapped by the device, the student is taken through a series
of exercises.  One is to draw a horse from memory.  The other is to read
aloud a very familiar saying with a slight grammatical mistake in it (the
word "the" is duplicated, i.e., "the the", in the saying -- sorry I can't
recall the saying used). Then the student is shown a computer screen full of
dots for about 1 second and asked to record his best guess at how many
dots there were.  This exercise is repeated several times (with different
numbers of dots each time).

The student is then zapped by the electromagnetic pulse device for 15
minutes.  It's kind of scary to watch the guy's face flinch uncontrollably
as each pulse is delivered. But, while he reported feeling something, he
claimed there was no pain or disorientation. His language facilities were
unimpaired (they zap a very particular spot in the left hemisphere based on
brain scans taken of idiot savants).

After being zapped, the exercises are repeated.  The results were
impressive.  The horse drawn after the zapping contained much more detail
and was much better rendered than the horse drawn before the zapping.
Before the zapping, the subject read the familiar saying correctly (despite
the duplicate "the").  After zapping, the duplicate "the" stopped him dead
in his tracks.  He definitely noticed it.  The dots were really impressive
though.  Before being zapped, he got the count right in only two cases.
After being zapped, he got it right in four cases.

The effects of the electromagnetic zapping on the left hemisphere fade
within a few hours.  Don't know about you, but I'd want that in writing.

You can watch the episode on-line here:
http://channel.nationalgeographic.com/tv-schedule.  It's not scheduled for
repeat showing anytime soon.

That's not a direct link (I couldn't find one).  When you get to that Web
page, navigate to Wed, May 7 at 3PM and click the "More" button under the
picture.  Unfortunately, the full-motion video is the size of a large
postage stamp.  The full-screen view uses stop motion (at least it did on
my laptop using a DSL-based WiFi hotspot). The audio is the same in both
versions.

Cheers,

Brad



[agi] Panda: a pattern-based programming system

2008-05-02 Thread Brad Paulsen
Readers of these lists might enjoy the refereed paper "Overview of the Panda 
Programming System" (http://www.jot.fm:80/issues/issue_2008_05/article1/), 
described in the following abstract:


This article provides an overview of a pattern-based programming system, 
named Panda, for automatic generation of high-level programming language 
code. Many code generation systems have been developed [2, 3, 4, 5, 6] that 
are able to generate source code by means of templates, which are defined by 
means of transformation languages such as XSL, ASP, etc. But these templates 
cannot be easily combined because they map parameters and code snippets 
provided by the programmer directly to the target programming language. On 
the contrary, the patterns used in a Panda program generate a code model 
that can be used as input to other patterns, thereby providing an unlimited 
capability of composition. Since such a composition may be split across 
different files or code units, a high degree of separation of concerns [15] 
can be achieved.


A pattern itself can be created by using other patterns, thus making it easy 
to develop new patterns. It is also possible to implement an entire 
programming paradigm, methodology or framework by means of a pattern 
library: design patterns [8], Design by Contract [12], Aspect-Oriented 
Programming [1, 11], multi-dimensional separation of concerns [13, 18], data 
access layer, user interface framework, class templates, etc. This way, 
developing a new programming paradigm does not require to extend an existing 
programming system (compiler, runtime support, etc.), thereby focusing on 
the paradigm concepts.


The Panda programming system introduces a higher abstraction level with 
respect to traditional programming languages: the basic elements of a 
program are no longer classes and methods but, for instance, design patterns 
and crosscutting concerns [1, 11].
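
The key idea -- patterns that emit a code model which other patterns can
then consume -- can be sketched in a few lines of Python.  This is only a
conceptual toy, not the Panda system's actual API; the dictionary-based
"model" and the pattern names below are invented purely for illustration.

# A "code model" is just structured data; "patterns" are functions that
# take a model and return an enriched model, so they compose freely.
def class_pattern(name):
    return {"kind": "class", "name": name, "members": []}

def property_pattern(model, prop_name):
    model["members"].append({"kind": "property", "name": prop_name})
    return model

def design_by_contract_pattern(model, invariant):
    # A pattern can build on whatever earlier patterns produced.
    model["members"].append({"kind": "invariant", "expr": invariant})
    return model

def render(model):
    """Only at the very end is the model turned into target-language text,
    unlike template systems that map straight to the target language."""
    lines = ["class " + model["name"] + " {"]
    for m in model["members"]:
        if m["kind"] == "property":
            lines.append("    int " + m["name"] + ";")
        else:
            lines.append("    // invariant: " + m["expr"])
    lines.append("}")
    return "\n".join(lines)

model = class_pattern("Account")
model = property_pattern(model, "balance")
model = design_by_contract_pattern(model, "balance >= 0")
print(render(model))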


Cheers,

Brad



Re: [agi] Comments from a lurker...

2008-04-16 Thread Brad Paulsen
Steve  Josh,

You guys ought to get a kick out of this: 
http://www.physorg.com:80/news127452360.html.  We don't need no stinking 
gigahertz circuits when we can have terahertz guided-wave circuits.  That's 
1000 times faster than gigahertz (but, of course, you know that).  It's based on the 
terahertz radiation portion of the infrared spectrum.  Some guys in Utah made 
it work (i.e., were able to split and exchange signals at terahertz speeds).  
It's an interesting read.  They figure about 10 years to commercial deployment. 
 That might just be in time to save Moore's ass once again. ;-)  Enjoy!

Cheers,

Brad

  - Original Message - 
  From: Steve Richfield 
  To: agi@v2.listbox.com 
  Sent: Tuesday, April 15, 2008 3:28 PM
  Subject: Re: [agi] Comments from a lurker...


  Josh,


  On 4/15/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote: 
On Monday 14 April 2008 04:56:18 am, Steve Richfield wrote:
 ... My present
 efforts are now directed toward a new computer architecture that may be 
more
 of interest to AGI types here than Dr. Eliza. This new architecture should
 be able to build new PC internals for about the same cost, using the same
 fabrication facilities, yet the processors will run ~10,000 times faster
 running single-thread code.

This (massively-parallel SIMD) is perhaps a little harder than you seem to
think. I did my PhD thesis on it and led a multi-million-dollar 10-year
ARPA-funded project to develop just such an architecture.

  I didn't see any attachments. Perhaps you could send me some more information 
about this? Whenever I present this stuff, I always emphasize that there is 
NOTHING new here, just an assortment of things that are decades old. Hopefully 
you have some good ideas in there, or maybe even some old ideas that I can 
attribute new thinking to.


The first mistake everybody makes is to forget that the bottleneck for
existing processors isn't computing power at all, it's memory bandwidth. All
the cruft on a modern processor chip besides the processor is there to
ameliorate that problem, not because they aren't smart enough to put more
processors on.

  Got this covered. Each of the ~10K ALUs has ~8 memory banks to work with, for 
a total of ~80K banks, so there should be no latencies except for inter-ALU 
communication. Have I missed something here?


The second mistake is to forget that processor and memory silicon fab use
different processes, the former optimized for fast transistors, the latter
for dense trench capacitors.  You won't get both at once -- you'll give up 
at
least a factor of ten trying to combine them over the radically specialized
forms.

  Got that covered. Once multipliers and shift matrices are eliminated and only 
a few adders, pipeline registers, and a little random logic remain, then the 
entire thing can be fabricated with MEMORY fab technology! Note that memories 
have been getting smarter (and even associative), e.g. cache memories, and when 
you look at their addressing, row selection, etc., there is nothing more 
complex than I am proposing for my ALUs. While the control processor might at 
first appear to violate this, note that it needs no computational speed, so its 
floating point and other complex instructions can be emulated on slow 
memory-compatible logic.


The third mistake is to forget that nobody knows how to program SIMD.

  This is a long and complicated subject. I spent a year at CDC digging some of 
the last of the nasty bugs out of their Cyber-205 FORTRAN compiler's optimizer 
and vectorizer, whose job it was to sweep these issues under the rug. There are 
some interesting alternatives, like describing complex code skeletons and how 
to vectorize them. When someone writes a loop whose structure is new to the 
compiler, someone else would have to explain to the computer how to vectorize 
it. Sounds kludgy, but considering the man-lifetimes that it takes to write a 
good vectorizing compiler, this actually works out to much less total effort.
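
  To make the skeleton idea concrete, here is a minimal C sketch (purely 
illustrative; nothing from the actual Cyber-205 toolchain): a plain 
sum-reduction loop carries a dependence through its accumulator, so it cannot 
be vectorized element-by-element as written, and the reduction skeleton says to 
strip-mine it into independent partial sums and combine them at the end.

#include <stddef.h>

/* Scalar form: every iteration depends on the previous value of sum, so it
 * cannot be vectorized element-by-element as written. */
double sum_scalar(const double *x, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += x[i];
    return sum;
}

/* Reduction skeleton: strip-mine into LANES independent partial sums (one per
 * vector lane), then combine them.  The inner loop is now independent across
 * lanes, which is exactly the shape a vectorizer needs. */
#define LANES 8
double sum_vectorizable(const double *x, size_t n)
{
    double partial[LANES] = { 0.0 };
    size_t i = 0;
    for (; i + LANES <= n; i += LANES)
        for (size_t k = 0; k < LANES; k++)   /* one vector operation */
            partial[k] += x[i + k];
    double sum = 0.0;
    for (size_t k = 0; k < LANES; k++)       /* combine the partial sums */
        sum += partial[k];
    for (; i < n; i++)                       /* scalar cleanup of the tail */
        sum += x[i];
    return sum;
}

  Note that this reassociates the floating-point additions, which is precisely 
why a compiler will not do it on its own unless it is told the reordering is 
acceptable.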

  I absolutely agree that programmers will quickly fall into two groups - those 
who get it and make the transition to writing vectorizable code fairly 
easily, and those who go into some other line of work.


They
can't even get programmers to adopt functional programming, for god's sake;
the only thing the average programmer can think in is BASIC,

  I can make a pretty good argument for BASIC, as its simplicity makes it 
almost ideal to write efficient compilers for. Add to that the now-missing MAT 
statements for simple array manipulations, and you have a pretty serious 
competitor to all other approaches.
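
  As a rough illustration (rendered in C rather than BASIC, with invented 
names), a whole-array statement in the spirit of MAT C = A + B specifies the 
operation with no implied element ordering at all, which is exactly the form a 
SIMD machine wants:

#include <stddef.h>

/* MAT C = A + B : element-wise add over whole (flattened) arrays.  Every
 * element is independent, so the loop is trivially vectorizable. */
void mat_add(double *c, const double *a, const double *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}

/* MAT B = (K) * A : scale every element by a constant. */
void mat_scale(double *b, const double *a, double k, size_t n)
{
    for (size_t i = 0; i < n; i++)
        b[i] = k * a[i];
}

  The point is that the programmer states the array operation as a unit and 
never writes the loop, so there is nothing for a vectorizer to reverse-engineer.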


or C which is
essentially machine-independent assembly.

  C is machine-independent only for SISD architectures. When you move to more 
complex architectures, its paradigm breaks down.
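
  A minimal sketch of one such breakdown (illustrative C, assuming C99): two 
pointer parameters may legally alias, so a compiler cannot prove even a trivial 
loop safe to vectorize unless the programmer promises non-overlap with restrict 
or the compiler inserts a runtime overlap check.

#include <stddef.h>

/* Without restrict the compiler must assume y and x might overlap (say,
 * y == x + 1), in which case iteration i's store to y[i] changes the x value
 * read by iteration i + 1, and element-by-element vectorization would be
 * wrong. */
void axpy_maybe_aliased(double *y, const double *x, double a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* C99 restrict is the programmer's promise that y and x do not overlap; that
 * removes the assumed dependence and leaves the loop free to vectorize. */
void axpy_restricted(double * restrict y, const double * restrict x,
                     double a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}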


Not even LISP. APL, which is the
closest approach to a SIMD language, died a decade or so back.


[agi] Posting Strategies - A Gentle Reminder

2008-04-14 Thread Brad Paulsen
Dear Fellow AGI List Members:

Just thought I'd remind the good members of this list about some strategies for 
dealing with certain types of postings.  

Unfortunately, the field of AI/AGI is one of those areas where anybody with a 
pulse and a brain thinks they can design a program that thinks.  Must be 
easy, right?  I mean, I can do it, so how hard can it be to put me in a can? 
 Well, that's what some very smart people in the 1940s, '50s, and into the 
1960s thought.  They were wrong.  Most of them now admit it.  So, on 
AI-related lists, we have to be very careful about the kinds of conversations 
on which we spend our valuable time.  Here are some guidelines.  I realize most 
people here know this stuff already.  This is just a gentle reminder.

If a posting makes grandiose claims; is dismissive of mainstream research, 
techniques, and institutions; or its author claims to have special knowledge 
that has apparently been missed (or dismissed) by all of the brilliant 
scientific/technical minds who go to their jobs at major corporations and 
universities every day (and are paid for doing so), and also by every Nobel 
Laureate for the last 20 years, then the posting should be ignored.  DO NOT RESPOND 
to these types of postings: positively or negatively.  The poster is, 
obviously, either irrational or one of the greatest minds of our time.  In the 
former case, you know they're full of it, I know they're full of it, but they 
will NEVER admit that.  You will never win an argument with an irrational 
individual.  In the latter case, stop and ask yourself: Why is somebody that 
fantastically smart posting to this mailing list?  He or she is, obviously, 
smarter than everyone here.  Why does he/she need us to validate his or her 
accomplishments/knowledge by posting on this list?  He or she should have 
better things to do and, besides, we probably wouldn't be able to understand 
(appreciate) his/her genius anyhow.

The only way to deal with postings like this is to IGNORE THEM.  Don't rise to 
the bait.  Like a bad cold, they will be irritating for a while, but they will, 
eventually, go away.

Cheers,

Brad



Re: [agi] Big Dog

2008-04-11 Thread Brad Paulsen
What's really impressive is how natural the leg movements are.  I was 
flashing to images of young horses navigating rough terrain.


- Original Message - 
From: Ben Goertzel [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, April 10, 2008 6:48 PM
Subject: [agi] Big Dog



Peruse the video:
http://www.youtube.com/watch?v=W1czBcnX1Ww&feature=related

Of course, they are only showing the best stuff.  And I am sure there
is plenty of work left to do.  But from the variety of behaviors that
are displayed, I would say that the problem of quadruped walking is 
surprisingly well solved.  Apparently it's way easier than biped 
locomotion...

ben


