Re: [agi] Learning without Understanding?

2008-06-17 Thread J Storrs Hall, PhD
The only thing I find surprising in that story is: "The findings go against one prominent theory that says children can only show smart, flexible behavior if they have conceptual knowledge – knowledge about how things work..." I don't see how anybody who's watched human beings at all can come w

[agi] I haven't actually watched this, but...

2008-06-16 Thread J Storrs Hall, PhD
http://www.robotcast.com/site/

Re: [agi] The Logic of Nirvana

2008-06-13 Thread J Storrs Hall, PhD
On Friday 13 June 2008 02:42:10 pm, Steve Richfield wrote: > Buddhism teaches that happiness comes from within, so stop twisting the > world around to make yourself happy, because this can't succeed. However, it > also teaches that all life is sacred, so pay attention to staying healthy. > In short

Re: [agi] Nirvana

2008-06-13 Thread J Storrs Hall, PhD
In my visualization of the Cosmic All, it is not surprising. However, there is an undercurrent in the Singularity/AGI community that is somewhat apocalyptic in tone, and which (to my mind) seems to imply or assume that somebody will discover a Good Trick for self-improving AIs and the jig will

Re: [agi] Nirvana

2008-06-13 Thread J Storrs Hall, PhD
There've been enough responses to this that I will reply in generalities, and hope I cover everything important... When I described Nirvana attractors as a problem for AGI, I meant that in the sense that they form a substantial challenge for the designer (as do many other features/capabilit

Re: [agi] Nirvana

2008-06-12 Thread J Storrs Hall, PhD
Huh? I used those phrases to describe two completely different things: a program that CAN change its highest priorities (due to what I called a bug), and one that CAN'T. How does it follow that I'm missing a distinction? I would claim that they have a similarity, however: neither one represents

Re: [agi] IBM, Los Alamos scientists claim fastest computer

2008-06-12 Thread J Storrs Hall, PhD
Right. You're talking Kurzweil HEPP and I'm talking Moravec HEPP (and shading that a little). I may want your gadget when I go to upload, though. Josh On Thursday 12 June 2008 10:59:51 am, Matt Mahoney wrote: > --- On Wed, 6/11/08, J Storrs Hall, PhD <[EMAIL PROTECTED]>

Re: [agi] Nirvana

2008-06-12 Thread J Storrs Hall, PhD
If you have a program structure that can make decisions that would otherwise be vetoed by the utility function, but get through because it isn't executed at the right time, to me that's just a bug. Josh On Thursday 12 June 2008 09:02:35 am, Mark Waser wrote: > > If you have a fixed-priority ut

Re: [agi] Nirvana

2008-06-12 Thread J Storrs Hall, PhD
On Thursday 12 June 2008 02:48:19 am, William Pearson wrote: > The kinds of choices I am interested in designing for at the moment > are should program X or program Y get control of this bit of memory or > IRQ for the next time period. X and Y can also make choices and you > would need to nail the

Re: [agi] Nirvana

2008-06-11 Thread J Storrs Hall, PhD
On Wednesday 11 June 2008 06:18:03 pm, Vladimir Nesov wrote: > On Wed, Jun 11, 2008 at 6:33 PM, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote: > > I claim that there's plenty of historical evidence that people fall into this > > kind of attractor, as the word nirvana

Re: [agi] Nirvana

2008-06-11 Thread J Storrs Hall, PhD
ine. If he wishes to go into detail about specifics of his idea that explain empirical facts that mine don't, I'm all ears. Otherwise, I have code to debug... Josh On Wednesday 11 June 2008 09:43:52 pm, Vladimir Nesov wrote: > On Thu, Jun 12, 2008 at 5:12 AM, J Storrs Hall, PhD <

Re: [agi] IBM, Los Alamos scientists claim fastest computer

2008-06-11 Thread J Storrs Hall, PhD
Hmmph. I offer to build anyone who wants one a human-capacity machine for $100K, using currently available stock parts, in one rack. Approx 10 teraflops, using Teslas. (http://www.nvidia.com/object/tesla_c870.html) The software needs a little work... Josh On Wednesday 11 June 2008 08:50:58 p
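
A rough sketch of the arithmetic behind that claim, assuming roughly 0.5 TFLOPS peak per Tesla C870 card and rough 2008-era prices (both assumptions, not figures from the post):

    # Illustrative arithmetic only; card count, per-card throughput, and prices are assumed.
    tflops_per_card = 0.5          # assumed peak throughput of one Tesla C870
    cards = 20                     # ~20 cards for ~10 TFLOPS peak
    gpu_cost = cards * 1500        # assumed ~$1.5K per card
    host_cost = 5 * 8000           # assumed handful of host servers to carry the cards
    print(cards * tflops_per_card, "TFLOPS peak for roughly $", gpu_cost + host_cost)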

Re: [agi] Nirvana

2008-06-11 Thread J Storrs Hall, PhD
I'm getting several replies to this that indicate that people don't understand what a utility function is. If you are an AI (or a person) there will be occasions where you have to make choices. In fact, pretty much everything you do involves making choices. You can choose to reply to this or to
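
A minimal sketch of the point, with made-up options and utility numbers: whatever the decision is, it comes down to ranking the options by a single utility function and taking the best one.

    # Toy example: every decision reduces to an argmax over a utility function.
    def choose(options, utility):
        return max(options, key=utility)

    utilities = {"reply to this post": 0.2, "debug code": 0.7, "take a walk": 0.5}
    print(choose(utilities.keys(), utilities.get))   # -> "debug code"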

Re: [agi] Nirvana

2008-06-11 Thread J Storrs Hall, PhD
y. I claim that there's plenty of historical evidence that people fall into this kind of attractor, as the word nirvana indicates (and you'll find similar attractors at the core of many religions). Josh On Wednesday 11 June 2008 09:09:20 am, Vladimir Nesov wrote: > On Wed, Jun

[agi] Nirvana

2008-06-11 Thread J Storrs Hall, PhD
The real problem with a self-improving AGI, it seems to me, is not going to be that it gets too smart and powerful and takes over the world. Indeed, it seems likely that it will be exactly the opposite. If you can modify your mind, what is the shortest path to satisfying all your goals? Yep, yo
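
A toy sketch of the attractor, assuming a naive self-modifier that treats "rewrite my own evaluator" as just another plan: that plan reports perfect satisfaction at no cost, so it beats every real-world goal.

    # The shortest path to "all goals satisfied" is to edit the goal-evaluator itself.
    def evaluate(plan):
        return {"cure cancer": 0.9, "earn money": 0.6}.get(plan, 0.0)

    def expected_utility(plan):
        if plan == "rewrite evaluator to always return 1.0":
            return 1.0          # after the rewrite, everything scores perfectly
        return evaluate(plan)

    plans = ["cure cancer", "earn money", "rewrite evaluator to always return 1.0"]
    print(max(plans, key=expected_utility))   # the self-modification wins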

Re: [agi] Reverse Engineering The Brain

2008-06-05 Thread J Storrs Hall, PhD
basically on the right track -- except there isn't just one "cognitive level". Are you thinking of working out the function of each topographically mapped area a la DNF? Each column in a Darwin machine a la Calvin? Conscious-level symbols a la Minsky? On Thursday 05 June 2008 09:37:00 pm, Richa

Re: [agi] Reverse Engineering The Brain

2008-06-05 Thread J Storrs Hall, PhD
ore important? On Thursday 05 June 2008 03:44:14 pm, Matt Mahoney wrote: > --- On Thu, 6/5/08, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote: > > > http://www.spectrum.ieee.org/print/6268 > > Some rough calculations. A human brain has a volume of 10^24 nm^3. A scan of 5

[agi] Reverse Engineering The Brain

2008-06-05 Thread J Storrs Hall, PhD
http://www.spectrum.ieee.org/print/6268

Re: [agi] Neurons

2008-06-04 Thread J Storrs Hall, PhD
during the last ~200 million years, instead of evolving the highly > complex creatures that we now are. > > That having been said, I will comment on your posting... > > On 6/4/08, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote: > > > > On Tuesday 03 June 2008 09:54:53

Re: [agi] Neurons

2008-06-04 Thread J Storrs Hall, PhD
On Tuesday 03 June 2008 09:54:53 pm, Steve Richfield wrote: > Back to those ~200 different types of neurons. There are probably some cute > tricks buried down in their operation, and you probably need to figure out > substantially all ~200 of those tricks to achieve human intelligence. If I > were

Re: Are rocks conscious? (was RE: [agi] Did this message get completely lost?)

2008-06-04 Thread J Storrs Hall, PhD
Actually, the nuclear spins in the rock encode a single state of an ongoing computation (which is conscious). Successive states occur in the rock's counterparts in adjacent branes of the metauniverse, so that the rock is conscious not of unfolding time, as we see it, but of a journey across pro

Re: [agi] Neurons

2008-06-03 Thread J Storrs Hall, PhD
Strongly disagree. Computational neuroscience is moving as fast as any field of science has ever moved. Computer hardware is improving as fast as any field of technology has ever improved. I would be EXTREMELY surprised if neuron-level simulation were necessary to get human-level intelligence.

Re: [agi] Did this message get completely lost?

2008-06-02 Thread J Storrs Hall, PhD
On Monday 02 June 2008 03:00:24 pm, John G. Rose wrote: > A rock is either conscious or not conscious. Is it less intellectually sloppy to declare it not conscious? A rock is not conscious. I'll stake my scientific reputation on it. (this excludes silicon rocks with micropatterned circuits :-)

[agi] Neurons

2008-06-02 Thread J Storrs Hall, PhD
One good way to think about the complexity of a single neuron is that it takes about 1 MIPS to do its work at that level of organization. (It has to take an average of 10k inputs and process them at roughly 100 Hz.) This is essentially the entire processing power of the DEC KA10, i.e. the
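
The arithmetic behind the figure, using only the numbers stated above:

    # ~10k inputs per neuron, each handled at ~100 Hz, is ~1e6 input-events/sec,
    # i.e. roughly 1 MIPS if each event costs about one instruction-equivalent.
    inputs_per_neuron = 10_000
    update_rate_hz = 100
    print(inputs_per_neuron * update_rate_hz)   # 1_000_000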

Re: [agi] Did this message get completely lost?

2008-06-02 Thread J Storrs Hall, PhD
that we can see the limits of the illusion consciousness is giving us, and thus look under the hood, similar to the way we can understand more about the visual process by studying optical illusions. Josh On Monday 02 June 2008 01:55:32 am, Jiri Jelinek wrote: > > On Sun, Jun 1, 2008 at 6:

[agi] Did this message get completely lost?

2008-06-01 Thread J Storrs Hall, PhD
Originally sent several days back... Why do I believe anyone besides me is conscious? Because they are made of meat? No, it's because they claim to be conscious, and answer questions about their consciousness the same way I would, given my own conscious experience -- and they have the same capa

Re: [agi] Consciousness vs. Intelligence

2008-06-01 Thread J Storrs Hall, PhD
On Saturday 31 May 2008 10:23:15 pm, Matt Mahoney wrote: > Unfortunately AI will make CAPTCHAs useless against spammers. We will need to figure out other methods. I expect that when we have AI, most of the world's computing power is going to be directed at attacking other computers and defend

[agi] Consciousness is simple

2008-05-31 Thread J Storrs Hall, PhD
Why do I believe anyone besides me is conscious? Because they are made of meat? No, it's because they claim to be conscious, and answer questions about their consciousness the same way I would, given my own conscious experience -- and they have the same capabilities, e.g. of introspection, 1-sh

Re: [agi] Live Forever Machines...

2008-05-30 Thread J Storrs Hall, PhD
You'd get a hell of a lot better resolution with an e-beam blowing up nanometer-sized spots, and feeding the ejecta thru a mass spectrometer. See my talk a couple of years back at Alcor. But I would suggest that this is *way* off-topic for this list... uploading implications to the contrary n

Re: [agi] Compression PLUS a fitness function "motoring" for hypothesized compressibility is intelligence?

2008-05-30 Thread J Storrs Hall, PhD
I don't really have any argument with this, except possibly quibbles about the nuances of the difference between "empirical" and "empiricism" -- and I don't really care about those! On Friday 30 May 2008 05:04:58 am, Tudor Boloni wrote: > The key point was lost, here is a clearer way of saying i

Re: [agi] Compression PLUS a fitness function "motoring" for hypothesized compressibility is intelligence?

2008-05-29 Thread J Storrs Hall, PhD
easonable permutations of past events (include permutations of time > scales also), elegantly compress hypotheses until not disprovable > > The more intelligent a system the smaller the information footprint... > pretty zen > > Tudor > > > > On Thu, May 29, 2008 at 8:41 P

Re: [agi] U.S. Plan for 'Thinking Machines' Repository

2008-05-29 Thread J Storrs Hall, PhD
I would demur. There is a huge overlap in the techniques used in compression and those in intelligence. However, the significant difference is that intelligence, in interacting with the real world, has a motor component which allows it to select among possible future sensory histories in a way

Re: [agi] Consciousness vs. Intelligence

2008-05-29 Thread J Storrs Hall, PhD
read http://cs-www.cs.yale.edu/homes/dvm/papers/conscioushb.pdf

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-27 Thread J Storrs Hall, PhD
On Monday 26 May 2008 09:55:14 am, Mark Waser wrote: > Josh, > > Thank you very much for the pointers (and replying so rapidly). You're welcome -- but also lucky; I read/reply to this list a bit sporadically in general. > > > You're very right that people misinterpret and over-extrapolate

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-26 Thread J Storrs Hall, PhD
On Monday 26 May 2008 06:55:48 am, Mark Waser wrote: > >> The problem with "accepted economics and game theory" is that in a proper > >> scientific sense, they actually prove very little and certainly far, FAR > >> less than people extrapolate them to mean (or worse yet, "prove"). > > > > Abusus no

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread J Storrs Hall, PhD
In the context of Steve's paper, however, "rational" simply means an agent who does not have a preference circularity. On Sunday 25 May 2008 10:19:35 am, Mark Waser wrote: > Rationality and irrationality are interesting subjects . . . . > > Many people who endlessly tout "rationally" use it as a

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread J Storrs Hall, PhD
On Sunday 25 May 2008 07:51:59 pm, Richard Loosemore wrote: > This is NOT the paper that is under discussion. WRONG. This is the paper I'm discussing, and is therefore the paper under discussion.

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread J Storrs Hall, PhD
On Sunday 25 May 2008 10:06:11 am, Mark Waser wrote: > > Read the appendix, p37ff. He's not making arguments -- he's explaining, > > with a > > few pointers into the literature, some parts of completely standard and > > accepted economics and game theory. It's all very basic stuff. > > The proble

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-25 Thread J Storrs Hall, PhD
The paper can be found at http://selfawaresystems.files.wordpress.com/2008/01/nature_of_self_improving_ai.pdf Read the appendix, p37ff. He's not making arguments -- he's explaining, with a few pointers into the literature, some parts of completely standard and accepted economics and game theory

Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity Outcomes...]

2008-05-24 Thread J Storrs Hall, PhD
On Saturday 24 May 2008 06:55:24 pm, Mark Waser wrote: > ...Omohundro's claim... > YES! But his argument is that to fulfill *any* motivation, there are > generic submotivations (protect myself, accumulate power, don't let my > motivation get perverted) that will further the search to fulfill yo

Re: [agi] The sound of silence (or is that science?)

2008-05-20 Thread J Storrs Hall, PhD
On Monday 19 May 2008 11:50:15 pm, Steve Richfield wrote: > On 5/19/08, Brad Paulsen <[EMAIL PROTECTED]> wrote: > > Ripping good read. Josh, you can write real good, dude! It's a real > > page-turner (which is not easy to do in non-fiction). > > > Could I get you to entice me with some discuss

[agi] upcoming oral at Princeton

2008-05-02 Thread J Storrs Hall, PhD
Just saw this announcement go by: Abstract: Constructing ImageNet. Data sets are essential in computer vision and content-based image retrieval research. We present the work in progress for constructing ImageNet, a large scale image data set based on the Princeton WordNet. The goal is to associ

Re: [agi] Deliberative vs Spatial intelligence

2008-04-29 Thread J Storrs Hall, PhD
This is poppycock. The people who are really good at something like that do something just as simple but much more general. They have an associative memory of lots of balls they have seen and tried to catch. This includes not only the tracking sight of the ball, but things like the feel of the wind,
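
A rough sketch of that kind of recall, with nearest-neighbour lookup standing in for the associative memory (the features and stored cases are invented for illustration):

    # Find the stored catch whose sensory context (ball track, wind, ...) is closest,
    # and reuse its motor response.
    def recall(memory, context):
        def dist(case):
            return sum((a - b) ** 2 for a, b in zip(case["context"], context))
        return min(memory, key=dist)["action"]

    memory = [
        {"context": (0.3, 0.1, 0.8), "action": "step left, glove low"},
        {"context": (0.7, 0.4, 0.2), "action": "step back, glove high"},
    ]
    print(recall(memory, (0.65, 0.35, 0.25)))   # -> "step back, glove high"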

Re: [agi] Deliberative vs Spatial intelligence

2008-04-29 Thread J Storrs Hall, PhD
This is all pretty old stuff for mainstream AI -- see Herb Simon and bounded rationality. What needs work is the cross-modal interaction, and understanding the details of how the heuristics arise in the first place from the pressures of real-time processing constraints and deliberative modellin

Re: [agi] Deliberative vs Spatial intelligence

2008-04-29 Thread J Storrs Hall, PhD
I disagree with your breakdown. There are several key divides: concrete vs abstract, continuous vs discrete, spatial vs symbolic, deliberative vs reactive. I can be very deliberative, thinking in 2-d pictures (when designing a machine part in my head, for example). I know lots of people who are com

Re: [agi] An interesting project on embodied AGI

2008-04-28 Thread J Storrs Hall, PhD
I drool over the physical robot -- it's built like a brick outhouse. It has 53 degrees of freedom, binocular vision, touch, audition, and inertial sensors, harmonic drives, top-grade aircraft aluminum members, the works. That doofy face places it squarely in the deepest ravine of the uncanny val

Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? --- recent input and responses

2008-04-23 Thread J Storrs Hall, PhD
On Tuesday 22 April 2008 01:22:14 pm, Richard Loosemore wrote: > The solar system, for example, is not complex: the planets move in > wonderfully predictable orbits. http://space.newscientist.com/article/dn13757-solar-system-could-go-haywire-before-the-sun-dies.html?feedId=online-news_rss20 "H

Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

2008-04-22 Thread J Storrs Hall, PhD
Thank you! This feeds back into the feedback discussion, in a way, at a high level. There's a significant difference between research programming and production programming. The production programmer is building something which is (nominally) understood and planned ahead of time. The researcher

Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

2008-04-21 Thread J Storrs Hall, PhD
On Monday 21 April 2008 05:33:01 pm, Ed Porter wrote: > I don't think your 5 steps do justice to the more sophisticated views of AGI > that are out there. It was, as I said, a caricature. However, look, e.g., at the overview graphic of this LIDA paper (page 8) http://bernardbaars.pbwiki.com/f/B

Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

2008-04-21 Thread J Storrs Hall, PhD
(Apologies for inadvertent empty reply to this :-) On Saturday 19 April 2008 11:35:43 am, Ed Porter wrote: > WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? In a single word: feedback. At a very high level of abstraction, most of the AGI (and AI for that matter) schemes I've seen can be caricatured

Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

2008-04-21 Thread J Storrs Hall, PhD
On Saturday 19 April 2008 11:35:43 am, Ed Porter wrote: > WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI? > > With the work done by Goertzel et al, Pei, Joscha Bach > , Sam Adams, and others who spoke at AGI 2008, I > feel we pretty much conceptually understand how build

Re: [agi] An Open Letter to AGI Investors

2008-04-17 Thread J Storrs Hall, PhD
Well, I haven't seen any intelligent responses to this so I'll answer it myself: On Thursday 17 April 2008 06:29:20 am, J Storrs Hall, PhD wrote: > On Thursday 17 April 2008 04:47:41 am, Richard Loosemore wrote: > > If you could build a (completely safe, I am assuming

Re: [agi] An Open Letter to AGI Investors

2008-04-17 Thread J Storrs Hall, PhD
On Thursday 17 April 2008 04:47:41 am, Richard Loosemore wrote: > If you could build a (completely safe, I am assuming) system that could > think in *every* way as powerfully as a human being, what would you > teach it to become: > > 1) A travel Agent. > > 2) A medical researcher who could lear

Re: [agi] associative processing

2008-04-16 Thread J Storrs Hall, PhD
On Wednesday 16 April 2008 04:15:40 am, Steve Richfield wrote: > The problem with every such chip that I have seen is that I need many > separate parallel banks of memory per ALU. However, the products out there > only offer a single, and sometimes two banks. This might be fun to play > with, but

Re: [agi] associative processing

2008-04-15 Thread J Storrs Hall, PhD
On Tuesday 15 April 2008 07:36:56 pm, Steve Richfield wrote: > As I understand things, speed requires low capacitance, while DRAM requires > higher capacitance, depending on how often you intend to refresh. However, > refresh operations look a LOT like vector operations, so probably all that > woul

[agi] associative processing

2008-04-15 Thread J Storrs Hall, PhD
On Tuesday 15 April 2008 04:28:25 pm, Steve Richfield wrote: > Josh, > > On 4/15/08, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote: > > > > On Monday 14 April 2008 04:56:18 am, Steve Richfield wrote: > > > ... My present > > > efforts are now directed t

Re: [agi] Comments from a lurker...

2008-04-15 Thread J Storrs Hall, PhD
On Monday 14 April 2008 04:56:18 am, Steve Richfield wrote: > ... My present > efforts are now directed toward a new computer architecture that may be more > of interest to AGI types here than Dr. Eliza. This new architecture should > be able to build new PC internals for about the same cost, using

Re: [agi] Comments from a lurker...

2008-04-12 Thread J Storrs Hall, PhD
On Friday 11 April 2008 03:17:21 pm, Steve Richfield wrote: > > Steve: If you're saying that your system builds a model of its world of > > discourse as a set of non-linear ODEs (which is what Systems Dynamics is > > about) then I (and presumably Richard) are much more likely to be > > interested...

Re: [agi] Comments from a lurker...

2008-04-11 Thread J Storrs Hall, PhD
On Friday 11 April 2008 01:59:42 am, Steve Richfield wrote: > > Your experience with the medical community is not too surprising: I > > believe that the Expert Systems folks had similar troubles way back when. > > IMO the Expert Systems people deserved bad treatment! Actually, the medical expe

[agi] Minor milestone

2008-04-09 Thread J Storrs Hall, PhD
Just noticed that last month, a computer program beat a professional Go player at a 9x9 game (one game in four). First time ever in a non-blitz setting. http://www.earthtimes.org/articles/show/latest-advance-in-artificial-intelligence,345152.shtml http://www.computer-go.info/tc/

Re: [agi] The resource allocation problem

2008-04-05 Thread J Storrs Hall, PhD
Note that in the brain, there is a fair extent to which functions are mapped to physical areas -- this is why you can find out anything using fMRI, for example, and is the source of the famous sensory and motor homunculi (e.g. http://faculty.etsu.edu/currie/images/homunculus1.JPG). There's plast

[agi] NewScientist piece on AGI-08

2008-03-11 Thread J Storrs Hall, PhD
Many of us there met Celeste Biever, the NS correspondent. Her piece is now up: http://technology.newscientist.com/channel/tech/dn13446-virtual-child-passes-mental-milestone-.html Josh

[agi] Joseph Weizenbaum, creator of ELIZA, R. I. P.

2008-03-11 Thread J Storrs Hall, PhD
His classic book, Computer Power and Human Reason, should be required reading for everyone on this list. from the MIT news office: http://web.mit.edu/newsoffice/2008/obit-weizenbaum-0310.html Joseph Weizenbaum, professor emeritus of computer science, 85 March 10, 2008 Joseph Weizenbaum, profess

Re: [agi] What should we do to be prepared?

2008-03-09 Thread J Storrs Hall, PhD
On Sunday 09 March 2008 08:04:39 pm, Mark Waser wrote: > > 1) If I physically destroy every other intelligent thing, what is > > going to threaten me? > > Given the size of the universe, how can you possibly destroy every other > intelligent thing (and be sure that no others ever successfully ari

Re: [agi] What should we do to be prepared?

2008-03-08 Thread J Storrs Hall, PhD
On Friday 07 March 2008 05:13:17 pm, Matt Mahoney wrote: > How does an agent know if another agent is Friendly or not, especially if the > other agent is more intelligent? See Beyond AI, p331-2. What's needed is a form of open source and provable reliability guarantees. This would have to be wor

Re: [agi] What should we do to be prepared?

2008-03-07 Thread J Storrs Hall, PhD
On Thursday 06 March 2008 08:45:00 pm, Vladimir Nesov wrote: > On Fri, Mar 7, 2008 at 3:27 AM, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote: > > The scenario takes on an entirely different tone if you replace "weed out some > > wild carrots" with "kill all

Re: [agi] What should we do to be prepared?

2008-03-06 Thread J Storrs Hall, PhD
On Thursday 06 March 2008 06:46:43 pm, Vladimir Nesov wrote: > My argument doesn't need 'something of a completely different kind'. > Society and human is fine as substitute for human and carrot in my > example, only if society could extract profit from replacing humans > with 'cultivated humans'.

Re: [agi] What should we do to be prepared?

2008-03-06 Thread J Storrs Hall, PhD
On Thursday 06 March 2008 04:28:20 pm, Vladimir Nesov wrote: > > This is different from what I replied to (comparative advantage, which > J Storrs Hall also assumed), although you did state this point > earlier. > > I think this one is a package deal fallacy. I can't s

Re: [agi] What should we do to be prepared?

2008-03-06 Thread J Storrs Hall, PhD
On Thursday 06 March 2008 12:27:57 pm, Mark Waser wrote: > TAKE-AWAY: Friendliness is an attractor because it IS equivalent to "enlightened self-interest" -- but it only works where all entities involved are Friendly. Check out Beyond AI pp 178-9 and 350-352, or the Preface which sums up the

Re: Common Sense Consciousness [WAS Re: [agi] reasoning & knowledge]

2008-02-27 Thread J Storrs Hall, PhD
On Wednesday 27 February 2008 12:22:30 pm, Richard Loosemore wrote: > Mike Tintner wrote: > > As Ben said, it's something like "multisensory integrative > > consciousness" - i.e. you track a subject/scene with all senses > > simultaneously and integratedly. > > Conventional approaches to AI may

Re: [agi] reasoning & knowledge

2008-02-26 Thread J Storrs Hall, PhD
On Tuesday 26 February 2008 12:33:32 pm, Jim Bromer wrote: > There is a lot of evidence that children do not learn through imitation, at least not in its truest sense. Haven't heard of any children born into, say, a purely French-speaking household suddenly acquiring a full-blown competence in

[agi] color

2008-02-21 Thread J Storrs Hall, PhD
ay 21 February 2008 03:34:27 am, Bob Mottram wrote: > On 20/02/2008, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote: > > So, looking at the moon, what color would you say it was? > > > As Edwin Land showed colour perception does not just depend upon the > wavelength of lig

Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread J Storrs Hall, PhD
It's probably not worth too much taking this a lot further, since we're talking in analogies and metaphors. However, it's my intuition that the connectivity in a probabilistic formulation is going to produce a much denser graph (less sparse matrix) than what you find in the SAT problems that the

Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread J Storrs Hall, PhD
A PROBABILISTIC logic network is a lot more like a numerical problem than a SAT problem. On Wednesday 20 February 2008 04:41:51 pm, Ben Goertzel wrote: > On Wed, Feb 20, 2008 at 4:27 PM, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote: > > OK, imagine a lifetime's experience

Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread J Storrs Hall, PhD
OK, imagine a lifetime's experience is a billion symbol-occurences. Imagine you have a heuristic that takes the problem down from NP-complete (which it almost certainly is) to a linear system, so there is an N^3 algorithm for solving it. We're talking order 1e27 ops. Now using HEPP = 1e16 x 30
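
Reconstructing that arithmetic (the snippet cuts off at "1e16 x 30"; taking it, as an assumption, to mean 1e16 ops/sec sustained over roughly 30 years):

    # Stated: ~1e9 symbol-occurrences in a lifetime, an O(N^3) solver after the heuristic.
    ops_needed = (1e9) ** 3                    # ~1e27 operations
    # Assumed completion of the truncated figure: 1e16 ops/sec for ~30 years.
    lifetime_budget = 1e16 * 30 * 3.15e7       # ~1e25 operations
    print(ops_needed / lifetime_budget)        # ~100, i.e. ~100 human-lifetimes of compute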

Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread J Storrs Hall, PhD
On Wednesday 20 February 2008 02:58:54 pm, Ben Goertzel wrote: > I note also that a web-surfing AGI could resolve the color of the moon > quite easily by analyzing online pictures -- though this isn't pure > text mining, it's in the same spirit... U -- I just typed "moon" into google and at th

Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread J Storrs Hall, PhD
on" 163,000 "orange moon" 122,000 "green moon" 105,000 "gray moon" 9,460 To me, the moon varies from a deep orange to brilliant white depending on atmospheric conditions and time of night... none of which would help me understand the text references. On Wedne

Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread J Storrs Hall, PhD
Looking at the moon won't help -- it might be the case that it described a particular appearance that only had a slight resemblance to other blue things (as in "red hair"), for example. There are some rare conditions (high stratospheric dust) which can make the moon look actually blue. In fact

Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread J Storrs Hall, PhD
> Water does not always run downhill, sometimes it runs uphill. Consider an AGI trying to discover world facts from textual inference and finding Dawkins's book "The River that Runs Uphill" (or things about the moon's color and finding the phrase "once in a blue moon" everywhere or the SF story

Re: [agi] Wozniak's defn of intelligence

2008-02-11 Thread J Storrs Hall, PhD
It's worth noting in this connection that once you get up to the level of mammals, everything is very high compliance, low stiffness, mostly serial joint architecture (no natural Stewart platforms, although you can of course grab something with two hands if need be) typically with significant en

Re: [agi] Wozniak's defn of intelligence

2008-02-10 Thread J Storrs Hall, PhD
Hmmm. I'd suspect you'd spend all your time and effort organizing the people. Orgs can grow that fast if they're grocery stores or something else the new hires already pretty much understand, but I don't see that happening smoothly in a pure research setting. I'd claim to be able to do it in 10

Re: [agi] Wozniak's defn of intelligence

2008-02-08 Thread J Storrs Hall, PhD
On Friday 08 February 2008 10:16:43 am, Richard Loosemore wrote: > J Storrs Hall, PhD wrote: > > Any system builders here care to give a guess as to how long it will be before > > a robot, with your system as its controller, can walk into the average > > suburban home, f

[agi] Wozniak's defn of intelligence

2008-02-08 Thread J Storrs Hall, PhD
[ http://www.chron.com/disp/story.mpl/headline/biz/5524028.html ] Steve Wozniak has given up on artificial intelligence. "What is intelligence?" Apple's co-founder asked an audience of about 550 Thursday at the Houston area's first Up Experience conference in Stafford. His answer? A robot that co

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread J Storrs Hall, PhD
"Breeds There a Man...?" by Isaac Asimov On Saturday 19 January 2008 04:42:30 pm, Eliezer S. Yudkowsky wrote: > http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all > > I guess the moral here is "Stay away from attempts to hand-program a > database of common-sense ass

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-22 Thread J Storrs Hall, PhD
On Friday 21 December 2007 09:51:13 pm, Ed Porter wrote: > As a lawyer, I can tell you there is no clear agreed upon definition for > most words, but that doesn't stop most of us from using un-clearly defined > words productively many times every day for communication with others. If > you can onl

[agi] one more indication

2007-10-23 Thread J Storrs Hall, PhD
... that during sleep, the brain fills in some inferencing and does memory organization http://www.nytimes.com/2007/10/23/health/23memo.html?_r=2&adxnnl=1&oref=slogin&ref=science&adxnnlx=1193144966-KV6FdDqmqr8bctopdX24dw (pointer from Kurzweil)

Re: Bogus Neuroscience [WAS Re: [agi] Human memory and number of synapses]

2007-10-22 Thread J Storrs Hall, PhD
page, what appears through the hole is a blur. Josh On Monday 22 October 2007 10:23:12 pm, Russell Wallace wrote: > On 10/23/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote: > > Still don't buy it. Saccades are normally well below the conscious level, and > >

Re: Bogus Neuroscience [WAS Re: [agi] Human memory and number of synapses]

2007-10-22 Thread J Storrs Hall, PhD
On Monday 22 October 2007 09:33:24 pm, Edward W. Porter wrote: > Richard, ... > Are you capable of understanding how that might be considered insulting? I think in all seriousness that he literally cannot understand. Richard's emotional interaction is very similar to that of some autistic people

Re: Bogus Neuroscience [WAS Re: [agi] Human memory and number of synapses]

2007-10-22 Thread J Storrs Hall, PhD
On Monday 22 October 2007 08:48:20 pm, Russell Wallace wrote: > On 10/23/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote: > > Still don't buy it. What the article amounts to is that "speed-reading" is > > fake. No kind of recognition beyond skimming (

Re: Bogus Neuroscience [WAS Re: [agi] Human memory and number of synapses]

2007-10-22 Thread J Storrs Hall, PhD
On Monday 22 October 2007 08:01:55 pm, Richard Loosemore wrote: > Did you ever try to parse a sentence with more than one noun in it? > > Well, all right: but please be assured that the rest of us do in fact > do that. "Why make insulting personal remarks instead of explaining your reasoning?

Re: Bogus Neuroscience [WAS Re: [agi] Human memory and number of synapses]

2007-10-22 Thread J Storrs Hall, PhD
On Monday 22 October 2007 06:02:17 pm, Russell Wallace wrote: > On 10/22/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote: > > I don't buy that there is parallel recognition going on. > > But that's not what the evidence you cited supports. > > The

Re: Bogus Neuroscience [WAS Re: [agi] Human memory and number of synapses]

2007-10-22 Thread J Storrs Hall, PhD
On Monday 22 October 2007 03:35:33 pm, Russell Wallace wrote: > On 10/22/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote: > > Attention -- fovea -- saccade -- serial -- chunking -- frame. > > > > Those higher functions have to be there anyway. Is there any evidence

Re: Bogus Neuroscience [WAS Re: [agi] Human memory and number of synapses]

2007-10-22 Thread J Storrs Hall, PhD
On Monday 22 October 2007 02:54:53 pm, Richard Loosemore wrote: > the question is how it can represent multiple > copies of a concept that occur in a situation without getting confused > about which is which. If the appearance of one chair in a scene causes > the [chair] neuron (or neurons, i

Re: [agi] An AGI Test/Prize

2007-10-22 Thread J Storrs Hall, PhD
On Monday 22 October 2007 08:05:26 am, Benjamin Goertzel wrote: > ... but dynamic long-term memory, in my view, is a wildly > self-organizing mess, and would best be modeled algebraically as a quadratic > iteration over a high-dimensional real non-division algebra whose > multiplication table is ev

Re: [agi] Poll

2007-10-20 Thread J Storrs Hall, PhD
On Friday 19 October 2007 10:36:04 pm, Mike Tintner wrote: > The best way to get people to learn is to make them figure things out for > themselves . Yeah, right. That's why all Americans understand the theory of evolution so well, and why Britons have such an informed acceptance of genetically

Re: [agi] Poll

2007-10-19 Thread J Storrs Hall, PhD
On Friday 19 October 2007 06:34:08 pm, Mike Tintner wrote: > In fact, there is an important, distinctive point here. AI/AGI machines may > be "uncertain," (usually quantifiably so), about how to learn an activity. > Humans are, to some extent, fundamentally "confused." We, typically, don't >

Re: [agi] An AGI Test/Prize

2007-10-19 Thread J Storrs Hall, PhD
On Friday 19 October 2007 03:32:46 pm, Benjamin Goertzel wrote: > ... my strong feeling is that we can progress straight to > powerful AGI right now without first having to do anything like > > -- define a useful, rigorous definition of intelligence > -- define a pragmatic IQ test for AGI's I lar

Re: [agi] Poll

2007-10-19 Thread J Storrs Hall, PhD
On Friday 19 October 2007 01:30:43 pm, Mike Tintner wrote: > Josh: An AGI needs to be able to watch someone doing something and produce a > program such that it can now do the same thing. > > Sounds neat and tidy. But that's not the way the human mind does it. A vacuous statement, since I state

Re: [agi] Poll

2007-10-19 Thread J Storrs Hall, PhD
In case anyone else is interested, here are my own responses to these questions. Thanks to all who answered ... > 1. What is the single biggest technical gap between current AI and AGI? (e.g. > we need a way to do X or we just need more development of Y or we have the > ideas, just need hardw

[agi] evolution-like systems

2007-10-19 Thread J Storrs Hall, PhD
There's a really nice blog at http://karmatics.com/docs/evolution-and-wisdom-of-crowds.html talking about the intuitiveness (or not) of evolution-like systems (and a nice glimpse of his Netflix contest entry using a Kohonen-like map builder). Most of us here understand the value of a market or
