Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread Bryan Bishop
On Tue, Aug 10, 2010 at 6:25 AM, Steve Richfield wrote:

 Note my prior posting explaining my inability even to find a source of
 used mice for kids to use in high-school anti-aging experiments, all while
 university labs are now killing their vast numbers of such mice. So long as
 things remain THIS broken, anything that isn't part of the solution simply
 becomes a part of the very big problem, AIs included.


You might be interested in this: I've been putting together an
adopt-a-lab-rat program, which is actually an adoption program for lab
mice. In some cases, mice used as a control group in an experiment are
discarded at the end of the program because, honestly, their lifetime is
more or less over, so the idea is that some people might be interested
in adopting them. Of course, you can also just pony up the $15 and get
one from Jackson Labs. I haven't fully launched adopt-a-lab-rat yet
because I am still trying to figure out how to avoid ending up in a
situation where I have hundreds of rats and rodents running around my
apartment and I get the short end of the stick (oops).

- Bryan
http://heybryan.org/
1 512 203 0507





Re: [agi] an advance in brain/computer interfaces

2008-11-21 Thread Bryan Bishop
On 11/21/08, Ed Porter [EMAIL PROTECTED] wrote:
 For those of you who don't read Kurzweil's mailing list, here is a link to
 an article that describes progress being made in a type of brain/computer
 interface that may in the future have the potential of providing
 high-bandwidth communication with a reasonable percentage of the cortex
 with minimal surgery.

 It may well have great potential for the early stages of the transhumanist
 transformation.

You may be more interested in microelectrode arrays:
http://heybryan.org/~bbishop/docs/neuro/

Those documents describe the design and implementation of the MEAs for
wireless, invasive brain stimulation. There's also some stuff on that
server about noninvasive magnetic stimulation techniques, but
personally I'm a fan of the 5 MB/sec speeds promised by JPL from half
a decade ago. It was only in 1998 that we were doing 32 KB/sec via
wireless telemetry through the skull, but the speeds are increasing.

- Bryan
http://heybryan.org/
1 512 203 0507




Re: [agi] First issue of H+ magazine ... http://hplusmagazine.com/

2008-10-17 Thread Bryan Bishop
On Fri, Oct 17, 2008 at 4:10 AM, Bob Mottram [EMAIL PROTECTED] wrote:
 Open source robotics may eventually occur, but I think it will require
 some common and relatively affordable platforms.  It becomes much
 easier to usefully share code when you're dealing with the same
 hardware (or at least compatible hardware).

Bob, it's already happening behind your back, and I'm not talking
about iCub. While platform standardization is important, there are other
things you can do, like writing cross-platform-compatible applications
and compilers, or rounding up all of the GPLed source files that are
scattered across the web -- software that people have released but that
nobody has ever really collected.

- Bryan
http://heybryan.org/




Re: [agi] universal logical form for natural language

2008-09-30 Thread Bryan Bishop
On Tuesday 30 September 2008, YKY (Yan King Yin) wrote:
 Yeah, and I'm designing a voting system of virtual credits for
 working collaboratively on the project...

Write a plugin to cvs, svn, git, or some other.
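For instance, a minimal git version could be a post-commit hook that
tallies credits per author -- a sketch only; the credit formula and the
credits.json ledger are made up for illustration, not anything you
specified:

#!/usr/bin/env python
# Sketch of a git post-commit hook awarding "virtual credits" per author.
# The credit formula and the credits.json ledger are illustrative
# assumptions; save as .git/hooks/post-commit and mark it executable.
import json
import re
import subprocess

LEDGER = "credits.json"

author = subprocess.check_output(
    ["git", "log", "-1", "--format=%an"]).decode().strip()
stat = subprocess.check_output(
    ["git", "log", "-1", "--shortstat", "--format="]).decode()
m = re.search(r"(\d+) insertion", stat)
insertions = int(m.group(1)) if m else 0

try:
    with open(LEDGER) as f:
        credits = json.load(f)
except FileNotFoundError:
    credits = {}

credits[author] = credits.get(author, 0) + 1 + insertions  # made-up formula
with open(LEDGER, "w") as f:
    json.dump(credits, f, indent=2)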

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] NLP? Nope. NLU? Yep!

2008-09-20 Thread Bryan Bishop
On Saturday 20 September 2008, Trent Waddington wrote:
 Hehe, indeed.  Although I'm sure Powerset has some nice little
 relationship links between words, I'm a little skeptical about the
 claim to meaning.  I don't mean that in a philosophical not
 grounded sense.. I'm of the belief that you *could* manually list
 all the relationships between words that a person gathers in a
 lifetime and that might well be something approaching the meaning of
 those words.  No.  Why I'm skeptical is that I'm pretty sure I could
 get a bunch of 5 year old children to tell me everything they can
 about the word truck and I'd still be writing things down after
 weeks of brainstorming.  It's a hefty task enumerating common
 knowledge - let alone the uncommon kind - and preferably one
 shouldn't need to.  After all, no-one lists all that meaning about
 trucks to 5 year olds.

Perhaps an interesting rule of thumb: if you have to list meanings in
written form, then you're doing something wrong.

http://xkcd.com/463/

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Bryan Bishop
On Friday 19 September 2008, BillK wrote:
 Last I heard Peter Norvig was saying that Google had no interest in
 putting a natural language front-end on Google.
 http://slashdot.org/article.pl?sid=07/12/18/1530209

Arguably that's still natural language, even if it's just tags instead
of structured sentences. Right?

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Bryan Bishop
On Friday 19 September 2008, Mike Tintner wrote:
 Your unconscious keeps talking to you. It is precisely paper that
 mainly shapes your thinking about AI. Paper has been the defining
 medium of literate civilisation. And what characterises all literate
 forms is nice, discrete, static, fragmented, crystallised units on
 the page.  Whether linguistic, logical, or mathematical. Words,
 letters and numbers. That was uni-media civilisation.

This is begging for a reference to Project Xanadu.
http://en.wikipedia.org/wiki/Project_Xanadu

 Project Xanadu was the first hypertext project. Founded in 1960 by
 Ted Nelson, the project contrasts its vision with that of paper:
 Today's popular software simulates paper. The World Wide Web
 (another imitation of paper) trivialises our original hypertext model
 with one-way ever-breaking links and no management of version or
 contents.[1] Wired magazine called it the longest-running vaporware
 story in the history of the computer industry. The first attempt at
 implementation began in 1960, but it wasn't until 1998 that an
 implementation (albeit incomplete) was released.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] self organization

2008-09-18 Thread Bryan Bishop
On Wednesday 17 September 2008, Terren Suydam wrote:
 I think a similar case could be made for a lot of large open source
 projects such as Linux itself. However, in this case and others, the
 software itself is the result of a high-level super goal defined by
 one or more humans. Even if no single person is directing the
 subgoals, the supergoal is still well defined by the ostensible aim
 of the software. People who contribute align themselves with that
 supergoal, even if not directed explicitly to do so. So it's not
 exactly self-organized, since the supergoal is conceived when the
 software project was first instantiated and stays constant, for the
 most part.

Hm, that's interesting, because I see just the opposite regarding the
existence of supergoal alignment. What happens is that people write
code, and if people figure out ways to make use of it, they do, and
these uses aren't regulated by some top-down management process.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: Two goals of AGI (was Re: [agi] Re: [OpenCog] Re: Proprietary_Open_Source)

2008-09-18 Thread Bryan Bishop
On Thursday 18 September 2008, Mike Tintner wrote:
 In principle, I'm all for the idea that I think you (and perhaps
 Bryan) have expressed of a GI Assistant - some program that could
 be of general assistance to humans dealing with similar
 problems across many domains. A diagnostics expert, perhaps, that
 could help analyse breakdowns in say, the human body, a car or any of
 many other machines, a building or civil structure, etc. etc. And
 it's certainly an idea worth exploring. 

That's just one of the many projects I have going, however. It's easy
enough to wire it up to a simple perceptron, or a weights-adjustable
additive function, or even physically to a neural tissue culture, for
sorting through the hiss and the noise of 'bad results'. This isn't
your fabled intelligence.
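For concreteness, that "simple perceptron, or weights-adjustable
additive function" is only a few lines -- this is the generic textbook
update rule, not any particular project's code:

# A textbook perceptron -- the generic "weights-adjustable additive
# function" named above, not any particular project's code.
import random

def perceptron_train(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, target) pairs with target in {0, 1}."""
    n = len(samples[0][0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out                      # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# e.g., learn logical OR:
# w, b = perceptron_train([((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)])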

  But I have yet to see any evidence that it is any more viable than a
 proper AGI - because, I suspect, it will run up against the same

It's not aiming to be AGI in the first place though.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] self organization

2008-09-17 Thread Bryan Bishop
On Wednesday 17 September 2008, Terren Suydam wrote:
 OK, how's that different from the collaboration inherent in any human
 project? Can you just explain your viewpoint?

When you have 20,000+ contributors writing software that can very, very
easily break, I think it's an interesting feat to have it managed
effectively. There's no way that we top-down designed this and gave each
of those 20,000 people a separate job to do on a giant todo list; it was
self-organizing. So, since you were mentioning the applicability of such
things to the design of intelligence ... I just thought it was relevant.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] self organization

2008-09-16 Thread Bryan Bishop
On Monday 15 September 2008, Terren Suydam wrote:
 I send this along because it's a great example of how systems that
 self-organize can result in structures and dynamics that are more
 complex and efficient than anything we can purposefully design. The
 applicability to the realm of designed intelligence is obvious.

Have you considered looking into the social dynamics allowed by apt-get
before? It's a complex system, people can fork it or patch it, and it
has resulted in the software running the backbone of the internet. On
the extropian mailing list the other day I mentioned that I have a linux
live cd for building brains -- I call it mind on a disc -- but
unfortunately I'm strapped for time and can only give a partial
download. It's quite an alternative way of going about things, but
people do seem to generally understand (sometimes):
http://p2pfoundation.net/

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] self organization

2008-09-16 Thread Bryan Bishop
On Monday 15 September 2008, Terren Suydam wrote:
 By your argumentation, it would seem you won't find any argument
 about intelligence of worth unless it explains everything. I've never
 understood the strong resistance of many in the AI community to the
 concepts involved with complexity theory, particularly as applied to
 intelligence. It would seem to me to be a promising frontier for
 exploration and gathering insight.

Maybe documenting the resistance can help organize and show the trends.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] self organization

2008-09-16 Thread Bryan Bishop
On Tuesday 16 September 2008, Terren Suydam wrote:
 Not really familiar with apt-get.  How is it a complex system?  It
 looks like it's just a software installation tool.

How many people are writing the software?

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] Non-evolutionary models of recursive self-improvement

2008-09-14 Thread Bryan Bishop
On Sunday 14 September 2008, Dimitry Volfson wrote:
 Well, then I don't understand what you're looking for.
 Brain chemistry is part of the model.

Check out one of the sentences:
 The thalamus in the limbic system ('leopard brain') converts the
 physical need into an urge within the cortex.

So if I shoot a physical need at a thalamus sitting in my lab, it'll
pop out an urge? You're just talking about the output of the
neurons, not the concept of urge that most people talk about from
Webster's etc. -- which is of the mind, not the brain. I'm not saying
that the mind is separate from the brain, I'm just saying that people
are confused and probably wrong when they talk about the mind. They
most often are, having no background in neuroscience, etc.

If you look on the page, you see some implementation details like -
 Wants and needs have to struggle against one another in a priority
 list for action now or later or not at all. The strength of the urge
 is thus important, with strong urges leading to needs that jump the
 queue, demanding immediate action.

I'm a programmer; I know what a list and a queue look like. Show them to
me. Nobody has yet shown neurons doing math, much less implementing a
list object.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] Non-evolutionary models of recursive self-improvement

2008-09-14 Thread Bryan Bishop
On Sunday 14 September 2008, Dimitry Volfson wrote:
 Actually, I remember reading something about scientists finding a
 list structure in the brain of a bird singing a song (a moving
 pointer to the next item in a list sort of thing). But whatever.

That does sound interesting, yes, I'd like to find a citation on it. Do 
you know where I might find that? Was it a magazine, journal, etc.?

 It's not a very low level model, but the lower level activation is
 implied.

How could it be used if it's left unspecified and hand-wavy?

 When you imagine a goal-state, the relationship is represented in the
 brain somehow (in the neurons of course). And when evidence of the

Of course. But how?

 actualization of that goal-state comes in through the senses, the
 brains sends an opiate reward, which might make the person want to do
 whatever that was again in the correct context.

How is it that only one class of molecules correlates with goal-seeking?
This seems suspect considering the complex infrastructure of the brain.

 Motivation circuits - familiar with the concept?

Yes, but only from psychology, not from stuff we can actually build or 
understand.

 If a motivation circuit gets over-energized then a person gets locked
 into doing the same thing over and over again (and not getting the
 goal-state), rather than having enough resources left to think about

Perseveration occurs for many reasons other than being 'over-energized',
though ..

 doing something different and what that different thing should be.
 Does someone need to know exactly how a motivation circuit becomes
 over-energized at the neuronal level in order to model it in an AI? I
 don't think so.

Another illustration of my problem with this line of hypothesizing is
that you're trying to make intelligence, a vague concept in the first
place, out of a foundation of motivation, another somewhat vague
psychology concept. I don't care how many times the mouse hits the
button, etc. Also, I recently cited this:

http://www.ece.utexas.edu/~werner/siren_call.pdf

It's an elaboration of a few of my points here.

 Many things like this are known. And people don't need to understand
 such at the individual-neuron level to model what happens.

No, you miss my point. It's not that I'm saying there's some scale of 
microscopism that we have to climb down (brain, region, tissue slice, 
neuron, axon, subcellular mechanism, ..) to understand things; that's 
not it at all. What I'm talking about is actually considering the
neurons as the physical components that make up the 'brain', which is
the physical location of, supposedly, these 'goal-states'. These
biological systems (brains) are the real things that can be
experimentally tested or perhaps manipulated; on the other hand, the
semantic space of goals, meaning, and motivation is hardly
meaningfully manipulable, even with the WordNet or Cyc relations.

I could randomly generate new designs for experiments using WordNet or
Cyc relations or something, where we observe mice subjected to a battery
of different psychochemical compounds. Then, using WordNet, we could
pull out random labels for each of the behaviors -- maybe it's a goal or
maybe it's a who-knows-what that the creature supposedly intrinsically
has -- and then what? You'd plot the data sets in some multidimensional
manner, maybe with a support vector machine (I'd have to ask some
mathematicians), and then there's a strong likelihood that assigning
these labels to the different phenotypes observed in the experiments is
statistically irrelevant. These same phenotypes are the stuff of folk
psychology as well; the trick is that instead of observing rats, you're
observing people.

Given that scenario, what would I care if it's subatomic or neuron-level 
or whole brain level? So, no, our disagreement is about something else.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A model for RSI

2008-09-14 Thread Bryan Bishop
On Sunday 14 September 2008, Pei Wang wrote:
 There is no guaranteed improvement in an open system.

On this note, somebody suggested I reread Wolfram's NKS, around p. 340,
yesterday. It is around that section of his book that he mentions his
lack of optimism about iteration bringing about 'improvement', or better
solutions from the overall search space. He showed some pretty diagrams
and other factoids to support this position, so if anyone wants to
elaborate on these points about the inability of iteration alone to do
the trick, that's a good place to start .. but iteration seems like
it's going to be a necessary ingredient, even for open systems. (Unless
not? What would the conditions be for it not being so?)
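The failure mode Wolfram gestures at is easy to see with a toy: a pure
iterative hill climber stalls on the first peak it finds. (The landscape
function below is invented for illustration.)

# A greedy hill climber on a made-up two-peaked landscape: iteration
# alone stalls on whichever peak happens to be nearby.
def f(x):
    # small peak near x=2, much taller peak near x=8
    return max(0, 3 - (x - 2) ** 2) + max(0, 10 - (x - 8) ** 2)

def hill_climb(x, step=0.1):
    while True:
        best = max((x, x - step, x + step), key=f)
        if best == x:
            return x          # no neighbor improves: stuck
        x = best

print(hill_climb(0.5))        # ~2.0 -- stalls on the small peak
print(hill_climb(6.5))        # ~8.0 -- only a lucky start finds the tall one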

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] Non-evolutionary models of recursive self-improvement

2008-09-13 Thread Bryan Bishop
On Saturday 13 September 2008, Dimitry Volfson wrote:
 Look at The Brain's Urge System at ChangingMinds.org:
 http://changingminds.org/explanations/brain/urge_system.htm
 Notice that the stimulus can be pure thought. Meaning that a mental
 image of a goal-state can form the basis of urge-desire-action.

No, that's the fictional version of the 'mind', nothing about the actual 
brain.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] Ability to improve ones own efficiency as a measure of intelligence

2008-09-12 Thread Bryan Bishop
On Wednesday 10 September 2008, Rene de Visser wrote:
 Any comments?

Yes. Please look into computational complexity and Big O notation.

http://en.wikipedia.org/wiki/Computational_complexity

Computational complexity theory, as a branch of the theory of 
computation in computer science, investigates the problems related to 
the amounts of resources required for the execution of algorithms 
(e.g., execution time), and the inherent difficulty in providing 
efficient algorithms for specific computational problems.

The space complexity of a problem is a related concept that measures
the amount of space, or memory required by the algorithm. An informal 
analogy would be the amount of scratch paper needed while working out a 
problem with pen and paper. Space complexity is also measured with Big 
O notation.

http://en.wikipedia.org/wiki/Big_O_notation
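To see the notation in action, count the work a linear scan does as n
grows -- a generic illustration, not tied to the thread:

# Counting the comparisons a linear scan makes as n grows -- the O(n)
# cost that Big O notation summarizes.
def linear_search_count(items, target):
    comparisons = 0
    for x in items:           # worst case touches every element: O(n)
        comparisons += 1
        if x == target:
            break
    return comparisons

for n in (1000, 10000, 100000):
    items = list(range(n))
    # searching for the last element shows the count growing linearly:
    print(n, linear_search_count(items, n - 1))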

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] Artificial [Humor ] vs Real Approaches to Information

2008-09-12 Thread Bryan Bishop
On Friday 12 September 2008, Mike Tintner wrote:
 to understand a piece of information and its information objects,
 (eg words) , is to realise (or know) how they refer to real
 objects in the real world, (and, ideally, and often necessarily,  to
 be able to point to and engage with those real objects).

This is usually called sourcing, citations, and so on. It's not enough
to have a citation, though; it's not enough to just have a symbolic
representation of some part of the world beyond you within your system.
You always have to be able to functionally and competently use those
references, citations, or links in some useful manner, otherwise you're
not grounded and you're off in la-la land.

Computers have offered us the chance to encapsulate and manage all of 
these citations (and so on) but in many cases they are citations that 
are limited and crude. Look at the difference between these two 
citations:

Tseng, A. A., Notargiacomo, A. & Chen, T. P. Nanofabrication by scanning
probe microscope lithography: A review. J. Vac. Sci. Tech. B 23, 877–894
(2005).

Compared to:

http://heybryan.org/graphene.html

Both would seem cryptic to any outsider to the scientific literature or
to the web. The first one is formatted variably across the literature,
making OCR very difficult and making it generally a challenge for
researchers to always fetch the citations and refs in papers. Take a
look at my attempts at OCR of bibliographies:

http://heybryan.org/projects/autoscholar/

"Not good" is an accurate summarization. The HTTP string isn't any
better at all, *except* for the fact that DNS servers are widely
implemented: here's how to implement one, here's how the DNS root
servers for the internet work, here's why you can (usually) type in any
URL on the planet and get to the same site (unless you're on some other
NIC of course - but this is very rare). There's a surprising amount of
social context involved in DNS .. which I guess is what you consider to
be the realities that everyone overlooks when they just assign symbols
to many different things; for instance, I bet you don't know what DNS
is, but you know what a dictionary is, even though they refer to more
or less the same functional things (uh, sort of).

Anyway, it's context that matters when it comes to groundtruthing 
citations and traces in information ecologies, and not so much the 
symbolic manipulation thereof. It's the overall groundtruthed process, 
the instantiated exploding von Neumann probe phylum that will 
ultimately (not) grey goo you.
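To make the brittleness concrete, here's a quick sketch (mine, not the
autoscholar code) of a regex fitted to that one citation style. It
technically matches the Tseng reference, but author initials look like
sentence boundaries, so the fields come out carved up wrongly:

# Why parsing citations is brittle: a regex fitted to one "Authors.
# Title. Journal vol, pages (year)." style. It "matches" the example
# below, but mis-assigns the authors/title fields, because author
# initials look like sentence boundaries.
import re

PATTERN = re.compile(
    r"^(?P<authors>.+?)\. (?P<title>.+?)\. (?P<journal>.+?) "
    r"(?P<vol>\d+), (?P<pages>[\d\u2013-]+) \((?P<year>\d{4})\)\.$")

ref = ("Tseng, A. A., Notargiacomo, A. & Chen, T. P. Nanofabrication by "
       "scanning probe microscope lithography: A review. "
       "J. Vac. Sci. Tech. B 23, 877\u2013894 (2005).")

m = PATTERN.match(ref)
print(m.groupdict() if m else "no match")   # mis-assigned authors/title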

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] Non-evolutionary models of recursive self-improvement (was: Ability to improve ones own efficiency as a measure of intelligence)

2008-09-12 Thread Bryan Bishop
On Wednesday 10 September 2008, Matt Mahoney wrote:
 I have asked this list as well as the singularity and SL4 lists
 whether there are any non-evolutionary models (mathematical,
 software, physical, or biological) for recursive self improvement
 (RSI), i.e. where the parent and not the environment decides what the
 goal is and measures progress toward it. But as far as I know, the
 answer is no.

Have you considered resource-constrained situations where parents kill
their young? The runt of the litter or, sometimes, others -- like when a
lion takes over a pride. Mostly in the non-human, non-Chinese portions
of the animal kingdom. (I refer to current events re: China's population
constraints on female offspring, of course.)

Secondly, I'm still wondering about the representation of goals in the
brain. So far, there has been no study showing the neurobiological
basis of 'goal' in the human brain. As far as we know, it's folk
psychology anyway, and it might not be 'true', since there's no hard
physical evidence of the existence of goals. I'm talking about
bottom-up existence, not top-down (top being us humans and our
social contexts and such).

Does RSI have to be measured with respect to goals? Can you prove to me
that there exists no non-goal-oriented improvement methodology? I'm
keeping some possibilities open, as you can guess. I suspect that a
non-goal-oriented improvement function could fit into your thoughts in
the same way that you might hope the goal variant of RSI would.
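For a concrete example of what I mean, here's a toy sketch -- my own
illustration, in the spirit of what's sometimes called novelty search --
where selection rewards behaviors that differ from everything seen so
far, with no target state anywhere:

# Toy non-goal-oriented improvement: selection rewards behaviors that
# differ from everything archived so far; there is no goal state.
import random

def behavior(genome):
    # stand-in "phenotype": a coarse signature of the genome
    return (round(sum(genome), 1), round(max(genome), 1))

def novelty(b, archive):
    if not archive:
        return float("inf")
    dists = sorted(sum((x - y) ** 2 for x, y in zip(b, a)) for a in archive)
    return sum(dists[:5]) / min(5, len(dists))   # mean distance to nearest

archive = []
population = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(20)]
for generation in range(50):
    population.sort(key=lambda g: novelty(behavior(g), archive), reverse=True)
    archive.extend(behavior(g) for g in population[:2])  # keep the novel ones
    parents = population[:10]
    population = [[x + random.gauss(0, 0.1) for x in random.choice(parents)]
                  for _ in range(20)]

print(len(set(archive)))   # the archive keeps growing: change without a goal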

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Thursday 04 September 2008, Mike Tintner wrote:
 You start v. constructively thinking how to test the non-programmed
 nature of  - or simply record - the actual writing of programs, and
 then IMO fail to keep going.

You could trace their keyboard presses back to the cerebellum and motor
cortex, yes, this is true, but this isn't going to be like tracing the
programmer pathway in a brain. You might just end up tracing the
entire brain [which is another project that I fully support, of
course]. You can imagine the signals being traced back to their origins
in the spine and the CNS, like the cerebellum and motor cortex, and then
from the somatosensory cortex that gave them the feedback for debugger
error output (parse error, rawr), etc. You could even spice up the
experimental scenario by tracking different strategies and their
executions in response to bugs, sure.

 Ask them to use the keyboard for everything - (how much do you guys
 use the keyboard vs say paper or other things?) - and you can
 automatically record key-presses.

Right.
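For the key-press recording itself, a few lines suffice; here's a sketch
using the pynput library (my choice of tool, nothing you specified):

# Sketch of the key-press recorder: log each press with a timestamp for
# later analysis of a programmer's work rhythm.
import time
from pynput import keyboard   # pip install pynput

log = open("keystrokes.log", "a")

def on_press(key):
    log.write("%.3f\t%s\n" % (time.time(), key))
    log.flush()

with keyboard.Listener(on_press=on_press) as listener:
    listener.join()           # record until the process is interrupted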

 Hasn't anyone done this in any shape or form? It might sound as if it
 would produce terribly complicated results, but my guess is that they
 would be fascinating just to look at (and compare technique) as well
 as analyse.

I don't think it's sufficient to keep it as offline analysis; here's
why: http://heybryan.org/humancortex.html Basically, wouldn't it be
interesting to have an online/real-time/run-time system for keeping
track of your brain as you program? This would allow for neurofeedback
and some other possibilities.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Friday 05 September 2008, William Pearson wrote:
 2008/9/5 Mike Tintner [EMAIL PROTECTED]:
  By contrast, all deterministic/programmed machines and computers
  are guaranteed to complete any task they begin.

 If only such could be guaranteed! We would never have system hangs or
 deadlocks. Even if it could be made so, computer systems would not
 always want to do so. Have you ever had a programmed computer system
 say to you, This program is not responding, do you wish to terminate
 it? There is no reason in principle why the decision to terminate
 the program couldn't be made automatically.

These errors are computed. Do what I mean, not what I say is a common
phrase thrown around in programming circles. The errors are not because
the ALU suddenly decided not to be present, and they are not because the
machine suddenly lost its status as a Turing machine (although if
you drove a rock through it, this is quite likely). Rather, they are
because you failed to write a good kernel. And yes, the decision to
terminate programs can be made automatically; I sometimes use scripts on
my clusters to kill things that haven't been responding for a certain
amount of time, but usually I prefer to investigate by hand since it's
so rare.
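Those watchdog scripts are roughly this shape (a generic sketch, not my
actual cluster scripts; Linux-only, since it reads /proc):

# Generic watchdog sketch: kill any process whose CPU clock stops
# advancing for longer than the timeout, i.e. a process that looks hung.
import os
import signal
import time

def watchdog(pids, timeout=600, poll=30):
    last_cpu, last_change = {}, {}
    while pids:
        now = time.time()
        for pid in list(pids):
            try:
                with open("/proc/%d/stat" % pid) as f:
                    data = f.read()
                # split after the ")" so spaces in the command name
                # can't shift the fields
                fields = data[data.rindex(")") + 2:].split()
                cpu = int(fields[11]) + int(fields[12])  # utime + stime
            except OSError:                # process already exited
                pids.discard(pid)
                continue
            if cpu != last_cpu.get(pid):
                last_cpu[pid], last_change[pid] = cpu, now
            elif now - last_change[pid] > timeout:
                os.kill(pid, signal.SIGTERM)   # no progress: put it down
                pids.discard(pid)
        time.sleep(poll)

# watchdog({12345, 12346})   # hypothetical worker PIDs, passed as a set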

  Very different kinds of machines to us. Very different paradigm.
  (No?)

 We commonly talk about single program systems because they are
 generally interesting, and can be analysed simply. My discussion on
 self-modifying systems ignored the interrupt driven multi-tasking
 nature of the system I want to build, because that makes analysis a
 lot more hard. I will still be building an interrupt driven, multi
 tasking system.

That's an interesting proposal, but I'm wondering about something.
Suppose you have a cluster of processors, and they are all
communicating with each other in some way to divide up tasks and
compute away. Now, given the ability to send interrupts to one
another, and given the linear nature of each individual unit, is it
really multitasking? At some point it has to integrate all of the
results together at a single node, writing to a single address on
the hdd (or something) so that the results are in one single place --
either that, or the function that reads the results must do the
integration. Is it really multi-tasking and parallel then?
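In code, the shape I'm describing is basically scatter/gather -- the map
phase runs in parallel, but the final write is serialized at one place.
A minimal local sketch:

# Minimal scatter/gather: the map phase runs in parallel across workers,
# and only the final write is serialized at a single place.
from multiprocessing import Pool

def work(chunk):
    return sum(x * x for x in chunk)        # stand-in for a real task

if __name__ == "__main__":
    chunks = [range(i * 1000, (i + 1) * 1000) for i in range(8)]
    with Pool(4) as pool:
        partials = pool.map(work, chunks)   # the genuinely parallel part
    with open("result.txt", "w") as f:      # single-writer integration step
        f.write(str(sum(partials)))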

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Friday 05 September 2008, Mike Tintner wrote:
 Were your computer like a human mind, it would have been able to say
 (as you/we all do) - well if that part of the problem is going to be
 difficult, I'll ignore it  or.. I'll just make up an answer... or
 by God I'll keep trying other ways until I do solve this.. or...
 ..  or ... Computers, currently, aren't free thinkers.

I'm pretty sure that compiler optimizers, which go in and look at your
loops and other computational elements of a program, are able to make
assessments like that. Of course, they'll just leave the code as it is
instead of completely ignoring parts of the program that you wish to
compile, but it does seem similar. I recently came across an
evolutionary optimizer for compilers that tests parameters to gcc to try
to figure out the best way to compile a program on a certain
architecture (learning all of the gcc parameters yourself seems
impossible sometimes, you see). Perhaps there's some evolved laziness in
the human brain that could be modeled with gcc easily enough.
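The flag-search idea looks roughly like this -- a much cruder
random-search sketch than the real evolutionary tools, with a made-up
flag subset and an assumed bench.c to compile:

# Crude random search over gcc flags. The flag list is a small subset of
# real gcc options; bench.c is an assumed benchmark program.
import random
import subprocess
import time

FLAGS = ["-O2", "-O3", "-funroll-loops", "-fomit-frame-pointer",
         "-ffast-math", "-march=native"]

def trial(flags):
    subprocess.check_call(["gcc", *flags, "bench.c", "-o", "bench"])
    start = time.perf_counter()
    subprocess.check_call(["./bench"])
    return time.perf_counter() - start

best = (float("inf"), [])
for _ in range(20):
    flags = [f for f in FLAGS if random.random() < 0.5]
    best = min(best, (trial(flags), flags))
print(best)                    # fastest runtime and the flags that got it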

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Friday 05 September 2008, Mike Tintner wrote:
 fundamental programming problem, right?) A creative free machine,
 like a human, really can follow any of what may be a vast range of
 routes - and you really can't predict what it will do or, at a basic
 level, be surprised by it.

What do you say to the brain simulation projects? There is a biophysical
basis to the brain, and it's being discovered and hammered out. You can,
in fact, predict the results of the rabbit eye-blink experiments (I'm
working with a lab on this -- the simulations return results faster than
the real neurons do in the lab; you can imagine how this is useful for
hypothesis-testing purposes).

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Saturday 06 September 2008, William Pearson wrote:
 I'm very interested in computers that self-maintain, that is reduce
 (or eliminate) the need for a human to be in the loop or know much
 about the internal workings of the computer. However, it doesn't need
 a vastly different computing paradigm; it just needs a different way
 of thinking about the systems. E.g. how can you design a system that
 does not need a human around to fix mistakes, upgrade it or maintain
 it in general.

Yes, these systems are interesting. I can easily imagine a system that
generates systems that have low human-maintenance costs. But suppose
that the system you make generates a system (with that low
human-maintenance cost), and this 2nd-gen system does it again and
again. This is the problem of clanking replicators too -- you need to
have some way to correct divergence and errors of replication; and not
only that, but as you go into new environments there are new things
that have to be taken into account for maintenance.

Bacteria solve this problem by having many billions of cells per
culture and enough genetic variability to somehow scrounge up a partial
solution in time -- so that once you get to the Nth generation you're
not screwed entirely if some change occurs in the environment. There
was a recent experiment in the news that has been going for 20 years:
the Michigan researcher who has run bacterial selection experiments in
flasks for the past 20 years, only to find that the bacteria evolved an
ability to metabolize something they didn't metabolize before. That's
an example of being able to work in new environments, and there's a lot
of cost to it (dead bacteria, many generations, etc.) that silicon
projects can't quite pay, simply because of resource/cost constraints
if you use traditional approaches.

What would an alternative approach look like? One where you don't need
dead silicon projects, and one where you have enough instances of
programs that you're able to find a solution with your genetic
algorithm in enough time? The increasing availability of RAM and hdd
space might be enough to let us bruteforce it, but the embodiment of
bacteria in the problem domains is something that more-memory
strategies don't quite address. Thoughts?
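To put the bacteria point in code: keep a population of replicators, let
copying be noisy, and get robustness from redundancy plus selection
rather than from perfect copies. A toy model (all numbers invented):

# Toy model of the bacteria strategy: copying is noisy, and robustness
# comes from population redundancy plus selection, not perfect copies.
import random

TARGET = [1] * 32                       # stand-in "working" genome

def fitness(g):
    return sum(a == b for a, b in zip(g, TARGET))

def replicate(g, rate=0.02):            # each bit may flip on copying
    return [(1 - b) if random.random() < rate else b for b in g]

pop = [[random.randint(0, 1) for _ in range(32)] for _ in range(200)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:50]                # most die; redundancy absorbs errors
    pop = [replicate(random.choice(survivors)) for _ in range(200)]

print(max(fitness(g) for g in pop), "/ 32")   # stays near 32 despite noise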

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Saturday 06 September 2008, Mike Tintner wrote:
 Our unreliability is the negative flip-side of our positive ability
 to stop an activity at any point, incl. the beginning, and completely
 change tack/course or whole approach, incl. the task itself, and
 even completely contradict ourselves.

But this is starting to get into an odd mix of folk psychology. I was
reading an excellent paper the other day, by Gerhard Werner, that makes
this point very plainly:

The Siren Call of metaphor: Subverting the proper task of Neuroscience
http://www.ece.utexas.edu/~werner/siren_call.pdf

 The case of Neuro-Psychological vs. Naturalistic Neuroscience.
 For grounding the argument, let us look at the case of
 ‘deciding to’ [34] in studies of conditioned motor behavior in
 monkeys, on which there is a rich harvest of imaginative experimental
 work on scholarly reviews available. I write this in profound respect
 for the investigators who conduct this work with immense ingenuity
 and sophistication. However, I question the soundness of the
 conceptual framework on which such experiments are predicated,
 observations are interpreted, and conclusions are formulated. I
 contend that current practices tend to disregard genuine issues in
 Neurophysiology with its own definitions of what legitimate
 propositions and criteria of valid statements in this discipline are.

  Here is the typical experimental protocol: the experimenter
 uses some measure of neural activity of his/her choice (usual neural
 spike discharges), recorded from a neural structure (selected by
 him/her on some criterion, and determines relations to behavior that
 he/she created as link between two events: an antecedent stimulus (
 chosen by him/her) and a consequent, arbitrary behavior, induced by
 the training protocol [49]. So far, the experimenter has done all the
 ‘deciding’, except leaving it up to the monkey to assign a “value” to
 complying with the experimental protocol. Different investigators
 summarize their experimental objective in various ways (in the
 interest of brevity, I slightly paraphrase, though being careful to
 preserving the original sense): to characterize neural computations
 representing the formation of perceptual decision [12]; to
 investigate the neural basis of a decision process [37]; to examine
 the coupling of neural processes of stimulus selection with response
 preparation [34], reflecting connections between motor system and
 cognitive processes [38] ; to assess neural activity indicating
 probabilistic reward anticipation [22,27]. In Shadlen and Newsome’s
 [37] evocative analogy “it is a jury’s deliberation in which sensory
 signals are the evidence in open court, and motor signals the jury’s
 verdict”. Helpful as metaphors and analogies can be as interim steps
 for making sense of the observation in familiar terms, they also
 import the conceptual burden of their source domain and lead us to
 attribute to the animal a decision and choice making capacity along
 principles for which Psychology has developed evidential and
 conceptual accounts in humans under entirely different conditions,
 and based on different observational facts. Nevertheless, armed with
 the metaphors of choice and decision, we assert that the observed
 neural activity is a “correlate” [19] of a decision to emit the
 observed behavior. As the preceding citations indicate, the observed
 neural activity is variously attributed to perceptual discrimination
 between competing (or conflicting) stimuli, to motor planning, or to
 reward anticipation; the implication being that the neural activity
 stands for (“represents”) one or the other of these psychological
 categories.

So, Mike, when you write things like:
 Our unreliability is the negative flip-side of our positive ability
 to stop an activity at any point, incl. the beginning, and completely
 change tack/course or whole approach, incl. the task itself, and
 even completely contradict ourselves.

It makes me wonder how you can assert a neurophysical basis for the
existence of 'task' in terms of the *brain*, rather than in terms of
our folk psychology and the collective cultural background that has
given these names to these things. It's hard to talk about the brain
from the biology up, yes, that's true, but it's also very rewarding in
that we don't make top-down misunderstandings.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-07 Thread Bryan Bishop
On Friday 05 September 2008, Terren Suydam wrote:
 So, Mike, is free will:

 1) an illusion based on some kind of unpredictable, complex but
 *deterministic* interaction of physical components
 2) the result of probabilistic physics - a *non-deterministic*
 interaction described by something like quantum mechanics
 3) the expression of our god-given spirit, or some other non-physical
 mover of physical things

I've already mentioned an alternative on this mailing list that you
haven't included in your question; would you consider it?
http://heybryan.org/free_will.html
^ Just so that I don't have to keep rewriting it over and over again.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] Recursive self-change: some definitions

2008-09-07 Thread Bryan Bishop
On Thursday 04 September 2008, Mike Tintner wrote:
 Bryan,

 How do you know the brain has a code? Why can't it be entirely
 impression-istic - a system for literally forming, storing and
 associating sensory impressions (including abstracted, simplified,
 hierarchical impressions of other impressions)?

 1). FWIW some comments from a cortically knowledgeable robotics
 friend:

 The issue mentioned below is a major factor for die-hard
 card-carrying Turing-istas, and to me is also their greatest
 stumbling-block.

 You called it a code, but I see computation basically involves
 setting up a model or description of something, but many people
 think this is actually synonymous with the real thing. It's not,
 but many people are in denial about this. All models involve tons of
 simplifying assumptions.

 EG, XXX is adamant that the visual cortex performs sparse-coded
 [whatever that means] wavelet transforms, and not edge-detection. To
 me, a wavelet transform is just one possible - and extremely
 simplistic (meaning subject to myriad assumptions) - mathematical
 description of how some cells in the VC appear to operate.

No, this is just a confusion of terminologies. I most certainly was not 
talking about 'code' in the sense of sparse-coded wavelet transform. 
I'm talking about code in the sense of source code. Sorry.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] open models, closed models, priors

2008-09-07 Thread Bryan Bishop
On Thursday 04 September 2008, Matt Mahoney wrote:
 Yes you do. Every time you make a decision, you are assigning a
 higher probability of a good outcome to your choice than to the
 alternative.

You'll have to prove to me that I make decisions, whatever that means.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] open models, closed models, priors

2008-09-07 Thread Bryan Bishop
On Sunday 07 September 2008, Matt Mahoney wrote:
 Depends on what you mean by I.

You started it - your first message had that dependency on identity. :-)

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Wednesday 03 September 2008, Mike Tintner wrote:
 And as a matter of scientific, historical fact, computers are first
 and foremost keyboards - i.e. devices for CREATING programs on
 keyboards - and only then following them. [Remember how AI gets
 almost everything about intelligence back to front?] There is not and
 never has been a program that wasn't first created on a keyboard.
 Indisputable fact. Almost everything that happens in computers
 happens via the keyboard.

http://heybryan.org/mediawiki/index.php/Egan_quote

 So what exactly is a keyboard? Well, like all keyboards whether of
 computers, musical instruments or typewriters, it is a creative
 instrument. And what makes it creative is that it is - you could say
 - an organiser.

Then you're starting to get into (some well needed) complexity science.

 A device with certain organs (in this case keys) that are designed
 to be creatively organised - arranged in creative, improvised (rather
 than programmed) sequences of action/association/organ play.

Yes, but the genotype isn't the phenotype, and the translation from
the 'code' -- the intentions of the programmer and so on -- to its
expression is 'hard'. People get so caught up in folk psychology that
it's maddening.

 And an extension of the body. Of the organism. All organisms are
 organisers - devices for creatively sequencing actions/associations/
 organs/nervous systems first and developing fixed, orderly sequences/
 routines/programs second.

Some (I) say that neural systems are somewhat like optimizers, which are 
heavily used in compilers that are compiling your programs anyway, so 
be careful: the difference might not be that broad.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Terren Suydam wrote:
 Thus is creativity possible while preserving determinism. Of course,
 you still need to have an explanation for how creativity emerges in
 either case, but in contrast to what you said before, some AI folks
 have indeed worked on this issue.

http://heybryan.org/mediawiki/index.php/Egan_quote 

Egan solved that particular problem. It's about creation -- even if you 
have the most advanced mathematical theory of the universe, you just 
made it slightly more recursive and so on just by shuffling around 
neurotransmitters in your head.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] Recursive self-change: some definitions

2008-09-04 Thread Bryan Bishop
On Wednesday 03 September 2008, Mike Tintner wrote:
 I think this is a good important point. I've been groping confusedly
 here. It seems to me computation necessarily involves the idea of
 using a code (?). But the nervous system seems to me something
 capable of functioning without a code - directly being imprinted on
 by the world, and directly forming movements, (even if also involving
 complex hierarchical processes), without any code. I've been
 wondering whether computers couldn't also be designed to function
 without a code in somewhat similar fashion.  Any thoughts or ideas of
 your own?

Hold on there -- the brain most certainly has a code, if you will
remember gene expression and the general neurophysical nature of it
all. I think part of the difference you might be seeing here is how
much more complex and grown the brain is in comparison to somewhat
fragile circuits, plus the ecological differences between the WWW and
the combined evolutionary history keeping your neurons healthy each day.

Anyway, because of the quantized nature of energy in general, the brain
must be doing something physical and operating on a code, i.e. it must
have an actual nature to it. I would like to see alternatives to this
line of reasoning, of course.

As for computers that don't have to be executing code all of the time:
I've been wondering about machines that could also imitate the
biological ability to recover from errors, and not spontaneously burst
into flames when something goes wrong in the Source. Clearly there's
something of interest here.

- Bryan
who has gone 36 hours without sleep. Why am I here?

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Mike Tintner wrote:
 And what I am asserting is a  paradigm of a creative machine, which
 starts as, and is, NON-algorithmic and UNstructured  in all its
 activities, albeit that it acquires and creates a multitude of
 algorithms, or
 routines/structures, for *parts* of those  activities. For example,
 when you write a post,  nearly every word and a great many phrases
 and even odd sentences, will be automatically, algorithmically
 produced. But the whole post, and most paras will *not* be - and
 *could not* be.

Here's an alternative formulation for you to play with, Mike. I suspect
it is still possible to consider it a creative machine even with an
algorithmic basis, *because* it is the nature of reality itself to
compute these things; nothing can have as much information about the
moment as the moment itself, and thus there's still this element of
stochasticity and creativity that we see, even if we say that the brain
is deterministic and so on.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Wednesday 03 September 2008, Mike Tintner wrote:
 And how to produce creativity is the central problem of AGI -
 completely unsolved.  So maybe a new approach/paradigm is worth at
 least considering rather than more of the same? I'm not aware of a
 single idea from any AGI-er past or present that directly addresses
 that problem - are you?

Mike, one of the big problems in computer science is the prediction of
phenotypes from genotypes in general problem spaces. So far, from what
I've learned, we don't have a way to guarantee that a resulting process
is going to be creative. So it's not going to be solved per se in the
traditional sense of hey look, here's a foolproof equivalency of
creativity. I truly hope I am wrong. This is a good way to be wrong
about the whole thing, I must admit.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Mike Tintner wrote:
 Do you honestly think that you write programs in a programmed way?
 That it's not an *art* pace Matt, full of hesitation, halts,
 meandering, twists and turns, dead ends, detours etc?  If you have
 to have some sort of program to start with, how come there is no
 sign  of that being true, in the creative process of programmers
 actually writing programs?

Two notes on this one. 

I'd like to see fMRI studies of programmers having at it. I've seen 
this done with authors, but not with programmers per se. It would be 
interesting. Then again, it probably wouldn't work: it would just show 
you lots of active regions of the brain, and what good does that do 
you?

Another thing I would be interested in showing people is all of the 
dead ends and turns one makes when traveling down those paths. I've 
sometimes gone fully into a recording session where I could spend hours 
afterwards writing about a few minutes of decisions, but it's just not 
an efficient way to get the point across. I've sometimes wanted to do 
this for web crawling, tracking at least some of my jumps from page to 
page as I browse and read, or even doing it in my own grammar and 
writing so that I can make sure I optimize it :-) and see where I was 
or wasn't going :-). But any solution that requires me to type even 
/more/ will be a sort of contradiction, since then I will have to type 
even more, and more.
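
The sort of passive tracking I have in mind, sketched here with made-up 
page names; the point is that the logging is a side effect of the visit 
rather than extra typing:

    import time

    history = []  # (timestamp, from_page, to_page)

    def visit(from_page, to_page):
        # Record the jump as a side effect; no extra typing required.
        history.append((time.time(), from_page, to_page))
        # ... actually fetch and display to_page here ...

    visit(None, "start")
    visit("start", "some_article")
    visit("some_article", "dead_end")

    for stamp, src, dst in history:
        print(stamp, src, "->", dst)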

Bah, untapped data in the brain should help with this stuff. Tabletop 
fMRI and EROS and so on. Fun stuff. Neurobiofeedback.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] A NewMetaphor for Intelligence - the Computer/Organiser

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Valentina Poletti wrote:
 When we want to step further and create an AGI I think we want to
 externalize the very ability to create technology - we want the
 environment to start adapting to us by itself, spontaneously by
 gaining our goals.

There is a sense of resilience in the whole scheme of things. It's not 
hard to show how stupid each one of us can be in a single moment; but 
our stupid decisions don't [often] blow us up, and that's not so much 
luck as it is resilience. In an earlier email, to which I replied 
today, Mike was looking for a resilient computer that didn't need 
code.

On another note: goals are an interesting folk-psychology mechanism. 
I've seen other cultures project their own goals onto their 
environment, much as the brain contains a map of the skin for sensory 
representation; the environment gets mapped to their own goals and 
aspirations in life. What alternatives to goals could you use when 
doing programming? Otherwise you'll not end up with Mike's requested 
'resilient computer', as I'm calling it.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] open models, closed models, priors

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Matt Mahoney wrote:
 A closed model is unrealistic, but an open model is even more
 unrealistic because you lack a means of assigning likelihoods to
 statements like the sun will rise tomorrow or the world will end
 tomorrow. You absolutely must have a means of guessing probabilities
 to do anything at all in the real world.

I don't assign or guess probabilities and I seem to get things done. 
What gives?

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] draft for comment

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Matt Mahoney wrote:
 Another aspect of embodiment (as the term is commonly used), is the
 false appearance of intelligence. We associate intelligence with
 humans, given that there are no other examples. So giving an AI a
 face or a robotic body modeled after a human can bias people to
 believe there is more intelligence than is actually present.

I'm still waiting for you guys to show me a psychometric test that has 
a one-to-one correlation with the bioinformatics and neuroinformatics, 
and thus could be approached with a physical model down at the level of 
biophysics. Otherwise the 'false appearance of intelligence' is a 
truism: intelligence is false. What then? (Would you give up making 
brains and such systems? I'm just wondering. It's an interesting 
scenario.)

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] open models, closed models, priors

2008-09-04 Thread Bryan Bishop
On Thursday 04 September 2008, Abram Demski wrote:
 My intention here is that there is a basic level of well-defined,
 crisp models which probabilities act upon; so in actuality the
 system will never be using a single model, open or closed...

I think Mike's model is one more of approach, creativity, and action, 
rather than a formalized system existing in some quasi-state between 
open and closed. I'm not sure the epistemologies are meshing here.

Hrm.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Bryan Bishop
On Friday 27 June 2008, Richard Loosemore wrote:
 Pardon my fury, but the problem is understanding HOW TO DO IT, and
 HOW TO BUILD THE TOOLS TO DO IT, not having expensive hardware.  So
 long as some people on this list repeat this mistake, this list will
 degenerate even further into obsolescence.

I am working on this issue, but it will not look like ai from your 
perspective. It is, in a sense, ai. Here's the tool approach:

http://heybryan.org/buildingbrains.html
http://heybryan.org/exp.html

Sort of.

- Bryan

http://heybryan.org/




Re: [agi] Cognitive Neuropsychology

2008-05-08 Thread Bryan Bishop
On Thu, May 8, 2008 at 3:21 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 It oughtn't to be all neuro- though. There is a need for some kind of
 corporate science - that studies whole body simulation and not just the
 cerebral end,.After all, a lot of the simulations being talked about are v.
 definitely whole body affairs. You're playing the football match you watch
 and reacting with your whole body. In fact, I wonder whether any simulations
 aren't. It shouldn't be too hard to set up some kind of whole body studies.
 Know of anyone thinking along these lines? Has to come soon.

http://heybryan.org/mediawiki/index.php/Henry_Markram
for the brain. Now for the rest of the body.

http://sbml.org/

- Bryan



Re: [agi] Accidental Genius

2008-05-08 Thread Bryan Bishop
On Thu, May 8, 2008 at 10:02 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
 Anyhow it is very interesting.  Perhaps savantism is an attention mechanism
 disorder?  Like, too much attention.

Yes.

Autism is a devastating neurodevelopmental disorder with a
polygenetic predisposition that seems to be triggered by multiple
environmental factors during embryonic and/or early postnatal life. While
significant advances have been made in identifying the neuronal
structures and cells affected, a unifying theory that could explain
the manifold autistic symptoms has still not emerged. Based on recent
synaptic, cellular, molecular, microcircuit, and behavioral results
obtained with the valproic acid (VPA) rat model of autism, we propose
here a unifying hypothesis where the core pathology of the autistic
brain is hyper-reactivity and hyper-plasticity of local neuronal
circuits. Such excessive neuronal processing in circumscribed circuits
is suggested to lead to hyper-perception, hyper-attention, and
hyper-memory, which may lie at the heart of most autistic symptoms. In
this view, the autistic spectrum are disorders of hyper-functionality,
which turns debilitating, as opposed to disorders of
hypo-functionality, as is often assumed. We discuss how excessive
neuronal processing may render the world painfully intense when the
neocortex is affected and even aversive when the amygdala is affected,
leading to social and environmental withdrawal. Excessive neuronal
learning is also hypothesized to rapidly lock down the individual into
a small repertoire of secure behavioral routines that are obsessively
repeated. We further discuss the key autistic neuropathologies and
several of the main theories of autism and re-interpret them in the
light of the hypothesized Intense World Syndrome.

http://heybryan.org/intense_world_syndrome.html

See also the last email I sent out on this subject:
http://heybryan.org/pipermail/hplusroadmap/2008-May/000466.html

- Bryan



Re: [agi] BMI/BCI Growing Fast

2007-12-23 Thread Bryan Bishop
On Saturday 22 December 2007, Philip Goetz wrote:
 If we define mindreading as knowing whether someone is telling the
 truth, whether someone likes you, or is sexually attracted to you, or
 recognizes you; knowing whether someone is paying attention; knowing
 whether someone is reasoning logically or being controlled by
 emotions

The entire idea of mindreading is peculiar. Haven't you ever had a 
moment when you've wondered whether you like somebody? When you realize 
that such simple separations just don't apply, that you can't even read 
your own mind in that regard? The idea that everybody must have a 
solid, readable opinion, expressed in certain detectable 
characteristics, sounds like a step away from creativity and 
intelligence.

- Bryan



Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Bryan Bishop
On Friday 14 December 2007, Benjamin Goertzel wrote:
 But, we're still quite clueless about how to, say, hook the brain up
 to a calculator or to Google in a useful way... due to having a
 vastly insufficiently detailed knowledge of how the brain carries out
 cognitive operations...

While overall I agree with this statement, I also have to point to a 
recent news article (check transhumantech for a copy) where a team was 
getting some good results from 41 neurons for talking. Think Google 
voice search. The problem at that point is feeding the data back in.

Back on topic for AGI: we need mind-bots and mind-agents. That's what 
I'm planning on working on.

- Bryan



Re: [agi] BMI/BCI Growing Fast

2007-12-14 Thread Bryan Bishop
On Friday 14 December 2007, Mike Dougherty wrote:
 Are there any efforts at using Nootropic drugs in a 'brain
 enhancement race' ?  I haven't heard about it, but then I wouldn't
 because the program would be kept secret.

There might be one behind the scenes. *cough*

- Bryan


Re: [agi] CyberLover passing Turing Test

2007-12-12 Thread Bryan Bishop
On Wednesday 12 December 2007, Dennis Gorelik wrote:
 In my taste, testing with clueless judges is more appropriate
 approach. It makes test less biased.

How can they judge when they don't know what they are judging? Surely, 
when they hang out for some cyberlovin', they are not scanning for 
intelligence. Our mostly in-bred stupidity is evidence.

- Bryan



Re: [agi] Worst case scenario

2007-12-11 Thread Bryan Bishop
On Tuesday 11 December 2007, Matt Mahoney wrote:
 --- Bryan Bishop [EMAIL PROTECTED] wrote:
  Re: how much computing power is needed for ai. My worst-case
  scenario accounts for nearly any finite computing power, via the
  production of semiconductant silicon wafer tech.

 A human brain sized neural network requires about 10^15 bits of
 memory and 10^16 operations per second.  The Internet already has
 enough computing power to simulate a few thousand brains.  The

Yes, but how much of that computing power is accessible to you? Probably 
very little at the moment, and even if you had the penetration of the 
likes of YouTube and other massive websites, you're still only getting 
a fraction of the computational power of the internet. Again, 
worst-case: we have to make our own factories. 

 threshold for a singularity is to surpass the collective intelligence
 of all 10^10 human brains on Earth.

I am not so sure that the goal of making ai is the same as making a 
singularity. But this is probably less relevant.

 Moore's law allows you to estimate when this will happen, but keep in

Or you can make it happen yourself. Make your own fabs. Get the 
computer nodes you need. Write the software to take advantage of 
millions of nodes all at once, etc. 
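
If you would rather estimate than build, here is a back-of-envelope 
sketch using Matt's figures above; the current aggregate supply and 
the doubling time are guesses on my part:

    import math

    ops_per_brain = 1e16
    brains = 1e10
    target = ops_per_brain * brains      # 1e26 ops/sec collectively

    current = 1e15        # assumed aggregate ops/sec available today
    doubling_years = 1.5  # assumed Moore's-law doubling time

    doublings = math.log2(target / current)
    print(f"{doublings:.1f} doublings, ~{doublings * doubling_years:.0f} years")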

- Bryan



Re: [agi] CyberLover passing Turing Test

2007-12-11 Thread Bryan Bishop
On Tuesday 11 December 2007, Dennis Gorelik wrote:
 If CyberLover works as described, it will qualify as one of the first
 computer programs ever written that is actually passing the Turing
 Test.

I thought the Turing Test involved fooling/convincing judges, not 
clueless men hoping to get some action?

- Bryan



Re: [agi] Worst case scenario

2007-12-10 Thread Bryan Bishop
On Monday 10 December 2007, Matt Mahoney wrote:
 The worst case scenario is that AI wipes out all life on earth, and
 then itself, although I believe at least the AI is likely to survive.

http://lifeboat.com/ex/ai.shield

Re: how much computing power is needed for ai. My worst-case scenario 
accounts for nearly any finite computing power, via the production of 
semiconducting silicon wafer tech. Now, if the growth rate in the 
number of nodes is too low, we may have to start making factories that 
build factories that build factories, and so on, which would 
exponentially increase the rate of production of computational nodes; 
and supposedly there is in fact some finite limit of computational 
brute force required, yes?
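
A toy model of factories that build factories, with every constant 
invented purely for illustration:

    # Each year the factories double themselves and also turn out
    # compute nodes. All the numbers here are made up.
    factories, nodes, year = 1, 0.0, 0
    nodes_per_factory_per_year = 1e6
    target_nodes = 1e12

    while nodes < target_nodes:
        factories *= 2  # replication phase
        nodes += factories * nodes_per_factory_per_year
        year += 1

    print(year, "years to", f"{nodes:.2e}", "nodes")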

- Bryan



Re: [agi] AGI and Deity

2007-12-09 Thread Bryan Bishop
On Sunday 09 December 2007, Mark Waser wrote:
 Pascal's wager starts with the false assumption that belief in a
 deity has no cost.

Formally, yes. However, I think it's easy to imagine a Pascal's wager 
where we replace deity with anything Truly Objective, such as 
whatever it is that we hope the sciences are asymptotically approaching 
with increasingly accurate models. And then you say: what if there is 
another Truly Objective cost to believing in this Truly Objective 
reality (or something)? I think that's an argument that has been well 
debated before, yes? Mutually exclusive propositions, yes?

- Bryan



[agi] Worst case scenario

2007-12-07 Thread Bryan Bishop

Here's the worst-case scenario I see for ai: that the hardware 
complexity required is so great that nobody is going to be able to get 
the initial push. Indeed, there's Moore's law to take account of, but 
the economics might just prevent us from accumulating enough nodes, 
enough connections, and so on.

So, worst case, maybe some gazillionaire will have to purchase/make his 
own semiconductor manufacturing facility and have it completely devoted 
to building additional microprocessors to add to a giant cluster, 
supercomputer, or computation cloud, whatever you want to call it.

A first step on the way to such a setup might be purchasing 
supercomputer time and trying to wire up a few different supers, then 
trying to see if even a percentage of the predicted computational power 
yields results remotely resembling ai.

Over time, ai will improve, and so the semiconductor facility can 
recover costs by hosting a very large digital work force; but this is 
all or nothing, so what arguments might there be to persuade a 
gazillionaire into doing this?

- Bryan



Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Bryan Bishop
On Friday 07 December 2007, Mike Tintner wrote:
 P.S. You also don't answer my question re: how many neurons  in total
 *can* be activated within a half second, or given period, to work on
 a given problem - given their relative slowness of communication? Is
 it indeed possible for hundreds of millions of messages about that
 one subject to be passed among millions of neurons in that short
 space (dunno-just asking)? Or did you pluck that figure out of the
 air?

I suppose that the number of neurons working on a problem at a given 
moment will have to expand exponentially, based on the number of 
synaptic connections per neuron as well as the number of hits/misses 
among the neurons receiving the signals, viewed as an expanding sphere 
in the brain, like a light cone (it's of course a cone/sphere of 
neural activity, not light). I am sure this rate can be made into a 
model.
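
A minimal sketch of that model, with the fan-out and the firing 
probability invented for illustration:

    # Toy model: activation spreads like an expanding sphere.
    # fanout = synapses per neuron; p_fire = chance a target fires.
    fanout, p_fire = 7000, 0.01
    total_neurons = 1e11
    steps = 50  # ~10 ms per synaptic hop over half a second (assumed)

    active = 1.0
    for step in range(steps):
        active = min(active * fanout * p_fire, total_neurons)

    print(f"{active:.2e} neurons active after 0.5 s")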

- Bryan


Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Bryan Bishop
On Sunday 02 December 2007, John G. Rose wrote:
 Building up parse trees and word sense models, let's say that would
 be a first step. And then say after a while this was accomplished and
 running on some peers. What would the next theoretical step be?

I am not sure what the next step would be. The first step might be 
enough for the moment. When you have the network functioning at all, 
expose an API so that other programmers can come in and try to utilize 
sentence analysis (and other functions) as if the network is just 
another lobe of the brain or another component for ai. This would allow 
others who are possibly more creative than us to take advantage of what 
looks to be interesting work.
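
A sketch of what exposing such an API could look like, using only the 
standard library; the /parse endpoint and its response format are my 
inventions for illustration, not anything the network implements:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ParseAPI(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != "/parse":
                self.send_error(404)
                return
            length = int(self.headers.get("Content-Length", 0))
            sentence = self.rfile.read(length).decode("utf-8")
            # Stand-in for the network's real parse-tree output.
            result = {"sentence": sentence, "tokens": sentence.split()}
            body = json.dumps(result).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), ParseAPI).serve_forever()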

- Bryan



Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Bryan Bishop
On Thursday 29 November 2007, Ed Porter wrote:
 Somebody (I think it was David Hart) told me there is a shareware
 distributed web crawler already available, but I don't know the
 details, such as how good or fast it is.

http://grub.org/
Previous owner went by the name of 'kordless'. I found him on Slashdot.

- Bryan



Re: [agi] RE:P2P and/or communal AGI development [WAS Hacker intelligence level...]

2007-12-03 Thread Bryan Bishop
On Monday 03 December 2007, Mike Dougherty wrote:
 I believe the next step of such a system is to become an abstraction
 between the user and the network they're using.  So if you can hook
 into your P2P network via a firefox extension, (consider StumbleUpon
 or Greasemonkey) so it (the agent) can passively monitor your web
 interaction - then it could be learn to screen emails (for example)
 or pre-chew either your first 10 google hits or summarize the next
 100 for relevance.  I have been told that by the time you have an
 agent doing this well, you'd already have AGI - but i can't believe
 this kind of data mining is beyond narrow AI (or requires fully
 general adaptive intelligence)

Another method of doing search agents, in the meantime, might be to 
take neural tissue samples (or simply scan the brain) and try to 
simulate a patch of neurons via computers, so that when the simulated 
neurons send good signals, the search agent knows there has been a 
match that excites the neurons, and then tells the wetware human what 
has been found. The problem that immediately comes to mind is that the 
neurons for such searching are probably somewhere deep in the 
prefrontal cortex ... does anybody have any references to fMRI studies 
done on people forming Google queries?

- Bryan


Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-16 Thread Bryan Bishop
On Thursday 15 November 2007 08:16, Benjamin Goertzel wrote:
 non-brain-based AGI.  After all it's not like we know how real
 chemistry gives rise to real biology yet --- the dynamics underlying
 protein-folding remain ill-understood, etc. etc.

Can anybody elaborate on the actual problems remaining (beyond etc. 
etc. -- which is appropriate from Ben, who is most notably not a 
biochemist/chemist/bioinformatician)? Protein folding is one, yes. 
Another problem might be the evolutionary events behind the chem-to-bio 
transformation. Any others?

- Bryan


Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Bryan Bishop
On Thursday 15 November 2007 02:30, Bob Mottram wrote:
 I think the main problem here is the low complexity of the
 environment

Complex programs can only be written in an environment capable of 
bearing that complexity:

http://sl4.org/archive/0710/16880.html

- Bryan



Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Bryan Bishop
On Thursday 15 November 2007 20:02, Benjamin Goertzel wrote:
 On Nov 15, 2007 8:57 PM, Bryan Bishop [EMAIL PROTECTED] wrote:
  Can anybody elaborate on the actual problems remaining (beyond
  etc. etc.-- which is appropriate from Ben who is most notably not
  a biochemist/chemist/bioinformatician)?

 Hey -- That is a funny comment

Oh my. This is a big, big mistake on my part. I am sorry. Please accept 
my apologies .. and the knowledge that my parenthetical comment no 
longer applies.

- Bryan



Re: [agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-15 Thread Bryan Bishop
On Thursday 15 November 2007 21:19, Benjamin Goertzel wrote:
  so we still don't know exactly how poor
 a model the formal neuron used in computer science is

Speaking of which: isn't this the age-old simple math function 
involving an integral or two and a summation over the inputs? I 
remember seeing it many years ago (before I knew its importance) on 
ai-junkie, or maybe in Jeff Hawkins' On Intelligence. Way back when.
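
For reference, the formal neuron as I remember it: a weighted sum of 
the inputs squashed through a sigmoid (a minimal sketch, not anyone's 
production model):

    import math

    def formal_neuron(inputs, weights, bias=0.0):
        # The age-old model: f(sum of weighted inputs + bias).
        activation = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-activation))  # sigmoid squashing

    print(formal_neuron([0.5, 0.2, 0.9], [1.0, -2.0, 0.5]))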

And clearly I haven't been keeping track of the literature on neuronal 
modeling, but I would hope that there are other models out there by 
now. I need to read more journals.

- Bryan



Re: [agi] What best evidence for fast AI?

2007-11-14 Thread Bryan Bishop
On Wednesday 14 November 2007 11:55, Richard Loosemore wrote:
 I was really thinking of the data collection problem:  we cannot take
 one brain and get full information about all those things, down to a
 sufficient level of detail.  I do not see such a technology even over
 the horizon (short of full-blow nanotechnology) that can deliver
 that. We can get different information from different individual
 brains (all of them dead), but combining that would not necessarily
 be meaningful: all brains are different.

Re: all brains are different. What about the possibility of cloning 
mice and then raising them in Skinner boxes with exactly the same 
environmental conditions, the same stimulation routines, etc.? Ideally 
this would give us a baseline mouse that is not only genetically 
similar, but also behaviorally similar to some degree. That would 
undoubtedly be helpful in this quest.

- Bryan


Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Bryan Bishop
On Wednesday 14 November 2007 11:28, Richard Loosemore wrote:
 The complaint is not your symbols are not connected to experience.
 Everyone and their mother has an AI system that could be connected to
 real world input.  The simple act of connecting to the real world is
 NOT the core problem.

Are we sure? How much of the real world are we able to get into our AGI 
models anyway? Bandwidth is limited, much more limited than in humans 
and other animals. In fact, it might be the equivalent to worm tech.

To do the calculations, would I just have to check how many neurons 
are in a worm and how many of them are sensory, and then make rough 
information-theoretic estimates of the minimum and maximum amounts of 
information processing that the worm's sensorium could be doing?
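
A rough version of that calculation; the total of 302 neurons is the 
well-known C. elegans figure, but the sensory count and the rates 
below are assumptions of mine:

    # C. elegans hermaphrodite: 302 neurons total. The numbers below
    # for its sensorium are rough assumptions, not measurements.
    sensory_neurons = 80   # assumed
    max_rate_hz = 100      # assumed peak signalling rate
    bits_per_event = 1     # crude: signal / no signal

    max_bps = sensory_neurons * max_rate_hz * bits_per_event
    print(max_bps, "bits/sec upper bound on the worm's sensorium")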

- Bryan


Re: [agi] Human uploading

2007-11-13 Thread Bryan Bishop
Ben, 

This is all very interesting work. I have heard of brain slicing 
before, as well as viral gene therapy to give our neurons a way to 
debug themselves into the blood stream, which is not yet 
technologically possible (or not here yet, rather), and the age-old 
concept of using MNT to signal data about our neurons, synapses, etc. 
There is also the concept of incrementally replacing the brain, 
component by component, also requiring MNT, or possibly taking out 
regions of the brain, replacing them with equivalents, and re-training 
those portions somehow, obviously less effective with memories. 

I have been thinking that if we do not care for *pure* mind uploading, 
we should also be focusing on how long we can keep regions of the 
brain alive on life support, with MEAs or DNIs (a type of BCI) to 
connect them back to the rest of the brain or a digitized brain. If we 
can do this well enough, we can keep our minds alive long enough to 
see the day when we have more options for mind uploading.

- Bryan



Re: [agi] What best evidence for fast AI?

2007-11-13 Thread Bryan Bishop
On Tuesday 13 November 2007 09:11, Richard Loosemore wrote:
 This is the whole brain emulation approach, I guess (my previous
 comments were about evolution of brains rather than neural level
 duplication).

Ah, you are right. But this too is an interesting topic. I think the 
orders of magnitude for whole brain emulation, the connectome, and 
similar evolutionary methods are roughly the same, but I haven't done 
any calculations.

 It seems quite possible that what we need is a detailed map of every
 synapse, exact layout of dendritic tree structures, detailed
 knowledge of the dynamics of these things (they change rapidly) AND
 wiring between every single neuron.

Hm. It would seem that we could have some groups focusing on neurons, 
others on types of neurons, others on dendritic tree structures, some 
more on the abstractions of dendritic trees, etc., in an up-*and*-down 
propagation hierarchy, so that the abstract processes of the brain are 
studied just as well as the in-betweens of brain architecture.

 I think that if they did the whole project at that level of detail it
 would amount to a possibly interesting hint at some of the wiring, of
 peripheral interest to people doing work at the cognitive system
 level. But that is all.

You see no more possible value of such a project?

- Bryan



Re: [agi] advice-level dev collaboration

2007-11-13 Thread Bryan Bishop
On Tuesday 13 November 2007 17:12, Benjamin Johnston wrote:
 Why not try this list, and then move to the private discussion model
 (or start an [agi-developer] list) if there's a backlash?

I'd certainly join.

- Bryan



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Bryan Bishop
On Monday 12 November 2007 15:56, Richard Loosemore wrote:
 You never know what new situation might arise that might be a
 problem, and you cannot market a driverless car on the understanding
 that IF it starts killing people under particular circumstances, THEN
 someone will follow that by adding code to deal with that specific
 circumstance.

It seems that this was the way that the brain was 
progressively 'improved' via evolution. However, we want to compress a 
few billion years of evolutionary selective pressure into the next 10 
or 100 years instead. Have there been any proposed strategies that try 
to take an evolutionary approach on the magnitude that was needed for 
human brain evolution?

- Bryan



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Bryan Bishop
On Monday 12 November 2007 19:31, Richard Loosemore wrote:
 Yikes, no:  my strategy is to piggyback on all that work, not to try
 to duplicate it.

 Even the Genetic Algorithm people don't (I think) dream of evolution
 on that scale.

Yudkowsky recently wrote an email on preserving the absurdity of the 
future. The method that I have proposed requires this massive 
international effort, and maybe it can only be started when we hit a 
few more billion births. It is not entirely absurd, however, since we 
would start the project with investigation methods known today and 
slowly improve until we have millions of people researching the 
millions of varied pathways in the brain. From what I have read of 
Novamente today, Goertzel might be hoping that the circuits in the 
brain are ultimately simple, or that some similar model with simpler 
components building up to some greater actor-exchange medium 
effectively mimics the brain to some degree.

- Bryan


Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Bryan Bishop
On Monday 12 November 2007 19:48, Richard Loosemore wrote:
 Even with everyone on the planet running evolutionary simulations, I
 do not believe we could reinvent an intelligent system by brute
 force.

Of your message, this part is the most peculiar. Brute force is all that 
we have.

- Bryan



Re: [agi] What best evidence for fast AI?

2007-11-12 Thread Bryan Bishop
On Monday 12 November 2007 22:16, Richard Loosemore wrote:
 If anyone were to throw that quantity of resources at the AGI problem
 (recruiting all of the planet), heck, I could get it done in about 3
 years. ;-)

I have done some research on this topic in the last hour and have found 
that a Connectome Project is in fact in the very early stages out 
there on the internet:

http://iic.harvard.edu/projects/connectome.html
http://acenetica.blogspot.com/2005/11/human-connectome.html
http://acenetica.blogspot.com/2005/10/mission-to-build-simulated-brain.html
http://www.indiana.edu/~cortex/connectome_plos.pdf

- Bryan



Re: [agi] Re: What best evidence for fast AI?

2007-11-11 Thread Bryan Bishop
Excellent post, and I hope that I may find enough time to give it a 
more thorough reading.

Is it possible that at the moment our working with 'intelligence' is 
just like flapping in an attempt to fly? It seems like the concept of 
intelligence is a good way to preserve the nonabsurdity of the future.

- Bryan



Re: [agi] Upper Ontologies

2007-11-10 Thread Bryan Bishop
On Friday 09 November 2007 23:27, Benjamin Goertzel wrote:
 I would bet that merging two KB's obtained by mining natural
 language would work a lot better than merging two KB's
 like Cyc and SUMO that were artificially created by humans.

Upon reading Waser's response, I reread that segment as saying by 
mining _neural_ language, which could have interesting results, 
especially on the front of neuroscience, neural signal 
decoding/encoding, and how information is represented internally.

However, wetware neurons do not necessarily have a KB, and they don't 
rely on upper ontologies, unless you consider the DNA to be the base 
class.

- Bryan



Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Bryan Bishop
On Saturday 10 November 2007 09:29, Derek Zahn wrote:
 On such a chart I think we're supposed to be at something like mouse
 level right now -- and in fact we have seen supercomputers beginning
 to take a shot at simulating mouse-brain-like structures.

Ref?

- Bryan



Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Bryan Bishop
On Saturday 10 November 2007 10:07, Kaj Sotala wrote:
 http://news.bbc.co.uk/2/hi/technology/6600965.stm

 The researchers say that although the simulation shared some
 similarities with a mouse's mental make-up in terms of nerves and
 connections it lacked the structures seen in real mice brains.  

Looks like they were just simulating eight million neurons with up to 
6.3k synapses each. How's that necessarily a mouse simulation, anyway?
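
The arithmetic behind my skepticism, using the article's figures; the 
whole-mouse numbers below are the commonly quoted rough estimates:

    sim_neurons = 8e6
    sim_synapses = sim_neurons * 6.3e3    # the article's figures
    mouse_neurons = 7e7                   # commonly quoted, rough
    mouse_synapses = mouse_neurons * 8e3  # rough

    print(f"neurons:  {sim_neurons / mouse_neurons:.0%} of a mouse")
    print(f"synapses: {sim_synapses / mouse_synapses:.0%} of a mouse")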

- Bryan



Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Bryan Bishop
On Saturday 10 November 2007 11:31, Derek Zahn wrote:
 Unfortunately, not enough is yet known about specific connectivity so
 the best that can be done is play with structures of similar scale in
 anticipation of further advances.

What signs will tell us that we do know enough about the architecture of 
the mouse brain to simulate it to some degree of usefulness?

- Bryan



Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Bryan Bishop
On Saturday 10 November 2007 12:52, Edward W. Porter wrote:
 In fact, if the ITRS roadmap projections continue to be met through

What is the ITRS roadmap? Do you have a link?

- Bryan



Re: [agi] What best evidence for fast AI?

2007-11-10 Thread Bryan Bishop
On Saturday 10 November 2007 12:52, Edward W. Porter wrote:
 There is a small, but increasing number of people who pretty much
 understand how to build artificial brains

I would be interested in learning who these people are and meeting 
them. Artificial brains are tough things to build. A sac of sand with 
the volume of a human head practically rivals our silicon attempts at 
artificial brains in the same space.

- Bryan



Re: [agi] Connecting Compatible Mindsets

2007-11-10 Thread Bryan Bishop
On Saturday 10 November 2007 13:40, Charles D Hixson wrote:
 OTOH, to make a go of this would require several people willing to
 dedicate a lot of time consistently over a long duration.

A good start might be a few bibliographies.
http://liinwww.ira.uka.de/bibliography/

- Bryan



Re: [agi] Connecting Compatible Mindsets

2007-11-10 Thread Bryan Bishop
On Saturday 10 November 2007 14:10, Charles D Hixson wrote:
 Bryan Bishop wrote:
  On Saturday 10 November 2007 13:40, Charles D Hixson wrote:
  OTOH, to make a go of this would require several people willing to
  dedicate a lot of time consistently over a long duration.
 
  A good start might be a few bibliographies.
  http://liinwww.ira.uka.de/bibliography/
 
  - Bryan

 Perhaps you could elaborate?  I can see how those contributing to the
 proposed wiki who also had access to a comprehensive mathcomp-sci
 library might find that useful, but I don't see it as a good way to
 start.

Bibliography + paper archive, then.
http://arxiv.org/ (perhaps we need one for AGI)


 It seems to me that better way would be to put up a few pages with
(snip) Yes- that too would be useful.


 create. For this kind of a wiki reliability is probably crucial, so

Or deadly considering the majority of AI reputation comes from I 
*think* that guy over there, the one in the corner, might be doing 
something interesting.

- Bryan



Re: [agi] Connecting Compatible Mindsets

2007-11-10 Thread Bryan Bishop
On Saturday 10 November 2007 14:17, I wrote:
 Bibliography + paper archive, then.
 http://arxiv.org/ (perhaps we need one for AGI)

It has come to my attention that there is no open source software for 
(p)reprint archives. This is unacceptable: I was hoping to quickly 
download something from http://sourceforge.net/ and throw it up on my 
server as a temporary, quick hack. Guess not?

The software wouldn't take too many man-hours to get to a usable state 
of development.

- Bryan



Re: [agi] Re: Superseding the TM [Was: How valuable is Solmononoff...]

2007-11-09 Thread Bryan Bishop
On Friday 09 November 2007 20:01, John G. Rose wrote:
 I already have some basics of merging OOP and Group/Category Theory.
 Am working on some ideas on jamming, or I should say intertwining
 automata in that. The complexity integration still trying to figure
 out... trying to stay as far from uncomputable as possible :)

But it sounds like you have only vague ideas of how to add complexity 
and so on. How do you actually program if you are operating on such an 
abstract level?*

* Would be interesting to do abstract programming and claim that others 
in the future will be able to apply this, let them implement it.

- Bryan



Re: [agi] can superintelligence-augmented humans compete

2007-11-04 Thread Bryan Bishop
On Saturday 03 November 2007 16:53, Edward W. Porter wrote:
 In my below recent list of ways to improve the power of human
 intelligent augmentation I forgot to think about possible ways to
 actually increase the bandwidth of the top level decision making of
 the brain, which I had listed as a real problem but had made no

To increase the actual bandwidth we would have to change the number of 
neurons. To squeeze performance out of the neurons as they are, 
direct neural interfaces might be appropriate, so that the higher 
level decision making can operate at a more abstract level, and tech 
can then translate it down to the levels that the rest of the brain 
works with (for example).

 One way to improve the bandwidth of the top level of human decision
 making would be to replace or augment the brain's machinery for
 performing it, which is probably in the prefrontal cortex, basal
 ganglia, and general cortico-thalamic loop.  Some include the
 Cerebellum in such mechanism for its role in fine tuning behaviors
 into the current context (including very time sensitive feedback) and
 by controlling the timing of learned sequential behaviors, including
 mental behaviors.

Re: augmenting/replacing the PFC. We can advance this field of 
knowledge by attempting to extend Dr. White's work on brain 
transplantation in monkeys, this time with mice, in an attempt to keep 
brain regions of the mice on life support systems, perhaps on silicon 
for recording and stimulation. Buffer as many signals in/out as 
possible, drop a replacement transponder chip into the mouse brain, 
then start playing around with emulating/simulating the 
cortex-on-a-chip.

 -A--have the AGI learn the goal system of the human brain and have
 delegated authority to make decisions on its own, much as the basal
 ganglia often does relative to our conscious decision processes. 
 (i.e., if you drop something you are often first aware of that fact
 by the subconscious response your body is making to catch it.)  Such
 a system could respond in real time to complex inputs thousands or
 millions of times faster than a human.  Although it might not always
 do what we want, but neither does our basal ganglia.  It might be just as
 faithful to our goals and emotions as the basal ganglia.  Such a
 system could help us keep pace with many superintelligences, when,
 for example trying to prevent them from infecting our trusted
 machines.

My hope is that AGI will one day be able to do most of my redundant 
mental cycles for me ;) It would be nice to eliminate redundancy. 
Computers are very, very good at doing things over and over again.


 -B--Create a super intelligent basal ganglia (either by replacement
 or supplementation) that receives the inputs from the portions of the
 cortex the basil ganglia currently does, but also receives inputs
 from the AGI


 Can, and how can, our human descendants compete with
 superintelligences, other than by deserting human wetware and
 declaring machines to be our descendants?

Are you asking how to compete with change without changing ourselves?

 There are real issues about the extent to which any intelligence that
 has a human brain at its top level of control can compete with
 machines that conceivably could have a top level decision process
 with hundreds or perhaps millions of times the bandwidth.

Yes, I think that we can do an information-theoretic analysis of the 
optimal performance of the human brain: est. 100 billion neurons, a 
few quadrillion possible connections, so much protein, LTP activation 
networks, etc. This could then be used to show our optimal intelligence 
without augmentation. But it would of course require us to figure out a 
good definition of 'intelligence' to work with.
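
The kind of back-of-envelope I have in mind, with the usual rough 
figures plugged in (all of them order-of-magnitude estimates):

    neurons = 1e11             # ~100 billion neurons
    synapses_per_neuron = 1e4  # rough average
    rate_hz = 100              # rough peak firing rate

    synapses = neurons * synapses_per_neuron
    events_per_sec = synapses * rate_hz
    print(f"{synapses:.0e} synapses, ~{events_per_sec:.0e} synaptic events/sec")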

 There are also questions of how much bandwidth of machine
 intelligence can be pumped into, or shared, with a human
 consciousness and/or subconscious, i.e., how much of the
 superintelligence we humans could be conscious of and/or effectively
 use in our subconscious.

Is the limit yourself? If the superintelligent machine that shares 
itself with you is an order of magnitude more than you, then does this 
mean that you can only 'comprehend' a part of the superintelligence's 
data output at once, even if you have full data access?

- imagine a superintelligence embedded in your brain via nanotech, 
living in between your current neurons and synapses

 (In fact, it would not be that hard to have a system where the
 superintelligence only communicates to our brain its consciousness,
 or portions of its consciousness that its learning indicate will have
 importance or interest to us, so that it would be acting somewhat
 like an extended subconsciousness that would occasionally pop ideas
 up into our subconsciousness or consciousness.  This would greatly
 increase our mental powers, particularly if we had the capability to
 send information down to control it, give it sub-goals, or queries,
 etc.  )

You have given me an idea: we could 

Re: [agi] can superintelligence-augmented humans compete

2007-11-04 Thread Bryan Bishop
On Sunday 04 November 2007 14:37, Edward W. Porter wrote:
  Re: augmenting/replacing the PFC. We can advance this field of
  knowledge by attempting to extend Dr. White's work on brain
  transplantation in monkeys, this time with mice, in an attempt to keep
  brain regions of the mice on life support systems, perhaps on silicon
  for recording and stimulation. Buffer as many signals in/out as
  possible, drop a replacement transponder chip into the mouse brain,
  then start playing around with emulating/simulating the cortex-on-a-chip.

 I am not aware of Dr. White's work.

http://homepage.ntlworld.com/david.bennun/interviews/drwhite.html

(I am most interested in this section of the thread.)
- Bryan
