my deux centimes' worth.
On a more positive note, I do think it is possible for AGI researchers
to work together within a common formalism. My presentation at the
AGIRI workshop was about that, and when I get the paper version of the
talk finalized I will post it somewhere.
Richard Loosemore
substituted
for those components, making them less than obvious.
Exactly the same critique bears on anyone who suggests that
Reinforcement Learning could be the basis for an AGI. I do not believe
there has yet been any reply to that critique.
Richard Loosemore
William Pearson wrote:
On 01/06
is not a set of environment states S, a set of actions A,
and a set of scalar rewards in the Reals.)
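For anyone skimming the thread, the Reinforcement Learning formalism being referred to, a set of states S, a set of actions A, and scalar rewards in the Reals, can be sketched in a few lines. Everything below (the toy GridWorld, its size, the reward placement) is invented purely for illustration and comes from no post in this thread:

```python
# Minimal sketch of the standard RL tuple (S, A, R) under dispute here.
# States are 0..4, actions are -1/+1, and a scalar reward sits at state 4.

class GridWorld:
    """Toy environment: five states in a row, reward for reaching the end."""
    def __init__(self):
        self.state = 0

    def step(self, action):
        # Clamp movement to the valid state range 0..4.
        self.state = max(0, min(4, self.state + action))
        reward = 1.0 if self.state == 4 else 0.0
        return self.state, reward

def run_episode(env, policy, steps=10):
    """Run a fixed-length episode and accumulate the scalar rewards."""
    total = 0.0
    for _ in range(steps):
        _, r = env.step(policy(env.state))
        total += r
    return total

# A trivial "always move right" policy reaches the reward state in 4 steps
# and then collects a reward on every remaining step.
total = run_episode(GridWorld(), lambda s: +1)
```

This is only the bare formalism, not a claim about its adequacy as a basis for AGI, which is exactly what the post is contesting.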
Watching history repeat itself is pretty damned annoying.
Richard Loosemore
James Ratcliff wrote:
Richard,
Can you explain differently, in other words the second part of this
post. I am very
of the visual cortex flow was going frontward? In other
words, the frontal cortex is doing a lot more than just handling
information from the environment, so I am not sure your original
question can be easily answered.
Richard Loosemore
Philip Goetz wrote:
On 6/9/06, Eugen Leitl [EMAIL
computer in 1982.
Richard Loosemore
A. T. Murray wrote:
In Vernor Vinge's classic paper on Technological Singularity:
And what of the arrival of the Singularity itself?
What can be said of its actual appearance? Since it
involves an intellectual runaway, it will probably
occur faster
.
Hope this clarifies it a little.
Richard Loosemore
---
To unsubscribe, change your address, or temporarily deactivate your subscription,
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
to test the
system as a whole, and get hammered in the meantime for not actually
doing anything that counts. Very tricky.
Richard Loosemore.
You Got, It's The Way
That You Do It.
Richard Loosemore.
undefined and yet at the same time subject to
a proof of how computationally difficult it is.
I'm not sure why you would think it an unhelpful argument. Isn't it a
clear case of semantic incoherence?
Richard Loosemore
I am beginning to wonder if this forum would be better off with a
restricted membership policy.
Richard Loosemore
Davy Bartoloni - Minware S.r.l. wrote:
What do we want from an AI? Do we TRULY want something? The doubt
arises in me that nobody will ever entrust themselves to mere words
to know when anyone sat down and
figured out that it could not be valid.
Richard Loosemore
that there are so many people out there who cannot even
understand that last point, let alone debate it.
Richard Loosemore
Pei Wang wrote:
Richard,
Thanks for taking the time to explain your position. I actually agree
with most of what you wrote, though I don't think it is inconsistent with
my
that Yan
produced, but it is not literally a production rule. Writing it in
rule form like that is just a summary of a constraint structure that,
when triggered, engages in the active process of trying to fit itself to
the rest of the situation model.
Richard Loosemore
at closely will not be PL at all.
I'm working on it. (As hard as I can, though not by any means full
time, alas).
Richard Loosemore
this, and they will start to find promotions slipping, or
they'll just be dumped. Short term results pressure in other words.
Richard Loosemore.
-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]
Ben Goertzel wrote:
Hi,
The real grounding problem is the awkward and annoying fact that if
you presume a KR format, you can't reverse engineer a learning mechanism
that reliably fills that KR with knowledge.
Sure...
To go back to the source, in
of
several interpretations of what you say, but am not sure which you mean.
Richard Loosemore
brains are far from optimal as
intelligences...
-- Ben G
On 10/11/06, Richard Loosemore [EMAIL PROTECTED] wrote:
Sergio,
Your words sound nice in theory, but that is not the way it is happening
on the ground.
What I tried to say was that neuroscience folks are far too quick to
deploy words like
since 20 years ago.
Having a clue about just what a complex thing intelligence is, has
everything to do with it.
Richard Loosemore
BillK wrote:
On 10/19/06, Richard Loosemore [EMAIL PROTECTED] wrote:
Sorry, but IMO large databases, fast hardware, and cheap memory ain't
got nothing to do with it.
Anyone who doubts this should get a copy of Pim Levelt's Speaking, read and
digest the whole thing, and then meditate on the fact
that the general
course of its behavior is as reliable as the behavior of an Ideal Gas:
you can't predict the position and momentum of all its particles, but you
sure can predict such overall characteristics as temperature, pressure
and volume.
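The analogy can be made concrete with a toy simulation, offered purely as an illustration (the distribution and sample sizes are arbitrary choices of mine, not anything from the original post): any single "particle" differs wildly between runs, while the bulk statistic barely moves.

```python
import random
import statistics

# Illustration of the Ideal Gas point: individual particle speeds are
# unpredictable, but bulk statistics over many particles are stable.

def sample_speeds(n, seed):
    """Draw n particle speeds from a half-normal distribution."""
    rng = random.Random(seed)
    return [abs(rng.gauss(0.0, 1.0)) for _ in range(n)]

run_a = sample_speeds(100_000, seed=1)
run_b = sample_speeds(100_000, seed=2)

# Any single particle differs between the two runs...
single_diff = abs(run_a[0] - run_b[0])

# ...but the bulk "temperature" (mean squared speed, which is 1.0 in
# expectation for this distribution) is nearly identical across runs.
temp_a = statistics.fmean(v * v for v in run_a)
temp_b = statistics.fmean(v * v for v in run_b)
bulk_diff = abs(temp_a - temp_b)
```

The claim in the post is exactly this: no prediction at the particle level, reliable prediction at the aggregate level.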
Richard Loosemore
BillK wrote:
On 10/20/06, Richard Loosemore [EMAIL PROTECTED] wrote:
For you to blithely say "Most normal speaking requires relatively little
'intelligence'" is just mind-boggling.
I am not trying to say that language skills don't require a human
level of intelligence. That's obvious
it. There is just no point.
What you said above is just flat-out wrong from beginning to end. I
have done research in that field, and taught postgraduate courses in it,
and what you are saying is completely divorced from reality.
Richard Loosemore
be disastrous.
I realise that I have been tempted to explain an idea in partial,
cryptic terms (laying myself open to requests for more detail, or
scorn), so apologies if the above seems opaque. More when I get the time.
Richard Loosemore.
It may be that the goals of and motivations from
-undergraduate level of
comprehension.
Richard Loosemore.
and, as ever, I will do my best to respond to
anyone who has thoughtful questions.
Richard Loosemore.
this is a milestone of
mutual accord in a hitherto divided community.
Progress!
Richard Loosemore.
This is why I finished my essay with a request for comments based on an
understanding of what I wrote.
This is not a comment on my proposal, only a series of unsupported
assertions that don't seem to hang together into any kind of argument.
Richard Loosemore.
Matt Mahoney wrote:
My
it will work.
Hope that helps.
Richard Loosemore
, or if I
had started a successful lemonade-stand business, it would of course
only take ten minutes to convince an investor, given the way investors
operate, but, hey ho: ten years it is. :-(
Enough for now.
Richard Loosemore.
to implement in an AI system. Such a language would also
be a member of the class fifth generation computer language.
Not true. If it is too dumb to acquire a natural language then it is
too dumb, period.
Richard Loosemore.
if experiences are the same.
Your conclusions therefore do not follow.
Richard Loosemore
something working, and then go from there
This rationale is the very same rationale that drove researchers into
Blocks World programs. Winograd and SHRDLU, etc. It was a mistake
then: it is surely just as much of a mistake now.
Richard Loosemore.
these interfaces would help ... but it would be overstating the
case to say that this includes all AI designs.
Just wanted to make that disclaimer, that's all.
Richard Loosemore.
question.
Richard Loosemore.
- Original Message - From: John Scanlon [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, November 06, 2006 5:04 PM
Subject: Re: [agi] The concept of a KBMS
Richard Loosemore wrote:
When you say that it provides ... a general AI shell, within
and got over it).
Richard Loosemore.
of all those begged questions.
Richard Loosemore.
Ben Goertzel wrote:
About
http://www.physorg.com/news82190531.html
Rabinovich and his colleague at the Institute for Nonlinear Science
at the
University of California, San Diego, Ramon Huerta, along with Valentin
Afraimovich
not speak to what they might do in the future.
I cannot see how anyone could come to a strong conclusion about the
uselessness of deploying that internal knowledge.
Richard Loosemore
*** Introspection, after all, is what all AI researchers use as the
original source of their algorithms
it is with redefinitions of the term "understanding" to be synonymous
with a variety of compression. This is an egregious distortion of the
real meaning of the term, and *everything* that follows from that
distortion is just nonsense.
Richard Loosemore.
Richard Loosemore
say, I will try to see if your book contains material which evades
this trap my understanding of your paper made me suspect not, but I
will suspend judgment.
Richard Loosemore.
in Bristol.
occam is a beautiful language in some ways, diabolically infuriating in
others.
Richard Loosemore
(to coin a phrase) debunked every which way from
Sunday. ;-)
Richard Loosemore
objection is not so much that it is nakedly wrong, as that it
is diabolically inconsistent with a lot of stuff, and untested).
From what you write, I think it was the latter issue that you were
referring to.
Richard Loosemore.
John Scanlon wrote:
I get the impression that a lot of people
in the development of real world knowledge) are posited to
play a significant role in the learning of grammar in humans. As such,
these proofs say nothing whatsoever about the learning of NL grammars.
I agree they do have other limitations, of the sort you suggest below.
Richard Loosemore.
Rather
symbol grounding, perhaps other
issues. I think all of us have moved on from most of the simplistic
GOFAI ideas.
Richard Loosemore
John Scanlon wrote:
I was referring to the kind of symbol-system hypothesis that Searle's
Chinese room and Hubert Dreyfus's writings attack, and wondering
Pei Wang wrote:
On 11/13/06, Richard Loosemore [EMAIL PROTECTED] wrote:
But
Now you have me really confused, because Searle's attack would have
targetted your approach, my approach and Ben's approach equally: none
of us have moved on from the position he was attacking!
The situation
degree of comprehension by quoting numbers of bits.
Richard Loosemore
Matt Mahoney wrote:
Richard Loosemore [EMAIL PROTECTED] wrote:
Understanding 10^9 bits of information is not the same as storing 10^9
bits of information.
That is true. Understanding n bits is the same as compressing some larger
training set that has an algorithmic complexity of n bits
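The mechanics of that compression claim (whatever one thinks of its merits, which this thread disputes) can be shown with a toy example of my own, not drawn from either post: a standard compressor shrinks structured text dramatically but cannot shrink random bytes at all.

```python
import random
import zlib

# Toy illustration of the "understanding = compression" claim's mechanics:
# data with regularities compresses well; patternless data does not.

structured = b"the cat sat on the mat. " * 200  # highly regular text
rng = random.Random(0)
noise = bytes(rng.randrange(256) for _ in range(len(structured)))

# Compression ratio: compressed size / original size (lower = more
# regularity captured by the compressor's model of the data).
ratio_structured = len(zlib.compress(structured, 9)) / len(structured)
ratio_noise = len(zlib.compress(noise, 9)) / len(noise)
```

This demonstrates only what the claim asserts, that modelling regularities and compressing are two views of the same operation; it takes no side on whether that deserves the name "understanding".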
of what a hurricane is.
5) I have looked at your paper and my feelings are exactly the same as
Mark's theorems developed on erroneous assumptions are worthless.
Richard Loosemore
depend on any special assumptions about
the nature of learning.
Richard Loosemore wrote:
I beg to differ. IIRC the sense of learning they require is
induction over example sentences. They exclude the use of
real world knowledge, in spite of the fact that such knowledge
(or at least primitives
Matt Mahoney wrote:
Richard Loosemore [EMAIL PROTECTED] wrote:
5) I have looked at your paper and my feelings are exactly the same as
Mark's theorems developed on erroneous assumptions are worthless.
Which assumptions are erroneous?
Marcus Hutter's work is about abstract idealizations
Ben Goertzel wrote:
Rings and Models are appropriated terms, but the mathematicians
involved would never be so stupid as to confuse them with the real
things. Marcus Hutter and yourself are doing precisely that.
I rest my case.
Richard Loosemore
Please, let us avoid explicitly insulting one
to infinity... a spurious argument, of
course, because they can go in any direction.
Richard Loosemore
Ben Goertzel wrote:
Rings and Models are appropriated terms, but the mathematicians
involved would never be so stupid as to confuse them with the real
things. Marcus Hutter and yourself are doing precisely that.
I rest my case.
Richard Loosemore
IMO these analogies are not fair
, said Marvin and trudged away.
**
Richard Loosemore
here before (Levelt's Speaking) in which the author takes apart a
single conversational exchange consisting of a couple of short sentences.
Richard Loosemore
J. Storrs Hall, PhD. wrote:
It was a true solar-plexus blow, and completely knocked out, Perkins
staggered back against the instrument
something that was already stretched.
But maybe that was not what you meant. I stand ready to be corrected,
if it turns out I have goofed.
Richard Loosemore.
to human
language really was? It sounds like Immerman is putting the
significance of complexity classes on firmer ground, but not changing
the nature of what they are saying.
Richard Loosemore
-- Ben
On 11/24/06, Richard Loosemore [EMAIL PROTECTED] wrote:
Ben Goertzel wrote
are making with respect to the computational
complexity of processes like grammar induction and the evolutionary
construction of learning systems.
We are coming from similar points of view, but reaching diametrically
opposed conclusions.
Richard Loosemore.
no reason to suppose
that such a framework heads in the direction of a system that is
intelligent. You could build an entire system using the framework, and
then do some experiments, and then I'd be convinced. But short of that
I don't see any reason to be optimistic.
Richard Loosemore
Philip Goetz wrote:
On 11/17/06, Richard Loosemore [EMAIL PROTECTED] wrote:
I was saying that *because* (for independent reasons) these people's
usage of terms like intelligence is so disconnected from commonsense
usage (they idealize so extremely that the sense of the word no longer
bears
, at some point in the future.
Richard Loosemore wrote:
The point I am heading towards, in all of this, is that we need to
unpack some of these ideas in great detail in order to come to sensible
conclusions.
I think the best way would be in a full length paper, although I did
arguments.
Does that make sense?
Richard Loosemore
, in other words, is in the details.
Richard Loosemore.
*/Philip Goetz [EMAIL PROTECTED]/* wrote:
On 11/19/06, Richard Loosemore wrote:
The goal-stack AI might very well turn out simply not to be a
workable
design at all! I really do mean that: it won't become
Samantha Atkins wrote:
On Nov 30, 2006, at 12:21 PM, Richard Loosemore wrote:
Recursive Self-Improvement?
The answer is yes, but with some qualifications.
In general RSI would be useful to the system IF it were done in such a
way as to preserve its existing motivational priorities
at
least thirty years ago (with the exception of a few diehards in North
Wales and Cambridge).
Richard Loosemore
[With apologies to Fergus, Nick and Ian, who may someday come across
this message and start flaming me].
on a goal stack approach.
You are repeating the same mistakes that I already dealt with.
Richard Loosemore
Philip Goetz wrote:
On 12/1/06, Richard Loosemore [EMAIL PROTECTED] wrote:
The questions you asked above are predicated on a goal stack approach.
You are repeating the same mistakes that I already dealt with.
Some people would call it repeating the same mistakes I already dealt
with.
Some
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
I am disputing the very idea that monkeys (or rats or pigeons or humans)
have a part of the brain which generates the reward/punishment signal
for operant conditioning.
This is behaviorism. I find myself completely
J. Storrs Hall, PhD. wrote:
On Friday 01 December 2006 23:42, Richard Loosemore wrote:
It's a lot easier than you suppose. The system would be built in two
parts: the motivational system, which would not change substantially
during RSI, and the thinking part (for want of a better term
is the present approach to AI then I
tend to agree with you John: ludicrous.
Richard Loosemore
just stated).
Richard Loosemore.
SUBGOAL PROMOTION AND ALIENATION
One very common phenomenon is when a supergoal is erased, but one of
its subgoals is promoted to the level of supergoal. For instance,
originally one may become
of repetitions of the same ideological
statement).
Richard Loosemore.
.
Richard Loosemore.
By discussing goals, I was not trying to imply that all aspects of a
mind (or even most) need to, or should, operate according to an
explicit goal hierarchy.
I believe that the human mind incorporates **both** a set of goal
stacks (mainly useful in deliberative thought
.
Richard Loosemore
.
Richard Loosemore
the time.
Hope that helps.
Richard Loosemore
Generation Project and (Naive)
Neural Networks.
Richard Loosemore.
unreasonable
position, that's all ;-).
Richard Loosemore.
!)
cognitive system is a direct rejection of the idea that I was asking
you to consider as a hypothesis.
I *know* you don't believe it to be true! ;-) What I was trying to do
was to ask on what grounds you reject it.
Richard Loosemore.
the type of my question is).
Richard Loosemore.
Pei Wang wrote:
Richard,
The assumption is that the underlying dynamics of things at the concept
level (or logical term level, if concept is not to your liking) can
be meaningfully described by things that look something like
probabilities.
I
Pei Wang wrote:
On 2/4/07, Richard Loosemore [EMAIL PROTECTED] wrote:
I fully accept that you don't care if the human mind does it that way,
because you want NARS to do it differently. My question was at a higher
level. If we knew for sure that the human mind was using something like
interpretation of
Oaksford and Chater is that it is actually caused by too much of it.
Richard Loosemore.
, is that the
possibility I raised is still completely open.
Richard Loosemore.
goal.
Just a thought.
Richard Loosemore.
Charles D Hixson wrote:
That's not what I meant. I don't think that people really operate on
the basis of probabilistic calculations, but rather on short-range
attractors. What I see them being motivated by is the dream of
riches, which feels closer
is
the best, of the two suggested above.
Hint: don't go for the dumb one, because it is not really smart enough
to be an Artificial GENERAL Intelligence.
Regards
Richard Loosemore.
gts wrote:
On Sat, 10 Feb 2007 13:41:33 -0500, Richard Loosemore
[EMAIL PROTECTED] wrote:
The meat of this argument is all in what exact type of AGI you claim
is the best, of the two suggested above.
The best AGI in this context would be one capable of avoiding the
conjunction fallacy
gts wrote:
On Sun, 11 Feb 2007 11:41:31 -0500, Richard Loosemore
[EMAIL PROTECTED] wrote:
P.S. This isn't the first time this topic has come up. For a now
famous example, see my essay at http://sl4.org/archive/0605/14748.html
and the follow-up at http://sl4.org/archive/0605/14773.html
of that machinery.
And what is the boundary between an ontological bias and a lesser
tendency to learn a certain kind of thing, which can nevertheless be
overridden through experience?
Richard Loosemore.
Ben Goertzel wrote:
Hi,
In a recent offlist email dialogue with an AI researcher, he made
, it was different. Lisp and Prolog, for example,
represented particular ways of thinking about the task of building an
AI. The framework for those paradigms was strongly represented by the
language itself.
Richard Loosemore.
the general
problem. Again, apologies for coyness: possible patent pending and all
that.
Richard Loosemore.
an
alternative approach.
Richard Loosemore.
Bo Morgan wrote:
On Mon, 19 Feb 2007, Richard Loosemore wrote:
) Bo Morgan wrote:
)
) On Mon, 19 Feb 2007, John Scanlon wrote:
)
) ) Is there anyone out there who has a sense that most of the work being
) ) done in AI is still following the same track that has failed for
) ) fifty years
of the symbols being encoded at that hardware-dependent
level. I haven't seen any neuroscientists who talk that way show any
indication that they have a clue that there are even problems with it,
let alone that they have good answers to those problems.
In other words, I don't think I buy it.
Richard
Chuck Esterbrook wrote:
On 2/19/07, Richard Loosemore [EMAIL PROTECTED] wrote:
Wow, I leave off email for two days and a 55-message Religious War
breaks out! ;-)
I promise this is nothing to do with languages I do or do not like (i.e.
it is non-religious...).
As many people pointed out
banging the rocks together.
Having said that, there is an element of truth in what Hawkins says. My
personal opinion is that he has only a fragment of the truth, however,
and is mistaking it for the whole deal.
Richard Loosemore.
construction of AI
systems.
Richard Loosemore
Eric Baum wrote:
Josh The other idea in OI worth noting is Mountcastle's Principle,
Josh that all of the cortex seems to be doing the same thing. Hawkins
Josh gets credit for pointing it out, but of course it was a
Josh published observation
new theme that I missed?
Richard Loosemore.
Mark Waser wrote:
I think that it's also very important/interesting to note that his
subject headings exactly specify the development environment that
Richard Loosemore and others are pushing for (i.e. An Infrastructure
to Support
-systems/complexity
approach, Ben has his eclectic approach, Pei has his NARS approach and
Peter Voss has something else again (does it make sense to call it a
neural-gas approach, Peter?).
Richard Loosemore.
Bo Morgan wrote:
On Mon, 5 Mar 2007, Richard Loosemore wrote:
) Rowan Cox wrote:
) Hey all,
)
) Just thought I'd briefly delurk to post a link (or three,..). I
) believe this is a talk from 2001, so everyone else has probably heard
) it already ;)
)
) Part 1:
) http
1 - 100 of 750 matches