On Tue, Mar 25, 2008 at 7:19 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
Certainly, ambiguity (i.e. applicability to multiple contexts in different
ways) and the presence of rich structure in presumably simple 'ideas', as
you call them, are known issues. Even interaction between concept clouds
evoked
On Wed, Mar 26, 2008 at 4:27 PM, Jim Bromer [EMAIL PROTECTED] wrote:
I agreed with you up until your conclusion. While the problems that I
talked about may be known issues, they are discussed almost exclusively
using intuitive models, like the ones we used, or by referring to
ineffective models, like
On 25/03/2008, Vladimir Nesov [EMAIL PROTECTED] wrote
Simple systems can be computationally universal, so it's not an issue
in itself. On the other hand, no learning algorithm is universal,
there are always distributions that given algorithms will learn
miserably. The problem is to find a
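Nesov's point that no learning algorithm is universal can be illustrated with a toy sketch (mine, not from the thread; the function names and the choice of a threshold learner are illustrative assumptions): a learner whose inductive bias is "the label is 1 iff sum(x) crosses some threshold" generalizes almost perfectly when the target really is a majority rule, and only at chance level when the target is parity, a distribution that this algorithm learns miserably by construction.

```python
# Toy illustration (not from the thread): one fixed learner, two target
# distributions. The learner's hypothesis class is "label = 1 iff
# sum(x) >= t" for some threshold t; it fits majority-style targets and
# cannot fit parity at all.
import random

def threshold_accuracy(label_fn, dim=12, n_train=400, n_test=500, seed=1):
    rng = random.Random(seed)
    draw = lambda: tuple(rng.randint(0, 1) for _ in range(dim))
    train = [(x, label_fn(x)) for x in (draw() for _ in range(n_train))]

    # Pick the threshold with the least training error.
    def train_error(t):
        return sum((sum(x) >= t) != y for x, y in train)
    best_t = min(range(dim + 2), key=train_error)

    # Measure generalization on fresh samples from the same distribution.
    test = [draw() for _ in range(n_test)]
    hits = sum((sum(x) >= best_t) == label_fn(x) for x in test)
    return hits / n_test

majority = lambda x: sum(x) >= len(x) // 2   # a target the learner's bias fits
parity   = lambda x: sum(x) % 2 == 1        # a target no threshold can express
```

Running `threshold_accuracy(majority)` gives near-perfect accuracy, while `threshold_accuracy(parity)` hovers around chance; swapping in a learner biased toward parity would simply reverse which distribution it learns miserably.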
On Wed, Mar 26, 2008 at 10:17 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
What you describe is essentially my own path up to this point: I
started with considering high-level capabilities and gradually worked
towards an implementation that seems to be able to exhibit these
high-level
2008/3/26 William Pearson [EMAIL PROTECTED]:
On 25/03/2008, Vladimir Nesov [EMAIL PROTECTED] wrote
Simple systems can be computationally universal, so it's not an issue
in itself. On the other hand, no learning algorithm is universal,
there are always distributions that given
On Wed, Mar 26, 2008 at 8:47 PM, Jim Bromer [EMAIL PROTECTED] wrote:
I do not know much about neural networks, but from what I read, I always
felt that a recurrent network would be the only way you could feasibly get
an ANN to represent (excuse my French) distinct items without absurdly huge
First a riddle: What can be all learning algorithms, but is none?
A human being!
---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
On 26/03/2008, Mark Waser [EMAIL PROTECTED] wrote:
First a riddle: What can be all learning algorithms, but is none?
A human being!
Well, my answer was a common PC, which I hope is more illuminating
because we know it well.
But a human being works, as does any future AI design, as far as I am
On 24/03/2008, Jim Bromer [EMAIL PROTECTED] wrote:
To try to understand what I am talking about, start by imagining a
simulation of some physical operation, like a part of a complex factory in a
Sim City kind of game. In this kind of high-level model no one would ever
imagine all of the
On Tue, Mar 25, 2008 at 11:23 AM, William Pearson [EMAIL PROTECTED]
wrote:
On 24/03/2008, Jim Bromer [EMAIL PROTECTED] wrote:
To try to understand what I am talking about, start by imagining a
simulation of some physical operation, like a part of a complex factory in a
Sim City
On Tue, Mar 25, 2008 at 2:17 AM, Jim Bromer [EMAIL PROTECTED] wrote:
A usage evaluation could be taken as an example of an effect of application,
because the ideas of usage and of statistical evaluation can be combined
with the object of consideration, along with other theories that detail how
On Tue, Mar 25, 2008 at 11:30 PM, Jim Bromer [EMAIL PROTECTED] wrote:
I am saying that the method of recognizing and defining the effect of ideas
on other ideas would not, by itself, make it all work; rather, it would
help us better understand how to automate the kind of extensive
On Tue, Mar 25, 2008 at 4:42 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
Simple systems can be computationally universal, so it's not an issue
in itself. On the other hand, no learning algorithm is universal,
there are always distributions that given algorithms will learn
miserably. The
On Wed, Mar 26, 2008 at 1:27 AM, Jim Bromer [EMAIL PROTECTED] wrote:
Let's suppose that I claim that Ed bumped into me. Right away we can see
that the word-concept 'bumped' has some effect on any ideas you might have
about Ed, about me, and about Ed and me together. My claim here is that the effect of the
Thanks for asking. I will try to come up with a simple model during the
next week. I can create an example because the principle can be used in
well-defined constrained models or in more extensible models.
The theory does not answer all questions about AGI. I would think that
should be taken
Jim,
It sounds like something about concept grounding, but that's all I
got. Can you give an example that demonstrates the structure of what
you are talking about?
--
Vladimir Nesov
[EMAIL PROTECTED]