Some semi-organized responses to points raised in this thread...
1) About spatial maps...
It seems to be the case that the brain uses spatial maps a lot, which
abstract considerably from the territory they represent. Similarly, in
Novamente we have a spatial map data structure which has an
Ben,
Good Post
In my mind, the ability to map each of N things into a model of a space is a
very valuable thing. It lets us represent all of the N^2 spatial
relationships between those N things based on just N mappings. This is
something we all know, but it is one of the many wonderful
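The arithmetic behind this point can be sketched in a few lines of Python. The items and coordinates below are made up purely for illustration; the sketch just shows that storing one mapping per item (N entries) is enough to derive any of the N^2 pairwise spatial relations on demand:

```python
import math

# Hypothetical example: one 2-D coordinate per item (N mappings).
positions = {
    "lamp": (0.0, 2.0),
    "desk": (1.0, 0.0),
    "door": (4.0, 0.0),
    "window": (4.0, 3.0),
}

def relation(a, b):
    """Derive the spatial relation between two items from their mappings."""
    (ax, ay), (bx, by) = positions[a], positions[b]
    return {
        "distance": math.hypot(bx - ax, by - ay),
        "bearing_deg": math.degrees(math.atan2(by - ay, bx - ax)),
    }

# Only N items are stored, yet all N*(N-1) ordered relations are recoverable:
n = len(positions)
pairs = [(a, b) for a in positions for b in positions if a != b]
assert len(pairs) == n * (n - 1)
```

None of the N^2 relations is stored explicitly; each is recomputed from the two endpoint mappings when asked for.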
You're busy and I'm busy, so we can wait for another topic before
communicating next. But our communication on this topic has been
interesting.
Edward W. Porter
Porter Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]
-----Original Message-----
On 10/21/07, Edward W. Porter [EMAIL PROTECTED] wrote:
Ben,
Good Post
In my mind, the ability to map each of N things into a model of a space is a
very valuable thing. It lets us represent all of the N^2 spatial
relationships between those N things based on just N mappings. This is
http://www.mail-archive.com/agi@v2.listbox.com/msg08026.html
is where Ben Goertzel wrote the post that evoked these AGI list responses.
Some semi-organized responses to points raised in this thread...
[...]
Furthermore, it seems to be the case that
the brain stores a lot of detail about some
things
On 10/21/07, John G. Rose [EMAIL PROTECTED] wrote:
Vladimir,
That may very well be the case and something that I'm unaware of. The
system I have in mind basically has I/O that is algebraic structures.
Everything that it deals with is modeled this way. Any sort of system that
it analyzes
Benjamin,
It's interesting that you mentioned this right now. My discussion with
Edward in parallel thread effectively led to this issue. Basically, it's
useful to be able to find regularities between an arbitrary pair of concepts
(say, A and B) that the system supports (as a kind of domain-independence).
Vladimir,
Yes, if a concept is defined by its associations, and if a significant
subset of them somewhat distinguishes the concept, it would seem only natural
that links between the associations of node A and node B could help the two
concepts find each other in a large, high dimensional space.
This
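One minimal way to sketch this idea in Python (illustrative only; this is not Novamente's or NARS's actual mechanism, and the concepts and association sets are invented) is to represent each concept by its set of associations and let overlap scores pull related concepts toward each other:

```python
# Sketch: each concept is represented only by the nodes it is associated
# with; pairs of concepts "find each other" through shared associations.

def jaccard(a_links, b_links):
    """Overlap of two concepts' association sets, in [0, 1]."""
    a_links, b_links = set(a_links), set(b_links)
    if not (a_links or b_links):
        return 0.0
    return len(a_links & b_links) / len(a_links | b_links)

associations = {
    "dolphin": {"swims", "mammal", "ocean", "intelligent"},
    "whale":   {"swims", "mammal", "ocean", "large"},
    "truck":   {"wheels", "road", "large"},
}

def most_similar(concept):
    """Return the other concept whose associations overlap most."""
    others = (c for c in associations if c != concept)
    return max(others, key=lambda c: jaccard(associations[concept],
                                             associations[c]))
```

With these toy sets, "dolphin" and "whale" share three of five distinct associations, so they score far above the unrelated "truck".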
Edward,
Your reply raised very interesting issues which I'll have to think about
some more. I'll also need to read Valiant's paper to get a better idea of
realistic properties of the brain regarding this kind of process. So, I'll
answer in a more detailed way when I'm ready.
For now, I have to
Edward W. Porter wrote:
[snip]
There is a very interesting paper at
http://www.ics.uci.edu/~granger/RHGenginesJ1s.pdf that I have referred
to before on this list that states the cortico-thalamic feedback loop
functions to serialize the brain's
Loosemore wrote:
Edward
If I were you, I would not get too excited about this paper, nor others
of this sort (see, e.g. Granger's other general brain-engineering paper
at http://www.dartmouth.edu/~rhg/pubs/RHGai50.pdf).
This kind of research comes pretty close to something that deserves
Pei,
Sorry for the delayed reply. I answer point-by-point below.
On 10/11/07, Pei Wang [EMAIL PROTECTED] wrote:
The basic rule for evidence-based
estimation of implication in NARS seems to be roughly along the lines
of term construction in my framework (I think there's much freedom in
its
About NARS... a Nesov/Wang exchange:
Why do you need so many rules?
I didn't expect so many rules myself at the beginning. I add new rules
only when the existing ones are not enough for a situation. It will be
great if someone can find a simpler design.
I feel that some of the complexity
On 10/21/07, Edward W. Porter [EMAIL PROTECTED] wrote:
Vladimir,
Yes, the deleted point FIVE mentioned that I had assumed (perhaps
incorrectly) that Valiant was looking for enough interconnect to do
traditional Hebbian learning, which, as normally defined, would require
synapses from either A
Ben: Furthermore, it seems to be the case that the brain stores a lot of detail
about some
things that it sees -- and much less about others.
For instance, it is well known that when observing a visual scene, a person can
generally
remember only around 7 visual facts about it. Trained observers can
On 10/21/07, Edward W. Porter [EMAIL PROTECTED] wrote:
Vladimir,
Yes, if a concept is defined by its associations, and if a significant
subset of them somewhat distinguishes the concept, it would seem only natural
that links between the associations of node A and node B could help the two
Richard,
I was not citing this article as God's truth, but as an extremely
interesting hypothesis that seems to have backing in brain science. But,
to be fair, I gave no clear indication of that.
I have read enough papers attempting to assign various cognitive functions
to various parts of the
Vladimir,
I think a very important issue is how many cell assemblies a single neuron
can be multiplexed into. If X is the total number of neurons, and M is the
number of neurons in a cell assembly, as in one of your earlier posts, and
you assume an even distribution of
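Under that even-distribution assumption, the arithmetic can be sketched as follows. The specific numbers are illustrative, not taken from the thread or from Valiant's paper: if A assemblies of M neurons each are drawn evenly from X neurons, then the A * M membership slots are spread over X neurons, giving A * M / X memberships per neuron on average.

```python
# Back-of-the-envelope multiplexing estimate, assuming assemblies are
# drawn evenly from the neuron population.

def memberships_per_neuron(X, M, A):
    """Average number of assemblies a single neuron participates in.

    X: total neurons, M: neurons per assembly, A: number of assemblies.
    Total membership slots A * M are spread evenly over X neurons.
    """
    return A * M / X

# Illustrative numbers only: 10^8 neurons, assemblies of 10^4 neurons,
# 10^6 distinct assemblies -> each neuron sits in ~100 assemblies.
X, M, A = 10**8, 10**4, 10**6
avg = memberships_per_neuron(X, M, A)
```

The interesting consequence is the direction of scaling: for fixed X and M, the number of representable assemblies A grows linearly with how many assemblies each neuron can be shared among.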
On 10/21/07, Vladimir Nesov [EMAIL PROTECTED] wrote:
Pei,
Sorry for the delayed reply. I answer point-by-point below.
On 10/11/07, Pei Wang [EMAIL PROTECTED] wrote:
The basic rule for evidence-based
estimation of implication in NARS seems to be roughly along the lines
of term construction
The difference between NARS and PLN has much more to do with their
different semantics, than with their different logical/algebraic
formalism.
For example, according to the semantics of NARS, Bayes rule, with all
of its variants, is deduction. Therefore it is impossible to use on
Edward,
Did you read Izhikevich's papers (specifically, [1])? They explore the model
of polychronization, where cell assemblies are formed in different ways
depending on the temporal shifts of the firings of the neurons that initiate the assembly's
formation. He has some experimental estimations, but they are
Benjamin Goertzel wrote:
Loosemore wrote:
Edward
If I were you, I would not get too excited about this paper, nor others
of this sort (see, e.g. Granger's other general brain-engineering paper
at http://www.dartmouth.edu/~rhg/pubs/RHGai50.pdf).
This kind of research
The questions you ask are not worth asking, because you cannot do
anything with a 'theory' (Granger's) that consists of a bunch of vague
assertions about various outdated, broken cognitive ideas, asserted
without justification.
Richard Loosemore
Richard, you haven't convinced me, but I
As Ben suggests, Granger's title clearly claims too much. At best the
article suggests what may be some important aspects of the computational
architecture of the human brain, not anything approaching a complete
instruction set.
But as I implied in my last post to Richard Loosemore, you have to
On 10/21/07, Pei Wang [EMAIL PROTECTED] wrote:
The difference between NARS and PLN has much more to do with their
different semantics, than with their different logical/algebraic
formalism.
Sure; in both cases, the algebraic structure of the rules and the
truth-value formulas follow from the
Edward,
I was not criticising you or your opinion of Granger's paper, but only
pointing out that the paper itself had two sides to it: a neuroscience
side (which appeared detailed and well-researched, as far as I could
tell) and a cognitive side (which consisted of a few sentences of
Edward W. Porter wrote:
As Ben suggests, Granger’s title clearly claims too much. At best the
article suggests what may be some important aspects of the computational
architecture of the human brain, not anything approaching a complete
instruction set.
But as I implied in my last post to
On Oct 21, 2007, at 6:37 PM, Richard Loosemore wrote:
It took me at least five years of struggle to get to the point
where I could start to have the confidence to call a spade a spade
It still looks like a shovel to me.
Cheers,
J. Andrew Rogers
It took me at least five years of struggle to get to the point where I
could start to have the confidence to call a spade a spade, and dismiss
stuff that looked like rubbish.
Now, you say we have to forgive academics for doing this? The hell we
do.
If I see garbage being peddled as if
And you are also not above making patronizing remarks in which you
implicitly refer to someone as behaving in a simian -- i.e.
monkey-like manner.
Hey, I'm a monkey too -- and I'm pretty tired of being one. Let's bring on
the
Singularity already!!!
If you read the paper I just wrote,