Do they tell us what grief is doing when a loved one dies?

Well, the "grief" felt when a loved one dies is similar to that of
unreturned love. You love them, and they don't love you back -- as they
are dead. This causes a feeling of futility that eventually changes
direction, focusing more inwardly, mu'a(for example) on self-pity/self-love,
where you supply yourself with supporting beliefs rather than getting them
from a different person.

Do these inference systems tell us why we get depressed when we keep
failing to accomplish our goals?

"Why" implies causation, which is something that is system-specific and not
an inherent property of the universe.  So you'd have to ask yourself, as the
computer that created the rule set, whether "failing to achieve goals"
causes depression.

Personally I just choose not to fail. If I do, then I accept that it was I
who set the standards -- perhaps to do something about it later.

Do they give a model for understanding why we feel proud when we are
encouraged by our parents?

As a child you give power to your parents. So when your parents encourage
you, they hold the belief that you will feel happy, and so you do -- being a
child means giving others the responsibility for your environment.  Many
mortal Homo sapiens can be considered children in that sense.

So imagine all mathematical expressions as a 3D fabric, where sentient
creatures are "droplets" or "sets" of these mathematical expressions.  You
can envision two "parents" sharing a similar space in the "fabric" (at
least in time/location) who form another "droplet" between the two of
them -- a sort of seeding of consciousness.

It is possible to create this kind of mathematical "fabric". I think it
would be very interesting if we could figure out how, as then we would be
able to map Homo sapiens as well as other related conceptual species, and
maybe even figure out how to cross the belief barriers to access them.

I'm not really sure what such a "belief fabric" would consist of.  Though it
is possible that we could just make a large database of beliefs in some
logical language (Lojban) and have people describe their own beliefs; then
we would be able to expand this if we got it onto a distributed network.  If
we found some people who believe they are aliens, or who hold significantly
different beliefs and implications than we do, we could make a claim to
first contact.

*shrugs* It would be relatively simple to implement.  The only conceivable
issue is a lack of Lojban speakers.

Coding isn't useless, especially on the small scale, where you grasp what is
happening. When you can no longer grasp what is happening, things are
"random", which is a sign of intelligence -- you couldn't predict my reply,
and hence it was "random".  Though you could just as easily control your
reality by keeping a record of the things you believe and changing them when
you want a change.

An interesting thing to try would be to have a set of beliefs/statements
(perhaps ones that you want the computer to hold), then use a purely random
number generator to select a belief at random to output.  You could also add
beliefs/statements to the file by saying them.  You could probably have a
relatively intelligent conversation with the computer; it will typically
reply with what you expect it to.




On 2/20/07, Bo Morgan <[EMAIL PROTECTED]> wrote:


On Tue, 20 Feb 2007, Richard Loosemore wrote:

) Chuck Esterbrook wrote:
) > On 2/19/07, John Scanlon <[EMAIL PROTECTED]> wrote:
) > > Language is the manipulation of symbols.  When you think of how a
) > > non-linguistic proto-human species first started using language, you can
) > > imagine creatures associating sounds with images -- "oog" is the big hairy
) > > red ape who's always trying to steal your women.  "akk" is the action of
) > > hitting him with a club.
) > >
) > > The symbol, the sound, is associated with a sensorimotor pattern.  The
) > > visual pattern is the big hairy red ape you know, and the motor pattern is
) > > the sequence of muscle activations that swing the club.
) >
) > Regarding "imagine creatures associating sounds with images", I
) > imagine there being a "concept node" in between. The sound and the
) > image lead to this node and stimulation of the node stimulates the
) > associated patterns. My inspiration comes from this:
) > http://www.newscientist.com/article.ns?id=dn7567
)
) Chuck,
)
) I'm glad you brought that article to my attention, I somehow missed it.  Be
) warned: the result is extremely dubious, IMO.
)
) Just ask yourself what is the probability that the researchers just
) "happened" to come across the neurons that encoded the particular pictures
) they showed to their subjects.....
)
) The probability is ludicrously small.  They were probably hitting something
) that was *part* of a temporary representation of "most recently seen
) things".  Within the context of "most recently seen things" that neuron
) could easily have triggered only to (say) the Halle Berry concept.  But if
) they had come back the next day, it would probably have triggered on
) something else.
)
) Haven't had a chance to read the original article yet, but on first look,
) this seems to be more of the same old neuroscience naivete that I complain
) about so frequently.
)
) More generally, what you say about concepts being formed as a result of
) associations must be something like the truth .... but the real story is
) vastly more complex than just co-occurrence => new concept.  Even what we
) know today, from regular old cognitive science studies, is huge.
)
) I could write a thousand-page book about the complex issues that branch off
) the paragraph you wrote above :-).  Heck, I *am* writing it (not up to a
) thousand pages yet, but I wouldn't be surprised if it gets there
) eventually).
)
)
) Richard Loosemore.

I agree that the pairing of co-occurring ideas is not going to amount to
intelligence.  But this is a certain type of A.I. that I've seen crop up
pretty often as well.  It amounts to the idea of inference combined with
reinforcement learning.  This is a pretty simple model.
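Just to pin down how simple that model is, here is a toy Hebbian-style sketch of it -- concepts seen together get their association weight bumped, and an external reward strengthens a chosen link. This is an illustration of the generic idea, not anyone's actual system, and all names are made up:

```python
from collections import defaultdict
from itertools import combinations

# Toy sketch of "co-occurring ideas + reinforcement": each joint
# observation strengthens pairwise links, and reward scales one link.

class Associator:
    def __init__(self, learning_rate=1.0):
        self.weights = defaultdict(float)  # (a, b) -> association strength
        self.learning_rate = learning_rate

    def observe(self, concepts):
        """Strengthen links between all concepts seen together."""
        for a, b in combinations(sorted(concepts), 2):
            self.weights[(a, b)] += self.learning_rate

    def reinforce(self, a, b, reward):
        """Scale one link by an external reward signal."""
        self.weights[tuple(sorted((a, b)))] += reward

    def strength(self, a, b):
        return self.weights[tuple(sorted((a, b)))]

net = Associator()
net.observe({"oog", "ape", "club"})   # each pair gains 1.0
net.observe({"oog", "ape"})           # oog-ape now 2.0
net.reinforce("oog", "ape", 5.0)      # reward bumps it to 7.0
print(net.strength("oog", "ape"))     # 7.0
```

The question in the paragraphs that follow is whether anything of this shape can reach grief, depression, or pride -- and the whole mechanism fits in thirty lines, which suggests not by itself.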

In regard to your comments about complexity theory: from what I
understand, it is primarily about taking simple physics models and trying
to explain complicated datasets by recognizing these simple models.
These simple "complexity theory" patterns can be found in complicated
datasets for the purpose of inference, but do they get us closer to human
thought?

Do they tell us what grief is doing when a loved one dies?
Do these inference systems tell us why we get depressed when we keep
  failing to accomplish our goals?
Do they give a model for understanding why we feel proud when we are
  encouraged by our parents?

These questions are trying to get at some of the most powerful thought
processes in humans.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303




--
ta'o(by the way)  We With You Network at: http://lokadin.blogspot.com .e
http://lokiworld.org .i(and)
more on Lojban: http://lojban.org
irc: irc://irc.oftc.net/#ma'a
mu'oimi'e lOkadin (Over, my name is lOkadin)

