On Saturday, October 26, 2013 10:33:51 PM UTC-4, Liz R wrote:
> > wrote:
>> On Friday, October 25, 2013 7:09:47 PM UTC-4, Liz R wrote:
>>> On 26 October 2013 06:23, Craig Weinberg <whats...@gmail.com> wrote:
>>>> The argument against comp is not one of impossibility, but of empirical
>>>> failure. Sure, numbers could do this or that, but our experience does not
>> support that it has ever happened. In the meantime, the view that I
>> suggest, I think, makes more sense and fully supports our experience.
>>> Could you explain this, about how Comp has failed empirically? Comp
>>> presupposes that the brain is Turing emulable etc, so if you disagree with
>>> that then obviously it fails but not empirically since no one has
>>> proved/disproved the brain being TE.
>> What I meant is that I don't have a problem with Comp theoretically or
>> ideally - it doesn't matter to me one way or another if consciousness can
>> or cannot be duplicated or emulated synthetically, and there is not
>> necessarily anything wrong with the logic of why Comp should work, given
>> the assumptions that we can make about the nature of our awareness and the
>> functioning of the brain. The problem that Comp has is that it seems not to
>> be true in reality. We do not see any non-organic biologies, or awareness
>> that is disembodied. We don't see any computation that is disembodied. We
>> do not see any appearance of symbols becoming sentient or unexpected
>> stirrings within big data such as the entire internet that would indicate
>> intentionality. To me, the actual story of human consciousness is one of
>> nested superlatives - a single species out of a few hominids, out of
>> several kinds of animals, out of many species of organisms, out of
>> countless planets... It is not a story of ubiquitous opportunity. Nothing
>> about machines seems to reflect personal or unique characteristics, and
>> in fact mechanism is universally considered synonymous with impersonal,
>> automatic, unconscious, rigid, and "robotic" behavior.
> Hi Craig, thanks for the detailed response. I see Bruno has also
> responded, but I will look at that later. For my own part I can't see why
> comp should *entail* the existence of non organic biology or disembodied
> awareness, although it allows for these. What it does suggest is that one
> could build a sentient machine (given enough time and knowledge) but there
> is no reason such machines should have evolved - or perhaps it would be
> more accurate to say we are such machines, although obviously we refer to
> ourselves as organic. It appears that only certain types of molecules have
> the flexibility to take part in evolution starting from nonliving material,
> but that doesn't mean that inorganic machines are ruled out if we built
> them rather than requiring that they evolve.
True, but since we don't know the reason why the appearance and survival of
biology is only associated with organic macromolecules, we should not
assume that there is no reason. Inorganic things which we do not recognize
as aware in the way that we are, I would say, have another type of
awareness, one with a very different or nearly opposite aesthetic
to our own (due to eigenmorphism). Certainly there are mechanical reasons
why Carbon, Oxygen, Hydrogen, and Nitrogen lend themselves to explosive
complexity, but that does not explain why complexity alone should take on
an awareness that simplicity does not.
> Machines reflect "robotic" characteristics because we haven't yet learned
> how to make them flexible enough. But then when people go wrong they also
> show such behaviour, sadly - examples abound, e.g. OCD.
>> In light of the preponderance of odd details, I think that as scientists,
>> we owe it to ourselves to consider them in a context of how Comp could be
>> an illusion. We should start over from scratch and formulate a deep and
>> precise inquiry into the nature of computation and mechanism, vis-à-vis
>> personality, automaticity, intention, controllability, etc. What I have
>> found is that there is a clear and persuasive case to be made for a
>> definition of awareness as the antithesis of mechanism. Taking this
>> definition as a hypothesis for a new general systems theory, I have found
>> that it makes a lot of sense to understand the mind-brain relation as
>> contra-isomorphic rather than isomorphic. The activity of the brain is a
>> picture of what the mind is not, and all appearances of matter in space can
>> be more completely understood as a picture of what the totality of
>> experience is not.
> OK, I think I see what you're saying - a "sentience of the gaps" as it
> were? However obviously this needs to be formulated in a way that people
> who know about these things can understand and test. Bruno has done this
> with comp I believe, so rather than worrying about odd details, it would be
> better to show a flaw in his premises or his reasoning.
The only flaw in Bruno's work that I am familiar with is that it requires
that we bet on Comp. If I'm right, and Comp is an early reflection of sense,
then his work should also be correct if the conclusions are inverted. In my
world, he'll be right about everything, except that his everything will be
>> Working with that view, and becoming comfortable with it can yield a
>> completely new and startlingly simple perspective of the universe in which
>> the ordinary and the probable emerge naturally from a deeper divergence
>> within absolute and extraordinary improbability. Rather than duplicating
>> awareness, constructions of mind-like bodies are inversions of awareness.
>> Instead of developing unique personal perspectives grounded in the
>> experience of an evolutionary history going back to the beginning of time,
>> we get the polar opposite. All machines will only ever share the same
>> impersonality, the identical evacuated perspective which is incapable of
>> feeling or participation in any way. This is, however, great news. It means
>> that AI is not a threat to us, not a competitor to humanity or biology. It
>> will always only be a servant. Unless of course, we begin to use it to
>> enhance and empower biological organisms which we cannot control. The
>> bottom line is that the ability to be controlled is identical to
>> unconsciousness. The more you want to be able to control what your AI can
>> do and not do, the more it is impossible for it to have any awareness at
>> all.
> This sounds good, but I assume there is a theory behind it (which I will
> probably have some difficulty understanding, but I assume people like Bruno
> could analyse as I think should be done with comp).
Sure, yeah, lots of theory, overview, etc here:
You received this message because you are subscribed to the Google Groups
"Everything List" group.