On 2/23/2017 12:44 AM, Bruno Marchal wrote:

On 20 Feb 2017, at 20:52, Brent Meeker wrote:



On 2/20/2017 7:33 AM, Telmo Menezes wrote:
On Sat, Feb 18, 2017 at 1:19 AM, John Clark <johnkcl...@gmail.com> wrote:
On Wed, Feb 15, 2017 Telmo Menezes <te...@telmomenezes.com> wrote:


Dark Matter and Dark Energy remain complete mysteries.


As far as I can tell, what we have is a falsification of current
theories. They appear to be good enough approximations for many
things, but then they fail at predicting the expansion rate of the
universe, right? Maybe it's dark matter, maybe it's something else,

They are two separate mysteries. Dark Matter is a mysterious something that makes up about 27% of the universe and holds galaxies and clusters of galaxies together. Dark Energy is an even more mysterious something that makes up about 68% of everything and causes the expansion of the entire universe to accelerate. And about 5% of the universe is made of the sort of normal matter and energy
that until about 20 years ago was the only type we thought existed.

There is a straightforward extension of General Relativity and Quantum
Mechanics that explains Dark Energy; however, it gives a figure that is
a factor of 10^120 too large. It's been called the worst mismatch between theory and observation in the entire history of science. I think it's fair to say we really don't have a clue about Dark Energy, and Dark Matter is almost as
confusing.

If science has so far failed to explain something, then it doesn't matter?

Science has an explanation for consciousness that works beautifully:
consciousness is the way information feels when it is being processed
intelligently.
I know that your position is that information processing is
nonsensical without matter. Many times you invited Bruno to compete
with Intel, etc. So what you are saying is that "consciousness is the
way matter feels when it participates in an intelligent computation".
This "explanation" begs the question already.

Then there's the issue of defining "processed intelligently". What
does that even mean? Where do you draw the line between intelligent
and non-intelligent processing? Let me guess: intelligent processing
is the kind that generates consciousness.

No, intelligent processing is that which leads to useful activity toward a goal. That's why consciousness has to be consciousness OF a world in which action is possible. It only exists in a context.

I am OK, with "model" in place of "world". But those are close.





For me, the interesting question is whether there can be intelligence without consciousness,

When we do something intelligent (a priori) a billion times, we can do it without consciousness (like walking), but is it still intelligent?
Here, I would say that it depends on what we mean by "intelligent".



or, more accurately, can there be intelligence which is conscious in a different way.

different from what?

Different from the way I am conscious: a stream of narrative and images which mixes memories, feelings, and perceptions into a sort of coherent story. I think this story is a way of compressing what goes into memory so that it can be used for learning - at least if I were designing a robot and used this technique for storing information to later be used in learning (i.e. reducing the information to a coherent stream), I would identify that part of the design as the "consciousness module". But I can imagine designing the robot differently. For example, if memory were very cheap and fast to access, I might not try to filter experience into a small stream of information before storing it - I might just put it all in and evaluate it for learning later. Or I can imagine designing the robot so that more than one coherent stream were produced, based on different value weights, with several learning modules based on different techniques, and with action decisions involving voting.
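That last multi-stream design can be sketched in a few lines of Python. Everything here (the Stream class, the compress and decide names, the value weights) is a hypothetical illustration of the idea, not an actual architecture:

```python
from dataclasses import dataclass, field
from statistics import mode


@dataclass
class Stream:
    """One 'coherent stream': compresses raw experience by its own value weights."""
    weights: dict
    memory: list = field(default_factory=list)

    def compress(self, event: dict) -> float:
        # Reduce the event to a single score, weighted by what this stream values.
        score = sum(self.weights.get(k, 0.0) * v for k, v in event.items())
        self.memory.append(score)  # the compressed record kept for later learning
        return score


def decide(streams, event):
    # Each stream votes based on its own compressed view; the majority wins.
    votes = ["approach" if s.compress(event) > 0 else "avoid" for s in streams]
    return mode(votes)


# Three streams with different value weights observing the same event.
cautious = Stream(weights={"reward": 1.0, "danger": -2.0})
curious = Stream(weights={"reward": 1.0, "novelty": 1.5})
greedy = Stream(weights={"reward": 2.0})
event = {"reward": 0.5, "danger": 1.0, "novelty": 1.0}
print(decide([cautious, curious, greedy], event))  # two of three vote "approach"
```

The point of the sketch is that each stream keeps its own compressed memory, so there is no single privileged narrative - which is exactly the "conscious in a different way" possibility being discussed.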





We can see from Deep Blue, Watson, and deep neural nets that there can be intelligence based on different kinds of information processing. I suspect this means there would be different kinds of consciousness associated with them - but how could we know, and what would it mean? John McCarthy warned many years ago that we should be careful not to create robots with general intelligence, lest we inadvertently create conscious beings to whom we would have ethical obligations.

And he warned us that we could become the pets of the machine.

No.  He warned that we could become slave owners.

Brent

In fact, if we continue to treat corporations as persons, we will ultimately become the slaves of the machine - and, most plausibly, slaves whose upkeep the machine will eventually stop paying for. We might disappear if this happens before we digitalize ourselves completely. We are on a bad slope with respect to this, alas.

Bruno

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.