On 18 December 2017 at 07:08, Brent Meeker <meeke...@verizon.net> wrote:

>
>
> On 12/17/2017 9:03 AM, Telmo Menezes wrote:
>
> On Fri, Dec 8, 2017 at 7:53 PM, Brent Meeker <meeke...@verizon.net> wrote:
>
> On 12/8/2017 2:24 AM, Telmo Menezes wrote:
>
> On Thu, Dec 7, 2017 at 10:47 PM, Brent Meeker <meeke...@verizon.net> wrote:
>
> On 12/7/2017 1:01 AM, Telmo Menezes wrote:
>
> On Wed, Dec 6, 2017 at 11:50 PM, Brent Meeker <meeke...@verizon.net> wrote:
>
> On 12/6/2017 1:46 AM, Bruno Marchal wrote:
>
> I suspect that this is perhaps why Brent wants to refer to the environment
> for relating consciousness to the machine, and in Artificial Intelligence
> some people defend the idea that (mundane) consciousness occurs only when
> the environment contradicts, a little bit, the quasi-automatic persistent
> inferences we make all the time.
>
> That's Jeff Hawkins's model of consciousness: one becomes conscious of
> something when all the lower, more specialized levels of the brain have
> found it not to match their predictions.
>
> In that sort of model, how does matter "know" that it is being used to
> run a forecasting algorithm? Surely it doesn't, right?
>
> Why "surely"?  It seems you're rejecting the idea that a physical system
> can be conscious just out of prejudice.
>
> Not at all. I remain agnostic on materialism vs. idealism. Maybe I am
> even a strong agnostic: I suspect that the answer to this question
> cannot be known.
>
> Assuming materialism, consciousness must indeed be a property or
> something that emerges from the interaction of fundamental particles,
> the same way that, say, life does. Ok. All that I am saying is that
> nobody has proposed any explanation of consciousness under this
> assumption that I would call a theory. The above is not a theory, in
> the same way that the Christian God is not a theory: it proposes to
> explain a simple thing by appealing to a pre-existing more complex
> thing -- in this case claiming that the act of forecasting at a very
> high level somehow leads to consciousness, but without proposing any
> first principles. It's a magical step.
>
> What would a satisfactory (to you) first principle look like?
>
> I cannot imagine one -- and this fuels my intuition that consciousness
> is more fundamental than matter,
>
>
> It fuels my intuition that it is a "wrong question".
>
> and that emergentism is a dead-end.
> But of course, my lack of imagination is not an argument. It could be
> that I am too dumb/ignorant/crazy to come up with a good emergentist
> theory. What I can -- and do -- do is listen to any idea that comes up
> and keep an open mind. If you have one, I will gladly listen.
>
>
> If we
> consider the analogy of life: in the early 1900s, when it was considered a
> chemical process, all that could be said about it was that it involved
> using energy to construct carbon-based compounds, and that at a high level
> this led to reproduction, natural selection, and the origin of species.
> Now we have greatly elaborated on the molecular chemistry and can modify
> and even create DNA and RNA molecules that realize "life".  Where did we
> get past the "magical step"?  Or are you still waiting for "the atom of
> life" to be discovered?
>
> Here there is no magical step. Life can be understood all the way down
> to basic chemistry. Ok, we don't have all the details, but we are not
> missing anything fundamental. I am not waiting for the atoms of life
> because I already know what they are. You just described them above.
> Can you do that for consciousness?
>
>
> Maybe not yet, but I can imagine what they might be: self-awareness,
> construction of narratives about one's experiences, modeling other minds,...
>

Sometimes your responses really puzzle me, Brent. What you say above almost
makes it sound as though you just don't get the distinction Telmo is
pointing to. But based on what you have said at other times, I think you do
get it; it's just that, because you also know there's really no explicating
that distinction in a purely third-person way, you sometimes want to say
that that's as far as explanation can legitimately go and the rest is just
woo.


>
> What makes the hard problem hard is that it relates to a qualitatively
> different phenomenon from anything else that we try to understand. Life
> can be talked about purely in the third person, but consciousness is
> first-person by definition.
>
>
> So we are told.  But what if someone could look at a recorded MRI of your
> brain and tell you what you were thinking?
>

Yeah, but notice also that there's only ever one person who can attest to
the truth of that.

>
>
> My view is that this sort of emergentism always smuggles in a subtle but
> important switcheroo at some point: moving from epistemology to
> ontology.
>
> For me, emergence is an epistemic tool. It is not possible for a human
> to understand hyper-complex systems by considering all the variables
> at the same time. We wouldn't be able to understand the human body
> purely at the molecular level. So we create simplifying abstractions.
> These abstractions have names such as "cells", "tissues", "organs",
> "disease", etc etc. A Jupiter Brain might not need these tools. If
> its mind is orders of magnitude more complex than the human body,
> then it could apprehend the entire thing at the molecular level, and
> one could even say that this would lead to a higher level of
> understanding than what we could hope for with our little monkey
> brains.
>
> In a sense, this would violate the very meaning of "understanding". If you
> look at a website discussing the recent triumph of AlphaZero over Stockfish
> in chess, there are arguments over whether the programs "understand" chess
> or are just very good at playing it.  Those who claim the programs
> don't understand chess mean that the programs just consult lots of memorized
> positions and whether they led to a win or a loss.  To "understand" chess,
> they should base their moves on some general principles which are simple
> enough to explain to an amateur.  In other words, to "understand" the game
> is a social attribute = being able to explain it to a person.  A lion knows
> how to catch an antelope, but she doesn't understand it because she can't
> explain it.
>
> I would say that what we mean by "understanding" is having a model
> (and I am going to repeat and agree with some things you say above)
> that can:
>
> - Make good predictions for behaviors under new conditions;
> - Be communicable;
> - Be constructive, in the sense that it can be combined with other
> models about other things and fit nicely into a larger tapestry of
> knowledge.
>
> Humans can talk about cells and organs. Jupiter brains can talk about
> swarms of molecules that require gigantic many-dimensional matrices to
> describe. Emergence is a tool to overcome cognitive limits by creating
> simpler levels of description. The magic step is to pretend that
> creating a simpler level of description generates a new behavior.
>
>
> If it succeeds in correctly predicting new behavior then, as Arthur C.
> Clarke said, "Any sufficiently advanced technology is indistinguishable
> from *magic*" -- and "indistinguishable" is a symmetric relation.
>

Cute, but irrelevant. As has been said when we've discussed Telmo's point
in the past, the fact of the matter is that ontological reduction *just is*
ontological elimination. That's the whole point of the reductive project,
and precisely therein lies its explanatory power. But somehow that same
ontological reduction doesn't entail *epistemological* elimination. There's
the rub.

David

>
>
> Brent
>
> It
> is literally "magical thinking", e.g. thinking that wearing a white
> coat and a stethoscope makes you capable of diagnosing diseases.
>
>
> In the case of biological systems, although we couldn't do what the
> Jupiter Brain does, we could understand the first principles
> that said brain would make use of.
>
> Emergentists switch to the ontological. As if "emergence" generates
> something new. As if it's something akin to a fundamental law of
> nature. It's a language trick. When we say that something emerges from
> something else, we are building an epistemic tool; we are not being
> literal.
>
> Is it a trick to say life emerges from chemistry?
>
> No. It is an epistemic move. It is a trick to pretend that emergence
> is ontological.
>
>
> Could emergentism be true? Sure. But for it to be an actual theory, it
> would have to provide some first principle -- what I referred to as the
> "atom" of consciousness, in the same way that a local transaction
> between two actors is the atom of economic models. Where is it?
>
>
> The only way this could work is if the forecasting algorithm and the
> cascading effects of failing predictions have the side effect of
> creating the "right" sort of interactions at a lower level that
> trigger consciousness.
>
> In Hawkins's model the predictions fail from the "bottom up", i.e. from the
> subconscious, automatic responses up to the top/language/conscious level.
>
> I like Hawkins's model and his work in general, but I think it is purely
> about intelligence.
>
>
> Then I want to know what these interactions
> are, what the "atom" of consciousness is, and what the first
> principle is. Without this, I would say that such hypotheses are not even
> wrong.
>
> There is no "atom of consciousness".  In Hawkins's model, consciousness is
> the spreading of the 'failed prediction' signal across the top level of the
> neocortex.  As I said earlier, this is not Hawkins's main interest; it's
> more of an aside.  He's more interested in intelligence.
>
> Indeed. I read "On Intelligence" years ago (one decade?) so I might be
> fuzzy on the details. I got the impression that he is completely
> uninterested in consciousness, or that he doesn't even consider it a
> serious question.
>
> He does discuss it in the last chapter because he realizes it will be of
> interest to the reader and that's where he speculates on an account similar
> to Bruno's "waking up the boss".
>
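
Just as an aside, and purely as my own toy illustration -- none of this is
Hawkins's actual code or terminology; the class names, the flat list of
levels and the "expect more of the same" predictor below are all my
invention -- the "failed prediction spreading to the top level" idea
described above can be sketched in a few lines of Python: each lower,
specialized level tries to predict its input, and only a stimulus that
defeats every lower level reaches the top and gets flagged.

# Toy sketch only -- my own invention, not Hawkins's HTM; it just shows the
# control flow of "conscious = a failed prediction that reaches the top level".

from typing import Callable, List, Optional


class Level:
    """One layer in the hierarchy: it predicts the next input from the
    previous one and reports whether the actual input matched."""

    def __init__(self, name: str, predictor: Callable[[str], str]):
        self.name = name
        self.predictor = predictor
        self.last_input: Optional[str] = None

    def handles(self, stimulus: str) -> bool:
        # Predict from what this level saw last; None means "no prediction yet".
        predicted = self.predictor(self.last_input) if self.last_input else None
        self.last_input = stimulus
        return predicted == stimulus  # True = absorbed here, never surfaces


def perceive(stimulus: str, hierarchy: List[Level]) -> bool:
    """Pass a stimulus up the hierarchy, lowest level first.  Return True
    ("conscious of it") only if every lower level failed to predict it."""
    for level in hierarchy[:-1]:          # the lower, more specialized levels
        if level.handles(stimulus):
            return False                  # handled automatically, subconsciously
    hierarchy[-1].last_input = stimulus   # the surprise spreads to the top
    return True


same = lambda prev: prev                  # "expect more of the same" predictor
levels = [Level("V1", same), Level("V2", same), Level("top", same)]
print(perceive("dog", levels))            # True:  novel, reaches the top
print(perceive("dog", levels))            # False: predicted lower down
print(perceive("cat", levels))            # True:  prediction fails again

Of course, nothing in that sketch says anything about experience, which is
exactly Telmo's point: the cascade itself is easy to write down; the supposed
emergence of a first-person fact from it is not.
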
> John McCarthy was concerned that in creating an AI we would
> inadvertently create consciousness and thus incur ethical obligations we
> were not prepared to meet.
>
> Yes. I strongly agree with that concern.
>
> Telmo.
>
>
> Brent
>
>
>
> But as has been discussed
> here many times, philosophical zombies are probably not possible.  That
> would imply that a sufficiently intelligent system, however constructed,
> will be conscious.
>
> Yes, I tend to agree with this view.
>
> Telmo.
>
>
> Brent
>
>
>
> Telmo.
>
>
> Brent
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at https://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.
