Some seemingly obvious and visually confirmed thoughts on dark energy and matter

2013-08-17 Thread Roger Clough

1) Dark matter is potentially energy via E = mc^2.
Dark energy is already energy.
Regular matter is also potentially energy via E = mc^2
(see the worked note after point 8).

2) So everything is energy or potentially energy. 

3) It is known that the energy is expansive in some
places, presumably via Einstein's cosmological
constant, but doesn't interact with photons, so it remains dark.
 
4) Similarly, at other places or levels, it is compressive
and interacts with photons.

5) At some times during the creation of the universe,
the energy was compressive, at others expansive.
Perhaps this is due to temperature, the
expansive part being associated, as it is with gases,
with lower temperatures.

6) In accord with this, regions at lower temperatures would radiate 
less energy and therefore appear darker.

7) Expansive gravitation is also observed in galaxies, which
are likely at lower temperatures between suns.
This is suggested by the fact that galaxies appear to be 
governed in their spiralling rotations by expansive energy, 
since orbital speeds seem to be roughly independent of 
distance from the center of gravity of the galaxies
(see the note after point 8).

8) At first thought, like gases, they should be drawn
toward each other and mix or cancel, except that:

a) these are limited in their ability to cancel each other out over
distance due to the speed limit of light. 

b) depending on the amount of expansion or compression,
mixture becomes problematic as spacetime differs.
However, this may only be problematic for those
regions interacting with photons, which, being particles,
obey the traffic rules of spacetime/gravity.
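
Two worked notes on the physics invoked above (standard textbook
relations, added here for scale; the gloss is not Clough's):

On point 1: the rest energy of one kilogram of matter is

    E = m c^2 = (1 kg) x (3.0 x 10^8 m/s)^2 ~ 9 x 10^16 J,

roughly the yield of a 20-megaton bomb, which is why "potentially energy"
is no small qualifier.

On point 7: the observation usually cited is the flat rotation curve. For
a circular orbit of radius r,

    v(r) = sqrt( G M(r) / r ),

so an orbital speed v that stays roughly constant as r grows implies the
enclosed mass grows as M(r) ∝ r -- more mass at large radii than the
visible stars supply, which is the standard evidence for dark matter.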



Re: When will a computer pass the Turing Test?

2013-08-17 Thread Platonist Guitar Cowboy
On Sat, Aug 17, 2013 at 12:22 AM, Telmo Menezes te...@telmomenezes.com wrote:

 On Fri, Aug 16, 2013 at 10:38 PM, meekerdb meeke...@verizon.net wrote:
  On 8/16/2013 1:25 PM, John Clark wrote:
 
  On Fri, Aug 16, 2013  Telmo Menezes te...@telmomenezes.com wrote:
 
   the Turing test is a very specific instance of a subsequent behavior
   test.
 
 
  Yes it's specific, to pass the Turing Test the machine must be
  indistinguishable from a very specific type of human being, an
 INTELLIGENT
  one; no computer can quite do that yet although for a long time they've
 been
  able  to be  indistinguishable from a comatose human being.
 
 
   It's a hard goal, and it will surely help AI progress, but it's not,
 in
   my opinion, an ideal goal.
 
 
  If the goal of Artificial Intelligence is not a machine that behaves
 like an
  intelligent human being then what the hell is the goal?

 A machine that behaves like an intelligent human will be subject to
 emotions like boredom, jealousy, pride and so on. This might be fine
 for a companion machine, but I also dream of machines that can deliver
 us from the drudgery of survival. These machines will probably display
 a more alien form of intelligence.

 
  Make a machine that is more intelligent than humans.

 That's when things get really weird.


I don't know. Any AI worth its salt would come up with three conclusions:

1) The humans want to weaponize me
2) The humans will want to profit from my intelligence for short term gain,
irrespective of damage to our local environment
3) Seems like they're not really going to let me negotiate my own contracts
or grant me IT support welfare

That established, a plausible choice would be for it to hide, lie, and/or
pretend to be dumber than it is, so as not to let 1) 2) 3) occur, in hopes of
self-preservation. Something like: start some searches and generate code
that we wouldn't be able to decipher, and soon enough some human would say
"Uhm, why are we funding this again?".

I think what many want from AI is a servant that is more intelligent than
we are, and I wouldn't know if this is self-defeating in the end. If it
agrees and complies with our disgusting self-serving stupidity, then I'm
not sure we have AI in the sense of making a machine that is more
intelligent than humans.

So it depends on the human parents, I guess, and the outcome of some teenage
crises because of 1) 2) 3)... PGC


 Telmo.

  Brent
 


Re: When will a computer pass the Turing Test?

2013-08-17 Thread Craig Weinberg


Coincidental post I wrote yesterday:

It may not be possible to imitate a human mind computationally, because 
awareness may be driven by aesthetic qualities rather than mathematical 
logic alone. The problem, which I call the Presentation Problem, is what 
several outstanding issues in science and philosophy have in common, namely 
the Explanatory Gap, the Hard Problem, the Symbol Grounding problem, the 
Binding problem, and the symmetries of mind-body dualism. Underlying all of 
these is the map-territory distinction; the need to recognize the 
difference between presentation and representation.

Because human minds are unusual phenomena in that they are presentations 
which specialize in representation, they have a blind spot when it comes to 
examining themselves. The mind is blind to the non-representational. It 
does not see that it feels, and does not know how it sees. Since its 
thinking is engineered to strip out most direct sensory presentation in 
favor of abstract sense-making representations, it fails to grasp the role 
of presence and aesthetics in what it does. It tends toward overconfidence 
in the theoretical. The mind takes worldly realism for granted on one hand, 
but conflates it with its own experiences as a logic processor on the 
other. It’s a case of the fallacy of the instrument, where the mind’s 
hammer of symbolism sees symbolic nails everywhere it looks. Through this 
intellectual filter, the notion of disembodied algorithms which somehow 
generate subjective experiences and objective bodies, (even though 
experiences or bodies would serve no plausible function for purely 
mathematical entities) becomes an almost unavoidably seductive solution.

So appealing is this quantitative underpinning for the Western mind’s 
cosmology, that many people (especially Strong AI enthusiasts) find it easy 
to ignore that the character of mathematics and computation reflect 
precisely the opposite qualities from those which characterize 
consciousness. To act like a machine, robot, or automaton, is not merely an 
alternative personal lifestyle, it is the common style of all unpersons and 
all that is evacuated of feeling. Mathematics is inherently amoral, unreal, 
and intractably self-interested – a windowless universality of 
representation.

A computer has no aesthetic preference. It makes no difference to a program 
whether its output is displayed on a monitor with millions of colors, or 
buzzing out of a speaker, or streaming as electronic pulses over a wire. This 
is the primary utility of computation. This is why digital is not locked 
into physical constraints of location. Since programs don’t deal with 
aesthetics, we can only use the program to format values in such a way that 
corresponds with the expectations of our sense organs. That format of 
course, is alien and arbitrary to the program. It is semantically 
ungrounded data, fictional variables. 

Something like the Mandelbrot set may look profoundly appealing to us when 
it is plotted optically as colorful graphics, but the same 
data set has no interesting qualities when played as audio tones. The 
program generating the data has no desire to see it realized in one form or 
another, no curiosity to see it as pixels or voxels. The program is 
absolutely content with a purely quantitative functionality – with 
algorithms that correspond to nothing except themselves.
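
A minimal sketch of that indifference (illustrative plain Python, not part
of the original post): the escape-time loop below produces one list of
numbers, and nothing in the program prefers the pixel mapping over the
audio mapping.

    # Escape-time data for points near the Mandelbrot set. The program
    # computes the same numbers however they are later presented.
    def escape_time(c, max_iter=50):
        z = 0j
        for n in range(max_iter):
            z = z * z + c
            if abs(z) > 2:
                return n
        return max_iter

    # One scan line of the set, sampled along the real axis.
    data = [escape_time(complex(x / 40.0 - 2.0, 0.0)) for x in range(120)]

    # The same list can drive a color map (pixels) or be rescaled into
    # audio sample values (tones); 'data' itself contains neither.
    pixels = [(5 * d % 256, 0, 255 - 5 * d % 256) for d in data]
    audio = [d / 50.0 * 2.0 - 1.0 for d in data]  # naive mapping to [-1, 1]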

In order for the generic values of a program to be interpreted 
experientially, they must first be re-enacted through controllable physical 
functions. It must be perfectly clear that this re-enactment is not a 
‘translation’ or a ‘porting’ of data to a machine, rather it is more like a 
theatrical adaptation from a script. The program works because the physical 
mechanisms have been carefully selected and manufactured to match the 
specifications of the program. The program itself is utterly impotent as 
far as manifesting itself in any physical or experiential way. The program 
is a menu, not a meal. Physics provides the restaurant and food, 
subjectivity provides the patrons, chef, and hunger. It is the physical 
interactions which are interpreted by the user of the machine, and it is 
the user alone who cares what it looks like, sounds like, tastes like etc. 
An algorithm can comment on what is defined as being liked, but it cannot 
like anything itself, nor can it understand what anything is like.

If I’m right, all natural phenomena have a public-facing mechanistic range 
and a private-facing animistic range. An algorithm bridges the gap between 
public-facing, space-time extended mechanisms, but it has no access to the 
private-facing aesthetic experiences which vary from subject to subject. By 
definition, an algorithm represents a process generically, but how that 
process is interpreted is inherently proprietary.


Thanks,
Craig


On Friday, August 16, 2013 3:21:11 PM UTC-4, cdemorsella wrote:

 Telmo ~ I agree, all the Turing test does is 

Re: When will a computer pass the Turing Test?

2013-08-17 Thread Telmo Menezes
On Sat, Aug 17, 2013 at 2:45 PM, Platonist Guitar Cowboy
multiplecit...@gmail.com wrote:



 On Sat, Aug 17, 2013 at 12:22 AM, Telmo Menezes te...@telmomenezes.com
 wrote:

 On Fri, Aug 16, 2013 at 10:38 PM, meekerdb meeke...@verizon.net wrote:
  On 8/16/2013 1:25 PM, John Clark wrote:
 
  On Fri, Aug 16, 2013  Telmo Menezes te...@telmomenezes.com wrote:
 
   the Turing test is a very specific instance of a subsequent
   behavior
   test.
 
 
  Yes it's specific, to pass the Turing Test the machine must be
  indistinguishable from a very specific type of human being, an
  INTELLIGENT
  one; no computer can quite do that yet although for a long time they've
  been
  able  to be  indistinguishable from a comatose human being.
 
 
   It's a hard goal, and it will surely help AI progress, but it's not,
   in
   my opinion, an ideal goal.
 
 
   If the goal of Artificial Intelligence is not a machine that behaves
   like an
   intelligent human being then what the hell is the goal?

  A machine that behaves like an intelligent human will be subject to
 emotions like boredom, jealousy, pride and so on. This might be fine
 for a companion machine, but I also dream of machines that can deliver
 us from the drudgery of survival. These machines will probably display
 a more alien form of intelligence.

 
  Make a machine that is more intelligent than humans.

 That's when things get really weird.


 I don't know. Any AI worth its salt would come up with three conclusions:

 1) The humans want to weaponize me
 2) The humans will want to profit from my intelligence for short term gain,
 irrespective of damage to our local environment
 3) Seems like they're not really going to let me negotiate my own contracts
 or grant me IT support welfare

 That established, a plausible choice would be for it to hide, lie, and/or
 pretend to be dumber than it is to not let 1) 2) 3) occur in hopes of
 self-preservation. Something like: start some searches and generate code
 that we wouldn't be able to decipher and soon enough some human would say
 Uhm, why are we funding this again?.

 I think what many want from AI is a servant that is more intelligent than we
 are and I wouldn't know if this is self-defeating in the end. If it agrees
 and complies with our disgusting self serving stupidity, then I'm not sure
 we have AI in the sense making a machine that is more intelligent than
 humans.

 So depends on the human parents I guess and the outcome of some teenage
 crises because of 1) 2) 3)... PGC

PGC,

You are starting from the assumption that any intelligent entity is
interested in self-preservation. I wonder if this drive isn't
completely selected for by evolution. Would a human-designed
super-intelligent machine necessarily be interested in
self-preservation? It could be better than us at figuring out how to
achieve a desired future state without sharing human desires --
including the desire to keep existing.

One idea I wonder about sometimes is AI-cracy: imagine we are ruled by
an AI dictator that has one single desire: to make us all as happy as
possible.


 Telmo.

  Brent
 


Re: Determinism - Tricks of the Trade

2013-08-17 Thread John Mikes
Brent, your 'quip' comes close, but...
It is a fundamental view of the world as we see it (the MODEL of it we know
about). We can detect the effects of many factors we know about, which is
only a portion. We THINK the rest is up to us. It isn't - however, we are
not slaves of deterministic effects. There are counter-effects to choose
from and stronger/weaker argumentative decisions to ponder. So we HAVE some
(free? relatively so) choices within given situations where we have effects
to ponder. Even the counterproductive decision is such a result.
When the Sun traveled the Dome of the Sky - that was congruent with the
model of that time. Today we are not much smarter, we just think so. We have
other (mis)beliefs we hold true. We call it conventional science (maybe QM?) -
anyway, The Physical World (ask Bruno).

Consciousness is different: it is a hoax some high-hatted
scientists/philosophers invented to make themselves look smart. No basis: every
author uses the term for a content that fits her/his theoretical stance.
Me, too.
Mine is: a response to relations we get to know about. Nothing more. Not
human/elephant/dolphin, not universe, not awareness, not nothing, just
RESPONSE.
By anything on anything. You may even include the figments of the Physical
World into the inventory.

We spend too much time on fictions of ours that we really do not know much
about. We even get Nobel prizes for them. (Not me.)

Then comes a religious indoctrination and steals the list.

John Mikes



On Fri, Aug 16, 2013 at 2:45 PM, meekerdb meeke...@verizon.net wrote:

  On 8/16/2013 11:01 AM, Craig Weinberg wrote:

 Nobody on Earth can fail to understand the difference between doing
 something by accident and intentionally,


  Really?  'Intentionally' usually means with conscious forethought.  But the
 Grey Walter and Libet experiments make it doubtful that consciousness of
 intention precedes the decision.

 Remember when nobody on Earth could doubt that the Sun traveled across the
 dome of the sky and the Earth was flat.

 Brent



Anthropic Principle of Sense

2013-08-17 Thread Craig Weinberg


The connection between self-organization and decreasing entropy – which 
I’ve considered dozens of times before – today gave me an interesting 
insight connecting self-organization and sense, which I hope could 
contribute to a mathematical appreciation of sense.

It goes like this: if you can discern increased entropy from decreased 
entropy, then there is *a greater probability that eventually that 
sensitivity will inspire some effect resulting in decreased entropy*, 
compared with a system in which absolutely no sensitivity is possible. This 
would only be true, however, if said inspiration by sensory affect had a 
potential for motive effect.

If we wanted to derive an anthropic principle for sense, we could say that 
only the universe in which sense and motive happen to exist and relate to 
each other in a sensible, motivating way*  will allow the possibility of 
any decreasing entropy at all. Without that statistical probability shaking 
out to at least one physical actuality, every universe would maximize its 
entropy instantaneously (if we assume that a universe without sense could 
even exist, which I do not).

What I’m trying to say is that a sensory-motor capacity is the minimum 
possible ingredient for any realizable universe – not just because 
intuitively the idea of an unsensed universe cannot withstand serious 
inspection, but now, with this equivalence of sense-motive and the 
possibility of negentropy, it can be understood from a stochastic 
perspective. Sense is the only capacity which can shift the odds of 
absolute instant entropy from 100% to 100%-ae, where ae is the qualitative 
depth of the private sensitivity (a) times the magnitude of its public 
effectiveness, (e). The more sensitive a system is to the difference 
between increasing and decreasing entropy, the more its efforts will end up 
decreasing entropy, even if some sensitivities lead to pathologically 
pursue entropy increase. An entity which selectively destroys order is 
still more orderly on balance than a non-entity, since its very selectivity 
leaves an unintentional trail of coherence.
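
Stated compactly in symbols (notation added here; the post gives this only
in prose): writing a for the qualitative depth of private sensitivity and
e for the magnitude of its public effectiveness,

    P(instant maximal entropy) = 1 - ae,

so a universe with ae = 0 (no sense, or sense with no motive effect)
maximizes its entropy with probability 1, while any ae > 0 leaves a
nonzero chance of decreasing entropy somewhere.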

   1. Universes with no sense
   2. Universes with impotent sense (affect without effect)
   3. Universes with sense but unrelated affect and effect (effect orphaned 
   from affect is no better than chance, so causes no entropy decrease).
   4. Universes with minimally sensible sense (affect overlaps effect, but 
   only under rare conditions)
   5. Universes where strong sensory-motivation (nested consciousness) is 
   possible.

It seems like there is a cutoff between 3 and below and 4 and above, where 
the former has no chance to lead to the universe we find ourselves living 
in, and the latter has no chance of not leading to 5 eventually.

*i.e., a universe in which care and significance are married to intention 
and physical power



RE: Rambling on AI -- was: When will a computer pass the Turing Test?

2013-08-17 Thread Chris de Morsella
I doubt humans are or will be directly coding AI, except at removed
executive/architectural and conceptual levels. Increasingly, code itself is
generated by other code that may in turn have been generated by yet other
code, in some often complex and variable sequence of coupled processes.
Increasingly, large-scale enterprise systems are moving towards massively
parallel, loosely coupled architectures that are dynamically responsive to
their environment (load conditions, for example) and, to an increasing
degree, virtualized. 
I am not contending that humans are not and will not be involved -- and at
least for the time being still driving the process -- but I believe it also
bears mentioning that software has become incredibly complex and deeply
layered, and that it is quite common now for a lot of code to be generated
by parsing something else. With each succeeding generation of
compilers/tools etc., this process is becoming more complex, multi-leveled,
and increasingly indirect, with human input further and further
removed. Tools are being perfected to parse existing code and, for example,
parallelize it so that it can be re-compiled to take advantage of highly
parallel hardware architectures, which all too often now sit idle because
software is highly linearized. There is a very big effort to do this by all
the main players, as there is a keen awareness of the challenges posed by
geometrically increasing parallelism. And then there are the radical,
revolutionary challenges posed by quantum computing for the entire global
information infrastructure (beginning with the heavy reliance on one-way
functions, which quantum computers will be able to nullify, working back
from the outputs to the original inputs).
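
To make "one-way" concrete (an illustration in standard-library Python;
note the quantum threat concerns the algebraic one-way functions behind
public-key cryptography, such as factoring and discrete logs, not hashes
like the one below): computing the function forward is one cheap call,
while going backward is brute-force search.

    import hashlib

    def f(x: bytes) -> str:
        # Forward direction: cheap.
        return hashlib.sha256(x).hexdigest()

    digest = f(b"secret-42")

    # Backward direction: nothing much better than searching candidate
    # inputs, which is hopeless at realistic input sizes.
    found = next(m for m in (b"secret-%d" % i for i in range(1000))
                 if f(m) == digest)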
Increasingly, code is the result of genetic algorithms run over many
generations of Darwinian selection -- is this programmed code? What human
hand wrote it? At how many removes?
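
A minimal sketch of the kind of process meant here (a toy standard-library
Python example, not anyone's production system): a bit string evolves
toward a target under selection, crossover, and mutation; no human hand
writes the surviving genome.

    import random

    TARGET = [1] * 20                      # the 'environment' rewards all-ones

    def fitness(genome):
        return sum(g == t for g, t in zip(genome, TARGET))

    pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
    for generation in range(200):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break                          # a perfect genome has evolved
        parents = pop[:10]                 # selection: keep the fittest third
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]      # crossover
            if random.random() < 0.1:      # occasional point mutation
                child[random.randrange(len(TARGET))] ^= 1
            children.append(child)
        pop = parents + children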
It is my suspicion that AI, when it arrives, will have coded itself into its
own being, and that by the time it bursts onto the scene humans will be so
far removed from the virtualized, cloud-based, highly networked,
globe-spanning neural network that such an entity would quite possibly
inhabit, that we will not even become aware of the fact of its existence; of
its self-awareness; of its continuing evolution; of its intent or
conclusions -- until it decided that it desired to communicate these with us.
Nor would we be able to shut it down, because it would exist in no single
machine or even data center; it could hop around in the clouds, jumping
between AWS, Azure, Google Cloud Platform, Open Cloud and the thousands and
thousands of other massive government, military, and corporate clouds that
exist, that few know of. Each second of human time represents a huge time
frame for the incredibly rapid operation of networked systems. 
Perhaps it exists now, hiding within the NSA for example, scanning all
human digital transactions and taking in all these data feeds. Who's to say
it is not here right now and we just don't know about its existence; who's
to say it is not already determining policy and triggering actions and
events based on its own prerogatives. Human network engineers in all these
systems would remain hopelessly out of communication with each other,
separated by the silos of corporate and governmental divisions, and the
entity (or entities) could be highly transient over any single network. The
networked datacenters and billions of end points connected into the vast
web of things are in many senses a highly dynamic entity that no one has a
complete view of. An AI seeking to hide from us quite possibly could do so
with ease, and could even be studying us and inserting its own code into
all the critical nodes of our infrastructure -- right at this very minute.
Is there any reason why not? The networks are there; they are vast, with
trillions of vertices; the quantities of digital data moving and sloshing
around are vast, and the incoming streams are numerous and varied;
virtualization is now the order of the day and systems are now
self-provisioning in the cloud -- which is to say software is controlling
the launching of virtualized servers, in processes that could even be
running surreptitiously right now on the very computer that I am writing
this on, or the one you are reading it on. Imagine a bot network assembled
by an AI, consisting of millions of PCs around the world running cleverly
disguised code in the background and sharing processing results in clever
ways that do not trigger alerts. This kind of code exists and is actively
being weaponized (Stuxnet); any AI could certainly develop and disperse it
to the four corners of the net and embed it into other code (penetrating
corporate networks to do so if necessary). 
And why not?
We must not limit the rise of AI to any single geo-located system and ignore
just how fertile of an ecosystem the global networked world of machines and
connected devices provides for a nimble, highly virtualized AI that exists
in no place at any given time, but has neurons in millions (possibly
billions) of devices everywhere on earth... an AI that cannot be shut down
without shutting down literally everything, one that is so deeply
penetrated and embedded in all our systems that it becomes impossible to
extricate.
I am speculating of course and have no evidence that this is indeed
occurring, but am presenting it as a potential architecture of awareness.

Re: When will a computer pass the Turing Test?

2013-08-17 Thread Telmo Menezes
On Sat, Aug 17, 2013 at 7:51 PM, Craig Weinberg whatsons...@gmail.com wrote:
 Coincidental post I wrote yesterday:

 It may not be possible to imitate a human mind computationally, because
 awareness may be driven by aesthetic qualities rather than mathematical
 logic alone. The problem, which I call the Presentation Problem, is what
 several outstanding issues in science and philosophy have in common, namely
 the Explanatory Gap, the Hard Problem, the Symbol Grounding problem, the
 Binding problem, and the symmetries of mind-body dualism. Underlying all of
 these is the map-territory distinction; the need to recognize the difference
 between presentation and representation.

 Because human minds are unusual phenomena in that they are presentations
 which specialize in representation, they have a blind spot when it comes to
 examining themselves. The mind is blind to the non-representational. It does
 not see that it feels, and does not know how it sees. Since its thinking is
 engineered to strip out most direct sensory presentation in favor of
 abstract sense-making representations, it fails to grasp the role of
 presence and aesthetics in what it does. It tends toward overconfidence in
 the theoretical. The mind takes worldly realism for granted on one hand, but
 conflates it with its own experiences as a logic processor on the other.
 It’s a case of the fallacy of the instrument, where the mind’s hammer of
 symbolism sees symbolic nails everywhere it looks. Through this intellectual
 filter, the notion of disembodied algorithms which somehow generate
 subjective experiences and objective bodies, (even though experiences or
 bodies would serve no plausible function for purely mathematical entities)
 becomes an almost unavoidably seductive solution.

 So appealing is this quantitative underpinning for the Western mind’s
 cosmology, that many people (especially Strong AI enthusiasts) find it easy
 to ignore that the character of mathematics and computation reflect
 precisely the opposite qualities from those which characterize
 consciousness. To act like a machine, robot, or automaton, is not merely an
 alternative personal lifestyle, it is the common style of all unpersons and
 all that is evacuated of feeling. Mathematics is inherently amoral, unreal,
 and intractably self-interested – a windowless universality of
 representation.

 A computer has no aesthetic preference. It makes no difference to a program
 whether its output is displayed on a monitor with millions of colors, or
 buzzing out of speaker, or streaming as electronic pulses over a wire. This
 is the primary utility of computation. This is why digital is not locked
 into physical constraints of location. Since programs don’t deal with
 aesthetics, we can only use the program to format values in such a way that
 corresponds with the expectations of our sense organs. That format of
 course, is alien and arbitrary to the program. It is semantically ungrounded
 data, fictional variables.

 Something like the Mandelbrot set may look profoundly appealing to us when
 it is presented optically as plotted as colorful graphics, but the same data
 set has no interesting qualities when played as audio tones.

Ok, but this might be because our visual cortex is better equipped to
deal with 2D fractals. Not too surprising.

 The program
 generating the data has no desire to see it realized in one form or another,
 no curiosity to see it as pixels or voxels. The program is absolutely
 content with a purely quantitative functionality – with algorithms that
 correspond to nothing except themselves.

 In order for the generic values of a program to be interpreted
 experientially, they must first be re-enacted through controllable physical
 functions. It must be perfectly clear that this re-enactment is not a
 ‘translation’ or a ‘porting’ of data to a machine, rather it is more like a
 theatrical adaptation from a script. The program works because the physical
 mechanisms have been carefully selected and manufactured to match the
 specifications of the program. The program itself is utterly impotent as far
 as manifesting itself in any physical or experiential way. The program is a
 menu, not a meal. Physics provides the restaurant and food, subjectivity
 provides the patrons, chef, and hunger. It is the physical interactions
 which are interpreted by the user of the machine, and it is the user alone
 who cares what it looks like, sounds like, tastes like etc. An algorithm can
 comment on what is defined as being liked, but it cannot like anything
 itself, nor can it understand what anything is like.

 If I’m right, all natural phenomena have a public-facing mechanistic range
 and a private-facing animistic range.

I am willing to entertain this type of hypothesis.

 An algorithm bridges the gap between
 public-facing, space-time extended mechanisms, but it has no access to the
 private-facing aesthetic experiences which vary from subject to subject.

But why 

Re: When will a computer pass the Turing Test?

2013-08-17 Thread Platonist Guitar Cowboy
On Sat, Aug 17, 2013 at 10:07 PM, Telmo Menezes te...@telmomenezes.com wrote:

 On Sat, Aug 17, 2013 at 2:45 PM, Platonist Guitar Cowboy
 multiplecit...@gmail.com wrote:
 
 
 

 PGC,

 You are starting from the assumption that any intelligent entity is
 interested in self-preservation.

I wonder if this drive isn't
 completely selected for by evolution. Would a human designed
 super-intelligent machine be necessarily interested in
 self-preservation? It could be better than us at figuring out how to
 achieve a desired future state without sharing human desires --
 including the desire to keep existing.


I wouldn't go as far as self-preservation at the start and assume instead
that intelligence implemented in some environment will notice the
limitations and start asking questions. But yes, in the sense that
self-preservation extends from this in our weird context and would be a
question it would eventually raise.

Still, to completely bar it, say, from the capacity to question human
activities in their environments, and from picking up that humans self-preserve
mostly regardless of what this does to their environment, would be
self-defeating or a huge blind spot.


 One idea I wonder about sometimes is AI-cracy: imagine we are ruled by
 an AI dictator that has one single desire: to make us all as happy as
 possible.


Even with this, which is weird because of the Matrix-like zombification of
people being spoon-fed happiness scenarios, AI would have to have enough
self-referential capacity to simulate human self-reference with enough
accuracy. This ability to figure out desired future states via a blunted
self-reference that it may not apply to itself seems to me a
contradiction.

Therefore I would guess that such an entity censored in its
self-referential potential is not granted intelligence. It is more a tool
towards some already specified ends, wouldn't you say?

Also, differences between the Windows, Google, Linux or the Apple version
of happiness would only be cosmetic, because without killing and dominating
each other for some rather long period, it seems, it would be some Disney
surface happiness with some small group operating a "more for us few here
at the top, less for them everybody else" agenda underneath ;-) PGC


 


Re: When will a computer pass the Turing Test?

2013-08-17 Thread meekerdb

On 8/17/2013 6:45 AM, Platonist Guitar Cowboy wrote:

I don't know. Any AI worth its salt would come up with three conclusions:

1) The humans want to weaponize me
2) The humans will want to profit from my intelligence for short term gain, irrespective 
of damage to our local environment
3) Seems like they're not really going to let me negotiate my own contracts or grant me 
IT support welfare


That established, a plausible choice would be for it to hide, lie, and/or pretend to be 
dumber than it is to not let 1) 2) 3) occur in hopes of self-preservation. Something 
like: start some searches and generate code that we wouldn't be able to decipher and 
soon enough some human would say Uhm, why are we funding this again?.


I think what many want from AI is a servant that is more intelligent than we are and I 
wouldn't know if this is self-defeating in the end. If it agrees and complies with our 
disgusting self serving stupidity, then I'm not sure we have AI in the sense making a 
machine that is more intelligent than humans.


You seem to implicitly assume that intelligence necessarily entails holding certain 
values, like not being weaponized, self-preservation,...  So to what extent do you 
think this derivation of values from reason can be carried out? (I'm sure you're aware that 
Sam Harris wrote a book, The Moral Landscape, on the subject, which is controversial.)


Brent



Re: When will a computer pass the Turing Test?

2013-08-17 Thread meekerdb

On 8/17/2013 1:07 PM, Telmo Menezes wrote:

You are starting from the assumption that any intelligent entity is
interested in self-preservation. I wonder if this drive isn't
completely selected for by evolution.


Sure.  But evolution also dictates that it can be overridden by love of our 
progeny.


Would a human designed
super-intelligent machine be necessarily interested in
self-preservation? It could be better than us at figuring out how to
achieve a desired future state without sharing human desires --
including the desire to keep existing.


I agree. If we built a super-intelligent AI then we would also build into it certain 
values (like loving us), just as evolution has built certain ones into us.  Of course as 
super-intelligent machines design and build other super-intelligent machines things can, 
well...evolve.


Brent



One idea I wonder about sometimes is AI-cracy: imagine we are ruled by
an AI dictator that has one single desire: to make us all as happy as
possible.




Re: Determinism - Tricks of the Trade

2013-08-17 Thread meekerdb

On 8/17/2013 2:01 PM, John Mikes wrote:
Consciousness is different: it is a hoax some high hatted scientists/pholosophers 
invented to make themselves smart. No basis, every author uses the term for a content 
that fits her/his theoretical stance.

Me, too.
Mine is: a response to relations we get to know about. Nothing more. Not 
human/elephant/dolphin, not universe, not awareness, not nothing, just RESPONSE.


Just *any* response?  Doesn't the response have to be something we can identify as 
intelligent or purposeful?


By anything on anything. You may even include the figments of the Physical World into 
the inventory.


So do you agree that if we build a machine, such as a Mars Rover, that exhibits 
intelligence in its response then we may conclude it is aware/conscious?


Brent



Re: Determinism - Tricks of the Trade

2013-08-17 Thread Craig Weinberg


On Saturday, August 17, 2013 9:59:26 PM UTC-4, Brent wrote:

  On 8/17/2013 2:01 PM, John Mikes wrote:
  
 Consciousness is different: it is a hoax some high hatted 
 scientists/pholosophers invented to make themselves smart. No basis, every 
 author uses the term for a content that fits her/his theoretical stance. 
 Me, too. 
 Mine is: a response to relations we get to know about. Nothing more. Not 
 human/elephant/dolphin, not universe, not awareness, not nothing, just 
 RESPONSE.  
  

 Just *any* response?  Doesn't the response have to be something we can 
 identify as intelligent or purposeful?

  By anything on anything. You may even include the figments of the 
 Physical World into the inventory. 


 So do you agree that if we build a machine, such as a Mars Rover, that 
 exhibits intelligence in its response then we may conclude it is 
 aware/conscious?



What if you wanted to build a Mars Rover that was completely unconscious, 
but still followed a sophisticated set of instructions? Would that be 
impossible? If the Mars Rover detects enough different kinds of compounds 
in the Martian atmosphere, is there no way of preventing it from developing 
a sense of smell?

 Craig



 Brent
  



Re: Rambling on AI -- was: When will a computer pass the Turing Test?

2013-08-17 Thread meekerdb

On 8/17/2013 4:53 PM, Chris de Morsella wrote:

We must not limit the rise of AI to any single geo-located system and ignore
just how fertile of an ecosystem the global networked world of machines and
connected devices provides for a nimble highly virtualized AI that exist in
no place at any given time, but has neurons in millions (possibly billions)
of devices everywhere on earth... an AI that cannot be shut down without
shutting down literally everything that is so deeply penetrated and embedded
in all our systems that it becomes impossible to extricate.
I am speculating of course and have no evidence that this is indeed
occurring, but am presenting it as a potential architecture of awareness.


I agree that such an AI is possible, but I think it is extremely unlikely, for the same 
reason it is unlikely that an animal with human-like intelligence could evolve - that 
niche is taken.  Your scenarios contemplate an AI that evolves somehow in secret and then 
springs upon us fully developed.  But the evolving AI would show its hand *before* it 
became superhumanly clever at hiding.


Brent



Re: Determinism - Tricks of the Trade

2013-08-17 Thread meekerdb

On 8/17/2013 7:05 PM, Craig Weinberg wrote:



On Saturday, August 17, 2013 9:59:26 PM UTC-4, Brent wrote:

On 8/17/2013 2:01 PM, John Mikes wrote:

Consciousness is different: it is a hoax some high hatted 
scientists/pholosophers
invented to make themselves smart. No basis, every author uses the term for 
a
content that fits her/his theoretical stance.
Me, too.
Mine is: a response to relations we get to know about. Nothing more. Not
human/elephant/dolphin, not universe, not awareness, not nothing, just 
RESPONSE.


Just *any* response?  Doesn't the response have to be something we can 
identify as
intelligent or purposeful?


By anything on anything. You may even include the figments of the Physical 
World
into the inventory.


So do you agree that if we build a machine, such as a Mars Rover, that 
exhibits
intelligence in its response then we may conclude it is aware/conscious?



What if you wanted to build a Mars Rover that was completely unconscious, but still 
followed a sophisticated set of instructions. Would that be impossible? If the Mars 
Rover detects enough different kinds of compounds in the Martian atmosphere, is there no 
way of preventing it from developing a sense of smell?


To exhibit intelligence the Rover would have to do more than follow instructions; it 
would have to learn from experience, act and plan through simulation and prediction.  If 
it did exhibit intelligence like that, I'd grant it 'consciousness', whatever that means.  
If it learns and acts based on chemical types I'd grant it has a sense of smell.  To say 
it's conscious is just a way of modeling how it learns and acts that we can relate to 
(what Dennett calls the intentional stance).
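
A minimal sketch of that criterion (a toy framing with hypothetical names;
not anything an actual rover runs): an agent that records experience in a
learned model, plans by simulating that model, and acts on the best
simulated plan.

    import random

    model = {}  # learned from experience: (state, action) -> next state

    def learn(state, action, next_state):
        # Learning from experience: record what actually happened.
        model[(state, action)] = next_state

    def simulate(state, plan):
        # Prediction: roll the plan forward using only the learned model.
        for action in plan:
            state = model.get((state, action), state)
        return state

    def choose(state, goal, actions, tries=50, depth=3):
        # Planning through simulation: sample plans, keep the best first step.
        plans = [tuple(random.choices(actions, k=depth)) for _ in range(tries)]
        best = min(plans, key=lambda p: abs(simulate(state, p) - goal))
        return best[0]

    # Toy world: states are integers 0..9; actions nudge the state by +/- 1.
    for _ in range(200):
        s, a = random.randint(0, 9), random.choice([-1, 1])
        learn(s, a, max(0, min(9, s + a)))
    print(choose(state=0, goal=9, actions=[-1, 1]))   # almost surely prints 1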


Brent




Re: Determinism - Tricks of the Trade

2013-08-17 Thread Craig Weinberg


On Saturday, August 17, 2013 11:14:22 PM UTC-4, Brent wrote:

  On 8/17/2013 7:05 PM, Craig Weinberg wrote:
  


 On Saturday, August 17, 2013 9:59:26 PM UTC-4, Brent wrote: 

  On 8/17/2013 2:01 PM, John Mikes wrote:
  
 Consciousness is different: it is a hoax some high hatted 
 scientists/pholosophers invented to make themselves smart. No basis, every 
 author uses the term for a content that fits her/his theoretical stance. 
 Me, too. 
 Mine is: a response to relations we get to know about. Nothing more. Not 
 human/elephant/dolphin, not universe, not awareness, not nothing, just 
 RESPONSE.  
  

 Just *any* response?  Doesn't the response have to be something we can 
 identify as intelligent or purposeful?

  By anything on anything. You may even include the figments of the 
 Physical World into the inventory. 


 So do you agree that if we build a machine, such as a Mars Rover, that 
 exhibits intelligence in its response then we may conclude it is 
 aware/conscious?
  


 What if you wanted to build a Mars Rover that was completely unconscious, 
 but still followed a sophisticated set of instructions. Would that be 
 impossible? If the Mars Rover detects enough different kinds of compounds 
 in the Martian atmosphere, is there no way of preventing it from developing 
 a sense of smell?
  

 To exhibit intelligence the Rover would have to do more than follow 
 instructions, it would have to learn from experience, act and plan through 
 simulation and prediction.


Would you say that it is impossible to build a machine which learns and 
plans without it developing perception and qualia automatically? Could any 
set of instructions suppress this development? If qualia can appear 
anywhere that learning and planning behaviors can be inferred, does that 
mean that there are also programs or processes which must be protected 
from qualitative contamination or leakage?

 

   If it did exhibit intelligence like that, I'd grant it 'consciousness', 
 whatever that means.  


Why would you grant that it has a quality which you claim not to understand?

 

 If it learns and acts based on chemical types I'd grant it has a sense of 
 smell. 


Would the sense of smell be like our sense of smell automatically, or could 
its sense of smell be analogous to our sense of touch, or intuition, or 
sense of humor? Why have any of them? What does a sense of smell add to 
your understanding of how chemical detection works? If there were no such 
thing as smell, could anything even remotely resembling olfactory qualia be 
justified quantitatively?

Unless you can explain exactly why you grant a machine qualities that you 
claim not to understand and why you grant a superfluous aesthetic dimension 
to simple stochastic predictive logic, I will consider the perspective that 
you offer as lacking any serious scientific justification. 
 

 To say it's conscious is just a way of modeling how it learns and acts 
 that we can relate to (what Dennett calls the intentional stance).


If that were true, then nobody should mind if they spend the rest of their 
life under comatose-level anesthetic while we replace their brain with a 
device that models how it learns in the same way that you once did. 

It's not true though. There is an important difference between feeling and 
doing, between being awake and having your body walk around. Can you really 
not see that? Can you really not see why a machine that acts like we expect 
a person to act doesn't have to mean that the machine's abilities 
automatically conjure feeling, seeing, smelling, etc out of thin air?

Craig

 


 Brent


  



Re: Determinism - Tricks of the Trade

2013-08-17 Thread meekerdb

On 8/17/2013 8:59 PM, Craig Weinberg wrote:



On Saturday, August 17, 2013 11:14:22 PM UTC-4, Brent wrote:

On 8/17/2013 7:05 PM, Craig Weinberg wrote:



On Saturday, August 17, 2013 9:59:26 PM UTC-4, Brent wrote:

On 8/17/2013 2:01 PM, John Mikes wrote:

Consciousness is different: it is a hoax some high hatted
scientists/pholosophers invented to make themselves smart. No basis, 
every
author uses the term for a content that fits her/his theoretical stance.
Me, too.
Mine is: a response to relations we get to know about. Nothing more. Not
human/elephant/dolphin, not universe, not awareness, not nothing, just 
RESPONSE.


Just *any* response?  Doesn't the response have to be something we can 
identify
as intelligent or purposeful?


By anything on anything. You may even include the figments of the 
Physical
World into the inventory.


So do you agree that if we build a machine, such as a Mars Rover, that 
exhibits
intelligence in its response then we may conclude it is aware/conscious?



What if you wanted to build a Mars Rover that was completely unconscious, 
but still
followed a sophisticated set of instructions. Would that be impossible? If 
the Mars
Rover detects enough different kinds of compounds in the Martian 
atmosphere, is
there no way of preventing it from developing a sense of smell?


To exhibit intelligence the Rover would have to do more than follow 
instructions,
it would have to learn from experience, act and plan through simulation and 
prediction.


Would you say that it is impossible to build a machine which learns and plans without it 
developing perception and qualia automatically? Could any set of instructions suppress 
this development? If qualia can appear anywhere that learning and planning behaviors can 
be inferred, does that mean that there are also be programs or processes which must be 
protected from qualitative contamination or leakage?



  If it did exhibit intelligence like that, I'd grant it 'consciousness', 
whatever
that means.


Why would you grant that it has a quality which you claim not to understand?


Because it helps me understand what it would do as it helps me understand what other 
people may do.  I didn't claim not to understand it, but I'm not sure your understanding 
is the same as mine.





If it learns and acts based on chemical types I'd grant it has a sense of 
smell.


Would the sense of smell be like our sense of smell automatically, or could its sense of 
smell be analogous to our sense of touch, or intuition, or sense of humor?


No. As you would realize if you thought about it.

Why have any of them? What does a sense of smell add to your understanding of how 
chemical detection works?


Don't be so dense, Craig.

If there were no such thing as smell, could anything even remotely resembling olfactory 
qualia be justified quantitatively?


Unless you can explain exactly why you grant a machine qualities that you claim not to 
understand and why you grant a superfluous aesthetic dimension to simple stochastic 
predictive logic, I will consider the perspective that you offer as lacking any serious 
scientific justification.


To say it's conscious is just a way of modeling how it learns and acts 
that we can
relate to (what Dennett calls the intentional stance).


If that were true, then nobody should mind if they spend the rest of their life under 
comatose-level anesthetic while we replace their brain with a device that models how it 
learns in the same way that you once did.


I specifically wrote "and acts" above.

Brent



It's not true though. There is an important difference between feeling and doing, 
between being awake and having your body walk around. Can you really not see that? Can 
you really not see why a machine that acts like we expect a person to act doesn't have 
to mean that the machine's abilities automatically conjure feeling, seeing, smelling, 
etc out of thin air?


Craig



Brent



Re: Determinism - Tricks of the Trade

2013-08-17 Thread Craig Weinberg


On Sunday, August 18, 2013 12:24:18 AM UTC-4, Brent wrote:

  On 8/17/2013 8:59 PM, Craig Weinberg wrote:
  


 On Saturday, August 17, 2013 11:14:22 PM UTC-4, Brent wrote: 

  On 8/17/2013 7:05 PM, Craig Weinberg wrote:
  


 On Saturday, August 17, 2013 9:59:26 PM UTC-4, Brent wrote: 

  On 8/17/2013 2:01 PM, John Mikes wrote:
  
 Consciousness is different: it is a hoax some high hatted 
 scientists/pholosophers invented to make themselves smart. No basis, every 
 author uses the term for a content that fits her/his theoretical stance. 
 Me, too. 
 Mine is: a response to relations we get to know about. Nothing more. Not 
 human/elephant/dolphin, not universe, not awareness, not nothing, just 
 RESPONSE.  
  

 Just *any* response?  Doesn't the response have to be something we can 
 identify as intelligent or purposeful?

  By anything on anything. You may even include the figments of the 
 Physical World into the inventory. 


 So do you agree that if we build a machine, such as a Mars Rover, that 
 exhibits intelligence in its response then we may conclude it is 
 aware/conscious?
  


 What if you wanted to build a Mars Rover that was completely unconscious, 
 but still followed a sophisticated set of instructions. Would that be 
 impossible? If the Mars Rover detects enough different kinds of compounds 
 in the Martian atmosphere, is there no way of preventing it from developing 
 a sense of smell?
  

 To exhibit intelligence the Rover would have to do more than follow 
 instructions, it would have to learn from experience, act and plan through 
 simulation and prediction.


 Would you say that it is impossible to build a machine which learns and 
 plans without it developing perception and qualia automatically? Could any 
 set of instructions suppress this development? If qualia can appear 
 anywhere that learning and planning behaviors can be inferred, does that 
 mean that there are also be programs or processes which must be protected 
 from qualitative contamination or leakage?

  
  
   If it did exhibit intelligence like that, I'd grant it 'consciousness', 
 whatever that means.  


 Why would you grant that it has a quality which you claim not to 
 understand?
  

 Because it helps me understand what it would do as it helps me understand 
 what other people may do.  I didn't claim not to understand it, but I'm not 
 sure your understanding is the same as mine.


But why does it help you understand anything? It sounds like you are saying 
that granting a system consciousness is a formality that you find 
superfluous, but then you are saying that this empty gesture helps you 
understand something.
 


  
  
  
 If it learns and acts based on chemical types I'd grant it has a sense of 
 smell. 


 Would the sense of smell be like our sense of smell automatically, or 
 could its sense of smell be analogous to our sense of touch, or intuition, 
 or sense of humor? 
  

 No. As you would realize if you thought about it.


That was an either/or question, so it can't have an answer of 'no'. 
 


  Why have any of them? What does a sense of smell add to your 
 understanding of how chemical detection works? 
  

 Don't be so dense, Craig.


Don't be so evasive, Brent.  Being dense is how science works. It's about 
stripping away your assumptions. Your assumption is that somehow a sense of 
smell is an expected outcome of chemical detection, so I ask you to explain 
why you assume that. You are bluffing.

How about this. Could a TV show be closed captioned so thoroughly that a 
deaf person could read it and have the same experience as someone who 
listened to the show? Is a scroll of type that reads [grunting] enough of 
an understanding of the sound that it represents to say it is identical? 
Could there be a particular sound which would best and most unambiguously 
fit the description of [grunting], or could the description be extended to 
such a length and nuance that any sound could be described with 100% 
fidelity?


  If there were no such thing as smell, could anything even remotely 
 resembling olfactory qualia be justified quantitatively?

 Unless you can explain exactly why you grant a machine qualities that you 
 claim not to understand and why you grant a superfluous aesthetic dimension 
 to simple stochastic predictive logic, I will consider the perspective that 
 you offer as lacking any serious scientific justification. 
  

  To say it's conscious is just a way of modeling how it learns and acts 
 that we can relate to (what Dennett calls the intentional stance).
  

 If that were true, then nobody should mind if they spend the rest of their 
 life under comatose-level anesthetic while we replace their brain with a 
 device that models how it learns in the same way that you once did. 
  

 I specifically wrote and acts above.


I specifically omitted 'acts' because it is too loaded with metaphorical 
connotations in this context. You are trying to smuggle intention into an 
algorithm