Hello, Everythinglisters!

The text below is a philosophical essay on what qualia may represent.
I doubt you'll manage to finish reading it (it's rather long, and
translated from another language), but if you do I'll be happy to hear
your opinion about what it says.

Thanks!

<<<A simpler model of the world with different points of view>>>

It can be quite amusing to watch the qualophiles' self-confidence,
mutual assurance and agreement when they talk about something defined
a priori as inherently private and inaccessible to third-party
analysis (i.e. qualia). They somehow agree on what they are discussing,
yet as far as I have been able to tell they display not the slightest
shred of evidence that they believe there will ever be a theory that
could bridge the gap between the ineffable what-it-is-likeness (WIIL)
of personal experience and the scientific, objective descriptions of
reality. They don't even try to brainstorm ideas for such a theory.
How are we to explain this what-it-is-likeness if we can't subject it
to what science has always been and will always be: third-party
analysis? So here it is: qualia, one of the last remaining unresolved
quandaries for us to splinter and raise onto the pedestals of science.
But we must stop, the qualophiles say, because... "Because what?" I
ask. "Because of the what-it-is-likeness of qualia," most of them will
respond. And believe me, that is the whole argument from which they
sprout all the other awkward deductions and misconstrued axioms, if we
are to succinctly summarize their rigorous, inner-gut, "aprioristic
analysis". I'll try to expose the absurdity of their stance by making
some analogies while telling the story of how architects and designers
build 3D models of reality with the help of 3D modeling software.

The 1s and 0s that make up the wide variety of 3D design software on
the market today are all we need in order to bring to virtual reality
whatever model of our real world we desire. Those 1s and 0s, which are
by the way just as physical as the neurons in your brain though not of
the same assortment (see below), are arranged into sub-modules that
are further integrated into the different parts and subsystems of the
computer on which the software runs, so their arrangement is obviously
far from random. One needs to adopt the intentional stance in order to
understand the intricacies, details and roles that these particular
modules play in such large and complex computer programs.

If you had the desire, you could bring to virtual reality any city in
the world. Take, for example, the city of Rome. Every monument,
restaurant, hospital, park, mall and police department can be
accounted for in a detailed virtual replica that we can build using
one of these 3D modeling programs. Every car, plane and boat, even the
people and their biomechanics, can be represented so well that we
could easily mistake the computer model for the real thing. Here we
are, looking at the monitor screen from our God-like point of view.
All the points, lines, 2D planes and 3D objects in this digital
presentation have their properties and behavior governed by simulated
laws of physics identical to the laws encountered in our real world.
These objects and the laws that govern them are 100% traceable to the
1s and 0s, that is, to the voltages and transistors on the silicon
chips that make up the computer on which the software runs. We have a
100% description of the city of Rome in our computer, in the sense
that there is no object in the model about which we can't say all
there is to say, including the movement of the points, lines and
planes which compose it, because they are all accounted for in the 0s
and 1s saved on the hard drive and then loaded into the RAM and video
RAM of our state-of-the-art graphics card. Let's call that
perspective, the perspective of knowing all there is to know about the
3D model, the third-person perspective (the perspective described by
using only third-party, objective data). What's interesting is that
all of these 3D design programs have the option to add cameras to
whatever world model you are currently developing. Cameras present a
scene from a particular point of view (POV, or point of reference,
call it what you will). Camera objects simulate still-image,
motion-picture or video cameras in the real world and have the same
use here. The benefit of cameras is that you can position them
anywhere within a scene to offer a custom view. You can think of such
a camera not only as a point of view but also as an area point of view
(all the light reflected from the objects in your particular world
model enters the lens of the camera), but for our particular mental
exercise this doesn't matter. What you need to know is that our
virtual cameras can perfectly simulate real-world cameras: all the
optical science of the lens is integrated into the program, making the
simulated models behave like the ones found in real life. We'll use
POV and camera-point-of-view (CPOV) interchangeably from now on; they
mean the same thing in the logic of our argument.
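
To make this concrete, here is a toy sketch in Python of what a CPOV
amounts to: just more third-person data sitting next to the model's
geometry. (This is not the API of any real 3D package; every name and
number is mine, purely for illustration.)

    from dataclasses import dataclass

    # A camera, i.e. a CPOV, is nothing but data: a position, a point
    # it is aimed at, and the focal length of its (simulated) lens.
    @dataclass
    class Camera:
        position: tuple      # (x, y, z) of the lens in world coordinates
        look_at: tuple       # (x, y, z) point the camera is aimed at
        focal_length: float  # in millimetres, as on a real lens

    # The "model of Rome" reduced to two labelled points, plus one CPOV:
    colosseum = (500.0, 120.0, 0.0)
    pantheon = (-80.0, 300.0, 0.0)
    camera = Camera(position=(0.0, 0.0, 1.7), look_at=colosseum,
                    focal_length=35.0)
    # Nothing here is hidden: the whole scene plus the CPOV fits in a
    # file that anyone can read, copy or email.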

The point of view (POV) of the camera is obviously completely
traceable and mathematically deducible from the third-person
perspective of the model we are simulating and from the physical
characteristics of the virtual lens built into the camera, through
which the light reflected off the objects in the model is projected
(bear in mind that the physical properties and optics of the lens are
also simulated by the computer model). Of course, the software does
all that calculation and drawing for you. But if you had the ambition
you could practically do all that work yourself: take the 3D model's
mathematical and geometric data from the saved computer file
containing your model description, calculate on sheets of paper how
the objects in it would look and behave from a particular CPOV, and,
more than that, literally draw those objects yourself using the
well-known techniques of descriptive geometry (the same ones used by
the 3D modeling software). But what would be the point, when we
already have computers that perform this arduous task for us? Perhaps
only in an age without computers would such a relentless task be worth
considering.
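
For the skeptic, here is the "sheets of paper" deduction in miniature,
written as Python instead of descriptive geometry (the coordinate
conventions and numbers are mine, purely for illustration):

    # A minimal pinhole-projection sketch: given only third-person data
    # (a point's world coordinates, the camera's position and focal
    # length), the point's place in the camera's image is mechanically
    # deducible.
    def project(point, cam_pos, focal_length):
        """Project a 3D world point onto the image plane of a camera
        sitting at cam_pos and looking straight down the +Z axis."""
        # Express the point in camera coordinates.
        x = point[0] - cam_pos[0]
        y = point[1] - cam_pos[1]
        z = point[2] - cam_pos[2]
        if z <= 0:
            return None  # behind the lens: not part of this POV
        # Similar triangles -- the heart of descriptive geometry.
        return (focal_length * x / z, focal_length * y / z)

    # One corner of a virtual monument, seen through one CPOV:
    print(project((2.0, 5.0, 40.0), cam_pos=(0.0, 1.7, 0.0),
                  focal_length=35.0))
    # -> (1.75, 2.8875): the same numbers you could grind out by hand.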


So we can basically take a virtual trip to whatever part of Rome we
want just by jumping inside a CPOV provided to us by the software. We
can see, can experience, what it is like to be in Rome by adopting
whatever CPOV is calculated and drawn for us by this complex but 100%
describable and understandable computer program. The software would be
no mystery to us if we were sufficiently trained programmers,
architects and mathematicians. Nor will the WIIL of experiencing Rome
ever be a mystery to us, if we let the 3D design software do the job
of calculating and drawing the CPOV for us. Of course, as said above,
we could achieve the same WIIL by making strenuous calculations and
drawing for ourselves, on sheets of paper, exactly the same POV
"painted" for us by the computer program. Whatever our choice, one
thing stands to reason: we will come to experience the
what-it-is-likeness (WIIL) of Rome by deducing it from objective,
third-party data that we can all share by accessing the program file
containing the 3D model's third-person description; so there is
nothing special about it. The whole point is that the experience of
the WIIL can be achieved and built by/for us using third-person data.
The WIIL only seems to be some kind of metaphysical thing because of
its circumstantial relatedness to the idiosyncrasies of the POV. There
is no need to squander energy contriving not-worth-considering
meanings because of this relatedness. The WIIL is the intentional
interpretation of the mathematical description of the physical
objects' properties and relationships to each other which the POV
describes; it is the richness and detail of the description of the
POV, taken as a whole by whatever is on the other side of the lens.
The POV, on the other hand, can be accounted for by its mathematical
and geometrical description; it's all data, 0s and 1s. The WIIL and
the POV represent the same thing, but each is a different
interpretation of a specific slice of the 3D model: one is a
reducible, mathematical and geometric description of a set of objects
and how they would appear from a certain vantage point (i.e. the POV);
the other is the non-reducible, intentional, apparently immediate
interpretation of all the data contained in the POV taken as a whole.
The WIIL is thus all accounted for; we know all about it: how it comes
into existence, how it is 100% physical yet non-reducible because of
its intentionality, and how the circumstantial relation to its POV
makes it seem like something separate from it, though that is an
illusion.


The what-it-is-likenesses (WIILs) of points of view (POVs) in our
model of Rome are unique in the sense that each has idiosyncrasies in
the arrangement of points, lines, planes, colors and light reflectance
making up the objects in the model, idiosyncrasies caused by the
perspective we arbitrarily chose: a point or a certain area (the lens
of the camera) on the map of our 3D model onto which the light
reflected by some of the objects contained in it is projected. The
WIIL is 100% mathematically and geometrically described, accounted for
by the calculations and drawings done in order to produce the POV
through which we experience it. To make this clearer, let's describe
the relationship between the WIIL and the POV a little further. The
WIIL is not something separate from the POV in one important sense,
and here sits the crux of my argument: the POV, which was inferred and
created from the objective, third-person perspective of the computer
model, is the WIIL, in the sense that all we need in order to describe
the WIIL is the mathematical description of the POV, and that is all.
For someone (or something) to experience objects contained in the city
model through a specific CPOV: that is how WIILs come into existence.
The sole act of accessing that POV (i.e., its mathematical
description) is the WIIL. The question "And then what happens?" has no
meaning here, because nothing happens next. As I've said above, you
can think of POVs as reducible in the sense that they can be accounted
for mathematically by knowing each coordinate of every point belonging
to every object in their description, and you can think of the WIIL as
a non-reducible, intentional representation of the objects described
by that POV, taken as a whole by the observer sitting on the other
side of the lens. The sole act of acknowledging the mathematical and
geometric descriptive richness of a piece of the world through the
lens of the camera point of view (CPOV), by whatever remains on the
other side of the lens, is the WIIL, and nothing more is there to be
said; the story is complete. Acknowledging the richness in description
of the mathematical and geometric data does not mean that the observer
needs to understand all the intricate equations, elaborate
calculations and geometric deductions; all that needs to happen is for
that observer to be hit with all that idiosyncratic data. "Can you
describe this WIIL?" Of course: by providing you with all the
mathematical relations between all the points, planes and surface
properties that describe the POV through which this wholeness of
experience (WIIL) comes to reality. How did I get those points and
planes and their properties? Again, I got them from the third-party,
objective data contained in the 3D model of the city, located in the
1s and 0s hardwired on the hard drive of the computer.


Something on privateness now. The WIIL is only private in the sense
that only something which experiences a certain POV can experience its
WIIL, but that is all. Can this POV be shared with others? Of course.
After we create that CPOV in the computer program we can save it to a
file and email it to whatever part of the globe we want, for someone
else to experience its WIIL. The possibility of sharing it with others
makes it a poor candidate for privateness. POVs are unique, but hardly
private, so let's not confuse the terms.
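
In software terms, "sharing a POV" is as mundane as serializing it. A
toy illustration (again, the field names are mine):

    import json

    # A CPOV is shippable data and nothing more.
    cpov = {"position": [0.0, 0.0, 1.7],
            "look_at": [500.0, 120.0, 0.0],
            "focal_length": 35.0}
    attachment = json.dumps(cpov)      # ready to email anywhere on the globe
    received = json.loads(attachment)  # the recipient rebuilds the same CPOV
    assert received == cpov            # nothing "private" was lost in transit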

The same reasoning, I should say, goes for the qualia of color, smell,
etc.; I doubt there is any difference with these types of experiences.
What it is like to see a color is just experiencing a slice of the
complete model of the world from a certain POV. Why that POV couldn't
be deduced and inferred from widely agreed-upon, sufficient scientific
data, as the qualophiles' plea for metaphysics suggests, eludes me;
that is why they haven't proved anything yet, and I doubt they ever
will. If we knew almost everything there is to know about the
particles and forces that make up our world, we would be able to build
models of whatever brains we'd like, brains that could experience
everything it is possible to design as an experience.

Daniel Dennett's RoboMary shouldn't have too hard a job building color
into herself without access to the built-in color modules that are
part of her 100% silicon brain. And that's our next story.

<<<RoboMarry has a busy afternoon>>>
In one of his more recent books, Daniel Dennett answered critics who
do not share his position on the possibility that Frank Jackson's
color-bereft Mary, recently liberated from the black&white,
grey-shaded room which she had inhabited over the course of her
lifetime, could not be fooled into believing that a blue-colored
banana shown to her by her masters is in fact yellow. Even though Mary
had never experienced colors in her lifetime, she somehow managed to
put herself into the dispositional states of yellowness and blueness
with the help of scientific data she gathered and made sense of in her
black&white room. Mary would not be at all fooled by the cheap trick
her masters tried on her, but Dennett's critics said Mary wouldn't be
able to pull this off. So Dennett devised another, more ingenious
intuition pump: Locked RoboMary. From here on my story will differ a
bit from Dennett's in order to make my point clearer (you can check
the original story in his 2005 book, Sweet Dreams).


Let's replace Mary with RoboMary: a robot just as adroit in cognitive
skills as any human being, but much faster in thinking and with a
greater bandwidth for acquiring information than any of us could
imagine possible, even by today's standards of technological
advancement. Though she's a standard Mark 19 model, RoboMary was
stripped of her HD color cameras and equipped with bulk black&white
CCDs of the same performance and resolution that cannot register
color. RoboMary was also denied access to the color-experience modules
of her silicon brain by a set of plug-ins installed by her masters
before her brain's conscious capabilities were activated. So RoboMary
has no experience of colors in her memory; she can neither experience
them through her black&white electronic eyes, because they can't
render color, nor put herself into the state of experiencing them,
because she was denied access to the color memory stack served by her
now-blocked color-experience modules. So Locked RoboMary, trapped in
her black&white room, with her black&white CCDs, without the
color-rendering parts of her mechanized brain, could apparently never
experience colors. Or could she?

Even though her electronic color-experience brain modules were blocked
by the plug-ins installed at "birth" into her kernel software, the
design plans for that part of her electronic brain could be accessed
if she were trained enough to hack into the servers of the corporation
that holds the patents for Mark 19 robots. Being trapped in a room
with non-stop general-level access to the corporation's network, and
having access to the Internet, makes her task much easier. What's
more, she can converse with other Mark 19 and Mark 20 robots. Lo and
behold, RoboMary managed to hack into one of her robotic friends'
computers some months ago by installing a Trojan horse she had
programmed in her spare time; the fun part is that this friend, now
part of the development team for the Mark 21 models thanks to months
of training and the million-dollar software installed into him, has
network access two levels higher, reaching the complete design files
for Mark 19 robots. That's how RoboMary managed to educate herself
about the hardware and software that make up her brain, about her
robot mechanics, and about the design of the electronics from which
her currently missing HD color cameras are built. She now completely
understands the functionality of all the subsystems composing her
color-experiencing modules, even though she still cannot access them
directly. Having access to Moogle, now the greatest and most used
search engine on the Internet, she can easily reach all information
having to do with vision and vision systems. By browsing the web she
comes to understand the physics and chemistry of color and acquires
vast knowledge about the biomechanics of vision in humans, with all
the details of how their color-detection systems are wired into their
brains. Nothing about vision, in the worlds of physics, biology,
artificial intelligence or biotechnology, is unknown to her. She has
an almost complete third-person perspective on everything there is to
know about the world (including everything there is to know about the
design of Mark 19 robots) that has anything to do with colors. But how
could she build into herself the phenomenal, personal, ineffable
experience of colors, having only third-party data about these
phenomena? How could she do that when, supposedly, one needs to have
been in that state of first-person experiencing sometime in the past,
a privilege she was denied at birth?

So, on a Sunday afternoon, with some hours off because her training
had ended prematurely due to the failure of all the
design-and-development server farms in the building complex where she
happened to be installed, RoboMary set herself the task of building
into herself the experience of colors, which her robotic friends had
described as very awkward and unusual tools used to study the surface
properties of objects. She was now ready to do this, because she had
gathered all the data needed to achieve the task. All she had lacked
until now was computing power from the supercomputer located in her
building, to which she now had access because its processors were not
as stressed this afternoon, "thanks" to the halt in normal operation
of the server clusters; she could now use those extra FLOPS for
herself, to see what's so special about these colors.

Locked inside her room, RoboMary had no colored object to study.
Nothing colored had ever touched her senses, so she had to make use of
the ingenuity that had always made her the adroit robot she had proved
to be. Having access to the higher-level network through the Trojan
horse she had installed in her robotic friend's computer, she could
replicate and simulate a complete digital model of her brain (and of
the original HD color cameras that usually equip Mark 19 robots)
inside the currently idle supercomputer located in her building; that
would be no problem for her, because she had managed to steal all the
Mark 19 design files. All she needed now was a few hours to build the
replica model inside the supercomputer and run a few thousand
simulations on it, using as input the terabytes of video data gathered
from all the security cameras spread throughout the corporation's
building complex where she lived (all those stolen video recordings
were in color format, but that was of no direct use to her because the
LCD screen inside her room was black&white), plus the gigabytes of
scientific information on vision systems, optics, colors, etc. But how
was she going to put herself into the state of experiencing colors if
no color ever tickled her senses? All was black&white around her.

Well, if you remember the story of how architects create specific
camera points of view (CPOVs) inside their 3D modeling software in
order to experience a certain point of view (POV) on whatever model of
reality or of their imagination they are designing, then building
color experience inside oneself without ever having experienced colors
may not seem so unbelievable after all. Remember that RoboMary knows
everything about the physical world there is to be known. Couldn't she
simulate (from a third-person perspective, of course) what the brain
state of a Mark 19 robot would be upon experiencing colors, using a
computer model of this type of robot and subjecting it to a completely
digital replica of an LCD monitor onto which the stolen colorized
security videos are projected? The CAD/CAM software for the
integration of optical and electronic mechanics in 2050 is highly
advanced, so this would not be an impossible task at all; it would in
fact be quite ordinary. Having access to the brain states (i.e. the
color-experiencing modules) of the simulated Mark 19 brain while its
mind experiences colors means that RoboMary could easily take some
print-screens of those brain states and then put herself into the
specific mental point of view (POV, or mental point of reference) that
would allow her to experience the what-it-is-likeness of colors, by
building into her RAM a complete replica of those color-experiencing
modules and initializing them with the data captured in the
print-screens. Also, having a complete list of all the corporation's
belongings, their GPS positions and their colors, she could easily
deduce from their positions in the stolen videos what each specific
object's color is called, so she could build into herself the
color-word associations that every other Mark 19 robot and human being
already has built in. So when the "playful" scientists release
RoboMary from her color-bereft room and give her a blue banana, they
will be the ones amazed, by the lack of astonishment in her behavior;
RoboMary will immediately call out their spoof. Many will deny that
the above story could ever be true or, more interestingly, some will
retort that what RoboMary did was cheating. But would that be true in
any sense of the word? Is it perhaps a failure of imagination on the
side of the party-pooper philosophers?
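
For what it's worth, RoboMary's whole Sunday-afternoon procedure
compresses into a few lines of toy Python (every class and value below
is invented for illustration; none of it pretends to be real
robotics):

    import copy

    class ColorModule:
        """An absurdly simplified stand-in for a color-experience module."""
        def __init__(self):
            self.state = {}  # dispositional state, initially blank

        def watch(self, labelled_pixel):
            # Crudest possible "color experience": record what was seen.
            name, rgb = labelled_pixel
            self.state[name] = rgb

        def dump_state(self):
            return copy.deepcopy(self.state)  # the "print-screen"

        def load_state(self, snapshot):
            self.state = copy.deepcopy(snapshot)  # restored into her own RAM

    replica = ColorModule()  # the simulated, fully sighted Mark 19
    for pixel in [("banana", (255, 225, 50)), ("sky", (90, 140, 255))]:
        replica.watch(pixel)  # feed it the stolen color footage

    robomary = ColorModule()  # locked away, she never saw a color...
    robomary.load_state(replica.dump_state())
    print(robomary.state["banana"])  # ...yet now holds yellow: (255, 225, 50)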

Some may retort that what may be true of architecture and 3D computer
modeling does not come close to explaining special phenomenal
qualities like colors, pains, etc. But then again, why would that be a
possibility worth taking into consideration? What qualia are, this
what-it-is-likeness, is not something metaphysical (at least, that is
what we should a priori take it to be if we ever wish to explain it),
indescribable by third-party objective data; it is in fact just the
intentional interpretation of the apparent immediacy with which we
grasp the sumptuousness (which is 100% accountable) of whatever
particular POV's description we are acknowledging at the moment. The
richness of the POV's description, and its acknowledgement, is the
what-it-is-likeness; there is nothing metaphysical about it. By using
only third-party, objective data RoboMary built into herself the
experience of color, so how could she be cheating? How was she able to
put herself into experiencing the so-called ineffable, private
phenomenal qualities of colors by using only data provided by science?
Would that be because colors are from this world, not so ineffable,
not such private qualities after all? Qualophiles may retort that the
POV's description doesn't explain the specialness of the WIIL because
my explanation misses the enjoyer, the analyzer; but, as I've stated
above, that is just an illusion, because there is an analyzer: the
virtual machine in the brain takes care of all those tasks of
acknowledging and discriminating. So there should be no mystery about
who the enjoyer is and the means by which it achieves the
acknowledgement of the POV's mathematical description.

Others think otherwise. Consider what Torin Alter has to say about
Dennett's Locked RoboMary intuition pump:

"Why does putting herself in state B enable RoboMary to know what it’s
like to see red? B is a dispositional and (let us assume)
nonphenomenal
state; there is nothing it’s like to be in B. Nevertheless, B involves
color phenomenology in that it contains the relevant phenomenal
information. Therein lies the problem for Dennett’s argument. By
putting herself in a state that involves color phenomenology, RoboMary
cheats. Pre-release
Mary should be no less puzzled about B than she is about seeing red.
If she lacks phenomenal information about seeing red, then she lacks
the phenomenal information that B contains. If there are open
epistemic possibilities about the nature of phenomenal redness that
she cannot eliminate, then there are open epistemic possibilities
about the content of B that she cannot eliminate. RoboMary comes by
her phenomenal knowledge of color experience not by a priori deduction
from physical information but rather by putting herself in a
nonphenomenal dispositional state that contains the relevant
phenomenal information. (The case for Qualia, p252-253)"

So Torin Alter's argument, translated into our analogy, goes like
this: "Why would architects who adopt certain camera points of view
(CPOVs) in their 3D model of Rome come to experience the
what-it-is-likeness (WIIL) of Rome? There is nothing it is like to
experience something born from a point of view (POV), is there? POVs
are from-this-world, non-phenomenal (non-metaphysical) descriptions of
reality, so how can they account for the WIIL of Rome? By accessing
the POV and simply acknowledging its 'sudden' mathematical and
geometrical description, architects cheat, because even though they
have accepted and recognized all the above, they are missing something
important. There is more to the WIIL of the POV than the intentional
interpretation of its mathematical description." That's nonsense, of
course! The arguments don't line up and are obviously
self-contradictory. Saying that the specialness of POVs cannot be
accounted for by their mathematical description alone, but only by the
fact that they possess something special, something out of this world,
is just plain old unmotivated fantasy, the sky-hook anchoring of an
illusion as old as the debates about brain and mind. A brain only
needs its virtual machine and its specialized intentional
discrimination devices in order to process the description of the POV,
and that discrimination done by the brain is the WIIL. So, to finish
my line of reasoning: what Alter is in effect saying is that if
color-bereft RoboMary could manage to put herself into the state of
experiencing colors using only third-party, scientific data, that
would mean qualia is just a messed-up term invented by
science-deprived, imagination-bereft philosophers, and that would make
the mystery go away! Qualia would be from-this-world, 100%
explainable, non-magical tools used by the brain to discriminate
different properties of the external world. The magic of phenomenal
experience would fade away in the blink of an eye; at least, that's
what the qualophiles fear. How else could RoboMary build into herself
the experience of colors she now enjoys, using only objective data, if
these so-called "color qualia" weren't completely accounted for and
traceable by that data? So RoboMary has got to be cheating, Alter
says, otherwise color qualia wouldn't be out of this world. Alter has
an agenda all right, but I doubt it is finding out the truth if he
keeps postulating things out of this world which will, by definition,
always defy scientific explanation. Let's not confuse failures of
imagination with truths about reality.

To take the analogy with 3D computer modeling a little farther: just
as an architect enjoys objects in a computer model through a custom
CPOV (having whatever custom properties its designer wants it to have)
created by the 3D modeling software, by the same line of reasoning we
could say that Mark 19 robots are given "immediate" representations of
color experiences through the HD color cameras they possess, and are
able to acknowledge that richness of information through their
color-experiencing modules (these are another sort of CPOV). In
RoboMary's case that wasn't possible, because she was bereft of both
the HD color cameras and her color-experience modules. But she managed
to get around this problem by building a complete digital replica of a
Mark 19 robot and calculating how its color-discriminating systems
would functionally look and behave from a third-person perspective;
just like architects who can calculate and draw camera points of view
(CPOVs) without the help of computers, RoboMary managed to "calculate"
and "draw" for herself the WIIL of colors. It just took a lot more
time, but it was worth it: she can now appreciate the mechanisms that
bring colors to reality; and oddly enough, colors are so much more
interesting to her now that she knows that what brings them to reality
are just physical subsystems built into bigger modules that are
further arranged into intricate discrimination systems whose
functionality is all that matters.
