I shall with fear and trembling venture a short explanation of the movement
to cyberfy the world. It signaled the end, bitter and ongoing, of oil and
the car. The PC became the new car, with requisite lingo about speed and so
forth. And availability to all. It was a market force toward the affordable
which had revolutionary implications for the future shape of society and
possibly the moral evolution of humankind.

*ShortFormContent at Blogger* <http://shortformcontent.blogspot.com/>



On Fri, Dec 16, 2011 at 9:19 AM, Skagestad, Peter
<peter_skages...@uml.edu> wrote:

> Ben,
>
> Thank you for your comments, which I have been chewing on. I wish I had
> some insightful responses, but this is all I come up with.
>
> You wrote:
> “I find it very hard to believe that the second computer revolution could
> have very easily failed to take place soon enough after the first one,
> given the potential market, though as you say below, you were mainly
> concerned (and I agree with you) to reject a monocausal technological
> determinism.”
>
> PS: We are in the realm of speculation here, and I cannot claim to be an
> economic historian, but I do not believe the evolution of either
> interactive or personal computing was market-driven. When you read, for
> instance, the Licklider biography “The Dream Machine” (I forget the
> author’s name), you find Licklider knocking his head against the wall
> trying to persuade IBM to provide time-sharing, the first major
> breakthrough in interactive computing. Eventually there emerged
> entrepreneurs – notably Steve Jobs, Bill Gates, and Mitch Kapor – who
> recognized the market potential of the new technology. But by then
> networking, word-processing, email, and GUIs had already been developed,
> mostly by government-funded researchers guided by the augmentationist
> vision. What would have happened if Licklider, Engelbart, and Sutherland
> had not been guided by this vision, or if they had not obtained government
> funding? I think the answer is that we simply do not know.
>
> This may be the place to add that, when I wrote “Thinking With Machines”
> and “The Mind’s Machines”, I did not yet recognize Sutherland’s
> significance. Bush, Licklider, and Engelbart were the theoreticians and
> advocates for IA, but arguably – and in fact argued by Howard Rheingold –
> Sutherland’s “Sketchpad” was the single most important technological
> breakthrough. I was privately rebuked by Arthur Burks for this omission.
>
>
> You continue:
>
> “I know almost nothing about computer programming, but I was a Word and
> PowerPoint "guru" for some years. It's just that I think that some
> relevantly able people would soon enough have recognized the tremendous
> potential for personal computers. As the 1990s wore on, companies ended up
> stocking their cubicles with computers although most users never heard of,
> much less learned to use, more than 1/10 of the power of such programs as
> Word and PowerPoint, and workplace pressures tend to lock people into
> short-sighted views of the value of developing skills on word processors,
> spreadsheets, etc. ("quick and dirty" is the motto). Well, "1/10" is just
> my subjective impression, but whatever the vague fraction, it was small but
> enough to make the companies' investment worthwhile. (And probably the
> added value per added "power" doesn't equal one and involves diminishing
> returns, especially in terms of empowering collaboration beyond
> interaction).”
>
> PS: I think this is absolutely true, and I just want to add that
> Engelbart’s particular vision of IA has largely failed to materialize, due
> to the general unwillingness of corporations to provide training for their
> employees. Engelbart never set much store by user-friendliness; his project
> was to provide intellectual leverage through machinery and training.
> Probably his most cherished input device was not the mouse, but the keyset,
> with five keys on which the user could enter chords. It never went
> anywhere, as it would take about three weeks of training to gain
> proficiency with it.
>
> Moving on, you say:
>
> “Looking over Joe's paper, I'd guess that he wasn't aware of the
> interaction-collaboration distinction, or didn't remember it while writing
> the paper, and that by "interactive" he meant interactive and collaborative
> alike. I'm not all that clear on the distinction myself. I tend to think of
> it not only in terms of people and computers but also in terms of various
> programs or computer systems (with attendant interoperability challenges)
> interacting (requesting and receiving data) and collaborating (asking each
> other to work on solving problems).”
>
> PS: I think this is true, and my disagreement with Joe here may be purely
> verbal; i.e. by “interactive” he probably meant to include the
> collaborative aspect.
>
> Finally, you raise this question:
>
> “Thinking, actively cogitating, is even less pure cognition than are
> looking (in order to see) and listening (in order to hear). The idea of
> reasoning, as _deliberate_ self-controlled inference, evokes the idea not
> only of active ability/competence (or able and competent doings themselves)
> but also of active willing. While ability/competence implies an end for
> which one cares to act, aside from the end's being in question, active
> willing implies an end for which one cares to contest, in a contest over
> what ends will prevail.  (Ironically "competent" comes from a word meaning
> "competing" but the connotation of the competitive has been lost by the
> word "competent" in English, a loss partly enabled, I suspect, by the
> difference in stress location and consequent vowel pronunciation.)
>
> So, my question, which I find I have trouble posing clearly, is, granting
> that IA involves an extension of mind in its abilities/competences as well
> as its cognitions, does it much extend volition and feeling (including
> emotion)?  Well, certainly it extends the reach of people's wills and
> feelings. But how mental is it if its processes are chiefly competential
> and cognitive? Are they such? Or are volitional and affective processes,
> not merely secondarily as needed for competence and cognition, in there
> even in the programming, not usually recognized?”
>
> PS: It is a very interesting question. I confess that I never thought
> beyond the purely cognitive aspect of mind, and I have no new insights
> regarding volitional or affective processes at this point. But anything
> listers have to add on this will be welcomed.
>
> That is my two-cents’ worth for now. My plan is to move on to the next
> part of Joe’s paper tonight or tomorrow.
>
> All the best,
> Peter
>
>
>
>
> ________________________________________
> From: C S Peirce discussion list [PEIRCE-L@LISTSERV.IUPUI.EDU] on behalf
> of Benjamin Udell [bud...@nyc.rr.com]
> Sent: Wednesday, December 14, 2011 2:59 PM
> To: PEIRCE-L@LISTSERV.IUPUI.EDU
> Subject: Re: [peirce-l] SLOW READ: THE RELEVANCE OF PEIRCEAN SEMIOTIC TO
> COMPUTATIONAL INTELLIGENCE AUGMENTATION
>
> Peter, list,
>
> This slow read is quiet enough that I might as well send some minor
> comments that might provide a little to chew on, I don't know. But before
> those, let me first of all thank you for leading the slow read and for your
> heart-warming reminiscences of Joe.
>
> The second computer revolution - inevitable after the first?
> Joe quotes you:
> In the sixties computers were huge, expensive machines usable only by an
> initiated elite; the idea of turning these machines into personal
> information-management tools that would be generally affordable and usable
> without special training was advocated only by a fringe of visionaries and
> was regarded as bizarre not only by the general public, but also by the
> mainstream of the electronics industry. The second computer revolution
> obviously could not have taken place without the first one preceding it,
> but the first computer revolution could very easily have taken place
> without being followed by the second one.
> I find it very hard to believe that the second computer revolution could
> have very easily failed to take place soon enough after the first one,
> given the potential market, though as you say below, you were mainly
> concerned (and I agree with you) to reject a monocausal technological
> determinism. I know almost nothing about computer programming, but I was a
> Word and PowerPoint "guru" for some years. It's just that I think that some
> relevantly able people would soon enough have recognized the tremendous
> potential for personal computers. As the 1990s wore on, companies ended up
> stocking their cubicles with computers although most users never heard of,
> much less learned to use, more than 1/10 of the power of such programs as
> Word and PowerPoint, and workplace pressures tend to lock people into
> short-sighted views of the value of developing skills on word processors,
> spreadsheets, etc. ("quick and dirty" is the motto). Well, "1/10" is just
> my subjective impression, but whatever the vague fraction, it was small but
> enough to make the companies' investment worthwhile. (And probably the
> added value per added "power" doesn't equal one and involves diminishing
> returns, especially in terms of empowering collaboration beyond
> interaction). Well, all of that, even the point about the continuing though
> shrunken need for special skills, is a quibble. The second revolution was
> not destined but only enabled by previous technology and was brought about
> by people seeing the potential. As you say below:
> I made the point that the emergence of the personal computer was not a
> given consequence of the invention of the microprocessor, but also required
> a particular vision of what computers were for. In so doing I was simply
> rejecting technological determinism, not advancing any monocausal thesis of
> my own.
> Interactive or collaborative.
> You wrote,
> PS: I do not totally agree with Joe here. I gladly admit that I never
> tried to identify what was fundamental to the IA tradition, believing that
> job to have been already done by Engelbart. But interactive computing,
> while essential to IA, has been endemic to computing of all kinds during
> the past forty years. I played chess games with the MIT computer as early
> as 1973; it was interactive, it had time sharing, but there was nothing
> about it that specifically related to IA. I would agree that collaborative
> computing is central to IA: more of that later.
> Looking over Joe's paper, I'd guess that he wasn't aware of the
> interaction-collaboration distinction, or didn't remember it while writing
> the paper, and that by "interactive" he meant interactive and collaborative
> alike. I'm not all that clear on the distinction myself. I tend to think of
> it not only in terms of people and computers but also in terms of various
> programs or computer systems (with attendant interoperability challenges)
> interacting (requesting and receiving data) and collaborating (asking each
> other to work on solving problems). So I look forward to your discussion of
> the difference and of the distinctive importance of collaborative ends to
> IA, also comparing to Engelbart's idea of what's fundamental to IA.
>
> Exosomatic mind - all cognitive?
> Peirce once expounded a trichotomy of feeling, will (sense of resistance),
> and general conception. Presumably all three can be conscious or
> unconscious, and thus seem attributable to mind. How really mental is
> something that is almost exclusively cognitive?
>
> In his paper, Joe wrote,
> Peter Skagestad understands the dictum "All thought is in signs" to mean
> that thought is not primarily a modification of consciousness, since
> unconscious thought is quite possible in Peirce’s view, but rather a matter
> of behavior -- not, however, a matter of a thinker's behavior (which would
> be a special case) but rather of the behavior of the publicly available
> material media and artifacts in which thought resides as a dispositional
> power. The power is signification, which is the power of the sign to
> generate interpretants of itself. Thinking is semiosis, and semiosis is the
> action of a sign. The sign actualizes itself as a sign in generating an
> interpretant, which is itself a further sign of the same thing, which,
> actualized as a sign, generates a further interpretant, and so on. As
> Skagestad construes the import of this -- correctly, I believe -- the
> development of thinking can take the form of development of the material
> media of thinking, which means such things as the development of
> instruments and media of expression, such as notational systems, or means
> and media of inscription such as books and writing instruments, languages
> considered as material entities like written inscriptions and sounds,
> physical instruments of observation such as test tubes, microscopes,
> particle accelerators, and so forth.
> Thinking, actively cogitating, is even less pure cognition than are
> looking (in order to see) and listening (in order to hear). The idea of
> reasoning, as _deliberate_ self-controlled inference, evokes the idea not
> only of active ability/competence (or able and competent doings themselves)
> but also of active willing. While ability/competence implies an end for
> which one cares to act, aside from the end's being in question, active
> willing implies an end for which one cares to contest, in a contest over
> what ends will prevail.  (Ironically "competent" comes from a word meaning
> "competing" but the connotation of the competitive has been lost by the
> word "competent" in English, a loss partly enabled, I suspect, by the
> difference in stress location and consequent vowel pronunciation.)
>
> So, my question, which I find I have trouble posing clearly, is, granting
> that IA involves an extension of mind in its abilities/competences as well
> as its cognitions, does it much extend volition and feeling (including
> emotion)?  Well, certainly it extends the reach of people's wills and
> feelings. But how mental is it if its processes are chiefly competential
> and cognitive? Are they such? Or are volitional and affective processes,
> not merely secondarily as needed for competence and cognition, in there
> even in the programming, not usually recognized?
>
> Best, Ben
>
>
> ----- Original Message -----
> From: Skagestad, Peter
> To: PEIRCE-L@LISTSERV.IUPUI.EDU
> Sent: Saturday, December 03, 2011 11:43 AM
> Subject: [peirce-l] SLOW READ: THE RELEVANCE OF PEIRCEAN SEMIOTIC TO
> COMPUTATIONAL INTELLIGENCE AUGMENTATION
>
> I am now opening the slow read of Joe Ransdell’s paper ‘The Relevance of
> Peircean Semiotic to Computational Intelligence Augmentation’, the final
> paper in this slow read series. I realize that Steven’s slow read is still
> in progress, but we have had overlapping reads before.
>
> Since we are conducting these reads to commemorate Joe, I will open with
> some personal reminiscences. In the fall of 1994, I bought the first modem
> for my home computer, a Macintosh SE-30. At about the same time I received
> a hand-written snail-mail letter from my erstwhile mentor the psychologist
> Donald Campbell, who had just returned from Germany, where he had met with
> Alfred Lange, who told him about an online discussion group devoted to
> Peirce’s philosophy. Campbell was not himself very interested in Peirce,
> but he knew I was, and so passed the information along. And so I logged on
> to Peirce-L.
>
> My connection was very primitive. I used a dial-up connection to U Mass
> Lowell’s antiquated VAX computer, which I had to access in
> terminal-emulation mode, whereby my Macintosh mimicked a dumb terminal for
> the VAX, which ran the VMS (Virtual Memory System) operating system and VMS
> Mail (later replaced with the somewhat more user-friendly DECmail). It was
> extremely awkward to use, but it was free.
>
> I had never met Joe Ransdell before – I only ever met him face to face
> once – although we knew of each other’s work. Joe immediately caught on to
> my difficulties in navigating VMS, and coached me patiently in the
> technical side of things offline, while constantly prodding and encouraging
> my participation in the online discussion. While never leaving one in doubt
> of his own opinions, Joe consistently stimulated and nurtured an open and
> critical, yet at the same time nonjudgmental exchange of ideas and
> opinions. The intellectual environment Joe created was an invaluable aid to
> me in developing my ideas on intelligence augmentation and the relevance of
> Peircean semiotic thereto.
>
> Now to the paper, available on the Arisbe site at
> http://www.cspeirce.com/menu/library/aboutcsp/ransdell/ia.htm. It is the
> longest paper in the slow read – 30 single-spaced pages plus notes – and
> December tends to be a short month, as many listers will no doubt be too
> busy with other things to pay much attention to Peirce-L in the final week
> or so of the month. My feeling is that we will probably only be able to hit
> the high points, but we will see how it goes. Since this is the last slow
> read in the series, we can also go on into January, should there be
> sufficient interest. I should add that the paper generated considerable
> discussion on the list when Joe first posted it about a decade ago; I do
> not know how many current listers were around at the time, but I believe
> both Gary Richmond and Jon Awbrey took active part in the discussion.
>
> As I see it, the paper falls into four parts. The first part – roughly one
> fourth of the paper – sets out the concept of computational intelligence
> augmentation as articulated in three published papers of mine, along with
> some reservations/revisions of Joe’s. The second part adumbrates the
> Peircean/Deweyan conception of inquiry, the third part examines Ginsparg’s
> publication system as a model of intelligence augmentation, and the fourth
> part examines the role of peer review in inquiry, sharply distinguishing
> editorially commissioned review from what Joe understands proper peer
> review to consist in.
>
> Personally, I shall naturally have most to say about the first part. This
> does not mean that I think the list discussion ought to focus on this part,
> at the expense of the other parts. This is decidedly not my view. But given
> the attention Joe devotes to my work, I think the most valuable
> contribution I personally can make here is commenting on, and engaging in
> discussion on, what Joe has to say about my work.
>
> I am not here going to rehash Joe’s admirable and scrupulously fair
> recapitulation of my writings on intelligence augmentation – although
> people may, of course, want to raise questions/comments about this or that
> point in his recapitulation. What I propose to do in this initial post is
> make a few introductory comments on intelligence augmentation, offer my
> take on Joe’s differences with my articulation, and then propose a few
> questions for list discussion – in full awareness that other listers may
> find other questions to pose that may be as worthy or worthier of
> discussion.
>
> JR: “Peter Skagestad – philosopher and Peirce scholar – identifies two
> distinct programming visions that have animated research into
> computationally based intelligence which he labels, respectively, as:
> “Artificial Intelligence” or “AI” and “Intelligence Augmentation” or “IA”.
> The aim of the present paper is, first, to describe the distinction between
> these two types of computational intelligence research for the benefit of
> those who might not be accustomed to recognizing these as co-ordinate parts
> of it, and then, second, to draw attention to a special sort of
> Intelligence Augmentation (IA) research which seems to me to warrant
> special emphasis and description, both because of its potential importance
> and because Skagestad’s account of the distinctive features of IA research
> does not seem to me to capture the most salient characteristics of this
> special part of it, perhaps because it may not have occurred to him that it
> is distinctive enough to require special attention in order to be
> recognized for what it is.”
>
> PS: I’ll return to what I may have paid insufficient attention to and why.
> First a little history. As far as I know, the concept of intelligence
> augmentation was first articulated by Doug Engelbart in his classic 1962
> “Framework” report, where it denotes the use of computers (or other
> artifacts) to augment human intellect by creating human-computer systems
> whose behavior is more intelligent than that of the unaided human.
> Engelbart acknowledges an affinity with the concept of “intelligence
> amplification,” earlier articulated by the cyberneticist W.R. Ashby. Based
> on my reading of Ashby, however, his concept of intelligence amplification
> is broader and encompasses both AI and Engelbart’s intelligence
> augmentation. Finally, the term “intelligence amplification” was later
> embraced by the computer scientist Frederick Brooks, who used it in much the
> same sense as Engelbart’s “intelligence augmentation,” and who, to the best
> of my knowledge, was the first to use the abbreviation “IA” and explicitly
> contrast it with “AI”.
>
> Now, my thesis, advanced in three papers cited by Joe and available at
> Arisbe, was that IA, as understood by Engelbart, presupposes a conception
> of the mind as being exosomatically embodied, and that such a conception,
> unbeknownst to Engelbart, had been articulated by Peirce, and summarized in
> his dictum “all thought is in signs.” Joe does not disagree with this, but
> does not think I go quite far enough:
>
> JR: “In developing Skagestad’s conception further in the direction
> indicated I also ground this in Peirce’s dictum, but I do so by making
> explicit a different (but complementary) implication of the same Peircean
> dictum, namely that all thought is dialogical. (JR’s emphasis)”
>
> PS: A footnote indicates that I agree with this, which I do, but I want to
> raise the question whether this implication is actually ever made explicit
> by Peirce himself. Signs presuppose interpretation, and interpretation
> presupposes interpreters, which is made very explicit by Josiah Royce in
> his most Peircean writings, but did Peirce himself make this explicit? I am
> not saying he did not, but I am curious about references.
>
> Joe goes on to make some valuable observations about the evolution of IA
> that I had not made, to wit, that a great deal of what we now recognize as
> IA, notably word processing, came about rather serendipitously, because
> programmers needed to document their work and wanted to do so without
> taking their hands off the keyboard. I have no argument with that. I made
> the point that the emergence of the personal computer was not a given
> consequence of the invention of the microprocessor, but also required a
> particular vision of what computers were for. In so doing I was simply
> rejecting technological determinism, not advancing any monocausal thesis of
> my own.
>
> I move on to what I take to be Joe’s most important reservation to my
> treatment of IA:
>
> JR: “I do not think that Skagestad has succeeded so far in identifying
> precisely enough what it is that is fundamental in the IA tradition that
> runs through Douglas Engelbart, J.C.R. Licklider, Ivan Sutherland, Ted
> Nelson, Alan Kay, … Tim Berners-Lee… That is, I do not find any place
> where Skagestad describes IA in a way that seems to capture what the
> various facets of it to which he appeals have in common. … My own hunch –
> and it is a little more than that, but it seems worth mentioning in a
> suggestive spirit here – is that the key to the identity of  what Skagestad
> characterizes as the IA tradition in computational research lies in the
> conception of interactive computing…”
>
> PS: I do not totally agree with Joe here. I gladly admit that I never
> tried to identify what was fundamental to the IA tradition, believing that
> job to have been already done by Engelbart. But interactive computing,
> while essential to IA, has been endemic to computing of all kinds during
> the past forty years. I played chess games with the MIT computer as early
> as 1973; it was interactive, it had time sharing, but there was nothing
> about it that specifically related to IA. I would agree that collaborative
> computing is central to IA: more of that later.
>
> Those are my initial thoughts on pages 1-8 of Joe’s paper. Some of it was
> admittedly fast, as much of it is Joe’s recapitulation and as I see it
> unproblematic exegesis of my papers. But others should feel free to revisit
> any details I have skipped which may merit closer attention. I will sit
> back now and let others weigh in.
>
> Peter Skagestad
>
> ---------------------------------------------------------------------------------
> You are receiving this message because you are subscribed to the PEIRCE-L
> listserv. To remove yourself from this list, send a message to
> lists...@listserv.iupui.edu with the line "SIGNOFF PEIRCE-L" in the body
> of the message. To post a message to the list, send it to
> PEIRCE-L@LISTSERV.IUPUI.EDU
>
>
>

