On Thu, 4 Dec 2003, pete wrote:

> On Thu, 4 Dec 2003, [EMAIL PROTECTED] wrote:
>
> >Eventually machine intelligence will replace human intelligence
> >throughout the economy. I wonder if the final outcome will be "good" or
> >"bad". Productivity will have increased, but human interaction (at least
> >in these traditional areas such as education and probably health care)
> >will have decreased.

That is a good point. I recently posted a comment to the MIT list on
the ubiquitous nature of opposing forces: constructive-destructive,
action-reaction, catabolic-anabolic and educational-countereducational.
A variety of motives lie behind educationally progressive and
educationally regressive outcomes. But I am sure we can use the teaching
machine optimally and still retain the option of calling in human teachers
as we (the students) wish.

The question re MIT's $100,000,000 OCW is to what extent educational vs.
countereducational forces are being applied. Facetiously we have to ask
how much of OCW is "boon" and how much is "doggle". Remember our old
friend the "Unabomber"? He was a Phud mathematician. Yet he was targeting
high tech leaders like Professor Gelernter (computing science) at Yale.

Both progressive and regressive forces are in the midst of the
intelligentsia. IMO, computing science is a particular target for the
vultures. Was it Prometheus who was chained to a rock by the gods for
giving knowledge to mortals, and who had his liver ripped out daily by
vultures? Would a modern caricature of such a vulture look suspiciously
like Bill Gates? After all, why would Bill Gates want to lead in AI
advances? The status quo works very well for him ... and for many other
neo-millionaires and billionaires in high technology. What then is the
Microsoft agenda in funding the "reinventing of teaching and learning,"
as Dean Magnanti at MIT puts it? Could it not be to enhance countereducation
rather than education? The Microsoft Visual Studio .NET manual which
accompanied this software is an educational abomination. I have told them
I can turn it into a model of educational clarity but they keep ignoring
me. So what are they up to at MIT?

> >arthur
>
> I guess this is a good place to relate an experience I had today.
> I'm currently at CERN, helping to install some pieces of hardware
> we've cobbled together into the next great accelerator - big science
> at its most impressive. Anyway, we had this huge piece of hardware
> held up on supports in the middle of a large workroom, when a
> couple of girls came in, one with a camera, and one with a laptop
> under her arm. I thought, perhaps the CERN Courier is going to do
> another little article on the progress of our project. But instead,
> these two take out a bunch of little black squares about the size of
> Post-it notes, and start climbing up and sticking them all over the
> construction. I'm not sure if they were adhesive, or like fridge magnets,
> or both. Each square has a one-cm white spot in the centre, surrounded
> by a differently segmented white circle about 3 cm in diameter. Then
> they take out a pair of telescoping
> rods and extend them to about a metre and a half, and clip them
> to our construction, one horizontally, the other vertically. Each
> rod also has one of the black patches with white coding, mounted
> at each end. Then one sets up the laptop, while the other starts
> taking pictures, walking around the device. While the picture taking
> is still proceeding, the one with the laptop says, "Would you like to
> see?" and shows a diagram already appearing on the laptop screen.
> You see, these girls are the survey team, and they are generating
> a full 3D map of the device. The camera has a wireless connection
> to the laptop and is uploading images. The laptop identifies the
> little targets in the photos and does a brutal quantity of computation
> in real time among the photographs to deduce the position of the
> targets based solely on the multiple images and the two reference
> rods. As the surveyor operating the laptop explained to me (she is
> now a CERN employee, but used to work with the company which developed
> the technology), by taking a sufficient number of photographs, with
> a sufficient number of targets (I'm guessing they used a binary
> multiple, 32 or 64) it is not even necessary to have a pre-calibrated,
> distortion-free lens on the camera. The software can deduce and
> correct for any aberration in the lens as part of the overall
> calculation. The accuracy of the process is somewhat limited by the
> image quality of the digital camera, though it does much better than the
> raw resolution of the camera image would suggest - for our gadget, about
> 6x6x3 metres, they get down to about 1/2 mm. So much for theodolites,
> and a day's computations, to generate a survey.
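>
> For the technically curious, the heart of it is one big least-squares
> problem. Here is my own toy sketch in C of just the two-ray
> triangulation step (made-up camera positions and names, not their
> software):
>
> /* Toy two-ray triangulation: find the 3-D point closest to two
>    camera rays p1+t1*d1 and p2+t2*d2 (least squares).
>    The real package solves for every target, every camera station
>    and the lens-distortion parameters simultaneously - what the
>    photogrammetrists call a bundle adjustment. */
> #include <stdio.h>
>
> typedef struct { double x, y, z; } vec3;
>
> static vec3 vadd(vec3 a, vec3 b) { vec3 r = {a.x+b.x, a.y+b.y, a.z+b.z}; return r; }
> static vec3 vsub(vec3 a, vec3 b) { vec3 r = {a.x-b.x, a.y-b.y, a.z-b.z}; return r; }
> static vec3 vscale(vec3 a, double s) { vec3 r = {a.x*s, a.y*s, a.z*s}; return r; }
> static double vdot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
>
> static vec3 triangulate(vec3 p1, vec3 d1, vec3 p2, vec3 d2)
> {
>     vec3 r = vsub(p2, p1);
>     double a = vdot(d1, d1), b = vdot(d1, d2), c = vdot(d2, d2);
>     double d = vdot(d1, r),  e = vdot(d2, r);
>     double den = a*c - b*b;            /* near zero if rays are parallel */
>     double t1 = (c*d - b*e) / den;     /* parameter along ray 1 */
>     double t2 = (b*d - a*e) / den;     /* parameter along ray 2 */
>     vec3 q1 = vadd(p1, vscale(d1, t1));
>     vec3 q2 = vadd(p2, vscale(d2, t2));
>     return vscale(vadd(q1, q2), 0.5);  /* midpoint of closest approach */
> }
>
> int main(void)
> {
>     /* Two hypothetical camera stations sighting the same target. */
>     vec3 p1 = {0, 0, 0},  d1 = {1, 1, 1};
>     vec3 p2 = {10, 0, 0}, d2 = {-1, 1, 1};
>     vec3 t = triangulate(p1, d1, p2, d2);
>     printf("target at (%.3f, %.3f, %.3f)\n", t.x, t.y, t.z); /* (5,5,5) */
>     return 0;
> }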
>
> Well, that's my whizzbang techno story for today...

My doctorate is in phil-psych and not computing. I had an AI prof sign my
master's papers at U of A, and I took matrix algebra from him so that I
could apply it to correlation matrices and do factor analysis. But I had
little interest in AI at the time because computers in 1968 were very
limited and AI was a distant dream.
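
Factor analysis, by the way, is at bottom just matrix algebra on the
correlation matrix: the first factor is its leading eigenvector, rescaled
into loadings. A toy illustration in C (the correlations are made up,
purely for the example):

/* Toy first-factor extraction: power iteration on a small
   correlation matrix (made-up numbers, illustration only). */
#include <stdio.h>
#include <math.h>

#define N 3

int main(void)
{
    double R[N][N] = {                /* hypothetical correlations */
        {1.00, 0.60, 0.50},
        {0.60, 1.00, 0.40},
        {0.50, 0.40, 1.00}
    };
    double v[N] = {1, 1, 1}, w[N];
    double lambda = 0.0;
    int i, j, it;

    for (it = 0; it < 100; it++) {
        for (i = 0; i < N; i++) {     /* w = R v */
            w[i] = 0.0;
            for (j = 0; j < N; j++) w[i] += R[i][j] * v[j];
        }
        lambda = 0.0;                 /* |Rv| converges to the  */
        for (i = 0; i < N; i++) lambda += w[i] * w[i];
        lambda = sqrt(lambda);        /* largest eigenvalue     */
        for (i = 0; i < N; i++) v[i] = w[i] / lambda;
    }
    printf("first-factor loadings:");
    for (i = 0; i < N; i++) printf(" %.3f", sqrt(lambda) * v[i]);
    printf("\n");
    return 0;
}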

My 1976 text with Al Buss titled "Individual Differences" is in VPL.
Have a look at Chapter 3 on Mental Abilities and take a profile of man
vs. machine on the 19 primary mental abilities in Table 3.3. My present
expectation is that somebody, somewhere can tell us how to make the
machine surpass human performance on each and every factor. That includes
SO (Spatial Orientation) and S (Spatial Relations).

Mr. V on the IMP list is a computer scientist/engineer who is working on
the problem of machine object recognition. I keep telling him that if he
can come up with a program whereby a robot can do the Peabody Picture
Vocabulary Test, with a cluttered background, better than a human, I will
raise the robot personally and teach it a variation on English
("robo-speak") which will make it 100% clear that this machine is smarter
than any human on the planet.

FWP

>   -Pete
>
> -----Original Message-----
> From: Franklin Wayne Poley [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, December 3, 2003 8:42 PM
> To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: [Futurework] Future Teaching
>
>
> Have a look at the robotic teacher I'd like to hire from King's
> College, London:
>
> <http://www.geocities.com/machine_psychology/IMP_Cover_Page>
>
> ---------- Forwarded message ----------
> Date: Wed, 3 Dec 2003 17:35:46 -0800 (PST)
> From: Franklin Wayne Poley <[EMAIL PROTECTED]>
> Reply-To: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED], [EMAIL PROTECTED], [EMAIL PROTECTED]
> Subject: [IMP] Final Lesson 36
>
> There are typically 36 hours of class time for a one-semester, 3-credit
> course. Lessons 31-35 are more in the nature of an assignment: draft out a
> set of menus and prompts for the SEE-to-C program, or even go further and
> turn that into C code if you are so inclined. How much of my notes on
> SEE-to-C, the expert-system program for writing C code, I will eventually
> post, I do not know. If I am correct about this (and you can find out
> by trying to write SEE-to-C for yourself), then future students can forget
> about texts like Aitken and Jones ("Teach Yourself C in 21 Days") or a
> course like COMP 2425 at BCIT, which takes about 144 hours. Gary Livick's
> C-programmed robot, Etcetera, will be able to teach C in one hour.
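>
> To make the assignment concrete, here is a bare-bones sketch in C of what
> the menu-and-prompt skeleton might look like. The menu items and the stubs
> it emits are placeholders only; a real SEE-to-C would branch much more
> deeply and assemble the generated code from many answers:
>
> /* Bare-bones SEE-to-C skeleton: show a menu, ask a question,
>    emit a C stub. Placeholder content throughout. */
> #include <stdio.h>
>
> int main(void)
> {
>     char name[64];
>     int choice = 0;
>
>     printf("SEE-to-C: what should the program do?\n");
>     printf("  1. Print a message\n");
>     printf("  2. Count from 1 to N\n");
>     printf("Choice: ");
>     if (scanf("%d", &choice) != 1) return 1;
>
>     printf("Name for the generated function: ");
>     if (scanf("%63s", name) != 1) return 1;
>
>     /* Emit a C stub based on the answers. */
>     printf("\n/* generated by SEE-to-C */\n");
>     if (choice == 1)
>         printf("void %s(void) { printf(\"hello\\n\"); }\n", name);
>     else
>         printf("void %s(int n) { int i; for (i = 1; i <= n; i++) "
>                "printf(\"%%d\\n\", i); }\n", name);
>     return 0;
> }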
>
> Final lesson 36 is titled "Godbot" and is designed to stimulate some
> creative and metaphysical thinking. If anyone has SPECIFIC criticisms, I
> will welcome them. I certainly don't want to cap off a course I have
> spent so much time developing with any errors.
>
> <http://www.geocities.com/machine_psychology/The_Ghost_In_The_Machine>
>
> FWP
>
> <http://www.geocities.com/machine_psychology/Table_of_Lessons>



_______________________________________________
Futurework mailing list
[EMAIL PROTECTED]
http://scribe.uwaterloo.ca/mailman/listinfo/futurework
