At 18:41 29/12/2012, you wrote:
(AC) Some years ago when I was looking at AI, I was surprised to see that the areas where AI was successful were quickly dismissed as advanced software, with the naysayers saying we aren't there yet. The advances in speech recognition, face recognition, etc. etc. were all once considered to be out there. Now they are here. We are developing aspects of human intelligence. For real human intelligence we just need humans. But humans get tired, grow old and demand more wages. Robots can go all day and all night with no breaks and no demands.
(KH) Yes, indeed. And the contrast is even more pointed. When an employer is paying a worker for the energy he expends in carrying out his tasks, he is also paying for a great deal more energy: the energy that the worker spends in his non-work activities. The employer may therefore be paying for anything from 10, 20, 30 or even more times the energy he would need if he employed a robot only.
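To put rough numbers on that comparison, here are a few lines of Python. Every figure below is an invented placeholder, used only to show how a multiple of that size could arise; it is not a measurement of anything.
task_energy_kwh_per_day = 1.0      # placeholder: energy the worker expends on the job itself
living_energy_kwh_per_day = 25.0   # placeholder: food, heating, transport and all other non-work energy
robot_energy_kwh_per_day = 1.0     # placeholder: a robot doing the same task
worker_total = task_energy_kwh_per_day + living_energy_kwh_per_day
ratio = worker_total / robot_energy_kwh_per_day
print(f"Energy paid for per worker versus per robot: roughly {ratio:.0f}x")
With those made-up figures the employer is paying for roughly 26 times the energy; plug in different figures and the multiple moves accordingly.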
Adam Smith never considered energy or the costs of energy. In his famous example of the specialization of labour in a pin factory, he didn't comment on the necessity of the overhead belt-and-wheel energy fed from a watermill. The latter were so common on all rivers that they were a given. The science of thermodynamics had hardly started in Smith's day. Had he considered energy, though, the subject of economics could have gone off in an entirely different direction. For a while, anyway. It would have had to loop back into modern economics, because energy is being seen as the key to all production. It's going to be increasingly important in the coming years when manual methods are compared with automation, and when one robot is compared with a competitor's.
Keith
I found that many of the AI types were quite socially isolated and were intent on creating some sort of artificial life. Perhaps a buddy of some sort. They succeeded in creating intelligent software that is really applied AI. They didn't create humans, but they did create software that mimics (and in many cases goes beyond!!) human intelligence in very many areas.
arthur
From: [email protected]
[mailto:[email protected]] On Behalf Of Keith Hudson
Sent: Friday, December 28, 2012 2:16 PM
To: RE-DESIGNING WORK, INCOME DISTRIBUTION, EDUCATION; Ed Weick
Subject: Re: [Futurework] Hey, you gotta watch dem machines...
At 12:05 28/12/2012, Ed wrote:
Krugman's piece in this morning's NYTimes
appears to take us well into the realm of
science fiction. But then maybe it isn't fiction any more?
(KH) For those who want to read Krugman's latest
in the here-and-now I've copied it after my comments:
Surprise! Surprise! Paul Krugman might actually be waking up to reality -- that we might now be in a period of no-growth. This is something I've been saying for years, and from before the 2007/8 crunch, too. But Krugman is now being equally naive about the future -- if he thinks that, somehow, automation will soon produce miraculous economic growth. Or perhaps it's only Prof Gordon who believes that. Perhaps Krugman will blow Gordon's prognostications into the sky in the sequel he promises to write. If so, I would welcome that, because I'd be able to praise Krugman for a change.
I also believe that automation will take all repetitive work away from humans. But it won't be anytime soon. Ever since so-called "5th generation" computing -- the massive effort by the Japanese government in the 1980s to develop super-computing, artificial intelligence (AI) and the like -- the full realization of automation has come no nearer to being achieved today than it was previously.
The reason is (IMHO) that automation software is uni-directional. It simply goes from A to Z. It may, in the course of it, be temporarily directed into sub-sets, and even into sub-sub-sets, but, sooner or later, the instructions rejoin the main track. This is why AI has got absolutely nowhere in the last 30 years.
Outside Japan, many researchers had been working on AI for many years beforehand, building neural circuits that were copies of the dense networks in the human cortex and hoping that the act of cognition would somehow follow. Well, it never did.
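To picture what I mean by uni-directional, here is a toy fragment of Python -- not anyone's real automation code, just an illustration -- in which the detours into sub-sets and sub-sub-sets all rejoin the one main track and the whole thing runs in a single direction, from A to Z:
def sub_sub_step():
    # a sub-sub-set: a further temporary detour
    return "detail"
def sub_step():
    # a sub-set: a temporary detour off the main track
    return "step C (" + sub_sub_step() + ")"
def main():
    # the single main track: every detour returns here and the
    # program runs in one direction only, start to finish
    for step in ["step A", "step B", sub_step(), "step Y", "step Z"]:
        print(step)
main()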
The reason it never did is that cognition and decision-making of the human variety seem to require two separate inputs, not just one. For example, in daily decision-making (mental or physiological) our own software -- the instructions from our genes -- also requires quite independent feedback from thousands of different sorts of chemical agents which also lie along our DNA. These are called epigenes.
My guess is that the mathematicians who are involved in AI will have to invent a double software system. If this is of the same mind-boggling nature as the discovery of epigenes was, then no-one can possibly say when it might occur. Epigenes were suspected of existing for over 50 years (150 if we count Lamarck and Wallace), but the discovery had to wait until human DNA was finally sequenced.
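Purely as a thought-experiment -- the names and numbers below are invented for illustration, not a claim about how such a system would actually be built -- a "double" system might look like one fixed stream of instructions whose effect at every step is modulated by an independent feedback signal:
def run(program, feedback):
    # apply each fixed instruction, scaled by an independent feedback value
    state = 0.0
    for instruction, gain in zip(program, feedback):
        state += instruction * gain
    return state
program = [1.0, -0.5, 2.0]        # placeholder "genetic" instructions
feedback_a = [1.0, 1.0, 1.0]      # placeholder "epigenetic" feedback in one context
feedback_b = [1.0, 0.0, 0.25]     # the same program, partly silenced in another context
print(run(program, feedback_a))   # 2.5
print(run(program, feedback_b))   # 1.5
The same fixed program gives different outcomes as the second, independent input changes, which is the flavour of the two-input arrangement I have in mind.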
Keith
http://www.nytimes.com/2012/12/28/opinion/krugman-is-growth-over.html?hp&_r=0
Ed
IS GROWTH OVER?
Paul Krugman
The great bulk of the economic commentary you
read in the papers is focused on the short run:
the effects of the fiscal cliff on U.S.
recovery, the stresses on the euro, Japan's
latest attempt to break out of deflation. This
focus is understandable, since one global
depression can ruin your whole day. But our
current travails will eventually end. What do we
know about the prospects for long-run prosperity?
The answer is: less than we think.
The long-term projections produced by official
agencies, like the Congressional Budget Office,
generally make two big assumptions. One is that
economic growth over the next few decades will
resemble growth over the past few decades. In
particular, productivity -- the key driver of growth -- is projected to rise at a rate not too
different from its average growth since the
1970s. On the other side, however, these
projections generally assume that income
inequality, which soared over the past three
decades, will increase only modestly looking forward.
It's not hard to understand why agencies make
these assumptions. Given how little we know
about long-run growth, simply assuming that the
future will resemble the past is a natural
guess. On the other hand, if income inequality
continues to soar, we're looking at a dystopian, class-warfare future -- not the kind of thing
government agencies want to contemplate.
Yet this conventional wisdom is very likely to
be wrong on one or both dimensions.
Recently, Robert Gordon of Northwestern
University created a stir by arguing that
economic growth is likely to slow sharply -- indeed, that the age of growth that began in the
18th century may well be drawing to an end.
Mr. Gordon points out that long-term economic
growth hasn't been a steady process; it has been
driven by several discrete industrial
revolutions, each based on a particular set of
technologies. The first industrial revolution,
based largely on the steam engine, drove growth
in the late-18th and early-19th centuries. The
second, made possible, in large part, by the
application of science to technologies such as
electrification, internal combustion and
chemical engineering, began circa 1870 and drove
growth into the 1960s. The third, centered
around information technology, defines our current era.
And, as Mr. Gordon correctly notes, the payoffs
so far to the third industrial revolution, while
real, have been far smaller than those to the
second. Electrification, for example, was a much
bigger deal than the Internet.
It's an interesting thesis, and a useful counterweight to all the gee-whiz glorification of the latest tech. And while I don't think he's right, the way in which he's probably wrong has implications equally destructive of conventional wisdom. For the case against Mr. Gordon's
techno-pessimism rests largely on the assertion
that the big payoff to information technology,
which is just getting started, will come from the rise of smart machines.
If you follow these things, you know that the
field of artificial intelligence has for decades
been a frustrating underachiever, as it proved
incredibly hard for computers to do things every
human being finds easy, like understanding
ordinary speech or recognizing different objects
in a picture. Lately, however, the barriers seem to have fallen -- not because we've learned to
replicate human understanding, but because
computers can now yield seemingly intelligent
results by searching for patterns in huge databases.
True, speech recognition is still imperfect;
according to the software, one irate caller
informed me that I was "fall issue yet." But it's vastly better than it was just a few years
ago, and has already become a seriously useful
tool. Object recognition is a bit further
behind: it's still a source of excitement that a computer network fed images from YouTube spontaneously learned to identify cats. But it's
not a large step from there to a host of economically important applications.
So machines may soon be ready to perform many
tasks that currently require large amounts of
human labor. This will mean rapid productivity
growth and, therefore, high overall economic growth.
But -- and this is the crucial question -- who will benefit from that growth? Unfortunately, it's all too easy to make the case that most
Americans will be left behind, because smart
machines will end up devaluing the contribution
of workers, including highly skilled workers
whose skills suddenly become redundant. The
point is that there's good reason to believe
that the conventional wisdom embodied in
long-run budget projections -- projections that shape almost every aspect of current policy discussion -- is all wrong.
What, then, are the implications of this
alternative vision for policy? Well, I'll have
to address that topic in a future column.
------------------------------
_______________________________________________
Futurework mailing list
[email protected]
https://lists.uwaterloo.ca/mailman/listinfo/futurework