Re: [FRIAM] singularity

2006-07-20 Thread Carlos Gershenson
 I wouldn't be surprised if software development was actually
 exponential; however, it is harder to measure improvement, and the
 improvement is not as smooth as hardware improvement.

I guess that we would like to have a general measure of the growth of
software complexity, but I don't know if anything like that exists, nor
how easy it would be to develop... let alone to check... where could we
get data on, e.g., the number of lines of code, or the source-code size
in KB, of software over the last 20 years or so?

A rough and naive way would be to check, e.g., the size in KB of the
installation files of a given piece of software: Linux, Windows, MS
Office, Corel Draw, AutoCAD...
(With Linux it's quite difficult, because a minimal version of it can
fit on a couple of floppies; all the rest are add-ons...)
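
For what it's worth, here is a minimal sketch of that naive measure,
assuming you have a directory of archived installers or source tarballs
on disk (the directory name, file naming, and the evenly-spaced-release
assumption are all hypothetical placeholders):

    # Rough sketch: list archived installer/tarball sizes per release and
    # estimate a doubling time. The "archives" directory and its contents
    # are hypothetical; point it at whatever old release files you have.
    import math
    import os

    ARCHIVE_DIR = "archives"  # hypothetical directory of old installers/tarballs

    def release_sizes(directory):
        """Return (filename, size in KB) pairs, sorted by file name."""
        sizes = []
        for name in sorted(os.listdir(directory)):
            path = os.path.join(directory, name)
            if os.path.isfile(path):
                sizes.append((name, os.path.getsize(path) / 1024.0))
        return sizes

    if __name__ == "__main__":
        data = release_sizes(ARCHIVE_DIR)
        for name, kb in data:
            print(f"{name:30s} {kb:12.1f} KB")
        # Very crude growth estimate: assumes the files sort in release order
        # and that releases are roughly evenly spaced (e.g. one per year).
        if len(data) >= 2 and data[0][1] > 0 and data[-1][1] > data[0][1]:
            intervals = len(data) - 1
            doubling = intervals / math.log2(data[-1][1] / data[0][1])
            print(f"approximate doubling time: {doubling:.1f} release intervals")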

Best regards,

 Carlos Gershenson...
 Centrum Leo Apostel, Vrije Universiteit Brussel
 Krijgskundestraat 33. B-1160 Brussels, Belgium
 http://homepages.vub.ac.be/~cgershen/

   “Tendencies tend to change...”




FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org


Re: [FRIAM] singularity

2006-07-20 Thread Carlos Gershenson
 Crude quantitative measures are no good. For instance, the intro of OO
 techniques can increase functionality while sometimes decreasing the
 number of lines of code. An example close to home for me was the
 change from EcoLab 3 to EcoLab 4. The number of lines halved, but
 functionality was increased maybe tenfold (**subjective measure
 warning**).

Then maybe a measure could be the length of the manuals and
documentation, which reflects the functionality of a particular program?
(Well, Francis just switched to MacOS X from MacOS 9, and the one thing
he complained about was that there was no manual... he didn't like the
amount of help files.)

If this is a reasonable measure, I don't see that these have increased
too much, since the size of books hasn't increased noticeably... in
Unix/Linux you could measure it better with the size of the man and
how-to pages.
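
As a minimal sketch of that Unix/Linux measure (a rough proxy only; it
assumes man pages live under /usr/share/man, the usual location on
Linux - adjust the path for your system):

    # Crude proxy for documented functionality: total on-disk size of the
    # installed man pages. /usr/share/man is the usual location on Linux;
    # other systems may use /usr/man or /usr/local/share/man.
    import os

    def tree_size_kb(root):
        """Sum the sizes of all regular files under root, in KB."""
        total = 0
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                try:
                    total += os.path.getsize(os.path.join(dirpath, name))
                except OSError:
                    pass  # skip broken symlinks and unreadable files
        return total / 1024.0

    if __name__ == "__main__":
        print(f"man pages: {tree_size_kb('/usr/share/man'):.0f} KB")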

Best regards,

 Carlos Gershenson...
 Centrum Leo Apostel, Vrije Universiteit Brussel
 Krijgskundestraat 33. B-1160 Brussels, Belgium
 http://homepages.vub.ac.be/~cgershen/

   “Tendencies tend to change...”




FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org


Re: [FRIAM] singularity

2006-07-20 Thread Russell Standish
Like weighing Stroustrup versus Kernighan & Ritchie? I think the C++
book weighs 4 times as much as the C book, but I'm sure C++ is more
than 4 times as powerful...


Cheers

On Thu, Jul 20, 2006 at 01:36:00PM +0200, Carlos Gershenson wrote:
  Crude quantitative measures are no good. For instance, the intro of OO
  techniques can increase functionality while sometimes decreasing the
  number of lines of code. An example close to home for me was the
  change from EcoLab 3 to EcoLab 4. The number of lines halved, but
  functionality was increased maybe tenfold (**subjective measure
  warning**).
 
 Then maybe a measure could be the length of the manuals and
 documentation, which reflects the functionality of a particular program?
 (Well, Francis just switched to MacOS X from MacOS 9, and the one
 thing he complained about was that there was no manual... he didn't
 like the amount of help files.)

 If this is a reasonable measure, I don't see that these have increased
 too much, since the size of books hasn't increased noticeably... in
 Unix/Linux you could measure it better with the size of the man and
 how-to pages.
 
 Best regards,
 
  Carlos Gershenson...
  Centrum Leo Apostel, Vrije Universiteit Brussel
  Krijgskundestraat 33. B-1160 Brussels, Belgium
  http://homepages.vub.ac.be/~cgershen/
 
   “Tendencies tend to change...”
 
 
 
 

-- 
*PS: A number of people ask me about the attachment to my email, which
is of type application/pgp-signature. Don't worry, it is not a
virus. It is an electronic signature, that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.


A/Prof Russell Standish                  Phone 8308 3119 (mobile)
Mathematics                              0425 253119 ()
UNSW SYDNEY 2052                         [EMAIL PROTECTED]
Australia                                http://parallel.hpc.unsw.edu.au/rks
International prefix  +612, Interstate prefix 02




FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org


[FRIAM] singularity

2006-07-19 Thread Carlos Gershenson

 Tangentially, this question is part of the reason I am very disturbed
 by the concept of the singularity

Yesterday I made a blog entry about the singularity:
http://complexes.blogspot.com/2006/07/limits-of-moores-law.html

Best regards,

 Carlos Gershenson...
 Centrum Leo Apostel, Vrije Universiteit Brussel
 Krijgskundestraat 33. B-1160 Brussels, Belgium
 http://homepages.vub.ac.be/~cgershen/

   “Winning or losing does not matter as much as what you learn from it”




FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org


Re: [FRIAM] singularity

2006-07-19 Thread Bill Eldridge




Carlos Gershenson wrote:

 Tangentially, this question is part of the reason I am very disturbed
 by the concept of the "singularity"

 Yesterday I made a blog entry about the singularity:
 http://complexes.blogspot.com/2006/07/limits-of-moores-law.html

Well, you note, "How the hell do you program a human mind in there???
It takes us several years just to learn to talk!"

Part of the answer is, "We just copy the ability from computer to
computer." Humans are difficult to clone; machines, much less so. Is it
because machines are so much less complex, or because the method nature
chose was the best available at the time, or because human replication
serves other purposes not satisfied by an in-depth copy? A bit of all
three. Humans have not evolved terribly quickly, but this model has had
a relatively long shelf life - much longer than an ENIAC, say.

As we digitize data, information, and knowledge, it becomes easier to
load up a machine with it all. Obviously, accessing and integrating this
is more important than just having it stored on relatively fast disk,
but it's hard to deny that the ability to store tons of knowledge is an
advantage.

Machines have much faster data transfer internal-to-external.

Where humans do seem to win is in internal communications and in
programming the software. I'm sure as we go to molecular computers we'll
pick up some speed on the internal bus bandwidth as well. Not that
cognition is all about speed - slow filtering is very useful in places.

Regarding the software, well, human development is a little bit stupid.
Yes, it takes us years to learn how to talk, and then we spend years
learning "Row row row your boat" and other time-intensive
learn-by-repetition-and-rote tasks just so we can be relatively
self-sufficient for 50 years, which means we hold meaningless jobs so we
can find time to head to the bar. But our external knowledge and
technology accumulate, so in 2016 we'll be able to organize computer
knowledge much better than we do now, and presumably, as the machines
get smarter and smarter, they can play a larger role in programming
their descendants.

The relevance of Google's answers is much better than it was 10 years
ago, in large part due to commingling requests and answers over millions
of nodes and requests, as well as the algorithms that go into the
responses. Where will this approach be in 10 years? What new insights,
what new applications? Computers will be more capable of aggregating
insights from a billion more nodes and applying those insights to new
problems. While humans are getting better at programming the ability to
have these insights, we are not getting much better at having the
insights ourselves. Our creative thinking more and more depends on the
machine for its completion.

That doesn't mean all computer questions are tackled with ease. There
were big linguistics/machine-learning setbacks in the 1980s, AI was
overhyped, etc. But these efforts don't so much disappear as recur when
technological and societal environments become more prepared to utilize
them. Whether this all leads to a singularity followed by the Cyberiad,
or simply continues as a long-term symbiotic relationship (man and dog,
computer and man), I don't know - I favor the latter. But not because we
can't program - only because the relationship will continue to evolve in
ways we find useful, and we've already made great progress on what we'd
like machines to do even in the short span of computer science. I don't
expect a single algorithm or insight to change everything - I imagine
there will be a number of evolving, slightly incompatible approaches,
from which a few will gain sway and slowly be replaced.

In any case, just because software evolution has historically gone more
slowly than hardware's, I don't think it's inherent in programming that
this has to be true forever. For one thing, we've used hardware as a
stable datum - program new tasks, but leave the hardware design
consistent and backwards compatible. So while the goal of the hardware
is higher performance/efficiency, software has to have better
performance and more features. And attempts to improve automatic
programming have had poor results. But the state of programming is much
improved over 1991, and my guess is that it's only a matter of time
before all of our efforts in different approaches hit something that
pays off more exponentially.

Okay, I didn't address the one question - if you copy my mind out to
disk and back into another body, will it have identity as "Bill",
self-knowledge, consciousness, etc.? I think Lem answered that in the
Cyberiad, but I'll have to re-read it; I don't store data that well.




FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org

Re: [FRIAM] singularity

2006-07-19 Thread Russell Standish
On Wed, Jul 19, 2006 at 02:07:58PM +0200, Carlos Gershenson wrote:
 
 Also agree, but what I claim is that maybe the evolution of software  
 is not exponential, as it is with hardware, so there would be no  
 singularity in sight...

I wouldn't be surprised if software development was actually
exponential; however, it is harder to measure improvement, and the
improvement is not as smooth as hardware improvement.

During my 25 years of programming computers, I have seen several
revolutionary jumps in software: vectorisation, parallelisation,
object-oriented programming, higher-level scripting (Perl, Python et
al), evolutionary algorithms ...

Each of these software techniques has brought orders of magnitude of
increased functionality, but in each case the effect is different
(generally not across the board) and hard to quantify. During the
same period we have seen approximately 8 generations of Intel
processors, or 5 orders of magnitude in processor performance, measured
on the same scale.
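
(Back-of-the-envelope check, assuming a Moore's-law doubling period of
roughly 18 months - an assumed figure, not stated above:)

    # Rough check of "~5 orders of magnitude over 25 years", assuming
    # performance doubles every ~1.5 years (the doubling period is assumed).
    years = 25
    doubling_period_years = 1.5
    growth_factor = 2 ** (years / doubling_period_years)
    print(f"growth factor: {growth_factor:.2e}")  # ~1.0e+05, i.e. ~5 orders of magnitude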

Cheers

-- 
*PS: A number of people ask me about the attachment to my email, which
is of type application/pgp-signature. Don't worry, it is not a
virus. It is an electronic signature, that may be used to verify this
email came from me if you have PGP or GPG installed. Otherwise, you
may safely ignore this attachment.


A/Prof Russell Standish                  Phone 8308 3119 (mobile)
Mathematics                              0425 253119 ()
UNSW SYDNEY 2052                         [EMAIL PROTECTED]
Australia                                http://parallel.hpc.unsw.edu.au/rks
International prefix  +612, Interstate prefix 02




FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org


Re: [FRIAM] singularity

2006-07-19 Thread Robert Holmes
Indeed. I'm certainly capable of misapplying statistical techniques several
orders of magnitude faster than I could a decade ago.

Robert

On 7/19/06, Russell Standish [EMAIL PROTECTED] wrote:

 On Wed, Jul 19, 2006 at 02:07:58PM +0200, Carlos Gershenson wrote:
  Also agree, but what I claim is that maybe the evolution of software
  is not exponential, as it is with hardware, so there would be no
  singularity in sight...

 I wouldn't be surprised if software development was actually
 exponential; however, it is harder to measure improvement, and the
 improvement is not as smooth as hardware improvement.

 During my 25 years of programming computers, I have seen several
 revolutionary jumps in software: vectorisation, parallelisation,
 object-oriented programming, higher-level scripting (Perl, Python et
 al), evolutionary algorithms ...

 Each of these software techniques has brought orders of magnitude of
 increased functionality, but in each case the effect is different
 (generally not across the board) and hard to quantify. During the
 same period we have seen approximately 8 generations of Intel
 processors, or 5 orders of magnitude in processor performance, measured
 on the same scale.

 Cheers

FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org