Re: [fonc] History of computing talks at SJSU

2011-12-16 Thread karl ramberg
On Thu, Dec 15, 2011 at 2:09 AM, Jecel Assumpcao Jr. je...@merlintec.com wrote:

 Karl Ramberg wrote:

  One of Alan's points in his talk is that students should be using
  bleeding-edge hardware, not just regular laptops. I think he is right to
  some extent, but he also recalled the JOSS environment, which was done on
  a machine about to be scrapped. Some research and development does not
  need bleeding-edge hardware. It can get a long way by using what you have
  to its fullest.

 You mixed research and development, and they are rather different. One
 is building stuff for the computers of 2020, the other for those of
 2012.


It's true that I mixed them. Alas, much development is research and much
research is development :-)

Karl


 I was at a talk where Intel was showing their new multicore direction
 and the guy kept repeating how the academic people really should be
 changing their courses to teach their students to deal with, for
 example, four cores. At the very end he showed an experimental 80-core
 chip, and as he ended the talk and took questions he left that slide up.
 When it was my turn to ask, I pointed to the 80-core chip on the screen
 and asked if programming it was exactly the same as on a quad core. He
 said it was different, so I asked if it wouldn't be a better investment to
 teach the students to program the 80-core one instead. He said he didn't
 have an answer to that.

 About JOSS: we normally like to plot computer improvement on a log
 scale. But if you look at it on a linear scale, you see that many years
 go by initially where we don't see any change. So the relative
 improvement in five years is more or less the same no matter which five
 years you pick, but the absolute improvement is very different. When I
 needed a serious computer for software development back in 1985 I
 built an Apple II clone for myself, even though that machine was already
 8 years old at the time (about five Moore cycles). The state of the art
 in personal computers at the time was the IBM PC AT (6 MHz iAPX286), which
 was indeed a few times faster than the Apple II, but not enough to make
 a qualitative difference for me. If I compare a 1992 PC with one from
 2000, the difference is far more important to me.
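
(To make the relative-vs-absolute point concrete, here is a small illustrative
derivation; the doubling time T is an assumption, e.g. 18-24 months, not a
figure from the thread:)

```latex
% Illustrative: performance with an assumed doubling time T
P(t) = P_0 \, 2^{t/T}
\qquad
\frac{P(t+5)}{P(t)} = 2^{5/T} \quad\text{(the same for any five-year window)}
\qquad
P(t+5) - P(t) = P(t)\,\bigl(2^{5/T} - 1\bigr) \quad\text{(grows as $P(t)$ grows)}
```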

  On Tue, Dec 13, 2011 at 9:02 PM, Kim Rose wrote:
 
  For those of you looking to hear more from Alan Kay -- you'll find a talk
  from him and several other big names in computer science here -- thanks to
  San Jose State University.
 
 
 http://www.sjsu.edu/atn/services/webcasting/archives/fall_2011/hist/computing.html

 Thanks, Kim, for the link!

 I have added this and four other talks from 2011 to

 http://www.smalltalk.org.br/movies/

 I also added a link to the ESUG channel on YouTube, which has lots of
 stuff from their recent conferences.

 Cheers,
 -- Jecel




Re: [fonc] History of computing talks at SJSU

2011-12-16 Thread John Zabroski
I disagree with the tone in Alan's talk here.  While it is great to
see what was happening in the 50-70s, he makes it sound like there is
absolutely nothing worth talking about in the personal computing
space in the past 30 years.

Pranav Mistry's work on SixthSense technology and the mouseless
mouse alone raises legitimate counterpoints to much of what is
suggested by this talk.  For example, Alan touches upon Engelbart's
fury over what happened with the mouse and how the needs of mass-market
commercialization trump utility.  Yet I see a future where we
are far less dependent on mechanical tools like the mouse.

But progress takes time.  For example, the first e-ink technologies
were developed at PARC in the '70s by Nicholas K. Sheridon as a
prototype for a future Alto computer (not mentioned at all by Alan in
his talk).  Reducing the cost of manufacturing such displays has been a
long-running process and one I follow intently.  For example, only
recently has a consortium of researchers come up with a fairly
brilliant idea: use the same techniques found in inkjet printing to
print PHOLED screens, making flexible e-paper as cost-effective to
produce as inkjet printing made printing on paper.

With these newer media we will also need greater automation in
analyzing so-called big data.  Today most analysis is not automated
by computers, and so scientists are separated from truly interacting
with their massive datasets.  They have to talk to project managers,
who then talk to programmers, who then write code that gets deployed
to QA, etc.  The human social process here is fraught with error.

On Tue, Dec 13, 2011 at 3:02 PM, Kim Rose kim.r...@vpri.org wrote:
 For those of you looking to hear more from Alan Kay -- you'll find a talk
 from him and several other big names in computer science here -- thanks to
 San Jose State University.

  http://www.sjsu.edu/atn/services/webcasting/archives/fall_2011/hist/computing.html

  -- Kim




Re: [fonc] History of computing talks at SJSU

2011-12-16 Thread Eugen Leitl
On Fri, Dec 16, 2011 at 04:14:40PM -0300, Jecel Assumpcao Jr. wrote:
 Eugen Leitl wrote:
 
  It's remarkable how few are using MPI in practice. A lot of code 
  is being made multithread-proof, and for what? So that they'll have
  to rewrite it for message-passing, again?
 
 Having seen a couple of applications which used MPI it seems like a dead
 end to me. The code is mangled to the point where it becomes really hard

Yes, you're running into the limitations of the human mind. Despite
being a massively parallel process underneath, the upper layers,
somewhat paradoxically, have big problems with utilizing parallelism.

I actually think that the problem is unsolvable at the human end
(just consider debugging millions to billions of fine-grained
asynchronous shared-nothing processes) and has to be routed around
the human by automatic code generation by stochastic means.
Growing your code a la Darwin might be the only thing that could
scale. Of course, we have to learn evolvability first. Current
stuff is way too brittle.

 to understand what it does (in one case I rewrote it with OpenMP and the

OpenMP assumes shared memory, and shared memory does not exist in
this universe. It has to be expensively emulated. Cache coherency
will be distinctly dead well before we'll get to kilonode country.
We can already rack some quite impressive numbers of ARM-based
SoCs on a mesh without the corium failure mode if cooling
fails briefly.
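
(For concreteness, a minimal sketch of the explicit message-passing style being
contrasted with shared memory here: a two-rank MPI ping-pong. Illustrative
only -- not taken from any of the applications discussed, and it assumes the
job is launched with at least two ranks.)

```c
/* Minimal MPI ping-pong sketch: rank 0 sends a value to rank 1, which
 * increments it and sends it back. Explicit messages, no shared memory. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, value = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 0 got back %d\n", value);   /* prints 43 */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        value += 1;
        MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```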

 difference in clarity was amazing). Fortunately, message passing in
 Smalltalk looks far nicer and doesn't get in the way. So that is what I

I must admit I've never done Smalltalk in anger, though I definitely
loved the concept when I did my history in the early 1980s.

 am working on (and yes, I know all about Peter Deutsch's opinion about
 making local and remote messages look the same -
 http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing).

If you remove the cache and use cache-like embedded memory,
then accessing remote locations by message passing (routed via a
cut-through signalling mesh) is only slightly more expensive than
accessing local embedded memory. Some gate delays and relativistic
latency (think of a ping-pong across a 300 mm wafer) do apply, of course.
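
(Rough, illustrative numbers only: even at the speed of light, a round trip
across a 300 mm wafer is on the order of nanoseconds, i.e. several cycles at
GHz clocks; real on-wafer signalling is slower still.)

```latex
t_{\text{round trip}} \;\ge\; \frac{2 \times 0.3\,\text{m}}{3 \times 10^{8}\,\text{m/s}}
= 2\,\text{ns} \;\approx\; 6\ \text{cycles at } 3\,\text{GHz}
```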
 
 How can we spend money now to live in the future? Alan mentioned the
 first way in his talk: put lots and lots of FPGAs together. The BEE3

FPGAs suffer from a lack of embedded memory. Consider GPGPU with a
quarter of a TByte/s of bandwidth across 2-3 GByte grains. You just
can't compete with the economies of scale that let you mesh hundreds
to thousands of such nodes with InfiniBand.

 board isn't cheap (something like $5K without the FPGAs, which are a few
 thousand dollars each themselves, or the memory), and a good RAMP machine
 hooks a bunch of these together. The advantage of this approach is that
 each FPGA is large enough to do pretty much anything you can imagine. If
 you know your processors will be rather small, it might be more cost
 effective to have a larger number of cheaper FPGAs. That is what I am
 working on.
 
 A second way to live in the future is far less flexible, and so should
 only be a second step after the above is no longer getting you the
 results you need: use wafer-scale integration to have, today, roughly the
 same number of transistors a typical chip will have in 2020. This
 is pretty hard (just ask Clive Sinclair or Gene Amdahl how much they
 lost on wafer-scale integration back in the 1980s). But if you can get
 it to work, then you could distribute hundreds (or more) of 2020's
 computers to today's researchers.

But today's computers, like tomorrow's, are already large clusters.
The question is how many nodes you can afford, and what your
electricity bill is. If you know how your problem maps, you'll just
pick the best COTS hardware of today and run it for 3-5 years, after which
it's cheaper to buy new hardware than to keep paying the electricity
bill.
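
(Purely illustrative numbers, not from the thread: a node drawing 0.5 kW around
the clock at an assumed $0.10/kWh costs a few hundred dollars a year in
electricity, so over 3-5 years the power bill becomes comparable to the
purchase price of a commodity node.)

```latex
E \approx 0.5\,\text{kW} \times 8760\,\text{h} \approx 4380\,\text{kWh/yr},
\qquad
4380\,\text{kWh} \times \$0.10/\text{kWh} \approx \$438\ \text{per node per year}
```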

I'm not sure how well the Smalltalk model would fare here.




Re: [fonc] History of computing talks at SJSU

2011-12-16 Thread Steve Dekorte

FWIW, in my memory, my old NeXTstation felt as snappy as modern desktops but 
when I ran across one at the Computer History Museum it felt painfully slow. 
I've had similar experiences with seeing old video games and finding the 
quality of the graphics to be much lower than I remembered.

This is just a guess, but I suspect what we remember is strongly influenced by 
our emotional reactions which in turn are shaped by our expectations. At the 
time, my expectations were lower.

On 2011-12-16 Fri, at 11:14 AM, Jecel Assumpcao Jr. wrote:

 Compare running Squeak on a 40 MHz 386 PC (my 1992 computer) with running
 the exact same code on a 1 GHz Pentium 4 PC (available to me in 2000).
 Not even the old MVC interface is really usable on the first, while the
 second machine can handle Morphic just fine. The quantitative difference
 becomes a qualitative one. I didn't feel the same between my 1 MHz Apple
 II and the 6 MHz PC AT. But of course there was a difference - to show off
 the AT in trade shows we used to run a Microsoft flight simulator called
 Jet (later merged with MS Flight Simulator) on that machine side by side
 with a 4.77 MHz PC XT. It was a fun game on the AT, but looked more like
 a slide show on the XT. I still felt I could get by with the Apple II,
 however.




Re: [fonc] History of computing talks at SJSU

2011-12-16 Thread Steve Dekorte

On 2011-12-16 Fri, at 01:38 PM, Eugen Leitl wrote:
 How can we spend money now to live in the future? Alan mentioned the
 first way in his talk: put lots and lots of FPGAs together. The BEE3
 
 FPGAs suffer from a lack of embedded memory. Consider GPGPU with a
 quarter of a TByte/s of bandwidth across 2-3 GByte grains. You just
 can't compete with the economies of scale that let you mesh hundreds
 to thousands of such nodes with InfiniBand.

Is speed really the bottleneck for making computers more useful?

Personally, I don't find myself waiting on my computer much anymore. 
Most of my time is instead spent trying to tell the machine what to do 
while it sits there, idling.



Re: [fonc] History of computing talks at SJSU

2011-12-16 Thread Alan Kay
I hope I didn't say there was absolutely nothing worth talking about in the 
'personal computing' space in the past 30 years (and don't think I did say 
that).

Let us all share in the excitement of Discovery without vain attempts to claim 
priority -- Goethe

So some recent manifestations of ideas and technologies such as multitouch, 
mouseless, and SixthSense should be praised.

However, it is also interesting to discover where ideas came from and who came 
up with them first -- this helps us understand and differentiate "high 
creativity from low context" versus "low creativity from high context".

I don't know who did the mouseless idea first, but certainly Dick Shoup, at 
Xerox PARC and later at Interval, conceived and showed something very similar. 
One of the central parts of this was to use image recognition to track people, 
hands, and fingers.

Similarly, the SixthSense idea has much in common with Nicholas Negroponte's 
(and many in his Arch-Mac group at MIT) idea in the '70s that we would wear 
things that would let computers know where we are and where we are pointing; 
that there would be displays everywhere (made by a variety of means); that the 
Internet would be everywhere by then; and that there would be embedded computers 
everywhere, etc., so that one's helper agents would have the effect of 
following us around and responding to our gestures and commands. There are 
several terrific movies of their prototypes.


Multitouch, similarly, is hard to trace to a first inventor, but again Nicholas's 
Arch-Mac group certainly did it (Chris Herot, as I recall) in the '70s.

And what Engelbart was upset about was that the "hands out" -- "hands together" 
style did not survive. "Hands out" had one hand on the 5-finger keyboard 
and the other on the mouse with its 3 buttons -- this allowed navigation and all 
commands and typing to be done really efficiently compared to today. "Hands 
together" on the regular keyboard only happened when you had bulk typing to do.

It should be clear that being able to sense all the fingers in some way that 
allows piano-keyboard-like fluency/polyphony is still a good idea. Musical 
instruments require some training and practice, but then allow many more degrees 
of freedom to be controlled.
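
(A toy sketch of the chording idea: with 5 keys, each finger is one bit, so the
31 non-empty chords can each select a character. The table below is invented
for illustration; it is not Engelbart's actual mapping.)

```c
/* Minimal sketch of decoding a 5-finger chord keyset: each finger is one
 * bit, so 31 non-empty chords map to characters. Hypothetical mapping. */
#include <stdio.h>

static const char chord_table[32] = {
    0,  'a', 'b', 'c', 'd', 'e', 'f', 'g',
    'h','i', 'j', 'k', 'l', 'm', 'n', 'o',
    'p','q', 'r', 's', 't', 'u', 'v', 'w',
    'x','y', 'z', ' ', '.', ',', '-', '?'
};

/* keys: bit i set means finger i is pressed (bit 0 = thumb). */
char decode_chord(unsigned keys) {
    return (keys > 0 && keys < 32) ? chord_table[keys] : 0;
}

int main(void) {
    printf("chord 00101 -> '%c'\n", decode_chord(0x05)); /* index 5 -> 'e' */
    return 0;
}
```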


And, though Nick Sheridon was the leader of the PARC electrophoretic 
migration display project, it was colloidal chemist Ann Chiang who 
accomplished many of the breakthroughs in the '70s. That Xerox didn't follow 
through with this technology was a great disappointment for me. It was really 
nice, and even the prototype had higher contrast ratios than the e-ink displays 
of today (different approach, different kinds of particles).

And a few things have happened since 1980 ... but the talk was supposed to be 
about the Dynabook idea ...

Best wishes,

Alan





 From: John Zabroski johnzabro...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org 
Sent: Friday, December 16, 2011 1:12 PM
Subject: Re: [fonc] History of computing talks at SJSU
 

Re: [fonc] History of computing talks at SJSU

2011-12-16 Thread John Zabroski
On Fri, Dec 16, 2011 at 6:19 PM, Alan Kay alan.n...@yahoo.com wrote:
 I hope I didn't say there was absolutely nothing worth talking about in the
 'personal computing' space in the past 30 years (and don't think I did say
 that).

 Let us all share in the excitement of Discovery without vain attempts to
 claim priority -- Goethe


Certainly.  We can't argue with Goethe.  Yet, I don't think that applies here.

You said that our field had become so impoverished because nobody
googles Douglas Engelbart and watches The Mother of All Demos, and
also noted that evolution finds fits rather than optimal solutions.
But you didn't really provide any examples of how we are the victims
of evolution finding these fits.  So I think I am providing a valuable
push back by being my stubborn self and saying, Hey, wait, I know
that's not true.  It just seemed very incongruent to the question of
how we see the present: is it solely in terms of the past?  And the
real question is: what do you want it to do for its end users?  You
answer this question from your own perspective, but only by saying *WE*
wanted children to learn profound things...

There is good content in your talk, owing to your immense experience
and knowledge, but it is dispersed like a spray.

If I could summarize one takeaway, it's that the medium is
the message, and the performance of the medium changes how people
think and interact with computers and each other.  But even that
takeaway feels buried in digressions.  The other takeaways I got were:

* a note to self to read E.M. Forster's The Machine Stops
* Nobody wants a coordinate system if we don't have to use one, for
goodness sakes.
* We still don't write computer systems that take into account the
user's context
* You mentioned that you worked on fonts, but didn't say anything
about the books you read and the research you did on displaying fonts

Just 2 cents.

 Dick Shoup at Xerox PARC and later at Interval, conceived and showed
 something very similar.

Tried googling this using various phrases and spellings.  Zero results.



Re: [fonc] History of computing talks at SJSU

2011-12-16 Thread Jecel Assumpcao Jr.
John Zabroski wrote:

 You said that our field had become so impoverished because nobody
 googles Douglas Engelbart and watches The Mother of All Demos, and
 also noted that evolution finds fits rather than optimal solutions.
 But you didn't really provide any examples of how we are the victims
 of evolution finding these fits.

Alan mentioned the Burroughs B5000 compared with the architectures that
survived. In Donald Knuth's talk the same design was mentioned as an
example of a mistake we got rid of (a guy who still only programs in
assembly would say that ;-). So the students got to hear both sides.

 So I think I am providing a valuable
 push back by being my stubborn self and saying, Hey, wait, I know
 that's not true.  It just seemed very incongruent to the question of
 how we see the present: is it solely in terms of the past?

Normally Alan presents seeing the past only in terms of the present as
being the problem because this also limits how you see the future. Take
any modern timeline of the microprocessor, for example. It will indeed
be a line and not a tree. It will start with the 4004, then 8008, 8080,
8086, 286 and so on to the latest Core i7. Interesting parts of the
past, like the 6502, the 29000 and so many others can't be seen because
nothing in the present traces back to them.

-- Jecel




Re: [fonc] History of computing talks at SJSU

2011-12-16 Thread John Zabroski
On Fri, Dec 16, 2011 at 10:04 PM, Jecel Assumpcao Jr.
je...@merlintec.com wrote:
 Steve Dekorte wrote:

 [NeXTStation memories versus reality]

 I still have a running Apple II. My slowest working PC is a 33MHz 486,
 so I can't directly do the comparison I mentioned. But I agree we
 shouldn't trust what we remember things feeling like.

 -- Jecel


The Apple II booting up faster was not simply a feeling, but a fact owing
to its human-computer interaction demands.  They set fast boot times
as a design criterion.  Jef Raskin talks about this in the book The
Humane Interface.  Even modern attempts to reduce boot time have not
been that good, such as Upstart, an event-driven alternative to
init.

Eugen has some very good points about human limits of managing
performance details, though.  Modern approaches to performance are
already moving away from such crude methods.



Re: [fonc] History of computing talks at SJSU

2011-12-16 Thread John Zabroski

By the way, slight tangent: modern operating systems, with all their
hot-swapping requirements, do a poor job of distinguishing a device error
from a device being continuously plugged in and unplugged. For
example, if you have an optical mouse and damage it, it might slowly
die, and your entire system will hang because 99% of your CPU will be
handling plug-in and plug-out events.
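
(One way a host could tell a flapping, dying device from ordinary hotplug churn
is a simple rate limiter that quarantines the device. A hypothetical sketch,
not how any particular OS actually does it; thresholds are arbitrary.)

```c
/* Illustrative sketch: quarantine a device that "flaps" (connects and
 * disconnects repeatedly), so a dying mouse can't monopolize the CPU
 * with hotplug events. No real OS API is used here. */
#include <stdio.h>
#include <time.h>

#define MAX_EVENTS   20    /* events allowed ...            */
#define WINDOW_SECS  10    /* ... within this many seconds  */

struct flap_state {
    time_t window_start;
    int    events;
    int    quarantined;
};

/* Returns 1 if the event should be processed, 0 if the device is ignored. */
int on_hotplug_event(struct flap_state *s, time_t now) {
    if (s->quarantined) return 0;
    if (now - s->window_start > WINDOW_SECS) {
        s->window_start = now;      /* start a new counting window */
        s->events = 0;
    }
    if (++s->events > MAX_EVENTS) {
        s->quarantined = 1;         /* stop servicing this device */
        return 0;
    }
    return 1;
}

int main(void) {
    struct flap_state mouse = { time(NULL), 0, 0 };
    for (int i = 0; i < 30; i++) {
        if (!on_hotplug_event(&mouse, time(NULL)))
            printf("event %d dropped (device quarantined)\n", i);
    }
    return 0;
}
```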



Re: [fonc] History of computing talks at SJSU

2011-12-16 Thread Casey Ransberger
Below. Abridged. 

On Dec 16, 2011, at 1:42 PM, Steve Dekorte st...@dekorte.com wrote:

 
 FWIW, in my memory, my old NeXTstation felt as snappy as modern desktops but 
 when I ran across one at the Computer History Museum it felt painfully slow. 
 I've had similar experiences with seeing old video games and finding the 
 quality of the graphics to be much lower than I remembered.
 
 This is just a guess, but I suspect what we remember is strongly influenced 
 by our emotional reactions which in turn are shaped by our expectations. At 
 the time, my expectations were lower.

This is an excellent point. 

At work I'm using a 32-bit single-core machine that's 0.6 GHz slower than my 
personal 64-bit dual-core machine. 

Once in a while, I notice that it's slower. I have a feeling, though, that this 
is a consequence of slower hardware *compounded* by expensive software, because 
most of the time I can't tell the difference at all.

What I'm saying, in part, is that the computational power of modern computers 
typically eclipses my personal need for computing power. When things are 
suddenly slow, I suspect the algorithm or data structure. 

Whereas it used to be that everything seemed to take a long time. 

Some things are just expensive. No one has found an acceptable solution. These 
are things we should avoid in the infrastructure underneath a personal 
computing experience :)


Re: [fonc] History of computing talks at SJSU

2011-12-16 Thread Wesley Smith
 Some things are just expensive. No one has found an acceptable solution. 
 These are things we should avoid in the infrastructure underneath a personal 
 computing experience:)


Or figure out how to amortize them over time.  I think recent
raytracing apps are a good example of this.  You can preview the image
as it is rendered to see if it's just right and, if not, tweak it.
Another example is scraping data to build a database that will inform
autocompletion and other productivity-enhancing UI effects.  Sometimes
gathering and parsing the data to put in the database can be
expensive, but it can easily be done in a background thread without
any cost to responsiveness.  I'm sure there are plenty of other
examples.
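
(A tiny pthreads sketch of that pattern -- the expensive index-building happens
off the main loop. Names and timings are made up, and a real program would use
proper synchronization rather than a volatile flag.)

```c
/* Minimal sketch of "do the expensive part in a background thread": a
 * worker builds an (imaginary) autocomplete index while the main loop
 * stays responsive. Compile with: cc -pthread example.c */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static volatile int index_ready = 0;   /* sketch only; use atomics in real code */

static void *build_index(void *arg) {
    (void)arg;
    sleep(2);                          /* stand-in for expensive scraping/parsing */
    index_ready = 1;
    return NULL;
}

int main(void) {
    pthread_t worker;
    pthread_create(&worker, NULL, build_index, NULL);

    /* The "UI loop" keeps responding while the index is being built. */
    while (!index_ready) {
        printf("still responsive; autocomplete falls back to no suggestions\n");
        usleep(500000);
    }
    printf("index ready: autocomplete now uses it\n");

    pthread_join(worker, NULL);
    return 0;
}
```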

wes



Re: [fonc] History of computing talks at SJSU

2011-12-16 Thread Casey Ransberger
Below. 

On Dec 16, 2011, at 3:19 PM, Alan Kay alan.n...@yahoo.com wrote:

 And what Engelbart was upset about was that the "hands out" -- "hands together" 
 style did not survive. "Hands out" had one hand on the 5-finger 
 keyboard and the other on the mouse with its 3 buttons -- this allowed 
 navigation and all commands and typing to be done really efficiently compared 
 to today. "Hands together" on the regular keyboard only happened when you had 
 bulk typing to do.

Are you talking about the so-called chording keyboard?

I had an idea years ago to have a pair of Twiddlers (the one chording 
keyboard I'd seen was called a Twiddler) that tracked movement of both hands 
over the desktop, basically giving you two pointing devices and a keyboarding 
solution at the same time. 

Now it's all trackpads and touch screens, and my idea seems almost Victorian :)