Re: [fonc] IBM eyes brain-like computing

2011-10-29 Thread BGB

On 10/29/2011 6:46 AM, karl ramberg wrote:

On Sat, Oct 29, 2011 at 5:06 AM, BGB cr88...@gmail.com wrote:

On 10/28/2011 2:27 PM, karl ramberg wrote:

On Fri, Oct 28, 2011 at 6:36 PM, BGB cr88...@gmail.com wrote:

On 10/28/2011 7:28 AM, K. K. Subramaniam wrote:

On Thursday 27 Oct 2011 11:27:39 PM BGB wrote:

most likely, processing power will stop increasing (WRT density and/or
watts) once the respective physical limits are met (basically, it would
no longer be possible to get more processing power in the same space or
using less power within the confines of the laws of physics).

The adoption of computing machines at large is driven primarily by three
needs: power (portable), space/weight, and speed. The last two are now
solvable in the large, but the first (power) is still stuck in the dark ages.
I recollect a joke by Dr An Wang (founder of Wang Labs) in a keynote during
the 80s that goes something like this:

A man struggled to lug two heavy suitcases into a bogie in a train that was
just about to depart. A fellow passenger helped him in and they started a
conversation. The man turned out to be a salesman from a company that made
portable computers. He showed one that fit in a pocket to his fellow
passenger. "It does everything that a mainframe does and more, and it costs
only $100." "Amazing!" exclaimed the passenger as he held the marvel in his
hands. "Where can I get one?" "You can have this piece," said the gracious
gent, "as a thank-you gift for helping me." "Thank you very much." The
passenger was thrilled beyond words as he gingerly explored the new gadget.
Soon, the train reached the next station and the salesman stepped out. As
the train departed, the passenger yelled at him, "Hey! You forgot your
suitcases!" "Not really!" the gent shouted back. "Those are the batteries
for your computer."

;-) .. Subbu

yeah...

this is probably a major issue at this point with hugely multi-core
processors:
if built, they would likely use lots of power and produce lots of heat.

this is sort of also an issue with video cards: one gets a new/fancy
nVidia card, which is then noted to have a few issues:
it takes up two card slots (much of this apparently its heat-sink);
it is long enough that it partially sticks into the hard-drive bays;
it requires a 500W power supply;
it requires 4 plugs from the power-supply;
...

so, then one can joke that they have essentially installed a brick into
their computer.

never mind that it gets high framerates in games...


however, they would have an advantage as well:
people can still write their software in good old C/C++/Java/...

it is likely that existing programming languages and methodologies will
continue to be necessary for new computing technologies.


likewise, people will continue pushing to gradually drive down the memory
requirements, but for the most part the power use of devices has been
largely dictated by what one can get from plugging a power cord into the
wall (vs either running off batteries, or, OTOH, requiring one to plug in a
240V dryer/arc-welder/... style power cord).


elsewhere, I designed a hypothetical ISA, partly combining ideas from ARM
and x86-64, with a few unique ways of representing instructions (the idea
being that they are aligned values of 1/2/4/8 bytes, rather than either
more free-form byte-patterns or fixed-width instruction-words).

or such...




This is also relevant to understanding how to make these computers work:

http://www.infoq.com/presentations/We-Really-Dont-Know-How-To-Compute

seems interesting, but is very much a pain trying to watch as my internet is
slow and the player doesn't really seem to buffer up the video all that far
when paused...


but, yeah, eval and reflection are features I really like, although sadly
one doesn't really have much of anything like this as standard in C, meaning
one has to put a lot of effort into building a lot of scripting and VM
technology largely just to make up for the lack of things like 'eval' and
'apply'.


this becomes at times a point of contention with many C++ developers, who
often believe that the greatness of C++ for everything more than makes up
for its lack of reflection or dynamic features; I hold that plain C has a
lot of merit, if anything because it is more readily amenable to dynamic
features (which can plug into the language from outside), which more or less
makes up for the lack of syntax sugar in many areas...

The notion I get from this presentation is that he is against C and
static languages in general.
It seems lambda-calculus-derived languages that are very dynamic and
can self-generate code are the direction he thinks the exploration
should take.


I was not that far into the video at the point I posted, due mostly to
slow internet, and the player not allowing the "pause, let it buffer,
and come back later" strategy generally needed for things

Re: [fonc] IBM eyes brain-like computing

2011-10-28 Thread BGB

On 10/28/2011 7:28 AM, K. K. Subramaniam wrote:

On Thursday 27 Oct 2011 11:27:39 PM BGB wrote:

most likely, processing power will stop increasing (WRT density and/or
watts) once the respective physical limits are met (basically, it would
no longer be possible to get more processing power in the same space or
using less power within the confines of the laws of physics).

The adoption of computing machines at large is driven primarily by three
needs: power (portable), space/weight, and speed. The last two are now
solvable in the large, but the first (power) is still stuck in the dark ages.
I recollect a joke by Dr An Wang (founder of Wang Labs) in a keynote during
the 80s that goes something like this:

A man struggled to lug two heavy suitcases into a bogie in a train that was
just about to depart. A fellow passenger helped him in and they started a
conversation. The man turned out to be a salesman from a company that made
portable computers. He showed one that fit in a pocket to his fellow
passenger. "It does everything that a mainframe does and more, and it costs
only $100." "Amazing!" exclaimed the passenger as he held the marvel in his
hands. "Where can I get one?" "You can have this piece," said the gracious
gent, "as a thank-you gift for helping me." "Thank you very much." The
passenger was thrilled beyond words as he gingerly explored the new gadget.
Soon, the train reached the next station and the salesman stepped out. As
the train departed, the passenger yelled at him, "Hey! You forgot your
suitcases!" "Not really!" the gent shouted back. "Those are the batteries
for your computer."

;-) .. Subbu


yeah...

this is probably a major issue at this point with hugely multi-core 
processors:

if built, they would likely use lots of power and produce lots of heat.

this is sort of also an issue with video cards: one gets a new/fancy
nVidia card, which is then noted to have a few issues:

it takes up two card slots (much of this apparently its heat-sink);
it is long enough that it partially sticks into the hard-drive bays;
it requires a 500W power supply;
it requires 4 plugs from the power-supply;
...

so, then one can joke that they have essentially installed a brick into 
their computer.


never mind that it gets high framerates in games...


however, they would have an advantage as well:
people can still write their software in good old C/C++/Java/...

it is likely that existing programming languages and methodologies will
continue to be necessary for new computing technologies.



likewise, people will continue pushing to gradually drive down the memory
requirements, but for the most part the power use of devices has been
largely dictated by what one can get from plugging a power cord into the
wall (vs either running off batteries, or, OTOH, requiring one to plug in a
240V dryer/arc-welder/... style power cord).



elsewhere, I designed a hypothetical ISA, partly combining ideas from 
ARM and x86-64, with a few unique ways of representing instructions 
(the idea being that they are aligned values of 1/2/4/8 bytes, rather 
than either more free-form byte-patterns or fixed-width instruction-words).
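
as a rough illustration of the idea (the exact encoding is not spelled out
above, so, purely for the sketch, assume the low 2 bits of the first byte
select the size class: 00=1, 01=2, 10=4, 11=8 bytes), a length decoder in C
could look like:

/* sketch only: hypothetical encoding, low 2 bits of byte 0 give the size */
#include <stdint.h>
#include <stdio.h>

static size_t insn_length(const uint8_t *p)
{
    static const size_t len_tab[4] = { 1, 2, 4, 8 };
    return len_tab[p[0] & 0x3];        /* size class from the low 2 bits */
}

int main(void)
{
    /* dummy stream: 4-byte op at 0, 2-byte op at 4, two 1-byte ops at 6/7 */
    uint8_t code[8] = { 0x12, 0x34, 0x56, 0x78, 0x05, 0xAA, 0x00, 0x04 };
    size_t ip = 0;

    while (ip < sizeof(code)) {
        size_t len = insn_length(&code[ip]);
        printf("insn at offset %zu, %zu byte(s)\n", ip, len);
        ip += len;                     /* instructions stay naturally aligned */
    }
    return 0;
}

the nice part is that the fetch/decode logic never has to scan bytes to find
instruction boundaries, unlike the more free-form x86-style encodings.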


or such...




Re: [fonc] IBM eyes brain-like computing

2011-10-28 Thread BGB

On 10/28/2011 2:27 PM, karl ramberg wrote:

On Fri, Oct 28, 2011 at 6:36 PM, BGB cr88...@gmail.com wrote:

On 10/28/2011 7:28 AM, K. K. Subramaniam wrote:

On Thursday 27 Oct 2011 11:27:39 PM BGB wrote:

most likely, processing power will stop increasing (WRT density and/or
watts) once the respective physical limits are met (basically, it would
no longer be possible to get more processing power in the same space or
using less power within the confines of the laws of physics).

The adoption of computing machines at large is driven primarily by three
needs: power (portable), space/weight, and speed. The last two are now
solvable in the large, but the first (power) is still stuck in the dark ages.
I recollect a joke by Dr An Wang (founder of Wang Labs) in a keynote during
the 80s that goes something like this:

A man struggled to lug two heavy suitcases into a bogie in a train that was
just about to depart. A fellow passenger helped him in and they started a
conversation. The man turned out to be a salesman from a company that made
portable computers. He showed one that fit in a pocket to his fellow
passenger. "It does everything that a mainframe does and more, and it costs
only $100." "Amazing!" exclaimed the passenger as he held the marvel in his
hands. "Where can I get one?" "You can have this piece," said the gracious
gent, "as a thank-you gift for helping me." "Thank you very much." The
passenger was thrilled beyond words as he gingerly explored the new gadget.
Soon, the train reached the next station and the salesman stepped out. As
the train departed, the passenger yelled at him, "Hey! You forgot your
suitcases!" "Not really!" the gent shouted back. "Those are the batteries
for your computer."

;-) .. Subbu

yeah...

this is probably a major issue at this point with hugely multi-core
processors:
if built, they would likely use lots of power and produce lots of heat.

this is sort of also an issue with video cards: one gets a new/fancy nVidia
card, which is then noted to have a few issues:
it takes up two card slots (much of this apparently its heat-sink);
it is long enough that it partially sticks into the hard-drive bays;
it requires a 500W power supply;
it requires 4 plugs from the power-supply;
...

so, then one can joke that they have essentially installed a brick into
their computer.

never mind that it gets high framerates in games...


however, they would have an advantage as well:
people can still write their software in good old C/C++/Java/...

it is likely that existing programming languages and methodologies will
continue to be necessary for new computing technologies.


likewise, people will continue pushing to gradually drive down the memory
requirements, but for the most part the power use of devices has been
largely dictated by what one can get from plugging a power cord into the
wall (vs either running off batteries, or, OTOH, requiring one to plug in a
240V dryer/arc-welder/... style power cord).


elsewhere, I designed a hypothetical ISA, partly combining ideas from ARM
and x86-64, with a few unique ways of representing instructions (the idea
being that they are aligned values of 1/2/4/8 bytes, rather than either more
free-form byte-patterns or fixed-width instruction-words).

or such...




This is also relevant to understanding how to make these computers work:

http://www.infoq.com/presentations/We-Really-Dont-Know-How-To-Compute


seems interesting, but is very much a pain trying to watch as my 
internet is slow and the player doesn't really seem to buffer up the 
video all that far when paused...



but, yeah, eval and reflection are features I really like, although sadly
one doesn't really have much of anything like this as standard in C, meaning
one has to put a lot of effort into building a lot of scripting and VM
technology largely just to make up for the lack of things like 'eval' and
'apply'.



this becomes at times a point of contention with many C++ developers, who
often believe that the greatness of C++ for everything more than makes up
for its lack of reflection or dynamic features; I hold that plain C has a
lot of merit, if anything because it is more readily amenable to dynamic
features (which can plug into the language from outside), which more or less
makes up for the lack of syntax sugar in many areas...
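
purely as an illustration of the "plug in from outside" point (this is a
toy, nowhere near the actual BGBScript/VM machinery, which is not shown
here), even a bare-bones name-to-function-pointer registry already gives
plain C a crude 'apply':

#include <stdio.h>
#include <string.h>

typedef int (*fn2_t)(int, int);

struct entry { const char *name; fn2_t fn; };

static int add(int a, int b) { return a + b; }
static int mul(int a, int b) { return a * b; }

static struct entry registry[] = {
    { "add", add },
    { "mul", mul },
};

/* look a function up by name and call it: a poor man's (apply f args) */
static int apply2(const char *name, int a, int b, int *ok)
{
    for (size_t i = 0; i < sizeof(registry)/sizeof(registry[0]); i++) {
        if (strcmp(registry[i].name, name) == 0) {
            *ok = 1;
            return registry[i].fn(a, b);
        }
    }
    *ok = 0;
    return 0;
}

int main(void)
{
    int ok;
    printf("add(2,3) = %d\n", apply2("add", 2, 3, &ok));
    printf("mul(4,5) = %d\n", apply2("mul", 4, 5, &ok));
    return 0;
}

a real 'eval' then mostly amounts to parsing source text down to calls like
these, which is where the bulk of the scripting/VM effort goes.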


although, granted, in my case, the language I eval is BGBScript and not 
C, but in many cases they are similar enough that the difference can 
be glossed over. I had considered, but never got around to, creating a 
language I was calling C-Aux, which would have taken this further, being 
cosmetically similar to and mostly (85-95% ?) source-compatible with C, 
but being far more dynamic (being designed to more readily allow quickly 
loading code from source, supporting eval, ...). essentially, in a 
practical sense C-Aux would 

Re: [fonc] IBM eyes brain-like computing

2011-10-27 Thread Eugen Leitl
On Wed, Oct 26, 2011 at 12:53:24AM -0700, BGB wrote:

 from what I read, IBM was using a digital crossbar.

It sounds like Kwabena Boahen (Carver Mead's school) is 
on the right track

http://web.cecs.pdx.edu/~strom/onr_workshop/boahen.pdf
the group seems to be still publishing
http://www.stanford.edu/group/brainsinsilicon/pubs.html

 I suspect something more generic would be needed.
 I don't see how generic will do long-term, other than for bootstrap
 (the co-evolution mentioned above) reasons.

 more generic is more likely to be able to do something interesting.

More generic hardware (not optimized for a particular model)
also means it's less efficient. On the other hand, we don't
have a particular model to optimize for, so right now generic
is the way to go.

 ARM or similar could work (as could a large number of 386-like cores).

 I had considered something like GPGPU, but depending on the type of  
 neural-net, there could be issues with mapping it efficiently to  
 existing GPUs.

You could wire it up in a 3d grid, with offsets to mostly-local other grid
sites, and stream through memory. The refresh rate could be 10 Hz or more,
depending on how complex the model is and how many random-access-like
memory accesses you're producing.
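
A rough sketch of that kind of update pass (grid size, 6-neighbour wiring,
weight and threshold below are arbitrary placeholders, not anything specific;
double-buffering the spike flags keeps the pass order-independent):

#include <stdint.h>
#include <string.h>

#define N       32                 /* N*N*N units */
#define THRESH  100
#define WEIGHT  10

static int16_t acc[N][N][N];       /* accumulated input per unit */
static uint8_t fired[N][N][N];     /* spikes from the previous pass */
static uint8_t next_fired[N][N][N];

static void step(void)             /* run at ~10 Hz, or whatever the model allows */
{
    static const int off[6][3] = { {1,0,0},{-1,0,0},{0,1,0},
                                   {0,-1,0},{0,0,1},{0,0,-1} };
    /* stream linearly through memory; each unit touches only nearby sites */
    for (int z = 0; z < N; z++)
      for (int y = 0; y < N; y++)
        for (int x = 0; x < N; x++) {
            int sum = acc[z][y][x];
            for (int k = 0; k < 6; k++) {
                int zz = (z + off[k][0] + N) % N;
                int yy = (y + off[k][1] + N) % N;
                int xx = (x + off[k][2] + N) % N;
                if (fired[zz][yy][xx]) sum += WEIGHT;
            }
            next_fired[z][y][x] = (uint8_t)(sum >= THRESH);
            acc[z][y][x] = (int16_t)(sum >= THRESH ? 0 : sum - sum / 8);
        }
    memcpy(fired, next_fired, sizeof(fired));
}

int main(void)
{
    fired[0][0][0] = 1;            /* seed one spike and run a few passes */
    for (int i = 0; i < 10; i++)
        step();
    return 0;
}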

 also, the strong-areas for GPUs are not necessarily the same as the  
 requirements for implementing neural nets. again, it could work, but it  

If you can do plenty of parallel ops on short integers (8-16 bit)
then it seems to match well, provided you can feed the shaders.
Mapping things to memory is tricky, as otherwise you'll starve
the shaders and thus underutilize the hardware.

 is just less certain it is ideal.


 the second part of the question is:
 assuming one can transition to a purely biology-like model, is this a
 good tradeoff?...
 if one gets rid of a few of the limitations of computers but gains some
 of the limitations of biology, this may not be an ideal solution.
 Biology never had to deal with high-performance numerics; I'm sure if it
 had, it wouldn't do too shabbily. You can always go hybrid, e.g. if you
 want to do proofs or cryptography.

 biology also doesn't have software distribution, ability to make  
 backups, ...

Even hybrid analog systems can be converted into a large blob of
binary data and loaded back. The loss of precision due to digitization
is negligible, as the system will be noisy/low precision (6 bit) to 
start with.

 ideally, a silicon neural-net strategy would also readily allow the  
 installation of new software and creation of backups.

You can always digitize and serialize your data, put it through a pipe and
reinstantiate it in an equivalent piece of hardware on the other end.
Consider crystalline computation (Margolus/Toffoli,
http://people.csail.mit.edu/nhm/cc.pdf), which maps very well to a 3d
lattice of sites in future molecular hardware. If you can halt the
execution, or clone a static shadow copy of the dynamic process, you can
serialize and extract from the faces of the crystal at leisure.


 the most likely strategy here IMO is to leverage what existing OS's can  
 already do, essentially treating any neural-net processors as another  
 piece of hardware as far as the OS is concerned.

I'm not sure you need an OS. Many aspects of operation can be mapped
to hardware. Consider a 10 GBit/s short-reach fiber: the length of fiber
these keystrokes are passing through is an optical FIFO conveniently
containing a standard MTU (1500 byte) frame/packet. The same goes for
vacuum with a sufficiently fast line-of-sight laser link. With the right
header layout, you can see how you could make a purely photonic
cut-through router or switch, with zero software. Same thing with
operating Toffoli's hypothetical computronium crystal: you have stop and
go, read and write, and that's it. I/O could be mapped directly to
crystal faces.
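
The back-of-envelope arithmetic behind that remark (taking ~2e8 m/s for
light in glass; roughly 1.2 us per frame, or about 240 m of fiber):

#include <stdio.h>

int main(void)
{
    const double rate_bps = 10e9;          /* 10 Gbit/s line rate */
    const double mtu_bits = 1500 * 8.0;    /* standard Ethernet MTU */
    const double v_fiber  = 2.0e8;         /* m/s, light in fiber */

    double t_frame = mtu_bits / rate_bps;  /* time one frame occupies the link */
    double length  = t_frame * v_fiber;    /* metres of fiber "storing" it */

    printf("frame time: %.2f us, fiber needed: %.0f m\n",
           t_frame * 1e6, length);
    return 0;
}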

 this would probably mean using neural-net processors along-side  
 traditional CPU cores (rather than in place of them).


 better would be to try for a strategy where the merits of both can be
 gained, and as many limitations as possible can be avoided.

 most likely, this would be via a hybrid model.
 Absolutely. Hybrid at many scales, down to analog computation for neurons.

 yeah.

 analog is an idea I had not personally considered.

 I guess a mystery here is how effectively semiconductor logic (in  
 integrated circuits) can work with analog signals.

 the main alternative is, of course, 8 or 16-bit digital signals.

8 or 16 bit ALUs are ridiculously complicated, power-hungry and slow compared
with analog computation in transistors or memristors. Of course the
precision is low, particularly if you're working at the nanoscale. A few
displaced atoms shift the parameters of the device visibly, but then, you're
engaging a redundant mode of computation anyway. The brain does not just
tolerate noise; some aspects of it are noise-driven.


 I had more imagined hybrids of neural-nets and traditional software.

 granted, 

Re: [fonc] IBM eyes brain-like computing

2011-10-27 Thread John Zabroski
On Thu, Oct 27, 2011 at 1:10 PM, Steve Dekorte st...@dekorte.com wrote:





 BGB cr88...@gmail.com wrote:
   Leitl wrote:
  John Zabroski wrote:
 
  Kurzweil addresses that.
  As far as I know Kurzweil hasn't presented anything technical or even
 detailed.
  Armwaving is cheap enough.
 
  yep, one can follow a polynomial curve to pretty much anything...
  actually getting there is another matter.

 I wonder what the curve from the early Roman civilization looked like and
 how that compared to the actual data from the Dark Ages.



Good point!  Folks in the Dark Ages still built the Chartres Cathedral,
while Rome was forced to build the Parthenon.


Re: [fonc] IBM eyes brain-like computing

2011-10-27 Thread Steve Dekorte




On Oct 27, 2011, at 1:32 PM, John Zabroski johnzabro...@gmail.com wrote:

 
 
 Steve Dekorte wrote:
 I wonder what the curve from the early Roman civilization looked like and
 how that compared to the actual data from the Dark Ages.
 
 Good point!  Folks in the Dark Ages still built the Chartres Cathedral, while 
 Rome was forced to build the Parthenon. 

After a millennium, yes. Does Kurzweil's theory allow for millennium-wide
technology depressions?


Re: [fonc] IBM eyes brain-like computing

2011-10-27 Thread John Zabroski
On Thu, Oct 27, 2011 at 1:32 PM, John Zabroski johnzabro...@gmail.comwrote:



 On Thu, Oct 27, 2011 at 1:10 PM, Steve Dekorte st...@dekorte.com wrote:





 BGB cr88...@gmail.com wrote:
   Leitl wrote:
  John Zabroski wrote:
 
  Kurzweil addresses that.
  As far as I know Kurzweil hasn't presented anything technical or even
 detailed.
  Armwaving is cheap enough.
 
  yep, one can follow a polynomial curve to pretty much anything...
  actually getting there is another matter.

 I wonder what the curve from the early Roman civilization looked like and
 how that compared to the actual data from the Dark Ages.



 Good point!  Folks in the Dark Ages still built the Chartres Cathedral,
 while Rome was forced to build the Parthenon.



Oops, I meant Athens was forced to build the Parthenon.  (At the time, many
in Athens thought the Parthenon was disgusting due to its extra columns and
other features required for its abnormal size.)


Re: [fonc] IBM eyes brain-like computing

2011-10-27 Thread BGB

On 10/27/2011 10:10 AM, Steve Dekorte wrote:




BGB cr88...@gmail.com wrote:

  Leitl wrote:

John Zabroski wrote:


Kurzweil addresses that.

As far as I know Kurzweil hasn't presented anything technical or even detailed.
Armwaving is cheap enough.

yep, one can follow a polynomial curve to pretty much anything...
actually getting there is another matter.

I wonder what the curve from the early Roman civilization looked like and how
that compared to the actual data from the Dark Ages.


probably:
sharp rise...
plateau...
collapse...
dark ages then begin.

a lot was forgotten for a while, but then in the following centuries
much of what was lost was recovered, and then the original Roman empire
was surpassed.



now, things are rising at the moment, and may either:
continue indefinitely;
hit a plateau and stabilize;
hit a plateau and then follow a downward trend.

most likely, processing power will stop increasing (WRT density and/or 
watts) once the respective physical limits are met (basically, it would 
no longer be possible to get more processing power in the same space or 
using less power within the confines of the laws of physics).


granted, I suspect there may still be a ways to go (it is possible that 
such a computer might not even necessarily be matter as currently 
understood).


then again, the limits of what is practical could come a bit sooner.

a fairly conservative estimate would be if people hit the limits of what 
could be practically done with silicon, and then called it good enough.


otherwise, people may migrate to other possibilities, such as graphene 
or photonics, or maybe build anyonic systems, or similar.






Re: [fonc] IBM eyes brain-like computing

2011-10-27 Thread Steve Dekorte


John Zabroski wrote:
 Steve Dekorte wrote:
 
 After a millennium, yes. Does kurzweil's  theory allow for millennium wide 
 technology depressions?
 
 Interesting question!
 
 What technology depressions can you think of and are referring to?

Basically, government intervention crushing the economy. There is no shortage
of examples. One is how East Germany was effectively frozen in time under
communism. Another is the effective destruction of technological civilization
in Rome and Egypt:
http://www.dekorte.com/blog/blog.cgi?do=itemid=4571




Re: [fonc] IBM eyes brain-like computing

2011-10-27 Thread John Zabroski
On Thu, Oct 27, 2011 at 8:35 PM, David Goehrig d...@nexttolast.com wrote:



  probably:
  sharp rise...
  plateau...
  collapse...
  dark ages then begin.

 As probably the only Late Ancient / Early Medievalist on this list, I feel
 a need to correct this myth of the Dark Ages (which can be squarely blamed
 on Edward Gibbon, and his personal issues with organized religion). As we
 managed to work beyond a certain cultural bias brought on by Imperialistic
 19th century powers manipulating our perspective of the Roman world for
 political gain, most historians who now study this era see it as an
 incredibly vibrant period of political, technological, and cognitive change.

 Most languages spoken in Europe today are a direct result of a massive
 growth in technical language developed by people who married Classical
 thought with new Germanic and Asiatic influences. Critical mathematical
 advances occurred laying the groundwork for what would become symbolic logic
 and algebra.

 If you focus on the then more populist and wealthy east, there is a
 straight continuity. In the west, there is actually pretty radical change
 which gave birth to the political structures that created the modern era
 (which Classicists view as everything after 1066). While outside of Ireland,
 nearly all knowledge of Greek was lost, those concepts were translated into
 vulgate resulting in a vast democratization of thought. (seeds of the
 reformation)

 From a pure info technology standpoint, there is no plateau, merely a
 paradigm shift which enabled new sources of intellectual growth. Just like
 we saw with the advent of digital computing.



My friend, who is full of weird facts, tells me, and I quote:

"It's not that [the Dark Ages] were dismal and miserable, it's that a lot of
the rest of the world (especially the Middle East) surpassed Europe very
quickly in a lot of areas including philosophy, medicine, and mathematics.
Avicenna alone probably exceeded the combined output of Europe for a few
hundred years...
...'Dark Ages' really reflects a very culturally Eurocentric view."


Re: [fonc] IBM eyes brain-like computing

2011-10-27 Thread BGB

On 10/27/2011 5:35 PM, David Goehrig wrote:



probably:
sharp rise...
plateau...
collapse...
dark ages then begin.

As probably the only Late Ancient / Early Medievalist on this list, I feel a 
need to correct this myth of the Dark Ages (which can be squarely blamed on 
Edward Gibbon, and his personal issues with organized religion). As we managed 
to work beyond a certain cultural bias brought on by Imperialistic 19th century 
powers manipulating our perspective of the Roman world for political gain, most 
historians who now study this era see it as an incredibly vibrant period of 
political, technological, and cognitive change.

Most languages spoken in Europe today are a direct result of a massive growth 
in technical language developed by people who married Classical thought with 
new Germanic and Asiatic influences. Critical mathematical advances occurred 
laying the groundwork for what would become symbolic logic and algebra.

If you focus on the then more populist and wealthy east, there is a straight 
continuity. In the west, there is actually pretty radical change which gave 
birth to the political structures that created the modern era (which 
Classicists view as everything after 1066). While outside of Ireland, nearly 
all knowledge of Greek was lost, those concepts were translated into vulgate 
resulting in a vast democratization of thought. (seeds of the reformation)


dunno, I think the usual idea (not necessarily accurate, but a common
perception) was that, following the fall of Rome, there was a time when
everything generally sucked, where disease was everywhere, where people
almost invariably had mud on their clothes and faces, where sewage
rained from buildings, ...


then there were philosophers, founding forefathers, industrialization
and so on, and then all of the stuff going on in the 20th century, and
then one is more or less at the present (which was generally much
better, apart from WWII and the 60s, where the 60s was sort of a more
recent dark age filled with hippies and similar...).


I guess elsewhere (back in the dark ages) people were building 
cathedrals, and Muslims were off doing philosophy and developing Algebra 
and similar (before the crusaders went and brought a bunch of stuff back 
to Europe).


but, in Rome there was concrete and some instances of reinforced 
concrete, which didn't come back into being until recently. also 
apparently (on a documentary I saw involving Terry Jones) they (more or 
less) had hamburgers in Rome, but these were not rediscovered until much 
later... (but apparently they didn't have french-fries or soda, or use 
cheese slices, making the modern form potentially more advanced...).



in any case, people had probably surpassed Rome by the time industry was
coming around, since AFAIK Rome had never industrialized (it is a
question whether Rome could have industrialized had it not collapsed
first... say, could the industrial revolution have happened 1000 years
earlier?...).



just in my personal perception, the modern era probably began sometime 
between 1985 and 1995 (or, somewhere between MacOS and Win95), or maybe 
with the release of Quake in 1996. or, at least, this is when the world 
as I know it more or less took its current form (but I am left feeling
old, as I am old enough to remember the end of the era that was the 
early 90s, of the gradual death of MS-DOS and 5.25 inch floppies).


then a few times I have been left thinking about how terrible the modern 
times are, but then I am left to think about the 1960s and 70s and left 
to think they probably would have been much worse (or, at least, a 
strange-looking world I can't particularly relate to, and filled with 
people with overly promiscuous lifestyles and using lots of drugs and 
similar...).


granted, others who have lived through these decades might decide to 
disagree with me, which is fair enough.



also, apparently, it was younger people who watched Jeopardy and
Wheel of Fortune back then, rather than these being primarily the domain of
older people; the same people have simply always watched these shows,
rather than the show preference being a result of the aging process...


also, people who were not old-looking back in the 90s (on TV and
similar) are much older-looking now (and often were born back in the 60s).
it is almost surreal sometimes.


and similar...



 From a pure info technology standpoint, there is no plateau, merely a paradigm 
shift which enabled new sources of intellectual growth. Just like we saw with 
the advent of digital computing.


this is an interesting perspective...

I had not heard this position claimed before (almost always, it was "the
Romans were smart, but collapsed, and there was a period of intense suck
until either the Renaissance or the Age of Enlightenment"..., or,
according to others, the Protestant Reformation, since the idea is
that the Catholics had fallen into a state of heresy or apostasy).


however, granted, 

Re: [fonc] IBM eyes brain-like computing

2011-10-26 Thread Eugen Leitl
On Tue, Oct 25, 2011 at 10:17:24AM -0700, BGB wrote:
 I was not arguing about the limits of computing, rather, IBM's specific  
 design.
 it doesn't really realistically emulate real neurons, rather it is a  

Real neurons have many features, many of them unknown, and do not
map well into solid state as is. However, you can probably find a
simplified model which is powerful and generic enough and maps
well to solid state by co-evolving substrate and representation.

 from what I can gather a simplistic accumulate and fire model, with  
 the neurons hard-wired into a grid.

In principle you can use a hybrid model: a crossbar for local connectivity,
which is analog, and a packet-switched signalling mesh for long-range
interactions, similarly to how real neurons do it. The mesh can emulate
total connectivity fine, and you can probably even finagle something which
scales better than a crossbar locally.
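
For illustration, the long-range events could be little more than
address-event-style packets plus a trivial routing rule; the layout and the
XY routing below are invented for the sketch, not an actual format:

#include <stdint.h>
#include <stdio.h>

struct spike_event {
    uint16_t src_x, src_y;     /* tile that emitted the spike */
    uint16_t dst_x, dst_y;     /* destination tile */
    uint16_t neuron_id;        /* target neuron within the destination tile */
    uint16_t timestamp;        /* coarse time tag, wraps around */
};

enum port { PORT_LOCAL, PORT_EAST, PORT_WEST, PORT_NORTH, PORT_SOUTH };

/* dimension-ordered (XY) routing: move east/west first, then north/south */
static enum port route(const struct spike_event *ev,
                       uint16_t here_x, uint16_t here_y)
{
    if (ev->dst_x > here_x) return PORT_EAST;
    if (ev->dst_x < here_x) return PORT_WEST;
    if (ev->dst_y > here_y) return PORT_NORTH;
    if (ev->dst_y < here_y) return PORT_SOUTH;
    return PORT_LOCAL;         /* arrived: hand off to the local analog crossbar */
}

int main(void)
{
    struct spike_event ev = { 0, 0, 3, 2, 17, 0 };
    printf("first hop from tile (1,1): %d\n", (int)route(&ev, 1, 1)); /* 1 = PORT_EAST */
    return 0;
}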

 I suspect something more generic would be needed.

I don't see how generic will do long-term, other than for bootstrap
(the co-evolution mentioned above) reasons.

 another question is what can be done in the near term and on present  
 hardware (future hardware may or may not exist, but any new hardware may  
 take years to make it into typical end-user systems).

Boxes with a large number of ARM SoCs with embedded memory and a signalling
mesh have been sighted; arguably this is the way to go for large scale.
GPGPU approaches are also quite good, if you map your neurons to a 3d
array and stream through memory sequentially. Exchanging interface state
with adjacent nodes (which can be done even over GBit Ethernet) is cheap enough.


 the second part of the question is:
 assuming one can transition to a purely biology-like model, is this a  
 good tradeoff?...
 if one gets rid of a few of the limitations of computers but gains some  
 of the limitations of biology, this may not be an ideal solution.

Biology never had to deal with high-performance numerics; I'm sure if it
had, it wouldn't do too shabbily. You can always go hybrid, e.g. if you
want to do proofs or cryptography.

 better would be to try for a strategy where the merits of both can be  
 gained, and as many limitations as possible can be avoided.

 most likely, this would be via a hybrid model.

Absolutely. Hybrid at many scales, down to analog computation for neurons.


 or such...



-- 
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



Re: [fonc] IBM eyes brain-like computing

2011-10-26 Thread Loup Vaillant

Eugen Leitl wrote:

On Wed, Oct 26, 2011 at 09:00:36AM -0400, John Zabroski wrote:


Kurzweil addresses that.


As far as I know Kurzweil hasn't presented anything technical or even detailed.
Armwaving is cheap enough.


Kurzweil addresses that.


Do you have literature references for that?


As for biology, it is an iterative approach.  The biggest insights are
coming from better medical imaging techniques that allow us to see inside
living systems' organs and better understand how they work.  Kurzweil sort
of discusses this too, and relates it to how it helped him develop better
and more scalable algorithms even when the underlying hardware did not
change.


When was the last time Kurzweil did design something? 1990? Prior to that?



It looks like Kurzweil is mainly using an outside view perspective
when making his predictions.  Moore's law for instance comes from such
a perspective: we observe a past trend (number of transistors doubling
every 18 months), and conclude that this trend will very likely go on
for a while. (Note that we use similar methods for the laws of physics:
we let an apple fall a million times, and we predict that it will fall
again if we try one more time.)

Yet Moore's law doesn't by itself increase the power of our computers.
Researchers do, and their findings don't exactly come out of thin air.
Yet we don't need to know how they work to be able to derive accurate
predictions, such as Moore's law.

Now we could debate the validity of the outside view when making
predictions for the next few decades.  I for one am not as confident as
Kurzweil seems to be about the regularity of the exponential growth in
technology, and what it tells us about a technological singularity.

However, the outside view doesn't require its wielder to have as much
expertise as the experts in the field.  For instance, one doesn't need
to have designed anything to notice that things are regularly being
designed.

Does Kurzweil have relevant expertise? Did he demonstrate understanding
of the subjects he talks about?  Did he earn authority?  As long as
he is using an outside view, those questions don't matter much.  What
matters is whether the outside view is a valid method or not, and how
accurately Kurzweil uses it (we could meta-recursively apply the
outside view to Kurzweil's predictions: look at his past predictions to
predict the accuracy (and bias) of his present ones).

Of course, if Kurzweil does use the inside view, then his expertise
suddenly becomes much more relevant.

Loup.



Re: [fonc] IBM eyes brain-like computing

2011-10-26 Thread BGB

On 10/26/2011 6:06 AM, Eugen Leitl wrote:

On Wed, Oct 26, 2011 at 09:00:36AM -0400, John Zabroski wrote:


Kurzweil addresses that.

As far as I know Kurzweil hasn't presented anything technical or even detailed.
Armwaving is cheap enough.
  


yep, one can follow a polynomial curve to pretty much anything...
actually getting there is another matter.


Kurzweil addresses that.

Do you have literature references for that?


especially considering the context:
another question is what can be done in the near term and on present 
hardware (future hardware may or may not exist, but any new hardware may 
take years to make it into typical end-user systems).


if it can't be done on present HW, it can't be done on present HW.
much like I can't render a whole damn city-scape down to the level of 
the individual pencils in the desk drawers in real-time on a current 
video card, it just isn't going to work.


the other issue is the time to market cycle...

consider, for example, the Intel AVX extensions:
they were detailed out a number of years ago;
they only started coming out in real chips fairly recently;
it will likely be another number of years until most consumer systems
have chips with this feature.


and this was something trivial (adding larger YMM registers, and 
adding instruction forms which took additional arguments).


other current/past examples would be pixel/fragment shaders:
most graphics cards now have them, but there are still some floating 
around that don't.



so, it is unlikely that Kurzweil is going to be able to make the 
market-cycle go away anytime soon (IOW: within the next several years).



the closest thing we have at the moment is software, where features can 
be deployed more readily, and end users can get updated versions more 
quickly, rather than having to wait many years for everything to cycle 
through.






As for biology, it is an iterative approach.  The biggest insights are
coming from better medical imaging techniques that allow us to see inside
living systems' organs and better understand how they work.  Kurzweil sort
of discusses this too, and relates it to how it helped him develop better
and more scalable algorithms even when the underlying hardware did not
change.

When was the last time Kurzweil did design something? 1990? Prior to that?


can't comment much on this.





Re: [fonc] IBM eyes brain-like computing

2011-10-25 Thread Paul Homer
I've always suspected that it comes from the ability to see around corners, 
which appears to be a rare ability. If someone keeps seeing things that other 
people say aren't there, eventually it will drive them a little crazy :-)

An amazing example of this (I think) is contained in this video:

http://www.randsinrepose.com/archives/2011/10/06/you_are_underestimating_the_future.html



Paul.





From: John Zabroski johnzabro...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org
Sent: Tuesday, October 25, 2011 11:55:29 AM
Subject: Re: [fonc] IBM eyes brain-like computing


Brian,

I recommend you pick up a copy of Ray Kurzweil's The Singularity Is Near.  Ray 
is smarter than basically everyone, and although a tad bit crazy (teaching at 
MIT will do that to you :)), he is a legitimate genius.

Basically, before arguing about the limits of computing, read Ray Kurzweil.  
Others have written similar stuff here and there, but nobody is as passionate 
and willing to argue about the subject as Ray.

Cheers,
Z-Bo


On Fri, Oct 14, 2011 at 2:44 PM, BGB cr88...@gmail.com wrote:

On 10/14/2011 9:29 AM, karl ramberg wrote:

Interesting article :
http://www.itnews.com.au/News/276700,ibm-eyes-brain-like-computing.aspx

Not many details, but what they envision seems to be more of the
character of an autonomic system that can be queried for answers, not
programmed like today's computers.


I have seen stuff about this several times, with some articles actively
demeaning and belittling / trivializing the existing pre-programmed Von Neumann
/ stored-program style machines.


but, one can ask, why then are there these machines in the first place:
largely it is because the human mind also falls on its face for tasks which
computers can perform easily, such as performing large amounts of
calculations (and being readily updated).

also, IBM is exploring some lines of chips (neural-net processors, ...) which 
may well be able to do a few interesting things, but I predict, will fall far 
short of their present claims.


it is likely that the road forwards will not be a one or the other 
scenario, but will likely result in hybrid systems combining the strengths of 
both.

for example, powerful neural-nets would be a nice addition, but I would not 
want to see them at the cost of programmability, ability to copy or install 
software, make backups, ...

better IMO is if the neural nets could essentially exist in-computer as giant 
data-cubes under program control, which can be paused/resumed, or loaded from 
or stored to the HDD, ...

also, programs using neural-nets would still remain as software in the 
traditional sense, and maybe neural-nets would be stored/copied/... as 
ordinary files.

(for example, if a human-like mind could be represented as several TB worth 
of data-files...).


granted, also debatable is how to best represent/process the neural-nets.
IBM is exploring the use of hard-wired logic and crossbar arrays / 
memristors / ...
also implied was that all of the neural state was stored in the chip itself 
in a non-volatile manner, and also (by implication from things read) not 
readily subject to being read/written externally.


my own thoughts had been more along the lines of fine-grained GPUs, where the 
architecture would be vaguely similar to a GPU but probably with lots more 
cores and each likely only being a simple integer unit (or fixed-point), 
probably with some local cache memory.
likely, these units would be specialized some for the task, with common 
calculations/... likely being handled in hardware.

the cheaper/more immediate route would be, of course, to just do it on the
GPU (lots of GPU power and OpenCL or similar), or maybe creating an
OpenGL-like library dedicated mostly to running neural nets on the GPU (with
both built-in neuron types, and maybe also "neuronal shaders", sort of like
fragment shaders or similar). maybe called OpenNNL or something...

although potentially not as powerful (in terms of neurons/watt), I think my 
idea would have an advantage that it would allow more variety in neuron 
behavior, which could likely be necessary for making this sort of thing 
actually work in a practical sense.


however, I think the idea of memristors is also cool, but I would presume 
that their use would more likely be as a type of RAM / NVRAM / SSD-like 
technology, and not in conflict with the existing technology and architecture.


or such...






Re: [fonc] IBM eyes brain-like computing

2011-10-25 Thread BGB
I was not arguing about the limits of computing, rather about IBM's specific
design.
it doesn't really realistically emulate real neurons; rather it is, from
what I can gather, a simplistic "accumulate and fire" model, with
the neurons hard-wired into a grid.


I suspect something more generic would be needed.

another question is what can be done in the near term and on present 
hardware (future hardware may or may not exist, but any new hardware may 
take years to make it into typical end-user systems).



the second part of the question is:
assuming one can transition to a purely biology-like model, is this a 
good tradeoff?...
if one gets rid of a few of the limitations of computers but gains some 
of the limitations of biology, this may not be an ideal solution.


better would be to try for a strategy where the merits of both can be 
gained, and as many limitations as possible can be avoided.


most likely, this would be via a hybrid model.


or such...


On 10/25/2011 9:07 AM, Paul Homer wrote:
I've always suspected that it comes from the ability to see around 
corners, which appears to be a rare ability. If someone keeps seeing 
things that other people say aren't there, eventually it will drive 
them a little crazy :-)


An amazing example of this (I think) is contained in this video:

http://www.randsinrepose.com/archives/2011/10/06/you_are_underestimating_the_future.html


Paul.


*From:* John Zabroski johnzabro...@gmail.com
*To:* Fundamentals of New Computing fonc@vpri.org
*Sent:* Tuesday, October 25, 2011 11:55:29 AM
*Subject:* Re: [fonc] IBM eyes brain-like computing

Brian,

I recommend you pick up a copy of Ray Kurzweil's The Singularity
Is Near.  Ray is smarter than basically everyone, and although a
tad bit crazy (teaching at MIT will do that to you :)), he is a
legitimate genius.

Basically, before arguing about the limits of computing, read Ray
Kurzweil.  Others have written similar stuff here and there, but
nobody is as passionate and willing to argue about the subject as Ray.

Cheers,
Z-Bo

On Fri, Oct 14, 2011 at 2:44 PM, BGB cr88...@gmail.com wrote:

On 10/14/2011 9:29 AM, karl ramberg wrote:

Interesting article :

http://www.itnews.com.au/News/276700,ibm-eyes-brain-like-computing.aspx

Not many details, but what they envision seems to be more of the
character of an autonomic system that can be queried for
answers, not programmed like today's computers.


I have seen stuff about this several times, with some articles
actively demeaning and belittling / trivializing the existing
pre-programmed Von Neumann / stored-program style machines.


but, one can ask, why then are there these machines in the
first place:
largely it is because the human mind also falls on its face
for tasks which computers can perform easily, such as
performing large amounts of calculations (and being readily
updated).

also, IBM is exploring some lines of chips (neural-net
processors, ...) which may well be able to do a few
interesting things, but I predict, will fall far short of
their present claims.


it is likely that the road forwards will not be a one or
the other scenario, but will likely result in hybrid systems
combining the strengths of both.

for example, powerful neural-nets would be a nice addition,
but I would not want to see them at the cost of
programmability, ability to copy or install software, make
backups, ...

better IMO is if the neural nets could essentially exist
in-computer as giant data-cubes under program control, which
can be paused/resumed, or loaded from or stored to the HDD, ...

also, programs using neural-nets would still remain as
software in the traditional sense, and maybe neural-nets would
be stored/copied/... as ordinary files.

(for example, if a human-like mind could be represented as
several TB worth of data-files...).


granted, also debatable is how to best represent/process the
neural-nets.
IBM is exploring the use of hard-wired logic and crossbar
arrays / memristors / ...
also implied was that all of the neural state was stored in
the chip itself in a non-volatile manner, and also (by
implication from things read) not readily subject to being
read/written externally.


my own thoughts had been more along the lines of fine-grained
GPUs, where the architecture would be vaguely similar to a GPU
but probably with lots more cores and each likely only being a
simple integer unit (or fixed-point

[fonc] IBM eyes brain-like computing

2011-10-14 Thread karl ramberg
Interesting article :
http://www.itnews.com.au/News/276700,ibm-eyes-brain-like-computing.aspx

Not many details, but what they envision seems to be more of the
character of an autonomic system that can be queried for answers, not
programmed like today's computers.

Karl



Re: [fonc] IBM eyes brain-like computing

2011-10-14 Thread BGB

On 10/14/2011 9:29 AM, karl ramberg wrote:

Interesting article :
http://www.itnews.com.au/News/276700,ibm-eyes-brain-like-computing.aspx

Not many details, but what they envision seems to be more of the
character of an autonomic system that can be queried for answers, not
programmed like today's computers.


I have seen stuff about this several times, with some articles actively
demeaning and belittling / trivializing the existing pre-programmed Von
Neumann / stored-program style machines.



but, one can ask, why then are there these machines in the first place:
largely it is because the human mind also falls on its face for tasks
which computers can perform easily, such as performing large amounts of
calculations (and being readily updated).


also, IBM is exploring some lines of chips (neural-net processors, ...) 
which may well be able to do a few interesting things, but I predict, 
will fall far short of their present claims.



it is likely that the road forwards will not be a one or the other 
scenario, but will likely result in hybrid systems combining the 
strengths of both.


for example, powerful neural-nets would be a nice addition, but I would 
not want to see them at the cost of programmability, ability to copy or 
install software, make backups, ...


better IMO is if the neural nets could essentially exist in-computer as 
giant data-cubes under program control, which can be paused/resumed, or 
loaded from or stored to the HDD, ...


also, programs using neural-nets would still remain as software in the 
traditional sense, and maybe neural-nets would be stored/copied/... as 
ordinary files.


(for example, if a human-like mind could be represented as several TB 
worth of data-files...).
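
as a minimal sketch of the "stored as ordinary files" part (the header
layout and "NNSTATE1" magic below are just made up for illustration):

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct net_header {
    char     magic[8];                  /* "NNSTATE1" */
    uint32_t num_units;
    uint32_t bytes_per_unit;
};

/* pause the net, then dump its state blob behind a small header */
static int net_save(const char *path, const void *state,
                    uint32_t units, uint32_t bytes_per_unit)
{
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    struct net_header h;
    memcpy(h.magic, "NNSTATE1", 8);
    h.num_units = units;
    h.bytes_per_unit = bytes_per_unit;
    int ok = fwrite(&h, sizeof(h), 1, f) == 1 &&
             fwrite(state, (size_t)units * bytes_per_unit, 1, f) == 1;
    fclose(f);
    return ok ? 0 : -1;
}

/* reload a previously saved net state (backup, copy, install, ...) */
static void *net_load(const char *path, uint32_t *units, uint32_t *bytes_per_unit)
{
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    struct net_header h;
    void *state = NULL;
    if (fread(&h, sizeof(h), 1, f) == 1 && memcmp(h.magic, "NNSTATE1", 8) == 0) {
        size_t n = (size_t)h.num_units * h.bytes_per_unit;
        state = malloc(n);
        if (state && fread(state, n, 1, f) != 1) { free(state); state = NULL; }
        if (state) { *units = h.num_units; *bytes_per_unit = h.bytes_per_unit; }
    }
    fclose(f);
    return state;
}

int main(void)
{
    int16_t acc[4] = { 1, 2, 3, 4 };    /* stand-in for the real state data-cube */
    if (net_save("net.bin", acc, 4, sizeof(acc[0])) != 0) return 1;

    uint32_t u = 0, b = 0;
    void *back = net_load("net.bin", &u, &b);
    if (!back) return 1;
    printf("reloaded %u units of %u byte(s) each\n", (unsigned)u, (unsigned)b);
    free(back);
    return 0;
}

obviously a multi-TB net would want something smarter (chunking, mmap, ...),
but the point is just that the state is plain data under program control.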



granted, also debatable is how to best represent/process the neural-nets.
IBM is exploring the use of hard-wired logic and crossbar arrays / 
memristors / ...
also implied was that all of the neural state was stored in the chip 
itself in a non-volatile manner, and also (by implication from things 
read) not readily subject to being read/written externally.



my own thoughts had been more along the lines of fine-grained GPUs, 
where the architecture would be vaguely similar to a GPU but probably 
with lots more cores and each likely only being a simple integer unit 
(or fixed-point), probably with some local cache memory.
likely, these units would be specialized some for the task, with common 
calculations/... likely being handled in hardware.


the cheaper/more immediate route would be, of course, to just do it on
the GPU (lots of GPU power and OpenCL or similar), or maybe creating an
OpenGL-like library dedicated mostly to running neural nets on the GPU
(with both built-in neuron types, and maybe also "neuronal shaders",
sort of like fragment shaders or similar). maybe called OpenNNL or
something...
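
purely as a hypothetical sketch of what such an "OpenNNL" interface might
look like (no such library exists; every name below is invented, and the
functions are empty stubs just so the example compiles):

#include <stddef.h>
#include <stdio.h>

typedef unsigned int NNLuint;
typedef float        NNLfloat;

enum { NNL_NEURON_INTEGRATE_FIRE = 1, NNL_NEURON_SHADER = 2 };

/* create a population of a built-in neuron type (cf. glGenTextures) */
static NNLuint nnlGenPopulation(int neuron_type, size_t count)
{
    (void)neuron_type; (void)count;
    static NNLuint next = 1;
    return next++;
}

/* attach a "neuronal shader" program, analogous to a fragment shader */
static void nnlUseNeuronShader(NNLuint pop, const char *src) { (void)pop; (void)src; }

/* dense connection between two populations with a uniform initial weight */
static void nnlConnect(NNLuint src, NNLuint dst, NNLfloat weight)
{
    (void)src; (void)dst; (void)weight;
}

/* advance the whole net by dt seconds (on the GPU, in a real implementation) */
static void nnlStep(NNLfloat dt) { (void)dt; }

int main(void)
{
    NNLuint in  = nnlGenPopulation(NNL_NEURON_INTEGRATE_FIRE, 1024);
    NNLuint out = nnlGenPopulation(NNL_NEURON_SHADER, 256);
    nnlUseNeuronShader(out, "/* per-neuron update kernel source would go here */");
    nnlConnect(in, out, 0.05f);
    for (int i = 0; i < 10; i++)
        nnlStep(0.001f);              /* 10 steps of 1 ms */
    printf("ran the hypothetical OpenNNL sketch\n");
    return 0;
}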


although potentially not as powerful (in terms of neurons/watt), I think 
my idea would have an advantage that it would allow more variety in 
neuron behavior, which could likely be necessary for making this sort of 
thing actually work in a practical sense.



however, I think the idea of memristors is also cool, but I would 
presume that their use would more likely be as a type of RAM / NVRAM / 
SSD-like technology, and not in conflict with the existing technology 
and architecture.



or such...


