On Wed, Sep 04, 2013 at 04:28:52PM -0700, Simon Forman wrote:
There is a (the?) universal logical notation being elucidated right now that
seems to me to be very promising for this sort of stuff.
Is it intrinsically massively parallel? If it isn't, it's probably
not going to go places.
On Fri, Apr 19, 2013 at 02:05:07PM -0500, Tristan Slominski wrote:
That alone seems to me to dismiss the concern that mind uploading would not
be possible (despite that I think it's a wrong and a horrible idea
personally :D)
I'm curious why you think that it's wrong (?) and horrible.
On Mon, Apr 15, 2013 at 08:27:08AM -0700, David Barbour wrote:
On Mon, Apr 15, 2013 at 6:46 AM, Eugen Leitl eu...@leitl.org wrote:
Few ns are effective eternities in terms of modern gate delays.
I presume the conversation was about synchronization, which
should be avoided in general
On Sun, Apr 14, 2013 at 03:03:12PM -0700, David Barbour wrote:
And I've seen Grace Hopper's video on nanoseconds before. If you carry a
piece of wire of the right length, it isn't difficult to say where light
carrying information will be after a few nanoseconds. :D
Few ns are effective
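Hopper's prop is easy to check with a couple of lines: her ~30 cm wire is just c times 1 ns. A quick sketch (the velocity_factor knob for slower propagation in real wire is my own hypothetical addition, not part of her demo):

```python
# Grace Hopper's "nanosecond": how far a signal travels in 1 ns.
C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s

def light_distance_cm(ns, velocity_factor=1.0):
    """Distance covered in `ns` nanoseconds, in centimetres.
    velocity_factor < 1 models slower propagation in real wire."""
    return C_VACUUM * velocity_factor * ns * 1e-9 * 100.0

print(round(light_distance_cm(1.0), 1))        # ~30.0 cm: Hopper's wire
print(round(light_distance_cm(1.0, 0.66), 1))  # ~19.8 cm in typical cable
```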
On Sat, Apr 06, 2013 at 12:08:35PM -0500, John Carlson wrote:
The Lord will return like a thief in the night:
http://bible.cc/1_thessalonians/5-2.htm
Is this predictable? Is there more than one return? Jews believe in one
Messiah. Christians believe in 2 Messiahs (Jesus and his return).
http://mythz.servicestack.net/blog/2013/02/27/the-deep-insights-of-alan-kay/
FEB 27TH, 2013
If you haven’t heard of Alan Kay, you’ll likely have heard one of his many
famous quotes; the most popular is probably his 1971 gem:
The best way to predict the future is to invent it.
But for the
On Tue, Feb 12, 2013 at 11:33:04AM -0700, Jeff Gonis wrote:
I see no one has taken Alan's bait and asked the million dollar question:
if you decided that messaging is no longer the right path for scaling, what
approach are you currently using?
Classical computation doesn't allow storing
On Thu, Jan 03, 2013 at 08:27:53PM -0500, Miles Fidelman wrote:
you might want to google biological computing - you'll start finding
things like this:
http://www.guardian.co.uk/science/blog/2009/jul/24/bacteria-computer
(title: Bacteria make computers look like pocket calculators)
http://www.drdobbs.com/architecture-and-design/interview-with-alan-kay/240003442#
Interview with Alan Kay
By Andrew Binstock, July 10, 2012
The pioneer of object-orientation, co-designer of Smalltalk, and UI luminary
opines on programming, browsers, objects, the illusion of patterns, and how
On Sat, Oct 27, 2012 at 10:38:09AM -0700, GrrrWaaa wrote:
What do people here think of this? Like RPi meets multicore:
http://www.kickstarter.com/projects/adapteva/parallella-a-supercomputer-for-everyone
Great, I was just about to send this. I backed, and am looking
forward to what and when
On Thu, Jul 19, 2012 at 02:28:18PM +0200, John Nilsson wrote:
More work relative to an approach where full specification and control is
feasible. I was thinking that in a not too distant future we'll want to
build systems of such complexity that we need to let go of such dreams.
It could be
http://highscalability.com/blog/2012/3/6/ask-for-forgiveness-programming-or-how-well-program-1000-cor.html
Ask For Forgiveness Programming - Or How We'll Program 1000 Cores
Tuesday, March 6, 2012 at 9:15AM
The argument for a massively multicore future is now familiar: while clock
speeds have
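The style the article names can be sketched in a few lines: give each worker its own shard of a counter and let readers sum the shards without any synchronization, tolerating a slightly stale total in exchange for zero coordination. A minimal sketch under my own assumptions (sharded counters, monotonic increments); the names are illustrative, not from the article:

```python
# "Ask forgiveness" in miniature: each worker owns one shard, a reader
# sums shards with no locks, accepting a stale (never corrupt) total.
import threading

N_WORKERS, N_INCR = 4, 10_000
shards = [0] * N_WORKERS          # one slot per worker: single-writer

def work(i):
    for _ in range(N_INCR):
        shards[i] += 1            # only this thread writes shards[i]

threads = [threading.Thread(target=work, args=(i,)) for i in range(N_WORKERS)]
for t in threads:
    t.start()
stale = sum(shards)               # racy read: possibly stale, never garbage
for t in threads:
    t.join()
exact = sum(shards)
print(exact)                      # 40000
```

Because each shard only ever grows, the racy read is bounded: `stale` can lag `exact` but never exceed it or see a torn value.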
On Tue, Apr 03, 2012 at 08:19:53AM -0700, David Barbour wrote:
That said, I also disagree with Tom, there: design complexity doesn't need
to increase with parallelism. The tradeoff between complexity vs.
parallelism is more an artifact of sticking with imperative programming.
It's not just
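Barbour's point can be made concrete: with a pure function and no shared mutable state, the parallel version of a map is structurally identical to the sequential one, so parallelism adds no design complexity. A minimal sketch (a thread pool is chosen here only for portability; a process pool or anything else with the same `map` shape would read identically):

```python
# The same computation, sequentially and in parallel: with a pure
# function there is nothing to synchronize, so the code shape is unchanged.
from concurrent.futures import ThreadPoolExecutor

def f(x):
    return x * x

xs = list(range(10))

seq = [f(x) for x in xs]              # sequential map

with ThreadPoolExecutor() as pool:    # parallel map, same structure
    par = list(pool.map(f, xs))

print(seq == par)  # True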
http://splashcon.org/2011/program/dls/245-invited-talk-2
Mon 2:00-3:00 pm - Pavilion East
Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed
about the Future
Invited speaker: David Ungar, IBM Research, USA
In the 1970’s, researchers at Xerox PARC gave themselves a glimpse
On Thu, Mar 08, 2012 at 03:00:35PM -0800, Casey Ransberger wrote:
Books? First, the smell. Especially old books. I have a friend who has a
Kindle. It smells *nothing* like a library, and I do think something is lost
there.
Some people get olfactorically imprinted on dead tree
during their
http://www.extremetech.com/computing/116561-the-death-of-cpu-scaling-from-one-core-to-many-and-why-were-still-stuck?print
The death of CPU scaling: From one core to many — and why we’re still stuck
By Joel Hruska on February 1, 2012 at 2:31 pm
It’s been nearly eight years since Intel canceled
On 12/22/11 7:42 AM, Prentice Bisbal prent...@ias.edu wrote:
On 12/22/2011 09:57 AM, Eugen Leitl wrote:
On Thu, Dec 22, 2011 at 09:43:55AM -0500, Prentice Bisbal wrote:
Or if your German is rusty:
http://www.zdnet.com/blog/computers/amd-radeon-hd-7970-graphics-card-launched-benchmarked-fastest-single-gpu-board-available/7204
- End forwarded message -
--
Eugen* Leitl http://leitl.org
Wonder what kind of response
On Mon, Dec 19, 2011 at 01:02:28PM -0800, Steve Dekorte wrote:
Suppose you want to write an app to help people organize events.
Neither developing nor running the app is compute bound,
and a machine 1000x faster likely wouldn't help much with either.
Suppose I need to simulate 10^12
On Tue, Dec 20, 2011 at 10:50:36AM -0800, Steve Dekorte wrote:
Could you describe how more compute power helps you write the app I described
faster?
It is a really narrow problem space I'm not familiar with. I presume
this isn't about scheduling, but about UI and usability?
Anything
On Fri, Dec 16, 2011 at 02:16:41PM -0800, Steve Dekorte wrote:
Is speed really the bottleneck for making computers more useful?
Many major scientific problems or even gaming are resource-constrained.
I personally would have no difficulties keeping astronomical numbers
of nodes at 100% CPU for
On Fri, Dec 16, 2011 at 04:14:40PM -0300, Jecel Assumpcao Jr. wrote:
Eugen Leitl wrote:
It's remarkable how few are using MPI in practice. A lot of code
is being made multithread-proof, and for what? So that they'll have
to rewrite it for message-passing, again?
Having seen a couple
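The message-passing shape the snippet argues for can be sketched in a few lines: state is private to one worker, and everything else talks to it only through queues. Threads and `queue.Queue` stand in here for MPI ranks purely for portability; the names are illustrative:

```python
# Minimal message-passing actor: private state, interaction via queues.
import threading
import queue

def adder(inbox: queue.Queue, outbox: queue.Queue):
    total = 0                  # private state: no locks needed, ever
    while True:
        msg = inbox.get()
        if msg is None:        # sentinel: report the result and shut down
            outbox.put(total)
            return
        total += msg

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=adder, args=(inbox, outbox))
t.start()
for i in range(1, 101):
    inbox.put(i)
inbox.put(None)
result = outbox.get()
t.join()
print(result)                  # 5050
```

Because the worker owns its state outright, the same code moves unchanged from threads to processes to separate machines; that portability is exactly what shared-memory, multithread-proofed code gives up.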
(also a crystal, only 2D, not 3D; yet)
https://www.technologyreview.com/blog/arxiv/27291/?p1=blogs
Massively Parallel Computer Built From Single Layer of Molecules
Japanese scientists have built a cellular automaton from individual molecules
that carries out huge numbers of calculations in
(cores, that is)
http://www.techworld.com.au/article/405599/arm_cto_predicts_chips_size_blood_cells
ARM CTO predicts chips the size of blood cells
The chip design company is on its way to making chips no bigger than a red blood cell, its CTO says
James Niccolai (IDG News Service) 28
-assembled macromolecular crystals (like
viral capsids, only containing CA cells, designed to link up and connect,
unlike real virus capsids, which don't really want to, unless coaxed by
crystallographers).
or such...
better would be to try for a strategy where the merits of both can be
gained, and as many limitations as possible can be avoided.
most likely, this would be via a hybrid model.
Absolutely. Hybrid at many scales, down to analog computation for neurons.
or such...