On 10/29/2011 6:46 AM, karl ramberg wrote:
On Sat, Oct 29, 2011 at 5:06 AM, BGB<[email protected]>  wrote:
On 10/28/2011 2:27 PM, karl ramberg wrote:
On Fri, Oct 28, 2011 at 6:36 PM, BGB<[email protected]>    wrote:
On 10/28/2011 7:28 AM, K. K. Subramaniam wrote:
On Thursday 27 Oct 2011 11:27:39 PM BGB wrote:
most likely, processing power will stop increasing (WRT density and/or
watts) once the respective physical limits are met (basically, it would
no longer be possible to get more processing power in the same space or
using less power within the confines of the laws of physics).
The adoption of computing machines at large is driven primarily by three
needs: power (portable), space/weight, and speed. The last two are now
solvable in the large, but the first one is still stuck in the "dark ages".
I recollect a joke by Dr An Wang (founder of Wang Labs) in a keynote during
the 80s that goes something like this:

A man struggled to lug two heavy suitcases into a bogie of a train that was
just about to depart. A fellow passenger helped him in and they started a
conversation. The man turned out to be a salesman from a company that made
portable computers. He showed one that fit in a pocket to his fellow
passenger. "It does everything that a mainframe does and more, and it costs
only $100." "Amazing!" exclaimed the passenger as he held the marvel in his
hands. "Where can I get one?" "You can have this piece," said the gracious
gent, "as a thank-you gift for helping me." "Thank you very much." The
passenger was thrilled beyond words as he gingerly explored the new gadget.
Soon, the train reached the next station and the salesman stepped out. As
the train departed, the passenger yelled at him: "Hey! You forgot your
suitcases!" "Not really!" the gent shouted back. "Those are the batteries
for your computer."

;-) .. Subbu
yeah...

this is probably a major issue at this point with "hugely multi-core"
processors:
if built, they would likely use lots of power and produce lots of heat.

this is sort of also an issue with video cards: one gets a new/fancy nVidia
card, which then turns out to have a few issues:
it takes up two card slots (much of this apparently its heat-sink);
it is long enough that it partially sticks into the hard-drive bays;
it requires a 500W power supply;
it requires 4 plugs from the power-supply;
...

so, then one can joke that they have essentially installed a brick into
their computer.

never mind that it gets high framerates in games...


however, they would have an advantage as well:
people can still write their software in good old C/C++/Java/...

it is likely that support for existing programming languages and
methodologies will continue to be necessary for new computing technologies.


also, likewise, people will continue pushing to gradually drive down memory
requirements, but for the most part the power use of devices has been
largely dictated by what one can get from plugging a power cord into the
wall (vs. either running off batteries or, OTOH, requiring one to plug in a
240V dryer/arc-welder/... style power cord).


elsewhere, I designed a hypothetical ISA, partly combining ideas from ARM
and x86-64, with a few "unique" ways of representing instructions (the idea
being that they are aligned values of 1/2/4/8 bytes, rather than either
more free-form byte patterns or fixed-width instruction words).
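the email above doesn't pin down the encoding; as one purely hypothetical reading of the "aligned 1/2/4/8 byte" idea (a sketch, not the actual design), the low 2 bits of an instruction's first byte could give log2 of its length, with a 1-byte NOP used as padding to keep every instruction aligned to its own size:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical encoding: low 2 bits of the first byte = log2(length),
 * so instructions are 1, 2, 4, or 8 bytes, each self-aligned.
 * 0x00 serves as a 1-byte NOP for padding. */
enum { NOP = 0x00 };

static size_t insn_length(uint8_t first_byte)
{
    return (size_t)1u << (first_byte & 3);  /* 1, 2, 4, or 8 */
}

/* Emit one n-byte instruction, NOP-padding so it starts on a multiple
 * of n; returns the new write position. */
static size_t emit(uint8_t *buf, size_t pos, const uint8_t *insn, size_t n)
{
    while (pos % n != 0)
        buf[pos++] = NOP;
    memcpy(buf + pos, insn, n);
    return pos + n;
}

/* Count real (non-NOP) instructions; because each one is aligned to its
 * own size, the decoder never has to guess where a boundary falls. */
static int count_insns(const uint8_t *buf, size_t len)
{
    int count = 0;
    size_t pc = 0;
    while (pc < len) {
        size_t n = insn_length(buf[pc]);
        assert(pc % n == 0);        /* the aligned-encoding invariant */
        if (buf[pc] != NOP)
            count++;
        pc += n;
    }
    return count;
}
```

one nicety of such a scheme is that code can be scanned forward (or even backward, in aligned chunks) without the free-form-prefix ambiguity of x86-style encodings.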

or such...


_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc

This is also relevant regarding understanding how to make these computers
work:

http://www.infoq.com/presentations/We-Really-Dont-Know-How-To-Compute
seems interesting, but it is very much a pain trying to watch, as my
internet is slow and the player doesn't really seem to buffer the video
very far ahead when paused...


but, yeah, eval and reflection are features I really like, although sadly
one doesn't really have anything like this standard in C, meaning one has
to put a lot of effort into building a lot of scripting and VM technology
simply to make up for the lack of things like 'eval' and 'apply'.
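to make the point concrete: since C has no built-in 'eval', any C program that wants one has to ship its own parser/interpreter. a deliberately tiny sketch (integer '+' and '*' only, '*' binding tighter); a real scripting VM is this, scaled up by a few orders of magnitude:

```c
#include <stdlib.h>

static long eval_term(const char **s);

/* expr := term ('+' term)* */
static long eval_expr(const char **s)
{
    long v = eval_term(s);
    while (**s == '+') { (*s)++; v += eval_term(s); }
    return v;
}

/* term := number ('*' number)* */
static long eval_term(const char **s)
{
    long v = strtol(*s, (char **)s, 10);
    while (**s == '*') { (*s)++; v *= strtol(*s, (char **)s, 10); }
    return v;
}

/* the 'eval' that C never had, for one tiny expression language */
static long eval(const char *src)
{
    return eval_expr(&src);
}
```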


this becomes at times a point of contention with many C++ developers, who
often believe that the "greatness of C++ for everything" more than makes up
for its lack of reflection or dynamic features. I hold that plain C has a
lot of merit, if anything because it is more readily amenable to dynamic
features (which can plug into the language from outside), which more or
less makes up for the lack of syntactic sugar in many areas...
The notion I get from this presentation is that he is against C and
static languages in general.
It seems he thinks the exploration should take the direction of
lambda-calculus-derived languages that are very dynamic and can
self-generate code.

I was not that far into the video at the point I posted, due mostly to slow internet and the player not allowing the "pause, let it buffer, and come back later" strategy generally needed for things like YouTube (start buffering a 10-minute YouTube video, go and do something else for the next 30 or 45 minutes, and hope it doesn't die in the middle...).


yes, but C is still better than C++ in this regard, since one can build an HLL which has these capabilities (lambdas/eval/...) and plug it into C; it is not so easy to plug such an HLL into C++, due to its much more complex syntax, semantics, and ABI interfaces (with the ABI generally being specific to each compiler).
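one way to picture why C's flat ABI helps here (a hand-rolled sketch; the names are invented for illustration): native functions can be exposed to an embedded HLL through a simple name-to-function-pointer table with one uniform signature, which is essentially 'apply' over native code. doing the same against C++ would mean fighting mangled names and per-compiler ABIs:

```c
#include <string.h>

/* One uniform native-call signature keeps the bridge trivial. */
typedef long (*native_fn)(long *args, int nargs);

struct binding { const char *name; native_fn fn; };

static long native_add(long *a, int n) { long s = 0; while (n--) s += a[n]; return s; }
static long native_max(long *a, int n) { long m = a[0]; for (int i = 1; i < n; i++) if (a[i] > m) m = a[i]; return m; }

/* Flat registry the embedded language looks functions up in. */
static struct binding registry[] = {
    { "add", native_add },
    { "max", native_max },
};

/* The embedded language's 'apply': look up by name, call through. */
static long script_apply(const char *name, long *args, int nargs)
{
    for (size_t i = 0; i < sizeof registry / sizeof registry[0]; i++)
        if (strcmp(registry[i].name, name) == 0)
            return registry[i].fn(args, nargs);
    return -1;  /* unknown function; a real VM would signal an error */
}
```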

the problem is, at this point, these sorts of languages are essentially inescapable.

(and trying to remake the world in a fundamentally different language is impractical).

however, having these sorts of features need not require that things be written in Lisp or Scheme, only in a language which is sufficiently generic and dynamic to be able to handle them.


for example, although recently C++ and Java have added "closures", in both cases they are rather half-assed and have overly restrictive semantics (by-reference captures not surviving the parent scope in C++, all captured bindings being effectively "final" in Java). the same goes for several closure extensions for C.

ideally, lambdas/closures should retain the full/proper semantics.
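spelled out by hand in C, a sketch of what "full/proper" semantics means here (nothing more than an illustration): the captured variable lives in a heap-allocated environment, so the closure both survives the scope that created it and can mutate its captured state:

```c
#include <stdlib.h>

/* Captured environment, heap-allocated so it outlives its creator. */
struct counter_env { long count; };

/* A "closure" is just code plus environment. */
struct closure {
    long (*fn)(struct counter_env *env);
    struct counter_env *env;
};

static long counter_step(struct counter_env *env)
{
    return ++env->count;            /* mutates the captured variable */
}

/* The "lambda": returns a closure whose environment outlives this call,
 * unlike a by-reference C++ capture of a stack variable. */
static struct closure make_counter(long start)
{
    struct counter_env *env = malloc(sizeof *env);
    env->count = start;
    return (struct closure){ counter_step, env };
}

static long call(struct closure c) { return c.fn(c.env); }
```

each call to make_counter yields an independent counter, which is exactly the upvalue-capture behavior the restricted C++/Java forms rule out.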


One problem with self-generating code is that it is very hard to read
and understand.
Maybe tools to simulate the code should be at hand while writing, kind
of like a debugger unwinding the other way.

interesting, dunno...


sadly, the BGBScript eval, like the JavaScript eval, requires composing all code as strings. it would be trivial enough to make it accept S-Expressions, but composing code as lists would look funky, as this is not the "native" syntax.

I also have "quote"/"unquote" syntax, which could maybe be adapted for this. quote allows gaining access to "syntax objects", and unquote allows inlining syntax objects (its operand being an expression expected to return a syntax object). technically, it was inspired by the "quasiquote" feature in Scheme.

the issue would be how best to allow more advanced capabilities without exposing the underlying AST structure (which, in the current VM, is itself essentially Scheme-based, although I wouldn't want to make this a mandatory feature...).



I also have some amount of mechanically generated code in C (tools process code and data and generate more code in response), but this is not quite the same.

the world doesn't need another Java...


or such...


