Re: [fonc] Simple, Simplistic, and Scale

2011-07-30 Thread BGB

On 7/29/2011 7:06 PM, David Barbour wrote:



On Fri, Jul 29, 2011 at 5:08 PM, BGB cr88...@gmail.com wrote:



Linden Labs tried to do something similar with Second Life, but it
hasn't really caught on very well in general.
however, most prior attempts (VRML, Adobe Atmosphere, ...) have
generally gone nowhere.


I've looked into several systems, including Second Life, VRML, and 
Croquet. Someone eventually explained the problem: 3D has issues with 
information density - i.e. unless the information in question is a 
visual model, it's hard to use. This makes it suitable for games, CAD, 
and situational awareness... but not so much for regular use.


A big 3D virtual world was what I wanted at age 23. My tastes have 
since changed a bit.




yeah, making it not lame is itself a difficult problem.


Today, I'm much more interested in distributed command-and-control, 
wearable and ambient computing, and augmented reality. The most 
significant barriers to such technology - involving composition, 
distribution, security, concurrency, and resource management - are the 
same. But I still keep the gaming issues to help filter language designs.


possible, but those are not problems which can be readily addressed.




potentially, messages could be delivered over an XMPP-like message
protocol, albeit ideally it would need higher bandwidth limits,
server-side scene-management, and probably a compressed and/or
binary transport: XML+Deflate, WBXML+Deflate, SBXE(1)+Deflate, or EXI.


Fortunately, there are a lot of people doing good work on the issue of 
message serialization. David Ryan's Argot, Google Protocol Buffers, 
and various other options are also feasible. I would want to choose a 
model suitable for real-time embedded systems, and that doesn't rely 
on namespaces.


XMPP doesn't depend directly on namespace prefixes (an XML subset is used).
the merit of XMPP is that it allows transporting fairly arbitrary 
messages, which is fairly useful.


its main downside though is that textual XML uses a lot of bandwidth, 
and a 1.5 kB/s transfer limit is imposed (past this point, throttling 
kicks in), limiting movement updates to about 1 update per second.


however, if the protocol were compressed and bandwidth were uncapped, 
then it could be a bit more useful.
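for what it's worth, the size win is easy to demonstrate with zlib's 
Deflate; the sketch below uses made-up movement-update markup (element 
and attribute names invented for the sketch), so the exact ratio is 
only indicative:

```python
import zlib

# a repetitive movement-update message, roughly the kind of traffic a
# 3D world would push over an XMPP stream
updates = "".join(
    '<update entity="%d" x="%f" y="%f" z="%f"/>' % (i, i * 0.5, i * 0.25, 0.0)
    for i in range(200)
)
raw = ("<scene>%s</scene>" % updates).encode("utf-8")

packed = zlib.compress(raw, 9)  # Deflate at maximum effort

ratio = 1.0 - len(packed) / len(raw)
print("raw: %d bytes, deflated: %d bytes (%.0f%% smaller)"
      % (len(raw), len(packed), 100 * ratio))
```

the repeated tag and attribute names are exactly what Deflate's LZ77 
window eats, which is why textual XML compresses so well relative to 
its raw size.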





the one language I can't really readily interop with is Java


So you can readily interop with Clojure, Lustre, Oz/Mozart, and 
Distributed Maude? Wow, that's really impressive!




well, of the languages I have tested against.

to say that I can't really interop with Java doesn't mean that all other 
languages will work automatically.


pretty much anything with a strong C FFI should be reasonably possible 
to interop with; languages which lack a good C FFI are more of a 
problem. this is because, for the most part, the C ABIs (and some 
limited aspects of the C++ ABIs) are the substrate on which much of 
the rest is built.


so, roughly this excludes languages with poor C interop.
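as an illustration of the C-ABI-as-substrate point, here is a minimal 
sketch using Python's ctypes as a stand-in for a "strong C FFI": 
declare the C signature, and the FFI handles the marshaling across the 
ABI boundary (the libc lookup is platform-dependent; this sketch is 
not part of the original discussion):

```python
import ctypes
import ctypes.util

# locate the platform C library (its name differs per OS - exactly the
# kind of detail an FFI layer has to paper over)
name = ctypes.util.find_library("c")
libc = ctypes.CDLL(name)  # on Linux, CDLL(None) would also work

# declare the C signature of strlen so arguments marshal correctly
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

n = libc.strlen(b"hello, ffi")
print(n)  # -> 10
```

a language without such a facility has to route everything through 
some other bridge, which is the problem described above with Java.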




today, we cannot effectively compose, integrate, and scale
frameworks, DSLs, and applications developed /in the
same/ programming model or language. Even /without/ the extra
complications of foreign service integration, there is a major
problem that needs solving.


these are more library/API design issues though.


No. An API/library design issue is, by nature, an issue that can be 
isolated to /specific/ libraries or API designs. When a symptom is 
systemic, as in this case, one should seek its cause.




it is a generalized issue of API designs, or more correctly, the common 
lack of good external APIs.





Problem is, many of the more interesting things I want to do
would be complicated, inefficient, unsafe, or insecure with
our current foundations.

granted, but expecting any sort of widespread fundamental changes
is not likely to happen anytime soon, so probably better is to go
with the flow.


Would you plant a tree, knowing that it's unlikely to grow tall and 
wide enough to offer shade 'any time soon'? Would you pollute a river 
one day at a time, knowing that the chemicals won't reach poisonous 
levels 'any time soon'? Would you write one line of code for a VM, 
knowing you won't reach something competitive 'any time soon'?


why would these need to affect one's decisions?...

competitiveness may not actually matter, since ideally one will write 
a VM to serve one's own needs, when one has a use for it. these sorts 
of things are not done (entirely) in a vacuum.


but, anyways, the future will come along when it does.

a lot of this is about doing the right thing at the right time, and if 
now is not the time, then when? one can likely get a lot more done by 
doing what is needed for today, and leave what is needed for tomorrow 
for another day.


not that one shouldn't prepare for the future though, but if and 

Re: [fonc] Simple, Simplistic, and Scale

2011-07-30 Thread David Barbour
On Sat, Jul 30, 2011 at 1:19 PM, BGB cr88...@gmail.com wrote:

 concurrency doesn't care, because both multithreading and message queues
 can be readily used with the stack machine abstraction.


I'm not saying you cannot use them, BGB. I'm saying that they're *
complications*, i.e. that the resulting system is difficult to explain and
reason about. Because the stack machine is *simplistic*, and developers are
forced to invent all these ways to *work around* the stack-machine
abstraction rather than with it.

Message queues are a fine example: now we have a queue as well as a stack,
and we cannot easily reason about how long it will take to process an event.
And have you ever tried to greenspun a model of threads atop a stack? Or are
you just 'assuming' an orthogonal model - yet another complication, to work
around the stacks?

You are defending stacks with cruft and hacks.
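The complication is easy to make concrete: below is a toy stack 
machine with a bolted-on message queue (opcodes and message shapes are 
invented for this sketch). The queue is a second, orthogonal 
structure; nothing in the stack discipline bounds how long a posted 
event waits.

```python
from collections import deque

class ToyStackVM:
    """A simplistic stack machine with a bolted-on message queue."""
    def __init__(self):
        self.stack = []
        self.queue = deque()   # events wait here, outside the stack model
        self.handlers = {}

    def post(self, msg):
        self.queue.append(msg)

    def run(self, code):
        for op, arg in code:
            if op == "push":
                self.stack.append(arg)
            elif op == "add":
                b, a = self.stack.pop(), self.stack.pop()
                self.stack.append(a + b)
        # only *after* the code burst finishes do queued events get
        # served: queue latency depends on the length of the burst,
        # not on anything the stack abstraction can express
        while self.queue:
            msg = self.queue.popleft()
            self.handlers.get(msg["type"], lambda m: None)(msg)

vm = ToyStackVM()
seen = []
vm.handlers["ping"] = lambda m: seen.append(m["n"])
vm.post({"type": "ping", "n": 1})
vm.run([("push", 2), ("push", 3), ("add", None)])
print(vm.stack, seen)  # [5] [1]
```

Note the event was posted *before* the code burst but handled only 
after it; reasoning about that latency requires stepping outside the 
stack abstraction entirely.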
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Simple, Simplistic, and Scale

2011-07-29 Thread BGB

On 7/28/2011 8:19 PM, David Barbour wrote:
On Thu, Jul 28, 2011 at 2:16 PM, BGB cr88...@gmail.com wrote:


striving for simplicity can also help, but even simplicity can
have costs:
sometimes, simplicity in one place may lead to much higher complexity
somewhere else. [...]

it is better to try to find a simple way to handle issues, rather
than try to sweep them under the carpet or try to push them somewhere
else.

I like to call this the difference between 'simple' and 'simplistic'. 
It is unfortunate that it is so easy to strive for the former and 
achieve the latter.


* Simple is better than complex.
* Complex is better than complicated.
* Complicated is better than simplistic.

The key is that 'simple' must still capture the essential difficulty 
and complexity of a problem. There really is a limit for 'as simple as 
possible', and if you breach it you get 'simplistic', which shifts 
uncaptured complexity unto each client of the model.




yeah. simplistic IMO would likely describe the JVM architecture at 
around 1.0.
the design seemed clean and simple enough (well, except the class 
libraries, where Sun seemed to really like their huge numbers of 
classes and API calls, in stark contrast to the minimalism of the 
Java language and Java ByteCode...).


some time later, a mountain of crud was built on top. why?... because 
JBC could not be extended cleanly or gracefully.


later, when I was working on my own implementation of the JVM (seemed 
like a good idea at the time), I developed a very nifty hack to the 
.class format to allow more readily extending the file format.



some of my own bytecode formats (designed after all this petered out) 
took this idea and ran with it, essentially discarding the whole rest 
of the file format (any structures besides the constant pool seemed 
unnecessary).


now, whether this is simple or simplistic, who knows?...

sadly, the contained bytecode itself is not quite so elegant (at the 
moment it has 533 opcodes). it has also evolved over much of a decade 
now, but has never really had an external file-format. in its abstract 
sense, it is an RPN and blocks-based format, vaguely like PostScript 
(a stack machine with a dynamic/variant type-model that makes heavy 
use of 'mark', ...).
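a minimal sketch of the PostScript-style 'mark' idiom mentioned above 
(illustrative only, not the actual IL): 'mark' delimits a variadic 
group on the stack, and a later op gathers everything back to the mark 
into one value.

```python
MARK = object()  # sentinel, like PostScript's mark object

def run(program):
    stack = []
    for tok in program:
        if tok == "mark":
            stack.append(MARK)
        elif tok == "list":
            # pop back to the nearest mark and bundle the values -
            # the idiom that lets variadic constructors live
            # comfortably on a plain stack
            items = []
            while stack[-1] is not MARK:
                items.append(stack.pop())
            stack.pop()               # discard the mark itself
            stack.append(items[::-1])
        elif tok == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:                         # anything else is a literal
            stack.append(tok)
    return stack

# (1+2, 10, 20) gathered into one variadic value:
print(run(["mark", 1, 2, "add", 10, 20, "list"]))  # [[3, 10, 20]]
```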


a close sibling/fork of this IL was used by the C compiler, but that 
version made some ill-advised simplifications (damage caused by 
initially dropping nested blocks, using right-to-left ordering for 
ordinary function calls, but left-to-right for built-ins, ...).



We can conclude some interesting properties: First, you cannot achieve 
'simplicity' without knowing your problem or requirements very 
precisely. Second, the difference between simple and simplistic only 
becomes visible for a model, framework, or API when it is shared and 
scaled to multiple clients and use cases (this allows you to see 
repetition of the uncaptured complexity).


I first made these observations in early 2004, and developed a 
methodological approach to achieving simplicity:

(1) Take a set of requirements.
(2) Generate a model that /barely/ covers them, precisely as possible. 
(This will be simplistic.)
(3) Explore the model with multiple use-cases, especially at large 
scales. (Developer stories. Pseudocode.)

(4) Identify repetitions, boiler-plate, any stumbling blocks.
(5) Distill a new set of requirements. (Not monotonic.)
(6) Rinse, wash, repeat until I fail to make discernible progress for 
a long while.
(7) At the end, generate a model that /barely/ overshoots the 
requirements.




I generally start with a more complex description and tend to see what 
can be shaved off without compromising its essential properties. this 
doesn't always work though, such as when requirements change or a 
dropped feature comes back to bite one sometime later.



This methodology works on the simple principle: it's easier to 
recognize 'simplistic' than 'not quite as simple as possible'. All you 
need to do is scale the problem (in as many dimensions as possible) 
and simplistic hops right out of the picture and slaps you in the face.




possible, but at times problems don't fully appear until one tries to 
implement or test the idea, and one is left to discover that something 
has gone terribly wrong.



By comparison, unnecessary complexity or power will lurk, invisible to 
our preconceptions and perspectives - sometimes as a glass ceiling, 
sometimes as an eroding force, sometimes as brittleness - but always 
causing scalability issues that don't seem obvious. Most people who 
live in the model won't even see there is a problem, just 'the way 
things are', just Blub. The only way to recognize unnecessary power or 
complexity is to find a simpler way.




IMO, glass-ceilings and brittleness are far more often a result of cruft 
than of complexity in itself.


plain complexity will often lead to a straightforward solution, 

Re: [fonc] Simple, Simplistic, and Scale

2011-07-29 Thread Ken 'classmaker' Ritchie
Keep It Simple & Sufficient
The other meaning of K.I.S.S.
Paraphrasing Einstein.

Cheers,
-KR
;-)


On Jul 28, 2011, at 23:19, David Barbour dmbarb...@gmail.com wrote:

 The key is that 'simple' must still capture the essential difficulty and 
 complexity of a problem. There really is a limit for 'as simple as possible', 
 and if you breach it you get 'simplistic', which shifts uncaptured complexity 
 unto each client of the model. 



Re: [fonc] Simple, Simplistic, and Scale

2011-07-29 Thread Paul Homer
There is nothing simple about simplification :-)

In '07 I penned a few thoughts about it too:

http://theprogrammersparadox.blogspot.com/2007/12/nature-of-simple.html


Paul.





From: David Barbour dmbarb...@gmail.com
To: Fundamentals of New Computing fonc@vpri.org
Sent: Thursday, July 28, 2011 11:19:47 PM
Subject: [fonc] Simple, Simplistic, and Scale


On Thu, Jul 28, 2011 at 2:16 PM, BGB cr88...@gmail.com wrote:

striving for simplicity can also help, but even simplicity can have costs:
sometimes, simplicity in one place may lead to much higher complexity 
somewhere else. [...]

it is better to try to find a simple way to handle issues, rather than try to 
sweep them under the carpet or try to push them somewhere else.
 
I like to call this the difference between 'simple' and 'simplistic'. It is 
unfortunate that it is so easy to strive for the former and achieve the 
latter. 


* Simple is better than complex.
* Complex is better than complicated.
* Complicated is better than simplistic.


The key is that 'simple' must still capture the essential difficulty and 
complexity of a problem. There really is a limit for 'as simple as possible', 
and if you breach it you get 'simplistic', which shifts uncaptured complexity 
unto each client of the model. 


We can conclude some interesting properties: First, you cannot achieve 
'simplicity' without knowing your problem or requirements very precisely. 
Second, the difference between simple and simplistic only becomes visible for 
a model, framework, or API when it is shared and scaled to multiple clients 
and use cases (this allows you to see repetition of the uncaptured 
complexity). 


I first made these observations in early 2004, and developed a methodological 
approach to achieving simplicity:
(1) Take a set of requirements.
(2) Generate a model that barely covers them, precisely as possible. (This 
will be simplistic.)
(3) Explore the model with multiple use-cases, especially at large scales. 
(Developer stories. Pseudocode.)
(4) Identify repetitions, boiler-plate, any stumbling blocks.
(5) Distill a new set of requirements. (Not monotonic.)
(6) Rinse, wash, repeat until I fail to make discernible progress for a long 
while.
(7) At the end, generate a model that barely overshoots the requirements.


This methodology works on the simple principle: it's easier to recognize 
'simplistic' than 'not quite as simple as possible'. All you need to do is 
scale the problem (in as many dimensions as possible) and simplistic hops 
right out of the picture and slaps you in the face. 


By comparison, unnecessary complexity or power will lurk, invisible to our 
preconceptions and perspectives - sometimes as a glass ceiling, sometimes as 
an eroding force, sometimes as brittleness - but always causing scalability 
issues that don't seem obvious. Most people who live in the model won't even 
see there is a problem, just 'the way things are', just Blub. The only way to 
recognize unnecessary power or complexity is to find a simpler way. 


So, when initially developing a model, it's better to start simplistic and 
work towards simple. When you're done, at the very edges, add just enough 
power, with constrained access, to cover the cases you did not foresee (e.g. 
Turing-complete only at the toplevel, or only with a special object 
capability). After all, complicated but sufficient is better than simplistic 
or insufficient.


I've been repeating this for 7 years now. My model was seeded in 2003 October 
with the question: What would it take to build the cyber-world envisioned in 
Neal Stephenson's Snow Crash? At that time, I had no interest in language 
design, but that quickly changed after distilling some requirements. I took a 
bunch of post-grad courses related to language design, compilers, distributed 
systems, and survivable networking. Of course, my original refinement model 
didn't really account for inspiration on the way. I've since become interested 
in command-and-control and data-fusion, which now has a major influence on my 
model. A requirement discovered in 2010 March led to my current programming 
model, Reactive Demand Programming, which has been further refined: temporal 
semantics were added initially to support precise multimedia synchronization 
in a distributed system, my temporal semantics have been refined to support 
anticipation (which is useful for ad-hoc coordination, smooth 
animation, event detection), and my state model 
was refined twice to support anticipation (via the temporal semantics) and live 
programming. 


I had to develop a methodological approach to simplicity, because the problem 
I so gleefully attacked is much, much bigger than I am. (Still is. Besides 
developing RDP, I've also studied interactive fiction, modular simulations, 
the accessibility issues for blind access to the virtual world, the 
possibility of CSS-like transforms on 3D structures, and so on. I have a 
potentially

Re: [fonc] Simple, Simplistic, and Scale

2011-07-29 Thread BGB

On 7/29/2011 1:05 PM, David Barbour wrote:


On Fri, Jul 29, 2011 at 3:12 AM, BGB cr88...@gmail.com wrote:


snip...

nothing interesting to comment/add...




Snow Crash: dot pattern from space - brain-damage


Ah, yes, that wasn't the bit I wanted to create from Snow Crash. Just 
the vision of the cyber-world, with user agents and a multi-company 
federated but contiguous space.


fair enough.

Linden Labs tried to do something similar with Second Life, but it 
hasn't really caught on very well in general.



however, most prior attempts (VRML, Adobe Atmosphere, ...) have 
generally gone nowhere.


I once considered trying to pursue similar goals, essentially 
hybridizing 3D engines and a web-based substrate (HTTP, ...).


I once built something simplistic (3D chat-room style), IIRC mostly on 
top of HTTP and Jabber/XMPP (however, its capabilities were limited, 
mostly because XMPP uses textual XML and bandwidth throttling). it 
wasn't maintained, mostly because it was severely uninspiring.



if anything is going to really work, it is probably going to need to be 
based on a more game-like technology and architecture (and ideally try 
to deliver a more game-like user experience).


an example would be if it delivered the look and feel of something 
like Quake3 or Doom3 or Unreal Tournament or similar, but was a more 
open and extensible architecture (based on HTTP servers and otherwise 
open protocols).


potentially, messages could be delivered over an XMPP-like message 
protocol, albeit ideally it would need higher bandwidth limits, 
server-side scene-management, and probably a compressed and/or binary 
transport: XML+Deflate, WBXML+Deflate, SBXE(1)+Deflate, or EXI.


1: SBXE is my own binary XML format, similar to WBXML but a little 
more compact and supporting a few more standard XML features (such as 
namespaces). in my tests, SBXE+Deflate tends to be about 30%-40% 
smaller than textual XML+Deflate (which itself is often 75%-90% 
smaller than uncompressed XML).


however, since SBXE does not use schemas but builds a model from prior 
tags, its relative overhead is higher for small disjoint messages, as 
it tends to need to retransmit tag names, ... but it tends to do 
better in my tests for larger messages (it will partly predict later 
tags from prior ones). it is also reasonably Deflate-friendly (Deflate 
usually compresses it down by around an additional 25%-75% in my 
tests). note that XML+Deflate is typically smaller than raw SBXE.
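the "model from prior tags" idea can be sketched roughly as follows (a 
guess at the general technique, not SBXE's actual wire format): first 
occurrences of a tag name go out literally, later ones as one-byte 
back-references, which is why small disjoint messages, which never 
repeat a tag, pay the full price.

```python
def encode_tags(tags):
    # first occurrence: escape byte, length, then the name itself;
    # repeats: a single index byte into the model built so far.
    # (sketch only: breaks past 255 distinct tags, ignores attributes.)
    seen = {}
    out = bytearray()
    for tag in tags:
        if tag in seen:
            out.append(seen[tag])                    # back-reference
        else:
            seen[tag] = len(seen)
            name = tag.encode("ascii")
            out += bytes([0xFF, len(name)]) + name   # literal
    return bytes(out)

tags = ["update", "pos", "pos", "pos", "update", "pos"]
wire = encode_tags(tags)
plain = sum(len(t) for t in tags)
print(len(wire), "vs", plain, "bytes")  # 17 vs 24 bytes
```

with a longer, more repetitive stream the back-references dominate and 
the relative overhead of the literals shrinks, matching the behavior 
described above for larger messages.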


I have not personally tested against EXI (EXI is a bitstream-oriented 
schema-based encoding, and is a W3C standards recommendation).



I am personally not so much of a fan of schema-based encodings though, 
as they bring additional hassles (such as having to write schemas) and 
hinder or eliminate the use of free-form messages (IMO, it is much 
like Vector Quantization with codebooks vs Deflate: VQ+codebooks can 
be potentially smaller, but Deflate is far more generic).


Deflate+Textual XML generally works fairly well though in terms of 
shaving down size.



note that my current 3D engine is using an S-Expression based protocol 
for client/server communication (was less effort and potentially 
higher-performance than XML), but a hack could allow shoving basic 
S-Exps over an XML-based message transport.


(foo (bar 1 2) 3)
=
<sexp xmlns="...">
  <T n="foo"><T n="bar"><I v="1"/><I v="2"/></T><I v="3"/></T>
</sexp>
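that mapping is mechanical; a sketch in Python (the T/I element 
convention is taken from the example above; attribute escaping and 
non-integer atoms are omitted):

```python
def sexp_to_xml(sx):
    # integers become <I v="..."/>, lists become <T n="head">...</T>
    if isinstance(sx, int):
        return '<I v="%d"/>' % sx
    head, *rest = sx
    return '<T n="%s">%s</T>' % (head, "".join(sexp_to_xml(x) for x in rest))

print(sexp_to_xml(["foo", ["bar", 1, 2], 3]))
# <T n="foo"><T n="bar"><I v="1"/><I v="2"/></T><I v="3"/></T>
```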




I started out with language design and VM implementation some time
around 2000. at the time I was using Guile, but was somewhat
frustrated with it. for whatever reason, I skimmed over the source of
it and several other Scheme implementations, and threw together my own.


My start in language design is similar. Guile was one of the 
implementations I studied, though I did not use it.


I used it some, but at the time (late 90s) it wasn't very good (what 
really killed it for me though was that it was hard-coded to call 
abort() at the first sign of trouble, which was hardly a desirable 
behavior in my case).


most of my stuff handled error-conditions by returning UNDEFINED, 
which was intended as a catch-all error value. ideally, I would add 
proper exception handling eventually (having everything fail silently 
with UNDEFINED popping up everywhere is ultimately not very good for 
debugging, but mixed-language exception-handling is an awkward issue, 
so it hasn't really been dealt with effectively).
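the debugging problem is easy to demonstrate: with a catch-all 
UNDEFINED value, the error's origin is erased by the time it surfaces 
(the functions below are hypothetical, not from the actual VM):

```python
UNDEFINED = object()  # catch-all error value, as described above

def car(x):
    # returns UNDEFINED on any error instead of raising
    return x[0] if isinstance(x, list) and x else UNDEFINED

def add(a, b):
    if a is UNDEFINED or b is UNDEFINED:
        return UNDEFINED              # errors propagate silently
    return a + b

# the mistake happens inside car(), but only the final result reveals
# it, with no indication of *where* things went wrong
result = add(car([]), 5)
print(result is UNDEFINED)  # True
```

an exception raised at the failing car() call would have carried a 
traceback to the fault, which is exactly what the sentinel scheme loses.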





/Too much power with too little perspective - that is the problem./

I doubt power is the cause of this problem.

being simplistic or made from cruft or hacks are far more likely
to be their downfall.


Power, in the absence of perspective, leads humans to develop 
cruft. Hacks are how we compose cruft.


Self-discipline and best practices are ways to 'tame' the use of 
power, when we have too much 

[fonc] Simple, Simplistic, and Scale

2011-07-28 Thread David Barbour
On Thu, Jul 28, 2011 at 2:16 PM, BGB cr88...@gmail.com wrote:

 striving for simplicity can also help, but even simplicity can have costs:
 sometimes, simplicity in one place may lead to much higher complexity
 somewhere else. [...]

 it is better to try to find a simple way to handle issues, rather than
 try to sweep them under the carpet or try to push them somewhere else.


I like to call this the difference between 'simple' and 'simplistic'. It is
unfortunate that it is so easy to strive for the former and achieve the
latter.

* Simple is better than complex.
* Complex is better than complicated.
* Complicated is better than simplistic.

The key is that 'simple' must still capture the essential difficulty and
complexity of a problem. There really is a limit for 'as simple as
possible', and if you breach it you get 'simplistic', which shifts
uncaptured complexity unto each client of the model.

We can conclude some interesting properties: First, you cannot achieve
'simplicity' without knowing your problem or requirements very precisely.
Second, the difference between simple and simplistic only becomes visible
for a model, framework, or API when it is shared and scaled to multiple
clients and use cases (this allows you to see repetition of the uncaptured
complexity).

I first made these observations in early 2004, and developed a
methodological approach to achieving simplicity:
(1) Take a set of requirements.
(2) Generate a model that *barely* covers them, precisely as possible. (This
will be simplistic.)
(3) Explore the model with multiple use-cases, especially at large scales.
(Developer stories. Pseudocode.)
(4) Identify repetitions, boiler-plate, any stumbling blocks.
(5) Distill a new set of requirements. (Not monotonic.)
(6) Rinse, wash, repeat until I fail to make discernible progress for a long
while.
(7) At the end, generate a model that *barely* overshoots the requirements.

This methodology works on the simple principle: it's easier to recognize
'simplistic' than 'not quite as simple as possible'. All you need to do is
scale the problem (in as many dimensions as possible) and simplistic hops
right out of the picture and slaps you in the face.

By comparison, unnecessary complexity or power will lurk, invisible to our
preconceptions and perspectives - sometimes as a glass ceiling, sometimes as
an eroding force, sometimes as brittleness - but always causing scalability
issues that don't seem obvious. Most people who live in the model won't even
see there is a problem, just 'the way things are', just Blub. The only way
to recognize unnecessary power or complexity is to find a simpler way.

So, when initially developing a model, it's better to start simplistic and
work towards simple. When you're done, at the very edges, add just enough
power, with constrained access, to cover the cases you did not foresee (e.g.
Turing-complete only at the toplevel, or only with a special object
capability). After all, complicated but sufficient *is* better than
simplistic or insufficient.

I've been repeating this for 7 years now. My model was seeded
in 2003 October with the question: *What would it take to build the
cyber-world envisioned in Neal Stephenson's Snow Crash? *At that time, I
had no interest in language design, but that quickly changed after
distilling some requirements. I took a bunch of post-grad courses related to
language design, compilers, distributed systems, and survivable networking.
Of course, my original refinement model didn't really account for
inspiration on the way. I've since become interested in command-and-control
and data-fusion, which now has a major influence on my model. A requirement
discovered in 2010 March led to my current programming model, Reactive
Demand Programming, which has been further refined: temporal semantics were
added initially to support precise multimedia synchronization in a
distributed system, my temporal semantics have been refined to support
anticipation (which is useful for ad-hoc coordination, smooth animation,
event detection), and my state model was refined twice to support
anticipation (via the temporal semantics) and live programming.

I *had* to develop a methodological approach to simplicity, because the
problem I so gleefully attacked is much, much bigger than I am. (Still is.
Besides developing RDP, I've also studied interactive fiction, modular
simulations, the accessibility issues for blind access to the virtual world,
the possibility of CSS-like transforms on 3D structures, and so on. I have a
potentially powerful idea involving multi-agent generative grammars for
intelligent controlled creativity and stability in a shared, federated
world. I doubt I'll finish *any* of that on my own, except maybe the
generative grammars bit.)

Most developers are clever, but lack perspective. Their eyes and noses are
close to a problem, focused on a local problem and code-smells. When they
build atop a *powerful* substrate - such as OOP or monads -