On Mon, Jul 25, 2011 at 10:49 AM, Marcel Weiher
<[email protected]>wrote (in: Re: [fonc] Alan Kay talk at HPI in
Potsdam)

> Well, now that we actually have a 1E9 : 1 scale system to look at and scale
> down, and it turns out that it looks a bit different than we thought it
> would. This seems like a good thing to me (see above), because it means we
> have an opportunity for learning.
>

I agree with the approach of taking a large system and scaling it 'down' to
discover new truths about how we should be developing applications. Indeed,
such an approach has led me to develop the Reactive Demand Programming
model, which promises to scale from FPGAs to federated MMORPGs.

We should all open our eggs on the 'big' end. ;-)

But you seem to be providing a single scalability number: a billion to one.
I think we need to consider *N dimensions of scalability*.
* concurrent developers and administrators
* number of available services and apps
* depth for composition of services
* number of dependencies for a service
* dependencies between administrative domains
* concurrent users for a service
* concurrently visible applications (zoomable UI)
* connectivity of applications; mashups
* sharing resources between users (CSCW)

And each dimension has its own safety properties. For example, you should
look at phenomena such as 'dead links' and think, "it seems maintenance at
scale could use some work." Similarly, inconsistent data is another common
maintenance issue. I don't think the web measures up well against all of
the above scaling properties.

While I did learn a lot from the web, I feel I learned more from its
inefficiencies and weaknesses than from its successes. For example:

* We cannot treat URIs as identifying unique resources. Everyone sees a
different 'iGoogle.com'. Forms are not shared between clients. DHTML
diverges after reaching the client. Use of cookies ties results to a specific
client. These properties hinder CSCW, application mobility, and (perhaps
surprisingly) scalability. (The scalability issue: cookies weaken our
ability to utilize proxy caches or a content distribution network, which in
turn hinders our ability to handle flash-crowds.)
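To make the scalability point concrete, here is a toy model (the `SharedCache` class and its behavior are my own invention, not any real proxy): a shared cache may only serve a stored response if it is valid for every client, so per-cookie responses force every request back to the origin.

```python
# Sketch: why cookies defeat shared caches (hypothetical cache model).
# A proxy may only serve a stored response if it is valid for *every*
# client; a response that varies per cookie is valid for one client only.

class SharedCache:
    def __init__(self):
        self.store = {}          # url -> cached response
        self.origin_hits = 0     # requests that fell through to origin

    def fetch(self, url, cookie=None):
        if cookie is None and url in self.store:
            return self.store[url]           # shared cache hit
        self.origin_hits += 1
        response = f"page({url}, cookie={cookie})"
        if cookie is None:
            self.store[url] = response       # safe to share
        return response

cache = SharedCache()
# 1000 anonymous clients: one origin hit, 999 served from the cache.
for _ in range(1000):
    cache.fetch("http://example.org/")
anonymous_hits = cache.origin_hits

# 1000 cookied clients: every single request goes to the origin.
for i in range(1000):
    cache.fetch("http://example.org/", cookie=f"session-{i}")
print(anonymous_hits, cache.origin_hits - anonymous_hits)  # 1 1000
```

Under a flash crowd, that difference is exactly the difference between the CDN absorbing the load and the origin server falling over.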

* Document markup is not the right model for composable data systems.
XSLT+CSS+(XML Structured Data) offers a far superior opportunity for reuse
and re-composition of information. DHTML too easily hides information behind
the
language semantics; we need a more accessible language and model for dynamic
behaviors. Our UIs should place clients much closer to the data, so that
they can join, compose, adapt, and alter views of the data in ways not
anticipated by its provider. We can learn a lot from tangible values and
naked objects.
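As a toy illustration of placing the client close to the data (the schemas here are invented for the example): when results arrive as structured data rather than rendered markup, the client can join two sources in a way neither provider anticipated.

```python
# Sketch: client-side recomposition of structured data (invented schemas).
import xml.etree.ElementTree as ET

people = ET.fromstring("""
<people>
  <person id="1"><name>Alice</name></person>
  <person id="2"><name>Bob</name></person>
</people>""")

scores = ET.fromstring("""
<scores>
  <score person="2" value="7"/>
  <score person="1" value="9"/>
</scores>""")

# A join the data providers never anticipated: name -> score.
by_id = {p.get("id"): p.findtext("name") for p in people}
joined = {by_id[s.get("person")]: int(s.get("value")) for s in scores}
print(joined)  # {'Bob': 7, 'Alice': 9}
```

Had either provider shipped pre-rendered DHTML instead, this join would require brittle screen-scraping.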

* The Web's security model is inefficient (overheads per connection),
non-composable (with regard to both service brokering and page mashups), and
weak (only as strong as your least-trusted intermediate CA and your DNS). We
could do much better, e.g. using the object capability model
for security, but it will require considerable reorganization of how we
access pages.
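A minimal sketch of the object-capability idea (class and function names are mine, chosen for illustration): authority is an unforgeable reference, and composition works by handing out attenuated references rather than by checking identities.

```python
# Sketch: capabilities as unforgeable references, with attenuation.
class File:
    def __init__(self, text):
        self._text = text
    def read(self):
        return self._text
    def write(self, text):
        self._text = text

def read_only(f):
    """Attenuate: grant read authority without write authority."""
    class ReadCap:
        def read(self):
            return f.read()
    return ReadCap()

secret = File("launch codes")
cap = read_only(secret)

# Holding 'cap' conveys exactly the authority it was built with:
print(cap.read())             # launch codes
print(hasattr(cap, "write"))  # False - no ambient authority to escalate
```

A page mashup given `cap` can read but never write, with no ACLs, sessions, or certificate chains involved; that is the composability the current model lacks.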

* Identifying content on the web by 'name' is neither resilient nor stable.
Links break or are deprecated. We cannot easily 'extricate' a useful
document or subprogram when content is linked by name - i.e. we end up
tugging on the whole forest of dependencies. This has also proven an issue
for
scaling of programming languages, requiring a lot of complex dependency and
configuration management. We should link instead by content - i.e. pattern
matching and functional analysis.
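A sketch of content-based linking, with a plain hash standing in for the richer pattern matching and functional analysis I have in mind: a link names the bytes themselves, so it can neither silently dangle nor silently change meaning.

```python
# Sketch: links as content hashes rather than mutable names.
import hashlib

store = {}

def publish(content: bytes) -> str:
    link = hashlib.sha256(content).hexdigest()
    store[link] = content
    return link

def follow(link: str) -> bytes:
    content = store[link]
    # A fetched document can be verified against its own link.
    assert hashlib.sha256(content).hexdigest() == link
    return content

link = publish(b"a useful subprogram")
print(follow(link))  # b'a useful subprogram'

# Republishing changed content yields a *different* link; the old one
# still means exactly what it meant before.
assert publish(b"a changed subprogram") != link
```

This is also why extrication becomes tractable: a content link pins one artifact, not a live position in someone else's mutable namespace.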

* Zoomability! We also want the web to scale effectively on the
'client'-side. Currently, the number of 'open' web-applications a client can
support is very limited because each app may consume client and server
memory, CPU, bandwidth, and battery life even when users aren't actively
involved with them. Within apps, we often pay for features that clients
aren't currently using.

I believe solving zoomability requires corresponding advances in our
programming model: to control consumption of resources (demand-driven), to
allow easy tabling and recovery of an app that is out of sight (resilient,
disruption tolerant), to support shared editing and CSCW (reactive,
consistent), and to allow a large number of visible apps so long as most are
stable (RESTful).
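The demand-driven piece can be sketched in a few lines (the `Signal` API is invented for this example, not RDP itself): a behavior computes only while someone observes it, so an out-of-sight app costs nothing per tick.

```python
# Sketch: demand-driven evaluation - work happens only under observation.
class Signal:
    def __init__(self, compute):
        self._compute = compute
        self.observers = 0
        self.evaluations = 0

    def subscribe(self):
        self.observers += 1

    def unsubscribe(self):
        self.observers -= 1

    def step(self):
        """One scheduler tick: evaluate only if demanded."""
        if self.observers > 0:
            self.evaluations += 1
            return self._compute()

app = Signal(lambda: "rendered frame")
for _ in range(100):
    app.step()            # app is out of sight: no work is done
app.subscribe()
for _ in range(3):
    app.step()            # visible again: work resumes
print(app.evaluations)    # 3
```

With demand as the driver, keeping a hundred apps 'open' costs no more than the handful currently in view.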

I have been developing a way to get both content-centric networking and the
object capability model. It involves a fine-grained network of
micro-registries that we can publish, revoke, identify, and share by
capability. It turns out that the idea of a 'micro-registry' also serves
effectively as a document - i.e. in the same sense that XML can be viewed
with XSLT+CSS, but without hindering our ability to do other useful things.


> there should be no difference between our "local" and our "global" apps,
> but instead of making our global apps look like our local ones, a more
> fruitful approach could be to make our local apps look like our global ones.


I agree with the sentiment, but I think properly fixing the 'global' apps
will result in something that works well locally.


> we can hide computation behind a "static document" interface.
>

I've always (well, since 1996, when I first saw 'meta refresh') thought this
was a terrible idea. We shouldn't invite abstraction violations. We'd be
better off acknowledging that every remote object is inherently 'dynamic',
and provide a corresponding network protocol to support live queries. As it
is, we developed a ton of hacks (comet, long polling, infinite frames, et
cetera) before we finally bit the bullet and added WebSockets.

But, from the rest of your explanation, I surmise that the property you
really want is accessibility - to put users and their agents
(screen-readers, translators, scripts, etc.) closer to the data model and
capabilities. JavaScript and Flash tend to hurt accessibility.

I don't believe there is an inherent conflict between low-level rendering
and accessibility. But we must impose a separation between where we obtain
the data and the service we use to render it. This separation could be
better encouraged with a few tweaks to the browser model.


> Back to useful features of the REST model, having the "what can I do next"
> information embedded in the answer to my request ("Hypertext as the carrier
> of application state") also seems to be a powerful way of really, really,
> really late binding APIs.
>

Yeah, I think this sort of 'navigation as driver of app state' is a good
lesson to learn from the web with respect to building UIs. We might relate
it to continuation-passing style in a programming language.
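The analogy can be sketched like so (a toy, with handlers I invented): each response carries its data together with the continuations that define "what can I do next", so the API is bound at the latest possible moment.

```python
# Sketch: hypermedia responses as continuation-passing style.
def show_cart(items):
    return {
        "data": items,
        # "What can I do next" travels with the answer, late-bound:
        "next": {
            "add":      lambda item: show_cart(items + [item]),
            "checkout": lambda: show_receipt(items),
        },
    }

def show_receipt(items):
    return {"data": f"charged for {len(items)} item(s)", "next": {}}

# The client drives application state purely by navigation; it never
# hard-codes an endpoint beyond the entry point.
page = show_cart([])
page = page["next"]["add"]("book")
page = page["next"]["checkout"]()
print(page["data"])  # charged for 1 item(s)
```

The client only ever follows the options it was handed, which is exactly the really, really, really late binding you describe.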

I certainly prefer navigation to 'buttons' and 'events' updating 'state'.

Regards,

Dave
_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
