Re: [fonc] Issues with understanding obj.c

2011-06-09 Thread Julian Leviston
See below...

On 09/06/2011, at 2:59 PM, Josh Gargus wrote:

 I really don't understand what this means:
 
 typedef struct object *(*method_t)(struct object *receiver, ...);
 
 method_t is a pointer to a function that returns an object pointer and 
 takes a receiver plus additional arguments
 
 Thanks for this. Okay, I understand that, but why is there a struct in 
 there twice? considering object is defined as a struct earlier in the 
 piece... is it because they're object pointers? when specifying a struct 
 pointer, do you need to write struct even though you've previously 
 specified a struct with that name?
 
 The latter.  In C++ you only need to use struct when declaring the type.   
 However, in C you need to explicitly use struct every time you want to refer 
 to the type.
 
 One common idiom is to use a typedef while defining the type.  In this case, 
 you might write:
 
 typedef struct object object_t;
 typedef object_t *(*method_t)(object_t *receiver, ...);
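
For concreteness, a minimal sketch of how this idiom might be put to work
(made-up names and layout for illustration, not the actual obj.c code):

#include <stdio.h>

typedef struct object object_t;
typedef object_t *(*method_t)(object_t *receiver, ...);

struct object {
    const char *name;
    method_t    describe;    /* a slot holding a method pointer */
};

/* a method: takes the receiver (plus optional varargs), returns an object */
static object_t *describe(object_t *self, ...)
{
    printf("I am %s\n", self->name);
    return self;
}

int main(void)
{
    object_t o = { "an object", describe };
    o.describe(&o);          /* send the message via the function pointer */
    return 0;
}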
 

Okay, so in the initial example, why the typedef? What function is it 
performing here? I thought typedef was used to alias types, and yet here it 
doesn't seem to be doing anything... if method_t is a function pointer, why 
does there need to be a typedef in front of it? 

Sorry I'm so dense. :S

Julian.


Re: [fonc] Issues with understanding obj.c

2011-06-09 Thread Julian Leviston
Answering my own question...

On 09/06/2011, at 4:27 PM, Julian Leviston wrote:

 [...]
 
 Okay, so in the initial example, why the typedef? What function is it 
 performing here? I thought typedef was used to alias types, and yet here it 
 doesn't seem to be doing anything... if method_t is a function pointer, why 
 does there need to be a typedef in front of it? 
 
 Julian.

So basically, adding typedef declares the function-pointer shape as a named 
type, whereas leaving it off declares a variable of that function-pointer 
type. I *think* I've finally wrapped my head around this now. Gosh.
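
In code terms, the distinction looks like this (a minimal sketch):

struct object;

/* with typedef: method_t names a TYPE... */
typedef struct object *(*method_t)(struct object *receiver, ...);
method_t m1, m2;    /* ...so this declares two variables of that type */

/* without typedef: method_v is itself a variable of that pointer type */
struct object *(*method_v)(struct object *receiver, ...);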

Julian.




Re: [fonc] Issues with understanding obj.c

2011-06-09 Thread BGB

On 6/8/2011 11:36 PM, Julian Leviston wrote:

[...]

So basically adding typedef defines the function pointer as a type, 
whereas leaving it off simply creates a variable function pointer. I 
*think* I've finally wrapped my head around this now. Gosh.


Julian.



yep...

function pointers are fun, and isn't the syntax just so aesthetically 
pleasing?...


now, just what could make it any better?...
I half-imagined something here (involving some C++0x features) which was 
too terrible to be described (or typed...).


for now, one can just settle with this:
typedef void *(*(*foo)(int x))(void *(*(*bar)(int y))(int z));
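
(if one actually wanted to untangle that monster, typedefs take it apart
piece by piece; this is my reading of the declaration, worth double-checking
against a compiler:

typedef void *ret_fn(int z);        /* function: int -> void*        */
typedef ret_fn *bar_fn(int y);      /* function: int -> ret_fn*      */
typedef void *arg_fn(bar_fn *bar);  /* function: bar_fn* -> void*    */
typedef arg_fn *(*foo2)(int x);     /* foo2: ptr to (int -> arg_fn*) */

foo2 should name the same type the one-liner above calls foo.)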



but, in all this, it has managed to cause me a little doubt as to the 
intuitiveness of using (in my language) a number of modifier keywords to 
alter variable-scoping semantics...


var x;            //does one thing (default/lexical scope)
dynamic var x;    //dynamic scope (except on a class)
delegate var x;   //delegation scope

never mind:
typedef function Foo(x, y);
Foo x;


note, the language currently has several valid declaration styles as 
well, the above being one of them.


and, recently, there was a disagreement with someone (Thomas Mertes) over 
my basically hard-coding nearly everything language-wise into the VM at 
present... (but, for my uses, extending the language from within the 
language isn't really a pressing issue...).


and, in the past, there was a disagreement with someone else over my 
type-system being based in large part on type-name strings and magic 
pointers (as opposed to tagged references and type-numbers or similar). 
but... name-strings were easier to work with (and don't require a 
centralized table of assigned type numbers, or similar).


well, and at present, I have opcodes numbered up to around 556 (putting 
them in the 2-byte range: opcodes 0..191 are single-byte, 256..16383 are 
2-byte, and 192..255 are currently an unusable hole due to an 
implementation detail, roughly along the lines of handling 2-byte opcodes 
via switch-recursion).
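
a rough sketch of one plausible decoding for that kind of numbering (my
guess at a layout consistent with the description, not necessarily the
actual implementation):

#include <stdint.h>

static int read_opcode(const uint8_t **pc)
{
    uint8_t b = *(*pc)++;
    if (b < 192)
        return b;               /* 1-byte opcode: 0..191 */
    /* a first byte of 192..255 selects a 2-byte opcode; the encodable
       values below 256 simply go unused, so 192..255 become the hole */
    return ((b - 192) << 8) | *(*pc)++;
}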


but, sadly, some of my stuff gets hairy sometimes.


or such...



Re: [fonc] Alternative Web programming models?

2011-06-09 Thread Josh Gargus

On May 31, 2011, at 7:30 AM, Alan Kay wrote:

 Hi Cornelius
 
 There are lots of egregiously wrong things in the web design. Perhaps one of 
 the simplest is that the browser folks have lacked the perspective to see 
 that the browser is not like an application, but like an OS. i.e. what it 
 really needs to do is to take in and run foreign code (including low level 
 code) safely and coordinate outputs to the screen (Google is just starting to 
 realize this with NaCl after much prodding and beating.)
 
 I think everyone can see the implications of these two perspectives and what 
 they enable or block

Some of the implications, anyway.  The benefits of the OS-perspective are 
clear.  Once it hits its stride, there will be no (technical) barriers to 
deploying the sorts of systems that we talk about here 
(Croquet-Worlds-Frank-OMeta-whatnot).  Others will be doing their own cool 
things, and there will be much creativity and innovation.

However, elsewhere in this thread it is noted that the HTML-web is 
structured-enough to be indexable, mashupable, and so forth.  It makes me 
wonder: is there a risk that the searchability, etc. of the web will be 
degraded by the appearance of a number of mutually-incompatible 
better-than-HTML web technologies?  Probably not... in the worst case, someone 
who wants to be searchable can also publish in the legacy format.

However, can we do better than that?   I guess the answer depends on which 
aspect of the status quo we're trying to improve on (searchability, mashups, 
etc).  For search, there must be plenty of technologies that can improve on 
HTML by decoupling search-metadata from presentation/interaction (such as 
OpenSearch, mentioned elsewhere in this thread).  Mashups seem harder... maybe 
it needs to happen organically as some of the newly-possible systems find 
themselves converging in some areas.

But I'm not writing because I know the answers, but rather the opposite.  What 
do you think?

Cheers,
Josh






 Cheers,
 
 Alan
 
 From: Cornelius Toole cornelius.to...@gmail.com
 To: Fundamentals of New Computing fonc@vpri.org
 Sent: Tue, May 31, 2011 7:16:20 AM
 Subject: Re: [fonc] Alternative Web programming models?
 
 Thanks Merik,
 
 I've read/watched the OOPSLA'97 keynote before, but hadn't seen the first 
 video. I'm having problems with the first one (the talk at UIUC). Has anyone 
 been able to watch past the first hour? I get up to the point where Alex 
 speaks and it freezes.
 
 I've just recently read Roy Fielding's dissertation on the architecture of 
 the Web. Two prominent features of web architecture are (1) the client-server 
 hierarchical style and (2) the layering abstraction style. My takeaway from 
 that is how all of the abstraction layers of the web software stack get in 
 the way of the applications that want to use the machine. Style 1 is counter 
 to the notion of the 'no centers' principle and is very limiting when you 
 consider different classes of applications that might involve many entities 
 with ill-defined relationships. Style 2 provides for separation of concerns 
 and supports integration with legacy systems, but incurs a lot of overhead in 
 terms of structural complexity and performance. I think the stuff about web 
 sockets and what was discussed in the Erlang interview that Micheal linked to 
 in the 1st reply is relevant here. The web was designed for large-grain 
 interaction between entities, but many application domain problems don't map 
 to that. Some people just want pipes or channels to exchange messages for 
 fine-grained interactions, but the layer cake doesn't allow it. This is where 
 you get the feeling that the architecture for rich web apps is 
 no-architecture, just piling big stones atop one another.
 
 I think it would be very interesting for someone to take the same approach to 
 network-based applications as Gezira did with graphics (or the STEP project 
 in general), as far as assessing what's needed in a modern Internet-scale 
 hypermedia architecture. 
 
 
 
 On Thu, May 26, 2011 at 4:53 PM, Merik Voswinkel a...@knoware.nl wrote:
 Dr Alan Kay addressed the html design a number of times in his lectures and 
 keynotes. Here are two:
 
 [1] Alan Kay, How Complex is Personal Computing?. Normal Considered 
 Harmful. October 22, 2009, Computer Science department at UIUC. 
  http://media.cs.uiuc.edu/seminars/StateFarm-Kay-2009-10-22b.asx 
 (also see http://www.smalltalk.org.br/movies/ )
 
 [2] Alan Kay, The Computer Revolution Hasn't Happened Yet, October 7, 1997, 
 OOPSLA'97 Keynote. 
  Transcript 
 http://blog.moryton.net/2007/12/computer-revolution-hasnt-happened-yet.html 
  Video 
 http://ftp.squeak.org/Media/AlanKay/Alan%20Kay%20at%20OOPSLA%201997%20-%20The%20computer%20revolution%20hasnt%20happened%20yet.avi
  
  (also see http://www.smalltalk.org.br/movies/ )
 
 Merik
 
 On May 26, 2011, at 8:38 PM, Cornelius Toole wrote:
 
 All,
 A criticism by Dr. Kay has really stuck with me. I 

Re: [fonc] Alternative Web programming models?

2011-06-09 Thread BGB

On 6/9/2011 12:56 AM, Josh Gargus wrote:


[...]


But I'm not writing because I know the answers, but rather the 
opposite.  What do you think?




hmm... it is a mystery

actually, possibly a relevant question here would be why Java applets 
largely fell on their face, while Flash largely took off (in all its uses 
from YouTube to Punch The Monkey...).


but, yeah, there is another downside to deploying one's technology in a 
browser:

writing browser plug-ins...


and, for browser-as-OS, what exactly will this mean, technically?...
network-based filesystem and client-side binaries and virtual processes?...
like, say, if one runs a tiny sand-boxed Unix-like system inside the 
browser, then push or pull binary files, which are executed, and may 
perform tasks?...


could be interesting though, as then a tab can be either an open page 
or document, or a running application. hopefully, these could be nicer 
to target, and more capable, than either Flash or Java Applets, although 
probably would require some sort of VM.


NaCl is not a perfect solution, if anything because, say, x86 NaCl 
apps don't work on x86-64 or ARM. nicer would be to be able to run 
them natively, if possible, or JIT them to the native ISA if not.


I did my own x86-based VM before, which basically just ran x86 in an 
interpreted (via translation to threaded code) environment. technically, 
I just sort of did a basic POSIX-like architecture, albeit I used 
PE/COFF for binaries and libraries (compiled via MinGW...).
it was written in such a way that it likely won't care what the host 
architecture is (it was all plain C, with no real ASM or 
architecture-specific hacks...).
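
the basic shape of the translate-to-threaded-code trick, as a toy sketch
(invented stack-machine opcodes for illustration, not the actual VM):

#include <stdio.h>

typedef struct vm vm_t;
typedef void (*op_t)(vm_t *);

struct vm {
    op_t *ip;                /* instruction pointer into threaded code */
    int   stack[64];
    int   sp;
    int   running;
};

static void op_push1(vm_t *v) { v->stack[v->sp++] = 1; }
static void op_add(vm_t *v)   { v->sp--; v->stack[v->sp - 1] += v->stack[v->sp]; }
static void op_print(vm_t *v) { printf("%d\n", v->stack[--v->sp]); }
static void op_halt(vm_t *v)  { v->running = 0; }

int main(void)
{
    /* the translation step would emit an array like this from the
       source instruction stream */
    op_t code[] = { op_push1, op_push1, op_add, op_print, op_halt };
    vm_t v = { code, {0}, 0, 1 };
    while (v.running)
        (*v.ip++)(&v);       /* the threaded dispatch loop; prints 2 */
    return 0;
}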


so, I guess, if something like this existed inside a browser, and was 
isolated from the host OS?...


in my case, I wrote the VM and soon realized I personally had no 
particular use for it...



and, meanwhile, for my own site, it is generally plain HTML...
I did basic CGI-scripts before as a test, but couldn't think up much to 
use them for (I don't personally really do much of anything that really 
needs CGI-scripts).


about the most I would likely do with it would be to perform simple 
surveys, say, a form that is like:

favorite programming language?, MBTI type?, ...
then I could analyze the results to conclude which types of personality 
are more associated with being a programmer, and which prefer which 
sorts of programming languages, ...


for example... how common are other xSTP (ISTP or ESTP) programmers, and 
how many like C?...



in general though, I use HTML for much of my documentation, but 
generally because it is currently one of the least-effort ways to 
provide structured and formatted documentation and have it be readily 
accessible (online or offline).


at least, currently I use SeaMonkey Composer, which is not that much 
more effort than using a word-processor, and IMO a little less silly in 
terms of how it behaves (vs Word or OpenOffice Writer which seem at 
times brain-damaged...). not that Composer is perfect either though.


for editing documentation, a WYSIWYG editor works fine, since one's 

Re: [fonc] Alternative Web programming models?

2011-06-09 Thread Julian Leviston

On 09/06/2011, at 5:56 PM, Josh Gargus wrote:

 However, can we do better than that?   I guess the answer depends on which 
 aspect of the status quo we're trying to improve on (searchability, mashups, 
 etc).  For search, there must be plenty of technologies that can improve on 
 HTML by decoupling search-metadata from presentation/interaction (such as 
 OpenSearch, mentioned elsewhere in this thread).  Mashups seem harder... 
 maybe it needs to happen organically as some of the newly-possible systems 
 find themselves converging in some areas.
 
 But I'm not writing because I know the answers, but rather the opposite.  
 What do you think?
 


I'm left wondering about the Adapter design pattern... could it be adapted to 
apply here? Can we take OMeta, which is basically the adapter pattern to end 
all adapter patterns, apply it to the web, and end up with two 
inter-communicable network protocols?

Julian.


Re: [fonc] Alternative Web programming models?

2011-06-09 Thread Julian Leviston

On 09/06/2011, at 7:04 PM, BGB wrote:

 actually, possibly a relevant question here, would be why Java applets 
 largely fell on their face, but Flash largely took off (in all its uses from 
 YouTube to Punch The Monkey...).

My own opinion of this is the same reason that the iPad feels faster than a 
modern desktop operating system: it was quicker to get to the user interaction 
point.

Julian.




Re: [fonc] Alternative Web programming models?

2011-06-09 Thread Cornelius Toole
 However, elsewhere in this thread it is noted that the HTML-web is
 structured-enough to be indexable, mashupable, and so forth.  It makes me
 wonder: is there a risk that the searchability, etc. of the web will be
 degraded by the appearance of a number of mutually-incompatible
 better-than-HTML web technologies?  Probably not... in the worst case,
 someone who wants to be searchable can also publish in the legacy format.


Will the web be degraded by the appearance (or should I say proliferation)
of mutually-incompatible, but better than HTML technologies?

First off I would ask: better in what way? From a user-experience POV, I
think when people say 'better' they mean richer interactivity, which
implies better graphical capabilities, access to special hardware (e.g.
camera, mic, accelerometer, GPS, GPUs, GPGPU, etc.), faster startup,
robustness against network failure, and so on.

Up until now, and maybe for some time into the future, the tradeoff of the
web as computing platform versus OS-native ones has been about generality
versus optimizability as enabled by resource specialization (or some such
related thing). Some use cases map well to the general, others not. Only
within the last 3 years have we seen mass-market deployment and use of
Internet-scale software not entirely based on HTML/JS/CSS client
technologies. This has been mostly in the form of native mobile apps. But
these are still web apps: many of them still use the Web as connector (e.g.
HTTP), but the UI is realized using OS-native frameworks. And so what we
often lose is data transparency and portability. For instance, the Our
Choice interactive book app on iOS looks and feels great, but it's worse
than the web in that I cannot even copy text from it. It, like many of the
non-ePub digital publications, is just an archive of images and audio-video
content pre-baked into a handful of layouts.

It's not that non-HTML client technologies degrade the web in and of
themselves; take PDF, for instance. Many PDF documents are linkable and
searchable on the web. But this is because software to read PDF is widely
deployed, which was enabled by widespread access to the PDF standard. I
think we can mitigate the opacity introduced by non-HTML client technologies
if we expand the ways in which we implement links. Imagine encapsulating a
reference to the computation (or its type) that would resolve a
less-transparent data format.

Probably not... in the worst case, someone who wants to be searchable can
 also publish in the legacy format.


The 'legacy' format is the point. I would say that the web isn't 'legacy',
but what makes legacy systems visible. If the Internet is a world of many
diverse islands of computational and network resources, the Web architecture
defines languages for these islands to communicate.

The issues concerning web client UI rendering technologies are orthogonal to
other fundamental issues of the Web architecture.

I think what I've been really trying to get at with my initial question is
this: if the goal of the web architecture is connecting resources, the
current architecture does well at connecting data, but not computation, not
at scale. Perhaps a theme of the developments around HTML5 is evolving the
Web architecture to better support connecting applications. But the Web was
designed for exchanging representations of application state (basically
large-grained data), so many applications won't fit this model. Imagine
trying to run a high-frequency equity trading network atop the FedEx air
freight network, or worse the US Postal Service (or choose your local
postal service). Add to that a client-server hierarchy and now you have to
deal with bottlenecks at those endpoints. Many web-based applications are
designed around this bottleneck, and so I see us running into conceptual
and structural scaling issues.








Re: [fonc] Alternative Web programming models?

2011-06-09 Thread Josh Gargus

On Jun 9, 2011, at 2:04 AM, BGB wrote:

 [...]
 
 and, for browser-as-OS, what exactly will this mean, technically?...
 network-based filesystem and client-side binaries and virtual processes?...
 like, say, if one runs a tiny sand-boxed Unix-like system inside the browser, 
 then push or pull binary files, which are executed, and may perform tasks?...


This isn't quite what I had in mind.  Perhaps hypervisor is better than OS 
to describe what I'm talking about (and, I believe, what Alan is too): a 
thin-as-possible platform that provides access to computing resources such as 
end-user I/O (mouse, multitouch, speakers, display, webcam, etc.), CPU/GPU, 
local persistent storage, and network.  Just enough to enable others to run 
OSes on top of this hypervisor.  

If it tickles your fancy, then by all means use it to run a sand-boxed Unix.  
Undoubtedly someone will; witness the cool hack to run Linux in the browser, 
accomplished by writing an x86 emulator in Javascript 
(http://bellard.org/jslinux/).

However, such a hypervisor will also host more ambitious OSes, for example, 
platforms for persistent capability-secure peer-to-peer real-time collaborative 
end-user-scriptable augmented-reality environments.  (Again, trying to use 
word-associations to roughly sketch what I'm referring to, as I did earlier 
with Croquet-Worlds-Frank-OMeta-whatnot.)

Does this make my original question clearer?

Cheers,
Josh






Re: [fonc] Alternative Web programming models?

2011-06-09 Thread Toby Watson
How about _recursive_ VM/JITs *beneath* the level at which HTML/JS is implemented?

So the browser that ships only supports this recursive VM.

HTML is an application of this that can be evolved by open source at
internet scale / time. Web pages can point at a specific HTML
implementation or a general redirector like google apis to get the
commonly agreed standard version.

Other containers/'plugins' (Squeak, Flash, Java) run as VMs; they can run
their native bytecode/images but also, potentially, expose the VM
interface up again. Nesting VMs is useful also. Though you won't spare
the use-case any love, Flash video players often load multiple ad
SDKs, an arrangement that could benefit from isolation, i.e.
browser-more-like-OS.

If the top and bottom VM interfaces are the same then we can stack
them (as well as nesting them).

The base VM would have exokernel / NaCl-like exposure of the native
capabilities of the device. Exokernel and FluxOS have some nifty tricks
to punch through layers so performance is not so impacted by stacking.

An intermediate VM layer could provide ISA / hardware abstraction so
that everything above that looks the same.
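
A rough sketch of what a same-shape-top-and-bottom VM interface might look
like in C (my own invention for illustration, not a real spec):

typedef struct vm_iface vm_iface;

struct vm_iface {
    void *self;
    /* map a region of the resource below (memory, framebuffer, ...) */
    void *(*map)(void *self, unsigned long addr, unsigned long len);
    /* synchronous request to the layer below (I/O, events, ...) */
    long  (*call)(void *self, int op, void *arg);
};

/* a layer consumes one vm_iface and exposes another of the same shape,
   so layers compose: stacking is just repeated application */
typedef vm_iface *(*vm_layer)(vm_iface *below);

static vm_iface *stack_layers(vm_layer *layers, int n, vm_iface *base)
{
    vm_iface *cur = base;
    for (int i = 0; i < n; i++)
        cur = layers[i](cur);
    return cur;
}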

I re-read history of Smalltalk recently and was reminded of this from Alan,

'Bob Barton, the main designer of the B5000 and a professor at Utah
had said in one of his talks a few days earlier: The basic principle
of recursive design is to make the parts have the same power as the
whole. For the first time I thought of the whole as the entire
computer and wondered why anyone would want to divide it up into
weaker things called data structures and procedures. Why not divide it
up into little computers, as time sharing was starting to? But not in
dozens. Why not thousands of them, each simulating a useful
structure?'

Toby


Re: [fonc] Alternative Web programming models?

2011-06-09 Thread Josh Gargus

On Jun 9, 2011, at 9:26 AM, Cornelius Toole wrote:

 
 
 Some of the implications, anyway.  The benefits of the OS-perspective are 
 clear.  Once it hits its stride, there will be no (technical) barriers to 
 deploying the sorts of systems that we talk about here 
 (Croquet-Worlds-Frank-OMeta-whatnot).  Others will be doing their own cool 
 things, and there will be much creativity and innovation.
 However, elsewhere in this thread it is noted that the HTML-web is 
 structured-enough to be indexable, mashupable, and so forth.  It makes me 
 wonder: is there a risk that the searchability, etc. of the web will be 
 degraded by the appearance of a number of mutually-incompatible 
 better-than-HTML web technologies?  Probably not... in the worst case, 
 someone who wants to be searchable can also publish in the legacy format.
 
 Will the web be degraded by the appearance (or should I say proliferation) of 
 mutually-incompatible, but better than HTML technologies?
 
 First off I would ask better in what way?

Better in every way!  Better languages, better render-target and 
interaction-model than provided by the DOM, better models for distributed 
computation.

It appears that I was assuming more shared context on this list than actually 
exists.  I'll try to fine-tune my question below.


 From a user-experience POV, I think when people say 'better' they mean 
 richer interactivity, which implies better graphical capabilities, access 
 to special hardware (e.g. camera, mic, accelerometer, GPS, GPUs, GPGPU, 
 etc.), faster startup, robustness against network failure, and so on.

Sure, all of these.


 
 [...] And so what we often lose is data transparency and portability. For 
 instance, the Our Choice interactive book app on iOS looks and feels great, 
 but it's worse than the web in that I cannot even copy text from it. It, 
 like many of the non-ePub digital publications, is just an archive of 
 images and audio-video content pre-baked into a handful of layouts.

Right, this loss of transparency and portability is precisely the type of 
downside I'm envisioning when people start deploying new OSes on the browser 
hypervisor (using these terms as I defined them in my previous email).


 
 It's not that non-HTML client technologies degrade the web in and of itself, 
 take PDF for instance.

I didn't mean to imply this.  We're on the same page here.


 Many PDF documents are linkable and searchable on the web. But this is 
 because software to read PDF is widely deployed, which was enabled by 
 widespread access to the PDF standard. I think we can mitigate the opacity 
 introduced by non-HTML client technologies if we expand the ways in which we 
 implement links. Imagine encapsulating a reference to the computation (or its 
 type) that would resolve a less-transparent data format.

I'm not sure I understand your last sentence, nor how you suggest we might 
mitigate the opacity of non-HTML client technologies.  Let's say that you 
embed in an HTML page a view into a persistent 3d virtual environment like 
OpenQwaq.  Can you help me understand how we might expand the ways in which we 
implement links to encompass the rich, persistent, dynamic content in such an 
environment?  (This is basically my original question in a more concrete 
context.)


 Probably not... in the worst case, someone who wants to be searchable can 
 also publish in the legacy format.
 
 The 'legacy' format is the point. I would say that the web isn't 'legacy',

I used quotes to indicate that I was using the term as a shorthand label, 
rather than as descriptive.


 but what makes legacy systems visible. If the Internet is a world of many 
 diverse islands of computational and network resources, the Web architecture 
 defines languages for these islands to communicate. 

 
 The issues concerning web client UI rendering technologies are orthogonal to 
 other fundamental issues of the Web architecture. 

Conceptually, yes.  In practice, no, because the HTML/DOM render-target is also 
the lingua franca that makes the Web searchable and mashupable.  


 
 I think what I've been really trying to get at with my initial question is 
 this. If the goal of the web architecture is for connecting resources, the 
 current architecture does well at connecting data, but not computation, not 
 at 

Re: [fonc] Alternative Web programming models?

2011-06-09 Thread Josh Gargus
That all sounds very cool.

However, I don't think that it's feasible to try to ship something like this as 
standard in all browsers, if only for political reasons.  It would be 
impossible to get Mozilla, Google, Apple, and Microsoft to agree on it.

That's what's cool about NaCl.  It's minimal enough to be a feasible candidate 
for universal adoption.  If it's adopted, then an ecosystem springs up with 
people inventing recursive exokernels to run in the browser.  

Cheers,
Josh




[fonc] MODULARITY: aosd.2012 - Call for Papers on Research Results - Round 2

2011-06-09 Thread Mónica Pinto
*** AOSD 2012 ***

March 25-30, 2012
Hasso-Plattner-Institut Potsdam, Germany
http://aosd.net/2012/

Call for Papers -- Research Results

Modularity transcending traditional abstraction boundaries is
essential for developing complex modern systems - particularly
software and software-intensive systems. Aspect-oriented and other new
forms of modularity and abstraction are attracting a great deal of
attention across many domains within and beyond computer science. As
the premier international conference on modularity, AOSD continues to
advance our knowledge and understanding of separation of concerns,
modularity, and abstraction in the broadest senses of these terms.

The 2012 AOSD conference will comprise two main events: Research
Results and Modularity Visions. Both events invite full, scholarly
papers of the highest quality on new ideas and results in areas that
include but are not limited to complex systems, software design and
engineering, programming languages, cyber-physical systems, and other
areas across the whole system life cycle.

Research Results papers are expected to contribute significant new
research results with rigorous and substantial validation of specific
technical claims based on scientifically sound reflections on
experience, analysis, or experimentation.

Modularity Visions papers (solicited in a separate call) are expected
to present compelling new ideas in modularity, including strong cases
for significance, novelty, validity, and potential impact based on
thorough scholarly argumentation and early results.

AOSD 2012 is deeply committed to eliciting works of the highest
caliber by employing a new approach to reviewing with three separate
paper submission deadlines and review stages. A paper accepted in any
round will be published in the proceedings and presented at the
conference. A paper rejected in an early round may be invited to be
revised and resubmitted for review by the same reviewers in a later
round. There is no guarantee that a revised paper will be accepted.
Authors may, on their own initiative, resubmit a rejected work in a
subsequent round, in which case new reviewers may be appointed.
Authors submitting a revised paper should attach a letter explaining
the revisions made to it.

Topics

Topics of interest include, but are not limited to, the following:

* Complex systems: Modularity has emerged as a vital theme in many
domains, from biology to economics to engineered systems to software
and software-intensive systems, and beyond. AOSD 2012 invites works
that explore and establish connections across such disciplinary
boundaries.
* Software design and engineering: Requirements and domain
engineering; architecture; synthesis; evolution; metrics and
evaluation; economics; testing, analysis, and verification; semantics;
composition and interference; traceability; methodology; patterns.
* Programming languages: Language design; compilation and
interpretation; verification and static program analysis; formal
languages and calculi; execution environments and dynamic weaving;
dynamic and scripting languages; domain-specific languages and other
support for new forms of abstraction.
* Varieties of modularity: Context orientation; feature orientation;
model-driven development; generative programming; software product
lines; traits; meta-programming and reflection; contracts and
components; view-based development.
* Tools: Aspect mining; evolution and reverse engineering;
crosscutting views; refactoring.
* Applications: Data-intensive computing; distributed and concurrent
systems; middleware; service-oriented computing systems;
cyber-physical systems; networking; cloud computing; pervasive
computing; runtime verification; computer systems performance; system
health monitoring and the enforcement of non-functional properties.

Important Dates -- Research Results

(all deadlines are in 2011, 23:59:59 Apia, Samoa, time)

* Round 1: Abstract Submission: April 21 / Paper Submission: April 25 /
Notification: June 22
* Round 2: Abstract Submission: July 14 / Paper Submission: July 18 /
Notification: September 14
* Round 3: Abstract Submission: October 6 / Paper Submission: October 10 /
Notification: December 7

Instructions for Authors

Submissions to AOSD Research Results will be carried out
electronically via CyberChair. (Modularity Visions and Research
Results will have separate CyberChair URLs.) All papers must be
submitted in PDF format. Submissions must be no longer than 12 pages
(including bibliography, figures, and appendices) in SIGPLAN
Proceedings Format (http://www.sigplan.org/authorInformation.htm), 10
point font. Note that by default the SIGPLAN Proceedings Format
produces papers in 9 point font. If you are formatting your paper
using Latex, you will need to set the 10pt option in the
\documentclass command. If you are formatting your paper using Word,
you may wish to use the provided Word template that provides support
for this font size. Please include page numbers in your 

Re: [fonc] Alternative Web programming models?

2011-06-09 Thread Josh Gargus

On Jun 9, 2011, at 11:42 AM, BGB wrote:
 [...]
 
 If it tickles your fancy, then by all means use it to run a sand-boxed Unix.  
 Undoubtedly someone will; witness the cool hack to run Linux in the browser, 
 accomplished by writing an x86 emulator in Javascript 
 (http://bellard.org/jslinux/).
 
 
 interesting...
 less painfully slow than I would have expected from the description...
 
 I wasn't thinking exactly of run an emulator, run an OS in the emulator, but 
 more like a browser plugin which looked and acted similar to a small Unix 
 (with processes and so on, a POSIX-like API, and a filesystem), but which 
 would likely be different in that it would mount content from the website as 
 part of its local filesystem (probably read-only by default), and possibly 
 each process could have its own local VFS.
 

I know.  I was just saying that if someone has written a Javascript x86 
emulator to run Linux in the browser, it's a near-certainty that someone 
will eventually use NaCl to host a POSIX-like environment in the browser.

Cheers,
Josh


Re: [fonc] Alternative Web programming models?

2011-06-09 Thread Josh Gargus

On Jun 9, 2011, at 12:06 PM, BGB wrote:

 On 6/9/2011 11:10 AM, Josh Gargus wrote:
 [...]
 
 
 I don't understand though why one needs recursive exokernels though...

You're taking me too literally.  My point is that the first goal is to get 
widespread adoption of something like NaCl that is good enough to host a POSIX 
environment, a recursive exokernel, or whatever.  Once that first goal is 
achieved, well,  let a hundred flowers bloom.  

Cheers,
Josh





Re: [fonc] Alternative Web programming models?

2011-06-09 Thread BGB

On 6/9/2011 12:20 PM, Josh Gargus wrote:

On Jun 9, 2011, at 12:06 PM, BGB wrote:


[...]


I don't understand though why one needs recursive exokernels though...

You're taking me too literally.  My point is that the first goal is to get widespread 
adoption of something like NaCl that is good enough to host a POSIX environment, a 
recursive exokernel, or whatever.  Once that first goal is achieved, well,  let a 
hundred flowers bloom.



yes, ok.

FWIW, I suspect I tend to be fairly literal/concrete-minded in general 
(although I can still imagine lots of stuff as well...). however, I 
guess I am just not very good at dealing with abstract thinking/concepts/...



but, yeah, widespread Unix-like NaCl would be cool (and personally less 
off-putting than the idea of having to do apps with all CGI+JavaScript 
or by using Flash... although a while back I saw something where Adobe 
was showing off a C-to-AVM2 compiler, and demoing Quake running on top 
of Flash...).


or such...




Re: [fonc] Alternative Web programming models?

2011-06-09 Thread Cornelius Toole
Josh, All

I'm not sure I understand your last sentence, nor how you suggest we might
 mitigate the opacity of non-HTML client technologies.  Let's say that  you
 embed in an HTML page a view into a persistent 3d virtual environment like
 OpenQwaq.  Can you help me understand how we might expand the ways in which
 we implement links to encompass the rich, persistent, dynamic content in
 such an environment?  (this is basically my original question in a more
 concrete context)


I have some background in scientific and information visualization. A couple
of years ago I met T.J. Jankun-Kelly, a researcher at Mississippi State
University, who did dissertation work on a formal model of the visualization
exploration process, called visualization parameter settings, or p-sets:

http://scholar.google.com/scholar?q=author%3Ajankun-kelly+visualization+exploration&hl=en&btnG=Search&as_sdt=1%2C25&as_sdtp=on

For any given visualization result (typically an image), its p-set
encapsulates all the information needed to reproduce that result. Roughly, a
viz p-set is a set of references to the data, the algorithms/filters that
process that data, and the run-time parameters for a system involved in
producing a visualization result. VR is very similar to data visualization,
so suppose you could formulate a usable VR exploration model and a model for
user interaction. If you wanted to reproduce a moment or set of moment(s)
within a virtual world, you don't recapture and replay the user-interaction
event streams; you record and recompute VR state based on deltas between each
p-set for some time or space instance. Any deterministic process or property
should be representable within the model. I don't expect to be able to link
to a perfect reproduction of the way simulated leaves blew in the wind based
on a (pseudo)random algorithm. It then becomes an issue of deciding the
resolution and periodicity of parameter capture. Now, to make a VR time-space
slab linkable, you need to be able to encode the VR-set and VR-set deltas
within some URI format. Perhaps it's something based on VRML/X3D or even
haptic encoding:
http://scholar.google.com/scholar?hl=en&q=haptic+compression+encoding+author%3Akammerl&btnG=Search&as_sdt=0%2C25&as_ylo=&as_vis=0

(disclaimer: I have virtually no expertise in VR or tele-haptics or anything
else for that matter.)

To address different VR engine implementations, you extend the VR-set model
based on some type of domain semantics to formulate a generalized VR-set
(say, VRML/X3D plus a transformation model for dynamism). One could then
define adaptor interfaces to map a transformation of the general VR-set
model to a transformation of a specific implementation's internal model.

To the extent that process models could be devised for a given application
domain, you could take similar approaches to make other types of computation
linkable and searchable.
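
To make that concrete, here is a toy sketch of serializing a p-set into a
URI (the field names and scheme are invented for illustration; a real
design would need escaping, deltas, versioning, and so on):

#include <stdio.h>

typedef struct {
    const char *data;    /* reference to the data set             */
    const char *filter;  /* algorithm/filter applied to the data  */
    double      t;       /* a runtime parameter: the time instant */
} pset_t;

static void pset_to_uri(const pset_t *p, char *buf, size_t n)
{
    snprintf(buf, n, "pset://view?data=%s&filter=%s&t=%.3f",
             p->data, p->filter, p->t);
}

int main(void)
{
    pset_t p = { "climate.nc", "isosurface", 42.5 };
    char uri[256];
    pset_to_uri(&p, uri, sizeof uri);
    puts(uri);  /* pset://view?data=climate.nc&filter=isosurface&t=42.500 */
    return 0;
}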




Re: [fonc] Alternative Web programming models?

2011-06-09 Thread Casey Ransberger


On Jun 9, 2011, at 11:01 AM, Josh Gargus j...@schwa.ca wrote:

 Conceptually, yes.  In practice, no, because the HTML/DOM render-target is 
 also the lingua franca that makes the Web searchable and mashupable.  

So I'd like to first point out that you're making a great point here, so I hope 
it isn't too odd that I'm about to try to tear it down. Devil's advocate?

While the markup language has proven quite mashable/searchable, I think it's 
worth noting that just about *any* structured, broadly applied _convention_ 
will give you that; it could have been CSV, if SGML hadn't been tapped. 

One of the nicest things about markup has been free-to-cheap accessibility for 
blind folks... with most languages you can embed in a web page, this tends to 
go out the door quickly, and AJAX probably doesn't help either. If the browser 
was an operating system, I imagine we'd find a more traditional route to this 
kind of accessibility, which is about text-to-speech, and if you have the text, 
you should be able to search it too. 

Take a moment to imagine how different the world might be today if the 
convention had been e.g. s-exprs. How many linguistic context shifts do you 
think you'd need to build a web application in that world? While I love 
programming languages, when I have a deadline? bouncing back and forth between 
five or six languages probably hurts my productivity. 

Not to mention that we end up compensating in industry by hyperspecializing. I 
wish it was easier to hire people who just knew how to code, instead of having 
to qualify them as backend vs. frontend. I mean seriously. It's like 
specializing in putting a patty that someone else cooked on a bun, in terms of 
personal empowerment. Factory work, factory thinking. I'm the button pusher and 
your job is to assemble the two parts that I send down the line every five 
seconds when I push the button. Patty, bun.

I hoped Seaside might help a touch, and the backend guys all seem to really dig 
it (hey, now we can make web apps all by ourselves, without the burden of 
wrangling boring markup-goop), but the frontend folks I've talked to (in-trench, 
not online) are hard pressed to find time to learn a whole new system. 

Since they build the part that the stakeholders actually see, I think they end 
up with more in the way of random asks from business folks, which have a way 
of sailing clear over engineering managers, etc. 

There's also the problem wherein you have a whole bunch of people out there 
who've never seen anything else and don't have any context for why someone like 
me might be displeased with the current state of affairs. 

It'd be nice to be able to sort out how many of these problems are 
cognitive+technical versus cultural/social. 

The most interesting thing I've seen so far was when I was at a (now 
sold/defunct) company called Snapvine. We integrated telephony with social 
networking sites. Anyway, I spent more time looking at MySpace than I wanted 
to, and was stunned to discover:

Kids with MySpace pages were learning HTML and CSS just to trick out and add 
something unique to their profiles, and didn't seem to relate what they were 
doing to software at all. I wasn't sure if I was supposed to smile or frown 
when that realization hit me. 

That's about when I started talking to people about HyperCard again, which is 
ultimately how I found my way to Squeak, and then this list. 




[fonc] A Message for Frank about HyperCard

2011-06-09 Thread Casey Ransberger
Frank,

Really good to hear that you've taken your first steps. You have great parents 
and a promising future. Keep up the good work!

I was really impressed that you've already gotten into HyperCard; I must have 
been fully 12 or 13 years old before I noticed it sitting on my own computer. I 
thought I'd let you know about a trouble I had with it, once, way back when, in 
case you ever run into anything similar in your own journey through life:

I looked through the menus, and I couldn't find the "share this stack" option.

It was particularly frustrating because I knew I wanted to click it, but I 
didn't really know why back then. It took me another roughly 15 years to figure 
that part out. Turns out there's a connection between epidemiology and the 
transmission of ideas, and the upshot is that the "share this" option ends up 
being the most powerful thing in the whole system, regardless of what other 
features are there.

Also, don't let the popular kids get you down. A little bit of "share this" can 
change everything overnight when it comes to popularity. Hope this helps!


Your pal,

Casey


Re: [fonc] Alternative Web programming models?

2011-06-09 Thread Casey Ransberger
You know this isn't usable with the browser I have handy at the moment, but I 
can already see it. Really interesting, I can imagine it would look more or 
less like this. Thanks for putting me onto this, Ian. 

On Jun 9, 2011, at 2:52 PM, Ian Piumarta piuma...@speakeasy.net wrote:

 On Jun 9, 2011, at 14:38 , Casey Ransberger wrote:
 
 Take a moment to imagine how different the world might be today if the 
 convention had been e.g. s-exprs.
 
 http://hop.inria.fr/
 
 
 
