On Wed, 09 Sep 2009 04:54:30 -0700, frederic <fduboi...@gmail.com> wrote:
On Wed, 09 Sep 2009 10:06:20 +0200, Anselm R Garbe <garb...@gmail.com> wrote:
2009/9/9 Pinocchio <cchino...@gmail.com>:
I am saying this because even after a lot of marketing muscle and
commercial force, it has been hard for Adobe, Sun and Microsoft to push
their rendering stacks over HTML + Javascript. Flash is the only thing
which gained major adoption... and the picture might change once HTML 5
comes out.
The Flash strategy is definitely what I have in mind.
I guess the problem would be convincing the 100s of millions of people to
install our plugin. Much worse than converting web app developers to our
stack. [I have a feeling I didn't quite get your point here...]
If you can attract the developers, the users will probably follow. The
perfect scenario is when a programmer develops a killer application using
your technology: users install whatever is required in order to run the
app. It seems to me that convincing a developer to use your platform is
the extremely difficult part. This is where the technology has to be a lot
better in a lot of areas.
Hmmm... I guess doing both doesn't harm anybody :)
Well, before taking the penetration aspect too far -- it is more
important to discuss the actual new web stack first. Key to it is that
it provides benefits over the existing web stack in many respects (like
flash *yuck* or silverlight -- not too sure about silverlight adoption
though); that in itself will drive adoption. (Packaging the new
browser as a plugin for legacy browsers would make a lot of sense
though, to drive adoption.)
But what I'm more interested in is this:
- knowing the limitations of HTTP and complexity of HTTP/1.1 compliant
web servers, shouldn't the new web stack rely on a new protocol
instead?
I'm not a specialist, but it seems to me that the only limitation of HTTP
is its statelessness, which forces state management onto an upper layer at
the cost of extra complexity. AFAIK caching mechanisms and
security/encryption are there, but could easily be simpler.
So it looks like it is a secondary issue.
A new protocol would be a good idea, though we should probably stick to a
subset of HTTP/1.1. Can somebody come up with a transport scenario which
cannot be fulfilled by HTTP/1.1?
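Just to give a feel for how small such a subset could be, here is a rough
sketch in Python (purely illustrative, not a concrete proposal) of a client
that only knows GET, a Host header and Connection: close:

    # Hypothetical sketch: a client for a minimal HTTP/1.1 subset
    # (GET only, Host header, Connection: close, no chunked encoding,
    # no persistent connections). The point is only how little of
    # HTTP/1.1 we might actually need.
    import socket

    def simple_get(host, path, port=80):
        request = (
            "GET %s HTTP/1.1\r\n"
            "Host: %s\r\n"
            "Connection: close\r\n"
            "\r\n" % (path, host)
        )
        with socket.create_connection((host, port)) as sock:
            sock.sendall(request.encode("ascii"))
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        raw = b"".join(chunks)
        header, _, body = raw.partition(b"\r\n\r\n")
        return header.decode("iso-8859-1"), body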
I think it is a good idea to keep the transport layer stateless. That cleanly
separates caching from transport, which IMHO is a good thing as applications may
want to decide what and how to cache data. However, I do think that putting in
support for a content-hash-based local cache right at the transport layer would
be a good idea. By support I really mean that object URIs contain the content
hash, so that the browser can check after resolving a URI whether it really
needs to fetch the content at all.
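To make the content-hash idea a bit more concrete, a sketch (Python; the
cache location, URI scheme and function names are all invented for
illustration) of what the browser-side lookup could look like if an
object's SHA-256 is carried in its URI:

    # Hypothetical content-addressed cache: a URI carries the SHA-256 of
    # the object (e.g. "newweb://example.org/app#sha256=ab12..."), so the
    # browser can serve it locally without touching the network.
    import hashlib
    import os

    CACHE_DIR = os.path.expanduser("~/.newweb/cache")

    def cache_path(content_hash):
        return os.path.join(CACHE_DIR, content_hash)

    def fetch(uri, content_hash, download):
        """Return the object's bytes, downloading only on a cache miss."""
        path = cache_path(content_hash)
        if os.path.exists(path):
            with open(path, "rb") as f:
                return f.read()            # cache hit: no round trip
        data = download(uri)               # cache miss: go over the wire
        if hashlib.sha256(data).hexdigest() != content_hash:
            raise ValueError("content does not match its hash: %s" % uri)
        os.makedirs(CACHE_DIR, exist_ok=True)
        with open(path, "wb") as f:
            f.write(data)
        return data

The point is simply that the transport layer never has to guess about
freshness: either the hash matches a local object or it does not.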
- knowing the limitations of today's web applications, how should the
content be organized? Should there be a strict separation of
data/content and views? What would a scripting interface look like?
The Web has evolved from servers of simple, static, linked-together
documents to full-blown applications and two-way communication (FB,
twitter etc.). All these use cases coexist nowadays.
"Separation of data and views" is clearly a variation on the "code/data"
duality. A priori, one should be neutral on this, in order to "perform" in
an average way in all use cases. IOW, it should suck averagely in all
cases.
As I see it, a simple, static document should be a program that consists
essentially of a few "print" statements of the text, plus some code for
link-buttons, font selection etc. Of course, the scripting language
must be chosen so that it doesn't get too much in the way in this case. A
full-blown app would obviously be 90% code with a few bits of static
text.
However, in this approach the content is mixed with the way it is
displayed; I think the idea must be refined so that a client may extract
the content rather than just displaying it.
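One way such a refinement could look (a toy sketch in Python with made-up
names): the document-program emits structured nodes instead of drawing
directly, so a client can either render the stream or just harvest the
text:

    # Toy sketch: a "document" is a program that yields structured nodes
    # (text, links) rather than pixels, so a non-graphical client can
    # still extract the content.
    class Text:
        def __init__(self, body, font="serif"):
            self.body, self.font = body, font

    class Link:
        def __init__(self, label, target):
            self.label, self.target = label, target

    def document():
        # A static page is mostly a few "print"-like statements.
        yield Text("Hello, this is a static page.")
        yield Link("next chapter", "newweb://example.org/ch2#sha256=...")
        yield Text("Some closing remarks.", font="mono")

    def extract_text(doc):
        """A client that only wants the content, not the rendering."""
        parts = []
        for node in doc():
            if isinstance(node, Text):
                parts.append(node.body)
            elif isinstance(node, Link):
                parts.append(node.label)
        return " ".join(parts)

    print(extract_text(document))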
I agree wholeheartedly. There is no point in separating code and data at the
browser layer. As I have mentioned earlier, I think a cross-platform minimal
bytecode would be good. I would appreciate feedback on the pros and cons of
this approach.
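To give the bytecode idea some shape, a toy stack-machine interpreter in
Python (the instruction set is invented for the example; nothing about a
real instruction set is implied):

    # Toy illustration of a minimal, portable stack-machine bytecode.
    # The point is only that such an interpreter can stay very small on
    # every platform, and any language can target it.
    def run(program):
        stack = []
        pc = 0
        while pc < len(program):
            op = program[pc]
            if op == "PUSH":
                pc += 1
                stack.append(program[pc])
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "PRINT":
                print(stack.pop())
            elif op == "HALT":
                break
            pc += 1
        return stack

    # e.g. prints 5
    run(["PUSH", 2, "PUSH", 3, "ADD", "PRINT", "HALT"])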
What would extension points look like?
I'm not sure what you are referring to, but one would use the extension
mechanism of the interpreter of the scripting language.
Extensions should be downloaded, cached and run just like the rest of the
web app. There is little point in separating local extensions from remote
"webapps".
What about security to begin with?
This is actually two questions:
- security of the connection,
- safety of the interpreter. As someone else pointed out, the whole thing
must run in a sandbox.
Oh... this is the big hairy mess that will need some thought. I think web
developers would like cross-site code execution + data access. However, at the
same time the user should be given control over what a website can or cannot
access. I don't think much of this can be made backwards compatible with HTML
+ Javascript. If done well, this could be one of the shining advantages our
web stack could have compared to HTML + Javascript.
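As a strawman for "the user decides what a site may access" (Python again,
every name here is invented), a per-origin permission table that the
sandboxed runtime consults before any sensitive operation:

    # Strawman per-origin permissions: the runtime asks this table before
    # any cross-site request or local data access, and the user (not the
    # site) is the one who edits it.
    PERMISSIONS = {
        "example.org": {"net:example.org", "net:api.example.org", "storage"},
        "ads.example.com": set(),          # default: nothing allowed
    }

    class PermissionDenied(Exception):
        pass

    def check(origin, capability):
        allowed = PERMISSIONS.get(origin, set())
        if capability not in allowed:
            raise PermissionDenied("%s may not use %s" % (origin, capability))

    def cross_site_fetch(origin, target_host, fetch):
        check(origin, "net:" + target_host)    # explicit, user-granted
        return fetch(target_host)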
- what content should be representable?
The more, the better :) Although one may select only one or two formats
for each category of content (image, sound, video, etc.).
3D? I think I would just stick with a runtime library which supports 3D, sound
and video instead of speccing out a declarative language for the content.
I think we need a wiki page for this stuff. Anselm?
--
Pinocchio