Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Erik Möller
On Tue, 04 Sep 2012 20:49:57 +0200, Kornel Lesiński   
wrote:


until improvements in OS/drivers/hardware make this a non-issue (e.g. if
the OS can notify applications before the gfx context is lost, then
browsers could snapshot then and the problem will be gone for good)


We've just worked hard to get this behaviour into GPUs to allow
long-running shaders to be terminated for security reasons, so it's not
likely to go away. Besides, snapshotting right before a lost context
doesn't help us at all. For all we know, the GPU could be halfway through
rendering something when the event is triggered... even if we could read
back the half-rendered content in the render target, how do we generate
the correct output from there? We'd have to take our DeLorean back to
before the frame was started and replay the rendering commands.


--
Erik Möller
Core Gfx Lead
Opera Software
twitter.com/erikjmoller


Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Erik Möller

On Tue, 04 Sep 2012 19:15:46 +0200, Boris Zbarsky  wrote:


On 9/4/12 1:02 PM, David Geary wrote:

Sure, but those use cases will be in the minority


What makes you say that?

Outside of games, I think they're a majority of the canvas-using things  
I've seen.



I think it makes the most sense to add a context lost handler to the
spec and leave it up to developers to redraw the canvas.


OK, yes, let's call that option 9.  And I'll add option 10: do nothing.

So now our list is:


1)  Have a way for pages to opt in to software rendering.
2)  Opt canvases in to software rendering via some sort of heuristic
 (e.g. software by default until there has been drawing to it for
 several event loop iterations, or whatever).
3)  Have a way for pages to opt in to having snapshots taken.
4)  Auto-snapshot based on some heuristics.
5)  Save command stream.
6)  Have a way for pages to explicitly snapshot a canvas.
7)  Require opt in for hardware accelerated rendering.
8)  Authors use toDataURL() when they want their data to stick around.
9)  Context lost event that lets authors regenerate the canvas.
10) Do nothing, assume users will hit reload if their canvas goes blank.

Any other options, before we start trying to actually decide which if  
any of these might be workable?


-Boris


It's important to discuss implementation details so we don't spec
something that's not implementable on all platforms. That said, we
obviously should try to stay clear of specifying how things should be
implemented and instead spec what end result we're after. There's little
distinction, from the end user's point of view, between for example
snapshotting and saving the command stream.


Can we live with a weaker statement than a guarantee that the canvas
content will be retained? Perhaps a "best effort" may be enough?
It's obviously in the vendors' interest to do as good a job as possible
of retaining canvas content, and I believe, for example, that on Android
it's possible to get notified before a power-save event occurs. That
would enable us to do a read-back and properly restore the canvas
(investigation needed). For applications that just cannot lose any data,
canvas obviously isn't the best choice for storage. If we do have the
context lost event, then at least new versions of those applications can
be sure to render out all their data to a canvas and move it over to an
img while listening for the context lost event. Existing applications
that cannot lose data... well, maybe that's a loss we'll have to accept,
but by all means dazzle me with a brilliant solution if you have one.
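The render-to-img fallback described above could be sketched like this.
Note that the "contextlost" event name for a 2D canvas is hypothetical
here (nothing of the sort was specced at the time); toDataURL() is the
real canvas API.

```javascript
// A minimal sketch of the fallback pattern described above. The
// "contextlost" event name for 2D canvas is hypothetical (not in any
// spec); toDataURL() is the real canvas API. `canvas` needs
// toDataURL/addEventListener, `img` needs a src property.
function preserveRendering(canvas, img, regenerate) {
  // Render out the finished drawing to an <img>, which survives a lost
  // GPU context as ordinary image data.
  img.src = canvas.toDataURL("image/png");
  // If the context is lost anyway, redraw from application state.
  canvas.addEventListener("contextlost", regenerate, false);
}
```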


JFYI, I'd say by far the most common lost-context scenario on desktop
would be browsing to your driver manufacturer's page, then downloading
and installing a new driver.


--
Erik Möller
Core Gfx Lead
Opera Software
twitter.com/erikjmoller


Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Erik Möller
On Mon, 03 Sep 2012 23:47:57 +0200, Tobie Langel   
wrote:



I apologize in advance, as this is slightly off-topic. I've been
unsuccessfully looking for info on how Canvas hardware acceleration
actually works and haven't found much.

Would anyone have pointers?

Thanks.

--tobie


I think that varies a lot between vendors, and I haven't seen any
externally available documentation on the topic. In general, images are
drawn as textured quads, and paths are triangulated and drawn as
tristrips. Some level of caching is performed to reduce draw calls and
improve performance. Some operations that are hard to do in hardware
make use of stencil buffers and multipass rendering. I think that's
about as specific as the information you'll get.


If you're really interested, run your favourite browser through PIX:
http://en.wikipedia.org/wiki/PIX_(Microsoft)


--
Erik Möller
Core Gfx Lead
Opera Software
twitter.com/erikjmoller


Re: [whatwg] Hardware accelerated canvas

2012-09-03 Thread Erik Möller
On Mon, 03 Sep 2012 03:37:24 +0200, Charles Pritchard   
wrote:



Canvas GPU acceleration today is done via transform3d and transitions.


I hope everyone is aware that this connection is just coincidental. The
fact that one vendor decided to flip the hardware acceleration switch
when there was a 3d transform doesn't mean everyone will. Hardware
acceleration and 3d transforms are separate features; 3d transforms
should be available in software rendering as well.


Most [installed] GPUs are not able to accelerate the Canvas path drawing  
mechanism.

They are able to take an array of floats for WebGL, though.


It's true that there is no dedicated hardware for rendering paths in
today's GPUs, but they are very good at rendering line segments and
triangle strips, and paths can be triangulated. With some preprocessing,
paths can even be rendered directly using shaders:
http://research.microsoft.com/en-us/um/people/cloop/loopblinn05.pdf
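As a toy illustration of "paths can be triangulated": a convex polygon
fans into triangles from its first vertex. Real canvas implementations
need a general triangulator (concave paths, holes, self-intersections);
this sketch only shows the idea.

```javascript
// Toy triangulation: fan a convex polygon from its first vertex.
// Only valid for convex polygons; shown purely to illustrate how a
// path becomes GPU-friendly triangles.
function triangulateConvex(points) {
  const tris = [];
  for (let i = 1; i + 1 < points.length; i++) {
    tris.push([points[0], points[i], points[i + 1]]);
  }
  return tris; // an n-gon yields n-2 triangles for the vertex buffer
}
```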




What is really meant here by Canvas GPU acceleration?



I can of course only speak for Opera, but we strive to hardware
accelerate all parts of the drawing, and for canvas that also entails
triangulating paths and batching to reduce the number of draw calls.
I.e. using an image atlas to draw several pieces in succession should
give a good performance boost. Of course, if we wanted to take it one
step further, adding support at the API level for drawing multiple
images would be good.
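The image-atlas idea above can be sketched as follows: several logical
images live in one big image, so successive draws all sample the same
texture and can be batched. The atlas layout (the `frames` table) is
made up for illustration; drawImage's 9-argument source-rectangle form
is the real canvas API.

```javascript
// Draw one named sprite out of a shared atlas image. `frames` maps
// sprite names to made-up source rectangles inside the atlas;
// drawImage(image, sx, sy, sw, sh, dx, dy, dw, dh) is the real API.
function drawSprite(ctx, atlas, frames, name, dx, dy) {
  const f = frames[name]; // source rectangle inside the atlas
  ctx.drawImage(atlas, f.sx, f.sy, f.sw, f.sh, dx, dy, f.sw, f.sh);
}
```

Because every call samples the same atlas, an implementation can submit
many such draws as a single GPU draw call.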


--
Erik Möller
Core Gfx Lead
Opera Software
twitter.com/erikjmoller


Re: [whatwg] Hardware accelerated canvas

2012-09-03 Thread Erik Möller
On Mon, 03 Sep 2012 00:14:49 +0200, Benoit Jacob   
wrote:



- Original Message -

On Sun, 2 Sep 2012, Erik Möller wrote:
>
> As we hardware accelerate the rendering of <canvas>, not just with
> the webgl context, we have to figure out how to best handle the
> fact that GPUs lose the rendering context for various reasons.
> Reasons for losing the context differ from platform to platform but
> range from going into power-save mode, to internal driver errors,
> to the famous long-running shader protection.
> A lost context means all resources uploaded to the GPU will be gone
> and have to be recreated. For canvas it is not impossible, though
> IMO prohibitively expensive, to try to automatically restore a lost
> context and guarantee the same behaviour as in software.
> The two options I can think of would be to:
> a) read back the framebuffer after each draw call.
> b) read back the framebuffer before the first draw call of a
> "frame" and build a display list of all other draw operations.
>
> Neither seems like a particularly good option if we're looking to
> actually improve on canvas performance. Especially on mobile, where
> read-back performance is very poor.
>
> The WebGL solution is to fire an event and let the JS
> implementation deal with recovering after a lost context:
> http://www.khronos.org/registry/webgl/specs/latest/#5.15.2
>
> My preferred option would be to make a generic context lost event
> for canvas, but I'm interested to hear what people have to say
> about this.

Realistically, there are too many pages that have 2D canvases that are
drawn to once and never updated for any solution other than "don't lose
the data" to be adopted. How exactly this is implemented is a quality of
implementation issue.


With all the current graphics hardware, this means "don't use a GL/D3D  
surface to implement the 2d canvas drawing buffer storage", which  
implies: "don't hardware-accelerate 2d canvases".


If we agree that 2d canvas acceleration is worth it despite the  
possibility of context loss, then Erik's proposal is really the only  
thing to do, as far as current hardware is concerned.


Erik's proposal doesn't worsen the problem in any way --- it
acknowledges a problem that already exists and offers Web content a way
to recover from it.


Hardware-accelerated 2d contexts are no different from  
hardware-accelerated WebGL contexts, and WebGL's solution has been  
debated at length already and is known to be the only thing to do on  
current hardware. Notice that similar solutions preexist in the system  
APIs underlying any hardware-accelerated canvas context: Direct3D's lost  
devices, EGL's lost contexts, OpenGL's ARB_robustness context loss  
statuses.


Benoit



--
Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


I agree with Benoit: this is an already existing problem; I'm just
pointing the spotlight at it. If we want to take advantage of hardware
acceleration for canvas, this is an issue we will have to deal with.


I don't particularly like this idea, but for the sake of having all the
options on the table I'll mention it. We could default to the "old
behaviour" and have an opt-in for hardware-accelerated canvas, in which
case you would have to respond to said context lost event. That would
allow existing content to keep working as it is, without changes. It
would be more work for vendors, but it's up to each vendor to decide how
best to solve it, either by doing it in software or by using the
expensive read-back alternative in hardware.


Like I said, it's not my favourite option, but I agree it's bad to break
the web.


--
Erik Möller
Core Gfx Lead
Opera Software
twitter.com/erikjmoller


[whatwg] Hardware accelerated canvas

2012-09-02 Thread Erik Möller
As we hardware accelerate the rendering of <canvas>, not just with the
webgl context, we have to figure out how to best handle the fact that
GPUs lose the rendering context for various reasons. Reasons for losing
the context differ from platform to platform but range from going into
power-save mode, to internal driver errors, to the famous long-running
shader protection.
A lost context means all resources uploaded to the GPU will be gone and
have to be recreated. For canvas it is not impossible, though IMO
prohibitively expensive, to try to automatically restore a lost context
and guarantee the same behaviour as in software.

The two options I can think of would be to:
a) read back the framebuffer after each draw call.
b) read back the framebuffer before the first draw call of a "frame" and  
build a display list of all other draw operations.


Neither seems like a particularly good option if we're looking to
actually improve on canvas performance. Especially on mobile, where
read-back performance is very poor.


The WebGL solution is to fire an event and let the JS implementation
deal with recovering after a lost context:
http://www.khronos.org/registry/webgl/specs/latest/#5.15.2
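In practice the WebGL approach looks like the sketch below. The event
names ("webglcontextlost" / "webglcontextrestored") are from the WebGL
spec; the helper wiring itself is just an illustration that works with
any object exposing addEventListener.

```javascript
// The WebGL spec's lost-context recovery pattern. Event names are from
// the spec; the wiring is a sketch usable with any addEventListener
// target.
function handleContextLoss(canvas, recreateResourcesAndRedraw) {
  canvas.addEventListener("webglcontextlost", function (event) {
    // Without preventDefault() the context is never restored.
    event.preventDefault();
  }, false);
  canvas.addEventListener("webglcontextrestored", function () {
    // Everything uploaded to the GPU is gone: rebuild shaders,
    // textures and buffers, then repaint.
    recreateResourcesAndRedraw();
  }, false);
}
```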


My preferred option would be to make a generic context lost event for  
canvas, but I'm interested to hear what people have to say about this.


For reference (since our own BTS isn't public yet):
http://code.google.com/p/chromium/issues/detail?id=91308


--
Erik Möller
Core Gfx Lead
Opera Software
twitter.com/erikjmoller


[whatwg] Canvas and drawWindow

2011-03-11 Thread Erik Möller
I bet this has been discussed before, but I'm curious as to what people
think about breathing some life into a more general version of Mozilla's
canvas.drawWindow(), which draws a snapshot of a DOM window into the
canvas?

https://developer.mozilla.org/en/drawing_graphics_with_canvas#section_9

I know there are some security considerations (for example listed in the  
source of drawWindow):


 // We can't allow web apps to call this until we fix at least the
 // following potential security issues:
 // -- rendering cross-domain IFRAMEs and then extracting the results
 // -- rendering the user's theme and then extracting the results
 // -- rendering native anonymous content (e.g., file input paths;
 // scrollbars should be allowed)

I'm no security expert, but it seems to me there's an easy way to at
least cater for some of the use-cases: always set origin-clean to false
when you use drawWindow(). Sure, it's a bit overkill to always mark it
dirty, but it's simple and would block you from reading any of the
pixels back, which would address most (all?) of the security concerns.


I'm doing a WebGL demo, so the use-case I have for this would be to
render a same-origin page to a canvas and smack it onto a monitor in the
3d world. Intercepting mouse clicks, transforming them into 2d and
passing them on would of course be neat as well, and would probably open
up whatever use-cases you could dream up.


So, I'm well aware it's a tad unconventional, but perhaps someone has a
better idea of how something like this could be accomplished, e.g. via
SVG and foreignObject, or punching a hole in the canvas and applying a
transform, etc. I'd like to hear your thoughts.


--
Erik Möller
Core Developer
Opera Software


Re: [whatwg] [hybi] WebSockets: UDP

2010-06-11 Thread Erik Möller
On Fri, 11 Jun 2010 06:25:41 +0200, Lars Eggert   
wrote:



Hi,

on a purely managerial level, let me point out that this work is far  
beyond the current charter of the HYBI WG. This defines an entirely new  
protocol, and will definitely require a charter discussion.


(If there is community/developer interest, we should by all means have  
that discussion.)


Lars



Sure, this just felt like the appropriate venue for these initial  
discussions. If not please say so and we'll move it somewhere else.


From my personal experience discussing this with developers and people
from adjacent industries, there's significant interest in the _final
product_ but not so much interest in getting directly involved in the
process.


--
Erik Möller
Core Developer
Opera Software


Re: [whatwg] [hybi] WebSockets: UDP

2010-06-11 Thread Erik Möller
On Fri, 11 Jun 2010 00:21:38 +0200, Mark Frohnmayer  
 wrote:



TorqueSocket is not in the same category as RakNet or OpenTNL


Ah, sorry, I got the names mixed up; I meant to say RakNet/OpenTNL, not
RakNet/TorqueSocket.



I'd recommend doing some real-world testing for max packet size.  Back
when the original QuakeWorld came out it started by sending a large
connect packet (could be ~8K) and a good number of routers would just
drop those packets unconditionally.  The solution (iirc) was to keep
all packet sends below the Ethernet max of 1500 bytes.  I haven't
verified this lately to see if that's still the case, but it seems
real-world functionality should be considered.


Absolutely, that's why the path-MTU attribute was suggested. The ~64k
limit is an absolute limit, though, at which sends can be rejected
immediately without even trying.



If WebSocket supports an encrypted and unencrypted mode, why would the
real-time version not support data security and integrity?


The reasoning was that if you do need data security and integrity, a
secure WebSocket over TCP uses the same state-of-the-art implementation
the browsers already have. Secure connections over UDP would require
either a full TCP-over-UDP implementation (to use TLS) or a second
implementation that would need to be maintained. That implementation
would be either a very complex piece of software or clearly inferior to
what users are accustomed to.
So what's a good use-case where you want a secure connection over UDP
and cannot use a second TLS connection?



Client puzzles allow the host to allocate zero resources for a pending
connection until it knows that the source address of the client
request is valid and that the client has done some work; you could
still take a similar (though not client computationally expensive)
approach by having the host hash the client identity (IP/port) with a
server-generated secret.  Any approach that allocates memory or does
work on the host without verifying the client source address first is
vulnerable to a super-trivial DOS attack (connection depletion before
even any bandwidth overwhelm).


Right, this is probably an area that needs to be looked at more
carefully if/when real work is started on a spec.



I'd propose that doing this in the javascript level would result in
unnecessary extra overhead (sequence numbering, acknowledgements) that
could easily be a part of the underlying protocol.  Having implemented
multiple iterations of a high-level networking API, the notification
function is a critical, low-overhead tool for making effective
higher-level data guarantees possible.


Yes, no doubt it's useful for those implementing higher-level APIs. As
usual, it's a matter of at what level to place the API.


--
Erik Möller
Core Developer
Opera Software


Re: [whatwg] [hybi] WebSockets: UDP

2010-06-10 Thread Erik Möller
During the Opera Network Seminar held in Oslo this week, I discussed the
possible addition of a new wsd: URL scheme to WebSockets that would
relax packet resends and enable demanding real-time applications to be
written. I'd like to summarize some of the conclusions a few of us came
to when discussing this (informally).


Regarding the discussions on what level the API of a UDP-WebSocket
should sit at: one of the most important aspects to remember is that,
for this to be interesting to application developers, we need all the
browser vendors to support the feature in a compatible way. Therefore it
doesn't seem reasonable to standardize and spec a higher-level network
API akin to RakNet / Torque Socket and hope all vendors will be willing
to spend the (quite large amount of) resources required for their own
implementation of TCP over UDP, bandwidth throttling, etc. In our
opinion we're much better off standardizing a minimal UDP-like socket.
Most application developers will likely be able to work with a mix of
XMLHttpRequest, WebSockets and this new UDP-WebSocket to achieve the
same functionality provided by those higher-level APIs. If deemed
necessary for an application, the higher-level network API can be
written in JavaScript and work on top of the much smaller, hopefully
cross-browser compatible, UDP-WebSocket API.
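A "higher-level network API written in JavaScript" could be as small as
the sketch below: an unreliable-but-ordered channel that stamps each
outgoing datagram with a sequence number and silently drops late or
duplicate arrivals. The UDP-WebSocket itself is hypothetical; `send`
stands in for its send method.

```javascript
// Unreliable-but-ordered channel layered over a hypothetical
// UDP-WebSocket. Outgoing datagrams carry a sequence number; arrivals
// older than the newest delivered one are dropped.
function makeOrderedChannel(send, onMessage) {
  let nextSeq = 0;   // next outgoing sequence number
  let lastSeen = -1; // highest sequence number delivered so far
  return {
    send(payload) {
      send(JSON.stringify({ seq: nextSeq++, payload }));
    },
    receive(raw) {
      const msg = JSON.parse(raw);
      if (msg.seq <= lastSeen) return; // stale or duplicate: drop it
      lastSeen = msg.seq;
      onMessage(msg.payload);
    }
  };
}
```

For a game, this is exactly the "movement data" channel: a late position
update is useless once a newer one has arrived.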


As discussed the following features/limitations are suggested:
-Same API as WebSockets with the possible addition of an attribute that  
allows the application developer to find the path MTU of a connected  
socket.

-Max allowed send size is 65,507 bytes.
-Socket is bound to one remote address at creation and stays connected to  
that host for the duration of its lifetime.
-IP Broadcast/Multicast addresses are not valid remote addresses and only  
a set range of ports are valid.
-Reliable handshake with origin info (a connection timeout will trigger
the close event)
-Automatic keep-alives (to detect a forced close at the remote host and
keep NAT traversal active)

-Reliable close handshake
-Sockets open sequentially (like current DOS protection in WebSockets) or  
perhaps have a limit of one socket per remote host.

-Cap on number of open sockets per host and global user-agent limit.

Some additional points that were suggested on this list were:
-Key exchange and encryption
 If you do want to have key exchange and encryption you really shouldn't  
reinvent the wheel but rather use a secure WebSocket connection in  
addition to the UDP-WebSocket. Adding key exchange and encryption to the  
UDP-WebSocket is discouraged.


-Client puzzles to reduce connection-depletion/CPU-depletion attacks on
the handshake.
 If the goal is to prevent DOS attacks on the accepting server, this
seems futile. Client puzzles only raise the bar ever so slightly for an
attacker, so this is also discouraged.


-Packet delivery notification as part of the API.
 Again, this is believed to be better left outside the UDP-WebSocket
spec and implemented in JavaScript if the application developer requires
it.


Best Regards,

--
Erik Möller
Core Developer
Opera Software


Re: [whatwg] WebSockets: UDP

2010-06-02 Thread Erik Möller
On Wed, 02 Jun 2010 19:48:05 +0200, Philip Taylor  
 wrote:


I'm glad the discussion on this has taken off a bit. I've spoken to a
few more game devs, and even though it's still relatively few, there's a
slight majority that prefers the interface to be at the
"Torque/RakNet level" rather than at the "UDP-socket-wrapper level". I'm
hoping I can talk a few of them into joining the list and taking a more
active part in the discussions.



So they seem to suggest things like:
- many games need a combination of reliable and unreliable-ordered and
unreliable-unordered messages.


One thing to remember here is that browsers have other means of
communication as well. I'm not saying we shouldn't support reliable
messages over UDP, just pointing out the option. I believe, for example,
that World of Warcraft uses this strategy: reliable traffic goes over
TCP while movement and other real-time data goes over UDP.



- many games need to send large messages (so the libraries do
automatic fragmentation).


Again, this is probably because games have no other means of
communication than the NW library. I'd think these large reliable
messages would mostly be files that need to be transferred
asynchronously, for which browsers already have the tried and tested
XMLHttpRequest.



- many games need to efficiently send tiny messages (so the libraries
do automatic aggregation).


This is probably true for many use-cases other than games, but at least
in my experience games typically use a bit-packer or range-coder to
build the complete packet that needs to be sent. But again, it's a
matter of what level you want to place the interface at.
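The bit-packing trick mentioned above amounts to squeezing several small
fields into one integer before sending. This is a rough sketch with
arbitrary field widths; a real packer would write into a byte buffer and
handle more than 32 bits.

```javascript
// Pack a list of small fields into one integer, MSB-first. Widths are
// illustrative; everything must fit in 32 bits for this sketch.
function packBits(fields) { // fields: [{ value, bits }, ...]
  let acc = 0, used = 0;
  for (const f of fields) {
    acc = (acc << f.bits) | (f.value & ((1 << f.bits) - 1));
    used += f.bits;
  }
  return { word: acc, length: used };
}

// Recover the fields given the same widths, in the same order.
function unpackBits(packed, widths) {
  const out = [];
  let remaining = packed.length;
  for (const w of widths) {
    remaining -= w;
    out.push((packed.word >>> remaining) & ((1 << w) - 1));
  }
  return out;
}
```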



Perhaps also:
- Cap or dynamic limit on bandwidth (you don't want a single web page
flooding the user's network connection and starving all the TCP
connections)
- Protection against session hijacking


Great


- Protection against an attacker initiating a legitimate socket with a
user and then redirecting it (with some kind of IP (un)hijacking) to a
service behind the user's firewall (which isn't a problem when using
TCP since the service will ignore packets when it hasn't done the TCP
handshake; but UDP services might respond to a single packet from the
middle of a websocket stream, so every single packet will have to be
careful not to be misinterpreted dangerously by unsuspecting
services).


I don't quite follow what you mean here. Can you expand on this with an  
example?


--
Erik Möller
Core Developer
Opera Software


Re: [whatwg] [hybi] WebSockets: UDP

2010-06-02 Thread Erik Möller
On Wed, 02 Jun 2010 01:07:48 +0200, Mark Frohnmayer  
 wrote:



Glad to see this discussion rolling!  For what it's worth, the Torque
Sockets design effort was to take a stab at answering this question --
what is the least-common-denominator "webby" API/protocol that's
sufficiently useful to be a common foundation for real time games.  I
did the first stab at porting OpenTNL (now tnl2) atop it; from my
reading of the RTP protocol that should easily layer as well, but it
would be worth getting the perspective of some other high-level
network stack folks (RakNet, etc).


For those who missed Mark's initial post on the subject his TorqueSocket  
API can be found here:

http://github.com/nardo/torque_sockets/raw/master/TorqueSocket_API.txt

Perhaps we could communally have a look at how this compares to other  
common network libraries to find a least common denominator of  
functionality?




Only feedback here would be I think p2p should be looked at in this
pass -- many client/server game instances are peers from the
perspective of the hosting service (XBox Live, Quake, Half-Life,
Battle.net) -- forcing all game traffic to pass through the hosting
domain is a severe constraint.  My question -- what does a "webby" p2p
solution look like regarding Origin restrictions, etc?


Although it sure complicates things, I don't see why WebSockets couldn't
be extended to allow peer-to-peer connections if we really wanted to.
User agents A and B connect to WebSocket server C, which keeps a list of
the clients connected to it. A and B are informed of the IDs of the
connected clients and both decide to set up a peer-to-peer connection. A
and B simultaneously call socket.connect_to_peer(), passing a peer ID,
and the call returns a new WebSocket in connecting mode. The user agents
communicate with the server over the server socket, retrieve the
necessary information to connect to the peer, including what origin to
expect, and off we go. (Server implementations would of course be free
not to support this.)
As long as the usual DOS/tamper protection is there, it should be
possible... then there's the success rate of UDP NAT hole punching...


--
Erik Möller
Core Developer
Opera Software


Re: [whatwg] WebSockets: UDP

2010-06-01 Thread Erik Möller
On Wed, 02 Jun 2010 00:34:17 +0200, James Salsman   
wrote:



Nothing about UDP is reliable, you just send packets and hope they get  
there.



-Automatic keep-alives


You mean on the incoming-to-client TCP channel in the opposite
direction from the UDP traffic?


-Reliable close handshake


Can we use REST/HTTP/HTTPS persistent connections for this?


-Socket is bound to one address for the duration of its lifetime


That sounds reasonable, but clients do change IP addresses now and
then, so maybe there should be some anticipation of this possibility?


-Sockets open sequentially (like current DOS protection in WebSockets)


Do you mean their sequence numbers should be strictly increasing
incrementally until they roll over?


-Cap on number of open sockets per server and total per user agent


There was some discussion that people rarely check for the error
condition when such caps are exhausted, so I'm not sure whether that
should be the same as the system cap, or some fraction, or dozens, or
a developer configuration parameter.



No, it can't be plain UDP; it'll have to be something layered on top of
UDP. One of the game guys I spoke to last night said "Honestly, I wish
we just had real sockets. It always seems like web coding comes down to
reinventing a very old wheel in a far less convenient or efficient
manner." To some extent I agree with him, but there's the security
aspect we have to take into account, or we'll see someone hacking the
CNN website and injecting a little JavaScript, and we'll have the DDOS
attack of the century on our hands.


The reason I put down "Socket is bound to one address", "Reliable
handshake", "Reliable close handshake" and "Sockets open sequentially"
was for that exact reason: to try to make it "DOS- and tamper-safe".
"Sockets open sequentially" means that if you allocate two sockets to
the same server, the second socket will wait for the first one to
complete its handshake before attempting to connect.
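The sequential-open rule can be sketched as a per-host promise chain:
each connection attempt to a host waits for the previous handshake to
finish. `connect` stands in for whatever actually performs the handshake
and returns a promise.

```javascript
// "Sockets open sequentially": chain connection attempts per host so
// each handshake waits for the previous one. `connect(host)` is a
// stand-in that returns a promise resolving when the handshake is done.
function makeSequentialOpener(connect) {
  const tails = new Map(); // host -> end of that host's handshake chain
  return function open(host) {
    const tail = tails.get(host) || Promise.resolve();
    const attempt = tail.then(() => connect(host));
    // Keep the chain going even if this handshake fails.
    tails.set(host, attempt.catch(() => {}));
    return attempt;
  };
}
```

Attempts to different hosts are independent chains, so only same-host
opens are serialized, matching the DOS-protection intent.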


The cap on the number of connections is probably less important, but
browser vendors will likely want some sort of limit in place before it
completely starves the OS of resources.


--
Erik Möller
Core Developer
Opera Software


Re: [whatwg] WebSockets: UDP

2010-06-01 Thread Erik Möller
On Tue, 01 Jun 2010 21:14:33 +0200, Philip Taylor  
 wrote:



More feedback is certainly good, though I think the libraries I
mentioned (DirectPlay/OpenTNL/RakNet/ENet (there's probably more)) are
useful as an indicator of common real needs (as opposed to edge-case
or merely perceived needs) - they've been used by quite a few games
and they seem to have largely converged on a core set of features, so
that's better than just guessing.

I guess many commercial games write their own instead of reusing
third-party libraries, and I guess they often reimplement very similar
concepts to these, but it would be good to have more reliable
information about that.



I was hoping to avoid looking at what the interfaces of a high- vs
low-level option would look like this early on in the discussions, but
perhaps we need to do just that: look at Torque, RakNet etc., find a
least common denominator, and see what the reactions would be to such an
interface. Game companies are pretty restrictive about what they
discuss, but I think I know enough game devs to at least get some good
feedback on what would be required to make it work well with their
engine/game.


I suspect they prefer to be "empowered with UDP" rather than "boxed
into a high level protocol that doesn't fit their needs" but I may be
wrong.


If you put it like that, I don't see why anybody would not want to be
empowered :-)


Yeah, I wouldn't put it like that when asking :) I'm really not trying
to sell my view; I just want to see real browser gaming in the
not-too-distant future.




But that's not the choice, since they could never really have UDP -
the protocol will perhaps have to be Origin-based, connection-oriented
(to exchange Origin information etc), with complex packet headers so
you can't trick it into talking to a DNS server, with rate limiting in
the browser to prevent DOS attacks, restricted to client-server (no
peer-to-peer since you probably can't run a socket server in the
browser), etc.

[...]


That first option sounds like you're offering something very much like
a plain UDP socket (and I guess anyone who's willing to write their
own high-level wrapper (which is only hundreds or thousands of lines
of code and not a big deal for a complex game) would prefer that since
they want as much power as possible), but (as above) I think that's
misleading - it's really a UDP interface on top of a protocol that has
some quite different characteristics to UDP. So I think the question
should be clearer that the protocol will necessarily include various
features and restrictions on top of UDP, and the choice is whether it
includes the minimal set of features needed for security and hides
them behind a UDP-like interface or whether it includes higher-level
features and exposes them in a higher-level interface.


So, what would the minimal set of limitations be to make a "UDP WebSocket"  
browser-safe?


-No listen sockets
-No multicast
-Reliable handshake with origin info
-Automatic keep-alives
-Reliable close handshake
-Socket is bound to one address for the duration of its lifetime
-Sockets open sequentially (like current DOS protection in WebSockets)
-Cap on number of open sockets per server and total per user agent


--
Erik Möller
Core Developer
Opera Software


Re: [whatwg] WebSockets: UDP

2010-06-01 Thread Erik Möller

On Tue, 01 Jun 2010 18:45:51 +0200, Mike Belshe  wrote:


On Tue, Jun 1, 2010 at 8:52 AM, John Tamplin  wrote:


On Tue, Jun 1, 2010 at 11:34 AM, Mike Belshe  wrote:


FYI:   SCTP is effectively non-deployable on the internet today due to
NAT.

+1 on finding ways to enable UDP.  It's a key missing component to the  
web

platform.



But there is so much infrastructure that would have to be enabled to use
UDP from a web app.  How would proxies be handled?  Even if specs were
written and implementations available, how many years would it be before
corporate proxies/firewalls supported WebSocket over UDP?



Agree - nobody said it would be trivial.  There are so many games
successfully doing it today that it is clearly viable.  For games in
particular, they have had to document to their users how to configure
their home routers, and that has been successful too.  If you talk with
game writers - there are a class of games where UDP is just better
(e.g. those communicating real-time, interactive position and other
info).  If we can enable that through the web platform, that is good.




I am all for finding a way to get datagram communication from a web
app, but I think it will take a long time and shouldn't hold up current
WebSocket work.



Agree-  no need to stall existing work.

Mike




--
John A. Tamplin
Software Engineer (GWT), Google



I don't think proxies and firewalls are going to be a major problem;
as Mike said, the myriad of UDP games out there seem to do just fine in
the real world. Sure, there will be corporate firewalls and proxies
blocking employees from fragging their colleagues while the boss is in
a meeting, but I guess they're partly put there to prevent exactly
that, so we probably shouldn't try to combat it.
If we were talking about peer-to-peer UDP it'd be a whole new ballgame,
but that's why I specifically said the use case was client/server
games. I don't think we should attempt peer-to-peer before WebSockets
is all done and shipped.


I fully agree any discussions on UDP (or another protocol) shouldn't
stall the existing work, but right now there seems to be very little
activity anyway.


--
Erik Möller
Core Developer
Opera Software


Re: [whatwg] WebSockets: UDP

2010-06-01 Thread Erik Möller
On Tue, 01 Jun 2010 13:34:51 +0200, Philip Taylor  
 wrote:



On Tue, Jun 1, 2010 at 11:12 AM, Erik Möller  wrote:

The use case I'd like to address in this post is Real-time client/server
games.

The majority of the on-line games of today use a client/server model
over UDP and we should try to give game developers the tools they
require to create browser based games. For many simpler games a TCP
based protocol is exactly what's needed but for most real-time games a
UDP based protocol is a requirement. [...]

It seems to me the WebSocket interface can be easily modified to cope
with UDP sockets [...]


As far as I'm aware, games use UDP because they can't use TCP (since
packet loss shouldn't stall the entire stream) and there's no
alternative but UDP. (And also because peer-to-peer usually requires
NAT punchthrough, which is much more reliable with UDP than with TCP).
They don't use UDP because it's a good match for their requirements,
it's just the only choice that doesn't make their requirements
impossible.

There are lots of features that seem very commonly desired in games: a
mixture of reliable and unreliable and reliable-but-unordered channels
(movement updates can be safely dropped but chat messages must never
be), automatic fragmentation of large messages, automatic aggregation
of small messages, flow control to avoid overloading the network,
compression, etc. And there's lots of libraries that build on top of
UDP to implement protocols halfway towards TCP in order to provide
those features:
http://msdn.microsoft.com/en-us/library/bb153248(VS.85).aspx,
http://opentnl.sourceforge.net/doxydocs/fundamentals.html,
http://www.jenkinssoftware.com/raknet/manual/introduction.html,
http://enet.bespin.org/Features.html, etc.

UDP sockets seem like a pretty inadequate solution for the use case of
realtime games - everyone would have to write their own higher-level
networking libraries (probably poorly and incompatibly) in JS to
provide the features that they really want. Browsers would lose the
ability to provide much security, e.g. flow control to prevent
intentional/accidental DOS attacks on the user's network, since they
would be too far removed from the application level to understand what
they should buffer or drop or notify the application about.

I think it'd be much more useful to provide a level of abstraction
similar to those game networking libraries - at least the ability to
send reliable and unreliable sequenced and unreliable unsequenced
messages over the same connection, with automatic
aggregation/fragmentation so you don't have to care about packet
sizes, and dynamic flow control for reliable messages and maybe some
static rate limit for unreliable messages. The API shouldn't expose
details of UDP (you could implement exactly the same API over TCP,
with better reliability but worse latency, or over any other protocols
that become well supported in the network).
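For illustration, the receive side of such a mixed channel abstraction
might look roughly like this (a sketch with invented names, not any
real library's API): sequenced unreliable messages are dropped when
stale, while reliable messages are buffered until they can be delivered
in order.

```python
# Sketch of a receiver supporting two of the channel kinds discussed
# above: unreliable-sequenced (e.g. movement updates, safe to drop when
# stale) and reliable-ordered (e.g. chat, must never be dropped).
# All names are illustrative.

class ChannelReceiver:
    def __init__(self):
        self.latest_seq = -1       # highest unreliable seq delivered
        self.next_reliable = 0     # next reliable seq expected
        self.reliable_buffer = {}  # out-of-order reliable messages

    def on_unreliable(self, seq, msg):
        """Deliver only if newer than anything already delivered."""
        if seq <= self.latest_seq:
            return []              # stale update: drop it
        self.latest_seq = seq
        return [msg]

    def on_reliable(self, seq, msg):
        """Buffer, then deliver as many in-order messages as possible."""
        self.reliable_buffer[seq] = msg
        delivered = []
        while self.next_reliable in self.reliable_buffer:
            delivered.append(self.reliable_buffer.pop(self.next_reliable))
            self.next_reliable += 1
        return delivered
```

A real library would add acknowledgements and retransmission on the
send side; this only shows why the two delivery semantics need
different receive logic.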



I've never heard any gamedevs complain about how poorly UDP matches
their needs, so I'm not so sure about that, but you may be right that a
higher-level abstraction would be better. If we are indeed targeting
the game development community we should ask for their feedback rather
than guess what they prefer. I will grep my LinkedIn account for game
devs tonight and see if I can gather some feedback.


I suspect they prefer to be "empowered with UDP" rather than "boxed
into a high level protocol that doesn't fit their needs", but I may be
wrong. Those who have the knowledge, time and desire to implement their
own reliable channels/flow control/security over UDP would be free to
do so; those who couldn't care less can always use ws: or wss: for
their reliable traffic and just use UDP where necessary.


So the question to the gamedevs will be the following - please suggest
changes and I'll do an email round tonight:


If browser and server vendors agree on and standardize a socket-based
network interface for real-time games running in the browser, at what
level would you prefer that interface to be?
(Note that an interface for communicating reliably via TCP and TLS is
already implemented.)

- A low-level interface similar to a plain UDP socket
- A medium-level interface allowing for reliable and unreliable
channels, automatically compressed data, flow control, data priority
etc
- A high-level interface with "ghosted entities"

Oh, and I guess we should continue this discussion on the HyBi list... my  
fault for not posting there in the first place.


--
Erik Möller
Core Developer
Opera Software


[whatwg] WebSockets: UDP

2010-06-01 Thread Erik Möller
The use case I'd like to address in this post is Real-time client/server  
games.


The majority of the on-line games of today use a client/server model
over UDP, and we should try to give game developers the tools they
require to create browser-based games. For many simpler games a TCP
based protocol is exactly what's needed, but for most real-time games a
UDP based protocol is a requirement. Games typically send small updates
to their server at 20-30 Hz over UDP and, with the help of entity
interpolation and (if required) entity extrapolation, cope well with
intermittent packet loss. When a packet is lost in a TCP based
protocol, the entire stream of data is held up until the packet is
resent, meaning a game would have to revert to entity extrapolation,
possibly over several seconds, leading to an unacceptable gameplay
experience.
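As a toy illustration of the entity interpolation and extrapolation
mentioned above (numbers and names are mine, not from any engine): the
client renders slightly in the past and blends between the two
snapshots bracketing the render time, falling back to extrapolation
when it runs past the newest snapshot.

```python
# Minimal 1-D entity interpolation: blend between the two buffered
# snapshots that bracket the render time; if the render time has run
# past the newest snapshot (e.g. packets were lost), extrapolate from
# the last known velocity instead.

def interpolate(snapshots, render_time):
    """snapshots: list of (timestamp, position), sorted by timestamp."""
    for (t0, p0), (t1, p1) in zip(snapshots, snapshots[1:]):
        if t0 <= render_time <= t1:
            alpha = (render_time - t0) / (t1 - t0)
            return p0 + alpha * (p1 - p0)
    # Past the newest snapshot: extrapolate from the last two.
    (t0, p0), (t1, p1) = snapshots[-2], snapshots[-1]
    velocity = (p1 - p0) / (t1 - t0)
    return p1 + velocity * (render_time - t1)
```

The extrapolation branch is exactly the fallback described above: fine
for a dropped packet or two, unacceptable over several seconds.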


It seems to me the WebSocket interface can be easily modified to cope with  
UDP sockets (a wsd: scheme perhaps?) and it sounds like a good idea to  
leverage the work already done for WebSockets in terms of interface and  
framing.


The most important distinction between ws: and wsd: is that messages
sent by send() in wsd: need not be acknowledged by the peer nor be
resent. To keep the interface the same to the largest possible extent
I'd suggest implementing a simple reliable 3-way handshake over UDP,
keep-alive messages (and timeouts) and reliable close frames. If these
are implemented right, the interface in its entirety could be kept.
Only one new readonly attribute, long maxMessageSize, would be
introduced to describe the minimum path MTU (perhaps only valid once in
connected mode, or perhaps set to 0 or 576 initially and updated once
in connected mode). This attribute could also be useful to expose in
ws: and wss:, but in that case set to the internal limit of the
browser / server.
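If send() is capped at maxMessageSize, an application wanting larger
logical messages would have to fragment them itself. A naive sketch
(the 4-byte per-fragment header is invented purely for the example):

```python
# Split an application message into chunks that each fit in one
# datagram of max_message_size bytes, reserving room for a small
# (hypothetical) reassembly header.

def fragment(payload: bytes, max_message_size: int):
    """Return a list of chunks that each fit in one datagram."""
    header = 4                     # invented: 2-byte msg id + 2-byte index
    chunk = max_message_size - header
    return [payload[i:i + chunk] for i in range(0, len(payload), chunk)]
```

With the initial 576-byte value mentioned above, a 1300-byte message
would go out as three datagrams.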


The actual content of the handshake for wsd: can be vastly simplified
compared to that of ws:, as there's no need to be HTTP compliant. It
could contain just a magic identifier and length-encoded strings for
origin, location and protocol.
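One possible wire layout for such a simplified handshake - a fixed
magic identifier followed by length-prefixed UTF-8 strings - could be
sketched like this. The magic value, field order and 16-bit lengths are
entirely hypothetical, not from any spec:

```python
# Hypothetical wsd: handshake frame: 4-byte magic, then three
# length-prefixed UTF-8 strings (origin, location, protocol).
import struct

MAGIC = b"WSD0"  # illustrative 4-byte magic identifier

def encode_handshake(origin, location, protocol):
    out = bytearray(MAGIC)
    for field in (origin, location, protocol):
        data = field.encode("utf-8")
        out += struct.pack("!H", len(data))  # 16-bit big-endian length
        out += data
    return bytes(out)

def decode_handshake(buf):
    assert buf[:4] == MAGIC, "bad magic"
    fields, pos = [], 4
    for _ in range(3):
        (n,) = struct.unpack_from("!H", buf, pos)
        pos += 2
        fields.append(buf[pos:pos + n].decode("utf-8"))
        pos += n
    return tuple(fields)
```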


To minimize the work needed on the spec, the data framing of wsd: can
be kept identical to that of ws:, though I'd expect game developers to
choose whatever the binary framing turns out to be once the spec is
done.


I'd be very interested to hear people's opinions on this.

--
Erik Möller
Core Developer
Opera Software


Re: [whatwg] Real-time networking in web applications and games

2010-05-04 Thread Erik Möller
On Mon, 03 May 2010 23:59:19 +0200, Mark Frohnmayer  
 wrote:



Hey all,

In continuation of the effort to make browsers a home for real-time
peer to peer applications and games without a plugin, I've done a bit
of digging on the spec.  Currently the spec contains section 4.11.6.2
(Peer-to-peer connections) that appears to present a high-level
peer-to-peer api with separate functionality for peer connection and
transmission of text, images and streamed data interleaved in some
unspecified protocol.  Also referenced from the spec is WebSocket, a
thin-layer protocol presenting what amounts to a simple guaranteed
packet stream between client and server.

For the purposes of discussion there seem to be two distinct issues -
peer introduction, where two clients establish a direct connection via
some trusted third party, and data transmission protocol which could
range from raw UDP to higher-level protocols like XML-RPC over HTTP.

For real-time games, specific concerns include flow control, protocol
overhead and retransmission policy; thus most real-time games
implement custom network protocols atop UDP.  Peer introduction is
also important - responsiveness can often be improved and hosting
bandwidth costs reduced by having peers connect directly.  For other
p2p apps (chat, etc), specific control of flow and data retransmission
may be less (or not) important, but peer introduction is still
relevant.

Reading the current state of the spec's p2p section, it appears
to be poorly suited to real-time gaming applications, as well as
potentially over-scoped for specific p2p applications.  To demonstrate
an alternative approach, the initial prototype of the TorqueSocket
plugin now works (build script for OS X only).  The API as spec'd
here:  
http://github.com/nardo/torque_sockets/raw/master/TorqueSocket_API.txt

now actually functions in the form of an NPAPI plugin.  I recorded a
capture of the javascript test program here:
http://www.youtube.com/watch?v=HijKc5AwYHM if you want to see it in
action without actually building the example.

This leads me to wonder about (1) the viability of including peer
introduction into WebSocket as an alternative to a high-level peer to
peer interface in the spec, (2) including a lower-level unreliable
protocol mode, either as part or distinct from WebSocket, and (3) who,
if anyone, is currently driving the p2p section of the spec.

Any feedback welcome :).

Cheers,
Mark



I'm an old gamedev recently turned browserdev, so this is of particular
interest to me, especially as I'm currently working on WebSockets.
WebSockets is a nice step towards multiplayer games in browsers and
will be even better once binary frames are specced out, but as Mark
says (depending on the nature of the game) gamedevs are most likely
going to want to make their own UDP based protocol (in client-server
models as well). Have there been any discussions on how this would fit
under WebSockets?


Opera Unite can be mentioned as an interesting side note; it does peer
introduction as well as subnet peer detection, but again that's TCP
only.


--
Erik Möller
Core Developer
Opera Software


Re: [whatwg] WebSocket bufferedAmount includes overhead or not

2010-03-25 Thread Erik Möller
On Thu, 25 Mar 2010 13:23:57 +0100, Olli Pettay   
wrote:



On 3/25/10 12:08 PM, Niklas Beischer wrote:

On Thu, 25 Mar 2010 10:21:10 +0100, Olli Pettay
 wrote:


On 3/25/10 12:08 AM, Olli Pettay wrote:

On 3/24/10 11:33 PM, Ian Hickson wrote:

On Sun, 21 Feb 2010, Olli Pettay wrote:

[snip]

I guess I'm unclear on whether bufferedAmount should return:

1. the sum of the count of characters sent?
(what would we do when we add binary?)

I believe this is actually what we want.
If web developer sends a string which is X long,
bufferedAmount should report X.

And when we add binary, if buffer which has size Y is
sent, that Y is added to bufferedAmount.


Though, this is a bit ugly too.
Mixing 16bit and 8bit data...

One option is to remove bufferedAmount,
and have just a boolean flag
hasBufferedData.

Or better, the API spec could say that WebSocket.send()
converts the data to UTF-8 and that bufferedAmount
indicates how much UTF-8 data is buffered.
Then adding support for binary would be easy.
And that way it doesn't matter whether the protocol
actually sends the textual data as UTF-8 or as
something else.

This way the web developer can still check what part
of the data is still buffered; (s)he just has to
convert UTF-16 to UTF-8 in JS when needed.


What about having bufferedAmount represent the number of bytes
(including overhead) buffered by the WebSocket,

The problem here is how the API can describe what the
bufferedAmount actually is. And since the underlying protocol
may change, the values in bufferedAmount may change.



for flow control purposes, and adding a new indicator
(bufferedMessages) representing the number of messages that are not
fully pushed to the network? Since the API is message based there is,
besides flow control, little reason to specify how much of a particular
message has been sent, right?


Hmm, would it be enough to have just bufferedMessages, and remove
bufferedAmount?


-Olli





BR,
/niklas



The reason why I'd like it to work this way is that
IMO scripts should be able to check whether the data
they have posted has actually been sent over the network.


-Olli




2. the sum of bytes after conversion to UTF-8?

3. the sum of bytes yet to be sent on the wire?

I'm not sure how to pick a solution here. It sounds like WebKit people
want 3, and Opera and Mozilla are asking for 2. Is that right? I guess
I'll go with 2 unless more people have opinions.







Just to clarify then, the two use cases we're trying to accommodate
are:
a) The client wants to be able to limit the data sent over the wire to
X kb/s.
b) The client wants to make sure some earlier message(s) have been sent
before queuing a new one.

Is that correct, or are there any other use cases anyone had in mind?
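Use case (a) can be sketched with a bufferedAmount-style counter
measured in UTF-8 bytes (the "option 2" reading discussed above). The
sender below is a stand-in for illustration, not a real API:

```python
# Sketch of throttling sends against a buffering budget, with
# bufferedAmount counted in UTF-8 bytes of queued payload (excluding
# framing overhead). Names are invented for the example.

class ThrottledSender:
    def __init__(self, max_buffered_bytes):
        self.max_buffered = max_buffered_bytes
        self.buffered_amount = 0   # UTF-8 bytes queued, not yet on the wire

    def try_send(self, message):
        """Queue message only if it keeps us under the buffering budget."""
        size = len(message.encode("utf-8"))
        if self.buffered_amount + size > self.max_buffered:
            return False           # caller should retry later
        self.buffered_amount += size
        return True

    def on_bytes_written(self, n):
        """Network layer reports n payload bytes flushed to the wire."""
        self.buffered_amount = max(0, self.buffered_amount - n)
```

Use case (b) falls out of the same counter: once buffered_amount drops
back by the size of an earlier message, that message is on the wire.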

--
Erik Möller
Core Developer
Opera Software