Re: Screen Orientation Feedback

2014-08-08 Thread Mounir Lamouri
+ri...@opera.com

On Wed, 6 Aug 2014, at 07:14, Jonas Sicking wrote:
 Hi All,
 
 I think the current interaction between the screen orientation and
 device orientation specs is really unfortunate.
 
 Any time that you use the device orientation in order to render
 something on screen, you have to do non-obvious math in order to get
 coordinates which are usable. Same thing if you want to use the device
 orientation events as input for objects which are rendered on screen.
 
 I would argue that these are by far the most common use cases for the
 device orientation events.
 
 I agree that the main problem here is that the deviceorientation spec
 defined its events as relative to the device rather than to the
 screen. However, we can still fix the problem by simply adding:
 
 partial interface DeviceOrientationEvent
 {
   readonly attribute double? screenAlpha;
   readonly attribute double? screenBeta;
   readonly attribute double? screenGamma;
 };
 
 No new events need to be defined.
 
 I guess we can argue that this should be added to the
 DeviceOrientation spec, but that seems unlikely to happen in practice
 anytime soon. I think we would do developers a disservice by blaming
 procedural issues rather than trying to solve the problem.
 
 I think Mozilla would be happy to implement such an addition to the
 DeviceOrientation event (I'm currently checking to make sure). Are
 there other UAs that have opinions (positive or negative) on such an
 addition?

Maybe this feedback is more for DeviceOrientation than Screen
Orientation. There have been a few discussions there
(public-geolocation).

Anyway. I am not convinced that adding new properties will really fix
how developers handle this. I asked around and it seems that native
platforms do not expose Device Orientation relative to the screen. I am
not sure why we should expose something different on the Web platform. I
think we should work on providing developers the right tools in order
for them to do the right thing. For example, without the Screen
Orientation API, they do not know the relative angle between the
device's natural orientation and the screen. That API is not yet widely
available: prefixed versions ship in Firefox and IE, and it should be in
Chrome Beta soon.
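
For illustration, a minimal sketch of what developers have to do by hand
today. screen.orientation.angle below is the unprefixed shape expected in
Chrome; the prefixed Firefox/IE variants return orientation strings rather
than an angle, so they would need an extra mapping step that is omitted
here:

// Best-effort lookup of the angle between the device's natural
// orientation and the current screen orientation.
function screenAngle() {
  if (screen.orientation && typeof screen.orientation.angle === 'number')
    return screen.orientation.angle; // unprefixed Screen Orientation API
  if (typeof window.orientation === 'number')
    return window.orientation; // legacy property on many mobile browsers
  return 0; // assume the screen is in its natural orientation
}

// Hand-written correction for the compass heading (alpha) only. Beta and
// gamma require composing full rotation matrices, which is the non-obvious
// math Jonas refers to; the sign convention also varies across platforms.
window.addEventListener('deviceorientation', function (event) {
  if (event.alpha == null) return; // no orientation data available
  var alphaOnScreen = (event.alpha - screenAngle() + 360) % 360;
  // ... render using alphaOnScreen ...
});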

In addition, Tim Volodine recently suggested (in public-geolocation)
that we could add a quaternion representation of the device orientation.
If we could introduce quaternions to the platform and offer tools to do
simple math operations on them, we could reduce the conversion from the
device frame to the screen frame to a single operation.
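
As a rough sketch of what that could look like (entirely hypothetical: no
quaternion type exists on the platform today, and the [x, y, z, w] layout
and helper names here are made up for illustration):

// Hamilton product of two quaternions stored as [x, y, z, w].
function quatMultiply(a, b) {
  return [
    a[3] * b[0] + a[0] * b[3] + a[1] * b[2] - a[2] * b[1],
    a[3] * b[1] - a[0] * b[2] + a[1] * b[3] + a[2] * b[0],
    a[3] * b[2] + a[0] * b[1] - a[1] * b[0] + a[2] * b[3],
    a[3] * b[3] - a[0] * b[0] - a[1] * b[1] - a[2] * b[2]
  ];
}

// Going from the device frame to the screen frame becomes one operation:
// compose the device orientation with a rotation of -angle about the
// screen's z axis.
function deviceToScreen(deviceQuat, screenAngleDeg) {
  var half = -screenAngleDeg * Math.PI / 360; // half-angle, in radians
  return quatMultiply([0, 0, Math.sin(half), Math.cos(half)], deviceQuat);
}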

Finally, my understanding is that the biggest problem with
DeviceOrientation isn't that the orientation is relative to the device
instead of the screen, but rather the poor interoperability. Rich Tibbett
made a great VR demo [1] using DeviceOrientation. I had the chance to
chat with him and he told me that he faced a lot of problems with angles
being very different depending on the device and the browser. Indeed, I
tried to show the demo to a colleague a few days later on an old Nexus 7
and it was completely busted.

The number one problem we should tackle with DeviceOrientation is the
interop issue. The API is currently worthless because if you write for
one browser on one device, the chances of it working as expected on
another device or browser are fairly low. It is a shame, for example,
that even Chrome and Firefox running on Android are not fully
interoperable. We should start by working together toward a fully
interoperable implementation (at least for browsers running on Android),
then figure out why some devices are outliers.

[1] http://richtr.github.io/threeVR/examples/vr_basic.html

-- Mounir



Re: File API: reading a Blob

2014-08-08 Thread Arun Ranganathan
Welcome back - we missed you :-)


On Aug 5, 2014, at 9:43 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Thu, Jul 17, 2014 at 2:58 PM, Arun Ranganathan a...@mozilla.com wrote:
 There are two questions:
 
 1. How should FileReaderSync behave, to solve the majority of use cases?
 2. What is a useful underlying abstraction for spec authors that can be
 reused in present APIs like Fetch and in future APIs?
 
 I'm not sure.


I strongly think we should leave FileReaderSync and FileReader alone. Also note
that FileReaderSync and XHR (sync) are no different, in that neither does
partial data. But we should have a stream API for reading, and it might be
something off Blob itself.

That leaves us with the next problem, and what I think is the last problem in
File API:



 Yeah, I now think that we want something even lower-level and build
 the task queuing primitive on top of that. (Basically by observing the
 stream that is being read and queuing tasks as data comes in, similar
 to Fetch. The synchronous case would just wait for the stream to
  complete.)
 
 If I understand you correctly, you mean something that might be two-part 
 (some hand waving below, but …):
 
 To read a Blob object /blob/, run these steps:
 
 1. Let /s/ be a new buffer.
 
 2. Return /s/, but continue running these steps asynchronously.
 
 3. While /blob/'s underlying data stream is not closed, run these
     substeps:
 
1. Let /bytes/ be the result of reading a chunk from /blob/'s
   underlying data.
 
2. If /bytes/ is not failure, push /bytes/ to /s/ and set
   /s/'s transmitted to /bytes/'s length.
 
3. Otherwise, signal some kind of error to /s/ and terminate
   these steps.
 
 AND
 
 To read a Blob object with tasks:
 
 1. Run the read a Blob algorithm above.
 2. When reading the first /bytes/, queue a task called process read.
 3. When pushing /bytes/ to /s/, queue a task called process read data.
 4. When all /bytes/ are pushed to /s/, queue a task called process read EOF.
 5. If an error condition is signaled, queue a task called process error with
 a failure reason.
 
 Is “chunk” implementation-defined? Right now we assume 1 byte or 50ms.
 “Chunk” seems a bit hand-wavy and hard to enforce, but… it might be the 
 right approach.
 
 Would have to discuss with Domenic, but something like chunks seems to
 be much closer to how these things actually work.


Other than “chunks of bytes”, which needs some normative backbone, is the basic
abstract model what you had in mind? If so, that might be worth getting into
File API and calling it done, because that’s a reusable abstract model for
Fetch and for FileSystem.
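
For concreteness, here is how I read the queued tasks as surfacing through
the existing FileReader events (this mapping is my interpretation, not spec
text):

var blob = new Blob(['hello']); // any Blob
var reader = new FileReader();
reader.onloadstart = function () { /* process read */ };
reader.onprogress = function (e) { /* process read data: e.loaded so far */ };
reader.onload = function () { /* process read EOF: reader.result complete */ };
reader.onerror = function () { /* process error: reader.error has the reason */ };
reader.readAsArrayBuffer(blob); // kicks off "read a Blob with tasks"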

— A*


Blocking message passing for Workers

2014-08-08 Thread Alan deLespinasse
I would find it extremely useful to have a function available to a Worker
that would block and wait for a message from another Worker or from the
main thread. For example, instead of:

onmessage = function(event) {
  // Do some work
  // Save all state for next time
};

I'd like to have something like this:

while (true) {
  var data = waitForMessage().data;
  // Do some work
}

or:

var subworker = new Worker('subworker.js');
while (true) {
  var data = subworker.waitForMessage().data;
  // Do some work
}

A timeout would probably be nice to have. I don't have an opinion on
whether having a timeout should be optional or required.
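
For example, a timeout variant might look like this (hypothetical, like
waitForMessage itself; here a null return signals that the timeout expired):

while (true) {
  var event = waitForMessage(5000); // wait up to five seconds
  if (event === null) {
    // Timed out: checkpoint state, poll for shutdown, etc.
    continue;
  }
  var data = event.data;
  // Do some work
}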

Of course you wouldn't want to do this in the main thread, just as you
wouldn't use a synchronous XMLHttpRequest. But it's fine to have a Worker
loop indefinitely and block while waiting for results from somewhere else.

Is this a possibility?

Is there already some way to do this that I'm not aware of?

Do I need to expand on my reasons for wanting such a thing?


Re: Blocking message passing for Workers

2014-08-08 Thread Glenn Maynard
On Fri, Aug 8, 2014 at 12:49 PM, Alan deLespinasse adelespina...@gmail.com
wrote:

 I would find it extremely useful to have a function available to a Worker
 that would block and wait for a message from another Worker or from the
 main thread. For example, instead of:

 onmessage = function(event) {
   // Do some work
   // Save all state for next time
 };

 I'd like to have something like this:

 while (true) {
   var data = waitForMessage().data;
   // Do some work
 }

 or:

 var subworker = new Worker('subworker.js');
 while (true) {
   var data = subworker.waitForMessage().data;
   // Do some work
 }


There have probably been other threads since, but here's a starting point:

http://lists.w3.org/Archives/Public/public-webapps/2010OctDec/1075.html

-- 
Glenn Maynard


RE: User Intentions Explainer (was: List of Intentions)

2014-08-08 Thread Cynthia Shelly
This is a really solid list. Thank you for pulling it together, so we can
start working towards a harmonized set of user events.

From: Ben Peters [mailto:ben.pet...@microsoft.com]
Sent: Monday, August 4, 2014 7:28 PM
To: public-editing...@w3.org; Julie Parent; public-indie...@w3.org; 
public-webapps
Subject: User Intentions Explainer (was: List of Intentions)


Cross-posted: Editing TF, IndieUI TF, WebApps WG


In order to solve the broad issue of User Intentions, I have compiled below a
list of User Intentions derived from all of the sources of intention-style
events that I am aware of. I agree with Julie that some of the events below are
not in scope for Editing at this time. However, they are in scope for IndieUI, 
and I think that we have an important opportunity here to bring all concepts of 
Intentions together in a single explainer document so that 1) web developers 
can have a coherent picture of how to understand User Intentions and 2) the 
specs that concern User Intentions do not overlap. To that end, I propose that 
we create a new User Intentions Explainer document, borrowing from the work of 
the Editing Explainer and IndieUI to give an overview of how all of the 
following Intention Events work together in a high-level non-normative form:

Clipboard API
HTML5 Drag and Drop
Selection API (beforeSelectionChange)
DOM Events (beforeInput)
IndieUI

Each of these specs solves some aspect of User Intentions, and they should be
coherent. For instance, we need to work out how beforeInput avoids overlapping
Clipboard and DnD, and consider moving some parts of IndieUI that apply to
input (undo/redo) to beforeInput.
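
As a sketch of the overlap in question ('paste' is from the Clipboard API;
the shape of beforeInput is still being specified, so treat the event name
and form below as assumptions):

// A paste can plausibly surface twice: once via the Clipboard API and once
// via the proposed beforeInput event. Which event a page should cancel to
// override the insertion is exactly what needs defining.
document.addEventListener('paste', function (e) {
  // Clipboard API: inspect e.clipboardData here.
});
document.addEventListener('beforeinput', function (e) {
  // Proposed DOM event: fires before content is inserted.
});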

Further, I propose that we fine-tune the Editing Explainer, making it clear
that there are two problems we're trying to solve in that space: Editing
Intentions and Extensible Web Editing. From there we can begin to create
normative specs for solving these two Editing problems.

The end result of this would be a high-level explainer for all known User 
Intention specs, and a high-level explainer for Editing. Then we can solve each 
individual problem more granularly while maintaining coherence. Thoughts?

From: Julie Parent jpar...@google.com

 This is a great list, and I agree it is the right starting point.


 On Mon, Jul 21, 2014 at 6:12 PM, Ben Peters ben.pet...@microsoft.com wrote:

 We now have a good list of use cases on the Explainer[1]. I believe the next 
 step is to come up with the Intentions that we need to allow sites and 
 frameworks to respond to. After that, we can determine which current or new 
 spec each Intention is covered by. Please let me know what you think of this 
 list and if you agree it covers our use cases. The last several are borrowed 
 from IndieUI[2], which we could use to cover those cases if we believe that 
 is best.

 * Focus/place caret
 * Move caret
 * Start selection (e.g. mouse down to select)
 * Update selection (e.g. mouse move during selection)
 * Finish selection (e.g. mouse up after selecting)
 * Modify selection (extend selection after it's finished. Might be covered 
 by update/finish)
 * Insert text
 * Insert content / insert HTML
 * Delete content
 * Delete forward
 * Delete backward
 * Insert newline


 Do we need newline as a special case? Wouldn't this be covered by insert 
 text/content/HTML?


 * Undo
 * Redo
 * Paste
 * Copy
 * Cut
 * Drag start/over/stop
 * Drop
 * Scroll/pan



 * Activate / Invoke

 * Expand
 * Collapse
 * Dismiss
 * Media next/previous/start/stop/pause
 * Rotate
 * Zoom
 * Value change


 Is this the set from indie-ui? I think we should make a decision about
 whether we are trying to cover these cases or not, as they do not make sense
 in the context of rich text editing and might be out of scope. It would help
 to have a list of arguments for/against merging with indie-ui.



 Ben
 [1] http://w3c.github.io/editing-explainer/commands-explainer.html
 [2] http://www.w3.org/TR/indie-ui-events/


Re: User Intentions Explainer (was: List of Intentions)

2014-08-08 Thread James Craig
From: Julie Parent jpar...@google.com
  
 This is a great list, and I agree it is the right starting point.
 
 On Mon, Jul 21, 2014 at 6:12 PM, Ben Peters ben.pet...@microsoft.com wrote:
 * Activate / Invoke 
 
  * Expand
  * Collapse
  * Dismiss
  * Media next/previous/start/stop/pause
  * Rotate
  * Zoom
  * Value change
 
 Is this the set from indie-ui? I think we should make a decision about
 whether we are trying to cover these cases or not, as they do not make sense
 in the context of rich text editing and might be out of scope.

Sorry Julie, I didn't see your original email. Perhaps you removed the 
cross-posted lists? I'm on both IndieUI and WebApps, but didn't see it come 
through.

With the possible exception of the media events, all of the above events make 
sense in the context of editing. For example, in Word or Pages:

1. You can expand or collapse sections of a document.
2. You can zoom in on the document view. 
3. You can rotate an embedded photo or chart. 
4. You can scale a variety of elements by focusing on a corner handle (e.g. a
drag handle) and nudging it with arrow keys, a mouse, or touch. These nudges
can be modified in some apps; for example, option+arrow moves the handle by a
smaller amount than unmodified arrow keys. This is a use case for either
"scale" on the main object, or "value change" on the corner control.


 It would help to have a list of arguments for/against merging with indie-ui.

I don't know if the arguments need to be about merging working groups. I don't 
care where the work happens, as long as it happens. I'd be fine with disbanding 
IndieUI if WebApps is ready to take up the work, but text editing is only one 
piece of it. IMO, it doesn't make sense to design one API for text editing, and 
another API for graphic manipulation, and another for form manipulation. 

One of the use cases that was deferred from the 1.0 version of IndieUI was to 
mark for some later action. For example, in a text editor, click in one spot, 
then shift+click in another to select a range of text. The next action (such as 
Delete) applies to the selection. On most desktop platforms, this works with 
text selection, but also on collections like table views: click then 
shift+click to select multiple songs in a music player, or multiple messages in 
an email inbox. 

INDIEUI-ACTION-25: Add markRequest with variant properties indicating
"fromLast" (like Shift+click, or "select range from last mark") and
"retainMarks" (like Mac Cmd+click or Win Ctrl+click, meaning "select in
addition to")

https://www.w3.org/WAI/IndieUI/track/actions/25
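
As a purely hypothetical sketch of how that deferred request might look to a
page (the property names come from the tracker item; no such event was ever
specified):

var element = document.body; // any target element
element.addEventListener('markrequest', function (e) {
  if (e.retainMarks) {
    // Cmd+click / Ctrl+click: add to the existing marks.
  }
  if (e.fromLast) {
    // Shift+click: extend the range from the last mark.
  }
  // Record the mark so that a later action (e.g. Delete) applies to it.
});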

Hopefully these examples demonstrate how much this work overlaps. It'd be
tempting to limit the scope to text editing, but I feel that'd ultimately be 
short-sighted.

James


-- 
Indifference towards people and the reality in 
which they live is actually the one and only 
cardinal sin in design. — Dieter Rams