Re: IndexedDB Proposed API Change: cursor.advance BACKWARD when direction is prev

2014-05-23 Thread marc fawzi
I thought .continue/advance was similar to the 'continue' statement in a
for loop in that everything below the statement will be ignored and the
loop would start again from the next index. So my console logging was
giving confusing results. I figured it out and it works fine now. For
sanity's sake, I've resorted to adding a 'return' in my code in the
.success callback after every .advance and .continue so the execution flow
is easier to follow. It's very confusing, from an execution-flow perspective,
for execution to continue past .continue/.advance while at once looping
asynchronously. I understand it's two different instances of the .success
callback but it was entirely not clear to me from reading the docs on MDN
(for example) that .advance / .continue are async.
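The confusion described above can be reproduced without IndexedDB at all. Below is a toy model (the `drive` function and its fake cursor are inventions for illustration, not the real API): the statements after `cursor.continue()` still execute in the current callback, and the next record only arrives in a later callback invocation — which is exactly what the 'return' guard avoids.

```javascript
// Toy model (not the real API): a fake driver fires the success callback
// once per record, the way the browser re-enters onsuccess after continue().
function drive(keys, onsuccess) {
  let i = 0;
  const queue = [];
  const cursor = {
    get value() { return keys[i]; },
    continue() {
      // The next record only arrives in a *later* callback invocation.
      queue.push(() => { i++; onsuccess(i < keys.length ? cursor : null); });
    },
  };
  queue.push(() => onsuccess(cursor));
  while (queue.length) queue.shift()();
}

const seen = [], afterContinue = [];
drive([10, 20], (cursor) => {
  if (!cursor) return;              // iteration finished
  seen.push(cursor.value);
  cursor.continue();
  afterContinue.push(cursor.value); // still runs: continue() is asynchronous
});
// Both arrays end up as [10, 20]: the code after continue() executed on
// every pass, which is the confusion a trailing 'return' sidesteps.
```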

Also, the description of .advance in browser vendors' documentation, e.g.
on MDN, says "advance the cursor position forward by two places" for
cursor.advance(2), but what it should really say is "advance the cursor
position forward by two results". For example, let's say the cursor first
landed on an item with primary key = 7, and you issue the statement
cursor.advance(2). I would expect it to go to the item with primary key 5
(for cursor direction = 'prev') but instead it goes to the item with
primary key 2, because that's the 2nd match for the range argument from the
cursor's current position, which means that .advance(n) would be far
clearer, semantically speaking, if it was simply done as .continue(n) ... I
guess if there is an understanding that the cursor is always at a matching
item and that it could only continue/advance to the next/prev matching
item, not literal 'positions' in the table (i.e. sequentially through the
list of all items) then there would be no confusion but the very concept of
a cursor is foreign to most front end developers, and that's where the
confusion comes from for many.
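The behavior described above can be modeled in a few lines of plain JS. This is a sketch of the iteration semantics only — `advanceModel` is hypothetical, not the real API — showing that advance(n) skips n records *matching the cursor's range*, walking in the cursor's own direction:

```javascript
// Hypothetical model of cursor iteration: the cursor only ever visits
// records its range matches, and advance(n) moves n *matches* in the
// cursor's own direction.
function advanceModel(primaryKeys, matches, startKey, direction, n) {
  let visited = primaryKeys.filter(matches);
  if (direction === "prev") visited = visited.slice().reverse();
  const i = visited.indexOf(startKey);
  return visited[i + n]; // undefined means the cursor is exhausted
}

// Unrestricted range, direction 'prev', cursor on 7: advance(2) lands on 5.
advanceModel([1, 2, 3, 4, 5, 6, 7, 8, 9], () => true, 7, "prev", 2); // 5

// Range that matches only keys [7, 6, 4, 2, 1]: the 2nd match back from 7
// is 4, not 5 - advance(2) counts results, not key positions.
advanceModel([1, 2, 3, 4, 5, 6, 7],
             (k) => [1, 2, 4, 6, 7].includes(k), 7, "prev", 2); // 4
```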

My inclination as a front end developer, so far removed from database
terminology, would be

1) to deprecate .advance in favor of .continue(n) and

2) if it makes sense (you have to say why it may not) have
.continue()/.continue(n) cause the return of the execution flow similar to
'continue' in a for loop.

What do you think?



On Wed, May 21, 2014 at 10:42 AM, Joshua Bell jsb...@google.com wrote:




 On Wed, May 21, 2014 at 7:32 AM, Arthur Barstow art.bars...@gmail.com wrote:

 [ Bcc www-tag ; Marc - please use public-webapps for IDB discussions ]

 On 5/20/14 7:46 PM, marc fawzi wrote:

 Hi everyone,

 I've been using IndexedDB for a week or so and I've noticed that
 cursor.advance(n) will always move n items forward regardless of cursor
 direction. In other words, when the cursor direction is set to 'prev' as
 in: range = IDBKeyRange.only(someValue, 'prev') and primary key is
 auto-incremented, the cursor, upon cursor.advance(n), will actually advance
 n items in the opposite direction to the cursor.continue() operation.


 That runs contrary to the spec. Both continue() and advance() reference
 the steps for iterating a cursor which picks up the direction from the
 cursor object; neither entry point alters the steps to affect the direction.

 When you say you've noticed, are you observing a particular browser's
 implementation or are you interpreting the spec? I did a quick test and
 Chrome, Firefox, and IE all appear to behave as I expected when intermixing
 continue() and advance() calls with direction 'prev' - the cursor always
 moves in the same direction regardless of which call is used.

 Can you share sample code that demonstrates the problem, and indicate
 which browser(s) you've tested?




  This is not only an issue of broken symmetry but it presents an
 obstacle to doing things like: keeping a record of the primaryKey of the
 last found item (after calling cursor.continue for say 200 times) and, long
 after the transaction has ended, call our search function again and, upon
 finding the same item it found first last time, advance the cursor to the
 previously recorded primary key and call cursor.continue 200 times, from
 that offset, and repeat whenever you need to fetch the next 200 matching
 items. Such an algorithm works in the forward direction (from oldest to newest
 item) because cursor.advance(n) can be used to position the cursor forward
 at the previously recorded primary key (of last found item) but it does not
 work in the backward direction (from newest to oldest item) because there
 is no way to make the cursor advance backward. It only advances forward,
 regardless of its own set direction.
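Setting aside the direction question (which the replies below resolve), the paging scheme described here can be modeled over a plain sorted array. `nextPage` is a hypothetical stand-in for "open a cursor, skip past the recorded primary key, then collect the next page"; `afterKey` plays the role of the key saved when the previous transaction ended:

```javascript
// Hypothetical model of resumable paging over a sorted key list; not the
// real cursor API, just the bookkeeping the message above describes.
function nextPage(keys, direction, afterKey, pageSize) {
  const ordered = direction === "prev" ? keys.slice().reverse() : keys;
  const start = afterKey === undefined ? 0 : ordered.indexOf(afterKey) + 1;
  return ordered.slice(start, start + pageSize);
}

const keys = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
const page1 = nextPage(keys, "prev", undefined, 3);               // [10, 9, 8]
const page2 = nextPage(keys, "prev", page1[page1.length - 1], 3); // [7, 6, 5]
```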

 This example is very rough and arbitrary. But it appears to me that the
 cursor.advance needs to obey the cursor's own direction setting. It's
 almost like having a car that only moves forward (and can't u-turn) and in
 order to move backward you have to reverse the road. That's bonkers.

 What's up with that?

 How naive or terribly misguided am I being?

 Thanks in advance.

 Marc







Re: IndexedDB Proposed API Change: cursor.advance BACKWARD when direction is prev

2014-05-23 Thread marc fawzi

Thanks for following up! At least two IDB implementers were worried that
you'd found some browser bugs we couldn't reproduce.

Yup. I had to figure this stuff out as the API is very low level (which is
why it can also be used in very powerful ways and also potentially very
confusing for the uninitiated)

Assuming the store has [1,2,3,4,5,6,7,8,9] and the cursor's range is not
restricted, if the cursor's key=7 and direction='prev' then I would expect
after advance(2) that key=5. If you're seeing key=2 can you post a sample
somewhere (e.g. jsfiddle.com?)

In my case, say I have 7 items [1,2,3,4,5,6,7] and the cursor's range is
restricted by IDBKeyRange.only(val, 'prev') ... so if the matching (or in
range) items are at 7, 6, 4, 2, 1 then I can obtain them individually or in
contiguous ranges by advancing the cursor on each consecutive invocation of
my search routine, like so: on first invocation advance(1) from 7 to 6, on
second invocation advance(2) from 7 to 4, on third invocation advance(3)
from 7 to 2 and on fourth invocation advance(4) from 7 to 1. I could also
use advance to advance by 1 within each invocation until no matching items
are found but only up to 2 times an invocation (for a store with 700 or
7 items we can advance by 1 about 200 times per invocation, but that's
arbitrary)

 I can definitely post a jsfiddle if you believe the above is not in
accordance with the spec.

As to continue(n) or continue(any string), I would make that
.find(something)



On Fri, May 23, 2014 at 10:41 AM, Joshua Bell jsb...@google.com wrote:

 On Fri, May 23, 2014 at 9:40 AM, marc fawzi marc.fa...@gmail.com wrote:

 I thought .continue/advance was similar to the 'continue' statement in a
 for loop in that everything below the statement will be ignored and the
 loop would start again from the next index. So my console logging was
 giving confusing results. I figured it out and it works fine now.


 Thanks for following up! At least two IDB implementers were worried that
 you'd found some browser bugs we couldn't reproduce.


  For sanity's sake, I've resorted to adding a 'return' in my code in the
 .success callback after every .advance and .continue so the execution flow
 is easier to follow. It's very confusing, from an execution-flow perspective,
 for execution to continue past .continue/.advance while at once looping
 asynchronously. I understand it's two different instances of the .success
 callback but it was entirely not clear to me from reading the docs on MDN
 (for example) that .advance / .continue are async.


 Long term, we expect JS to evolve better ways of expressing async calls
 and using async results. Promises are a first step, and hopefully the
 language also grows some syntax for them. IDB should jump on that train
 somehow.
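As an illustration of the Promises direction, here is a common shim pattern — a sketch, not anything the IDB spec defined at the time — that turns an IDBRequest-style object, which reports completion through onsuccess/onerror handlers, into a promise. A fake request object stands in for a real IDBRequest so the snippet is self-contained:

```javascript
// Sketch of the usual promisification shim for request-style objects.
function promisifyRequest(request) {
  return new Promise((resolve, reject) => {
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

// Fake request so the sketch runs anywhere; with a real IDBRequest the
// browser fires these handlers once the database work finishes.
const fakeRequest = {};
const resultPromise = promisifyRequest(fakeRequest);
fakeRequest.result = 42;
fakeRequest.onsuccess(); // resolves resultPromise with 42
```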


 Also, the description of .advance in browser vendors' documentation, e.g.
 on MDN, says "advance the cursor position forward by two places" for
 cursor.advance(2), but what it should really say is "advance the cursor
 position forward by two results". For example, let's say the cursor first
 landed on an item with primary key = 7, and you issue the statement
 cursor.advance(2). I would expect it to go to the item with primary key 5
 (for cursor direction = 'prev') but instead it goes to the item with
 primary key 2, because that's the 2nd match for the range argument from the
 cursor's current position


 What range argument are you referring to?

 Assuming the store has [1,2,3,4,5,6,7,8,9] and the cursor's range is not
 restricted, if the cursor's key=7 and direction='prev' then I would expect
 after advance(2) that key=5. If you're seeing key=2 can you post a sample
 somewhere (e.g. jsfiddle.com?)


 , which means that .advance(n) would be far more clear semantically
 speaking if it was simply done as .continue(n)  ... I guess if there is an
 understanding that the cursor is always at a matching item and that it
 could only continue/advance to the next/prev matching item, not literal
 'positions' in the table (i.e. sequentially through the list of all items)
 then there would be no confusion but the very concept of a cursor is
 foreign to most front end developers, and that's where the confusion comes
 from for many.

 My inclination as a front end developer, so far removed from database
 terminology, would be

 1) to deprecate .advance in favor of .continue(n) and


 continue(n) already has meaning - it jumps ahead to the key with value n



 2) if it makes sense (you have to say why it may not) have
 .continue()/.continue(n) cause the return of the execution flow similar to
 'continue' in a for loop.


 The API can't change the language - you return from functions via return
 or throw. Further, there are reasons you may want to do further processing
 after calling continue() - e.g. there may be multiple cursors (e.g. in a
 join operation) or for better performance you can call continue() as early
 as possible so that the database can do its work while you're processing

Re: [Bug 25376] - Web Components won't integrate without much testing

2014-05-29 Thread marc fawzi
Excuse my unsolicited comment here, being new to the webapps mailing list,
but here is my two cents feedback as a web developer...

I think the idea behind Web Components is good regardless of the flaws on
the spec. The idea is to create a standard built into the browser that will
allow library- and framework-free mass distribution of reusable
components. Today, we can build components, without half-broken stuff like
iframes, using JS/CSS isolation patterns. So the issue IMO is not about
doing something we couldn't do already, but standardizing the way things
are done so that we can have the ability to build and share components
without dependency on a certain library (e.g. jQuery) or framework (e.g.
Angular)

Marc





On Thu, May 29, 2014 at 4:05 AM, Axel Dahmen bril...@hotmail.com wrote:

 Yes, Tab, your below summary is correct.

 First, let me (again) stress the fact that my intention is NOT to give a
 critical judgment on Web Components.

 Many ways lead to Rome. Web Components is one way of implementing discrete
 components into a web page. And the HTML IFRAME element is just another. I
 wouldn't want to keep anyone from walking his way as long as I'm left to go
 mine.

 My sole intention (still) is to have the SEAMLESS attribute of the HTML
 IFRAME element amended for the given reasons. So I'm examining Web
 Components from this single perspective here. I'm not targeting on
 improving Web Components; I'm also not targeting to discuss Web Components
 when used in XML here, just HTML. So my wording here will not be a
 constructive one but one comparing advantages/disadvantages of one approach
 compared to the other within this constrained environment.

 So now here's an elaboration to my list:


 

  Web Components require a plethora of additional browser features and
 behaviours. 


 Everything Web Components promise to provide already exists (e.g. by
 using HTML IFRAME elements). So any effort put into developing or using Web
 Components is a wasted amount of time because it'd just recreate existing
 features while bloating browser engine code, making the engine more
 sluggish and error prone.

 Moreover, Web Components put a hard load on client engines, requiring them
 to support a whole bunch of features flawlessly, making it a hell for
 programmers to test their websites against different browsers. Whereas HTML
 IFRAME elements just display arbitrarily simple HTML pages served by a
 single web server. Any server code displaying static data can be assumed to
 display the data correctly on all browsers when successfully tested once.


 

  Web Components require loads of additional HTML, CSS and client script
 code for displaying content. 


 Let's start with Shadow DOM: the whole 100,000+-character specification
 adds a heavy burden to the client's script engine - and its implementation
 is completely unnecessary when using HTML IFRAME elements or anything
 other than Web Components.

 Custom Elements: It's unnecessary to have Custom Elements (please note the
 explicit capitalization here) to declaratively define complex components if
 you need ECMA script to bring them to life. Just create an ECMA script class
 doing all the work to decorate dedicated elements and you're done. Custom
 Elements just add redundancy.

 CSS: I'm just going to name a few additional CSS constructs here: @host,
 :scope, ::distributed -- While IFRAME content can reuse a website's CSS,
 Web Components require their discrete CSS. IFRAME content can be displayed
 as a web page on their own, Web Components can't. End of Story.


 

  Web Components install complex concepts (e.g. decorators) by
 introducing unique, complex, opaque behaviours, abandoning the pure nature
 of presentation. 


 As I've read on one of the replies I've received, decorators are
 deprecated by now, so I won't further elaborate on them. Still, Shadow DOM
 remains an unnecessary and complex design, compared to using HTML
 IFRAME elements.


 

  Web Components require special script event handling, so existing script
 code cannot be reused. 


 Decorators, Custom Elements and Shadow DOM require additional events for
 them to function properly. HTML IFRAME elements just use the existing
 events - if there is any event required at all to display their content. No
 further ado or implementation required when using HTML IFRAME elements.


 

  Web Components require special CSS handling, so existing CSS cannot be
 reused. 


 Please refer to my above elaboration on CSS for details.


 

  Web Components unnecessarily introduce a new clumsy “custom”, or
 “undefined” element, leaving the path of presentation. Custom Elements
 could as easy be achieved using CSS classes, and querySelector() in ECMA
 Script. 


 Please refer to my above elaboration on Custom Elements for details.


 

Re: WebApp installation via the browser

2014-05-30 Thread marc fawzi
Another question about the subject

https://developers.google.com/chrome/apps/docs/developers_guide

This says that they can also run in the background, which is huge.

I'm not sure if they support content scripts like extensions and packaged
apps do. I would love to find out if the spec says anything about that.

Thanks in advance,

Marc


On Fri, May 30, 2014 at 7:42 PM, Jeffrey Walton noloa...@gmail.com wrote:

 On Fri, May 30, 2014 at 9:04 PM, Brendan Eich bren...@mozilla.org wrote:
  Jeffrey Walton wrote:
 
  Are there any platforms providing the feature? Has the feature gained
  any traction among the platform vendors?
 
  Firefox OS wants this.
 Thanks Brendan.

 As a second related question, is an Installable WebApp considered a
 side-loaded app?




IndexedDB: MultiEntry index limited to 50 indexed values? Ouch.

2014-06-05 Thread marc fawzi
Hi Joshua, IDB folks,

I was about to wrap up work on a small app that uses IDB but to my absolute
surprise it looks like the number of indexed values in a MultiEntry index
is limited to 50. Maybe it's not meant to contain an infinite number, but 50
seems small and arbitrary. Why not 4096? Performance? If so, why is it NOT
mentioned in any of the IDB docs published by the browser vendors?

Following from my previous example (posted to this list), tags is a
multiEntry index defined like so:

objectStore.createIndex("tags", "tags", {unique: false, multiEntry: true})

When I put in say 3000 tags as follows:

var req = objectStore.add({tags: myTagsArray, someKey: someValue, etc: etc})

Only the first 50 elements of myTagsArray show up in the Keys column within
the Chrome Web Console (under Resources → IndexedDB → tags) and it's not a
display issue only: The cursor (shown below) cannot find any value beyond
the initial 50 values in myTagsArray. This is despite the cursor.value.tags
containing all 100+ values.

var range = IDBKeyRange.only(tags[0], 'prev')

var cursor = index.openCursor(range)

Is this by design? Any way to get around it (or do it differently)? And why
is the limit of 50 on indexed values not mentioned in any of the docs?

I bet I'm missing something... because I can't think of why someone would
pick the number 50.
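For reference, the spec's multiEntry behavior can be modeled in plain JS: one record whose indexed property is an array contributes one index entry per element, with no ceiling at 50. `multiEntryIndex` below is a hypothetical model, not the real API:

```javascript
// Model of what multiEntry: true means for an index: a record whose indexed
// property is an array yields one index entry per element.
function multiEntryIndex(records, keyPath) {
  const entries = [];
  for (const [primaryKey, record] of records) {
    for (const key of record[keyPath]) entries.push({ key, primaryKey });
  }
  return entries; // a real index would also keep these sorted by key
}

const tags = Array.from({ length: 3000 }, (_, i) => "tag" + i);
const idx = multiEntryIndex([[1, { tags }]], "tags");
idx.length;                          // 3000 - one entry per tag
idx.some((e) => e.key === "tag100"); // true - entries exist well past 50
```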

Thanks,

Marc


Re: IndexedDB: MultiEntry index limited to 50 indexed values? Ouch.

2014-06-05 Thread marc fawzi
You are correct:

A case of BROKEN debugging tool (Chrome Web Console in Chrome 35) and a
typo that produced no error (I had e.target.cursor instead of
e.target.result)

If the debugger is broken (I realize it's been fixed now) it makes it hard
to tell whether the bug is in my code or in the implementation.

Given that our prior conversation on this list resulted in some useful
feedback (according to you), I figured you wanted to continue getting
feedback here. I guess the distinction to make is between reports of
potential bugs (which should go to chromium-html5 unless they are cross
browser, in which case I'd post them here, right?) and API design issues
(spec issues), which is what my other post from a couple weeks ago was
about. If it's the latter I'll post here, and if the former I'll post to
chromium-html5. Since I only use Chrome in my current work, I'll probably
never bother verifying any bugs I find on other browsers, so I'll direct
them to chromium-html5.

Anything else?

Thanks for coding the test, and will post more DEBUGGER BUGS on
chromium-html5 that impact debug-ability of IDB apps. There are at least a
couple more.




On Thu, Jun 5, 2014 at 2:53 PM, Joshua Bell jsb...@google.com wrote:

 The spec has no such limitation, implicit or explicit. I put this together:

 http://pastebin.com/0GLPxekE

 In Chrome 35, at least, I had no problems indexing 100,000 tags. (It's a
 bit slow, though, so the pastebin code has only 10,000 by default)

 You mention 50 items, which just happens to be how many records are shown
 on one page of Chrome's IDB inspector in dev tools. And paging in the
 inspector was recently broken (known bug, fix just landed:
 http://crbug.com/379483). Are you sure you're not just seeing that?

 If you're seeing this consistently across browsers, my guess is that
 there's a subtle bug in your code (assuming we've ruled out a double-secret
 limit imposed by the cabal of browser implementors...) This isn't a support
 forum, so you may want to take the issue elsewhere - the chromium-html5 list is
 one such forum I lurk on.

 If you're not seeing this across browsers, then this is definitely not the
 right forum. As always, please try and reduce any issue to a minimal test
 case; it's helpful both to understand what assumptions you may be making
 (i.e. you mention a cursor; is that a critical part of your repro or is a
 simple count() enough?) and for implementors to track down actual bugs. If
 you do find browser bugs, please report them - crbug.com,
 bugzilla.mozilla.org, etc.



 On Thu, Jun 5, 2014 at 2:15 PM, marc fawzi marc.fa...@gmail.com wrote:

 Hi Joshua, IDB folks,

 I was about to wrap up work on a small app that uses IDB but to my
 absolute surprise it looks like the number of indexed values in a
 MultiEntry index is limited to 50. Maybe it's not meant to contain an
 infinite number, but 50 seems small and arbitrary. Why not 4096?
 Performance? If so, why is it NOT mentioned in any of the IDB docs
 published by the browser vendors?

 Following from my previous example (posted to this list), tags is a
 multiEntry index defined like so:

 objectStore.createIndex("tags", "tags", {unique: false, multiEntry: true})

 When I put in say 3000 tags as follows:

 var req = objectStore.add({tags: myTagsArray, someKey: someValue, etc:
 etc})

 Only the first 50 elements of myTagsArray show up in the Keys column
 within the Chrome Web Console (under Resources → IndexedDB → tags) and
 it's not a display issue only: The cursor (shown below) cannot find any
 value beyond the initial 50 values in myTagsArray. This is despite the
 cursor.value.tags containing all 100+ values.

 var range = IDBKeyRange.only(tags[0], 'prev')

 var cursor = index.openCursor(range)

 Is this by design? Any way to get around it (or do it differently)? And
 why is the limit of 50 on indexed values not mentioned in any of the docs?

 I bet I'm missing something... because I can't think of why someone would
 pick the number 50.

 Thanks,

 Marc








Re: IDBObjectStore/IDBIndex.exists(key)

2014-06-21 Thread Marc Fawzi
I think the same thought pattern can be applied elsewhere in the API design
for v2.

Consider the scenario of trying to find whether a given index exists or not
(upon upgradeneeded). For now, we have to write noisy code like
[].slice.call(objectStore.indexNames()).indexOf(someIndex)  Why couldn't
indexNames be an array?  and dare we ask for this to either return the
index or null: objectStore.index(someIndex)  ? I understand the argument
for throwing an error here but I think a silent null is more practical.
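A small aside on that pattern: indexNames is a DOMStringList property (not a method), and DOMStringList already exposes a synchronous contains(). The snippet below models it with a plain array-like object so it runs anywhere; the index names are made up:

```javascript
// Plain-object model of a DOMStringList (array-like with contains()).
const indexNames = {
  0: "byDate", 1: "byTag", length: 2,
  contains(name) { return [].slice.call(this).indexOf(name) !== -1; },
};

// The noisy pattern from the message:
const hasByTag = [].slice.call(indexNames).indexOf("byTag") !== -1; // true

// What DOMStringList already provides, no array conversion needed:
indexNames.contains("byTag"); // true
indexNames.contains("nope");  // false
```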

So yes, anything that makes the API easier to consume would be terrific.

I had a very hard time until I saw the light. There's some solid thought
behind the existing API, but it's also not designed for web development in
terms of how it implements a good idea, not whether or not the idea is good.
Sorry for the mini rant. It took me a little too long to get this app done
which is my first time using IndexedDB (with a half broken debugger in
Chrome): https://github.com/idibidiart/AllSeeingEye






On Sat, Jun 21, 2014 at 5:39 PM, Jonas Sicking jo...@sicking.cc wrote:

 Hi all,

 I found an old email with notes about features that we might want to put
 in v2.

 Almost all of them were brought up in the recent threads about
 IDBv2. However there was one thing on the list that I haven't seen brought
 up.

 It might be a nice perf improvement to add support for a
 IDBObjectStore/IDBIndex.exists(key) function.

 This would require less IO and less object creation than simply using
 .get(). It is probably particularly useful when doing a filtering join
 operation between two indexes/object stores. But it is probably useful
 other times too.

 Is this something that others think would be useful?

 / Jonas



Re: IDBObjectStore/IDBIndex.exists(key)

2014-06-23 Thread Marc Fawzi
Joshua,

you're on, and I'll be happy to make suggestions once I've thought them 
through... At least to some extent :)

Jonas,


There is a small performance difference between them though when
applied to indexes. Indexes could have multiple entries with the same
key (but different primaryKey), in which case count() would have to
find all such entries, whereas exists() would only need to find the
first.


Isn't it also possible to open a cursor on an index with IDBKeyRange.only(key)?
Wouldn't that confirm/deny existence, and you could abandon the
cursor/transaction after it.

Having said that, and speaking naively here, a synchronous .exists() or
.contains() would be useful, as existence checks shouldn't have to be
exclusively asynchronous; that complicates how we'd write "if this exists
and that other thing doesn't exist, then do xyz".

However, a good Promises implementation (see Dexie.js as a potential 
inspiration/candidate for such) may allow us to work with such asynchronous 
existence checks in a way that keeps the code complexity manageable.

Take all this with a grain of salt. I'm learning how to use IDB as I go and 
these are my mental notes during this process, so not always making total 
sense. 

Marc

Sent from my iPhone

 On Jun 23, 2014, at 12:21 PM, Jonas Sicking jo...@sicking.cc wrote:
 
 There is a small performance difference between them though when
 applied to indexes. Indexes could have multiple entries with the same
 key (but different primaryKey), in which case count() would have to
 find all such entries, whereas exists() would only need to find the
 first.
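Jonas' point can be made concrete with a crude model (`countMatches` and `exists` below are illustrative only): for a key with duplicate entries in an index, counting must visit every duplicate, while an existence check could stop at the first hit.

```javascript
// Crude cost model over a sorted list of index keys, duplicates allowed.
function countMatches(sortedKeys, key) {
  let visits = 0, n = 0;
  for (const k of sortedKeys) {
    if (k < key) continue; // a real index would seek here, not scan
    if (k > key) break;
    visits++; n++;
  }
  return { n, visits };
}

function exists(sortedKeys, key) {
  for (const k of sortedKeys) {
    if (k === key) return true; // one matching entry is enough
    if (k > key) break;
  }
  return false;
}

const idx = [1, 2, 2, 2, 3];
countMatches(idx, 2); // { n: 3, visits: 3 } - touched every duplicate
exists(idx, 2);       // true - could stop after the first match
```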



Re: IDBObjectStore/IDBIndex.exists(key)

2014-06-23 Thread Marc Fawzi
No, I was suggesting .exists() can be synchronous to make it useful

I referred to it as .contains() too so sorry if that conflated them for you but 
it has nothing to do with the .contains Joshua was talking about.

In short, an asynchronous .exists() as you proposed does seem redundant 

But I was wondering what about a synchronous .exists() (the same proposal you 
had but synchronous as opposed to asynchronous) 

Makes any sense?

Sent from my iPhone

 On Jun 23, 2014, at 1:28 PM, Jonas Sicking jo...@sicking.cc wrote:
 
 On Mon, Jun 23, 2014 at 1:03 PM, Marc Fawzi marc.fa...@gmail.com wrote:
 Having said that, and speaking naively here, a synchronous .exists() or 
 .contains() would be useful as existence checks shouldn't have to be 
 exclusively asynchronous as that complicates how we'd write: if this exists 
  and that other thing doesn't exist, then do xyz
 
 Note that the .contains() discussion is entirely separate from the
 .exists() discussion. I.e. your subject is entirely off-topic to this
 thread.
 
 The .exists() function I proposed lives on IDBObjectStore and IDBIndex
 and is an asynchronous database operation.
 
 The .contains() function that you are talking about lives on an
 array-like object and just does some in-memory tests which means that
 it's synchronous.
 
 So the two are completely unrelated.
 
 / Jonas



Re: IDBObjectStore/IDBIndex.exists(key)

2014-06-23 Thread Marc Fawzi

We can do synchronous tests against the schema as it is feasible for 
implementations to maintain a copy of the current schema for an open connection 
in memory in the same thread/process as script. (Or at least, no implementer 
has complained.)


Oh cool. So I could have a 3rd party component in my app that can test the
schema directly and run certain functions only if some combination of
conditions is met, and having those tests be synchronous keeps them simple.
For example: does the xyz object store exist and does it have the right
indices? If so, the component would run. Else it wouldn't.

I get the thing about the cost of looking up a value and why that has to be 
asynchronous.

Jonas,

I have no request per se. I'm just super curious about the rationale behind
the v2 APIs, so I might have questions or curiosity that are expressed
indirectly as suggestions. Sometimes I say naive things, and other times the
suggestions may be directly useful or bring up some other thoughts. I'll try
to minimize the confusion. But do look out there: almost every front end
developer I've talked to thinks IndexedDB is less than usable, and the
library makers haven't yet provided something both truly solid and worth
giving up the native APIs for. So I'm trying to understand things better for
myself so I can help build a better library, but I would rather have the IDB
native API come to a point in its evolution where front end developers can
consume it directly with no intervening layer. That's why I'm asking and
making sometimes dumb and sometimes useful suggestions: to try and understand
how the IDB designers think. If that actually makes any sense.


Sent from my iPhone

 On Jun 23, 2014, at 2:22 PM, Joshua Bell jsb...@google.com wrote:
 
 We can do synchronous tests against the schema as it is feasible for 
 implementations to maintain a copy of the current schema for an open 
 connection in memory in the same thread/process as script. (Or at least, no 
 implementer has complained.)



Re: Proposal for User Agent Augmented Authorization

2014-08-07 Thread Marc Fawzi
Probably a naive comment, but I'm curious and interested in learning since
it's one thing that's been missing from browsers:

Does your last comment mean that you'd be baking in dependency on a certain
auth standard in the user agent? What happens when the part of the
authentication model that is outside the user-agent has a breaking change
but not every website updates to that version? By augmented do you mean
it's an additional optional layer?




On Wed, Aug 6, 2014 at 7:02 PM, Sam Penrose spenr...@mozilla.com wrote:

 I wrote some user stories for RPs and IdPs with your comments in mind, and
 it feels like I may have taken the initial cut of the API too far from HTTP
 semantics:

   https://github.com/SamPenrose/ua-augmented-auth/issues/9

 It also feels like the API and stories need a second protocol, or at least
 a second Oauth implementation, to firm them up. I'm going to look at
 $MAJOR_SOCIAL_NETWORK_FEDERATED_AUTH. If anyone can suggest specific
 HTTP-based protocols to consider*, I'd be much obliged. Expect a revised
 proposal after a couple clock days of work; calendar ETA TBD.

 * IndieAuth was suggested here:
 https://github.com/SamPenrose/ua-augmented-auth/issues/1

 - Original Message -
  From: Sam Penrose spenr...@mozilla.com
  To: Mike West mk...@google.com
  Cc: Webapps WG public-webapps@w3.org
  Sent: Wednesday, August 6, 2014 10:52:52 AM
  Subject: Re: Proposal for User Agent Augmented Authorization
 
 
 
  - Original Message -
   From: Mike West mk...@google.com
   Hey Sam, this looks interesting indeed!
 
  Thanks for the very helpful comments. My main takeaway is that I have
 failed
  to communicate the use cases we are trying to solve. I really appreciate
  your getting down into the weeds of my proposal; you would have had less
  work to do if I had put clear user stories up front. I will remedy that.
 
   It's not clear to me how this proposal interacts with the credential
   management proposal I sent out last week. Does the following more or
 less
   describe the integration you're thinking about, or have I completely
   misunderstood the proposal?
 
  I haven't thought of a specific integration yet, but to be crystal
 clear: I
  am not proposing a *replacement* for Credentials Management as you have
  defined it. It may be that UAA is a vague, handy-wavy, redundant
 abstraction
  of what you've so specifically laid out with CM. Or it may be that CM is
 a
  one specific path through the general functionality I'm trying to enable.
  See below.
 
   ```
   navigator.credentials.request({ federations: ['https://idp1.net/', '
   https://idp2.net' ] }).then(function(c) {
 // If the user picks a supported IDP, authenticate:
 if (c  c instanceof FederatedCredential) {
   navigator.auth.authenticate({
 authURL: ...,
 returnURL: ...
   });
 }
   });
   ```
  
   I was hoping that we could find a way to hide some of that magic
 behind the
   initial call to `.request()`. If the user picks a stored credential
 from
   IDP #1, it seems like we'd be able to come up with a system that
 returned
   whatever IDP-specific tokens directly as part of resolving the promise.
   That is, rather than popping up one picker, then resolving the promise,
   returning control to the website, and then popping up some additional
 UI,
   we could handle the IDP-side authentication process in the browser
 before
   returning a credential.
 
  Identity and authentication are coarser, higher-level concepts than
 credentials, and HTTP requests are coarser, higher-level objects than
  Javascript promises. Most of all, user agent is a coarser, higher-level
  term than browser. You are correct that my proposal does not fit the
  specific CM-in-browser-with-promises flow that you put forth -- it's not
  meant to. It's also not meant to compete with it :-). We may just need a
  little time to figure out how they fit together, or nest, or at worst
  coexist happily side-by-side. Let me add specific user stories to my
 repo,
  and then we can both ponder the situation.
 
   We could, for instance, remove the need for parameters to
 `authenticate` by
   defining suitable attributes in an IDP manifest, as sketched out at
  
 http://projects.mikewest.org/credentialmanagement/spec/#identity-provider-manifest
 
  Generally I like the idea of *augmenting* functionality with manifests. I
  think that *requiring* IdPs to implement manifests adds a hurdle for IdP
  support, and the benefit ought to match the cost. Since lack of support
 from
  an IdP is a game-over cost for a chunk of the web, the benefit of
 requiring manifests ought to be similarly high; much higher, it seems to
 me, than merely removing the need for parameters, though maybe I am
 mistaken. Of
 course,
  if we must require a manifest for other reasons, then by all means let's
 add
  all the invariant fields we can to them.
 
   -mike
  
   --
   Mike West mk...@google.com
   Google+: https://mkw.st/+, Twitter: 

Re: Encapsulating CSS in Shadow DOM

2014-08-20 Thread Marc Fawzi
Hmm. I thought that's part of the purpose of Web Components, i.e. to
encapsulate CSS and JS? Is it not so?


On Wed, Aug 20, 2014 at 1:42 AM, Henrik Haugberg henrik.haugb...@gmail.com
wrote:

 I am hoping it will be possible to have "root em"-like font-size values,
 but based on the shadow root (or host) when using shadow DOM, like
 10hem ("host em") or something like that. That would make it a lot easier to
 make scalable widgets/components where only one or a few CSS properties
 would have to be changed when scaling the content in different contexts.

 I have described this in more detail here:


 http://stackoverflow.com/questions/24953647/font-size-css-values-based-on-shadow-dom-root
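For readers unfamiliar with the request, here is a pure-JS sketch of the semantics being asked for. Note that "hem" is NOT a real CSS unit and `resolveHemStyles` is a hypothetical helper; this only models the intended behavior, where every hem value resolves against the shadow host's font size, so scaling a whole widget means changing a single number on the host:

```javascript
// Hypothetical sketch of the proposed "host em" (hem) unit. "hem" is not
// a real CSS unit; this models the intended semantics in plain JS: each
// hem value resolves against the shadow host's font-size.
function resolveHemStyles(styles, hostFontSizePx) {
  const resolved = {};
  for (const [prop, value] of Object.entries(styles)) {
    const match = /^([\d.]+)hem$/.exec(value);
    // Values without the hypothetical unit pass through untouched.
    resolved[prop] = match
      ? `${parseFloat(match[1]) * hostFontSizePx}px`
      : value;
  }
  return resolved;
}

const widget = { padding: '1.5hem', borderRadius: '0.25hem', color: 'teal' };
const compact = resolveHemStyles(widget, 8);  // padding resolves to '12px'
const roomy = resolveHemStyles(widget, 16);   // padding resolves to '24px'
```

In practice, a similar effect can be approximated today by setting a font-size (or custom property) on the host and sizing the component's internals in plain `em`.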



Re: Encapsulating CSS in Shadow DOM

2014-08-20 Thread Marc Fawzi
n/m ... the request is more specific than the email subject... the JS
solution to the problem is certainly less appealing than a CSS-only
solution... will be watching this :)


On Wed, Aug 20, 2014 at 9:20 AM, Marc Fawzi marc.fa...@gmail.com wrote:

 Hmm. I thought that's part of the purpose of Web Components, i.e. to
 encapsulate CSS and JS? Is it not so?


 On Wed, Aug 20, 2014 at 1:42 AM, Henrik Haugberg 
 henrik.haugb...@gmail.com wrote:

 I am hoping it will be possible to have "root em"-like font-size values,
 but based on the shadow root (or host) when using shadow DOM, like
 10hem ("host em") or something like that. That would make it a lot easier to
 make scalable widgets/components where only one or a few CSS properties
 would have to be changed when scaling the content in different contexts.

 I have described this in more detail here:


 http://stackoverflow.com/questions/24953647/font-size-css-values-based-on-shadow-dom-root





Re: What I am missing

2014-11-18 Thread Marc Fawzi
Allowing this script to run may open you to all kinds of malicious attacks
by 3rd parties not associated with the party whom you're trusting.

If I give App XYZ super power to do anything, and XYZ gets
compromised/hacked then I'll be open to all sorts of attacks.

It's not an issue of party A trusting party B. It's an issue of trusting
that party B has no security holes in their app whatsoever, and that is one
of the hardest things to guarantee.


On Tue, Nov 18, 2014 at 8:00 PM, Michaela Merz michaela.m...@hermetos.com
wrote:


 Yes Boris - I know. As long as it doesn't have advantages for the user
 or the developer - why bother with it? If signed code would allow
 special features - like true fullscreen or direct file access  - it
 would make sense. Signed code would make script much more resistant to
 manipulation and therefore would help in environments where trust and/or
 security is important.

 We use script for much, much more than we did just a year or so ago.

 Michaela



 On 11/19/2014 04:40 AM, Boris Zbarsky wrote:
  On 11/18/14, 10:26 PM, Michaela Merz wrote:
  First: We need signed script code.
 
  For what it's worth, Gecko supported this for a while.  See
  
 http://www-archive.mozilla.org/projects/security/components/signed-scripts.html
 .
   In practice, people didn't really use it, and it made the security
  model a _lot_ more complicated and hard to reason about, so the
  feature was dropped.
 
  It would be good to understand how proposals along these lines differ
  from what's already been tried and failed.
 
  -Boris
 






Re: What I am missing

2014-11-18 Thread Marc Fawzi

Signed code doesn't protect against malicious or bad code. It only
guarantees that the code is actually from the certificate owner


if I trust you and allow your signed script the permissions it asks for, and
you can't guarantee that it won't be used by some malicious 3rd party site
to hack me (i.e. the security holes in your script get turned against me),
then there is just too much risk in allowing the permissions

the concern is that the average user will not readily grasp the risk
involved in granting certain powerful permissions to some insecure script
from a trusted source

On Tue, Nov 18, 2014 at 9:35 PM, Michaela Merz michaela.m...@hermetos.com
wrote:

  Well .. it would be an "all scripts signed or no scripts signed" kind of
 a deal. You can download malicious code everywhere - not only as scripts.
 Signed code doesn't protect against malicious or bad code. It only
 guarantees that the code is actually from the certificate owner .. and
 has not been altered without the signer's consent.

 Michaela




 On 11/19/2014 06:14 AM, Marc Fawzi wrote:

 Allowing this script to run may open you to all kinds of malicious
 attacks by 3rd parties not associated with the party whom you're
 trusting.

  If I give App XYZ super power to do anything, and XYZ gets
 compromised/hacked then I'll be open to all sorts of attacks.

  It's not an issue of party A trusting party B. It's an issue of trusting
 that party B has no security holes in their app whatsoever, and that is one
 of the hardest things to guarantee.


 On Tue, Nov 18, 2014 at 8:00 PM, Michaela Merz michaela.m...@hermetos.com
  wrote:


 Yes Boris - I know. As long as it doesn't have advantages for the user
 or the developer - why bother with it? If signed code would allow
 special features - like true fullscreen or direct file access  - it
 would make sense. Signed code would make script much more resistant to
 manipulation and therefore would help in environments where trust and/or
 security is important.

 We use script for much, much more than we did just a year or so ago.

 Michaela



 On 11/19/2014 04:40 AM, Boris Zbarsky wrote:
  On 11/18/14, 10:26 PM, Michaela Merz wrote:
  First: We need signed script code.
 
  For what it's worth, Gecko supported this for a while.  See
  
 http://www-archive.mozilla.org/projects/security/components/signed-scripts.html
 .
   In practice, people didn't really use it, and it made the security
  model a _lot_ more complicated and hard to reason about, so the
  feature was dropped.
 
  It would be good to understand how proposals along these lines differ
  from what's already been tried and failed.
 
  -Boris
 








Re: What I am missing

2014-11-18 Thread Marc Fawzi
So there is no way for an unsigned script to exploit security holes in a
signed script?

Funny you mention crypto currencies as an idea to get inspiration
from...Trust but verify is detached from that... a browser can monitor
what the signed scripts are doing and if it detects a potentially malicious
pattern it can halt the execution of the script and let the user decide if
they want to continue...


On Tue, Nov 18, 2014 at 10:34 PM, Florian Bösch pya...@gmail.com wrote:

 There are some models that are a bit better than trust by royalty
 (app-stores) and trust by hierarchy (TLS). One of them is trust flowing
 along flow limited edges in a graph (as in Advogato). This model however
 isn't free from fault, as when a highly trusted entity gets compromised,
 there's no quick or easy way to revoke that trust for that entity. Also, a
 trust graph such as this doesn't solve the problem of stake. We trust say,
 the twitter API, because we know that twitter has staked a lot into it. If
 they violate that trust, they suffer proportionally more. A graph doesn't
 solve that problem, because it cannot offer a proof of stake.

 Interestingly, there are way to provide a proof of stake (see various
 cryptocurrencies that attempt to do that). Of course proof of stake
 cryptocurrencies have their own problems, but that doesn't entirely
 invalidate the idea. If you can prove you have a stake of a given size,
 then you can enhance a flow limited trust graph insofar as to make it less
 likely an entity gets compromised. The difficulty with that approach of
 course is, it would make acquiring high levels of trust prohibitively
 expensive (as in getting the privilege to access the filesystem could run
 you into millions of $ of stake shares).




Re: What I am missing

2014-11-19 Thread Marc Fawzi


 So there is no way for an unsigned script to exploit security holes in a
 signed script?

Of course there's a way. But by the same token, there's a way a signed
script can exploit security holes in another signed script. Signing itself
doesn't establish any trust, or security.


Yup, that's also what I meant. Signing does not imply secure, but to the
average non-technical user a signed app from a trusted party may convey
both trust and security, so they wouldn't think twice about installing such
a script even if it asked for some powerful permissions that can be
exploited by another script.



 Funny you mention crypto currencies as an idea to get inspiration
 from...Trust but verify is detached from that... a browser can monitor
 what the signed scripts are doing and if it detects a potentially malicious
 pattern it can halt the execution of the script and let the user decide if
 they want to continue...

That's not working for a variety of reasons. The first reason is that
identifying what a piece of software does intelligently is one of those
really hard problems. As in Strong-AI hard.


Well, the user can set up the rules for what is considered a malicious action,
and there would be ready-made configurations (best practices codified
in config) that would be the default in the browser. And then they can
exempt certain scripts.

I realize this is an open ended problem and no solution is going to address
it 100% ... It's the nature of open systems to be open to attacks but it's
how the system deals with the attack that differentiates it. It's a wide
open area of research I think, or should be.

But do we want a security model that's not extensible and not flexible? The
answer is most likely NO.





On Tue, Nov 18, 2014 at 11:03 PM, Florian Bösch pya...@gmail.com wrote:

 On Wed, Nov 19, 2014 at 7:54 AM, Marc Fawzi marc.fa...@gmail.com wrote:

 So there is no way for an unsigned script to exploit security holes in a
 signed script?

 Of course there's a way. But by the same token, there's a way a signed
 script can exploit security holes in another signed script. Signing itself
 doesn't establish any trust, or security.


 Funny you mention crypto currencies as an idea to get inspiration
 from...Trust but verify is detached from that... a browser can monitor
 what the signed scripts are doing and if it detects a potentially malicious
 pattern it can halt the execution of the script and let the user decide if
 they want to continue...

 That's not working for a variety of reasons. The first reason is that
 identifying what a piece of software does intelligently is one of those
 really hard problems. As in Strong-AI hard. Failing that, you can monitor
 what APIs a piece of software makes use of, and restrict access to those.
 However, that's already satisfied without signing by sandboxing.
 Furthermore, it doesn't entirely solve the problem as any android user will
 know. You get a ginormeous list of premissions a given piece of software
 would like to use and the user just clicks yes. Alternatively, you get
 malware that's not trustworthy, that nobody managed to properly review,
 because the non trusty part was burried/hidden by the author somewhere deep
 down, to activate only long after trust extension by fiat has happened.

 But even if you'd assume that this somehow would be an acceptable model,
 what do you define as malicious? Reformatting your machine would be
 malicious, but so would be posting on your facebook wall. What constitutes
 a malicious pattern is actually more of a social than a technical problem.



Re: do not deprecate synchronous XMLHttpRequest

2015-02-06 Thread Marc Fawzi
You either block the JS event loop or you don't. If you do, I'm not sure
how a long-running synchronous call to the server won't result in a "this
script is taking too long" alert and basically hold up all execution till
it's done. What am I missing? If you want to synchronize tasks you can use
promises or callbacks or (in ES6/7) I'm sure other ways too.
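To make the non-blocking alternative concrete, here is an illustrative sketch of sequencing dependent async work with promises. The `simulateRequest` helper is hypothetical and merely stands in for an async XHR or fetch call:

```javascript
// Illustrative sketch: sequencing dependent async work without blocking
// the event loop. simulateRequest is a stand-in for an async XHR/fetch.
function simulateRequest(value, delayMs) {
  return new Promise(resolve => setTimeout(() => resolve(value * 2), delayMs));
}

async function fetchTwice() {
  // Each await yields control back to the event loop instead of freezing
  // the page, so no "script is taking too long" dialog can trigger.
  const first = await simulateRequest(1, 10);      // resolves to 2
  const second = await simulateRequest(first, 10); // resolves to 4
  return second;
}
```

The second request waits for the first, exactly as with a synchronous call, but the UI stays responsive in between.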

On Fri, Feb 6, 2015 at 10:38 AM, Michaela Merz michaela.m...@hermetos.com
wrote:

  Well yeah. But the manufacturer of your audio equipment doesn't come
 into your home to yank the player out of your setup. But that's not really
 the issue here. We're talking about technology that is being developed so
 that people like me can build good content. As long as there are a lot of
 people out there using synchronous calls, it would be the job of the
 browser development community to find a way to make such calls less harmful.

 Michaela



 On 02/06/2015 12:28 PM, Marc Fawzi wrote:

 I have several 8-track tapes from the early-to-mid 70s that I'm really
 fond of. They are bigger than my iPod. Maybe I can build an adapter with
 mechanical parts, magnetic reader and A/D convertor etc. But that's my job,
 not Apple's job.

  The point is, old technologies die all the time, and people who want to
 hold on to old content and have it play on the latest player (browser) need
 to either recode the content or build adapters/hosts/wrappers such that the
 old content will think it's running in the old player.

   As far as stuff we build today, we have several options for waiting
 until the ajax response comes back, and I'm not sure why we'd want to block
 everything until it does. It sounds unreasonable. There are legitimate
 scenarios for blocking the event loop but not when it comes to fetching
 data from a server.





 On Fri, Feb 6, 2015 at 9:27 AM, Michaela Merz michaela.m...@hermetos.com
 wrote:


 Well .. may be some folks should take a deep breath and think what they
 are doing. I am 'just' coding web services and too often I find myself
 asking: Why did the guys think that this would make sense? Indexeddb is
 such a case. It might be a clever design, but it's horrible from a coder's
 perspective.

 Would it have been the end of the world to stick with some kind of
 database language most coders already are familiar with? Same with (sand
 boxed) file system access. Google went the right way with functions trying
 to give us what we already knew: files, dirs, read, write, append.  But
 that's water under the bridge.

 I have learned to code my stuff in a way that I have to invest time and
 work so that my users don't have to. This is IMHO a good approach.
 Unfortunately - some people up the chain have a different approach.
 Synchronous calls are bad. Get rid of them. Don't care if developers have a
 need for it. Why bother. Our way or highway. Frankly - I find that
 offensive.  If you believe that synchronous calls are too much of a problem
 for the browser, find a way for the browser to deal with it.

 Building browsers and adding functionality is not an end in itself. The
 purpose is not to make cool stuff. We don't need people telling us what we
 are allowed to do. Don't get me wrong: I really appreciate your work and I
 am excited about what we can do in script nowadays. But please give more
 thought to the folks coding web sites. We are already dealing with a wide
 variety of problems: From browser incompatibilities, to responsive designs,
 server side development, sql, memcached, php, script - you name it. Try to
 make our life easier by keeping stuff simple and understandable even for
 those, who don't have the appreciation or the understanding what's going on
 under the hood of a browser.

 Thanks.

 Michaela





 On 02/06/2015 09:54 AM, Florian Bösch wrote:
 
  I had an Android device, but now I have an iPhone. In addition to
 the popup problem, and the fake X on ads, the iPhone browsers (Safari,
 Chrome, Opera) will start to show a site, then they will lock up for 10-30
 seconds before finally becoming responsive.
 
 
  Via. Ask Slashdot:
 http://ask.slashdot.org/story/15/02/04/1626232/ask-slashdot-gaining-control-of-my-mobile-browser
 
  Note: Starting with Gecko 30.0 (Firefox 30.0 / Thunderbird 30.0 /
 SeaMonkey 2.27), synchronous requests on the main thread have been
 deprecated due to the negative effects to the user experience.
 
 
 
  Via
 https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Synchronous_and_Asynchronous_Requests
 
  Heads up! The XMLHttpRequest2 spec was recently changed to prohibit
 sending a synchronous request when xhr.responseType is set. The idea behind
 the change is to help mitigate further usage of synchronous xhrs wherever
 possible.
 
 
  Via
 http://updates.html5rocks.com/2012/01/Getting-Rid-of-Synchronous-XHRs
 
 







Re: do not deprecate synchronous XMLHttpRequest

2015-02-06 Thread Marc Fawzi
I have several 8-track tapes from the early-to-mid 70s that I'm really fond
of. They are bigger than my iPod. Maybe I can build an adapter with
mechanical parts, magnetic reader and A/D convertor etc. But that's my job,
not Apple's job.

The point is, old technologies die all the time, and people who want to
hold on to old content and have it play on the latest player (browser) need
to either recode the content or build adapters/hosts/wrappers such that the
old content will think it's running in the old player.

As far as stuff we build today, we have several options for waiting until
the ajax response comes back, and I'm not sure why we'd want to block everything
until it does. It sounds unreasonable. There are legitimate scenarios for
blocking the event loop but not when it comes to fetching data from a
server.





On Fri, Feb 6, 2015 at 9:27 AM, Michaela Merz michaela.m...@hermetos.com
wrote:


 Well .. may be some folks should take a deep breath and think what they
 are doing. I am 'just' coding web services and too often I find myself
 asking: Why did the guys think that this would make sense? Indexeddb is
 such a case. It might be a clever design, but it's horrible from a coder's
 perspective.

 Would it have been the end of the world to stick with some kind of
 database language most coders already are familiar with? Same with (sand
 boxed) file system access. Google went the right way with functions trying
 to give us what we already knew: files, dirs, read, write, append.  But
 that's water under the bridge.

 I have learned to code my stuff in a way that I have to invest time and
 work so that my users don't have to. This is IMHO a good approach.
 Unfortunately - some people up the chain have a different approach.
 Synchronous calls are bad. Get rid of them. Don't care if developers have a
 need for it. Why bother. Our way or highway. Frankly - I find that
 offensive.  If you believe that synchronous calls are too much of a problem
 for the browser, find a way for the browser to deal with it.

 Building browsers and adding functionality is not an end in itself. The
 purpose is not to make cool stuff. We don't need people telling us what we
 are allowed to do. Don't get me wrong: I really appreciate your work and I
 am excited about what we can do in script nowadays. But please give more
 thought to the folks coding web sites. We are already dealing with a wide
 variety of problems: From browser incompatibilities, to responsive designs,
 server side development, sql, memcached, php, script - you name it. Try to
 make our life easier by keeping stuff simple and understandable even for
 those, who don't have the appreciation or the understanding what's going on
 under the hood of a browser.

 Thanks.

 Michaela





 On 02/06/2015 09:54 AM, Florian Bösch wrote:
 
  I had an Android device, but now I have an iPhone. In addition to
 the popup problem, and the fake X on ads, the iPhone browsers (Safari,
 Chrome, Opera) will start to show a site, then they will lock up for 10-30
 seconds before finally becoming responsive.
 
 
  Via. Ask Slashdot:
 http://ask.slashdot.org/story/15/02/04/1626232/ask-slashdot-gaining-control-of-my-mobile-browser
 
  Note: Starting with Gecko 30.0 (Firefox 30.0 / Thunderbird 30.0 /
 SeaMonkey 2.27), synchronous requests on the main thread have been
 deprecated due to the negative effects to the user experience.
 
 
 
  Via
 https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Synchronous_and_Asynchronous_Requests
 
  Heads up! The XMLHttpRequest2 spec was recently changed to prohibit
 sending a synchronous request when xhr.responseType is set. The idea behind
 the change is to help mitigate further usage of synchronous xhrs wherever
 possible.
 
 
  Via
 http://updates.html5rocks.com/2012/01/Getting-Rid-of-Synchronous-XHRs
 
 





Thread-Safe DOM // was Re: do not deprecate synchronous XMLHttpRequest

2015-02-11 Thread Marc Fawzi

even if the DOM must remain a single-threaded and truly
lock/barrier/fence-free data structure, what you are reaching for is doable
now, with some help from standards bodies. ***But not by vague blather***


You're contradicting yourself within a single two-line paragraph, being
vague in your own statement ... that what I'm reaching for is doable.

I know you know what you're talking about and I know what I'm reaching for
is doable.

What I don't know are the details of how that might be doable. I think a
lot of developers would be interested in that. Not just me. I think you
dropped some hint there but it's nowhere near a detailed and clear answer,
so again shutting down the discussion because you know more? What the hey!
Mr. Eich. This is a public discussion forum. If it wasn't open to the
public, it would be private. In discussions, vagueness is not the enemy of
the truth, only part of the journey... Relax.

On a more serious basis, please provide us with clarity or point us to
discussions on this topic that might help us understand how to get
there!




On Tue, Feb 10, 2015 at 7:19 PM, Brendan Eich bren...@secure.meer.net
wrote:

 Marc Fawzi wrote:

 I've recently started using something called an atom in ClojureScript and
 it is described as a mutable reference to an immutable value. It holds the
 state for the app and can be safely mutated by multiple components, and
 has an interesting thing called a cursor. It is lock free but synchronous.
 I think I understand it to some degree.


 The win there is the mutations are local to the clients of the atom, but
 the underlying data structure it reflects is immutable. The DOM is not
 immutable and must not be for backward compatibility.

  I don't understand the implementation of the DOM but why couldn't we have
 a representation of it that acted like the atom in clojure and then write
 the diff to the actual DOM.


 Because browsers don't work that way. I wish they did, but they can't
 afford to stop the world, reimplement, optimize (if possible -- they will
 probably see regressions that are both hard to fix, and that hurt them in
 the market), and then restart the world.

  Is that what React does with its virtual DOM? No idea but I wasn't dropping
 words, I was describing what was explained to me about the atom in clojure
 and I saw parallels and possibility of something similar in JS to manage
 the DOM.


 I'm a big React fan. But it can virtualize the DOM using JS objects and do
 diffing/patching, without having to jack up the browsers (all of them; note
 stop the world above), rewrite their DOMs to match, and get them
 optimized and running again.
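The virtualize-and-diff approach described here can be sketched in userland JS with no browser changes at all. The following toy diff over plain-object trees is purely illustrative; it is nothing like React's actual algorithm, and all names are made up:

```javascript
// Toy sketch of virtual-DOM diffing: compare two plain-object trees and
// emit a list of patch operations, instead of touching the live DOM.
// Real libraries are far more sophisticated; this just shows the idea.
function diff(oldNode, newNode, path = 'root') {
  if (oldNode === undefined) return [{ op: 'insert', path, node: newNode }];
  if (newNode === undefined) return [{ op: 'remove', path }];
  if (oldNode.tag !== newNode.tag) return [{ op: 'replace', path, node: newNode }];
  const patches = [];
  if (oldNode.text !== newNode.text) {
    patches.push({ op: 'setText', path, text: newNode.text });
  }
  const oldKids = oldNode.children || [];
  const newKids = newNode.children || [];
  const len = Math.max(oldKids.length, newKids.length);
  for (let i = 0; i < len; i++) {
    patches.push(...diff(oldKids[i], newKids[i], `${path}/${i}`));
  }
  return patches;
}

const before = { tag: 'ul', children: [{ tag: 'li', text: 'a' }] };
const after = {
  tag: 'ul',
  children: [{ tag: 'li', text: 'a' }, { tag: 'li', text: 'b' }],
};
// diff(before, after) yields a single insert patch at path 'root/1'.
```

A renderer would then apply only those patches to the real DOM, which is why no change to the browsers' DOM implementation is needed.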

  With all the brains in this place are you telling me flat out that it is
 impossible to have a version of the DOM (call it virtual or atomic DOM)
 that could be manipulated from web workers?


 I'm not. People are doing this. My explicit point in a previous reply was
 that you don't need public-webapps or browser vendors to agree on doing
 this in full to start, and what you do in JS can inform smaller, easier
 steps in the standards body. One such step would be a way to do sync i/o
 from workers. Clear?

  Also I was mentioning immutable and transient types because they are so
 necessary to performant functional programming, as I understand it.


 Sure, but we're back to motherhood-and-apple-pie rhetoric now.

  Again the clojure atom is lock free and synchronous and is mutable and
 thread safe. Why couldn't something like that act as a layer to hold DOM
 state.


 First, lock-free data structures are not free. They require a memory
 barrier or fence, e.g., cmpxchg on Intel. Study this before endorsing it as
 a free lunch. Competing browsers will not add such overhead to their DOMs
 right now.

 Second, even if the DOM must remain a single-threaded and truly
 lock/barrier/fence-free data structure, what you are reaching for is doable
 now, with some help from standards bodies. But not by vague blather, and
 nothing to do with sync XHR, to get back on topic.

Maybe that's how React's virtual DOM works? I don't know but I find the
 idea behind the atom very intriguing and not sure why it wouldn't be
 applicable to making the DOM thread safe. What do the biggest brains in the
 room think? That's all. A discussion. If people leave the list because of
 it then that's their right but it is a human right to speak one's mind as
 long as the speech is not demeaning or otherwise hurtful.


 I think you're on the wrong list. This isn't the place for vague albeit
 well-intentioned -- but as you allow above, uninformed (I don't know) --
 speculations and hopes.

  I really don't understand the arrogance here.


 Cut it out, or I'll cite your faux-humility as tit-for-tat. We need to be
 serious, well-informed, and concrete here. No speculations based on
 free-lunch (!= lock-free) myths.

 As for sync XHR, I agree with you (I think! I may be misremembering your
 position) that compatibility trumps intentions

Re: do not deprecate synchronous XMLHttpRequest

2015-02-10 Thread Marc Fawzi
What? a good cop bad cop routine? Jonas asks for a constructive contribution or 
ideas for missing functionality in the web platform and the inventor of JS 
honors me with a condescending response, as if ... 
 
What the hey! Mr. Eich!

I guess this explains the origin of JS: a knee-jerk reaction to then-trendy 
ideas... 

That's not the way to go about all inclusive debate.

Thank you.

Sent from my iPhone

 On Feb 10, 2015, at 5:44 PM, Brendan Eich bren...@secure.meer.net wrote:
 
 Please stop overloading public-webapps with idle chatter.
 
 React and things like it or based on it are going strong. Work there, above 
 the standards. De-jure standardization will follow, and we'll all be better 
 off for that order of work.
 
 /be
 
 Marc Fawzi wrote:
 How about a thread-safe but lock-free version of the DOM based on something 
 like Clojure's atom? So we can manipulate the DOM from web workers? With 
 cursor support?
 
 How about immutable data structures for side-effect-free functional 
 programming?
 
 How about  Will think of more



Re: do not deprecate synchronous XMLHttpRequest

2015-02-10 Thread Marc Fawzi


Sent from my iPhone

 On Feb 10, 2015, at 5:44 PM, Brendan Eich bren...@secure.meer.net wrote:
 
 Please stop overloading public-webapps with idle chatter.
 
 React and things like it or based on it are going strong. Work there, above 
 the standards. De-jure standardization will follow, and we'll all be better 
 off for that order of work.
 
 /be
 
 Marc Fawzi wrote:
 How about a thread-safe but lock-free version of the DOM based on something 
 like Clojure's atom? So we can manipulate the DOM from web workers? With 
 cursor support?
 
 How about immutable data structures for side-effect-free functional 
 programming?
 
 How about  Will think of more



Re: do not deprecate synchronous XMLHttpRequest

2015-02-10 Thread Marc Fawzi
I've recently started using something called an atom in ClojureScript and it is 
described as a mutable reference to an immutable value. It holds the state for 
the app and can be safely mutated by multiple components, and has an 
interesting thing called a cursor. It is lock free but synchronous. I think I 
understand it to some degree.
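For readers who have not met the ClojureScript construct: a rough sketch of an atom's interface (deref/swap/watch) in plain JS, assuming single-threaded use as in the browser. This is an approximation of the idea, not Clojure's implementation:

```javascript
// Rough sketch of a Clojure-style atom in plain JS: a mutable reference
// to a value that is only ever replaced wholesale, never edited in place.
function atom(initialValue) {
  let value = initialValue;
  const watchers = [];
  return {
    deref: () => value,
    // swap applies a pure function to the current value, stores the
    // result, and notifies watchers of the transition.
    swap(fn, ...args) {
      const oldValue = value;
      value = fn(value, ...args);
      watchers.forEach(w => w(oldValue, value));
      return value;
    },
    addWatch: fn => watchers.push(fn),
  };
}

const appState = atom({ count: 0 });
appState.swap(state => ({ ...state, count: state.count + 1 }));
// appState.deref() now returns { count: 1 }
```

In single-threaded JS the "lock-free" property is trivial; the interesting part is that state transitions go through pure functions and are observable.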

I don't understand the implementation of the DOM but why couldn't we have a 
representation of it that acted like the atom in clojure and then write the 
diff to the actual DOM. Is that what React does with its virtual DOM? No idea but 
I wasn't dropping words, I was describing what was explained to me about the 
atom in clojure and I saw parallels and possibility of something similar in JS 
to manage the DOM.

With all the brains in this place are you telling me flat out that it is 
impossible to have a version of the DOM (call it virtual or atomic DOM) that 
could be manipulated from web workers?

Also I was mentioning immutable and transient types because they are so 
necessary to performant functional programming, as I understand it.

Again the clojure atom is lock free and synchronous and is mutable and thread 
safe. Why couldn't something like that act as a layer to hold DOM state. Maybe 
that's how React's virtual DOM works? I don't know but I find the idea behind 
the atom very intriguing and not sure why it wouldn't be applicable to making 
the DOM thread safe. What do the biggest brains in the room think? That's all. 
A discussion. If people leave the list because of it then that's their right 
but it is a human right to speak one's mind as long as the speech is not 
demeaning or otherwise hurtful. 

I really don't understand the arrogance here.

Sent from my iPhone

 On Feb 10, 2015, at 6:43 PM, Brendan Eich bren...@secure.meer.net wrote:
 
 Your message to which I replied is not cited accurately below by you. The 
 text you wrote is here, in between  lines:
 
 
 How about a thread-safe but lock-free version of the DOM based on something 
 like Clojure's atom? So we can manipulate the DOM from web workers? With 
 cursor support?
 
 How about immutable data structures for side-effect-free functional 
 programming?
 
 How about  Will think of more
 
 
 This message text is exactly what I wrote my reply against.
 
 It's useless; sorry, this happens, but don't make a habit of it, or most 
 practitioners will unsubscribe to public-webapps. The DOM is a mutable 
 single-threaded store, so there's no lock-free version possible. You'd have 
 snapshots, with some cost in the snapshotting mechanism, at best. Then, you 
 wouldn't be able to manipulate in any shared-state sense of that word, the 
 DOM from workers.
 
 Sorry, but that's the way things are. Dropping words like immutable and 
 lock-free doesn't help. That, plus a lot of attitude about deprecating sync 
 XHR (on all sides; I'm not in favor of useless deprecation, myself -- good 
 luck to browsers who go first on actually *removing* sync XHR support), 
 adds up to noise in this list. What good purpose does noise to signal serve?
 
 /be
 
 Marc Fawzi marc.fa...@gmail.com
 February 10, 2015 at 6:24 PM
 What? a good cop bad cop routine? Jonas asks for a constructive contribution 
 or ideas for missing functionality in the web platform and the inventor of 
 JS honors me with a condescending response, as if ...
 
 What the hey! Mr. Eich!
 
 I guess this explains the origin of JS: a knee-jerk reaction to then-trendy 
 ideas...
 
 That's not the way to go about an all-inclusive debate.
 
 Thank you.
 
 Sent from my iPhone
 
 
 Brendan Eich bren...@secure.meer.net
 February 10, 2015 at 5:44 PM
 Please stop overloading public-webapps with idle chatter.
 
 React, and things like it or based on it, are going strong. Work there, above 
 the standards. De-jure standardization will follow, and we'll all be better 
 off for that order of work.
 
 /be
 
 
 
 Marc Fawzi marc.fa...@gmail.com
 February 10, 2015 at 12:51 PM
 i agree that it's not a democratic process, and even though some W3C/TAG 
 people will engage you every now and then, the end result is that the browser 
 vendors, and even companies like Akamai, have more say than the users and 
 developers. It's a classic top-down system, but at least most debates and 
 discussions happen over open-access mailing lists.
 
 I wish there were an app like Hacker News where browser vendors, via W3C, TAG, 
 webapps, etc., engage users and developers in discussions and use up/down votes 
 to tell what matters most to users and developers.
 
 But design by committee is really hard and sub-optimal, and you need a group 
 of tried-and-true experts (open-minded ones) to call the shots on various 
 technical aspects.
 
 
 
 
 
 



Re: do not deprecate synchronous XMLHttpRequest

2015-02-10 Thread Marc Fawzi
Here is a really bad idea:

Launch an async xhr and monitor its readyState in a while loop and don't exit 
the loop till it has finished.

Easier than writing charged emails. Less drain on the soul. 

Sent from my iPhone

 On Feb 10, 2015, at 8:48 AM, Michaela Merz michaela.m...@hermetos.com wrote:
 
 No argument in regard to the problems that might arise from using sync
 calls.  But it is IMHO not the job of the browser developers to decide
 who can use what, when and why. It is up the guys (or gals) coding a
 web site to select an appropriate AJAX call to get the job done.
 
 Once again: Please remember that it is your job to make my (and
 countless other web developers) life easier and to give us more
 choices, more possibilities to do cool stuff. We appreciate your work.
 But most of us don't need hard-coded education in regard to the way we
 think that web-apps and -services should be created.
 
 m.
 
 On 02/10/2015 08:47 AM, Ashley Gullen wrote:
 I am on the side that synchronous AJAX should definitely be
 deprecated, except in web workers where sync stuff is OK.
 
 Especially on the modern web, there are two really good
 alternatives: - write your code in a web worker where synchronous
 calls don't hang the browser - write async code which doesn't hang
 the browser
 
 With modern tools like Promises and the new Fetch API, I can't
 think of any reason to write a synchronous AJAX request on the main
 thread, when an async one could have been written instead with
 probably little extra effort.
 
 Alas, existing codebases rely on it, so it cannot be removed
 easily. But I can't see why anyone would argue that it's a good
 design principle to make possibly seconds-long synchronous calls on
 the UI thread.
 
 
 
 
 On 9 February 2015 at 19:33, George Calvert 
 george.calv...@loudthink.com wrote:
 
 I third Michaela and Gregg.
 
 
 It is the app and site developers' job to decide whether the user 
 should wait on the server — not the standard's and, 99.9% of the 
 time, not the browser's either.
 
 
 I agree a well-designed site avoids synchronous calls.  BUT —
 there still are plenty of real-world cases where the best choice is
 having the user wait: Like when subsequent options depend on the
 server's reply.  Or more nuanced, app/content-specific cases where
 rewinding after an earlier transaction fails is detrimental to the
 overall UX or simply impractical to code.
 
 
 Let's focus our energies elsewhere — dispensing with browser 
 warnings that tell me what I already know and with deprecating 
 features that are well-entrenched and, on occasion, incredibly 
 useful.
 
 
 Thanks, George Calvert
 
 



Re: do not deprecate synchronous XMLHttpRequest

2015-02-10 Thread Marc Fawzi
If readyState is async, then set a variable in the readyState change
callback and monitor that variable in a while loop :D

What am I missing?

On Tue, Feb 10, 2015 at 9:44 AM, Elliott Sprehn espr...@chromium.org
wrote:



 On Tuesday, February 10, 2015, Marc Fawzi marc.fa...@gmail.com wrote:

 Here is a really bad idea:

 Launch an async xhr and monitor its readyState in a while loop and don't
 exit the loop till it has finished.

 Easier than writing charged emails. Less drain on the soul


 This won't work: state changes are async, and long-running while loops
 result in the hung-script dialog, which means we'll probably just kill your
 page.

 The main thread of your web app is the UI thread; you shouldn't be doing
 IO there (or anything else expensive). Some other application
 platforms will even flash the whole screen or kill your process if you do
 that, to warn that you're doing something awful.
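Elliott's point can be seen without any network at all: the callback that would flip the flag is queued behind the very loop that is waiting for it. A minimal sketch, using a setTimeout callback to stand in for the async readyState change:

```javascript
// A queued callback stands in for the async readyState change.
let done = false;
setTimeout(() => { done = true; }, 0);

// Busy-wait "until the request finishes" (time-boxed so the demo terminates).
const start = Date.now();
while (!done && Date.now() - start < 100) {
  // Spinning here blocks the event loop, so the queued callback never runs.
}

console.log(done); // false — the flag is still unset when the loop gives up
```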




 Sent from my iPhone

  On Feb 10, 2015, at 8:48 AM, Michaela Merz michaela.m...@hermetos.com
 wrote:
 
  No argument in regard to the problems that might arise from using sync
  calls.  But it is IMHO not the job of the browser developers to decide
  who can use what, when and why. It is up the guys (or gals) coding a
  web site to select an appropriate AJAX call to get the job done.
 
  Once again: Please remember that it is your job to make my (and
  countless other web developers) life easier and to give us more
  choices, more possibilities to do cool stuff. We appreciate your work.
  But most of us don't need hard-coded education in regard to the way we
  think that web-apps and -services should be created.
 
  m.
 
  On 02/10/2015 08:47 AM, Ashley Gullen wrote:
  I am on the side that synchronous AJAX should definitely be
  deprecated, except in web workers where sync stuff is OK.
 
  Especially on the modern web, there are two really good
  alternatives: - write your code in a web worker where synchronous
  calls don't hang the browser - write async code which doesn't hang
  the browser
 
  With modern tools like Promises and the new Fetch API, I can't
  think of any reason to write a synchronous AJAX request on the main
  thread, when an async one could have been written instead with
  probably little extra effort.
 
  Alas, existing codebases rely on it, so it cannot be removed
  easily. But I can't see why anyone would argue that it's a good
  design principle to make possibly seconds-long synchronous calls on
  the UI thread.
 
 
 
 
  On 9 February 2015 at 19:33, George Calvert
  george.calv...@loudthink.com wrote:
 
  I third Michaela and Gregg.
 
 
  It is the app and site developers' job to decide whether the user
  should wait on the server — not the standard's and, 99.9% of the
  time, not the browser's either.
 
 
  I agree a well-designed site avoids synchronous calls.  BUT —
  there still are plenty of real-world cases where the best choice is
  having the user wait: Like when subsequent options depend on the
  server's reply.  Or more nuanced, app/content-specific cases where
  rewinding after an earlier transaction fails is detrimental to the
  overall UX or simply impractical to code.
 
 
  Let's focus our energies elsewhere — dispensing with browser
  warnings that tell me what I already know and with deprecating
  features that are well-entrenched and, on occasion, incredibly
  useful.
 
 
  Thanks, George Calvert
 
 




Re: do not deprecate synchronous XMLHttpRequest

2015-02-10 Thread Marc Fawzi
Async xhr with a callback lets you manage the flow such that you don't do 
anything until a successful response arrives from the server. Promises make it 
even nicer. 
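That flow can be sketched with a hypothetical promise-returning get() helper. In a browser the executor would drive an XMLHttpRequest and resolve in its load handler; a timer stands in here so the sketch is self-contained:

```javascript
// Hypothetical promise wrapper around an async request. In a browser the
// executor would issue an XMLHttpRequest and resolve on load; a timer
// stands in here so the control flow is visible on its own.
function get(url) {
  return new Promise((resolve) => {
    setTimeout(() => resolve(`response for ${url}`), 0);
  });
}

// Nothing in a .then() runs until the "server" has answered, yet the main
// thread is never blocked while waiting.
get('/login')
  .then((reply) => get('/config')) // chain a dependent request
  .then((config) => {
    console.log(config); // "response for /config"
  });
```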

Sent from my iPhone

 On Feb 10, 2015, at 9:15 AM, George Calvert george.calv...@loudthink.com 
 wrote:
 
 Ashley,
  
 Isn't it for the app dev team — rather than the standards body or even the 
 browser team — to decide whether the UI thread should wait on the server?
  
 I see it like this: The user, with the app as middleman, is conversing with 
 the server.  Just as we want modal dialogs because sometimes it makes sense 
  to say "Answer this before proceeding", sometimes we want a synchronous call 
 in the main thread (because we want an answer from the server before 
 proceeding).
  
 Sure, I can present a dialog as non-modal — but then I've got to manage the 
 loose-ends if left unfinished.  Maybe I can toss all that into a cookie — and 
 maybe I can't.  Having a modal dialog as an option allows us to simplify the 
 code and avoid a lot of what-happened-to-my-data calls to the help desk.
  
 For me, it's the same with calls to the server.  Common use-cases are log-in 
 and master/parent-object creates.   In my apps, the UI depends on 
 user-specific config that is returned upon log-in.  As well, there are 
 instances where creating a parent object precedes creating child objects and 
 it just creates a dozen headaches to let the user proceed without 
 confirmation that the parent exists server-side.
  
 I agree the goal is to model as much as possible as asynchronous.  My issue 
 is that there are still real-world, practical applications for S-JAX and that 
 identifying those is the app developer's job, not W3C's.
  
 Heck, why not go the other way and deprecate AJAX now that web workers make 
 background threads a first-class object available for any processing?  ;-)
  
 Best,
 George
  
  
  


Re: Thread-Safe DOM // was Re: do not deprecate synchronous XMLHttpRequest

2015-02-11 Thread Marc Fawzi
this backward compatibility stuff is making me think that the web is
built upon the axiom that we will never start over and we must keep piling
up new features and principles on top of the old ones

this has worked so far, miraculously and not without overhead, but I can
only assume that it's at the cost of growing complexity in the browser
codebase. I'm sure you have to manage a ton of code that has to do with old
features and old ideas...

how long can this be sustained? forever? what is the point in time where
the business of retaining backward compatibility becomes a huge nightmare?








On Wed, Feb 11, 2015 at 12:33 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 2/11/15 3:04 PM, Brendan Eich wrote:

 If you want multi-threaded DOM access, then again based on all that I
 know about the three open source browser engines in the field, I do not
 see any implementor taking the huge bug-risk and opportunity-cost and
 (mainly) performance-regression hit of adding barriers and other
 synchronization devices all over their DOM code. Only the Servo project,
 which is all about safety with maximal hardware parallelism, might get
 to the promised land you seek (even that's not clear yet).


 A good start is defining terms.  What do we mean by "multi-threaded DOM
 access"?

 If we mean concurrent access to the same DOM objects from both a window
 and a worker, or multiple workers, then I think that's a no-go in Servo as
 well, and not worth trying to design for: it would introduce a lot of spec
 and implementation complexity that I don't think is warranted by the use
 cases I've seen.

 If we mean the much more modest "have a DOM implementation available in
 workers" then that might be viable.  Even _that_ is pretty hard to do in
 Gecko, at least, because there is various global state (caches of various
 sorts) that the DOM uses that would need to either move into TLS or become
 threadsafe in some form or something...  Again, various specs (mostly DOM
 and HTML) would need to be gone over very carefully to make sure they're
 not making assumptions about the availability of such global shared state.

  We should add lighter-weight workers and immutable data structures


 I should note that even some things that could be immutable might involved
 a shared cache in current implementations (e.g. to speed up sequential
 indexed access into a child list implemented as a linked list)...
 Obviously that sort of thing can be changed, but your bigger point that
 there is a lot of risk to doing that in existing implementations remains.

 -Boris




Re: Shadow tree style isolation primitive

2015-01-12 Thread Marc Fawzi
Can someone shed light on why the Scoped Style Element was removed from Chrome's
experimental features?

http://caniuse.com/#feat=style-scoped

In suggesting the @isolate declaration, I meant it would go inside a scoped
style element. If there are nested scoped style elements and each has
@isolate, then styles don't bleed from a parent with scoped
style to a child with scoped style if the child has @isolate.

The big question is why was scoped style element removed from Chrome 37's
experimental flags?

Just curious.



On Mon, Jan 12, 2015 at 6:27 PM, Ryosuke Niwa rn...@apple.com wrote:


  On Jan 12, 2015, at 6:11 PM, Tab Atkins Jr. jackalm...@gmail.com
 wrote:
 
  On Mon, Jan 12, 2015 at 5:59 PM, Ryosuke Niwa rn...@apple.com wrote:
  On Jan 12, 2015, at 5:41 PM, Tab Atkins Jr. jackalm...@gmail.com
 wrote:
  [ryosuke, your mail client keeps producing flattened replies. maybe
  send as plain-text, not HTML?]
 
  Weird.  I'm not seeing that at all on my end.
 
  It's sending HTML-quoted stuff, which doesn't survive the flattening
  to plain-text that I and a lot of others do.  Plain-text is more
  interoperable.
 
  The style defined for bar *in bar's setup code* (that is, in a <style>
  contained inside bar's shadow tree) works automatically
  without you having to care about what bar is doing.  bar is like a
  replaced element - it has its own rendering, and you can generally
  just leave it alone to do its thing.
 
  If that's the behavior we want, then we should simply make @isolate
 pierce through isolates.  You previously mentioned that:
 
  On Jan 12, 2015, at 1:28 PM, Tab Atkins Jr. jackalm...@gmail.com
 wrote:
  Alternately, say that it does work - the @isolate selector pierces
  through isolation boundaries.  Then you're still screwed, because if
  the outer page wants to isolate .example blocks, but within your
  component you use .example normally, without any isolation, whoops!
  Suddenly your .example blocks are isolated, too, and getting weird
  styles applied to them, while your own styles break since they can't
  cross the unexpected boundary.
 
  But this same problem seems to exist in shadow DOM as well.  We can't
 have a bar inside a foo behave differently from ones outside foo
 since all bar elements share the same implementation.  I agree
 
  Yes!  But pay attention to precisely what I said: it's problematic to,
  for example, have a command to isolate all class="example" elements
  pierce through isolation boundaries, because classes aren't expected
  to be unique in a page between components - it's very likely that
  you'll accidentally hit elements that aren't supposed to be isolated.
  It's okay to have *element name* isolations pierce, though, because we
  expect all elements with a given tagname to be the same kind of thing
  (and Web Components in general is built on this assumption; we don't
  scope the tagnames in any way).

 I don't want to go too much on a tangent but it seems like this is a
 dangerous assumption to make once components start depending on different
 versions (e.g. v1 versus v2) of other components.  Also, it's not hard to
 imagine authors may end up defining custom elements of the same name to be
 used in their own components.  If someone else then pulls in those two
 components, one of them will be broken.

 To solve a dependency problem like this, we need a real dependency
 resolution mechanism for components.

  But then we're not actually providing selectors to the isolate
  mechanism, we're just providing tagnames, and having that affect the
  global registry of tagnames.

 I don't think having a global registry of tag names is a sufficient nor a
 necessary mechanism to address the issue at hand.  As such, I'm not
 suggesting or supporting that.

 - R. Niwa





Re: Shadow tree style isolation primitive

2015-01-12 Thread Marc Fawzi

If the goal is to isolate a style sheet (or several) per DOM subtree, then why 
not just use a scoped style element whose imports apply the stylesheet(s) 
only to the subtree in scope? Obviously, you are talking about preventing 
stylesheets applied at a higher level from leaking in. So maybe in the 
scoped style element there could be some @ declaration, like @isolate, that would 
tell the browser not to apply any styles defined at a higher level. Not sure 
how browsers would implement that, but it seems that since we (developers) 
already have a way to define scoped style, it ought to be possible to 
isolate the elements we're applying the scoped style to from styles 
defined by the user at a higher level, while still applying user-agent styles. 
Just a thought... 
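In hypothetical syntax, the idea might look something like the sketch below. Note that `<style scoped>` was specified but never shipped broadly, and `@isolate` is purely a proposal here, not an implemented at-rule:

```html
<div>
  <!-- hypothetical: a scoped sheet that also opts out of inherited author styles -->
  <style scoped>
    @isolate;                   /* proposed: block author styles from outside this subtree */
    @import url("widget.css");  /* would apply only within the scoping element */
    .label { color: green; }
  </style>
  <span class="label">Styled only by the scoped sheet and user-agent defaults</span>
</div>
```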

Sent from my iPhone

 On Jan 12, 2015, at 8:47 AM, Brian Kardell bkard...@gmail.com wrote:
 
 
 
 On Mon, Jan 12, 2015 at 7:04 AM, Anne van Kesteren ann...@annevk.nl wrote:
 On Fri, Jan 9, 2015 at 10:11 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
  tl;dr: Cramming a subtree into a TreeScope container and then hanging
  that off the DOM would do the job for free (because it bakes all
  that functionality in).
 
 Sure, or we could expose a property that when set isolates a tree.
 Both a lot simpler than requiring ShadowRoot. However, it seems to me
 that ideally you can control all of this through CSS. The ability to
 isolate parts of a tree and have them managed by some other stylesheet
 or selector mechanism.
 
 Controlling it through CSS definitely seems to be very high-level.  To me at 
 least it feels like it requires a lot more answering of how since it deals 
 with identifying elements by way of rules/selection in order to 
 differentially identify other elements by way of rules/selection.  At the end 
 of the day you have to identify particular elements as different somehow and 
 explain how that would work.  It seems better to start there at a reasonably 
 low level and just keep in mind that it might be a future aim to move control 
 of this sort of thing fully to CSS.  Since CSS matching kind of conceptually 
 happens on 'not exactly the DOM tree' (pseudo elements, for example) it seems 
 kind of similar to me and it might be worth figuring that out before 
 attempting another high-level feature which could make answering 'what's the 
 path up' all that much harder.
 
 --
 https://annevankesteren.nl/
 
 
 
 -- 
 Brian Kardell :: @briankardell :: hitchjs.com


Re: do not deprecate synchronous XMLHttpRequest

2015-02-10 Thread Marc Fawzi

How about a thread-safe but lock-free version of the DOM based on something 
like Clojure's atom? So we can manipulate the DOM from web workers? With cursor 
support? 

How about immutable data structures for side-effect-free functional 
programming? 

How about … Will think of more

Sent from my iPhone

 On Feb 10, 2015, at 1:09 PM, Jonas Sicking jo...@sicking.cc wrote:
 
 On Tue, Feb 10, 2015 at 12:51 PM, Marc Fawzi marc.fa...@gmail.com wrote:
 i agree that it's not a democratic process and even though some W3C/TAG
 people will engage you every now and then the end result is the browser
 vendors and even companies like Akamai have more say than the users and
 developers
 
 Developers actually have more say than any other party in this process.
 
 Browsers are not interested in shipping any APIs that developers
 aren't going to use. Likewise they are not going to remove any APIs
 that developers are using (hence sync XHR is not going to go anywhere,
 no matter what the spec says).
 
 Sadly W3C and the developer community has not yet figured out a good
 way to communicate.
 
 But here's where you can make a difference!
 
 Please do suggest problems that you think needs to be solved. With new
 specifications that are still in the development phase, with existing
 specifications that have problems, and with specifications that
 doesn't exist yet but you think should.
 
 Looking forward to your constructive contributions.
 
 / Jonas



Re: do not deprecate synchronous XMLHttpRequest

2015-02-10 Thread Marc Fawzi

How about a thread-safe but lock-free version of the DOM based on something
like Clojure's atom? So we can manipulate the DOM from web workers? With
cursor support?

How about immutable data structures for side-effect-free functional
programming?


and transients! to complete the picture.

I think maybe aside from the thread-safe DOM idea, everything below that
should belong to the ES7 discussion list.



On Tue, Feb 10, 2015 at 1:46 PM, Marc Fawzi marc.fa...@gmail.com wrote:


 How about a thread-safe but lock-free version of the DOM based on
 something like Clojure's atom? So we can manipulate the DOM from web
 workers? With cursor support?

 How about immutable data structures for side-effect-free functional
 programming?

 How about … Will think of more

 Sent from my iPhone

  On Feb 10, 2015, at 1:09 PM, Jonas Sicking jo...@sicking.cc wrote:
 
  On Tue, Feb 10, 2015 at 12:51 PM, Marc Fawzi marc.fa...@gmail.com
 wrote:
  i agree that it's not a democratic process and even though some W3C/TAG
  people will engage you every now and then the end result is the browser
  vendors and even companies like Akamai have more say than the users and
  developers
 
  Developers actually have more say than any other party in this process.
 
  Browsers are not interested in shipping any APIs that developers
  aren't going to use. Likewise they are not going to remove any APIs
  that developers are using (hence sync XHR is not going to go anywhere,
  no matter what the spec says).
 
  Sadly W3C and the developer community has not yet figured out a good
  way to communicate.
 
  But here's where you can make a difference!
 
  Please do suggest problems that you think needs to be solved. With new
  specifications that are still in the development phase, with existing
  specifications that have problems, and with specifications that
  doesn't exist yet but you think should.
 
  Looking forward to your constructive contributions.
 
  / Jonas



Re: Thread-Safe DOM // was Re: do not deprecate synchronous XMLHttpRequest

2015-02-12 Thread Marc Fawzi

Legacy problems

Across the computing industry, we spend enormous amounts of money and
effort on keeping older, legacy systems running. The examples range from
huge and costly to small and merely annoying: planes circle around in
holding patterns burning precious fuel because air traffic control can't
keep up on systems that are less powerful than a smartphone; WiFi networks
don't reach their top speeds because an original 802.11(no letter), 2Mbps
system *could* show up—you never know. So when engineers dream, we dream of
leaving all of yesterday's technology behind and starting from scratch. But
such clean breaks are rarely possible.

For instance, the original 10 megabit Ethernet specification allows for
1500-byte packets. Filling up 10Mbps takes about 830 of those 1500-byte
packets per second. Then Fast Ethernet came along, which was 100Mbps, but the packet
size remained the same so that 100Mbps ethernet gear could be hooked up to
10Mbps ethernet equipment without compatibility issues. Fast Ethernet needs
8300 packets per second to fill up the pipe. Gigabit Ethernet needs 83,000
and 10 Gigabit Ethernet needs *almost a million packets per second* (well,
830,000).
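The packet rates quoted follow directly from the 1500-byte maximum frame size. A quick back-of-the-envelope check (ignoring preamble and inter-frame gap overhead, which is why the article's rounded figures are slightly lower):

```javascript
// Packets per second needed to fill a link with maximum-size (1500-byte)
// frames, ignoring preamble and inter-frame gap overhead.
const bitsPerPacket = 1500 * 8; // 12,000 bits per frame
const packetsPerSecond = (mbps) => Math.round((mbps * 1e6) / bitsPerPacket);

console.log(packetsPerSecond(10));    // 833     (classic Ethernet; ~830 in the article)
console.log(packetsPerSecond(100));   // 8333    (Fast Ethernet)
console.log(packetsPerSecond(1000));  // 83333   (Gigabit Ethernet)
console.log(packetsPerSecond(10000)); // 833333  (10 Gigabit Ethernet)
```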

For each faster Ethernet standard, the switch vendors need to pull out even
more stops to process an increasingly outrageous number of packets per
second, running the CAMs that store the forwarding tables at insane speeds
that demand huge amounts of power. The need to connect antique NE2000 cards
meant sticking to 1500 bytes for Fast Ethernet, and then the need to talk
to those rusty Fast Ethernet cards meant sticking to 1500 bytes for Gigabit
Ethernet, and so on. At each point, the next step makes sense, but *the
entire journey ends up looking irrational.*


Source:
http://arstechnica.com/business/2010/09/there-is-no-plan-b-why-the-ipv4-to-ipv6-transition-will-be-ugly/


This guy here is bypassing the DOM and using WebGL for user interfaces

https://github.com/onejs/onejs

He even has a demo, with no event handling other than arrow keys at this
point and, as the author admits, ugly graphics. But with projects like
React-Canvas (forget the React part, focus on Canvas UIs) and attempts like
these, it looks like the way of the future is to relegate the DOM to boring
old business apps and throw more creative energy at things like a WebGL
UIToolKit (the idea that guy is pursuing).



On Thu, Feb 12, 2015 at 3:46 AM, Aryeh Gregor a...@aryeh.name wrote:

 On Thu, Feb 12, 2015 at 4:45 AM, Marc Fawzi marc.fa...@gmail.com wrote:
  how long can this be sustained? forever? what is the point in time where
 the
  business of retaining backward compatibility becomes a huge nightmare?

 It already is, but there's no way out.  This is true everywhere in
 computing.  Look closely at almost any protocol, API, language, etc.
 that dates back 20 years or more and has evolved a lot since then, and
 you'll see tons of cruft that just causes headaches but can't be
 eliminated.  Like the fact that Internet traffic is largely in
 1500-byte packets because that's the maximum size you could have on
 ancient shared cables without ambiguity in the case of collision.  Or
 that e-mail is mostly sent in plaintext, with no authentication of
 authorship, because that's what made sense in the 80s (or whatever).
 Or how almost all web traffic winds up going over TCP, which performs
 horribly on all kinds of modern usage patterns.  For that matter, I'm
 typing this with a keyboard layout that was designed well over a
 century ago to meet the needs of mechanical typewriters, but it became
 standard, so now everyone uses it due to inertia.

 This is all horrible, but that's life.



Re: Thread-Safe DOM // was Re: do not deprecate synchronous XMLHttpRequest

2015-02-13 Thread Marc Fawzi
Travis,

That would be awesome.

I will go over that link and hopefully have starting points for the
discussion.

My day job actually allows me to dedicate time to experimentation (hence
the ClojureScript stuff), so if you have any private branches of IE with
latest DOM experiments, I'd be very happy to explore any new potential or
new efficiency that your ideas may give us! I'm very keen on that, too.

Off list seems to be best here..

Thank you Travis. I really appreciate being able to communicate freely
about ideas.

Marc

On Fri, Feb 13, 2015 at 11:20 AM, Travis Leithead 
travis.leith...@microsoft.com wrote:

 Marc,

 I'd first mention that I am keenly interested in improving the
 state-of-the-art in DOM (I'm driving the project to update IE's 20-year-old
 DOM as my day job.) I've also done a lot of thinking about thread-safe DOM
 designs, and would be happy to chat with you more in depth about some ideas
 (perhaps off-list if you'd like).

 I'd also refer you to a breakout session I held during last TPAC on a
 similar topic [1]. It had lots of interested folks in the room and I
 thought we had a really productive and interesting discussion (most of it
 captured in the IRC notes).

 [1] https://www.w3.org/wiki/Improving_Parallelism_Page

 -Original Message-
 From: Boris Zbarsky [mailto:bzbar...@mit.edu]
 Sent: Wednesday, February 11, 2015 12:34 PM
 To: public-webapps@w3.org
 Subject: Re: Thread-Safe DOM // was Re: do not deprecate synchronous
 XMLHttpRequest

 On 2/11/15 3:04 PM, Brendan Eich wrote:
  If you want multi-threaded DOM access, then again based on all that I
  know about the three open source browser engines in the field, I do
  not see any implementor taking the huge bug-risk and opportunity-cost
  and
  (mainly) performance-regression hit of adding barriers and other
  synchronization devices all over their DOM code. Only the Servo
  project, which is all about safety with maximal hardware parallelism,
  might get to the promised land you seek (even that's not clear yet).

 A good start is defining terms.  What do we mean by "multi-threaded DOM
 access"?

 If we mean concurrent access to the same DOM objects from both a window
 and a worker, or multiple workers, then I think that's a no-go in Servo as
 well, and not worth trying to design for: it would introduce a lot of spec
 and implementation complexity that I don't think is warranted by the use
 cases I've seen.

 If we mean the much more modest "have a DOM implementation available in
 workers" then that might be viable.  Even _that_ is pretty hard to do in
 Gecko, at least, because there is various global state (caches of various
 sorts) that the DOM uses that would need to either move into TLS or become
 threadsafe in some form or something...  Again, various specs (mostly DOM
 and HTML) would need to be gone over very carefully to make sure they're
 not making assumptions about the availability of such global shared state.

  We should add lighter-weight workers and immutable data structures

 I should note that even some things that could be immutable might involved
 a shared cache in current implementations (e.g. to speed up sequential
 indexed access into a child list implemented as a linked list)...
 Obviously that sort of thing can be changed, but your bigger point that
 there is a lot of risk to doing that in existing implementations remains.

 -Boris




Re: Thread-Safe DOM // was Re: do not deprecate synchronous XMLHttpRequest

2015-04-02 Thread Marc Fawzi
Boom!

http://pixelscommander.com/en/web-applications-performance/render-html-css-in-webgl-to-get-highest-performance-possibl/

This looks pretty amazing.

On Sat, Feb 14, 2015 at 4:01 PM, Brendan Eich bren...@secure.meer.net
wrote:

 Hang on a sec before going off to a private or single-vendor thread
 because you think I sent you packing on topics that are of interest (as
 opposed to Thread-Safe DOM).

 I'm sorry I missed Travis's mail in my Inbox, but I see it now in the
 archives. The topics listed at the link he cites *are* interesting to many
 folks here, even if public-webapps may not always be the best list:

 -

 IRC log: http://www.w3.org/2014/10/29-parallel-irc

 See also: Mohammad (Moh) Reza Haghighat's presentation on parallelism in
 the 29 October 2014 Anniversary Symposium talks

 We covered three main potential areas for parallelism:

 1. Find additional isolated areas of the web platform to enable
 parallelism. We noted Canvas Contexts that can migrate to workers to enable
 parallelism. Initial thoughts around UIWorkers are brewing for handling
 scrolling effects. Audio Workers are already being developed with specific
 real-time requirements. What additional features can be made faster by
 moving them off to workers?

 2. Shared memory models. This seems to require an investment in the
 JavaScript object primitives to enable multiple threaded access to object
 dictionaries that offer robust protections around multi-write scenarios for
 properties.

 3. Isolation boundaries for DOM access. We realized we needed to find an
 appropriate place to provide isolation such that DOM accesses could be
 assigned to a parallelizable JS engine. Based on discussion it sounded
 like element sub-trees wouldn't be possible to isolate, but that documents
 might be. Iframes of different origins may already be parallelized in some
 browsers.


 -

 Mozilla people have done work in all three areas, collaborating with Intel
 and Google people at least. Ongoing work continues as far as I know. Again,
 some of it may be better done in groups other than public-webapps. I cited
 roc's blog post about custom view scrolling, which seems to fall under
 Travis's (1) above.

 Please don't feel rejected about any of these work items.

 /be



  Marc Fawzi mailto:marc.fa...@gmail.com
 February 13, 2015 at 12:45 PM
 Travis,

 That would be awesome.

 I will go over that link and hopefully have starting points for the
 discussion.

 My day job actually allows me to dedicate time to experimentation (hence
 the ClojureScript stuff), so if you have any private branches of IE with
 latest DOM experiments, I'd be very happy to explore any new potential or
 new efficiency that your ideas may give us! I'm very keen on that, too.

 Off list seems to be best here..

 Thank you Travis. I really appreciate being able to communicate freely
 about ideas.

 Marc


 Boris Zbarsky mailto:bzbar...@mit.edu
 February 11, 2015 at 12:33 PM
 On 2/11/15 3:04 PM, Brendan Eich wrote:

 If you want multi-threaded DOM access, then again based on all that I
 know about the three open source browser engines in the field, I do not
 see any implementor taking the huge bug-risk and opportunity-cost and
 (mainly) performance-regression hit of adding barriers and other
 synchronization devices all over their DOM code. Only the Servo project,
 which is all about safety with maximal hardware parallelism, might get
 to the promised land you seek (even that's not clear yet).


 A good start is defining terms.  What do we mean by multi-threaded DOM
 access?

 If we mean concurrent access to the same DOM objects from both a window
 and a worker, or multiple workers, then I think that's a no-go in Servo as
 well, and not worth trying to design for: it would introduce a lot of spec
 and implementation complexity that I don't think is warranted by the use
 cases I've seen.

 If we mean the much more modest have a DOM implementation available in
 workers then that might be viable.  Even _that_ is pretty hard to do in
 Gecko, at least, because there is various global state (caches of various
 sorts) that the DOM uses that would need to either move into TLS or become
 threadsafe in some form or something...  Again, various specs (mostly DOM
 and HTML) would need to be gone over very carefully to make sure they're
 not making assumptions about the availability of such global shared state.

  We should add lighter-weight workers and immutable data structures


 I should note that even some things that could be immutable might
 involve a shared cache in current implementations (e.g. to speed up
 sequential indexed access into a child list implemented as a linked
 list)...  Obviously that sort of thing can be changed, but your bigger
 point that there is a lot of risk to doing that in existing implementations
 remains.

 -Boris

 Brendan Eich mailto:bren...@secure.meer.net
 February 11, 2015 at 12:04 PM


 Sorry, I was too grumpy -- my apologies.

 I don't see

Re: Indexed DB + Promises

2015-09-28 Thread Marc Fawzi
Yes, sorry.

<< the underlying issue of mixing IDB+Promises remains. >>

That's for implementors such as yourself to work through, I had assumed.

I just went over the Readme from the perspective of an IDB user. Here is
some potentially very naive feedback, but it is so obvious I can't help but
state it:

Instead of having .promise appended to the IDB methods, as in
`store.openCursor(query).promise`, why couldn't you configure the IDB API
to be in a Promise-returning mode, in which case openCursor(query) would
return a Promise?

This way the syntax is not polluted with .promise everywhere, and there
would be no chance of forgetting to add .promise or of intentionally mixing
the bare IDB API with the promise-returning version in the same code base,
which can be very confusing.
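A rough sketch of what such a Promise-returning mode could look like. The
names `promisifyRequest` and `inPromiseMode` are made up for illustration,
and a stand-in object plays the role of an IDBObjectStore so the sketch is
self-contained outside a browser:

```javascript
// Promisify any IDB-style request object (fires onsuccess/onerror).
function promisifyRequest(request) {
  return new Promise((resolve, reject) => {
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

// Hypothetical "promise mode": wrap the named request-returning methods
// of a store so each returns a Promise instead of a bare request.
function inPromiseMode(store, methods) {
  const wrapped = Object.create(store);
  for (const name of methods) {
    wrapped[name] = (...args) => promisifyRequest(store[name](...args));
  }
  return wrapped;
}

// Stand-in for a real IDBObjectStore so this runs anywhere.
const fakeStore = {
  get(key) {
    const request = {};
    setTimeout(() => {
      request.result = `value-for-${key}`;
      request.onsuccess();
    }, 0);
    return request;
  },
};

const store = inPromiseMode(fakeStore, ['get']);
store.get(7).then(value => console.log(value)); // logs "value-for-7"
```

The same `wrapped` object could be handed back by a global configuration
switch, which is the mode being suggested above.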

I apologize if this makes no sense. I am a fan of IDB and had used it
successfully in the past without the Promise stuff. It takes some getting
used to, but the IDB API is powerful AS IS, and if you're going to make it
more approachable while keeping its raw power I would probably skip
ES5/Promise (in terms of usage examples) and focus on async/await usage,
because as you state under Concerns:

   -

   Methods that return requests still throw rather than reject on invalid
   input, so you must still use try/catch blocks. Fortunately, with ES2016
   async/await syntax, asynchronous errors can also be handled by try/catch
   blocks.
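That bullet can be demonstrated in isolation: with async/await, the
synchronous throw and the asynchronous rejection both land in an ordinary
try/catch. `openCursorAsync` below is a hypothetical promise-returning
stand-in, not part of the proposal:

```javascript
// Hypothetical stand-in for store.openCursor(query).promise:
// throws synchronously on invalid input, rejects asynchronously otherwise.
function openCursorAsync(query) {
  if (query == null) throw new TypeError('invalid query'); // sync throw
  return Promise.reject(new Error('no matching records')); // async rejection
}

async function demo() {
  const errors = [];
  try {
    await openCursorAsync(null);
  } catch (e) {
    errors.push(e.message); // catches the synchronous throw
  }
  try {
    await openCursorAsync('some-range');
  } catch (e) {
    errors.push(e.message); // catches the asynchronous rejection
  }
  return errors;
}

demo().then(errors => console.log(errors));
// ['invalid query', 'no matching records']
```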

Also, for the scenario of a potentially infinite wait (waitUntil(new
Promise())), the only mitigation I can think of is a timeout that is
configurable on waitUntil, e.g. waitUntil(new Promise()).maxTime(30s)
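A minimal sketch of that cap using Promise.race. The `withMaxTime` helper
is hypothetical (as is the `maxTime` method name above); nothing like it
exists in the proposal:

```javascript
// Race a promise against a timer so a never-settling promise
// can't hold a transaction open forever.
function withMaxTime(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`timed out after ${ms}ms`)),
      ms
    );
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// A promise that never settles, like waitUntil(new Promise(() => {})):
withMaxTime(new Promise(() => {}), 30)
  .catch(err => console.log(err.message)); // "timed out after 30ms"
```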






On Mon, Sep 28, 2015 at 12:36 PM, Joshua Bell <jsb...@google.com> wrote:

> On Mon, Sep 28, 2015 at 11:42 AM, Marc Fawzi <marc.fa...@gmail.com> wrote:
>
>> Have you looked at ES7 async/await? I find that pattern makes both simple
>> as well as very complex (even dynamic) async coordination much easier to
>> deal with than Promise API. I mean from a developer perspective.
>>
>>
> The linked proposal contains examples written in both "legacy" syntax
> (marked "ES2015") and in ES7 syntax with async/await (marked "ES2016").
> Please do read it.
>
> As the syntax additions are "just sugar" on top of Promises, the
> underlying issue of mixing IDB+Promises remains. The proposal attempts to
> make code using IDB with async/await syntax approachable, while not
> entirely replacing the existing API.
>
>
>>
>> Sent from my iPhone
>>
>> On Sep 28, 2015, at 10:43 AM, Joshua Bell <jsb...@google.com> wrote:
>>
>> One of the top requests[1] we've received for future iterations of
>> Indexed DB is integration with ES Promises. While this initially seems
>> straightforward ("aren't requests just promises?") the devil is in the
>> details - events vs. microtasks, exceptions vs. rejections, automatic
>> commits, etc.
>>
>> After some noodling and some very helpful initial feedback, I've got what
>> I think is a minimal proposal for incrementally evolving (i.e. not
>> replacing) the Indexed DB API with some promise-friendly affordances,
>> written up here:
>>
>> https://github.com/inexorabletash/indexeddb-promises
>>
>> I'd appreciate feedback from the WebApps community either here or in that
>> repo's issue tracker.
>>
>> [1] https://www.w3.org/2008/webapps/wiki/IndexedDatabaseFeatures
>>
>>
>


Re: Indexed DB + Promises

2015-09-28 Thread Marc Fawzi
<<
Instead of having .promise appended to the IDB methods, as in
`store.openCursor(query).promise`, why couldn't you configure the IDB API to
be in a Promise-returning mode, in which case openCursor(query) would
return a Promise?
>>

I meant user configurable, maybe as a global config.

On Mon, Sep 28, 2015 at 1:12 PM, Marc Fawzi <marc.fa...@gmail.com> wrote:

> Yes, sorry.
>
> << the underlying issue of mixing IDB+Promises remains. >>
>
> That's for implementors such as yourself to work through, I had assumed.
>
> I just went over the Readme from the perspective of an IDB user. Here is
> some potentially very naive feedback, but it is so obvious I can't help but
> state it:
>
> Instead of having .promise appended to the IDB methods, as in
> `store.openCursor(query).promise`, why couldn't you configure the IDB API to
> be in a Promise-returning mode, in which case openCursor(query) would
> return a Promise?
>
> This way the syntax is not polluted with .promise everywhere, and there
> would be no chance of forgetting to add .promise or of intentionally mixing
> the bare IDB API with the promise-returning version in the same code base,
> which can be very confusing.
>
> I apologize if this makes no sense. I am a fan of IDB and had used it
> successfully in the past without the Promise stuff. It takes some getting
> used to, but the IDB API is powerful AS IS, and if you're going to make it
> more approachable while keeping its raw power I would probably skip
> ES5/Promise (in terms of usage examples) and focus on async/await usage,
> because as you state under Concerns:
>
>-
>
>Methods that return requests still throw rather than reject on invalid
>input, so you must still use try/catch blocks. Fortunately, with ES2016
>async/await syntax, asynchronous errors can also be handled by try/catch
>blocks.
>
> Also, for the scenario of a potentially infinite
> wait (waitUntil(new Promise())), the only mitigation I can think of is a timeout
> that is configurable on waitUntil, e.g. waitUntil(new
> Promise()).maxTime(30s)
>
>
>
>
>
>
> On Mon, Sep 28, 2015 at 12:36 PM, Joshua Bell <jsb...@google.com> wrote:
>
>> On Mon, Sep 28, 2015 at 11:42 AM, Marc Fawzi <marc.fa...@gmail.com>
>> wrote:
>>
>>> Have you looked at ES7 async/await? I find that pattern makes both
>>> simple as well as very complex (even dynamic) async coordination much
>>> easier to deal with than Promise API. I mean from a developer perspective.
>>>
>>>
>> The linked proposal contains examples written in both "legacy" syntax
>> (marked "ES2015") and in ES7 syntax with async/await (marked "ES2016").
>> Please do read it.
>>
>> As the syntax additions are "just sugar" on top of Promises, the
>> underlying issue of mixing IDB+Promises remains. The proposal attempts to
>> make code using IDB with async/await syntax approachable, while not
>> entirely replacing the existing API.
>>
>>
>>>
>>> Sent from my iPhone
>>>
>>> On Sep 28, 2015, at 10:43 AM, Joshua Bell <jsb...@google.com> wrote:
>>>
>>> One of the top requests[1] we've received for future iterations of
>>> Indexed DB is integration with ES Promises. While this initially seems
>>> straightforward ("aren't requests just promises?") the devil is in the
>>> details - events vs. microtasks, exceptions vs. rejections, automatic
>>> commits, etc.
>>>
>>> After some noodling and some very helpful initial feedback, I've got
>>> what I think is a minimal proposal for incrementally evolving (i.e. not
>>> replacing) the Indexed DB API with some promise-friendly affordances,
>>> written up here:
>>>
>>> https://github.com/inexorabletash/indexeddb-promises
>>>
>>> I'd appreciate feedback from the WebApps community either here or in
>>> that repo's issue tracker.
>>>
>>> [1] https://www.w3.org/2008/webapps/wiki/IndexedDatabaseFeatures
>>>
>>>
>>
>


Re: Indexed DB + Promises

2015-09-28 Thread Marc Fawzi
Have you looked at ES7 async/await? I find that pattern makes both simple
and very complex (even dynamic) async coordination much easier to deal with
than the Promise API. I mean from a developer perspective.


Sent from my iPhone

> On Sep 28, 2015, at 10:43 AM, Joshua Bell  wrote:
> 
> One of the top requests[1] we've received for future iterations of Indexed DB 
> is integration with ES Promises. While this initially seems straightforward 
> ("aren't requests just promises?") the devil is in the details - events vs. 
> microtasks, exceptions vs. rejections, automatic commits, etc.
> 
> After some noodling and some very helpful initial feedback, I've got what I 
> think is a minimal proposal for incrementally evolving (i.e. not replacing) 
> the Indexed DB API with some promise-friendly affordances, written up here:
> 
> https://github.com/inexorabletash/indexeddb-promises
> 
> I'd appreciate feedback from the WebApps community either here or in that 
> repo's issue tracker.
> 
> [1] https://www.w3.org/2008/webapps/wiki/IndexedDatabaseFeatures
> 


Re: Indexed DB + Promises

2015-09-28 Thread Marc Fawzi
How about using ES7 decorators, like so:

@idb_promise
function () {
  //some code that uses the IDB API in Promise based way
}

and it would add .promise to the IDB APIs
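The decorator proposals attach to classes and class members rather than
plain function declarations, so the closest plain-JS approximation today is
a higher-order wrapper. `idbPromise`, `promisifyRequest`, and the stand-in
store below are all hypothetical names for illustration:

```javascript
// Promisify an IDB-style request object (fires onsuccess/onerror).
function promisifyRequest(request) {
  return new Promise((resolve, reject) => {
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

// Decorator-style wrapper: runs the wrapped function with a promisified
// view of the store, approximating the @idb_promise idea above.
function idbPromise(fn) {
  return (store, ...args) => {
    const promisified = {
      get: key => promisifyRequest(store.get(key)),
    };
    return fn(promisified, ...args);
  };
}

// Stand-in store so the sketch runs without a browser.
const fakeStore = {
  get(key) {
    const request = {};
    setTimeout(() => {
      request.result = key * 2;
      request.onsuccess();
    }, 0);
    return request;
  },
};

const readDoubled = idbPromise(async (store, key) => store.get(key));
readDoubled(fakeStore, 21).then(v => console.log(v)); // logs 42
```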



On Mon, Sep 28, 2015 at 1:26 PM, David Rajchenbach-Teller <
dtel...@mozilla.com> wrote:

> On 28/09/15 22:14, Marc Fawzi wrote:
> > <<
> > Instead of having .promise appended to the IDB methods as
> > in `store.openCursor(query).promise` why couldn't you configure the IDB
> > API to be in Promise returning mode and in that case openCursor(query)
> > would return a Promise.
> >>>
> >
> > I meant user configurable, maybe as a global config.
>
> That sounds problematic. What if a codebase is modernized piecewise, and
> only some modules have been made Promise-friendly?
>
>
> --
> David Rajchenbach-Teller, PhD
>  Performance Team, Mozilla
>
>