Re: [fileapi] urn -> URL

2009-11-12 Thread Anne van Kesteren
On Thu, 12 Nov 2009 07:45:30 +0100, Julian Reschke julian.resc...@gmx.de  
wrote:

Anne van Kesteren wrote:
I don't see a reason why we should call the member urn. URL is much  
more consistent with other parts of the Web platform and works just as  
well. I thought we agreed on this previously so I'm just mentioning it  
here since it seems to have changed again.


URN seems to be fine as long as the identifier actually *is* a URN (which  
it currently is).


That being said, and as mentioned before, I'm still not convinced that  
the spec needs to recommend a specific URI scheme. We have talked about  
that before; is there something in the mailing list archives that  
actually summarizes why this is needed?


Finally, *at this time* (while it *is* a URN) renaming to URL would be  
inconsistent with the relevant base specs, and produce even more  
confusion. The right thing to do here is to stay consistent with WebArch  
and RFC 3986, thus fix the terminology in HTML5.


It would however be consistent with WebSocket.URL, <input type=url>,  
url(image), EventSource.URL, HTMLDocument.URL, etc. Keeping the  
author-facing APIs the same would be a good thing IMO.



--
Anne van Kesteren
http://annevankesteren.nl/



Re: CSRF vulnerability in Tyler's GuestXHR protocol?

2009-11-12 Thread Devdatta
Hi Tyler,

Some parts of the protocol are not clear to me. Can you please clarify
the following :
1. In msg 1, what script context is the browser running in? Site A or
Site B? (In other words, who initiates the whole protocol?)

2. Is msg 3 a form POST or an XHR POST? If the latter, 5 needs to be
marked as a GuestXHR.

3. The 'secret123' token: does it expire? If yes, when/how? Also, if
it expires, will the user have to confirm the grant from A again?


Thanks
Devdatta



2009/11/10 Tyler Close tyler.cl...@gmail.com:
 I've elaborated on the example at:

 http://sites.google.com/site/guestxhr/maciej-challenge

 I've tried to include all the information from our email exchange.
 Please let me know what parts of the description remain ambiguous.

 Just so that we're on the same page, the prior description was only
 meant to give the reader enough information to see that the scenario
 is possible to implement under Maciej's stated constraints. I expected
 the reader to fill in their favored technique where that choice could
 be done safely in many ways. Many of the particulars of the design
 (cookies vs URL arguments, 303 vs automated form post, UI for noting
 conflicts) can be done in several different ways and the choice isn't
 very relevant to the current discussion. All that said, I'm happy to
 fill out the scenario with as much detail as you'd like, if that helps
 us reach an understanding.

 --Tyler

 On Thu, Nov 5, 2009 at 8:31 PM, Adam Barth w...@adambarth.com wrote:
 You seem to be saying that your description of the protocol is not
 complete and that you've left out several security-critical steps,
 such as

 1) The user interface for confirming transactions.
 2) The information the server uses to figure out which users it is talking 
 to.

 Can you please provide a complete description of your protocol with
 all the steps required?  I don't see how we can evaluate the security
 of your protocol without such a description.

 Thanks,
 Adam


 On Thu, Nov 5, 2009 at 12:05 PM, Tyler Close tyler.cl...@gmail.com wrote:
 Hi Adam,

 Responses inline below...

 On Thu, Nov 5, 2009 at 8:56 AM, Adam Barth w...@adambarth.com wrote:
 Hi Tyler,

 I've been trying to understand the GuestXHR protocol you propose for
 replacing CORS:

 http://sites.google.com/site/guestxhr/maciej-challenge

 I don't understand the message in step 5.  It seems like it might have
 a CSRF vulnerability.  More specifically, what does the server do when
 it receives a GET request for https://B/got?A=secret123?

 Think of the resource at /got as like an Inbox for accepting an add
 event permission from anyone. The meta-variable A in the query
 string, along with the secret, is the URL to send events to. So a
 concrete request might look like:

 GET /got?site=https%3A%2F%2Fcalendar.example.com&s=secret123
 Host: upcoming.example.net

 When upcoming.example.net receives this request, it might:

 1) If no association for the site exists, add it
 2) If an existing association for the site exists respond with a page
 notifying the user of the collision and asking if it should overwrite
 or ignore.
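A minimal sketch of how B's /got handler might implement the two cases above (the handler name, state store, and return values are illustrative, not part of the protocol description):

```python
# Hypothetical sketch of upcoming.example.net's handler for
# "GET /got?site=...&s=...". ASSOCIATIONS and handle_got are made-up names.
from urllib.parse import parse_qs

ASSOCIATIONS = {}  # maps (user, site) -> secret token

def handle_got(user, query_string):
    params = parse_qs(query_string)  # percent-decoding happens here
    site = params["site"][0]
    secret = params["s"][0]
    key = (user, site)
    if key not in ASSOCIATIONS:
        ASSOCIATIONS[key] = secret   # case 1: no association yet, add it
        return "associated"
    if ASSOCIATIONS[key] == secret:
        return "already-associated"  # idempotent repeat of the same grant
    return "collision-page"          # case 2: ask the user overwrite/ignore

print(handle_got("alice", "site=https%3A%2F%2Fcalendar.example.com&s=secret123"))
```

Note that, as discussed further down-thread, nothing here binds the request to server A; a forged request with a different secret simply lands in the "collision-page" branch.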

 Notice that step 6 is a response from Site B back to the user's browser.

 Alternatively, the response in step 6 could always be a confirmation
 page asking the user to confirm any state change that is about to be
 made. So, the page from the upcoming event site might say:

 I just received a request to add a calendar to your profile. Did you
 initiate this request? yes no

 Note that such a page would also be a good place to ask the user for a
 petname for the new capability, if you're into such things, but I
 digress...

 The slides say Associate user,A with secret123.  That sounds like
 server B changes state to associate secret123 with the pair (user,
 A).  What stops an attacker from forging a cross-site request of the
 form https://B/got?A=evil123?

 In the design as presented, nothing prevents this. I considered the
 mitigation presented above sufficient for Maciej's challenge. If
 desired, we could tighten things up, without resorting to an Origin
 header, but I'd have to add some more stuff to the explanation.

  Won't that overwrite the association?

 That seems like a bad idea.

 There doesn't seem to be anything in the protocol that binds the A
 in that message to server A.

 The A is just the URL for server A.

 More generally, how does B know the message https://B/got?A=secret123
 has anything to do with user?  There doesn't seem to be anything in
 the message identifying the user.  (Of course, we could use cookies to
 do that, but we're assuming the cookie header isn't present.)

 This request is just a normal page navigation, so cookies and such
 ride along with the request. In the diagrams, all requests are normal
 navigation requests unless prefixed with GXHR:.

 We used these normal navigation requests in order to keep the user
 interface and network communication diagram as similar to Maciej's
 solution as possible. If I 

Re: STS and lockCA

2009-11-12 Thread Adam Barth
On Wed, Nov 11, 2009 at 7:25 AM, Bil Corry b...@corry.biz wrote:
 Would LockCA prevent the site from loading if it encountered a new cert from 
 the same CA?

My understanding is that it would not.

 Or are you talking about a site that wants to switch CAs and is using LockCA?

I think Gervase means that you want some overlap so that folks that
connect to your site the day after you renew your certificate are
protected.

 How about instead there's a way to set the max-age relative to the cert 
 expiration?  So -3024000 is two weeks before the cert expiration and 3024000 
 is two weeks after.  I'm in agreement with Devdatta that it would be easy for 
 someone to lock out their visitors, and I think this is easier to implement.

That seems overly complicated and contrary to the semantics of max-age
in other HTTP headers.

I'm not convinced we need to paternally second-guess site operators.
Keep in mind that the site operator can supply a lower max-age in a
subsequent request if they realize they screwed up and want to reduce
the duration.  That said, it might be worth capping the max-age at one
or two years.
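That capping idea could be sketched as a client-side clamp when the STS header is processed (the two-year figure and the parsing details here are assumptions for illustration):

```python
# Sketch: clamp the advertised STS max-age to a client-imposed ceiling.
# The two-year cap is taken from the suggestion above, not from any spec.
import re

TWO_YEARS = 2 * 365 * 24 * 3600  # 63072000 seconds

def effective_max_age(header_value):
    m = re.search(r"max-age\s*=\s*(\d+)", header_value)
    if m is None:
        return None  # no max-age directive present
    return min(int(m.group(1)), TWO_YEARS)

print(effective_max_age("max-age=500"))          # honored as-is
print(effective_max_age("max-age=99999999999"))  # capped at two years
```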

Adam



DAP and security (was: Rename “File API” to “FileReader API”?)

2009-11-12 Thread Dominique Hazael-Massieux
Le mardi 10 novembre 2009 à 17:47 -0800, Maciej Stachowiak a écrit :
 I would be concerned with leaving file writing to DAP, because a  
 widely held view in DAP seems to be that security can be ignored while  
 designing APIs and added back later with an external policy file  
 mechanism.

Frederick already mentioned this isn’t the case at all, and I want to
strongly reject the notion that DAP is considering security as an
after-the-fact or out-of-band aspect in the design of its APIs.

Our charter clearly stipulates that our policy model “must be consistent
with the existing same origin policies (as documented in the HTML5
specification), in the sense that a deployment of the policy model in
Web browsers must be possible”.

In fact, most of the models that have been discussed in this thread to
reduce the risks exposed by new APIs (a sandbox for writing, user
interaction, or a markup-based element for sharing data) were already
mentioned as options by DAP WG participants during our F2F last week.

More generally, I don't think assuming that DAP would create worse or
less secure APIs than WebApps or any other group is either fair or
useful for ensuring good collaboration between our groups. And clearly,
we will actively seek feedback and input from the WebApps Working
Group when we produce new APIs, which should also help reduce the
fear that we would get it all wrong :)

Regards,

Dom





Re: Use Cases and Requirements for Saving Files Securely

2009-11-12 Thread Jonas Sicking
On Wed, Nov 11, 2009 at 6:59 PM, Maciej Stachowiak m...@apple.com wrote:

 On Nov 11, 2009, at 3:51 PM, Eric Uhrhane wrote:

 On Mon, Nov 9, 2009 at 4:21 PM, Maciej Stachowiak m...@apple.com wrote:

 On Nov 9, 2009, at 12:08 PM, Ian Hickson wrote:

 On Mon, 2 Nov 2009, Doug Schepers wrote:

 Please send in use cases, requirements, concerns, and concrete
 suggestions about the general topic (regardless of your opinion about
 my
 suggestion).

 Some use cases:

 * Ability to manage attachments in Web-based mail clients, both
 receiving
  and sending
 * Ability to write a Web-based mail client that uses mbox files or the
  Maildir format locally
 * Ability to write a Web-based photo management application that handles
  the user's photos on the user's computer
 * Ability to expose audio files to native media players
 * Ability to write a Web-based media player that indexes the user's
 media

 These are good use cases.


 Basically these require:

 - A per-origin filesystem (ideally exposed as a directory on the user's
  actual filesystem)
 - The ability to grant read and/or write privileges to a particular
  directory to an origin
 - An API for files that supports reading and writing arbitrary ranges
 - An API for directories that supports creating, renaming, moving, and
  enumerating child directories and files

 Can you explain how these requirements follow from the use cases? It
 seems
 to me the use cases you cited would be adequately covered by:

 - Existing facilities including input type=file with multiple
 selection.
 - File read facilities as outlined in the File API spec.
 - Ability to create named writable files in a per-origin private use area
 (with no specific requirement that they be browsable by the user, or in
 hierarchical directories).

 I think that exposing audio files to native players would require the
 ability to create directories in the native filesystem, thus making
 them browsable.  Sure, you could just toss them in a single directory
 without hierarchy, but that's not a great user experience, and it hits
 serious performance problems with large audio collections.  The same
 problems would affect the photo manager.

 With the native music player I'm most familiar with, iTunes, the user is not
 even really aware of where audio files are in the file system. It does use a
 directory hierarchy, but it's pretty rare for users to actually poke around
 in there. And the iPod application on iPhone (as well as the iPod itself) do
 not even have a user-visible filesystem hierarchy. So overall I don't buy
 hierarchical directories as a hard requirement to build a music player or to
 expose content to a music player.

 That being said, I think creating subdirectories in a per-origin private use
 area is probably less risky than user-granted privilege to manipulate
 directories elsewhere in the filesystem. But I would be inclined to avoid
 this mechanism at first, and if it is needed, start with the bare minimum.
 I'm not convinced by your argument that it is necessary.

I can think of two security concerns if a website is able to store
executable files with a proper .exe extension on Windows:
1. It has happened several times in the past that exploits have made it
possible to run an executable stored on the user's system. If a website
is able to first store an arbitrary executable and then execute it,
that's much worse than being able to run the executables that already
live on the system. In other words, being able to write an executable
to the user's system can be an important first step in a two-step
attack.
This could be fixed by 'salting' all the directory names, i.e. making
the directory where the files are stored unguessable. We do this for
the profile directories in Firefox.
2. Having an untrusted executable stored on the user's system is
somewhat scary. A user browsing around on his hard drive could easily
run such an executable by accident, especially since the executable
could carry an arbitrary icon, such as an icon similar to some other
program's. Imagine for example a file called skype.exe written with a
Skype icon. A user could very well stumble on it accidentally while
searching for Skype on his/her computer.
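A rough sketch of the salting idea from point 1, assuming a per-profile random salt keys the directory name (function and directory names here are made up, not Firefox's actual scheme):

```python
# Sketch: derive an unguessable per-origin storage directory, similar in
# spirit to Firefox's salted profile directories. Names are illustrative.
import hashlib
import os
import secrets

def storage_dir_for(origin, salt):
    # Without knowing the per-profile salt, an attacker cannot predict
    # the on-disk path of files written by a given origin.
    digest = hashlib.sha256(salt + origin.encode()).hexdigest()[:16]
    return os.path.join("storage", digest)

salt = secrets.token_bytes(16)  # generated once per profile
print(storage_dir_for("https://example.com", salt))
```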

I think that if we were to implement something like this in Firefox,
we would probably never write executable files. Instead we would
mangle their on-disk name such that Windows wouldn't recognize it as
executable (on Mac/Linux I think never setting the 'executable' flag
would have the same effect). This could of course be hidden from the
API, such that the web page API still sees a file with a proper .exe
extension.
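The name-mangling idea might look roughly like this (the extension list and the '.bin' suffix are assumptions for illustration, not an actual implementation):

```python
# Sketch: mangle the on-disk name so Windows won't treat the file as
# executable, while the API-visible name keeps its original extension.
EXECUTABLE_EXTS = {".exe", ".bat", ".com", ".scr", ".pif"}  # illustrative list

def on_disk_name(api_name):
    root, dot, ext = api_name.rpartition(".")
    if dot and ("." + ext.lower()) in EXECUTABLE_EXTS:
        return api_name + ".bin"  # neutered: no longer a recognized executable
    return api_name

print(on_disk_name("skype.exe"))   # skype.exe.bin
print(on_disk_name("notes.txt"))   # notes.txt
```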

There's quite possibly other issues like this as well. Writing .doc
files with evil macros comes to mind.

/ Jonas



Re: Use Cases and Requirements for Saving Files Securely

2009-11-12 Thread Ian Fette (イアンフェッティ)
This is really getting into fantasy-land... Writing a file and hoping that
the user actually opens up explorer/finder/whatever, browses to some
folder deep within the profile directory, and then double-clicks something?
Telling a user "click here and run blah to get a pony" is so much easier.

2009/11/12 Jonas Sicking jo...@sicking.cc

 On Wed, Nov 11, 2009 at 6:59 PM, Maciej Stachowiak m...@apple.com wrote:
 
  On Nov 11, 2009, at 3:51 PM, Eric Uhrhane wrote:
 
  On Mon, Nov 9, 2009 at 4:21 PM, Maciej Stachowiak m...@apple.com
 wrote:
 
  On Nov 9, 2009, at 12:08 PM, Ian Hickson wrote:
 
  On Mon, 2 Nov 2009, Doug Schepers wrote:
 
  Please send in use cases, requirements, concerns, and concrete
  suggestions about the general topic (regardless of your opinion about
  my
  suggestion).
 
  Some use cases:
 
  * Ability to manage attachments in Web-based mail clients, both
  receiving
   and sending
  * Ability to write a Web-based mail client that uses mbox files or the
   Maildir format locally
  * Ability to write a Web-based photo management application that
 handles
   the user's photos on the user's computer
  * Ability to expose audio files to native media players
  * Ability to write a Web-based media player that indexes the user's
  media
 
  These are good use cases.
 
 
  Basically these require:
 
  - A per-origin filesystem (ideally exposed as a directory on the
 user's
   actual filesystem)
  - The ability to grant read and/or write privileges to a particular
   directory to an origin
  - An API for files that supports reading and writing arbitrary ranges
  - An API for directories that supports creating, renaming, moving, and
   enumerating child directories and files
 
  Can you explain how these requirements follow from the use cases? It
  seems
  to me the use cases you cited would be adequately covered by:
 
  - Existing facilities including input type=file with multiple
  selection.
  - File read facilities as outlined in the File API spec.
  - Ability to create named writable files in a per-origin private use
 area
  (with no specific requirement that they be browsable by the user, or in
  hierarchical directories).
 
  I think that exposing audio files to native players would require the
  ability to create directories in the native filesystem, thus making
  them browsable.  Sure, you could just toss them in a single directory
  without hierarchy, but that's not a great user experience, and it hits
  serious performance problems with large audio collections.  The same
  problems would affect the photo manager.
 
  With the native music player I'm most familiar with, iTunes, the user is
 not
  even really aware of where audio files are in the file system. It does
 use a
  directory hierarchy, but it's pretty rare for users to actually poke
 around
  in there. And the iPod application on iPhone (as well as the iPod itself)
 do
  not even have a user-visible filesystem hierarchy. So overall I don't buy
  hierarchical directories as a hard requirement to build a music player or
 to
  expose content to a music player.
 
  That being said, I think creating subdirectories in a per-origin private
 use
  area is probably less risky than user-granted privilege to manipulate
  directories elsewhere in the filesystem. But I would be inclined to avoid
  this mechanism at first, and if it is needed, start with the bare
 minimum.
  I'm not convinced by your argument that it is necessary.

 I can think of two security concerns if a website is able to store
 executable files with a proper .exe extension on windows:
 1. It's happened several times in the past that exploits have made it
 possible to run a executable stored on the users system. If a website
 is able to first store an arbitrary executable and then execute it,
 that's much worse than being able to run the executables that live on
 the system already. In other words, being able to write a executable
 to the users system can be a important first step in a two-step
 attack.
 This could be fixed by 'salting' all the directory names. I.e. make
 the directory where the files are stored unguessable. We do this for
 the profile directories in Firefox.
 2. Having a untrusted executable stored on the users system is
 somewhat scary. A user browsing around on his hard drive could easily
 accidentally run such an executable. Especially since the executable
 could contain a arbitrary icon, such as an icon similar some other
 program. Imagine for example writing a file called skype.exe with a
 skype icon being written. A user could very well accidentally find
 this while searching for skype on his/her computer.

 I think that if we were to implement something like this in firefox,
 we would probably never write executable files. Instead we would
 mangle their on-disk-name such that windows wouldn't recognize it as
 executable. (on mac/linux I think never setting the 'executable' flag
 would have the same effect). This could of course be hidden from 

Re: Use Cases and Requirements for Saving Files Securely

2009-11-12 Thread Jonas Sicking
2009/11/12 Ian Fette (イアンフェッティ) ife...@google.com:
 This is really getting into fantasy-land... Writing a file and hoping that
 the user actually opens up explorer/finder/whatever and browses to some
 folder deep within the profile directory, and then double clicks something?
 Telling a user click here and run blah to get a pony is so much easier.

So first off, that only addresses one of the two attacks I listed.

But even that case I don't think is that fantasy-y. The whole point of
writing actual files is so that users can interact with the files,
right? In doing so they'll be just a double-click away from running
arbitrary malicious code, with no warning dialogs or anything. And the
attacker has a range of social-engineering opportunities, using the
file's icon and name to make double-clicking it inviting.

Like I said, I think this might be possible to work around in the
implementation by making sure to neuter all executable files before
they go to disk.

/ Jonas



Re: Use Cases and Requirements for Saving Files Securely

2009-11-12 Thread Adam Barth
2009/11/12 Jonas Sicking jo...@sicking.cc:
 2009/11/12 Ian Fette (イアンフェッティ) ife...@google.com:
 This is really getting into fantasy-land... Writing a file and hoping that
 the user actually opens up explorer/finder/whatever and browses to some
 folder deep within the profile directory, and then double clicks something?
 Telling a user click here and run blah to get a pony is so much easier.

 So first off that only addresses one of the two attacks I listed.

 But even that case I don't think is that fantasy-y. The whole point of
 writing actual files is so that users can interact with the files,
 right? In doing so they'll be just a double-click away from running
 arbitrary malicious code. No warning dialogs or anything. Instead the
 attacker has a range of social engineering opportunities using file
 icon and name as to make doubleclicking the file inviting.

 Like I said, I think this might be possible to work around in the
 implementation by making sure to neuter all executable files before
 they go to disk.

Keep in mind that some users interact with their file systems via
search, not browse.  For example, if I use Quicksilver or Spotlight to
launch skype.exe (sorry for mixing platforms), I might easily launch
the skype.exe buried in my profile instead of the one in Program
Files.

Adam



Re: STS and lockCA

2009-11-12 Thread Gervase Markham
On 11/11/09 15:25, Bil Corry wrote:
 Would LockCA prevent the site from loading if it encountered a new
 cert from the same CA?

No. Hence the name - lock _CA_. :-P

(BTW, I'm not subscribed to public-webapps; you'll need to CC me on any
conversation you want me in.)

 Or are you talking about a site that wants to
 switch CAs and is using LockCA?

Exactly. If sites never wanted to switch CAs, then LockCA could be
designed to never expire and no-one would care. The design difficulty is
setting things up so they can easily switch with the minimum decrease in
protection.

 How about instead there's a way to set the max-age relative to the
 cert expiration?  So -3024000 is two weeks before the cert expiration
 and 3024000 is two weeks after.  I'm in agreement with Devdatta that
 it would be easy for someone to lock out their visitors, and I think
 this is easier to implement.

I don't think this helps much, and it certainly makes things more
complicated.

Musings: STS already has an expiration mechanism. However, a site like
Paypal wants ForceTLS basically forever, but may at some point want to
switch CAs. If we tie LockCA expiry to ForceTLS (STS) expiry, they have
to weaken their ForceTLS protection to switch CAs. This is bad. On the
other hand, if we don't tie them together, we have to have a different
expiry mechanism, thereby having two of them. This is also bad. :-|

The expiry design needs to minimise the vulnerability window. Ideally,
this means that every single client would stop checking for the same CA
at the same moment, and then the site could switch and re-enable the
protection. We also ideally want this to work on machines with badly-set
clocks, and across time zones.

Therefore, I think the right answer is to say LockCA expires in X
seconds, defining X in the header. A site which wants to obsessively
minimise its vulnerability window can dynamically generate the value,
decreasing it from X by 1 second per second until the change time. All
clients will do the math independent of their clocks, and stop checking
at the right moment. Sites which care a bit less can just have it
normally X, then remove the header entirely X seconds before the
scheduled change, and wait for all the clients to stop checking.
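The countdown scheme described above can be sketched as follows (function and variable names are illustrative):

```python
# Sketch of the dynamically generated LockCA max-age: normally the site
# serves lifetime X, but once the scheduled CA switch is nearer than X it
# counts down, so every client stops checking at change_time regardless of
# its local clock. Names here are made up for illustration.
def lockca_max_age(now, change_time, normal_lifetime):
    remaining = change_time - now
    if remaining <= 0:
        return None  # switch underway: stop sending the header entirely
    return min(normal_lifetime, remaining)

# With X = 100s and a change 40s away, clients receive a 40s expiry.
print(lockca_max_age(0, 1000, 100))    # 100 (far from the change)
print(lockca_max_age(960, 1000, 100))  # 40 (counting down)
print(lockca_max_age(1000, 1000, 100)) # None
```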

Gerv



Re: Use Cases and Requirements for Saving Files Securely

2009-11-12 Thread Ian Fette (イアンフェッティ)
2009/11/12 Jonas Sicking jo...@sicking.cc

 2009/11/12 Ian Fette (イアンフェッティ) ife...@google.com:
  This is really getting into fantasy-land... Writing a file and hoping
 that
  the user actually opens up explorer/finder/whatever and browses to some
  folder deep within the profile directory, and then double clicks
 something?
  Telling a user click here and run blah to get a pony is so much easier.

 So first off that only addresses one of the two attacks I listed.


Fair


 But even that case I don't think is that fantasy-y. The whole point of
 writing actual files is so that users can interact with the files,
 right? In doing so they'll be just a double-click away from running
 arbitrary malicious code. No warning dialogs or anything. Instead the


Why do you assume this? On Windows, we can write the MotW identifier, which
would lead to Windows showing a warning. On Linux, we could refuse to chmod
+x.
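For reference, on NTFS the Mark of the Web is conveyed via the Zone.Identifier alternate data stream; a sketch of producing that tag (the helper name is made up, and actually writing the stream requires an NTFS volume):

```python
# Sketch: the Zone.Identifier alternate data stream that marks a file as
# coming from the Internet zone (ZoneId=3), prompting Windows warnings.
def motw_stream(path):
    """Return (stream_path, payload) for tagging `path` with the MotW."""
    # On NTFS, "path:Zone.Identifier" addresses the alternate data stream.
    return (path + ":Zone.Identifier", "[ZoneTransfer]\r\nZoneId=3\r\n")

stream, payload = motw_stream(r"C:\Downloads\skype.exe")
print(stream)  # C:\Downloads\skype.exe:Zone.Identifier
```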


 attacker has a range of social engineering opportunities using file
 icon and name as to make doubleclicking the file inviting.

 Like I said, I think this might be possible to work around in the
 implementation by making sure to neuter all executable files before
 they go to disk.

 / Jonas



Re: Use Cases and Requirements for Saving Files Securely

2009-11-12 Thread Jonas Sicking
2009/11/12 Ian Fette (イアンフェッティ) ife...@google.com:
 2009/11/12 Jonas Sicking jo...@sicking.cc

 2009/11/12 Ian Fette (イアンフェッティ) ife...@google.com:
  This is really getting into fantasy-land... Writing a file and hoping
  that
  the user actually opens up explorer/finder/whatever and browses to some
  folder deep within the profile directory, and then double clicks
  something?
  Telling a user click here and run blah to get a pony is so much
  easier.

 So first off that only addresses one of the two attacks I listed.


 Fair


 But even that case I don't think is that fantasy-y. The whole point of
 writing actual files is so that users can interact with the files,
 right? In doing so they'll be just a double-click away from running
 arbitrary malicious code. No warning dialogs or anything. Instead the

 Why do you assume this? On Windows, we can write the MotW identifier, which
 would lead to windows showing a warning. On linux, we could refuse to chmod
 +x.

Ah, I don't know enough about this feature, so I can't really comment. All
the information I found was about MotW on webpages, not on
executables.

/ Jonas



Summary of Media Annotations WG 5th F2F in Santa Clara

2009-11-12 Thread Joakim Söderberg
Hello everyone, 
I hope you all had a fruitful meeting during the TPAC!
Here is a short summary for Media Annotations WG (MAWG) 5th F2F meeting.
 
The group's charter is to make it easier for web developers to access metadata 
in multimedia objects. Our approach is to devise a metadata ontology that 
defines relationships between metadata properties/attributes. 
 
Please note that the mission of this WG is to collaborate with existing 
formats, and that we do not introduce any new metadata attributes. It follows 
that we do not have any placeholders to store or accumulate metadata; we 
only translate already existing values through our mediating metadata 
vocabulary, comprised of the ontology and API.
 
The group met for two days in Santa Clara during the TPAC 2009 meeting. 12 
group participants and 13 observers were registered. Three documents have been 
published by the group as FPWDs:
 
- Use Cases and Requirements for Ontology and API for Media Object 
1.0, June 2009
 
- Ontology for Media Resource 1.0, June 2009 
 
- API for Media Resource 1.0, October 2009 
 
The Ontology and API for Media Resource documents are intended to become 
recommendations, and are scheduled to go to Last Call in March 2010.
 
More highlights from the F2F:
- API architecture discussions focused on web browser vendors, with Doug 
Schepers of the W3C and Silvia Pfeiffer from Mozilla.
 
- The group now considers having just one function to return any metadata from 
any type of media object (see: 
http://lists.w3.org/Archives/Public/public-media-annotation/2009Nov/0012.html).
 
- Discussions with Rigo from PLING (Policy Language Interest Group) concerning 
support for Policy documents.
 
- Coordination with John at Stanford about accessibility support in HTML5 and 
the MAWG vocabulary.
 
 
 
Best Regards
Joakim Söderberg
co-Chair Media Annotations WG


RE: [fileapi] urn -> URL

2009-11-12 Thread paul.downey
 Anne van Kesteren wrote:

 It would however be consistent with WebSocket.URL, input type=url,  
 url(image), EventSource.URL, HTMLDocument.URL, etc. Keeping the  
 author-facing APIs the same would be a good thing IMO.

+1 I found the use of the URN scheme a little opaque and magical.

--
Paul (psd)
http://blog.whatfettle.com
http://osmosoft.com




Re: CfC: to publish Last Call Working Draft of XHR (1); deadline 18 November

2009-11-12 Thread Arthur Barstow

Anne, All,

On Nov 10, 2009, at 5:01 PM, Barstow Art (Nokia-CIC/Boston) wrote:


As with all of our CfCs, positive response is preferred and
encouraged and silence will be assumed to be assent. The deadline for
comments is November 18.


I support this publication.

Assuming we do get consensus to publish it, two things:

1. Length of the comment period: 3 weeks is the minimum and would be OK  
with me, especially since this spec has been previously published as  
an LCWD.


2. Who do we ask to review the LC, both W3C WGs and external groups?

-Regards, Art Barstow





Re: CfC: to publish Last Call Working Draft of XHR (1); deadline 18 November

2009-11-12 Thread Anne van Kesteren
On Thu, 12 Nov 2009 12:49:22 +0100, Arthur Barstow art.bars...@nokia.com  
wrote:
1. Length of the comment period. 3 weeks is minimum and would be OK with  
me, especially since this spec has been previously published as a LCWD.


Sounds good.



2. Who do we ask to review the LC, both W3C WGs and external groups?


HTTP WG, Device APIs WG, HTML WG (due to dependencies),  
Internationalization Core WG (they commented before) and maybe the TAG?


Every group is of course welcome to provide review. In fact, encouraged.

Cheers,


--
Anne van Kesteren
http://annevankesteren.nl/



Re: Use Cases and Requirements for Saving Files Securely

2009-11-12 Thread Charles McCathieNevile
On Wed, 11 Nov 2009 09:51:56 +0100, Maciej Stachowiak m...@apple.com  
wrote:




On Nov 10, 2009, at 11:45 PM, Charles McCathieNevile wrote:

On Tue, 10 Nov 2009 01:21:06 +0100, Maciej Stachowiak m...@apple.com  
wrote:




On Nov 9, 2009, at 12:08 PM, Ian Hickson wrote:


On Mon, 2 Nov 2009, Doug Schepers wrote:


Please send in use cases, requirements, concerns, and concrete
suggestions about the general topic (regardless of your opinion  
about my

suggestion).


Some use cases:

* Ability to manage attachments in Web-based mail clients, both  
receiving  and sending

* Ability to write a Web-based mail client that uses mbox files or the
Maildir format locally
* Ability to write a Web-based photo management application that  
handles

the user's photos on the user's computer
* Ability to expose audio files to native media players
* Ability to write a Web-based media player that indexes the user's  
media


These are good use cases.


I would like to expand them a little, in each case making it possible  
to use existing content, or expose content directly to the user,  
enabling them to change the software they use, or even use multiple  
tools on the same content - a web app one day, a different one the next  
week, a piece of shrink-wrap software from time to time.


I'm having trouble following. Could you give more specific examples of  
what you have in mind?


* Ability to make a web-based mail interface that has access to the actual  
files that my local mail client has.


* Ability to make a web-based audio player that lets me play the audio I  
already own (but with a different UI accessing different metadata), which  
have been filed by iTunes in a set of directories on my local drive.


Does your expansion imply new requirements, on top of either Ian's list  
or my list?


As I understand it, your list is just a restriction of Ian's. I am not  
sure if this requires an extension of Ian's list - I am listing the things  
I am actually trying to do or know of people actually working on, before  
trying to get the requirements for each different approach.



And add:

* A document management system as hybrid web app, allowing file-based  
access to the documents as well.


I don't exactly follow this either. By hybrid web app, do you mean  
something running locally with some portion of native code doing part of  
the job?


No, I mean a web-app running as a web app, which assumes that other  
applications, some local, will want to use the document.


For example, I use several different applications to interact with images,  
with PDF documents and Documents in Word/OpenOffice formats, depending on  
what I am doing with the document at the time. I would like to add to that  
range of applications the ability to use a web-app without having to  
maintain separate copies - synching is even harder than managing my file  
system.


cheers

Chaals

--
Charles McCathieNevile  Opera Software, Standards Group
je parle français -- hablo español -- jeg lærer norsk
http://my.opera.com/chaals   Try Opera: http://www.opera.com



RE: [WARP] Comments to WARP spec

2009-11-12 Thread Marcin Hanclik
Hi,

What about semantic distinctions?
The tag attribute as proposed until now seems too detailed and does not scale.
For HTML/XHR:
<script> means executable content retrieved from the remote host.
<img>, <video>, etc. mean displayable content retrieved from the remote host.
<iframe> means a container (possibly for executable and displayable content) 
retrieved from the remote host.
<form> means form submission, i.e. data is sent and not retrieved (a topic 
discussed at TPAC. This also relates to the notion of retrievable content that 
is currently defined in WARP).
API means that the network resource is to be requested by some API and not 
markup.

We could have a model similar to @rel on <link> from HTML, i.e. some meta 
information.
We would probably like to distinguish between executable and non-executable 
(e.g. displayable or styling) content, and the kinds of containers into which 
we do or do not have insight.
Keeping WARP on an abstract level, we could specify that the semantics of the 
particular content in the WARP model is out of scope for WARP.
Then e.g. for HTML we could adopt the above distinctions in some other spec. It 
should work for HTML+SVG.

The proposal is:
add a "type" attribute on the <access> element that must have a value that is a 
set of space-separated tokens:
"exec" - any retrievable content that is executed within the user agent (i.e. 
something that, when retrieved, will be executed),
"display" - any retrievable content that is (only) displayed by the user agent,
"form" - any data submitted by the user agent,
"container" - any (markup) container that could be used to load executable, 
displayable or any other type of content by the user agent (e.g. some HTML 
page. This touches upon <a> being clicked in a widget: should the browser be 
opened?),
"api" - any retrievable and displayable content that is to be processed by the 
executable content within the user agent (e.g. by XHR. But what to do with 
submissions based on XHR...? It seems "api" blurs this model a bit, since it is 
undefined what would happen to the retrieved data. Also, e.g., the retrieved 
XML may be executed by some processor developed in script.),
"any" - all/any of the above.
A missing value equals "any" (the default).
This attribute specifies the origin of the access request and the purpose for 
the submitted/retrieved data.
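For concreteness, a hypothetical config.xml fragment using the proposed "type" attribute might look as follows. The origins are made-up examples, and the "type" tokens are only the proposal above; nothing here is part of the published WARP draft:

```xml
<!-- Hypothetical widget config.xml fragment illustrating the proposal;
     the type attribute is NOT part of the published WARP spec. -->
<widget xmlns="http://www.w3.org/ns/widgets">
  <!-- scripts and styling may be fetched from this origin,
       but nothing may be submitted to it -->
  <access origin="https://static.example.com" type="exec display"/>
  <!-- content retrieved only via APIs such as XHR -->
  <access origin="https://api.example.com" type="api"/>
  <!-- no type attribute: defaults to "any" -->
  <access origin="https://example.com"/>
</widget>
```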

Any views on this?

Thanks,
Marcin

Marcin Hanclik
ACCESS Systems Germany GmbH
Tel: +49-208-8290-6452  |  Fax: +49-208-8290-6465
Mobile: +49-163-8290-646
E-Mail: marcin.hanc...@access-company.com

-Original Message-
From: public-webapps-requ...@w3.org [mailto:public-webapps-requ...@w3.org] On 
Behalf Of Marcos Caceres
Sent: Tuesday, November 10, 2009 4:30 PM
To: SULLIVAN, BRYAN L (ATTCINW)
Cc: WebApps WG
Subject: Re: [WARP] Comments to WARP spec



SULLIVAN, BRYAN L (ATTCINW) wrote:
 Marcos,
 I agree there is an assumption behind the approach I proposed, which I also 
 believe will be valid for the vast majority of widgets which will actually 
 have index.html or something like that as the start page. Further, the 
 statements in the config.xml apply to all resources in the widget, not just 
 the start page, i.e. I can start with a non-HTML which references an HTML 
 file in the package, to which the tag attribute applies.

So we are clear, the tag attribute does not work in the following
situation. I want to disable x:script but allow v:script... unless you
know that elements from different namespaces will not be added dynamically
to the DOM:

<x:html xmlns:x="http://www.w3.org/1999/xhtml">
...
<x:script> ... </x:script>

<v:svg v:width="6cm" v:height="5cm" v:viewBox="0 0 600 500"
  xmlns:v="http://www.w3.org/2000/svg" version="1.1">
   <v:script src="..."></v:script>
</v:svg>

</x:html>

 If the proposed solution is inadequate, I welcome other suggestions.

I don't have a suggestion because I don't believe this part of WARP is
broken or is necessary.

 But as it stands, the WARP spec is not consistent with the web
security model, so we need to fix the <access> element definition somehow.

  Well, the whole point of WARP is to put these boundaries around the
behavior of widgets because they run locally. How a browsing context
should behave when run locally is not really defined by HTML5. This
leaves a gap for us to fill.




Access Systems Germany GmbH
Essener Strasse 5  |  D-46047 Oberhausen
HRB 13548 Amtsgericht Duisburg
Geschaeftsfuehrer: Michel Piquemal, Tomonori Watanabe, Yusuke Kanda

www.access-company.com

CONFIDENTIALITY NOTICE
This e-mail and any attachments hereto may contain information that is 
privileged or confidential, and is intended for use only by the
individual or entity to which it is addressed. Any disclosure, copying or 
distribution of the information by anyone else is strictly prohibited.
If you have received this document in error, please notify us promptly by 
responding to this e-mail. Thank you.


Re: [WARP] Comments to WARP spec

2009-11-12 Thread Marcos Caceres



Marcin Hanclik wrote:

Hi,

What about semantic distinctions?
The tag attribute as proposed until now seems too detailed and does not scale.
For HTML/XHR:
<script> means executable content retrieved from the remote host.
<img>, <video>, etc. mean displayable content retrieved from the remote host.
<iframe> means a container (possibly for executable and displayable content) 
retrieved from the remote host.
<form> means form submission, i.e. data is sent and not retrieved (a topic 
discussed at TPAC. This also relates to the notion of retrievable content that 
is currently defined in WARP).
API means that the network resource is to be requested by some API and not 
markup.

We could have a model similar to @rel on <link> from HTML, i.e. some meta 
information.
We would probably like to distinguish between executable and non-executable 
(e.g. displayable or styling) content, and the kinds of containers into which 
we do or do not have insight.
Keeping WARP on an abstract level, we could specify that the semantics of the 
particular content in the WARP model is out of scope for WARP.
Then e.g. for HTML we could adopt the above distinctions in some other spec. It 
should work for HTML+SVG.

The proposal is:
add a "type" attribute on the <access> element that must have a value that is a 
set of space-separated tokens:
"exec" - any retrievable content that is executed within the user agent (i.e. 
something that, when retrieved, will be executed),
"display" - any retrievable content that is (only) displayed by the user agent,
"form" - any data submitted by the user agent,
"container" - any (markup) container that could be used to load executable, 
displayable or any other type of content by the user agent (e.g. some HTML 
page. This touches upon <a> being clicked in a widget: should the browser be 
opened?),
"api" - any retrievable and displayable content that is to be processed by the 
executable content within the user agent (e.g. by XHR. But what to do with 
submissions based on XHR...? It seems "api" blurs this model a bit, since it is 
undefined what would happen to the retrieved data. Also, e.g., the retrieved 
XML may be executed by some processor developed in script.),
"any" - all/any of the above.
A missing value equals "any" (the default).
This attribute specifies the origin of the access request and the purpose for 
the submitted/retrieved data.

Any views on this?



My view is that all this is overkill. I would prefer to keep things simple.

To add the above would mean that a UA would have to flag every single 
element, and every future supported element, as well as every feature, 
into a particular class (or into multiple classes, or worse, do this 
dynamically (e.g., <script style="display: block; 
background-color: red;">...</script>)). This proposal does not scale either.


Kind regards,
Marcos



[widgets] Draft Minutes for 12 November 2009 Voice Conference

2009-11-12 Thread Arthur Barstow
The draft minutes from the November 12 Widgets voice conference are  
available at the following and copied below:


 http://www.w3.org/2009/11/12-wam-minutes.html

WG Members - if you have any comments, corrections, etc., please send  
them to the public-webapps mail list before 19 November 2009 (the  
next Widgets voice conference); otherwise these minutes will be  
considered Approved.


-Regards, Art Barstow

   [1]W3C

  [1] http://www.w3.org/

   - DRAFT -

   Widgets Voice Conference

12 Nov 2009

   [2]Agenda

  [2] http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/0631.html


   See also: [3]IRC log

  [3] http://www.w3.org/2009/11/12-wam-irc

Attendees

   Present
  Art, Marcin, Marcos, Arve, Robin, David_Rogers

   Regrets
  Frederick, David

   Chair
  Art

   Scribe
  Art

Contents

 * [4]Topics
 1. [5]Review and tweak agenda
 2. [6]Announcements
 3. [7]PC spec: Constrained specification of icon
 4. [8]PC spec: Candidate publication plans
 5. [9]TWI spec: test suite status
 6. [10]TWI spec: Call for consensus to publish LC#2
 7. [11]VM-MF spec: issues by Magus
 8. [12]VM-MF spec: more precision on full screen
 9. [13]VM-MF spec: security considerations by Davi
10. [14]WARP spec: IRI normalization
11. [15]WARP spec: comments from Bryan
12. [16]WARP spec: local addresses and UPnP
13. [17]URI Scheme spec: LC comment processing
14. [18]AOB
 * [19]Summary of Action Items
 _



   scribe ScribeNick: ArtB

   scribe Scribe: Art

   Date: 12 November 2009

   trackbot Sorry... I don't know anything about this channel

   trackbot If you want to associate this channel with an existing
   Tracker, please say 'trackbot, associate this channel with #channel'
   (where #channel is the name of default channel for the group)

   trackbot, associate this channel with #webapps

   trackbot Associating this channel with #webapps...

   Marcos yikes!

Review and tweak agenda

   AB: draft agenda submitted on Nov 11 (
   [20]http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/0631.html ).
   ... One change request is to add a third topic for the VM-MF spec
   more precision on full screen (
   [21]http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/0541.html ).
   ... another change request is to talk briefly about our plans for
   the PC Candidate #2
   ... and we will drop 5.b since David won't be here and we'll discuss
   that topic on next call if it remains open
   ... any other change requests?

  [20] http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/0631.html
  [21] http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/0541.html


   [ None ]

Announcements

   AB: any short announcements? I don't have any

   [ None ]

   marcin Agenda point 5. should point to:
   [22]http://dev.w3.org/2006/waf/widgets-vmmf/

 [22] http://dev.w3.org/2006/waf/widgets-vmmf/

PC spec: Constrained specification of icon

   AB: during last week's f2f meeting we discussed an icon issue that
   Magnus raised (
   [23]http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/0445.html
   ). Since then, one of his colleagues expanded on their
   concern via (
   [24]http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/0567.html ).
   ... Marcos and I discussed this issue in IRC earlier today. The PC
   spec doesn't actually specify what a WUA will do with the icon
   elements. Thus, it seems like the text about the optional width and
   height attributes only applying to formats with intrinsic
   width/height can be removed.
   ... what do people think about this issue?

  [23] http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/0445.html
  [24] http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/0567.html


   MC: I don't think removing the text would be problematic
   ... but I want to review it more thoroughly
   ... now, width and height processing is limited
   ... WUA are free to interpret w/h as they want
   ... thus I think we should remain silent on what the WUA does with
   these two attributes

   AB: Marcin, any comments?

   MH: I haven't looked at it yet

   AB: the action now is for people to respond on the mail list
   ... Marcos, will you do that?

   MC: yes, but need to check again the proposal
   ... my gut feel is that we should leave this to impl
   ... but if we delete those two statements, I don't think it will
   affect implementations

   AB: agree on the will not affect impls

PC spec: Candidate publication plans

   AB: LCWD#3 comments end on 19 November
   ... assuming we get no major comments, we will want to publish CR#2
   ASAP

   MC: yes, that's correct

   AB: I need to schedule a director's call
   ... I 

Re: [XHR2] timeout

2009-11-12 Thread Anne van Kesteren

On Wed, 11 Nov 2009 00:03:07 +0100, Jonas Sicking jo...@sicking.cc wrote:
On Tue, Nov 10, 2009 at 10:17 AM, Anne van Kesteren ann...@opera.com  
wrote:
Anyway, do you have opinions on the synchronous case? Do you agree we  
should use TIMEOUT_ERR there? What do the people from Microsoft think?


That makes sense to me.


I've now defined the timeout feature:

  http://dev.w3.org/2006/webapi/XMLHttpRequest-2/


Please review it! (Same for FormData and really all other parts of the  
specification.)
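As a rough illustration of the behaviour under discussion: in the asynchronous case, exceeding xhr.timeout fires a timeout event, while in the synchronous case send() throws a TIMEOUT_ERR exception. The helper below is a hypothetical simulation of the synchronous rule, not a real XMLHttpRequest:

```javascript
// Hypothetical simulation of the XHR2 timeout rule for SYNCHRONOUS
// requests, NOT a real XMLHttpRequest: if the request takes longer than
// the timeout, send() throws a TIMEOUT_ERR ("TimeoutError") exception.
function simulateSyncSend(requestDurationMs, timeoutMs) {
  // A timeout of 0 means "no timeout", matching the attribute's default.
  if (timeoutMs > 0 && requestDurationMs > timeoutMs) {
    const err = new Error("The request timed out.");
    err.name = "TimeoutError"; // corresponds to TIMEOUT_ERR
    throw err;
  }
  return "response body"; // the request completed in time
}

try {
  simulateSyncSend(8000, 5000); // request takes 8s, timeout is 5s
} catch (e) {
  console.log(e.name); // TimeoutError
}
```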



--
Anne van Kesteren
http://annevankesteren.nl/



RE: [WARP] Comments to WARP spec

2009-11-12 Thread Marcin Hanclik
Hi Marcos,

I understand that too many details may not work or may be an obstacle to 
adoption.
However, I maintain that from the security point of view we would still like to 
distinguish at least between executable and non-executable content.
The distinction between retrievable and submittable touches upon privacy (at 
present users do not complain when they submit any data), but that seems to 
be outside present concerns.

Thanks,
Marcin

Marcin Hanclik
ACCESS Systems Germany GmbH
Tel: +49-208-8290-6452  |  Fax: +49-208-8290-6465
Mobile: +49-163-8290-646
E-Mail: marcin.hanc...@access-company.com

-Original Message-
From: Marcos Caceres [mailto:marc...@opera.com]
Sent: Thursday, November 12, 2009 3:04 PM
To: Marcin Hanclik
Cc: SULLIVAN, BRYAN L (ATTCINW); WebApps WG
Subject: Re: [WARP] Comments to WARP spec



Marcin Hanclik wrote:
 Hi,

 What about semantic distinctions?
 The tag attribute as proposed until now seems too detailed and does not scale.
 For HTML/XHR:
 <script> means executable content retrieved from the remote host.
 <img>, <video>, etc. mean displayable content retrieved from the remote host.
 <iframe> means a container (possibly for executable and displayable content) 
 retrieved from the remote host.
 <form> means form submission, i.e. data is sent and not retrieved (a topic 
 discussed at TPAC. This also relates to the notion of retrievable content 
 that is currently defined in WARP).
 API means that the network resource is to be requested by some API and not 
 markup.

 We could have a model similar to @rel on <link> from HTML, i.e. some meta 
 information.
 We would probably like to distinguish between executable and non-executable 
 (e.g. displayable or styling) content, and the kinds of containers into which 
 we do or do not have insight.
 Keeping WARP on an abstract level, we could specify that the semantics of the 
 particular content in the WARP model is out of scope for WARP.
 Then e.g. for HTML we could adopt the above distinctions in some other spec. 
 It should work for HTML+SVG.

 The proposal is:
 add a "type" attribute on the <access> element that must have a value that is 
 a set of space-separated tokens:
 "exec" - any retrievable content that is executed within the user agent 
 (i.e. something that, when retrieved, will be executed),
 "display" - any retrievable content that is (only) displayed by the user 
 agent,
 "form" - any data submitted by the user agent,
 "container" - any (markup) container that could be used to load executable, 
 displayable or any other type of content by the user agent (e.g. some 
 HTML page. This touches upon <a> being clicked in a widget: should the 
 browser be opened?),
 "api" - any retrievable and displayable content that is to be processed by 
 the executable content within the user agent (e.g. by XHR. But what to do 
 with submissions based on XHR...? It seems "api" blurs this model a bit, 
 since it is undefined what would happen to the retrieved data. Also, e.g., 
 the retrieved XML may be executed by some processor developed in script.),
 "any" - all/any of the above.
 A missing value equals "any" (the default).
 This attribute specifies the origin of the access request and the purpose for 
 the submitted/retrieved data.

 Any views on this?


My view is that all this is overkill. I would prefer to keep things simple.

To add the above would mean that a UA would have to flag every single
element, and every future supported element, as well as every feature,
into a particular class (or into multiple classes, or worse, do this
dynamically (e.g., <script style="display: block;
background-color: red;">...</script>)). This proposal does not scale either.

Kind regards,
Marcos





Re: [WARP] Comments to WARP spec

2009-11-12 Thread Marcos Caceres



Marcin Hanclik wrote:

Hi Marcos,

I understand that too many details may not work or may be an obstacle to 
adoption.
However, I maintain that from the security point of view we would still like to 
distinguish at least between executable and non-executable content.


I think this is established by the content type, see the SNIFF spec. If 
that is not what you mean, then can you please elaborate.



The distinction between retrievable and submittable touches upon privacy (at 
present users do not complain when they submit any data), but that seems to 
be outside present concerns.


I don't understand the above.



Re: [widgets interface] Tests generated from WebIDL

2009-11-12 Thread Dominique Hazael-Massieux
Hi Marcos,

I saw that the test suite for TWI was discussed on the WebApps call
today:
http://www.w3.org/2009/11/12-wam-minutes.html#item05

Since the discussion didn’t allude at all to my mail below about
generated test cases, I thought I would point you to it explicitly in
case you had missed it.

Dom

On Wednesday 28 October 2009 at 22:43 +0100, Dominique Hazael-Massieux
wrote:
 Using wttjs [1], I have generated a bunch of low-level test cases for
 the Widgets Interface spec, based on its WebIDL, and uploaded them to:
 http://dev.w3.org/2006/waf/widgets-api/tests/idl-gen/
 
 Since I don’t have a widgets engine that would implement the spec, I
 haven’t been able to check if they detect anything remotely useful, but
 I’m hoping they do — maybe someone with such a runtime engine could load
 http://dev.w3.org/2006/waf/widgets-api/tests/idl-gen/all.html and see if
 any tests pass? I guess the tests might need to be wrapped up in a
 widget for that, in which case I have one available at:
 http://dev.w3.org/2006/waf/widgets-api/tests/idl-gen/idl.wgt
 
 (obviously, these tests wouldn’t be enough to test the semantics of the
 Javascript interface, but hopefully they can serve as a basis for
 ensuring everyone is using the same interfaces)
 
 Side-note: The Widgets Interface spec uses the old AE title in its
 title element.
 
 Dom
 
 1. http://suika.fam.cx/www/webidl2tests/readme
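The following is an illustration of the kind of low-level check such IDL-generated tests perform (this is not actual wttjs output, and the minimal "widget" object is a hypothetical stand-in for a real widget runtime): the generated tests essentially verify that every member declared in the WebIDL exists on the object with the expected type.

```javascript
// Hypothetical minimal runtime object standing in for a real widget engine.
const widget = {
  preferences: {},            // a Storage object in the real interface
  openURL: function (url) {}  // operation declared in the IDL
};

// spec maps member name -> expected typeof result; returns the names of
// members that are missing or have the wrong type.
function checkInterface(obj, spec) {
  const failures = [];
  for (const name of Object.keys(spec)) {
    if (typeof obj[name] !== spec[name]) failures.push(name);
  }
  return failures;
}

console.log(checkInterface(widget, { preferences: "object", openURL: "function" }));
// → []
```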






Re: [widgets interface] Tests generated from WebIDL

2009-11-12 Thread Dominique Hazael-Massieux
On Thursday 12 November 2009 at 17:35 +0100, Marcos Caceres wrote:
 On the other hand, automated test generation can generate a large number 
 of test cases and is less prone to human errors. But, at the same time, 
 it cannot test some things that are written in the prose. For example, an 
 AU must not fire Storage events when first populating the preferences 
 attribute. This is impossible to express in IDL.

I completely agree that manual tests bring a lot of value, but I think it
would be unwise to refuse automated tests that express exactly what the
spec expresses — in particular, they can be extremely useful to detect
bugs in the WebIDL defined in the specs, bugs that are extremely
unlikely to be detected through manual testing.

In other words, I don’t see why manually and automatically created tests
are mutually exclusive, and I see very clearly how they can complement
each other.

Dom





Re: [widgets interface] Tests generated from WebIDL

2009-11-12 Thread Marcos Caceres


Hi Dom,
Dominique Hazael-Massieux wrote:

Hi Marcos,

I saw that the test suite for TWI was discussed on the WebApps call
today:
http://www.w3.org/2009/11/12-wam-minutes.html#item05

Since the discussion didn’t allude at all to my mail below about
generated test cases, I thought I would point you to it explicitly in
case you had missed it.


We had not missed it, but we are not up to the point where we can use 
automated tests. For the purpose of standardization, the creation of 
*manual* test cases forces the Working Group (in this case, me!) to 
verify that every testable assertion is correctly written and testable 
against a product that can claim conformance to a specification.


We believe that an important side effect is that, through the *manual* 
creation of test cases, the test suite can reduce the variability of the 
specification. Reducing the variability makes the specification more 
precise, which in turn can make the specification easier to interpret 
and implement (to paraphrase my testing mentor - you ;)).


On the other hand, automated test generation can generate a large number 
of test cases and is less prone to human errors. But, at the same time, 
it cannot test some things that are written in the prose. For example, an 
AU must not fire Storage events when first populating the preferences 
attribute. This is impossible to express in IDL.
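A manual test for that kind of prose requirement can still be written in script. The sketch below uses a mock preference store (a made-up stand-in for the real widget runtime, which would do the populating itself) just to show the shape of the check: no storage event during initial population, one event per later mutation.

```javascript
// MockStorage is a hypothetical stand-in for the runtime's preference
// store; only the event-firing behaviour under test is modelled here.
class MockStorage {
  constructor() { this.listeners = []; this.data = {}; }
  addEventListener(fn) { this.listeners.push(fn); }
  // Initial population, per the spec prose: must NOT fire storage events.
  populate(initial) { Object.assign(this.data, initial); }
  // Later mutations DO fire events.
  setItem(key, value) {
    this.data[key] = value;
    this.listeners.forEach(fn => fn(key));
  }
}

const prefs = new MockStorage();
let fired = 0;
prefs.addEventListener(() => fired++);
prefs.populate({ theme: "dark" });  // no event expected
console.log(fired);                 // → 0
prefs.setItem("theme", "light");    // one event expected
console.log(fired);                 // → 1
```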


Kind regards,
Marcos



Re: [widgets interface] Tests generated from WebIDL

2009-11-12 Thread Marcos Caceres



Dominique Hazael-Massieux wrote:

On Thursday 12 November 2009 at 17:35 +0100, Marcos Caceres wrote:

On the other hand, automated test generation can generate a large number
of test cases and is less prone to human errors. But, at the same time,
it cannot test some things that are written in the prose. For example, an
AU must not fire Storage events when first populating the preferences
attribute. This is impossible to express in IDL.


I completely agree that manual tests bring a lot of value, but I think it
would be unwise to refuse automated tests that express exactly what the
spec expresses — in particular, they can be extremely useful to detect
bugs in the WebIDL defined in the specs, bugs that are extremely
unlikely to be detected through manual testing.


Like I said, we are certainly not rejecting automated testing, we (me) 
are just not up to that stage yet. I completely agree with you that it 
will help us find more potential bugs in the IDL itself.



In other words, I don’t see why manually and automatically created tests
are mutually exclusive, and I see very clearly how they can complement
each other.


I did not mean to imply that they are. They are certainly complementary 
(even for PC, I refined the ABNF by using the abnfgen app, which helped 
me find a lot of errors - so I certainly know the value that comes with 
automated test generation).


Kind regards,
Marcos



Re: [widgets interface] Tests generated from WebIDL

2009-11-12 Thread Dominique Hazael-Massieux
On Thursday 12 November 2009 at 17:52 +0100, Marcos Caceres wrote:
  I completely agree that manual tests bring a lot of value, but I think it
  would be unwise to refuse automated tests that express exactly what the
  spec expresses — in particular, they can be extremely useful to detect
  bugs in the WebIDL defined in the specs, bugs that are extremely
  unlikely to be detected through manual testing.
 
 Like I said, we are certainly not rejecting automated testing, we (me) 
 are just not up to that stage yet. I completely agree with you that it 
 will help us find more potential bugs in the IDL itself.

I see, sorry I misunderstood then :)

Thanks for the clarification,

Dom





Re: [widgets interface] Tests generated from WebIDL

2009-11-12 Thread Marcos Caceres



Dominique Hazael-Massieux wrote:

On Thursday 12 November 2009 at 17:52 +0100, Marcos Caceres wrote:

I completely agree that manual tests bring a lot of value, but I think it
would be unwise to refuse automated tests that express exactly what the
spec expresses — in particular, they can be extremely useful to detect
bugs in the WebIDL defined in the specs, bugs that are extremely
unlikely to be detected through manual testing.

Like I said, we are certainly not rejecting automated testing, we (me)
are just not up to that stage yet. I completely agree with you that it
will help us find more potential bugs in the IDL itself.


I see, sorry I misunderstood then :)

Thanks for the clarification,



No probs. I'll be in touch RSN about the automated testing part! Say, 
about 1 week or so :) I will certainly need your help setting all that 
stuff up. I don't really know WebIDL, so will probably need your 
guidance if there are issues.


Kind regards,
Marcos




RE: [WARP] Comments to WARP spec

2009-11-12 Thread SULLIVAN, BRYAN L (ATTCINW)
Hi Marcos,
Opera 9.5 running on Windows Mobile 6.1 and Opera 10 running on PC both allow 
access to scripts and images from different domains than a widget was obtained 
from. I have tested this and can provide a working example (see below for the 
index.html - package it yourself and see).

Thus the same-origin restriction does not apply in current Opera 
implementations for externally referenced scripts and images. The processing of 
the <access> element as defined in WARP is not consistent with the current 
Opera implementation.

So what do you mean by "We've had a similar model in place for a long time in 
our proprietary implementation"?

<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<link rel="stylesheet" type="text/css" href="style.css" />
<script src="http://www.json.org/json2.js"></script>
<script>
function bodyLoad() {
    var str = "boohoo!";
    try { str = JSON.stringify(['e', {pluribus: 'unum'}]);
          str = "hooray!"; }
    catch (e) { }
    document.getElementById("test1").innerHTML = str;
}
</script>
</head>
<body onload="javascript:bodyLoad();">
<p>Not Same-Origin Resource Access Test: a test of the same-origin rule
for resources accessed from domains other than where the widget was obtained.</p>
<hr/>
<p>Test 1: If the widget engine does not allow external script references,
you will see "boohoo!" below:</p>
<div id="test1"></div>
<hr/>
<p>Test 2: If the widget engine does not allow external image references,
no image will be shown below:</p>
<img src="http://dev.opera.com/img/logo-beta.gif"/>
</body>
</html>

Best regards,
Bryan Sullivan | ATT

-Original Message-
From: Marcos Caceres [mailto:marc...@opera.com] 
Sent: Tuesday, November 10, 2009 1:02 PM
To: SULLIVAN, BRYAN L (ATTCINW)
Cc: WebApps WG
Subject: Re: [WARP] Comments to WARP spec



SULLIVAN, BRYAN L (ATTCINW) wrote:
 Placing broad restrictions on widget-context webapp access to network 
 resources (substantially different from browser-context webapps) is not an 
 effective approach to creating a useful widget-context webapp platform. That 
 would create a significant barrier to market acceptance of the W3C widget 
 standards.

Opera does not agree. We've had a similar model in place for a long time 
in our proprietary implementation and we have not faced any issues in 
the marketplace.

The WARP spec solves many problems that arise from not actually having a 
network established origin, and may even avoid the confused deputy 
problem CORS is currently facing (which locally running widgets won't be 
able to use anyway).

I think that technically we are in agreement; we just disagree about the 
level of granularity that the WARP spec affords to authors. For the record, I 
like the way WARP is currently specified: it's easy to use, and essentially 
works in much the same way as the same-origin policy does for Web documents... 
but with the added bonus of being able to go cross-origin, though without 
being unrestricted, as is the case for web documents.


Re: What do we mean by parking Web Database? [Was: Re: TPAC report day 2]

2009-11-12 Thread Jonas Sicking
On Mon, Nov 9, 2009 at 12:58 AM, Maciej Stachowiak m...@apple.com wrote:
 On Nov 8, 2009, at 11:12 PM, Jonas Sicking wrote:
 -Regards, Art Barstow

 [1] http://www.w3.org/2009/11/02-webapps-minutes.html#item12
 [2]
 http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/0477.html

 From a technical point of view, are we expecting that there will

 actually be multiple independent interoperable implementations? If
 Opera's implementation uses SQLite in the backend, it seems like all
 implementations are SQLite-based and thus technically not independent.

 It should be noted that many aspects of the spec must be implemented
 independently even if SQLite is the underlying storage back end.

 Indeed. I still personally wouldn't call it multiple independent
 implementations though.

 Would you call multiple implementations that use the standard C library
 independent? Obviously there's a judgment call to be made here. I realize
 that in this case a database implementation is a pretty key piece of the
 problem.

Indeed, I think that makes a big difference. It's also been shown that
the standard C library can be implemented by multiple different
implementations; the same cannot be said for the SQLite SQL dialect.

 But I also think it would be more fruitful for you to promote
 solutions you do like, than to try to find lawyerly reasons to stop the
  advancement of specs you don't (when the latter have been implemented and
 shipped and likely will see more implementations).

I have very intentionally not pushed back heavily on the Web SQL DB
until I felt that we had an alternative solution to promote. So I do
have a different solution to promote, which is SimpleDB.

Further, I think the multiple independent implementations rule is
there exactly for the reason that I've been concerned about, to ensure
that the spec is reasonably implementable by several different
parties. You have yourself raised very similar concerns that the
theora spec might for patent reasons only be implementable by a single
library. So I don't think it's lawyering to point out that we're
potentially breaking that rule here.

I also very intentionally said "Ultimately I don't care much either
way really" in order to show that I wasn't going to block progress
based on this rule.

Ultimately what matters is that all UAs that want to can implement the
API. It might be ok to rely on a specific library if we were
reasonably certain that all vendors were ok with embedding that
library. But I don't think we are. I'm personally not ok with
embedding SQLite 3.6.19 (or versions with a compatible SQL dialect)
forever into Gecko. At some point I'm certain that we will want to
upgrade the SQLite implementation we use internally, and at that point
we'd have to ship two implementations. And at some point SQLite 3.6.19
(and versions with a compatible SQL dialect) will become unsupported,
at which point it becomes our responsibility to fix its bugs in a
timely manner.

 The reason I bring this aspect up is that this was one of the big
 reasons that we didn't want to implement this at mozilla, that it
 seemed to lock us in to a specific backend-library, and likely a
 specific version of that library.

 We actually have a bit of a chicken-and-egg problem here. Hixie has said
 before he's willing to fully spec the SQL dialect used by Web Database. But
 since Mozilla categorically refuses to implement the spec (apparently
 regardless of whether the SQL dialect is specified), he doesn't want to put
 in the work since it would be a comparatively poor use of time. If Mozilla's
 refusal to implement is only conditional, then perhaps that could change.

 I intentionally said "one of" above :)

 There are several other reasons. I am not experienced enough to
 articulate all of them well, but here are a few:

 * If we do specify a specific SQL dialect, that leaves us having to
 implement it. It's very unlikely that the dialect would be compatible
 with SQLite (especially given that SQLite uses a fairly unusual SQL
 dialect with regards to datatypes) which likely leaves us implementing
 our own SQL engine.
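
[The "unusual SQL dialect with regards to datatypes" refers to SQLite's type-affinity system, where a declared column type is a hint rather than a constraint. A minimal Python sketch using the standard-library sqlite3 module, added here for illustration and not part of the original thread:]

```python
import sqlite3

# SQLite uses "type affinity": a declared column type is only a hint,
# so a value of any type can be stored in an INTEGER column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.execute("INSERT INTO t VALUES (?)", ("not a number",))
row = conn.execute("SELECT n, typeof(n) FROM t").fetchone()
print(row)  # ('not a number', 'text')
```

[Most stricter SQL engines would reject that INSERT, which is why a dialect specified around SQLite's behavior would be hard to implement on top of a different engine.]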

 The SQL dialect could, of course, be specified in a way that it could work
 on top of SQLite with some reasonable filtering and preprocessing steps. I
 don't see a reason to expect otherwise, and in particular that it would be
 so different that it would require writing a new SQL engine. So this reason
 seems to be based on an unwarranted assumption.
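
[One way to read "filtering and preprocessing" above is a validation layer that restricts statements to a specified subset before handing them to SQLite. A hypothetical sketch, added for illustration; the allowed-statement list is invented and is not from the thread or any spec:]

```python
import re
import sqlite3

# Hypothetical whitelist of statement forms a spec'd dialect might allow.
ALLOWED = re.compile(
    r"^\s*(SELECT|INSERT|UPDATE|DELETE|CREATE TABLE|DROP TABLE)\b",
    re.IGNORECASE,
)

def execute_spec_sql(conn, sql, params=()):
    """Reject statements outside the (hypothetical) specified subset."""
    if not ALLOWED.match(sql):
        raise ValueError("statement not in specified dialect: %r" % sql)
    return conn.execute(sql, params)

conn = sqlite3.connect(":memory:")
execute_spec_sql(conn, "CREATE TABLE notes (body TEXT)")
execute_spec_sql(conn, "INSERT INTO notes VALUES (?)", ("hi",))
try:
    # SQLite-specific statements outside the subset are filtered out.
    execute_spec_sql(conn, "ATTACH DATABASE 'x' AS other")
except ValueError:
    print("rejected")
```

[A whitelist like this only narrows the surface syntax; it does not address the datatype-affinity differences, which is presumably part of Jonas's objection.]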

My impression is that there's two choices:

Either spec something that is so close to the SQLite dialect that it
effectively requires embedding a specific version SQLite to implement
the spec.
Or spec something that is incompatible enough with SQLite that it
requires writing a full SQL parser and optimizer.

Neither of these is particularly desirable.

 * The feedback we received from developers indicated that a SQL-based
 solution wasn't beneficial over a lower-level API.