Re: [whatwg] header for JSON-LD ???

2017-07-26 Thread Melvin Carvalho
On 26 July 2017 at 15:43, Mark Kaplun  wrote:

> Well, since it is an SEO signal, what google does in practice is more
> important than any theoretical discussion.
>
> Not being in any way affiliated with google, my own impression is that
> google do not care which format you use as long as it can be parsed by
> them.
>
> The main problem with the metadata, as I see it as a developer, is that it
> bloats the HTML while adding very little value to the human browsing the
> web page. This means that pages are served more slowly, which is probably a
> bigger concern on mobile and slower networks.
> One thing that google supports with JSON-LD is asynchronous loading of it.
> Basically the HTML is loaded first, and at some point you can have some JS
> that will load the JSON by an AJAX request. google is happy to get the
> JSON-LD this way; to the human eye the page loads faster. The only thing not
> solved is that there is still communication bloat, even if it is now
> divided into two requests instead of one.
>
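The asynchronous pattern described above might be sketched roughly like this (the `/page.jsonld` endpoint is an invented example, and crawlers must execute JS for the injected data to be visible to them):

```javascript
// Rough sketch of async JSON-LD loading (assumed endpoint /page.jsonld).
// The HTML loads first; the structured data is fetched afterwards and
// injected as a <script type="application/ld+json"> element in the head.
async function injectJsonLd(url) {
  const response = await fetch(url);
  const data = await response.text();
  const script = document.createElement('script');
  script.type = 'application/ld+json';
  script.textContent = data;
  document.head.appendChild(script);
  return script;
}
```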

While this is typically true of today's browsers, which are human
oriented, I have been experimenting for some time with a next-generation
browser (via an addon/shim) which can view both human-readable data and
machine-readable data.  Typically what you would do is take the data and
pass it over to a 'viewer' which acts as a display app.  I've had very good
experiences with this.   My personal hope is that data browsing will become
more common over time, so instead of seeing JSON as plain text or a tree,
it will come to life a bit more as browsers gain more features.


> I assume that what Michael was trying to say is that instead of hacks as
> described above, it might be a better thing to be able to specify in some
> way where the JSON-LD is located (link with relevant rel attribute ?).
>
> Just my 2 cents. I currently work in a team that develops a metadata
> plugin for wordpress that utilizes JSON-LD. I do not claim to fully
> understand the historical reasons why things are the way they are right now.
>
>
> On Wed, Jul 26, 2017 at 4:04 PM, Jonathan Zuckerman  >
> wrote:
>
> > After reading just a bit more - it seems like JSON-LD and schema.org
> have
> > slightly different goals - schema.org suggests conventions for data cues
> > in HTML, JSON-LD suggests it for JSON (e.g. API responses for dynamic
> > websites) - exactly how "best practice" is this pattern of stuffing
> JSON-LD
> > into the script tag of an HTML document? Most of the articles I could
> find
> > on the subject come from Google, not the JSON-LD community...
> >
> > On Wed, Jul 26, 2017 at 8:30 AM Mark Kaplun  wrote:
> >
> >> hmmm http://blog.schema.org/2013/06/schemaorg-and-json-ld.html
> >>
> >> If you use a CMS like wordpress for your content, and you are just a
> >> content person, it is a big meh to try to add manually the attributes,
> and
> >> it is also a meh to develop software that will need to parse the
> content to
> >> add it as you might break the structure required for the proper
> functioning
> >> of CSS and JS. You can have all kinds of "macros" for that, but the
> result
> >> is unreadable content on the editing side.
> >>
> >> Whatever are the cons of disconnecting the data from the content, it is
> >> probably more likely that you will have the data, or at least it will be
> >> more complete if you can use json-ld as it is easier to manage.
> >>
> >> On Wed, Jul 26, 2017 at 3:11 PM, Jonathan Zuckerman <
> >> j.zucker...@gmail.com> wrote:
> >>
> >>> I agree that reducing the bloat of JSON-LD is a noble goal. Sorry to
> >>> belabor this point, but can you explain why JSON-LD is needed in the
> >>> first
> >>> place? I've tried to point out that HTML is capable of doing it without
> >>> another spec, which obviates the need for content duplication and bloat
> >>> that JSON-LD introduces (and the extra headers you are suggesting). To
> >>> your
> >>> other example, CSS media queries can be employed by authors to respect
> >>> user
> >>> preferences for reduced motion or other visual features. This makes a
> lot
> >>> of sense because it colocates those rules in the place where the
> >>> problematic feature would be defined in the first place. Why should a
> >>> problem introduced by CSS be fixed by some other technology?
> >>>
> >>> What I'm saying is that there are alternatives to JSON-LD which are
> >>> superior and (this is crucial) already supported globally. I'm
> confident
> >>> that we can expand the scenarios endlessly and still not come across
> one
> >>> where JSON-LD accomplishes something HTML couldn't already do better.
> Can
> >>> you explain why you are such a fan of JSON-LD? I'm open minded, I'm
> ready
> >>> to be convinced, but I feel like I've suggested obviously superior
> >>> alternatives to each of the use cases you've presented (if I missed
> any,
> >>> please remind me and I'll be happy to circle back) I was honestly quite
> 

Re: [whatwg] header for JSON-LD ???

2017-07-26 Thread Melvin Carvalho
On 26 July 2017 at 15:04, Jonathan Zuckerman  wrote:

> After reading just a bit more - it seems like JSON-LD and schema.org have
> slightly different goals - schema.org suggests conventions for data cues
> in
> HTML, JSON-LD suggests it for JSON (e.g. API responses for dynamic
> websites) - exactly how "best practice" is this pattern of stuffing JSON-LD
> into the script tag of an HTML document? Most of the articles I could find
> on the subject come from Google, not the JSON-LD community...
>

JSON-LD is open-ended.

schema.org is a vocabulary that allows you to say different things.

You could think of JSON-LD as the language syntax, and schema.org as the
verbs, subjects and objects.
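A minimal illustration of that split (the values here are invented, not from the thread): JSON-LD supplies the syntax (`@context`, `@type`), while schema.org supplies the vocabulary (`Article`, `headline`, `author`):

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "An example article",
  "author": {
    "@type": "Person",
    "name": "Jane Doe"
  }
}
```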


>
>
> On Wed, Jul 26, 2017 at 8:30 AM Mark Kaplun  wrote:
>
> > hmmm http://blog.schema.org/2013/06/schemaorg-and-json-ld.html
> >
> > If you use a CMS like wordpress for your content, and you are just a
> > content person, it is a big meh to try to add manually the attributes,
> and
> > it is also a meh to develop software that will need to parse the content
> to
> > add it as you might break the structure required for the proper
> functioning
> > of CSS and JS. You can have all kinds of "macros" for that, but the
> result
> > is unreadable content on the editing side.
> >
> > Whatever are the cons of disconnecting the data from the content, it is
> > probably more likely that you will have the data, or at least it will be
> > more complete if you can use json-ld as it is easier to manage.
> >
> > On Wed, Jul 26, 2017 at 3:11 PM, Jonathan Zuckerman <
> j.zucker...@gmail.com
> > > wrote:
> >
> >> I agree that reducing the bloat of JSON-LD is a noble goal. Sorry to
> >> belabor this point, but can you explain why JSON-LD is needed in the
> first
> >> place? I've tried to point out that HTML is capable of doing it without
> >> another spec, which obviates the need for content duplication and bloat
> >> that JSON-LD introduces (and the extra headers you are suggesting). To
> >> your
> >> other example, CSS media queries can be employed by authors to respect
> >> user
> >> preferences for reduced motion or other visual features. This makes a
> lot
> >> of sense because it colocates those rules in the place where the
> >> problematic feature would be defined in the first place. Why should a
> >> problem introduced by CSS be fixed by some other technology?
> >>
> >> What I'm saying is that there are alternatives to JSON-LD which are
> >> superior and (this is crucial) already supported globally. I'm confident
> >> that we can expand the scenarios endlessly and still not come across one
> >> where JSON-LD accomplishes something HTML couldn't already do better.
> Can
> >> you explain why you are such a fan of JSON-LD? I'm open minded, I'm
> ready
> >> to be convinced, but I feel like I've suggested obviously superior
> >> alternatives to each of the use cases you've presented (if I missed any,
> >> please remind me and I'll be happy to circle back) I was honestly quite
> >> ambivalent about JSON-LD when this discussion started but I'm convinced
> >> now
> >> that it's a bad direction for the web.
> >>
> >> In case you haven't seen it, schema.org suggests an approach to
> >> structured
> >> data that works with HTML instead of sidestepping it. Google provides
> >> a Structured
> >>
> > Data Testing Tool  structured-data/testing-tool>
> >
> >
> >> so you can be sure that the search engine is interpreting the cues
> >> correctly.
> >>
> >> Ok so, I think I've made clear my opinion of JSON-LD ;) taking a big
> step
> >> back, no action can be taken by the WHATWG about the new header because
> >> those are defined (a quick web search reveals) by the IANA and IETF. The
> >> header you suggest can be implemented at any time by website owners, you
> >> just need to bring this up with the search engines so their bots start
> >> sending the appropriate header. If you can get search engines on board
> (or
> >> convince enough site owners to only return JSON-LD when the appropriate
> >> request header is present so the search engines are forced to send it)
> >> then
> >> your job will be done.
> >>
> >>
> >> On Tue, Jul 25, 2017 at 18:41 Michael A. Peters  >
> >> wrote:
> >>
> >> > On 07/25/2017 02:42 PM, Qebui Nehebkau wrote:
> >> > > On 25 July 2017 at 17:32, Michael A. Peters  >
> >> > wrote:
> >> > >
> >> > >> Nor does his assumption that I am "new" to the web somehow
> >> disqualify me
> >> > >> from making suggestions with current use cases that could reduce
> the
> >> > bloat
> >> > >> of traffic.
> >> > >>
> >> > >
> >> > > Oh, then I think you misunderstood his statement. As I read it,
> "spend
> >> > more
> >> > > time working with the web you have before trying to change it" was a
> >> > > suggestion to look for a way to do what you want with current
> >> technology,
> >> > > not an argument that you don't have enough web 

Re: [whatwg] header for JSON-LD ???

2017-07-24 Thread Melvin Carvalho
On 21 July 2017 at 23:21, Michael A. Peters  wrote:

> I am (finally) starting to implement JSON-LD on a site, it generates a lot
> of data that is useless to the non-bot typical user.
>
> I'd prefer to only stick it in the head when the client is a crawler that
> wants it.
>
> Wouldn't it be prudent if agents that want JSON-LD can send a standardized
> header as part of their request so web apps can optionally choose to only
> send the JSON-LD data to clients that want it? Seems it would be kinder to
> mobile users on limited bandwidth if they didn't have to download a bunch
> of JSON that is meaningless to them.
>
> Is this the right group to suggest that?
>

Firstly, well done for going the extra mile and producing structured data
on the web.

Typically, if I am primarily interested in JSON-LD data, there is a
mechanism in web architecture which allows me to specify this.

I would use an Accept header such as

Accept: application/ld+json

In this way, machines can receive machine-readable data, and humans can
receive human-readable data.
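A client-side sketch of that negotiation (the URL is an invented example, and whether the server honours the header is entirely up to the site):

```javascript
// Build a request whose Accept header asks for the JSON-LD
// representation rather than HTML. Servers that support content
// negotiation can then respond with application/ld+json.
function jsonLdRequest(url) {
  return new Request(url, {
    headers: { Accept: 'application/ld+json' },
  });
}

const req = jsonLdRequest('https://example.com/article');
console.log(req.headers.get('accept')); // application/ld+json
```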


Re: [whatwg] Accessing local files with JavaScript portably and securely

2017-04-11 Thread Melvin Carvalho
On 11 April 2017 at 18:01, Domenic Denicola  wrote:

> From: whatwg [mailto:whatwg-boun...@lists.whatwg.org] On Behalf Of
> Patrick Dark
>
> > I can't see this being addressed. The only good reason to distribute an
> application this way is because you want it to be confidential and there's
> no incentive to accommodate what one might call "walled gardens"
> in HTML because they naturally have a limited audience.
>
> Bingo. This mailing list is for developing technology for the world wide
> web, not for peoples' local computers.
>

That is one perspective of the world wide web.  But perhaps not a
perspective shared by all.

Another view, which I think is held by many, is that you should equally be
able to access public data on the web, data in the cloud, and personal data
on your machine.

>
>
> You can use the same technology that people use on the web for local app
> development---many people do, e.g. Apache Cordova, Microsoft's
> Metro/Modern/UWP apps, or GitHub's Electron. But all those technologies are
> outside the standards process, and in general are not the focus of browser
> vendors in developing their products (which are, like the standards
> process, focused on the web). The same is true of file: URLs.
>
>
Yes, I'm currently using Electron for this, but I would much prefer to use
the browser.  If a browser has this restriction, I'd simply like a way to
turn it off.  It's a heavily requested feature; why wouldn't an open-source
browser be a suitable target for such an improvement (and thereby gain
market share)?


Re: [whatwg] Accessing local files with JavaScript portably and securely

2017-04-10 Thread Melvin Carvalho
On 9 April 2017 at 11:51, David Kendal  wrote:

> Moin,
>
> Over the last few years there has been a gradual downgrading of support
> in browsers for running pages from the file: protocol. Most browsers now
> have restrictions on the ability of JavaScript in such pages to access
> other files.
>
> Both Firefox and Chrome seem to have removed this support from XHR, and
> there appears to be no support at all for Fetching local files from
> other local files. This is an understandable security restriction, but
> there is no viable replacement at present.
>
> This is a shame because there are many possible uses for local static
> files accessing other local static files: the one I have in mind is
> shipping static files on CD-ROM or USB stick, but there is also the more
> obvious (and probably more common) use of local files by developers
> prototyping their apps before deploying them live to an HTTP server.
>
> This is an inconvenience to many web developers, and I'm far from the
> only one to complain about it. For instance, this from a very prolific
> reporter of Chrome bugs:
>
> > I've filed hundreds of Chrome bugs and I would rather would see this
> > fixed than any of them
>
> in . That
> bug was the number two most starred Blink bug in 2016.
>
> I'd like to see APIs that solve this problem securely, in a way that's
> portable across all browsers. I know this isn't trendy or sexy but
> 'single-page apps' are still in vogue (I think?) and it would be
> useful/cool to be able to run them locally, even only for development
> purposes.
>
>
> A proposed solution, though far from the only one possible:
>
> There should be a new API something like this:
>
> window.requestFilesystemPermission(requestedOrigin);
>
> which does something like
>
> - If permission was already granted for the specified requestedOrigin or
>   some parent directory of it, return true.
>
> - If the current page origin is not a URL on the file: protocol, raise a
>   permissions error.
>
> - If requestedOrigin does not share a root path with the current page
>   origin, raise a permissions error. That is, a file with the name
>   file:///mnt/html/index.html can request access to file:///mnt or to
>   file:///mnt/html, but *not* to file:///etc, where it could read the
>   local password file.
>
> - The browser displays an alert to the page user showing the name and
>   path to the directory which has requested this permission. The user
>   can then choose to allow or deny access.
>
> - If the user chose not to allow access to the files, false is returned
>   or some other error is raised.
>
> - If they chose to allow access, return true.
>
> - For the remainder of the session (user agent specific), all files
>   in the requestedOrigin directory, including the current page, have
>   total read access (with Fetch, XHR, etc.) to all other files in
>   the directory.
>
> requestedOrigin is allowed to be an absolute or relative URI.
>
> Some useful Fetch semantics for file: URLs should also be defined.
>
> I like this solution because it maintains portability of scripts between
> HTTP(S) and local files without too much extra programming work: if
> scripts only request relative URLs, they can both (a) detect that
> they're running locally from file: URLs, and request permission if so
> and (b) detect that they're running on HTTP, and make exactly the same
> API calls as they would on the local system.
>
> This is also a beneficial property for those using file:// URLs for
> development purposes.
>
> Of course, this is just one solution that's possible. I would welcome
> feedback on this proposal and any progress towards any solution to this
> very common problem.
>

I thought I'd share this design issues note by Tim Berners-Lee on this
topic, which some may find interesting:

https://www.w3.org/DesignIssues/HTTPFilenameMapping.html

"It is actually pretty interesting to live on the edge, or more
specifically on the intersection of these worlds where you can address the
same files both as local files and as resources on the web. Why do both?
Well, different things work better in different worlds"


>
>
> Thanks,
>
> --
> dpk (David P. Kendal) · Nassauische Str. 36, 10717 DE · http://dpk.io/
><+grr> for security reasons I've switched to files:// urls instead
>
>


Re: [whatwg] Accessing local files with JavaScript portably and securely

2017-04-09 Thread Melvin Carvalho
On 9 April 2017 at 11:51, David Kendal  wrote:

> Moin,
>
> Over the last few years there has been a gradual downgrading of support
> in browsers for running pages from the file: protocol. Most browsers now
> have restrictions on the ability of JavaScript in such pages to access
> other files.
>
> Both Firefox and Chrome seem to have removed this support from XHR, and
> there appears to be no support at all for Fetching local files from
> other local files. This is an understandable security restriction, but
> there is no viable replacement at present.
>
> This is a shame because there are many possible uses for local static
> files accessing other local static files: the one I have in mind is
> shipping static files on CD-ROM or USB stick, but there is also the more
> obvious (and probably more common) use of local files by developers
> prototyping their apps before deploying them live to an HTTP server.
>
> This is an inconvenience to many web developers, and I'm far from the
> only one to complain about it. For instance, this from a very prolific
> reporter of Chrome bugs:
>
> > I've filed hundreds of Chrome bugs and I would rather would see this
> > fixed than any of them
>
> in . That
> bug was the number two most starred Blink bug in 2016.
>

Thanks for the pointer, I just starred this too.  I am currently hitting a
wall with this issue as well.

I have looked for a way to override this, but cannot find one.  As a
consequence, I have switched to Electron, which seems to have this feature.


>
> I'd like to see APIs that solve this problem securely, in a way that's
> portable across all browsers. I know this isn't trendy or sexy but
> 'single-page apps' are still in vogue (I think?) and it would be
> useful/cool to be able to run them locally, even only for development
> purposes.
>
>
> A proposed solution, though far from the only one possible:
>
> There should be a new API something like this:
>
> window.requestFilesystemPermission(requestedOrigin);
>
> which does something like
>
> - If permission was already granted for the specified requestedOrigin or
>   some parent directory of it, return true.
>
> - If the current page origin is not a URL on the file: protocol, raise a
>   permissions error.
>
> - If requestedOrigin does not share a root path with the current page
>   origin, raise a permissions error. That is, a file with the name
>   file:///mnt/html/index.html can request access to file:///mnt or to
>   file:///mnt/html, but *not* to file:///etc, where it could read the
>   local password file.
>
> - The browser displays an alert to the page user showing the name and
>   path to the directory which has requested this permission. The user
>   can then choose to allow or deny access.
>
> - If the user chose not to allow access to the files, false is returned
>   or some other error is raised.
>
> - If they chose to allow access, return true.
>
> - For the remainder of the session (user agent specific), all files
>   in the requestedOrigin directory, including the current page, have
>   total read access (with Fetch, XHR, etc.) to all other files in
>   the directory.
>
> requestedOrigin is allowed to be an absolute or relative URI.
>
> Some useful Fetch semantics for file: URLs should also be defined.
>
> I like this solution because it maintains portability of scripts between
> HTTP(S) and local files without too much extra programming work: if
> scripts only request relative URLs, they can both (a) detect that
> they're running locally from file: URLs, and request permission if so
> and (b) detect that they're running on HTTP, and make exactly the same
> API calls as they would on the local system.
>
> This is also a beneficial property for those using file:// URLs for
> development purposes.
>
> Of course, this is just one solution that's possible. I would welcome
> feedback on this proposal and any progress towards any solution to this
> very common problem.
>

+1, this looks like a good solution.  Another way would be to set a flag in
the options.
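For illustration only, usage of the proposed API (which is not implemented in any browser; this merely sketches the proposal quoted above) might look like:

```javascript
// Hypothetical usage of the proposed requestFilesystemPermission API.
// Nothing here exists in any shipping browser -- it only illustrates
// how a file: page could ask for access to sibling files.
async function loadLocalData() {
  const granted = window.requestFilesystemPermission('../data');
  if (!granted) throw new Error('user denied local file access');
  // After approval, files under the granted directory become fetchable.
  const res = await fetch('../data/records.json');
  return res.json();
}
```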


>
>
> Thanks,
>
> --
> dpk (David P. Kendal) · Nassauische Str. 36, 10717 DE · http://dpk.io/
><+grr> for security reasons I've switched to files:// urls instead
>
>


Re: [whatwg] possible new parameters to video.play() ?

2016-09-18 Thread Melvin Carvalho
On 18 September 2016 at 14:44, Simon Pieters <sim...@opera.com> wrote:

> On Sun, 18 Sep 2016 01:21:27 +0200, Melvin Carvalho <
> melvincarva...@gmail.com> wrote:
>
> Apologies if this has come up before, but I was wondering if it would be
>> possible to add simple parameters to the play() function.
>>
>> They would be
>>
>> play(start, end)
>>
>> Where start and end are the times in seconds.
>>
>> I know you can do
>>
>> video.currentTime = start ; video.play()
>>
>> But there's no real easy way to stop it to play a clip
>>
>> The media fragments URIs spec [1] handles this quite nicely by adding to
>> the URI
>>
>> #t=start,end
>>
>> But yet there seems to be no way to do this in JS, resorting to changing
>> location.hash and then doing a reload, which seems a bit of a kludge
>>
>> I may be missing something extremely obvious, if so, I'd love to know!
>>
>> [1] https://www.w3.org/TR/media-frags/
>>
>
> The pauseOnExit attribute on VTTCue can be used for this purpose. See
> https://html.spec.whatwg.org/multipage/embedded-content.html
> #text-track-api:the-audio-element for an example.


Thank you for both answers!

I found pauseOnExit to work very well for my use case.  I ended up with:

var track = v.addTextTrack('metadata'); // addTextTrack returns the new TextTrack
var cue = new VTTCue(start, end, '');   // empty cue spanning the clip
cue.pauseOnExit = true;                 // pause playback when the cue ends
track.addCue(cue);
v.currentTime = start;
v.play();

Regarding

var cue = new VTTCue(start, end, '');

as best I can tell, that last parameter is the cue's text (a 'message'),
though I'm not sure I got any message when the video stopped, even when I
populated it.  Maybe I wasn't supposed to.

I'm quite happy to use this solution.  My slight concern is whether there
are any side effects from adding a TextTrack to a video.

Should this be considered best practice, or would there perhaps still be
room in future for (start, end) parameters?
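For comparison, a clip can also be stopped without a text track by watching the timeupdate event; a rough sketch (assuming `v` is the video element):

```javascript
// Play [start, end] and pause at the end using the timeupdate event.
// timeupdate fires only a few times per second, so the stop point is
// approximate -- one reason the VTTCue approach above can be preferable.
function playClip(v, start, end) {
  function onTimeUpdate() {
    if (v.currentTime >= end) {
      v.pause();
      v.removeEventListener('timeupdate', onTimeUpdate);
    }
  }
  v.addEventListener('timeupdate', onTimeUpdate);
  v.currentTime = start;
  v.play();
}
```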


>
> --
> Simon Pieters
> Opera Software
>


[whatwg] possible new parameters to video.play() ?

2016-09-17 Thread Melvin Carvalho
Apologies if this has come up before, but I was wondering if it would be
possible to add simple parameters to the play() function.

They would be

play(start, end)

Where start and end are the times in seconds.

I know you can do

video.currentTime = start ; video.play()

But there's no really easy way to stop it in order to play just a clip

The media fragments URIs spec [1] handles this quite nicely by adding to
the URI

#t=start,end

Yet there seems to be no way to do this in JS without resorting to changing
location.hash and then doing a reload, which seems a bit of a kludge.

I may be missing something extremely obvious; if so, I'd love to know!

[1] https://www.w3.org/TR/media-frags/


Re: [whatwg] deprecating

2015-09-03 Thread Melvin Carvalho
On 3 September 2015 at 20:21, Ian Hickson  wrote:

> On Wed, 2 Sep 2015, henry.st...@bblfish.net wrote:
> > >
> > > The spec just reflects implementations. The majority of
> > > implementations of  (by usage) have said they want to drop it,
> >
> > There was a lot of pushback on those lists against dropping it, and no
> > clear arguments have been made for dropping keygen there.
>
> That's debatable (see foolip's e-mail), but more to the point, it's
> irrelevant. We're not trying to reflect consensus here. We're trying to
> reflect reality. That's why the spec still has , but why it warns
> that browsers are planning on dropping it.
>
>
> > > and the other major implementation has never supported it.
> >
> > You mean IE? IE has always had something that did the same:
> >
> > https://msdn.microsoft.com/en-us/library/aa374863(VS.85).aspx
> >
> > It is not idea, and it is easy to use both.
>
> I'm not really sure what you're trying to argue here. Are you saying we
> should specify this API? Are other browsers planning on implementing it?
>
>
> > To replace it with what? That is the problem that needs discussing and
> > not partially across twenty lists where the issues are only ever half
> > addressed.
>
> Again, the point of the spec here is just to reflect reality; if browser
> vendors say they want to drop something, then we have to reflect that,
> even if they don't plan on replacing it with anything. Otherwise, our spec
> is just rather dry science fiction.
>
> Having said that, it's my understanding that there are replacement APIs
> being developed. I'm not personally familiar with that work so I can't
> comment on it. Should the authors of a cryptography specification that
> browser vendors want to implement be interested in publishing their work
> through the WHATWG, I'm sure that could be arranged, and at that point
> this list would make for a very relevant place to discuss those
> technologies.
>
>
> > Indeed: they seem to be working as one would expect where one thinking
> > that forces that don't like asymetric key cryptography to be widely
> > deployed were trying to remove that capability as far as possible. The
> > manner of doing this - by secret evidence, and pointers to closed non
> > deployed standards - seems to be very much the way of doing of
> > organisations that like to keep things secret and closed.
>
> Asymetric key cryptography forms the basis of the entire Web's security
> system. It's pretty much the only possible way to have solid security in
> an environment where you don't trust the carrier. I doubt it's going
> anywhere anytime soon, unless we suddenly get a supply of securely-
> sharable one-time-pads...
>
> The post foolip pointed to points out that  is actually rather
> insecure (e.g. using MD5). One could argue that _keeping_  is
> actually more harmful to asymetric-key cryptography than removing it...
>

I'm not an expert here, but my understanding from reading some Wikipedia
articles was that a preimage attack on MD5 costs about 2^123 operations.  If
so, isn't that pretty secure?  I asked on the blink thread why MD5 was
thought to be insecure, but no one was able to answer or point to a
reference.  It would be great to understand whether there is a feasible
attack here.

Looking at:

SignedPublicKeyAndChallenge ::= SEQUENCE {
    publicKeyAndChallenge PublicKeyAndChallenge,
    signatureAlgorithm    AlgorithmIdentifier,
    signature             BIT STRING
}


http://www.w3.org/html/wg/drafts/html/master/semantics.html#the-keygen-element

There appears to be a signatureAlgorithm field.  Does that not suggest that
switching away from MD5 is future-proofed?

>
>
>
> > The case has been made that things should go the other way around: the
> > problems should be brought up, and then improvements or replacements
> > found. When those are found and are satisfactory ( as evaluated against
> > a wider set of values that take the whole web security into account )
> > then one moves forward.
>
> I certainly encourage people to follow such a pattern, but when they
> don't, we can't just ignore reality.
>
> I encourage you to read our FAQ. The WHATWG has a much more pragmatic
> approach than other standards organisations you may be more familiar with.
>
>https://whatwg.org/faq
>
> --
> Ian Hickson   U+1047E)\._.,--,'``.fL
> http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
> Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
>


Re: [whatwg] deprecating

2015-09-03 Thread Melvin Carvalho
On 3 September 2015 at 21:27, Ian Hickson <i...@hixie.ch> wrote:

> On Thu, 3 Sep 2015, Melvin Carvalho wrote:
> > >
> > > The post foolip pointed to points out that  is actually rather
> > > insecure (e.g. using MD5). One could argue that _keeping_  is
> > > actually more harmful to asymetric-key cryptography than removing
> > > it...
> >
> > Im not an expert here, but my understanding from reading some wikipedia
> > articles was that a preimage attack on md5 was 2^123.  If so, isnt that
> > pretty secure?  I asked on the blink thread why md5 was thought to be
> > insecure, but no one was able to answer, or point to a reference.  It
> > would be great to understand if there is a feasible attack here.
>
> Wikipedia's article on MD5 is pretty comprehensive:
>
>https://en.wikipedia.org/wiki/MD5


Thanks, I have tried to work out the attack from this article, but I still
don't see how it applies to KEYGEN.

The private key never leaves the browser.

All that is (optionally) sent to a server is a signed challenge.  The
randomchars field, I think, was designed to prevent the kind of attack that
people are imagining.  As yet, no one has actually *said* what the attack
is.  I am assuming people are trying to derive a private key from a
signature and a public key.

It strikes me that this is impossible, even if dealing with a malicious
server.  But I'd like to understand if this is not the case.


>
>
>
> > Looking at:
> >
> > SignedPublicKeyAndChallenge ::= SEQUENCE {
> >     publicKeyAndChallenge PublicKeyAndChallenge
> >       <http://www.w3.org/html/wg/drafts/html/master/semantics.html#publickeyandchallenge>,
> >     signatureAlgorithm    AlgorithmIdentifier,
> >     signature             BIT STRING
> > }
> >
> > http://www.w3.org/html/wg/drafts/html/master/semantics.html#the-keygen-element
>
> That's the W3C's fork of the specification. The relevant spec for this
> mailing list is:
>
>https://html.spec.whatwg.org/multipage/#the-keygen-element
>
> I wouldn't use the W3C's fork for discussions here because the W3C version
> has many subtle differences and it can cause us great confusion when
> discussing these issues.
>

OK, thanks!


>
>
> > There appears to be a field signatureAlgorithm.  Does that not suggest
> > that switching away from MD5 is future proofed?
>
> In principle  itself could have new signature algorithms added.
> This of course wouldn't be backwards compatible (in that it wouldn't be
> supported by legacy UAs or legacy servers), so it would be no different
> than introducing an entirely new feature that didn't suffer from all the
> other problems that  suffers from.
>
> This is somewhat academic, though. When there are no browser vendors
> supporting a particular feature, arguing about how it could be improved
> misses the point. That's why we added a warning to the spec.
>
> --
> Ian Hickson   U+1047E)\._.,--,'``.fL
> http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
> Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
>


Re: [whatwg] Shared storage

2014-10-28 Thread Melvin Carvalho
On 15 February 2014 03:04, Brett Zamir bret...@yahoo.com wrote:

 *The opportunity and current obstacles*

 The desktop PC thankfully evolved into allowing third-party software which
 could create and edit files shareable by other third-party software which
 would have the same rights to do the same. The importance of this can
 hardly be overestimated.

 Yet today, on the web, there appears to be no standard way to create
 content in such an agnostic manner whereby users have full, built-in,
 locally-controlled portability of their data.

 *Workarounds*

 Sure, there is postMessage or CORS requests which can be used to allow one
 site to be the arbiter of this data.

 And one could conceivably create a shared data store built upon even
 postMessage alone, even one which can work fully offline through cache
 manifests and localStorage or IndexedDB (I have begun some work on this
 concept at https://gist.github.com/brettz9/8876920 ), but this can only
 work if:

 1. A site or set of domains is trusted to host the shared content.
 2. Instead of being built into the browser, it requires that the shared
 storage site be visited at least one time.

 *Proposal*

 1. Add support for sharedStorage (similar to globalStorage but requiring
 approval), SharedIndexedDB, and SharedFileWriter/SharedFileSystem which,
 when used, would cause the browser to prompt the user to require user
 approval whenever storing or retrieving from such data stores (with an
 option to remember the choice for a particular site/domain), informing
 users of potential risks depending on how the data might be used, and
 potentially allowing them to view, on the spot, the specific data that was
 being stored.

 Optional API methods could deter XSS by doing selective escaping, but the
 potential for abuse should not be used as an excuse for preventing
 arbitrary shared storage, since, again, it has worked well on the desktop
 despite the risks there, and as it works with postMessage despite it also
 having risks.

 2. Add support for corresponding ReadonlyShared storage mechanisms,
 namespaced by the origin site of the data. A site, http://example.com
 might add such shared storage under example.com which
 http://some-other-site.example could retrieve but not alter or delete
 (unless perhaps a grave warning were given to users about the fact that
 this was not the same domain). This would have a benefit over
 postMessage in that, if the origin site goes down, third-party sites would
 still be able to access the data.

 3. Encourage browsers to allow direct editing of this stored data in a
 human-readable manner (with files at least being ideally directly viewable
 from the OS desktop).
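A minimal sketch of the read-only, origin-namespaced semantics described in item 2, modeled as a tiny in-memory store (hypothetical names; nothing like this exists as a browser API):

```python
# Toy model of the proposed origin-namespaced shared storage:
# any origin may read another origin's data, but only the owning
# origin may write or delete.

class SharedStore:
    def __init__(self):
        self._data = {}  # {origin: {key: value}}

    def set(self, origin, key, value):
        # Writes are always namespaced under the writing origin.
        self._data.setdefault(origin, {})[key] = value

    def get(self, requesting_origin, owner_origin, key):
        # Reads are allowed across origins (the "ReadonlyShared" part).
        return self._data.get(owner_origin, {}).get(key)

    def delete(self, requesting_origin, owner_origin, key):
        # Deletes are rejected unless the requester owns the namespace.
        if requesting_origin != owner_origin:
            raise PermissionError("read-only across origins")
        self._data.get(owner_origin, {}).pop(key, None)

store = SharedStore()
store.set("example.com", "profile", {"name": "Alice"})
# A third-party site can read, but not alter, example.com's data:
print(store.get("some-other-site.example", "example.com", "profile"))
```

A real implementation would of course hang the permission prompt and the "remember this choice" logic off the cross-origin read path.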

 I proposed something similar earlier, and received a reply about doing
 this through shared workers, but as I understood it, I did not like that
 possibility because:

 a. it would limit the neutrality of the storage, creating one site as
 an unchallengeable arbiter of the data
 b. it would increase complexity for developers
 c. it would presumably depend on the setting of CORS directives to
 distinguish it from same-domain shared workers.

 While https://wiki.mozilla.org/WebAPI/DeviceStorageAPI appears to meet a
 subset of these needs, it does not meet all.


+1



 Thank you,
 Brett




Re: [whatwg] Shared storage

2014-10-28 Thread Melvin Carvalho
On 28 October 2014 21:32, Nils Dagsson Moskopp 
n...@dieweltistgarnichtso.net wrote:

 Melvin Carvalho melvincarva...@gmail.com writes:

  On 15 February 2014 03:04, Brett Zamir bret...@yahoo.com wrote:
 
  [...]
 
 
  +1

 Stop doing this.


Excuse me?



 --
 Nils Dagsson Moskopp // erlehmann
 http://dieweltistgarnichtso.net



Re: [whatwg] Administrivia: Update on the relationship between the WHATWG HTML living standard and the W3C HTML5 specification

2012-07-25 Thread Melvin Carvalho
On 20 July 2012 14:38, Steve Faulkner faulkner.st...@gmail.com wrote:

 Hi Hixie,

 I believe you have made some spurious claims, one of them being;

 The WHATWG effort is focused on developing the
 canonical description of HTML and related technologies

 The claim that HTML the living standard is canonical appears to imply that
 the requirements and advice contained within HTML the living standard are
 more correct than what is in the HTML5 specification.
 I do not consider this to be wholly the case, in particular as regards
 author-level conformance requirements and advice, where the HTML standard
 has no special claim to authority: it is not the domain of browser vendors
 to decide what is good authoring practise, or any authoring requirements
 that go beyond implementation realities.

 The HTML living standard is not a canonical description of HTML; if it
 were, there would be no need for the existence of specifications such as
 the HTML to Platform Accessibility APIs Implementation Guide
 (http://dvcs.w3.org/hg/html-api-map/raw-file/tip/Overview.html).
 That document exists and is being developed because neither the HTML5
 specification nor the HTML living standard contains anything bearing
 a resemblance to what could be considered an adequate description of how
 user agents can implement accessibility support for HTML features in an
 interoperable way.

 Neither HTML5 in its current form nor HTML the living standard can claim
 to be a canonical description of author conformance requirements for the
 provision of text alternatives, as there is another document, also
 published by the W3C, that provides normative requirements for the
 subject: http://dev.w3.org/html5/alt-techniques/

 The HTML standard contradicts the HTML5 specification (or vice versa) on a
 number of author conformance requirements and advisory techniques,
 including use of tables, use of ARIA and use of the title attribute.

 In respect of those author-related requirements mentioned above, the HTML5
 specification can currently claim to contain a more accurate set of
 requirements and advice that takes into account current implementation
 realities, thus providing authors with more practical advice and end
 users with a better experience.

 All in all, I do not agree with your claim of the HTML living standard
 being canonical. It is unfortunately the case that we now have at least
 two specifications, HTML5 and the living standard, neither of which can
 claim to be a canonical description of HTML for stakeholders other than
 browser vendors.


There's been some commentary about this in the blogosphere, e.g.

http://www.xmltoday.org/content/inevitable-forking-html

Is it accurate to say that HTML5 is being 'forked', or would that be an
overstatement?




 --
 with regards

 Steve Faulkner
 Technical Director - TPG

 www.paciellogroup.com | www.HTML5accessibility.com |
 www.twitter.com/stevefaulkner
 HTML5: Techniques for providing useful text alternatives -
 dev.w3.org/html5/alt-techniques/
 Web Accessibility Toolbar -
 www.paciellogroup.com/resources/wat-ie-about.html



Re: [whatwg] Administrivia: Update on the relationship between the WHATWG HTML living standard and the W3C HTML5 specification

2012-07-25 Thread Melvin Carvalho
On 25 July 2012 18:12, Ian Hickson i...@hixie.ch wrote:


 To reiterate the statement I made in the original post on this thread:

 If you have any questions, I encourage you to e-mail me privately or ask
 on the IRC channel (#whatwg on Freenode); process-related discussion is
 discouraged on this mailing list so that we can maintain a high technical
 signal to noise ratio.

 You can also cc an archive mailing list, e.g. the www-arch...@w3.org
 mailing list, or cc me on a public post on a system such as Google+ or
 e-mail me a link to your blog post.

 Answers to any frequent questions about process issues will be added to
 the FAQ (something that you can do directly or which you can wait for
 someone else such as me to do).

 This mailing list (wha...@whatwg.org) is specifically for technical
 discussions and not for process discussions.


Thank you for the response, and the guidance.

I presume that naming is still a technical matter.

Just so that it's possible to understand how to name the two new branches
correctly, can you confirm that the W3C branch is now called 'HTML5' and
the WHATWG branch is named 'HTML Living Standard'?

Is this the long-term project name, or just a working title?

Just thinking out loud: is that wise, given that the acronym (T)LS, for
'(The) Living Standard' (which someone has used already), is also often
used for 'Transport Layer Security'?



 Thanks,
 --
 Ian Hickson               U+1047E                )\._.,--....,'``.    fL
 http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: [whatwg] Web Notifications

2012-06-20 Thread Melvin Carvalho
On 20 June 2012 10:58, Anne van Kesteren ann...@annevk.nl wrote:

 Hi,

 The Web Notifications WG is planning to move Web Notifications to W3C
 Last Call meaning we don't intend to change it. But we might have
 missed something and would therefore appreciate your review of
 http://dvcs.w3.org/hg/notifications/raw-file/tip/Overview.html and any
 comments you might have at public-web-notificat...@w3.org.


My one comment was that it was not immediately obvious what the term 'tag'
was meant to mean. It might just be me, but 'tag' is a rather overloaded
term on the web these days; I needed to drill down into the examples to get
a good sense of it.



 Cheers,


 --
 Anne — Opera Software
 http://annevankesteren.nl/
 http://www.opera.com/



Re: [whatwg] Web-sockets + Web-workers to produce a P2P website or application

2010-01-19 Thread Melvin Carvalho
On Tue, Jan 19, 2010 at 5:59 PM, Andrew de Andrade
and...@deandrade.com.brwrote:

 I have an idea for a possible use case that as far as I can tell from
 previous discussions on this list has not been considered or at least
 not in the form I present below.

 I have a friend whose company produces and licenses online games for
 social networks such as Facebook, Orkut, etc.

 One of the big problems with these games is the sheer amount of static
 content that must be delivered via HTTP once the application becomes
 popular. In fact, if a game becomes popular overnight, the scaling
 problems with this static content quickly become a technical and
 financial problem.

 To give you an idea of the magnitude and scope, more than 4 TB of
 static content is streamed on a given day for one of the applications.
 It's very likely that others with similarly popular applications have
 encountered the same challenge.

 When thinking about how to resolve this, I took my usual approach of
 thinking how do we decentralize the content delivery and move towards
 an agent-based message passing model so that we do not have a single
 bottleneck technically and so we can dissipate the cost of delivering
 this content.

 My idea is to use web sockets to allow the browser to function more or
 less like a BitTorrent client. Along with this, web workers would
 provide threads for handling the code that would function as a server,
 serving the static content to peers also using the program.

 If you have lots of users (thousands) accessing the same application,
 you effectively have the equivalent of one torrent with a large swarm
 of users, where the torrent is a package of the most frequently
 requested static content. (I am assuming that the static content
 requests follow a power law distribution, with only a few static files
 being responsible for the overwhelming bulk of static data
 transferred.).

 As I have only superficial knowledge of the technologies involved and
 the capabilities of HTML5, I passed this idea by a couple of
 programmer friends to get their opinions. Generally they thought it
 was a very interesting idea, but that, as far as they know, the
 specification as it stands now is incapable of accommodating such a
 use case.

 Together we arrived at a few criticisms of this idea that appear to be
 resolvable:

 -- Privacy issues
 -- Security issues (man-in-the-middle attacks)
 -- Content labeling (i.e. how does the browser know what content is
 truly static and therefore safe to share?)
 -- Content signing (i.e. is there some sort of hash that allows the
 peers to confirm that the content has not been adulterated?)
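The content-signing point has a well-understood shape: as in BitTorrent, the trusted origin publishes the expected hash of each static file (e.g. in a signed manifest), and a peer verifies a chunk before using or re-sharing it. A minimal sketch, with the chunk bytes and manifest invented for illustration:

```python
import hashlib

def verify_chunk(chunk: bytes, expected_sha256: str) -> bool:
    # A peer recomputes the digest locally and compares it with the
    # hash published by the trusted origin.
    return hashlib.sha256(chunk).hexdigest() == expected_sha256

chunk = b"sprite-sheet-bytes"                       # data received from a peer
manifest_hash = hashlib.sha256(chunk).hexdigest()   # hash published by origin

assert verify_chunk(chunk, manifest_hash)            # untampered chunk passes
assert not verify_chunk(b"evil" + chunk, manifest_hash)  # altered chunk fails
```

Note this only authenticates content integrity; the manifest itself still has to come from (or be signed by) the origin for the scheme to hold.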


Yes, I sort of see this kind of thing as the future of the web.  There's an
argument to say that it should have been done 10 or even 20 years ago, but
we're still not there.  I think WebSockets will be a huge step forward for
this kind of thing.  One issue that still remains is NAT traversal; perhaps
this is what has held developers back, though notable exceptions such as
Skype have provided a great UX here.

Gaming is one obvious application for this, which in many ways is the
pinnacle of software engineering.

I see this kind of technique really bringing linked data into its own
(including RDFa), where browsers become more data aware and more socially
aware and are able to interchange relevant information.   Something like
FOAF (as a means to mark up data) is well suited to provide a distributed
network of peers, can certainly handle globally namespaced data naming, and
is getting quite close to solving privacy and security challenges.

I'm really looking forward to seeing what people start to build on top of
this technology, and your idea certainly sounds exciting.



 All in all, many of these issues have been solved by the many talented
 programmers that have developed the current bit-torrent protocol,
 algorithms and security features. The idea would simply be to design
 HTML5 in such a way that it permits the browser to function as a
 full-fledged web-application BitTorrent client and server.

 Privacy issues can be resolved by possibly defining something such as
 browser security zones or content labels, whereby the content
 provider (application developer) labels content (such as images and
 CSS files) as safe to share (static content) and labels dynamic
 content (such as personal photos, documents, etc.) as unsafe to share.

 Also in discussing this, we came up with some potentially useful
 extensions to this use case.

 One would be the versioning of the torrent file, such that the
 torrent file could represent versions of the application. i.e. I
 release an application that is version 1.02 and it becomes very
 popular and there is a sizable swarm. At some point in the future I
 release a new version with bug-fixes and additional features (such as
 CSS sprites for the social network game). I should be able to
 propagate this new version to all clients in the swarm so that over
 some time 

[whatwg] Question re: WSS

2009-08-08 Thread Melvin Carvalho
Hi All

Since WebSockets seem to be talked about a lot recently, I have a couple
of questions re: the HTTP WebSocket implementation and its HTTPS
counterpart, i.e. wss.

1. Will wss be able to perform a normal TLS handshake?
2. Will it be able to send an X.509 certificate down the wire?
3. Is it likely that the receiving WebSocket endpoint will be able to
access fields such as subjectAltName in the certificate?

This would enable, for example, the foaf+ssl protocol to function over
wss, which would be excellent, as it provides both strong
authentication and a realtime data stream over a single browser call,
and would also give an entry point to the linked data world.
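For comparison, outside the browser this is exactly what a TLS server requesting a client certificate does; in Python's `ssl` module, for instance, the peer's subjectAltName then shows up in `getpeercert()`. A sketch only; the certificate values and WebID URI below are invented for illustration:

```python
import ssl

# Server-side TLS context for a wss endpoint that asks the client for an
# X.509 certificate, as foaf+ssl-style authentication would need.
# (A real server would also call ctx.load_cert_chain(...) with its own
# certificate and ctx.load_verify_locations(...) for client CAs.)
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_OPTIONAL  # request, but don't require, a client cert

# After a successful handshake, conn.getpeercert() returns a dict whose
# 'subjectAltName' entry is a tuple of (type, value) pairs.
# Illustrative values only:
peer_cert = {
    "subject": ((("commonName", "Melvin"),),),
    "subjectAltName": (("URI", "https://example.org/people#me"),),
}
san = dict(peer_cert["subjectAltName"])
print(san.get("URI"))  # the WebID a foaf+ssl verifier would dereference
```

The open question in the thread is whether the browser's wss stack would ever surface those certificate fields to the receiving endpoint, which is a protocol/API question rather than a TLS limitation.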

Last time I looked at the spec this seemed likely, but I wasn't 100%
sure; if anyone could offer any detail on this, or a pointer to
implementations, it would be much appreciated.

Thanks
Melvin