Re: Proposed W3C Charter: Web of Things Working Group

2019-12-12 Thread Benjamin Francis
Hi,

I'd like to provide the Mozilla IoT team's feedback on this charter, the
content of which has already been modified slightly based on our earlier
feedback to the Working Group during the drafting stages.

We are happy overall with the contents of the charter and recommend
approving it (with comment), but the Working Group are aware that we still
have reservations in some areas, which we would like to note.

We would like to join the WoT Working Group under its new charter (we are
already members of the Interest Group, but made a formal objection to the
previous charter for the Working Group in 2016). Our comments on the new
charter are as follows.

We welcome the "interoperability profiles" and "discovery" work items which
we hope may improve interoperability by defining a common cross-domain
default protocol binding, and we note the progress which has been made with
regard to privacy and security considerations.

We still have some areas of concern around the scope of the charter,
specifically:

   1. The work item to continue defining protocol bindings for non-web
   protocols makes the scope unreasonably large and makes ad-hoc
   interoperability very challenging.
   2. Thing Description Templates are an unnecessary complication and
   overlap in use cases with interoperability profiles and capability
   schemas defined through semantic annotations.
   3. We think that the WoT Architecture specification should be a
   non-normative note, in order to reduce the number of normative
   specifications implementers need.
   4. Non-normative deliverables for WoT Scripting, Management and
   Packaging also have the potential to unnecessarily increase scope
   further in future, and could benefit from further incubation in the WoT
   Interest Group rather than being Working Group deliverables.

However, we have found the core Thing Description specification produced by
the Working Group to be very useful and have implemented a modified version
of this specification in Mozilla's IoT platform, which has now been in
production for two years.
We have gradually been converging our implementation with the Working
Group's specification over time. We would therefore like to support the
continued work of this Working Group to further improve that specification.

On Tue, 3 Dec 2019 at 14:59, L. David Baron  wrote:

> The W3C is proposing a revised charter for:
>
>   Web of Things Working Group
>   https://www.w3.org/2019/11/proposed-wot-wg-charter-2019.html
>   https://lists.w3.org/Archives/Public/public-new-work/2019Nov/0005.html
>
> The differences from the previous charter are:
>
> https://services.w3.org/htmldiff?doc1=https%3A%2F%2Fwww.w3.org%2F2016%2F12%2Fwot-wg-2016.html&doc2=https%3A%2F%2Fwww.w3.org%2F2019%2F11%2Fproposed-wot-wg-charter-2019.html
>
> Mozilla has the opportunity to send comments or objections through
> Tuesday, December 17.
>
> Please reply to this thread if you think there's something we should
> say as part of this charter review, or if you think we should
> support or oppose it.
>
> -David
>
> --
> L. David Baron                         http://dbaron.org/
> Mozilla                          https://www.mozilla.org/
>  Before I built a wall I'd ask to know
>  What I was walling in or walling out,
>  And to whom I was like to give offense.
>- Robert Frost, Mending Wall (1914)
> ___
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Proposed W3C Charter: Web of Things Interest Group

2019-10-03 Thread Benjamin Francis
Hi,

On behalf of the Mozilla IoT team I'd like to recommend that Mozilla
support this Interest Group charter.

There are a few topic areas I think are probably unnecessary (e.g. Thing
Templates and Scripting API), but these are all "explorations" and
non-normative deliverables. Overall the charter covers the broad topics we
would like to cover in our continued participation in this group.

We are now mainly concerned with the wording of the proposed next WoT
Working Group charter, which will cover normative deliverables. I have
already provided some feedback on that prior to AC review, in the hope we
can get it to the point where Mozilla would want to join that group (of
which we are not currently a member).

Just a reminder, the deadline for any further feedback on the Interest
Group charter is Friday 11th October.

Thanks

Ben

On Wed, 18 Sep 2019 at 01:50, L. David Baron  wrote:

> The W3C is proposing a revised charter for:
>
>   Web of Things Interest Group
>   https://www.w3.org/2019/07/wot-ig-2019.html
>   https://lists.w3.org/Archives/Public/public-new-work/2019Sep/0008.html
>
> The differences from the previous charter are:
>
> https://services.w3.org/htmldiff?doc1=https%3A%2F%2Fwww.w3.org%2F2016%2F07%2Fwot-ig-charter.html&doc2=https%3A%2F%2Fwww.w3.org%2F2019%2F07%2Fwot-ig-2019.html
>
> Mozilla has the opportunity to send comments or objections through
> Friday, October 11.
>
> Please reply to this thread if you think there's something we should
> say as part of this charter review, or if you think we should
> support or oppose it.
>
> -David
>
> --
> L. David Baron                         http://dbaron.org/
> Mozilla                          https://www.mozilla.org/
>  Before I built a wall I'd ask to know
>  What I was walling in or walling out,
>  And to whom I was like to give offense.
>- Robert Frost, Mending Wall (1914)


Re: DNS Rebinding protection

2018-06-27 Thread Benjamin Francis
On 25 June 2018 at 16:50, Brannon Dorsey  wrote:

> As far as I see it, a
> domain name should never be allowed to respond with a private IP address
> moments after it first responded with a public IP address.
>

If I understand correctly, this is exactly what we plan to do on our
Mozilla IoT gateway project.

The use case we have is
for a Web of Things gateway in the home to continue to be accessible
locally over HTTPS when the Internet goes down.

In our current implementation a user's Web of Things gateway can be reached
securely over the Internet via https://foo.mozilla-iot.org or insecurely
over the local network via http://gateway.local. We want it to be possible
for the gateway to still be accessible locally when the home Internet
connection goes down (e.g. so that you can still turn on your lights!), but
for that to use HTTPS rather than plain HTTP.

The current proposed solution to this problem is to use an Alt-Svc HTTP
header (RFC 7838), which tells the
browser to load resources from https://gateway.local rather than
https://foo.mozilla-iot.org when gateway.local is available. The idea is
that the browser will then cache that response so it will always try the
local IP first, so it will still work without an Internet connection for as
long as the cache lives. As an added bonus, the user can always use the
same web address whether they're inside or outside the network, and always
get the fastest route.
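For illustration, the response served from the tunnel domain would carry a header along these lines (a sketch only; the ALPN protocol id, port and `ma` cache lifetime are illustrative values, not our actual configuration):

```http
HTTP/1.1 200 OK
Content-Type: text/html
Alt-Svc: h2="gateway.local:443"; ma=2592000
```

Per RFC 7838, `ma` (max-age, in seconds) controls how long the browser may cache the alternative service, which is what would keep the local route working after the Internet connection goes down.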

We're currently in the process of implementing this, and we're not sure yet
whether it will work, but if it does this could be a use case that we
wouldn't want Firefox to block. (Daniel's comments about DNS-over-HTTPS are
a bit concerning).

Ben


Re: Proposed W3C Charter: JSON-LD Working Group

2018-04-30 Thread Benjamin Francis
Thank you for looping me in.

It's probably worth mentioning that some proposed features for JSON-LD 1.1
may actually help us to keep JSON-LD *out* of the Web of Things
specifications, or at least make it an optional mechanism for adding
semantic annotations to an otherwise plain JSON serialisation [1] of the
Thing Description format [2] (like adding semantic annotations to HTML).

The rigid representation of RDF triples in JSON-LD 1.0 imposes significant
constraints on the design of a JSON-based format if consumers want to be
able to optionally parse it as JSON-LD (which many members of the Web of
Things Working Group feel strongly that they do). Features proposed for
JSON-LD 1.1 would allow much more flexibility to create a simpler and more
intuitive plain JSON serialisation (e.g. using JSON objects instead of
arrays in various places), with an implied default context, which can still
optionally have semantic annotations added if someone wants to do that.
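To make that concrete, here is a sketch of the kind of serialisation this would enable (the field names and context URL are invented for illustration, not taken from any specification draft):

```json
{
  "@context": "https://iotschema.org/",
  "@type": "Light",
  "name": "Living Room Lamp",
  "properties": {
    "on": {
      "type": "boolean",
      "description": "Whether the lamp is switched on"
    }
  }
}
```

A plain JSON consumer can ignore `@context` and `@type` entirely, while a JSON-LD 1.1 processor can use them (or an implied default context when they are absent) to recover semantic annotations.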

These proposed JSON-LD 1.1 features have enabled us to successfully argue
for a plain JSON serialisation instead of the current JSON-LD
serialisation, while not fragmenting the Web of Things effort.

I understand there are arguments against resources which can be parsed both
as plain JSON and as JSON-LD, but there are lots of use cases people have
for adding semantic annotations to JSON. Supporting (or at least not
objecting to) the charter for JSON-LD 1.1 may actually help keep JSON-LD
out of the web platform by making it an optional extension, rather than a
dependency, in specifications.

Note that we've gone to great lengths to keep JSON-LD out of our Web of
Things proposal [2] so far, but I think it's probably inevitable that we're
eventually going to need some kind of lightweight extensible schema based
system for describing things in the real world, in order to make ad-hoc
interoperability possible. Hopefully something lightweight like Open Graph
or Microformats rather than a heavyweight solution like full JSON-LD (I'd
value input on that [3]). Currently a leading effort in that area is
iotschema.org which, like schema.org, will probably support JSON-LD style
annotations using @context and @type.

I should also probably mention that the Web of Things specifications don't
require any implementation in web browsers; we're using them as an entirely
server-side technology, so this work has no impact on Firefox.

By all means feel free to express your grumbles with JSON-LD, but note that
JSON-LD 1.1 could actually be a positive step forward for some Mozilla
efforts.

Thanks

Ben

1. Our proposed plain JSON serialisation of a Thing Description
https://iot.mozilla.org/wot
2. Current W3C Thing Description specification with JSON-LD serialisation
https://w3c.github.io/wot-thing-description/
3. Discussion on a schema based "capabilities" system for the Web of Things
https://github.com/mozilla-iot/wot/issues/57

On 30 April 2018 at 16:16, Peter Saint-Andre  wrote:

> On 4/29/18 10:42 AM, Henri Sivonen wrote:
> > On Sun, Apr 29, 2018, 19:35 L. David Baron  wrote:
> >
> >> OK, here's a draft of an explicit abtension that I can submit later
> >> today.  Does this seem reasonable?
> >>
> >
> > This looks good to me. Thank you.
>
> +1
>
> We might want to also check in with folks who are involved with the
> relevant WGs (e.g., Ben Francis, cc'd, w.r.t. Web of Things). I'll
> forward to him Tantek's earlier message.
>
> Peter
>
> >
> >
> >
> >>
> >> One concern that we've had over the past few years about JSON-LD
> >> is that some people have been advocating that formats adopt
> >> JSON-LD semantics, but at the same time allow processing as
> >> regular JSON, as a way to make the adoption of JSON-LD
> >> lighter-weight for producers and consumers who (like us) don't
> >> want to have to implement full JSON-LD semantics.  This yields a
> >> format with two classes of processors that will produce different
> >> results on many inputs, which is bad for interoperability.  And
> >> full JSON-LD implementation is often more complexity than needed
> >> for both producers and consumers of content.  We don't want
> >> people who produce Web sites or maintain Web browsers to have to
> >> deal with this complexity.  For more details on this issue, see
> >> https://hsivonen.fi/no-json-ns/ .
> >>
> >> This leads us to be concerned about the Coordination section in
> >> the charter, which suggests that some W3C Groups that we are
> >> involved in or may end up implementing the work of (Web of
> >> Things, Publishing) will use JSON-LD.  We would prefer that the
> >> work of these groups not use JSON-LD for the reasons described
> >> above, but this charter seems to imply that they will.
> >>
> >> While in general we support doing maintenance (and thus aren't
> >> objecting), we're also concerned that the charter is quite
> >> open-ended about what new features will be included (e.g.,
> >> referring to "requests for new features" and "take into 

Re: WoT Gateway Implementation

2016-12-15 Thread Benjamin Francis
Oops, I sent this to the wrong platform list :)

Moving to mozilla.dev@mozilla.org



WoT Gateway Implementation

2016-12-15 Thread Benjamin Francis
Hi,

In the last platform meeting
<https://docs.google.com/document/d/1yo1AtuozukgfkZwzO745LdEsB1fdZrEWoIYr8F9eydo/edit?usp=sharing>
we talked about kicking off the implementation of some of the key
components of the platform, one of which is the WoT gateway implementation.
As HomeWeb will initially be the main (only) consumer of this component, I
wanted to make a suggestion about its implementation.

We already have the fxbox <https://github.com/fxbox> source code to use as
a starting point, but before we just go ahead and fork all those
repositories under http://github.com/moziot/ (the new home of the platform
source code?) I'd like to suggest that we use the right tools for the right
jobs:

   - Protocol adaptors - Rust
   - WoT API - NodeJS

Reasons for using Rust for Protocol Adaptors:

   - Protocol adaptors (or whatever we end up calling them) are essentially
   hardware drivers that talk to an underlying hardware component (e.g. a
   ZigBee/ZWave/Bluetooth radio over a UART connection). This is exactly the
   type of systems programming use cases that Rust was designed for.
   - This low level work is particularly performance and timing sensitive
   and these requirements justify the overhead of using/learning and
   cross-compiling a low level language.
   - Rust is a modern and safe programming language which happens to be
   maintained by Mozilla, which means we can add features that are missing and
   improve it where necessary (we already added support for a new ARM chipset
   on the HomeWeb project).

Reasons for using Node for WoT APIs:

   - NodeJS is now a very popular language for server side web development
   and has a huge existing community and collection of modules to draw from.
   Using NodeJS will help attract community contribution from web developers
   who can easily add their own modules when hacking on the project.
   - By comparison Rust is still relatively immature as a language for web
   development, as you can read about here <http://www.arewewebyet.org/>.
   It's likely that our platform will need to interact with a large range of
   existing software and services, especially once third parties start to use
   it for their own projects. Rust may hold us back here while we wait for it
   to mature.
   - SensorWeb already started implementing the SensorThings API in NodeJS.
   I'd suggest we will eventually want to share code between the cloud and
   gateway WoT API <https://moziot.github.io/wot/> implementations.

Using Rust for low level components and NodeJS for the WoT API seems to me
like a good compromise and has a parallel in Gecko where we use both
C++/Rust and JavaScript for different jobs.

However, using two separate languages comes at a cost (both cognitive and
complexity-wise). I personally have zero experience of writing Rust and
while I know it's possible
<https://blog.risingstack.com/how-to-use-rust-with-node-when-performance-matters/>
to call Rust components from NodeJS I've never actually done it. I'm
interested to hear other peoples' views on this (without descending into
needless bikeshedding or a holy war!), particularly if anyone has
experience of using these two languages together. How feasible is this?
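For what it's worth, the Rust half of such a bridge can be quite small: the library exports a C-ABI function which a Node FFI layer (e.g. node-ffi or an N-API wrapper) then loads. The sketch below is purely hypothetical; the function name and the tenths-of-a-degree encoding are invented for illustration, not from any existing adaptor code:

```rust
// Hypothetical sketch of a "protocol adaptor" entry point exposed over the
// C ABI so a Node.js FFI layer could call into a compiled Rust library.
// (A real cdylib build would also add #[no_mangle] so the exported symbol
// keeps its name.)

pub extern "C" fn read_temperature_celsius(raw: i32) -> f64 {
    // Treat `raw` as a sensor reading in tenths of a degree Celsius.
    f64::from(raw) / 10.0
}

fn main() {
    // Exercise the function directly from Rust; over FFI the call would
    // come from JavaScript instead.
    println!("{}", read_temperature_celsius(215)); // 21.5
}
```

The cost being weighed here is exactly this boundary: every value crossing it has to be expressible in C types, which is cheap for sensor readings but gets awkward for richer data structures.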

Ben


Re: Proposed W3C Charter: Web of Things Working Group

2016-12-01 Thread Benjamin Francis
ices in their own homes (*but recently
> approved in the UK[1]), and thus should be opposed.
>
>
> If anything, we (Mozilla) should be reaching out to EFF and any other
> W3C Members who value an open internet and respecting users privacy
> more than profit (perhaps university members of W3C) and asking them
> to join our formal objection to anything WoT/IoT at W3C.
>
>
> Tantek
>
> [1] http://www.wired.co.uk/article/ip-bill-law-details-passed
>
>

Re: Proposed W3C Charter: Web of Things Working Group

2016-11-29 Thread Benjamin Francis
Hi David,

Have you had any more correspondence with the W3C on Mozilla's behalf
regarding this charter?

From the Web of Things Interest Group mailing list
<https://lists.w3.org/Archives/Public/public-wot-ig/> it appears that the
group is happy to remove the dependency on RDF as suggested in our feedback
(although they claim this wasn't intended as a dependency in the first
place). Instead I understand they would like to include an extension point
in the Thing Description such that semantic annotations could be added
externally to the Thing Description specification if desired. This seems
reasonable to me.

On the point of the charter being too broad I don't think much has been
done to address this. The group still seems intent on including a
language-agnostic "scripting API" in the charter, despite Google's feedback
that the Thing Description should be the central focus of the charter and
that the scripting API should be moved to a supporting research themed
status.

I'd like to share a recommendation from the IoT platform team in Connected
Devices that the charter should include only a *Web Thing Description* with
a default JSON encoding and a *Web Thing API* which is a REST API that can
be implemented using HTTP (or HTTP/2 or CoAP). We have started to draft a
potential member submission <https://moziot.github.io/wot/> to illustrate
this proposal (this is just a skeleton at the moment, contributions welcome
on GitHub <https://github.com/moziot/wot/issues>).

With this reduced scope no scripting API should be necessary (most
programming languages already have the capability to call a REST API via
HTTP and anyone can create a helper library to call the WoT REST API). It
should also simplify the security and privacy requirements considerably
given this is a well established and well understood technology stack on
the web.
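As a sketch of the kind of interaction this implies (the path, host and payload below are hypothetical, invented purely for illustration), reading a boolean property of a Web Thing over plain HTTP might look like:

```http
GET /things/lamp/properties/on HTTP/1.1
Host: thing.example.com
Accept: application/json

HTTP/1.1 200 OK
Content-Type: application/json

{"on": true}
```

Any existing HTTP client can perform this request, which is why no separate scripting API seems necessary.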

This kind of RESTful approach is already becoming a de-facto standard in
IoT (e.g. Google Weave, Apple HomeKit, Samsung SmartThings, EVRYTHNG, AWS
IoT, Azure IoT, IoTivity, AllJoyn). What's missing is a standard data model
and common API using this pattern. This is also the direction the Open
Connectivity Foundation <https://openconnectivity.org> is taking with CoAP
and their OIC specification, and the direction we expect the Mozilla IoT
Framework to take.

We'd very much like to collaborate on this specification via the W3C but
currently the charter still seems too broad and I would argue not in line
with the direction of the wider industry.

Ben



On 17 October 2016 at 19:15, L. David Baron <dba...@dbaron.org> wrote:

> The comments I submitted on the WoT charter are archived at:
> https://lists.w3.org/Archives/Public/www-archive/2016Oct/0004.html
>
> -David

Re: Proposed W3C Charter: Web of Things Working Group

2016-10-14 Thread Benjamin Francis
Hi David,

We collected some feedback in a document
<https://docs.google.com/document/d/1jbZUgqFiJa_R5E3OxPduFSiVsmOYGSWw66VVLij9FyA/edit?usp=sharing>
and I'm going to try to summarise it here. Please let me know if you feel
this feedback is appropriate and feel free to edit it before sending. I
also welcome further feedback from this list if it can be provided in time.



There were some concerns expressed around the clarity of the goals set out
in the charter and whether there has been sufficient research and
incubation in order to proceed with the drafting of specifications via a
Working Group.

We propose the charter could benefit from a reduced scope, a more
lightweight approach and a simplified set of deliverables. This might
include a simpler initial data model with a reduced set of metadata and a
default encoding without a dependency on RDF (e.g. plain JSON), the
specification of a single REST/WebSockets API and a reduced scope around
methods for device discovery. We propose that the deliverables could be
reduced down to a single specification describing a Web of Things
architecture, data model and API and separate notes documenting bindings to
non-web protocols and a set of test cases.

It is suggested that the WoT Current Practices
<http://w3c.github.io/wot/current-practices/wot-practices.html> and WoT
Architecture <https://w3c.github.io/wot/architecture/wot-architecture.html>
documents referenced in the charter are not currently a good basis on which
to build a specification and that the member submission
<http://model.webofthings.io/> from EVRYTHNG and the Barcelona
Supercomputing Center could provide a better starting point.

Mozilla welcomes the activity in this area but the charter as currently
proposed may need some work.




Let me know what you think

Ben

On 11 October 2016 at 02:52, L. David Baron  wrote:

> The W3C is proposing a new charter for:
>
>   Web of Things Working Group
>   https://lists.w3.org/Archives/Public/public-new-work/2016Sep/0005.html
>   https://www.w3.org/2016/09/wot-wg-charter.html
>
> Mozilla has the opportunity to send comments or objections through
> this Friday, October 14.
>
> Please reply to this thread if you think there's something we should
> say as part of this charter review, or if you think we should
> support or oppose it.
>
> My initial reaction would be to worry about whether there's
> properly-incubated material here that's appropriate to charter a
> working group for, or whether this is more of a (set of?) research
> projects.  W3C has an existing Interest Group (not a Working Group,
> so not designed to write Recommendation-track specifications) in
> this area: https://www.w3.org/WoT/IG/ .
>
> -David
>
> --
> L. David Baron                         http://dbaron.org/
> Mozilla                          https://www.mozilla.org/
>  Before I built a wall I'd ask to know
>  What I was walling in or walling out,
>  And to whom I was like to give offense.
>- Robert Frost, Mending Wall (1914)
>


Re: Proposed W3C Charter: Web of Things Working Group

2016-10-13 Thread Benjamin Francis
On 13 October 2016 at 01:51, Martin Thomson  wrote:

> I agree with this sentiment, but I don't think that we need to insist
> that a new W3C group solve these issues.  I'm very much concerned with
> the question of how a new "thing" might be authenticated, even how
> clients of the thing are authenticated, those are definitely well
> within their remit and it should be an important consideration.
>
> We shouldn't hold the group responsible for the failings of the
> industry at large though, no matter how egregious those failings.
>

Yes, and let's not be so quick to criticise without an alternative to
propose.

*Building the Web of Things* has a chapter on "Securing and sharing web
Things" which covers encryption (TLS, HTTPS, WSS), authentication (OAuth),
authorization and access control (API tokens and ACLs). EVRYTHNG have a
white paper on this topic which also touches on other areas like network-layer
encryption, firmware vulnerabilities, ISO 27001, SOC 1/2/3, PCI DSS and
addresses the "OWASP Internet of Things Top Ten vulnerabilities". That
seems like a good foundation to build on.

I mention this because EVRYTHNG is one of the members of the Interest Group
so I think the expertise is there, it's just a bit buried at the moment in
all the noise. Maybe that's something we can help with.

> This is probably OK. I would start with this though:
> * insufficiently precise statement of goals; needs more research and
> incubation time
>

I hope we can come up with something a bit more constructive than
"insufficiently precise statement of goals".

I suggest moving this discussion to dev-iot
.
dev-platform is now only really about the back end of Firefox, which isn't
very relevant here; WoT mainly concerns the server side of the web stack.

Ben


Re: Proposed W3C Charter: Web of Things Working Group

2016-10-12 Thread Benjamin Francis
On 12 October 2016 at 02:00, Martin Thomson  wrote:

> Does anyone at Mozilla intend to join this working group? I see no
> Mozilla members in the IG.
>

Yes, in Connected Devices we have recently started looking at this area in
some detail and I think we should seriously consider joining the Working
Group. I previously applied to join the Interest Group but at that point we
decided we weren't far enough along with our thinking in this space to make
that commitment; I think things have changed now.

Firstly, I agree that the documents produced by the Interest Group are a
mess; in fact, almost incomprehensible.

Much more compelling is the member submission 
from EVRYTHNG which also forms the basis of the book, Building the Web of
Things . This is
a well thought out blueprint for the Web of Things which I think could
serve as a better starting point. In fact we are planning our own
implementation of something along these lines for Mozilla's IoT platform
which could serve as a reference implementation. We've just started to
draft a white paper on this topic.

I'm going to try to find time for a more detailed review of the proposed
charter this week. They keep giving short notice for the review deadline
and then extending the deadline.

Thanks, David, for flagging this up; I'm interested to hear others' views on
the charter specifically.

Ben


Re: Removal of B2G from mozilla-central

2016-09-30 Thread Benjamin Francis
On 30 September 2016 at 11:31, Gabriele Svelto  wrote:

> Since gonk is a widget on its own, during the internal discussions about
> it I - and others who worked on B2G - repeatedly asked for the gonk
> widget to be left in the code even after the removal of all the
> B2G-specific APIs (as a tier3 platform obviously).
>
...

> - We would still have the functionality to render web content into the
> low-levels of the Android graphics stack
>

+1

I still personally think that the Gonk widget layer is a valuable asset in
its own right and that we may eventually find that we need it in Connected
Devices. If it helps, think of it as the Brillo widget layer, given that
Gonk is much the same thing as Google Brillo (AOSP minus Java).

Part of the B2G decision appears to be to remove this code, but it's true
there was no public discussion about this as there was with the Qt widget
layer where the option was given for someone to volunteer to maintain it.

Ben


Re: B2G OS Announcements on Tuesday

2016-09-27 Thread Benjamin Francis
Thank you to everyone who attended this meeting today; the meeting notes
<https://wiki.mozilla.org/B2G/Meeting/2016-09-27> are now on the wiki.

Ben

On 27 September 2016 at 16:02, Benjamin Francis <bfran...@mozilla.com>
wrote:

> The meeting will be streamed on Air Mozilla in addition to Vidyo
>
> https://air.mozilla.org/b2g-os-announcements-2016-09-27/
>
> On 27 September 2016 at 10:05, Benjamin Francis <bfran...@mozilla.com>
> wrote:
>
>> A reminder that this meeting is today, Tuesday, at 9am PT.
>>
>> The main announcements to be discussed are outlined here
>> https://groups.google.com/d/msg/mozilla.dev.fxos/FoAwifahNPY/Lppm0VHVBAAJ
>>
>> Ben
>>
>> On 23 September 2016 at 10:10, Benjamin Francis <bfran...@mozilla.com>
>> wrote:
>>
>>> Dear B2G OS community,
>>>
>>> Our weekly meeting on Tuesday 27th September will be attended by some
>>> senior members of staff from Mozilla who would like to make some
>>> announcements to the community about the future of the B2G project and
>>> Mozilla's involvement.
>>>
>>> I would strongly recommend that you attend if you can as this will be
>>> your opportunity to hear those announcements first hand and to ask any
>>> questions you may have.
>>>
>>> You can join the meeting by using the usual guest link
>>> <http://j.mp/K03h7e> which requires Vidyo <http://www.vidyo.com/> to
>>> join the B2G Vidyo room. The meeting is at *9am PT (Pacific Time) on
>>> Tuesday 27th September*, full details are on the wiki
>>> <https://wiki.mozilla.org/B2G/Meeting>.
>>>
>>> Meeting notes will be kept on the Etherpad
>>> <https://public.etherpad-mozilla.org/p/b2g-weekly-meeting> as usual.
>>> Back channel chat is on the #b2g IRC channel (which is mirrored to
>>> Telegram). This week the meeting will also be streamed on Air Mozilla
>>> <https://air.mozilla.org/> and you can submit questions via IRC if you
>>> prefer.
>>>
>>> Regards
>>>
>>> Ben
>>>
>>
>>
>


Re: B2G OS Announcements on Tuesday

2016-09-27 Thread Benjamin Francis
The meeting will be streamed on Air Mozilla in addition to Vidyo

https://air.mozilla.org/b2g-os-announcements-2016-09-27/

On 27 September 2016 at 10:05, Benjamin Francis <bfran...@mozilla.com>
wrote:

> A reminder that this meeting is today, Tuesday, at 9am PT.
>
> The main announcements to be discussed are outlined here
> https://groups.google.com/d/msg/mozilla.dev.fxos/FoAwifahNPY/Lppm0VHVBAAJ
>
> Ben
>
> On 23 September 2016 at 10:10, Benjamin Francis <bfran...@mozilla.com>
> wrote:
>
>> Dear B2G OS community,
>>
>> Our weekly meeting on Tuesday 27th September will be attended by some
>> senior members of staff from Mozilla who would like to make some
>> announcements to the community about the future of the B2G project and
>> Mozilla's involvement.
>>
>> I would strongly recommend that you attend if you can as this will be
>> your opportunity to hear those announcements first hand and to ask any
>> questions you may have.
>>
>> You can join the meeting by using the usual guest link
>> <http://j.mp/K03h7e> which requires Vidyo <http://www.vidyo.com/> to
>> join the B2G Vidyo room. The meeting is at *9am PT (Pacific Time) on
>> Tuesday 27th September*, full details are on the wiki
>> <https://wiki.mozilla.org/B2G/Meeting>.
>>
>> Meeting notes will be kept on the Etherpad
>> <https://public.etherpad-mozilla.org/p/b2g-weekly-meeting> as usual.
>> Back channel chat is on the #b2g IRC channel (which is mirrored to
>> Telegram). This week the meeting will also be streamed on Air Mozilla
>> <https://air.mozilla.org/> and you can submit questions via IRC if you
>> prefer.
>>
>> Regards
>>
>> Ben
>>
>
>


Re: B2G OS Announcements on Tuesday

2016-09-27 Thread Benjamin Francis
A reminder that this meeting is today, Tuesday, at 9am PT.

The main announcements to be discussed are outlined here
https://groups.google.com/d/msg/mozilla.dev.fxos/FoAwifahNPY/Lppm0VHVBAAJ

Ben

On 23 September 2016 at 10:10, Benjamin Francis <bfran...@mozilla.com>
wrote:

> Dear B2G OS community,
>
> Our weekly meeting on Tuesday 27th September will be attended by some
> senior members of staff from Mozilla who would like to make some
> announcements to the community about the future of the B2G project and
> Mozilla's involvement.
>
> I would strongly recommend that you attend if you can as this will be your
> opportunity to hear those announcements first hand and to ask any questions
> you may have.
>
> You can join the meeting by using the usual guest link
> <http://j.mp/K03h7e> which requires Vidyo <http://www.vidyo.com/> to join
> the B2G Vidyo room. The meeting is at *9am PT (Pacific Time) on Tuesday
> 27th September*, full details are on the wiki
> <https://wiki.mozilla.org/B2G/Meeting>.
>
> Meeting notes will be kept on the Etherpad
> <https://public.etherpad-mozilla.org/p/b2g-weekly-meeting> as usual. Back
> channel chat is on the #b2g IRC channel (which is mirrored to Telegram).
> This week the meeting will also be streamed on Air Mozilla
> <https://air.mozilla.org/> and you can submit questions via IRC if you
> prefer.
>
> Regards
>
> Ben
>


B2G OS Announcements on Tuesday

2016-09-23 Thread Benjamin Francis
Dear B2G OS community,

Our weekly meeting on Tuesday 27th September will be attended by some
senior members of staff from Mozilla who would like to make some
announcements to the community about the future of the B2G project and
Mozilla's involvement.

I would strongly recommend that you attend if you can as this will be your
opportunity to hear those announcements first hand and to ask any questions
you may have.

You can join the meeting by using the usual guest link 
which requires Vidyo  to join the B2G Vidyo room.
The meeting is at *9am PT (Pacific Time) on Tuesday 27th September*, full
details are on the wiki .

Meeting notes will be kept on the Etherpad
 as usual. Back
channel chat is on the #b2g IRC channel (which is mirrored to Telegram).
This week the meeting will also be streamed on Air Mozilla
 and you can submit questions via IRC if you
prefer.

Regards

Ben


Re: Non-standard stuff in the /dom/ directory

2016-08-17 Thread Benjamin Francis
On 17 August 2016 at 13:07, Benjamin Francis <bfran...@mozilla.com> wrote:

> As discussed in the public B2G Weekly
> <https://wiki.mozilla.org/B2G/Meeting> meetings, the B2G community want
> to remove ~30 APIs (~10 of which they have already removed) from Gecko, but
> keep ~3 chrome-only APIs and the Gonk widget layer which are still in use
> by the B2G open source project.
>

Specifics: https://public.etherpad-mozilla.org/p/b2g-transition-requirements


Re: Non-standard stuff in the /dom/ directory

2016-08-17 Thread Benjamin Francis
On 17 August 2016 at 05:05, Kyle Huey  wrote:

> s/moved out/deleted/
>
> This is a complicated subject caught in a variety of internal
> politics.  I suggest you raise it with your management chain ;)


It's really not that complicated and there's no need to be so Machiavellian
about it :)

As discussed in the public B2G Weekly 
meetings, the B2G community want to remove ~30 APIs (~10 of which they have
already removed) from Gecko, but keep ~3 chrome-only APIs and the Gonk
widget layer which are still in use by the B2G open source project. Some
module owners from platform want to go further by removing all B2G-related
code from mozilla-central, requiring the B2G community to fork Gecko if
they want to continue. They feel the B2G code is getting in the way of
other important platform work.

The issue has been escalated up the module ownership chain as per Mozilla's
governance system and a decision will be made and communicated in due
course.

I think Marcos' suggestion is a good one. It seems like a good idea to
clearly separate standard web APIs from chrome-only APIs which are not on a
standards track (e.g. the Browser API which Firefox is using).

Ben


Re: Proposed W3C Charter: Web of Things Interest Group

2016-06-22 Thread Benjamin Francis
On 22 June 2016 at 17:18, L. David Baron  wrote:

> So opposing it takes both a good bit of energy and a potentially a
> good bit of political capital (in that it might reduce the
> seriousness with which people take future objections that we make).
> Do you think it's actually worth getting involved here?  If so,
> would you be willing to help write the objection?
>

Can we make suggestions about how to improve the charter rather than just
oppose it? I don't necessarily agree with the technical approach the group
is currently taking, but I do agree that the Web of Things could be a
valuable area for standardisation, as a layer of abstraction above various
incompatible IoT protocols which makes "things" linkable, discoverable and
interoperable via URLs.

Perhaps the effort should be more focused on use cases and real world
implementations before, as Marcos says, going off into the RDF weeds.

Ben 


Re: <iframe mozbrowser> now available on desktop Firefox

2016-03-08 Thread Benjamin Francis
On 8 March 2016 at 06:06, Jonas Sicking  wrote:

> On Mon, Mar 7, 2016 at 9:51 PM, Tim Guan-tin Chien 
> wrote:
> > This implies the Gaia System app, which is framed by shell.html (a
> > chrome document), will switch cookie jar when the default is changed,
> > no?
>
> No, the Gaia System app doesn't run as chrome (i.e. doesn't have a
> system principal). So it would not be affected by this proposal.
>

Just a heads up that this may not be the case for much longer. As part of
the B2G OS transition project we are currently looking at making all the
Gaia system UI (at least all the certified apps with role=system) into
chrome.

I have a simple working prototype
 and
Fabrice is currently looking at getting this to work for the existing Gaia
system app. After some experimentation the approach Fabrice is taking is to
load the existing system UI using a chrome:// URL, but still inside the
<iframe mozbrowser> inside shell.html in order to try to maintain some
level of abstraction away from XPCOM.

The idea of this is to bring us much closer to how Firefox works (just
chrome and web content), but the upshot is that the Gaia system UI will in
fact have the system principal. This may not be a problem, though; we have
no plans for data migration and we can probably share whatever Firefox does.

Ben


Re: HTML-based chrome and <webview>

2016-02-29 Thread Benjamin Francis
On 26 February 2016 at 21:26, Ehsan Akhgari  wrote:

> Without intending to start a shadow discussion on top of what's already
> happening on the internal list (sadly), to answer your technical
> question, the "platform"/Firefox point is a false dichotomy.  As an
> example, you can create a new application target similar to browser,
> b2g/dev or mobile/android, select that using --enable-application, and
> start to hack away.  That should make it possible to create a
> non-Firefox project on top of Gecko.  You can use an HTML file for
> browser.startup.homepage, and you can use  if you
> need to load Web content.  So it's definitely possible to achieve what
> you want as things stand today.
>

OK, so to follow this logic in the case of B2G, your recommended solution
would be to transform /b2g/chrome/content/shell.html [1] into a replacement
for Gaia's system app and land that code directly in mozilla-central?

That could potentially work for the B2G use case and would remove an entire
layer from the architecture which sounds great. But it would mean putting
that code in mozilla-central rather than a completely separate project on
GitHub.

Ben

1.
https://dxr.mozilla.org/mozilla-central/source/b2g/chrome/content/shell.html


Re: HTML-based chrome and <webview>

2016-02-27 Thread Benjamin Francis
On 27 February 2016 at 03:07, Myk Melez  wrote:

> Nevertheless, the more significant factor is that this would be a cultural
> sea change in the Gecko project.
>

Eh? I didn't realise it was so radical.

My entire involvement with Mozilla and Mozilla technologies over the last
10 years has been building projects on top of XULRunner, Chromeless and
B2G/Graphene. This was something which was possible until literally the
last month when B2G became tier 3 and XULRunner was removed from the
codebase. I'm fine with losing XUL, all I'm really asking for is a simple
way to bootstrap an HTML-based project with chrome privileges so I can
continue to do what I was hired to do, which is innovating on the front end.

Why not use Electron for your project?
>

That does seem like a more attractive option than forking mozilla-central
for every new project I work on, which I think is what you're suggesting.

I've been kind of taken aback at the lengths the platform team is willing
to go to to prevent the platform being used by anyone. If Mozilla limits
itself to just being about Firefox then I think we really will be going the
way of the dinosaur.

Ben


Re: HTML-based chrome and <webview>

2016-02-26 Thread Benjamin Francis
On 26 February 2016 at 15:21, Paul Rouget  wrote:

> So there are 2 things here.
>
> Browser API change. Sure, what do you propose? I don't care too much
> about the final syntax. But there are things we can improve in the
> current API. See https://github.com/browserhtml/browserhtml/issues/639
> for some ideas.
>

OK, let's discuss those specifics off-list.

> An Electron-like project. I don't think there's anybody who would think
> that having a Electron-compatible tool based on gecko is a bad idea
> (we will likely go this route with Servo). I'm just not sure we have
> the resources to work on something like that *today* though.


I don't buy that. We just cancelled a bunch of projects and we still don't
have the resources? If that is really true then maybe we should be hiring
people.

But I'm happy to help in any way I can and there are quite a few people in
Connected Devices who have some spare cycles right now...

Ben


Re: HTML-based chrome and <webview>

2016-02-26 Thread Benjamin Francis
On 26 February 2016 at 17:55, Myk Melez  wrote:

>
> I'd like to see this too, if only because <webview> is more ergonomic and
> easier to distinguish from the existing <iframe mozbrowser> API. However,
> the isolated <iframe mozbrowser> from bug 1238160 is reasonable and a great
> start.
>

I agree, let's build on that.


> I like this idea in theory. But I want to understand how it's different
> from Electron, besides simply using different underlying technology. In
> other words: what makes it unique that justifies the effort?


Why does it even have to be unique? Being able to build a browser using a
browser engine seems like table stakes to me...


Re: HTML-based chrome and <webview>

2016-02-26 Thread Benjamin Francis
On 25 February 2016 at 23:08, Ehsan Akhgari  wrote:

> They're orthogonal in that <iframe mozbrowser> can load something within
> an "app context", or not, depending on whether you use a mozapp
> attribute.  Bug 1238160 makes it so that you can use the non-app variant
> on desktop.
>

I really meant the API being gated on mozApps permissions and having
mozApp-specific features like events for manifests and web activities, not the
separate mozApp attribute, but those things can certainly be changed.

The Electron compatibility aside, this seems to me like replacing one
> proprietary API for another one.  The vendor prefix doesn't hurt us in
> this case since this API is completely invisible to the Web.
>

Electron compatibility would be neat, but isn't really what I'm asking for.
As you say, this isn't exposed to the web and doesn't necessarily have to
follow a standard.


> So I think it's better to separate out the different things you're
> asking for here.  It seems to me that if we decide that Electron
> compatibility is desirable, we should implement the webview API, but if
> we decide that's not valuable, there is little value in implementing
> webview.
>

I mainly propose the change of syntax because this transition period seems
like an opportunity to make a clean break: get rid of the vendor prefixes
and define a long-term, chrome-only solution, explicitly separate from
standard HTML, with a cleaner API, without having to worry about backwards
compatibility because the mozBrowser API could exist in parallel until we
phase it out.

But I think a more important piece than webview is the ability to execute a
Gecko-based user agent with HTML-based chrome without having to run it on
top of the Firefox binary. If we no longer have XULRunner, if mozApps is
phased out and B2G loses platform support we're really running out of
options for how to use Gecko for non-Firefox projects. At what point does
the platform stop being a platform and just become Firefox? How are we
promoting innovation if we're effectively forcing alternative user agents
to use WebKit? Unless there is another existing solution I'm missing?

Ben


Re: HTML-based chrome and <webview>

2016-02-25 Thread Benjamin Francis
On 25 February 2016 at 19:14, Ehsan Akhgari  wrote:

> mozApps and the browser API are orthogonal for the most part.  Removing
> mozApps doesn't mean that we would remove the mozbrowser API (and AFAIK
> that's not what Myk is planning.)
>

They are not entirely orthogonal. It seems they will become less
interdependent with bug 1238160, which is a good first step, though that
only applies to desktop.

> Both. My first priority is a Webview API for Gecko that works without
> > mozApps on multiple platforms. But I'd like that to be useful for
> multiple
> > projects. For it to be useful for a project like Browser.html, Hope,
> > Planula and even Firefox I think we need something along the lines of
> > BrowserWindow to spawn multiple native windows.
>
> So is this just a matter of the <webview> syntax versus the <iframe mozbrowser> syntax?
>

Partly a new chrome-only HTML element and API to allow us to remove the
non-standard vendor-prefixed extensions to the standard HTML iframe element
which have been hanging around for the last four years, but also a new
embedding model along the lines of Electron as I described above. I
understand that something the Browser.html team don't like about the
current setup is that you can't spawn multiple windows in the native window
manager because it was built for B2G where it's assumed Gaia will provide
the window manager. More generally I'd like to be able to run native apps
written in HTML-based chrome using Gecko without having to run them on top
of Firefox - like XULRunner but for HTML instead of XUL.

For the Firefox OS use case I think just making sure the Browser API is
exposed to chrome on Gonk without any mozApp permissions like it's planned
to be on desktop would be a good first step. Maybe that will already work
with bug 1238160?

Ben


Re: HTML-based chrome and <webview>

2016-02-25 Thread Benjamin Francis
On 25 February 2016 at 16:22, Paul Rouget  wrote:

> As a user, using <iframe mozbrowser> or <webview> doesn't really
> matter.
> How would they differ? Same set of JS methods and events?
>

When Justin Lebar added the mozbrowser attribute to iframes for me in 2011
(see [1] for the whole story) it was meant to be a temporary solution; I'm
trying to figure out a more future-proof solution which is less tied to
mozApps and mozApps-only functionality (e.g. mozbrowsermanifestchange [2]
and mozbrowseractivitydone [3], the separate data jars etc.).

The reason for all the moz-prefixed events and attributes was that it was
intended that the Browser API could eventually become a web standard
exposed to web content - but that never happened because we never came up
with a security model for mozApps fit for that purpose. If we remove
mozApps from Gecko I think we'll still want something like the Browser API,
but if it's only exposed to chrome then I think it makes sense to create a
completely separate chrome-only HTML element (a bit like xul:browser) and
take the opportunity to clean up the API. I imagine a new Webview API would
look a lot like the current Browser API, but with fewer vendor prefixes and
fewer issues (e.g. some of those listed at [4]).

These are 2 different things.
>
> This is about BrowserWindow, which is a different beast.
>
> BrowserWindow is a window that loads a HTML page. That HTML page can
> use <webview>.
> Electron is basically a way to start and handle privileged HTML
> windows. It does what shell.js does (but headless!).
>
> For Servo, what i'd like us to do is:
> - have a JS runtime à-la nodejs, in Rust (with rust modules)
> - one of the rust module will be BrowserWindow, that will spawn a
> servo window that would allow the use of webviews
> - implement webview/mozbrowser within Servo
>
> I describe that here: https://github.com/servo/servo/issues/7379 and
> it's basically a subset of Electron, but with Spidermonkey, Servo, and
> Rust.
>
> If you want the same thing with Gecko, you'd need to be able to spawn
> gecko windows from a xpcshell-like tool (with an event loop that is
> not tied to a window). A headless JS script should be the entry point,
> not a window (shell.html).
>
> I'm almost certain that the Electron approach is what should be done
> if one wants to build desktop applications with web technologies. And
> if such application is a browser, the web engine needs to support
> webviews.
>
> An interesting issue we have today is system XHR. We are not sure we
> want to pollute the existing XHR code in Servo to add special
> behaviors. But in the headless JS, because it's a node-like
> environment (no need to think about security or web specifications,
> it's part of the app, like shell.html), we could imagine using any
> crates.io's packages exposed in the JS world, and implement
> cross-origin http requests. We don't want to mix both worlds. The
> browser itself (the top level window, above the webviews) can
> communicate with the headless JS via message passing.
>
> Basically, <webview> is the Browser API; Electron is nodejs +
> BrowserWindow that spawn webkit windows.
>
> --
>
> So what do you want to do here?
> You'd like to redesign the Browser API?
> Or propose a way to create apps with Gecko with a Electron-like project?
> Both?
>

Both. My first priority is a Webview API for Gecko that works without
mozApps on multiple platforms. But I'd like that to be useful for multiple
projects. For it to be useful for a project like Browser.html, Hope,
Planula and even Firefox I think we need something along the lines of
BrowserWindow to spawn multiple native windows.

I don't have a good answer to cross-origin XHR/fetch, but that is
definitely something which is needed to build a browser with HTML chrome.
It could be a method of a BrowserWindow or a webview (like the more special
purpose getManifest() method in the current Browser API [5]), or it could
be something completely separate.

Ben

1.
http://tola.me.uk/blog/2014/08/07/building-the-firefox-browser-for-firefox-os/
2.
https://developer.mozilla.org/en-US/docs/Web/Events/mozbrowsermanifestchange
3.
https://developer.mozilla.org/en-US/docs/Web/Events/mozbrowseractivitydone
4.
https://github.com/browserhtml/browserhtml/issues/639#issuecomment-138670852
5. https://developer.mozilla.org/en-US/docs/Web/API/Using_the_Browser_API


Re: HTML-based chrome and <webview>

2016-02-25 Thread Benjamin Francis
Hi Ryan,

Thanks for the heads-up. Will this be available to all chrome privileged
code (i.e. not behind a mozApp permission)? If so, this could be a great
starting point for what I'm describing. The main differences being the way
you instantiate a "browser window", and a new chrome-only HTML element with
a new name and a cleaner API (the current mozBrowser extension to iframes
is a bit of a mess).

Also, is this restricted to desktop or could it be exposed on other
platforms too (i.e. Android/Gonk)?


To be more explicit about the type of setup I'm proposing...

Some kind of configuration/manifest which specifies a main worker script;
the example below is like Node's package.json (for Firefox OS, which
provides its own window manager, this could probably just point directly at
an HTML file, but for browser.html they want to be able to instantiate
multiple windows in the native window manager from a worker script):

{
  "name": "my-browser",
  "version": "1.0",
  "main": "main.js"
}

A script which instantiates a browser window with HTML chrome:

app.on('ready', function() {
  var mainWindow = new BrowserWindow({ width: 800, height: 600 });
  mainWindow.loadURL('file://.../index.html');
});

HTML browser chrome which embeds a webview:

<!DOCTYPE html>
<html>
  <head>
    <title>My Browser</title>
  </head>
  <body>
    <button id="back-button">Back</button>
    <webview id="tab1" src="http://google.com"></webview>
  </body>
</html>

Browser logic which uses the webview API:

var backButton = document.getElementById('back-button');
var tab1 = document.getElementById('tab1');
backButton.addEventListener('click', function() {
  tab1.goBack();
});

The chrome code could then access OS-level services via either further
chrome-only JavaScript APIs (with Electron these are Node APIs separate
from Webkit) or local web services providing some kind of RESTful API.

This is heavily based on Electron's API as a starting point and I'm
completely open to suggestions for something more Gecko-like. For example,
the configuration in package.json could be done in mozconfig or prefs
instead if that's more appropriate.

Ben

On 24 February 2016 at 20:38, J. Ryan Stinnett <jry...@gmail.com> wrote:

> We'll soon have access to <iframe mozbrowser> in desktop Firefox (see
> https://bugzilla.mozilla.org/show_bug.cgi?id=1238160).
>
> I realize you are proposing a different API than mozbrowser, but I
> just wanted to point out that there will be some HTML-based approach
> for browser chrome available on desktop soon.
>
> - Ryan
>
> On Wed, Feb 24, 2016 at 2:19 PM, Benjamin Francis <bfran...@mozilla.com>
> wrote:
> > Hi,
> >
> > I've been thinking about working towards deprecating "Open Web Apps" (aka
> > mozApps
> > <
> https://developer.mozilla.org/en-US/docs/Mozilla/Firefox_OS/API/Navigator/mozApps
> >)
> > in Gecko.
> >
> > For the most part I think mozApps should eventually be replaced by
> > standards-based web apps using Web Manifest, Service Workers etc. I'd
> love
> > to see a standalone display mode for Firefox which supports these web
> apps
> > on desktop and mobile to replace the now defunct web runtime, but that's
> > not what this email is about.
> >
> > For the most privileged pieces of UI like the browser chrome of the
> > browser.html project and the Firefox OS system app I think we may need
> > another solution. I'd like us to be able to split Gaia
> > <https://github.com/mozilla-b2g/gaia> into chrome (the system pieces)
> and
> > standard web content (everything else). (I'd also like to see a lot of
> the
> > current mozApps-only DOM APIs become web services).
> >
> >- In the past we had XULRunner but this has recently been removed and
> it
> >seems the continued use of XUL is being discouraged in favour of HTML.
> >- There was an attempt at rebooting the Chromeless project
> ><https://github.com/mikedeboer/chromeless2> but it looks like that
> was
> >still based on XULRunner.
> >- The browser.html <https://github.com/browserhtml/browserhtml/>
> project
> >currently uses Graphene
> ><
> https://github.com/browserhtml/browserhtml/wiki/Building-Graphene-%28Gecko-flavor%29
> >,
> >a build of Gecko/Servo which runs locally hosted web content as
> browser
> >chrome, but that's based on certified mozApps and the mozBrowser API.
> >
> > After chatting with members of the browser.html team I'd like to propose
> a
> > solution inspired by Electron <http://electron.atom.io/> (which they
> > already proposed <https://github.com/browserhtml/browserhtml/issues/639>
> > before <https://github.com/servo/servo/issues/7379>). This would
> involve a
> > new type of HTML-based chrome including a new <webview> element.
> >
> > 

HTML-based chrome and <webview>

2016-02-24 Thread Benjamin Francis
Hi,

I've been thinking about working towards deprecating "Open Web Apps" (aka
mozApps
<https://developer.mozilla.org/en-US/docs/Mozilla/Firefox_OS/API/Navigator/mozApps>)
in Gecko.

For the most part I think mozApps should eventually be replaced by
standards-based web apps using Web Manifest, Service Workers etc. I'd love
to see a standalone display mode for Firefox which supports these web apps
on desktop and mobile to replace the now defunct web runtime, but that's
not what this email is about.

For the most privileged pieces of UI like the browser chrome of the
browser.html project and the Firefox OS system app I think we may need
another solution. I'd like us to be able to split Gaia
<https://github.com/mozilla-b2g/gaia> into chrome (the system pieces) and
standard web content (everything else). (I'd also like to see a lot of the
current mozApps-only DOM APIs become web services).

   - In the past we had XULRunner but this has recently been removed and it
   seems the continued use of XUL is being discouraged in favour of HTML.
   - There was an attempt at rebooting the Chromeless project
   <https://github.com/mikedeboer/chromeless2> but it looks like that was
   still based on XULRunner.
   - The browser.html <https://github.com/browserhtml/browserhtml/> project
   currently uses Graphene
   <https://github.com/browserhtml/browserhtml/wiki/Building-Graphene-%28Gecko-flavor%29>,
   a build of Gecko/Servo which runs locally hosted web content as browser
   chrome, but that's based on certified mozApps and the mozBrowser API.

After chatting with members of the browser.html team I'd like to propose a
solution inspired by Electron <http://electron.atom.io/> (which they
already proposed <https://github.com/browserhtml/browserhtml/issues/639>
before <https://github.com/servo/servo/issues/7379>). This would involve a
new type of HTML-based chrome including a new <webview> element.

Kan-Ru previously did a comparison of Mozilla's
mozBrowser API, Google's <webview> and Microsoft's <x-ms-webview>, and I
started to spec something out with a view
to potentially standardising this, but the web lacks the security model
needed to expose this API and there's currently not much interest in a
standard. So my proposal is a chrome-only <webview> element for Gecko which
would replace the mozApps-only mozBrowser API,
along the lines of Electron's <webview> element, to allow you to
safely embed web content in an application with HTML-based chrome.

We could also potentially emulate other parts of Electron's APIs too, see
their quick start tutorial to get an idea
of how their embedding works.

Initially this would be used by the browser.html and Firefox OS projects,
but I think it could be a possible route away from XUL for Firefox in the
future too.

I've chatted with a few people working on browser.html and Firefox OS about
this, but I'd like to get broader feedback. Vivien told me he's already
prototyping a similar solution  to
the same problem so I'd like to kick off some discussion here about which
direction we should take.

Thanks

Ben
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform


Re: Intent to implement W3C Manifest for web application

2015-07-16 Thread Benjamin Francis
On 16 July 2015 at 01:44, Robert O'Callahan rob...@ocallahan.org wrote:

 As long as platforms exist with homescreens and other inventories of
 installed apps, of which the browser is one, it seems worthwhile to me to
 support adding Web apps to those inventories so they're peers of native
 apps instead of having to go through a level of indirection by launching a
 browser, making them second-class.

 We can argue that such platforms shouldn't exist, but we also have to work
 with the reality that they do.


Exactly. We can no longer talk about merging the web and native as some
potential future thing that may or may not happen. It is already happening:

   1. Android's Chrome Custom Tabs will keep users in native apps when
   following external hyperlinks
   https://developer.chrome.com/multidevice/android/customtabs
   2. Android's App Links will let native apps register to handle a
   particular web URL scope and remove the user's choice to open the link
   in the browser instead
   https://developer.android.com/preview/features/app-linking.html
   3. App install banners in Chrome may prompt users to install a web app
   or a native app, and the user may not even be able to tell the difference
   
https://developers.google.com/web/updates/2015/03/increasing-engagement-with-app-install-banners-in-chrome-for-android?hl=en
   http://www.w3.org/TR/appmanifest/#related_applications-member
   4. Android's App Indexing will surface content from inside Android apps
   in Google results https://developers.google.com/app-indexing/
   5. iOS's Universal Links, Smart App Banners and new search API will
   do much of the same on iOS
   https://developer.apple.com/videos/wwdc/2015/?id=509

This all points towards a future where the web and proprietary app
platforms are so intertwined that users may not even know the difference.
The question is how we respond to that. On Firefox OS we have the freedom
to define the entire experience, but on the other operating systems we
touch we need to accept the reality of the environment that we find
ourselves in. My personal conclusion is that we should react to all of the
above by pushing back in the other direction by:

   1. Helping users discover a web app before they discover its native
   equivalent, whilst browsing and searching the web
   2. Making web content a first class citizen on every OS Firefox touches,
   with a standalone display mode for Firefox
   3. Promoting re-engagement with web content through icons in launchers,
   offline and push notifications
   4. Guiding users to the best of the web through a crowd-sourced,
   community curated guide

The web has some unique advantages over other platforms, but those
advantages are being eroded. It's up to us to prove that the web can still
compete.

Ben


Re: Intent to implement W3C Manifest for web application

2015-07-16 Thread Benjamin Francis
On 16 July 2015 at 14:36, Ehsan Akhgari ehsan.akhg...@gmail.com wrote:

 As far as I can tell, neither of the above are things that another UA can
 hook into.  Am I correct in my understanding here?


I asked about that for Chrome Custom Tabs; a Googler told me there's an API
so that other browsers can create the equivalent if they want to. I'm not
sure if that's something we'd want to do with Fennec. But the important
point is that it will keep users in the context of native apps and has the
potential to reduce mindshare of the concept of the browser altogether.

App Links and App Indexing bypass browsers altogether, install banners are
something we could implement.


 If the above assertion is true, then you need to replace the web with
 Chrome and Safari on their respective OSes.


Given the market share of those browsers on their respective operating
systems I'd argue that amounts to the same thing. But yes, Firefox is its
own island. With the new Android and iOS features I described that island
is increasingly cut off from users.


 How does supporting W3C manifest or lack thereof help or hurt this?
 Couldn't we detect these web apps by looking at the meta tags inside
 pages?  It seems like manifests are not technically needed for this.


Sure. Web apps are just web sites with extra metadata. A manifest linked
from a web page using a manifest link relation is a way for a page to
associate itself with that metadata, for the purpose of discovery. The fact
that it's JSON rather than HTML is largely a practical implementation
detail, because adding 12+ meta tags to the head of every web page
doesn't scale very well. As I said, for the simple use case of defining an
application-name and icon, meta tags work fine, and we will support that.
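To make the scaling point concrete, here is the same minimal metadata expressed both ways; the file names and values below are placeholders, not from any real site:

```html
<!-- Inline approach: one tag per field, repeated in the head of every page -->
<meta name="application-name" content="Example App">
<link rel="icon" sizes="192x192" href="/icon-192.png">

<!-- Manifest approach: a single link relation; the metadata itself lives in
     one JSON file that can describe the whole app -->
<link rel="manifest" href="/manifest.json">
```

With a dozen or more fields, the single link plus one shared JSON file is clearly the more maintainable of the two.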


 How are we planning to hook into the OS through Firefox?  It seems like a
 lot of the web integration coming to Android and iOS is essentially
 integration with the browser developed by the OS vendor.


I'm trying to find that out. I'd be interested to see some experimentation
of how possible it is to create a standalone display mode for Firefox on
various operating systems, accessible by launchers added from the browser,
treated as a separate app by the OS, but using the same Gecko instance
and profile as Firefox. Like Chrome does on Android.



  3. Promoting re-engagement with web content through icons in
 launchers,
 offline and push notifications


 Again, how does supporting W3C manifest or lack thereof help or hurt
 this?  It seems like we can pick up the icons/application-name through the
 meta tag as well.


An icon and name is not enough metadata to describe modern web apps; as I
said above, using a manifest file is just a practicality, like having CSS
and JavaScript in separate files from the HTML.


 And given that the manifest format helps people link to the native app
 offerings, it seems like supporting it will slightly hurt this goal!


We don't have to implement that feature and we should argue to have it
removed from the spec. I don't think this is a good enough reason to give
up on the standardisation altogether.



  4. Guiding users to the best of the web through a crowd-sourced,
 community curated guide


 I'm not sure how W3C manifest helps here either.


W3C manifests allow web apps to be crawled by search engines and discovered
by users through their user agent without needing them to be submitted to a
central app store by the developer. They provide the metadata needed to
describe a web app, and provide a built-in discovery mechanism.
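A toy sketch of that discovery mechanism, assuming a crawler that finds the manifest link relation in a page and reads app metadata from it. The regex and the inlined manifest body are illustrative shortcuts; a real crawler would fetch the manifest URL and use a proper HTML parser:

```javascript
// Toy sketch of manifest-based discovery. A real crawler would fetch the
// manifest URL and parse the page properly; this just shows the flow.
var pageHTML = '<html><head>' +
  '<link rel="manifest" href="/manifest.json">' +
  '</head><body></body></html>';

// Find the manifest link relation in the page markup.
var match = pageHTML.match(/<link\s+rel="manifest"\s+href="([^"]+)"/);
var manifestURL = match ? match[1] : null;

// Stand-in for the fetched manifest body (member names from the W3C spec,
// values are placeholders).
var manifestJSON = JSON.stringify({
  name: 'Example App',
  short_name: 'Example',
  start_url: '/',
  display: 'standalone',
  icons: [{ src: '/icon-192.png', sizes: '192x192', type: 'image/png' }]
});

// The crawler now has enough metadata to surface the app to users,
// with no app-store submission step involved.
var manifest = JSON.parse(manifestJSON);
```
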

Ben


Re: Intent to implement W3C Manifest for web application

2015-07-15 Thread Benjamin Francis
On 15 July 2015 at 10:42, Jonas Sicking jo...@sicking.cc wrote:

 But it'd be *really* nice to get rid of features that are there
 specifically to migrate users away from the web and to native Android
 and iOS apps. If google/apple wants to implement that then that's fine
 with me, it's their browsers. I just don't see why that needs to be
 sanctioned by a web specification. It'd be nice if the spec took as
 hard of a line against native app stores as it did against web app
 stores.


I strongly agree with this. I think the related_applications [1] and
prefer_related_applications [2] properties have no place in a W3C
specification and are potentially very harmful to the web.

1. http://w3c.github.io/manifest/#related_applications-member
2. http://w3c.github.io/manifest/#prefer_related_applications-member


Re: Intent to implement W3C Manifest for web application

2015-07-15 Thread Benjamin Francis
On 15 July 2015 at 10:42, Jonas Sicking jo...@sicking.cc wrote:

 I also think that display-mode and orientation (and maybe
 theme_color) properties seem to make much less sense given the
 current model of manifests. That seems like information that we'd want
 to apply during normal browsing too, which means that it's not really
 appropriate for the manifest but rather for a meta tag.


We already have a way for an individual web page to set orientation and
theme-color while browsing with page metadata and a JavaScript API. I think
the value of having these properties in the manifest is that they can be
applied to the URL scope of a whole site rather than just an individual
page, by applying the manifest to a browsing context to create what the spec
calls an "application context", meaning default metadata is already applied
to a whole group of web pages. Otherwise you have to wait for each individual
page to download to know which display properties to use, which is bad for UX.

I don't think the display property is relevant whilst browsing because you
are, by definition, in the browser display mode.
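For instance, a manifest along these lines (member names are from the W3C spec; all values are illustrative) would let the UA apply display and theming defaults to every page within the scope without waiting for each page to load:

```json
{
  "name": "Example App",
  "start_url": "/app/index.html",
  "scope": "/app/",
  "display": "standalone",
  "orientation": "portrait",
  "theme_color": "#0095dd"
}
```
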



 I also can't think of a really good use of the scope property. I
 know it's something we're planning on using in the FirefoxOS pinning
 feature, but I'm not convinced that the resulting UI will be
 understandable to users. User testing will show.


Yes, we are using this for "Pin the Web" in Firefox OS, and we are putting
that UI through user testing; I agree it needs testing. FWIW I think the
scope and display properties could be even more important for an
implementation in Firefox (on mobile and on desktop), if that was to go
ahead.


Re: Linked Data must die. (was: Linked Data and a new Browser API event)

2015-07-02 Thread Benjamin Francis
On 2 July 2015 at 03:37, Tantek Çelik tan...@cs.stanford.edu wrote:

 tl;dr: It's time. Let's land microformats parsing support in Gecko as
 a Q3 Platform deliverable that Gaia can use.


Happy to hear this!


 I think there's rough consensus that a subset of OG, as described by
 Ted, satisfies this. Minimizing our exposure to OG (including Twitter
 Cards) is ideal for a number of reasons (backcompat/proprietary
 maintenance etc.).


That's certainly a good start. It seems a shame to intentionally filter out
all the extra meta tags used by other Open Graph types like:

   - music.song
   - music.album
   - music.playlist
   - music.radio_station
   - video.movie
   - video.episode
   - video.tv_show
   - article
   - book
   - profile
   - business
   - fitness.course
   - game.achievement
   - place
   - product
   - restaurant.menu

I envisage allowing the community to contribute addons to add extra
experimental card packs for types we don't support out of the box from day
one. Filtering out this data would make it very difficult for them to do
that, for no good reason.

I absolutely understand the argument about having to maintain backwards
compatibility with a format if we don't want to promote it going forward
though, which is why I agree we should be conservative when adding built-in
Open Graph types.

There appear to be multiple options for this, with the best (most
 open, aligned with our mission, already open source interoperably
 implemented, etc.) being microformats.


That is your opinion. There may be things you don't like about JSON-LD for
example, but it is a W3C Recommendation created through a standards body
and has open source implementations in just as many languages as
Microformats. There may be other, more subjective measures of "open" you're
talking about, but I think it would be better for us all to stick to
arguments about technical merit and adoption statistics when making
comparisons in this case; otherwise we risk falling into the Not Invented
Here trap.


 fulfils mostly in theory. Schema is 99% overdesigned and
 aspirational, most objects and properties not showing up anywhere even
 in search results (except generic testing tools perhaps).


 A small handful of Schema objects and subset of properties are
 actually implemented by anyone in anything user-facing.


As I mentioned, level of current usage is not the most important criterion
for Gaia's own requirements, but if we're talking about how proven these
schemas are, according to schema.org these are the number of domains which
use the schemas we're talking about:

   - Person - over 1,000,000 domains
   - Event - 100,000 - 250,000 domains
   - ImageObject - over 1,000,000 domains
   - AudioObject - 10,000 - 50,000 domains
   - VideoObject - 100,000 - 200,000 domains
   - RadioChannel - fewer than 10 domains
   - EmailMessage - 100 - 1000 domains
   - Comment - 10,000 - 50,000 domains

The only equivalent data I have for Microformats is for hCard (equivalent
to the Person schema) from a crawl at the end of last year [1], and it has
about the same usage:

   - hCard - 1,095,517 domains

The data also shows that Microdata and RDFa are used on more pages per
domain than Microformats.

I'd say that Microformats looks at best equally as unproven on that basis,
though I'm open to new data.


 Everything else is untested, and claiming fulfils these use cases
 puts far too much faith in a company known for abandoning their
 overdesigned efforts (APIs, vocabularies, syntaxes!) every few years.
 Google Base / gData / etc. likely fulfilled these use cases too.


Our Gecko and Gaia code is not going to stop working if Google decides to
use something else. Content authors on the wider web might migrate to newer
vocabularies (or even syntaxes) over time, but that's something we're going
to have to monitor on an ongoing basis anyway.

Existing interoperably implemented microformats support most of these:

 - Contact - http://microformats.org/wiki/h-card
 - Event - http://microformats.org/wiki/h-event
 - Photo - http://microformats.org/wiki/h-entry with u-photo property
 - Song - no current vocabulary - classic hAudio vocabulary could be
 simplified for this
 - Video - http://microformats.org/wiki/h-entry with u-video property
 - Radio station - no current vocabulary - worth researching with
 schema RadioChannel as input
 - Email - http://microformats.org/wiki/h-entry with u-in-reply-to property
 - Message - http://microformats.org/wiki/h-entry


OK, so there are actually three Microformats that are useful to us here.
For photos, videos, emails and messages we have to re-use the same hEntry
Microformat and try to figure out from its properties which type of thing
it is. For song and radio station we'd need to invent something new.

This is not very attractive for Firefox OS where we'd like to have clearly
defined types of cards with different card templates. It also makes it
harder for the community to create new types of cards (e.g. via addons)

Re: Linked Data must die. (was: Linked Data and a new Browser API event)

2015-06-29 Thread Benjamin Francis
Thanks for the responses,

Let me reiterate the Product requirements:

   1. Support for a syntax and vocabulary already in wide use on the web to
   allow the creation of cards for the largest possible volume of existing
   pinnable content
   2. Support for a syntax with a large enough and/or extensible vocabulary
   to allow cards to be created for all the types of pinnable content and
   associated actions we need in Gaia

We need to deliver this by B2G 2.5 FL in September.

*Existing Web Content*
I think we're agreed that Open Graph gives us enough of a minimum viable
product for the first requirement. However, it's not OK to just hard code
particular og types into Gecko, we need to be able to experiment with cards
for lots of different Open Graph types without having to modify Gecko every
time (imagine system app addons with experimental card packs).

Open Graph is just meta tags and we already have a mechanism for detecting
specific meta tags in Gaia - the metachange event on the Browser API. As a
minimum all we need to do to access Open Graph meta tags is to extend this
event to include all meta tags with a property attribute, which is only
used by Open Graph. We could go a step further and extend the event to all
meta tags, which would also give us access to Twitter card markup for
example, but that isn't essential. We do not need an RDFa parser for this,
we can filter/clean up the data in the system app in Gaia where necessary
(the system app is widely regarded to be part of the platform itself).

*Gaia Content*

Open Graph does not have a large enough vocabulary, or (as Kelly says) the
ability to associate actions with content, needed for the second
requirement. Schema.org has a large existing vocabulary which basically
fulfils these use cases, though some parts are more tested than others,
with examples given in Microdata, RDFa and JSON-LD syntaxes, eg:

   - Contact - http://schema.org/Person
   - Event - http://schema.org/Event
   - Photo - http://schema.org/Photograph
   - Song - http://schema.org/MusicRecording
   - Video - http://schema.org/VideoObject
   - Radio station - http://schema.org/RadioChannel
   - Email - http://schema.org/EmailMessage
   - Message - http://schema.org/Comment

Schema.org also provides existing schemas for actions associated with items
(https://schema.org/docs/actions.html), although examples are only given in
JSON-LD syntax. Schema.org is just a vocabulary and Tantek tells me it's
theoretically possible to express this vocabulary in Microformats syntax
too - it's possible to create new vendor prefixed types, or suggest new
standard types to be added to the Microformats wiki. This would be required
because Microformats does not have a big enough existing vocabulary for
Gaia's needs. Microdata, RDFa and JSON-LD use URL namespaces so are
extensible by design with a non-centralised vocabulary (this is seen as a
strength by some, as a weakness by others).

The data we have [1][2][3][4] shows that Microdata, then RDFa (sometimes
considered to include Open Graph), is used by the most pinnable content on
the web, but the data does not include all modern Microformats. We also
don't have any data for JSON-LD usage. However, existing usage is not the
most important criterion for the second requirement; what matters is how
well it fits the more complex use cases in Gaia (and how much work it is
to implement).

There is resistance to implementing a full Microdata or RDFa parser in
Gecko due to its complexity. JSON-LD is more self-contained by design (for
better or worse) and could be handed over to the Gaia system app directly
via the Browser API without any parsing in Gecko. Microformats is possibly
less Gecko work to implement than Microdata or RDFa, but more than JSON-LD.

*Conclusions*

My conclusion is that the least required work in Gecko for the highest
return would be:

   1. *Open Graph* (bug 1178484) - Extending the existing metachange
   Browser API event to include all meta tags with a property attribute.
   This would allow Gaia to add support for all of the Open Graph types,
   fulfilling requirement 1.
   2. *JSON-LD* (bug 1178491) - Adding a linkeddatachange event to the
   Browser API which is dispatched by Gecko whenever it encounters a script
   tag with a type of application/ld+json (as per the W3C recommendation
   [5]), including the JSON content in the payload of the event. This would
   allow the Gaia system app to support existing schema.org schemas
   (including actions), with the least amount of work in Gecko, and already in
   a JSON format it can store directly in the Places database
   (DataStore/IndexedDB).
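A rough sketch of how the system app might consume those two events. The event names come from the proposal above, but the payload shapes (detail.property, detail.content, detail.payload) are my assumptions for illustration, and the events are simulated as plain objects here:

```javascript
// Sketch of Gaia-side handling for the two proposed Browser API events.
var cards = [];

function onMetaChange(event) {
  // Only Open Graph uses the property attribute on meta tags, so filtering
  // on an "og:" prefix can happen entirely in the system app, not Gecko.
  if (event.detail.property && event.detail.property.indexOf('og:') === 0) {
    cards.push({ source: 'opengraph',
                 key: event.detail.property,
                 value: event.detail.content });
  }
}

function onLinkedDataChange(event) {
  // Payload is the raw text of an application/ld+json script tag; no
  // parsing is needed in Gecko, just JSON.parse on this side.
  cards.push({ source: 'json-ld', data: JSON.parse(event.detail.payload) });
}

// Simulated events, standing in for what Gecko would dispatch.
onMetaChange({ detail: { property: 'og:title', content: 'The Rock' } });
onLinkedDataChange({ detail: { payload:
  '{"@context":"http://schema.org","@type":"Movie","name":"The Rock"}' } });
```
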

Kan-Ru is the owner of the Browser API module in Gecko and has said he's
happy with this approach and is happy to review the code. Let's go ahead
with that now, unblocking the work on the Gaia side. (Note that I have no
intention of building a full RDF style parser in Gaia, we'll just extract
the data we need from the JSON, for the good reasons that 

Re: Linked Data must die. (was: Linked Data and a new Browser API event)

2015-06-27 Thread Benjamin Francis
On 26 June 2015 at 19:25, Marcos Caceres mar...@marcosc.com wrote:

 Could we see some examples of the cards you are generating already with
 existing data from the Web (from your prototype)? The value is really in
 seeing that users will get some real benefit, without expecting developers
 to add additional metadata to their sites.


The prototype only supports Open Graph, you can see some example cards in
this video https://www.youtube.com/watch?v=FiLnRoRjD5k


Re: Linked Data must die. (was: Linked Data and a new Browser API event)

2015-06-26 Thread Benjamin Francis
On 26 June 2015 at 12:58, Ted Clancy tcla...@mozilla.com wrote:

 My apologies for the fact that this is such an essay, but I think this has
 become necessary.

 Firefox OS 2.5 will be unveiling a new feature called Pinning The Web, and
 there's been some discussion about whether we should leverage technologies
 like RDFa, Microdata, JSON-LD, Open Graph, and Microformats for this
 purpose.

 First, I'd like to give some background on these technologies.

 In 2001, Tim Berners-Lee said that the Semantic Web was the future of
 the web and was going to revolutionize our world. (
 http://www.scientificamerican.com/article/the-semantic-web/)

 The Semantic Web was a doomed idea, for reasons best articulated in essay
 by Cory Doctorow entitled Metacrap, also written in 2001. (
 http://www.well.com/~doctorow/metacrap.htm) After 14 years of the
 Semantic Web not revolutionizing our world, I think history suggests that
 Cory Doctorow was right.

 But because the Semantic Web was the next big thing, millions of dollars
 were poured into it (mostly in the form of research grants and crappy
 specs, from what I can gather). In 2004, RDFa became the first big standard
 to emerge from this work. RDFa is a W3C Recommendation, and work is still
 proceeding on it.

 JSON-LD was started in 2008 as a JSON-based alternative to RDFa. As the
 author of JSON-LD, Manu Sporny, states:

 RDF is a shitty data model. It doesn’t have native support for lists.
 LISTS for fuck’s sake! [...] to work with RDF you typically needed a quad
 store, a SPARQL engine, and some hefty libraries. Your standard web
 developer has no interest in that toolchain because it adds more complexity
 to the solution than is necessary. (
 http://manu.sporny.org/2014/json-ld-origins-2/)

 However, though it originally wanted to distance itself from RDFa, JSON-LD
 ended up being chosen as a serialization for RDFa:

 Around mid-2012, the JSON-LD stuff was going pretty well and the newly
 chartered RDF Working Group was going to start work on RDF 1.1. One of the
 work items was a serialization of RDF for JSON. [...] The biggest problem
 being that many of the participants in the RDF Working Group at the time
 didn’t understand JSON. (ibid)

 (I just want everyone to note that in 2012, *THE AUTHORS OF RDFa DID NOT
 KNOW JSON*. This is in a spec that casually throws around propositional
 logic terms like entails, and subject-predicate-object triples.)

 JSON-LD is now a W3C recommendation, and has undergone added complexity to
 align it with RDFa. As Manu Sporny states, Nobody was happy with the
 result (ibid).

 Microdata is similar to RDFa, but without the benefit of being a W3C
 recommendation.

 Open Graph is a technology developed by Facebook. It's putatively a subset
 of RDFa. There is a small subset of Open Graph tags (og:title, og:type,
 og:url, and og:image) which are widely used for sharing content on social
 media like Facebook and Twitter.

 RDFa, Microdata, and JSON-LD can collectively be described as Linked
 Data technologies, so called because their intention is that semantic
 objects across different web pages would link to each other to create a
 Semantic Web.

 Microformats was developed circa 2005 as a lightweight way of putting
 semantic information into web pages, but does not aim to be a Linked Data
 or Semantic Web technology. It does not have an official standards body
 behind it, instead being maintained by a community of volunteers. One of
 our Mozilla employees, Tantek Çelik, was instrumental in its development.


Thanks for the history lesson :) When I started to research this area I
learnt very quickly that there are a lot of strong feelings on all sides
about which format is the best, and many formats claim to supersede each
other. The reality is that there's still no clear winner on the web. So
what I've tried to do is to take a data driven approach to look at which
syntaxes and vocabularies are getting the most traction according to
research papers based on the Common Crawl corpus, the Bing corpus and the
Yahoo corpus (all the data I've found so far).

There are two high level requirements for the Pin the Web features:
1) Getting the most possible user value out of the data that already exists
on the web today
2) Finding the best solution for the use cases we have in Gaia apps which
can be implemented in the time frame we have for the 2.5 release (Feature
Landing on 21st September)

Based on the data available and the level of effort of implementation my
most recent conclusions for those requirements were:

1) Open Graph
2) JSON-LD

However, there's also a case for bonus points for a solution that we as
Mozilla actually want to see used in the future!


 Okay, now I'd like to discuss whether or not we should use these
 technologies for Pinning The Web.

 Open Graph: I think we need to use the four tags og:title, og:type,
 og:url and og:image, since they are widely used. Apart from that, I
 don't think we need to support the rest of 

Re: Linked Data and a new Browser API event

2015-06-26 Thread Benjamin Francis
On 26 June 2015 at 08:29, Karl Dubost kdub...@mozilla.com wrote:

 Maybe there is a way to start small. Iterate. Look at the results. And
 push further in the direction which appears to be meaningful.


Exactly, I'm looking for a solid MVP that we can iterate on. More detailed
response to Ted's post coming...


Re: Linked Data must die. (was: Linked Data and a new Browser API event)

2015-06-26 Thread Benjamin Francis
On 26 June 2015 at 17:02, Anne van Kesteren ann...@annevk.nl wrote:

 I would encourage you to go a little deeper...
 We need to judge standards on their merits


I did look deeper. I read most of the specifications and several papers
on their adoption. My personal conclusion was that not only does
Microformats appear to be used less widely than other competing formats,
but that from a technical point of view just adding h- prefixes to class
names seems like a massive hack.

Many of the arguments I've heard in favour of Microformats are that it's
the grassroots or non-evil solution.

It's equally true that not being a W3C recommendation doesn't automatically
make something better either.

But I'm not the person that will have to implement this, and the people who
are think we should use Microformats.

Ben


Re: Linked Data and a new Browser API event

2015-06-26 Thread Benjamin Francis
On 26 June 2015 at 08:00, Anne van Kesteren ann...@annevk.nl wrote:

 Is the idea to just keep adding events for each bit of
 information we might need from a document?


That is how the Browser API works.

Ben


Re: Linked Data and a new Browser API event

2015-06-25 Thread Benjamin Francis
To follow up on this, there is resistance to implementing the more
complex Microdata or RDFa specifications in Gecko.

We definitely now need some form of Linked Data support for Firefox OS 2.5
so I'm suggesting the following: We should support Open Graph (because of
its wide usage by existing web content) and JSON-LD (because it supports
Gaia's more complex use cases).

Both of these should be simple to implement in Gecko as events on the
Browser API, without requiring any complex parsing on the Gecko side.

Open Graph just requires firing metachange events (see bug 962626 for an
example) for all meta tags which specify a property attribute. (This would
be a crude subset of RDFa. We don't need to specify particular vocabularies
in Gecko, just include the value of the property attribute in the payload
of the event).

<meta property="og:title" content="The Rock" />

JSON-LD just requires firing a new linkeddatachange event whenever a
JSON-LD script tag is encountered, sending the contents of the script tag
in the payload of the event.

<script type="application/ld+json">

We can then easily parse the JSON in Gaia and even store it directly in
our Places database.
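To make the shape of these two event payloads concrete, here is a hedged sketch (not real Gaia code) of the consumer-side handling. The payload shapes are assumptions: metachange events carrying { property, content } pairs, and linkeddatachange events carrying the raw text of the script tag.

```javascript
// Accumulate Open Graph meta tag payloads into a single object, e.g.
// [{ property: "og:title", content: "The Rock" }] -> { "og:title": "The Rock" }
function collectOpenGraph(metaEvents) {
  const og = {};
  for (const { property, content } of metaEvents) {
    // Only keep Open Graph properties; other meta tags are ignored here.
    if (property && property.startsWith('og:')) {
      og[property] = content;
    }
  }
  return og;
}

// Parse the payload of a hypothetical linkeddatachange event.
// Returns null rather than throwing, since page content can't be
// trusted to contain well-formed JSON.
function parseLinkedData(scriptText) {
  try {
    return JSON.parse(scriptText);
  } catch (e) {
    return null;
  }
}
```

In Gaia these functions would be fed from mozbrowser event listeners; keeping the parsing in pure functions makes it easy to test outside the browser.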

If there's resistance to implementing the more complex Microdata and
RDFa specifications in Gecko, then I don't think we should implement
Microformats either; the data I have and our experience through prototyping
just don't justify it.

Unless there's a really good reason not to do so, I'm going to file the
bugs and look towards getting this implemented on the Browser API as soon
as possible.

Thanks

Ben



Re: Browser API: iframe.executeScript()

2015-06-17 Thread Benjamin Francis
On 16 June 2015 at 16:24, Paul Rouget p...@mozilla.com wrote:

 In bug 1174733, I'm proposing a patch to implement the equivalent of
 Google's webview.executeScript:

 https://developer.chrome.com/apps/tags/webview#method-executeScript

 This will be useful to any consumer of the Browser API to access and
 manipulate the content.


In 2011, when we started talking about how to build a web browser using web
technologies for B2G, we discussed two options:
1. Provide the ability to inject scripts into cross-origin iframes with
access to the whole document
2. Build an explicit DOM API which just pokes the holes needed in the
cross-origin boundary to access the information needed to build a browser,
and nothing more

What you're talking about is the first option. We chose the second option
because we wanted to build a relatively safe API which could be used to
build third party browser apps (and other webview-type use cases). I've
written about this at some length in this blog post:
https://hacks.mozilla.org/2014/08/building-the-firefox-browser-for-firefox-os/

The Browser API we have today is accessible to privileged apps; there are
currently 13 apps in the Firefox Marketplace using the Browser API. Those
apps can do basic things like frame a cross-origin web page with
X-Frame-Options headers, navigate it and know when its location, title,
icon etc. change, but they can't read or modify the content of all the web
pages you visit, read your credit card details, etc. The most privileged
thing the Browser API provides is getScreenshot(), which I have argued
should be a separate API only available to certified apps for this very
reason.

The existing <iframe mozbrowser> element is very similar to Google's
<webview> element in Chrome OS and Microsoft's <x-ms-webview> element in
Windows. I still hold out hope that one day we might be able to standardise
this as a new <webview> HTML element http://benfrancis.github.io/webview/ -
what's missing is a standards-based security model fit for the purpose of
exposing this API to web content (as is the issue with many of our other
APIs).

The script injection approach is fundamentally different from the current
Browser API: it would essentially give the embedding document chrome
privileges over the embedded document and remove the cross-origin boundary
entirely. This is a valid approach, but it would essentially make the
Browser API redundant, and there would be no point in ever trying to
standardise it.

I'm not saying that you shouldn't take this approach; it would certainly
make some of the new use cases we have around Linked Data easier, because we
could just walk the DOM of any web page the user visits. But I want to make
it clear that this is not just an addition to the Browser API, it's a
replacement for it. My preference would be to keep the current approach and
extend it where needed for browser.html rather than taking this shortcut,
but I am probably biased. I know Jonas thinks there's little prospect of
standardising this API any time soon.

Please keep me in the loop in this discussion because it has huge
implications for the work my team and I do. In particular, let's talk
about what data browser.html needs access to in order to build its tab
previews, because it sounds very similar to our use case of extracting
Linked Data from a page.

Thanks

Ben


Re: Browser API: iframe.executeScript()

2015-06-17 Thread Benjamin Francis
On 17 June 2015 at 13:29, Paul Rouget p...@mozilla.com wrote:

 Extending the API every time we want to do something that goes beyond the
 API
 capabilities is painful and slow.


Yes, I'm acutely aware of this, having done it for the last three and a
half years :)


 The executeScript approach makes our
 life a lot
 easier and gives us a lot more flexibility.


I agree. But I also think it makes browser.html more like chrome privileged
code than web content. I'm not saying that's the wrong approach, it's just
a different approach to the Browser API. The goal of the Browser API was to
enable you to build a browser as web content.

I guess what I'm saying is that I don't really think an executeScript() API
has much to do with the Browser API; it's a parallel solution to the same
sorts of problems. I suppose it comes down to the same debate as "if none
of our privileged APIs get standardised, is privileged/certified content
just a new type of chrome?".

I'd still be interested to learn more about your use cases.


Re: Browser API: iframe.executeScript()

2015-06-17 Thread Benjamin Francis
On 17 June 2015 at 15:57, Paul Rouget p...@mozilla.com wrote:

 - access the computed style of the body to update the theme of the browser


By theme do you mean like a kind of automatic theme-color? You probably
know the b2g browser currently just uses the metachange event to get
theme-color meta tags for this, and falls back to a default.
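For illustration, the fallback logic described above might look something like this minimal sketch; the { name, content } payload shape for metachange events is an assumption:

```javascript
// Pick the page's theme colour from accumulated metachange event
// payloads, falling back to a default colour when no
// <meta name="theme-color"> tag was seen.
function pickThemeColor(metaEvents, fallbackColor) {
  for (const { name, content } of metaEvents) {
    if (name === 'theme-color' && content) {
      return content;
    }
  }
  return fallbackColor;
}
```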


 - walk through the DOM to get data to build a preview of the tab

- access any metadata (today the list is limited)
 - find the largest image of the page for a tab card


This all sounds like it might be similar to the cards we're creating to
represent pages in Pinning the Web
https://wiki.mozilla.org/FirefoxOS/Pinning_the_Web - we're planning on
using Linked Data to get the key metadata we need for this - like title,
description, image and other more specialised data for various content
types (we have a working prototype). We fallback to a default card composed
from a screenshot, theme-color and the document title.

 But mostly, being able to do more with the browser api without
 requiring an update of gecko.


That's just cheating ;)

Ben


Re: Linked Data and a new Browser API event

2015-06-04 Thread Benjamin Francis
On 3 June 2015 at 19:42, Benjamin Francis bfran...@mozilla.com wrote:

 This is what I'd really like to get more of, particularly usage data.


I've reached out to a few people at Yahoo, Google and a couple of
universities and have managed to turn up a few studies with useful data
[1][2][3][4].

My conclusions so far are:

   - Microformats are used on a large number of web sites, but are limited
   by their case-by-case syntax and more fixed vocabulary, and are less
   formally defined.
   - Microdata and RDFa are vocabulary-agnostic, which makes them inherently
   more extensible. They're increasing in popularity due to schema.org and
   consumption by major search engines, whilst the use of Microformats has
   remained relatively constant over time.
   - Microdata is a bit more concise than RDFa but doesn't allow for the
   mixing of vocabularies.
   - Open Graph is a simplistic form of RDFa with a limited vocabulary and
   limited usefulness in comparison to other formats, but is very widely used
   due to Facebook and Twitter being major consumers.
   - Microformats are used by more websites (domains) but Microdata is used
   by more web pages (more URLs, more typed entities and more triples) and is
   growing the fastest. Microformats have the breadth, but Microdata has the
   depth. In our case I think what we care about is the latter - the amount of
   pinnable content.
   - JSON-LD is the newest format, the main difference being that it isn't
   intended to be embedded within HTML markup, but is included separately in
   a script tag. It's also useful as a canonical JSON-based format to
   represent all of the other formats.

That leads me to recommend that we do the following:

   - Parse Microdata and RDFa (including Open Graph) from web pages in Gecko
   - Expose all of this data to Gaia via a single getLinkedData() or
   getStructuredData() method on the Browser API which returns a Promise that
   resolves with the data in a canonical JSON-LD format
   - Also consider supporting JSON-LD directly as no parsing is required,
   we just need to detect a script tag
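To illustrate the "canonical JSON-LD format" in the second recommendation, here is a hedged sketch of normalising items parsed from Microdata or RDFa into a single JSON-LD document. The { type, properties } input shape and the getStructuredData() framing are assumptions for the sketch, not a real Gecko API:

```javascript
// Normalise parsed structured-data items into one JSON-LD document,
// the kind of value a getStructuredData()-style Promise could resolve
// with. Each input item is assumed to look like:
//   { type: 'Movie', properties: { name: 'The Rock' } }
function toJsonLd(items) {
  return {
    '@context': 'http://schema.org',
    '@graph': items.map(function (item) {
      // Merge the item's properties under a schema.org @type.
      return Object.assign({ '@type': item.type }, item.properties);
    }),
  };
}
```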

If anyone finds any more usage data, or has a different interpretation of
the data below, then please do share.

Thanks

Ben

   1. Web Data Commons website based on Common Crawl corpus (2009-2014)
   http://webdatacommons.org/
   2. Web Data Commons Paper based on Common Crawl Corpus (2009-2012)
   http://events.linkeddata.org/ldow2012/papers/ldow2012-inv-paper-2.pdf
   3. Yahoo post based on Yahoo corpus (2011)

   https://tripletalk.wordpress.com/2011/01/25/rdfa-deployment-across-the-web/
   4. Yahoo paper based on Bing corpus (2012)
   http://events.linkeddata.org/ldow2012/papers/ldow2012-inv-paper-1.pdf


Re: Linked Data and a new Browser API event

2015-06-04 Thread Benjamin Francis
On 4 June 2015 at 03:27, Michael[tm] Smith m...@w3.org wrote:

 As came up in some off-list discussion with Anne, is the “Manifest for a
 web application” spec at https://w3c.github.io/manifest/ not relevant
 here?
 (Nothing to reverse engineer, since it has an actual spec—with defined
 processing requirements—and at least one other browser-engine project is
 also contributing to it and implementing it.)


Yes, we already support W3C web app manifests in our prototype, and it's a
key part of the implementation.

A manifest provides metadata for a website as a whole, whereas Linked Data
provides metadata for a particular web page.

When you pin a whole site we use the manifest (and fall back to other
metadata when not available), and when you pin a page we use Linked Data
(and fall back to other metadata when not available).

Ben


Re: Linked Data and a new Browser API event

2015-06-03 Thread Benjamin Francis
Thanks for all the responses so far! Comments inline...

On 30 May 2015 at 21:09, Jonas Sicking jo...@sicking.cc wrote:

 We should use whatever formats people are using to mark up pages. If that
 is microdata we should use that. If it's RDF we should use that. If its
 JSONLD we should use that.

Agree, that's what I'd like to find out.

On 30 May 2015 at 21:31, Gordon Brander gbran...@mozilla.com wrote:

 We should consider a series of fallbacks for this internal API.

 The metadata story for things like icon, title, description, hero images,
 is complicated. Implementations in the wild follow real-world use cases,
 like posting rich snippets to Facebook or getting an image to show up on
 Twitter, rather than some standard.

 I think it would be best to think of this kind of API as a sort of light
 scraper that crawls through a collection of known in-the-wild patterns to
 provide a good-enough answer.


Agree, I think we will at least initially need to support multiple formats,
ideally translating them all internally into a single format that Gaia can
consume via the Browser API.

On 2 June 2015 at 00:53, Karl Dubost kdub...@mozilla.com wrote:

 if not done yet, you might want to talk with Dan Brickley. He is working
 at Google on everything related to schema.org. danbri -AT- google.com


Thanks, I will!

It might be possible to have conversion tables between the different
 markups, making it easier to start with one markup and build up little by
 little.


The JSON-LD spec gives examples of how Turtle, RDFa, Microformats and
Microdata can be expressed in JSON-LD [1]. Given JSON is an ideal format
for us to use in Gaia, I like the idea of internally translating into that.
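As a toy illustration of that internal translation, the sketch below maps a handful of Open Graph properties onto schema.org terms in a JSON-LD object. The og:-to-schema.org mapping here is illustrative only, not a defined standard mapping:

```javascript
// Illustrative og: -> schema.org property mapping (an assumption for
// this sketch; there is no single standard mapping).
var OG_TO_SCHEMA = {
  'og:title': 'name',
  'og:description': 'description',
  'og:image': 'image',
  'og:url': 'url',
};

// Translate a flat object of Open Graph properties, e.g.
// { 'og:title': 'The Rock' }, into a JSON-LD object.
function openGraphToJsonLd(ogProperties) {
  var result = { '@context': 'http://schema.org' };
  Object.keys(ogProperties).forEach(function (key) {
    var mapped = OG_TO_SCHEMA[key];
    if (mapped) {
      result[mapped] = ogProperties[key];
    }
  });
  return result;
}
```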

On 2 June 2015 at 01:34, Jonas Sicking jo...@sicking.cc wrote:

 I think we're already talking about reverse-engineering what search
 engines and twitter/facebook/etc do.


Exactly, this is about getting more value out of the content that already
exists on the web, not defining new ways to create content.

But given how small marketshare browsers in general have as metadata
 consumers, I think any standardization efforts would have to be driven
 by the current metadata consumers, like search engines and social
 networks.


Agree. Though that doesn't prevent us from contributing to that discussion
if we have something to say.

On 2 June 2015 at 01:42, Gordon Brander gbran...@mozilla.com wrote:

 Yup. We’re really talking about 2 things in parallel:

 1. Defining a standards-based approach to marking these things up (using
 pre-existing patterns where it makes sense). Encouraging authors to use it.

 2. Creating internal APIs that will leverage this metadata, and in cases
 where the standards-based metadata does not exist, scraping reasonable
 results from other common metadata or markup patterns.


Agree, except I don't want to solve the problem of multiple formats by
creating another format. I'd like to either pick one of the existing
formats or (more likely) hedge our bets and support multiple popular
formats, giving developer warnings for non-standard usage where necessary.
If we find we have suggestions of how to improve the existing formats, then
we should participate in the groups that already exist to make that happen.

On 3 June 2015 at 00:45, Tantek Çelik tan...@cs.stanford.edu wrote:

 The summary among all the myriad proprietary (read: single corp /
 oligopoly controlled) proposals is that Facebook OGP meta tags have a
 strong lead over all the other proprietary approaches


That seems to match our anecdotal experience in building a prototype. Open
Graph is quite primitive in comparison to other formats in terms of what
can be expressed (and it's not clear to me whether it validates as either
valid HTML5 or valid RDFa), but it does seem like a clear contender.


 (for various
 reasons we can get into offline if desired),


I would like to understand those reasons. Are there reasons beyond
"Facebook and Twitter make use of this data, so people add it to their web
pages"?

while among the open
 standards community options - i.e. per Mozilla open web principles,
 microformats have the lead.


That is the answer I would expect from the person whose name happens to be
used as an hCard example in a W3C spec under the heading of Microformats
[2] ;)


 This analysis and conclusion matches what we've been figuring out with
 implementations and deployments in the IndieWebCamp community as well
 (which has several use-cases similar to pins/cards for providing
 summaries/link-previews of pages on the web). In short, the general
 approach involves parsing for two general sets of published data /
 markup:

 1. Pragmatic parsing of what's most out there:
   a) according to anecdotal sampling by the indieweb community,
 Facebook OGP, and
   b) according to studies / open/common crawl datasets: classic
 microformats

 2. Simplest open standards based approach (so we can recommend the
 easiest, least work, and most openly developed/maintained 

Re: Linked Data and a new Browser API event

2015-05-30 Thread Benjamin Francis
On 30 May 2015 at 00:56, Anne van Kesteren ann...@annevk.nl wrote:

 We've bitten ourselves before going down the RDF rathole (see
 extensions et al). Not sure we should so rapidly start again. Why
 can't you use the Microdata API?


Is this already supported in Gecko? I can't find it documented anywhere,
except a partial implementation in bug 591467 and a suggestion to remove it
again in bug 909633.

If it was, that would help, but it wouldn't quite solve the problem
we're trying to solve here. We need to get the Linked Data from an embedded
mozbrowser iframe for use in the system app, and we don't have access to the
Document object (hence the Browser API). If there was an implementation of
the Microdata DOM API, I guess we could hook a getLinkedData() method up to
that inside Gecko.

But Microdata is only one of the formats widely used on the web today. I'd
like to see some evidence-based discussion on which format(s) we should
support to get the most possible value out of what already exists on the
web. The examples we used in our prototype all use Open Graph, which seems
quite widely used (mainly due to Facebook and Twitter) and is based on RDFa.

JSON-LD seems like a convenient format which can express them all, and is
useful in its own right. We could quite easily detect script tags in the
DOM like <script type="application/ld+json">.

Examples are provided for RDFa, Microdata and JSON-LD on all the schema.org
schemas, but I'm not sure what weighting these are given by the various
search engines. If anyone has data on that it would be really helpful!

Ben


Linked Data and a new Browser API event

2015-05-28 Thread Benjamin Francis
Hi,

In Gaia the Systems Front End team is implementing the Pinning the Web
design concept [1] which amongst other things represents pinned web pages
as cards on the homescreen. The goal is that where possible these cards
should not just be a thumbnail screenshot of the page, but should be a
meaningful representation of key metadata associated with the page based on
a particular type, and even associated actions.

The idea is that these cards could be generated from the Linked Data [2]
that many web pages already include for the consumption of search engines
(to create rich snippets in search results) but which browsers currently
ignore.

All the big search engines agree on schemas for this data at schema.org [3]
but they support multiple formats for encoding this information in a web
page including Microdata, RDFa and JSON-LD. In short, there's a format war
going on.

In the prototype we created [4] we supported some basic use cases by parsing
Open Graph meta tags from web pages [5], Open Graph [6] being a kind of
simplified form of RDFa.

For the real implementation I suggest we investigate supporting one or more
formats for Linked Data in web pages (based on level of adoption) and
surface them to Gaia through a linkeddatachange event on the Browser API. I
propose that the payload of this event should be the contents of the Linked
Data expressed in JSON-LD [7].

JSON-LD is a W3C recommendation and can be used to express data from any of
the other Linked Data formats. The JSON encoding is particularly suitable
for the use cases in Gaia as it can easily be parsed in JavaScript and
stored in the places database in IndexedDB/DataStore.
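As a rough sketch of that flow, a homescreen card might be derived from the parsed JSON-LD like this; the card shape and fallback rules are assumptions for illustration, and the field names (name/headline, description, image) follow common schema.org usage:

```javascript
// Derive a pin card from a page's parsed JSON-LD object, falling back
// to a supplied title (e.g. the document title) when no Linked Data
// title is present. A hypothetical card shape, not real Gaia code.
function cardFromJsonLd(jsonLd, fallbackTitle) {
  return {
    title: jsonLd.name || jsonLd.headline || fallbackTitle,
    description: jsonLd.description || '',
    image: jsonLd.image || null,
    type: jsonLd['@type'] || null,
  };
}
```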

The best data I've found so far on adoption of these formats is from the
University of Mannheim [8]; it perhaps suggests that Microdata is on the
increase while RDFa is staying relatively constant, but it's far from
clear. And this study doesn't include data for JSON-LD usage.

In an ideal world we'd support RDFa, Microdata and JSON-LD and convert them
all into JSON-LD for consumption in Gaia, but we could also pick a side in
the format war based on usage data.

Has support for Linked Data been looked into before? Is this something we
can get into the platform?

Thanks

Ben

1.
https://docs.google.com/presentation/d/17CGWPwu59GB7miyY1ErTjr4Wb-kS-rM7dB3MAMVO9HU/pub#slide=id.p
2. http://en.wikipedia.org/wiki/Linked_data
3. http://schema.org/
4. https://www.youtube.com/watch?v=FiLnRoRjD5k
5. https://gist.github.com/mikehenrty/6c506767b0fb15aaa2d4
6. http://ogp.me/
7. http://www.w3.org/TR/json-ld/
8. http://webdatacommons.org/structureddata/index.html


Re: Linked Data and a new Browser API event

2015-05-28 Thread Benjamin Francis
On 28 May 2015 at 18:13, Benjamin Francis bfran...@mozilla.com wrote:

 For the real implementation I suggest we investigate supporting one or
 more formats for Linked Data in web pages (based on level of adoption) and
 surface them to Gaia through a linkeddatachange event on the Browser API. I
 propose that the payload of this event should be the contents of the Linked
 Data expressed in JSON-LD [7].


Actually what might make more sense is a getLinkedData() method on the API
which returns a Promise that resolves with the JSON data, as I think we're
also going to need a getManifest() method which could work in a similar way.