Re: [Wikitech-l] secure slower and slower

2009-07-07 Thread Aryeh Gregor
On Tue, Jul 7, 2009 at 1:19 AM, Marco
Schuster <ma...@harddisk.is-a-geek.org> wrote:
 Public congresses, schools without protection for ARP spoofing (I got 0wned
 this way myself), maybe corporate networks w/o proper network setup... they
 all allow sniffing or in-line traffic manipulation.
 Not that uncommon attacks, and when you know the colleague you do not like
 is WP admin, you simply have to wait for him to visit WP logged in, and you
 have either his pass or the cookies.

Yes, I'm aware all this is possible in theory.  Even more trivially,
just set up a nice high-quality wireless hotspot and do whatever you
want with the traffic.  But do you know of any time this has
*actually* *happened*?  Where a malicious person has successfully
staged a MITM attack in the wild against a typical person using the
Internet, in the last decade or two?



Re: [Wikitech-l] Proposal: switch to HTML 5

2009-07-07 Thread Gregory Maxwell
On Tue, Jul 7, 2009 at 1:54 AM, Aryeh
Gregor <simetrical+wikil...@gmail.com> wrote:
[snip]
 * We could support <video>/<audio> on conformant user agents without
 the use of JavaScript.  There's no reason we should need JS for
 Firefox 3.5, Chrome 3, etc.


Of course, that could be done without switching the rest of the site to HTML5...

Although I'm not sure that giving the actual <video> tags is desirable.
It's a tradeoff:

Work for those users when JS isn't enabled and correctly handle saving
the full page including the videos, versus taking more traffic from
clients doing range requests to generate the poster image, and
potentially traffic from clients which decide to go ahead and fetch the
whole video regardless of whether the user asked for it.

There is also still a bug in FF3.5 where the built-in video
controls do not work when JS is fully disabled.  (Because the controls
are written in JS themselves.)


(To be clear to other people reading this: the MediaWiki OggHandler
extension already uses HTML5 and works fine with Firefox 3.5, etc. But
this only works if you have JavaScript enabled.  The site could
instead embed the video elements directly, and only use JS to
substitute fallbacks for the <video> tag when it detects that the
<video> tag can't be used.)
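
For readers following along, here is a rough, purely illustrative sketch of
what direct embedding might look like (file names, dimensions, and the
fallback link are invented; the markup OggHandler actually emits is more
involved):

  <video src="Example_video.ogv" poster="Example_video.jpg"
         controls width="400" height="300">
    <!-- Rendered only by browsers without native <video> support;
         script could swap in the Cortado applet or a download link here. -->
    <a href="Example_video.ogv">Download this video</a>
  </video>

A browser with native Ogg support plays the file without any script; the
content inside the element is only what non-supporting browsers fall back to.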


Re: [Wikitech-l] Proposal: switch to HTML 5

2009-07-07 Thread Remember the dot
Okay, first thoughts:

On Mon, Jul 6, 2009 at 11:54 PM, Aryeh Gregor
<simetrical+wikil...@gmail.com> wrote:

 It's clear at this point that HTML 5 will be the next version of HTML.
  It was obvious for a long time that XHTML was going nowhere, but now
 it's official: the XHTML working group has been disbanded and work on
 all non-HTML 5 variants of HTML has ceased.  (Source:
 http://www.w3.org/2009/06/xhtml-faq.html)


That page clearly says that there will be an XHTML 5. XHTML is not going
away.


 * Delete '<meta http-equiv="Content-Style-Type" content="text/css"
 />'.  Which is a really stupid element anyway.  :P
 * Delete name attributes from all <a> elements.  They've been
 redundant to id for eternity, and every browser in the universe
 supports id; we can finally move these to the headers themselves.
 * Remove comments from inside <script> tags with a src attribute.  I
 already did this in r52828, since they're pointless anyway.


Good ideas.

* We can use HTML 5 form attributes.  These will enhance the
 experience of users of appropriate browsers, and do nothing for
 others.  At least Opera 9.6x already supports almost all HTML 5 form
 attributes.  (Source:
 http://www.opera.com/docs/specs/presto211/forms/)  We could, for
 instance, give required fields the required attribute, which will
 cause the browser to prevent the form submission and notify the user
 if they aren't filled in, without needing either JavaScript or a
 server-side check.


What's to prevent a malicious user from manually posting an invalid
submission? If there are no server-side checks, will the servers crash?


 2) Once this goes live, if no problems arise, try causing an XML
 well-formedness error.  For instance, remove the quote marks around
 one attribute of an element that's included in every page.  I suggest
 this as a separate step because I suspect there are some bot operators
 who are doing screen-scraping using XML libraries, so it would be a
 good idea to assess how feasible it is at the present time to stop
 being well-formed.  In the long run, of course, those bot operators
 should switch to using the API.  If we receive enough complaints once
 this goes live, we can revert it and continue to ship HTML 5 that's
 also well-formed XML, for the time being.


Why be cruel to our bot operators? XHTML is simpler and more consistent than
tag soup HTML, and it's a lot easier to find a good XML parser than a good
HTML parser.

So, while I see some benefit to switching to HTML 5, I'd prefer to use XHTML
5 instead.

-- 
Remember the dot
http://en.wikipedia.org/wiki/User:Remember_the_dot


Re: [Wikitech-l] Proposal: switch to HTML 5

2009-07-07 Thread Andrew Garrett

On 07/07/2009, at 7:37 AM, Remember the dot wrote:

 Okay, first thoughts:

 On Mon, Jul 6, 2009 at 11:54 PM, Aryeh Gregor
 <simetrical+wikil...@gmail.com> wrote:

 It's clear at this point that HTML 5 will be the next version of HTML.
 It was obvious for a long time that XHTML was going nowhere, but now
 it's official: the XHTML working group has been disbanded and work on
 all non-HTML 5 variants of HTML has ceased.  (Source:
 http://www.w3.org/2009/06/xhtml-faq.html)


 That page clearly says that there will be an XHTML 5. XHTML is not
 going away.

 * We can use HTML 5 form attributes.  These will enhance the
 experience of users of appropriate browsers, and do nothing for
 others.  At least Opera 9.6x already supports almost all HTML 5 form
 attributes.  (Source:
 http://www.opera.com/docs/specs/presto211/forms/)  We could, for
 instance, give required fields the required attribute, which will
 cause the browser to prevent the form submission and notify the user
 if they aren't filled in, without needing either JavaScript or a
 server-side check.

 What's to prevent a malicious user from manually posting an invalid
 submission? If there are no server-side checks, will the servers crash?

... Or from using a browser that doesn't support them.  We're obviously
not going to be removing server-side checks in favour of client-side
checks; that's stupid.  We're adding client-side checks to enhance
usability.

 2) Once this goes live, if no problems arise, try causing an XML
 well-formedness error.  For instance, remove the quote marks around
 one attribute of an element that's included in every page.  I suggest
 this as a separate step because I suspect there are some bot operators
 who are doing screen-scraping using XML libraries, so it would be a
 good idea to assess how feasible it is at the present time to stop
 being well-formed.  In the long run, of course, those bot operators
 should switch to using the API.  If we receive enough complaints once
 this goes live, we can revert it and continue to ship HTML 5 that's
 also well-formed XML, for the time being.


 Why be cruel to our bot operators? XHTML is simpler and more consistent
 than tag soup HTML, and it's a lot easier to find a good XML parser than
 a good HTML parser.

They should be using the API.

 So, while I see some benefit to switching to HTML 5, I'd prefer to use
 XHTML 5 instead.

You've given one benefit of XHTML 5, which is negated by the fact that
we provide the API for a consistent machine-readable interface, and by
the benefits of HTML 5 that Aryeh has outlined.  What other advantages
are there?

--
Andrew Garrett
Contract Developer, Wikimedia Foundation
agarr...@wikimedia.org
http://werdn.us






[Wikitech-l] SVN commit access for new SlippyMap

2009-07-07 Thread Christian Becker

Hi all,
I'm developing the new OSM SlippyMap with Aude & Avar. As our code has
now made it into the Wikimedia trunk, I could use SVN commit access.
As for my contributions, I externalized the JavaScript code and made  
it object-oriented, added support for image placeholders (i.e. click  
to get a dynamic map), and did lots of refactoring (see [1], our  
previous external repository).

I'd prefer the username beckr; my public key is at [2].

Cheers,

Christian

[1] http://code.google.com/p/wikimaps/updates/list
[2] http://beckr.org/key.pub


Re: [Wikitech-l] secure slower and slower

2009-07-07 Thread William Allen Simpson
Aryeh Gregor wrote:
 Yes, I'm aware all this is possible in theory.  Even more trivially,
 just set up a nice high-quality wireless hotspot and do whatever you
 want with the traffic.  But do you know of any time this has
 *actually* *happened*?  Where a malicious person has successfully
 staged a MITM attack in the wild against a typical person using the
 Internet, in the last decade or two?
 
*Yes.* Of course, I've long been involved in Internet security, so I'm
privy to information that is discussed more privately by the vetted.

Moreover, there have certainly been *claims* that Wikipedia accounts have
been hijacked.  Folks have been adding ownership hashes to their user
pages, to be able to re-establish ownership.

You may be thinking about the various proofs of concept for MITM against
SSL, which is certainly possible (although impractical without financing).
But MITM capture of passwords and cleartext cookies sent over plain HTTP
is well known.

A fairly public example that comes to mind -- for a considerably less well
known site than Wikipedia (but very popular in its day) -- was a MUD.  An
Immortal account was hijacked, and through a known software bug,
privileges were escalated to God.

The miscreants actually wiped the entire user account database, causing
thousands to lose their accumulated belongings and status.  The daily
backups were inconsistent, and after several days of examining the static
data for trapdoors and other problems, the site was restored with all
players having to start over.

Now, think about that being Wikipedia.  Does anybody really think the
software here is bug-free?  Defense in depth helps.

Some may not think that this site is critical, or valuable, or whatever.
But I joined this list back when ISP support calls were escalating
because of lag.  Imagine the monetary cost to the world for complete site
failure or massive disruption.

Those with administrator or other privileges should use the secure server.
Heck, they should be prohibited from logging in by any other means.




Re: [Wikitech-l] Proposal: switch to HTML 5

2009-07-07 Thread Aryeh Gregor
On Tue, Jul 7, 2009 at 2:37 AM, Remember the
dot <rememberthe...@gmail.com> wrote:
 That page clearly says that there will be an XHTML 5. XHTML is not going
 away.

By "XHTML" I meant the family of standards including XHTML 1.0, 1.1,
2.0, etc.  XHTML 5 is identical to HTML 5 except with a different
serialization.  Practically speaking, however, it looks like no one
will use XHTML 5 either, because it's impossible to deploy on the
current web.  (See below.)  As far as I can tell, it was thrown in as
a sop to XML fans, on the basis that it cost very little to add it to
the spec (given the definition in terms of DOM plus serializations),
without any expectation that anyone will use it in practice.

 What's to prevent a malicious user from manually posting an invalid
 submission? If there are no server-side checks, will the servers crash?

Obviously there will be server-side checks as well!  This will just
serve to inform the user immediately that they're missing a required
field, without having to wait for the server or use JavaScript.
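
As a hedged illustration (the form and field name here are invented, not
MediaWiki's actual markup), the idea is simply:

  <form method="post" action="/w/index.php">
    <input type="text" name="summary" required>
    <input type="submit" value="Save">
  </form>

A conforming browser refuses to submit the form while the required field is
empty and tells the user why; older browsers ignore the unknown attribute,
and the existing server-side validation still applies.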

 Why be cruel to our bot operators? XHTML is simpler and more consistent than
 tag soup HTML, and it's a lot easier to find a good XML parser than a good
 HTML parser.

Because it will make the markup easier to read and write for humans,
and smaller.  Things like leaving off superfluous closing elements do
not make for tag soup.  One of the great features of HTML 5 is that
it very carefully defines the text/html parsing model in painstaking
backward-compatible detail.  For example, the description of unquoted
attributes is as follows:

The attribute name, followed by zero or more space characters,
followed by a single U+003D EQUALS SIGN character, followed by zero or
more space characters, followed by the attribute value, which, in
addition to the requirements given above for attribute values, must
not contain any literal space characters, any U+0022 QUOTATION MARK
(") characters, U+0027 APOSTROPHE (') characters, U+003D EQUALS SIGN
(=) characters, U+003C LESS-THAN SIGN (<) characters, or U+003E
GREATER-THAN SIGN (>) characters, and must not be the empty string.

If an attribute using the unquoted attribute syntax is to be followed
by another attribute or by one of the optional U+002F SOLIDUS (/)
characters allowed in step 6 of the start tag syntax above, then there
must be a space character separating the two.
http://dev.w3.org/html5/spec/Overview.html#attributes

Given that browsers need to implement all these complicated algorithms
anyway, there's no reason to prohibit the use of convenient shortcuts
for authors.  They're absolutely well-defined, and even if they're
more complicated for machines to parse, they're easier for humans to
use than the theoretically simpler XML rules.
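
To make the difference concrete, here is a hypothetical fragment in both
styles; each is conforming HTML 5, and the second just uses the shortcuts
described above:

  <!-- XML-style: every value quoted, every element explicitly closed -->
  <ul class="portlet">
    <li id="n-mainpage"><a href="/wiki/Main_Page">Main page</a></li>
  </ul>

  <!-- text/html shortcuts: unquoted attribute values, optional end tags -->
  <ul class=portlet>
    <li id=n-mainpage><a href=/wiki/Main_Page>Main page</a>
  </ul>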


Anyway.  Bots should not be scraping the site.  They should be using
the bot API, which is *vastly* easier to parse for useful data than
any variant of HTML or XHTML.  We could use this as an opportunity to
push bot operators toward using the API -- screen-scraping has always
been fragile and should be phased out anyway.  Bot operators who
screen-scrape will already break on other significant changes anyway;
how many screen-scrapers will keep working when Vector becomes the
default skin?

So I view the added difficulty of screen-scraping as a long-term side
benefit of switching to HTML 5, like validation failures for
presentational elements.  It makes behavior that was already
undesirable more *obviously* undesirable.

Clearly we can't break all the bots, though.  So try breaking XML
well-formedness.  If there are only a few isolated complaints, go
ahead with it.  If it causes large-scale breakage, revert and tell all
the bot operators to switch to the API, then try again in a few months
or a year.  Or when we enable Vector, which will probably break all
the bots anyway.

 So, while I see some benefit to switching to HTML 5, I'd prefer to use XHTML
 5 instead.

XHTML 5, by definition, must be served under an XML MIME type.
Anything served as text/html is not XHTML 5, and is required to be an
HTML (not XHTML) serialization.  We cannot serve content under
non-text/html MIME types, because that would break IE, so we can't use
XHTML 5.  Even if we could, it would still be a bad idea.  In XHTML 5,
as in all XML, well-formedness errors are fatal.  And we can't ensure
that well-formedness errors are impossible without rewriting a lot of
the parser *and* UI code.

We can, however, serve HTML 5 that happens to also be well-formed XML.
 This will allow XML parsers to be used, and is what I propose we do
to start with.
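
A minimal sketch of that compromise (a skeleton, not MediaWiki's real
output); because every attribute is quoted, every element is closed, and
void elements are self-closed, an XML parser accepts it as well:

  <!DOCTYPE html>
  <html xmlns="http://www.w3.org/1999/xhtml" lang="en">
    <head>
      <meta charset="utf-8" />
      <title>Example page</title>
    </head>
    <body>
      <p>Served as text/html and parsed by browsers as HTML 5, yet still
         loadable with an off-the-shelf XML parser.</p>
    </body>
  </html>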

On Tue, Jul 7, 2009 at 2:48 AM, Gregory Maxwell <gmaxw...@gmail.com> wrote:
 What do you think we're doing now? A jpeg 'poster' is displayed. When
 the user clicks the poster is replaced by the appropriate playback
 mechanism.

I'm confused.  What we're currently doing (correct me if I'm wrong) is
displaying a JPEG <img> as a poster, and replacing it via JavaScript
with the appropriate content when it's 

Re: [Wikitech-l] secure slower and slower

2009-07-07 Thread William Allen Simpson
Marco Schuster wrote:
 Public congresses, schools without protection for ARP spoofing (I got 0wned
 this way myself), maybe corporate networks w/o proper network setup... they
 all allow sniffing or in-line traffic manipulation.
 Not that uncommon attacks, and when you know the colleague you do not like
 is WP admin, you simply have to wait for him to visit WP logged in, and you
 have either his pass or the cookies.
 
(heavy sigh) The use of disclosing passwords (and plaintext cookies) is the
bane of Internet security. Getting the secure server working is important.

===

Meantime, here's an error message that happens frequently during editing:

Proxy Error

The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request POST /wikipedia/en/w/index.php.

Reason: Error reading from remote server

===

I get a similar error sometimes on Preview or just reading a page:

Proxy Error

The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET /wikipedia/en/wiki/Wikipedia 
talk:Biographies of living persons.

Reason: Error reading from remote server




Re: [Wikitech-l] SVN commit access for new SlippyMap

2009-07-07 Thread Christian Becker
Did that, thanks!

Cheers,

Christian

On Jul 7, 2009, at 1:10 PM, Chad wrote:

 Drop a note on the [[Commit access requests]] page
 on Mediawiki.org too. Trying to keep requests all in one
 place these days :)

 -Chad

 On Jul 7, 2009 7:00 AM, Christian Becker <ch...@beckr.org> wrote:

 Hi all,
 I'm developing the new OSM SlippyMap with Aude & Avar. As our code has
 now made it into the Wikimedia trunk, I could use SVN commit access.
 As for my contributions, I externalized the JavaScript code and made it
 object-oriented, added support for image placeholders (i.e. click to
 get a dynamic map), and did lots of refactoring (see [1], our previous
 external repository).
 I'd prefer the username beckr; my public key is at [2].

 Cheers,

 Christian

 [1] http://code.google.com/p/wikimaps/updates/list
 [2] http://beckr.org/key.pub






Re: [Wikitech-l] Proposal: switch to HTML 5

2009-07-07 Thread Sergey Chernyshev
Great, looks like the HTML5 vs. XHTML fight is infecting everything.

Just my 2 cents - I don't think that switching to a spec that is not yet
a W3C Recommendation is a good idea - many extensions and features are not yet
finished (e.g. RDFa support for it) and considering a huge commotion in this
area it might not be a very good decision.

Thank you,

Sergey


--
Sergey Chernyshev
http://www.sergeychernyshev.com/


On Tue, Jul 7, 2009 at 9:38 AM, Aryeh Gregor
<simetrical+wikil...@gmail.com> wrote:

 On Tue, Jul 7, 2009 at 2:37 AM, Remember the
 dot <rememberthe...@gmail.com> wrote:
  That page clearly says that there will be an XHTML 5. XHTML is not going
  away.

 By "XHTML" I meant the family of standards including XHTML 1.0, 1.1,
 2.0, etc.  XHTML 5 is identical to HTML 5 except with a different
 serialization.  Practically speaking, however, it looks like no one
 will use XHTML 5 either, because it's impossible to deploy on the
 current web.  (See below.)  As far as I can tell, it was thrown in as
 a sop to XML fans, on the basis that it cost very little to add it to
 the spec (given the definition in terms of DOM plus serializations),
 without any expectation that anyone will use it in practice.

  What's to prevent a malicious user from manually posting an invalid
  submission? If there are no server-side checks, will the servers crash?

 Obviously there will be server-side checks as well!  This will just
 serve to inform the user immediately that they're missing a required
 field, without having to wait for the server or use JavaScript.

  Why be cruel to our bot operators? XHTML is simpler and more consistent
  than tag soup HTML, and it's a lot easier to find a good XML parser than
  a good HTML parser.

 Because it will make the markup easier to read and write for humans,
 and smaller.  Things like leaving off superfluous closing elements do
 not make for tag soup.  One of the great features of HTML 5 is that
 it very carefully defines the text/html parsing model in painstaking
 backward-compatible detail.  For example, the description of unquoted
 attributes is as follows:

 The attribute name, followed by zero or more space characters,
 followed by a single U+003D EQUALS SIGN character, followed by zero or
 more space characters, followed by the attribute value, which, in
 addition to the requirements given above for attribute values, must
 not contain any literal space characters, any U+0022 QUOTATION MARK
 (") characters, U+0027 APOSTROPHE (') characters, U+003D EQUALS SIGN
 (=) characters, U+003C LESS-THAN SIGN (<) characters, or U+003E
 GREATER-THAN SIGN (>) characters, and must not be the empty string.

 If an attribute using the unquoted attribute syntax is to be followed
 by another attribute or by one of the optional U+002F SOLIDUS (/)
 characters allowed in step 6 of the start tag syntax above, then there
 must be a space character separating the two.
 http://dev.w3.org/html5/spec/Overview.html#attributes

 Given that browsers need to implement all these complicated algorithms
 anyway, there's no reason to prohibit the use of convenient shortcuts
 for authors.  They're absolutely well-defined, and even if they're
 more complicated for machines to parse, they're easier for humans to
 use than the theoretically simpler XML rules.


 Anyway.  Bots should not be scraping the site.  They should be using
 the bot API, which is *vastly* easier to parse for useful data than
 any variant of HTML or XHTML.  We could use this as an opportunity to
 push bot operators toward using the API -- screen-scraping has always
 been fragile and should be phased out anyway.  Bot operators who
 screen-scrape will already break on other significant changes anyway;
 how many screen-scrapers will keep working when Vector becomes the
 default skin?

 So I view the added difficulty of screen-scraping as a long-term side
 benefit of switching to HTML 5, like validation failures for
 presentational elements.  It makes behavior that was already
 undesirable more *obviously* undesirable.

 Clearly we can't break all the bots, though.  So try breaking XML
 well-formedness.  If there are only a few isolated complaints, go
 ahead with it.  If it causes large-scale breakage, revert and tell all
 the bot operators to switch to the API, then try again in a few months
 or a year.  Or when we enable Vector, which will probably break all
 the bots anyway.

  So, while I see some benefit to switching to HTML 5, I'd prefer to use
  XHTML 5 instead.

 XHTML 5, by definition, must be served under an XML MIME type.
 Anything served as text/html is not XHTML 5, and is required to be an
 HTML (not XHTML) serialization.  We cannot serve content under
 non-text/html MIME types, because that would break IE, so we can't use
 XHTML 5.  Even if we could, it would still be a bad idea.  In XHTML 5,
 as in all XML, well-formedness errors are fatal.  And we can't ensure
 that well-formedness errors are impossible without 

Re: [Wikitech-l] Proposal: switch to HTML 5

2009-07-07 Thread Aryeh Gregor
On Tue, Jul 7, 2009 at 2:29 PM, Sergey
Chernyshev <sergey.chernys...@gmail.com> wrote:
 Just my 2 cents - I don't think that switching to a spec that is not yet
 a W3C Recommendation is a good idea - many extensions and features are
 not yet finished (e.g. RDFa support for it)

Much of the spec is very stable.  We would not be using any part
that's likely to change -- in most cases, only parts that have
multiple interoperable implementations.  Such parts of the spec will
not change significantly; that's a basic principle of most W3C specs'
development processes (and HTML 5's in particular).

We use other W3C specs that nominally aren't stable, e.g., some parts
of CSS.  We used plenty of CSS 2.1 when that was still nominally a
Working Draft.  We use multi-column layout (at least in our content on
enwiki) even though that's a Working Draft.  Etc.  Given the way the
W3C works, it's not reasonable at all to require that the *whole* spec
be a Candidate Recommendation or whatever.  You can make a
feature-by-feature stability assessment pretty easily in most cases:
if it has multiple interoperable implementations, it's stable and can
be used; if it doesn't, it's not very useful anyway, so who cares?

 and considering a huge commotion in this
 area it might not be a very good decision.

There is no more commotion.  XHTML 2.0 is officially dead.  The
working group is disbanded.  HTML 5 is the only version of HTML that
is being developed.


I don't think you've raised any substantive objections here.
*Practically* speaking, what reason is there not to begin moving to
HTML 5 now?



Re: [Wikitech-l] Proposal: switch to HTML 5

2009-07-07 Thread Aryeh Gregor
On Tue, Jul 7, 2009 at 2:46 PM, Aryeh
Gregor <simetrical+wikil...@gmail.com> wrote:
 Much of the spec is very stable.  We would not be using any part
 that's likely to change -- in most cases, only parts that have
 multiple interoperable implementations.  Such parts of the spec will
 not change significantly; that's a basic principle of most W3C specs'
 development processes (and HTML 5's in particular).

To elaborate on this, from the WHATWG FAQ:


Different parts of the specification are at different maturity
levels. Some sections are already relatively stable and there are
implementations that are already quite close to completion, and those
features can be used today (e.g. canvas). But other sections are
still being actively worked on and changed regularly, or not even
written yet.

You can see annotations in the margins showing the estimated
stability of each section. . . .

The point to all this is that you shouldn’t place too much weight on
the status of the specification as a whole. You need to consider the
stability and maturity level of each section individually.
http://wiki.whatwg.org/wiki/FAQ#When_will_HTML_5_be_finished.3F


When will we be able to start using these new features?

As soon as browsers begin to support them. You do not need to wait
till HTML5 becomes a recommendation, because that can’t happen until
after the implementations are completely finished.

For example, the canvas feature is already widely implemented.

The specification has annotations in the margins showing what
browsers implement each section.
http://wiki.whatwg.org/wiki/FAQ#When_will_we_be_able_to_start_using_these_new_features.3F


Re: [Wikitech-l] Proposal: switch to HTML 5

2009-07-07 Thread Brion Vibber
Aryeh Gregor wrote:
 On Tue, Jul 7, 2009 at 2:37 AM, Remember the
 dot <rememberthe...@gmail.com> wrote:
 Why be cruel to our bot operators? XHTML is simpler and more consistent than
 tag soup HTML, and it's a lot easier to find a good XML parser than a good
 HTML parser.
 
 Because it will make the markup easier to read and write for humans,
 and smaller.  Things like leaving off superfluous closing elements do
 not make for tag soup.  One of the great features of HTML 5 is that
 it very carefully defines the text/html parsing model in painstaking
 backward-compatible detail.  For example, the description of unquoted
 attributes is as follows:

Technically HTML 4 is pretty much the same in this regard; it's 100% 
legitimate SGML and HTML 4 to skip implied opening and closing elements, 
drop quotes on attribute values that are unambiguous, etc.

HTML 5 is a little better, I think, in that it specifies which SGML short 
forms are required to be supported and which shouldn't be (for instance, few 
browsers support this SGML short form: <b/this is some bold text/).

The primary advantage of the XML formulation is that you can parse the 
document tree unambiguously *without* knowing the spec of the individual 
markup -- omitting implied values means the consumer needs to know what 
to expect.

Is this really a huge advantage when the impliable elements are 
well-known as in HTML? I dunno.

It can cause problems when a new element with implied behavior is added, 
as with WebKit's initial canvas implementation. (Apple implemented it 
as allowing an implied empty element, whereas Mozilla requires you to 
close it so it won't confuse parsers that don't know it should be empty 
and thus closed immediately.)

But as long as new markup extensions are used unambiguously, HTML 5 
should be no more ambiguous and just as extensible as the XML formulation.
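
To illustrate the canvas point (markup invented for the example): early 
WebKit accepted the first, unclosed form as if canvas were an empty element, 
while the spec, and Mozilla, require the second, explicitly closed form, 
which also gives non-supporting parsers well-defined fallback content:

  <!-- implied-empty style, as early WebKit allowed -->
  <canvas id="chart" width="300" height="150">

  <!-- explicitly closed, as required; fallback content goes inside -->
  <canvas id="chart" width="300" height="150">
    <p>Your browser does not support canvas.</p>
  </canvas>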

-- brion



Re: [Wikitech-l] Proposal: switch to HTML 5

2009-07-07 Thread Michael Dale
I think if the playback system is Java in ~any browser~ we should 
~softly~ inform people to get a browser with native support if they 
want a high-quality video playback experience.

The Cortado applet is awesome ... but the startup time of the Java VM is 
painful compared to other user experiences with video ... not to mention 
seeking, buffering, and general interface responsiveness in comparison 
to the native support.

--michael

Gregory Maxwell wrote:
 On Tue, Jul 7, 2009 at 4:23 PM, Brion Vibber <br...@wikimedia.org> wrote:
   
 Unless they don't have Ogg support. :)

 *cough Safari cough*

 But if they do, yes; our JS won't bother bringing up the Java applet if
 it's got native support available.
 

 It would be a four or five line patch to make OggHandler nag Safari
 3/4 users to install XiphQT and give them the link to a download page.
  The spot for the nag is already stubbed out in the code. Just say the
 word.





Re: [Wikitech-l] Proposal: switch to HTML 5

2009-07-07 Thread Brion Vibber
At a minimum, I'm glad to see the dead-ended XHTML 2 working group 
officially killed; actual compatible implementations of ongoing work are 
happening in the HTML 5 world and that's where the future definitely is.


I don't see much need for us to stick with the XML formulation for the 
next generation, given that we've never actually served our XHTML 1 
*marked* as application/xhtml+xml, for compatibility reasons:

* IE refuses to display any content usefully
* Safari gets confused about character references
* even Mozilla will have different JS behavior, which would require us 
to jump through some more hoops to kill the last document.write() calls...
* not to mention that your entire web site becomes inaccessible 
instantly if you end up with a markup error in the page footer!

Unless we're embedding our XHTML into other XML streams (which we're 
not), there's little benefit to strictly sticking to the XML formulation 
for page output.

XML formulation could perhaps be useful if we migrate page text storage 
from custom markup to an HTML-based internal format, as we could then 
toss it at XML parsers without worrying. But that doesn't have any 
bearing on the HTML user interface we display to end-users in browsers.

-- brion



Re: [Wikitech-l] Proposal: switch to HTML 5

2009-07-07 Thread Brion Vibber
Michael Dale wrote:
 I think if the playback system is Java in ~any browser~ we should 
 ~softly~ inform people to get a browser with native support if they 
 want a high-quality video playback experience.
 
 The Cortado applet is awesome ... but the startup time of the Java VM is 
 painful compared to other user experiences with video ... not to mention 
 seeking, buffering, and general interface responsiveness in comparison 
 to the native support.

*nod*

We don't want to annoy users, but subtle nudges to a better experience 
can be good. :)

(It'd be good to avoid the "This site best viewed in Netscape Gold" sort 
of browser fanboy wars of the '90s, though. ;)

-- brion



Re: [Wikitech-l] Proposal: switch to HTML 5

2009-07-07 Thread Michael Dale
It should also be noted that a simple patch for OggHandler to output <video> 
and use the mv_embed library is in the works; see
https://bugzilla.wikimedia.org/show_bug.cgi?id=18869

You can see it in action in a few places, like 
http://metavid.org/wiki/File:FolgersCoffe_512kb.1496.ogv

Also note my ~soft~ push for native support if you don't already have 
native support (per our short discussion earlier in this thread): if you 
say "don't show again" it sets a cookie and won't show it again.

I would be happy to randomly link to other browsers that support the HTML5 
<video> tag with Ogg, as they ship with that functionality.

I don't really have an Apple machine handy to test the quality of the 
user experience in OS X Safari with XiphQT. But if that is on par with 
Firefox's native support we should probably link to the component install 
instructions for Safari users.

--michael



Gregory Maxwell wrote:
 On Tue, Jul 7, 2009 at 1:54 AM, Aryeh
 Gregor <simetrical+wikil...@gmail.com> wrote:
 [snip]
   
  * We could support <video>/<audio> on conformant user agents without
  the use of JavaScript.  There's no reason we should need JS for
  Firefox 3.5, Chrome 3, etc.
 


 Of course, that could be done without switching the rest of the site to
 HTML5...

 Although I'm not sure that giving the actual <video> tags is desirable.
 It's a tradeoff:

 Work for those users when JS isn't enabled and correctly handle saving
 the full page including the videos, versus taking more traffic from
 clients doing range requests to generate the poster image, and
 potentially traffic from clients which decide to go ahead and fetch the
 whole video regardless of whether the user asked for it.

 There is also still a bug in FF3.5 where the built-in video
 controls do not work when JS is fully disabled.  (Because the controls
 are written in JS themselves.)


 (To be clear to other people reading this: the MediaWiki OggHandler
 extension already uses HTML5 and works fine with Firefox 3.5, etc. But
 this only works if you have JavaScript enabled.  The site could
 instead embed the video elements directly, and only use JS to
 substitute fallbacks for the <video> tag when it detects that the
 <video> tag can't be used.)





Re: [Wikitech-l] Proposal: switch to HTML 5

2009-07-07 Thread David Gerard
2009/7/7 Brion Vibber <br...@wikimedia.org>:
 Michael Dale wrote:

 I think if the playback system is Java in ~any browser~ we should
 ~softly~ inform people to get a browser with native support if they
 want a high-quality video playback experience.
 The Cortado applet is awesome ... but the startup time of the Java VM is
 painful compared to other user experiences with video ... not to mention
 seeking, buffering, and general interface responsiveness in comparison
 to the native support.

 *nod*
 We don't want to annoy users, but subtle nudges to a better experience
 can be good. :)
 (It'd be good to avoid the "This site best viewed in Netscape Gold" sort
 of browser fanboy wars of the '90s, though. ;)


I know we can't do it, but I do have subtle dreams of "Sorry, this
video won't display in Safari because Apple refuse to. If you don't
want to use a better browser, here's Apple's phone number."


- d.



Re: [Wikitech-l] Proposal: switch to HTML 5

2009-07-07 Thread Gregory Maxwell
On Tue, Jul 7, 2009 at 7:53 PM, Michael Dale <md...@wikimedia.org> wrote:
[snip]
 I don't really have an Apple machine handy to test the quality of the
 user experience in OS X Safari with XiphQT. But if that is on par with
 Firefox's native support we should probably link to the component install
 instructions for Safari users.

I believe it's quite good. "Believe" is the best I can offer, never
having personally tested it.  I did work with a Safari user, sending
them specific test cases designed to torture it hard (and some XiphQT
bugs were fixed in the process), and at this point it sounds pretty
good.

What I have not stressed is any of the JS API. I know it seeks; I have
no clue how well, etc.

There is also an Apple WebKit developer, friendly and helpful at
getting things fixed, whom we work with if we do encounter bugs... but
more testing is really needed.

Safari users wanted.


As far as the 'soft push' ... I'm generally not a big fan of one-shot
completely dismissible nags: Too often I click past something only to
realize shortly thereafter that I really should have clicked on it.
I'd prefer something that did a significant (alert-level) nag *once*
but perpetually included a polite "Upgrade your Video" button below
(above?) the fallback video window.

There is only a short period of time remaining where a singular
browser recommendation can be done fairly and neutrally. Chrome and
Opera will ship production versions and then there will be options.
Choices are bad for usability.



Re: [Wikitech-l] secure slower and slower

2009-07-07 Thread Steve Bennett
On Tue, Jul 7, 2009 at 11:35 PM, William Allen
Simpson <william.allen.simp...@gmail.com> wrote:
 Some may not think that this site is critical, or valuable, or whatever.

That's a horrible strawman argument. Some simply think that the
amount of damage that can be caused by hijacking a non-admin account
is fairly low. Maybe for admins the risk is higher. Pretty much all
damage is reversible though.

Steve
