Re: [Wikitech-l] Transcluding non-text content as HTML on wikitext pages

2014-05-17 Thread Daniel Kinzler
Am 16.05.2014 21:07, schrieb Gabriel Wicke:
 On 05/15/2014 04:42 PM, Daniel Kinzler wrote:
 The one thing that will not work on wikis with
 $wgRawHtml disabled is parsing the output of expandtemplates.
 
 Yes, which means that it won't work with Parsoid, Flow, VE and other users.

And it has been fixed now. In the latest version, expandtemplates will just
return {{Foo}} as it was if {{Foo}} can't be expanded to wikitext.

 I do think that we can do better, and I pointed out possible ways to do so
 in my earlier mail:
 
 My preference
 would be to let the consumer directly ask for pre-expanded wikitext *or*
 HTML, without overloading action=expandtemplates. Even indicating the
 content type explicitly in the API response (rather than inline with an HTML
 tag) would be a better stop-gap as it would avoid some of the security and
 compatibility issues described above.

I don't quite understand what you are asking for... action=parse returns HTML,
action=expandtemplates returns wikitext. The issue was with mixed output, that
is, representing the expansion of templates that generate HTML in wikitext. The
solution I'm going for now is to simply not expand them.

-- daniel



Re: [Wikitech-l] Transcluding non-text content as HTML on wikitext pages

2014-05-17 Thread Subramanya Sastry
(Top posting to quickly summarize what I gathered from the discussion 
and what would be required for Parsoid to expand pages with these 
transclusions).


Parsoid currently relies on the MediaWiki API to preprocess 
transclusions and return wikitext (it uses action=expandtemplates for this), 
which it then parses using the native Parsoid pipeline.  Parsoid processes 
extension tags via action=parse and weaves the result back into the 
top-level content of the page.
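
To make the above concrete, here is a minimal TypeScript sketch of those two 
API calls; the wiki URL, helper names, and the 2014-era JSON response shape 
are assumptions for illustration, not Parsoid's actual code.

```typescript
// Illustrative sketch only -- not Parsoid's actual implementation.
// Assumes a standard MediaWiki api.php endpoint and a runtime with global fetch.

const API = 'https://en.wikipedia.org/w/api.php'; // hypothetical target wiki

// Pre-expand transclusions in wikitext (what Parsoid does for {{...}}).
async function expandTemplates(wikitext: string, title: string): Promise<string> {
  const params = new URLSearchParams({
    action: 'expandtemplates', text: wikitext, title, format: 'json',
  });
  const res = await fetch(`${API}?${params}`);
  const data = await res.json();
  // Assumes the 2014-era response shape: expandtemplates['*'] holds the expansion.
  return data.expandtemplates['*'];
}

// Fully parse a snippet (e.g. an extension tag) to HTML via action=parse.
async function parseToHtml(wikitext: string, title: string): Promise<string> {
  const params = new URLSearchParams({
    action: 'parse', text: wikitext, title, format: 'json',
  });
  const res = await fetch(`${API}?${params}`);
  const data = await res.json();
  return data.parse.text['*'];
}
```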


As per your original email, I am assuming that T is a page with a special 
content model that generates HTML, and that another page P has a transclusion 
{{T}}.


So, when Parsoid encounters {{T}}, it should be able to replace {{T}} 
with the HTML to generate the right parse output for P.


So, I am listing below 4 possible ways action=expandtemplates can 
process {{T}}:


1. Your newest implementation (that just returns back {{T}}):

* If Parsoid gets back {{T}}, one of two things can happen:
--- Parsoid, as usual, tries to parse it as wikitext, and it gets stuck 
in an infinite loop (query the MW API for an expansion of {{T}}, get back 
{{T}}, parse it as {{T}}, query the MW API for an expansion of {{T}}, ...). 
So, this will definitely not work.
--- Parsoid adds a special-case check to see if the API sent back {{T}}, 
in which case it requires a different API endpoint 
(action=expandtohtml, maybe?) to send back the HTML expansion based on 
that assumption about the output of expandtemplates. This would work and 
would require the new endpoint to be implemented, but feels hacky.
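
A minimal sketch of that special-case guard, assuming the expandTemplates() 
helper from the sketch earlier in this thread and a hypothetical 
expandToHtml() client for the speculative action=expandtohtml endpoint:

```typescript
// Sketch of the option-1 workaround described above; purely illustrative.
// Both helpers are assumed/hypothetical, not existing Parsoid or MW API code.
declare function expandTemplates(wikitext: string, title: string): Promise<string>;
declare function expandToHtml(wikitext: string, title: string): Promise<string>;

type Expansion = { kind: 'wikitext' | 'html'; body: string };

async function expandTransclusion(src: string, title: string): Promise<Expansion> {
  const expanded = await expandTemplates(src, title);
  if (expanded.trim() === src.trim()) {
    // The API handed {{T}} back unchanged, so T is not a wikitext page.
    // Re-parsing it as wikitext would loop forever (expand -> {{T}} -> expand -> ...),
    // so fall back to the speculative HTML-producing endpoint instead.
    const html = await expandToHtml(src, title);
    return { kind: 'html', body: html };
  }
  return { kind: 'wikitext', body: expanded };
}
```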


So, going back to your original implementation, here are at least 3 ways 
I see this working:


2. action=expandtemplates returns an <html>...</html> wrapper for the expansion 
of {{T}}, but also provides an additional API response header that tells 
Parsoid that T was a special content model page and that the raw HTML 
that it received should not be sanitized.


3. action=expandtemplates returns <html>...</html> for the expansion of 
{{T}} and no other indication about whether T is a special content model page. 
However, if Parsoid (and other clients) are to always trust this HTML 
output without sanitization, the expandtemplates implementation 
should have conditional sanitization of HTML tags encountered in 
wikitext to prevent XSS. As far as I understand, expandtemplates (on 
master, not your patch) does not do this tag sanitization. But, 
independent of that, what Parsoid and clients need is a guarantee that 
it is safe to blindly splice in the contents of any <html>...</html> it 
receives for any {{T}}, no matter what content model T implements.


4. Parsoid first queries the MW API to find out the content model of T 
for every transclusion {{T}} it encounters on the page P and, based on 
the content-model info, knows how to process the output of 
action=expandtemplates.


Clearly 4. is expensive and 3. seems hacky, but if it can be made to 
work, we can work with that.


But, both Gabriel and I think that solution 2 is the cleanest workable 
solution for now. The PHP parser (in your patch to handle {{T}}) 
already has information about the content model of T when it is 
expanding {{T}}, and it seems simplest and cleanest to return this 
information back to clients along with the non-default content-model 
expansions. That gives clients like Parsoid the cleanest way of handling 
these.
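
As a rough illustration of how a client could act on such a signal, here is a 
hedged TypeScript sketch; the "contentmodel" field name is an assumption for 
illustration, not an agreed-upon API.

```typescript
// Hypothetical client-side handling for solution 2. The "contentmodel" field
// name is illustrative only, not an existing expandtemplates response field.
interface ExpandResult {
  wikitext: string;        // the expansion; raw HTML when T has an HTML content model
  contentmodel?: string;   // e.g. 'wikitext' (default) or 'html'
}

// Splice trusted HTML straight into the document; otherwise keep feeding the
// expansion through the normal wikitext parsing pipeline.
function handleExpansion(result: ExpandResult): { html?: string; wikitext?: string } {
  if (result.contentmodel === 'html') {
    // Per solution 2, the API vouches for this HTML, so it is not re-sanitized.
    return { html: result.wikitext };
  }
  return { wikitext: result.wikitext };
}
```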


If I am missing something, or this is unclear, or this is getting into too 
much back and forth on email and it is simpler to discuss this on IRC, I 
can hop onto any IRC channel on Monday, or we can do this on 
#mediawiki-parsoid, and one of us could later summarize the discussion 
back onto this thread.


Thanks,
Subbu.


On 05/17/2014 02:54 AM, Daniel Kinzler wrote:

Am 16.05.2014 21:07, schrieb Gabriel Wicke:

On 05/15/2014 04:42 PM, Daniel Kinzler wrote:

The one thing that will not work on wikis with
$wgRawHtml disabled is parsing the output of expandtemplates.

Yes, which means that it won't work with Parsoid, Flow, VE and other users.

And it has been fixed now. In the latest version, expandtemplates will just
return {{Foo}} as it was if {{Foo}} can't be expanded to wikitext.


I do think that we can do better, and I pointed out possible ways to do so
in my earlier mail:


My preference
would be to let the consumer directly ask for pre-expanded wikitext *or*
HTML, without overloading action=expandtemplates. Even indicating the
content type explicitly in the API response (rather than inline with an HTML
tag) would be a better stop-gap as it would avoid some of the security and
compatibility issues described above.

I don't quite understand what you are asking for... action=parse returns HTML,
action=expandtemplates returns wikitext. The issue was with mixed output, that
is, representing the expandion of templates that generate HTML in wikitext. The
solution I'm going for no is to simply not expand them.

-- daniel






Re: [Wikitech-l] Transcluding non-text content as HTML on wikitext pages

2014-05-17 Thread Subramanya Sastry

On 05/17/2014 10:51 AM, Subramanya Sastry wrote:
So, going back to your original implementation, here are at least 3 
ways I see this working:


2. action=expandtemplates returns an <html>...</html> wrapper for the expansion 
of {{T}}, but also provides an additional API response header that 
tells Parsoid that T was a special content model page and that the raw 
HTML that it received should not be sanitized.


Actually, the <html>...</html> wrapper is not even required here since the 
new API response header (for example, X-Content-Model: HTML) is 
sufficient to know what to do with the response body.


Subbu.


Re: [Wikitech-l] Transcluding non-text content as HTML on wikitext pages

2014-05-17 Thread Gabriel Wicke
On 05/17/2014 05:57 PM, Subramanya Sastry wrote:
 On 05/17/2014 10:51 AM, Subramanya Sastry wrote:
 So, going back to your original implementation, here are at least 3 ways I
 see this working:

 2. action=expandtemplates returns an <html>...</html> for the expansion of
 {{T}}, but also provides an additional API response header that tells
 Parsoid that T was a special content model page and that the raw HTML that
 it received should not be sanitized.
 
 Actually, the <html>...</html> wrapper is not even required here since the new
 API response header (for example, X-Content-Model: HTML) is sufficient to
 know what to do with the response body.

Indeed.

Also, instead of the header we can just set a property / attribute in the
JSON/XML response structure. This will also work for multi-part responses,
for example when calling action=expandtemplates on multiple titles.
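
For illustration, here is one possible shape such a multi-part response could 
take, sketched as TypeScript; all field names are assumptions, not a proposed 
API.

```typescript
// Illustrative only: one possible multi-part expandtemplates response where
// each expanded part carries its own content-model property.
interface ExpansionPart {
  title: string;        // the transcluded page, e.g. 'T'
  contentmodel: string; // 'wikitext', 'html', ...
  result: string;       // expanded wikitext, or raw HTML when contentmodel is 'html'
}

const exampleResponse: { expandtemplates: { parts: ExpansionPart[] } } = {
  expandtemplates: {
    parts: [
      { title: 'Template:Foo', contentmodel: 'wikitext', result: "'''expanded''' wikitext" },
      { title: 'T', contentmodel: 'html', result: '<div class="chart">...</div>' },
    ],
  },
};
```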

Gabriel


Re: [Wikitech-l] Login to Wikimedia Phabricator with a GitHub/Google/etc account?

2014-05-17 Thread Steven Walling
On Fri, May 16, 2014 at 5:19 PM, Chad innocentkil...@gmail.com wrote:

 I'm mostly worried about security issues in 3rd party implementations of
 oAuth
 that we can't control. I asked Chris S. about this earlier today and I hope
 he'll
 expand on this some more--especially concerning to me was the concrete
 example he gave with Facebook's own oAuth. Also he mentioned that Twitter's
 oAuth is known to be insecure in its implementation.

 Depending on how Github's oAuth is implemented that's the one I could see
 the strongest case being made for.


I think we all know there are many insecure things about most login
systems, including our own. The question is what do we get for the
potential cost/risk. Obviously with Google and Facebook as options we don't
stand to gain a lot in terms of technical contributions. With GitHub, the
balance is probably tipped the other way. If we try it and in the long run,
it provides very little benefit, we could consider phasing it out.


 Enabling all of them seems like it'll just make the login page cluttered
 with
 options used by about 1-2 people each but I could be wrong.


Yes, absolutely. The login page of Phabricator's own Phabricator instance
is an example of providing too many choices. This slows people down when
they have to evaluate all the options.

[Wikitech-l] Deployments of new, radically different default features

2014-05-17 Thread Rainer Rillke
Recently, Media Viewer[1] (MMV) has been switched at Wikimedia Commons
to be the default.

As some people do not care a lot about beta features or new features, do
not read the mailing lists, overlook the main discussion forums, or are
just unable to understand English, they were surprised and confused and
wondered how they could disable the feature.

Please help me to evaluate whether there are alternatives to how software
deployment is currently done. Here are some suggestions from community
member Jameslwoodward (Commons administrator):


A Post a banner so that there is a good chance of actually reaching
  everyone.
B Ensure that the internally referenced help page actually has correct
  information.
C Major changes can be the default for new users, but should be opt-in
  for existing users.

I think (A) could be partially done by tech-ambassadors (what a
difficult word); however, when deploying something like MMV to all wikis,
isn't this worth a CentralNotice?

(B) is as obvious as it is important. Outdated information is confusing. Make
sure to update your help pages before releasing your software into
the wild. Or delay the release until this is done.

Suggestion (C) is interesting, although perhaps technically difficult to
implement.
If a feature that one experienced as an anonymous user is good [login
cookies expire, ...], or one explicitly tested the feature or was told
by a fellow about a good feature, it is very likely that one will enable
that feature for the account, too. People will do this freely, without
complaining. And people who intentionally enabled a feature usually
have a positive attitude and are willing to help improve the feature.
They will provide you with constructive feedback.
The overall atmosphere would be a lot more positive than what we
currently experience with new tools *the power users do not need*. So
why not actively promote a feature until there is a critical mass
using it? It may take a lot of time but I think it's worth a test.


A personal appeal:
Please care about the power users [2]. They are the core and foundation
of the WMF projects. They create the content; manage most issues - think
about the OTRS team - in their spare time, ... so WMF can finally run
the fundraiser banners and you can get your payment.

-- Rillke


[1] https://www.mediawiki.org/wiki/Help:Multimedia/Media_Viewer
[2]
http://www.aswedeingermany.de/50SoftwareDevelopment/20MostImportantRuleOfUIDesign.html


Re: [Wikitech-l] Deployments of new, radically different default features

2014-05-17 Thread Max Semenik
I thought we already had this discussion recently:
http://www.gossamer-threads.com/lists/wiki/wikitech/412782?do=post_view_threaded#412782


On Sat, May 17, 2014 at 2:55 PM, Rainer Rillke rainerril...@hotmail.com wrote:

 Recently, Media Viewer[1] (MMV) has been switched at Wikimedia Commons
 to be default.

 As some people do not care a lot for beta features or new features, do
 not read the mailing lists and overlook main discussion forums or are
 just unable to understand English, they were surprised and confused and
 wondered how they could disable the feature.

 Please help me to evaluate if there are alternatives to how software
 deployment is currently done. Here are some suggestions from community
 member Jameslwoodward (commons adminstrator):


 A Post a banner so that there is a good chance of actually reaching
   everyone.
 B Ensure that the internally referenced help page actually has correct
   information.
 C Major changes can be the default for new users, but should be opt-in
   for existing users.

 I think (A) could be partially done by tech-ambassadors (what a
 difficult word); however when deploying something like MMV to all wikis,
 isn't this worth a CentralNotice?

 (B) is as obvious as important. Outdated information is confusing. Make
 sure to update your help pages before going to release your software to
 the wild. Or delay the release until this is done.

 Suggestion (C) is interesting, although perhaps technically difficult to
 implement.
 If a feature that one experienced as anonymous user is good [login
 cookies expire, ...], or one explicitly tested the feature or was told
 by a fellow about a good feature, it is very likely that one will enable
 that feature for the account, too. People will do this freely. Without
 complaining. And people, who intentionally enabled a feature, usually
 have a positive attitude, are willing to help and improve the feature.
 They will provide you with constructive feedback.
 The overall atmosphere would be a lot more positive than that we
 currently experience with new tools, *the power users do not need*. So
 why not actively promoting a feature until there is a critical mass
 using it? It may take a lot of time but I think it's worth a test.


 A personal appeal:
 Please care about the power users [2]. They are the core and foundation
 of the WMF projects. They create the content; manage most issues - think
 about the OTRS team - in their spare time, ... so WMF can finally run
 the fundraiser banners and you can get your payment.

 -- Rillke


 [1] https://www.mediawiki.org/wiki/Help:Multimedia/Media_Viewer
 [2]

 http://www.aswedeingermany.de/50SoftwareDevelopment/20MostImportantRuleOfUIDesign.html





-- 
Best regards,
Max Semenik ([[User:MaxSem]])

Re: [Wikitech-l] Deployments of new, radically different default features

2014-05-17 Thread Daniel Schwen
Rainer makes a few good points. This is not just another weekly
software update, this is a high profile interface change. The
discussion Max linked to did not have the same scope. And the site
notice is just point A in a set of three sensible suggestions.
I've followed the development of the Media Viewer and am very happy
with how it turned out. On the other hand I can understand that it may
pose a disruption in the established workflows of power users.
I don't think we should back-pedal now though. What we could offer on
Commons is a banner with two buttons: one to dismiss the banner, and
one to dismiss the banner and deactivate the Media Viewer (we have the
technology for one-click gadget (de)activation already).
I'd hate for a negative opinion of Media Viewer to build up because of
that. I'm convinced it is a necessary and modern UI that is adequate
for most of our end users.
Daniel

On Sat, May 17, 2014 at 4:18 PM, Max Semenik maxsem.w...@gmail.com wrote:
 I thought we already had this discussion recently:
 http://www.gossamer-threads.com/lists/wiki/wikitech/412782?do=post_view_threaded#412782


 On Sat, May 17, 2014 at 2:55 PM, Rainer Rillke 
 rainerril...@hotmail.com wrote:

 Recently, Media Viewer[1] (MMV) has been switched at Wikimedia Commons
 to be default.

 As some people do not care a lot for beta features or new features, do
 not read the mailing lists and overlook main discussion forums or are
 just unable to understand English, they were surprised and confused and
 wondered how they could disable the feature.

 Please help me to evaluate if there are alternatives to how software
 deployment is currently done. Here are some suggestions from community
 member Jameslwoodward (commons adminstrator):


 A Post a banner so that there is a good chance of actually reaching
   everyone.
 B Ensure that the internally referenced help page actually has correct
   information.
 C Major changes can be the default for new users, but should be opt-in
   for existing users.

 I think (A) could be partially done by tech-ambassadors (what a
 difficult word); however when deploying something like MMV to all wikis,
 isn't this worth a CentralNotice?

 (B) is as obvious as important. Outdated information is confusing. Make
 sure to update your help pages before going to release your software to
 the wild. Or delay the release until this is done.

 Suggestion (C) is interesting, although perhaps technically difficult to
 implement.
 If a feature that one experienced as anonymous user is good [login
 cookies expire, ...], or one explicitly tested the feature or was told
 by a fellow about a good feature, it is very likely that one will enable
 that feature for the account, too. People will do this freely. Without
 complaining. And people, who intentionally enabled a feature, usually
 have a positive attitude, are willing to help and improve the feature.
 They will provide you with constructive feedback.
 The overall atmosphere would be a lot more positive than that we
 currently experience with new tools, *the power users do not need*. So
 why not actively promoting a feature until there is a critical mass
 using it? It may take a lot of time but I think it's worth a test.


 A personal appeal:
 Please care about the power users [2]. They are the core and foundation
 of the WMF projects. They create the content; manage most issues - think
 about the OTRS team - in their spare time, ... so WMF can finally run
 the fundraiser banners and you can get your payment.

 -- Rillke


 [1] https://www.mediawiki.org/wiki/Help:Multimedia/Media_Viewer
 [2]

 http://www.aswedeingermany.de/50SoftwareDevelopment/20MostImportantRuleOfUIDesign.html





 --
 Best regards,
 Max Semenik ([[User:MaxSem]])


Re: [Wikitech-l] Deployments of new, radically different default features

2014-05-17 Thread John
Here is an interesting deployment idea: roll out but leave it as opt-in for a
period of time; after a month or so, set it as the default for anons and new
accounts. During the whole process, keep track of who has enabled it and
then disabled it, vs. never enabled it. After a period of time, move everyone
who hasn't specifically disabled the viewer to having the setting on by default.

This would achieve several different things all at the same time: enabling
more widespread testing/debugging, and a phased deployment process that should
minimize negative user impact as much as possible. I have seen way too
many features pushed out to the general public long before they should
have been. WMF wikis take MediaWiki and wikitext, along with templates and
the parser, and make them do some really odd things. It doesn't matter how
much testing you do, quirks will pop up. A phased deployment process that
utilizes both watchlist and central notices should keep the fallout
to a minimum.

Re: [Wikitech-l] Deployments of new, radically different default features

2014-05-17 Thread Max Semenik
I actually think that anons should be the last to get new features if at
all possible: they are the group least informed about the projects' internal
workings and thus it's hard for them to report problems.


On Sat, May 17, 2014 at 3:37 PM, John phoenixoverr...@gmail.com wrote:

 Here is an interesting deployment idea, roll out but leave as opt-in for a
 period of time, after a month or so, set as default for anons and new
 accounts, During the whole process keep track of who has enabled it and
 then disabled it, vs never enabled it. After a period of time move everyone
 who hasnt specifically disabled the viewer to have the setting as default.

 This would achieve several different things all at the same time, enabling
 wider spread testing/debugging, and a phased deployment process that should
 minimize negative user impact as much as possible. As I have seen way too
 many features pushed out to the general pubic long before they should
 have been. WMF wikis take mediawiki and wikitext along with templates and
 the parser and make them do some really odd things. It doesnt matter how
 much testing you do, quirks will pop up. In a phased deploy process that
 utilizes both watchlist and central notices this should keep the fallout
 to  minimum.




-- 
Best regards,
Max Semenik ([[User:MaxSem]])

Re: [Wikitech-l] Transcluding non-text content as HTML on wikitext pages

2014-05-17 Thread Daniel Kinzler
Am 17.05.2014 17:57, schrieb Subramanya Sastry:
 On 05/17/2014 10:51 AM, Subramanya Sastry wrote:
 So, going back to your original implementation, here are at least 3 ways I
 see this working:

 2. action=expandtemplates returns an <html>...</html> for the expansion of
 {{T}}, but also provides an additional API response header that tells Parsoid
 that T was a special content model page and that the raw HTML that it
 received should not be sanitized.
 
 Actually, the <html>...</html> wrapper is not even required here since the new
 API response header (for example, X-Content-Model: HTML) is sufficient to know
 what to do with the response body.

But that would only work if {{T}} was the whole text that was being expanded (I
guess that's what you do with Parsoid, right? Took me a minute to realize that).
expandtemplates operates on full wikitext. If the input is something like

  == Foo ==
  {{T}}

  [[Category:Bla]]

Then expanding {{T}} without a wrapper and pretending the result was HTML would
just be wrong.

Regarding trusting the output: MediaWiki core trusts the generated HTML for
direct output. It's no different from the HTML generated by e.g. special pages
in that regard.

I think something like <html transclusion="{{T}}" model="whatever">...</html>
would work best.
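
For illustration only, a consumer-side sketch of pulling such wrappers out of 
mixed expandtemplates output; the attribute names follow the suggestion above, 
while the regex-based extraction is just a stand-in for a proper tokenizer.

```typescript
// Sketch of consuming the suggested <html ...> wrapper in mixed expandtemplates
// output. Illustrative only; a real consumer would tokenize rather than regex.
interface HtmlIsland {
  transclusion: string; // e.g. '{{T}}'
  model: string;        // e.g. 'whatever'
  html: string;         // raw HTML body, to be spliced in without re-sanitizing
}

function extractHtmlIslands(expanded: string): HtmlIsland[] {
  const islands: HtmlIsland[] = [];
  const re = /<html transclusion="([^"]*)" model="([^"]*)">([\s\S]*?)<\/html>/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(expanded)) !== null) {
    islands.push({ transclusion: m[1], model: m[2], html: m[3] });
  }
  return islands;
}
```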

-- daniel


Re: [Wikitech-l] Deployments of new, radically different default features

2014-05-17 Thread Brian Wolff
On 5/17/14, Max Semenik maxsem.w...@gmail.com wrote:
 I actually think that  anons should be the last to gget new features if at
 all possible: they are the group least informed about projects' internal
 workings and thus it's hard for them to report problems.


+1

I'm reminded of a type of issue that popped up on flagged-revision
wikis, where anons get stable versions and users get the newest version.
Since the anons were viewing different text, it took a long time for
the people with the power to fix things to actually find out things
were broken, because anyone who could report the issue saw the OK
version.

As for A and B - B seems to be a no-brainer. A is a bit of a balance.
It's hard to know what is too much notification, and when it's not
enough.

I do think there are probably better ways to handle notification of
new features. Perhaps a pop-up the first time a new feature is activated,
explaining the feature and how to disable it. I'm not really sure.

--bawolff


Re: [Wikitech-l] Transcluding non-text content as HTML on wikitext pages

2014-05-17 Thread Subramanya Sastry

On 05/17/2014 06:14 PM, Daniel Kinzler wrote:

Am 17.05.2014 17:57, schrieb Subramanya Sastry:

On 05/17/2014 10:51 AM, Subramanya Sastry wrote:

So, going back to your original implementation, here are at least 3 ways I see
this working:

2. action=expandtemplates returns an <html>...</html> for the expansion of
{{T}}, but also provides an additional API response header that tells Parsoid
that T was a special content model page and that the raw HTML that it received
should not be sanitized.

Actually, the <html>...</html> wrapper is not even required here since the new API
response header (for example, X-Content-Model: HTML) is sufficient to know what
to do with the response body.

  But that would only work if {{T}} was the whole text that was being expanded 
(I
guess that's what you do with parsoid, right? Took me a minute to realize that).
expandtemplates operates on full wikitext. If the input is something like

   == Foo ==
   {{T}}

   [[Category:Bla}}

Then expanding {{T}} without a wrapper and pretending the result was HTML would
just be wrong.


Parsoid handles this correctly. We have mechanisms for injecting HTML as 
well as wikitext into the top-level page. For example, tag extensions 
currently return fully expanded HTML (we use the action=parse API endpoint) 
and we inject that HTML into the page. So, consider this wikitext for 
page P.


== Foo ==
{{wikitext-transclusion}}
  *a1
<map> ... </map>
  *a2
{{T}} (the html-content-model-transclusion)
  *a3

Parsoid gets wikitext from the API for {{wikitext-transclusion}}, parses 
it and injects the tokens into P's content. Parsoid gets HTML from 
the API for <map>...</map> and injects the HTML into the 
not-fully-processed wikitext of P (by adding an appropriate token 
wrapper). So, if {{T}} returns HTML (i.e. the MW API lets Parsoid know 
that it is HTML), Parsoid can inject the HTML into the 
not-fully-processed wikitext and ensure that the final output comes out 
right (in this case, the HTML from both the map extension and {{T}} 
would not get sanitized, as intended).


Does that help explain why we said we don't need the <html> wrapper?

All that said, if you want to provide the wrapper with <html 
model="whatever">fully-expanded-HTML</html>, we can handle that as 
well. We'll use the model attribute of the wrapper, discard the wrapper 
and use the contents in our pipeline.


So, model information either as an attribute on the wrapper, an API 
response header, or a property in the JSON/XML response structure would 
all work for us. I don't have clarity on which of these three is the 
best mechanism for providing the template-page content-model information 
to clients ... so until I understand that better, I don't have an 
opinion about the specific mechanism. However, in his previous message, 
Gabriel indicated that a property in the JSON/XML response structure 
might work better for multi-part responses.


Subbu.


Regarding trusting the output: MediaWiki core trusts the generated HTML for
direct output. It's no different from the HTML generated by e.g. special pages
in that regard.

I think something like <html transclusion="{{T}}" model="whatever">...</html>
would work best.

-- daniel





Re: [Wikitech-l] Performance guidelines - aim to make official 23 May

2014-05-17 Thread MZMcBride
Sumana Harihareswara wrote:
https://www.mediawiki.org/wiki/Performance_guidelines

I think this page is ready to have the {{draft}} tag removed. I believe
it now represents our consensus on what MediaWiki core, extensions, and
gadgets developers should do to preserve high performance.

This looks pretty good overall. I left a couple notes on the talk page.

On May 23, I'd like to move forward with making a tutorial and a poster
based on this. So, please edit, speak up, and so on, within the next week.

One of the pain points of this mediawiki.org page (and many others) is
that it's a lot of text. I gave some thought to whether visuals (even clip
art) would help. Or perhaps futzing with the layout. But it might just be
distracting. A poster sounds neat, but the Web version will likely be
canonical, so if these types of pages can get a bit of visual love, that'd
be cool. Perhaps Heather or one of the other designers could take a look.

MZMcBride




Re: [Wikitech-l] Login to Wikimedia Phabricator with a GitHub/Google/etc account?

2014-05-17 Thread Tyler Romeo
On Sat, May 17, 2014 at 2:26 PM, Steven Walling steven.wall...@gmail.com wrote:

 Obviously with Google and Facebook as options we don't
 stand to gain a lot in terms of technical contributions.


This isn't necessarily true. I know that I personally would prefer to be
able to log in with my Google account, because it's what I use for
everything.

--
Tyler Romeo
Stevens Institute of Technology, Class of 2016
Major in Computer Science