Re: [Wikitech-l] Proposal for editing template calls within pages

2009-09-24 Thread Dmitriy Sintsov
* Nikola Smolenski smole...@eunet.rs [Thu, 24 Sep 2009 08:44:28 +0200]:
> Having said that, I don't see why would XML be necessary. Table and
> template markup are well structured and could be used by any editor just
> as XML would. Additionally, it is easier to observe diffs with wiki
> markup.

So it would be menu- and icon-driven editing, where the hands have to move
from keyboard to mouse and vice versa, like MS Word. Not very handy for
programmers and people who prefer to do most of their tasks on the command
line. But the majority probably wants WYSIWYG. It's just that with wikitext
you may have both source text and WYSIWYG, while with XML the source text
becomes a real pain.
Dmitriy



Re: [Wikitech-l] Proposal for editing template calls within pages

2009-09-24 Thread Steve Bennett
On Thu, Sep 24, 2009 at 4:44 PM, Nikola Smolenski smole...@eunet.rs wrote:
> Having said that, I don't see why would XML be necessary. Table and
> template markup are well structured and could be used by any editor just
> as XML would. Additionally, it is easier to observe diffs with wiki markup.

Is it possible, currently, to define the types of parameters to
functions? Can you include hints in them about what the parameter is
for? Can you specify maximum lengths, or choose from a small number of
choices?

Steve



Re: [Wikitech-l] Proposal for editing template calls within pages

2009-09-24 Thread Steve Bennett
On Thu, Sep 24, 2009 at 5:31 PM, Dmitriy Sintsov ques...@rambler.ru wrote:
> So it would be menu- and icon-driven editing, where the hands have to move
> from keyboard to mouse and vice versa, like MS Word. Not very handy for
> programmers and people who prefer to do most of their tasks on the command line.

I think you're making a large number of unjustified assumptions here.
If you look at the proposal (and remember - it's a proposal), it says
it would be enabled by default for every user. Reading between the
lines, that means you can disable it. Relax.

Steve



Re: [Wikitech-l] Proposal for editing template calls within pages

2009-09-24 Thread Nikola Smolenski
Steve Bennett wrote:
> On Thu, Sep 24, 2009 at 4:44 PM, Nikola Smolenski smole...@eunet.rs wrote:
>> Having said that, I don't see why would XML be necessary. Table and
>> template markup are well structured and could be used by any editor just
>> as XML would. Additionally, it is easier to observe diffs with wiki markup.
>
> Is it possible, currently, to define the types of parameters to
> functions? Can you include hints in them about what the parameter is
> for? Can you specify maximum lengths, or choose from a small number of
> choices?

It seems that we are talking about two different things here:

1) What would be the markup of template description pages, which will be
used to build template editing forms.

2) What would be the markup of a template call, which will be used in the
source of article pages to make the actual template call.

Given that #1 is an entirely new thing, it can have an entirely new
markup. I see no reason for it not to be XML. #2, however, should be kept
as is.



[Wikitech-l] JS2 design (was Re: Working towards branching MediaWiki 1.16)

2009-09-24 Thread Tim Starling
Trevor Parscal wrote:
> If you are really doing a JS2 rewrite/reorganization, would it be
> possible for some of us (especially those of us who deal almost
> exclusively with JavaScript these days) to get a chance to ask
> questions/give feedback/help in general?

I've mostly been working on analysis and planning so far. I made a few
false starts with the code and so ended up planning in a more detailed
way than I initially intended. I've discussed various issues with the
people in #mediawiki, including our resident client-side guru Splarka.

I started off working on fixing the coding style and the most glaring
errors from the JS2 branch, but I soon decided that I shouldn't be
putting so much effort into that when a lot of the code would have to
be deleted or rewritten from scratch.

I did a survey of script loaders in other applications, to get an idea
of what features would be desirable. My observations came down to the
following:

* The namespacing in Google's jsapi is very nice, with everything
being a member of a global google object. We would do well to
emulate it, but migrating all JS to such a scheme is beyond the scope
of the current project.

* You need to deal with CSS as well as JS. All the script loaders I
looked at did that, except ours. We have a lot of CSS objects that
need concatenation, and possibly minification.

* JS loading can be deferred until near the </body> tag or until the
DOMContentLoaded event. This means that empty-cache requests will
render faster. WordPress places emphasis on this.

* Dependency tracking is useful. The idea is to request a given
module, and all dependencies of that module, such as other scripts,
will automatically be loaded first.



I then looked more closely at the current state of script loading in
MediaWiki. I made the following observations:

* Most linked objects (styles and scripts) on a typical page view come
from the Skin. If the goal is performance enhancement, then working on
the skins and OutputPage has to be a priority.

* The class abstraction as implemented in JS2 has very little value
to PHP callers. It's just as easy to use filenames. It could be made
more useful with features such as dependency tracking, better
concatenation and CSS support. But it seems to me that the most useful
abstraction for PHP code would be for client-side modules to be
multi-file, potentially with supporting PHP code for each module.

* Central registration of all client-side resources in a global
variable would be onerous and should be avoided.

* Dynamic requests such as [[MediaWiki:Handheld.css]] have a large
impact on site performance and need to be optimised. I'm planning a
new interface, similar to action=raw, allowing these objects to be
concatenated.
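As a purely illustrative sketch (the entry point name and parameters here
are hypothetical, not a decided interface), such a request might look like:

    /w/combine.php?titles=MediaWiki:Common.css|MediaWiki:Handheld.css&gen=css

returning the listed pages as one concatenated, cacheable stylesheet
instead of one dynamic request per page.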



The following design documents are in my user space on mediawiki.org:

http://www.mediawiki.org/wiki/User:Tim_Starling/CSS_and_JS_caller_survey_(r56220)
  - A survey of MW functions that add CSS and JS, especially the
terribly confusing situation in Skin and OutputPage

http://www.mediawiki.org/wiki/User:Tim_Starling/JS_load_order_issues_(r56220)
  - A breakdown of JS files by the issues that might be had in moving
them to the footer or DOMContentLoaded. I favour a conservative
approach, with wikibits.js and the site and user JS staying in the
head.

http://www.mediawiki.org/wiki/User:Tim_Starling/Proposed_modularisation_of_client-side_resources
  - A proposed reorganisation of core scripts (Skin and OutputPage)
according to the MW modules they are most associated with.



The object model I'm leaning towards on the PHP side is:

* A client-side resource manager (CSRM) class. This would be
responsible for maintaining a list of client-side resources that have
been requested and need to be sent to the skin. It would also handle
caching, distribution of incoming dynamic requests, dependencies,
minification, etc. This is quite a complex job and might need to be
split up somewhat.

* A hierarchy of client-side module classes. A module object would
contain a list of files, dependencies and concatenation hints. Objects
would be instantiated by parent classes such as skins and special
pages, and added to the CSRM. Classes could be registered globally,
and then used to generate dynamic CSS and JS, such as the user
preference stylesheet.

* The module base class would be non-abstract and featureful, with a
constructor that accepts an array-based description. This allows
simple creation of modules by classes with no interest in dynamic
script generation.

* A new script loader entry point would provide an interface to
registered modules.
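To make the shape of that description concrete, here is a rough sketch
(hypothetical names and fields only; the real interface would be a PHP
class, this just illustrates the idea in JavaScript notation):

    var editToolbarModule = {
        name: 'mediawiki.editToolbar',                  // registration key
        scripts: [ 'js/toolbar.js', 'js/dialogs.js' ],  // files to serve
        styles: [ 'css/toolbar.css' ],                  // CSS is handled too
        dependencies: [ 'jquery', 'mediawiki.util' ],   // loaded first
        combine: 'editing'  // concatenation hint: files sharing a key merge
    };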



There are some design decisions I still have to make, which are tricky
due to performance tradeoffs:

* With concatenation, there is the question of which files to combine
and which to leave separate. I would like to have a "combine"
parameter which is a string; files with the same combine parameter
will be combined.

* Like WordPress, we could store minified and concatenated files in a

[Wikitech-l] Wikitext vs. WYSIWYG (was: Proposal for editing template calls within pages)

2009-09-24 Thread Tisza Gergő
Aryeh Gregor Simetrical+wikilist at gmail.com writes:
> Wikitext is not easy to edit.

It is easy enough to edit for power users, who make the large majority of edits,
and way more comfortable than WYSIWYG. Wikis require a certain hacker mentality
- not in the technical sense, but a desire to understand things in depth. It
takes effort to learn the syntax, but once you have, it gives you freedom and
effectiveness, because you are actually in control of things (as opposed to rich
text editors, which sometimes do something similar to what you intended and at
other times not even close, because they use some fucked-up internal representation
that you have no way of knowing or understanding). This might be a problem for
Wikia with its fanboi target demographic that has the attention span of a Naruto
episode, but Wikipedia is an encyclopedia, and writing a good encyclopedia
article requires hacker mentality in the first place, so whatever.

And then there is the ecosystem of bots, gadgets and other third-party tools
which is based on wikitext; not only would moving away from wikitext be a huge
maintenance burden, but it would also be replaced with something that is way
less intuitive and actually harder to use (simple text operations are somewhat
easier than fooling around with document trees).

So if you can do WYSIWYG on top of wikitext, cool (the learning curve is
certainly steep for new users, and that will only become worse as new features
are added). If you can do a sort of WYSIWYM with syntax highlighting,
context-sensitive help and wizards for the more inconvenient elements like
templates, that is even better, because it wouldn't create a gap between people
using WYSIWYG and wikitext, and would allow for a gradual learning experience.
But replacing wikitext with some sort of internal representation that is
unreadable for humans would be a huge mistake IMO.




Re: [Wikitech-l] JS2 design (was Re: Working towards branching MediaWiki 1.16)

2009-09-24 Thread Tisza Gergő
Tim Starling tstarling at wikimedia.org writes:

> * Unnecessary use of the global namespace. The jQuery style is nice,
> with local functions inside an anonymous closure:
>
> ( function () {
>     function setup() {
>         ...
>     }
>     addOnloadHook( setup );
> } )();

This would make it impossible to overwrite the function locally on a wiki, which
is done sometimes, either because it conflicts with some local script, or for
better localization (such as changing the sorting algorithm in the
sortable-table script to handle non-ASCII characters decently). You should
rather use a global MediaWiki object; that works just as well for clearing the
global namespace, and it leaves the functions accessible.
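For instance, a minimal sketch of the suggested pattern (the object and
function names are hypothetical, not existing MediaWiki code):

    // Core script attaches its functions to one shared global object.
    window.MediaWiki = window.MediaWiki || {};
    MediaWiki.sortTable = function ( table ) {
        // default sorting behaviour here
    };

    // A wiki's local JS can later replace the function without patching core:
    MediaWiki.sortTable = function ( table ) {
        // locale-aware comparison for non-ASCII characters here
    };

Everything stays out of the global namespace except the one container
object, yet remains reachable for per-wiki overrides.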




[Wikitech-l] Links to Wikipedia with .htm at the end

2009-09-24 Thread Nikola Smolenski
I have noticed that a number (hundreds) of links to Wikipedia exist
that are of the form http://xx.wikipedia.org/wiki/whatever.htm. See this
Google search for an overview:
http://www.google.com/search?hl=en&q=%22wikipedia.org%2Fwiki+*+htm%22

Is this way of linking a consequence of some bad editing tools appending
.htm automatically, or were these links once actually valid? If they
were, should they somehow be made to continue working? I saw at least
one printed scientific work using such a link:
http://scindeks.nb.rs/article.aspx?artid=0353-79190804835R&lang=en cites
http://Wikipedia.org/wiki/contraception.htm

On a related note, an interesting example I noticed in 
http://www.library.ait.ac.th/ThesisSearch/summary/Debajit%20Dutta.pdf 
that links to www.wiki.wikipedia.org/wiki/urban_heat_island.htm - did 
this www.wiki.wikipedia.org server really exist?



Re: [Wikitech-l] Wikitext vs. WYSIWYG (was: Proposal for editing template calls within pages)

2009-09-24 Thread David Gerard
2009/9/24 Tisza Gergő gti...@gmail.com:

> It is easy enough to edit for power users, who make the large majority of edits,
> and way more comfortable than WYSIWYG.


Much as vim is more powerful than Notepad.

However, impenetrable wikitext is one of *the* greatest barriers to
new users on Wikimedia projects. And this impenetrability is not, in
any way whatsoever or by any twists of logic, a feature.


- d.


Re: [Wikitech-l] Wikitext vs. WYSIWYG (was: Proposal for editing template calls within pages)

2009-09-24 Thread Peter Gervai
On Thu, Sep 24, 2009 at 14:36, David Gerard dger...@gmail.com wrote:

> However, impenetrable wikitext is one of *the* greatest barriers to
> new users on Wikimedia projects. And this impenetrability is not, in
> any way whatsoever or by any twists of logic, a feature.

Adding a GUI layer to wikitext is always okay, as long as it's
possible to get rid of: since the majority of edits don't come from new
users, losing flexibility for power users to gain more newbies
doesn't sound like a good deal to me.

At least, all of the GUIs I've seen were slow and hard to use, and
resulted in unwanted side effects if something even barely complex was
entered. And this isn't just Wikipedia's problem: Google Docs, which
is one of the most advanced web-based GUI systems, I guess has plenty
of usability problems which can only be fixed by messing with the
source. And many core people want to mess with the source.

So, adding a newbie layer is okay as long as you don't mess up the
work of the non-newbies.

g



Re: [Wikitech-l] JS2 design (was Re: Working towards branching MediaWiki 1.16)

2009-09-24 Thread Platonides
Also take into account, in the JavaScript redesign, JavaScript wiki-side
extensions.

[[MediaWiki:Common.js]] importScripts [[MediaWiki:Wikiminiatlas.js]],
[[MediaWiki:niceGalleries.js]] and [[MediaWiki:buttonForRFA.js]], which
then loads [[MediaWiki:buttonForRFA/lang.js]]... plus the several Gadgets
the user may have enabled.

On Wikimedia Commons I load 38 scripts located in the MediaWiki
namespace (plus gen=js).
I'm pretty sure loading all of them when they aren't in the cache slows
things down much more than the organization of the core MediaWiki JavaScript.

Transcluding those files in the same request would help a lot (either by
automatically detecting calls to importScript or with a new syntax).


Finally, a dependency you may not have taken into account: some CSS from
the shared repository should be usable by host wikis when viewing the
pages.




Re: [Wikitech-l] Proposal for editing template calls within pages

2009-09-24 Thread Aryeh Gregor
On Thu, Sep 24, 2009 at 3:31 AM, Dmitriy Sintsov ques...@rambler.ru wrote:
> So it would be menu- and icon-driven editing, where the hands have to move
> from keyboard to mouse and vice versa, like MS Word. Not very handy for
> programmers and people who prefer to do most of their tasks on the command line.

Programmers rank far, far below normal people when it comes to
usability prioritization.  Programmers can use WYSIWYG just as well as
anyone else, even if it's not as powerful as they'd like.  FWIW, on
forums I go to where I can use either BB code or WYSIWYG, I use
WYSIWYG most of the time -- it's just more convenient.  I only resort
to BB code to work around deficiencies in the WYSIWYG editor.

Even for programmers, learning a new syntax is a significant issue.  A
few months ago I spoke to a programmer I know who tried editing
Wikipedia a few times.  He got lost in the wikitext.  He knew he could
figure out how to make the correction he wanted if he had to, but in
practice, he just gave up without putting in the effort.  It was too
much of a barrier to entry to overcome his weak interest in fixing the
error he saw.

As far as I'm concerned, a situation in which WYSIWYG is the only
supported editing method would be far superior to the current
situation.  If we could allow power users to edit manually, that would
be a nice bonus.  Note that even if we use a format like XML that's a
pain to manually edit, programmers can write up their own front-ends
if they like -- they're programmers, after all!  And also note that as
with most WYSIWYG editors, there would presumably be a slew of
keyboard shortcuts for power users to memorize if they didn't want to
use the mouse.



Re: [Wikitech-l] JS2 design (was Re: Working towards branching MediaWiki 1.16)

2009-09-24 Thread Tei
Possibly off-topic here:


I see that ImageMagick can combine images into a single one.

A single image means a single hit to Apache, so it only has to spawn once.

On the client side, a single image can draw multiple elements with some
ninja CSS stuff (background-position?).

For such a thing to be possible in MediaWiki skins, what changes are needed?

This is "minify", but for graphics.

Possibly an idea for the future, for a future full of divs and CSS3 happiness.


-- 
--
ℱin del ℳensaje.


Re: [Wikitech-l] JS2 design (was Re: Working towards branching MediaWiki 1.16)

2009-09-24 Thread Platonides
Tei wrote:
> Possibly off-topic here:
>
> I see that ImageMagick can combine images into a single one.
>
> A single image means a single hit to Apache, so it only has to spawn once.
>
> On the client side, a single image can draw multiple elements with some
> ninja CSS stuff (background-position?).
>
> For such a thing to be possible in MediaWiki skins, what changes are needed?
>
> This is "minify", but for graphics.
>
> Possibly an idea for the future, for a future full of divs and CSS3
> happiness.

I don't think it fits our normal image usage in pages. It could be
tried for the images used by the skins, although I would worry about
support for that CSS in legacy browsers.




Re: [Wikitech-l] Wikitext vs. WYSIWYG (was: Proposal for editing template calls within pages)

2009-09-24 Thread Aryeh Gregor
On Thu, Sep 24, 2009 at 7:33 AM, Tisza Gergő gti...@gmail.com wrote:
> It is easy enough to edit for power users, who make the large majority of edits,

Retention of existing users is not a problem.  We don't have to worry
that a significant number of dedicated contributors will leave because
of a switch to WYSIWYG.  They are, by hypothesis, dedicated.  On the
other hand, new users being reluctant to contribute due to wikitext is
a demonstrable and serious problem.

I also contest your implication that power users will uniformly or
even mostly prefer wikitext to WYSIWYG.  I'm a power user by any
standard, but I use WYSIWYG wherever possible.

Last I heard, by the way, even now most actual *content* is added by
occasional contributors.  Power users may have more edits, but that
doesn't mean they're the most important ones.


Of course, I should emphasize that ideally we should keep everyone
happy.  But making Wikipedia easier to edit for new users is *much*
more important than making it easier for established editors.  It will
*always* be easier for established users to edit than new users, and
established editors require a lot less coddling than new editors.

> Wikis require a certain hacker mentality
> - not in the technical sense, but a desire to understand things in depth.

No, they don't.  One of the core principles of wikis is eliminating
barriers to entry.  Ten thousand people who each fix one typo a month
are a tremendously valuable resource even if none of them ever
contribute more.  But many of them will -- *if* you can lure them into
making those typo fixes to begin with.  Which you can't, if they're
scared off by the fixed-width text with random incomprehensible
punctuation thrown in everywhere that has no obvious relationship to
the article's actual content.

> And then there is the ecosystem of bots, gadgets and other third-party tools
> which is based on wikitext; not only would moving away from wikitext be a huge
> maintenance burden, but it would also be replaced with something that is way
> less intuitive and actually harder to use (simple text operations are somewhat
> easier than fooling around with document trees).

Are you arguing here that it's easier for *bots* to edit wikitext than
XML?  Because that seems to be what you're saying, but I don't
understand how that would make any sense.  Wikitext is unparseable;
bots have to resort to fragile regexes and hope they mostly work.

> But replacing wikitext with some sort of internal representation that is
> unreadable for humans would be a huge mistake IMO.

It's not going to happen anytime soon in any case.


Re: [Wikitech-l] Proposal for editing template calls within pages

2009-09-24 Thread dgerard
2009/9/24 Aryeh Gregor simetrical+wikil...@gmail.com:
> On Thu, Sep 24, 2009 at 3:31 AM, Dmitriy Sintsov ques...@rambler.ru wrote:
>
>> So it would be menu- and icon-driven editing, where the hands have to move
>> from keyboard to mouse and vice versa, like MS Word. Not very handy for
>> programmers and people who prefer to do most of their tasks on the command line.
>
> Programmers rank far, far below normal people when it comes to
> usability prioritization.


Indeed, as do robot editors. This is part of the "in no way, not even
with logic twisting, is the impenetrability of wikitext a feature".


> As far as I'm concerned, a situation in which WYSIWYG is the only
> supported editing method would be far superior to the current
> situation.  If we could allow power users to edit manually, that would
> be a nice bonus.  Note that even if we use a format like XML that's a
> pain to manually edit, programmers can write up their own front-ends
> if they like -- they're programmers, after all!  And also note that as
> with most WYSIWYG editors, there would presumably be a slew of
> keyboard shortcuts for power users to memorize if they didn't want to
> use the mouse.


Realistically, Tim has already stated we're not throwing out wikitext,
because of the huge body of text already in it. WYSIWYG editing is
getting there bit by bit - FCKeditor would be fine on a fresh wiki
without the unspeakable atrocities inventive geeks have perpetrated
upon wikitext on en:wp, and it should continue to get better at dealing
with more obtusities.


- d.


Re: [Wikitech-l] JS2 design (was Re: Working towards branching MediaWiki 1.16)

2009-09-24 Thread Aryeh Gregor
On Thu, Sep 24, 2009 at 4:41 AM, Tim Starling tstarl...@wikimedia.org wrote:
> * Removes a few RTTs for non-pipelining clients

Do you mean to imply that there's such a thing as a pipelining client
on the real web?  (Okay, okay, Opera.)  This concern seems like it
outweighs all the others put together pretty handily -- especially for
script files that aren't at the end, which block page loading.

> * Automatically create CSS sprites?

That would be neat, but perhaps a bit tricky.

On Thu, Sep 24, 2009 at 9:13 AM, Platonides platoni...@gmail.com wrote:
> Also take into account, in the JavaScript redesign, JavaScript wiki-side
> extensions.
>
> [[MediaWiki:Common.js]] importScripts [[MediaWiki:Wikiminiatlas.js]],
> [[MediaWiki:niceGalleries.js]] and [[MediaWiki:buttonForRFA.js]], which
> then loads [[MediaWiki:buttonForRFA/lang.js]]... plus the several Gadgets
> the user may have enabled.
>
> On Wikimedia Commons I load 38 scripts located in the MediaWiki
> namespace (plus gen=js).
> I'm pretty sure loading all of them when they aren't in the cache slows
> things down much more than the organization of the core MediaWiki JavaScript.

Hmm, yeah.  This scheme needs to support combining admin-added
JavaScript, unless we can convince everyone to just put everything in
Common.js.  Maybe we could support some sort of transclusion
mechanism for JS files -- like rather than serving JS pages raw, MW
first substitutes templates (but nothing else)?

On Thu, Sep 24, 2009 at 10:00 AM, Tei oscar.vi...@gmail.com wrote:
> I see that ImageMagick can combine images into a single one.
>
> A single image means a single hit to Apache, so it only has to spawn once.
>
> On the client side, a single image can draw multiple elements with some
> ninja CSS stuff (background-position?).
>
> For such a thing to be possible in MediaWiki skins, what changes are needed?

This is image spriting, which Tim mentioned as a possibility.  It's
not a big issue for us right now because we use so few images, and
images don't block page parsing or rendering, but it might be worth
considering eventually.

On Thu, Sep 24, 2009 at 10:13 AM, Platonides platoni...@gmail.com wrote:
> I don't think it fits our normal image usage in pages. It could be
> tried for the images used by the skins, although I would worry about
> support for that CSS in legacy browsers.

Image spriting is very well-studied and works in all browsers of
import.  It's used by all the fancy high-performance sites, like
Google:

http://www.google.com/images/nav_logo7.png

It would be nice if we didn't have to go to such lengths to hack
around the fact that HTTP pipelining is broken, wouldn't it?


Re: [Wikitech-l] Proposal for editing template calls within pages

2009-09-24 Thread Aryeh Gregor
On Thu, Sep 24, 2009 at 10:48 AM, dgerard dger...@gmail.com wrote:
> Realistically, Tim has already stated we're not throwing out wikitext,
> because of the huge body of text already in it.

Not in the foreseeable future, but maybe someday.  We'd just need a
very careful and thorough migration plan.

> WYSIWYG editing is
> getting there bit by bit - FCKeditor would be fine on a fresh wiki
> without the unspeakable atrocities inventive geeks have perpetrated
> upon wikitext on en:wp, and it should continue to get better at dealing
> with more obtusities.

Funnily enough, I just talked to a user in #mediawiki asking why
FCKeditor didn't work right on his wiki when copy-pasting from MS
Word.



Re: [Wikitech-l] Wikitext vs. WYSIWYG (was: Proposal for editing template calls within pages)

2009-09-24 Thread rarohde
On Thu, Sep 24, 2009 at 7:27 AM, Aryeh Gregor simetrical+wikil...@gmail.com wrote:
[snip]

> Last I heard, by the way, even now most actual *content* is added by
> occasional contributors.  Power users may have more edits, but that
> doesn't mean they're the most important ones.

[snip]

It may depend on your definition of "occasional contributor" and
"power user", but the way I tend to think about such distinctions
would suggest your statement is false.  I've never been able to do the
analysis directly for enwiki, because it is too large and lacks
appropriate dumps, but looking at other large Wikipedias suggests that,
as a rule of thumb, about 70% of article content (measured by character
count) comes from accounts with more than 1000 edits to articles.
Only ~15% of content originates from people with 100 article edits or
fewer.  In practice, adding sentences, paragraphs, sections, and
entirely new articles is something that most people have to ease
their way into.  In addition, young editors who try to add large
blocks of text too early in their career often find their content is
reverted because of writing style or formatting problems.  So, the
creation of new blocks of content tends to be primarily accomplished
by experienced editors.

You are right that the multitude of drive-by editors willing to do
spell checking and make other small edits is a great resource, and
should be encouraged.  However, I would suggest that for the expansion
and long-term development of Wikipedia it is the retention of existing
power users and the development of new ones that is most important.

However, making it easier for people to first start editing should
also ultimately lead to more potential power editors, so I think the
goals are generally compatible.  There is no reason why making it
easier for newbies should ever be anything other than a net benefit to
everyone.

-Robert Rohde



Re: [Wikitech-l] JS2 design (was Re: Working towards branching MediaWiki 1.16)

2009-09-24 Thread Tim Starling
Tei wrote:
> Possibly off-topic here:
>
>
> I see that ImageMagick can combine images into a single one.
>
> A single image means a single hit to Apache, so it only has to spawn once.
>
> On the client side, a single image can draw multiple elements with some
> ninja CSS stuff (background-position?).

People have taken to calling that the "CSS sprite" technique; I
mentioned it as a possibility in my original post.

http://www.alistapart.com/articles/sprites

I always thought the defining characteristic of a sprite was that it
moved around the screen, not that it was copied from a grid, but there
you have it.

-- Tim Starling




Re: [Wikitech-l] Proposal for editing template calls within pages

2009-09-24 Thread yaron57
Hi,

Just to clarify a few things:

- yes, it's important to distinguish between the editing of templates and
the editing of template calls. Most people understand that distinction, but
just to clarify, this project would allow for editing of template *calls*,
while template pages themselves would just get extended with a new XML file.

- yes, users could opt out of it and stick with standard wiki-text editing.

- I don't think there's any support for replacing wiki-text with XML, by the
way.

- there's no way currently within templates to specify the type of a
parameter (that is, whether it's a string, date, enumeration, etc.),
descriptions of parameters, etc.; that's why something like the XML subpage
is needed. XML seems like a good solution for the task; and the German
Wikipedia's implementation provides a good proof of concept for it.

- this project isn't about getting WYSIWYG onto Wikipedia, although I think
it's a step toward that goal: first, because it would put some sort of
JavaScript-enabled editor in place of the standard edit page textarea;
and second, because, as far as I know, handling of template calls is one of
the big stumbling blocks (though not the only one) preventing WYSIWYG from
getting used on Wikimedia projects.

-Yaron


Re: [Wikitech-l] JS2 design (was Re: Working towards branching MediaWiki 1.16)

2009-09-24 Thread Jared Williams
 

> -----Original Message-----
> From: wikitech-l-boun...@lists.wikimedia.org
> [mailto:wikitech-l-boun...@lists.wikimedia.org] On Behalf Of
> Aryeh Gregor
> Sent: 24 September 2009 15:48
> To: Wikimedia developers
> Subject: Re: [Wikitech-l] JS2 design (was Re: Working towards
> branching MediaWiki 1.16)
>
> On Thu, Sep 24, 2009 at 4:41 AM, Tim Starling
> tstarl...@wikimedia.org wrote:
>> * Removes a few RTTs for non-pipelining clients
>
> Do you mean to imply that there's such a thing as a
> pipelining client on the real web?  (Okay, okay, Opera.)
> This concern seems like it outweighs all the others put
> together pretty handily -- especially for script files that
> aren't at the end, which block page loading.
>
>> * Automatically create CSS sprites?
>
> That would be neat, but perhaps a bit tricky.

Just trying to think how it'd work.

Given a CSS selector and an image, we should be able to construct a
stylesheet which sets the background property of the CSS rules from a
single image:

(#toolbar-copy, toolbar-copy.png)
(#toolbar-copy:hover, toolbar-copy-hover.png)

And the generated stylesheet would get concatenated with other
stylesheets.
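To sketch how that could work (hypothetical helper code; only the two
selectors and image names above come from the example, everything else is
illustrative), a generator could stack the source icons vertically in one
sprite image and emit one rule per selector, with background-position
picking out the right slice:

    function makeSpriteCss( spriteUrl, icons ) {
        var css = '';
        var y = 0;
        for ( var i = 0; i < icons.length; i++ ) {
            // Each icon occupies a horizontal band of the sprite,
            // selected by a negative vertical background offset.
            css += icons[ i ].selector + ' {\n'
                + '\tbackground: url(' + spriteUrl + ') no-repeat 0 -' + y + 'px;\n'
                + '\twidth: ' + icons[ i ].width + 'px; height: ' + icons[ i ].height + 'px;\n'
                + '}\n';
            y += icons[ i ].height;
        }
        return css;
    }

    // e.g. two toolbar states merged into a hypothetical toolbar-sprite.png:
    makeSpriteCss( 'toolbar-sprite.png', [
        { selector: '#toolbar-copy', width: 22, height: 22 },
        { selector: '#toolbar-copy:hover', width: 22, height: 22 }
    ] );

The same (selector, image) pairs could then drive ImageMagick (or similar)
to build the combined image, and the output CSS would be concatenated with
the other stylesheets as described above.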

Jared




Re: [Wikitech-l] Proposal for editing template calls within pages

2009-09-24 Thread Dmitriy Sintsov
* dgerard dger...@gmail.com [Thu, 24 Sep 2009 15:48:16 +0100]:
> Realistically, Tim has already stated we're not throwing out wikitext,
> because of the huge body of text already in it. WYSIWYG editing is
> getting there bit by bit - FCKeditor would be fine on a fresh wiki
> without the unspeakable atrocities inventive geeks have perpetrated
> upon wikitext on en:wp, and it should continue to get better at dealing
> with more obtusities.

It should be possible to have wikitext and XML in parallel, even mixed -
there are already parser tags for extensions, and I remember Robert Rohde
wrote that they can be patched to work recursively.
Dmitriy



Re: [Wikitech-l] Proposal for editing template calls within pages

2009-09-24 Thread Dmitriy Sintsov
* Aryeh Gregor simetrical+wikil...@gmail.com [Thu, 24 Sep 2009 09:57:27 -0400]:
> Programmers rank far, far below normal people when it comes to
> usability prioritization.  Programmers can use WYSIWYG just as well as
> anyone else, even if it's not as powerful as they'd like.  FWIW, on
> forums I go to where I can use either BB code or WYSIWYG, I use
> WYSIWYG most of the time -- it's just more convenient.  I only resort
> to BB code to work around deficiencies in the WYSIWYG editor.

I prefer to use BB code and rarely use forum icons. That's a matter of 
personal choice.

> Even for programmers, learning a new syntax is a significant issue.  A
> few months ago I spoke to a programmer I know who tried editing
> Wikipedia a few times.  He got lost in the wikitext.
The only really complex part of wikitext is templates - nested,
sometimes really weird subst and so on. I remember reading "Advanced
Templates" at meta and had the feeling I was reading something really
cryptic - I am still not good at making complex templates. Tables,
links, text formatting and images are easy to me (there were great guides
like "Advanced tables" at meta back in v1.9.3..v1.10, when I started
to use MediaWiki). Of course it doesn't matter, because I am hardly
representative.

> He knew he could
> figure out how to make the correction he wanted if he had to, but in
> practice, he just gave up without putting in the effort.  It was too
> much of a barrier to entry to overcome his weak interest in fixing the
> error he saw.

Complex templates are the only reason for that I can think of.

> As far as I'm concerned, a situation in which WYSIWYG is the only
> supported editing method would be far superior to the current
> situation.  If we could allow power users to edit manually, that would
> be a nice bonus.  Note that even if we use a format like XML that's a
> pain to manually edit, programmers can write up their own front-ends
> if they like -- they're programmers, after all!  And also note that as
> with most WYSIWYG editors, there would presumably be a slew of
> keyboard shortcuts for power users to memorize if they didn't want to
> use the mouse.

Maybe it should be possible to store everything in XML and map XML to
wikitext on the fly for those who like wikitext, and also to make the
future core compatible with old bots. E.g. {| would be stored internally
as <wmf:table>.
Dmitriy



Re: [Wikitech-l] JS2 design (was Re: Working towards branching MediaWiki 1.16)

2009-09-24 Thread Tim Starling
Aryeh Gregor wrote:
> On Thu, Sep 24, 2009 at 4:41 AM, Tim Starling tstarl...@wikimedia.org wrote:
>> * Removes a few RTTs for non-pipelining clients
>
> Do you mean to imply that there's such a thing as a pipelining client
> on the real web?  (Okay, okay, Opera.)  This concern seems like it
> outweighs all the others put together pretty handily -- especially for
> script files that aren't at the end, which block page loading.

It's not really as simple as that. The major browsers use concurrency
as a substitute for pipelining. Instead of queueing up multiple
requests in a single TCP connection and then waiting, they queue up
multiple requests in multiple connections and then wait. The effect is
very similar in terms of RTTs.

By concatenating, you eliminate concurrency in the browser. The effect
of this could actually be to make the initial page view slower,
despite the increased TCP window size at the end of the concatenated
request. The net performance impact would depend on all sorts of
factors, but you can see that the concurrent case would be faster when
the RTT is very long, the number of objects is large, the number of
connections is equally large, and the unmerged object size is slightly
smaller than the initial TCP window.

In a default install, it's not harmful to concatenate the
[[MediaWiki:*.css]] pages regardless of network distance, because the
pages are so small that even the merged object will fit in the initial
TCP window.

There is a potential reduction in RTT count due to concatenation,
that's why I included that item on the list. But it's client-dependent
and might not exist at all in the most common case. That's why I'm
focusing on other benefits of concatenation to justify why I'm doing it.

-- Tim Starling




Re: [Wikitech-l] JS2 design (was Re: Working towards branching MediaWiki 1.16)

2009-09-24 Thread Trevor Parscal
On 9/24/09 9:31 AM, Jared Williams wrote:
>>> * Automatically create CSS sprites?
>>
>> That would be neat, but perhaps a bit tricky.
>
> Just trying to think how it'd work.
>
> Given a CSS selector and an image, we should be able to construct a
> stylesheet which sets the background property of the CSS rules from a
> single image:
>
> (#toolbar-copy, toolbar-copy.png)
> (#toolbar-copy:hover, toolbar-copy-hover.png)
>
> And the generated stylesheet would get concatenated with other
> stylesheets.

I work with CSS sprites all the time, and have seen some automated
methods to go from individual images to sprites, but it's not nearly as
good an idea as it sounds.

I will go into depth, but if you don't really care (totally
understandable), just understand this:

*Automation of sprite creation and implementation is not an efficient
use of time in most cases.*

First, to use sprites, you have to be in a situation where CSS 
background-position attributes are not already being used. Take for
instance the CSS we use to place little icons next to links that are 
pointing to external URLs. Essentially we set a background image to be 
positioned right center and then move the text out of the way with 
padding-right:18px. If you were to sprite this image, you could 
perhaps use a vertical sprite (images tiled vertically only) but then 
when the user adjusts the size of the text in their browser they start 
seeing multiple images on the right. You could add more space between 
the images so that the text could be pretty big before you start seeing 
the other icons, but how much space is enough? What limit on text-size 
adjustment should we declare? Does the extra space between the icons 
introduce a significant amount of additional data? (maybe not much with 
PNG compression techniques, but it does add something) In many other 
cases the background position in both X and Y are being used already so 
sprites are not a possibility at all.

To use sprites like Google does, you would need to change the HTML
output to accommodate the technique. For instance you could insert a
fixed-size float:right div as an icon at the end of the link, but
then the elegant way that we apply styles to such links (rules like
a[href^="http://"]) becomes useless... We would have to make changes to the
output of the parser for purely aesthetic reasons (evil), or perform
client-side DOM manipulations (requiring JavaScript to be enabled
just to see the external link icon - also evil) --- this is getting messy.

My point is not that sprites are bad; it's that they aren't always an
option, and they take a lot of careful design of CSS, HTML and image
resources to get working properly. Automating them as is starting to be
proposed here involves inventing some sort of instruction set that a
computer can read and assemble sprites from, but the problem is usually
so complex that such a language would take much more time to invent,
create parsers for, test and maintain than to just do the sprites by hand.

Automating sprite creation is still a great idea, but it needs to be
done in more isolated and predictable cases, like generating toolbar
icons. This case is more friendly to automation because it's dealing
with fixed height and width images that are always displayed in the
browser at the same size no matter what. These icons are currently stored
in separate files, so merging them into a single file and generating CSS
code that defines the offsets for them would be a great use of
automation! However, even this case has its issues. It makes the
toolbar code more complex, because we have to support sprite-based images
as well as non-sprite-based images (so that users can still customize
the toolbar), and we have to handle the naming of the selectors of the
generated CSS in some way that won't cause confusion or namespace collisions.

Finally, the PNG or GIF files that are created by things like
ImageMagick are larger (in bytes) than images compressed by hand (using
image manipulation software). Even pngcrush or similar utilities fail to
outperform manual image compression. The reason is that images can be
reduced in size, but doing so reduces the quality (fewer
colors in the palette make the image look more grainy; aggressive JPEG
compression makes the image look more blocky). When performing image
compression manually, you use your eyes and the image processing in your
brain to decide where the line should be drawn between quality and
optimization - automated solutions I've used seem to either draw this
line arbitrarily or err on the side of quality at the cost of optimal
compression.

So - not only do the CSS and HTML need close attention when working
with sprites, but the image optimization process does as well.

Again, I like sprites a lot! But in reality, they are an optimization
technique that needs careful attention and can cause problems if done
improperly.

- Trevor

Re: [Wikitech-l] LocalisationUpdate and ProofreadPage second try tomorrow; Collection too

2009-09-24 Thread Brion Vibber
On 9/23/09 5:02 PM, Brion Vibber wrote:
> ThomasV has fixed up some bad queries in ProofreadPage, and it should be
> ready to go again; the updated version has much more advanced index
> support and looks pretty spiffy. :)

ProofreadPage updates have been put live, and aren't killing servers 
this time. :) Thomas is investigating a reported compat issue with Opera.

-- brion



Re: [Wikitech-l] LocalisationUpdate and ProofreadPage second try tomorrow; Collection too

2009-09-24 Thread Brion Vibber
On 9/23/09 5:02 PM, Brion Vibber wrote:
> Roan's redone LU to store the message updates in serialized files
> instead of the database, which we can sync locally to web servers and
> should perform much better; I'll also do a more gradual test rollout so
> we can scale it back more gracefully if we have problems again.

Ok, the updated LocalisationUpdate is running on test.wikipedia.org and 
aa.wikipedia.org; have run one update but haven't set it up as a cron 
job yet.

Will roll out progressively to more client wikis later while watching 
the CPU. :)

Config details at http://wikitech.wikimedia.org/view/LocalisationUpdate

-- brion



Re: [Wikitech-l] JS2 design (was Re: Working towards branching MediaWiki 1.16)

2009-09-24 Thread Michael Dale
~some comments inline~

Tim Starling wrote:

[snip]
> I started off working on fixing the coding style and the most glaring
> errors from the JS2 branch, but I soon decided that I shouldn't be
> putting so much effort into that when a lot of the code would have to
> be deleted or rewritten from scratch.

I agree there are some core components that should be separated out and
re-factored. And some core pieces that you're probably focused on do need
to be removed & rewritten, as they have aged quite a bit (parts of
mv_embed.js were created in SoC '06)... I did not focus on the ~best~
core loader that could have been created; I have just built on what I
already had available, which has worked reasonably well for the
application set that I was targeting. It's been an iterative process,
which I feel is moving in the right direction, as I will outline below.

Obviously more input is helpful and I am open to implementing most of
the changes you describe, as they make sense. But exclusion and dismissal
may be less helpful... unless that is your targeted end, in which
case just say so ;)

It's normal for a 3rd-party observer to say the whole system should be
scrapped and rewritten. Of course, starting from scratch makes it much
easier to design an ideal system and what it should/could be.

> I did a survey of script loaders in other applications, to get an idea
> of what features would be desirable. My observations came down to the
> following:
>
> * The namespacing in Google's jsapi is very nice, with everything
> being a member of a global google object. We would do well to
> emulate it, but migrating all JS to such a scheme is beyond the scope
> of the current project.

You somewhat contradict this approach by recommending against class
abstraction below... i.e. how will you cleanly load components and
dependencies if not by a given name?

I agree we should move things into a global object, i.e. $j, and all our
components / features should extend that object (like jQuery plugins).
That is the direction we are already going in.

Dependency loading is not really beyond the scope... we are already
supporting that. If you check out the mv_jqueryBindings function in
mv_embed.js, you will see we have loader calls integrated into the jQuery
bindings. This integrates loading the high-level application interfaces
into their interface call.

The idea is to move more and more of the structure of the application
into that system. So right now mwLoad is a global function, but it should
be re-factored into the jQuery space and be called via $j.load();
> * You need to deal with CSS as well as JS. All the script loaders I
> looked at did that, except ours. We have a lot of CSS objects that
> need concatenation, and possibly minification.

Brion did not set that as a high priority when I inquired about it, but of
course we should add in style grouping as well. It's not like I said we
should exclude that from our script-loader; it's just a matter of setting
priority, and I agree this is high priority.

> * JS loading can be deferred until near the </body> tag or until the
> DOMContentLoaded event. This means that empty-cache requests will
> render faster. WordPress places emphasis on this.

True. I agree that we should put the script includes at the bottom. Also,
all non-core js2 scripts are already loaded via the DOMContentLoaded ready
event. Ideally we should only provide loaders and maybe some small bit
of configuration for the client-side applications they provide, as
briefly described here:
http://www.mediawiki.org/wiki/JS2_Overview#How_to_structure_your_JavaScript_application

> * Dependency tracking is useful. The idea is to request a given
> module, and all dependencies of that module, such as other scripts,
> will automatically be loaded first.

As mentioned above, we do some dependency tracking via the jQuery binding
helpers that do that setup internally on a per-application-interface level.
We could add that convention directly into the script-loader function if
desired, so that on a per-class level we include dependencies; e.g.
mwLoad('ui.dialog') would know to load ui.core etc.
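A minimal sketch of that idea (mwLoad and the ui.* names appear in the
discussion; the dependency map and resolver below are hypothetical
illustrations, not existing code):

    var moduleDependencies = {
        'ui.dialog': [ 'ui.core', 'ui.drag' ],
        'ui.drag': [ 'ui.core' ]
    };

    function resolveLoadOrder( module, seen, order ) {
        if ( seen[ module ] ) {
            return; // already scheduled
        }
        seen[ module ] = true;
        var deps = moduleDependencies[ module ] || [];
        for ( var i = 0; i < deps.length; i++ ) {
            // dependencies are scheduled before the module itself
            resolveLoadOrder( deps[ i ], seen, order );
        }
        order.push( module );
    }

    var order = [];
    resolveLoadOrder( 'ui.dialog', {}, order );
    // order is now [ 'ui.core', 'ui.drag', 'ui.dialog' ] -- the list a
    // loader like mwLoad() could fetch (and concatenate) in one request.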


> I then looked more closely at the current state of script loading in
> MediaWiki. I made the following observations:
>
> * Most linked objects (styles and scripts) on a typical page view come
> from the Skin. If the goal is performance enhancement, then working on
> the skins and OutputPage has to be a priority.

Agreed. The script-loading was more urgent for my application task set,
but for the common case of per-page-view performance, CSS grouping has
bigger wins.

> * The class abstraction as implemented in JS2 has very little value
> to PHP callers. It's just as easy to use filenames.

The idea with class abstraction is that you don't know what script set
you have available at any given time. Maybe one script included
ui.resizable and ui.move, and now your script depends on ui.resizable
and ui.move and ui.drag... your loader call will only include ui.drag
(since the

Re: [Wikitech-l] Wikitext vs. WYSIWYG (was: Proposal for editing template calls within pages)

2009-09-24 Thread Aryeh Gregor
On Thu, Sep 24, 2009 at 11:20 AM, rarohde raro...@gmail.com wrote:
> It may depend on your definition of "occasional contributor" and
> "power user", but the way I tend to think about such distinctions
> would suggest your statement is false.  I've never been able to do the
> analysis directly for enwiki, because it is too large and lacks
> appropriate dumps, but looking at other large Wikipedias suggests that,
> as a rule of thumb, about 70% of article content (measured by character
> count) comes from accounts with more than 1000 edits to articles.
> Only ~15% of content originates from people with 100 article edits or
> fewer.

Do these statistics take into account things like vandalism
reversions?  Also, how do they handle anonymous users -- are they
summed up by edit count like anyone else?  I distinctly remember
seeing a study conclude that most of the actual content comes from
users with few edits, but I can't recall where or how long ago
(possibly two or three years).

Regardless, the point remains that heavy contributors only exist
because they started out as new users.  Just because you're a power
user type of person doesn't mean you'll be less daunted by wikimarkup
if you've never seen it before -- any barrier to entry is a problem.
(Otherwise, why not require registration too?  That's probably
*easier* than understanding wikimarkup for most people.)  And of
course, a lot of contributors that Wikipedia would really like to
encourage are people like academics in the humanities, say, who can't
be expected to be particularly comfortable with computers.  How much
of the bias toward science and technology in Wikipedia is because of
wikimarkup?


Re: [Wikitech-l] Proposal for editing template calls within pages

2009-09-24 Thread Aryeh Gregor
On Thu, Sep 24, 2009 at 12:47 PM, Dmitriy Sintsov ques...@rambler.ru wrote:
> The only really complex part of wikitext is templates - nested,
> sometimes really weird subst and so on.

Templates and refs are by far the worst offenders, for sticking tons
of content in the page that doesn't have any obvious relationship to
the actual content.  Getting rid of them would be a huge step forward.
 But stuff like '''bold''' and ==headings== are also a real problem.
Everything unexpected like that is going to increase the risk that a
new user will get worried he doesn't know what he's doing, and give up
rather than risk breaking something or put effort into figuring out
what to do.  If you give *anyone*[1] a WYSIWYG interface, they'll know
how it works, because they're used to it from Word and whatnot.
That's just not true of wikitext, no matter how simple it is once you
*already* understand it.

[1] Yes, yes, I mean anyone who uses computers much at all, not
farmers in rural Africa or my maternal grandmother.



Re: [Wikitech-l] Proposal for editing template calls within pages

2009-09-24 Thread David Gerard
2009/9/24 Aryeh Gregor simetrical+wikil...@gmail.com:
> On Thu, Sep 24, 2009 at 10:48 AM, dgerard dger...@gmail.com wrote:
>
>> WYSIWYG editing is
>> getting there bit by bit - FCKeditor would be fine on a fresh wiki
>> without the unspeakable atrocities inventive geeks have perpetrated
>> upon wikitext on en:wp, and it should continue to get better at dealing
>> with more obtusities.
>
> Funnily enough, I just talked to a user in #mediawiki asking why
> FCKeditor didn't work right on his wiki when copy-pasting from MS
> Word.


*facepalm* And then there's the obvious things users will do that I
hadn't thought of, yes.


- d.



Re: [Wikitech-l] Proposal for editing template calls within pages

2009-09-24 Thread David Gerard
2009/9/24 Dmitriy Sintsov ques...@rambler.ru:

> The only really complex part of wikitext is templates - nested,
> sometimes really weird subst and so on. I remember reading "Advanced
> Templates" at meta and had the feeling I was reading something really
> cryptic - I am still not good at making complex templates. Tables,
> links, text formatting and images are easy to me (there were great guides
> like "Advanced tables" at meta back in v1.9.3..v1.10, when I started
> to use MediaWiki). Of course it doesn't matter, because I am hardly
> representative.


Indeed. I can only recommend that you spend more time with people who
can't work computers very well. We also need as editors people who are
experts in things that you are a dunce in, but who are dunces in the
computers that you are an expert in. There are a lot more people who
can't work computers very well than who can.


- d.



Re: [Wikitech-l] Wikitext vs. WYSIWYG (was: Proposal for editing template calls within pages)

2009-09-24 Thread David Gerard
2009/9/24 Aryeh Gregor simetrical+wikil...@gmail.com:

> Do these statistics take into account things like vandalism
> reversions?  Also, how do they handle anonymous users -- are they
> summed up by edit count like anyone else?  I distinctly remember
> seeing a study conclude that most of the actual content comes from
> users with few edits, but I can't recall where or how long ago
> (possibly two or three years).


Aaron Swartz.

http://www.aaronsw.com/weblog/whowriteswikipedia

Most of the edits are done by a very small group of regulars.

But most of the actual text is contributed by drive-by contributors
and then beaten into shape by the regulars.


- d.


Re: [Wikitech-l] JS2 design (was Re: Working towards branching MediaWiki 1.16)

2009-09-24 Thread Aryeh Gregor
On Thu, Sep 24, 2009 at 12:49 PM, Tim Starling tstarl...@wikimedia.org wrote:
> It's not really as simple as that. The major browsers use concurrency
> as a substitute for pipelining. Instead of queueing up multiple
> requests in a single TCP connection and then waiting, they queue up
> multiple requests in multiple connections and then wait. The effect is
> very similar in terms of RTTs.

Except that even on a page with 30 or 40 includes, the number of
concurrent requests will typically be something like 4 or 8, so RTT
becomes a huge issue if you have lots of includes.  Not to mention
that most browsers until very recently wouldn't do concurrency at all
for scripts -- script loads block parsing, so no new requests start
when a script is still loading or executing.  If you're talking about
cutting four includes down to one, then maybe the benefit would be
insignificant or even negative, but if you're talking about cutting 30
includes down to ten, AFAIK the benefit just from RTT should swamp all
other considerations.  This is why Yahoo!'s #1 rule for good front-end
performance is "Minimize HTTP Requests":

http://developer.yahoo.com/performance/rules.html
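
(Back-of-envelope, with assumed numbers: 30 includes fetched over 6
parallel connections need at least 30/6 = 5 serialized round trips, so
at a 100 ms RTT that is ~500 ms before any transfer time, versus ~100 ms
for one combined request.)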

 you can see that the concurrent case would be faster when
 the RTT is very long, the number of objects is large, the number of
 connections is equally large

This last point is the major failure here.  If browsers really
requested everything in parallel, then we wouldn't need any of these
hacks -- not combining, not spriting.  But they don't, they request
very few things in parallel.

 There is a potential reduction in RTT count due to concatenation,
 that's why I included that item on the list. But it's client-dependent
 and might not exist at all in the most common case.

AFAIK this is not true in practice.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Proposal for editing template calls within pages

2009-09-24 Thread Brian
On Thu, Sep 24, 2009 at 8:48 AM, dgerard dger...@gmail.com wrote:

 2009/9/24 Aryeh Gregor simetrical+wikil...@gmail.com:
  On Thu, Sep 24, 2009 at 3:31 AM, Dmitriy Sintsov ques...@rambler.ru wrote:

  So it would be menu and icon-driven editing, where the hands should move
  from keyboard to mouse and vice versa, like MS Word. Not very handy for
  programmers and people who prefer to do most of tasks in command line.

  Programmers rank far, far below normal people when it comes to
  usability prioritization.


 Indeed, as do robot editors. This is part of the "no way, not even
 with logic twisting, is the impenetrability of wikitext a feature".


  As far as I'm concerned, a situation in which WYSIWYG is the only
  supported editing method would be far superior to the current
  situation.  If we could allow power users to edit manually, that would
  be a nice bonus.  Note that even if we use a format like XML that's a
  pain to manually edit, programmers can write up their own front-ends
  if they like -- they're programmers, after all!  And also note that as
  with most WYSIWYG editors, there would presumably be a slew of
  keyboard shortcuts for power users to memorize if they didn't want to
  use the mouse.


 Realistically, Tim has already stated we're not throwing out wikitext
 because of the huge body of text already in it. WYSIWYG editing is
 getting there bit by bit - FCKeditor would be fine on a fresh wiki
 without the unspeakable atrocities inventive geeks have perpetrated
 upon wikitext on en:wp and should continue to get better at dealing
 with more obtusities.


 - d.

This round the Usability Initiative got 800,000 dollars. That's a load of
money. If the Foundation decides that it wants to fix the problem the
correct way then it can. And it can start at any time! We just need to agree
on a solution.

We can't fix the problem by looking backwards at the wikitext that has
already been produced along with the language definition (5,000 lines of
parser code) and saying that the problem is simply intractable. In fact, the
problem does not depend in any way on the quantity of wikitext that has been
produced - it only depends on an understanding (if not a definition) of the
language as it currently exists. Hard work but not, at all, impossible.

It doesn't seem productive to me to start by looking at the problem from
that looking-backwards angle of "oh my god, there is so much wikitext
written in this language that isn't even defined". It would be more
productive to first decide what we would like to see. For example:

* Wikitext parsing would be much faster if the language was well defined
and we could use flex/bison/etc. (a toy sketch follows below)
* Usability would be greatly enhanced if all wikitext was easy to
understand, even for newcomers. This includes a clear breakdown of the
problem of including/querying remote resources and a clean solution to
that problem (unlike templates).
* If the language is well defined, then we can have a WYSIWYG editor
whose behavior is well defined.
* If the language is well defined, we can have multiple fully compatible
parser implementations: for example, flex/bison, PHP, Python and,
importantly, JavaScript.
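
To make "well defined, therefore cheaply parseable" concrete, here is a
toy, untested PHP sketch for a deliberately unambiguous heading/paragraph
subset (not a proposal for the real grammar; the function name is
invented):

// Toy parser for a tiny, unambiguous wikitext subset:
//   heading   = "==" text "==" end-of-line
//   paragraph = any other non-empty line
// Because every input has exactly one parse, any implementation
// (flex/bison, PHP, Python, JavaScript) must agree on the output.
function parseToySubset( $source ) {
    $html = '';
    foreach ( explode( "\n", $source ) as $line ) {
        if ( preg_match( '/^==\s*(.+?)\s*==$/', $line, $m ) ) {
            $html .= '<h2>' . htmlspecialchars( $m[1] ) . "</h2>\n";
        } elseif ( $line !== '' ) {
            $html .= '<p>' . htmlspecialchars( $line ) . "</p>\n";
        }
    }
    return $html;
}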

After we have together designed and implemented the new solution we could
start work on the isomorphic mapping between old school wikitext and new
school wikitext. Or they can happen in parallel. It certainly doesn't seem
helpful to our movement to settle on the existing solution, which has so
many flaws, when we can easily imagine so many other solutions which are
clearly better.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Proposal for editing template calls within pages

2009-09-24 Thread Aryeh Gregor
On Thu, Sep 24, 2009 at 4:03 PM, Brian brian.min...@colorado.edu wrote:
 This round the Usability Initiative got 800,000 dollars. That's a load of
 money. If the Foundation decides that it wants to fix the problem the
 correct way then it can. And it can start at any time! We just need to agree
 on a solution.

Only at the expense of sacrificing some, most, or all of the work
they're already doing.  $800,000 is not a particularly large sum for a
programming project, and the usability project had to prioritize it.
They decided to target things other than WYSIWYG first.  I'd be
inclined to think that's wise, without having considered the issue
very deeply.  Full WYSIWYG *is* really needed, but it would be
unjustifiably expensive when there are so many more minor improvements
that could be (and are being) made much more easily.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] JS2 design (was Re: Working towards branching MediaWiki 1.16)

2009-09-24 Thread Trevor Parscal
On 9/24/09 1:41 AM, Tim Starling wrote:
 Trevor Parscal wrote:

 If you are really doing a JS2 rewrite/reorganization, would it be
 possible for some of us (especially those of us who deal almost
 exclusively with JavaScript these days) to get a chance to ask
 questions/give feedback/help in general?
  
 I've mostly been working on analysis and planning so far. I made a few
 false starts with the code and so ended up planning in a more detailed
 way than I initially intended. I've discussed various issues with the
 people in #mediawiki, including our resident client-side guru Splarka.

 I started off working on fixing the coding style and the most glaring
 errors from the JS2 branch, but I soon decided that I shouldn't be
 putting so much effort into that when a lot of the code would have to
 be deleted or rewritten from scratch.

 I did a survey of script loaders in other applications, to get an idea
 of what features would be desirable. My observations came down to the
 following:

 * The namespacing in Google's jsapi is very nice, with everything
 being a member of a global google object. We would do well to
 emulate it, but migrating all JS to such a scheme is beyond the scope
 of the current project.

 * You need to deal with CSS as well as JS. All the script loaders I
 looked at did that, except ours. We have a lot of CSS objects that
 need concatenation, and possibly minification.

 * JS loading can be deferred until near the </body> tag or until the
 DOMContentLoaded event. This means that empty-cache requests will
 render faster. WordPress places emphasis on this.

 * Dependency tracking is useful. The idea is to request a given
 module, and all dependencies of that module, such as other scripts,
 will automatically be loaded first.



 I then looked more closely at the current state of script loading in
 MediaWiki. I made the following observations:

 * Most linked objects (styles and scripts) on a typical page view come
 from the Skin. If the goal is performance enhancement, then working on
 the skins and OutputPage has to be a priority.

 * The class abstraction as implemented in JS2 has very little value
 to PHP callers. It's just as easy to use filenames. It could be made
 more useful with features such as dependency tracking, better
 concatenation and CSS support. But it seems to me that the most useful
 abstraction for PHP code would be for client-side modules to be
 multi-file, potentially with supporting PHP code for each module.

 * Central registration of all client-side resources in a global
 variable would be onerous and should be avoided.

 * Dynamic requests such as [[MediaWiki:Handheld.css]] have a large
 impact on site performance and need to be optimised. I'm planning a
 new interface, similar to action=raw, allowing these objects to be
 concatenated.



 The following design documents are in my user space on mediawiki.org:

 http://www.mediawiki.org/wiki/User:Tim_Starling/CSS_and_JS_caller_survey_(r56220)
- A survey of MW functions that add CSS and JS, especially the
 terribly confusing situation in Skin and OutputPage

 http://www.mediawiki.org/wiki/User:Tim_Starling/JS_load_order_issues_(r56220)
- A breakdown of JS files by the issues that might be had in moving
 them to the footer or DOMContentLoaded. I favour a conservative
 approach, with wikibits.js and the site and user JS staying in the
 head.

 http://www.mediawiki.org/wiki/User:Tim_Starling/Proposed_modularisation_of_client-side_resources
- A proposed reorganisation of core scripts (Skin and OutputPage)
 according to the MW modules they are most associated with.



 The object model I'm leaning towards on the PHP side is:

 * A client-side resource manager (CSRM) class. This would be
 responsible for maintaining a list of client-side resources that have
 been requested and need to be sent to the skin. It would also handle
 caching, distribution of incoming dynamic requests, dependencies,
 minification, etc. This is quite a complex job and might need to be
 split up somewhat.

 * A hierarchy of client-side module classes. A module object would
 contain a list of files, dependencies and concatenation hints. Objects
 would be instantiated by parent classes such as skins and special
 pages, and added to the CSRM. Classes could be registered globally,
 and then used to generate dynamic CSS and JS, such as the user
 preference stylesheet.

 * The module base class would be non-abstract and featureful, with a
 constructor that accepts an array-based description. This allows
 simple creation of modules by classes with no interest in dynamic
 script generation.

 * A new script loader entry point would provide an interface to
 registered modules.



 There are some design decisions I still have to make, which are tricky
 due to performance tradeoffs:

 * With concatenation, there is the question of which files to combine
 and which to leave separate. I would like to have a combine
 parameter which is a string, 
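
[For concreteness: the array-based module description Tim mentions might
look something like this - purely illustrative and untested, all class,
field and file names invented:]

// A minimal, non-abstract base class taking an array description.
class ClientModule {
    public $name, $scripts, $styles, $dependencies;

    public function __construct( array $desc ) {
        $this->name = $desc['name'];
        $this->scripts = isset( $desc['scripts'] ) ? $desc['scripts'] : array();
        $this->styles = isset( $desc['styles'] ) ? $desc['styles'] : array();
        $this->dependencies = isset( $desc['dependencies'] )
            ? $desc['dependencies'] : array();
    }
}

// A skin or special page with no interest in dynamic script
// generation could then describe its resources in one call:
$module = new ClientModule( array(
    'name' => 'editToolbar',
    'scripts' => array( 'skins/common/edit.js' ),
    'styles' => array( 'skins/common/edit.css' ),
    'dependencies' => array( 'wikibits' ),
) );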

Re: [Wikitech-l] Proposal for editing template calls within pages

2009-09-24 Thread Brian
On Thu, Sep 24, 2009 at 2:22 PM, Aryeh Gregor
simetrical+wikil...@gmail.com wrote:

 On Thu, Sep 24, 2009 at 4:03 PM, Brian brian.min...@colorado.edu wrote:
  This round the Usability Initiative got 800,000 dollars. That's a load of
  money. If the Foundation decides that it wants to fix the problem the
  correct way then it can. And it can start at any time! We just need to
 agree
  on a solution.

 Only at the expense of sacrificing some, most, or all of the work
 they're already doing.  $800,000 is not a particularly large sum for a
 programming project, and the usability project had to prioritize it.
 They decided to target things other than WYSIWYG first.  I'd be
 inclined to think that's wise, without having considered the issue
 very deeply.  Full WYSIWYG *is* really needed, but it would be
 unjustifiably expensive when there are so many more minor improvements
 that could be (and are being) made much more easily.


I definitely think it's a good idea to go after the low-hanging fruit first,
which it sounds like is what they are doing with this 800k. Fixing the core
of the problem is definitely not low-hanging fruit - it's hard work. On the
other hand, the foundation just got a couple million in unrestricted funds,
and when I say that they can start fixing the problem at any time, I mean
they can seek out an additional grant if necessary for this specific issue.
Basically, what I am saying is that I don't agree with the perspective that
we should accept wikitext as it is and hack new fixes in on top of it. I
would like to see the foundation go out and try to fix this problem the
correct way, starting nowish.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Proposal for editing template calls within pages

2009-09-24 Thread Aryeh Gregor
On Thu, Sep 24, 2009 at 4:28 PM, Brian brian.min...@colorado.edu wrote:
 I definitely think it's a good idea to go after the low-hanging fruit
 first, which it sounds like is what they are doing with this 800k. Fixing
 the core of the problem is definitely not low-hanging fruit - it's hard
 work. On the other hand, the foundation just got a couple million in
 unrestricted funds, and when I say that they can start fixing the problem
 at any time, I mean they can seek out an additional grant if necessary
 for this specific issue. Basically, what I am saying is that I don't
 agree with the perspective that we should accept wikitext as it is and
 hack new fixes in on top of it. I would like to see the foundation go out
 and try to fix this problem the correct way, starting nowish.

They could do that.  I wouldn't be surprised if they start serious
WYSIWYG work in a year or two.  But there are a *lot* of things on
Wikipedia that could be improved.  Even with the big grants
Wikimedia's now getting, it operates on a budget less than 0.1% that
of some comparably large websites (like Google).

Right now I hope we're going to focus on getting more full-time
experienced programmers, like hiring a CTO and letting Brion become
only senior software architect.  We have lots of junior people doing
work, but code review is still a huge bottleneck AFAICT.  Just look at
the current discussion on JS2, for instance, or the outages caused by
performance problems that weren't caught before deployment.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] JS2 design (was Re: Working towards branchingMediaWiki 1.16)

2009-09-24 Thread Jared Williams
 

 -Original Message-
 From: wikitech-l-boun...@lists.wikimedia.org 
 [mailto:wikitech-l-boun...@lists.wikimedia.org] On Behalf Of 
 Trevor Parscal
 Sent: 24 September 2009 19:38
 To: wikitech-l@lists.wikimedia.org
 Subject: Re: [Wikitech-l] JS2 design (was Re: Working towards 
 branchingMediaWiki 1.16)
 
 On 9/24/09 9:31 AM, Jared Williams wrote:
  * Automatically create CSS sprites?
 
  That would be neat, but perhaps a bit tricky.
   
  Just trying to think how it'd work.
 
  Given a CSS selector and an image, one should be able to construct a 
  stylesheet which sets the background property of the CSS rules, and a 
  single image.
 
  (#toolbar-copy, toolbar-copy.png)
  (#toolbar-copy:hover, toolbar-copy-hover.png)
 
  And the generated stylesheet would get concatenated with other 
  stylesheets.
 

 Again, I like sprites a lot! But in reality, they are an 
 optimization technique that needs careful attention and can 
 cause problems if done improperly.

Providing CSS sprite support would (I guess) just be a service for
modules/extensions to use, as part of the proposed client resource
manager(?). So MediaWiki core or an extension could put in a request for
some stylesheet or JavaScript to be linked to, and it could also request
images, possibly via CSS sprites.

So I don't see how it should cause a problem.
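
Sketching what such a service might do - untested, GD-based, the function
name is invented, and real code would need caching and error handling:

// Stack the images vertically into one PNG and emit CSS that maps
// each selector to its offset in the sprite.
// $map: array( '#toolbar-copy' => 'toolbar-copy.png', ... )
function buildSprite( array $map, $spritePath ) {
    $images = array();
    $width = 0;
    $height = 0;
    foreach ( $map as $selector => $file ) {
        $img = imagecreatefrompng( $file );
        $images[$selector] = array( $img, $height );
        $width = max( $width, imagesx( $img ) );
        $height += imagesy( $img );
    }
    $sprite = imagecreatetruecolor( $width, $height );
    imagealphablending( $sprite, false );
    imagesavealpha( $sprite, true );
    imagefill( $sprite, 0, 0, imagecolorallocatealpha( $sprite, 0, 0, 0, 127 ) );
    $css = '';
    foreach ( $images as $selector => $info ) {
        list( $img, $y ) = $info;
        imagecopy( $sprite, $img, 0, $y, 0, 0, imagesx( $img ), imagesy( $img ) );
        $css .= "$selector { background: url($spritePath) 0 -{$y}px no-repeat; }\n";
    }
    imagepng( $sprite, $spritePath );
    return $css;
}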

Jared


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Wikitext vs. WYSIWYG (was: Proposal for editing template calls within pages)

2009-09-24 Thread Steve Bennett
On Thu, Sep 24, 2009 at 10:44 PM, Peter Gervai grin...@gmail.com wrote:
 Adding a GUI layer to wikitext is always okay, as long as it's
 possible to get rid of, since the majority of edits are not coming from
 new users, and losing flexibility for power users to get more newbies
 doesn't sound like a good deal to me.

There does seem to be a common presumption that we old-timers would
switch off the WYSIWYG and get straight into the wikitext. Speaking
for myself though, I'd love a good interface where I didn't feel
compelled to do that. I hate the amount of time it takes just to find
the point I want to edit. I'd love to be able to hide all the wikitext
that I'm not right in the process of editing. I can't really picture
such a beast yet, but hopefully someone else can.

Steve

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] JS2 design (was Re: Working towards branchingMediaWiki 1.16)

2009-09-24 Thread Trevor Parscal
On 9/24/09 1:40 PM, Jared Williams wrote:
 [...]
  Providing CSS sprite support would (I guess) just be a service for
  modules/extensions to use, as part of the proposed client resource
  manager(?). So MediaWiki core or an extension could put in a request for
  some stylesheet or JavaScript to be linked to, and it could also request
  images, possibly via CSS sprites.
 
  So I don't see how it should cause a problem.
 
  Jared

So you are saying that you believe a generic set of sprite-generation 
utilities are going to be able to completely overcome the issues I 
identified and be a better use of time (to design, develop and use) than 
just creating and using sprites manually?

- Trevor

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Proposal for editing template calls within pages

2009-09-24 Thread Brian
On Thu, Sep 24, 2009 at 2:38 PM, Aryeh Gregor
simetrical+wikil...@gmail.com wrote:

 On Thu, Sep 24, 2009 at 4:28 PM, Brian brian.min...@colorado.edu wrote:
  I definitely think it's a good idea to go after the low-hanging fruit
  first, which it sounds like is what they are doing with this 800k.
  Fixing the core of the problem is definitely not low-hanging fruit -
  it's hard work. On the other hand, the foundation just got a couple
  million in unrestricted funds, and when I say that they can start
  fixing the problem at any time, I mean they can seek out an additional
  grant if necessary for this specific issue. Basically, what I am saying
  is that I don't agree with the perspective that we should accept
  wikitext as it is and hack new fixes in on top of it. I would like to
  see the foundation go out and try to fix this problem the correct way,
  starting nowish.

 They could do that.  I wouldn't be surprised if they start serious
 WYSIWYG work in a year or two.  But there are a *lot* of things on
 Wikipedia that could be improved.  Even with the big grants
 Wikimedia's now getting, it operates on a budget less than 0.1% that
 of some comparably large websites (like Google).

 Right now I hope we're going to focus on getting more full-time
 experienced programmers, like hiring a CTO and letting Brion become
 only senior software architect.  We have lots of junior people doing
 work, but code review is still a huge bottleneck AFAICT.  Just look at
 the current discussion on JS2, for instance, or the outages caused by
 performance problems that weren't caught before deployment.


Working through the current backlog of bugs definitely counts as
low-hanging fruit, so I definitely agree that there should be more hackers
on hand at the foundation, and a CTO to guide them. On the other hand,
given the admitted size of the WYSIWYG problem and the large number of
existing smaller problems, it doesn't seem wise to try to shoehorn it into
the current usability initiative if we all agree that they already have
their work cut out for them. The solution we come up with, for example
this XML proposal for templates, could in practice end up being counter to
our long-term goals to standardize wikitext. It could quickly propagate
throughout the wikisphere, and then not only will we have to deal with
templates and parser functions, but with an entire form language as well.
I think something much simpler and cleaner than all of that is desirable.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] JS2 design (was Re: Working towards branchingMediaWiki 1.16)

2009-09-24 Thread Jared Williams
 

 -Original Message-
 From: wikitech-l-boun...@lists.wikimedia.org 
 [mailto:wikitech-l-boun...@lists.wikimedia.org] On Behalf Of 
 Trevor Parscal
 Sent: 24 September 2009 21:49
 To: wikitech-l@lists.wikimedia.org
 Subject: Re: [Wikitech-l] JS2 design (was Re: Working towards 
 branchingMediaWiki 1.16)
 
 On 9/24/09 1:40 PM, Jared Williams wrote:
 [...]
 
 So you are saying that you believe a generic set of 
 sprite-generation utilities are going to be able to 
 completely overcome the issues I identified and be a better 
 use of time (to design, develop and use) than just creating 
 and using sprites manually?
 
 - Trevor

I wouldn't say there are issues with CSS sprites, but there are
limitations which you have to be aware of before deciding to use
them, and which therefore do not need overcoming.

In the context of providing toolbar imagery for UIs like a WYSIWYG
editor, or for playing video or audio, or for simple image editing, they
can remove a lot of round tripping.

Jared





___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Proposal for editing template calls within pages

2009-09-24 Thread Platonides
Brian wrote:
 This round the Usability Initiative got 800,000 dollars. That's a load of
 money. If the Foundation decides that it wants to fix the problem the
 correct way then it can. And it can start at any time! We just need to agree
 on a solution.
 
 We can't fix the problem by looking backwards at the wikitext that has
 already been produced along with the language definition (5,000 lines of
 parser code) and saying that the problem is simply intractable. In fact, the
 problem does not depend in any way on the quantity of wikitext that has been
 produced - it only depends on an understanding (if not a definition) of the
 language as it currently exists. Hard work but not, at all, impossible.
 
...
 
 * wikitext parsing would be much faster if the language was well defined and
 we could use flex/bison/etc...

Have you read the archives?
It has been tried. Several times.
There's even a mailing list for that.

Getting a formal definition of ~90% of the wikitext syntax is easy. The
other 10% has driven everyone who has tried hard enough nuts, so far.

Keep trying, but build on what's already done.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Proposal for editing template calls within pages

2009-09-24 Thread David Gerard
2009/9/24 Platonides platoni...@gmail.com:
 Brian wrote:

 * Wikitext parsing would be much faster if the language was well defined
 and we could use flex/bison/etc.

 Have you read the archives?
 It has been tried. Several times.
 There's even a mailing list for that.
 Getting a formal definition of ~90% of the wikitext syntax is easy. The
 other 10% has driven everyone who has tried hard enough nuts, so far.
 Keep trying, but build on what's already done.


I suspect it'll be approached from the other end: FCKeditor will be
improved until it can do quite a lot of the stuff, and there'll
eventually be sufficiently little remaining that it can be
bot-converted to a form FCKeditor can handle.

At which point we formalise that and away we go with third-party parsers galore!

Well, I can dream.


- d.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Wikitext vs. WYSIWYG (was: Proposal for editing template calls within pages)

2009-09-24 Thread Robert Rohde
On Thu, Sep 24, 2009 at 12:50 PM, David Gerard dger...@gmail.com wrote:
 2009/9/24 Aryeh Gregor simetrical+wikil...@gmail.com:

 Do these statistics take into account things like vandalism
 reversions?  Also, how do they handle anonymous users -- are they
 summed up by edit count like anyone else?  I distinctly remember
 seeing a study conclude that most of the actual content comes from
 users with few edits, but I can't recall where or how long ago
 (possibly two or three years).


 Aaron Swartz.

 http://www.aaronsw.com/weblog/whowriteswikipedia

 Most of the edits are done by a very small group of regulars.

 But most of the actual text is contributed by drive-by contributors
 and then beaten into shape by the regulars.

Yes, I have seen that too.  My analysis, using a blame engine against
a Wikipedia's full history, suggests that Aaron's thesis is simply not
true in general.  There could be any number of reasons he got a
different result.  For example, he looked at only one article,
discussing a fairly well known person, which may not have been
representative of the bulk of wiki content; or things may be different
in English rather than, say, Russian (one of the wikis I used); or
things may have changed since 2006.  Whatever the reason, my
conclusion is that the core, highly active community (perhaps 25,000
accounts on a site the size of enwiki) contributes more than 50% of
the currently displayed text.
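
(For the curious, the core of a blame pass can be quite simple; here is a
toy PHP illustration - not my actual engine, which also has to handle
reverts, moved text and word-level diffs:)

// Credit each line of the final text to the first revision that
// introduced it; count surviving lines per author.
// $revisions: oldest-first list of array( 'author' => ..., 'text' => ... )
function toyBlame( array $revisions ) {
    $firstSeen = array();
    foreach ( $revisions as $rev ) {
        foreach ( explode( "\n", $rev['text'] ) as $line ) {
            $line = trim( $line );
            if ( $line !== '' && !isset( $firstSeen[$line] ) ) {
                $firstSeen[$line] = $rev['author'];
            }
        }
    }
    $credit = array();
    $final = end( $revisions );
    foreach ( explode( "\n", $final['text'] ) as $line ) {
        $line = trim( $line );
        if ( $line !== '' ) {
            $author = $firstSeen[$line];
            $credit[$author] = isset( $credit[$author] ) ? $credit[$author] + 1 : 1;
        }
    }
    return $credit; // author => number of surviving lines
}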

To answer Aryeh, yes, I paid attention to handling vandalism
reversions, and yes anons were tracked as if they were users.

I went into it expecting a result like that described in the blog
post, and came out with the opposite conclusion.

-Robert Rohde

PS. A full write-up of this analysis has been on my to-do list for a while now.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] JS2 design (was Re: Working towards branchingMediaWiki 1.16)

2009-09-24 Thread Trevor Parscal
On 9/24/09 2:34 PM, Jared Williams wrote:
 [...]

 So you are saying that you believe a generic set of
 sprite-generation utilities are going to be able to
 completely overcome the issues I identified and be a better
 use of time (to design, develop and use) than just creating
 and using sprites manually?

 - Trevor
  
  I wouldn't say there are issues with CSS sprites, but there are
  limitations which you have to be aware of before deciding to use
  them, and which therefore do not need overcoming.

  In the context of providing toolbar imagery for UIs like a WYSIWYG
  editor, or for playing video or audio, or for simple image editing,
  they can remove a lot of round tripping.

 Jared

Sounds like we agree. The issues weren't issues with sprites, they were 
issues with automating them.

- Trevor

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] Unicode characters in Chrome

2009-09-24 Thread Steve Bennett
I'm using Chrome 3.0.195.21, and have long found that some characters
in Wikipedia render as boxes. One example:
http://en.wikipedia.org/wiki/P_with_stroke renders as a box
(minuscule: a box)...

Now, I looked at http://en.wikipedia.org/wiki/Help:Special_characters
and the advice is not very useful or specific: it says that special
symbols should display properly without further configuration with ...
Safari and most other recent browsers. I tried installing GNU Unifont
and setting it as the default browser font, but that seems to be
overridden by MediaWiki anyway?

So anyway, I'm asking two questions:
1) What can I do to get more special characters to render correctly in Chrome?
2) Could/would anyone improve [[Help:Special characters]] to make it
clearer, more specific and correct?

Thanks,
Steve

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Proposal for editing template calls within pages

2009-09-24 Thread Brian
On Thu, Sep 24, 2009 at 4:05 PM, Platonides platoni...@gmail.com wrote:

 Brian wrote:
  This round the Usability Initiative got 800,000 dollars. That's a load of
  money. If the Foundation decides that it wants to fix the problem the
  correct way then it can. And it can start at any time! We just need to
 agree
  on a solution.
 
  We can't fix the problem by looking backwards at the wikitext that has
  already been produced along with the language definition (5,000 lines of
  parser code) and saying that the problem is simply intractable. In fact,
 the
  problem does not depend in any way on the quantity of wikitext that has
 been
  produced - it only depends on an understanding (if not a definition) of
 the
  language as it currently exists. Hard work but not, at all, impossible.
 
 ...
 
  * Wikitext parsing would be much faster if the language was well defined
  and we could use flex/bison/etc.

 Have you read the archives?
 It has been tried. Several times.
 There's even a mailing list for that.

  Getting a formal definition of ~90% of the wikitext syntax is easy. The
  other 10% has driven everyone who has tried hard enough nuts, so far.

  Keep trying, but build on what's already done.


Platonides, if you had read the archives you would know that I am very
familiar with previous work done on creating formal grammars for wikitext,
and that I know it would take a redesign of certain parts of the language.
Of course, this information is embedded in the very text you quote.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] LocalisationUpdate and ProofreadPage second try tomorrow; Collection too

2009-09-24 Thread Brion Vibber
On 9/24/09 12:23 PM, Brion Vibber wrote:
 Ok, the updated LocalisationUpdate is running on test.wikipedia.org and
 aa.wikipedia.org; have run one update but haven't set it up as a cron
 job yet.

 Will roll out progressively to more client wikis later while watching
 the CPU. :)

 Config details at http://wikitech.wikimedia.org/view/LocalisationUpdate

Ok, we still need to complete automation of update runs for 
LocalisationUpdate[1], but it seems to be working!

It's not the most glamorous of extensions, but you can see here[2] an 
updated message ("Den här sidan" where the currently deployed message 
file says "Denna sidan har")!

[1] https://bugzilla.wikimedia.org/show_bug.cgi?id=20800
[2] http://ur1.ca/cc0n

-- brion

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Wikitext vs. WYSIWYG (was: Proposal for editing template calls within pages)

2009-09-24 Thread Jacopo Corbetta
On Thu, Sep 24, 2009 at 05:44, Peter Gervai grin...@gmail.com wrote:
 On Thu, Sep 24, 2009 at 14:36, David Gerard dger...@gmail.com wrote:

 However, impenetrable wikitext is one of *the* greatest barriers to
 new users on Wikimedia projects. And this impenetrability is not, in
 any way whatsoever or by any twists of logic, a feature.

  Adding a GUI layer to wikitext is always okay, as long as it's
  possible to get rid of, since the majority of edits are not coming from
  new users, and losing flexibility for power users to get more newbies
  doesn't sound like a good deal to me.
 
  At least all of the GUIs I've seen were slow and hard to use, and
  resulted in unwanted (side) effects if anything even barely complex was
  entered. And this isn't a problem only for Wikipedia: Google Docs, which
  is one of the most advanced web-based GUI systems I guess, has plenty of
  usability problems, which can only be fixed by messing with the source.
  And many core people want to mess with the source.
 
  So, adding a newbie layer is okay as long as you don't mess up the
  work of the non-newbies.

I totally agree with this analysis, and that's what brought us to
write MeanEditor. Unfortunately I don't have much time to work on it
right now (it was part of a much bigger wiki project which never came
to light), so it's just a prototype.

It would be valuable if someone with a lot of wiki experience could
write a list of what can be considered basic wikitext. That is,
syntax that newbies can be reasonably expected to understand and
manipulate through a (visual) editor. My list is at
http://www.mediawiki.org/wiki/Extension:MeanEditor#Details.

I also want to share a funny fact with you: we have a sandbox on which
anyone can test the editor. For some reason, a _lot_ of people expect
to be able to write HTML code in wikitext (maybe they are implementing
a CMS and are used to HTML coding?). Some even write stuff like
<h2>heading</h2> and then complain that the editor lets them enter
the text (of course, it's valid wikitext after all), but not edit it
afterwards (HTML-style headings do not show up in the TOC, an odd
wikitext feature which we surely don't want newbies to use).
It might be useful to have a list of these common mistakes and show
a warning ("Do you really want a non-TOC heading? Use == heading ==
otherwise.").
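
A trivial detector of that sort, sketched in untested PHP (the pattern
and message are just an example; the function name is invented):

// Warn when a raw HTML heading was probably meant as a wikitext heading.
function warnCommonMistakes( $wikitext ) {
    $warnings = array();
    if ( preg_match( '/<h([1-6])>(.*?)<\/h\1>/is', $wikitext, $m ) ) {
        // An <hN> heading corresponds to N equals signs in wikitext.
        $eq = str_repeat( '=', (int)$m[1] );
        $warnings[] = "Do you really want a non-TOC heading? " .
            "Use $eq {$m[2]} $eq otherwise.";
    }
    return $warnings;
}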

I'm not sure if the Usability team is working on this. They ran a
visual editor survey some time ago, but right now they are probably
working on more urgent matters.
-- Jacopo

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Proposal for editing template calls within pages

2009-09-24 Thread Steve Bennett
On Fri, Sep 25, 2009 at 8:05 AM, Platonides platoni...@gmail.com wrote:
 Getting a formal definition of ~90% of the wikitext syntax is easy. The
 other 10% has driven everyone who has tried hard enough nuts, so far.

I wouldn't put it quite like that. Yes, the problem gets harder as you
get nearer the end - but it also doesn't matter, because you're
dealing with rare examples. There are also parts of the language best
not dealt with this way. For example, there's not much point
attempting to parse template transclusions like this, because there
will always have to be a pre-processor that handles them.

What actually really drove me mad was ANTLR - it wasn't stable, and
the effort in turning a language file into a working Java program that
I could play with was a lot greater than I expected. And I discovered
the usual problem with declarative languages - that small changes in
one place can have huge impacts in others.

I would definitely like to take up the challenge again sometime. I've
sort of been waiting for ANTLR to become more stable. And we'd need
some kind of plan of what to do with the language spec file, like
whether to actually build a parser off it or whatever.

Steve

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] JS2 design (was Re: Working towards branching MediaWiki 1.16)

2009-09-24 Thread Tim Starling

 * The namespacing in Google's jsapi is very nice, with everything
 being a member of a global google object. We would do well to
 emulate it, but migrating all JS to such a scheme is beyond the scope
 of the current project.
   
 
  You somewhat contradict this approach by recommending against class 
  abstraction below... i.e., how will you cleanly load components and 
  dependencies if not by a given name?

By module name. Each module can contain multiple files. I don't see
any problem with allowing anonymous modules, as long as the caller is
happy with the fact that such modules can't be used in dependencies or
loaded on demand on the client side.

 I agree we should move things into a global object ie: $j and all our 
 components / features should extend that object. (like jquery plugins). 
 That is the direction we are already going.

I think it would be better if jQuery was called window.jQuery and
MediaWiki was called window.mw. Then we could share the jQuery
instance with JS code that's not aware of MediaWiki, and we wouldn't
need to worry about namespace conflicts between third-party jQuery
plugins and MediaWiki.

 Dependency loading is not really beyond the scope... we are already 
 supporting that. If you check out the mv_jqueryBindings function in 
 mv_embed.js ... here we have loader calls integrated into the jquery 
 binding. This integrates loading the high level application interfaces 
 into their interface call.

Your so-called dependency functions (e.g. doLoadDepMode) just seemed
to be a batch load feature; there was no actual dependency handling.
Every caller was required to list the dependencies for the classes it
was loading.

 The idea is to move more and more of the structure of the application 
 into that system. so right now mwLoad is a global function but should be 
 re-factored into the jquery space and be called via $j.load();  |
 |

That would work well until jQuery introduced its own script-loader
plugin with the same name and some extension needed to use it.

[...]
 We could add that convention directly into the script-loader function if 
 desired so that on a per class level we include dependencies. Like 
 mwLoad('ui.dialog') would know to load ui.core etc.

Yes, that is what real dependency handling would do.
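
Roughly like this, on the PHP side (an untested sketch; the names are
invented):

// Expand a module name into a load order with dependencies first.
// $deps maps module name => array of module names it depends on.
function expandDependencies( array $deps, $module, array &$seen ) {
    if ( isset( $seen[$module] ) ) {
        return array(); // already scheduled; also breaks simple cycles
    }
    $seen[$module] = true;
    $order = array();
    $parents = isset( $deps[$module] ) ? $deps[$module] : array();
    foreach ( $parents as $parent ) {
        $order = array_merge( $order, expandDependencies( $deps, $parent, $seen ) );
    }
    $order[] = $module;
    return $order;
}

// $seen = array();
// expandDependencies( array( 'ui.dialog' => array( 'ui.core', 'ui.drag' ) ),
//     'ui.dialog', $seen ); // => array( 'ui.core', 'ui.drag', 'ui.dialog' )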

 * The class abstraction as implemented in JS2 has very little value
 to PHP callers. It's just as easy to use filenames. 
 The idea with class abstraction is that you don't know what script set 
 you have available at any given time. Maybe one script included 
 ui.resizable and ui.move, and now your script depends on ui.resizable 
 and ui.move and ui.drag... your loader call will only include ui.drag 
 (since the others are already defined).

I think you're missing the point. I'm saying it doesn't provide enough
features. I want to add more, not take away some.

You can remove duplicates by filename.

[...]
 We want to move away from PHP code dependencies for each JavaScript 
 module. JavaScript should just directly hit a single exposure point of 
 the MediaWiki API. If we have PHP code generating bits and pieces of 
 JavaScript everywhere it quickly gets complicated, is difficult to 
 maintain, much more resource intensive, and requires a whole new 
 framework to work right.
 
 PHP's integration with the JavaScript should be minimal. PHP should 
 supply configuration, and package in localized msgs.

I don't think it will be too complicated or resource intensive. JS
generation in PHP is very flexible and you admit that there is a role
for it. I don't think there's a problem with adding a few more
features on the PHP side.

If necessary, we can split it back out to a non-MediaWiki standalone
mode by generating some static JS.

What is your reason for saying this? Have you worked on some other
framework where integration of PHP and JavaScript has caused problems?


 * Central registration of all client-side resources in a global
 variable would be onerous and should be avoided.
   
 
 You can always add to the registered global. This works well by having 
 the PHP read the JavaScript file directly to ascertain the global list. 
 That way your JavaScript works standalone as well as integrated with a 
 script-loader that provides localization and configuration.

There's a significant CPU cost to loading and parsing JS files on
every PHP request. I want to remove that behaviour. Instead, we can
list client-side files in PHP. Then from the PHP list, we can generate
static JS files in order to recover the standalone functionality.
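
Something like this (untested; the variable and function names are
invented):

// Client-side files listed on the PHP side, so nothing is parsed per
// request; a maintenance script can emit the same list as static JS
// for the standalone case.
$clientFiles = array(
    'mv_embed' => array( 'js2/mv_embed.js', 'skins/mv_embed.css' ),
    'wikibits' => array( 'skins/common/wikibits.js' ),
);

function generateStandaloneList( array $clientFiles ) {
    return "// Auto-generated from the PHP file list - do not edit\n"
        . 'window.mwClientFiles = ' . json_encode( $clientFiles ) . ";\n";
}

// file_put_contents( 'mwClientFiles.js', generateStandaloneList( $clientFiles ) );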

[...]
 That sounds cleaner than the present OutputPage and Skin.php and 
 associated script-loader grafting. Having a cleaner system would be 
 nice... but it will probably break skins and other stuff... or have 
 OutputPage and Skin old-API mappings, or change almost every extension 
 and break every 3rd-party skin out there?

I think I'll probably break most third-party skins, if they have PHP
code. We break them with just about every major release so 

[Wikitech-l] Data Processing

2009-09-24 Thread vanessa lee
Hi,

Now I have constructed a local wiki, and I want to add data downloaded
from the internet Wikipedia to the local wiki. I tried to read the source
code, but I couldn't find the exact thing (interface) that I want.

So, I want to ask some questions:

When the save button is clicked after editing an article or adding a new
article, how is the data stored? Which function/class does it call?

Could you describe the process of data storage?

In what form are articles stored in the database?

Thanks

  Vanessa



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l