Hurrah!
Alex
2016-09-20 19:09 GMT+02:00 Greg Grossmeier :
> Due to the Wikimedia Technical Operations Team having their team offsite
> that week and generally being less than normally available, there will be
> no non-emergency deploys the week of September 26th (aka: next
https://it.wikisource.org/w/index.php?diff=1499050&oldid=1498642 should do the job.
On 07/01/2015 10:21, Alex Brollo wrote:
While dragging a little bit into canvas, I successfully loaded into a
canvas a cropped clip of the image of a djvu page on it.source, just to
crash into a DOM exception with the () and toDataURL() methods.
Again, it seems a CORS issue.
Am I wrong? Is there any documentation about this issue?
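For what it's worth, a minimal sketch of the usual workaround (my assumption about the cause, not a confirmed diagnosis; the image URL is a placeholder): request the image with CORS enabled, so the canvas is not "tainted" and toDataURL() doesn't throw.

var img = new Image();
img.crossOrigin = 'anonymous'; // the server must answer with Access-Control-Allow-Origin
img.onload = function () {
    var canvas = document.createElement( 'canvas' );
    canvas.width = img.width;
    canvas.height = img.height;
    canvas.getContext( '2d' ).drawImage( img, 0, 0 );
    console.log( canvas.toDataURL() ); // throws a SecurityError on a tainted canvas
};
img.src = 'https://upload.wikimedia.org/example.jpg';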
Alex brollo
I'm not a developer, so it's perfectly normal that I can't understand
anything of your talk; nevertheless, please remember the KISS principle
when building any installation tools for poor final users. I'm waiting for
something like pip install core.
Alex
2014-06-11 15:58 GMT+02:00 C. Scott
OK, done
2014-02-23 7:53 GMT+01:00 K. Peachey p858sn...@gmail.com:
bugzilla.
On 23 February 2014 16:51, Alex Brollo alex.bro...@gmail.com wrote:
I'd need the internetarchive Python package on Labs:
https://pypi.python.org/pypi/internetarchive , a Python bot for Internet
Archive, to work on mediawiki pages and Internet Archive items, both reading and
editing metadata and uploading new items/pages. I've been encouraged to go
on by Tpt.
Alex brollo
While browsing the web for new trends in human-horse communication and horse
management, I found the website of Marjorie Smith, and I've been deeply
influenced by her; her thoughts about links between man-to-man and
man-to-horse communication - really an example of the advantages of NVC - were
extremely
I'm playing a little bit with HTML5 features; I see that there's a jStorage
module, but I didn't find a jCanvas module. Is there any interest in one?
Alex
Users are very confused and worried any time a new version of the wiki software
is launched and tested, and some major or minor bug invariably comes out.
A clear message using a central sitenotice, with links to doc pages listing
the changes at different levels of detail and to their talk pages to
but as an
ecologist I'd like to save both bandwidth and server load. :-)
Alex brollo
The whole thing is very simple and effective if the infobox template code is
designed from the beginning to accept clean string data without any
wikicode or html code inside; but I see that very few infoboxes are
designed to get such clean data and nothing else.
Alex brollo
Manzoni]]</span>
since the wikicode [[Alessandro Manzoni]] will be interpreted by the server,
and parsed/expanded into an html link as usual, resulting in a big mess.
The same occurs for any wikicode and/or html passed into an infobox template
parameter.
Alex brollo
You're right, I used a wrong example. I got troubles from html codes,
quotes and templates, not from links.
Well, it seems that {{urlencode:{{{1|}}}|WIKI}} solves everything. Thanks.
I'll test it on our main infoboxes.
I apologize for my question (perhaps not so deep).
Alex brollo
Is there any sound (technical, or server-load related) reason to avoid
HTML comments, wrapped into the raw page wikicode, being sent back into the
html rendering as-they-are?
Alex brollo
// the enclosing call was truncated in the archive; presumably something like:
$.getJSON( apiUrl, function ( data ) {
    parametri.callback( parametri, data );
} );
Yes, parametri is a pretty complex object, and callback() could be very simple
or extremely complex; anyway it runs, and it is a one-line script
needing one parameter only.
It runs with the Wikidata API too.
Alex brollo
Add <span id="container" data-test="This is a test data"></span> into the raw code of
any page, then save it, and then use the js console of Chrome from the resulting
page in view mode with this:
$( '#container' ).attr( 'data-test' )
and you'll get "This is a test data".
This is largely sufficient (thanks again for the suggestion!) :-)
Alex brollo
of
code/of servers.
Alex brollo
Perfect! A data- attribute can contain anything and it runs perfectly. It
can also contain a JSON-stringified object, added in edit mode into a
span (so that a whole dictionary can be passed into a single data-
attribute). It's just what I needed.
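A small sketch of that round trip (names and values are mine, for illustration only):

// write: this is what would be typed into the span in edit mode
var dict = { author: 'Alessandro Manzoni', year: 1827 };
$( '#container' ).attr( 'data-test', JSON.stringify( dict ) );
// read back in view mode:
var restored = JSON.parse( $( '#container' ).attr( 'data-test' ) );
console.log( restored.author ); // "Alessandro Manzoni"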
Alex brollo
of the current page in view mode by js with an index.php or an api.php call, and I
do, but this is much more server-expensive IMHO.
Is there any sound reason to strip html comments away? If there is no sound
reason, could such stripping be avoided?
Alex brollo
disappears and login runs.
Could one of you please post a bug into Bugzilla (I can't... I hate Bugzilla
and I never use it :-( )? In the meantime, I'll fix the bug manually at
every pywikipedia update.
Thanks!
Alex brollo
2012/12/21 Bináris wikipo...@gmail.com
Neither wikitech-l nor Bugzilla is the right place to complain about
Pywikipedia. :-) We have a separate mailing list called Pywikipedia-l (
https://lists.wikimedia.org/mailman/listinfo/pywikipedia-l) which is
recommended to join if you use Pywiki
2012/9/24 Tim Starling tstarl...@wikimedia.org
I suppose a nested switch like:
{{#switch: {{{1}}}
| 0 = {{#switch: {{{2}}}
| 0 = zero
| 1 = one
}}
| 1 = {{#switch: {{{2}}}
| 0 = two
| 1 = three
}}
}}
might give you a performance advantage over one of the
into a list, at least; much better would be to implement JSON parsing of a
JSON string, to get lists and dictionaries from strings saved into pages. I'd
guess a dramatic improvement in performance, but I'm far from sure about it.
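A speculative sketch of the idea (the wiki and page title are placeholders of mine): store a JSON string in a wiki page and parse it client-side, instead of emulating arrays with big #switch constructs.

$.getJSON( 'https://it.wikisource.org/w/api.php?callback=?', {
    action: 'query',
    prop: 'revisions',
    rvprop: 'content',
    titles: 'MediaWiki:Dati.json', // hypothetical data page
    format: 'json'
}, function ( data ) {
    var pages = data.query.pages;
    for ( var id in pages ) {
        // the raw wikitext of the revision is the JSON string
        console.log( JSON.parse( pages[ id ].revisions[ 0 ][ '*' ] ) );
    }
} );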
Alex brollo
users will find this way and will use it, since it's needed to get the
result.
Simply build something lighter, more efficient and simpler than #switch to
get the same result, and users will use it.
Alex brollo
Just to give a final feedback on this talk, which has been very useful for
my tries: work is going on fast, and is presently focused on the alignment
of some structured templates whose data are shared between Commons and
Wikisource: Creator vs. Author; Book vs
2012/8/30 Brion Vibber br...@pobox.com
Luckily, if you're using jQuery much of the low-level stuff can be taken
care of for you. Something like this should work for API calls,
automatically using the callback behind the scenes:
Thanks! Really I tested some interproject AJAX API calls with
Thanks again Brion, it runs perfectly and - strange to say - I got no hard
difficulty, just a little bit of review of the API calls and of the structure of
the resulting formidable objects. It's really much simpler to parse original
template contents than the resulting html from their expansion ;-) and AJAX
such an AJAX call.
Alex brollo, from it.wikisource
for future enhancements, when data can be accessed and used today, with the present software.
Alex brollo
2012/8/29 bawolff bawolff...@gmail.com
On Wed, Aug 29, 2012 at 2:24 PM, Alex Brollo alex.bro...@gmail.com
wrote:
Thanks for comments.
[..]
Thanks for the API suggestion, but the question is: does it violate the same
origin AJAX policy? I can read anything by a bot from any project, but
AJAX
No, it doesn't violate the same-origin policy. The same-origin policy only
prevents reading information from other websites; it does not stop you
from executing content from other websites (which always seemed an odd
distinction to me...). Thus you can use the api with a callback
parameter to get
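A short example of what bawolff describes (the page title is a placeholder): with dataType 'jsonp', jQuery asks the api to wrap the JSON in a callback function, so the browser executes it and the same-origin policy doesn't get in the way.

$.ajax( {
    url: 'https://en.wikipedia.org/w/api.php',
    dataType: 'jsonp', // jQuery appends &callback=... automatically
    data: { action: 'query', prop: 'info', titles: 'Main Page', format: 'json' },
    success: function ( data ) {
        console.log( data.query.pages );
    }
} );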
Normal users should not see anything; advanced users (sysops and layman
programmers) will surely appreciate it a lot. I remember terrible headaches
trying to fix unexpected, intriguing local bugs of our rich set of local
javascript tools on it.source.
Alex brollo
2012/8/24 Strainu strain
Djvu files are the wikisource standard supporting proofreading. They have
very interesting features, being fully open in structure and layering,
and allowing fast and effective sharing on the web when they are
stored in their indirect mode. Most interesting, their text layer - which
can be
Text layer is stored in img_metadata, which means it can be retrieved
by the API (using ?action=query&prop=imageinfo&iiprop=metadata).
However when I tried to test this, it didn't seem to work. Maybe
trying to return the entire text layer hit some max api result size
limit or something. (It'd
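For the record, a hedged example of such a query (the file name is a placeholder, and as noted above the full text layer may hit result-size limits):

$.getJSON( 'https://commons.wikimedia.org/w/api.php?callback=?', {
    action: 'query',
    prop: 'imageinfo',
    iiprop: 'metadata',
    titles: 'File:Example.djvu',
    format: 'json'
}, function ( data ) {
    console.log( data.query.pages );
} );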
https://it.wikisource.org/wiki/MediaWiki:Variabili.js where data used in the
automation/help of editing are collected as js objects.
both as a normal wikitext container and a data container. Why not?
Alex brollo (it.source)
Thank you, we did, but we wrapped the needed jQuery code into a gadget, so
that users can set it as a personal preference. Our gadgets are growing
and growing in number and performance! :-)
Alex
2012/3/8 Kim Eik k...@heldig.org
By fixed do you mean the css style position: fixed; ?
Yes. An absolutely simple but effective idea. Really, all tools and
buttons should have a position:fixed css attribute - particularly when
proofreading on wikisource. Recently I registered too into
when scrolling long texts in
edit mode.
Alex brollo
I'd like to replace the usual wiki markup ''...'' and '''...''' for italic and
bold with well-formed - even if deprecated - html tags <i>...</i> and
<b>...</b>. Is there any serious issue dealing with server load, or
safety/compatibility/other?
And - generally speaking - is there any project to convert wiki
Thanks Platonides - it's rewarding to find that I'm not so crazy. :-)
I'll subscribe to wikitext-l; I saw a recent, encouraging contribution about
wiki markup - just my dream too.
Alex
Is there a sound reason to hide the main id of pages so well? Is there
any drawback to showing it anywhere in the wikis, and to using it much more
largely for links and API calls?
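To illustrate (the id and wiki here are made up): a page id survives renames, so both links and API calls can key on it.

// link by id: https://it.wikisource.org/w/index.php?curid=12345
// API call by id, from a page on the same wiki:
$.getJSON( '/w/api.php', {
    action: 'query',
    pageids: '12345',
    prop: 'info',
    format: 'json'
}, function ( data ) {
    console.log( data.query.pages[ '12345' ].title );
} );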
Alex brollo
Thanks! I'll save this talk as a reference.
Really my question was focused on possible risks or safety issues; it's so
strange that a database (since I see wikisource as a database) is mainly
indexed, from the user's point of view, on a variable field such as the title of
the page, that I suspected some
Thanks for the contributions.
Really I was going to use large switches, both as associative arrays and as
sets. I hoped that the algorithm was based simply on a string search into the
code (I presume it is possible, and I know how efficient plain string
search is in any decent language) but I guess, from
I'm using #switch more and more in templates; it's surprising how many
issues it can solve, and how large the arrays it can manage are. My questions
are:
1. Is there a reasonable upper limit for the number of #switch options?
2. Is there a difference in server load between a long list of #switch
I'm far from being skilled enough to understand the whole stuff. Working as
hard as I can on wikisource, I found that the most useful tools are js
scripts like RegexMenuFramework by Pathoschild, i.e. a container for
personal, highly customizable js scripts to work on wikitext (fixing
scannos,
A big problem of collisions with existing templates with different code
could frequently come out. I feel that synonymous templates with
different features are a very subtle and harmful trouble. This raises the
long-standing problem of redundancy and coherence so deeply afflicting
anything
, and very useful to build a shared group of
templates.
Alex brollo
2011/6/29 Ashar Voultoiz hashar+...@free.fr
On 27/06/11 23:14, Platonides wrote:
See wfUrlEncode in GlobalFunctions.php
I have added tests for this function with r91108. Feel free to propose
additional tests :)
http://www.mediawiki.org/wiki/Special:Code/MediaWiki/91108
--
Ashar
I tested mediaWiki encoding against the js encodeURI() function too. I found a
difference, only one: mediaWiki encodes the apostrophe while encodeURI()
doesn't. Obviously the second, big difference is the conversion of spaces
into underscores.
So, in js, so far I got a good simulation of localurl:
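(The snippet itself was cut off in the archive; here is a plausible reconstruction based only on the two differences described above - spaces to underscores, plus the apostrophe that encodeURI() leaves alone:)

function localurl( title ) {
    // spaces become underscores, then percent-encode; encodeURI() skips
    // the apostrophe, so encode it by hand as MediaWiki does
    return '/wiki/' + encodeURI( title.replace( / /g, '_' ) ).replace( /'/g, '%27' );
}
console.log( localurl( "L'amore è cieco" ) ); // /wiki/L%27amore_%C3%A8_cieco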
2011/6/27 Platonides platoni...@gmail.com
The relevant function is Title::getLocalURL()
I think that in your quote function you need to skip '@$*(),' as well,
and .-_ wouldn't be needed there (but urllib.quote could differ from
php urlencode).
See wfUrlEncode in GlobalFunctions.php
Thanks
question.
Alex brollo
2011/4/21 K. Peachey p858sn...@gmail.com
http://www.mediawiki.org/wiki/Manual:$wgExternalLinkTarget can be done
to affect all outbound links.
The outdated page about it gives a little bit of information
about why people really dislike it when you do that:
2011/4/11 Daniel Friesen li...@nadir-seen-fire.com
Side thought... why a #switch library? What happened to the old
{{Foo/{{{1}}}|...}} trick?
Simply, {{Foo/{{{1}}}|...}} links to different pages, while
{{Foo|{{{1}}}|...}} points to the same page. I had been frustrated when I
tried to use
2011/4/11 Daniel Friesen li...@nadir-seen-fire.com
Though, when we're talking about stuff this complex... that line about
using a REAL programming language comes into play...
It would be nice if there was some implemented-in-PHP scripting
language we could use that would work on any
2011/4/11 Andrew Garrett agarr...@wikimedia.org
On Mon, Apr 11, 2011 at 5:59 AM, Roan Kattouw roan.katt...@gmail.com
wrote:
What we store in memcached is a serialized version of the preprocessor
XML tree, keyed on the MD5 hash of the wikitext input, unless it's too
small, like Platonides
I'd like to know something more about template parsing/caching for
performance issues.
My question is: when a template is called, its wikicode, I suppose, is
parsed and translated into something running - I can't imagine what
precisely, but I don't care so much about it (so far :-) ). If a second
to download the core html only? And, most importantly: could
this save a little bit of server load/bandwidth? I humbly think that the core
html alone could be useful as a means to obtain well-formed page
content, and that this could be useful to obtain derived formats of the
page (i.e. ePub).
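One existing way to fetch roughly that (a guess of mine at what would serve the need, run from a page on the same wiki; the title is a placeholder): action=render returns the parsed content without the surrounding skin.

$.get( '/w/index.php', {
    title: 'Pagina_principale',
    action: 'render'
}, function ( html ) {
    console.log( html.length + ' bytes of core html' );
} );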
Alex brollo
2011/4/6 Daniel Kinzler dan...@brightbyte.de
On 06.04.2011 09:15, Alex Brollo wrote:
I saved the HTML source of a typical Page: page from it.source, the
resulting txt file having ~ 28 kBy; then I saved the core html only,
i.e. the content of <div class="pagetext">, and this file has 2.1 kBy
2011/4/6 Daniel Kinzler dan...@brightbyte.de
I know that some thousands of calls are nothing for wiki servers, but... I
always try to get good performance, even from the most banal template.
That's always a good idea :)
-- daniel
Thanks Daniel. So, my edits will drop again. I'll
without any embarrassment. Beginners often feel really stupid
and some good ideas could be lost.
Alex brollo
2011/3/4 Paul Houle p...@ontology2.com
Briefly, at the border of OT: I see the magic word ontology in your mail
address. :-) :-)
I discovered ontology ... well, a long history. Ontological classification
is used to collect data on cancer by the National Cancer Institute; and,
strange to tell, I
There's a parallel talk in the Italian chapter about the gender gap. This gap is part
of a larger gap involving software development in general; there are few
women programmers. I presume that this highlights a similarity between wiki
and a software development environment; and really many from most
talk here:
http://en.wikisource.org/wiki/User_talk:John_Vandenberg#reCAPTCHA_for_source
Alex brollo
do anything, and while interpreting words (in any language) any user will
contribute to source transcriptions in a very valuable way.
Alex brollo
2011/2/5 River Tarnell r.tarn...@ieee.org
In article AANLkTikWLU5Y8C2UokYRN=v1-zwhb1kthnxi4xtbm...@mail.gmail.com,
David Gerard dger...@gmail.com wrote:
On 5 February 2011 15:12, Alex Brollo alex.bro...@gmail.com wrote:
Just to let you know that Aubrey just presented the it.source idea
I'd like to share an idea. If you think that I don't know what I am
speaking of, probably you're right; nevertheless I'll try.
Labeled section transclusion, I presume, simply runs as a substring search
into the raw wiki code of a page; it gives back a piece of the page as it is
(but removing any
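A toy js illustration of that substring idea (not the extension's actual PHP implementation; the section name is assumed to be regex-safe): pull out the text between the begin/end markers.

function getSection( wikitext, name ) {
    // match <section begin=name /> ... <section end=name />
    var re = new RegExp(
        '<section\\s+begin=["\']?' + name + '["\']?\\s*/>' +
        '([\\s\\S]*?)' +
        '<section\\s+end=["\']?' + name + '["\']?\\s*/>'
    );
    var m = wikitext.match( re );
    return m ? m[ 1 ] : null;
}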
2011/1/25 Jesse (Pathoschild) pathosch...@gmail.com
On Tue, Jan 25, 2011 at 8:14 AM, Alex Brollo alex.bro...@gmail.com
wrote:
If this would happen, I imagine that the original page could be
considered an object, i.e. a collection of attributes (fragments of text) and
methods (template
2011/1/25 Alex Brollo alex.bro...@gmail.com
Just to test the effectiveness of such a strange idea, I added some formal
section tags into a 6 kBy text section.txt, then I wrote a simple script to
create a data area; this is the result (a python dictionary into an html
comment code) appended
The interest of the wikisource project in a formal and standardized set of book
metadata (I presume from Dublin Core) in a database table is obvious.
Some preliminary tests on it.source suggest that templates and the Labeled
Section Transclusion extension could have a role as existing wikitext
/Wikisource:Scriptorium#Help.21_.28fractions_and_TeX_formatting.29
Alex brollo
2011/1/19 Alex Brollo alex.bro...@gmail.com
2011/1/19 Maury Markowitz maury.markow...@gmail.com
I am dipping my toe in MATH for the first time and finding the results
somewhat curious. The key appears to be this statement:
It generates either PNG images or simple HTML markup, depending
2011/1/19 Maury Markowitz maury.markow...@gmail.com
Wow, thanks for the pointer Carl, MathJax is impressive.
Alex, your work is appreciated, but I'm not sure exactly what I'm
seeing. Can you point me in the right direction to read up a bit more?
Don't care, throw away my suggestions
It seems a completely different topic, but: is there something to learn about
text saving from the smart trick of TeX formula storing? I did a little bit
of reverse engineering on that algorithm; I never found any useful
application of it, but much fun. :-)
Alex
Just to give an example: I wrote a different algorithm for
[[en:s:Template:Loop]], naming it [[en:s:Template:Loop!]], and I asked for
100 and 101 dots with them in an empty sandbox preview.
These are the results:
Sandbox, empty, preview:
Preprocessor node count: 35/100
Post-expand include size:
2011/1/14 Tim Starling tstarl...@wikimedia.org
However, I'm not sure how you obtained that result, since
{{loop!|100|x}} just expands to {{loop|100|x}}, since it hits the
default case of the #switch. When I try it, I get a preprocessor node
count of 1069, not 193.
:-)
The
2011/1/12 Platonides platoni...@gmail.com
MZMcBride wrote:
Doesn't it make much more sense to fix the underlying problem instead?
Users
shouldn't have to be concerned with the number of #ifexists on a page.
MZMcBride
Ok, now I feel much more comfortable. These are my conclusions:
2011/1/11 Tim Starling tstarl...@wikimedia.org
On 07/01/11 07:50, Aryeh Gregor wrote:
On Wed, Jan 5, 2011 at 8:07 PM, Alex Brollo alex.bro...@gmail.com
wrote:
Browsing the html code of source pages, I found this statement in an html
comment:
*Expensive parser function count: 0/500
2011/1/11 Aryeh Gregor
simetrical+wikil...@gmail.com
Overall, I'd advise you to do whatever minimizes user-visible latency.
That directly improves things for your users, and is a decent proxy
for server resource use. So use whichever method takes less time to
2011/1/11 Casey Brown li...@caseybrown.org
That's good, but also keep in mind that, generally, you shouldn't
worry too much about performance:
http://en.wikipedia.org/wiki/WP:PERF. (Had to throw in the little
disclaimer here. ;-))
Yes, I got this suggestion, but when I try new tricks and
Browsing the html code of source pages, I found this statement in an html
comment:
*Expensive parser function count: 0/500*
I'd like to use this statement to evaluate the lightness of a page, mainly
testing the expensiveness of the templates in the page, but: in your opinion,
given that the best would
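A hedged sketch of one way to read that figure from the current page in view mode (the report is an html comment, which innerHTML still contains):

var report = document.documentElement.innerHTML
    .match( /Expensive parser function count: (\d+)\/(\d+)/ );
if ( report ) {
    console.log( 'expensive calls: ' + report[ 1 ] + ' of ' + report[ 2 ] );
}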
I apologize, I sent an empty reply. :-(
Just a brief comment: there's no need to search for a perfect wiki
syntax, since it exists: it's the present model of well-formed markup, i.e.
xml.
While digging into subtler troubles from wiki syntax, i.e. difficulties in
parsing it by scripts or
Can I suggest a really simple trick to inject something new into
stagnating wikipedia?
Simply install Labeled Section Transclusion on a large pedia project; don't
ask, simply install it. If you'd ask, typical pedian boldness would raise a
comment "Thanks, we don't need such a thing for sure." They
2011/1/4 Roan Kattouw roan.katt...@gmail.com
Just from looking at the LST code, I can tell that it has at least one
performance problem: it initializes the parser on every request. This
is easy to fix, so I'll fix it today. I can also imagine that there
would be other performance concerns
2011/1/4 Roan Kattouw roan.katt...@gmail.com
What a creative use of #lst allows, if it is really an efficient, light
routine, is to build named variables and arrays of named variables in one
page; I can't imagine what a good programmer could do with such a powerful
tool. I'm, as you
2011/1/4 Brion Vibber br...@pobox.com:
Indeed, Google Docs has an optimized editing UI for Android and iOS
that focuses precisely on making it easy to make a quick change to a
paragraph in a document or a cell in a spreadsheet (with concurrent
editing).
2011/1/4 Rob Lanphier ro...@robla.net
On Mon, Jan 3, 2011 at 5:54 PM, Chad innocentkil...@gmail.com wrote:
On Mon, Jan 3, 2011 at 8:41 PM, Rob Lanphier ro...@wikimedia.org
wrote:
If, for example, we can build some sort of per-revision indicator of
markup language (sort of similar to mime
2010/12/31 Conrad Irwin conrad.ir...@gmail.com
Evolution is the best model we have for how to build something, the
way to keep progress going is to continually try new things; if they
fail, meh, if they succeed — yay!
Just to add a little bit of pure theory to the talk: the wiki project is
2010/12/30 Neil Kandalgaonkar ne...@wikimedia.org
On 12/29/10 7:26 PM, Tim Starling wrote:
Making editing easier could actually be counterproductive. If we let
more people past the editing interface barrier before we fix our
social problems, [...]
This is an interesting insight!
Yes
2010/12/29 MZMcBride z...@mzmcbride.com
Neil Kandalgaonkar wrote:
Let's imagine you wanted to start a rival to Wikipedia. Assume that you
are motivated by money, and that venture capitalists promise you can be
paid gazillions of dollars if you can do one, or many, of the following:
1 -
2010/12/29 Maciej Jaros e...@wp.pl
@2010-12-28 22:22, MZMcBride:
Alex Brollo wrote:
I too don't understand precisely why string functions are so discouraged. I
saw extremely complex templates built just to do (with a high server load, I
suppose in my ignorance...) what could be obtained
@2010-12-28 22:22, MZMcBride:
https://bugzilla.wikimedia.org/show_bug.cgi?id=6455#c92 (and subsequent
comments)
I read almost all of that talk, but I'm far from satisfied. I know (and I
sometimes meet them) that there are tricks to emulate some string functions by very
complex, and I suppose,
I too don't understand precisely why string functions are so discouraged. I
saw extremely complex templates built just to do (with a high server load, I
suppose in my ignorance...) what could be obtained with an extremely simple
string function.
Alex
2010/11/7 Andrew Dunbar hippytr...@gmail.com
On 14 October 2010 09:37, Alex Brollo alex.bro...@gmail.com wrote:
Hi Alex. I have been doing something similar in Perl for a few years
for the English
Wiktionary. I've never been sure on the best way to store all the
index files I create
2010/10/25 Jan Paul Posma jp.po...@gmail.com
Hi all,
As presented last Saturday at the Hack-A-Ton, I've committed a new version
of the InlineEditor extension. [1] This is an implementation of the
sentence-level editing demo posted a few months ago.
Very interesting! Obviously I'll not see
2010/10/13 Paul Houle p...@ontology2.com
Don't be intimidated by working with the data dumps. If you've got
an XML API that does streaming processing (I used .NET's XmlReader) and
use the old unix trick of piping the output of bunzip2 into your
program, it's really pretty easy.
When
2010/10/8 Dmitriy Sintsov ques...@rambler.ru
* Wikirating Team t...@wikirating.org [Thu, 07 Oct 2010 22:10:32
+0200]:
Hi there,
This is my first post on the wikitech forum and I hope I'm posting it
correctly...
Hi!
You may also take a look at [[Extension:Semantic MediaWiki]] and
Special pages, if I understand all their features, are special because:
# they come from a live API query;
# they cannot be managed/created/edited by users;
# they have no chronology (it would be nonsense).
It.source uses many list pages, daily updated by a bot, containing other
project-specific
2010/10/7 Aryeh Gregor
simetrical+wikil...@gmail.com
It.source uses many list pages, daily updated by a bot, containing
other project-specific queries. They are normal pages, and their chronology
is both useless and heavy. The DynamicPageList extension could
2010/10/7 Platonides platoni...@gmail.com
Really I went back to
http://stats.wikimedia.org/wikisource/EN/TablesWikipediaIT.htm, and list
pages (Elenco...) have a history of less than 2Mby each. You're right.
If you want to reduce history size, you should begin by removing
date-changing