I'd love to take part, but this is silly o'clock in Europe.
-- daniel
On 23.09.2013 05:26, Tim Starling wrote:
I would like to have an open IRC meeting for RFC review, on Tuesday 24
September at 22:00 UTC (S.F. 3pm).
We will work through a few old, neglected RFCs, and maybe consider a
[Re-posting, since my original post apparently never got through. Maybe I posted
from the wrong email account.]
Hi all!
As discussed at the MediaWiki Architecture session at Wikimania, I have created
an RFC for the TitleValue class, which could be used to replace the heavy-weight
Title class in
On 10.10.2013 18:40, Rob Lanphier wrote:
Hi folks,
I think Daniel buried the lede here (see his mail below), so I'm
mailing this out with a subject line that will hopefully provoke more
discussion. :-)
Thanks for bumping this, Rob. And thanks to Tim for moderating this discussion
so far,
On 30.10.2013 18:32, Martijn Hoekstra wrote:
Rebase early, rebase often. At some point integration must take place. Not
using a separate branch won't help you there. Anyone working on anything
involving the title object that hasn't been merged yet will hate to rebase
whenever you'll have
On 31.10.2013 14:52, Daniel Kinzler wrote:
The idea is to *not* actually refactor the Title class, but to introduce a
lightweight alternative, TitleValue. We can then replace usage of Title with
usage of TitleObject bit by bit.
That was meant to be: replace Title with TitleValue.
the holidays?
Thanks,
Daniel
--
Daniel Kinzler
Senior Software Developer
Wikimedia Deutschland
Gesellschaft zur Förderung Freien Wissens e.V.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo
On 10.12.2013 22:38, Brad Jorsch (Anomie) wrote:
Looking at the code, ParserCache::getOptionsKey() is used to get the
memc key which has a list of parser option names actually used when
parsing the page. So for example, if a page uses only math and
thumbsize while being parsed, the value
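The mechanism described above — a cache key derived only from the parser options a page actually used — could be sketched roughly like this. This is a hypothetical simplification; the function name, key layout, and parameters are assumptions, not MediaWiki's actual ParserCache code.

```php
<?php
// Hypothetical sketch: the cache key depends only on the options the
// parse actually consulted, so changing an unused option (e.g.
// "thumbsize" on a page with no thumbnails) does not fragment the cache.
function makeParserCacheKey( int $pageId, array $usedOptionNames, array $optionValues ): string {
	$parts = [];
	foreach ( $usedOptionNames as $name ) {
		$parts[] = $name . '=' . $optionValues[$name];
	}
	return "pcache:idhash:$pageId-" . md5( implode( '!', $parts ) );
}
```

With this shape, two requests that differ only in options the page never used map to the same key.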
RFC: https://www.mediawiki.org/wiki/Requests_for_comment/Assert
This is a proposal for providing an alternative to PHP's assert() that allows
for a simple and reliable way to check preconditions and postconditions in
MediaWiki code.
The background of this proposal is the recurring discussions
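A minimal sketch of what such a helper could look like — the class and method names here are illustrative assumptions, not necessarily the RFC's final API:

```php
<?php
// Hypothetical precondition/postcondition helper in the spirit of the
// Assert RFC. Unlike assert(), a failed check throws instead of being
// silently skipped depending on PHP configuration.
class Assert {
	public static function precondition( bool $condition, string $description ): void {
		if ( !$condition ) {
			throw new InvalidArgumentException( "Precondition failed: $description" );
		}
	}

	public static function postcondition( bool $condition, string $description ): void {
		if ( !$condition ) {
			throw new LogicException( "Postcondition failed: $description" );
		}
	}
}

// Example caller (invented for illustration):
function setLimit( int $limit ): int {
	Assert::precondition( $limit > 0, '$limit must be positive' );
	return $limit;
}
```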
On 24.01.2014 16:15, Tim Starling wrote: On 24/01/14 15:11, Jeroen De Dauw wrote:
Daniel proposed an ideal code
architecture as consisting of a non-trivial network of trivial classes
-- a bold and precise vision. Nobody was uncivil or deprecating in
their response.
This idea is something
Thanks for your input Nik!
I'll add my 2¢ below. Would be great if others could chime in.
I have just pushed a new version of the patch, please have a look at
https://gerrit.wikimedia.org/r/#/c/106517/
On 04.02.2014 16:31, Nikolas Everett wrote:
* Should linking, parsing, and formatting live
On 06.02.2014 21:09, Sumana Harihareswara wrote:
I agree that this mailing list is a reasonable place to discuss the
interfaces.
Notes from the Architecture Summit are now up at
https://www.mediawiki.org/wiki/Architecture_Summit_2014/TitleValue# . At
yesterday's RFC review we agreed that
On 14.02.2014 22:39, Gabriel Wicke wrote:
VisualEditor is an HTML editor and doesn't know about wikitext. All
conversions between wikitext and HTML are done by Parsoid. You need
Parsoid if you want to use VisualEditor on current wikis.
Implementing an HTML content type in MediaWiki would be
On 16.02.2014 10:32, David Gerard wrote:
There are extensions that allow raw HTML widgets, just putting them
through unchecked.
I know, I wrote one :) But that's not the point. The point is maintaining
editable content as HTML instead of Wikitext.
The hard part will be checking.
Wikitext
I have just pushed a new version of the TitleValue patch to Gerrit:
https://gerrit.wikimedia.org/r/106517.
I have also updated the RFC to reflect the latest changes:
https://www.mediawiki.org/wiki/Requests_for_comment/TitleValue.
Please have a look. I have tried to address several issues with
On 28.02.2014 15:27, Leonie Ehrl wrote:
Hi Andre,
thanks for your message. Indeed, I didn't know that this is an international
mailing list. Rookie mistake! Wikimedia remains to be discovered :)
Cheers, Leonie
Not only is it international, it's also about MediaWiki, the software that runs
On 03.03.2014 21:38, Sumana Harihareswara wrote:
Ryan, thank you superlatively for doing and documenting this research.
+1
-- daniel
On 05.05.2014 07:20, Jeremy Baron wrote:
On May 4, 2014 10:24 PM, Ori Livneh o...@wikimedia.org wrote:
an implementation for a recent changes
stream broadcast via socket.io, an abstraction layer over WebSockets that
also provides long polling as a fallback for older browsers.
[...]
How
Hi all!
During the hackathon, I worked on a patch that would make it possible for
non-textual content to be included on wikitext pages using the template syntax.
The idea is that if we have a content handler that e.g. generates awesome
diagrams from JSON data, like the extension Dan Andreescu
Thanks all for the input!
On 14.05.2014 10:17, Gabriel Wicke wrote: On 05/13/2014 05:37 PM, Daniel Kinzler wrote:
It sounds like this won't work well with current Parsoid. We are using
action=expandtemplates for the preprocessing of transclusions, and then
parse the contents using Parsoid
On 14.05.2014 15:11, Gabriel Wicke wrote:
On 05/14/2014 01:40 PM, Daniel Kinzler wrote:
This means that HTML returned from the preprocessor needs to be valid in
wikitext to avoid being stripped out by the sanitizer. Maybe that's actually
possible, but my impression is that you are shooting
On 14.05.2014 16:04, Gabriel Wicke wrote:
On 05/14/2014 03:22 PM, Daniel Kinzler wrote:
My patch doesn't change the handling of <html>...</html> by the parser. As
before, the parser will pass HTML code in <html>...</html> through only if
$wgRawHtml is enabled, and will mangle/sanitize it otherwise
Hi again!
I have rewritten the patch that enabled HTML based transclusion:
https://gerrit.wikimedia.org/r/#/c/132710/
I tried to address the concerns raised about my previous attempt, namely, how
HTML based transclusion is handled in expandtemplates, and how page meta data
such as resource
On 16.05.2014 21:07, Gabriel Wicke wrote:
On 05/15/2014 04:42 PM, Daniel Kinzler wrote:
The one thing that will not work on wikis with
$wgRawHtml disabled is parsing the output of expandtemplates.
Yes, which means that it won't work with Parsoid, Flow, VE and other users.
And it has been
On 17.05.2014 17:57, Subramanya Sastry wrote:
On 05/17/2014 10:51 AM, Subramanya Sastry wrote:
So, going back to your original implementation, here are at least 3 ways I see
this working:
2. action=expandtemplates returns <html>...</html> for the expansion of
{{T}}, but also provides an
I'm getting the impression there is a fundamental misunderstanding here.
On 18.05.2014 04:28, Subramanya Sastry wrote:
So, consider this wikitext for page P.
== Foo ==
{{wikitext-transclusion}}
*a1
<map> ... </map>
*a2
{{T}} (the html-content-model-transclusion)
*a3
Parsoid
On 19.05.2014 14:21, Subramanya Sastry wrote:
On 05/19/2014 04:52 AM, Daniel Kinzler wrote:
I'm getting the impression there is a fundamental misunderstanding here.
You are correct. I completely misunderstood what you said in your last
response
about expandtemplates. So, the rest of my
On 19.05.2014 20:01, Gabriel Wicke wrote:
On 05/19/2014 10:55 AM, Bartosz Dziewoński wrote:
I am kind of lost in this discussion, but let me just ask one question.
Won't all of the proposed solutions, other than the one of just not
expanding transclusions that can't be expanded to wikitext,
On 19.05.2014 23:05, Gabriel Wicke wrote:
I think we have agreement that some kind of tag is still needed. The main
point still under discussion is on which tag to use, and how to implement
this tag in the parser.
Indeed.
Originally, <domparse> was conceived to be used in actual page content
Hi all.
We (the Wikidata team) ran into an issue recently with the value that gets
passed as $baseRevId to Content::prepareSave(), see Bug 67831 [1]. This comes
from WikiPage::doEditContent(), and, for core, is nearly always set to false
(e.g. by EditPage).
We interpreted this rev ID to be the
On 29.05.2014 21:07, Aaron Schulz wrote:
Yes it was for auto-reviewing new revisions. New revisions are seen as a
combination of (base revision, changes).
But EditPage in core sets $baseRevId to false. The info isn't there for the
standard case. In fact, the ONLY thing in core that sets it
On 30.05.2014 15:38, Brad Jorsch (Anomie) wrote:
I think you need to look again into how FlaggedRevs uses it, without the
preconceptions you're bringing in from the way you first interpreted the
name of the variable. The current behavior makes perfect sense for that
specific use case.
On 11.07.2014 17:19, Tyler Romeo wrote:
Most likely, we would encrypt the IP with AES or something using a
configuration-based secret key. That way checkusers can still reverse the
hash back into normal IP addresses without having to store the mapping in the
database.
There are two problems
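The reversible masking Tyler describes could be sketched like this, using PHP's OpenSSL functions. This is only an illustration: the function names, cipher choice, and key handling are assumptions, and a real deployment would need a proper IV strategy rather than a fixed one.

```php
<?php
// Illustrative sketch: encrypt an IP with AES under a configured secret
// key, so trusted users can decrypt it without storing a mapping table.
// The fixed all-zero IV is a simplification for the sketch only.
function encryptIp( string $ip, string $secretKey ): string {
	$iv = str_repeat( "\0", 16 );
	$raw = openssl_encrypt( $ip, 'aes-256-cbc', $secretKey, OPENSSL_RAW_DATA, $iv );
	return base64_encode( $raw );
}

function decryptIp( string $blob, string $secretKey ): string {
	$iv = str_repeat( "\0", 16 );
	return openssl_decrypt( base64_decode( $blob ), 'aes-256-cbc', $secretKey, OPENSSL_RAW_DATA, $iv );
}
```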
MediaWiki offers several extension interfaces based on registering classes to be
used for a specific purpose, e.g. custom actions, special pages, api modules,
etc. The problem with this approach is that the signature of the constructor has
to be known to the framework, preventing us from moving
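One common way around the fixed-constructor-signature problem is to register factory callables rather than class names, so each factory decides its own dependencies. A hedged sketch — all names here are invented for illustration, not MediaWiki's actual registration API:

```php
<?php
// Sketch: the framework stores opaque factory closures instead of class
// names, so it never needs to know any constructor signature.
class HandlerRegistry {
	/** @var callable[] */
	private array $factories = [];

	public function register( string $name, callable $factory ): void {
		$this->factories[$name] = $factory;
	}

	public function get( string $name ): object {
		// The factory closes over whatever dependencies it needs;
		// the framework just invokes it.
		return ( $this->factories[$name] )();
	}
}
```

This keeps the extension free to change its constructor without breaking the framework contract.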
This is about whether it's OK for MediaWiki core to depend on other PHP
libraries, and how to manage such dependencies.
Background: A while back, I proposed a simple class for assertions to be added
to core[1]. It was then suggested[2] that this could be placed in a separate
component, which
yay! congrats!
see any issues with this
approach? Is it worth the trouble?
Any input would be great!
Thanks,
daniel
--
Daniel Kinzler
Senior Software Developer
Wikimedia Deutschland
Gesellschaft zur Förderung Freien Wissens e.V.
On 09.09.2014 13:45, Nikolas Everett wrote:
All those options are less good than just updating the cache, I think.
Indeed. And that *sounds* simple enough. The issue is that we have to be sure to
update the correct cache key, the exact one the OutputPage object in question
was loaded from.
Hi all.
During the RFC discussion today, the question popped up how the performance of
creating closures compares to creating objects. This is particularly relevant
for closures/objects created by bootstrap code which is always executed, e.g.
when registering with a CI framework.
Attached is a
Apparently the attached file got stripped when posting to the list.
Here's a link:
http://brightbyte.de/repos/codebin/ClosureBenchmark.php?view=1
Here is the code inlined:
<?php
function timeClosures( $n ) {
	$start = microtime( true );
	for ( $i = 0; $i < $n; $i++ ) {
Hi!
Once wikidata.org allows for entry of arbitrary properties, we will need some
protection against spam. However, there is a nasty little problem with making
SpamBlacklist, AntiBot, AbuseFilter etc work with Wikidata content:
Wikibase implements editing directly via the API, but using
Hi all!
I recently found that it is less than clear how numbers should be quoted/escaped
in SQL queries. Should DatabaseBase::addQuotes() be used, or rather just
intval(), to make sure it's really a number? What's the best practice?
Looking at DatabaseBase::makeList(), it seems that addQuotes()
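The intval() option can be illustrated like this — a sketch, with buildWhere() invented for illustration; forcing the value to an integer before interpolation makes quoting unnecessary:

```php
<?php
// Sketch of the intval() approach to putting a numeric value into SQL.
// buildWhere() is a hypothetical helper, not a MediaWiki function.
function buildWhere( $userInput ): string {
	// intval() guarantees the value really is a number, so it can be
	// embedded directly with no quoting and no injection risk.
	$id = intval( $userInput );
	return "WHERE page_id = $id";
}
```

addQuotes(), by contrast, would produce a quoted string literal, leaving it to the database to coerce the type.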
and LoadBalancer class
to take care of this. But I'm unsure on the details. Also... how does the new LB
interact with the existing LB? Would it just replace it, or wrap and delegate? Or
what?
Any ideas how to best do this?
-- daniel
--
Daniel Kinzler, Softwarearchitekt
Wikimedia Deutschland
On 04.12.2012 18:20, Matthew Flaschen wrote:
On 12/04/2012 04:52 AM, Daniel Kinzler wrote:
4) just add another hook, similar to EditFilterMergedContent, but more
generic,
and call it in EditEntity (and perhaps also in EditPage!). If we want a spam
filter extension to work with non-text
On 05.12.2012 14:39, Aran Dunkley wrote:
Hi Guys,
How do I get a specific version of an extension using git?
I want to get Validator 0.4.1.4 and Maps 1.0.5, but I can't figure out
how to use git to do this...
git always clones the entire repository, including all versions. So, you clone,
and
On 06.12.2012 01:55, Chris Steipp wrote:
The same general idea should apply for Wikibase. The only difference is
that the core functionality of data editing is in Wikibase.
Correct, and I would say that Wikibase should be calling the same
hooks that core does, so that AbuseFilter can be
On 05.12.2012 22:06, Matthew Flaschen wrote:
More specifically, what if Wikidata exposed a JSON object representing
an external version of each change (essentially a data API).
This already exists, that's more or less how changes get pushed to client wikis.
It could allow hooks to register
test2.wikimedia.org is now configured to act as a client to wikidata.org. It's
supposed to access data items by directly talking to wikidata.org's database.
But this fails: Revision::getRevisionText returns false. Any ideas why that
would be? I have documented the issue in detail here:
Hi Christian
On 08.12.2012 22:16, Christian Aistleitner wrote:
However, we actually do not need those databases and tables for testing.
For testing, it would be sufficient to have mock database objects [1] that
pretend that there are underlying databases, tables, etc.
Hm... so, if that mock
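A mock database object of the kind Christian suggests could look roughly like this. The interface is invented for illustration and is much simpler than MediaWiki's actual Database class:

```php
<?php
// Sketch of a mock database for tests: it pretends tables exist by
// keeping rows in memory, so no real database server is needed.
class MockDatabase {
	private array $tables = [];

	public function insert( string $table, array $row ): void {
		$this->tables[$table][] = $row;
	}

	public function select( string $table, callable $where ): array {
		// "Run a query" by filtering the in-memory rows.
		return array_values( array_filter( $this->tables[$table] ?? [], $where ) );
	}
}
```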
On 09.12.2012 00:50, Platonides wrote:
Do you really need SQL access to wikidata?
I would expect your code to go through a WikidataClient class, which
could then connected to wikidata by sql, http, loading from a local file...
Sure, but then I can't test the code that does the direct
This is a follow-up to Rob's mail "Wikidata change propagation". I feel that the
question of running periodic jobs on a large number of wikis is a more generic
one, and deserves a separate thread.
Here's what I think we need:
1) Only one process should be performing a given update job on a given
Thanks Rob for starting the conversation about this.
I have explained our questions about how to run updates in the mail titled
"Running periodic updates on a large number of wikis", because I feel that this
is a more general issue, and I'd like to decouple it a bit from the Wikidata
specifics.
Thanks aude for replying to Mark's questions!
On 12.01.2013 17:08, aude wrote:
Right now, I'm focused on non-WMF users of MediaWiki and this sounds
like something they should be aware of. If they install a new wiki and
have $wgContentHandlerUseDB enabled, then what new risks do they need to
On 12.01.2013 20:14, Ori Livneh wrote:
ContentHandler powers the Schema: namespace on metawiki, with the relevant
code residing in Extension:EventLogging. Here's an example:
http://meta.wikimedia.org/wiki/Schema:SavePageAttempts
I found the ContentHandler API to be useful and extensible,
On 12.01.2013 02:19, Mark A. Hershberger wrote:
As you may have guessed, I've been working on the release notes for
1.21. Please look over them and improve them if you can.
In the process, I came across the ContentHandler blurb. I don't recall
this being discussed on-list, but, from
On 12.01.2013 16:02, Mark A. Hershberger wrote:
On 01/12/2013 09:32 AM, Matthew Flaschen wrote:
Last I heard, significant progress was made on 2.0, but the project is
currently on hold. Thus, there's not a need to notify people right
away. When the time comes, I don't think initial migration
Thanks for your input, Ori!
On 13.01.2013 01:35, Ori Livneh wrote:
As I said, I found the API well-designed on the whole, but:
* getForFoo (getForModelID, getDefaultModelFor) is a confusing pattern for
method names. getDefaultModelFor is especially weird: I get what it does, but
I don't
On 13.01.2013 02:02, Lee Worden wrote:
Yes, I think ContentHandler does some of what WW does, and I'll be happy to
integrate with it. I don't think we'll want to abandon the source-file tag,
though, because on pages like
http://lalashan.mcmaster.ca/theobio/math/index.php/Nomogram and
On 14.01.2013 00:16, MZMcBride wrote:
Looks neat. :-) But this is mostly already in progress at
https://www.mediawiki.org/wiki/Extension:CodeEditor. This extension is
live on Wikimedia wikis already (including Meta-Wiki and MediaWiki.org),
but it has some outstanding issues and could
On 15.01.2013 12:44, Jeroen De Dauw wrote:
Hey,
I have observed a difference in opinion between two groups of people on
gerrit, which unfortunately is causing bad blood on both sides. I'm
therefore interested in hearing your opinion about the following scenario:
Someone makes a sound
On 15.01.2013 12:58, Nikola Smolenski wrote:
In my opinion, if the typo is trivial (f.e. someone typed fo instead of
of),
there is no need to -1 the commit, however if the typo pertains to a crucial
element of the commit (f.e. someone typed fixed wkidata bug) perhaps it
should, since
On 15.01.2013 15:06, Tyler Romeo wrote:
I agree with Antoine. Commit messages are part of the permanent history of
this project. From now until MediaWiki doesn't exist anymore, anybody can
come and look at the change history and the commit messages that go with
them. Now you might ask what the
On 15.01.2013 13:39, Chad wrote:
This is a non issue in the very near future. Once we upgrade (testing
now, planning for *Very Soon* after eqiad migration), we'll have the
ability to edit commit messages and topics directly from the UI. I
think this will save people a lot of time
Thanks Tim for pitching in.
On 16.01.2013 07:09, Tim Starling wrote:
Giving a change -1 means that you are asking the developer to take
orders from you, under threat of having their work ignored forever. A
-1 status can cause a change to be ignored by other reviewers,
regardless of its merit.
Hi all!
I would like to ask for you input on the question how non-wikitext content can
be indexed by LuceneSearch.
Background is the fact that full text search (Special:Search) is nearly useless
on wikidata.org at the moment, see
https://bugzilla.wikimedia.org/show_bug.cgi?id=42234.
The reason
On 07.03.2013 20:58, Brion Vibber wrote:
3) The indexer code (without plugins) should not know about Wikibase, but it
may have hard-coded knowledge about JSON. It could have a special indexing mode
for JSON, in which the structure is deserialized and traversed, and any values
are added
On 23.04.2013 14:46, Jeroen De Dauw wrote:
Hey,
At the risk of starting an emacs vs vim like discussion, I'd like to ask if
I ought to be using a SpecialPage or an Action in my use case. I want to
have an extra tab for a specific type of article that shows some additional
information about
Hi all!
I came across a general design issue when trying to make ApiQueryLangLinks more
flexible, taking into account extensions manipulating language links via the new
LanguageLinks hook. To do this, I want to introduce a LangLinkLoader class with
two implementations, one with the old behavior,
On 02.05.2013 16:12, Brad Jorsch wrote:
On Thu, May 2, 2013 at 9:36 AM, Daniel Kinzler dan...@brightbyte.de wrote:
1) The composition approach, using:
[...]
Disadvantages:
* more classes
* ???
* A lot of added complexity
The number of classes, and the object graph, some
When looking for resources to answer Tim's question at
https://www.mediawiki.org/wiki/Architecture_guidelines#Clear_separation_of_concerns,
I found a very nice and concise overview of principles to follow for writing
testable (and extendable, and maintainable) code:
Writing Testable Code by Miško
Thanks for your thoughtful reply, Tim!
On 03.06.2013 07:35, Tim Starling wrote:
On 31/05/13 20:15, Daniel Kinzler wrote:
Writing Testable Code by Miško Hevery
http://googletesting.blogspot.de/2008/08/by-miko-hevery-so-you-decided-to.html.
It's just 10 short and easy points, not some
On 13.05.2013 12:32, Denny Vrandečić wrote:
That's awesome!
Two things:
* how set are you on a Java-based solution? We would prefer PHP in order to
make it more likely to be deployed.
Just saw that I never replied to this.
I think running Java core on the Wikimedia cluster isn't a
On 03.06.2013 18:48, Chris Steipp wrote:
On Mon, Jun 3, 2013 at 6:04 AM, Nikolas Everett never...@wikimedia.org
wrote:
2. Build smaller components sensibly and carefully. The goal is to be
able to hold all of the component in your head at once and for the
component to present such a
My take on assertions, which I also tried to stick to in Wikibase, is as
follows:
* A failing assertion indicates a local error in the code or a bug in PHP;
They should not be used to check preconditions or validate input. That's what
InvalidArgumentException is for (and I wish type hints
Hi Brian!
I like the idea of a metadata API very much. Being able to just replace the
scraping backend with Wikidata (as proposed) later seems a good idea. I see no
downside as long as no extra work needs to be done on the templates and
wikitext, and the API could even be used later to port
On 17.09.2013 00:34, Gabriel Wicke wrote:
There *might* be, in theory. In practice I doubt that there are any
articles starting with 'w/'.
I count 10 on en.wiktionary.org:
https://en.wiktionary.org/w/index.php?title=Special%3APrefixIndex&prefix=w%2F&namespace=0
To avoid future conflicts, we
Sorry? You can upload multiple files in the same HTTP POST. Just add
several <input type="file"> elements to the same page (and hope you don't hit
max_post_size). That can be done with javascript.
Or do you mean uploading half file now and the other half on a second
connection later?
I mean
Does a PHP script using upload stuff get run if the file upload is complete,
or will it start while still uploading?
If not, can't you figure out the temporary name of the upload on the server
and then run ls -lh on it?
It gets run only after the upload is complete. And even if not, and you
David Gerard wrote:
But basically: treating interwiki links as a 1-1 relationship even
from one wiki to another is horribly unreliable, and assuming you can
go from wiki A to wiki B to wiki C with interwiki links is just not
doable reliably with robots.
If you only look at language-links
jida...@jidanni.org wrote:
And, we want this to be as simple as possible for our loyal
administrator, me. I.e., use existing facilities, no cronjobs to run
dumpBackup.php (or even mysqldump, which would be giving up too much
information) and then offering a link to what they produce.
Dawson wrote:
Hello,
I have used Special:Export at en.wikipedia to export
Diabetes_mellitus and ticked the box include templates (I'm only
really after the templates).
The resulting XML file is 40.1mb so I decided to go with mwdumper.js
rather than Special:Import.
I'm working
Gerard Meijssen wrote:
Hoi,
Who says that the meet-up at FOSDEM will fail?? With people from the USA,
the Netherlands, Finland, Germany and Great Britain arriving with MediaWiki
on their mind, it can hardly be called a failed meet up. I am also quite
sure that if you want to talk about
Exactly how Barcamp-style is this meetup gonna be? Does it include the
camping and stuff, or are we expected to sleep at hotels like at normal
conventions?
Afaik, few bar camps involve actual camping :) There are loads of inexpensive
hostels and modest hotels in the area. We'll put up some
Platonides wrote:
Remember to add some message like 'Uploading a low-res version. Keep the
original if you want it full-res for the future.' We don't want anyone
thinking 'I uploaded this 14GB file. Now I can delete as they keep a
copy.' without fully understanding it. Some people deleted
Andre Engels wrote:
1. Why is this User Agent getting this response? If I remember
correctly, this was installed in the early days of the pywikipediabot,
when Brion wanted to block it because it had a programming error
causing it to fetch each page twice (sometimes even more?). If that is
Rolf Lampa wrote:
Marco Schuster wrote:
I want to crawl around 800.000 flagged revisions from the German
Wikipedia, in order to make a dump containing only flagged revisions.
[...]
flaggedpages where fp_reviewed=1;. Is it correct this one gives me a
list of all articles with flagged revs,
Marco Schuster wrote:
Fetch them from the toolserver (there's a tool by duesentrieb for that).
It will catch almost all of them from the toolserver cluster, and make a
request to wikipedia only if needed.
I highly doubt this is legal use for the toolserver, and I pretty
much guess that 800k
Marco Schuster wrote:
...
But by then, i do hope we have revision flags in the dumps. because that
would
be The Right Thing to use.
Still, using the dumps would require me to get the full history dump
because I only want flagged revisions and not current revisions
without the flag.
Gerard Meijssen wrote:
Hoi,
There is RDF, there is Semantic MediaWiki. Why should one get a push and the
other not. Semantic MediaWiki is used on production websites. Its usability
is continuously being improved. No cobwebs there.
SMW is of course an option for integrating metadata, but I
What is a translation but another type of annotation ?
Thanks,
This *Could* be modeled like that in theory. But I don't see an easy way to
implement this with a low cost of transition. Basically, it would require
license info to be not handled via templates at all.
I don't see that happening
Dawson wrote:
Can anyone recommend a really lightweight Wiki? Preferably PHP but flat file
would be considered too.
http://en.wikipedia.org/wiki/Comparison_of_wiki_software
http://www.wikimatrix.org/
http://freewiki.info/
-- daniel
Aran wrote:
Hi I'm just wondering what the policy is with regards to changes to
extension code in the svn in the case where the modification is
compatible only with recent versions. Shouldn't extensions be designed
to be as backward compatible as is practical rather than focussing
Gerard Meijssen wrote:
Hoi,
Some extensions are backwards compatible however and some are not. Given
that there are plenty of people and organisations using stable versions of
MediaWiki, how do they know and how are they to know?
Thanks,
GerardM
Never rely on it. Assume extensions
jida...@jidanni.org wrote:
Say, e.g., api.php?action=query&list=logevents looks fine, but when I
look at the same table in an SQL dump, the Chinese utf8 is just a
latin1 jumble. How can I convert such strings back to utf8? I can't
find the place where MediaWiki converts them back and forth.
The meet-up[1] is drawing close now: between April 3 and 5 we meet at the
c-base[2] in Berlin to discuss MediaWiki development, extensions, toolserver
projects, wiki research, etc. Registration[3] is open until March 20 (required
even if you already pre-registered).
The schedule[4] is slowly
Platonides wrote:
O. Olson wrote:
Does anyone have experience importing the Wikipedia XML Dumps into
MediaWiki. I made an attempt with the English Wiki Dump as well as the
Portuguese Wiki Dump, giving php (cli) 1024 MB of Memory in the php.ini
file. Both of these attempts fail with out of
O. O. wrote:
Daniel Kinzler wrote:
That sounds very *very* odd. because page content is imported as-is in both
cases, it's not processed in any way. The only thing I can imagine is that
things don't look right if you don't have all the templates imported yet.
Thanks Daniel. Yes, I think
Robert Rohde wrote:
On Mon, Mar 9, 2009 at 9:29 PM, Andrew Garrett and...@werdn.us wrote:
On Tue, Mar 10, 2009 at 3:21 PM, K. Peachey p858sn...@yahoo.com.au wrote:
Currently all data, including private data, is replicated to the
toolserver. We could not do this with a third-party server.
My
Bilal Abdul Kader wrote:
Greetings,
We are setting up a research server at Concordia University (Canada) that is
dedicated for Wikipedia. We would love to share the resources with anyone
interested.
In case anyone needs help setting it up, we would love to help as well.
bilal
There's
Robert Rohde wrote:
On Tue, Mar 10, 2009 at 1:27 PM, River Tarnell
ri...@loreley.flyingparchment.org.uk wrote:
phoebe ayers:
River: Well, you say that part of the issue with the toolserver is money and
time... and this person that I've been
Tei wrote:
note to self: look into the code that orders text (collation) in
mediawiki, has to be a fun one :-)
There is none. Sorting is done by the database. That is to say, in the default
compatibility mode, binary collation is used - that is, byte-by-byte
comparison of UTF-8 encoded data.