On 06/03/13 16:28, Jay Ashworth wrote:
To “convey” a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user
through a computer network, with no transfer of a copy, is not
conveying.
As JavaScript is executed in the client, it
Mark A. Hershberger m...@everybody.org wrote:
On 03/04/2013 01:34 AM, Chad wrote:
However, we do
have people who want/use MSSQL, so I think taking the effort to
keep it working is worthwhile--if someone's willing to commit.
Since Danny Bauch has been using MSSQL and modifying MW for his
Hi everybody,
on Tuesday the 19th at 17:00 UTC[1], there will be an IRC Office Hour in
#wikimedia-office about Wikimedia's issue tracker[2] and Bug
management[3].
Add it to your calendar and come to ask how to better find information
in Bugzilla that interests you, and to share ideas and criticism about how
As you probably know, the search in Wikidata sucks big time.
Until we have created a proper Solr-based search and deployed it on that
infrastructure, we would like to implement and set up a reasonable stopgap
solution.
The simplest and most obvious signal for sorting the items would be to
1) make a
I found EXPLAIN (http://dev.mysql.com/doc/refman/5.0/en/using-explain.html)
pretty useful during my project; rather than theories, it shows exactly how
the query is being resolved and whether the indexes are being used correctly.
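For illustration, a check along these lines can be run from MediaWiki itself (a minimal sketch; the query is a made-up example, using the 1.2x-era wfGetDB()/DB_SLAVE accessors):

    $dbr = wfGetDB( DB_SLAVE ); // read-only replica connection
    $res = $dbr->query(
        'EXPLAIN SELECT page_id FROM page WHERE page_namespace = 0' .
        ' AND page_title = ' . $dbr->addQuotes( 'Foo' ),
        __METHOD__
    );
    foreach ( $res as $row ) {
        // key: the index the optimizer chose (NULL means a table scan);
        // rows: its estimate of rows examined. A NULL key or a huge
        // estimate is the red flag to catch before deployment.
        echo "type={$row->type} key={$row->key} rows={$row->rows}\n";
    }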
On Thu, Mar 7, 2013 at 6:06 AM, Sumana Harihareswara
The advice on
https://wikitech.wikimedia.org/wiki/Query_profiling_for_features_developers
sounds good.
Is there more detail somewhere on how to do this part: "Test your query
against production slaves prior to full deployment"?
Luke
On Wed, Mar 6, 2013 at 8:14 PM, Matthew Flaschen
Hey Quim, hey Maria,
thank you for your replies!
I actually knew where to find the XML dumps, but that pointer about the new
XML import tools is really helpful.
So eventually I was able to acquire an 8-core Xeon with 32 GB RAM and 6 TB of
SAS storage to start my experiments on :)
Let's see what this baby can do.
Hi,
we discussed OAuth many times... but what's the current status?
Do we have working extensions that support using OpenID or OAuth to log in
to MediaWiki, so that you can log in using your Google account or similar?
I believe that WMF is working on this, so can we have some update?
I
I just discovered this: http://www.mediawiki.org/wiki/Extension:OpenID
Why don't we have it in production? :)
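For anyone who wants to try it locally, installation follows the usual extension pattern of the time (a sketch only; the commented setting name below is a hypothetical placeholder, not necessarily the extension's real variable — see the extension page for the actual configuration):

    # In LocalSettings.php:
    require_once "$IP/extensions/OpenID/OpenID.php";
    // The extension can act as a consumer (log in *with* an external
    // OpenID) and/or as a provider (the wiki *is* an identity source).
    // Hypothetical toggle, shown only to indicate where such settings go:
    # $wgOpenIDConsumerOnly = true;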
On Thu, Mar 7, 2013 at 8:30 PM, Petr Bena benap...@gmail.com wrote:
Hi,
we discussed OAuth many times... but what's the current status?
Do we have working extensions that support
On Thu, Mar 7, 2013 at 2:32 PM, Petr Bena benap...@gmail.com wrote:
I just discovered this: http://www.mediawiki.org/wiki/Extension:OpenID
Why don't we have it in production? :)
Just last week there was a thread about this. Extension:OpenID is under
active development, but I think it could
Andreas Nüßlein wrote:
so I need to set up a local instance of the dewiki and enwiki DBs with all
revisions... :-D
I know it's rather a mammoth project so I was wondering if somebody could
give me some pointers?
First of all, I would need to know what kind of hardware I should get. Is
it
Hi all!
I would like to ask for your input on the question of how non-wikitext content
can be indexed by LuceneSearch.
The background is that full-text search (Special:Search) is nearly useless
on wikidata.org at the moment; see
https://bugzilla.wikimedia.org/show_bug.cgi?id=42234.
The reason
On Thu, Mar 7, 2013 at 11:45 AM, Daniel Kinzler dan...@brightbyte.de wrote:
1) create a specialized XML dump that contains the text generated by
getTextForSearchIndex() instead of actual page content.
That probably makes the most sense; alternatively, make a dump that
includes both raw data and
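Roughly, option (1) would swap the text payload while dumping; a sketch against the 1.21-era ContentHandler API (the surrounding loop is assumed, and $title stands for whatever page the dumper is currently visiting):

    $page = WikiPage::factory( $title );
    $content = $page->getContent();
    // For wikitext this is more or less the wikitext itself; for a
    // Wikibase entity it is the flattened labels/descriptions/aliases,
    // which is exactly what the Lucene indexer should see.
    $indexText = $content !== null ? $content->getTextForSearchIndex() : '';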
On 06/03/13 23:58, Federico Leva (Nemo) wrote:
There's slow-parse.log, but it's private unless a solution is found for
https://gerrit.wikimedia.org/r/#/c/49678/
https://wikitech.wikimedia.org/wiki/Logs
And slow-parse.log is probably going to be kept private unless it is proven
not to be harmful.
Hey Chris,
I was exploring the SpamBlacklist extension. I have some doubts I hope you
could clear up.
Is there any place I can find documentation of the SpamBlacklist class in
the file SpamBlacklist_body.php?
In the filter function, what do the following variables represent?
$title
$text
$section
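From a quick read of the extension's source, the call shape appears to be roughly the following (an illustrative sketch, not authoritative documentation; SpamBlacklist_body.php itself is the reference):

    $matches = $spamBlacklist->filter(
        $title,   // Title object of the page being saved
        $text,    // the text being saved (the edited section's text)
        $section  // section identifier; '' when the whole page is edited
    );
    // Returns the blacklisted URL matches, or false if the text is clean.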
On 06/03/13 22:05, Robert Rohde wrote:
On enwiki we've already done Lua conversions of most of the string
templates, several formatting templates (e.g. {{rnd}}, {{precision}}),
{{coord}}, and a number of others. And there is work underway on a
number of the more complex overhauls (e.g.
On 07/03/13 11:32, Petr Bena wrote:
I just discovered this: http://www.mediawiki.org/wiki/Extension:OpenID
Why don't we have it in production? :)
As far as I know, that extension is pending a full review before it
lands on the Wikimedia cluster.
Ryan Lane wrote about it:
On 03/07/2013 12:00 PM, Antoine Musso wrote:
On 06/03/13 23:58, Federico Leva (Nemo) wrote:
There's slow-parse.log, but it's private unless a solution is found for
https://gerrit.wikimedia.org/r/#/c/49678/
https://wikitech.wikimedia.org/wiki/Logs
And slow-parse.log is probably going to
ah ok I was confused by it being flagged stable
On Thu, Mar 7, 2013 at 8:35 PM, Tyler Romeo tylerro...@gmail.com wrote:
On Thu, Mar 7, 2013 at 2:32 PM, Petr Bena benap...@gmail.com wrote:
I just discovered this: http://www.mediawiki.org/wiki/Extension:OpenID
Why don't we have it in
On Thu, Mar 7, 2013 at 3:05 PM, Antoine Musso hashar+...@free.fr wrote:
We still have to figure out which account will be used, the URL, whether
we want a dedicated wiki etc...
Those discussions are unrelated to using OpenID as a client, though.
--
Tyler Romeo
Stevens Institute of
On Thu, Mar 7, 2013 at 8:06 PM, Matthew Flaschen
mflasc...@wikimedia.org wrote:
Why would it be harmful for public wikis? Anyone can do this on an
article-by-article basis by copying the source to their own MediaWiki
instances.
That user would have to pick which articles to copy and test (or
Those tags are arbitrary :(
-Chad
On Mar 7, 2013 12:09 PM, Petr Bena benap...@gmail.com wrote:
ah ok I was confused by it being flagged stable
On Thu, Mar 7, 2013 at 8:35 PM, Tyler Romeo tylerro...@gmail.com wrote:
On Thu, Mar 7, 2013 at 2:32 PM, Petr Bena benap...@gmail.com wrote:
I
On Thu, Mar 7, 2013 at 12:10 PM, Tyler Romeo tylerro...@gmail.com wrote:
On Thu, Mar 7, 2013 at 3:05 PM, Antoine Musso hashar+...@free.fr wrote:
We still have to figure out which account will be used, the URL, whether
we want a dedicated wiki etc...
Those discussions are unrelated to
On 07.03.2013 20:58, Brion Vibber wrote:
3) The indexer code (without plugins) should not know about Wikibase, but it
may have hard-coded knowledge about JSON. It could have a special indexing
mode for JSON, in which the structure is deserialized and traversed, and any
values are added
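As a sketch of what such a JSON mode could do (illustrative code, not the indexer's actual implementation):

    // Deserialize, walk the structure, and collect every scalar value
    // as indexable text; keys are ignored, values are indexed.
    function collectJsonValues( $node, array &$out ) {
        if ( is_array( $node ) ) {
            foreach ( $node as $child ) {
                collectJsonValues( $child, $out );
            }
        } elseif ( is_scalar( $node ) ) {
            $out[] = (string)$node;
        }
    }

    $data = json_decode( $json, true ); // $json: the page's raw JSON blob
    $values = array();
    collectJsonValues( $data, $values );
    $indexText = implode( ' ', $values );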
On 07/03/13 21:03, anubhav agarwal wrote:
Hey Chris,
I was exploring the SpamBlacklist extension. I have some doubts I hope you
could clear up.
Is there any place I can find documentation of the SpamBlacklist class in
the file SpamBlacklist_body.php?
In the filter function, what do the following
On 07.03.2013 21:09, Petr Bena wrote:
ah ok I was confused by it being flagged stable
Yes. It *is* stable, at least since I took over the maintenance a long
time ago.
That is not to say that it cannot be further improved.
Currently I am very busy adding new necessary features to the user
(1) seems like the right way to go to me too.
There may be other ways, but puppet/files/lucene/lucene.jobs.sh has a
function called import-db() which creates a dump like this:
php $MWinstall/common/multiversion/MWScript.php dumpBackup.php $dbname --current $dumpfile
Ram
On Thu, Mar 7, 2013
Interesting article I found about Redis and its poor performance with SSDs
as a swap medium. For whoever might be interested.
http://antirez.com/news/52
--
Tyler Romeo
Stevens Institute of Technology, Class of 2015
Major in Computer Science
www.whizkidztech.com | tylerro...@gmail.com
On Thu, Mar 7, 2013 at 1:34 PM, Platonides platoni...@gmail.com wrote:
On 07/03/13 21:03, anubhav agarwal wrote:
Hey Chris,
I was exploring the SpamBlacklist extension. I have some doubts I hope you
could clear up.
Is there any place I can find documentation of the SpamBlacklist class in
the file
On 07/03/13 12:12, Asher Feldman wrote:
Ori - I think this has been discussed but automated xhprof configuration as
part of the vagrant dev env setup would be amazing :)
I don't think xhprof is the best technology for PHP profiling. I
reported a bug a month ago which causes the times it reports
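For context, the xhprof API under discussion is essentially this (a minimal sketch of the stock PECL extension's usage):

    // Profile a stretch of code; flags request CPU and memory data in
    // addition to the default wall-clock times.
    xhprof_enable( XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY );

    // ... code under profile ...

    $profile = xhprof_disable(); // array keyed by "parent==>child" pairs
    foreach ( $profile as $call => $stats ) {
        echo "$call: {$stats['wt']} us wall, {$stats['ct']} calls\n";
    }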
On Thu, Mar 7, 2013 at 3:57 PM, Tim Starling tstarl...@wikimedia.orgwrote:
On 07/03/13 12:12, Asher Feldman wrote:
Ori - I think this has been discussed but automated xhprof configuration as
part of the vagrant dev env setup would be amazing :)
I don't think xhprof is the best technology
On 2013-03-07 4:06 PM, Matthew Flaschen mflasc...@wikimedia.org wrote:
On 03/07/2013 12:00 PM, Antoine Musso wrote:
On 06/03/13 23:58, Federico Leva (Nemo) wrote:
There's slow-parse.log, but it's private unless a solution is found for
https://gerrit.wikimedia.org/r/#/c/49678/
Marc-Andre Pelletier discovered a vulnerability in the MediaWiki OpenID
extension for the case where MediaWiki is used as a “provider” and the wiki
allows renaming of users.
All previous versions of the OpenID extension used user-page URLs as
identity URLs. On wikis that use the OpenID extension
On Thu, Mar 7, 2013 at 2:16 PM, Tyler Romeo tylerro...@gmail.com wrote:
Interesting article I found about Redis and its poor performance with SSDs
as a swap medium. For whoever might be interested.
http://antirez.com/news/52
This was not particularly insightful or useful; Redis swapping is
This is indeed a problem, but given that rename permissions are granted by
default to bureaucrats, who are the most trusted users, and on small wikis
typically to sysadmins with shell access, this shouldn't be very dangerous.
A sysadmin with shell access would be able to steal your identity anyway.
It's a