Great minds think alike ;-)
https://github.com/jdlrobson/gerrit-be-nice-to-me/commit/d6fd7913ff8fd7b1b57003797ba9ecb1134797d4
On Wed, Apr 3, 2013 at 10:55 AM, Yuri Astrakhan yastrak...@wikimedia.org
wrote:
Jon, do you think it might make sense to hide all those jenkins-bot
comments
want to see them. It's all one big hack so it
might be wrong, but it works for me ;-)
On Wed, Apr 3, 2013 at 12:17 PM, Yuri Astrakhan yastrak...@wikimedia.org
wrote:
Hehe, thanks! :) Just pulled again, but it's broken a bit :(
https://gerrit.wikimedia.org/r/#/c/53131/ -- at first I thought
This is awesome!!! Thanks Jon :) And no, I didn't look at the code, too
scared :)
On Wed, Apr 3, 2013 at 1:14 AM, Jon Robson jdlrob...@gmail.com wrote:
I feel very dirty having done this but I made a chrome extension that
autoexpands all comments and adds a comment count next to unexpanded
Hehe, I have added both $wgDebugAPI
(http://www.mediawiki.org/wiki/Manual:$wgDebugAPI) and
$wgAPIGeneratorModules
(http://www.mediawiki.org/wiki/Manual:$wgAPIGeneratorModules).
The second one is deprecated.
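For reference, a minimal LocalSettings.php sketch of the two settings (the
generator module name and class below are made up for illustration, and
$wgDebugAPI must never be enabled on a production wiki):

    // Relax API validation and security checks to ease debugging (dev only).
    $wgDebugAPI = true;

    // Register an extra query generator module (the deprecated mechanism;
    // 'mygen' and ApiQueryMyGen are hypothetical names).
    $wgAPIGeneratorModules['mygen'] = 'ApiQueryMyGen';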
On Mon, Apr 1, 2013 at 2:07 PM, Antoine Musso hashar+...@free.fr wrote:
On 01/04/13
Max, do we still plan to detect javascript support for mobile devices, or
do you want to fold that into isWAP ?
Devices without JavaScript support need very different handling, as all HTML
has to be pre-built for them on the server.
On Fri, Mar 29, 2013 at 2:45 AM, Max Semenik maxsem.w...@gmail.com
Ori, thanks for the great dev environment :)
As mentioned in another email, I would like to have the following added to
the default Vagrant installation. Having a default dev environment would
allow us to quickly get new developers up to speed almost without
any walk-through steps.
* redirect from /
).
On Tue, Mar 26, 2013 at 2:57 PM, Asher Feldman afeld...@wikimedia.org wrote:
On Thu, Mar 21, 2013 at 10:55 PM, Yuri Astrakhan
yastrak...@wikimedia.org wrote:
The API is fairly complex to measure and set a performance target for. If a
bot requests
5000 pages in one call, together with all links and categories
There was a discussion recently about OAuth, and I just saw this blog post
(http://insanecoding.blogspot.com/2013/03/oauth-great-way-to-cripple-your-api.html),
posted on Slashdot
(http://tech.slashdot.org/story/13/03/22/1439235/a-truckload-of-oauth-issues-that-would-make-any-author-quit),
with some
This is all due to the introduction of Wikidata (http://wikidata.org).
On Thu, Mar 21, 2013 at 12:32 PM, Sumana Harihareswara
suma...@wikimedia.org wrote:
On 03/09/2013 10:00 PM, Brian Cassidy wrote:
Hello,
I'm the co-author of the WWW::Wikipedia Perl module (
The API is fairly complex to measure and set a performance target for. If a
bot requests 5000 pages in one call, together with all links and categories,
it might take a very long time (seconds if not tens of seconds). Compare
that to another API request that gets an HTML section of a page, which takes a
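(For a concrete, hypothetical illustration, a bot with apihighlimits doing

    api.php?action=query&generator=allpages&gaplimit=5000&prop=links|categories&pllimit=max&cllimit=max

asks the servers to walk thousands of rows across several tables in a single
request; the HTML-section fetch is tiny by comparison.)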
Hi Ori, I use your vagrant VM all the time. Thanks!!!
If we have a good vagrant setup, getting new devs on-board might be that
much easier.
I keep all the notes of the things I needed to do to your VM at
http://www.mediawiki.org/wiki/User:Yurik/Installing_Linux_virtual_box_under_Windows_7
Most
Answered inline. Also, I apologize as I think my email was slightly
off-topic to Ori's question.
On Sun, Mar 10, 2013 at 6:57 PM, Matthew Flaschen
mflasc...@wikimedia.org wrote:
phpMyAdmin also has major security issues. It isn't allowed on
Wikimedia Labs and probably shouldn't be used here.
On Sun, Mar 10, 2013 at 7:56 PM, Matthew Flaschen
mflasc...@wikimedia.org wrote:
As you said yourself, it's currently bridged. Doesn't that mean it is
in fact accessible to the whole LAN, until that's changed?
Of course - that's why I put host-only as the first requirement, and was
the
Should we re-start the "let's migrate to GitHub" discussion?
P.S. No, this is not a troll attempt; I am trying to understand if the
costs of not getting quality volunteers are worth the benefits of Gerrit, or
if the two-system solution would solve all perceived complexities.
Moreover, I do not know
Yes, lots of bad content might be submitted, but usually it is easy and
quick to spot, and could become good content over time. What I think we
should follow is the model that most other big open source projects follow,
which does seem to have a lower barrier to entry.
On Sat, Mar 9, 2013 at 3:51
Brian, due to the recent introduction of wikidata.org, most language links
are now stored there.
Regardless, you should try to avoid getting langlinks from the raw source,
because it wastes a lot of bandwidth. Please consider using the MediaWiki
API (https://www.mediawiki.org/wiki/API) to get just the
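For example, prop=langlinks returns just the language links for a page:

    https://en.wikipedia.org/w/api.php?action=query&titles=Perl&prop=langlinks&lllimit=500&format=json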
On Sun, Mar 10, 2013 at 1:08 AM, Bartosz Dziewoński matma@gmail.com wrote:
You want me to link to patches created by contributors who have been
carefully walked through the process of submitting something to gerrit?
Because I can do that, but it would be a little demeaning. (I can even
All these issues with the git-side driver are the reason I think we should
have a master-branch-monitoring bot that will update RELEASE-NOTES based on
commit messages. Easy to track changes, easy to fix problems. Might be a
bit more work than a driver though.
On Tue, Mar 5, 2013 at 12:30 PM,
Just for the record, sorry for not posting it right away:
Chris Steipp found the issue in my case to be the enabled "Block
third-party cookies and site data" Chrome setting. Even though this is not
the default at the moment, apparently Firefox is thinking of making this a
default. Enabling it breaks
I don't like the idea of a bot doing this. Nor do I think writing release
notes at commit time works well either (too many stupid conflicts).
What's the issue with a bot appending a few lines to a release-notes
section if it sees it in the commit message? But yes, release-notes conflicts
are a
You would at least need some release-notes marker added by the committer
so that you can skip non-release-notes-worthy commits.
That marker is exactly what I am proposing - if we formalize the commit
messages (http://www.mediawiki.org/wiki/Requests_for_comment/RELEASE-NOTES_bot),
the release notes write
The proposal
(http://www.mediawiki.org/wiki/Requests_for_comment/RELEASE-NOTES_bot) is
for a bot to parse commit messages for special commands to add some
text to specific sections of the release-notes file. When the bot detects a
master merge, it will pull the latest release-notes, change it, and merge
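A minimal sketch of the parsing step, assuming a hypothetical
"Release-Notes:" command syntax (the RFC would define the real format):

    <?php
    // Scan a commit message for lines like:
    //   Release-Notes: [bug fixes] API: fixed paging in list=allpages
    // and return the section/text pairs the bot should append.
    function extractReleaseNotes( $commitMessage ) {
        $notes = array();
        foreach ( explode( "\n", $commitMessage ) as $line ) {
            if ( preg_match( '/^Release-Notes:\s*\[([^\]]+)\]\s*(.+)$/', $line, $m ) ) {
                $notes[] = array( 'section' => $m[1], 'text' => trim( $m[2] ) );
            }
        }
        return $notes;
    }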
I am seeing this issue right now on desktop - I am logged in to en-wiki,
and all other languages work, but the moment I switch to Commons /
Wikiversity / Wikiquote / etc., I need to log in. Seems like all the
cross-site login is broken (has it ever worked before?)
On Wed, Feb 27, 2013 at 8:02 PM,
PM, Paul Selitskas p.selits...@gmail.com wrote:
Do you use the same protocol on Wikipedia and the other projects? When I
first log in via HTTPS and then somehow get to HTTP, I need to log in again.
On Thu, Feb 28, 2013 at 4:10 AM, Yuri Astrakhan yuriastrak...@gmail.com
wrote:
I am seeing this issue
Update: I think it's fully repeatable - in Chrome's incognito mode
(Ctrl+Shift+N) I logged in to HTTP en.wiki, tested ru.wiki and zh.wiki
-- worked fine. Switched to non-*.wikipedia.org URLs - fail.
On Wed, Feb 27, 2013 at 8:21 PM, Yuri Astrakhan yuriastrak...@gmail.com wrote:
I am
Atul, one of the fun projects that has been sitting on my back burner is to
implement localization of errors and warnings for the web API. The project
would involve some planning, figuring out the translation framework, and later
- converting all API modules and extensions to use it. As a result, all
There are some bugs that also prevent accurate unit testing on multiple
backends (these are the ones I hit personally):
* Bug 37702 (https://bugzilla.wikimedia.org/show_bug.cgi?id=37702) - Cloned
tables for unit tests do not have references and constraints
* Bug 44790
Do you intend to cover both SUL and legacy accounts?
I suspect that meta might not work due to the fact that there might be some
accounts that were created on meta, but never merged. So either the URL
would have to be different from the regular [[User:Xxx]] @ meta, like
How useful would it be for Lua to have access to the query/content/parser API?
I suspect there could be a lot of creative uses of this, including
getting data from Wikidata (which wouldn't have to do anything special to
enable this)
On Mon, Feb 18, 2013 at 7:15 AM, Jens Ohlig
, Yuri Astrakhan wrote:
How useful would it be for Lua to have access to the query/content/parser
API?
I suspect there could be a lot of creative uses of this, including
getting data from Wikidata (which wouldn't have to do anything special to
enable this)
If you provided full
Petr, make sure you require users to set their *User-Agent* string. Your
library should not use any defaults.
For the examples I would recommend this *User-Agent:*
*MyCoolTool/1.1 (http://example.com/MyCoolTool/; mycoolt...@example.com)
LinqToWiki/1.0*
See
[[en:User:Svick]]
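To illustrate (a made-up sketch, not how LinqToWiki actually does it), the
library can simply refuse to make a request until a User-Agent is set:

    <?php
    // Refuse to call the API unless the caller supplied a User-Agent.
    function apiGet( $userAgent, array $params ) {
        if ( trim( $userAgent ) === '' ) {
            throw new InvalidArgumentException( 'Set a descriptive User-Agent first.' );
        }
        $ch = curl_init( 'https://en.wikipedia.org/w/api.php?' .
            http_build_query( $params + array( 'format' => 'json' ) ) );
        curl_setopt( $ch, CURLOPT_USERAGENT, $userAgent );
        curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
        $body = curl_exec( $ch );
        curl_close( $ch );
        return json_decode( $body, true );
    }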
On Sun, Feb 17, 2013 at 7:42 PM, Yuri Astrakhan yuriastrak...@gmail.com
wrote:
Petr, make sure you require users to set their *User-Agent* string. Your
library should not use any defaults.
For the examples I would recommend this *User-Agent:*
*MyCoolTool/1.1 (http
sure someone will correct me if I'm wrong)
-bawolff
On 2013-02-11 9:14 AM, Yuri Astrakhan yuriastrak...@gmail.com wrote:
Mariya,
Could you be more specific? What types of changes caused extensions to
break? I might be mistaken but the vast majority of the API framework
classes have been
On Fri, Feb 8, 2013 at 3:33 PM, Luke Welling WMF lwell...@wikimedia.org wrote:
Do you want help?
I don't know much about the API at the moment, but it is my Level-Up
assignment this quarter. Documenting things is a good way to learn them.
I have written a lot of courseware in the past.
Great! How do you feel about taking the lead? I can help making sure we
are in sync with WMF in terms of authorization, trademarks, etc.
Sure, although it seems everything legal takes forever with WMF... (it's
been over a week since I asked to get an NDA to see the API logs, no
response :) )
On Fri, Feb 8, 2013 at 4:58 PM, Antoine Musso hashar+...@free.fr wrote:
Aren't you contracting for the WMF? ...
Nope, all my work so far has been volunteering. Sponsors are welcome :)
On Fri, Feb 8, 2013 at 7:26 PM, Quim Gil q...@wikimedia.org wrote:
FYI, I have put Yuri in touch with our
I will be happy to take part in this, as I do have some experience with the
API :)
I am heading the API v2 project, and this would be a natural extension of
that.
* RFC http://www.mediawiki.org/wiki/Requests_for_comment/API_Future
* Action/Parameter
Please don't forget about the hybrid approach -- the API supports FauxRequest,
so an API call can be made without doing a web call, but an internal one
instead, without any JSON or startup overhead:
http://www.mediawiki.org/wiki/API:Calling_internally
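A minimal sketch of such a call (per that page; treat the exact accessor
names as approximate):

    <?php
    // Run action=query internally - no HTTP round-trip, no JSON encoding.
    $api = new ApiMain( new FauxRequest( array(
        'action' => 'query',
        'titles' => 'Main Page',
        'prop'   => 'info',
    ) ) );
    $api->execute();
    $data = $api->getResult()->getData(); // plain PHP array, ready to use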
On Wed, Feb 6, 2013 at 2:08 PM, Gabriel Wicke
It seems Opus is going full speed ahead with both Mozilla and Chrome
already supporting it in beta. Any plans for that?
http://www.infoworld.com/d/applications/webrtc-creates-interop-between-chrome-and-firefox-212230
.tsv - tab separated values?
On Thu, Jan 31, 2013 at 7:23 PM, Chad innocentkil...@gmail.com wrote:
On Thu, Jan 31, 2013 at 7:12 PM, Andrew Otto o...@wikimedia.org wrote:
Ah, no I mean change the future ones back to their original names. We'd
leave the ones that are being generated as
After talking to some of you at the office on Friday (was great, thanks!),
and reading some new suggestions (thanks Brion), it seems the API has gained
several distinct, slightly overlapping areas of operation.
* SQL-like: This is what the API is best at - providing complex structured
access to the
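(A typical query of this kind, e.g.

    api.php?action=query&list=categorymembers&cmtitle=Category:Physics&cmlimit=max&cmprop=title|timestamp

is essentially a filtered SELECT over the categorylinks table.)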
Localization in v2 - all errors AND warnings are localized in the default
language unless lang= is given, in which case you can get a parameter array
or a non-default language. All standard translation magic
(plural/gender/etc) will be supported. Warnings will always include a
warning code.
I feel we should keep in line with the function calls. There are already
way too many spaces as it is, and there is never a problem telling control
structures apart - they are always followed by an indented block and most
editors highlight them. I think if(), while(), foreach(), for() should all
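That is, a quick illustration of the spacing being argued for (made-up code):

    if( $user->isAllowed( 'edit' ) ) {
        foreach( $pages as $page ) {
            doSomething( $page );
        }
    }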