Re: [Wikitech-l] MediaWiki logo

2020-10-26 Thread Denny Vrandečić
Since thanking yourself would be weird, allow me to express my thanks for
all your work and shepherding during the logo process (and well beyond
that, but let's keep the focus here).

Amir, thank you!



On Mon, Oct 26, 2020 at 8:05 AM Amir Sarabadani  wrote:

> Hello people!
> So the voting period is done now and the second proposal is the clear
> winner.
>
> https://www.mediawiki.org/wiki/Project:Proposal_for_changing_logo_of_MediaWiki,_2020/Round_2
>
> 163 people participated, and these are the top-ranked proposals using the Schulze method:
> * 2: Gradient (translucent - CamelCase)
> * 8: Yellow (solid - CamelCase)
> * 6: Yellow (translucent - CamelCase)
>
> Comparing 2 and 8, 92 people preferred 2 over 8, while 71 people preferred
> 8 over 2.
>
> Now we are in the legal clearance period (which will hopefully take about a
> month). Once it's over, we will start implementing the change; if 2
> doesn't get cleared, 8 will be implemented, and so on.
>
> Thank you to everyone who participated (except myself; thanking myself
> would be weird).
> --
> Amir (he/him)
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Upgrading mailman (the software behind mailing lists)

2020-08-08 Thread Denny Vrandečić
Thank you so much!

On Sat, Aug 8, 2020, 13:56 Amir Sarabadani  wrote:

> Hey,
> Mailman, the software that powers our mailing lists, is extremely old; just by
> looking at https://lists.wikimedia.org/ you can guess how old it is.
>
> I would really like to upgrade it to Mailman 3, which has these benefits:
> * Much better security (including but not limited to
> https://phabricator.wikimedia.org/T181803)
> * Much better UI and UX
> * Much easier moderation and maintenance of mailing lists
> * Ability to send mail from the web
> * Ability to search the archives
> * Ability to like/dislike an email
> * List admins will be able to delete emails, merge threads, and much more
> * Admins won't need to store passwords for each mailing list separately;
> they just log in with their own account everywhere
> * The current Mailman stores everything as files (even mailing list
> settings); Mailman 3 uses a proper database for everything, which means
> proper backup and recovery, high availability, and much more
>
> I have already put up a test setup and humbly ask you (especially list
> admins) to test it (and its admin interface); if you want to become a list
> admin, drop me a message. Keep in mind that we don't maintain the software,
> so the most I can do is change configuration; I can't handle feature
> requests or fix bugs (you are more than welcome to file them against
> upstream though).
>
> Here's the test setup:
> * https://lists.wmcloud.org
>
> Here's a mailing list:
> * https://lists.wmcloud.org/postorius/lists/test.lists.wmcloud.org/
>
> Here's an archive post:
> *
>
> https://lists.wmcloud.org/hyperkitty/list/t...@lists.wmcloud.org/thread/RMQPKSS4ID3WALFXAF636J2NGBVCN3UA/
>
> Issues that I haven't figured out yet:
> * This system has profile picture support, but it's Gravatar only, which we
> can't enable due to our privacy policy; when you disable it, it shows
> empty squares and looks bad. I reported this upstream [1], but we could also
> run a Gravatar proxy in production. In the worst-case scenario we can just
> inject "$('.gravatar').remove();" to hide them. Feel free to chime in on
> the Phabricator ticket in this regard:
> https://phabricator.wikimedia.org/T256541
>
> * The upgrade will break archive links; keeping them working indefinitely is
> not trivial (you need to write Apache rewrite rules; see the rough sketch
> after this list, and you can read about it in
> https://docs.mailman3.org/en/latest/migration.html#other-considerations)
>
> * Mailman allows us to upgrade mailing list by mailing list, which is good,
> but we haven't found a way to keep the old version and the new one in sync
> (archives, etc.). Maybe we migrate a mailing list and the archives on the
> old version simply stop getting updated. Would that work for you? Feel free
> to chime in: https://phabricator.wikimedia.org/T256539
>
> * We don't know what the size of the database will be after the upgrade,
> because the two versions are so inherently different. One idea is to
> check the size of a fully public mailing list, move its files to the
> test setup, upgrade it to the new version, check how the size changes, and
> then extrapolate the size of the final database. The discussion around the
> database is happening in https://phabricator.wikimedia.org/T256538
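>
> To illustrate the archive-link issue above, the redirects would be roughly of
> this shape (a hand-written sketch only, not rules we have actually written or
> tested; the real mapping depends on how each list and its archive are
> migrated):
>
>   # Illustrative only: send requests for a list's old Pipermail archive to
>   # the corresponding HyperKitty archive for that list.
>   RewriteEngine On
>   RewriteRule ^/pipermail/([^/]+)/?$ /hyperkitty/list/$1@lists.wikimedia.org/ [R=301,L]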
>
> If you want to help with the upgrade (like puppetizing its configuration,
> etc.), just let me know and I'll add you to the project! It uses a stand-alone
> puppetmaster, so you don't need to get your puppet patches merged to see their
> effects.
>
> The main ticket about the upgrade:
> https://phabricator.wikimedia.org/T52864
>
> [1] https://gitlab.com/mailman/hyperkitty/-/issues/303#note_365162201
>
> Hope that'll be useful for you :)
> --
> Amir (he/him)
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] A difficult goodbye

2019-01-15 Thread Denny Vrandečić
Thanks Victoria! I really enjoyed our interaction, and thank you for what
you did for the movement. I wish you all the best on your future path, and
hope you don't become a stranger!

Best wishes,
Denny


On Tue, Jan 15, 2019 at 10:25 AM Amir Sarabadani 
wrote:

> One thing I want to point out is that when she joined, we had only 16 engineers
> in Site Reliability Engineering [1], but now it's 26 [2]. Running the fifth
> most visited website, and the greatest thing you can find on the internet,
> with only 16 engineers is CRAZY. Such a much-needed expansion shows its
> effect on my everyday life as a developer.
>
> Kudos Victoria! Thank you for all of your work for the movement, even
> though some of it might not be easily seen or understood by everyone.
>
> [1]:
>
> https://foundation.wikimedia.org/w/index.php?title=Template:Staff_and_contractors=107326
> [2] https://wikimediafoundation.org/role/staff-contractors/
>
> Best
>
> On Tue, Jan 15, 2019 at 6:21 PM Nuria Ruiz  wrote:
>
> > Many thanks for your work in these two years, Victoria. You leave the
> > Technology team in much better shape than you found it.
> >
> > For those of you not in the know, I think it is worth mentioning that in her
> > tenure here Victoria has created the Technical Engagement team to better
> > serve technical contributors (already mentioned on this thread), hired a
> > director of security who has, in turn, built a solid security team, given
> > weight to MediaWiki as a platform, and created a team that will steward
> > MediaWiki's development. She has also overseen much of the technical work
> > that has gone into our litigation against the NSA, has instituted a process
> > to better handle promotions, and has hired support staff that has made the
> > work of all of us in the department much (much!) easier.
> >
> > On Mon, Jan 14, 2019 at 12:02 PM Nuria Ruiz  wrote:
> >
> > > Many thanks for your work in these two years, Victoria. You leave the
> > > Technology team in much better shape than you found it.
> > >
> > > For those of you not in the know, I think it is worth mentioning that in her
> > > tenure here Victoria has created the Technical Engagement team to better
> > > serve technical contributors (already mentioned on this thread), hired a
> > > director of security who has, in turn, built a solid security team, given
> > > weight to MediaWiki as a platform, and created a team that will steward
> > > MediaWiki's development. She has also overseen much of the technical work
> > > that has gone into our litigation against the NSA, has instituted a process
> > > to better handle promotions, and has hired support staff that has made the
> > > work of all of us in the department much (much!) easier.
> > >
> > >
> > >
> > > On Mon, Jan 14, 2019 at 10:17 AM Chico Venancio <
> > chicocvenan...@gmail.com>
> > > wrote:
> > >
> > >> Victoria,
> > >>
> > >> I echo Isarra's comment here; it is the Wikimedia community's loss to see
> > >> you depart.
> > >> In particular, I was able to see your leadership in the formation of the
> > >> Technical Engagement team, consolidating initiatives around volunteer
> > >> developers and leveraging the work for the communities.
> > >>
> > >> I wish you the best on this new challenge.
> > >> Chico Venancio
> > >>
> > >>
> > >> On Mon, Jan 14, 2019 at 15:06, Isarra Yos 
> > >> wrote:
> > >>
> > >> > Sad to see you go - from what I saw as a volunteer, things only
> > improved
> > >> > under your leadership here. We were lucky to have you.
> > >> >
> > >> > All the best.
> > >> >
> > >> > -I
> > >> >
> > >> > On 11/01/2019 00:29, Victoria Coleman wrote:
> > >> > > Dear all,
> > >> > >
> > >> > > I have some difficult news to share.
> > >> > >
> > >> > > A few months ago and completely out of the blue, an opportunity came up
> > >> > > for me to exercise the full spectrum of my skills as the CEO of an
> > >> > > early-stage, mission-oriented startup. It has been a mighty struggle
> > >> > > between my commitment to my team, the Foundation, and the movement, and
> > >> > > this opportunity to bring all my skills to bear and build something from
> > >> > > the ground up to do good in the world. I will not lie to you - it has not
> > >> > > been easy. But ultimately I decided that I have to give it a go and I
> > >> > > accepted the offer. I plan on starting there on Feb 4th, so my last day at
> > >> > > the Foundation will be Feb 1st.
> > >> > >
> > >> > > These past two years have been amongst the most enjoyable in my
> > >> > > professional career. I’ve enjoyed getting to know the movement and what
> > >> > > fuels it, and I’ve been incredibly privileged to work with a Tech team
> > >> > > like no other. We have together strengthened the technical infrastructure
> > >> > > of the movement, and while much work remains to be done, we have a much
> > >> > > stronger team in place to take the mission to the next level. And I have
> > 

Re: [Wikitech-l] [Wikidata] [Wikipedia-l] Fwd: [Wikimedia-l] Wikipedia in an abstract language

2019-01-15 Thread Denny Vrandečić
Cool, thanks! I read this a while ago, rereading again.

On Tue, Jan 15, 2019 at 3:28 AM Sebastian Hellmann <
hellm...@informatik.uni-leipzig.de> wrote:

> Hi all,
>
> let me send you a paper from 2013, which might either help directly or at
> least give you some ideas...
>
> A lemon lexicon for DBpedia, Christina Unger, John McCrae, Sebastian
> Walter, Sara Winter, Philipp Cimiano, 2013, Proceedings of 1st
> International Workshop on NLP and DBpedia, co-located with the 12th
> International Semantic Web Conference (ISWC 2013), October 21-25, Sydney,
> Australia
>
> https://github.com/ag-sc/lemon.dbpedia
>
> https://pdfs.semanticscholar.org/638e/b4959db792c94411339439013eef536fb052.pdf
>
> Since the mappings from DBpedia to Wikidata properties are here:
> http://mappings.dbpedia.org/index.php?title=Special:AllPages=202
> e.g. http://mappings.dbpedia.org/index.php/OntologyProperty:BirthDate
>
> You could directly use the DBpedia-lemon lexicalisation for Wikidata.
>
> The mappings can be downloaded with
>
> git clone https://github.com/dbpedia/extraction-framework ; cd core ;
> ../run download-mappings
>
>
> All the best,
>
> Sebastian
>
>
>
>
> On 14.01.19 18:34, Denny Vrandečić wrote:
>
> Felipe,
>
> thanks for the kind words.
>
> There are a few research projects that use Wikidata to generate parts of
> Wikipedia articles - see for example https://arxiv.org/abs/1702.06235, which
> produces results almost as good as human-written text and beats templates by
> far, but only for the first sentence of biographies.
>
> Lucie Kaffee also has quite a body of research on that topic, and has
> worked very successfully and closely with some Wikipedia communities on
> these questions. Here's her bibliography:
> https://scholar.google.com/citations?user=xiuGTq0J=de
>
> Another project of hers is currently under review for a grant:
> https://meta.wikimedia.org/wiki/Grants:Project/Scribe:_Supporting_Under-resourced_Wikipedia_Editors_in_Creating_New_Articles
> - I would suggest taking a look and, if you are so inclined, expressing
> support. It is totally worth it!
>
> My opinion is that these projects are great for starters, and should be
> done (low-hanging fruit and all that), but won't get much further, at least
> for a while, mostly because Wikidata rarely offers more than a skeleton of
> content. A decent Wikipedia article will include much, much more content
> than what is represented in Wikidata. And if you only use that for input,
> you're limiting yourself too much.
>
> Here's a different approach based on summarization over input sources:
> https://www.wired.com/story/using-artificial-intelligence-to-fix-wikipedias-gender-problem/
> - this is a more promising approach for the short to mid term.
>
> I still maintain that the Abstract Wikipedia approach has certain
> advantages over both learned approaches, and is most aligned with Lucie's
> work. The machine-learned approaches always fall short on the dimension of
> editability, due to the black-box nature of their solutions.
>
> Furthermore, I agree with Jeblad.
>
> That leaves the question: why is there not more discussion? Maybe because
> there is nothing substantial to discuss yet :) The two white papers are
> rather high level and the idea is not concrete enough yet, so I wouldn't
> expect too much discussion going on on-wiki yet. It was similar with
> Wikidata - the number of people who discussed Wikidata at this level of
> maturity was tiny; it increased considerably once an actual design plan was
> suggested, but still remained small, and then exploded once the system was
> deployed. I would be surprised and delighted if we managed to avoid this
> pattern this time, but I can't do more than publicly present the idea,
> announce plans once they are there, and hope for a timely discussion :)
>
> Cheers,
> Denny
>
>
> On Mon, Jan 14, 2019 at 2:54 AM John Erling Blad  wrote:
>
>> An additional note; what Wikipedia urgently needs is a way to create
>> and reuse canned text (aka "templates"), and a way to adapt that text
>> to data from Wikidata. That is mostly just inflection rules, but in
>> some cases it involves grammar rules. To create larger pieces of text
>> is much harder, especially if the text is supposed to be readable.
>> Jumbling sentences together as is commonly done by various botscripts
>> does not work very well, or rather, it does not work at all.
>>
>> On Mon, Jan 14, 2019 at 11:44 AM John Erling Blad 
>> wrote:
>> >
> > Using an abstract language as a basis for translations has been
> > tried before, and is almost as hard as translating between two common
> > languages.

Re: [Wikitech-l] [Wikidata] [Wikipedia-l] Fwd: [Wikimedia-l] Wikipedia in an abstract language

2019-01-14 Thread Denny Vrandečić
Felipe,

thanks for the kind words.

There are a few research projects that use Wikidata to generate parts of
Wikipedia articles - see for example https://arxiv.org/abs/1702.06235, which
produces results almost as good as human-written text and beats templates by
far, but only for the first sentence of biographies.

Lucie Kaffee also has quite a body of research on that topic, and has
worked very successfully and closely with some Wikipedia communities on
these questions. Here's her bibliography:
https://scholar.google.com/citations?user=xiuGTq0J=de

Another project of hers is currently under review for a grant:
https://meta.wikimedia.org/wiki/Grants:Project/Scribe:_Supporting_Under-resourced_Wikipedia_Editors_in_Creating_New_Articles
- I would suggest taking a look and, if you are so inclined, expressing
support. It is totally worth it!

My opinion is that these projects are great for starters, and should be
done (low-hanging fruit and all that), but won't get much further, at least
for a while, mostly because Wikidata rarely offers more than a skeleton of
content. A decent Wikipedia article will include much, much more content
than what is represented in Wikidata. And if you only use that for input,
you're limiting yourself too much.

Here's a different approach based on summarization over input sources:
https://www.wired.com/story/using-artificial-intelligence-to-fix-wikipedias-gender-problem/
- this is a more promising approach for the short to mid term.

I still maintain that the Abstract Wikipedia approach has certain
advantages over both learned approaches, and is most aligned with Lucie's
work. The machine-learned approaches always fall short on the dimension of
editability, due to the black-box nature of their solutions.

Furthermore, I agree with Jeblad.

That leaves the question: why is there not more discussion? Maybe because
there is nothing substantial to discuss yet :) The two white papers are
rather high level and the idea is not concrete enough yet, so I wouldn't
expect too much discussion going on on-wiki yet. It was similar with
Wikidata - the number of people who discussed Wikidata at this level of
maturity was tiny; it increased considerably once an actual design plan was
suggested, but still remained small, and then exploded once the system was
deployed. I would be surprised and delighted if we managed to avoid this
pattern this time, but I can't do more than publicly present the idea,
announce plans once they are there, and hope for a timely discussion :)

Cheers,
Denny


On Mon, Jan 14, 2019 at 2:54 AM John Erling Blad  wrote:

> An additional note; what Wikipedia urgently needs is a way to create
> and reuse canned text (aka "templates"), and a way to adapt that text
> to data from Wikidata. That is mostly just inflection rules, but in
> some cases it involves grammar rules. To create larger pieces of text
> is much harder, especially if the text is supposed to be readable.
> Jumbling sentences together as is commonly done by various botscripts
> does not work very well, or rather, it does not work at all.
>
> On Mon, Jan 14, 2019 at 11:44 AM John Erling Blad 
> wrote:
> >
> > Using an abstract language as a basis for translations has been
> > tried before, and is almost as hard as translating between two common
> > languages.
> >
> > There are two really hard problems: the implied references and
> > the cultural context. An artificial language can get rid of the
> > implied references, but it tends to create very weird and unnatural
> > expressions. If the cultural context is removed, it can be
> > extremely hard to put back in, and without any cultural context it
> > can be hard to explain anything.
> >
> > But yes, you can make an abstract language, but it won't give you any
> > high-quality prose.
> >
> > On Mon, Jan 14, 2019 at 8:09 AM Felipe Schenone 
> wrote:
> > >
> > > This is quite an awesome idea. But thinking about it, wouldn't it be
> > > possible to use structured data in Wikidata to generate articles? Can't we
> > > skip the need to learn an abstract language by using Wikidata?
> > >
> > > Also, is there discussion about this idea anywhere in the Wikimedia
> wikis? I haven't found any...
> > >
> > > On Sat, Sep 29, 2018 at 3:44 PM Pine W  wrote:
> > >>
> > >> Forwarding because this (ambitious!) proposal may be of interest to
> people
> > >> on other lists. I'm not endorsing the proposal at this time, but I'm
> > >> curious about it.
> > >>
> > >> Pine
> > >> ( https://meta.wikimedia.org/wiki/User:Pine )
> > >>
> > >>
> > >> -- Forwarded message -
> > >> From: Denny Vrandečić 
> > >> Date: Sat, Sep 29, 2018 at 6:32 

Re: [Wikitech-l] Security question re password resets and Spectre

2018-01-04 Thread Denny Vrandečić
Ah, that sounds good. I was thinking of a scenario where someone runs code
in, say, Labs, and gains access to memory while that machine generates my
temporary code to send to me, and thus gains access to that code.

Or, alternatively, someone just attacks my browser through a compromised site
running a JS exploit and gains access to anything in my memory. But
that's on my side to fix (or, rather, on the browser developers' side).

One way or the other, I have set up 2FA for now.

Use more lynx!



On Thu, Jan 4, 2018 at 10:18 AM Cyken Zeraux  wrote:

> Spectre can be exploited using just JavaScript.
>
>
>
> https://blog.mozilla.org/security/2018/01/03/mitigations-landing-new-class-timing-attack/
>
> Browsers are making changes to mitigate this.
>
>
> http://www.tomshardware.com/news/meltdown-spectre-exploit-browser-javascript,36221.html
>
> The actual extent of the attacks that are realistically possible in this
> scenario, I do not know. But as stated in the article, Google suggests:
> "Where possible, prevent cookies from entering the renderer process' memory
> by using the SameSite and HTTPOnly cookie attributes, and by avoiding
> reading from document.cookie."
>
> I would take that to mean that cookies could be accessed, at the least.
>
> On Thu, Jan 4, 2018 at 12:16 PM, Stas Malyshev 
> wrote:
>
> > Hi!
> >
> > > So far so good. What I am wondering is whether that password reset
> trial
> > is
> > > actually even more dangerous now given Spectre / Meltdown?
> >
> > I think for those you need local code execution access? In which case,
> > if somebody gained one on MW servers, they could just change your
> > password I think. Spectre/Meltdown from what I read are local privilege
> > escalation attacks (local user -> root or local user -> another local
> > user) but I haven't heard anything about crossing the server access
> > barrier.
> >
> > > (I probably should set up 2FA right now. Have been too lazy so far)
> >
> > Might be a good idea anyway :)
> >
> > --
> > Stas Malyshev
> > smalys...@wikimedia.org
> >
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> >
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Security question re password resets and Spectre

2018-01-04 Thread Denny Vrandečić
I often get emails that someone is trying to get into my accounts. I guess
these are just some trolls trying to log in to my Wikipedia account. So
far, these attempts have been unsuccessful.

Now I got an email that someone asked for a temporary password for my
account.

So far so good. What I am wondering is whether that password reset attempt is
actually even more dangerous now, given Spectre / Meltdown?

Thoughts?

(I probably should set up 2FA right now. Have been too lazy so far)

Happy new year,
Denny
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Turkey ban

2017-04-29 Thread Denny Vrandečić
Surely it should be possible to have more resilient access to Wikipedia
content in the app than most vanilla browsers provide?

On Sat, Apr 29, 2017, 13:40 Florian Schmidt <
florian.schmidt.wel...@t-online.de> wrote:

> Because the app uses the same base URL for requests to Wikipedia as
> other access methods (e.g. internet browsers), a block of Wikipedia would
> apply to the app, too. Sorry, but there's no way to work around this if a
> block is implemented to block all requests to a specific domain. One
> workaround on the client side would be to use a VPN, so that the request
> is made from another origin and the answer forwarded to your
> phone/computer. However, I'm not sure if this is allowed in any way.
>
> Best,
> Florian
>
> -Original Message-
> From: Wikitech-l [mailto:wikitech-l-boun...@lists.wikimedia.org] On
> behalf of Denny Vrandecic
> Sent: Saturday, April 29, 2017 21:16
> To: Wikimedia developers 
> Subject: [Wikitech-l] Turkey ban
>
> Does it also affect the app?
>
> If so, why, and can we circumvent that?
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Turkey ban

2017-04-29 Thread Denny Vrandečić
Does it also affect the app?

If so, why, and can we circumvent that?
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] "must be of type int, int given"

2017-04-18 Thread Denny Vrandečić
I tried to change the int to a float, easy enough, and the error went away.

Just to be replaced with

Exception handler threw an object exception: TypeError: Argument 2 passed
to stream_set_blocking() must be an instance of int, bool given in
/vagrant/mediawiki/includes/libs/redis/RedisConnectionPool.php:233

And I can't find a bool being given here. For fun, I cast $port to an int,
but that didn't change anything.

Grmpf.

Thanks for the pointer though.

On Mon, Apr 17, 2017 at 7:49 PM Chad <innocentkil...@gmail.com> wrote:

> On Mon, Apr 17, 2017 at 3:07 PM Denny Vrandečić <vrande...@gmail.com>
> wrote:
>
> > Ah, that's a good point! Sorry, stupid error.
> >
> > I think I did it now - adding to
> > mediawiki-vagrant/puppet/hieradata/common.yaml (I hope that is the right
> > approach).
> >
> > Now I get the following error, which seems unrelated to my extension. Hm.
> >
> > Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #033[0m#033[22;31m[Mon Apr
> > 17 21:48:57 2017] [hphp] [29575:7ff81f7ff700:1:02] [] Exception
> handler
> > threw an object exception: TypeError: Argument 5 passed to pfsockopen()
> > must be an instance of float, int given in
> > /vagrant/mediawiki/includes/libs/redis/RedisConnectionPool.php:233
> >
> >
> That almost sounds like we should be casting the input to
> RedisConnectionPool from an int to a float. If it expects floats, it's easy
> enough to provide one... we could just check with ctype_digit() so we can
> handle ints and strings that look like numbers.
>
> And we should probably throw an exception if it's not numeric; it's probably
> bogus config or a bug if we're looking at non-numeric input.
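>
> A minimal sketch of that kind of normalization (illustrative only, not the
> actual RedisConnectionPool code; the function name is made up):
>
>   // Normalize a timeout value to the float that pfsockopen() expects.
>   // Accepts floats, ints, and digit-only strings; rejects everything else.
>   function normalizeTimeout( $timeout ) {
>       if ( is_float( $timeout ) ) {
>           return $timeout;
>       }
>       if ( is_int( $timeout ) || ctype_digit( (string)$timeout ) ) {
>           return (float)$timeout;
>       }
>       throw new InvalidArgumentException( "Non-numeric timeout: $timeout" );
>   }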
>
> -Chad
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] "must be of type int, int given"

2017-04-18 Thread Denny Vrandečić
Indeed, it seems you're right - if I namespace the thing, the compiler
actually tells me that the namespaced int is not the same as int. So yeah, it
probably assumes a class type named int here, which has no relation to the
built-in scalar int.

D'oh.

I didn't realize until now that one wasn't allowed to use type hints with
scalar types (int, string, bool, etc.) until PHP 7. This explains a lot.
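
To illustrate (a sketch, not MediaWiki code): the snippet from this thread is
valid PHP 7, but on HHVM without its PHP 7 compatibility INI settings (see the
docs.hhvm.com link quoted elsewhere in this thread) the "int" is resolved as a
class name rather than the built-in scalar type, which is what produces the
confusing fatal. Plain PHP 5 has no scalar type declarations at all.

  <?php
  // Behaves as expected on PHP >= 7.0, and on HHVM with PHP 7 mode enabled.
  function f(): int {
      return 1;
  }

  var_dump( f() ); // int(1)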

On Mon, Apr 17, 2017 at 9:33 PM Stas Malyshev 
wrote:

> Hi!
>
> > So, if I start a fresh MediaWiki Vagrant installation, and the vagrant
> ssh
> > into the virtual machine, the fastest way to reproduce the issue is to
> > start hhvmsh and type
> >
> > function f():int { return 1; };
>
> Suspicion: HHVM thinks "int" is a class name, not primitive type name.
> Not sure why though...
>
> --
> Stas Malyshev
> smalys...@wikimedia.org
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] "must be of type int, int given"

2017-04-18 Thread Denny Vrandečić
That would be neat!

On Mon, Apr 17, 2017 at 7:51 PM Chad <innocentkil...@gmail.com> wrote:

> On Mon, Apr 17, 2017 at 2:07 PM Gergo Tisza <gti...@wikimedia.org> wrote:
>
> > On Mon, Apr 17, 2017 at 8:35 PM, Denny Vrandečić <vrande...@gmail.com>
> > wrote:
> >
> > > Value returned from function f() must be of type int, int given
> >
> >
> > Have you enabled PHP7 mode
> > <https://docs.hhvm.com/hhvm/configuration/INI-settings#php-7-settings>?
> >
> >
> I'm curious if we should enable this in Vagrant and our Jenkins testing so
> we can avoid weird failures that spawned this thread.
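>
> For reference, a guess at what enabling it would look like in the HHVM
> configuration (treat this as a sketch; the INI-settings page linked above is
> the authoritative list of php7 flags):
>
>   ; Turn on HHVM's PHP 7 compatibility behaviors, including scalar type hints.
>   hhvm.php7.all = 1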
>
> -Chad
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] "must be of type int, int given"

2017-04-17 Thread Denny Vrandečić
Ah, that's a good point! Sorry, stupid error.

I think I did it now - adding to
mediawiki-vagrant/puppet/hieradata/common.yaml (I hope that is the right
approach).

Now I get the following error, which seems unrelated to my extension. Hm.

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #033[0m#033[22;31m[Mon Apr
17 21:48:57 2017] [hphp] [29575:7ff81f7ff700:1:02] [] Exception handler
threw an object exception: TypeError: Argument 5 passed to pfsockopen()
must be an instance of float, int given in
/vagrant/mediawiki/includes/libs/redis/RedisConnectionPool.php:233

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: Stack trace:

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #0
/vagrant/mediawiki/includes/libs/redis/RedisConnectionPool.php(233):
Redis->pconnect()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #1
/vagrant/mediawiki/includes/libs/objectcache/RedisBagOStuff.php(354):
RedisConnectionPool->getConnection()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #2
/vagrant/mediawiki/includes/libs/objectcache/RedisBagOStuff.php(151):
RedisBagOStuff->getConnection()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #3
/vagrant/mediawiki/includes/libs/objectcache/WANObjectCache.php(304):
RedisBagOStuff->getMulti()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #4
/vagrant/mediawiki/includes/libs/objectcache/WANObjectCache.php(251):
WANObjectCache->getMulti()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #5
/vagrant/mediawiki/includes/libs/objectcache/WANObjectCache.php(948):
WANObjectCache->get()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #6
/vagrant/mediawiki/includes/libs/objectcache/WANObjectCache.php(895):
WANObjectCache->doGetWithSetCallback()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #7
/vagrant/mediawiki/includes/user/User.php(515):
WANObjectCache->getWithSetCallback()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #8
/vagrant/mediawiki/includes/user/User.php(445): User->loadFromCache()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #9
/vagrant/mediawiki/includes/user/User.php(409): User->loadFromId()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #10
/vagrant/mediawiki/includes/session/UserInfo.php(88): User->load()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #11
/vagrant/mediawiki/includes/session/CookieSessionProvider.php(119):
MediaWiki\Session\UserInfo::newFromId()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #12
/vagrant/mediawiki/includes/session/SessionManager.php(487):
MediaWiki\Session\CookieSessionProvider->provideSessionInfo()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #13
/vagrant/mediawiki/includes/session/SessionManager.php(190):
MediaWiki\Session\SessionManager->getSessionInfoForRequest()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #14
/vagrant/mediawiki/includes/WebRequest.php(735):
MediaWiki\Session\SessionManager->getSessionForRequest()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #15
/vagrant/mediawiki/includes/user/User.php(1143): WebRequest->getSession()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #16
/vagrant/mediawiki/includes/user/User.php(384): User->loadDefaults()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #17
/vagrant/mediawiki/includes/user/User.php(5225): User->load()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #18
/vagrant/mediawiki/includes/user/User.php(2847): User->loadOptions()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #19
/vagrant/mediawiki/includes/context/RequestContext.php(364):
User->getOption()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #20
/vagrant/mediawiki/includes/Message.php(380): RequestContext->getLanguage()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #21
/vagrant/mediawiki/includes/Message.php(1257): Message->getLanguage()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #22
/vagrant/mediawiki/includes/Message.php(842): Message->fetchMessage()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #23
/vagrant/mediawiki/includes/Message.php(934): Message->toString()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #24
/vagrant/mediawiki/includes/exception/MWExceptionRenderer.php(244):
Message->text()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #25
/vagrant/mediawiki/includes/exception/MWExceptionRenderer.php(179):
MWExceptionRenderer::msg()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #26
/vagrant/mediawiki/includes/exception/MWExceptionRenderer.php(50):
MWExceptionRenderer::reportHTML()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #27
/vagrant/mediawiki/includes/exception/MWExceptionHandler.php(74):
MWExceptionRenderer::output()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #28
/vagrant/mediawiki/includes/exception/MWExceptionHandler.php(140):
MWExceptionHandler::report()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #29 ():
MWExceptionHandler::handleException()

Apr 17 21:48:57 mediawiki-vagrant hhvm[29575]: #30 {main}

On Mon, Apr 17, 2017 at 2:07 PM Gergo Tisza <gti...@wikimedia.org> wrote:


Re: [Wikitech-l] "must be of type int, int given"

2017-04-17 Thread Denny Vrandečić
The same works with bool too. I am glad I don't have to write (bool)true :)

On Mon, Apr 17, 2017 at 1:19 PM Denny Vrandečić <vrande...@gmail.com> wrote:

> Hm. If I try it in https://3v4l.org/WOTg0 I actually get good behavior
> for HHVM 3.12.14 - so it might be something problematic with our deployed
> HHVM version?
>
>
> On Mon, Apr 17, 2017 at 1:16 PM Denny Vrandečić <vrande...@gmail.com>
> wrote:
>
>> Thanks for the suggestion, but I get exactly the same error message when
>> I try to typecast the 1 to int first.
>>
>> ‪On Mon, Apr 17, 2017 at 11:56 AM ‫יגאל חיטרון‬‎ <khit...@gmail.com>
>> wrote:‬
>>
>>> I'm not so good at this, but try {return (int)1;}. At worst, I'm wrong.
>>> Igal
>>>
>>> On Apr 17, 2017 21:36, "Denny Vrandečić" <vrande...@gmail.com> wrote:
>>>
>>> > I'm running into a weird problem, which made me reset my whole vagrant.
>>> >
>>> > I assume this is not strictly a MediaWiki issue, but probably an HHVM
>>> > problem, but maybe someone can help me here.
>>> >
>>> > So, if I start a fresh MediaWiki Vagrant installation, and the vagrant
>>> ssh
>>> > into the virtual machine, the fastest way to reproduce the issue is to
>>> > start hhvmsh and type
>>> >
>>> > function f():int { return 1; };
>>> >
>>> > =f()
>>> >
>>> > I get:
>>> >
>>> > Hit fatal : Value returned from function f() must be of type int, int
>>> given
>>> >
>>> > #0 at [:1]
>>> >
>>> > #1 f(), called at [:1]
>>> >
>>> > #2 include(), called at [:1]
>>> >
>>> >
>>> > Any way to circumvent this (besides getting rid of the type hint,
>>> > obviously)? Update to a new HHVM version (According to
>>> Special:Version, the
>>> > HHVM version is 3.12.14 (srv))?
>>> > ___
>>> > Wikitech-l mailing list
>>> > Wikitech-l@lists.wikimedia.org
>>> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>>> ___
>>> Wikitech-l mailing list
>>> Wikitech-l@lists.wikimedia.org
>>> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>>
>>
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] "must be of type int, int given"

2017-04-17 Thread Denny Vrandečić
Hm. If I try it in https://3v4l.org/WOTg0 I actually get good behavior for
HHVM 3.12.14 - so it might be something problematic with our deployed HHVM
version?


On Mon, Apr 17, 2017 at 1:16 PM Denny Vrandečić <vrande...@gmail.com> wrote:

> Thanks for the suggestion, but I get exactly the same error message when I
> try to typecast the 1 to int first.
>
> ‪On Mon, Apr 17, 2017 at 11:56 AM ‫יגאל חיטרון‬‎ <khit...@gmail.com>
> wrote:‬
>
>> I'm not so good at this, but try {return (int)1;}. At worst, I'm wrong.
>> Igal
>>
>> On Apr 17, 2017 21:36, "Denny Vrandečić" <vrande...@gmail.com> wrote:
>>
>> > I'm running into a weird problem, which made me reset my whole vagrant.
>> >
>> > I assume this is not strictly a MediaWiki issue, but probably an HHVM
>> > problem, but maybe someone can help me here.
>> >
>> > So, if I start a fresh MediaWiki Vagrant installation, and the vagrant
>> ssh
>> > into the virtual machine, the fastest way to reproduce the issue is to
>> > start hhvmsh and type
>> >
>> > function f():int { return 1; };
>> >
>> > =f()
>> >
>> > I get:
>> >
>> > Hit fatal : Value returned from function f() must be of type int, int
>> given
>> >
>> > #0 at [:1]
>> >
>> > #1 f(), called at [:1]
>> >
>> > #2 include(), called at [:1]
>> >
>> >
>> > Any way to circumvent this (besides getting rid of the type hint,
>> > obviously)? Update to a new HHVM version (According to Special:Version,
>> the
>> > HHVM version is 3.12.14 (srv))?
>> > ___
>> > Wikitech-l mailing list
>> > Wikitech-l@lists.wikimedia.org
>> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>> ___
>> Wikitech-l mailing list
>> Wikitech-l@lists.wikimedia.org
>> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
>
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] "must be of type int, int given"

2017-04-17 Thread Denny Vrandečić
Thanks for the suggestion, but I get exactly the same error message when I
try to typecast the 1 to int first.

‪On Mon, Apr 17, 2017 at 11:56 AM ‫יגאל חיטרון‬‎ <khit...@gmail.com> wrote:‬

> I'm not so good at this, but try {return (int)1;}. At worst, I'm wrong.
> Igal
>
> On Apr 17, 2017 21:36, "Denny Vrandečić" <vrande...@gmail.com> wrote:
>
> > I'm running into a weird problem, which made me reset my whole vagrant.
> >
> > I assume this is not strictly a MediaWiki issue, but probably an HHVM
> > problem, but maybe someone can help me here.
> >
> > So, if I start a fresh MediaWiki Vagrant installation, and the vagrant
> ssh
> > into the virtual machine, the fastest way to reproduce the issue is to
> > start hhvmsh and type
> >
> > function f():int { return 1; };
> >
> > =f()
> >
> > I get:
> >
> > Hit fatal : Value returned from function f() must be of type int, int
> given
> >
> > #0 at [:1]
> >
> > #1 f(), called at [:1]
> >
> > #2 include(), called at [:1]
> >
> >
> > Any way to circumvent this (besides getting rid of the type hint,
> > obviously)? Update to a new HHVM version (According to Special:Version,
> the
> > HHVM version is 3.12.14 (srv))?
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] "must be of type int, int given"

2017-04-17 Thread Denny Vrandečić
I'm running into a weird problem, which made me reset my whole Vagrant setup.

I assume this is not strictly a MediaWiki issue, but probably an HHVM
problem, but maybe someone can help me here.

So, if I start a fresh MediaWiki Vagrant installation and then vagrant ssh
into the virtual machine, the fastest way to reproduce the issue is to
start hhvmsh and type:

function f():int { return 1; };

=f()

I get:

Hit fatal : Value returned from function f() must be of type int, int given

#0 at [:1]

#1 f(), called at [:1]

#2 include(), called at [:1]


Any way to circumvent this (besides getting rid of the type hint,
obviously)? Update to a new HHVM version (According to Special:Version, the
HHVM version is 3.12.14 (srv))?
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Editing JSON in MediaWiki

2017-04-10 Thread Denny Vrandečić
Good points. All too familiar from trying to parse and understand the ways
the templating system has been used :)

Unfortunately, I am unsure whether I can restrict the usage this tightly,
and whether I'd rather let the users shoot themselves in the foot or
restrict them from doing so... age old question!

On Mon, Apr 10, 2017 at 2:42 PM Bartosz Dziewoński <matma@gmail.com>
wrote:

> On 2017-04-10 06:17, Denny Vrandečić wrote:
> > On Sat, Apr 8, 2017 at 11:30 PM James Hare <jamesmh...@gmail.com> wrote:
> >
> >> Why, exactly, do you want a wikitext intermediary between your JSON and
> >> your HTML? The value of wikitext is that it’s a syntax that is easier to
> >> edit than HTML. But if it’s not the native format of your data, nor can
> >> browsers render it directly, what’s the point of having it?
> >>
> >
> > Ah, good question indeed. The reason is that users would be actually
> > putting fragments of wikitext into the JSON structure, and then the JSON
> > structure gets assembled into wikitext. Not only would I prefer to have
> the
> > users work with fragments of wikitext than fragments of HTML, but some
> > things are almost impossible with HTML - e.g. making internal links red
> or
> > blue depending on the existence of the article, etc.
>
> It would be more reliable to parse each fragment of wikitext separately,
> and then build the HTML out of them. If you try to make a big chunk of
> wikitext with user-supplied fragments, you'll soon run into two problems:
>
> * Users will accidentally put '' or something into one of the
> fields and completely mess up the rendering.
> * Users will intentionally put '[[Foo|' in one field and ']]' into
> another and then be mad at you when you change the structure in a minor
> way and their link no longer works.
>
> --
> Bartosz Dziewoński
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Editing JSON in MediaWiki

2017-04-10 Thread Denny Vrandečić
Uh, that sounds nifty. Thanks for that!

On Mon, Apr 10, 2017 at 12:15 PM C. Scott Ananian <canan...@wikimedia.org>
wrote:

> More related functionality: Parsoid supports JSON as a content type, and
> emits a special type of table (the same as the HTML table generated by the
> HTML handler for JSON in MediaWiki). You can edit this in VE, although without
> any special support, and Parsoid will serialize it back to JSON.
>
> This could be turned into a very pleasant type-aware editor for JSON in VE
> pretty easily.
>  --scott
>
> On Sun, Apr 9, 2017 at 2:23 AM, Denny Vrandečić <vrande...@gmail.com>
> wrote:
>
> > Here's my requirement:
> > - a wiki page is one JSON document
> > - when editing, the user edits the JSON directly
> > - when viewing, I have a viewer that turns the JSON into wikitext, and
> that
> > wikitext gets rendered as wikitext and turned into HTML by MediaWiki
> >
> > I have several options, including:
> >
> > 1) hook for a tag like , and write an extension that parses the
> > content between the tags and turns it into wikitext (not ideal, as I
> don't
> > use any of the existing support for JSON stuff, and also I could have
> > several such tags per page, which does not fit with my requirements)
> >
> > 2) I found the JsonConfig extension by yurik. This allows me to do almost
> > all of the things above - but it returns HTML directly, not wikitext. It
> > doesn't seem trivial to be able to return wikitext instead of HTML, but
> > hopefully I am wrong? Also, this ties in nicely with the Code Editor.
> >
> > 3) there is actually a JsonContentHandler in core. But looking through it
> > it seems that this suffers from the same limitations - I can return HTML,
> > but not wikitext.
> >
> > 3 seems to have the advantage of being more actively worked on than 2 (which
> > is not based on 3, probably because it is older than 3). So future goodies
> > like a JSON Schema validator will probably go to 3, but not to 2, so I
> > should probably go with 3.
> >
> > Writing this down, one solution could be to create the wikitext, and then
> > call the wikitext parser manually and have it create HTML?
> >
> > I have already developed the extension in 1, and then fully rewritten it
> in
> > 2.  Before I go and rewrite it again in 3, I wanted to ask whether I am
> > doing it right, or if should do it completely differently, and also if
> > there are examples of stuff developed in 3, i.e. of extensions or
> features
> > using the JsonContent class.
> >
> > Example:
> > I have a JSON document
> > { "username": "Denny" }
> > which gets turned into wikitext
> > ''Hello, [[User:Denny|Denny]]!''
> > which then gets turned into the right HTML and displayed to the user,
> e.g.
> > Hello, Denny!
> >
> > Cheers,
> > Denny
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
>
>
>
> --
> (http://cscott.net)
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Editing JSON in MediaWiki

2017-04-10 Thread Denny Vrandečić
Ah, just an internal data structure. In the end, the UI will be form-based
anyway. But in these forms, the user will be able to enter some wikitext
fragments. Yucky, I know.

I prefer JSON over XML only because it is the Zeitgeist, and I expect the
tool support for JSON to grow whereas I don't see similar activity around
XML in the MediaWiki development community.

Or, put differently, the same reason Wikibase is using JSON.

On Mon, Apr 10, 2017 at 11:06 AM Daniel Kinzler <daniel.kinz...@wikimedia.de>
wrote:

> On 10.04.2017 at 06:17, Denny Vrandečić wrote:
> > Ah, good question indeed. The reason is that users would be actually
> > putting fragments of wikitext into the JSON structure, and then the JSON
> > structure gets assembled into wikitext. Not only would I prefer to have
> the
> > users work with fragments of wikitext than fragments of HTML, but some
> > things are almost impossible with HTML - e.g. making internal links red
> or
> > blue depending on the existence of the article, etc.
>
> Why JSON? JSON is an absolute pain to edit by hand. HTML/XML, as annoying
> as it
> is, is still much better for that than JSON is. And Wikitext is designed
> to mix
> well with HTML.
>
> --
> Daniel Kinzler
> Principal Platform Engineer
>
> Wikimedia Deutschland
> Gesellschaft zur Förderung Freien Wissens e.V.
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Editing JSON in MediaWiki

2017-04-10 Thread Denny Vrandečić
Thanks Brad, that's perfect! I'll proceed as suggested.

Thanks list, that was really helpful and saved me plenty of trial and
errors!
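
For the archives, a minimal sketch of what "proceeding as suggested" might look
like (illustrative only and untested; the exact fillParserOutput() signature and
the way to obtain a Parser vary between MediaWiki versions, the class name is
made up, and the matching ContentHandler subclass is omitted):

  <?php
  // Content class whose native format is the JSON from the original post;
  // it renders by first building wikitext and then running the normal parser.
  class HelloJsonContent extends JsonContent {

      protected function fillParserOutput( Title $title, $revId,
          ParserOptions $options, $generateHtml, ParserOutput &$output
      ) {
          global $wgParser;

          // Turn the JSON document into wikitext...
          $data = $this->getData()->getValue();
          $user = isset( $data->username ) ? $data->username : 'unknown';
          $wikitext = "''Hello, [[User:$user|$user]]!''";

          // ...and let the wikitext parser produce the HTML, roughly the way
          // WikitextContent does.
          $output = $wgParser->parse( $wikitext, $title, $options, true, true, $revId );
      }
  }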

On Mon, Apr 10, 2017 at 7:07 AM Brad Jorsch (Anomie) <bjor...@wikimedia.org>
wrote:

> On Sun, Apr 9, 2017 at 11:38 AM, Daniel Kinzler <
> daniel.kinz...@wikimedia.de
> > wrote:
>
> > Generating wikitext from some other thing is what Scribunto does.
>
>
> Not really. What Scribunto does is let you run a program to generate
> wikitext.
>
> If you wanted to write code that took some JSON and turned it into wikitext
> without going through all the trouble of writing an extension and getting
> it deployed, you might write that code in Lua as a Scribunto module.
>
>
> On Mon, Apr 10, 2017 at 12:17 AM, Denny Vrandečić <vrande...@gmail.com>
> wrote:
>
> > On Sat, Apr 8, 2017 at 11:30 PM James Hare <jamesmh...@gmail.com> wrote:
> >
> > > Why, exactly, do you want a wikitext intermediary between your JSON and
> > > your HTML? The value of wikitext is that it’s a syntax that is easier
> to
> > > edit than HTML. But if it’s not the native format of your data, nor can
> > > browsers render it directly, what’s the point of having it?
> >
> > Ah, good question indeed. The reason is that users would be actually
> > putting fragments of wikitext into the JSON structure, and then the JSON
> > structure gets assembled into wikitext. Not only would I prefer to have
> the
> > users work with fragments of wikitext than fragments of HTML, but some
> > things are almost impossible with HTML - e.g. making internal links red
> or
> > blue depending on the existence of the article, etc.
> >
>
> What you probably want to do then is to extend JsonContent and
> JsonContentHandler. In the fillParserOutput() method, you'd convert the
> JSON to wikitext and then pass that wikitext to the Parser; for the latter
> step you could look at how WikitextContent does it.
>
> You might also look at implementing Content::getWikitextForTransclusion()
> to let people transclude the resulting wikitext.
>
>
> --
> Brad Jorsch (Anomie)
> Senior Software Engineer
> Wikimedia Foundation
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Editing JSON in MediaWiki

2017-04-09 Thread Denny Vrandečić
Thanks Tpt, that sounds relevant! I will take a look into the code. More
code examples are really useful to figure this all out :)

On Sun, Apr 9, 2017 at 9:53 AM Thomas PT <thoma...@hotmail.fr> wrote:

> Another maybe relevant example: ProofreadPage has a content model for
> proofreading pages with multiple areas of wikitext and uses some for
> transclusion and others for rendering:
>
> *
> https://github.com/wikimedia/mediawiki-extensions-ProofreadPage/blob/master/includes/page/PageContent.php
> *
> https://github.com/wikimedia/mediawiki-extensions-ProofreadPage/blob/master/includes/page/PageContentHandler.php
>
> Thomas
>
> > On Apr 9, 2017, at 18:44, Isarra Yos <zhoris...@gmail.com> wrote:
> >
> > Sounds like what they want is what collaborationlistcontent/handler
> does, specifically, at least for now.
> >
> > On 09/04/17 06:30, James Hare wrote:
> >> Why, exactly, do you want a wikitext intermediary between your JSON and
> >> your HTML? The value of wikitext is that it’s a syntax that is easier to
> >> edit than HTML. But if it’s not the native format of your data, nor can
> >> browsers render it directly, what’s the point of having it?
> >>
> >> The CollaborationKit extension [0] has two content models,
> >> CollaborationHubContent and CollaborationListContent, and they have two
> >> different strategies for parsing. CollaborationHubContent takes
> validated
> >> JSON and puts out raw HTML; this is the most straightforward to work
> with.
> >> CollaborationListContent instead puts out wikitext that is fed into the
> >> parser, since it is expected that lists can be transcluded onto other
> >> pages, and this means using wikitext (as transcluded HTML is not an
> >> option). However, this creates a lot of limitations, including parser
> >> restrictions that make sense in the context of arbitrary wikitext
> parsing
> >> but not when the markup is provided directly by an extension. The long
> term
> >> plan is for CollaborationListContent to put out HTML, since it’s more
> >> straightforward than using a wikitext intermediary that the user does
> not
> >> see anyway.
> >>
> >> [0] https://www.mediawiki.org/wiki/Extension:CollaborationKit
> >>
> >> On April 8, 2017 at 11:23:38 PM, Denny Vrandečić (vrande...@gmail.com)
> >> wrote:
> >>
> >> Here's my requirement:
> >> - a wiki page is one JSON document
> >> - when editing, the user edits the JSON directly
> >> - when viewing, I have a viewer that turns the JSON into wikitext, and
> that
> >> wikitext gets rendered as wikitext and turned into HTML by MediaWiki
> >>
> >> I have several options, including:
> >>
> >> 1) hook for a tag like , and write an extension that parses the
> >> content between the tags and turns it into wikitext (not ideal, as I
> don't
> >> use any of the existing support for JSON stuff, and also I could have
> >> several such tags per page, which does not fit with my requirements)
> >>
> >> 2) I found the JsonConfig extension by yurik. This allows me to do
> almost
> >> all of the things above - but it returns HTML directly, not wikitext. It
> >> doesn't seem trivial to be able to return wikitext instead of HTML, but
> >> hopefully I am wrong? Also, this ties in nicely with the Code Editor.
> >>
> >> 3) there is actually a JsonContentHandler in core. But looking through
> it
> >> it seems that this suffers from the same limitations - I can return
> HTML,
> >> but not wikitext.
> >>
> >> 3 seems to have the advantage of being more actively worked on than 2 (which
> >> is not based on 3, probably because it is older than 3). So future goodies
> >> like a JSON Schema validator will probably go to 3, but not to 2, so I
> >> should probably go with 3.
> >>
> >> Writing this down, one solution could be to create the wikitext, and
> then
> >> call the wikitext parser manually and have it create HTML?
> >>
> >> I have already developed the extension in 1, and then fully rewritten
> it in
> >> 2. Before I go and rewrite it again in 3, I wanted to ask whether I am
> >> doing it right, or if should do it completely differently, and also if
> >> there are examples of stuff developed in 3, i.e. of extensions or
> features
> >> using the JsonContent class.
> >>
> >> Example:
> >> I have a JSON document
> >> { "username": "Denny" }

Re: [Wikitech-l] Editing JSON in MediaWiki

2017-04-09 Thread Denny Vrandečić
On Sun, Apr 9, 2017 at 8:38 AM Daniel Kinzler <daniel.kinz...@wikimedia.de>
wrote:

> On 09.04.2017 at 08:23, Denny Vrandečić wrote:
> > Here's my requirement:
> > - a wiki page is one JSON document
> > - when editing, the user edits the JSON directly
> > - when viewing, I have a viewer that turns the JSON into wikitext, and
> that
> > wikitext gets rendered as wikitext and turned into HTML by MediaWiki
>
> Quick thoughts off the top of my head:
>
> Generating wikitext from some other thing is what Scribunto does. Instead of
> using the Lua handler, you would make a handler for JSON, with whatever rules
> you like for generating wikitext.
>

Thanks, I will take a look into how Scribunto does that.

>
> I have never looked at customizing Scribunto, but this seems the right
> place to plug this in.
>
> The alternative is to try to have some kind of "cascading" ContentHandler,
> that
> generates a WikiTextContent object first, and then turns that into HTML.
>
>
Not sure what you mean here - just create wikitext and then run it through
the parser?


>
> --
> Daniel Kinzler
> Senior Software Developer
>
> Wikimedia Deutschland
> Gesellschaft zur Förderung Freien Wissens e.V.
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Editing JSON in MediaWiki

2017-04-09 Thread Denny Vrandečić
On Sun, Apr 9, 2017 at 4:11 AM Gergo Tisza  wrote:

> You probably want to subclass JsonContentHandler and add wikitext transform
>

What does it mean to add wikitext transform?


> and whatever else you need. For schemas, have a look
> at JsonSchemaContentHandler in EventLogging.
>

Thanks, that looks very helpful! I am surprised this is not part of Core,
or a stand-alone component, but it is in EventLogging. Thanks!




> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Editing JSON in MediaWiki

2017-04-09 Thread Denny Vrandečić
On Sat, Apr 8, 2017 at 11:30 PM James Hare <jamesmh...@gmail.com> wrote:

> Why, exactly, do you want a wikitext intermediary between your JSON and
> your HTML? The value of wikitext is that it’s a syntax that is easier to
> edit than HTML. But if it’s not the native format of your data, nor can
> browsers render it directly, what’s the point of having it?
>

Ah, good question indeed. The reason is that users would be actually
putting fragments of wikitext into the JSON structure, and then the JSON
structure gets assembled into wikitext. Not only would I prefer to have the
users work with fragments of wikitext rather than fragments of HTML, but some
things are almost impossible with HTML - e.g. making internal links red or
blue depending on the existence of the article, etc.


>
> The CollaborationKit extension [0] has two content models,
> CollaborationHubContent and CollaborationListContent, and they have two
> different strategies for parsing. CollaborationHubContent takes validated
> JSON and puts out raw HTML; this is the most straightforward to work with.
> CollaborationListContent instead puts out wikitext that is fed into the
> parser, since it is expected that lists can be transcluded onto other
> pages, and this means using wikitext (as transcluded HTML is not an
> option). However, this creates a lot of limitations, including parser
> restrictions that make sense in the context of arbitrary wikitext parsing
> but not when the markup is provided directly by an extension. The long term
> plan is for CollaborationListContent to put out HTML, since it’s more
> straightforward than using a wikitext intermediary that the user does not
> see anyway.
>

Thanks, that is super helpful to know! And thanks for the rationale in
particular.


>
> [0] https://www.mediawiki.org/wiki/Extension:CollaborationKit
>
> On April 8, 2017 at 11:23:38 PM, Denny Vrandečić (vrande...@gmail.com)
> wrote:
>
> Here's my requirement:
> - a wiki page is one JSON document
> - when editing, the user edits the JSON directly
> - when viewing, I have a viewer that turns the JSON into wikitext, and that
> wikitext gets rendered as wikitext and turned into HTML by MediaWiki
>
> I have several options, including:
>
> 1) hook for a tag like , and write an extension that parses the
> content between the tags and turns it into wikitext (not ideal, as I don't
> use any of the existing support for JSON stuff, and also I could have
> several such tags per page, which does not fit with my requirements)
>
> 2) I found the JsonConfig extension by yurik. This allows me to do almost
> all of the things above - but it returns HTML directly, not wikitext. It
> doesn't seem trivial to be able to return wikitext instead of HTML, but
> hopefully I am wrong? Also, this ties in nicely with the Code Editor.
>
> 3) there is actually a JsonContentHandler in core. But looking through it
> it seems that this suffers from the same limitations - I can return HTML,
> but not wikitext.
>
> 3 seems to have the advantage to be more actively worked on that 2 (which
> is not based on 3, probably because it is older than 3). So future goodies
> like a Json Schema validator will probably go to 3, but not to 2, so I
> should probably go to 3.
>
> Writing this down, one solution could be to create the wikitext, and then
> call the wikitext parser manually and have it create HTML?
>
> I have already developed the extension in 1, and then fully rewritten it in
> 2. Before I go and rewrite it again in 3, I wanted to ask whether I am
> doing it right, or if should do it completely differently, and also if
> there are examples of stuff developed in 3, i.e. of extensions or features
> using the JsonContent class.
>
> Example:
> I have a JSON document
> { "username": "Denny" }
> which gets turned into wikitext
> ''Hello, [[User:Denny|Denny]]!''
> which then gets turned into the right HTML and displayed to the user, e.g.
> Hello, Denny!
>
> Cheers,
> Denny
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
>
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Editing JSON in MediaWiki

2017-04-09 Thread Denny Vrandečić
Here's my requirement:
- a wiki page is one JSON document
- when editing, the user edits the JSON directly
- when viewing, I have a viewer that turns the JSON into wikitext, and that
wikitext gets rendered as wikitext and turned into HTML by MediaWiki

I have several options, including:

1) hook for a tag like , and write an extension that parses the
content between the tags and turns it into wikitext (not ideal, as I don't
use any of the existing support for JSON stuff, and also I could have
several such tags per page, which does not fit with my requirements)

2) I found the JsonConfig extension by yurik. This allows me to do almost
all of the things above - but it returns HTML directly, not wikitext. It
doesn't seem trivial to be able to return wikitext instead of HTML, but
hopefully I am wrong? Also, this ties in nicely with the Code Editor.

3) there is actually a JsonContentHandler in core. But looking through it
it seems that this suffers from the same limitations - I can return HTML,
but not wikitext.

3 seems to have the advantage of being more actively worked on than 2 (which
is not based on 3, probably because it is older than 3). So future goodies
like a Json Schema validator will probably go to 3, but not to 2, so I
should probably go to 3.

Writing this down, one solution could be to create the wikitext, and then
call the wikitext parser manually and have it create HTML?

I have already developed the extension in 1, and then fully rewritten it in
2. Before I go and rewrite it again in 3, I wanted to ask whether I am
doing it right, or if I should do it completely differently, and also if
there are examples of stuff developed in 3, i.e. of extensions or features
using the JsonContent class.

Example:
I have a JSON document
{ "username": "Denny" }
which gets turned into wikitext
''Hello, [[User:Denny|Denny]]!''
which then gets turned into the right HTML and displayed to the user, e.g.
Hello, Denny!
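
For concreteness, here is a rough, untested sketch of what I have in mind
for option 3: build the wikitext and then delegate to a WikitextContent
object for the HTML. The class and model names are made up, and error
handling is omitted:

    class HelloJsonContent extends JsonContent {

        public function __construct( $text ) {
            // 'hello-json' is a hypothetical content model id
            parent::__construct( $text, 'hello-json' );
        }

        protected function fillParserOutput( Title $title, $revId,
            ParserOptions $options, $generateHtml, ParserOutput &$output
        ) {
            // getData() wraps the decoded JSON in a Status object
            $data = $this->getData()->getValue();
            $name = isset( $data->username ) ? $data->username : 'stranger';

            // Build the wikitext intermediary (a real version would want
            // to sanitize $name) ...
            $wikitext = "''Hello, [[User:$name|$name]]!''";

            // ... and let MediaWiki turn it into HTML the usual way.
            $wikitextContent = new WikitextContent( $wikitext );
            $output = $wikitextContent->getParserOutput(
                $title, $revId, $options, $generateHtml );
        }
    }

If I understand the setup correctly, this would also need a small
ContentHandler subclass returning HelloJsonContent, registered via
$wgContentHandlers. Is that roughly the intended way to do it?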

Cheers,
Denny
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Phabricator access

2016-11-07 Thread Denny Vrandečić
Phabricator still has my wikimedia.de email address and tries to confirm
it's me by sending an email to that address.

Is there a way to reset the email?

(The weird thing is that I seem to receive those emails on my regular gmail
account, but for logging in it still requires that I verify the wikimedia.de
address? I am a bit confused)
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Mobile JavaScript

2016-10-13 Thread Denny Vrandečić
Thanks Dmitry and Jon! That's the info I was looking for.

Since both https://en.wikipedia.org/wiki/MediaWiki:Mobile.js and
https://en.wikipedia.org/wiki/MediaWiki:Minerva.js are empty, does that
mean English Wikipedia has no local JavaScript? Nice!

Cheers,
Denny


On Thu, Oct 13, 2016 at 6:48 PM Jon Robson <jdlrob...@gmail.com> wrote:

> The web runs MediaWiki:mobile.js or MediaWiki:minerva.js (skin) but not
> MediaWiki:Common.js
> It's my regret that we created MediaWiki:mobile.js to make this more
> confusing ...  so I'd advise using minerva.js ... :) Similar
> User:Name/minerva.js will work for a user.
>
>
> On Thu, 13 Oct 2016 at 17:20 Dmitry Brant <dbr...@wikimedia.org> wrote:
>
> > Hi Denny,
> >
> > Speaking for the apps, we do not load any JavaScript, except for a few
> > packaged scripts used for app-specific features (e.g. collapsing
> > infoboxes).
> >
> >
> > -Dmitry
> >
> > On Thu, Oct 13, 2016 at 7:30 PM, Denny Vrandečić <vrande...@gmail.com>
> > wrote:
> >
> > > Two stupid questions, I couldn't find the answer on MediaWiki.org:
> > >
> > > 1) is MediaWiki:common.js being loaded on the mobile web view? Or any
> > other
> > > js? What about User:Name/common.js?
> > >
> > > 2) same question but for the Wikipedia app. Does the app run any js?
> > > ___
> > > Wikitech-l mailing list
> > > Wikitech-l@lists.wikimedia.org
> > > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> >
> >
> >
> >
> > --
> > Dmitry Brant
> > Senior Software Engineer / Product Owner (Android)
> > Wikimedia Foundation
> > https://www.mediawiki.org/wiki/Wikimedia_mobile_engineering
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Mobile JavaScript

2016-10-13 Thread Denny Vrandečić
Two stupid questions, I couldn't find the answer on MediaWiki.org:

1) is MediaWiki:common.js being loaded on the mobile web view? Or any other
js? What about User:Name/common.js?

2) same question but for the Wikipedia app. Does the app run any js?
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Mass migration to new syntax - PRO or CON?

2016-02-12 Thread Denny Vrandečić
Although I haven't touched MediaWiki code for a year or so, based on my
experience with large codebases with tons of contributors, I would be very
much PRO.

I understand it is a pain, but as Legoktm points out, it is a manageable
pain. Having a consistent and higher-quality code base is worth the
migration pain. Three more advantages:
* future changesets are cleaner, as one does not have to do the cleanup in
addition to the actual change one wanted to make
* automatic testing tools can capture issues with higher confidence if they
don't have to take historical exceptions into account
* most developers code by copy-and-paste of style, structures, and ideas.
So even if a new styleguide is in place, a developer will often start
building off the old styleguide, simply by keeping their code consistent
with the code they are looking at
hth


On Fri, Feb 12, 2016 at 11:57 AM Brad Jorsch (Anomie) 
wrote:

> On Fri, Feb 12, 2016 at 2:26 PM, Legoktm 
> wrote:
>
> > I think you're going to end up in rebase hell regardless, so we should
> > rip off the bandaid quickly and get it over with, and use the automated
> > tools we have to our advantage.
> >
>
> This. Just get it over with, and then it's only one patch screwing up
> rebases and blames instead of lots.
>
>
>
> --
> Brad Jorsch (Anomie)
> Senior Software Engineer
> Wikimedia Foundation
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] What's the previous=yes parameter?

2015-11-16 Thread Denny Vrandečić
It comes up as the search result on a few Google queries, e.g. here:

https://www.google.com/search?q=%D7%92%27%D7%A8%D7%9E%D7%99+%D7%A7%D7%95%D7%A8%D7%91%D7%99%D7%9F=%D7%92%27%D7%A8%D7%9E%D7%99+%D7%A7%D7%95%D7%A8%D7%91%D7%99%D7%9F=chrome..69i57j0.319j0j7=chrome_sm=91=UTF-8=he



On Mon, Nov 16, 2015 at 4:21 PM Brian Wolff <bawo...@gmail.com> wrote:

> What did you do to get to a page that had that parameter?
>
> Parameters can come from anywhere, so its hard to say. But I would say
> most likely its from javascript (either common.js or gadget) [OTOH:
> searching for insource:previous doesn't give me anything]. Most of the
> time, MediaWiki would convert that to a url with index.php in it.
>
> --bawolff
>
> On 11/16/15, Denny Vrandečić <vrande...@gmail.com> wrote:
> > An example here:
> >
> >
> https://he.wikipedia.org/wiki/%D7%92'%D7%95%D7%A0%D7%AA%D7%9F_%D7%A4%D7%A8%D7%99%D7%99%D7%A1?previous=yes
> >
> > I couldn't find a description of the "previous" parameter. Does someone
> > know (and how do I find this out by myself)?
> >
> > (I was looking here:
> > https://www.mediawiki.org/wiki/Manual:Parameters_to_index.php but could
> not
> > find anything, so I assume this comes from some extension, but I do not
> > know how to figure that out)
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki on Google App Engine

2014-04-07 Thread Denny Vrandečić
Oh, my use case is simple: I just want to run my own instance of MediaWiki
for my personal wiki. I need to move it from my current provider.

I thought App Engine would be kinda simple and cheap to run MediaWiki on,
but it looks like it has a few rough edges, and I am wondering whether it
is worth getting those fixed or if I should rather just take another service
to run my server on.


On Sat Apr 05 2014 at 9:01:19 PM, Rusty Burchfield gicodewarr...@gmail.com
wrote:

 On Sat, Apr 5, 2014 at 8:49 PM, Tyler Romeo tylerro...@gmail.com wrote:
  App Engine is a different type of service than EC2. With App Engine you
  basically don't have to handle your own system administration.

 Thanks Tyler.  Just to clarify, I wasn't asking about the differences
 between the services.  I am familiar with App Engine.

 Maybe App Engine is perfect for what Denny needs, but it isn't
 strictly easier to use.

 Knowing the problems he is trying to solve will help with figuring out
 the solution.

 ~Rusty

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] MediaWiki on Google App Engine

2014-04-05 Thread Denny Vrandečić
Did anyone manage to get MediaWiki running on Google App Engine? I am a bit
dense, it seems, and would appreciate a few pointers.

Cheers,
Denny
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Webfonts

2014-03-14 Thread Denny Vrandečić
It's really in the tradeoffs, as others have mentioned.

It is obvious that we would love to get rid of tofu at all times. But what
is the effect on readership? As far as I know, a higher delivery speed of a
website increases readership, and a lower one decreases it. So if there was
any way to estimate what the trade-off would be -- i.e. how many readers
gained or lost for how much tofu avoided -- that would be great.

Bonus points if we could qualify both sides (e.g. readers who turn into
editors vs. other readers, tofu that prevents you from understanding the
article vs. tofu in a language link you don't care about anyway).

I don't think that actual bandwidth costs and loading time are interesting
per se for us, but only as secondary indicators.

That's speaking from a fantasy dream world where such metrics are easily
accessible :)


On Fri Mar 14 2014 at 8:08:12 AM, Kartik Mistry kartik.mis...@gmail.com
wrote:

 On Fri, Mar 14, 2014 at 7:06 PM, praveenp me.prav...@gmail.com wrote:
  ..
  And requirement of webfonts is also getting smaller and smaller with
 modern
  OSs. Even today's mobile OSs have better language support than that of
 old
  desktop OSs.

 For example: My Nexus 5 with modern OS, Android Kitkat has no font for
 my native language by default.

 --
 Kartik Mistry/કાર્તિક મિસ્ત્રી | IRC: kart_
 {kartikm, 0x1f1f}.wordpress.com

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Fwd: afch licensing question

2013-09-26 Thread Denny Vrandečić
CC-BY-SA/GFDL is in the same spirit as GPL, which is an obviously
acceptable license for WMF (since MediaWiki is licensed under it).

Can you explain the need to relicense it under MIT?


2013/9/26 Steven Walling swall...@wikimedia.org

 Forwarding, with permission.

 For background the AFC  Helper script is one that assists English
 Wikipedians reviewing pages in the Articles for Creation queue, which
 currently is severely backlogged.

 Any thoughts on the licensing issue, from folks with experience on the
 question of gadget/userscript licensing?

 -- Forwarded message --
 From: Mr. Donald J. Fortier II technical...@yahoo.com
 Date: Wed, Sep 25, 2013 at 8:30 PM
 Subject: afch licensing question
 To: swall...@wikimedia.org swall...@wikimedia.org


 Per the discussion on https://github.com/WPAFC/afch/issues/61 one of the
 previous contributors to the project refuses to agree to relicense AFCH
 under the MIT license. Right now, the script is licensed under
 CC-BY-SA/GFDL, as it was originally coded on-wiki (per
 [[Wikipedia:Copyright]] -- all text-based contributions).  This came about
 due to some confusion that can be seen
 https://github.com/WPAFC/afch/issues/60.  So, we are unsure as to where to
 go from here.  If we replace any code contributed to the project by the
 person that refuses to agree, can we dissolve any requirements to get him
 to agree?  Is there enough contribution from him to actually worry about it
 as he hasn't actually written any functions, just converted some stuff from
 old school JavaScript to jQuery?  Any advice/assistance on this would be
 appreciated.



 --
 Steven Walling,
 Product Manager
 https://wikimediafoundation.org/
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] WMFs stance on non-GPL code

2013-08-28 Thread Denny Vrandečić
Tim, thanks, I found this a very interesting aspect that I have not
considered before.


2013/8/28 Tim Starling tstarl...@wikimedia.org

 On 27/08/13 03:12, C. Scott Ananian wrote:
  Stated more precisely: a non-GPL-compatible license for an extension
 means
  that the extension can never be distributed with core.

 That is incorrect, the GPL does not say that. The GPL allows verbatim
 copies of source code, with no restrictions on the license of any
 bundled or dynamically linked code. Only non-source forms (or
 binaries in v2) have more restrictive conditions. Since the
 MediaWiki core and extensions are distributed solely in source form,
 the non-source (binary) conditions do not apply.

 -- Tim Starling


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] SpamBlackList change needs review

2013-08-21 Thread Denny Vrandečić
Hi all,

we really would like to deploy the URL datatype to Wikidata with the next
deployment, i.e. next week. But it would make a lot of sense to have the
SpamBlackList extension be aware of ContentHandler for that.

There's already a +1 on the changeset, we would really appreciate someone
reviewing and merging it:

https://gerrit.wikimedia.org/r/#/c/75867/

Your help would be very much appreciated.

Cheers,
Denny


-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Updating Wikipedia based on Wikidata changes

2013-07-29 Thread Denny Vrandečić
No, neither table would have uniqueness constraints (besides the primary
keys).


2013/7/29 Sean Pringle sprin...@wikimedia.org

 On Tue, Jul 23, 2013 at 1:42 AM, Denny Vrandečić 
 denny.vrande...@wikimedia.de wrote:

 
  * EntityUsage: one table per client. It has two columns, one with the
  pageId and one with the entityId, indexed on both columns (and one column
  with a pk, I guess, for OSC).


  * Subscriptions: one table on the client. It has two columns, one with
 the
  pageId and one with the siteId, indexed on both columns (and one column
  with a pk, I guess, for OSC).
 
  EntityUsage is a potentially big table (something like pagelinks-size).
 
  On a change on Wikidata, Wikidata consults the Subscriptions table, and
  based on that it dispatches the changes to all clients listed there for a
  given change. Then the client receives the changes and based on the
  EntityUsage table performs the necessary updates.
 
  We wanted to ask for input on this approach, and if you see problems or
  improvements that we should put in.
 

 Sounds OK to me.

 Will (or could) pageId/entityId and pageId/siteId have unique constraints?

 BR
 Sean
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Remove 'visualeditor-enable' from $wgHiddenPrefs

2013-07-23 Thread Denny Vrandečić
2013/7/23 MZMcBride z...@mzmcbride.com

 What we need is for you and Erik to recognize that you're wrong and to
 make this right. Is there anyone besides you and Erik who agree with the
 position you're taking here?


They are not alone. I also agree with their position, and I sincerely hope
we are not alone.

E.g. here:

http://insideintercom.io/product-strategy-means-saying-no/



-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki extensions as core-like libraries: MediaWiki's fun new landmine for admins

2013-07-22 Thread Denny Vrandečić
2013/7/19 Antoine Musso hashar+...@free.fr

 And installing yoursite would be something like:

  mkdir mysite
  cd mysite
  composer require mediawiki/core
  composer require wikibase/wikibase
  # which installs data-values/data-values ask/ask as well


Just curious, why is composer require mediawiki/core needed?
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki extensions as core-like libraries: MediaWiki's fun new landmine for admins

2013-07-22 Thread Denny Vrandečić
Ah, OK, understood. Thanks.


2013/7/22 Jeroen De Dauw jeroended...@gmail.com

 Hey,

  wikibase/wikibase does not list mediawiki/core as a dependency :)

 Indeed. Right now this just allows you to install Wikibase into an existing
 MW install. Before we can go all the way, we first need to be able to do a
 MediaWiki (+extensions) install, which is something still under discussion.

 Cheers

 --
 Jeroen De Dauw
 http://www.bn2vs.com
 Don't panic. Don't be evil. ~=[,,_,,]:3
 --
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki extensions as core-like libraries: MediaWiki's fun new landmine for admins

2013-07-22 Thread Denny Vrandečić
Hi,

please, everyone calm down and honestly try harder to assume good faith and
to respect the capabilities of each other. Respect includes avoiding
terminology like "to bitch", "to sneak in", or "stupid" when describing each
other's actions.

One good way is when you are angry about an email, step back, wait a day,
calm down, and answer only then. Since this matter is important, but not
urgent, there is nothing that requires quick responses. No one's going to
make a decision for all of us right away, so there is no need to reply
quickly and escalate.


I assume Ryan didn't mean to single out the Wikidata development team.
Other teams have done this as well -- the Translate extension depends on
ULS, CodeEditor depends on WikiEditor, Semantic Result Formats depends on
Semantic MediaWiki etc. I assume Ryan just chose Wikibase because it
exemplifies the symptoms so well. Ryan, please correct me if my assumptions
are wrong.

The remainder of this mail has two parts: first, it tries to explain the
motivation for why the Wikidata dev team did things the way we did, and
second, it asks this list to please resolve the underlying actual issue.

First:

The original message by Ryan lists the number of dependencies for Wikibase.
He lists, e.g. Scribunto as an extension.
The question is, how to avoid this dependency?
Should we move Scribunto to core?
Should we turn Scribunto and Wikibase into one extension?

The same for Babel and ULS, listed as optional extensions.

Another extension mentioned is Diff. Diff provides generic functionality
that can be used by different extensions.
It seems that in this case the suggestion that Ryan makes is to move it to
core.
So shall we move functionality that more than one extension depends on
to core as a general rule?

One of the reasons for DataValues, Ask and some others being in their own
extension is that they already are or are planned very soonish to be reused
in SMW.
Since it is suggested that generic functionality should be moved to core,
would that include functionality shared by Wikibase and SMW?
Or how else would we manage that shared code?

Another reason for having separate components like the WikibaseModel or
Diff is that they are not MediaWiki extensions, but pure PHP libraries.
Any PHP script can reuse them. Since the WikibaseModel is not trivial, this
should help with the writing of bots and scripts dealing with Wikidata data.
How should we handle such components?
Should they be moved to Wikibase and we require every script to depend on
the whole of Wikibase, and thus MediaWiki?
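
To illustrate what I mean by a pure PHP library: a bot author should
ideally be able to write a stand-alone script like the following (purely
illustrative; the package and class names are assumptions and may differ):

    // composer require data-values/data-values
    require_once __DIR__ . '/vendor/autoload.php';

    // Work with a Wikidata-style value object without loading MediaWiki.
    $value = new DataValues\StringValue( 'Douglas Adams' );
    echo $value->getValue();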

If you add everything needed by Wikibase into a single extension, how do
you ensure that no unnecessary dependencies creep in?
Is there a code analyzer that can run as part of Jenkins and check that
the architecture is not being violated, and that parts of the code do not
introduce dependencies on other parts where they should not?
Separating such components allows us to check this part of the architecture
during CI, which is indeed extremely helpful.


I would indeed be very much interested in better solutions for these
questions than we currently have.

As Ryan said in his thread-opening email, "For legitimate library-like
extensions I have no constructive alternative, but there must be some sane
alternative to this."
A lot of the issues would be resolved if we had this constructive
alternative. The solution will likely also help to deal with the other
dependencies.
I hope it is understandable that I do not consider the Wikidata development
team's time well spent on replacing our architecture with something else
before we have agreed on what this something else should be.


Second:

I would be interested in answers to the above questions.
But maybe we really should concentrate on getting the actual question
resolved, which has been discussed on this list several times without
consensus, the one that would allow us to answer the above questions
trivially:
how should we deal with modularity, extensions, components, etc. in
MediaWiki?
I hope the answer is not "throw everything into core or into monolithic
extensions which do not have dependencies among each other", but let's see
what the discussion will bring. Once we have this answer, we can implement
the results. Until then I am not sure I find it productive to single out
the Wikidata team for the way we are doing things.

I do not think the Wikidata development team is anticipating and
predetermining answers to those questions, but in order for us to continue
developing we had to make some decisions for ourselves.
how to do their job, and we try to stick to current consensus on how to do
things. And as soon as we have consensus, we will adapt. We did this
previously a few times, and I don't see why we should not do it again.

Cheers,
Denny





2013/7/22 Jeroen De Dauw jeroended...@gmail.com

 Hey,

  Validator was rejected from being included in core

 I'm happy it did. The code was quite poor at that time (two years back?).
 And it still 

[Wikitech-l] Updating Wikipedia based on Wikidata changes

2013-07-22 Thread Denny Vrandečić
Hi,

sorry for another long Email today.

Currently, when you change a Wikidata item, its associated Wikipedia
articles get told to update, too. So your change to the IMDB ID of a movie
in Wikidata will be pushed to all language versions of that article on
Wikipedia. Yay!

There are two use cases that currently are not possible:

* a Wikipedia article on a city might display the mayor. Now someone
changes the label of the mayor on Wikidata - the Wikipedia article will get
updated the next time the page is rendered, but there is no active update
of the page.

* a Wikipedia article might want to include data about an item other than
the associated item - most importantly for references, where I might be
interested in the author of a book, its year of publication, etc. This
feature is currently disabled (even though it would be trivial to switch it
on) because this information would only get updated when the page is
actively rerendered.

In order to enable these use cases we need to track on which pages (on
Wikipedia) an item (from Wikidata) is used. We are thinking of doing this
in two tables:

* EntityUsage: one table per client. It has two columns, one with the
pageId and one with the entityId, indexed on both columns (and one column
with a pk, I guess, for OSC).

* Subscriptions: one table on the client. It has two columns, one with the
pageId and one with the siteId, indexed on both columns (and one column
with a pk, I guess, for OSC).

EntityUsage is a potentially big table (something like pagelinks-size).

On a change on Wikidata, Wikidata consults the Subscriptions table, and
based on that it dispatches the changes to all clients listed there for a
given change. Then the client receives the changes and based on the
EntityUsage table performs the necessary updates.
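
To make that concrete, here is a very rough sketch of the client-side step
(the table and column names are placeholders, not a final schema, and
$entityId stands for the id carried by the incoming change):

    $dbr = wfGetDB( DB_SLAVE );
    $res = $dbr->select(
        'entity_usage',                       // placeholder table name
        'eu_page_id',
        array( 'eu_entity_id' => $entityId ), // e.g. 'Q42'
        __METHOD__
    );
    foreach ( $res as $row ) {
        $title = Title::newFromID( $row->eu_page_id );
        if ( $title ) {
            // force a re-render so the page picks up the new data
            $title->invalidateCache();
        }
    }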

We wanted to ask for input on this approach, and if you see problems or
improvements that we should put in.

Cheers,
Denny


-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Updating Wikipedia based on Wikidata changes

2013-07-22 Thread Denny Vrandečić
Small correction.


2013/7/22 Denny Vrandečić denny.vrande...@wikimedia.de

 * Subscriptions: one table on the client. It has two columns, one with the
 pageId and one with the siteId, indexed on both columns (and one column
 with a pk, I guess, for OSC).


That's entityId - siteId, not pageId to siteId.


-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Updating Wikipedia based on Wikidata changes

2013-07-22 Thread Denny Vrandečić
Another correction, same line. Gosh, it's hot here. Brain not working. Me
off home.



2013/7/22 Denny Vrandečić denny.vrande...@wikimedia.de

 2013/7/22 Denny Vrandečić denny.vrande...@wikimedia.de

 * Subscriptions: one table on the client. It has two columns, one with
 the pageId and one with the siteId, indexed on both columns (and one column
 with a pk, I guess, for OSC).


 That's entityId - siteId, not pageId to siteId.


And that's repo. Not client.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki extensions as core-like libraries: MediaWiki's fun new landmine for admins

2013-07-22 Thread Denny Vrandečić
2013/7/22 Antoine Musso hashar+...@free.fr

 Given we migrated our community from
 subversion to git, I am confident enough that using composer will be
 very easy to the community.


:D
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki extensions as core-like libraries: MediaWiki's fun new landmine for admins

2013-07-22 Thread Denny Vrandečić
2013/7/22 Ryan Lane rlan...@gmail.com

 For pure PHP libraries, they could be distributed like pure PHP libraries
 usually are. They can be packaged for multiple distros and be available via
 apt/yum/composer (or pear). Having them as MediaWiki extensions is somewhat
 awkward.


Yes, agree on that.




 How does splitting extensions apart make it easier to check this during CI?
 Is there some automated way to do this when they are split, but not when
 they are together?


Yes.
Assume you have class Ext which relies on class Core, but class Core should
not rely on class Ext, because they are on different architectural levels.
If Ext and Core are together, your CI will load them both and you won't
notice if you access Ext from Core. No error is thrown.
If Core is not together with Ext, then unit-testing Core will not load Ext.
A call from Core into Ext will then fail, and your CI will discover the
error. The split helps keep the architectural integrity of the code, and
avoids it being violated unnoticed.
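
A contrived example of the kind of violation this catches (names made up;
Ext deliberately does not exist in the core component):

    // lib/Core.php, the lower-level component
    class Core {
        public function greet() {
            // The violation: a lower-level class reaching into a
            // higher-level one. With everything in one repository the
            // tests load both classes and this slips through silently.
            // When Core is tested on its own, Ext is simply not there,
            // this call throws "Class 'Ext' not found", and CI flags it.
            return Ext::decorate( 'hello' );
        }
    }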



 My concern was increasing complexity for admins with no effort towards
 decreasing it. Composer may indeed be an answer to some of the issues. From
 an admin's perspective using composer shouldn't be much harder than what
 they are currently expected to do and it would simplify the extension
 process.


Yes, I think so too. Composer could be a viable option, and within Wikidata
we are quite happy with it. We would recommend adoption by the wider
community.
If the wider community chooses another solution for the dependency problem,
we will adapt Wikibase to it too.

Cheers,
Denny
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki extensions as core-like libraries: MediaWiki's fun new landmine for admins

2013-07-22 Thread Denny Vrandečić
2013/7/22 Tyler Romeo tylerro...@gmail.com

 Architectural integrity of code is a design-level issue. Continuous
 integration is a programming and quality assurance-level issue. They have
 nothing to do with each other, and you can maintain architectural integrity
 just fine without having to split your singular product into multiple
 products.


I would disagree with you regarding your statement that architectural
integrity and quality assurance have nothing to do with each other. I hope
I do not have to explain - but if I do, feel free to ask.

I agree with you regarding your statement that you can maintain
architectural integrity without checking it automatically. You can also
make sure that code style is not violated manually. Or that code works
without unit tests and by testing manually. But considering that reviewing
resources are scarce, I prefer to test as much as possible automatically by
CI and relieve the reviewer from considering e.g. the dependency
architecture of your classes during a review. I do not see the advantage of
doing that manually.

I think that also core would benefit if architectural constraints could be
enforced by CI.

 If that were true, than the MW core would be split across fifty
 different repositories. (If Makefiles can compile different parts of a
 product independently, then so can we.)

I fail to understand what you mean here, sorry.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Public hangout on Travis CI

2013-06-19 Thread Denny Vrandečić
Tomorrow at 2pm Berlin time, the Wikidata team will host a public hangout
on Travis. This is mostly meant to inform ourselves, but it might be a good
resource for others as well.

We might be late, as this is the first time we are doing such a thing, so
bring a bit of patience.

We will try to record it and make it available afterwards.

We will send the login data and URLs to IRC on #wikimedia-wikidata around
that time.

Thanks to Jeroen for preparing the session, and Lydia for the technical
setup.

Cheers,
Denny


-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [GSoC 2013] Wikidata Entity Suggester prototype

2013-05-22 Thread Denny Vrandečić
Awesome, that looks already pretty promising!

I am not completely sure I understand a few things:

1074167410
106
107215627
156

what do the two properties without a value mean here?

I would have expected:

1074167410
107215627

and now ask for suggested values for 31,
or for suggested properties to add.

But these are already details. The results seem pretty promising.

How quickly are updates processed by the backend, any idea?







2013/5/21 Nilesh Chakraborty nil...@nileshc.com

 Hello,

 I have some updates on the Entity Suggester prototype. Here are the two
 repos:
 1. https://github.com/nilesh-c/wikidata-entity-suggester
 2. https://github.com/nilesh-c/wes-php-client

 As it stands now, deployment-wise, I have a single Java war file that's
 deployed on Tomcat. And there's a PHP client that can be used from PHP code
 to push data into or fetch suggestions from that engine.

 I have made a simple, crude demo that you can access here
 -http://home.nileshc.com/wesTest.php.
 You can find the code for it in the wes-php-client repo. It's hosted on my
 home desktop temporarily. I am having some non-technical problems with the
 VPS I'm managing and customer support is working on it. After it starts to
 work, I may try deploying this to the VPS. So, if you have to face an
 embarrassing 404 page, I'm really sorry, I'll be working on it. If it stays
 up, well and good. :)
 http://home.nileshc.com/wesTest.php

 You can give it a bunch of property IDs, or a bunch of property-value
 pairs, or a mix of both; select the the type of recommendation and hit Get
 suggestions! :) Feedback is much appreciated.

 Cheers,
 Nilesh


 On Tue, May 14, 2013 at 2:36 AM, Matthew Flaschen
 mflasc...@wikimedia.orgwrote:

  On 05/13/2013 04:28 PM, Nilesh Chakraborty wrote:
   Hi Matt,
  
   Yes, you're right, they are available as separately licensed downloads.
   Only the stand-alone Serving Layer is needed for the Entity
 Suggester.
   It's licensed under Apache v. 2.0. Since I'm using the software as-is,
   without any code modifications, I suppose it's compatible with what
   Wikidata would allow?
 
  Apache 2.0-licensed software should be fine, even if you do need/want to
  modify it.
 
  Matt Flaschen
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 



 --
 A quest eternal, a life so small! So don't just play the guitar, build one.
 You can also email me at cont...@nileshc.com or visit my
 websitehttp://www.nileshc.com/
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Code review of the Wikidata code

2013-05-17 Thread Denny Vrandečić
Hey,

in order to sanity check the code we have written in the Wikidata project,
we have asked an external company to review our code and discuss it with
the team. The effort was very instructive for us.

We want to share the results with you. The report looks dauntingly big, but
this is mostly due to the appendixes. The first 20 pages are quite worth a
read.

Since the Wikidata code is an extension to MediaWiki, a number of the
issues raised are also relevant to the MediaWiki code proper. I would hope
that this code review can be regarded as a contribution towards the
discussion of the status of our shared code-base as well.

I will unfortunately not be at the Hackathon, but a number of Wikidata
developers will, please feel free to chat them up. Daniel Kinzler is also
preparing a presentation to discuss a few lessons learned and ideas at the
Hackathon.

The review is available through this page: 
http://meta.wikimedia.org/wiki/Wikidata/Development/Code_Review

Cheers,
Denny



-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] category intersection conversations

2013-05-14 Thread Denny Vrandečić
I agree with most of the use cases, and I think they will be possible with
Wikidata.

My suggestion would be to wait for this year, and then see which of the use
cases are still open: I think that by the end of the year we should have
made all of them possible (besides "searches might be more than
intersection"; I am not sure about that one)


2013/5/9 James Forrester jforres...@wikimedia.org

 On 8 May 2013 18:26, Sumana Harihareswara suma...@wikimedia.org wrote:

  Recently a lot of people have been talking about what's possible and
  what's necessary regarding MediaWiki, CatScan-like tools, and real
  category intersection; this mail has some pointers.
 
  The long-term solution is a sparkly query for, e.g., people with aspects
  novelist + Singaporean, and it would be great if Wikidata could be the
  data-source.  Generally people don't really want to search using
  hierarchical categories; they want tags and they want AND. But
  MediaWiki's current power users do use hierarchical labels, so any
  change would have to deal with current users' expectations.  Also my
  head hurts just thinking of the but my intuitively obvious ontology is
  better than yours arguments.
 

 To put a nice clear stake in the ground, a magic-world-of-loveliness
 sparkly proposal for 2015* might be:

 * Categories are implemented in Wikidata
 * - They're in whatever language the user wants (so fr:Chat and en:Cat and
 nl:kat and zh-han-t:貓 …)
 * - They're properly queryable
 * - They're shared between wikis (pooled expertise)

 * Pages are implicitly in the parent categories of their explicit
 categories
 * - Pages in Politicians from the Netherlands are in People from the
 Netherlands by profession (its first parent) and People from the
 Netherlands (its first parent's parent) and Politicians (its second
 parent) and People (its second parent's parent) and …
 * - Yes, this poses issues given the sometimes cyclic nature of
 categories' hierarchies, but this is relatively trivial to code around

 * Readers can search, querying across categories regardless of whether
 they're implicit or explicit
 * - A search for the intersection of People from the Netherlands with
 Politicians will effectively return results for Politicians from the
 Netherlands (and the user doesn't need to know or care that this is an
 extant or non-extant category)
 * - Searches might be more than just intersections, e.g. Painters from
 the United Kingdom AND Living people NOT Members of the Royal Academy
 or whatever.
 * - Such queries might be cached (and, indeed, the intersections that
 people search for might be used to suggest new categorisation schemata that
 wikis had previously not considered - e.g. British politicians  People
 with pet cats  People who died in hot-ballooning accidents)

 * Editors can tag articles with leaf or branch categories, potentially
 over-lapping and the system will rationalise the categories on save to the
 minimally-spanning subset (or whatever is most useful for users, the
 database, and/or both)
 * - Editors don't need to know the hierarchy of categories *a priori* when
 adding pages to them (yay, less difficulty)
 * - Power editors don't need to type in loads of different categories if
 they have a very specific one in mind (yay, still flexible)
 * - Categories shown to readers aren't necessarily the categories saved in
 the database, at editorial judgement (otherwise, would a page not be in
 just a single category, namely the intersection of all its tagged
 categories?)

 ​Apart from the time and resources needed to make this happen and
 operational, does this sound like something we'd want to do? It feels like
 this, or something like it, would serve our editors and readers the best
 from their perspective, if not our sysadmins. :-)

 [Snip]
 ​

  I think the best place to pursue this topic is probably in
  https://meta.wikimedia.org/wiki/Talk:Beyond_categories .  It's unlikely
  Wikimedia Foundation will be able to make engineers available to work on
  this anytime soon, but I would not be surprised if the Wikidata
  developer community or volunteers found this interesting enough to work
 on.


 ​I guess I should post this there too, maybe once someone's told me if it's
 mad-cap. ;-)​

 J.
 --
 James D. Forrester
 Product Manager, VisualEditor
 Wikimedia Foundation, Inc.

 jforres...@wikimedia.org | @jdforrester
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.

Re: [Wikitech-l] category intersection conversations

2013-05-14 Thread Denny Vrandečić
2013/5/9 Brian Wolff bawo...@gmail.com

 From what I hear wikidata phase 3 is going to basically be support for
 inline queries. Details are vauge but if they support the typical types of
 queries you associate with semantic networks - there is category
 intersection right there.



Right.



 If any of the wikidata folk could comment on what sort of queries are
 planned for phase 3, performance/scaling considerations, technologies being
 considered (triple store?) Id be very interested in hearing. (I recognize
 that future plans may not exist yet)


For now we aim at property value or property value-restriction type
queries, and their intersections, i.e.

lives in - California
born - before 1980
lives in - California AND born - before 1980

We are considering several different technologies, and we had a number of
discussions already with a number of people.

My current gut feeling, based on the limited tests we did so far, is that
we start with our normal SQL setup for the first two types of queries, and
that we extend to Solr for the third type of queries (i.e. intersections).



 more generally it would be interesting to know the performance
 characteristics of SPARQL type query systems, since people seem to be
 talking about them. Are they a non starter or could they be feasible?


We are currently not aiming to provide an unrestricted SPARQL endpoint.
But, I know that a few other organizations are very interested in setting
that up. If it turns out to be feasible for us, I'd be very happy if
we did it too.



 Semantic and efficient are not words I associate with each other, but that
 is due to rumour not actual data. (Although my brief googling doesnt
 exactly look promising)


That's indeed a rumor, but leads to another discussion :)

Cheers,
Denny



 -bawolff
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [GSoC 2013] Wikidata Entity Suggester prototype

2013-05-13 Thread Denny Vrandečić
That's awesome!

Two things:
* how set are you on a Java-based solution? We would prefer PHP in order to
make it more likely to be deployed.
* could you provide a link to a running demo?

Cheers,
Denny



2013/5/13 Nilesh Chakraborty nil...@nileshc.com

 Hi everyone,

 I'm working on a prototype for the Wikidata Entity Suggester (Bug
 #46555https://bugzilla.wikimedia.org/show_bug.cgi?id=46555).
 As of now, it is a command-line client, completely written in Java, that
 fetches recommendations from a Myrrix server layer.

 Please take a look at the GitHub repository here:
 https://github.com/nilesh-c/wikidata-entity-suggester/
 I would really appreciate it if you can take the time to go through the
 README and provide me with some much-needed feedback. Any questions or
 suggestions are welcome. If you're curious, you can set up the whole thing
 on your own machine.

 Check out a few examples too:
 https://github.com/nilesh-c/wikidata-entity-suggester/wiki/Examples

 It can suggest properties and values for new/not-yet-created items (and
 also currently present items), if it's given a few properties/values as
 input data.

 I intend to write a REST API and/or a simple PHP frontend for it before I
 set it up on a remote VPS, so that everyone can test it out. Some
 experimentation and quality optimization is also due.

 Cheers,
 Nilesh
 (User Page - https://www.mediawiki.org/wiki/User:Nilesh.c)

 --
 A quest eternal, a life so small! So don't just play the guitar, build one.
 You can also email me at cont...@nileshc.com or visit my
 websitehttp://www.nileshc.com/
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Usage of Wikidata: the brilliance of Wikipedians

2013-04-25 Thread Denny Vrandečić
I am completely amazed by a particularly brilliant way that Wikipedia uses
Wikidata. Instead of simply displaying the data from Wikidata and removing
the local data, a template and workflow are proposed, which...

* grabs the relevant data from Wikidata
* compares it with the data given locally in the Wikipedia
* displays the Wikipedia data
* adds a maintenance category in case the data is different

This allows both communities to check the maintenance category, provide a
safety net against vandal changes, still notice if some data has changed,
etc. -- and to phase out the local data over time when they get comfortable
and if they want to. It is a balance of maintenance effort and data quality.

I am not saying that is the right solution in every use case, for every
topic, for every language. But it is a perfect example how the community
will surprise us by coming up with ingenious solutions if they get enough
flexibility, powerful tools, and enough trust.

Yay, Wikipedia!

The workflow is described here:


http://en.wikipedia.org/wiki/Template_talk:Commons_category#Edit_request_on_24_April_2013:_Check_Wikidata_errors


There is an RFC currently going on about whether and how to use Wikidata
data in the English Wikipedia, coming out of the discussion that was here a
few days ago. If you are an English Wikipedian, you might be interested:


http://en.wikipedia.org/wiki/Wikipedia:Requests_for_comment/Wikidata_Phase_2



-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Support for multiple content languages in MW core

2013-04-24 Thread Denny Vrandečić
Just to add, ContentHandler is deployed on all Wikimedia projects.


2013/4/24 Bartosz Dziewoński matma@gmail.com

 I think ContentHandler  already theoretically has the ability to store
 per-page language info, it's just not being used. (And of course it'd
 have to be actually deployed somewhere else than Wikidata.) Unless I'm
 missing something, this mostly needs an interface (which is not a
 small undertaking by any means, either).

 --
 -- Matma Rex

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Data-URI and WebP for thumbnails, a cute experiment

2013-04-22 Thread Denny Vrandečić
That looks like a cool idea.

I am experimenting with it on a few pages, and it seems to considerably
reduce the number of web requests (for
https://en.wikipedia.org/wiki/Vincent_van_Gogh it goes from 120 to under 40
requests).

But the pages get quite a bit bigger, obviously. Also, it introduces a caching
issue for images...

Still, pretty cool.





2013/4/22 Mathias Schindler mathias.schind...@gmail.com

 Hi everyone,

 Magnus was very kind to implement an idea that consists of two parts:

 1. use of WebP (http://en.wikipedia.org/wiki/WebP) instead of PNG/JPEG
 for thumbnails in Wikipedia articles
 2. use of Data-URIs (https://en.wikipedia.org/wiki/Data_URI_scheme) to
 inline incluse those thumbnails.

 The experiment can be watched using Chrome and Opera browsers at

 http://toolserver.org/~magnus/wp_data_url.php?lang=entitle=Albrecht_D%C3%BCrerwebp=1

 Firefox and other Data-URI capable but currently WebP incapable
 browsers can watch the second part of the experiment at

 http://toolserver.org/~magnus/wp_data_url.php?lang=entitle=Albrecht_D%C3%BCrer

 Base64 encoding has a ~1/3 overhead. This overhead is reduced when the
 server sends out gzip files.
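As a quick back-of-the-envelope check of that ~1/3 figure: base64 maps every
3 input bytes to 4 output characters, so (ignoring padding and the data:
prefix) the encoded size can be estimated like this:

  -- Approximate size of the base64 text for an n-byte thumbnail.
  local function base64Length(nBytes)
    return math.ceil(nBytes / 3) * 4
  end
  print(base64Length(30000))  --> 40000, i.e. a 30 kB thumbnail becomes ~40 kB of text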

 Mathias

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [GSoc2013] Interested in developing Entity Suggester

2013-04-22 Thread Denny Vrandečić
You can get the data from here:
http://dumps.wikimedia.org/wikidatawiki/20130417/

All items with all properties and their values are inside the dump. The
questions would be, based on this data, could we make suggestions for:

* when I create a new statement, suggest a property, then suggest a value
* suggest qualifier properties, then suggest qualifier values (there is no
data yet on qualifiers, but this would change soon)
* suggest properties for references, and values
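To make the task concrete, here is a toy property co-occurrence heuristic in
plain Lua. This is only an illustration of the kind of signal a suggester
could start from; the actual Entity Suggester would be built on the full JSON
dump, likely with proper recommendation tooling such as the Myrrix/Mahout
libraries Nilesh mentions below, and the property lists in the demo are made
up.

  -- Count how often properties appear together on the same item.
  local function buildCooccurrence(items)
    local counts = {}
    for _, props in ipairs(items) do
      for i = 1, #props do
        for j = 1, #props do
          if i ~= j then
            counts[props[i]] = counts[props[i]] or {}
            counts[props[i]][props[j]] = (counts[props[i]][props[j]] or 0) + 1
          end
        end
      end
    end
    return counts
  end

  -- Rank candidate properties by how often they co-occur with the given ones.
  local function suggest(counts, givenProps, limit)
    local score = {}
    for _, p in ipairs(givenProps) do
      for other, n in pairs(counts[p] or {}) do
        score[other] = (score[other] or 0) + n
      end
    end
    for _, p in ipairs(givenProps) do score[p] = nil end  -- drop what is already there
    local ranked = {}
    for prop, s in pairs(score) do ranked[#ranked + 1] = { prop = prop, s = s } end
    table.sort(ranked, function(a, b) return a.s > b.s end)
    local out = {}
    for i = 1, math.min(limit, #ranked) do out[i] = ranked[i].prop end
    return out
  end

  -- Tiny demo with made-up property lists standing in for dump-extracted items.
  local items = {
    { "P19", "P569", "P106" },
    { "P19", "P569", "P27" },
    { "P17", "P131" },
  }
  print(table.concat(suggest(buildCooccurrence(items), { "P19" }, 3), ", "))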

Does this help?

Cheers,
Denny





2013/4/19 Nilesh Chakraborty nil...@nileshc.com

 Hi,

 I am a 3rd year undergraduate student of computer science, pursuing my
 B.Tech degree at RCC Institute of Information Technology. I am proficient
 in Java, PHP and C#.

 Among the project ideas on the GSoC 2013 ideas page, the one particular
 idea that seemed really interesting to me is developing an Entity
 Suggester for Wikidata. I want to work on it.

 I am passionate about data mining, big data and recommendation engines,
 therefore this idea naturally appeals to me a lot. I have experience with
 building music and people recommendation systems, and have worked with
 Myrrix and Apache Mahout. I recently designed and implemented such a
 recommendation system and deployed it on a live production site, where I'm
 interning at, to recommend Facebook users to each other depending upon
 their interests.

 The problem is, the documentation for Wikidata and the Wikibase extension
 seems pretty daunting to me since I have not ever configured a mediawiki
 instance or actually used it. (I am on my way to try it out following the
 instructions at
 http://www.mediawiki.org/wiki/Summer_of_Code_2013#Where_to_start.) I can
 easily build a recommendation system and create a web-service or REST based
 API through which the engine can be trained with existing data, and queried
 and all. This seems to be a collaborative filtering problem (people who
 bought x also bought y). It'll be easier if I could get some help about the
 part where/how I need to integrate it with Wikidata. Also, some sample
 datasets (csv files?) or schemas (just the column names and data types?)
 would help a lot, for me to figure this out.

 I have added this email as a comment on the bug report at
 https://bugzilla.wikimedia.org/show_bug.cgi?id=46555#c1.

 Please ask me if you have any questions. :-)

 Thanks,
 Nilesh

 --
 A quest eternal, a life so small! So don't just play the guitar, build one.
 You can also email me at cont...@nileshc.com or visit my
  website http://www.nileshc.com/
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Ideating on GSoC project : Mobilize Wikidata

2013-04-17 Thread Denny Vrandečić
2013/4/10 Pragun Bhutani pragu...@gmail.com

 Based on a discussion I had with YuviPanda and MaxSem on #wikimedia-mobile,
 I've got a few things to add:

 - It might be a good idea to let Wikidata detect when it's being accessed
 through a mobile device and then have it adjust the widths and such of the
 box-structures accordingly and then pass them to MobileFrontend.

 Maybe we can set up a Wikilabs instance with MobileFrontend like Quim Gil
 suggested and then we can see how much work there is involved with trying
 to make WIkidata mobile-friendly.

 If we can get it to work with MobileFrontend, that'll be excellent but if
 it turns out to be too complex or too dirty a solution, it would make more
 sense to make a completely new extension for it.


I think that sounds like a good plan.


 Although the scope of the project is not very clear at the moment, I think
 that a feasible implementation plan could be worked out with respect to the
 GSoC timeline and if it's required, I can continue to work on the project
 after GSoC ends.


I am glad to hear that. But I think it is important to scope the
project so that it can be finished in GSoC time - obviously, further
work on it afterwards will be gladly appreciated.

So, let's consider what should be working:
* create a mobile site for Wikidata
* displays the content in a layout that is more adequate for mobile devices
* retains different language versions
* Bonus: easy to edit

The first step would be to figure out the exact technology to use, i.e.
whether it would use MobileFrontend or not, etc. We would help with setting
it up on labs.

Cheers,
Denny


 On Tue, Apr 9, 2013 at 6:49 PM, Quim Gil q...@wikimedia.org wrote:

  On 04/09/2013 02:39 AM, Denny Vrandečić wrote:
 
  I would hope
 
 
   It would also be extremely good to look
 
 
   I would assume
 
   I don't think
 
 
  Can the Wikidata and Mobile teams please answer with the best of your
  knowledge to the questions at
 
  Bug 43065 - WikibaseRepo to be mobile friendly (tracking)
   https://bugzilla.wikimedia.org/show_bug.cgi?id=43065
 
   Beyond hope and belief, the fact is that I couldn't get any answer more
  precise than Interesting when asking about this project to people in
  those teams. And as for today I'm not confident to tell to a candidate
 like
  Pragun whether this project is too complex or too simple, and where the
  complexity/simplicity relies.
 
  In case of doubt I'd prefer to play safe and actually not encourage GSOC
 /
  OPW to apply for a project like this, before we regret around August.
 
  Is it possible to have a Wikidata / WikidataRepo test instance somewhere
  with MobileFrontend enabled, so we can all have a look and know more
 about
  the gap this project should fill?
 
  --
  Quim Gil
  Technical Contributor Coordinator @ Wikimedia Foundation
   http://www.mediawiki.org/wiki/User:Qgil
 
 
   ___
   Wikitech-l mailing list
   Wikitech-l@lists.wikimedia.org
   https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 



 --
 Pragun Bhutani
 http://pragunbhutani.in
 Skype : pragun.bhutani
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Pushing Technical changes to Wikimedia projects

2013-04-15 Thread Denny Vrandečić
One problem is that the necessity to link to a working test environment
does not allow ideas to be developed with the community before they are
implemented.


2013/4/9 Amir E. Aharoni amir.ahar...@mail.huji.ac.il

 2013/4/9 Steven Walling steven.wall...@gmail.com:
  One system that I find a lot of potential value in is the Wikitech
  Ambassadors mailing list. I hope that mailing list grows and can be the
  place where we make announcements that should be communicated widely.

 Yes, that, but to make it really useful testing environments must be set
 up.

 Every email to that mailing list must have a tl;dr version that MUST
 have the following two things:
 1. A two-line-max description of the feature that is going to be deployed.
 2. A link to a working testing environment where the feature can be
 tested. It can be labs or something like test.wikipedia. Or a feature
 can be available using a preference which is off be default.

 Let people test - and make it easy.

 --
 Amir Elisha Aharoni · אָמִיר אֱלִישָׁע אַהֲרוֹנִי
 http://aharoni.wordpress.com
 ‪“We're living in pieces,
 I want to live in peace.” – T. Moore‬

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Pushing Technical changes to Wikimedia projects

2013-04-15 Thread Denny Vrandečić
Those are very good points, both of them. Thanks.


2013/4/9 Matthew Flaschen mflasc...@wikimedia.org

 On 04/09/2013 12:18 PM, Denny Vrandečić wrote:
  I thought that in order to discuss these design decisions with the
  community before hand, telling them on their respective village pump is
  sufficient. Not so it seems. No single channel would find acceptance to
  communicate with the community. This, obviously means, that it is not
  actionable to communicate with the community.

 First of all, some people (including on English Wikipedia) are quite
 happy with the idea of deploying Wikidata Phase II.  Those who are not
 seem to be arguing for a community-wide RFC before allowing deployment.
  It does not seem that they are arguing no one was notified.

  What about setting up a community selected body of representatives to
  discuss such issues beforehand? At first, it sounds like a good idea -
 but
  the issue is, it makes the process only more complicated without at all
  resolving the underlying issues. Does anyone really think that such a
 body
  would stop the criticism before or after the deployment of the change in
  question? Yeah, right. Doesn't change a thing.

 Yeah, I do not think this is a good idea.  When something does need a
 community decision (not saying everything does), I don't think some new
 special council will help anything.  That would basically introduce
 indirect democracy in place of direct consensus.  Community-wide RFCs
 are not always smooth, but they do usually work.

 Matt Flaschen

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Deployment highlights - week of April 8th

2013-04-10 Thread Denny Vrandečić
Thank you, also for the explanation. I am glad we could defuse this.
 On Apr 10, 2013 7:07 AM, Risker risker...@gmail.com wrote:

 On 9 April 2013 12:15, Denny Vrandečić denny.vrande...@wikimedia.de
 wrote:

  Risker,
 
  I find myself unconvinced by your argumentation as I perceive it as
  inconsistent.
 
  On the one hand, you suggest that before we enable the option to access
  data from Wikidata using either Lua or a parser function should be
  discussed and decided by the community beforehand - the same community,
  that has been informed since mid 2011 that this change is coming. You
  suppose that the community can actually come together and decide this
  globally.
 

  On the other hand, you are not trusting the community with the use of the
  same feature. You say they would weaponize the feature, that the
  community will be unable to adapt to the new feature, and that it needs
 to
  discuss first how to use it, and for deployment to wait a few months (I
 do
  not fully understand why you assume that a few months will be enough to
  sort things out). You seem to assume that a wiki as large and active as
 the
  English Wikipedia is not resilient enough to absorb the rather minor
  technical change we are introducing.
 
  It is, technically, a minor change. Socially it can lead to bigger
 changes
  -- but I found it hard to believe that anyone can predict the actual
 effect
  on the English Wikipedia community. This has to be seen and experienced,
  and I, for one, trust the English Wikipedia community to be as awesome as
  always, and to absorb and use this new features in ways no one has even
  imagined yet.
 

 I'll just quickly point out the dichotomy in what you're saying here: first
 you say that you doubt the project can come together and make a global
 decision, and then you say that it is resilient enough to ... make a global
 decision.

 This is, from the technical perspective, a small change.  It is a major
 philosophical change: actively preventing editors from making content
 changes on the home project.  It's also a contradictory change: it makes it
 more complex to edit content, but at the same time major investment of
 developer time and talent is being invested into making the editing process
 simpler and more intuitive through Visual Editor (and ultimately projects
 like Flow and Echo).  Infoboxes (and ultimately lists) are an integral part
 of the content of Wikipedias; making text easier to edit and other integral
 content more difficult to edit suggests that, at minimum, there are some
 fairly significant philosophical and practical conflicts within the overall
 platform development. From the community perspective, a simpler editing
 interface has been a community priority for almost as long as Wikipedia has
 been in existence. It is good to see the WMF putting its (much improved)
 financial resources into a project that has been near the top of the
 editorial wish list for so long, and that investment has very good
 prospects of paying off with both editor retention and editor recruitment.
 Unless I've missed something, that's still a key metric for measuring
 success.  I agree that Wikidata is cool (to use others' expressions), but
 I've not seen anything indicating it is attracting new editors; instead it
 seems to be drawing in editors from other WMF projects, who are now doing
 less editing in their home projects.  I'd hope that is a short-term
 change as Wikidata develops as a project.

 I suppose what I am saying here is that Wikidata doesn't seem to be working
 within the articulated master vision of the platform (which focuses on
 simplifying the editorial process), and absent the ability to edit the
 wikidata on the project where it appears, I don't see how it's going to get
 there.  It doesn't make Wikidata any less of a great idea, and I still
 think it has potential for new projects to build content. I'm just having a
 hard time seeing where it's fitting with everything else that is going on,
 if data can't be changed by using real words  directly on the wikipedia
 project.


 What I am looking for is a good, plain-English explanation of how these two
 different directions in software development are not divergent, and how
 they are intended to co-exist without one adversely affecting the
 effectiveness of the other.


  Since you are saying that our communication has not been sufficient, I
  would be very glad to hear which channels we have missed so that we can
 add
  them in the future.
 
 
Since Wikidata phase 2 is actually a less intrusive change than phase
  1,
and based on the effectiveness of the discussion about phase 2 on the
English Wikipedia so far, I think that a post-deployment discussion
 is
   the
right way to go.
  
   In what way is this less intrusive?  Phase 1 changed the links to other
   projects beside articles, a task that was almost completely done by
 bots,
   and did not in any way affect the ability to edit or to modify

Re: [Wikitech-l] Deployment highlights - week of April 8th

2013-04-10 Thread Denny Vrandečić
2013/4/10 Risker risker...@gmail.com

 On 9 April 2013 12:15, Denny Vrandečić denny.vrande...@wikimedia.de
 wrote:

  Risker,
 
  I find myself unconvinced by your argumentation as I perceive it as
  inconsistent.
 
  On the one hand, you suggest that before we enable the option to access
  data from Wikidata using either Lua or a parser function should be
  discussed and decided by the community beforehand - the same community,
  that has been informed since mid 2011 that this change is coming. You
  suppose that the community can actually come together and decide this
  globally.
 

  On the other hand, you are not trusting the community with the use of the
  same feature. You say they would weaponize the feature, that the
  community will be unable to adapt to the new feature, and that it needs
 to
  discuss first how to use it, and for deployment to wait a few months (I
 do
  not fully understand why you assume that a few months will be enough to
  sort things out). You seem to assume that a wiki as large and active as
 the
  English Wikipedia is not resilient enough to absorb the rather minor
  technical change we are introducing.
 
  It is, technically, a minor change. Socially it can lead to bigger
 changes
  -- but I found it hard to believe that anyone can predict the actual
 effect
  on the English Wikipedia community. This has to be seen and experienced,
  and I, for one, trust the English Wikipedia community to be as awesome as
  always, and to absorb and use this new features in ways no one has even
  imagined yet.
 

 I'll just quickly point out the dichotomy in what you're saying here: first
 you say that you doubt the project can come together and make a global
 decision, and then you say that it is resilient enough to ... make a global
 decision.


No, this is not what I am saying.

I am saying that the English Wikipedia is resilient enough to absorb such a
change after it happened. I would actually be a bit disappointed if that
would happen through a global and absolute decision on whether to replace
all template parameters through Wikidata, or to not use Wikidata at all. I
see a lot of middle ground there, which can be decided case by case,
Wikiproject per Wikiproject, template per template, article per article,
and even per single parameter in a template call.

I even hold the notion of a single English Wikipedia community to be
misleading. There are many overlapping communities working on the different
parts of the project. I expect that some of them might embrace the new
features that Wikidata offers, and others might less so. And that is OK.

If editors of classical composer articles don't want infoboxes for them, so
be it. Wikidata does not require them. It won't take long for these editors
to figure out that they can use Wikidata as an argument *against* having
Infoboxes: after all, if you want an Infobox just go to Wikidata. If the
editor community of Egyptian cities prefers to keep their mayors up to date
through Wikidata though, because they are more comfortable in Egyptian
Arabic, French, or Arabic, but still edit a bit on English - as so many do
- why deny them?

There are many different ways Wikidata will interact with existing
workflows. I can envision some, but I expect the creativity of Wikipedians
to go well beyond that, and amaze me again. But this can only happen if we
let Wikidata and Wikipedia grow together and co-evolve. If we wait a few
months to let Wikidata mature, there is a serious threat of the two
projects to grow too much apart.


 I suppose what I am saying here is that Wikidata doesn't seem to be working
 within the articulated master vision of the platform (which focuses on
 simplifying the editorial process)


Simplifying editing and reducing maintenance costs for the Wikipedias are
explicit goals of Wikidata. Obviously the simplest edit is the one you
don't have to do. Furthermore, simplifying template calls makes the job of
the Visual Editor team easier. Also Wikidata provides an API that makes
inline editing possible. James and I are talking with each other, and we
are making sure that the vision does not diverge.


 And yes, from the perspective
 of editors, infoboxes are part of the content of the article.  The
 technology change may be minor, but its use means changing the core anyone
 can edit philosophy that has created, and constantly renewed and
 developed, the wikipedia projects.


In that matter, we are currently failing. The often screen-filling Infobox
invocation code at the top of an article, which is displayed when you click
on edit, has scared off many potential contributors. Wikidata is going to
provide the means to improve the situation considerably.


I apologize to Denny for my
 being too much of a word wonk, and perhaps spending too much time reading
 political history.


Thank you for the explanation. As a non-native speaker I did not have the
same connotation and was thus confused by the strong reaction.

Cheers,
Denny

Re: [Wikitech-l] Ideating on GSOC project : Mobilize Wikidata

2013-04-09 Thread Denny Vrandečić
I would hope that the widgets we have developed for the Wikidata desktop UI
- especially the Entity Selector widget - would be reusable on mobile.

It would also be extremely good to look into what the mobile team is doing
for Wikipedia. Since both are MediaWiki in the backend anyway, I would
assume that the Wikipedia Mobile code has a lot of reusable parts, or could
even be a common starting ground. Brion, did you have a look at how
feasible that would be?

I disagree with Nischay -- I don't think that's too big for a GSoC. It
really depends on where you draw the line and how perfect you want to be,
but we would define that in a way that makes it feasibly sized.




2013/4/9 Nischay Nahata nischay...@gmail.com

 At first glance this looks too big a project for GSoC.


 On Tue, Apr 9, 2013 at 1:12 PM, Pragun Bhutani pragu...@gmail.com wrote:

  Hi,
 
  I plan to propose a GSOC project through Wikimedia this year, based
 around
  the idea of mobilizing Wikidata. I'd like to share my understanding of
 the
  problem and gain some feedback on that, as well as some more input from
 the
  community.
   *Problem at hand:*
  Unlike Wikipedia, which displays information as mainly text, WIkidata
  presents information in the form of a grid of divs. While the
  MobileFrontend extension prints a mobile friendly version of Wikipedia,
  there needs to be a separate extension for Wikidata.
 
  This extension needs to be tailored to print these grids instead of text
  and lay them out in a way that's suitable for a mobile device.
 
  Additionally, the design should allow the addition of/should include
 'edit'
  functionality tailored to suit a mobile device.
 
 - Has there been any previous work done on this front that I could
 refer
 to?
 
  I look forward to receiving your feedback.
 
  Thanks,
  Pragun Bhutani
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l




 --
 Cheers,

 Nischay Nahata
 nischayn22.in
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Deployment highlights - week of April 8th

2013-04-09 Thread Denny Vrandečić
Risker,

I find myself unconvinced by your argumentation as I perceive it as
inconsistent.

On the one hand, you suggest that the option to access data from Wikidata
using either Lua or a parser function should be discussed and decided by
the community before we enable it - the same community that has been
informed since mid 2011 that this change is coming. You suppose that the
community can actually come together and decide this globally.

On the other hand, you are not trusting the community with the use of the
same feature. You say they would weaponize the feature, that the
community will be unable to adapt to the new feature, that it first needs to
discuss how to use it, and that deployment should wait a few months (I do
not fully understand why you assume that a few months will be enough to
sort things out). You seem to assume that a wiki as large and active as the
English Wikipedia is not resilient enough to absorb the rather minor
technical change we are introducing.

It is, technically, a minor change. Socially it can lead to bigger changes
-- but I find it hard to believe that anyone can predict the actual effect
on the English Wikipedia community. This has to be seen and experienced,
and I, for one, trust the English Wikipedia community to be as awesome as
always, and to absorb and use these new features in ways no one has even
imagined yet.

Also, to come back to the issue of deploying immature code to Wikipedia:
this is absolutely intentional. You say you want a mature system to be
deployed on Wikipedia, not one in its infancy. I would ask you to
reconsider that wish. I have been there: we have developed Semantic
MediaWiki (SMW), with the intention to push it to the Wikipedias, and the
system became highly usable and extremely helpful. Be it NASA, Yu-Gi-Oh
fans, or the Wikimedia Foundation, SMW has found hundreds of uses and tens
of thousands of users. And SMW got better and better for these use cases --
at the expense of becoming less and less likely to be deployed on the
Wikipedias.

I would prefer to avoid this mistake a second time. Deploy early, deploy
often - and listen closely to the feedback of the users and analyse the
actual usage numbers. MZMcBride raises a number of very real issues that
need to be tackled soon (I disagree that they are blockers, but I agree
that they are important, and we are working on them). This approach has been
quite successful on Wikidata itself, and also for what we have deployed to
the Wikipedias so far.

In all seriousness: thank you for your concerns. Having read them carefully,
I find that I do not share them, and that I do not see sufficient reason to
delay deployment based on the points you mention.

A few minor comments inline in your mail below.


2013/4/8 Risker risker...@gmail.com

 On 6 April 2013 17:27, Denny Vrandečić denny.vrande...@wikimedia.de
 wrote:
   Or, put differently, the Wikidata proposal has been published nearly two
  years ago. We have communicated on all channels for more than one year. I
  can hardly think of any technical enhancement of Wikipedia - ever - which
  was communicated as strongly beforehand as Wikidata. If, in that time,
 the
  community has not managed to discuss the topic, it might be because such
  changes only get discussed effectively after they occur.
 

 All channels isn't really correct, although I can respect how difficult
 it is to try to find a way to communicate effectively with the English
 Wikipedia community.




 I do not recall ever reading about Wikidata on Wiki-en-L (the English
 Wikipedia mailing list), and only rarely on Wikimedia-L (mainly to invite
 people to meetings on IRC, but less than 5% of English Wikipedians use
 IRC).


We have been on Signpost several times, we have been on the village pump.
This is considered sufficient on the other Wikipedias.

A search over the mailing list archives shows that both lists you mentioned
had discussions about Wikidata. They contained links to pretty
comprehensive pages on Meta. There are pages inside of the English
Wikipedia discussing Wikidata. Furthermore, we had reached out in many
talks, e.g. at the last two Wikimanias, but also in smaller local events,
and always supported Wikipedians to talk about it in their local
communities.

Since you are saying that our communication has not been sufficient, I
would be very glad to hear which channels we have missed so that we can add
them in the future.


  Since Wikidata phase 2 is actually a less intrusive change than phase 1,
  and based on the effectiveness of the discussion about phase 2 on the
  English Wikipedia so far, I think that a post-deployment discussion is
 the
  right way to go.

 In what way is this less intrusive?  Phase 1 changed the links to other
 projects beside articles, a task that was almost completely done by bots,
 and did not in any way affect the ability to edit or to modify the content
 of the articles. Phase 2 is intended to directly affect content and the
 manner in which it is edited

[Wikitech-l] Pushing Technical changes to Wikimedia projects

2013-04-09 Thread Denny Vrandečić
Technical changes on the Wikimedia projects can be hairy. We are currently
having a discussion about the Wikidata deployment to the Wikipedias, and
there have been many examples in the past of deployments that raised
discussions.

One of my statements in this discussion is that the a priori discussion of
such features is highly undemocratic. What I mean by that is that design
and deployment decisions are often made by a very small group, which is in
the best case a part of the affected community, but, in many cases, even
external to the affected community. So the decisions are made by a group
that neither represents nor is constituted by the community - which is what
I mean by undemocratic.

This has repeatedly raised criticism. And I think that criticism is often
unfair. Additionally, it is usually true (which does not make it any fairer,
though).

I thought that in order to discuss these design decisions with the
community beforehand, telling them on their respective village pump would be
sufficient. Not so, it seems. No single channel would find acceptance as the
way to communicate with the community. This, obviously, means that it is not
actionable to communicate with the community.

What about setting up a community selected body of representatives to
discuss such issues beforehand? At first, it sounds like a good idea - but
the issue is, it makes the process only more complicated without at all
resolving the underlying issues. Does anyone really think that such a body
would stop the criticism before or after the deployment of the change in
question? Yeah, right. Doesn't change a thing.

So, what do I want to achieve with this mail? Merely to ask some community
members to be a bit more constructive in their comments. Claiming that the
product managers and designers have no idea of the Wikimedia communities
and the use of wikis is often neither helpful nor truthful.

What would be even better would be to come up with processes or mechanisms
to avoid these issues in the future. I would be very glad if the people who
are often critically accompanying such changes would help in building
effective channels for their discussion.

Any thoughts?

-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Deployment highlights - week of April 8th

2013-04-06 Thread Denny Vrandečić
I fully agree with Robert and Phoebe in this matter. Wikidata is an option.
Requiring that rules on how to use Wikidata be agreed on before it is
switched on simply won't work, because there is not sufficient interest and
experience for this discussion.

Or, put differently, the Wikidata proposal has been published nearly two
years ago. We have communicated on all channels for more than one year. I
can hardly think of any technical enhancement of Wikipedia - ever - which
was communicated as strongly beforehand as Wikidata. If, in that time, the
community has not managed to discuss the topic, it might be because such
changes only get discussed effectively after they occur.

I base this statement on having studied previous introductions of new
technical features to the Wikipedias (see my paper with Mathias
Schindler), like the category system or parser functions.

Since Wikidata phase 2 is actually a less intrusive change than phase 1,
and based on the effectiveness of the discussion about phase 2 on the
English Wikipedia so far, I think that a post-deployment discussion is the
right way to go.

Also, a very important consideration is raised by Phoebe: Wikidata is in
its current form still in its infancy, and for a well developed project
like the English Wikipedia this means that the actual usage (and effect) is
expected to be minimal in the current stage. The deployment of phase 2 this
week would merely be a start for an organic co-evolution of Wikidata and
the Wikipedias in the months and years to come.

But this can only happen 'in the wild', as a priori debates about the
possible usages of such features will remain not only too speculative, but
also highly undemocratic due to the minimal engagement of the community in
advance.

This email cannot resolve any worries surrounding the deployment of
Wikidata phase 2, but then again, no amount of discussion could. But I hope
it justifies the decision.

Cheers,
Denny,
who wrote this Email on his mobile phone because he didn't take his
computer to his vacations :-)
On Apr 6, 2013 10:40 AM, Robert Rohde raro...@gmail.com wrote:

 Risker,

 You are right that it will undoubtedly get used as soon as it is available,
 and it is unfortunate that it will presumably get deployed without any
 agreement having been reached on wiki about how it should be used.

 However, when it comes to an area like infoboxes, I think a lot of hardship
 could be avoided if the community can ultimately come together and adopt a
 sensible set of guidelines for how wikidata should be used.

 For example, one of the reasons Commons is able to work reasonably well
 within the global context is that every wiki ultimately has the option of
 ignoring it and uploading locally preferred files instead.  I would argue
 that the use of wikidata in infoboxes should follow much the same
 principle.  Specifically, templates ought to be engineered such that values
 are obtained from wikidata only when no corresponding value is present
 locally.  So for example, one might have an infobox with a field for
 birthplace.  If enwiki specifies birthplace = Athens, Georgia, then enwiki
 will be guaranteed to display Athens, Georgia.  And the template should
 query wikidata only if the field is omitted.  So, if birthplace= is left
 blank, then we might ask wikidata for the answer, and can use the value
 recorded there, but only so long as no value was filled in locally.  That's
 the kind of behavior that I think makes sense for infoboxes.  Decisions
 about when to rely on local values and when to rely on wikidata are
 obviously an issue were guidelines are needed.  For example, I'd argue that
 wikidata should never be used to define elements that are likely to be
 controversial or subject to dispute (e.g. Gdansk/Danzig).  It could be a
 reasonable policy that controversial data values should always be retained
 using strictly local input.  That would limit the potential for
 controversies over wikidata values from spilling into Wikipedia.
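A minimal sketch of the "local value wins" fallback described above, in
plain Lua: the Wikidata lookup is a stub, since the exact client interface
exposed to templates/Lua is not fixed here, and the property id is only an
example.

  local function fetchFromWikidata(propertyId)
    -- stub standing in for whatever client lookup ends up being available
    return nil
  end

  local function infoboxField(localValue, propertyId)
    if localValue ~= nil and localValue ~= "" then
      return localValue                   -- locally supplied value always wins
    end
    return fetchFromWikidata(propertyId)  -- queried only when the field is blank
  end

  print(infoboxField("Athens, Georgia", "P19"))  --> Athens, Georgia
  print(infoboxField("", "P19"))                 --> nil (would be the Wikidata value)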

 Unfortunately, I suspect we are going to take a while finding our way when
 it comes to wikidata interactions, though that doesn't necessarily mean we
 won't ultimately have coherent policies on its use.  While obviously a bit
 late in the game to be starting now, I think many people would welcome a
 discussion on wiki of what best practices for the use of wikidata ought to
 look like, and I'm sure your input could be valuable to that discussion.

 -Robert Rohde
 aka Dragons_flight
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikidata queries

2013-04-03 Thread Denny Vrandečić
2013/3/28 Petr Onderka gsv...@gmail.com

  How will the queries be formatted? Do I understand it correctly that a
 QueryConcept is a JSON object?


Not decided yet. Probably it will be a JSON object, though, and edited
through an UI.


 Have you considered using something more in line with the format of
 action=query queries?


Yes, but it didn't seem a good fit, especially because the query module
works on the page metadata and the queries we discuss here work on the item
data.


 Though I guess what you need is much more complicated and trying to
 fit it into the action=query model wouldn't end well.


I am not even sure it is much more complicated. But I am very worried it is
too different.

Cheers,
Denny



 Petr Onderka
 [[en:User:Svick]]

 2013/3/28 Denny Vrandečić denny.vrande...@wikimedia.de:
  We have a first write up of how we plan to support queries in Wikidata.
  Comments on our errors and requests for clarifications are more than
  welcome.
 
  https://meta.wikimedia.org/wiki/Wikidata/Development/Queries
 
  Cheers,
  Denny
 
  P.S.: unfortunately, no easter eggs inside.
 
  --
  Project director Wikidata
  Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
  Tel. +49-30-219 158 26-0 | http://wikimedia.de
 
  Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
  Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
 unter
  der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
  Körperschaften I Berlin, Steuernummer 27/681/51985.
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Wikidata queries

2013-03-28 Thread Denny Vrandečić
We have a first write up of how we plan to support queries in Wikidata.
Comments on our errors and requests for clarifications are more than
welcome.

https://meta.wikimedia.org/wiki/Wikidata/Development/Queries

Cheers,
Denny

P.S.: unfortunately, no easter eggs inside.

-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Pronunciation recording tool wanted

2013-03-27 Thread Denny Vrandečić
I would strongly prefer not to lend support to the belief that everything
under the sun is copyrightable. We should, in my opinion, take the position
that trivial things like these are not copyrightable and should put a CC0
on them. We should not set an example and establish a practice that single
words can be copyrightable. At all. I think that, by defaulting to that
assumption, we support the idea that these things can be legally protected
under copyright law, and by this we do a strong disservice to our actual
mission.

Sorry for the rant and for the not-completely-on-topicness.

Cheers,
Denny


2013/3/13 Matthew Flaschen mflasc...@wikimedia.org

 On 03/13/2013 03:17 AM, Nikola Smolenski wrote:
  Why CC0 (public domain)?  Your example
  (http://commons.wikimedia.org/wiki/File:Fr-go%C3%BBter.ogg) is CC-BY,
  which is not public domain and requires attribution (which I think all
  Wikimedia projects do for text).  I'd say CC-BY-SA or CC-BY would be a
  better default.
 
  I am not sure about copyrightability of a pronunciation of a single word.

 Neither am I, but if it's licensed under one of those and a court finds
 it's not copyrightable, so be it.  It still seems reasonable to use an
 attribution license.

 Matt Flaschen

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] EasyRDF light

2013-03-21 Thread Denny Vrandečić
Hey,

As you may remember, we were asking about EasyRDF in order to use it in
Wikidata.

We have now cut off the pieces that we do not need, in order to simplify
the review. Most of the interesting parts of EasyRDF regarding security
issues -- parsing, serving, etc. -- have been removed.

Our code is here. We would invite comments and reviews.

https://github.com/Wikidata/easyrdf_lite

Cheers,
Denny



-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] RFC: image information API

2013-03-11 Thread Denny Vrandečić
I certainly don't want to cookie-lick this use case for Wikidata / Wikibase,
but I think that using the Wikibase extension for this might be easier than
it looks. The one major point would be a bit of coding to allow claims for
the media namespace. But indeed, right now we do not have the bandwidth to
do this, and if we are to work on it, it would be deferred to the Summer
(the design can be started earlier, obviously).

On the other hand, as said, we don't want to cookie-lick the use case: if
there is a quick way to implement it, then it is also clear that
moving well-defined structured data from one system to the other is far
easier than creating that structured data in the first place, and thus
wasted human work would be minimized anyway. As long as we are
creating such well-defined structured data, I am happy either way.





2013/3/10 Yuvi Panda yuvipa...@gmail.com

 While I understand that wikibase would be 'ideal' (allowing reads *and*
 writes), I do not know if the WikiData people have the personell bandwidth
 to do that. The separate extension outlined in this RFC would be a bit
 hackish, but still faaar more efficient than the current solution of having
 to retrieve the HTML and parse it again (or worse, parse the Wikitext
 itself). Would also be incredibly more efficient to get the data for
 multiple images, since you could feed a generator into this.

 +1 for implementing this now :)

 --
 Yuvi Panda T
 http://yuvi.in/blog
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Indexing structures for Wikidata

2013-03-07 Thread Denny Vrandečić
As you probably know, the search in Wikidata sucks big time.

Until we have created a proper Solr-based search and deployed on that
infrastructure, we would like to implement and set up a reasonable stopgap
solution.

The simplest and most obvious signal for sorting the items would be to
1) make a prefix search
2) weight all results by the number of Wikipedias they link to

This should usually provide the item you are looking for. Currently, the
search order is random. Good luck with finding items like California,
Wellington, or Berlin.
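As a toy illustration of that ranking in plain Lua (the labels, ids and
weights below are invented, and the real stopgap would of course do this in
SQL against wb_terms with a weight column rather than in memory):

  -- Prefix-match the label, then order by how many Wikipedias link to the item.
  local terms = {
    { id = "Q1001", label = "Berlin",                weight = 200 },
    { id = "Q1002", label = "Berlin, New Hampshire", weight = 15 },
    { id = "Q1003", label = "Berlingo",              weight = 3 },
  }

  local function prefixSearch(query)
    local hits, q = {}, query:lower()
    for _, t in ipairs(terms) do
      if t.label:lower():sub(1, #q) == q then
        hits[#hits + 1] = t
      end
    end
    table.sort(hits, function(a, b) return a.weight > b.weight end)
    return hits
  end

  for _, hit in ipairs(prefixSearch("berlin")) do
    print(hit.id, hit.label, hit.weight)
  end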

Now, what I want to ask is, what would be the appropriate index structure
for that table. The data is saved in the wb_terms table, which would need
to be extended by a weight field. There is already a suggestion (based on
discussions between Tim and Daniel K if I understood correctly) to change
the wb_terms table index structure (see here 
https://bugzilla.wikimedia.org/show_bug.cgi?id=45529 ), but since we are
changing the index structure anyway it would be great to get it right this
time.

Anyone who can jump in? (Looking especially at Asher and Tim)

Any help would be appreciated.

Cheers,
Denny

-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] EasyRDF dependency

2013-02-21 Thread Denny Vrandečić
After evaluating different options, we want to use the EasyRDF library for
generating Wikidata's RDF export: http://www.easyrdf.org/

We only need a part of it -- whatever deals with serializers. We do not
need parsers, anything to do with SPARQL, etc.

In order to minimize reviewing and potential security holes, is there an
opinion on what is the better approach:

* just use it as a dependency, review it all, and keep it up to date?

* fork the library, cut out what we do not need, and keep up with work
going on in the main branch, backporting it, and thus reduce the amount of
code in use?

How is this handled with other libraries, like Solarium, as a reference?

Cheers,
Denny


-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Using wiki pages as databases

2013-02-19 Thread Denny Vrandečić
2013/2/19 Tim Starling tstarl...@wikimedia.org

 On 19/02/13 21:11, MZMcBride wrote: Has any thought been given to what to
 do about this? Will it require
  manually paginating the data over collections of wiki pages? Will this be
  something to use Wikidata for?

 Ultimately, I would like it to be addressed in Wikidata. In the
 meantime, multi-megabyte datasets will have to be split up, for
 $wgMaxArticleSize if nothing else.



I expect that, in time, Wikidata will be able to serve some of those
use cases, e.g. the one given by the languages module on Wiktionary. I am
quite excited about the possibilities that access to Wikidata together with
Lua will be enabling within a year or so... :)

Not all use cases should or will be handled by Wikidata, obviously,
but some of those huge switches can definitely be saved in Wikidata items.
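For anyone unfamiliar with the pattern: a "huge switch" here means a template
or module that maps one key to one value across thousands of branches. Kept
as data, the same thing is just a lookup, as in this plain-Lua toy (the table
is a stand-in for data that could eventually live in Wikidata items or a
data page):

  -- A data table plus a lookup replaces a giant #switch/if-chain.
  local languageNames = {   -- tiny excerpt; the real mapping has thousands of rows
    de = "German",
    fr = "French",
    hr = "Croatian",
  }

  local function languageName(code)
    return languageNames[code] or ("unknown code: " .. code)
  end

  print(languageName("hr"))  --> Croatian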
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Lua rollout to en.wikipedia.org and a few others

2013-02-18 Thread Denny Vrandečić
For later.
As discussed before, access via HTTP is probably not an option for the
Wikimedia wikis (and they are our priority), but for other wikis it will
be crucial.

Cheers,
Denny



2013/2/18 Yuri Astrakhan yuriastrak...@gmail.com

 How useful would it be for Lua to have access to the query/content/parser
 API?

 I am suspecting there could be a lot of creative usages of this, including
 getting data from the wikidata (which won't have to do anything special to
 enable this)


 On Mon, Feb 18, 2013 at 7:15 AM, Jens Ohlig jens.oh...@wikimedia.de
 wrote:

 
  On 18.02.2013 at 13:06, Tim Starling tstarl...@wikimedia.org wrote:
 
   On 17/02/13 00:44, Luca Martinelli wrote:
   As of now, we write templates and we put data into them, article by
   article - but this is going to change during 2013, with the
   implementation of Wikidata, since we'll put data in the common repo
   and then just call them on local projects.
  
   Is this new prog language going to affect this? I mean, will it help
   us to write new templates which will call those data more easily?
  
   I believe that some members of the Wikidata team are interested in
   allowing Lua modules to fetch data directly from Wikidata. We have
   added a couple of hooks to Scribunto to support this, but the library
   hasn't been designed or implemented yet, as far as I can tell.
 
 
  It is in the works and I would love if you find the time to review it
 once
  it's up on Gerrit (which should happen this week).
 
  Lua scripting is indeed a big deal for us at Wikidata — structured data
  from Wikidata is available as JSON which can be thrown around and
 iterated
   over as Lua tables. This has the potential to unleash some niftiness in
  Wikidata-based Infobox templates.
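To illustrate "iterated over as Lua tables": once the entity JSON is decoded,
walking it is ordinary table traversal. The structure below is a simplified,
hard-coded stand-in, not the exact Wikibase serialization.

  local entity = {
    labels = { en = "Berlin", de = "Berlin" },
    sitelinks = { enwiki = "Berlin", dewiki = "Berlin" },
  }

  for lang, label in pairs(entity.labels) do
    print(lang, label)
  end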
 
  --
  Jens Ohlig
  Software developer Wikidata project
 
  Wikimedia Deutschland e.V.
  Obentrautstr. 72
  10963 Berlin
  www.wikimedia.de
 
  Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
 
  Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
  unter der Nummer 23855 Nz. Als gemeinnützig anerkannt durch das
  Finanzamt für Körperschaften I Berlin, Steuernummer 27/681/51985.
 
 
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Extending Scribunto with new Lua functions

2013-02-06 Thread Denny Vrandečić
There will be (actually, there is already) a web API offering the kind of
data required, and for client wikis not running on WMF infrastructure this
will eventually be the way to access the data.

For WMF clients, like the Wikipedias, our decision was not to use HTTP web
requests, but to internally get the data directly from the respective DB.
This was deemed the deciding factor wrt performance: internal DB queries vs
HTTP requests.

The cost of serializing to and from JSON was not deemed relevant, as you
argue for.

This sums up my understanding of the situation; I might easily be wrong.

Cheers,
Denny





2013/2/5 Gabriel Wicke gwi...@wikimedia.org

 On 02/05/2013 02:35 AM, Jens Ohlig wrote:
  I'm wondering if some of the specialized functionality can be avoided by
  fetching JSON data from wikibase / wikidata through a web API. This
  would be more versatile, and could be used by alternative templating
  systems.
 
  This was actually my first idea! However, since the client (i.e.
  Wikipedia) currently must have access to the database at the repo (i.e.
  Wikidata) anyway, this would result in a huge performance loss without
  any obvious gain.

 Jens,

 I am not so sure about the potential performance loss. I am guessing
 that you fear the overheads of JSON serialization, which tends to be
 relatively low with current libraries. Moving or accessing PHP objects
 to/from Lua involves some overheads too, which is avoided when directly
 decoding in Lua.

 Apart from making the data generally available, using a web API means
 that the execution can be parallelized / distributed and potentially
 cached. It also tends to lead to narrow interfaces with explicit
 handling of state. Is direct DB access just needed because an API is
 missing, or are there technical issues that are hard to handle in a web
 API?

 Adding specialized Wikidata methods to Lua has a cost for users and
 developers. Users probably need to learn larger and less general APIs.
 Developers need to continue to support these methods once the content is
 there, which can be difficult if the specialized methods don't cleanly
 map to a future web API.

 Gabriel

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] RFC: Parsoid roadmap

2013-01-30 Thread Denny Vrandečić
Thank you for the Roadmap, Gabriel! There is some exciting and
interesting stuff in there.

I am really happy that the roadmap would allow us this year to heavily
optimize Wikidata-related changes on the Wikipedias, i.e. we would not
need to reparse the whole page when some data in Wikidata changes, and
could thus possibly afford to keep all language editions more current.
That would be awesome -- until now I had been assuming that the
Wikipedia articles would only be updated on the next purge, whenever
that happens. We could optimize that based on the work of your team.

Thanks! Cheers,
Denny




2013/1/24 Gabriel Wicke gwi...@wikimedia.org

 Fellow MediaWiki hackers!

 After the pretty successful December release and some more clean-up work
 following up on that we are now considering the next steps for Parsoid.
 To this end, we have put together a rough roadmap for the Parsoid project
 at

   https://www.mediawiki.org/wiki/Parsoid/Roadmap

 The main areas we plan to work on in the next months are:

 Performance improvements: Loading a large wiki page through Parsoid into
 VisualEditor can currently take over 30 seconds. We want to make this
 instantaneous by generating and storing the HTML after each edit. This
 requires a throughput that can keep up with the edit rates on major
 wikipedias (~10 Hz on enwiki).

 Features and refinement: Localization support will enable the use of
 Parsoid on non-English wikipedias. VisualEditor needs editing support
 for more content elements including template parameters and extension
 tags. As usual, we will also continue to refine Parsoid's compatibility
 in round-trip testing and parserTests.

 Apart from these main tasks closely connected to supporting the
 VisualEditor, we also need to look at the longer-term Parsoid and
 MediaWiki strategy. Better support for visual editing and smarter
 caching in MediaWiki's templating facilities is one area we plan to look
 at. We also would like to make it easy to use the VisualEditor on small
 mediawiki installations by removing the need to run a separate Parsoid
 service.

 A general theme is pushing some of Parsoid's innovations back into
 MediaWiki core. The clean and information-rich HTML-based content model
 in particular opens up several attractive options which are discussed in
 detail in the roadmap.

 Please review the roadmap and let us know what you think!

 Gabriel and the Parsoid team

 --
 Gabriel Wicke
 Senior Software Engineer
 Wikimedia Foundation

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Fwd: [Wikidata-l] getting some stats for the Hungarian Wikipedia

2013-01-28 Thread Denny Vrandečić
(not sure if this is the right list)

A question was raised about adding Wikidata pagecount data to the
dumps. As far as I can tell Wikivoyage is already included in the data
(although not mentioned on the description page), but I could not find
Wikidata (maybe I am just missing the right abbreviation, maybe it is
not there).

Can someone help with this? Should I file a bug on Bugzilla?

Cheers,
Denny


-- Forwarded message --
From: Ed Summers e...@pobox.com
Date: 2013/1/28
Subject: Re: [Wikidata-l] getting some stats for the Hungarian Wikipedia
To: Discussion list for the Wikidata project. 
wikidat...@lists.wikimedia.org


On the subject of stats are there any plans to add wikidata access stats to:

http://dumps.wikimedia.org/other/pagecounts-raw/

Or are they available elsewhere?

//Ed

On Mon, Jan 28, 2013 at 10:31 AM, Nikola Smolenski smole...@eunet.rs
wrote:
 On 28/01/13 15:39, Lydia Pintscher wrote:

 Is anyone interested in getting us some stats for the deployment on
 the Hungarian Wikipedia? There is a database dump at
 http://dumps.wikimedia.org/backup-index.html from the 22nd of January
 that could be used. I'm interested in the effect Wikidata had so far
 on this one Wikipedia.


 Not from dumps, but from Toolserver, I don't see some reduction in bot
 activity. Number of bot edits in last 12 months:

 201201  61527
 201202  48472
 201203  3
 201204  60875
 201205  56364
 201206  56483
 201207  49836
 201208  50862
 201209  39235
 201210  44943
 201211  37492
 201212  52815
 201301  40258


 ___
 Wikidata-l mailing list
 wikidat...@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikidata-l

___
Wikidata-l mailing list
wikidat...@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata-l



-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Jenkins doing code-review?

2013-01-09 Thread Denny Vrandečić
I am mildly surprised by Jenkins giving a +1 Code Review (not Verification)
here.

https://gerrit.wikimedia.org/r/#/c/33505/

Just wondering if there is maybe a setup issue.

Cheers,
Denny


-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Jenkins doing code-review?

2013-01-09 Thread Denny Vrandečić
Ah, thanks, that explains it.


2013/1/9 Brad Jorsch bjor...@wikimedia.org

 2013/1/9 Denny Vrandečić denny.vrande...@wikimedia.de:
  I am mildly surprised by Jenkins giving a +1 Code Review (not
 Verification)
  here.
 
  https://gerrit.wikimedia.org/r/#/c/33505/
 
  Just wondering if there is maybe a setup issue.

 Note that was on December 17. I don't remember the exact dates, but
 for a short time during the transition from "Jenkins runs unit tests
 on every submission" to "Jenkins runs only lint tests for submissions
 from untrusted people" it was configured to give CR+1 for linting and
 V+1 for unit tests.

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] jenkins: unit test whitelist

2012-12-19 Thread Denny Vrandečić
Just curious -- I probably missed it in a previous mail -- why are the
tests switched off?

To preserve processing power?
To speed up tests for the whitelisted?
Security concerns when running tests with arbitrary code?
Other?

Cheers,
Denny


2012/12/19 Antoine Musso hashar+...@free.fr

 Hello,

 As you surely have noticed, the unit tests for mediawiki/core are no
 more run when a patch is submitted.

 I have just enabled a feature that whitelist people to have unit tests
 run for them on patch submission.  The patch still need to be reviewed
 and approved with a CR+2 though.

 Basically any user with a @wikimedia.org or @wikimedia.de email address
 is whitelisted by default.  I have also added a few contractors using
 their personal emails and several long term users.

 The related change is:
   https://gerrit.wikimedia.org/r/#/c/39310/

 This is only enabled for mediawiki/core for now, I will look at applying
 such a whitelist on extensions too in January.

 There is no process to be added in the whitelist, I guess you could talk
 about it on IRC. If there is no obvious veto there, you could probably
 just amend layout.yaml in integration/zuul-config.git file and get it
 approved :)

 cheers,

 --
 Antoine hashar Musso


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] (Roughly) 2 week delay for MediaWiki 1.21wmf7

2012-12-19 Thread Denny Vrandečić
This looks very good for us. Just one thing: Phase 4 of 1.21wmf7 should
probably be Monday, January 14, not January 11.

Cheers,
Denny


2012/12/17 Rob Lanphier ro...@wikimedia.org

 Hi everyone,

 Because a number of people are planning to take time off for the
 holidays, I'd like to postpone the regular release cycle for 2 weeks,
 with 1.21wmf7 being a slightly longer window than normal, and 1.21wmf8
 stretched out a little bit as well to accommodate MLK day (January
 21).

 Here's what things will look like in the revised schedule:

 1.21wmf7:
 *  Wednesday, January 2 (test, test2, mediawiki.org, wikidata.org)
 *  Monday, January 7 (non-Wikipedia)
 *  Wednesday, January 9  (English Wikipedia)
 *  Monday, January 11 (other Wikipedia)

 1.21wmf8
 *  Wednesday, January 16 (test, test2, mediawiki.org, wikidata.org)
 *  Wednesday, January 23 (non-Wikipedia)
 *  Monday, January 28 (English Wikipedia)
 *  Wednesday, January 30 (other Wikipedia)

 We would then resume the normal cadence Monday, February 4 when we
 deploy 1.21wmf9.

 The nice thing about doing things this way is that the cycle stretches
 out the time that 1.21wmf7 and 1.21wmf8 sit on test2 over the normal
 cycle, which gives us a little more time to notice problems.

 Any objections?  I don't think there's room to keep the standard
 cycle, but there's plenty of tweaking around the edges we can make.

 Rob

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Bugzilla attachment: too many redirects

2012-12-13 Thread Denny Vrandečić
I get a "Too many redirects" error when trying to open this attachment

https://bug-attachment.wikimedia.org/attachment.cgi?id=11489

from this bug

https://bugzilla.wikimedia.org/show_bug.cgi?id=42955

Anyone an idea?

Cheers,
Denny

P.S.: I just saw that Daniel posted a bug on this

https://bugzilla.wikimedia.org/show_bug.cgi?id=43061

-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg unter
der Nummer 23855 B. Als gemeinnützig anerkannt durch das Finanzamt für
Körperschaften I Berlin, Steuernummer 27/681/51985.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Wikidata review status

2012-11-09 Thread Denny Vrandečić
Hi all,

we have a number of bug fixes, test fixes, and other changes where
some reviewing input would be much appreciated.

== Bugfixes ==
* There was some vanishing content
https://bugzilla.wikimedia.org/show_bug.cgi?id=41352 , and a
temporary fix for it. Here is a proper fix for the problem, reverting
the temporary fix and doing a proper merge:
https://gerrit.wikimedia.org/r/#/c/29981/
* Revisions sometimes had errors
https://bugzilla.wikimedia.org/show_bug.cgi?id=41244 ; here is a fix
for that: https://gerrit.wikimedia.org/r/#/c/30624/
* Use ContentHandler together with OAI. There are two changesets that
have both been reviewed and accepted by Brion, but they have not been
validated. We can go ahead and validate them ourselves, but I guess that is
bad practice. So if someone can validate these two changesets, it
would be awesome: https://gerrit.wikimedia.org/r/#/c/30722/ and
https://gerrit.wikimedia.org/r/#/c/30724/

== Better tests ==
* EditPages does not have enough tests! We have a changeset that does
not fully solve that issue, but improves it:
https://gerrit.wikimedia.org/r/#/c/29973/
* A core test was assuming wikitext in the main namespace, which is
corrected here: https://gerrit.wikimedia.org/r/#/c/28014/

== Refactoring ==
* New hook to let extensions override the ContentHandler model
https://gerrit.wikimedia.org/r/#/c/28199/
* Use the new hook https://gerrit.wikimedia.org/r/#/c/28201/

== Puppet and Vagrant setup ==
* We are working on polishing the Puppet script for Wikidata:
https://gerrit.wikimedia.org/r/#/c/30593/ (this will also lead
directly to the Vagrant setup)

== Ongoing discussions ==
* How will the data from Wikidata get to the Wikipedias?
** http://www.gossamer-threads.com/lists/wiki/wikitech/309727 --
seems to be the most current place
** http://meta.wikimedia.org/wiki/Wikidata/Notes/Change_propagation
- a bit out of date
** http://meta.wikimedia.org/wiki/Talk:Wikidata/Notes/Change_propagation
* ULS and caching do not go well together currently, and both are used
on Wikidata. Can you help us?
** https://bugzilla.wikimedia.org/show_bug.cgi?id=41451 (most of the
discussion seems here)
** http://www.gossamer-threads.com/lists/wiki/wikitech/310807
** https://gerrit.wikimedia.org/r/#/c/32030/

We invite input, comments, merges, and chocolate.

Cheers,
Denny


-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
unter der Nummer 23855 B. Als gemeinnützig anerkannt durch das
Finanzamt für Körperschaften I Berlin, Steuernummer 27/681/51985.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Data flow from Wikidata to the Wikipedias

2012-11-06 Thread Denny Vrandečić
Hi Tim,

thank you for the input.

Wikidata unfortunately will not contain all language links: a wiki can
locally override the list (by extending the list, suppressing a link
from Wikidata, or replacing a link). This is a requirement as not all
language links are necessarily symmetric (although I wish they were).
This means there is some interplay between the wikitext and the links
coming from Wikidata. An update to the links coming from Wikidata can
have different effects on the actually displayed language links
depending on what is in the local wikitext.

Now, we could possibly also save the effects defined in the local
wikitext (which links are suppressed, which are additionally locally
defined) in the DB as well, and then, when the central links change
externally, smartly combine the two and create the new correct list ---
but this sounds like a lot of effort. It would potentially save cycles
compared to today's situation. But the proposed solution does not *add*
cycles compared to today's situation either. Today, the bots that keep
the language links in sync basically incur a re-rendering of the page
anyway, so we would not be adding any cost on top of that. We do not
make matters worse with regard to server costs.
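
Just to picture the bookkeeping such a merge would need, it would look
roughly like this (a plain-Lua sketch; the shape of the locally defined
effects -- a suppress set and an add list -- is a made-up assumption,
not a design):

-- Sketch: combine the language links coming from Wikidata with the
-- effects defined in the local wikitext. The "localEffects" structure
-- below is invented for this sketch.
local function mergeLanguageLinks(central, localEffects)
  local result = {}
  for lang, title in pairs(central) do
    if not localEffects.suppress[lang] then
      result[lang] = title
    end
  end
  for lang, title in pairs(localEffects.add) do
    result[lang] = title          -- locally defined links win
  end
  return result
end

-- tiny demo
local central = { de = "Berlin", fr = "Berlin", nl = "Berlijn" }
local localEffects = {
  suppress = { nl = true },       -- a link suppressed locally
  add      = { hu = "Berlin" },   -- a link added locally
}
for lang, title in pairs(mergeLanguageLinks(central, localEffects)) do
  print(lang, title)
end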

Also, as Daniel mentioned, it would be an optimization that only works
for the language links. Once we add further data that will be available
to the wikitext, this will not work at all anymore.

I hope this explains why we think that the re-rendering is helpful.


Having said that, here's an alternative scenario: Assuming we do not
send any re-rendering jobs to the Wikipedias, what is the worst that
would happen?

To answer this, I first need the answer to this question: do the
Squids and caches hold their content indefinitely, or would the data,
in the worst case, just be out of sync for, say, up to 24 hours on a
Wikipedia article that didn't have an edit at all?

If we do not re-render, I assume editors will come up with their own
workflows (e.g. changing some values in Wikidata, going to their home
wiki and purging the affected page, or writing a script that gives them
a "purge my home wiki page" link on Wikidata), which is fine, and still
cheaper than initiating a re-render of all pages every time. It just
means that in some cases some pages will not be up to date.

So, we could go without re-rendering at all, if there is consensus
that this is the preferred solution and that this is better than the
solution we suggested.

Anyone having any comments, questions, or insights?

Cheers,
Denny



2012/11/5 Tim Starling tstarl...@wikimedia.org:
 On 02/11/12 22:35, Denny Vrandečić wrote:
 * For re-rendering the page, the wiki needs access to the data.
 We are not sure about how do to this best: have it per cluster,
 or in one place only?

 Why do you need to re-render a page if only the language links are
 changed? Language links are only in the navigation area, the wikitext
 content is not affected.

 As I've previously explained, I don't think the langlinks table on the
 client wiki should be updated. So you only need to purge Squid and add
 an entry to Special:RecentChanges.

 Purging Squid can certainly be done from the context of a wikidatawiki
 job. For RecentChanges the main obstacle is accessing localisation
 text. You could use rc_params to store language-independent message
 parameters, like what we do for log entries.

 -- Tim Starling


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l



-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
unter der Nummer 23855 B. Als gemeinnützig anerkannt durch das
Finanzamt für Körperschaften I Berlin, Steuernummer 27/681/51985.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] ULS and caching

2012-11-06 Thread Denny Vrandečić
Hi all,

Wikidata is planned as a multilingual resource, and we are using ULS
for switching languages. ULS is pretty cool; if you have not tried it
out yet, you definitely should.
ULS works great if you are a logged in user.
ULS on Wikidata does not work so well right now if you are not logged
in. Due to caching issues, the system sometimes switches your language
randomly.

There is a bug with quite some discussion going on:
https://bugzilla.wikimedia.org/show_bug.cgi?id=41451

There is a changeset to ULS providing one solution:
https://gerrit.wikimedia.org/r/#/c/32030/

There seems to be agreement that the solution could fix the issue
for Wikidata for now, but it is unclear whether it will scale to the
Wikipedias. This is a call for input and attention from anyone who
can help resolve the issue.

So far huge thanks to Daniel K., Mark, Niklas, Katie, Siebrand, Faidon
for pushing for solutions.

Cheers,
Denny


-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
unter der Nummer 23855 B. Als gemeinnützig anerkannt durch das
Finanzamt für Körperschaften I Berlin, Steuernummer 27/681/51985.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Data flow from Wikidata to the Wikipedias

2012-11-02 Thread Denny Vrandečić
Hi all,

Wikidata aims to centralize structured data from the Wikipedias in
one central wiki, starting with the language links. The main technical
challenge that we will face is to implement the data flow on the WMF
infrastructure efficiently. We invite peer review of our design.

I am trying to give a simplified overview here. The full description
is on-wiki: http://meta.wikimedia.org/wiki/Wikidata/Notes/Change_propagation

There are a number of design choices. Here is our current thinking:

* Every change to the language links in Wikidata is stored in the
wb_changes table on Wikidata
* A script (or several, depending on load), run per wiki cluster, checks
wb_changes, gets a batch of changes it has not seen yet, and
creates jobs for all pages that are affected in all wikis on the
given cluster (a rough sketch of this loop follows below)
* When the jobs are executed, the respective page is re-rendered and
the local recentchanges table is filled
* For re-rendering the page, the wiki needs access to the data.
We are not sure how to do this best: have it per cluster,
or in one place only?
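
As promised above, the dispatcher loop could look roughly like this
(a plain-Lua sketch with in-memory stand-ins for wb_changes, the
page-usage lookup, and the job queue; every name here is invented for
illustration and none of this is the actual implementation):

-- Sketch of the per-cluster dispatcher: take the batch of changes not
-- yet seen and queue a refresh job for every affected page on the
-- cluster. All tables and names below are stand-ins.
local wb_changes = {                -- stand-in for the wb_changes table
  { id = 1, item = "q64" },
  { id = 2, item = "q42" },
}
local usage = {                     -- stand-in: which pages use which item
  q64 = { { wiki = "dewiki", title = "Berlin" } },
  q42 = { { wiki = "enwiki", title = "Douglas Adams" },
          { wiki = "frwiki", title = "Douglas Adams" } },
}

local function enqueueRefreshJob(wiki, title, changeId)
  print(("job: re-render %s:%s (change #%d)"):format(wiki, title, changeId))
end

local function dispatchBatch(lastSeenId, batchSize)
  local handled = lastSeenId
  for _, change in ipairs(wb_changes) do
    if change.id > lastSeenId and change.id <= lastSeenId + batchSize then
      for _, page in ipairs(usage[change.item] or {}) do
        enqueueRefreshJob(page.wiki, page.title, change.id)
      end
      handled = change.id
    end
  end
  return handled                    -- remember how far this cluster got
end

dispatchBatch(0, 100)

The real thing would of course be a maintenance script talking to the
database and the job queue, but the shape of the loop is the same.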

We appreciate comments. A lot. This thing is make-or-break for the
whole project, and it is getting kinda urgent.

Cheers,
Denny


-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
unter der Nummer 23855 B. Als gemeinnützig anerkannt durch das
Finanzamt für Körperschaften I Berlin, Steuernummer 27/681/51985.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] A bot to create articles about species

2012-10-18 Thread Denny Vrandečić
For now, we have no plans for Wikidata to create articles. This would,
in my opinion, meddle too much with the autonomy of the Wikipedia
language projects.

What will be possible is to facilitate the creation of such bots, as
some data that might be used for the article might be taken from and
maintained in Wikidata, and the creation of templates that use data
from Wikidata.

Wikidata currently has no plans for creating text using natural
language generation techniques. We would love for someone else to do
this kind of awesome on top of Wikidata.

I hope this helps,
Denny




2012/10/18 Nikola Smolenski smole...@eunet.rs:
 On 18/10/12 09:25, Steven Walling wrote:

 On Wed, Oct 17, 2012 at 11:46 PM, Nikola Smolenskismole...@eunet.rs
 wrote:

 The need for such bots should cease after Wikidata is fully deployed. I
 suggest to interested programmers that they should direct their effort
 there.


 Why is that the case?

 I didn't understand the scope of Wikidata to include actual creation
 of articles that don't exist. Only to provide data about topics
 across projects. Sure, that might be extremely helpful to someone with a
 bot to populate species articles, but I'm skeptical that Wikidata would
 or should be creating millions of articles about such things. If you
 consider something even slightly more controversial than species, such as
 schools, many projects would not welcome a third party mass-creating pages
 about a topic that is described in Wikidata.


 Wikidata won't need to create articles. Rather, if you are trying to see a
 page without an article, Wikipedia will check if an item with appropriate
 name exists in Wikidata and generate the article on the fly if Wikipedia has
 a local article template for this type of article.


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l



-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
unter der Nummer 23855 B. Als gemeinnützig anerkannt durch das
Finanzamt für Körperschaften I Berlin, Steuernummer 27/681/51985.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Wikidata review status

2012-10-18 Thread Denny Vrandečić
Dear all,

let me say it like this:

WOH!!!

All of our patchsets have been merged into core. You are awesome!
Thank you so much!

We won't leave you without new work, though. Three points:

First, the merge of the ContentHandler branch is, now that it is being
deployed, revealing issues in several places. If you discover
something, please file it in Bugzilla under the ContentHandler
component. The list of currently open bugs is here, and if you can
help us, this would be great.

https://bugzilla.wikimedia.org/buglist.cgi?resolution=---&query_format=advanced&bug_status=UNCONFIRMED&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&component=ContentHandler&product=MediaWiki&list_id=153119

Second, we have created a document describing how data from Wikidata
is being synched (or percolated, or propagated, or moved, or whatever
the word shall be) to the Wikipedias. This is the core technical heart
of Wikidata's inner magic, and we really would benefit from peer
review and scrutiny on this topic. Comment. Suggest ideas. Help us
out. We do not have the one perfect solution here, and it can make
quite an impact.

https://meta.wikimedia.org/wiki/Wikidata/Notes/Change_propagation

Third, there is an oldish bug in MW core, filed in May, which we have
a bad feeling about. We expect that it will bite us more often on
Wikidata. The trouble is, there is no fix, and actually, we do not
know what is going on there, and several others have also tried to
crack this nut. If you want a challenge, squash this beast!

https://bugzilla.wikimedia.org/show_bug.cgi?id=37209

Enjoy all,
Denny




-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
unter der Nummer 23855 B. Als gemeinnützig anerkannt durch das
Finanzamt für Körperschaften I Berlin, Steuernummer 27/681/51985.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Let's talk about Solr

2012-10-18 Thread Denny Vrandečić
That is great to hear. Thanks for tying us together, Asher.

For Wikidata, we have not uploaded our Solr extension yet (mostly
because we are waiting for the repository to be set up), but we will
upload it soon once it is there. I would be especially interested
in sharing schema and config snippets for the many languages we
support, but as far as I can tell this is not so much a requirement
for Max and maybe not even for TranslationMemory -- I am not sure.

We also selected Solarium to connect to Solr. It is a bit worrisome
that it is basically a one-person project, if I see this correctly, but
the library seems small enough not to pose the risk of becoming too
much of a maintenance burden, I'd say -- especially compared to the
alternatives.

Any preferences for Solr 3 vs. 4? It seems that 4 is the smarter choice,
but we are having trouble getting 4 to run on Labs. Solr 3 works pretty
much out of the box, though.

Also, we should probably at some point consider how the different
extensions and their dependencies should be handled. I'd prefer not to
ship three different versions of Solarium with three extensions :)

Cheers,
Denny

P.S.: Yuri, regarding GESIS' Solr implementation, they have done some
great work for using Solr as a store for the structured data in SMW.
Funny thing is, they actually do not use Solr for the search itself!
This work is somewhat relevant for Wikidata phase 3, but unfortunately
quite irrelevant for TranslationMemory or Geodata.


2012/10/18 Asher Feldman afeld...@wikimedia.org:
 Hi all,

 I'm excited to see that Max has made a lot of great progress in adding Solr
 support to the GeoData extension so that we don't have to use mysql for
 spatial search - https://gerrit.wikimedia.org/r/#/c/27610/

 GeoData makes use of the Solarium php client, which is currently included as
 a part of the extension.  GeoData will be our second use of Solar, after
 TranslationMemory extension which is already deployed -
 https://www.mediawiki.org/wiki/Help:Extension:Translate/Translation_memories
 and the Wikidata team is working on using Solr in their extensions as well.

 TranslationMemory also uses Solarium, a copy of which is also bundled with
 and loaded from the extension.  For a loading and config example -
 https://gerrit.wikimedia.org/r/gitweb?p=operations/mediawiki-config.git;a=blob;f=wmf-config/CommonSettings.php;h=1e7a0e24dcbea106042826474607ec065d328472;hb=HEAD#l2407

 I think Solr is the right direction for us to go in.  Current efforts can
 pave the way for a complete refresh of WMF's article full text search as
 well as how our developers approach information retrieval.  We just need to
 make sure that these efforts are unified, with commonality around the client
 api, configuration, indexing (preferably with updates asynchronously pushed
 to Solr in near real-time), and schema definition.  This is important from
 an operational aspect as well, where it would be ideal to have a single
 distributed and redundant cluster.

 It would be great to see the i18n, mobile tech, wikidata, and any other
 interested parties collaborate and agree on a path forward, with a quick
 sprint around common code that all can use.

 -Asher



-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
unter der Nummer 23855 B. Als gemeinnützig anerkannt durch das
Finanzamt für Körperschaften I Berlin, Steuernummer 27/681/51985.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Wikidata review mail

2012-10-12 Thread Denny Vrandečić
Hi all,

here is our weekly report on changesets to core relevant for Wikidata
development.

* Great news! The Wikidata branch got merged, possibly the biggest
single changeset to MediaWiki. Thanks to everyone for their input, I
am afraid to try to list them all because I would fail. Special thanks
to Tim for accompanying the development of the branch for the last
seven months (!), and congratulations to Duesentrieb for creating it.
We had cake. http://instagram.com/p/QmtQaqhE1N Thanks to Siebrand
for pushing the button.

The ContentHandler branch created a number of follow up items, many of
which are already merged. Here are a few open ones:
* Fix declaration of content_model and content_format fields:
https://gerrit.wikimedia.org/r/#/c/27394/
* Support plain text content:
https://gerrit.wikimedia.org/r/#/c/27399/ (two +1s already)
* Silence warnings about deprecation by ContentHandler (since there
are too many right now, has already been discussed on this list)
https://gerrit.wikimedia.org/r/#/c/27537/

Besides the content handler, the changeset allowing ORMTable to access
foreign wikis also got merged. Yay to Chad!
https://gerrit.wikimedia.org/r/#/c/25264/

Also the Sites management got merged after a review by Asher for its
DB impact and by Chad for the code. You are awesome! Thanks to Jeroen
for working on this https://gerrit.wikimedia.org/r/#/c/23528/

We have one more changeset open, and I'd love to see that one closed too:

* Sorting in jQuery tables:
https://gerrit.wikimedia.org/r/#/c/22562/ That would be awesome to
merge. It has received several +1's over its lifetime, it has been
around for more than a month, and it has been improved continually. If
you have comments, please write them as inline comments on the
changeset; Henning will take care of them.

Thank you so much for your awesome help everyone!

Cheers,
Denny

-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
unter der Nummer 23855 B. Als gemeinnützig anerkannt durch das
Finanzamt für Körperschaften I Berlin, Steuernummer 27/681/51985.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Wikidata review status

2012-10-05 Thread Denny Vrandečić
Hi all,

here's a status update of the reviews on the Wikidata related changes to core:

* ContentHandler: a lot has happened here in the last few days. We
created a faux commit that squashed the changes in order to gather
comments, and have responded to most of them, implementing the
suggested changes. Here's a link to the faux commit (note that we are
not updating the faux commit):
https://gerrit.wikimedia.org/r/#/c/25736/
Here's a link to the Gitweb of the actual branch:
https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/core.git;a=tree;h=refs/heads/Wikidata;hb=refs/heads/Wikidata
Here's a link to the Bugzilla bug tracking the change and discussing the state:
https://bugzilla.wikimedia.org/show_bug.cgi?id=38622
The further plan is to have the actual merge commit ready early next
week, and then hopefully have this one land as one of the first big
features in 1.21.

* Sites management: this one is awaiting Asher's review, after Chad
gave it a thumbs up. The database impact is expected to be low, so we
hope that the review will happen soonish and the change be approved.
Link to Gerrit here: https://gerrit.wikimedia.org/r/#/c/23528/

* Sorting in jQuery tables: this patchset has received a number of
updates, many comments, and now three +1s, and it is also verified.
Maybe someone will be brave enough to +2 it? It would be nice to see
this merged in too. Here's the link to Gerrit:
https://gerrit.wikimedia.org/r/#/c/22562/

* Allow ORMTable to access foreign wikis: this allows a wiki to use
the load balancer to access another wiki's tables. It has been verified
and +1'ed by two, but not yet +2'ed and merged. Some reviewing love
would be appreciated. https://gerrit.wikimedia.org/r/#/c/25264/

All in all, I am thankful for the increasing activity in the last few
weeks, and I hope that by next week this mail will be much shorter and
clearer.

Cheers,
Denny


-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
unter der Nummer 23855 B. Als gemeinnützig anerkannt durch das
Finanzamt für Körperschaften I Berlin, Steuernummer 27/681/51985.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Content handler feature merge (Wikidata branch) scheduled early next week

2012-09-29 Thread Denny Vrandečić
Hi,

I have tried to create one changeset with the whole content handler
for your convenience:

https://gerrit.wikimedia.org/r/#/c/25736/

Note that it is based on 7ccc77a and needs to be rebased before merging,
but the major parts should be there.

Since I am not a Git expert, I may have made it all completely wrong. I
tried squashing, but that didn't work, so I created a diff, patched
master with it, and then submitted that as a changeset to Gerrit... I
think some whitespace issues crept in through that.

So, as said, I don't have a clue how useful or correct this changeset
is, but I hope it somehow helps in understanding the change.

The actual change to be merged continues to be in the Wikidata branch.

Cheers,
Denny



2012/9/27 Rob Lanphier ro...@wikimedia.org:
 On Thu, Sep 27, 2012 at 10:54 AM, Roan Kattouw roan.katt...@gmail.com wrote:
 Yes and no. The person that merges the branch can create a merge commit and
 submit that for review (git checkout -b mergewikidata master  git merge
 wikidata  git review) but Gerrit will not show the diff properly: it'll
 either show just the conflict resolutions, or nothing at all. You can view
 the full diff by fetching the commit on your localhost and using standard
 git tools (e.g. git review -d 12345  git show), but you won't be able to
 use inline comments quite as nicely.

 Someone could create a faux commit (with DO NOT MERGE on it) which
 is not a merge commit, but a fully squashed, rebased single commit.
 That would give a target for inline review comments.  I'm not
 volunteering to do that myself, but anyone could do that, and drop a
 comment on bug 38622.

 Rob

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l



-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
unter der Nummer 23855 B. Als gemeinnützig anerkannt durch das
Finanzamt für Körperschaften I Berlin, Steuernummer 27/681/51985.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Wikidata review status

2012-09-28 Thread Denny Vrandečić
Hi all,

here's our weekly mail on core related stuff for Wikidata. Thanks to
everyone giving feedback and reviews, most notably Chris, Chad, Tim,
Matmarex, DJ, and Krinkle!

* the ContentHandler branch is being reviewed to land in Core next
week, as Rob said. There is a separate thread on that. Further
comments are welcome -
https://bugzilla.wikimedia.org/show_bug.cgi?id=38622

* the Sites management branch has been reviewed by Chad and got a +1
and is awaiting a review with respect to its DB impact by Asher before
it can be merged. Further comments are welcome -
https://gerrit.wikimedia.org/r/#/c/23528/

* The sort in jQuery has been further improved based on incoming
comments and reviews. As requested by Krinkle, instead of using an
event, a method can be used. So if anyone can do some JavaScript
reviewing here, it would be appreciated. -
https://gerrit.wikimedia.org/r/#/c/22562/


We also have branched off the Wikibase extension itself to what is
destined to become Wikibase 0.1. This is undergoing security and
further review right now, so that it can be deployed. Comments are
welcome for any blockers or problems with that. Phase 1 is basically
done. - 
https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/extensions/Wikibase.git;a=tree;h=refs/heads/wikidata-wmfphase1beta;hb=refs/heads/wikidata-wmfphase1beta

We are continuing to develop new features on the master branch and
are working on Phase 2. Input is welcome.

Thanks for all the feedback!
Cheers,
Denny

-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
unter der Nummer 23855 B. Als gemeinnützig anerkannt durch das
Finanzamt für Körperschaften I Berlin, Steuernummer 27/681/51985.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] New extension: Diff

2012-09-26 Thread Denny Vrandečić
2012/9/26 Tim Starling tstarl...@wikimedia.org:
 On 26/09/12 03:54, Tyler Romeo wrote:
 This looks pretty interesting. Is there a reason we don't just put this in
 the core?

 It has about 50 lines of useful code wrapped in 1600 lines of
 abstraction. I don't think it is the sort of style we want in the core.

I am sorry, but I find this comment a bit harsh, and just wanted to
deliver some data on that. Maybe it was meant as a hyperbole, in which
case I apologize for not treating it as such.

The extension has, altogether 2846 total lines of code in 24 files, of
which 332 lines are blank, 1328 lines are comments, and 1186 are
physical lines of code.

Of the latter 1186, 641 are tests. I find that commendable.
(That leaves us with 1663 total lines of code, which are probably the
1600 from the original comment)

Another 148 physical lines go to initializing the extension and
internationalization.

That leaves 402 physical lines of code.

Now, one might discuss the suitability of using interfaces in PHP, but
we have those in core: IDBAccessObject, ICacheHelper, IDeviceDetector,
and a few others. Not many, but they exist. In any case, the two
interface classes account for only 18 physical lines of code --
the only 18 lines a user of the extension needs to care about.

One might discuss the suitability of using a class hierarchy to
represent different DiffOps. But then again, that is the same class
design as in the DairikiDiff engine, included in core as well.

There sure are other design choices that can be discussed. But the
picture painted above was exaggerated, and I merely wanted to add some
data to it.

Cheers,
Denny


P.S.: terminology: "total lines of code" = lines in all code files;
"physical lines of code" = every line with at least one character that
is neither comment nor whitespace. All numbers according to cloc.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] #switch limits

2012-09-21 Thread Denny Vrandečić
2012/9/21 Strainu strain...@gmail.com:
 Well, you said something about Wikidata. But even if the client Wiki
 would not need to load the full census, can it be avoided on Wikidata?

Talking about the template that Tim listed:
https://fr.wikipedia.org/w/index.php?title=Mod%C3%A8le:Donn%C3%A9es_PyrF1-2009&action=edit

I was trying to understand the template and its usage. As far as I can
tell it maps a ZIP (or some other identifier) of a commune to a value
(maybe a percentage or population, sorry, the documentation did not
exist and my French is rusty).

So basically it provides all values for a given property. Put
differently, that wiki page implements a database table with the
columns "key" and "value" and holds the whole table. (I think when Ward
Cunningham described a wiki as "the simplest online database that could
possibly work", this is *not* what he envisioned.)

In Wikidata we are not storing the data by the property, but for every
item. Put differently, every row in that template would become one
statement for the item identified by its key.

So Wikidata would not load the whole census data for every article,
but only the data for the item that is actually requested.
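
To spell out the difference in code (a plain-Lua sketch; the table
contents, the item id and the accessor are invented for illustration):

-- Today the template is effectively one huge key -> value table that
-- every article has to carry around in order to look up a single value:
local censusByCommune = {           -- invented sample of a much larger table
  ["64001"] = 312,
  ["64002"] = 1457,
  ["64003"] = 98,
}
print(censusByCommune["64002"])     -- the whole table was loaded for one lookup

-- With Wikidata, an article would instead ask for one statement of one
-- item, and only that item's data is fetched. getStatement is a stand-in.
local items = {
  Q123456 = { population = 1457 },  -- invented item for the same commune
}
local function getStatement(itemId, property)
  local item = items[itemId]
  return item and item[property]
end
print(getStatement("Q123456", "population"))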

On the other hand, we would indeed load the whole data for one item on
the repository (not the Wikipedias), which might lead to problems with
very big items at some point. We will run tests to see how this
behaves once these features have been developed, and then see if we
need to do something like partitioning by property groups (similar to
what Cassandra does).

I hope that helps,
Denny

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] #switch limits

2012-09-21 Thread Denny Vrandečić
I took another look at the output that is created with the data. I am
at once delighted and astonished by the capability and creativity of
the Wikipedia community in solving such tasks with MediaWiki template
syntax, and at the same time horrified that such a solution was
necessary.

Adding to my own explanation of how Wikidata would help here: we plan
to implement some form of query answering capabilities in phase III
which would actually not work on the full items, as described in my
previous mail, but just on some smarter derived representation of the
data. So specific queries -- the possible expressivity is not defined
yet -- would be performed much more efficiently than performing them
on the fly over all relevant items. (That is covered by the technical
proposal as item P3.2 in
http://meta.wikimedia.org/wiki/Wikidata/Technical_proposal#Technical_requirements_and_rationales_3).

Cheers,
Denny

2012/9/21 Denny Vrandečić denny.vrande...@wikimedia.de:
 2012/9/21 Strainu strain...@gmail.com:
 Well, you said something about Wikidata. But even if the client Wiki
 would not need to load the full census, can it be avoided on Wikidata?

 Talking about the template that Tim listed:
 https://fr.wikipedia.org/w/index.php?title=Mod%C3%A8le:Donn%C3%A9es_PyrF1-2009&action=edit

 I was trying to understand the template and its usage. As far as I can
 tell it maps a ZIP (or some other identifier) of a commune to a value
 (maybe a percentage or population, sorry, the documentation did not
 exist and my French is rusty).

 So basically it provide all values for a given property. Differently
 said that Wikipage implements a database table with the columns key
 and value and holds the whole table. (I think when Ward Cunningham
 described a wiki the simplest online database that could possibly
 work, this is *not* what he envisioned.)

 In Wikidata we are not storing the data by the property, but for every
 item. Put differently, every row in that template would become one
 statement for the item identified by its key.

 So Wikidata would not load the whole census data for every article,
 but only the data for the items that is actually requested.

 On the other hand, we would indeed load the whole data for one item on
 the repository (not the Wikipedias), which might lead to problems with
 very big items at some points. We will test make tests to see how this
 behaves once these features have been developed, and then see if we
 need to do something like partition by property groups (similar as
 Cassandra does it).

 I hope that helps,
 Denny



-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
unter der Nummer 23855 B. Als gemeinnützig anerkannt durch das
Finanzamt für Körperschaften I Berlin, Steuernummer 27/681/51985.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikidata blockers

2012-09-21 Thread Denny Vrandečić
Daniel,

sorry for the previous tone of my answer. Indeed I mixed up the Sites
management RFC, which I was discussing, and your proposal to change the
Sitelinks table. Although they are related, the former does not depend
on the latter.

While I see that changing both in one go might have advantages, in
order to get the sites management reviewed, merged and implemented in
a timely manner I would suggest that we keep them separate.

The Sitelinks discussion you have started has indeed not received
answers, as far as I can tell. This might be because it is currently a
sub-part of the sites management topic. I would suggest starting it as
its own RFC -- what do you think?

Cheers,
Denny


2012/9/21 Daniel Friesen dan...@nadir-seen-fire.com:
 On Thu, 20 Sep 2012 05:54:02 -0700, Denny Vrandečić
 denny.vrande...@wikimedia.de wrote:

 2012/9/20 Daniel Friesen dan...@nadir-seen-fire.com:

 I already started the discussion ages ago. No-one replied.


 Daniel,

 can you please point to the discussion that you started where no one
 replied? As far as I can tell, I can find discussions that you started
 on-wiki, on this mailing list, and I see comments by you on Gerrit. I
 found these discussions enlightening and I think they improved the
 design and the code - but in all these discussion threads there have
 been replies.

 If there are discussions you have started where no one replied, can
 you please provide links to them? I cannot find them.

 Denny


 https://www.mediawiki.org/wiki/Thread:Talk:Requests_for_comment/New_sites_system/Sitelinks


 --
 ~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l



-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
unter der Nummer 23855 B. Als gemeinnützig anerkannt durch das
Finanzamt für Körperschaften I Berlin, Steuernummer 27/681/51985.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikidata blockers

2012-09-20 Thread Denny Vrandečić
Daniel,

regarding the sites management:

Since the last activity on the discussion page was from August 26, and
the last substantial change to the RFC itself was from August 21, and
the patch set has been there since September 12 -- without any
comments, by the way -- I hope you find it understandable when I
express a certain frustration with a comment like "it still needs
discussion".

This change to the code is blocking us. If it needs discussion, then
please discuss it. But being silent for several weeks, even a month,
while people are working on this and spending resources on it, and
then suddenly saying "oh wait, I think it still needs some discussion"
is not very respectful of the work done by others. We have invited
discussion several times, and you have been one of the most active in
the discussions.

So, please, at least be so helpful as to say what imperfections in the
current state of affairs irk you, so that we can discuss and work on
them. Otherwise your comment merely leads to an increase in
frustration, and I do not see how it helps us improve MediaWiki.

Sorry if the words are received as harsh,
Denny


2012/9/20 Daniel Friesen dan...@nadir-seen-fire.com:
 On Wed, 19 Sep 2012 12:53:54 -0700, Denny Vrandečić
 denny.vrande...@wikimedia.de wrote:

 Hi all,

 here's our weekly list of Wikidata review items. Due to the hands-on
 meeting last week we refrained from sending it earlier. Now that most
 should be back home, I wanted to give an overview of the open
 items before our telco tomorrow.

 * ContentHandler. This one is seriously blocking us now, and we would
 need to get it reviewed. It is our highest priority right now. The
 review was promised for this week. We are eagerly awaiting the review
 and further input. Here's the bug:
 https://bugzilla.wikimedia.org/show_bug.cgi?id=38622

 * Sites. The RFC seems to be stable:
 https://www.mediawiki.org/wiki/Requests_for_comment/New_sites_system
 Chad was reminded one and half weeks ago to take a look, and since no
 further input has come in we assume that it is acceptable, which is
 why we started with the implementation work. Here's the link to the
 patch: https://gerrit.wikimedia.org/r/#/c/23528/

 The Sitelinks topic still needs discussion.



 * jQuery table sorting improvements. This improves the UI on initial
 display of a sorted table. There has been some comments and updates,
 thanks to Krinkle for the review and comments. The work is ongoing
 here, the ball is in our courtyard, we are working on the new
 patchset: https://gerrit.wikimedia.org/r/#/c/22562/

 * Towards nested transactions (2):
 https://gerrit.wikimedia.org/r/#/c/21584/ open with comments from
 Aaron Schulz.

 Got merged since last mail:
 * userWasLastToEdit improvement.
 https://gerrit.wikimedia.org/r/#/c/22049/ Yay! Thanks to Demon.
 * Towards nested transactions (1):
 https://gerrit.wikimedia.org/r/#/c/21582/ got merged! Yay! Thanks to
 Aaron Schulz.

 Thanks for everyone, especially to Demon, Krinkle, The DJ, Dantman,
 Matmarex, and Aaron Schulz for reviewing, and Rob, Tim, and Chad who
 participated in the phone conference last week.

 It would be crucial to get the first two items off this list as soon
 as possible.

 Cheers,
 Denny



 --
 ~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l



-- 
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 | http://wikimedia.de

Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
unter der Nummer 23855 B. Als gemeinnützig anerkannt durch das
Finanzamt für Körperschaften I Berlin, Steuernummer 27/681/51985.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l
