[Wikitech-l] OutreachProgramForWomen Aspirant

2014-03-06 Thread Nitika Verma
Hey,
This is Nitika from India. I am comfortable with C/C++, PHP,
JavaScript, HTML and MySQL. Among the projects listed on
https://www.mediawiki.org/wiki/FOSS_Outreach_Program_for_Women/Round_8
I am very interested in the project "Book Management in
Wikibooks/WikiSource" (mentor:
https://www.mediawiki.org/wiki/User:Raylton_P._Sousa)

Could someone please guide me on how to go about this project?

Cheers,
Nitika
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread MZMcBride
Tim Starling wrote:
>I would never have merged it, because it had a -1 from Steven Walling,
>apparently speaking on behalf of others on design-l. I think changes
>should be made by consensus.

The change also had five +1s, as noted in this thread. I find it
interesting that, to you, consensus now includes a liberum veto. I'm fine
with a temporary revert if the change was causing an issue in production,
but the overall goal and execution seem fine to me.

As Jon notes at , there may be ways
to optimize the overall behavior here in the future (e.g., by implementing
a user.user_display_name field or re-using user.user_real_name). But for
now, I don't see any substantive issue with this change or with merging it
in core. Silently changing a user's preferred username is a valid bug:
"Mzmcbride" plainly looks stupid, and the silent and unexpected change to a
user's input is an unfair trick on the user, especially given what a
nightmare it is to rename a user account.
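The behavior under discussion can be sketched as follows. This is a hypothetical illustration in JavaScript, not MediaWiki's actual PHP code; the function name is invented:

```javascript
// Hypothetical sketch of "warn instead of silently changing": detect when
// canonicalization would alter the user's input, so the form can ask for
// confirmation rather than silently saving something else.
function canonicalizeUsername(input) {
  const trimmed = input.trim();
  // MediaWiki-style canonicalization uppercases the first letter.
  const canonical = trimmed.charAt(0).toUpperCase() + trimmed.slice(1);
  return {
    canonical,
    changed: canonical !== trimmed, // true => show a confirmation step
  };
}

// 'mzmcbride' would be saved as 'Mzmcbride', so the form should warn.
const result = canonicalizeUsername('mzmcbride');
if (result.changed) {
  console.log('Your username will be saved as "' + result.canonical + '". Continue?');
}
```

The point of the `changed` flag is that the decision to warn stays with the form, rather than the change happening behind the user's back.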

MZMcBride




Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread C. Scott Ananian
On Thu, Mar 6, 2014 at 7:08 PM, Erik Bernhardson  wrote:

> Does core have any policies related to merging?  The core features team
> has adopted a methodology (although slightly different) that we learned of
> from the VE team.  Essentially, +2 for 24 hours before a deployment branch
> is cut is limited to fixes for bugs that were introduced since the last
> deployment branch was cut, or reverts for patches that turned out to not be
> ready for deployment.  Core is certainly bigger and with more participants,
> but perhaps a conversation about when to +2 and how that affects the
> deployment process would be beneficial?
>

While we're talking about +2s in core, I'd like to ask for special care on
parser-related patches.  Two issues:

  1. Even minor changes can silently affect a large number of rendered
pages.  We are (slowly) beginning to create tools to search for affected
(or deprecated) wikitext so that we can tell before deployment how
widespread the changes are.  Slow down and ask for the tools to be run to
prevent surprises later.

  2. We are trying to keep two different parsers in sync: Parsoid and the
PHP parser.  For any significant change to the PHP parser (and sanitizer,
and parser tests), there is likely a corresponding change needed in Parsoid
(and vice versa).  Try to make sure you get +1s from both teams before a
patch gets +2ed.

If I had a procedures wishlist, it would be for:

 a) a prominent link beside gerrit's +2 where teams could write
project-specific +2 guidelines.

 b) a gerrit page banner in the "24 hours before deployment" window for a
given project; it's easy to lose track of the deployment train, especially
if you work on multiple projects.

  --scott

-- 
(http://cscott.net)

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Isarra Yos

On 07/03/14 03:58, Tim Starling wrote:
> I would never have merged it, because it had a -1 from Steven Walling,
> apparently speaking on behalf of others on design-l. I think changes
> should be made by consensus.


The changeset was the result of the discussion on the Design list. The 
reason Steven Walling gave for the -1 was simply not true, but attempts 
to explain this failed and consensus apparently wound up being to ignore 
him.


Just for a little background/context here.

-I


Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Tim Starling
On 07/03/14 11:38, Greg Grossmeier wrote:
> The suite of automatic tests caught this bug, actually. It's how the
> mobile team found out about it as they got to work this morning. So the
> testing is quite robust.

As I understand it, there was no bug, it was just a controversial
design change. That's not what automated testing is supposed to
prevent. If Bartosz had known about the Cucumber test failure, he
might have patched it to pass, in a change to be merged simultaneously.

-- Tim Starling



Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Tim Starling
On 07/03/14 09:07, Chris McMahon wrote:
> In over two years at WMF I have never been involved in a discussion
> like this, but here goes:
> 
> In this case, I think it was entirely appropriate to revert
> immediately and pick up the pieces later.  The source of the code
> is immaterial, if Tim Starling  or Brion Vibber had merged this we
> would have done exactly the same thing.

I would never have merged it, because it had a -1 from Steven Walling,
apparently speaking on behalf of others on design-l. I think changes
should be made by consensus.

On Gerrit, Bartosz Dziewoński wrote:
> This has been silently reverted by Jon in
> https://gerrit.wikimedia.org/r/#/c/117234/ . I would appreciate
> leaving a comment here, or reverting through gerrit's interface
> instead of manually to do it automatically. It's not cool to do it
> like this.

I support this complaint. Please use the "revert" button in Gerrit to
revert commits, or add a comment to the reverted change. A post to
mobile-l is not sufficient.

-- Tim Starling



Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Tyler Romeo
On Thu, Mar 6, 2014 at 10:13 PM, George Herbert wrote:

> Based on the timing description here, it seems more like "Either rush 1 or
> rush 2".
>

This is also not true. Something does not have to be reverted in Gerrit in
order for it to be undeployed from production. If there was any timing
issue to consider here, I would say after a few days we'd have to reach a
solution.

--
Tyler Romeo
Stevens Institute of Technology, Class of 2016
Major in Computer Science

Re: [Wikitech-l] GSOC project: calculating the quality of editors and content (was Guidance for the Project Idea for GSOC 2014)

2014-03-06 Thread Benjamin Lees
Hi, Devander.  Have you looked at WikiTrust[0]?  It does roughly what you
describe (though I don't think the live demo works anymore).

[0] https://en.wikipedia.org/wiki/Wikipedia:WikiTrust

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread George Herbert
Tyler wrote:

> > 1.  Fix the extension quickly
> > 2.  Revert the change
> > 3.  Undeploy the extension until it's fixed to be compatible with core

> So to summarize, #3 is obviously not an option. For #2, are we supposed to
> block core development, and let this bug persist indefinitely, because of a
> much less serious bug in an extension? That really only leaves #1, but
> apparently the vast minority of opponents of the original patch decided it
> was a good idea to jump right over and skip to #2.


I think you're setting up a false premise.

Based on the timing description here, it seems more like "Either rush 1 or
rush 2".

One of these is going back to a known state, albeit with a known bug.  The
other is going forwards to an unknown state.

As the grumpy cat cartoon going around recently points out, our code had 99
bugs in it, we patched one, and now we have 127 bugs in it.  Rushing new,
untested patches forwards under deadlines starts to approach the "real
world people get fired for this" behavior from earlier comments.

Rolling back lets you work out what the right thing to do is for next week,
without (much) time pressure.  If the pre-existing bug was tolerable enough
to have been there for a while without requiring an emergency prod patch,
it can stay another week without disaster striking.



On Thu, Mar 6, 2014 at 6:58 PM, Tyler Romeo  wrote:

> On Thu, Mar 6, 2014 at 9:21 PM, Jon Robson  wrote:
>
> > I wonder if in future it might be practically useful for test failures
> > like this to automatically revert the changes that caused them, or at
> > least submit patches to revert them; that way it's clear how and when
> > things should be reverted.
> >
>
> Or, at the very least, changes should not be deployed until these tests
> pass. I agree that running expensive browser tests on every Gerrit change
> is unnecessary and performance-intensive, but it should be fine to run
> them before new deploys.
>
> Rob said:
>
> > I wholeheartedly disagree with this.  Changes to core should definitely
> > take into account uses by widely-deployed extensions (where
> > "widely-deployed" can either mean by installation count or by end-user
> > count), even if the usage is "incorrect".  We need to handle these things
> > on a case by case basis, but in general, *all* of the following are
> options
> > when a core change introduces an unintentional extension incompatibility:
> > 1.  Fix the extension quickly
> > 2.  Revert the change
> > 3.  Undeploy the extension until it's fixed to be compatible with core
>
>
> I don't think you see the problem here. Consider this case as an example (I
> agree that this is case-by-case, so let's limit the scope to this one).
> You're forgetting that the original patch fixes a bug. In fact, it fixes a
> pretty serious UX bug in my opinion (and many others who supported merging
> this patch).
>
> So to summarize, #3 is obviously not an option. For #2, are we supposed to
> block core development, and let this bug persist indefinitely, because of a
> much less serious bug in an extension? That really only leaves #1, but
> apparently the vast minority of opponents of the original patch decided it
> was a good idea to jump right over and skip to #2.
>
> --
> Tyler Romeo
> Stevens Institute of Technology, Class of 2016
> Major in Computer Science



-- 
-george william herbert
george.herb...@gmail.com

[Wikitech-l] Multimedia team architecture update

2014-03-06 Thread Gergo Tisza
Hi all,

The multimedia team [1] had a chat today about some architectural issues
with MultimediaViewer [2], and RobLa pointed out that we should publish
such discussions on wikitech-l to make sure we do not reinvent too many
wheels, so here goes. Comments pointing out all the obvious solutions we
have missed are very welcome.

== Use of OOjs-UI ==

Mark experimentally implemented a panel showing share/embed links [3][4] in
OOjs-UI [5]; we discussed some concerns about that. We want to avoid having
to roll our own UI toolkit, but we also don't want to introduce yet another
third-party toolkit: the list of libraries loaded by one Wikipedia
extension or another is already too long, and the look-and-feel already too
fractured, so we would be happy to free-ride on another team's efforts :)
On the other hand, OOjs-UI is still under development, has no tests and
very little documentation, and its browser support is narrower than we
would prefer. We could contribute back in those areas, but it would slow us
down significantly.

In the end we felt that's not something we should decide on our own as it
involves redefining the goals of the team somewhat (it was a developer-only
meeting), so we left it open. (The next MultimediaViewer features which
would depend heavily on a UI toolkit are a few months away, so it is not an
urgent decision for us.)

== The state of unit tests ==

MultimediaViewer used to have a messy codebase; we felt at the time that it
was better to have ugly tests than no tests, so we ended up with some large
and convoluted tests which are hard to maintain. Since then we have done a
lot of refactoring but kept most of the tests, so now we have some tests
which are harder to understand and more effort to maintain than the code
they are testing. Also, while we fixed failing tests after the
refactorings, we did not re-check the passing ones, so we cannot be sure
they are still testing what they are supposed to.

We discussed these issues, and decided that writing the tests was still a
good decision at the time, but once we are done with the major code
refactorings, we should take some time to refactor the tests as well. Many
of our current tests test the implementation of a class; we should replace
them with ones that test the specification.
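To illustrate the distinction, here is a hypothetical sketch (not MultimediaViewer's actual code) of the two testing styles, built around an invented thumbnail-width helper:

```javascript
// Hypothetical helper, invented for this example: pick the smallest
// available thumbnail width that covers the requested size.
function pickThumbnailWidth(availableWidths, target) {
  const sorted = [...availableWidths].sort((a, b) => a - b);
  const fit = sorted.find((w) => w >= target);
  // Fall back to the largest width if nothing is big enough.
  return fit !== undefined ? fit : sorted[sorted.length - 1];
}

// Implementation-style test: pokes at the sort-then-scan strategy, so it
// breaks under any refactoring, even one that preserves behavior:
//   assert.deepStrictEqual(internalSortedWidths, [320, 640, 800]);

// Specification-style test: states only the observable contract, and
// survives refactoring:
//   assert.strictEqual(pickThumbnailWidth([640, 320, 800], 600), 640);
//   assert.strictEqual(pickThumbnailWidth([640, 320, 800], 1000), 800);
```

The second style is what "test the specification" means here: the assertions remain valid no matter how the helper is rewritten internally.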

== Plugin architecture ==

We had plans to create some sort of plugin system so that gadgets can
extend the functionality of MultimediaViewer [6]; we discussed whether that
should be an open model where it is possible to alter the behavior of any
component (think Symfony2 or Firefox) and plugins are not limited in their
functionality, or a closed model where there are a limited number of
junction points where gadgets can influence behavior (think MediaWiki hook
system, just much more limited).

The open model seems more in line with Wikimedia philosophy, and might
actually be easier to implement (most of it is just good architecture, like
services or dependency injection, which would make sense even if we did not
want plugins); on the other hand it would mean a lot of gadgets break every
time we change things, and some would possibly break even if we don't.
Also, the community seems to have much lower tolerance for breakage in
WMF-maintained tools than in community-maintained ones, and most people
probably wouldn't make the distinction between MultimediaViewer breaking
because it is buggy and it breaking because a gadget interacting with it is
buggy, so giving plugins enough freedom to break it might be inviting
conflict. Some sort of hook system (with try-catch blocks, strict
validation, etc.) would be much more stable, and it would probably require
less technical expertise to use, but it could prevent many potential uses
and forces us to make more assumptions about what kind of plugins people
would write.

Decision: go with the closed model; reach out to potential plugin writers
and collect requirements; do not guess, and only add plugin functionality
where it is actually requested by someone. In general, try not to spend too
much effort on it: having a useful plugin system by the time
MultimediaViewer is deployed publicly is probably too ambitious a goal.
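A minimal sketch of the closed model described above: a fixed list of junction points, strict validation of registrations, and try/catch so a buggy gadget cannot take the viewer down. The hook names and function names are illustrative, not the eventual MultimediaViewer API.

```javascript
// The only extension points gadgets may attach to (closed model).
const HOOKS = ['imageLoaded', 'viewerClosed'];

const handlers = Object.create(null);

function registerPlugin(hookName, handler) {
  // Strict validation: unknown hooks and non-functions are rejected up front.
  if (!HOOKS.includes(hookName)) {
    throw new Error('Unknown hook: ' + hookName);
  }
  if (typeof handler !== 'function') {
    throw new Error('Handler must be a function');
  }
  (handlers[hookName] = handlers[hookName] || []).push(handler);
}

function fireHook(hookName, data) {
  const errors = [];
  for (const handler of handlers[hookName] || []) {
    try {
      handler(data); // a throwing gadget is isolated, not fatal
    } catch (e) {
      errors.push(e);
    }
  }
  return errors; // the caller can log these without interrupting the viewer
}
```

Under this model a gadget that throws only shows up in the returned error list; the other registered handlers and the viewer itself keep running.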


[1] https://www.mediawiki.org/wiki/Multimedia
[2] https://www.mediawiki.org/wiki/Extension:MultimediaViewer
[3] https://wikimedia.mingle.thoughtworks.com/projects/multimedia/cards/147
[4] https://wikimedia.mingle.thoughtworks.com/projects/multimedia/cards/148
[5] https://www.mediawiki.org/wiki/OOjs_UI
[6] https://wikimedia.mingle.thoughtworks.com/projects/multimedia/cards/168

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Tyler Romeo
On Thu, Mar 6, 2014 at 9:21 PM, Jon Robson  wrote:

> I wonder if in future it might be practically useful for test failures
> like this to automatically revert the changes that caused them, or at
> least submit patches to revert them; that way it's clear how and when
> things should be reverted.
>

Or, at the very least, changes should not be deployed until these tests
pass. I agree that running expensive browser tests on every Gerrit change
is unnecessary and performance-intensive, but it should be fine to run them
before new deploys.

Rob said:

> I wholeheartedly disagree with this.  Changes to core should definitely
> take into account uses by widely-deployed extensions (where
> "widely-deployed" can either mean by installation count or by end-user
> count), even if the usage is "incorrect".  We need to handle these things
> on a case by case basis, but in general, *all* of the following are options
> when a core change introduces an unintentional extension incompatibility:
> 1.  Fix the extension quickly
> 2.  Revert the change
> 3.  Undeploy the extension until it's fixed to be compatible with core


I don't think you see the problem here. Consider this case as an example (I
agree that this is case-by-case, so let's limit the scope to this one).
You're forgetting that the original patch fixes a bug. In fact, it fixes a
pretty serious UX bug in my opinion (and many others who supported merging
this patch).

So to summarize, #3 is obviously not an option. For #2, are we supposed to
block core development, and let this bug persist indefinitely, because of a
much less serious bug in an extension? That really only leaves #1, but
apparently the vast minority of opponents of the original patch decided it
was a good idea to jump right over and skip to #2.

--
Tyler Romeo
Stevens Institute of Technology, Class of 2016
Major in Computer Science

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Jon Robson
I wonder if in future it might be practically useful for test failures like
this to automatically revert the changes that caused them, or at least
submit patches to revert them; that way it's clear how and when things
should be reverted.
On 6 Mar 2014 18:09, "Chris McMahon"  wrote:

> On Thu, Mar 6, 2014 at 6:07 PM, OQ  wrote:
>
> > So the testsuite only runs on merged code and not pending-merge? That
> > sounds like a large oversight.
>
>
> Picture in your mind every branch pending merge for every extension in
> gerrit.   Imagine how many of those branches are eventually abandoned,
> imagine how many patch sets each receives, imagine how many times each gets
> rebased.
>
> And even if we had such tests, they would not have exposed today's issue.
>
> We run UI-level regression tests against a model of the Wikipedia cluster
> on beta labs running the master branch *exactly* so that we can expose
> cross-repo problems, configuration problems, etc. before they go to
> production.
>
> Today's issue was hardly unique.  Just one week ago our tests picked up an
> entirely unrelated but similarly surprising issue that had the
> MobileFrontend team scrambling on a Thursday morning:
> https://bugzilla.wikimedia.org/show_bug.cgi?id=62004.  We stop bugs *all
> the time* this way.
>
> This is hardly an "oversight".  These tests and these test environments are
> very carefully designed to expose exactly the kind of issues that they
> expose.  They have saved us an extraordinary amount of pain by preventing
> bugs released to production.

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Chris McMahon
On Thu, Mar 6, 2014 at 6:07 PM, OQ  wrote:

> So the testsuite only runs on merged code and not pending-merge? That
> sounds like a large oversight.


Picture in your mind every branch pending merge for every extension in
gerrit.   Imagine how many of those branches are eventually abandoned,
imagine how many patch sets each receives, imagine how many times each gets
rebased.

And even if we had such tests, they would not have exposed today's issue.

We run UI-level regression tests against a model of the Wikipedia cluster
on beta labs running the master branch *exactly* so that we can expose
cross-repo problems, configuration problems, etc. before they go to
production.

Today's issue was hardly unique.  Just one week ago our tests picked up an
entirely unrelated but similarly surprising issue that had the
MobileFrontend team scrambling on a Thursday morning:
https://bugzilla.wikimedia.org/show_bug.cgi?id=62004.  We stop bugs *all
the time* this way.

This is hardly an "oversight".  These tests and these test environments are
very carefully designed to expose exactly the kind of issues that they
expose.  They have saved us an extraordinary amount of pain by preventing
bugs released to production.

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Jon Robson
For the record, the test that alerted us to this issue was the following:
https://wmf.ci.cloudbees.com/job/MobileFrontend-en.m.wikipedia.beta.wmflabs.org-linux-firefox/392/testReport/junit/(root)/Create%20failure%20messages/Create_account_password_mismatch_message/

Two problems here: tests run only for the extension the code touches, and
browser tests currently run only after merging, as they are slow and people
would be annoyed if code took 20 minutes to merge. We've had issues in the
past where changes in VisualEditor have broken things in MobileFrontend and
triggered failed tests. Maybe core should run all browser tests for all
deployed extensions as part of the merge process to avoid this?

"If MobileFrontend is so tightly coupled with the desktop login form,
that is a problem with MobileFrontend."
I don't really understand this. People moan at me all the time that mobile
has its own version of Watchlist compared to core, has its own skin, etc.,
and I also get told the opposite: that we should be closer to core. That is
what we are striving for, but we are not quite there yet.

"It seems that commenters here believe that the patch made it
impossible to create an account if JavaScript was disabled, or via
MobileFrontend - this is obviously not true, it just required an
additional confirmation"
I'm not sure anyone has said it was impossible to create an account, but
the user experience was badly affected, as you point out. The statement I
made was "Since most mobile device input fields default to lowercase
... pretty much anyone who now tries to register an account on mobile
will see a warning that their username has been capitalized and will
have to fill in the registration form again" [1]

[1] http://lists.wikimedia.org/pipermail/mobile-l/2014-March/006557.html

On Thu, Mar 6, 2014 at 5:13 PM, Greg Grossmeier  wrote:
> 
>> On Thu, Mar 6, 2014 at 5:49 PM, OQ  wrote:
>>
>> > So I'm confused on the timeline here.
>> >
>> > Did the commit get merged before the testsuite found the breakage, or did
>> > the commit get merged despite the testsuite failing?
>>
>>
>> The commit was merged late Wednesday.  The automated tests that
>> demonstrated the problem failed over Wednesday night and we analyzed the
>> failures early Thursday morning, which is routine.
>
> And to clarify further:
> Not all tests run right away; some are more expensive and run on a
> schedule. These are part of that.
>
> Should some of these tests be moved up to run immediately? Yeah, but
> we'd need to define the set of 'smoke tests' (tests that just test basic
> functionality, quickly) because we can't run all the tests all the time.
>
> Greg
>
> --
> | Greg Grossmeier            GPG: B2FA 27B1 F7EB D327 6B8E |
> | identi.ca: @greg           A18D 1138 8E47 FAC8 1C7D |
>



-- 
Jon Robson
* http://jonrobson.me.uk
* https://www.facebook.com/jonrobson
* @rakugojon


Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Greg Grossmeier

> On Thu, Mar 6, 2014 at 5:49 PM, OQ  wrote:
> 
> > So I'm confused on the timeline here.
> >
> > Did the commit get merged before the testsuite found the breakage, or did
> > the commit get merged despite the testsuite failing?
> 
> 
> The commit was merged late Wednesday.  The automated tests that
> demonstrated the problem failed over Wednesday night and we analyzed the
> failures early Thursday morning, which is routine.

And to clarify further:
Not all tests run right away; some are more expensive and run on a
schedule. These are part of that.

Should some of these tests be moved up to run immediately? Yeah, but
we'd need to define the set of 'smoke tests' (tests that just test basic
functionality, quickly) because we can't run all the tests all the time.

Greg

-- 
| Greg Grossmeier            GPG: B2FA 27B1 F7EB D327 6B8E |
| identi.ca: @greg           A18D 1138 8E47 FAC8 1C7D |


Re: [Wikitech-l] Should MediaWiki CSS prefer non-free fonts?

2014-03-06 Thread Quim Gil
On 03/06/2014 12:21 PM, Quim Gil wrote:
> On 03/05/2014 02:00 PM, Ryan Kaldari wrote:
>> What do people think of the following stack:
>>
>> Arimo, Liberation Sans, Helvetica Neue, Helvetica, Arial, sans-serif;
> 
> 
> it would be useful to have a table showing
> which fonts are rendered by the most popular browsers in the most
> popular platforms [1] when you specify

Please help fill in this table:

https://www.mediawiki.org/wiki/Typography_refresh/Font_choice/Test

BUT FIRST, please check the wikitext code to ensure that the syntax is
correct, so that the tests will be valid. Thank you!

PS: I don't have any of the "most popular browsers in the most popular
platforms". I have started a second table of Linux desktop combinations,
just for the fun of it.

-- 
Quim Gil
Technical Contributor Coordinator @ Wikimedia Foundation
http://www.mediawiki.org/wiki/User:Qgil




Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread OQ
So the testsuite only runs on merged code and not pending-merge? That
sounds like a large oversight.


On Thu, Mar 6, 2014 at 6:55 PM, Chris McMahon wrote:

> On Thu, Mar 6, 2014 at 5:49 PM, OQ  wrote:
>
> > So I'm confused on the timeline here.
> >
> > Did the commit get merged before the testsuite found the breakage, or did
> > the commit get merged despite the testsuite failing?
>
>
> The commit was merged late Wednesday.  The automated tests that
> demonstrated the problem failed over Wednesday night and we analyzed the
> failures early Thursday morning, which is routine.
>
> As noted above, code committed late on Wednesday or early Thursday only
> resides in the test environment on beta labs for a short time before going
> to production wikis.  We intend to improve this situation in the not too
> distant future, but for now that is the situation on the ground.
> -Chris

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Chris McMahon
On Thu, Mar 6, 2014 at 5:49 PM, OQ  wrote:

> So I'm confused on the timeline here.
>
> Did the commit get merged before the testsuite found the breakage, or did
> the commit get merged despite the testsuite failing?


The commit was merged late Wednesday.  The automated tests that
demonstrated the problem failed over Wednesday night and we analyzed the
failures early Thursday morning, which is routine.

As noted above, code committed late on Wednesday or early Thursday only
resides in the test environment on beta labs for a short time before going
to production wikis.  We intend to improve this situation in the not too
distant future, but for now that is the situation on the ground.
-Chris

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread OQ
So I'm confused on the timeline here.

Did the commit get merged before the testsuite found the breakage, or did
the commit get merged despite the testsuite failing?

Either way, it sounds like "When to merge a changeset" needs to be reviewed.


On Thu, Mar 6, 2014 at 6:38 PM, Greg Grossmeier  wrote:

> 
> > To take a couple of steps back...
> >
> > This happened because testing isn't robust enough?
> >
> > That should be discussed and followed up on.
>
> Ish.
>
> The suite of automatic tests caught this bug, actually. It's how the
> mobile team found out about it as they got to work this morning. So the
> testing is quite robust.
>
>
> I'd argue that the revert didn't happen fast enough.
>
> "Whoa! Greg! Don't be incendiary!"
>
> What I mean by that is:
>
> I want us (where 'us' == any developer writing MediaWiki or extension
> code) to get to the point where we reject and revert any commit which
> breaks the test suite. Basically, when a test fails after a commit you
> (as the developer) should:
>
> A) fix it right away (like, now)
> or
> B) revert your commit that broke it and work on a fix
>
>
> B enables other people to continue working with a good state of the
> software.
>
> Doing C (doing neither), which is what happened here, makes *your work
> stop other people's productivity*. Period.
>
> This should happen no matter where the test fails; if it's "your code"
> or not. Your code caused it (in the sense that the test didn't fail
> before), so you should work on fixing it.
>
> There's a bit more nuance here:
>
> A test can fail for any number of reasons including badly written tests
> or the test infrastructure failing somehow. Part of the above decision
> tree should include determining what actually broke. If the test was
> badly written, rewrite it or remove it if it no longer applies.
>
>
> We aren't to this point yet; we need a bit more test coverage and we
> need to speed up the feedback cycle for auto browser tests, but we're
> headed in this direction. On purpose.
>
> We need to get in the habit of making every commit deployable. No more
> breaking beta cluster for a day while we work something out. No more
> breaking other parts of the code base and ignoring it because 'it's not
> core'.
>
>
> Greg
>
> --
> | Greg Grossmeier            GPG: B2FA 27B1 F7EB D327 6B8E |
> | identi.ca: @greg           A18D 1138 8E47 FAC8 1C7D |
>

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Greg Grossmeier

> To take a couple of steps back...
> 
> This happened because testing isn't robust enough?
> 
> That should be discussed and followed up on.

Ish.

The suite of automatic tests caught this bug, actually. It's how the
mobile team found out about it as they got to work this morning. So the
testing is quite robust.


I'd argue that the revert didn't happen fast enough.

"Whoa! Greg! Don't be incendiary!"

What I mean by that is:

I want us (where 'us' == any developer writing MediaWiki or extension
code) to get to the point where we reject and revert any commit which
breaks the test suite. Basically, when a test fails after a commit you
(as the developer) should:

A) fix it right away (like, now)
or 
B) revert your commit that broke it and work on a fix


B enables other people to continue working with a good state of the
software.

Doing C (neither of the above, which is what happened here) makes *your
work stop other people's productivity*. Period.

This should happen no matter where the test fails; if it's "your code"
or not. Your code caused it (in the sense that the test didn't fail
before), so you should work on fixing it.

There's a bit more nuance here:

A test can fail for any number of reasons including badly written tests
or the test infrastructure failing somehow. Part of the above decision
tree should include determining what actually broke. If the test was
badly written, rewrite it or remove it if it no longer applies.
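Greg's option B can be sketched with plain git commands, driven here from Python so the sequence is reproducible end to end. This is a minimal illustration only: the repository, author identity, and commit messages are invented, and the Gerrit-specific steps (pushing the revert for review, CI gating) are omitted.

```python
import subprocess
import tempfile

def git(*args, cwd):
    # Thin wrapper: run a git command in the given repo and return stdout.
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout.strip()

AUTHOR = ["-c", "user.name=ci", "-c", "user.email=ci@example.org"]
repo = tempfile.mkdtemp()
git("init", "-q", cwd=repo)

# A known-good state, then a commit that (hypothetically) breaks the tests.
git(*AUTHOR, "commit", "-q", "--allow-empty", "-m", "good: tests pass", cwd=repo)
git(*AUTHOR, "commit", "-q", "--allow-empty",
    "-m", "bad: breaks the browser tests", cwd=repo)
bad_sha = git("rev-parse", "HEAD", cwd=repo)

# Option B: revert right away so the tree stays green for everyone,
# then rework the change and resubmit it later.
git(*AUTHOR, "revert", "--no-edit", bad_sha, cwd=repo)
print(git("log", "-n", "1", "--pretty=%s", cwd=repo))
```

`git revert` adds a new commit undoing the bad one instead of rewriting history, so everyone else's checkouts stay valid while the fix is worked out.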


We aren't to this point yet; we need a bit more test coverage and we
need to speed up the feedback cycle for auto browser tests, but we're
headed in this direction. On purpose.

We need to get in the habit of making every commit deployable. No more
breaking beta cluster for a day while we work something out. No more
breaking other parts of the code base and ignoring it because 'it's not
core'.


Greg

-- 
| Greg Grossmeier       GPG: B2FA 27B1 F7EB D327 6B8E |
| identi.ca: @greg      A18D 1138 8E47 FAC8 1C7D |


Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Bartosz Dziewoński

It seems that commenters here believe that the patch made it impossible to 
create an account if JavaScript was disabled, or via MobileFrontend – this is 
obviously not true; it just required an additional confirmation (which was by 
design and +1'd by five people). Please stop spreading this disinformation.

--
Matma Rex


Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Chris Steipp
On Thu, Mar 6, 2014 at 4:08 PM, Erik Bernhardson  wrote:
>
> Does core have any policies related to merging?  The core features team
> has adopted a methodology (although slightly different) that we learned of
> from the VE team.  Essentially, +2 for 24 hours before a deployment branch
> is cut is limited to fixes for bugs that were introduced since the last
> deployment branch was cut, or reverts for patches that turned out to not be
> ready for deployment.  Core is certainly bigger and with more participants,
> but perhaps a conversation about when to +2 and how that affects the
> deployment process would be beneficial?
>
>
Formally, no (not that I know of). Informally, I know a lot of us do a lot
of merging on Fridays, partly for this reason. I resisted merging a big
patch this morning because I want it to sit in beta for a while. I know a
few patches were merged this morning so that they *would* make it into
today's deploy. Everyone with +2 should always think about how/when things
will be deployed, and merge as appropriate. And it seems like most people
use good judgement most of the time.

If this is coming up a lot, then yeah, let's make some policy about it, or
just enforce it in how we do the branch cutting.

Re: [Wikitech-l] Usability testing

2014-03-06 Thread David Gerard
On 7 March 2014 00:17, Trevor Parscal  wrote:

> I believe, from lots of first-hand experience and some research on the
> subject, that anytime you can get at least 5 users in front of a product
> and run them through well written tasks you are going to reveal about 80%
> of the problems. Getting fancy with the methodology usually only affects
> the final 20%.


I have frequently seen the claim that a usable usability test can be
done with five test subjects. I suppose there are betas and mailing lists
and wiki forums and other such yelling shops for the other 20% of the
problems.


- d.


Re: [Wikitech-l] Usability testing

2014-03-06 Thread Trevor Parscal
I just wanted to add that in the past, as many people know, we tried a few
different kinds of testing and even hired a usability testing firm to help
us. We conducted research in a lab here in SF and also did some remote
testing, compensating participants with gift cards.

We learned that lab testing is very expensive, complicated and slow. It has
its own unique filtering qualities that prevent certain kinds of people
from participating and encourage others. Participants being in a foreign
environment, using someone else's computer and being run through tasks with
a giant 2-way mirror behind their back and cameras rolling might distort
behavior a bit.

Remote testing done with a facilitator and screen-sharing (like what Steven
is talking about with Google Hangout) is still time consuming, but far
cheaper and easier than lab testing and can be done on shorter notice. It
filters out less tech-savvy people or those who use alternative or legacy
devices like phones, tablets or older computers. It's interesting that it
allows people to use a computer they are already familiar with, but it may
not be relevant to the test.

Remote testing done using usertesting.com is the cheapest and easiest, but
even further filters out less tech-savvy people.

I believe, from lots of first-hand experience and some research on the
subject, that anytime you can get at least 5 users in front of a product
and run them through well written tasks you are going to reveal about 80%
of the problems. Getting fancy with the methodology usually only affects
the final 20%.

I'm really looking forward to having a UX testing person on staff who can
facilitate more testing. I find it very valuable and would like to do more
in the future.

- Trevor


On Thu, Mar 6, 2014 at 4:06 PM, David Gerard  wrote:

> On 6 March 2014 23:47, Steven Walling  wrote:
>
> > more automated remote testing and is $35/test (this is really cheap since
> > the going US rate for an in-person test is something like a $50 Amazon
> gift
> > card).
>
>
> off-topic on off-topic: Offer swag instead. Wikipedia branded stuff is
> presently uncommon enough to *delight* people. I remember doing a
> usability test for Ubuntu and accepting some stickers and a £2 USB
> stick rather than a £40 cheque ... I could tell it was a £2 USB
> because it stopped working 6 months later.
>
> Anyway. Work the swag angle. Puzzle globes. People LOVE that stuff.
>
>
> - d.
>

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Chris McMahon
On Thu, Mar 6, 2014 at 4:54 PM, Tyler Romeo  wrote:

> On Thu, Mar 6, 2014 at 6:34 PM, Brion Vibber 
> wrote:
>
> > Is there anything specific in the communications involved that you found
> > was problematic, other than a failure to include a backlink in the
> initial
> > revert?
> >
>
> I think this entire thing was a big failure in basic software development
> and systems administration. If MobileFrontend is so tightly coupled with
> the desktop login form, that is a problem with MobileFrontend. In addition,
> the fact that a practically random code change was launched into production
> an hour later without so much as a test...


It was in fact our automated browser test suite that alerted us that a
change to some other area of the software overnight had broken some central
MobileFrontend functionality.  It was rather unexpected, and we moved
quickly to identify the issue and revert it in the short amount of time we
had before the code went to production.


> That's the kind of thing that
> gets people fired at other companies.
>
> But apparently I'm the only person that thinks this, so the WMF can feel
> free to do what it wants.


That sort of thing is not necessary.

-Chris

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Erik Bernhardson
On Thu, Mar 6, 2014 at 4:05 PM, Brion Vibber  wrote:

> On Thu, Mar 6, 2014 at 3:54 PM, Tyler Romeo  wrote:
>
> > On Thu, Mar 6, 2014 at 6:34 PM, Brion Vibber 
> > wrote:
> >
> > > Is there anything specific in the communications involved that you
> found
> > > was problematic, other than a failure to include a backlink in the
> > initial
> > > revert?
> > >
> >
> > I think this entire thing was a big failure in basic software development
> > and systems administration. If MobileFrontend is so tightly coupled with
> > the desktop login form, that is a problem with MobileFrontend.
>
>
> As noted already in the thread, the commit was broken for non-JS users as
> well as for mobile; it's not a deficiency in MobileFrontend specifically.
>
>
> > In addition,
> > the fact that a practically random code change was launched into
> production
> > an hour later without so much as a test...
>
>
> Aha -- this seems to strike to the heart of the matter. Would you agree
> this incident has more to do with problems with the branch deployment
> scheduling than with commit warring?
>
> -- brion
>
Does core have any policies related to merging?  The core features team
has adopted a methodology (although slightly different) that we learned of
from the VE team.  Essentially, +2 for 24 hours before a deployment branch
is cut is limited to fixes for bugs that were introduced since the last
deployment branch was cut, or reverts for patches that turned out to not be
ready for deployment.  Core is certainly bigger and with more participants,
but perhaps a conversation about when to +2 and how that affects the
deployment process would be beneficial?


> That's the kind of thing that
> > gets people fired at other companies.
> >
> > But apparently I'm the only person that thinks this, so the WMF can feel
> > free to do what it wants.
> >
> > *-- *
> > *Tyler Romeo*
> > Stevens Institute of Technology, Class of 2016
> > Major in Computer Science

Re: [Wikitech-l] Usability testing

2014-03-06 Thread David Gerard
On 6 March 2014 23:47, Steven Walling  wrote:

> more automated remote testing and is $35/test (this is really cheap since
> the going US rate for an in-person test is something like a $50 Amazon gift
> card).


off-topic on off-topic: Offer swag instead. Wikipedia branded stuff is
presently uncommon enough to *delight* people. I remember doing a
usability test for Ubuntu and accepting some stickers and a £2 USB
stick rather than a £40 cheque ... I could tell it was a £2 USB
because it stopped working 6 months later.

Anyway. Work the swag angle. Puzzle globes. People LOVE that stuff.


- d.


Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Rob Lanphier
Hi Tyler,

I understand you're frustrated here.  As Jon says: "communication in the
wikiverse is hard".  Running a top 10 website is also hard.

Others have covered many of the other points, but I wanted to make sure I
addressed one of the points that hasn't been covered yet:

On Thu, Mar 6, 2014 at 2:08 PM, Tyler Romeo  wrote:

> Changes to MediaWiki core
> should not have to take into account extensions that incorrectly rely on
> its interface, and a breakage in a deployed extension should result in an
> undeployment and a fix to that extension, not a revert of the core patch.


I wholeheartedly disagree with this.  Changes to core should definitely
take into account uses by widely-deployed extensions (where
"widely-deployed" can either mean by installation count or by end-user
count), even if the usage is "incorrect".  We need to handle these things
on a case by case basis, but in general, *all* of the following are options
when a core change introduces an unintentional extension incompatibility:
1.  Fix the extension quickly
2.  Revert the change
3.  Undeploy the extension until it's fixed to be compatible with core

#3 is last and least.  It should be exceedingly rare that we undeploy a
long-running extension because of a newly-introduced core change.

It is often difficult to know exactly how core "should" be used, and
sometimes, extension developers need to do things that seem hacky or wrong
to achieve the desired result.  It will often be the case that we'll need
to continue to support misfeatures because breaking them would be too
disruptive.  Over time, if we improve our practices, this sort of tradeoff
will need to happen less-and-less, but it does need to happen.

Drawing the analogy to wiki world, what has happened here is exactly this:
https://en.wikipedia.org/wiki/Wikipedia:BOLD,_revert,_discuss_cycle

We're in the discuss part, which is actually where we should be.

Rob

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Brion Vibber
On Thu, Mar 6, 2014 at 3:54 PM, Tyler Romeo  wrote:

> On Thu, Mar 6, 2014 at 6:34 PM, Brion Vibber 
> wrote:
>
> > Is there anything specific in the communications involved that you found
> > was problematic, other than a failure to include a backlink in the
> initial
> > revert?
> >
>
> I think this entire thing was a big failure in basic software development
> and systems administration. If MobileFrontend is so tightly coupled with
> the desktop login form, that is a problem with MobileFrontend.


As noted already in the thread, the commit was broken for non-JS users as
well as for mobile; it's not a deficiency in MobileFrontend specifically.


> In addition,
> the fact that a practically random code change was launched into production
> an hour later without so much as a test...


Aha -- this seems to strike to the heart of the matter. Would you agree
this incident has more to do with problems with the branch deployment
scheduling than with commit warring?

-- brion


That's the kind of thing that
> gets people fired at other companies.
>
> But apparently I'm the only person that thinks this, so the WMF can feel
> free to do what it wants.
>
> *-- *
> *Tyler Romeo*
> Stevens Institute of Technology, Class of 2016
> Major in Computer Science

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Ryan Lane
On Thu, Mar 6, 2014 at 3:54 PM, Tyler Romeo  wrote:

> I think this entire thing was a big failure in basic software development
> and systems administration. If MobileFrontend is so tightly coupled with
> the desktop login form, that is a problem with MobileFrontend. In addition,
> the fact that a practically random code change was launched into production
> an hour later without so much as a test... That's the kind of thing that
> gets people fired at other companies.
>
>
At shitty companies maybe.

Things break. You do a post-mortem and track the things that lead to an
outage and try to make sure it doesn't break again, ideally by adding
automated tests, if possible.

- Ryan

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Tyler Romeo
On Thu, Mar 6, 2014 at 6:34 PM, Brion Vibber  wrote:

> Is there anything specific in the communications involved that you found
> was problematic, other than a failure to include a backlink in the initial
> revert?
>

I think this entire thing was a big failure in basic software development
and systems administration. If MobileFrontend is so tightly coupled with
the desktop login form, that is a problem with MobileFrontend. In addition,
the fact that a practically random code change was launched into production
an hour later without so much as a test... That's the kind of thing that
gets people fired at other companies.

But apparently I'm the only person that thinks this, so the WMF can feel
free to do what it wants.

*-- *
*Tyler Romeo*
Stevens Institute of Technology, Class of 2016
Major in Computer Science

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread George Herbert
To take a couple of steps back...

This happened because testing isn't robust enough?

That should be discussed and followed up on.



On Thu, Mar 6, 2014 at 3:34 PM, Brion Vibber  wrote:

> On Thu, Mar 6, 2014 at 2:08 PM, Tyler Romeo  wrote:
>
> > On Thu, Mar 6, 2014 at 4:15 PM, Steven Walling  > >wrote:
> >
> > > If your patch causes a serious UX regression like this, it's going to
> get
> > > reverted. The core patch involved was being deployed to Wikimedia
> sites /
> > > impacting MobileFrontEnd users today. If we had more time in the
> > deployment
> > > cycle to wait and the revert was a simple disagreement, then waiting
> > would
> > > be appropriate. It is obvious in this case no one tested the core
> change
> > on
> > > mobile. That's unacceptable.
> > >
> >
> > You quoted my email, but didn't seem to read it. Changes to MediaWiki
> core
> > should not have to take into account extensions that incorrectly rely on
> > its interface, and a breakage in a deployed extension should result in an
> > undeployment and a fix to that extension, not a revert of the core patch.
> >
>
> Changes to MediaWiki core should avoid breaking Wikipedia in production,
> especially since we aggressively push new versions of core and extensions
> to Wikipedia every few weeks.
>
> For years and years and years we've been very free about reverting things
> that break. No one, including old-timers like me and Tim, has the "right"
> to not have something reverted. If it needs to be reverted it will be
> reverted -- there is nothing personal in a revert. Remember it can always
> be put back once all problems are resolved.
>
> Is there anything specific in the communications involved that you found
> was problematic, other than a failure to include a backlink in the initial
> revert?
>
> -- brion



-- 
-george william herbert
george.herb...@gmail.com

[Wikitech-l] Usability testing

2014-03-06 Thread Steven Walling
From: David Gerard 
Date: Thu, Mar 6, 2014 at 9:44 PM
Subject: Re: [Wikitech-l] Should MediaWiki CSS prefer non-free fonts?
To: Wikimedia developers 

(Veering off topic: So what does WMF use for a usability lab, anyway?)


...not sure what Kaldari did. In this case, he may have simply sat down
with the UX designers and done a test in person.

We do not have a usability testing lab on-site in San Francisco, and
typically prefer to do remote usability tests. Sometimes we do this
"manually": we send out a survey[1] and then run a Google Hangout which we
record for later. This is good since it is guided by the person who wrote
the test script, so they can adapt to what the user is doing/failing to do.
It takes a lot more leg work though.

More often, we write a testing script and use usertesting.com, which offers
more automated remote testing at $35/test (this is really cheap, since
the going US rate for an in-person test is something like a $50 Amazon gift
card). The service uses people from all over the English-speaking world who
have a variety of levels of technical expertise, and the tests are recorded
for viewing after they're completed.[2]

The UX team is actually in the process of hiring a UX researcher, so expect
to hear more about this kind of qualitative research soon.

Steven

1. We recently did this kind of recruitment and testing for article drafts
work. https://www.mediawiki.org/wiki/Draft_namespace/Usability_testing and
the /Results subpage
2. This kind of testing is something we used during the account creation
redesign
https://www.mediawiki.org/wiki/Account_creation_user_experience/User_testing




Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Brion Vibber
On Thu, Mar 6, 2014 at 2:08 PM, Tyler Romeo  wrote:

> On Thu, Mar 6, 2014 at 4:15 PM, Steven Walling  >wrote:
>
> > If your patch causes a serious UX regression like this, it's going to get
> > reverted. The core patch involved was being deployed to Wikimedia sites /
> > impacting MobileFrontEnd users today. If we had more time in the
> deployment
> > cycle to wait and the revert was a simple disagreement, then waiting
> would
> > be appropriate. It is obvious in this case no one tested the core change
> on
> > mobile. That's unacceptable.
> >
>
> You quoted my email, but didn't seem to read it. Changes to MediaWiki core
> should not have to take into account extensions that incorrectly rely on
> its interface, and a breakage in a deployed extension should result in an
> undeployment and a fix to that extension, not a revert of the core patch.
>

Changes to MediaWiki core should avoid breaking Wikipedia in production,
especially since we aggressively push new versions of core and extensions
to Wikipedia every few weeks.

For years and years and years we've been very free about reverting things
that break. No one, including old-timers like me and Tim, has the "right"
to not have something reverted. If it needs to be reverted it will be
reverted -- there is nothing personal in a revert. Remember it can always
be put back once all problems are resolved.

Is there anything specific in the communications involved that you found
was problematic, other than a failure to include a backlink in the initial
revert?

-- brion

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Jon Robson
Communication in the wikiverse is hard.

To clarify, this is _not_ an issue with MobileFrontend. The same
problem affects users without JavaScript. There was a fundamental
problem with this patch that sadly didn't get caught during code
review. It broke the workflow of mobile on an important page in
production which is a bad thing. On a side note it saddens me that
mobile gets very little attention during code review on essential
parts of our infrastructure. If anyone has any ideas on how this can
be remedied please let me know.

Moan about the deployment train:
The code was merged at the final hour before a deployment train (this
is another issue: our deployment train doesn't distinguish between
patches that have been sitting on master for a week and patches that
have been sitting there for an hour). Had this been merged on a
Thursday morning we would have had more luxury and a revert maybe
could have been avoided (but I still don't think that patch was in a
mergeable format).

In answer to a few statements you made...

"Wikipedia has a notorious policy against edit warring, where users
are encouraged to discuss changes and achieve consensus..."
Agreed, but that consensus should also be achieved during review. It
seems during the code review process [1] there were open concerns
that had been raised and a -1 from Steven that was unaddressed. In
this case we have the luxury to discuss this more and explore problems
and in my opinion it was not worthy of a rushed merge. Yes we can't
please everyone but it would have been good to get more people
involved in this conversation.

"not everybody is subscribed to mobile-l, so you cannot expect the
original reviewers to see or know about it"
Yes, and posts to wikimedia-l go straight to my archive, so I usually
miss them; I wasn't aware of this mail until someone pointed me to
it. Communicating so everyone gets a message is hard :-).

That said I did screw up here though in that I didn't comment on the
patchset with a link to the mobile-l mailing list.  In fact I started
to and then got distracted by a conversation and forgot to hit save. I
will be more careful in future. All conversations about code should
start in code and I'm sorry I didn't adhere to this rule this time.

[1] https://gerrit.wikimedia.org/r/#/c/114400/

On Thu, Mar 6, 2014 at 2:08 PM, Tyler Romeo  wrote:
> On Thu, Mar 6, 2014 at 4:15 PM, Steven Walling 
> wrote:
>
>> If your patch causes a serious UX regression like this, it's going to get
>> reverted. The core patch involved was being deployed to Wikimedia sites /
>> impacting MobileFrontEnd users today. If we had more time in the deployment
>> cycle to wait and the revert was a simple disagreement, then waiting would
>> be appropriate. It is obvious in this case no one tested the core change on
>> mobile. That's unacceptable.
>>
>
> You quoted my email, but didn't seem to read it. Changes to MediaWiki core
> should not have to take into account extensions that incorrectly rely on
> its interface, and a breakage in a deployed extension should result in an
> undeployment and a fix to that extension, not a revert of the core patch.
>
> *-- *
> *Tyler Romeo*
> Stevens Institute of Technology, Class of 2016
> Major in Computer Science



-- 
Jon Robson
* http://jonrobson.me.uk
* https://www.facebook.com/jonrobson
* @rakugojon


[Wikitech-l] GSOC project: calculating the quality of editors and content (was Guidance for the Project Idea for GSOC 2014)

2014-03-06 Thread Quim Gil
Hi Devender, I'm not a developer but I hope my feedback as editor is useful.

On 03/06/2014 12:02 AM, Devender wrote:
> I want to implement a ranking system of the editors(especially 3rd party
> editors) of the Wikipedia through which viewers can differentiate between
> the content of the page. 

What do you mean by "3rd party editors"?


> This ranking system will increase the content reliability

Content reliability is indeed an interesting value for wiki content,
especially in projects like Wikipedia. However, basing the reliability
of the content on the quantity of edits done by an editor is risky, to
say the least.

Reliability is based on quality, not quantity. If you could find a way
to assess the quality of the edits of an editor (and therefore the
reliability of an editor)... then maybe you could provide a hint about
the reliability of an article based on the reliability of the editors
that edited it.
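For what it's worth, the weighting Quim describes could be sketched like this. It is a toy model only: the function name, the per-editor scores, and the byte counts are all invented for illustration, and nothing here comes from an actual MediaWiki API.

```python
# Toy sketch: if per-editor reliability scores existed, an article score
# could weight each editor's score by how much of the current text
# survives from that editor. All inputs are hypothetical.
def article_reliability(contributions):
    """contributions: list of (editor_reliability, bytes_surviving) pairs."""
    total = sum(surviving for _, surviving in contributions)
    if total == 0:
        return 0.0  # no surviving text, nothing to judge
    return sum(score * surviving
               for score, surviving in contributions) / total

# One trusted editor wrote most of the text, a newcomer wrote a little:
print(article_reliability([(0.9, 1200), (0.4, 300)]))  # -> 0.8
```

Even this trivial model shows the problem Quim raises: the hard part is not the weighting, it is producing the per-editor scores in the first place, and the same score cannot tell whether a reliable editor was polishing a good article or triaging a bad one.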

Even in that case it might be complex to figure out when the reliable
editors are acting to add more quality to an already good article, or to
fix the worst issues of a horrible article. When they add and when they
revert...

And of course it may also happen that editors not identified as reliable
produce great content, as is often the case with editors who are very
specialized in a certain topic and have a short history of excellent edits.

> 2. Make the different color of the line/paragraph if the content of the
> line/paragraph is very new and its reliability score is less.

Even if there is some probability that older paragraphs that have
survived many edits intact are somewhat reliable, it is too easy to find
examples disproving this point. This is true especially in the articles
needing more a quality assessment, those that are not edited often and
are not watched by many experienced editors.


> Please let me if I should go with this idea. If not, guide me how to start
> working on different idea.

This is just my personal opinion and I'm not an expert. Maybe someone
else will have a different, more positive opinion about your project, or
advice on how to re-focus it.

In general, students proposing new projects have more chances of success
if they start pitching and testing their ideas months before the GSoC.
Add a factor of x5 at least if your main target is a Wikimedia project.

If you don't get mentors for your project very soon, then the safest
option is to choose a project at
https://www.mediawiki.org/wiki/Summer_of_Code_2014 and go for it.

Thank you for your interest in contributing to Wikimedia. Also thank you
for following my suggestion to post at wikitech-l. I hope you will get
more feedback from other people on this list.

-- 
Quim Gil
Technical Contributor Coordinator @ Wikimedia Foundation
http://www.mediawiki.org/wiki/User:Qgil




Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Erik Bernhardson
On Thu, Mar 6, 2014 at 2:08 PM, Tyler Romeo  wrote:

> On Thu, Mar 6, 2014 at 4:15 PM, Steven Walling  >wrote:
>
> > If your patch causes a serious UX regression like this, it's going to get
> > reverted. The core patch involved was being deployed to Wikimedia sites /
> > impacting MobileFrontEnd users today. If we had more time in the
> deployment
> > cycle to wait and the revert was a simple disagreement, then waiting
> would
> > be appropriate. It is obvious in this case no one tested the core change
> on
> > mobile. That's unacceptable.
> >
>
> You quoted my email, but didn't seem to read it. Changes to MediaWiki core
> should not have to take into account extensions that incorrectly rely on
> its interface, and a breakage in a deployed extension should result in an
> undeployment and a fix to that extension, not a revert of the core patch.
>
>
I don't think core is in any way special here.  It doesn't matter what
broke what, the whole is much more important than the individual parts.  If
the patch to core is what broke things reverting it is the appropriate
course of action.

*-- *
> *Tyler Romeo*
> Stevens Institute of Technology, Class of 2016
> Major in Computer Science

[Wikitech-l] Wikipedia App Reboots, HTTPS, and Wikipedia Zero (was Re: [WikimediaMobile] Wikimedia Commons mobile photo uploader app updated on iOS and Android)

2014-03-06 Thread Adam Baso
Another note in case you missed it earlier. If you're looking in general to
test the Wikipedia app reboot, at the moment the Android APK can be
downloaded from
https://releases.wikimedia.org/mobile/android/apps-android-wikipedia-sprint25.apk
and bugs can be filed via Bugzilla. The iOS build is currently internal
due to installation limits, although simulator and debugging stuff can be
done on the latest beta of Xcode.

I also forgot to mention my peer Yuri's great work! The guy knuckled down
to considerably revise Varnish scripts, reviewed and helped me improve
code, and offered really good advice on API-app interaction. Thanks Yuri!

-Adam


On Wed, Mar 5, 2014 at 4:16 PM, Adam Baso  wrote:

> I realized I should be clear that the "rebooted apps" I mention are "the
> future Wikipedia mobile app"s mentioned earlier in the thread. Sorry for
> any confusion.
>
> -Adam
>
>
> On Wed, Mar 5, 2014 at 11:43 AM, Adam Baso  wrote:
>
>> +mobile-l
>>
>> Greetings. Rupert, an update!
>>
>> The rebooted Android (Android 2.3+) and iOS (iOS 6+) apps will have
>> Wikipedia Zero flourishes built into them, making it possible for the user
>> to know whether the app access is free of data usage charges. The rebooted
>> apps are tentatively slated for store submission at the end of the month.
>> The flourishes will hinge on each operator's zero-rating of HTTPS.
>>
>> Likewise, HTTPS contributory features are about to be introduced on the
>> Wikipedia Zero mobile web experience as well for operators that zero-rate
>> HTTPS.
>>
>> WMF is starting the work with partner operators to add support for
>> zero-rating of HTTPS. There will be, at least, technical hurdles
>> (networking equipment architecture varies) in this transition, but it's
>> underway! Indeed, we have some carriers that have noted support for HTTPS
>> zero-rating already.
>>
>> I'm very much grateful to Brion, Yuvi, and Monte for their assistance
>> while I added code to the Android and iOS platforms, and am happy to get to
>> work with them more while putting final touches in place this month. Props
>> to Faidon, Mark, and Brandon in Ops Engineering as well on helping us
>> overcome some rather non-trivial hurdles in order to retain good
>> performance and maintainability while adding HTTPS support.
>>
>> -Adam
>>
>>
>> On Mon, Aug 26, 2013 at 3:34 PM, Brion Vibber wrote:
>>
>>> On Mon, Aug 26, 2013 at 8:19 AM, Adam Baso  wrote:
>>>
>>> > Rupert, I saw your question regarding Wikipedia Zero. Wikipedia Zero is
>>> > currently targeted for the mobile web, but I'll take this question
>>> back to
>>> > the business team as to whether we'd be able to support zero-rating of
>>> apps
>>> > traffic at some point in the future, at least in locales where moderate
>>> > bandwidth is available.
>>> >
>>>
>>> I think that once the zero-rating is switched to support HTTPS by using
>>> IP-based instead of Deep Packet Inspection-based HTTP sniffing, ISP
>>> partners wouldn't actually be able to distinguish between mobile web and
>>> mobile apps content unless we actively choose to make them use separate
>>> IPs
>>> and domain names.
>>>
>>> Especially if, as we think we're going to, the future Wikipedia mobile
>>> app
>>> will consist mostly of native code widgets and modules that plug into the
>>> web site embedded in a web control... it'll be loading mostly the same
>>> web
>>> pages from the same servers, but running a different mix of JavaScript.
>>>
>>> -- brion
>>>
>>
>>
>

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Benjamin Lees
Special:Code automatically generated a list of followup revisions for each
revision (based on linking/mentioning).  It would be nice to have that in
Gerrit.

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Tyler Romeo
On Thu, Mar 6, 2014 at 4:15 PM, Steven Walling wrote:

> If your patch causes a serious UX regression like this, it's going to get
> reverted. The core patch involved was being deployed to Wikimedia sites /
> impacting MobileFrontEnd users today. If we had more time in the deployment
> cycle to wait and the revert was a simple disagreement, then waiting would
> be appropriate. It is obvious in this case no one tested the core change on
> mobile. That's unacceptable.
>

You quoted my email, but didn't seem to read it. Changes to MediaWiki core
should not have to take into account extensions that incorrectly rely on
its interface, and a breakage in a deployed extension should result in an
undeployment and a fix to that extension, not a revert of the core patch.

*-- *
*Tyler Romeo*
Stevens Institute of Technology, Class of 2016
Major in Computer Science

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Chris McMahon
In over two years at WMF I have never been involved in a discussion like
this, but here goes:

In this case, I think it was entirely appropriate to revert immediately and
pick up the pieces later.  The source of the code is immaterial; if Tim
Starling or Brion Vibber had merged this, we would have done exactly the
same thing.

As Steven noted, the immediate issue was that it created a serious problem
with the mobile account creation process.  This blocked our ability to test
other aspects of mobile account creation and login that have changed
recently.  And, since this occurred on Thursday morning in the run-up to
the weekly deployment, we had little time to prevent this going live to
production.

Beyond that, there are serious concerns with any feature that a) requires
javascript support in the client in order to create an account on the wiki
and b) does not honor the characters that the user types in the username
and password fields.  I know of at least one historical instance where
violating b) caused a significant problem in UniversalLanguageSelector.
We prevented the ULS problem from going live to production at that time
as well.

-Chris




On Thu, Mar 6, 2014 at 1:29 PM, Tyler Romeo  wrote:

> Hi everybody,
>
> I cannot believe I have to say something about this, but I guess it's no
> surprise.
>
> Wikipedia has a notorious policy against edit warring, where users are
> encouraged to discuss changes and achieve consensus before blindly
> reverting. This applies even more so to Gerrit, since changes to software
> have a lot bigger effect.
>
> Here's a nice example:
> https://gerrit.wikimedia.org/r/114400
> https://gerrit.wikimedia.org/r/117234
> https://gerrit.wikimedia.org/r/117247
>
> Some key points to note here:
> * The revert commit was not linked to on the original commit
> * The time between the revert patch being uploaded and +2ed was a mere two
> minutes
> * All the reviewers on the revert patch were also reviewers on the original
> patch
>
> This is unacceptable behavior, and is extremely disrespectful to the
> developers here. If you are going to revert a patch for reasons other than
> a blatant code review issue (such as a fatal error or the likes), you
> should *at the very least* give the original patch reviewers time to
> understand why the patch is being reverted and give their input on the
> matter. Otherwise it defeats the entire point of the code review process
> and Gerrit in the first place.
>
> The argument being made in this specific case is that the change broke the
> workflow of mobile, and that the revert was announced on mobile-l. This is
> not sufficient for a number of reasons:
>
> 1) not everybody is subscribed to mobile-l, so you cannot expect the
> original reviewers to see or know about it
> 2) this is an issue with MobileFrontend, not MediaWiki core
> 3) code being merged does not automatically cause a deployment, and if code
> being deployed breaks something in production, it is the operations team's
> job to undeploy that change
>
> Overall, the lesson to take away here is to be more communicative with
> other developers, especially when you are negating their changes or
> decisions.
>
> Thanks in advance,
> *-- *
> *Tyler Romeo*
> Stevens Institute of Technology, Class of 2016
> Major in Computer Science

Re: [Wikitech-l] Should MediaWiki CSS prefer non-free fonts?

2014-03-06 Thread David Gerard
On 6 March 2014 20:21, Quim Gil  wrote:

> Ryan, *thank you* very much for the research, and for contacting the
> Liberation Sans maintainers with specific bugs.


Seconded! Arguing is one thing - going out and finding out
what people actually think is quite another.

Would it be possible to run a test like this - for the squishy,
subjective feelings that, nevertheless, are the results we actually
want to achieve - with a reasonable sample of ordinary users? I'm
surprised you got results like this from designers (who probably
*knew* what fonts they were looking at), but tests on normal people
would also be good. This would be *really good* usability data, highly
suitable for defending a decision from design obsessives, unreconstructed
Stallmanites, or anyone in between ...

(Veering off topic: So what does WMF use for a usability lab, anyway?)


- d.


[Wikitech-l] Captcha Idea Proposal for GSOC 2014

2014-03-06 Thread Aalekh Nigam
Hello,

First of all, sorry for the inappropriate way of presenting the content
earlier. As advised by community members, I present my ideas regarding
multilingual, usable, and effective captchas on my proposal page for
GSoC 2014, given here:
https://www.mediawiki.org/wiki/User:AalekhN/GSoC_proposal_2014

I therefore request all members to please go through the proposal and give
your viewpoints/advice regarding its content.

Thank you,
Aalekh Nigam "aalekhN"
https://www.mediawiki.org/wiki/User:AalekhN

Re: [Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Steven Walling
On Thu, Mar 6, 2014 at 8:29 PM, Tyler Romeo  wrote:

> 1) not everybody is subscribed to mobile-l, so you cannot expect the
> original reviewers to see or know about it
> 2) this is an issue with MobileFrontend, not MediaWiki core
> 3) code being merged does not automatically cause a deployment, and if code
> being deployed breaks something in production, it is the operations team's
> job to undeploy that change
>

If your patch causes a serious UX regression like this, it's going to get
reverted. The core patch involved was being deployed to Wikimedia sites /
impacting MobileFrontend users today. If we had more time in the deployment
cycle to wait and the revert was a simple disagreement, then waiting would
be appropriate. It is obvious in this case no one tested the core change on
mobile. That's unacceptable.

And yes, Jon should have made sure the revert and the original patch were
cross-referenced. I'm sure he'll do that next time he commits a revert.

Steven

[Wikitech-l] Gerrit Commit Wars

2014-03-06 Thread Tyler Romeo
Hi everybody,

I cannot believe I have to say something about this, but I guess it's no
surprise.

Wikipedia has a well-known policy against edit warring, where users are
encouraged to discuss changes and achieve consensus before blindly
reverting. This applies even more so to Gerrit, since changes to software
have a much bigger effect.

Here's a nice example:
https://gerrit.wikimedia.org/r/114400
https://gerrit.wikimedia.org/r/117234
https://gerrit.wikimedia.org/r/117247

Some key points to note here:
* The revert commit was not linked to on the original commit
* The time between the revert patch being uploaded and +2ed was a mere two
minutes
* All the reviewers on the revert patch were also reviewers on the original
patch

This is unacceptable behavior, and is extremely disrespectful to the
developers here. If you are going to revert a patch for reasons other than
a blatant code review issue (such as a fatal error or the like), you
should *at the very least* give the original patch reviewers time to
understand why the patch is being reverted and give their input on the
matter. Otherwise it defeats the entire point of the code review process
and Gerrit in the first place.

The argument being made in this specific case is that the change broke the
workflow of mobile, and that the revert was announced on mobile-l. This is
not sufficient for a number of reasons:

1) not everybody is subscribed to mobile-l, so you cannot expect the
original reviewers to see or know about it
2) this is an issue with MobileFrontend, not MediaWiki core
3) code being merged does not automatically cause a deployment, and if code
being deployed breaks something in production, it is the operations team's
job to undeploy that change

Overall, the lesson to take away here is to be more communicative with
other developers, especially when you are negating their changes or
decisions.

Thanks in advance,
*-- *
*Tyler Romeo*
Stevens Institute of Technology, Class of 2016
Major in Computer Science

Re: [Wikitech-l] Should MediaWiki CSS prefer non-free fonts?

2014-03-06 Thread Quim Gil
Ryan, *thank you* very much for the research, and for contacting the
Liberation Sans maintainers with specific bugs.

On 03/05/2014 02:00 PM, Ryan Kaldari wrote:
> What do people think of the following stack:
> 
> Arimo, Liberation Sans, Helvetica Neue, Helvetica, Arial, sans-serif;


After so much discussion, it would be useful to have a table showing
which fonts are rendered by the most popular browsers on the most
popular platforms [1] when you specify

a) sans-serif;

b) Arimo, Liberation Sans, sans-serif;

c) Arimo, Liberation Sans, Helvetica Neue, Helvetica, Arial, sans-serif;


This way we can see exactly what happens when you define fonts or not,
and when you define only free fonts or proprietary ones as well. We will
also be able to see exactly where this picture differs from the vision of
the promoters of proprietary fonts.

I think this table is going to be useful to defend the decision,
regardless of which decision is made.

Based on the results we might conclude that e.g. there is no need to
specify Helvetica / Arial because they are already picked by the
browsers that would display them when explicitly specified.

We might also find out where Helvetica Neue appears or not based on
each option, and this would open the possibility of filing bugs or
providing feedback to the specific projects.

Imagine if all this discussion could be solved by proposing a patch to
the Webkit project, just defining Helvetica Neue as a fallback for Arimo
and Liberation Sans.
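For reference, the stacks above follow a simple serialization rule in CSS: family names containing whitespace should be quoted, and the generic family keyword goes last. The helper below is purely illustrative (it is not part of MediaWiki or any browser), but it encodes that rule:

```javascript
// Build a CSS font-family value from a list of font names.
// Names containing spaces (e.g. "Liberation Sans") are quoted;
// the generic fallback keyword (sans-serif) is appended unquoted.
function buildFontStack(fonts, generic = "sans-serif") {
  const quoted = fonts.map(
    (name) => (/\s/.test(name) ? `"${name}"` : name)
  );
  return [...quoted, generic].join(", ");
}

// Option (c) from the list above:
console.log(
  buildFontStack(["Arimo", "Liberation Sans", "Helvetica Neue", "Helvetica", "Arial"])
);
// Arimo, "Liberation Sans", "Helvetica Neue", Helvetica, Arial, sans-serif
```

The browser walks this list left to right and uses the first installed family, which is why the table proposed above (per browser, per platform) pins down the actual rendering outcome.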


[1] Proposed browsers:

Chrome (Windows, Mac)
Firefox (Windows, Mac)
MSIE
Safari
Android
iPhone
iPad

http://stats.wikimedia.org/wikimedia/squids/SquidReportClients.htm

-- 
Quim Gil
Technical Contributor Coordinator @ Wikimedia Foundation
http://www.mediawiki.org/wiki/User:Qgil




Re: [Wikitech-l] ogv.js - JavaScript video decoding proof of concept

2014-03-06 Thread Derk-Jan Hartman

On 6 Mar 2014, at 14:26, Brion Vibber  wrote:
> 
> Spent some time tonight on audio/video sync; it's now working both with
> native Web Audio API and with the Flash audio shim.
> 
> Check out a video with someone talking, and marvel at the synchronization!
> https://brionv.com/misc/ogv.js/demo/#file=Sneak_Preview_-_Wikipedia_VisualEditor.webm
> (Hi Trevor!)

Wow, really impressive, Brion!


> I also snuck in a size selection override, so you can pick the larger 480p
> or original files (note that webm originals don't play as only a Theora
> decoder is included so far), or down to the tiny 160p, to see the quality
> and CPU usage differences.
> 
> If there's no objection to use of a Flash shim, I'll keep doing some
> experiments in that direction and see if I can rig up a fully-Flash version
> that works in old versions of Internet Explorer as well. (The main ogv.js
> needs no Flash as long as needed APIs are available, and even runs on iOS
> though it's a bit slow for videos.)
> 

Well, even if there are objections, it might be useful to make sure that it
becomes a more suitable solution for other parties as well. More interested
parties would also mean more people willing to invest in the codebase.
Should be a win.

DJ


>> 
>> Audio-only files run great on iOS 7 devices. The 160p video transcodes we
>> experimentally enabled recently run *great* on a shiny 64-bit iPhone 5s,
>> but are still slightly too slow on older models.
>> 
>> 
>> The Flash audio shim for IE is a very simple ActionScript3 program which
>> accepts audio samples from the host page and outputs them -- no proprietary
>> or patented codecs are in use. It builds to a .swf with the open-source
>> Apache Flex SDK, so no proprietary software is needed to create or update
>> it.
>> 
>> I'm also doing some preliminary research on a fully Flash version, using
>> the Crossbridge compiler[3] for the C codec libraries. Assuming it performs
>> about as well as the JS does on modern browsers, this should give us a
>> fallback for old versions of IE to supplement or replace the Cortado Java
>> player... Before I go too far down that rabbit hole though I'd like to get
>> peoples' opinions on using Flash fallbacks to serve browsers with open
>> formats.
>> 
>> As long as the scripts are open source and we're building them with an
>> open source toolchain, and the entire purpose is to be a shim for missing
>> browser feature support, does anyone have an objection?
>> 
>> [3] https://github.com/adobe-flash/crossbridge
>> 
>> -- brion
>> 
>> 
>> On Mon, Oct 7, 2013 at 9:01 AM, Brion Vibber wrote:
>> 
>>> TL;DR SUMMARY: check out this short, silent, black & white video:
>>> https://brionv.com/misc/ogv.js/demo/ -- anybody interested in a side
>>> project on in-browser audio/video decoding fallback?
>>> 
>>> 
>>> One of my pet peeves is that we don't have audio/video playback on many
>>> systems, including default Windows and Mac desktops and non-Android mobile
>>> devices, which don't ship with Theora or WebM video decoding.
>>> 
>>> The technically simplest way to handle this is to transcode videos into
>>> H.264 (.mp4 files) which is well supported by the troublesome browsers.
>>> Unfortunately there are concerns about the patent licensing, which has held
>>> us up from deploying any H.264 output options though all the software is
>>> ready to go...
>>> 
>>> While I still hope we'll get that resolved eventually, there is an
>>> alternative -- client-side software decoding.
>>> 
>>> 
>>> We have used the 'Cortado' Java applet
>>> to do fallback software decoding in the browser for a few years, but Java
>>> applets are aggressively being deprecated on today's web:
>>> 
>>> * no Java applets at all on major mobile browsers
>>> * Java usually requires a manual install on desktop
>>> * Java applets disabled by default for security on major desktop browsers
>>> 
>>> Luckily, JavaScript engines have gotten *really fast* in the last few
>>> years, and performance is getting well in line with what Java applets can
>>> do.
>>> 
>>> 
>>> As an experiment, I've built Xiph's ogg, vorbis, and theora C libraries
>>> cross-compiled to JavaScript using emscripten, and written a wrapper
>>> that decodes Theora video from an .ogv stream and
>>> draws the frames into a canvas element:
>>> 
>>> * demo: https://brionv.com/misc/ogv.js/demo/
>>> * code: https://github.com/brion/ogv.js
>>> * blog & some details:
>>> https://brionv.com/log/2013/10/06/ogv-js-proof-of-concept/
>>> 
>>> It's just a proof of concept -- the colorspace conversion is incomplete
>>> so it's grayscale, there's no audio or proper framerate sync, and it
>>> doesn't really stream data properly. But I'm pleased it works so far!
>>> (Currently it breaks in IE, but I think I can fix that at least for 10/11,
>>> possibly for 9. Probably not for 6/7/8.)
>>> 
>>> Performance on iOS devices isn't great, but is better with lower
>>> resolution file

Re: [Wikitech-l] Zürich Hackathon, hacking on what?

2014-03-06 Thread Amir Ladsgroup
I made this topic:
https://www.mediawiki.org/wiki/Z%C3%BCrich_Hackathon_2014/Topics#How_to_help_pywikibot

I'll send a note to pywikipedia-l and discuss how, and who wants,
to present at the Hackathon.

Best

On 3/6/14, Quim Gil  wrote:
> On 03/06/2014 04:04 AM, Amir Ladsgroup wrote:
>> I'll be there if I can get the visa in time and I love to have a
>> presentation about it but I'm not sure others will like the idea or
>> not
>
> The process for anybody in this situation is simple;
>
> # Create a new topic at
> https://www.mediawiki.org/wiki/Z%C3%BCrich_Hackathon_2014/Topics
>
> # Mobilize your best contributors to attend the hackathon (deadline for
> requesting travel sponsorship: March 16)
>
> # Promote your proposal in your project list and other related spaces
> like wikitech-l, asking Hackathon participants to add their names to
> your activity.
>
> If you happen to be a maintainer of the project you want to push (as it
> is the case of Amir) then you are in a very good position to push this
> process successfully.
>
> --
> Quim Gil
> Technical Contributor Coordinator @ Wikimedia Foundation
> http://www.mediawiki.org/wiki/User:Qgil
>
>


-- 
Amir


Re: [Wikitech-l] MediaWiki unit tests being run via HHVM!

2014-03-06 Thread Erik Bernhardson
On Thu, Mar 6, 2014 at 8:12 AM, Antoine Musso  wrote:
>
> The job is now being run along the other testing jobs.  It is slightly
> slower (4 min 30s) than the other jobs, so it would delay the reporting
> back to Gerrit by roughly a minute.  I have made the job time out
> after 8 minutes to avoid unnecessarily blocking changes.
>

I took a quick glance at the patch; it looks like the tests are being run
in the default interpreter mode with the JIT disabled.  I submitted a patch
that turns on the JIT[1].  I can't promise it's faster, but it's probably
worth testing.

Erik B

[1]https://gerrit.wikimedia.org/r/117226

Re: [Wikitech-l] Zürich Hackathon, hacking on what?

2014-03-06 Thread Quim Gil
On 03/06/2014 04:04 AM, Amir Ladsgroup wrote:
> I'll be there if I can get the visa in time and I love to have a
> presentation about it but I'm not sure others will like the idea or
> not

The process for anybody in this situation is simple:

# Create a new topic at
https://www.mediawiki.org/wiki/Z%C3%BCrich_Hackathon_2014/Topics

# Mobilize your best contributors to attend the hackathon (deadline for
requesting travel sponsorship: March 16)

# Promote your proposal in your project list and other related spaces
like wikitech-l, asking Hackathon participants to add their names to
your activity.

If you happen to be a maintainer of the project you want to push (as it
is the case of Amir) then you are in a very good position to push this
process successfully.

-- 
Quim Gil
Technical Contributor Coordinator @ Wikimedia Foundation
http://www.mediawiki.org/wiki/User:Qgil




Re: [Wikitech-l] Tutorial to edit Help Pages

2014-03-06 Thread Quim Gil
On 03/06/2014 05:26 AM, Andre Klapper wrote:
> Hi,
> 
> On Thu, 2014-03-06 at 09:24 +, Anjali Sharma wrote:
>> I wish to edit the documentation of Wikidata help pages. For that I need
>> to learn some advanced editing options, like creating anchors for topics
>> in a page. Where can I find help?
> 
> https://www.mediawiki.org/wiki/Help:Links covers how to use anchors.
> If that's not what you asked for, please be more specific and provide an
> example. :)

And generally, all questions related to editing should find answers and
links for more information at

https://www.mediawiki.org/wiki/Help:Contents


-- 
Quim Gil
Technical Contributor Coordinator @ Wikimedia Foundation
http://www.mediawiki.org/wiki/User:Qgil




[Wikitech-l] MediaWiki unit tests being run via HHVM!

2014-03-06 Thread Antoine Musso
Hello,

I have added a job in Jenkins which runs the Mediawiki core PHPUnit test
suite using the Facebook HipHop virtual machine.

The job is now being run along the other testing jobs.  It is slightly
slower (4 min 30s) than the other jobs, so it would delay the reporting
back to Gerrit by roughly a minute.  I have made the job time out
after 8 minutes to avoid unnecessarily blocking changes.


HHVM is installed on some labs instances and uses the version in our
apt repository (though not automatically upgraded; 'ensure => present').


The job page:
https://integration.wikimedia.org/ci/job/mediawiki-core-phpunit-hhvm/


It is very experimental; one of the builds segfaulted:
https://integration.wikimedia.org/ci/job/mediawiki-core-phpunit-hhvm/2/

Another one has one failing test:
https://integration.wikimedia.org/ci/job/mediawiki-core-phpunit-hhvm/3/testReport/junit/(root)/DjVuTest/testPageCount/

DjVuTest::testPageCount
Object of class UnregisteredLocalFile could not be converted to string


That one probably needs some code to be fixed.


Culprit: the install.php and update.php scripts are still run with plain PHP.


Reference:
--
Bug 62278 "write a jenkins job to use hhvm for mwcore"
https://bugzilla.wikimedia.org/show_bug.cgi?id=62278

-- 
Antoine "hashar" Musso



Re: [Wikitech-l] Tutorial to edit Help Pages

2014-03-06 Thread Andre Klapper
Hi,

On Thu, 2014-03-06 at 09:24 +, Anjali Sharma wrote:
> I wish to edit the documentation of Wikidata help pages. For that I need
> to learn some advanced editing options, like creating anchors for topics
> in a page. Where can I find help?

https://www.mediawiki.org/wiki/Help:Links covers how to use anchors.
If that's not what you asked for, please be more specific and provide an
example. :)

Thanks!
andre
-- 
Andre Klapper | Wikimedia Bugwrangler
http://blogs.gnome.org/aklapper/



Re: [Wikitech-l] ogv.js - JavaScript video decoding proof of concept

2014-03-06 Thread Brion Vibber
On Sun, Feb 23, 2014 at 6:43 AM, Brion Vibber  wrote:

> Just an update on this weekend project, see the current demo in your
> browser[1] or watch a video of Theora video playing on an iPhone 5s![2]
>
> [1] https://brionv.com/misc/ogv.js/demo/
> [2] http://www.youtube.com/watch?v=U_qSfHPhGcA
>
> * Got some fixes and testing from one of the old Cortado maintainers --
> thanks Maik!
> * Audio/video sync is still flaky, but everything pretty much decodes and
> plays properly now.
> * IE 10/11 work, using a Flash shim for audio.
> * OS X Safari 6.1+ works, including native audio.
> * iOS 7 Safari works, including native audio.
>

Spent some time tonight on audio/video sync; it's now working both with
native Web Audio API and with the Flash audio shim.

Check out a video with someone talking, and marvel at the synchronization!
https://brionv.com/misc/ogv.js/demo/#file=Sneak_Preview_-_Wikipedia_VisualEditor.webm
(Hi Trevor!)
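The kind of decision involved in A/V sync can be sketched as a pure function: compare each decoded frame's presentation time against the audio clock, then drop the frame, hold it, or draw it. This is a hypothetical illustration (the function name and the 40 ms threshold are made up for the example, not taken from ogv.js):

```javascript
// Decide what to do with a decoded video frame given the audio clock.
// All times in seconds. The tolerance value is illustrative only.
function syncAction(frameTime, audioTime, tolerance = 0.04) {
  if (frameTime < audioTime - tolerance) {
    return "drop"; // frame is late: skip it to catch up with the audio
  }
  if (frameTime > audioTime + tolerance) {
    return "wait"; // frame is early: hold it until the audio clock catches up
  }
  return "draw";   // close enough: draw it now
}
```

Treating the audio clock as the master, as sketched here, is the usual choice because audio glitches are far more noticeable than a dropped video frame.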

I also snuck in a size selection override, so you can pick the larger 480p
or original files (note that webm originals don't play as only a Theora
decoder is included so far), or down to the tiny 160p, to see the quality
and CPU usage differences.

If there's no objection to use of a Flash shim, I'll keep doing some
experiments in that direction and see if I can rig up a fully-Flash version
that works in old versions of Internet Explorer as well. (The main ogv.js
needs no Flash as long as needed APIs are available, and even runs on iOS
though it's a bit slow for videos.)
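(The grayscale limitation of the original proof of concept came from incomplete colorspace conversion. The per-pixel YCbCr-to-RGB step is standard; the sketch below uses the textbook BT.601 coefficients rather than ogv.js's actual source:)

```javascript
// Convert one YCbCr (YUV) pixel to RGB using BT.601 full-range
// coefficients. Ignoring the chroma planes (treating U and V as 128)
// is exactly what produces a grayscale image.
function yuvToRgb(y, u, v) {
  const clamp = (x) => Math.max(0, Math.min(255, Math.round(x)));
  return [
    clamp(y + 1.402 * (v - 128)),                           // R
    clamp(y - 0.344136 * (u - 128) - 0.714136 * (v - 128)), // G
    clamp(y + 1.772 * (u - 128)),                           // B
  ];
}

// Neutral chroma (u = v = 128) yields gray: R = G = B = Y.
console.log(yuvToRgb(128, 128, 128)); // [ 128, 128, 128 ]
```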

-- brion



>
> Audio-only files run great on iOS 7 devices. The 160p video transcodes we
> experimentally enabled recently run *great* on a shiny 64-bit iPhone 5s,
> but are still slightly too slow on older models.
>
>
> The Flash audio shim for IE is a very simple ActionScript3 program which
> accepts audio samples from the host page and outputs them -- no proprietary
> or patented codecs are in use. It builds to a .swf with the open-source
> Apache Flex SDK, so no proprietary software is needed to create or update
> it.
>
> I'm also doing some preliminary research on a fully Flash version, using
> the Crossbridge compiler[3] for the C codec libraries. Assuming it performs
> about as well as the JS does on modern browsers, this should give us a
> fallback for old versions of IE to supplement or replace the Cortado Java
> player... Before I go too far down that rabbit hole though I'd like to get
> peoples' opinions on using Flash fallbacks to serve browsers with open
> formats.
>
> As long as the scripts are open source and we're building them with an
> open source toolchain, and the entire purpose is to be a shim for missing
> browser feature support, does anyone have an objection?
>
> [3] https://github.com/adobe-flash/crossbridge
>
> -- brion
>
>
> On Mon, Oct 7, 2013 at 9:01 AM, Brion Vibber wrote:
>
>> TL;DR SUMMARY: check out this short, silent, black & white video:
>> https://brionv.com/misc/ogv.js/demo/ -- anybody interested in a side
>> project on in-browser audio/video decoding fallback?
>>
>>
>> One of my pet peeves is that we don't have audio/video playback on many
>> systems, including default Windows and Mac desktops and non-Android mobile
>> devices, which don't ship with Theora or WebM video decoding.
>>
>> The technically simplest way to handle this is to transcode videos into
>> H.264 (.mp4 files) which is well supported by the troublesome browsers.
>> Unfortunately there are concerns about the patent licensing, which has held
>> us up from deploying any H.264 output options though all the software is
>> ready to go...
>>
>> While I still hope we'll get that resolved eventually, there is an
>> alternative -- client-side software decoding.
>>
>>
>> We have used the 'Cortado' Java applet
>> to do fallback software decoding in the browser for a few years, but Java
>> applets are aggressively being deprecated on today's web:
>>
>> * no Java applets at all on major mobile browsers
>> * Java usually requires a manual install on desktop
>> * Java applets disabled by default for security on major desktop browsers
>>
>> Luckily, JavaScript engines have gotten *really fast* in the last few
>> years, and performance is getting well in line with what Java applets can
>> do.
>>
>>
>> As an experiment, I've built Xiph's ogg, vorbis, and theora C libraries
>> cross-compiled to JavaScript using emscripten, and written a wrapper
>> that decodes Theora video from an .ogv stream and
>> draws the frames into a canvas element:
>>
>> * demo: https://brionv.com/misc/ogv.js/demo/
>> * code: https://github.com/brion/ogv.js
>> * blog & some details:
>> https://brionv.com/log/2013/10/06/ogv-js-proof-of-concept/
>>
>> It's just a proof of concept -- the colorspace conversion is incomplete
>> so it's grayscale, there's no audio or proper framerate sync, and it
>> doesn't really stream data properly. But I'm pleased it works so far!
>> (Currently it breaks in IE, b

Re: [Wikitech-l] Zürich Hackathon, hacking on what?

2014-03-06 Thread Amir Ladsgroup
I'll be there if I can get the visa in time, and I'd love to give a
presentation about it, but I'm not sure whether others will like the
idea.

Best

On 3/6/14, Strainu  wrote:
> 2014-03-06 13:43 GMT+02:00 Amir Ladsgroup :
>> I have another idea that someone have a presentation about "how we can
>> help pywikibot" I don't know if others like this idea too
>
>
> Amir, I won't be in Zurich, but I would love to see (and use) such a
> presentation.
>
> Thanks,
>Strainu
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l


-- 
Amir


Re: [Wikitech-l] Zürich Hackathon, hacking on what?

2014-03-06 Thread Strainu
2014-03-06 13:43 GMT+02:00 Amir Ladsgroup :
> I have another idea: someone could give a presentation on "how we can
> help pywikibot". I don't know if others like this idea too.


Amir, I won't be in Zurich, but I would love to see (and use) such a
presentation.

Thanks,
   Strainu


Re: [Wikitech-l] Zürich Hackathon, hacking on what?

2014-03-06 Thread Amir Ladsgroup
I haven't read this topic carefully, but I suggest we hold a sprint for
pywikibot issues (e.g. very important bugs, or a bug triage to take care
of the ~200 remaining uncategorized/unknown-importance bugs).

I have another idea: someone could give a presentation on "how we can
help pywikibot". I don't know if others like this idea too.

Best

On 3/5/14, Quim Gil  wrote:
> On 03/04/2014 02:30 PM, Marc A. Pelletier wrote:
>> On 03/04/2014 05:05 PM, Quim Gil wrote:
>>> If I understood correctly the
>>> proposals listed so far, we seem to have only one hacking sprint.
>>
>> It's difficult to classify some of the activities.  I, for instance,
>> will set up a 'Migrate to Tool Labs / work on Labs' corner intended to
>> last the entire event.  Is that a sprint or a workshop?
>>
>> IMO, it's a workshop that'll contain a lot of small sprints and a few
>> presentation -- a dev room?  :-).
>
> Labels are just labels. What matters is the organization of a schedule,
> since your activity will require time and space, and it will happen at
> the same time as other activities, which it may or may not overlap with.
>
> What about scheduling an explicit workshop at the beginning (e.g. one
> hour with you going through the migration of one project from the
> Toolserver to Labs) followed by a long sprint consisting of you and
> other experienced Toolserver / Labs contributors working on migrating
> projects with whoever else is interested, willing to learn and help.
>
> You will probably get many extra people happy to attend a pre-scheduled
> workshop and maybe come by and help a couple of hours during the sprint.
>
> --
> Quim Gil
> Technical Contributor Coordinator @ Wikimedia Foundation
> http://www.mediawiki.org/wiki/User:Qgil
>
>


-- 
Amir


Re: [Wikitech-l] open positions at WMF

2014-03-06 Thread Željko Filipin
On Thu, Mar 6, 2014 at 7:29 AM, Sumana Harihareswara
wrote:

> * automate more of the systems that help developers test new code to find
> bugs early (Test Infrastructure Engineer:
> http://hire.jobvite.com/CompanyJobs/Careers.aspx?c=qSa9VfwQ&cs=9UL9Vfwt&page=Job%20Description&j=oFtlYfwb )
>

I am not sure who can fix this: for the above job, the last link (
https://wikitech.wikimedi) in the "More Information" section is broken.

Željko
Test All The Things Team

[Wikitech-l] Tutorial to edit Help Pages

2014-03-06 Thread Anjali Sharma
I wish to edit the documentation on Wikidata help pages. For that I need
to learn some advanced editing options, like creating anchors for topics
on a page. Where can I find help?
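
For what it's worth, anchors in MediaWiki wikitext are usually created with the
{{anchor}} template (available on most Wikimedia wikis) or a raw HTML id, and
then linked to with a # fragment. A minimal sketch — the page and section names
below are made up for illustration:

```wikitext
== Adding statements ==
{{anchor|statements}}
Section text here...

<!-- Elsewhere, link to the heading or to the anchor: -->
See [[Help:Statements#Adding statements]] or [[Help:Statements#statements]].
```

Note that section headings already get automatic anchors, so {{anchor}} is
mainly useful for targets in the middle of a section, or to keep incoming
links stable when a heading is later renamed.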

Re: [Wikitech-l] GSOC-2014 project "A system for reviewing funding requests".

2014-03-06 Thread Gryllida
Hi! I think you may want to contact the mentors directly (there are several
listed) and ask them for a more detailed spec for the project.

Siko runs the IEG program [1], and I have forwarded your message to the
Committee as well, to see if anyone wants to volunteer and help you during
your work (as I'm assuming that the development will happen out in the open).

[1] https://meta.wikimedia.org/wiki/Grants:IEG

Gryllida

On Thu, 6 Mar 2014, at 5:19, Karan Dev wrote:
> Hi,
> I am a 3rd-year B.Tech. (CS) student from India. I have 4 years of
> programming experience and am comfortable with C/C++, PHP, JavaScript,
> HTML, and MySQL. I developed some small web-based projects in PHP and
> MySQL in my 2nd year.
> As someone is already working on my previously selected project (Catalogue
> for MediaWiki extensions), I went through the ideas list again and found
> the project "A system for reviewing funding requests" interesting, as I am
> comfortable with the required skills.
> I am new here, so I need some guidance from the mentors on the procedure.
> Let me know if I missed something.
> 
> 
> (Karan Dev)


[Wikitech-l] Guidance for the Project Idea for GSOC 2014

2014-03-06 Thread Devender
Hi,

I am a 4th year student of the Department of Computer Science and
Engineering at Indian Institute of Technology( IIT),  Kharagpur, India.

I am good at programming in PHP, MySQL, JavaScript, jQuery, HTML, CSS,
Java, C, C++ and Python. I have done all the important courses including
Machine Learning, Artificial Intelligence, Algorithms, Information
Retrieval, Natural language processing, Advanced Graph theory.

I am very enthusiastic about working with Wikimedia in GSoC 2014. I have an
idea which I believe can help improve Wikipedia content:

I want to implement a ranking system for the editors (especially third-party
editors) of Wikipedia, through which viewers can differentiate between parts
of a page's content. This ranking system would increase content reliability.
We can implement:

1. An extension which collects all the editors' information:

a. how many times an editor has edited this particular page,
b. the number of pages they have edited, and the editor's reputation (i.e.,
the number and type of badges).

We can get this information from the "View history" tab and then the user
info from the user page, and generate a reliability score (e.g., using data
clustering) for every line/paragraph of the content on all Wikipedia pages.
After installing this extension, the user can right-click on any line to see
the reliability score, all editor info, and the history in concise form.

2. Color a line/paragraph differently if its content is very new and its
reliability score is low.
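
A reliability score of this kind could start as a simple weighted combination
of the signals listed above. As an illustrative sketch only — the function
name, the weights, and the saturation thresholds below are all hypothetical,
not an existing MediaWiki API:

```python
import math


def reliability_score(page_edits, total_edits, badges,
                      w_page=0.5, w_total=0.3, w_badges=0.2):
    """Combine per-page experience, overall activity, and reputation
    into a 0..1 score. Log-scaling keeps prolific editors from
    dominating; the weights and caps are arbitrary placeholders."""
    page_part = min(math.log1p(page_edits) / math.log1p(1000), 1.0)
    total_part = min(math.log1p(total_edits) / math.log1p(100000), 1.0)
    badge_part = min(badges, 10) / 10
    score = w_page * page_part + w_total * total_part + w_badges * badge_part
    return round(min(score, 1.0), 3)


print(reliability_score(0, 0, 0))      # brand-new editor → 0.0
print(reliability_score(50, 5000, 3))  # experienced editor → higher score
```

A real implementation would also have to decide how to attribute each
line/paragraph to its contributing editors (e.g. via revision diffs), which
is the harder part of the project.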

Please let me know if I should go with this idea. If not, please guide me on
how to start working on a different idea.


Thanks and Regards

Devender (LinkedIn Profile)
4th Year Student
Dual Degree(B.Tech+M.Tech)
Computer Science and Engineering
IIT Kharagpur
Phone +91-8967224480
Alternate Email: deven...@cse.iitkgp.ernet.in