On Thu, Mar 21, 2013 at 6:40 PM, Asher Feldman <afeld...@wikimedia.org> wrote:

> Right now, varying amounts of effort are made to highlight potential
> performance bottlenecks in code review, and engineers are encouraged to
> profile and optimize their own code.  But beyond "is the site still up for
> everyone / are users complaining on the village pump / am I ranting in
> irc", we've offered no guidelines as to what sort of request latency is
> reasonable or acceptable.  If a new feature (like aftv5, or flow) turns out
> not to meet perf standards after deployment, that would be a high-priority
> bug, and the feature may be disabled depending on the impact or if it is
> not addressed in a reasonable time frame.  Obviously standards like this can't
> be applied to certain existing parts of mediawiki, but systems other than
> the parser or preprocessor that don't meet new standards should at least be
> prioritized for improvement.
>
> Thoughts?
>

As a features product manager, I am totally behind this. I don't take
adding another potential blocker lightly, but performance is a feature, and
not a minor one. For me, the hurdle to taking this more seriously, beyond
just asking "is this thing unusably/annoyingly slow when testing it?", has
always been the lack of a reliable way to measure performance, set goals,
and establish guidelines.

As MZ suggests, I think the place to discuss that is in an RFC on
mediawiki.org, but in general I want to say that I support creating a
reasonable set of guidelines based on data.

Steven