[Wikitech-l] Query-string normalization enabled on MediaWiki.org

2022-08-17 Thread Ori Livneh
Hello,

A short while ago, the SRE Traffic team enabled query-string normalization
for MediaWiki.org. This means that the caching layer
(Varnish, specifically) now rewrites the query string portion of request
URLs, so that query parameters are sorted by key. The re-written URL is
then used for the cache look-up and (if necessary) the backend request.
This improves the efficiency of the cache, since it allows
logically-equivalent requests with different URL forms to be served by a
single cache entry.

For example, this URL:
  https://www.mediawiki.org/w/index.php?title=Squid&action=history

Is re-written as:
  https://www.mediawiki.org/w/index.php?action=history&title=Squid

Care has been taken to make the sort stable for duplicate keys, and to
handle PHP array syntax (?foo[]=a&foo[]=b) correctly.
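
Roughly, the normalization behaves like the following sketch. This is just a
PHP illustration of the behavior described above, not the actual Varnish VCL,
and the function name is made up:

```
<?php
function normalizeQueryString( string $query ): string {
    if ( $query === '' ) {
        return '';
    }
    $indexed = [];
    foreach ( explode( '&', $query ) as $i => $pair ) {
        // Remember the original position so the sort stays stable for
        // duplicate keys such as foo[]=a&foo[]=b.
        $indexed[] = [ explode( '=', $pair, 2 )[0], $i, $pair ];
    }
    usort( $indexed, static function ( $a, $b ) {
        // Sort by key first, then by original position.
        return [ $a[0], $a[1] ] <=> [ $b[0], $b[1] ];
    } );
    return implode( '&', array_column( $indexed, 2 ) );
}

// "title=Squid&action=history" becomes "action=history&title=Squid";
// "foo[]=a&foo[]=b" keeps its original relative order.
```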

MediaWiki.org is the first wiki with real traffic where this is enabled. If
you encounter any weirdness, please file a task.
If everything looks good, we'll proceed with an incremental roll-out to
other wikis. This is tracked in T314868.

Cheers
___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

[Wikitech-l] Re: Request: Help with Fresh

2021-07-02 Thread Ori Livneh
I haven't tested these because I don't have a Windows machine handy. You
can try one of the following:

Option A: Replace '/usr/local/bin' in the invocation with '/usr/bin', which
does exist in git bash.

Option B:
1) Run:
 echo 'export PATH="$PATH:/usr/local/bin"' >> ~/.bashrc
 mkdir -p /usr/local/bin
 chmod 775 /usr/local/bin

2) Close git bash and start it again.
3) Run 'echo $PATH' (without the quotes) to confirm that /usr/local/bin is in
it.
4) Continue to run the installation command unmodified.

On Fri, Jul 2, 2021 at 8:34 AM DannyS712 Wiki 
wrote:

> Hi all.
> Is anyone able to help me troubleshoot getting Fresh set up and working on
> Windows? I recently set up mediawiki with docker and want to be able to use
> fresh. I use Windows 10.
> Based on the instructions in the latest version of the README file[1], I
> ran within git bash
> ```
> bash -c 'curl -fsS
> https://gerrit.wikimedia.org/g/fresh/+/21.04.1/bin/fresh-node10?format=TEXT
> \
> | base64 --decode > /usr/local/bin/fresh-node \
> && echo "d38c34d542dc685669485bbe04a9d1a926a224a4ba27a01d59ae563558d8e987
>  /usr/local/bin/fresh-node" | shasum -a 256 -c \
> && chmod +x /usr/local/bin/fresh-node \
> && echo -e "\n\xf0\x9f\x8c\xb1\x20Fresh\x20is\x20ready\x21\n"||(echo -e
> "\xe2\x9d\x8c";false)'
> ```
> and got the error
> ```
> bash: line 1: /usr/local/bin/fresh-node: No such file or directory
> curl: (23) Failure writing output to destination
> ❌
> ```
>
> Some investigation in the file system shows that `/usr/local` does not
> exist. Where should I be trying to install it instead so that it works as a
> normal command (eg `fresh-node -env -net`)?
>
> Thanks,
> --DannyS712
>
> [1]
> https://gerrit.wikimedia.org/r/plugins/gitiles/fresh/+/2eabb06e32a9971b1e9523f33bb4da0b3fce522b/README.md
> ___
> Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
> To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
> https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/
___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

Re: [Wikitech-l] My Phabricator account has been disabled

2018-08-08 Thread Ori Livneh
On Wed, Aug 8, 2018 at 7:13 PM MZMcBride  wrote:

> If I had written "Why did you do that?!" instead of "What the fuck.", do
> you
> think I would have had my Phabricator account disabled for a week?
>

No, but asking "are you for real?" would have been similarly problematic in
my view. The distinction hinges on whether you are expressing bafflement or
scandal.

I don't think it's bad to be critical, but in my opinion the way you go
about it is acutely and unnecessarily painful sometimes, and leads to
burnout rather than understanding.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] My Phabricator account has been disabled

2018-08-08 Thread Ori Livneh
On Wed, Aug 8, 2018 at 2:48 PM bawolff  wrote:

> MZMcbride (and any other individual contributor) is at a power
> disadvantage here relative to how the foundation is an organized
> group


Have you *been* on the receiving end of an MZMcBride diatribe? I have: when I
was barely two months into my role as a software engineer at the Wikimedia
Foundation (and newly transplanted to the Bay Area), MZMcBride wrote a Signpost
op-ed centered around an inconsiderate remark I made on a bug that I closed as
WONTFIX. The responses to that included on-wiki comments telling me to go fuck
myself, calls for my immediate resignation, and unbelievably vicious anonymous
hate-mail. My mental state after that was bordering on suicidal.

I hope that you are struck by the parallels between that affair back in
2012 and the one we are presently discussing. The germ-cell of both cases
was a legitimate grievance about Foundation engineers being dismissive
toward a bug report. MZMcBride has a very good ear for grievances, and he
knows how to use his considerable social clout to draw attention to them,
and then use words as a kind of lightning-rod for stoking outrage and
focusing it on particular targets. I don't know why he does it and I won't
speculate, but I am convinced he knows exactly what he is doing. How could
he not? This has been going on for nearly a decade.

When I saw MZMcBride's "what the fuck" I *instantly* knew what was coming.
After it happens to you, you never forget the sensation of instant regret
and absolute panic as the Eye of Sauron fixates on you. It is a
*miserable* experience
and I understand completely why the CoC might feel compelled to intervene.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] ResistanceManual.org looking for co-maintainers

2017-01-29 Thread Ori Livneh
Resistance Manual 
is a guide for organizing resistance to the policies of the Trump
administration in the United States. The site is running MediaWiki 1.28,
and its admins are looking for help maintaining the site. The main page
says to reach out to i...@staywoke.org if interested.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Where has mw.log gone again?

2016-09-16 Thread Ori Livneh
On Fri, Sep 16, 2016 at 9:47 AM, Brad Jorsch (Anomie)  wrote:

> On Fri, Sep 16, 2016 at 12:05 PM, Strainu  wrote:
>
> > Back when preprocessor optimization data was displayed on the preview
> > page, one could see the output of mw.log and mw.logObject calls in the
> > LUA code there. Right now, I can't find any place when I can do that
> > any longer. I know about the debug console, but that is populated only
> > when you introduce something there and it's almost impossible to
> > exactly reproduce the parameters generated by passing through 3
> > templates and 2 other module calls.
> >
>
> It looks like Aaron removed all the performance data that was previously
> displayed at the bottom of the preview page in favor of hiding it in a
> JavaScript variable. To access the logs now, you should be able to do
> "mw.config.get(
> 'wgPageParseReport' ).scribunto['limitreport-logs']".
>
> To make a user-friendly display now, someone would have to write a gadget
> to generate some visible output from the data in that variable.
>

Peter Hedenskog is working on it. You can preview the work by visiting
https://en.wikipedia.beta.wmflabs.org/wiki/Main_Page , clicking on
"performance inspector", and then "parser profiling data". We (=WMF
performance team) have been getting ready to announce this and to start
soliciting feedback. The extension code is available at
https://github.com/wikimedia/mediawiki-extensions-PerformanceInspector .
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] MediaWiki pingback

2016-07-21 Thread Ori Livneh
What proportion of MediaWiki installations run on 32-bit systems? How much
memory is available to a typical MediaWiki install? How often is the Oracle
database backend used?

These are the kinds of questions that come up whenever we debate changes
that impact compatibility. More often than not, the questions go
unanswered, because we don't have good statistical data about the
environments in which MediaWiki is running.

Starting with version 1.28, MediaWiki will provide operators with the
option of sharing anonymous data about the local MediaWiki instance and its
environment with MediaWiki's developer community via a pingback to a URL
endpoint on MediaWiki.org.

The configuration variable that controls this behavior ($wgPingback) will
default to false (that is: don't share data). The web installer will
display a checkbox for toggling this feature on and off, and it will be
checked by default (that is: *do* share data). This ensures (I hope) that
no one feels surprised or violated.
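
For operators who configure their wiki by hand rather than through the web
installer, opting in is a one-line change in LocalSettings.php. A minimal
sketch, using the configuration variable described above:

```
// Share anonymous data about this wiki's environment with MediaWiki.org.
// The default is false (share nothing).
$wgPingback = true;
```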

The information that gets sent is described in <
https://meta.wikimedia.org/wiki/Schema:MediaWikiPingback>. Here is a
summary of what we send:

- A randomly-generated unique ID for the wiki.
- The chosen database backend (e.g., "mysql", "sqlite")
- The version of MediaWiki in use
- The version of PHP
- The name and version of the operating system in use
- The processor architecture and integer size (e.g. "x86_64")
- The name of the web server software in use (e.g. "Apache/1.3.14")

Neither the wiki name nor its location is shared.

The plan is to make this data freely available to all MediaWiki developers.
Before that can happen, I will need to solicit reviews from security folks
and from the WMF's legal team, but I don't expect any major issues.

Please chime in if you have any thoughts about this. :)

The change-set implementing this functionality is <
https://gerrit.wikimedia.org/r/#/c/296699/>, if you want to take a look.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Please use HTTPS, not HTTP, to access RCStream

2016-07-19 Thread Ori Livneh
If you have a client that connects to RCStream
, please take a moment to
make sure that you are using a secure connection. Are you connecting to '
http://stream.wikimedia.org/rc' or '//stream.wikimedia.org/rc'? If so, the
only thing you need to change is the URL scheme, replacing any http: and
protocol-relative URLs with HTTPS (that is, '
https://stream.wikimedia.org/rc').
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Engineering] The train will resume tomorrow (was Re: All wikis reverted to wmf.8 last night due to T119736)

2016-07-12 Thread Ori Livneh
On Tue, Jul 12, 2016 at 4:07 PM, Greg Grossmeier  wrote:

> 
> > https://phabricator.wikimedia.org/T119736 - "Could not find local user
> data for {Username}@{wiki}"
> >
> > There was an order of magnitude increase in the rate of those errors
> > that started on July 7th.
> >
> > Investigation and remediation is on-going.
>
> Investigation and remediation is mostly complete[0] and the vast
> majority of cases have been addressed. There are still users who will
> experience this error for the next ~1 day.[1]
>

Is it actually fixed? It doesn't look like it, from the logs.

Since midnight UTC on July 7, 3,195 distinct users have tried and failed to
log in a combined total of 25,047 times, or an average of approximately
eight times per user. The six days that have passed since then were
business as usual for Wikimedia Engineering.

Our failure to react to this swiftly and comprehensively is appalling and
embarrassing. It represents a failure of process at multiple levels and a
lack of accountability.

I think we need to have a serious discussion about what happened, and think
very hard about the changes we would need to make to our processes and
organizational structure to prevent a recurrence.

I think we should also reach out to the users that were affected and
apologize.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] DNS, TLS change for RCStream (stream.wikimedia.org)

2016-06-20 Thread Ori Livneh
== What is happening? ==
Secure connections to RCStream[1] currently use an SSL/TLS certificate[2]
specific to stream.wikimedia.org. To streamline certificate management, we
are moving RCStream behind our misc caching cluster, which will allow us to
use the wildcard certificate[3] for *.wikimedia.org, making the
RCStream-specific certificate redundant. This will reduce operating costs
and improve performance in certain cases.

== When will this happen? ==
June 23rd.

== How could this affect me? ==
This change requires updating the DNS record for stream.wikimedia.org. We
do not expect any service disruptions. It is conceivable (but unlikely)
that you will need to restart your client. If your client is based on one
of the published examples[4], you should be fine. If you are not sure, feel
free to get in touch with me (o...@wikimedia.org).

If you are connecting to RCStream over an insecure (http) connection, now
would be a great time to migrate to https. http access to RCStream will
eventually be disabled; migrating now will protect you from any
interruptions down the line. In most cases, making your client use https is
as simple as prefixing 'stream.wikimedia.org' with 'https://'. Sample
client code on Wikitech[4] has been updated.

== How can I track this work? ==
By following https://phabricator.wikimedia.org/T134871.

[1]: https://wikitech.wikimedia.org/wiki/RCStream
[2]: https://en.wikipedia.org/wiki/Public_key_certificate
[3]: https://en.wikipedia.org/wiki/Wildcard_certificate
[4]: https://wikitech.wikimedia.org/wiki/RCStream#Clients
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Update from the Perf Team

2016-05-27 Thread Ori Livneh
Hello,

Here's what the performance team has been up to.

== Dashboards & instrumentation ==
We spent time instrumenting software and curating displays of performance
data. We have several new dashboards to share with you:

* Global edit rate and save failures (new)
  https://grafana.wikimedia.org/dashboard/db/edit-count

* Performance metrics (revamped)
  https://grafana-admin.wikimedia.org/dashboard/db/performance-metrics

* Page load performance
  https://grafana.wikimedia.org/dashboard/db/navigation-timing

  ...by continent:
https://grafana.wikimedia.org/dashboard/db/navigation-timing-by-continent
  ...by country  :
https://grafana.wikimedia.org/dashboard/db/navigation-timing-by-geolocation
  ...by browser  :
https://grafana.wikimedia.org/dashboard/db/navigation-timing-by-browser

* We found that certain browsers were reporting wildly inaccurate timing
data and skewing our summary performance metrics, and reacted by validating
browser metric data more strictly against Navigation Timing API specs.


== ResourceLoader ==
ResourceLoader is the MediaWiki subsystem responsible for loading CSS,
JavaScript, and i18n interface messages for dynamic site features. It is
critical to site performance. Changes to ResourceLoader are focused on
reducing backend response time, ensuring we make efficient use of the
browser cache, and reducing time to first paint (the time it takes any
content to appear). This work is led by Timo Tijhof.

* The "/static/$mwBranch" entry point has been deprecated and removed in
favor of wmfstatic - a new multiversion-powered entrypoint accessed via
"/w" (via RewriteRule)
  https://phabricator.wikimedia.org/T99096

* Restricting addModuleStyles() to style-only modules (ongoing)
  https://phabricator.wikimedia.org/T92459

* Startup module check is now based on a feature test instead of browser
blacklist
  https://phabricator.wikimedia.org/T102318


== WebPageTest ==
Page load performance varies by browser, platform, and network. To
anticipate how code changes will impact page performance for readers and
editors, we use WebPageTest (https://wikitech.wikimedia.org/wiki/WebPageTest),
a web performance browser automation tool. WebPageTest loads pages on
Wikimedia wikis using real browsers and collects timing metrics. This work
is led by Peter Hedenskog.

* We now generate waterfall charts for page loads on Firefox. Previously we
were only able to produce them with Chrome.

* We tracked down two bugs in WebPageTest that caused it to report an
incorrect value for time-to-first-byte and reported them upstream.
  https://phabricator.wikimedia.org/T130182
  https://phabricator.wikimedia.org/T129735

* We upgraded the WebPageTest agent instance after observing variability in
measurements when the agent is under load.
  https://phabricator.wikimedia.org/T135985

* We designed a new dashboard to help us spot performance regressions
  https://grafana.wikimedia.org/dashboard/db/webpagetest


== Databases ==
The major effort in backend performance has been to reduce replication lag.
Replication lag occurs when a slave database is not able to reflect changes
on the master database quickly enough and falls behind. Aaron Schulz set
out to bring peak replication lag down from ten seconds to below five, by
identifying problematic query patterns and rewriting them to be more
efficient. We are very close to hitting that target: replication lag is
almost entirely below five seconds on all clusters.

https://phabricator.wikimedia.org/T95501

* High lag on databases used to generate special pages no longer stops job
queue processing
  https://phabricator.wikimedia.org/T135809

== Multi-DC ==
"Multi-DC" refers to ongoing work to make it possible to serve reads from a
secondary data center. Having MediaWiki running and serving requests in
more than one data center will reduce latency and improve site reliability.
This project is led by Aaron Schulz.

In order for this to be possible, we need to be able to anticipate which
requests will need the master database, so we can route them accordingly.
The plan is to achieve this by making sure that GET requests never require
a master database connection. We've made incremental progress
here, most recently by changing action=rollback to use JavaScript to
perform HTTP POST requests.

We also need to be able to broadcast cache purges across data centers. The
major work on this front has been the addition to core of EventBus classes
that relay cache proxy and object cache purges. Stas Malyshev of the
discovery team is assisting with this work.

== Thumbor ==
"Thumbor" is shorthand for the project to factor thumbnail rendering out of
MediaWiki and into a standalone service based on Thumbor (
http://thumbor.org/). This project is led by Gilles Dubuc. The following
list summarizes recent progress:

- Simplified the VCL as much as possible
- Added client throttling with the tbf vmod
- Added progressive JPEG support to ImageMagick engine
- Added configurable chroma 

Re: [Wikitech-l] Automatic image colorization

2016-05-03 Thread Ori Livneh
On Tue, May 3, 2016 at 3:31 PM, Rob Lanphier  wrote:

> It might be worthwhile to ask the authors why they chose CC BY-NC-SA 4.0
> instead of a free license (like MIT, Apache, GPL or AGPL).  If we
> approach them respectfully, we might convince them to learn more about
> our ideals, and change the license on their software.
>

Good idea. I sent a note to the authors to explain why the non-commercial
clause is incompatible with "open source" and to encourage them to release
the code and model under a permissive license. I will share any updates
with the list.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Automatic image colorization

2016-05-03 Thread Ori Livneh
Colorization

refers to the process of adding color to black-and-white photographs. This
work was historically done by hand. These days, colorization is usually
done digitally, with the support of specialized tooling. But it is still
quite labor-intensive.

A forthcoming paper from researchers at Waseda University in Japan describes
a method for automatic image colorization using a deep neural network. The
results are both impressive and easy to reproduce, as the authors have
published their code to GitHub with a permissive license.

Someone has already taken this code and packaged it as a simple webapp,
available at http://colorizr.io/ (NSFW). The webapp lets you upload
black-and-white pictures and colorizes them for you. (The site is currently
not safe for work because it displays a gallery of recent uploads. We know
how that goes.)

In this thread, let's discuss how this technology could be integrated with
the projects. Should we have a bot that can perform colorization on demand,
the way Rotatebot  can
rotate images?
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Docs, use of, and admin privileges for wikimedia github project?

2016-04-26 Thread Ori Livneh
On Tue, Apr 26, 2016 at 4:24 PM, Stas Malyshev 
wrote:

> Now, having something like wikimedia.github.io would be an excellent
> idea. If somebody would do the design, loading up repos list and
> displaying them with a nice structure - given that we actually have
> pretty structured names already, so we could start from that - should
> not be super-hard?
>

I've pitched this idea before (to your team, no less :P). I'm not sure what
came out of it. Maybe Julien or Moiz can say. My suggestion was to take
http://twitter.github.io/ as a starting-point; the source code for that is
Apache-licensed (https://github.com/twitter/twitter.github.com).
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] ResourceLoader, latency, & progressive enhancement

2016-04-25 Thread Ori Livneh
On Sat, Apr 23, 2016 at 3:08 PM, Brion Vibber  wrote:

> Started as quick thoughts, turned into more of an essay, so I've posted the
> bulk on mediawiki.org:
>
> https://www.mediawiki.org/wiki/User:Brion_VIBBER/ResourceLoader_and_latency
>
>
> tl;dr summary:
>
> On slow networks, latency in loading large JS and HTML resources means
> things don't always work right when we first see them.
>
> If we take advantage of HTTP 2 we could skip the concatenation of separate
> ResourceLoader modules to reduce latency until each module _runs_, without
> adding _network_ latency.
>


Not so straightforward. Khan Academy tried unbundling JavaScript on HTTP/2
page views last November and found that performance got worse. They
attribute the regression primarily to the fact that bundling improves
compression. They concluded that "it is premature to give up on bundling
JavaScript files at this time, even for HTTP/2.0 clients."

(http://engineering.khanacademy.org/posts/js-packaging-http2.htm)

On most browsers, we take advantage of localStorage pretty heavily in order
to have a durable cache of individual modules. Without it, slight
variations in the module requirements would occasion re-downloading a lot
of JavaScript, as the browser would have no way of reusing JavaScript and CSS
delivered under a different URL. (Service Workers now provide more
sophisticated means of doing that, but global browser support is still only
at 53%.)

We had to disable localStorage caching in Firefox because of the way it
manages quotas. Is your primary mobile browser Firefox for Android / iOS?

Lastly, we have good evidence that above-the-fold external CSS is a bigger
contributor to page latency than JavaScript. Gabriel documented that pretty
well in T124966. That CSS is a
bigger issue than JavaScript is not surprising (top-loaded CSS is
render-blocking, whereas all of our JavaScript is loaded asynchronously),
but the magnitude of its impact is impressive.

Krinkle is working on an arc of tasks that would get us there; T127328 is the
master task.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] ResourceLoader, latency, & progressive enhancement

2016-04-25 Thread Ori Livneh
On Sun, Apr 24, 2016 at 1:39 AM, Brion Vibber  wrote:

> On Sunday, April 24, 2016, Daniel Friesen 
> wrote:
>
> > Tangentially related, Chrome plans to drop support for SPDY and go
> > HTTP/2 only this year, Edge already dropped support for SPDY, and other
> > browsers may too.
> > So before this is actually implemented, Wikimedia might want to upgrade
> > web server software to support HTTP/2 (currently MediaWiki.org looks to
> > only be using SPDY).
> >
> > Though a sad realization is that IE11 only supports HTTP/2 on Windows 10
> > (other Windows versions will only support SPDY) and same goes for Safari
> > only doing HTTP/2 on OSX 10.11+.
> > Which is relevant because some webservers like Nginx intentionally drop
> > the SPDY implementation in the release they implement HTTP/2.
>
>
> Yeah, transition will be "fun". We need to make sure we either have
> something that works well enough on both http 1.1 and 2 if we can't keep
> SPDY for the slightly older browsers, or play fun games with variant
> caching so we have a concatenated loader setup and a non-concatenated
> loader setup. :)
>
> For those not familiar, SPDY is roughly the experimental predecessor of the
> new HTTP/2, providing most of the same benefits but not quite compatible
> with the final standard. As a transitional technology it's getting dropped
> from some of the things that are updating, but we're going to see some
> folks stuck on browser versions in the middle with SPDY but no HTTP/2...
> And others with older browsers that support neither:
>
> http://caniuse.com/#feat=spdy
> http://caniuse.com/#feat=http2


We use Nginx for TLS and SPDY termination, which makes supporting both SPDY
and HTTP/2 unfeasible. The plan is to replace SPDY support with HTTP/2 on
or before Chrome does it on May 15. This is tracked in Phabricator as
T96848.

The browsers that support SPDY are all evergreen (self-updating by
default), so I expect the migration to HTTP/2 will be quicker than what we
are accustomed to with browser technology. But HTTP 1.1 won't go anywhere
for an aeon.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Feelings

2016-04-04 Thread Ori Livneh
On Sun, Apr 3, 2016 at 7:02 PM, Risker  wrote:

> I sympathize with your concern, Ori.  I suspect, however, that it shows a
> fundamental misunderstanding of why the Teahouse works when other processes
> (several of which have included cute symbols) have been less effective.
>
> And the reason is: the Teahouse is explicitly designed for having
> conversations.
>
> Teahouse "convenors" were initially selected for their demonstrated
> communication skills and willingness to remain polite when dealing with
> often frustrated people, and their ability to explain often complex
> concepts in straightforward terms.  As their ranks have evolved, they have
> sought out and taught others those skills, and there's an element of
> self-selection that discourages the more curmudgeonly amongst us from
> participating.  (There's not a lot of overlap between those who regularly
> help out at the Teahouse and those who hang out on ANI, for example.)
> We're talking about a relatively small group of people who really excel at
> this type of communication, although it is certainly a skill that others
> can develop if they have the willingness and inclination - but it really
> comes down to being able to identify the right "level" at which to talk to
> people, and then actually talking.
>
> The Teahouse works because it doesn't [obviously] use a lot of fancy
> technology, because it doesn't use a lot of templates and automated
> messaging, because it's made a lot of effort to avoid massive hyperlinking
> to complex and inscrutable policies.  It's people talking to people.


Yes, fair point. But as long as there exists a need for developing new
features and modifying existing ones, I would like us to consider the
contribution that modifications to the user experience make to the
interpersonal climate on the wikis. Because the contribution is very much
greater than zero. Of course at the end of the day it is about people
making choices about how they relate to one another, and no amount of
Fisher-Price gadgetry will ever change that. But we don't communicate via
mind melds; we use
imperfect and idiosyncratic media which end up shaping and coloring both
what we communicate and how it is received. So we ought to think carefully
about these effects.¹

(By the way, there was a great Radiolab episode about this recently: The
Trust Engineers. Keep in mind that I am recommending the *episode*, not
endorsing all the practices it describes, some of which make me queasy.)

¹ Concrete example: the way that jenkins-bot gives you a -1 for changes it
can't rebase. Ugh!
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Feelings

2016-04-02 Thread Ori Livneh
On Sat, Apr 2, 2016 at 12:20 AM, Ryan Lane  wrote:

> To be totally honest, I think this is a great idea, but it should use
> emojis. Github added this for PRs and messages for instance, and it's
> amazingly helpful:
>
>
> https://github.com/blog/2119-add-reactions-to-pull-requests-issues-and-comments
>
> Slack has the same thing. It's fun, people like it, and it tends to
> encourage better interactions.
>

And, yeah: +1.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Feelings

2016-04-02 Thread Ori Livneh
On Fri, Apr 1, 2016 at 10:24 PM, Legoktm 
wrote:

> Hi,
>
> It's well known that Wikipedia is facing threats from other social
> networks and losing editors. While many of us spend time trying to make
> Wikipedia different, we need to be cognizant that what other social
> networks are doing is working. And if we can't beat them, we need to
> join them.
>
> I've written a patch[1] that introduces a new feature to the Thanks
> extension called "feelings". When hovering over a "thank" link, five
> different emoji icons will pop up[2], representing five different
> feelings: happy, love, surprise, anger, and fear. Editors can pick one
> of those options instead of just a plain thanks, to indicate how they
> really feel, which the recipient will see[3].
>

Of the many initiatives to improve editor engagement and retention that the
Wikimedia Foundation has launched over the years, the only one that had a
demonstrable and substantial impact (AFAIK) was the Teahouse.

The goal of the Teahouse initiative was "learning whether a social approach
to new editor support could retain more new editors there"; its stated
design goal was to create a space for new users which would feature "warm
colors, inviting pictorial and thematic elements, simple mechanisms for
communicating, and a warm welcome from real people."[0]

Several studies were made of the Teahouse's impact on editors. One study,
conducted by Jonathan Morgan and Aaron Halfaker, found that new editors who
were invited to participate in the Teahouse were 10% more likely to have
met the thresholds for survival in the weeks and months after
registration.[1]

Another significant fact about the Teahouse is the substantial
participation from women. Women make up 9% of the general editor
population, but 29% of Teahouse participants.[2]

When new editors who had been invited to the Teahouse were asked (in a 2012
survey) to describe what they liked about their experiences, many
respondents spoke about the positive emotional environment, saying things
like: "the fact that there is somebody 'out there', that there is a sincere
community, gives a professional and safe feeling about Wikipedia", and "the
editors are very friendly and patient, which is great when compared to the
rest of Wikipedia in how new editors are treated."[2]

Why am I going on about this? I guess I'm a bit bummed out that the idea of
designing user interfaces that seek to improve the emotional environment by
making it easier to be warm and personal to one another is treated as a joke. I don't
think any topic is sacrosanct, this topic included. But humor works best
when it provides a counterpoint and a foil to "serious" discourse, and
there just isn't very much serious discourse on this topic to go around. I
also worry that people in and around our community who feel a need for more
opportunities for positive emotional interactions will feel invalidated,
ridiculous, ashamed, or at any rate less confident about ever speaking up
about this topic in a serious way, and less hopeful about being heard.

  [0]: https://meta.wikimedia.org/wiki/Research:Teahouse
  [1]:
https://meta.wikimedia.org/wiki/Research:Teahouse_long_term_new_editor_retention#Results
  [2]:
https://meta.wikimedia.org/wiki/Research:Teahouse/Phase_2_report/Metrics
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] X-Wikimedia-Debug, your new secret side-kick

2016-03-31 Thread Ori Livneh
On Thu, Mar 31, 2016 at 6:15 PM, Yuri Astrakhan 
wrote:

> Isn't there a recommendation not to use the X- prefix for any new headers?
>

There is, but there is a clear existing convention for using the X- prefix
in Wikimedia-specific headers, so I think local consistency trumps the IETF
recommendation. We currently have X-Analytics, X-Trusted-Proxy,
X-Wikimedia-Security-Audit, X-Pass-Stream, X-Carrier, X-Carrier-Meta,
X-CDIS, X-Subdomain,
X-Orig-Cookie, X-MediaWiki-Original, X-WMF-NOCOOKIES, X-ZeroTLS,
X-Forwarded-By, X-CS2, and X-Subdomain. And this is not an exhaustive list.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] X-Wikimedia-Debug, your new secret side-kick

2016-03-30 Thread Ori Livneh
Hey all,

I'm writing to let you know of a cool new facility for debugging MediaWiki
code on the Wikimedia production cluster -- the X-Wikimedia-Debug HTTP
header.

By setting this header on requests to Wikimedia wikis, you can:

- Bypass the cache.
- Force Varnish to pass your request to a specific backend server.
- Profile the request and log profiling data to XHGui.
- Turn on all log channels and send log messages to a special view in
Kibana / Logstash.
- Force MediaWiki to process the request in read-only mode.

And the best part: there are browser extensions for Chrome and Firefox that
provide a friendly user-interface for these features:

http://i.imgur.com/XzWUk0h.gifv

http://i.imgur.com/lJ7l6Vl.gifv

Cool? Cool.

Read the docs on Wikitech for more information.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] limited presence

2016-03-02 Thread Ori Livneh
On Wed, Mar 2, 2016 at 10:08 AM, Derk-Jan Hartman <
d.j.hartman+wmf...@gmail.com> wrote:

> Hi all,
>
> ive had a small accident with skiing. ill be fine, but because of it my
> participation will be low for some while and though i am reading along with
> some things, any answers will like be terse, and code will be none.
>
> DJ
>

I updated http://status.wikimedia.org/ (see "announcements") accordingly.

Feel better! :o)
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] New PHP profiling tool available on WMF cluster

2016-02-21 Thread Ori Livneh
Hello,

I set up XHGui, a web application for browsing and analyzing PHP profiling
data collected via the XHProf hierarchical profiler, on the Wikimedia
Foundation's production cluster:

https://performance.wikimedia.org/xhgui/

You can see an example of a profiled request here:

https://performance.wikimedia.org/xhgui/run/view?id=56ca93be7ed6ccf971b5b693

I set things up so all requests which bear the X-Wikimedia-Debug header are
profiled and entered into XHGui's database.

Anyone can use this setup to profile requests and view profiling data, but
for the moment POSTing to XHGui (to add function watches, create custom
views, etc.) requires WMF staff or NDA LDAP credentials.

Note that cookies, the client IP, form data, and all query parameters except
'action' are stripped from requests. Also note that the app server that
handles X-Wikimedia-Debug requests is configured to process messages in all
log channels, which grossly inflates the runtime of log-processing
functions.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Mass migration to new syntax - PRO or CON?

2016-02-12 Thread Ori Livneh
On Fri, Feb 12, 2016 at 7:27 AM, Daniel Kinzler 
wrote:

> Now that we target PHP 5.5, some people are itching to make use of some new
> language features, like the new array syntax.
>
> Mass changes like this, or similar changes relating to coding style, tend
> to
> lead to controversy. I want to make sure we have a discussion about this
> here,
> to avoid having the argument over and over on any such patch.
>
> Please give a quick PRO or CON response as a basis for discussion.
>

PRO. These syntax changes were implemented in PHP at the cost of breaking
backward-compatibility, which tells you that people understood their value
and were willing to pay a cost for modernizing and simplifying the
language. If PHP was willing to pay it, why wouldn't we?
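
For anyone unfamiliar with the change under discussion, here is a minimal
before/after sketch (the variable name is made up):

```
// Long array syntax:
$options = array(
    'debug' => false,
    'langs' => array( 'en', 'he' ),
);

// Equivalent short array syntax, which the mass migration would adopt:
$options = [
    'debug' => false,
    'langs' => [ 'en', 'he' ],
];
```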
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Update to syntax highlighting extension adds support for more languages

2016-02-03 Thread Ori Livneh
Hello,

I am writing to let you know that Pygments, the library we use for
syntax-highlighting, has been updated to the latest release (version 2.1)
in the SyntaxHighlight_GeSHi extension and across Wikimedia wikis. This
release includes support for 53 new languages and language variants.

They are listed below, with the lexer name (the string to use as a value
for the 'lang' attribute of <syntaxhighlight> tags) in parentheses:

- ABNF ("abnf")
- ADL ("adl")
- Arduino ("arduino")
- BC ("bc")
- BNF ("bnf")
- Boogie ("boogie")
- CAmkES ("camkes", "idl4")
- CPSA ("cpsa")
- CSS+Mako ("css+mako")
- Component Pascal ("cp", "componentpascal")
- Crmsh ("pcmk", "crmsh")
- Csound Document ("csound-csd", "csound-document")
- Csound Orchestra ("csound", "csound-orc")
- Csound Score ("csound-score", "csound-sco")
- Earl Grey ("earlgrey", "earl-grey", "eg")
- Easytrieve ("easytrieve")
- Elm ("elm")
- Ezhil ("ezhil")
- Fish ("fishshell", "fish")
- FortranFixed ("fortranfixed")
- HTML+Mako ("html+mako")
- Hexdump ("hexdump")
- J ("j")
- JCL ("jcl")
- JavaScript+Mako ("js+mako", "javascript+mako")
- LessCss ("less")
- MSDOS Session ("doscon")
- Mako ("mako")
- ODIN ("odin")
- PacmanConf ("pacmanconf")
- ParaSail ("parasail")
- PkgConfig ("pkgconfig")
- PowerShell Session ("ps1con")
- Praat ("praat")
- QML ("qbs")
- QVTO ("qvt", "qvto")
- Roboconf Graph ("roboconf-graph")
- Roboconf Instances ("roboconf-instances")
- Shen ("shen")
- SuperCollider ("sc", "supercollider")
- TAP ("tap")
- Tcsh Session ("tcshcon")
- Termcap ("termcap")
- Terminfo ("terminfo")
- Terraform ("terraform", "tf")
- Thrift ("thrift")
- TrafficScript ("trafficscript", "rts")
- Turtle ("turtle")
- TypeScript ("typescript")
- X10 ("xten", "x10")
- XML+Mako ("xml+mako")
- cADL ("cadl")

J and BNF were previously supported by mapping them to other languages with
close-enough syntax: Objective-J and Extended BNF, respectively. Each of
those now has a dedicated lexer, so the highlighted output should hew more
closely to the language definition.

For more information on SyntaxHighlight_GeSHi and Pygments, see:
- https://www.mediawiki.org/wiki/Extension:SyntaxHighlight_GeSHi
- http://pygments.org/

Ori Livneh
Perf Team
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Close test2wiki?

2016-01-28 Thread Ori Livneh
On Thu, Jan 28, 2016 at 11:06 AM, Rob Lanphier  wrote:

> Hi Ori,
>
> Thanks for bringing this seemingly vestigial weirdness to our attention.
> As you say, it should be documented better.
>
> As I'm reading this, you are still making the case that we should still
> shut this down.


Sorry, I should have been clearer. Given the responses (which identified
clear and reasonable use-cases), I think it's perfectly fine to keep it
around. I just wanted to explain why I bothered asking in the first place.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Close test2wiki?

2016-01-28 Thread Ori Livneh
On Wed, Jan 27, 2016 at 10:17 PM, Ricordisamoa  wrote:

> Il 28/01/2016 02:30, Dan Garry ha scritto:
>
>> On 27 January 2016 at 17:16, Legoktm  wrote:
>>
>> Especially when debugging and testing cross-wiki features, it is
>>> extremely useful to have two test wikis to use. MassMessage,
>>> GlobalCssJs, GlobalUserPage, and now cross-wiki notifications were all
>>> initially deployed using testwiki as the "central" wiki, and test2wiki
>>> as a "client" wiki.
>>>
>>> That sounds like a good reason to keep it, especially since global
>> notifications is an active, ongoing work. Perhaps, as an alternative to
>> shutting it down, we should just make it clearer that test2.wikipedia.org
>> is primarily intended for that purpose on that wiki's main page (or
>> anywhere else thought appropriate). If there's some specific overhead to
>> keeping test2 alive that might outweigh that benefit, now would seem to be
>> the time to make it clear. :-)
>>
>> Dan
>>
>>
> I second Legoktm's comment. And, for what it's worth, I don't think it
> makes much sense to limit test2wiki to a specific purpose.


Ok, understood. Keeping it around costs little. Dan, in case you were
volunteering, please go ahead and document the purpose of test2 on its main
page and/or wikitech -- I think it is a good idea.

If it is cheap to keep it, why did I even bother asking? I'm glad you asked!

As the Wikimedia software stack evolves, some of its components become
vestigial. Their existence makes it harder for anyone to form a systematic
understanding of the whole, because they don't have any clear functional
relationships with others components. And since they're not on anybody's
mind, they have a tendency to become "gotchas" for future upgrades and
migrations. So it's good to get rid of them, even if the resource costs are
small.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Engineering] Proposal regarding the future of X-Wikimedia-Debug and testwiki

2016-01-27 Thread Ori Livneh
On Tue, Jan 26, 2016 at 11:58 PM, Mukunda Modell 
wrote:

> I think it would be very useful to have a way to, in addition to
> cache-busting, also force the request to be served from the pre-production
> branch rather that the current production branch. This way changes on the
> prod+1 branch can be conveniently tested on any wiki (not just testwiki)
> while disregarding the version specified in wikiversions.
>

Well, there is a way: you can edit /srv/mediawiki/wikiversions.php on
mw1017 to change the mapping of wikis to branches, and set the
X-Wikimedia-Debug header to ensure your request gets handled by mw1017.
Making this more convenient would be very risky, because it would mean that
two different versions of the code are transacting with data on shared
storage backends, each with the presumption of being the only game in town.
And this state could be triggered by anyone, with no !logging or
coordination.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Close test2wiki?

2016-01-27 Thread Ori Livneh
The setup of test2.wikipedia.org is no longer meaningfully different from
test.wikipedia.org. Is there a good reason for keeping test2?
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Engineering] Announcing mediawiki-containers, a Docker-based MediaWiki installer

2015-12-27 Thread Ori Livneh
On Thu, Dec 24, 2015 at 3:57 PM, Gabriel Wicke  wrote:

> I am writing to announce mediawiki-containers [1], a simple installer for
> MediaWiki with VisualEditor, Parsoid, RESTBase and other services, using
> Linux containers.
>

This is very nice work -- kudos. Is it too soon to envision running this
(or rather, some future iteration) in production at Wikimedia? What would
need to happen?
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Update from the Wikimedia Performance Team

2015-11-05 Thread Ori Livneh
Hello,

This is the first monthly report from the Wikimedia Performance Team.
The purpose
of these reports is to showcase our work, track our progress, and spark
conversation.

## Who we are ##

As the Wikimedia Foundation’s Performance Team, we want to create
value for readers
and editors by making it possible to retrieve and render content at the
speed of thought, from anywhere in the world, on the broadest range of
devices and connection profiles.

We are five engineers: Aaron Schulz, Gilles Dubuc, Peter Hedenskog,
Timo Tijhof,
and me. Each of us was born in a different country and has a different mother
tongue. We work out of San Francisco, London, Stockholm, and Montpellier.

## What we are working on ##

* Availability. Although the Wikimedia Foundation currently operates six data
centers, MediaWiki is only running on one (in Ashburn, Virginia). If you
are an editor in Jakarta, Indonesia, content has to travel over 15,000
kilometers to get from our servers to you (or vice versa). To run MediaWiki
concurrently from multiple places across the globe, our code needs to be
more resilient to failure modes that can occur when different subsystems
are geographically remote from one another.
https://phabricator.wikimedia.org/T88445

* Performance testing infrastructure. WebPageTest provides a stable
reference for a set of browsers, devices, and connection types from
different points in the world. It collects very detailed telemetry that we
use to find regressions and pinpoint where problems are coming from. This
is addition to the more basic Navigation Timing metrics we gather from real
users in production.
https://phabricator.wikimedia.org/T109666

* Media stack. We're currently working on overhauling our thumbnail
infrastructure to achieve multiple goals: improving future-proof
maintainability by taking thumbnail code out of MediaWiki core and using a
service instead to perform thumbnail operations; saving on expensive
storage by no longer storing multiple copies of all thumbnails on disk
forever; enabling far-future Expires headers for images, to greatly improve
their client cacheability; and finally, switching to the best-performing
underlying software to generate thumbnails faster.
https://phabricator.wikimedia.org/T111718

* ResourceLoader. ResourceLoader is the MediaWiki subsystem that is
responsible for loading JavaScript and CSS. Whereas much of MediaWiki's
code executes only sparingly (in reaction to editors modifying content)
ResourceLoader code runs over half a billion times a day on hundreds of
millions of devices. Its contribution to how users experience our sites is
very large. Our current focus is on improving ResourceLoader's cache
efficiency by packaging and delivering JavaScript and CSS code in a way
that allows it to be reused across page views without needing to be
repeatedly downloaded.
https://phabricator.wikimedia.org/T102578

## How are we doing? ##

In future reports, we will spend less time on introductions and more time
reporting on how particular performance metrics have trended since the
previous report and why. In the meantime, we invite you to check out our
dashboards:

* https://performance.wikimedia.org/
* https://grafana.wikimedia.org/dashboard/db/navigation-timing
* https://grafana.wikimedia.org/dashboard/db/time-to-first-byte
* https://grafana.wikimedia.org/dashboard/db/job-queue-health

## Feedback? ##

Please let us know what you think about these reports and whether we have
picked the right lists to send it to. (We're going to make sure this
information is available on mediawiki.org too.)

Until next time,
Aaron, Gilles, Peter, Timo & Ori
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Wikimedia-l] Update from the Wikimedia Performance Team

2015-11-05 Thread Ori Livneh
On Thu, Nov 5, 2015 at 12:17 PM, Brad Jorsch (Anomie) <bjor...@wikimedia.org
> wrote:

> On Thu, Nov 5, 2015 at 2:13 PM, Ori Livneh <o...@wikimedia.org> wrote:
>
> > Improving future-proof maintainability by taking thumbnail code out of
> > MediaWiki-Core and using a service instead to perform thumbnail
> operations.
>
> How does that affect third-party use of MediaWiki where people aren't able
> to (or just aren't wanting to) install bespoke services?
>

It doesn't. There is no plan to strip MediaWiki of its existing
capabilities or to make it depend on this service. And since the intention
is to improve existing quality of service (as opposed to providing new
functionality), I don't expect the feature-set to diverge.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] "Try the free Wikipedia app" banners

2015-09-01 Thread Ori Livneh
On Tue, Sep 1, 2015 at 8:30 AM, Ori Livneh <o...@wikimedia.org> wrote:

> We appear to be running a banner campaign on the mobile web site, driving
> people to download the mobile app:
>

Just in time!
http://techcrunch.com/2015/09/01/death-to-app-install-interstitials/
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] "Try the free Wikipedia app" banners

2015-09-01 Thread Ori Livneh
We appear to be running a banner campaign on the mobile web site, driving
people to download the mobile app:

https://en.m.wikipedia.org/wiki/?banner=Aug2015_app_banner_2
https://en.m.wikipedia.org/wiki/?banner=Aug2015_app_banner_1

Campaign definition:
https://meta.wikimedia.org/w/index.php?title=Special:CentralNotice=noticeDetail=Android_app

This isn't cool. This isn't us. We don't drive people from an open platform
to a closed one.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] New RFC: better JS minification

2015-09-01 Thread Ori Livneh
On Tue, Sep 1, 2015 at 8:43 AM, Jérémie Roquet 
wrote:

> Has the RFC been abandonned because of lack of interest?
>

Speaking for myself: at the time the RFC was written, I was skeptical that
the benefits would be worth incurring a dependency on an external service
(or the requirement of shelling out). But since then we have had a lot of
discussions (in the RFC meetings) on how to do micro-services correctly,
and some good abstractions have come out of that. So +1 for reviving it.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] How to serve up subtitles

2015-08-18 Thread Ori Livneh
On Mon, Aug 17, 2015 at 5:56 AM, Derk-Jan Hartman 
d.j.hartman+wmf...@gmail.com wrote:

 As part of Brion's and mine struggle for better A/V support on Wikipedia, I
 have concluded that our current support for subtitles is rather...
 improvised.

 Currently all our SRT files are referenced from HTML using action=raw. But
 then not actually used from action=raw, but instead served up as semi html
 using api.php. Which is ridiculous...
 If we want to move to more HTML5 compliancy, we also will want to switch
 from the SRT format to the VTT format.

 Ideally, I want to host multiple subtitle formats, and dynamically
 serve/convert them as either SRT or VTT. These can be directly referenced
 from a track element so that we are fully compatible.

 The question is now, how to best do this. The endpoint needs to be dynamic,
 cacheable, allow multiple content types etc.

 Ideas suggested have been:
 * Api.php
 * Restbase
 * New endpoint
 * ResourceLoader modules

 I'm listing the current problems, future requirements and discussing
 several ideas at:

 https://www.mediawiki.org/wiki/Extension:TimedMediaHandler/TimedTextRework?veaction=edit
 If you have any ideas or remarks, please contribute them !


I propose adding an additional associated namespace (like Talk:), except
for subtitles. The namespace will be associated with File pages which
represent videos, and it will be coupled to a ContentHandler class
representing subtitle content. The ContentHandler class will be a natural
place for validation logic and the specification of an alternate editing
interface suitable for editing subtitles.
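
To make that a bit more concrete, here is a rough, hypothetical sketch of the
registration involved. The class name, content model id, namespace names, and
namespace numbers are all made up; this is not a worked-out design:

```
class SubtitleContentHandler extends TextContentHandler {
    public function __construct( $modelId = 'subtitles' ) {
        // Subtitles are stored as plain text; validation of SRT/VTT
        // structure could live in the associated Content class.
        parent::__construct( $modelId, [ CONTENT_FORMAT_TEXT ] );
    }
}

// Setup code, e.g. in an extension:
$wgContentHandlers['subtitles'] = 'SubtitleContentHandler';
define( 'NS_SUBTITLES', 710 );
define( 'NS_SUBTITLES_TALK', 711 );
$wgExtraNamespaces[NS_SUBTITLES] = 'Subtitles';
$wgExtraNamespaces[NS_SUBTITLES_TALK] = 'Subtitles_talk';
$wgNamespaceContentModels[NS_SUBTITLES] = 'subtitles';
```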
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Upcoming SyntaxHighlight_GeSHi changes

2015-07-14 Thread Ori Livneh
On Tue, Jul 14, 2015 at 4:35 AM, John Mark Vandenberg jay...@gmail.com
wrote:

 On Mon, Jul 13, 2015 at 1:30 PM, John Mark Vandenberg jay...@gmail.com
 wrote:
  On Tue, Jun 23, 2015 at 10:48 AM, Ori Livneh o...@wikimedia.org wrote:
  Hello,
 
  Over the course of the next two days, a major update to the
  SyntaxHighlight_GeSHi extension will be rolled out to Wikimedia wikis.
 The
  change swaps geshi, the unmaintained PHP library which performs the
 lexical
  analysis and output formatting of code, for another library, called
  Pygments.
 
  The roll-out will remove support for 31 languages while adding support
 for
  several hundred languages not previously supported, including Dart,
 Rust,
  Julia, APL, Mathematica, SNOBOL, Puppet, Dylan, Racket, Swift, and many
  others. See https://people.wikimedia.org/~ori/geshi_changes.txt for a
  full list. The languages that will lose support are mostly obscure, with
  the notable exception of ALGOL68, Oz, and MMIX.
 
  I was surprised to find other languages not in your text file that
  appear to no longer be supported.
 
  I've gone through the geshi php files looking for assembler languages
  only so far:
  https://gerrit.wikimedia.org/r/224379
 
  How/Why were these excluded in your list?

 I've encountered more of these on Wikipedia, so, ...

 Here is a list of 59 geshi supported languages which were omitted from
 the above list of 31 languages being de-supported by the switch to
 Pygments.


These haven't been supported since
https://gerrit.wikimedia.org/r/#/c/197449/ -- see
https://phabricator.wikimedia.org/T93025 .
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Upcoming SyntaxHighlight_GeSHi changes

2015-06-30 Thread Ori Livneh
On Mon, Jun 29, 2015 at 11:04 AM, Ricordisamoa ricordisa...@openmailbox.org
 wrote:

 Il 29/06/2015 20:01, Brad Jorsch (Anomie) ha scritto:

 On Mon, Jun 22, 2015 at 8:48 PM, Ori Livneh o...@wikimedia.org wrote:

  Over the course of the next two days, a major update to the
 SyntaxHighlight_GeSHi extension will be rolled out to Wikimedia wikis.
 The
 change swaps geshi, the unmaintained PHP library which performs the
 lexical
 analysis and output formatting of code, for another library, called
 Pygments.

  ... Please tell me we're not really going to have the final state here
 be
 an extension named SyntaxHighlight_GeSHi that doesn't use GeSHi anymore.

 Here we are.


The mixed case, nonsense word, and inconsistent word separation were simply
too dear to let go. It puts a song in your heart every time you need to
type it.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Engineering] Parsoid announcement: Main roundtrip quality target achieved

2015-06-25 Thread Ori Livneh
On Thu, Jun 25, 2015 at 3:22 PM, Subramanya Sastry ssas...@wikimedia.org
wrote:

 Hello everyone,

 On behalf of the parsing team, here is an update about Parsoid, the
 bidirectional wikitext <-> HTML parser that supports Visual Editor, Flow,
 and Content Translation.

 Subbu.

 ---
 TL;DR:

 1. Parsoid[1] roundtrips 99.95% of the 158K pages in round-trip testing
without introducing semantic diffs[2].


Congratulations, parsing team. This is very cool.


...and, pssst, wink wink, nudge nudge, etc:
http://cacm.acm.org/about-communications/author-center/author-guidelines
http://queue.acm.org/author_guidelines.cfm

:)
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Upcoming SyntaxHighlight_GeSHi changes

2015-06-23 Thread Ori Livneh
On Mon, Jun 22, 2015 at 10:30 PM, John Mark Vandenberg jay...@gmail.com
wrote:

 Ugh, is there a way to configure pygments to have fallbacks for
 languages which are substantially based on another?  e.g. xpp is
 basically java, and looks quite good when I tested it on betalabs.  I
 am sure that Pygments has some parser close to 'email', as they do
 support a 'http session' language.


Yes. We maintain a mapping from GeSHi lexer names to compatible Pygments
lexers:

https://github.com/wikimedia/mediawiki-extensions-SyntaxHighlight_GeSHi/blob/master/SyntaxHighlight_GeSHi.compat.php#L26-96

Making xpp render as Java would be as simple as adding 'xpp' => 'java' to
that mapping. If you submit a patch, I will be happy to merge it.
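
For illustration, here is a minimal Python sketch of the same idea, using
Pygments directly. It is not the extension's actual PHP code; the mapping
table and function below are made up:

```
from pygments import highlight
from pygments.formatters import HtmlFormatter
from pygments.lexers import get_lexer_by_name
from pygments.util import ClassNotFound

# Hypothetical compatibility table: legacy GeSHi names on the left,
# Pygments lexer aliases on the right.
GESHI_TO_PYGMENTS = {'xpp': 'java'}

def highlight_block(code, lang):
    # Translate a legacy GeSHi name to a Pygments alias, if one is known.
    lang = GESHI_TO_PYGMENTS.get(lang, lang)
    try:
        lexer = get_lexer_by_name(lang)
    except ClassNotFound:
        # Unknown language: fall back to plain, unhighlighted text.
        lexer = get_lexer_by_name('text')
    return highlight(code, lexer, HtmlFormatter())
```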
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Upcoming SyntaxHighlight_GeSHi changes

2015-06-23 Thread Ori Livneh
On Tue, Jun 23, 2015 at 7:23 AM, Aran a...@organicdesign.co.nz wrote:

 Also for those that prefer the client to do the work I recently made an
 extension to use the highlight.js module at highlightjs.org.
 https://www.mediawiki.org/wiki/Extension:HighlightJS


This is very nice, and may be a target for a future migration. It'd be
great if the extension had a server-side implementation as well in the form
of a node.js service that MediaWiki could call.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Upcoming SyntaxHighlight_GeSHi changes

2015-06-23 Thread Ori Livneh
On Tue, Jun 23, 2015 at 7:55 AM, Daniel Barrett d...@cimpress.com wrote:

 https://people.wikimedia.org/~ori/geshi_changes.txt

 What about languages that aren't listed in your Supported or
 Unsupported lists?


They'll continue to work as before.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Upcoming SyntaxHighlight_GeSHi changes

2015-06-22 Thread Ori Livneh
Hello,

Over the course of the next two days, a major update to the
SyntaxHighlight_GeSHi extension will be rolled out to Wikimedia wikis. The
change swaps geshi, the unmaintained PHP library which performs the lexical
analysis and output formatting of code, for another library, called
Pygments.

The roll-out will remove support for 31 languages while adding support for
several hundred languages not previously supported, including Dart, Rust,
Julia, APL, Mathematica, SNOBOL, Puppet, Dylan, Racket, Swift, and many
others. See https://people.wikimedia.org/~ori/geshi_changes.txt for a
full list. The languages that will lose support are mostly obscure, with
the notable exception of ALGOL68, Oz, and MMIX.

The change is expected to slightly improve the time it takes to load and
render all pages on all wikis (not just those that contain code blocks!),
at the cost of a slight penalty (about a tenth of a second) on the time it
takes to save edits which introduce or modify a block of highlighted code
to an article.

Lastly, the way the extension handles unfamiliar languages will change.
Previously, if the specified language was not supported by the extension,
instead of a code block, the extension would print an error message. From
now on, it will simply output a plain, unhighlighted block of monospaced
code.

The wikitext syntax for highlighting code will remain the same.

-- Ori
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Timo Tijhof joining the performance team

2015-06-22 Thread Ori Livneh
Hi all,

I am delighted to announce that Timo Tijhof (Krinkle) is joining the
Wikimedia Foundation's performance team.

Over the past several years, Timo has made many important contributions to
MediaWiki performance. Though he has been highly active across the stack,
his impact on front-end performance has been especially pronounced. Timo is
one of the architects of ResourceLoader, the set of JavaScript and PHP
software components that collectively implement the loading of JavaScript
and CSS. A role on the performance team will allow Timo to spend time on
modernizing ResourceLoader to leverage recent changes in browser networking
stacks and performance-related web APIs.

Timo has also been exemplary in his interactions with browser vendors and
other upstream software vendors, by reporting bugs, submitting patches,
providing feedback on proposals, and polling external subject-matter
experts for input on proposed changes to Wikimedia's stack. I expect Timo
to continue to play a leading role in evaluating new APIs and identifying
opportunities to utilize them to improve user experience.  Finally, Timo's
history of working closely with the VisualEditor team makes him a natural
point-person for cross-team performance work on VisualEditor, which
continues to be a priority for us.

Welcome, Timo -- we're super excited and proud to have you.

Ori
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Upcoming SyntaxHighlight_GeSHi changes

2015-06-22 Thread Ori Livneh
On Mon, Jun 22, 2015 at 6:37 PM, MZMcBride z...@mzmcbride.com wrote:

 I use the SyntaxHighlight_GeSHi MediaWiki extension on a few non-Wikimedia
 wikis. If I just run git pull in the extension's directory in a few
 days, is that sufficient?


Almost. There are updated instructions in
https://github.com/wikimedia/mediawiki-extensions-SyntaxHighlight_GeSHi/blob/master/README#L16-40
.



 The upgrade path isn't totally clear to me from
 your e-mail. I'm also curious if the Pygments version of the extension
 will change the required MediaWiki core version needed.


It requires MediaWiki 1.25, but the previous version did as well:

https://github.com/wikimedia/mediawiki-extensions-SyntaxHighlight_GeSHi/blob/f3476de0158e464db442213d6731b691a2dd8af1/SyntaxHighlight_GeSHi.php#L12


 Lastly, the way the extension handles unfamiliar languages will change.
 Previously, if the specified language was not supported by the extension,
 instead of a code block, the extension would print an error message. From
 now on, it will simply output a plain, unhighlighted block of monospaced
 code.

 Actually, not instead of, but in addition to.


Correct. My description of the current behavior was inaccurate. Thanks.


 My only feature request would be for
 a tracking category to be auto-populated. Something similar to "Pages
 containing invalid syntax highlight languages", which would then allow wiki
 editors to find and address these pages (by setting lang=text or by
 filing Phabricator Maniphest tasks to add support for missing languages).


Sounds reasonable. Could you file a task?
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Facebook's HHVM performance sprint retrospective highlights MediaWiki gains

2015-06-11 Thread Ori Livneh
On Thu, Jun 11, 2015 at 9:02 PM, Ori Livneh o...@wikimedia.org wrote:

 a parse of the Barack Obama article.


A parse of *English Wikipedia's* Barack Obama article. I am trying, I am
trying... :)
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Facebook's HHVM performance sprint retrospective highlights MediaWiki gains

2015-06-11 Thread Ori Livneh
Facebook's HHVM team just completed their first performance lockdown, which
they spent focusing on the performance of open-source PHP frameworks under
HHVM. The achievement which they chose to highlight in their blog posts is
a gain of 19% in their MediaWiki performance benchmark, which is – you
guessed it – a parse of the Barack Obama article.

https://code.facebook.com/posts/902199373155728/-inside-the-hhvm-lockdown/

http://hhvm.com/blog/9293/lockdown-results-and-hhvm-performance
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikipedia iOS, now Swift enabled!

2015-05-23 Thread Ori Livneh
Cool! The wmf:// NSURLProtocol work looks pretty neat as well.

On Sat, May 23, 2015 at 4:46 PM, Brian Gerstle bgers...@wikimedia.org
wrote:

 If you want to write some Swift in the main Wikipedia iOS app project,
 please work off the swift branch.

 Happy hacking!

 Brian

 --
 EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
 IRC: bgerstle
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Multimedia team?

2015-05-10 Thread Ori Livneh
On Sun, May 10, 2015 at 5:18 PM, Tim Starling tstarl...@wikimedia.org
wrote:

 On 10/05/15 07:06, Brian Wolff wrote:
  People have been talking about vr for a long time. I think there is more
  pressing concerns (e.g. video). I suspect VR will stay in the video game
  realm  or gimmick realm for a while yet

 Maybe VR is a gimmick, but VRML, or X3D as it is now called, could be
 a useful way to present 3D diagrams embedded in pages. Like SVG, we
 could use it with or without browser support.


This is pretty cool. There are some examples at http://examples.x3dom.org/
.

Simple example: a ball-joint pendulum: 
http://examples.x3dom.org/physics/ballJoint_pendulum.html
Complex example: a liver: 
http://liveranatomyexplorer.steven-birr.com/index.php?site=start
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] GSoC and Outreachy results out!

2015-04-27 Thread Ori Livneh
On Mon, Apr 27, 2015 at 1:00 PM, Niharika Kohli nko...@wikimedia.org
wrote:

 Hello everyone!

 After a long and intense selection process, Google and GNOME Foundation
 have finally announced the list of GSoC and Outreachy selects for the
 current round. I'm pleased to announce we have 9 interns for GSoC and 1 for
 Outreachy!


It is especially encouraging to see the diversity of participants -- by my
count, the students hail from five different countries, and nearly half are
women. It's also extremely impressive to see volunteers from the community
step up to serve as mentors. Kudos!

I'm ori on IRC (https://www.mediawiki.org/wiki/MediaWiki_on_IRC). Feel
free to reach out any time and say hello or ask for help.

Welcome to Wikimedia!
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] For the Sake of Simplicity

2015-04-22 Thread Ori Livneh
From http://www.artima.com/weblogs/viewpost.jsp?thread=362327 :

That software systems are getting bigger and more pervasive is testament
that we can at times muster the wherewithal to manage the inherent
complexity of constructing large systems. At other times, however, it
simply supports the view that creating complexity is far easier than
hiding it — the creeping featurism, physical size, and bugginess of
certain operating systems and wordprocessing packages is tantamount to a
public admission of software engineering failure. To paraphrase Blaise
Pascal, The software to solve your problem is larger and more complex
than necessary, because we did not have the ability or resources to make
it smaller and simpler.

[...]

In other words, leaving or taking things out by constantly examining the
difference between inherent and actual complexity, questioning and
reducing the number of features, and questioning and reducing the amount
of code. For benighted management that still measures productivity in
terms of KLOCs, this is scary: it appears to represent negative
productivity. But... less code, fewer bugs... fewer features, greater ease
of use... and so on.

This leads to an interesting marketing possibility, and one that I have
seen in action only a few times: the idea that a subsequent release of a
product might be smaller or have fewer features than the previous version,
and that this property should be considered a selling point. Perhaps the
market is not mature enough to accept it yet, but it remains a promising
and classic ideal — less is more.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] client-side HTML templating now in core

2015-04-02 Thread Ori Livneh
On Thu, Apr 2, 2015 at 3:27 PM, Ryan Kaldari rkald...@wikimedia.org wrote:

 MediaWiki core now has built-in support for using Mustache HTML templates
 from your client-side Javascript code. You can also use the same templates
 on the server-side with the TemplateParser class.

 Documentation:

 https://www.mediawiki.org/wiki/Manual:HTML_templates#mw.template_.28client-side.29


Nice work, and kudos for the documentation!
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Moritz Muehlenhoff joins as Ops Security Engineer

2015-04-02 Thread Ori Livneh
On Thu, Apr 2, 2015 at 2:07 AM, Mark Bergsma m...@wikimedia.org wrote:

 He used to be a
 frequent visitor of film festivals such as the San Sebastian festival,
 but with the baby around home theatre has become more prevalent. :-)


Prediction: you are about to become a very discerning critic of Disney
movies :D


 Please join me in welcoming Moritz to the team!


Welcome! Awesome to have you on board.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Lua module for simple molar mass calculations

2015-03-31 Thread Ori Livneh
On Tue, Mar 31, 2015 at 3:25 PM, Brenton Horne brentonhorn...@gmail.com
wrote:

 Thanks, I have also posted this question on Stackoverflow (
 http://stackoverflow.com/questions/29377097/lua-module-
 for-calculating-the-molar-mass-of-chemical-compounds), where someone
 with Lua skills, but not so much with MediaWiki Lua templating, gave
 this code:

 local AtomicWeightLookup = {
   C = 12.01,
   H = 1.001,
   O = 16
 }

 local function Calculate(Input)
   -- Input Example: {C = 2, H = 6, O = 1}
   local Result = 0
   -- Iterate through Input table
   for Element, Quantity in next, Input do
     -- If element is not found in table, assume 0 weight.
     local AtomicWeight = AtomicWeightLookup[Element] or 0
     -- Multiply
     Result = Result + Quantity * AtomicWeight
   end
   return Result
 end

 -- EXAMPLE
 print(Calculate({C = 2, H = 6, O = 1}))

 but as you can see there are no variables in here that are set by MediaWiki
 templates, but it seems like a decent starting place.


Here you go:
https://test2.wikipedia.org/wiki/Module:Standard_atomic_weight
https://test2.wikipedia.org/wiki/Module:Molar_mass_calculator
demo: https://test2.wikipedia.org/wiki/Module_talk:Molar_mass_calculator

Note that your table had an error -- the atomic weight of Fluorine is
assigned to symbol 'C' rather than 'F'.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Flame graphs in Chrome / Chromium

2015-02-13 Thread Ori Livneh
Hello,

The timeline and flame graph features of Chrome's DevTools have been very
useful for us as we work to understand and improve the performance of
VisualEditor. Someone asked me today about how we use these tools, so I
recorded a short (3-minute) screencast. It unfortunately cut off near the
end, but only the last sentence or so got clipped.

https://commons.wikimedia.org/wiki/File:Demonstration_of_Chromium%27s_timeline_feature.webm

T88590 is a good example of a bug we caught using this feature:

https://phabricator.wikimedia.org/T88590

Hope it's useful,

Ori
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] From Node.js to Go

2015-01-29 Thread Ori Livneh
(Sorry, this was meant for wikitech-l.)

On Thu, Jan 29, 2015 at 7:20 PM, Ori Livneh o...@wikimedia.org wrote:

 We should do the same, IMO.
 http://bowery.io/posts/Nodejs-to-Golang-Bowery/

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Dev Summit debrief: SOA proliferation through specification

2015-01-28 Thread Ori Livneh
On Wed, Jan 28, 2015 at 12:30 PM, James Douglas jdoug...@wikimedia.org
wrote:

 Howdy all,

 It was a pleasure chatting with you at this year's Developer Summit[1]
 about how we might give SOA a shot in the arm by creating (and building
 from) specifications.

 The slides are available on the RESTBase project pages[2] and the session
 notes are available on Etherpad[3].


Hi James,

I missed your session at the developer summit, so the slides and notes are
very useful. I think that having a formal specification for an API as a
standalone, machine-readable document is a great idea. I have been poking
at Chrome's Remote Debugging API this week and found this project, which is
a cool demonstration of the power of this approach:
https://github.com/cyrus-and/chrome-remote-interface

The library consists of just two files: the protocol specification[0],
which is represented as a JSON Schema, and the library code[1], which
generates an API by walking the tree of objects and methods. This approach
allows the code to be very concise. If future versions of the remote
debugging protocol are published as JSON Schema files, the library could be
updated without changing a single line of code.

MediaWiki's API provides internal interfaces for API modules to describe
their inputs and outputs, but that's not quite as powerful as having the
specification truly decoupled from the code and published as a separate
document. I'm glad to see that you are taking this approach with RESTBase.

  [0]:
https://github.com/cyrus-and/chrome-remote-interface/blob/master/lib/protocol.json
  [1]:
https://github.com/cyrus-and/chrome-remote-interface/blob/master/lib/chrome.js
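
As a rough illustration of that pattern (a sketch only, in Python rather
than the JavaScript the library actually uses, and assuming a spec shaped
like {"domains": [{"domain": ..., "commands": [{"name": ...}]}]}),
generating callables by walking the specification might look like this:

```
import json
from types import SimpleNamespace

def build_api(spec_path):
    """Generate an API object from a JSON protocol specification by
    walking its domains and commands, instead of hand-writing bindings."""
    with open(spec_path) as f:
        spec = json.load(f)
    api = SimpleNamespace()
    for domain in spec['domains']:
        ns = SimpleNamespace()
        for command in domain.get('commands', []):
            method = '%s.%s' % (domain['domain'], command['name'])
            # Each generated callable just builds the message envelope
            # that would be sent over the wire.
            setattr(ns, command['name'],
                    lambda params=None, method=method: {
                        'method': method, 'params': params or {}})
        setattr(api, domain['domain'], ns)
    return api

# e.g. build_api('protocol.json').Page.navigate({'url': 'https://example.org'})
```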
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] All Hands On Deck!

2015-01-23 Thread Ori Livneh
On Fri, Jan 23, 2015 at 9:11 AM, littlewmfb...@yandex.com wrote:

 Oh what a day! Which began when perforce
 a visitor from afar began to exhort
 us to look far afield, to travel and visit
 learn brand new things, uncover, elicit
 stories of users far from our homes!
 And so we set out, bravely to roam:
 perhaps ten full blocks! We found creatures strange!
 They all spoke English! The stories exchanged
 recalled those of family: Mom, Dad, and friends --
 it's true, then, we _are_ all the same in the end!

 Such relief not to grapple
 with projects baroque,
 languages strange, or
 features they wrote.
 In tune with this sentiment
 let's celebrate dominance!
 Hush the less pertinent --
 let's not mention those continents.

 Hurray, we all cheer: Our wiki is strong!
 Our projects are weak, but shush, sing along:
 our rivals are fierce, but yet we prevailed;
 it must be because our PHP scaled!
 Ignore those naysayers who laugh at our UX
 And claims by our editors that it obstructs:
 separate pages for talk, no friends and no chat --
 no Serious Software has all of that!

 Well, enough -- we're not free
 to change even fonts
 without acres of missives
 to agony aunts:
 let's move next to strategy,
 where with speeches prolonged
 new hires will tell us
 what we got wrong.

 Three commands we were given:
 the first, to be punctual.
 By fiat we've banished
 the correct but eventual;
 from now on our code
 is timely _and_ functional.
 Our prior disasters are
 vanished by ritual.

 The second was novel:
 exhorted to innovate!
 Our change-fearing userbase
 I'm sure will reciprocate.
 Perhaps we can grow
 new crops of good editors.
 New users, new processes,
 throw off our fetters.
 Perhaps we need spaces
 where we can be bold --
 it's hard else to see
 how to do what we're told.

 The last was to integrate,
 engage with community;
 never mind our tall silos
 and product disunity:
 we can have orphaned features
 conflicted teams, clashing visions --
 What's key is to synergize!
 says our stratcom tactician.
 Community discourse
 will fix all that ails us:
 except for those times
 when instead they've assailed us.

 Lift a glass to the mission!
 We'll muddle through fine.
 We all love each other,
 but this day's been a grind.


Madam / Sir,

Your lines are fantastically metric
With iambs that turn anapestic
So it's really a shame
that you left out your name
'Cause without one your words are domestic.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki 2.0 (was Fwd: No more Architecture Committee?)

2015-01-16 Thread Ori Livneh
On Thu, Jan 15, 2015 at 11:51 PM, Daniel Friesen dan...@nadir-seen-fire.com
 wrote:

 Any such discussion should probably start with this:
 https://www.mediawiki.org/wiki/MediaWiki_2.0


The page is an outline of a plan. It sounds pretty good to me. Whatever
happened to that initiative? Did it just fizzle out, or were there
substantive disagreements to work out?
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Fwd: No more Architecture Committee?

2015-01-15 Thread Ori Livneh
On Thu, Jan 15, 2015 at 8:04 PM, Rob Lanphier ro...@wikimedia.org wrote:

 On the leadership front, let me throw out a hypothetical:  should we
 have MediaWiki 2.0, where we start with an empty repository and build
 up?  If so, who makes that decision?  If not, what is our alternative
 vision?  Who is going to define it?  Is what we have good enough?


Let's throw out that hypothetical, because it's too grotesque even as a
conversation starter.

The model I do think we should consider is Python 3. Python 3 did not
jettison the Python 2 codebase. The intent behind the major version change
was to open up a parallel development track in which it was permissible to
break backward-compatibility in the name of making a substantial
contribution to the coherence, elegance and utility of the language.

Python 3 development started with PEP-3000[1], which Guido published in
2006, some fifteen years after the initial public release of Python.
MediaWiki has been around for almost as long, and it has been subjected to
rather extraordinary stressors during that time.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Fwd: No more Architecture Committee?

2015-01-15 Thread Ori Livneh
On Thu, Jan 15, 2015 at 10:57 PM, Tim Starling tstarl...@wikimedia.org
wrote:

 Committee members are skeptical of many of the current ideas floating
 around at the moment, and have their own ideas about what things should be
 priorities, but have no expectation that those ideas will be considered for
 resourcing.


Keeping these thoughts to yourselves doesn't help. Please speak up.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] S Page debuts as Technical Writer

2015-01-05 Thread Ori Livneh
On Mon, Jan 5, 2015 at 2:06 PM, Quim Gil q...@wikimedia.org wrote:

 It is an honor to announce that S Page[1] has moved from the Collaboration
 (Flow) team to join the WMF Engineering Community team as a Technical
 Writer[2]. We were really lucky to find such a great combination of English
 communication skills, awareness of MediaWiki documentation pain points,
 more-than-basic MediaWiki development experience, and Wikimedia community
 mileage. Besides, S is self-driven, based in San Francisco, and accompanies
 almost any reaction with a smile, assets of great value in his new
 position.


This is great news. S is really excellent at this. Congrats.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] ResourceLoader module wrapping

2014-12-19 Thread Ori Livneh
On Thu, Dec 18, 2014 at 4:34 PM, Gergo Tisza gti...@wikimedia.org wrote:

 I am experimenting with catching Javascript errors with raven.js [1] (see
 the JS error logging RfC [2] for background; see T1345 [3] for a prototype
 for JS error logging). For various reasons, Javascript does not have a
 reliable way to install a global exception handler like e.g. PHP does with
 set_exception_handler(), so the standard way of doing this is to wrap
 modules into a try-catch block.


Chrome's profiling tools continue to flag try / catch blocks because they
are ineligible for certain types of optimizations. We've discussed it
before on this list, but I don't think we ever quantified the performance
penalty (or even confirmed its existence); doing so should probably be a
prerequisite to this change.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] MediaWiki flame graphs

2014-12-15 Thread Ori Livneh
I'm writing to draw your attention to a newly-available resource for
performance analysis: flame graphs of MediaWiki code.

http://performance.wikimedia.org/xenon/svgs/

Flame graphs are a visualization of application stack traces.

Each application server on the Wikimedia cluster has a software timer that
interrupts MediaWiki once every ten minutes to capture a stack trace. The
stack trace shows what code the application server was in the process
of executing when the timer expired. A central log aggregator collects
these traces and uses them to generate flame graphs.

Each box in a flame graph represents a function in the stack. The y-axis
shows stack depth. The topmost box shows the function that was on-CPU at
the moment the trace was captured. The function below a function is its
parent.

The x-axis spans the sample population. The width of the box shows the
total time it was on-CPU or part of an ancestry that was on-CPU, based on
sample count.

Here is an example:
http://performance.wikimedia.org/xenon/svgs/hourly/2014-12-15_22.svgz

To learn more about flame graphs and how to interpret them, see 
http://www.brendangregg.com/flamegraphs.html.
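
As a rough sketch of the aggregation step (not the actual pipeline; the
stack frames below are invented), collapsing sampled stacks into the
folded text format that flame graph generators consume looks like this:

```
from collections import Counter

def fold(samples):
    """Collapse raw stack samples into 'folded' lines: frames joined by
    semicolons, followed by the number of times that exact stack was
    observed. Flame graph generators consume this format."""
    counts = Counter(';'.join(stack) for stack in samples)
    return '\n'.join('%s %d' % (stack, n) for stack, n in counts.most_common())

# Three samples, two of them identical:
samples = [
    ['index.php', 'MediaWiki::run', 'Parser::parse'],
    ['index.php', 'MediaWiki::run', 'Parser::parse'],
    ['index.php', 'MediaWiki::run', 'OutputPage::output'],
]
print(fold(samples))
```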
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] ANN: WikiMedia recent changes DDP API

2014-12-14 Thread Ori Livneh
On Sat, Dec 13, 2014 at 11:01 AM, Mitar mmi...@gmail.com wrote:

 Hi!

 I made a Meteor DDP API for the stream of recent changes on all
 Wikimedia wikis. Now you can simply use DDP.connect in your Meteor
 application to connect to the stream of changes on Wikipedia. :-)

 http://wikimedia.meteor.com/


 Mitar


This is really cool!
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Phabricator migration part II: Replacing gitblit with Diffusion

2014-11-30 Thread Ori Livneh
On Fri, Nov 28, 2014 at 6:15 PM, MZMcBride z...@mzmcbride.com wrote:

 Chad wrote:
 On Fri Nov 28 2014 at 2:41:00 PM Bartosz Dziewoński matma@gmail.com
 wrote:
  On Thu, 27 Nov 2014 21:02:39 +0100, Merlijn van Deen
  valhall...@arctus.nl wrote (roughly):
  https://www.mediawiki.org/wiki/Special:Permalink/1287943
 
  I definitely like this proposal a lot more than the current one.
 
 +1. Big improvement for callsigns :)

 Yes, this is great, valhallasw. Thank you!

 So instead of DATAVALUEIMPLEMENTATIONS we'd have EDVM and instead of
 FUNDRAISINGTRANSLATEWORKFLOW we'd have EFTW. I'm sold.

 Does anyone hate or object to this four-character scheme?


I have my own mapping of four-letter words to repositories, but this one
works too.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] FOSS OPW Mentor Contact

2014-10-22 Thread Ori Livneh
Replied. Thanks.

On Thu, Oct 16, 2014 at 4:42 PM, Rachel Farrand rfarr...@wikimedia.org
wrote:

 Hello,

 The MediaWiki Core Team is having an offsite this week. They are mostly
 spending their time in discussion with closed laptops.
 I will make sure to pass your message on to Ori as soon as meetings end
 today.
 Thanks for your understanding!

 Rachel

 On Thu, Oct 16, 2014 at 4:35 PM, Brian Wolff bawo...@gmail.com wrote:

  On Oct 16, 2014 7:02 PM, E.C Okpo eco...@gmail.com wrote:
  
   Hello, I am working on my application for the FOSS Outreach Program,
 but
  I
   am having some trouble getting in contact with the mentor for my chose
   project - Ori Livneh https://www.mediawiki.org/wiki/User:Ori.livneh.
 I
   have idled on IRC for quite a while trying to get in contact, but no
  luck.
   Would anyone know of an alternate means of contact?
  
   Thanks,
   Christy
   
  
 
 
 https://www.mediawiki.org/wiki/FOSS_Outreach_Program_for_Women/Round_9#Wikimedia_Performance_portal
   
   ___
   Wikitech-l mailing list
   Wikitech-l@lists.wikimedia.org
   https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 
  Isn't there some giant meeting today and tomorrow of wmf platform people?
  Maybe try again on monday?
 
  Otherwise i would suggest email. You can email him by using
  Special:emailuser on wiki, or find his email by looking it up in git,
  gerrit, bugzilla or old mailing list posts. (There is a good chance he
 may
  even be reading this)
 
  --bawolff
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki Architecture Committee updates

2014-10-22 Thread Ori Livneh
On Wed, Oct 22, 2014 at 1:17 PM, Brion Vibber bvib...@wikimedia.org wrote:

 Hey all --

 Announcement time!

 The MediaWiki Architecture Committee has added two members by provisional
 internal consensus:
 * Roan Kattouw, Wikimedia Foundation (visual editor & core)
 * Daniel Kinzler, Wikimedia Deutschland


Congrats to both, and really -- congrats to us for having people of that
caliber around. I think they're the right people for the job.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Invitation to beta-test HHVM

2014-09-26 Thread Ori Livneh
On Fri, Sep 19, 2014 at 12:57 AM, This, that and the other 
at.li...@live.com.au wrote:

 Only once that issue is fixed, and the API is running via HHVM, will I be
 able
 to get excited...


You may be able to get excited now. :)
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Invitation to beta-test HHVM

2014-09-18 Thread Ori Livneh
(apologies for cross-posting)

I'm happy to announce that HHVM is available on all Wikimedia wikis for
intrepid beta testers. HHVM, you'll recall, is an alternative runtime for
PHP that provides substantial performance improvements over the standard
PHP interpreter. Simply put: HHVM is software that runs on Wikimedia's
servers to make your reading and editing experience faster.

You can read more about HHVM here: https://www.mediawiki.org/wiki/HHVM

* How do I enable HHVM?

You can enable HHVM by opting in to the beta feature. This short animated
gif will show you how: http://people.wikimedia.org/~ori/hhvm_beta.gif.

Enabling the beta feature will set a special cookie in your browser. Our
servers are configured to route requests bearing this cookie to a pool of
servers that are running HHVM.

* How do I know that it's working?

Opting-in to the beta feature does not change the user interface in any
way. If you like, you can copy the following code snippet to the global.js
subpage of your user page on MetaWiki:

https://meta.wikimedia.org/wiki/User:Ori.livneh/global.js

If you copy this script to your global.js, the personal bar will be
annotated with the name of the PHP runtime used to generate the page and
the backend response time. It looks like this:

http://people.wikimedia.org/~ori/hhvm_script.png

Edits made by users with HHVM enabled will be tagged with 'HHVM'. The tag
is there as a precaution, to help us clean up if we discover that HHVM is
mangling edits somehow. We don't expect this to happen.

* What sort of performance changes should I expect?

We expect HHVM to have a substantial impact on the time it takes to load,
preview, and save pages.

At the moment, API requests are not being handled by HHVM. Because
VisualEditor uses the API to save articles, opting in to the HHVM beta
feature will not impact the performance of VisualEditor. We hope to have
HHVM handling API requests next week.

* What sort of issues might I encounter?

Most of the bugs that we have encountered so far resulted from minute
differences in how PHP5 and HHVM handle various edge-cases. These bugs
typically cause a MediaWiki error page to be shown.

If you encounter an error, please report it on Bugzilla and tag with it the
'HHVM' keyword.


We're not done yet, but this is an important milestone. The roll-out of
HHVM as a beta feature caps many months of hard work from many developers,
both salaried and volunteer, from the Wikimedia Foundation, Wikimedia
Deutschland, and the broader Wikimedia movement.  I want to take this
opportunity to express my appreciation to the following individuals, listed
in alphabetical order:

Aaron Schulz, Alexandros Kosiaris, Brad Jorsch, Brandon Black, Brett
Simmers, Bryan Davis, Chad Horohoe, Chris Steipp, Erik Bernhardson, Erik
Möller, Faidon Liambotis, Filippo Giunchedi, Giuseppe Lavagetto, Greg
Grossmeier, Jack McBarn, Katie Filbert, Kunal Mehta, Mark Bergsma, Max
Semenik, Niklas Laxström, Rob Lanphier, and Tim Starling.

More good things to come! :)
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] +2 in mediawiki/core for Kevin Israel

2014-08-29 Thread Ori Livneh
Kevin Israel (User:PleaseStand) now has +2 in mediawiki/core, following a
successful nomination by MZMcBride[1]. Congratulations! :)


[1]:
https://www.mediawiki.org/w/index.php?title=Gerrit/Project_ownership&oldid=1072656
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] HHVM status update

2014-08-16 Thread Ori Livneh
Hello,

Here's a quick status update about the rollout of HHVM.

* Since migrating test.wikipedia.org to HHVM exactly one week ago, we've
had just one segfault (reported upstream: 
https://github.com/facebook/hhvm/issues/3438). That's very good.
* Giuseppe shared some benchmarks in an e-mail to wikitech: 
https://lists.wikimedia.org/pipermail/wikitech-l/2014-August/078034.html.
Also very good.
* Re-imaging an app server was surprisingly painful, in that Giuseppe and I
had to perform a number of manual actions to get the server up-and-running.
This sequence of steps was poorly automated: update server's salt key ->
synchronize Trebuchet deployment targets -> fetch scap scripts -> run
sync-common to fetch MediaWiki -> rebuild l10n cache. Doing this much
manual work per app server isn't viable. Giuseppe and I fixed some of this
but there's more work to be done.
* Mark submitted a series of patches (principally 
https://gerrit.wikimedia.org/r/#/c/152903/) to create a service IP and
Varnish backend for an HHVM app server pool, with Giuseppe and Brandon
providing feedback and amending the change. Brandon thinks it looks good
and he may be able to deploy it some time next week.
* The patch routes requests that are tagged with a specific cookie to the
HHVM backends. Initially, we'll ask you (Wikimedia engineers and
technically savvy / adventurous editors) to opt-in to help with testing by
setting the cookie explicitly. The next step after that will be to divert a
fraction of general site traffic to those backends. When exactly that
happens will depend on how many bugs the next round of testing will uncover.
* We'll be adding servers to HHVM pool as we reimage them.
* Tim is looking at modifying the profiling feature of LuaSandbox to work
with HHVM. We currently disable it, due to 
https://bugzilla.wikimedia.org/show_bug.cgi?id=68413. (Feedback from users
on how important this feature is to you would be useful.)
* Giuseppe and Filippo noticed a memory leak on the HHVM job runner
(mw1053). Aaron is trying to isolate it. Tracked in bug 
https://bugzilla.wikimedia.org/show_bug.cgi?id=69428.
* Giuseppe is on vacation for the week of 8/18 - 8/22; Filippo is the
point-person for HHVM in TechOps.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Migrating test.wikipedia.org to HHVM

2014-08-08 Thread Ori Livneh
On Tue, Aug 5, 2014 at 6:53 PM, Ori Livneh o...@wikimedia.org wrote:

 On either Thursday or Friday of this week, Giuseppe Lavagetto (of the
 Wikimedia TechOps team) and I are planning to migrate 
 https://test.wikipedia.org/ (testwiki) to HHVM. [snipped]


{{done}} :)
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Engineering] Migrating test.wikipedia.org to HHVM

2014-08-08 Thread Ori Livneh
On Fri, Aug 8, 2014 at 4:15 PM, Giuseppe Lavagetto glavage...@wikimedia.org
 wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 08/08/14 14:36, Ori Livneh wrote:
  On Tue, Aug 5, 2014 at 6:53 PM, Ori Livneh o...@wikimedia.org
  mailto:o...@wikimedia.org wrote:
 
  On either Thursday or Friday of this week, Giuseppe Lavagetto (of
  the Wikimedia TechOps team) and I are planning to migrate
  https://test.wikipedia.org/ (testwiki) to HHVM. [snipped]
 
 
  {{done}} :)
 
 

 Some poor man's benchmarks, just to show ourselves if we're on the
 good path...


 I ran some tests on both testwiki and one traditional appserver
 running Zend PHP. As the appserver was currently out of rotation, it
 was getting exactly zero traffic; testwiki receives negligible traffic
 and that won't affect our results.

 We decided to run the simplest test of all, by requesting a lot (a
 LOT) of times the same page, testwiki's main page, bypassing all the
 outer cache layers to test exactly the performance and throughput of
 the appservers. When reading the results, keep into account that the
 hhvm appserver is heavily underoptimized at the moment and I'm
 confident in the coming weeks we'll be able to squeeze quite some
 performance out of it. Also keep in mind we still have to road-test
 hhvm for bugs and stability, so we are not going to roll out
 everywhere over the weekend :)

 So, here are the results:

 1) Speed test: measure the time taken to request the page 1000 times
 over just 10 concurrent connections:

                           HHVM     Zend     diff
 Mean time (ms):           233      441      -47%
 99th percentile (ms):     370      869      -57%
 Request/s:                43       22.6     +90%

 HHVM is clearly faster, a lot faster (its 99th percentile is below the
 mean response time for zend...), Note that the load generated in this
 situation is comparable to the everyday load of one appserver.


 2) Load test: measure how much thoughput we obtain when hogging the
 appserver with 50 concurrent requests for a grand total of 1
 requests. What I wanted to test in this case was the performance
 degradation and the systems resource consumption

                           HHVM      Zend      diff
 Mean time (ms):           355       906       -61%
 99th percentile (ms):     791       1453      -45%
 Request/s:                141       55.1      +156%
 Network (Mbytes/s):       17        7         +142%
 RAM used (GBs):           5 (1)     11 (4)
 CPU usage (%):            90 (75)   100 (90)

 For RAM, the figures show the total RAM occupied and the RAM actively
 occupied by MediaWiki, respectively; for CPU, the total and
 user-dedicated CPU usage. Here the numbers show that the Zend appserver
 is clearly over capacity, while the HHVM one is only nearing its limits.

 This benchmark is very crude and I repeated measurements just a few
 times (but the results were pretty stable across runs). But I think we
 can safely conclude that HHVM delivers the kind of performance boost
 we expected - the boost in request/s in the load test is probably the
 most important thing to highlight here. Still, I won't take these
 numbers as projections on real-world usage of mediawiki, but we're
 pretty close to an accurate test.

 Cheers,

 Giuseppe

 - --
 Giuseppe Lavagetto
 Wikimedia Foundation - TechOps Team
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1

 iEYEARECAAYFAlPk6XYACgkQTwZ0G8La7IAZNgCgizdLmtYzlVoMSwLZiCcY8lxL
 rbAAn0/LOkUx7JEkxs3EQQWRV5x1CO6D
 =nQBt
 -END PGP SIGNATURE-

 ___
 Engineering mailing list
 engineer...@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/engineering

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Migrating test.wikipedia.org to HHVM

2014-08-05 Thread Ori Livneh
(apologies for cross-posting)

On either Thursday or Friday of this week, Giuseppe Lavagetto (of the
Wikimedia TechOps team) and I are planning to migrate 
https://test.wikipedia.org/ (testwiki) to HHVM. The way testwiki is
configured makes it a natural next step on the path leading from the Beta
Cluster to production. Specifically, testwiki is served by the same
load-balancer, reverse-proxy, and database servers as the rest of
production, but it is powered by a single application server that is
segregated from the pool of servers that handle requests for all other
Wikimedia wikis. This means that if we run into stability issues with HHVM,
they will be confined to just testwiki, and will not spill over to other
sites.

To migrate testwiki to HHVM, Giuseppe will need to take the server that
powers testwiki off-line for several hours for re-imaging. Ordinarily, we
design our infrastructure for redundancy and graceful failover, so we can
take machines offline without impacting users. But the corollary to
testwiki being a special case is that it is not configured in this way. As
I explained above, this is just as well, because it means we can perform
this work without disturbing the rest of the cluster.

Giuseppe and I will provide additional notices via IRC and e-mail prior to
beginning this work. We know that testwiki is used by a diverse user-base
to test various MediaWiki software components and will do our best to
minimize disruption to such users. Feel free to get in touch via e-mail or
IRC (my nickname is 'ori') if you have concerns about the deployment.

Thanks for your patience and understanding! :)

Ori
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Brief Labs outage

2014-07-31 Thread Ori Livneh
(Apologies for cross-posting.)

We've been noticing an issue with lock-ups on the beta cluster application
servers for the past few days. It happens about once or twice a day.

It just happened again on both application servers, and I'd really like to
try and get to the bottom of things this time. I'll give up and force a
restart if I haven't figured it out by 6:30 UTC, about an hour from now.
Please accept my apology if this is disrupting your development or QA work,
and ping me on IRC if you need Beta back up urgently.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki-Vagrant is going Trusty

2014-07-03 Thread Ori Livneh
On Fri, Jun 27, 2014 at 9:04 PM, Ori Livneh o...@wikimedia.org wrote:

 I'm writing to communicate an upcoming major change in MediaWiki-Vagrant.
 In the coming week, I'm going to upgrade the base image from Precise
 Pangolin to Trusty Tahr, the latest version from Canonical.


This just landed in master, along with HHVM as the default interpreter.
(Zend PHP is still installed, and can be enabled via the optional 'zend'
role). I hope that you'll find MediaWiki performs substantially better with
this setup. Feedback (positive or negative) and bug reports would much be
appreciated.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] MediaWiki-Vagrant is going Trusty

2014-06-27 Thread Ori Livneh
I'm writing to communicate an upcoming major change in MediaWiki-Vagrant.
In the coming week, I'm going to upgrade the base image from Precise
Pangolin to Trusty Tahr, the latest version from Canonical. Trusty
integrates many new open-source software components, many of them
specifically targeting cloud and virtual environments. You can read more
about the release in the release announcement and the release notes:

http://fridge.ubuntu.com/2014/04/17/ubuntu-14-04-trusty-tahr-released/
https://wiki.ubuntu.com/TrustyTahr/ReleaseNotes

This change to MediaWiki-Vagrant anticipates a migration of Wikimedia's
application server stack from Precise to Trusty, the groundwork for which
is already well underway. (Wikimedia's TechOps team, substantially assisted
by volunteer contributors, has completed a migration to Puppet 3, and has
made a Trusty image available on Labs).

I hope to keep the migration painless so as to not disrupt the work of
those of you who use this stack as your primary development environment.
The fact that a similar migration process is underway for Wikimedia's
production infrastructure will, I hope, foster increased collaboration
across Wikimedia and the broader developer community.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Using Composer to manage libraries for mediawiki/core on Jenkins and Foundation cluster

2014-06-02 Thread Ori Livneh
On Mon, Jun 2, 2014 at 6:37 PM, James HK jamesin.hongkon...@gmail.com
wrote:

  That is just the unfortunate truth. We are
  not going to misuse libraries and hack together MediaWiki just so
 extension
  installation can be *slightly* easier.

 This sort of behaviour towards non-WMF extension developers is
 interesting and if your objective is to alienate (as with the attitude
 above) volunteer developers then your on the right path.


You are free to use composer.json to manage extensions, in which case you
should version it in SCM. There's no conflict here. We did not favor one
use-case over another; we went with the path that coheres with the design
of Composer, as explicitly discussed in fantastic detail in its
documentation, bringing MediaWiki in line with every other significant
application or framework that uses Composer that I could find.

We're not so far down a path that we can't change course, but I've yet to
see you rebut any of the points I raised in my commit message accompanying
change I3e7c668ee[0] or articulate a coherent alternative.

As for the accusation that the current approach favors the WMF, it's almost
not worth responding to. We don't even intend to use Composer in
production; all the momentum behind the recent work around Composer
integration has in mind how MediaWiki fits with the broader open-source
ecosystem.

[0]: https://gerrit.wikimedia.org/r/#/c/132788/
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] 404 errors

2014-05-29 Thread Ori Livneh
On Thu, May 29, 2014 at 1:34 PM, ENWP Pine deyntest...@hotmail.com wrote:

 Hi, I'm getting some 404 errors consistently when trying to load some
 English Wikipedia articles. Other pages load ok. Did something break?


TL;DR: A package update went badly.

Nitty-gritty postmortem:

At 20:25 (all times UTC), change Ie5a860eb9[0] (Remove
wikimedia-task-appserver from app servers) was merged. There were two
things wrong with it:

1) The appserver package was configured to delete the mwdeploy and apache
users upon removal. The apache user was not deleted because it was logged
in, but the mwdeploy user was. The mwdeploy account was declared in Puppet,
but there was a gap between the removal of the package and the next Puppet
run during which the account would not be present.

2) The package included the symlinks /etc/apache2/wmf and
/usr/local/apache/common, which were not Puppetized. These symlinks were
unlinked when the package was removed.

Apache was configured to load configuration files from /etc/apache2/wmf,
and these include the files that declare the DocumentRoot and Directory
directives for our sites. As a result, users were served with 404s. At
20:40 Faidon Liambotis re-installed wikimedia-task-appserver on all
Apaches. Since 404s are cached in Varnish, it took another five minutes for
the rate of 4xx responses to return to normal (20:45).[1]

[0]: https://gerrit.wikimedia.org/r/#/c/136151/
[1]:
https://graphite.wikimedia.org/render/?title=HTTP%204xx%20responses%2C%202014-05-29&from=20:00_20140529&until=21:00_20140529&target=reqstats.4xx&hideLegend=true
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Deployments of new, radically different default features

2014-05-21 Thread Ori Livneh
On Sat, May 17, 2014 at 2:55 PM, Rainer Rillke rainerril...@hotmail.com wrote:

 As some people do not care a lot for beta features or new features, do
 not read the mailing lists and overlook main discussion forums or are
 just unable to understand English, they were surprised and confused and
 wondered how they could disable the feature.


Surprised and confused is a good description of how I feel when I visit the
main page of Commons and my mouse cursor strays onto the picture of the
day. A purple hazard light flares along the edges of the image, presumably
intending to alert me to the gross misuse of JavaScript that is about to
occur. Then the image pops out like some demented, angry Jack-in-the-box,
only nominally larger then the original image (which is still visible below
it), but transposed so that it obscures interface elements and page text.

Commons has 17 gadgets enabled by default, more than any other wiki I am
aware of. The Foundation could always do a better job. But as the saying
goes: good manners begin at home.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] What should be the recommended / supported way to do skins? (A proposal.)

2014-05-20 Thread Ori Livneh
On Tue, May 20, 2014 at 2:25 PM, Bartosz Dziewoński matma@gmail.com wrote:

 tl;dr Let's start putting all skins files in a single directory, and let's
 use a grown-up structure with one class per file + separate init code for
 them. Okay?


Sounds good to me. I agree with Tyler that there's more to wish for, but
what you're proposing doesn't at all foreclose further innovation, so I
think it's a practical way forward.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MW-Vagrant improvements at the Zürich Hackathon

2014-05-16 Thread Ori Livneh
On Fri, May 16, 2014 at 2:18 PM, Bryan Davis bd...@wikimedia.org wrote:

   All in all it felt like a very fruitful hack session, and we're closer
  than ever to having a ready-to-go developer instance that mimics our
  production environment. Big thanks to everyone involved in making our
 work
  successful.

 I'd like to follow on with a big thanks to Arthur for bringing the
 topic up and coordinating all of the meetings that led up to the
 event. Never underestimate the power of having a good organizer on
 your project!


Yes, definitely agree. Thanks very much, Arthur.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Advice needed for MediaWiki-Vagrant problem

2014-05-13 Thread Ori Livneh
On Sat, May 10, 2014 at 11:48 AM, Ori Livneh o...@wikimedia.org wrote:

 Vagrant 1.6 changed the order of steps Vagrant performs on initialization:
 it now evaluates the project's Vagrantfile after loading plugins and
 parsing command-line arguments. This means that the various subcommands
 provided for role management no longer work, since the relevant plugins are
 loaded from the top of Vagrantfile, which is now too late a stage to be
 loading plugins.


We're not the only ones whose Vagrant setup broke as a result of this
change. An hour ago Vagrant's lead developer said "I'll think about this
and see if we can bring this back":
https://github.com/mitchellh/vagrant/issues/3775#issuecomment-42980896. So
I'll wait a bit longer before trying out alternative approaches. In the
meantime, please stick to 1.5.x.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Advice needed for MediaWiki-Vagrant problem

2014-05-10 Thread Ori Livneh
Vagrant 1.6 changed the order of steps Vagrant performs on initialization:
it now evaluates the project's Vagrantfile after loading plugins and
parsing command-line arguments. This means that the various subcommands
provided for role management no longer work, since the relevant plugins are
loaded from the top of Vagrantfile, which is now too late a stage to be
loading plugins.

Loading plugins from Vagrantfile was always a bit of a hack, but it was a
good hack that allowed us to bridge over a complicated plugin packaging
process and provide a tailored Vagrant experience right from 'vagrant up'.

I'd like to fix this without adding steps to the installation process, but
I'm not sure how. I spent a few hours bashing my head against this problem
earlier today and didn't get anywhere. I would really welcome a creative
solution.

The relevant bug is https://bugzilla.wikimedia.org/show_bug.cgi?id=65066 ,
originally reported by Robert Vogel.

Ori
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] recent changes stream

2014-05-05 Thread Ori Livneh
On Sun, May 4, 2014 at 9:09 PM, Tyler Romeo tylerro...@gmail.com wrote:

 Just wondering, but has any performance testing been done on different
 socket.io implementations? IIRC, Python is pretty good, so I definitely
 approve, but I'm wondering if there are other implementations that are more
 performant (specifically, servers that have better parallelism and no GIL).


You still get the parallelism here, it just happens outside the language,
by having Nginx load-balance across multiple application instances. The
Puppet class, Upstart job definitions, and supporting shell scripts were
all designed to manage a process group of rcstream instances.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] recent changes stream

2014-05-04 Thread Ori Livneh
Hi,

Gerrit change Id819246a9 proposes an implementation for a recent changes
stream broadcast via socket.io, an abstraction layer over WebSockets that
also provides long polling as a fallback for older browsers. Comment on 
https://gerrit.wikimedia.org/r/#/c/131040/ or the mailing list.
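
For a sense of what consuming such a stream could look like, here is a
hypothetical Python client sketch using the python-socketio library. The
endpoint URL, path, and event names below are assumptions for
illustration, not the deployed interface:

```
import socketio

sio = socketio.Client()

@sio.event
def connect():
    # Hypothetical subscription message naming the wiki to follow.
    sio.emit('subscribe', 'en.wikipedia.org')

@sio.on('change')
def on_change(change):
    # Each event is assumed to carry one recent-changes entry.
    print(change.get('title'), change.get('user'))

sio.connect('https://stream.example.org')  # placeholder endpoint
sio.wait()
```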

Thanks,
Ori
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Minding pull-requests on GitHub

2014-04-18 Thread Ori Livneh
I just stumbled across https://github.com/wikimedia/mediawiki-core/pull/19,
a small but useful contribution to core from an HHVM developer. It has gone
unnoticed for two months, which is a bit sad.

Is there a way to accept pull-requests from GitHub? According to 
https://github.com/wikimedia/mediawiki-core/settings/hooks (may not be
visible to non-Wikimedians, sorry), the WebHook receiver 
http://tools.wmflabs.org/suchaserver/cgi-bin/receiver.py is defunct.
Anyone know the story there?

It'd be good if some additional people were watching (that is, receiving
notifications for) https://github.com/wikimedia/mediawiki-core/.

I haven't responded yet, by the way, so feel free to do so if you know the
answers to these questions. I don't know what effect accepting the
pull-request will have on the code in master, and telling someone who has
already submitted a patch to go sign up for Gerrit seems impolite.

Ori
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Free fonts and Windows users

2014-04-07 Thread Ori Livneh
On Mon, Apr 7, 2014 at 4:08 PM, Erwin Dokter er...@darcoury.nl wrote:

 On 08-04-2014 00:45, Steven Walling wrote:

 On Mon, Apr 7, 2014 at 3:42 PM, Jon Robson jdlrob...@gmail.com wrote:

  I noticed from Kaldari's notes [1] that Open Sans was rejected based
 on language support and install base.

 

 A similar example is Google's Noto font (

 https://en.wikipedia.org/wiki/Noto_fonts). It has basically no default
 install base that I'm aware of, but it's focused on readability in as many
 scripts as possible and is Apache-licensed.


 Noto is useless without a suitable localization mechanism.


Erwin, can you help me understand what a suitable localization mechanism
would be? I filed bug 59983 (Investigate noto font as potential
replacement for diverse font families) back in January because I thought
it could help with localization, so I'd really like to grok this.

https://bugzilla.wikimedia.org/show_bug.cgi?id=59983
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Update on HHVM

2014-03-21 Thread Ori Livneh
Hi! This is a quick heads-up about the status of HHVM migration, and what
the MediaWiki Core team is working on.

There are a few challenges that we have to solve before we can run HHVM in
production:

* We need good packages. The packages provided by Facebook have some deep
issues that need to be fixed before they meet our packaging standards.
 This is a good opportunity to recognize Faidon's leadership on this front:
he has been liaising with Facebook and Debian, working to resolve the
outstanding issues. Thanks, Faidon!
* We need to port a bunch of C extensions for the Zend PHP interpreter to
HHVM. The most complex by far is LuaSandbox. Tim has been working on that.
In the process, he has made substantial improvements to the Zend extension
compatibility layer provided by HHVM, which we are waiting to have merged
upstream: https://github.com/facebook/hhvm/pull/1986.  Once they are
merged, they will be in the queue for the next release.  Releases are cut
every eight weeks.
* I also want to recognize Max Semenik, who stepped up to port Wikidiff2,
producing a patch in short order.
* We need to adapt our app server configuration for HHVM. This includes
configuring HHVM itself as well as reconfiguring Apache to act as a fastcgi
reverse-proxy.
* We need to amend our deployment process so that it implements additional
requirements for HHVM.  Specifically, we will need to add a build step to
produce a bytecode archive in advance of deployment. We are not working on
that piece yet, but I think that Bryan's work on scap is going to make this
a lot easier to implement once we do tackle it.

So far, we have used Facebook's packages in Labs and in MediaWiki-Vagrant,
configured Jenkins to run the unit tests under HHVM (Antoine), and set up a
Jenkins job that builds HHVM from source hourly so we can test patches
(Chad). Aaron and I reasoned our way out of having to
port the igbinary extension, and Aaron is now working on porting
FastStringSearch. Along the way, we have been running into small
compatibility nits which we have fixed either by changing core's behavior
to be cross-compatible or by filing bugs and submitting patches upstream.

As you can see, there are some hard blockers that stand between us and HHVM
in production, and the biggest ones are not entirely in our hands (i.e.,
they depend on upstream merging patches and fixing packages). At the same
time, there is a lot of useful work left to do that can continue without
being blocked by these things. For that reason, the Core MediaWiki team is
currently targeting the Beta cluster for HHVM work.

Our target for the current sprint is for Apache to be able to run either
the Zend interpreter or HHVM, based on the presence of a magic
cookie. By default, visitors to the beta cluster will be served pages
generated using the Zend interpreter, but by setting the cookie, Apache
would serve MediaWiki using HHVM instead.  This is an idea we got from
Niklas, who has implemented something very similar for 
http://dev.translatewiki.net/.  Doing this would allow the beta
cluster to continue to be faithful to production and thus continue to be a
good target for testing, while at the same time providing a way for people
working on HHVM specifically to test ported extensions and to identify and
fix integration points in a production-like environment. It also gives us a
way of making our progress visible to you.

We have benchmarked different workloads on different hardware and have
found the performance of HHVM to be impressively better than the Zend
interpreter in most cases, but we don't yet have numbers to share that
project the impact on users, because we don't have the means of simulating
the load patterns of production, and because some parts of the stack are
still in the process of being ported. We expect that having the option of
running HHVM on the Beta cluster with the complete set of extensions that
Wikimedia uses will make it possible for us to project how it will perform
in production. But we are optimistic, given what we've observed and given
the spate of independent evaluations of HHVM from different corners of the
PHP community.

We are using Bugzilla to track our progress. You can search for bugs with
the 'hiphop' keyword, or simply head to https://www.mediawiki.org/wiki/HHVM,
which aggregates the most recently touched items via RSS. If you'd like to
get involved, pick an open bug, or get in touch via the lists or IRC.

Regards, Core Platform.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Vagrant Cloud

2014-03-14 Thread Ori Livneh
Vagrant Cloud is a new service from HashiCorp that strives to make it easy
to share Vagrant boxes and to collaborate on provisioned instances
together. It's in very early beta, but I have started poking around a
little, and it looks interesting. I don't quite know yet how it will work
with our extensive custom plugin architecture, but I suspect that this is
not an insurmountable problem. HashiCorp is also committed to a freemium
model that makes the software stack free (as in speech) and the basic tier
of cloud services free (as in beer), so it may be possible for us to have
tighter integration with their service without compromising our values. If
your curiosity is piqued and you decide to check it out, please do report
back to the list with your findings -- it'd be good to know whether this is
something we'd want to watch or not.

It does require Vagrant 1.5.1, but I am happy to report that
MediaWiki-Vagrant is compatible! :)
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Webfonts

2014-03-14 Thread Ori Livneh
On Thu, Mar 13, 2014 at 8:08 AM, Niklas Laxström
niklas.laxst...@gmail.com wrote:

 There have recently been questions whether WMF is able to serve
 webfonts. Some people think that because of the issues that led to
 disabling webfonts by default in Universal Language Selector (ULS),
 WMF is not ready to consider webfonts for typography.

 I don't think that way


Me neither; I agree with much of what you are saying.

I'm not sure why Odder abandoned https://gerrit.wikimedia.org/r/#/c/115153/
(Odder, I'm sorry if my comments were hurtful for some reason -- and I want
to say that I appreciate the fact that you use your technical savvy to help
communities make their voices heard in technical forums.) What I was
actually going to propose is that the font required by Hebrew Wikisource be
loaded unconditionally, for all pages, by being referenced in Common.css.
It needs some scrutiny first, but in principle it strikes me as the right
way to go.

I also think we should consider biting the performance bullet and including
a font like Noto (https://code.google.com/p/noto/) on all wikis. Again, not
a decision to undertake lightly, but something we should definitely
consider. It won't solve the problem of individual wikis needing a font for
the content language that is suitable for inputting and editing content, but
it may be good enough to cover the cases of occasional content in
non-primary scripts on most wikis. I filed 
https://bugzilla.wikimedia.org/show_bug.cgi?id=59983 about that, though to
my regret I did so at the height of the ULS drama so my tone was probably
not very inviting to future conversation. But if you can stand to ignore it
and to think the issue through, please do comment.

And finally: I also absolutely agree with Niklas that there remains a large
problem that would not be addressed by either approach, and that a platform
like ULS, backed by sufficient data and some trial-and-error experience,
represents a good approach for solving that.

There is one other thing that I think we should be doing: we should strive
to offer the very best HOWTO on the internet for obtaining and installing
additional fonts for users, so that we empower people to use the internet in
their language, not just on Wikimedia (and MediaWiki) wikis, but across the web.
If something like that already exists and I am simply an ignorant oaf
(which is quite possible), then I'll eat my hat by way of apology :)
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Latest Snowden docs MediaWiki

2014-02-18 Thread Ori Livneh
On Mon, Feb 17, 2014 at 11:18 PM, Philip Neustrom phi...@localwiki.org wrote:

 The latest Snowden docs have some great screenshots of the NSA-internal
 MediaWiki installation Snowden is alleged to have obtained a lot of his
 material from:


 https://firstlook.org/theintercept/article/2014/02/18/snowden-docs-reveal-covert-surveillance-and-pressure-tactics-aimed-at-wikileaks-and-its-supporters/

 Looks like a static HTML dump, as a few of the external extension images
 haven't loaded.

 The last details on their technical infrastructure indicated that Snowden
  used "web crawler" (love the quotes) software to obtain information from
 their internal wiki:


 http://www.nytimes.com/2014/02/09/us/snowden-used-low-cost-tool-to-best-nsa.html?hp

 What's not mentioned in the NYT piece is that their MediaWiki instance
 likely didn't have any read-only ACLs set up, or if they did they were
 buggy (are any of the third-party ACL extensions good?) -- which was
 perhaps one reason why Snowden was able to access the entire site once he
 had any access at all?

  "If you actually need fancy read restrictions to keep some of your own
  people from reading each others' writing, MediaWiki is not the right
  software for you." -brion.

 ..like, if you're a nation-state's intelligence agency, or something :P

 I think it's fascinating that this technical decision[1] by the MediaWiki
 team long ago may have had such an impact on the world!  And much more
 fascinating that the NSA folks may not have read the docs.


There's a good article about this on the Washington Post web site (
http://www.washingtonpost.com/blogs/monkey-cage/wp/2014/02/10/how-the-911-commission-helped-edward-snowden/).
The author argues that the choice of software that facilitates discovery
and collaboration was deliberate, motivated by the 9/11 Commission Report,
which attributed intelligence failures to lack of effective
knowledge-sharing in the intelligence community.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] RFC on PHP profiling

2014-02-18 Thread Ori Livneh
On Tue, Dec 31, 2013 at 9:55 AM, Chad innocentkil...@gmail.com wrote:

 I'm starting a new RFC to discuss ways we can improve our PHP profiling.

 https://www.mediawiki.org/wiki/Requests_for_comment/Better_PHP_profiling

 Please feel free to help expand and/or comment on the talk page if you've
 got ideas :)


Apropos of wfProfileIn/Out:

A tracing infrastructure that relies on active collaboration from
application-level developers in order to function becomes extremely
fragile, and is often broken due to instrumentation bugs or omissions,
therefore violating the ubiquity requirement. This is especially important
in a fast-paced development environment such as ours.

From "Dapper, a Large-Scale Distributed Systems Tracing Infrastructure"
(http://research.google.com/pubs/pub36356.html)

It's a really cool paper. Here's how Dapper makes it possible to instrument
Google's distributed infrastructure:

* When a thread handles a traced control path, Dapper attaches a trace
context to thread-local storage. A trace context is a small and easily
copyable container of span attributes such as trace and span ids.
* When computation is deferred or made asynchronous, most Google developers
use a common control flow library to construct callbacks and schedule them
in a thread pool or other executor. Dapper ensures that all such callbacks
store the trace context of their creator, and this trace context is
associated with the appropriate thread when the callback is invoked. In
this way, the Dapper ids used for trace reconstruction are able to follow
asynchronous control paths transparently.
* Nearly all of Google’s inter-process communication is built around a
single RPC framework with bindings in both C++ and Java. We have
instrumented that framework to define spans around all RPCs. The span and
trace ids are transmitted from client to server for traced RPCs.

(cf pg 4)
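
To illustrate the pattern in miniature, here is a Python sketch of a trace
context kept in thread-local storage and explicitly captured when work is
handed to a thread pool. It is only a toy rendering of the technique
described above, not Dapper itself; the names and the context shape are
made up:

```python
import threading
import uuid
from concurrent.futures import ThreadPoolExecutor

_local = threading.local()  # per-thread slot for the active trace context

def current_context():
    # Return the trace context attached to this thread, if any.
    return getattr(_local, "context", None)

def start_trace():
    # Attach a fresh trace/span id pair to the current thread.
    _local.context = {"trace_id": uuid.uuid4().hex, "span_id": uuid.uuid4().hex}
    return _local.context

def with_context(fn):
    # Capture the creator's trace context now, and re-attach it to whichever
    # worker thread eventually runs the callback.
    ctx = current_context()
    def wrapper(*args, **kwargs):
        _local.context = ctx
        return fn(*args, **kwargs)
    return wrapper

def handle_rpc():
    print("handling RPC in trace", current_context()["trace_id"])

start_trace()
with ThreadPoolExecutor(max_workers=2) as pool:
    # Without with_context(), the worker thread would see no trace context.
    pool.submit(with_context(handle_rpc)).result()
```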
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Core unit tests passing under HHVM 2.4

2014-02-07 Thread Ori Livneh
Core's unit tests are passing on Travis CI under the latest release of
HHVM:
https://travis-ci.org/wikimedia/mediawiki-core/builds/18445085

---
Ori Livneh
o...@wikimedia.org
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Module storage is coming

2014-01-23 Thread Ori Livneh
On Tue, Dec 3, 2013 at 10:02 PM, Roan Kattouw roan.katt...@gmail.com wrote:

 * The drop occurred because ResourceLoaderLanguageDataModule had a bug
 in its mtime computation, causing it to recache all the time; module
 storage greatly dampened the impact of that bug.


This is true, and it was a confounding factor in the analysis. So our
initial results were bupkes.

Aaron and I decided to repeat the experiment, so we re-ran it for a ten-day
period between January 6th and January 16th. The results are more modest
but still significant: module storage reduces load time overall by about
100ms. This effect is consistent between mobile and desktop.

http://meta.wikimedia.org/wiki/File:Module_storage.experiment_2.load_time.geo_mean.by_group.svg

It is now re-enabled across Wikimedia wikis.
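
As an aside, the per-group comparison in the figure above is a geometric
mean of load times. A rough Python sketch of that aggregation (the sample
numbers are invented; the real analysis used the experiment's EventLogging
data):

```python
import math
from collections import defaultdict

def geometric_mean(values):
    # The geometric mean damps heavy-tailed load-time outliers better than
    # the arithmetic mean does.
    return math.exp(sum(math.log(v) for v in values) / len(values))

# (group, load_time_ms) pairs -- invented numbers, just to show the shape.
samples = [
    ("module-storage", 812.0), ("control", 934.0),
    ("module-storage", 1020.0), ("control", 1103.0),
]

by_group = defaultdict(list)
for group, ms in samples:
    by_group[group].append(ms)

for group, values in sorted(by_group.items()):
    print("%s: %.0f ms (n=%d)" % (group, geometric_mean(values), len(values)))
```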
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] MediaWiki-Vagrant can run MediaWiki under HHVM!

2014-01-19 Thread Ori Livneh
Thanks to some fantastic work by Erik Bernhardson[0], MediaWiki-Vagrant can
now run MediaWiki on HHVM. The setup runs HHVM behind Apache using FastCGI,
which is a close match to how we think we'll be running it in production.
It uses the 'hhvm-nightly' packages from Facebook, so you can test
MediaWiki core and extensions against cutting-edge builds.

To switch MediaWiki from PHP to HHVM, simply run 'vagrant enable-role
hhvm', followed by 'vagrant provision'. To switch back to PHP, run 'vagrant
disable-role hhvm' and reprovision.

Please try it out and FILE BUGS for any issues you encounter. This includes
not only provisioning failures (which should be reported under the
MediaWiki-Vagrant product in Bugzilla) but also any instances of PHP code
breaking under HHVM. There is now an 'hhvm' keyword in Bugzilla you can use
to tag your report.

Three cheers for Erik B., and for Facebook's Paul Tarjan, whose recent
packaging work makes this possible.

 [0]: https://gerrit.wikimedia.org/r/#/c/105834/

---
Ori Livneh
o...@wikimedia.org
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki-Vagrant can run MediaWiki under HHVM!

2014-01-19 Thread Ori Livneh
On Sun, Jan 19, 2014 at 5:39 PM, Niklas Laxström
niklas.laxst...@gmail.com wrote:

 Do you know where I can find these hhvm-nightly packages if I want to
 try them out on my own?

 Last time I tested hhvm on translatewiki.net, there were fastcgi
 parameter passing problems which blocked further testing there.
   -Niklas


It's part of the package repository provided by Facebook @
http://dl.hhvm.com/ubuntu/
If you're running Precise you can simply add this entry to sources.list:

deb http://dl.hhvm.com/ubuntu precise main

I also forgot to mention in my previous e-mail that if you aren't sure
which interpreter is running, you can simply check under "Installed
software" (or its localized equivalent) in Special:Version. HHVM appears as
'5.4.999-hiphop (srv)'.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MacOS (OSX) developers / tech people with mac needed for huggle packaging

2013-12-10 Thread Ori Livneh
On Tue, Dec 10, 2013 at 8:14 AM, Petr Bena benap...@gmail.com wrote:

 Hi,

 Huggle 3 is slowly getting near to its first release, and I have already
 set up build environments for early beta versions: one for Windows on
 one of my own Windows boxes (using NSIS and MinGW), which I use to
 release beta version packages on SourceForge, and another one for Linux
 using Launchpad.

 So that both Windows and Linux users can easily get and install huggle
 packages with no need to understand how compiling works or any need to
 resolve any dependencies themselves. [1]

 Unfortunately, we have no such thing for MacOS, not just because
 neither I nor any other current Huggle dev owns a Mac, but also
 because there is no free Launchpad-like service for Macs that I know of.

 So, if any of you have enough experience with packaging software
 for Macs and want to help with Huggle packaging for MacOS, let us
 know so that we can set up a build process for MacOS users as well.


I hacked together a Homebrew formula: https://gist.github.com/atdt/7894375

But:

 brew install --HEAD huggle3
Warning: Your Xcode (4.6.3) is outdated
Please update to Xcode 5.0.1.
Xcode can be updated from the App Store.
==> Cloning https://github.com/huggle/huggle3-qt-lx.git
Updating /Library/Caches/Homebrew/huggle3--git
==> ./configure --disable-silent-rules
--prefix=/usr/local/Cellar/huggle3/HEAD
==> make
exception.cpp: In instantiation of ‘std::basic_ostream<_CharT, _Traits>&
std::operator<<(std::basic_ostream<_CharT, _Traits>&, const
std::basic_string<_CharT, _Traits, _Alloc>&) [with _CharT = char, _Traits =
std::char_traits<char>, _Alloc = std::allocator<char>]’:
exception.cpp:17:   instantiated from here
exception.cpp:17: error: explicit instantiation of
‘std::basic_ostream<_CharT, _Traits>&
std::operator<<(std::basic_ostream<_CharT, _Traits>&, const
std::basic_string<_CharT, _Traits, _Alloc>&) [with _CharT = char, _Traits =
std::char_traits<char>, _Alloc = std::allocator<char>]’ but no definition
available
make: *** [exception.o] Error 1
make: *** Waiting for unfinished jobs
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Module storage is coming

2013-12-09 Thread Ori Livneh
On Mon, Dec 9, 2013 at 2:58 PM, Ryan Kaldari rkald...@wikimedia.org wrote:

 I am somewhat concerned about the implications for JS debugging here.
 Debugging JS problems with the live sites is already pretty complicated:
 1. debug=true won't reproduce some bugs (usually race condition related)


Yeah, debug mode sucks. I think we need to think it over.


  2. Sometimes old code gets cached with the new cache-busting timestamp
  (due to a race condition between bits and apache at deployment)


Fixed in https://gerrit.wikimedia.org/r/#/c/90541/ and 
https://gerrit.wikimedia.org/r/#/c/90541/.


 3. Sometimes the cache servers cache different code for the same URL, i.e.
 one cache server will have one version and another cache server will have
 another


I haven't seen this happen.


 4. In some cases ResourceLoader doesn't change the cache-busting timestamp
 (if you just remove files or add older files to a module)


Timo fixed this in https://gerrit.wikimedia.org/r/#/c/90541/.


 5. You always have to face the 5 Minutes of Doom (how frequently
 ResourceLoader rebuilds the startup manifest)


Well, yes, there's that.


 And that's not even mentioning client-side caching. Shouldn't we work on
 fixing or mitigating some of these issues before we make JS debugging even
 more complicated?


I don't entirely agree. Modules in localStorage are versioned in the same
way URLs are versioned, and to date no bugs with module storage cache
management have been reported, despite the fact that module storage is now
used across all projects.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l
