Hoi,
Sergey, you may also want to pay a visit to the folks at
http://translatewiki.net. This is where the internationalisation and
localisation effort for MediaWiki, its extensions and all the rest is
concentrated. In the past translatewiki.net has been instrumental in
bringing best practices to the internationalisation of MW extensions. It is
likely that when best practices for JS and CSS become apparent, it will
play this role again.
Thanks,
      GerardM

2009/2/25 Sergey Chernyshev <sergey.chernys...@gmail.com>

> As mentioned already, I'm not sure localization is the best candidate for
> being held in JavaScript, but the other things mentioned, e.g. a single
> request for minified and infinitely cached JS, are what I'm looking at for
> the overall MW infrastructure. So far this is a big performance problem
> for MW. The waterfall diagrams I posted for the infinite image cache also
> show the main issue that I've been trying to attack for a while and might
> need more help with: too many JS and CSS requests are made to the server
> by MediaWiki:
>
> http://performance.webpagetest.org:8080/result/090218_132826127ab7f254499631e3e688b24b/1/details/
>
> Only the 18th request is the first image to be loaded, and as you can see,
> the JavaScript loads are blocking, meaning no parallel loading is happening.
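>
> A minimal sketch of what such a combined, cacheable response could look
> like (hypothetical combine.php endpoint, illustrative file names, and
> minification left out for brevity; this is not actual MediaWiki code):
>
> <?php
> // combine.php (hypothetical): serve several JS files as one concatenated,
> // gzipped response with a far-future expiry, so the browser makes a single
> // request and caches it "forever"; cache busting would be done by changing
> // the URL (e.g. a version number in the query string).
> $files = array( 'wikibits.js', 'ajax.js', 'mwsuggest.js' ); // illustrative
> $body = '';
> foreach ( $files as $f ) {
>     $path = dirname( __FILE__ ) . '/skins/common/' . $f;
>     $body .= file_get_contents( $path ) . "\n;";
> }
> header( 'Content-Type: text/javascript; charset=UTF-8' );
> header( 'Cache-Control: public, max-age=31536000' ); // about one year
> header( 'Expires: ' . gmdate( 'D, d M Y H:i:s', time() + 31536000 ) . ' GMT' );
> if ( isset( $_SERVER['HTTP_ACCEPT_ENCODING'] ) &&
>     strpos( $_SERVER['HTTP_ACCEPT_ENCODING'], 'gzip' ) !== false ) {
>     header( 'Content-Encoding: gzip' );
>     $body = gzencode( $body, 9 );
> }
> echo $body;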
>
> I think it's worth investing resources into creating some process for
> better handling of this. Right now it's possible to cut this down by
> configuring MediaWiki not to use user scripts and stylesheets and by
> manually combining the JS and CSS files for the skin, the Ajax framework
> and everything needed by extensions. I did quite a lot of this for
> specific installations, but it seems to need a more systematic approach.
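>
> As an illustration, a LocalSettings.php fragment along these lines drops
> the per-user and per-site script and stylesheet requests once their
> contents have been merged into the hand-combined files (which of these a
> wiki can really afford to turn off is of course site-specific):
>
> # Appended to LocalSettings.php; these settings exist in MediaWiki, but
> # whether they can all be disabled depends on the installation.
> $wgAllowUserJs  = false;  # no per-user User:Foo/monobook.js request
> $wgAllowUserCss = false;  # no per-user User:Foo/monobook.css request
> $wgUseSiteJs    = false;  # skip the MediaWiki:Common.js request
> $wgUseSiteCss   = false;  # skip the site-wide CSS request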
>
> The good news is that MW already has some wrappers for style and script
> insertion that extensions use to refer to external files. It's a little
> less fortunate with the script loading sequence (e.g. ideally scripts
> would be loaded only after the rest of the page has loaded), but that
> might be a much bigger challenge.
>
> It's also worth mentioning that reducing the amount of PHP that handles
> JavaScript and CSS is a good idea, as serving static resources is much
> easier than starting up a full-blown PHP engine, even with opcode and
> variable caches.
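>
> A sketch of that idea (hypothetical build script, not part of MediaWiki):
> generate the combined file once at deploy time and put it under the
> document root, so the web server serves it as a plain static file with no
> PHP in the request path:
>
> <?php
> // build-static-js.php (hypothetical): concatenate the scripts once and
> // write the result under the web root; the hash in the file name lets it
> // be cached forever and busted whenever the contents change.
> $files = array( 'skins/common/wikibits.js', 'skins/common/ajax.js' );
> $bundle = '';
> foreach ( $files as $f ) {
>     $bundle .= file_get_contents( $f ) . "\n;";
> }
> $out = 'static/combined-' . substr( md5( $bundle ), 0, 8 ) . '.js';
> file_put_contents( $out, $bundle );
> echo "Wrote $out\n";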
>
> I think there is a way to reduce the start-render delay as well as the
> overall loading time, and very likely to save some traffic, by attacking
> the front end, and I will be happy to participate more in this.
>
> How do we go about doing this? Can it be tied into the Usability project
> (http://usability.wikimedia.org/)?
>
> Thank you,
>
>          Sergey
>
>
> --
> Sergey Chernyshev
> http://www.sergeychernyshev.com/
>
>
> On Fri, Feb 20, 2009 at 8:07 PM, Gregory Maxwell <gmaxw...@gmail.com>
> wrote:
>
> > On Fri, Feb 20, 2009 at 5:51 PM, Brion Vibber <br...@wikimedia.org> wrote:
> > [snip]
> > > On the other hand we don't want to delay those interactions; it's
> > > probably cheaper to load 15 messages in one chunk after showing the
> > > wizard rather than waiting until each tab click to load them 5 at a
> > > time.
> > >
> > > But that can be up to the individual component how to arrange its
> > > loads...
> >
> > Right. It's important to keep in mind that in most cases the user is
> > *latency bound*. That is to say that the RTT between them and the
> > datacenter is the primary determining factor in the load time, not how
> > much data is sent.
> >
> > Latency determines the connection time, and it also influences how
> > quickly rwin can grow and get you out of slow start. When you send more
> > at once you'll also be sending more of it with a larger rwin.
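> >
> > A rough back-of-the-envelope sketch of that effect, with illustrative
> > numbers only (ignoring loss, parallel connections and window caps):
> >
> > <?php
> > // Under slow start the window roughly doubles each round trip, so the
> > // total time is dominated by the number of RTTs, not by the byte count.
> > function roundTripsToSend( $bytes, $initialSegments = 3, $mss = 1460 ) {
> >     $window = $initialSegments * $mss;
> >     $sent = 0;
> >     $rtts = 0;
> >     while ( $sent < $bytes ) {
> >         $sent += $window;
> >         $window *= 2; // slow-start doubling
> >         $rtts++;
> >     }
> >     return $rtts;
> > }
> > // One 30 KB response vs. three 10 KB responses fetched one after another
> > // (script loads block), each new connection paying ~1 RTT to establish.
> > echo roundTripsToSend( 30000 ) + 1, " RTTs for one combined response\n"; // 4
> > echo 3 * ( roundTripsToSend( 10000 ) + 1 ), " RTTs for three separate\n"; // 9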
> >
> > So in terms of user experience you'll usually improve results by
> > sending more data if doing so is able to save you a second request.
> >
> > Even ignoring the user's experience, connections aren't free. There is
> > byte overhead in establishing a connection, byte overhead in lost
> > compression from working with smaller objects, byte overhead in having
> > more partially filled IP packets, CPU overhead from processing more
> > connections, and so on.
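> >
> > The compression point is easy to demonstrate with arbitrary sample text
> > (numbers will vary, but the shape of the effect is the same):
> >
> > <?php
> > // gzip finds more redundancy in one large body than in many small ones,
> > // and every small response also repeats the gzip header and trailer.
> > $messages = array();
> > for ( $i = 0; $i < 50; $i++ ) {
> >     $messages[] = "This interface message number $i repeats similar wording.";
> > }
> > $separate = 0;
> > foreach ( $messages as $m ) {
> >     $separate += strlen( gzencode( $m, 9 ) ); // each compressed on its own
> > }
> > $combined = strlen( gzencode( implode( "\n", $messages ), 9 ) );
> > echo "separate: $separate bytes, combined: $combined bytes\n";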
> >
> > Obviously there is a line to be drawn: you wouldn't improve performance
> > by sending the whole of Wikipedia on the first request. But you will
> > most likely not be conserving *anything* by avoiding sending another
> > kilobyte of compressed user interface text for an application a user has
> > already invoked, even if only a few percent use the additional messages.
> >
_______________________________________________
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l
