Re: [Wikitech-l] Skin JS cleanup and jQuery

2009-04-17 Thread Aryeh Gregor
On Thu, Apr 16, 2009 at 6:35 PM, Marco Schuster
ma...@harddisk.is-a-geek.org wrote:
> Are there any plans to use Google Gears for storage on clients? Okay,
> people have to enable it by hand, but it should speed up page loads
> considerably (at least for those who use it).

What, specifically, would be stored in Google Gears?  Would HTML5's
localStorage also be suitable?



Re: [Wikitech-l] Skin JS cleanup and jQuery

2009-04-17 Thread Marco Schuster
On Fri, Apr 17, 2009 at 1:38 PM, Aryeh Gregor
simetrical+wikil...@gmail.com wrote:

> On Thu, Apr 16, 2009 at 6:35 PM, Marco Schuster
> ma...@harddisk.is-a-geek.org wrote:
>> Are there any plans to use Google Gears for storage on clients? Okay,
>> people have to enable it by hand, but it should speed up page loads
>> considerably (at least for those who use it).
>
> What, specifically, would be stored in Google Gears?  Would HTML5's
> localStorage also be suitable?

Isn't GG supposed to be an implementation of localStorage for browsers that
don't support it yet (does any browser support localStorage *now*, btw?)?
What could be stored is JS bits that are unlikely to change that often, e.g.
if Wikipedia is ever going to make a WYSIWYG editor available (Wikia has
one!), its JS files could be cached, same for those tiny little flag icons,
the Wikipedia ball, the background of the page... maybe even some parts of
the sitewide CSS.
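
As a rough illustration, that kind of script caching on top of HTML5
localStorage could look like this (a minimal sketch; the script URL and
cache key are made up):

  // Run a rarely-changing script from localStorage if we have it,
  // otherwise fetch it once and cache it (hypothetical URL/key).
  var key = 'cached-editor-js';
  if (window.localStorage && localStorage.getItem(key)) {
    eval(localStorage.getItem(key));
  } else {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/w/skins/editor.js', false); // synchronous for brevity
    xhr.send(null);
    if (xhr.status == 200) {
      if (window.localStorage) localStorage.setItem(key, xhr.responseText);
      eval(xhr.responseText);
    }
  }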

Actually, it could be expanded to store whole articles (then simply copy
over or enhance
http://code.google.com/intl/de-DE/apis/gears/articles/gearsmonkey.html - I'm
going to modify it for the German Wikipedia when I've got some time).


Marco


-- 
VMSoft GbR
Nabburger Str. 15
81737 München
Geschäftsführer: Marco Schuster, Volker Hemmert
http://vmsoft-gbr.de

Re: [Wikitech-l] Skin JS cleanup and jQuery

2009-04-17 Thread Aryeh Gregor
On Fri, Apr 17, 2009 at 3:11 PM, Marco Schuster
ma...@harddisk.is-a-geek.org wrote:
> Isn't GG supposed to be an implementation of localStorage for browsers that
> don't support it yet

I don't think Gears uses the localStorage API.  It seems to use its
own APIs.  But I've never used either, to be fair.

> (does any browser support localStorage *now*, btw?)?

IE8 does, albeit maybe with a few quirks.  I'm pretty sure the most
recent Safari does, although Google is unhelpful on this point.
Firefox 3.5 betas do.  I don't know about Opera.
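
(For what it's worth, the usual way to find out at runtime is a small
feature test -- a sketch, since behaviour across these browsers varies:)

  // Returns true if the browser exposes a usable localStorage object.
  function hasLocalStorage() {
    try {
      return 'localStorage' in window && window.localStorage !== null;
    } catch (e) {
      return false; // some browsers throw when storage is disabled
    }
  }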

> What could be stored is JS bits that are unlikely to change that often, e.g.
> if Wikipedia is ever going to make a WYSIWYG editor available (Wikia has
> one!), its JS files could be cached, same for those tiny little flag icons,
> the Wikipedia ball, the background of the page... maybe even some parts of
> the sitewide CSS.

All of those things should already be cached by clients.  On stock
Firefox 3, the only things my browser actually sends requests for
(checking using the Firebug Net tab) on a typical page view are the
page itself and images specific to that page.

> Actually, it could be expanded to store whole articles (then simply copy
> over or enhance
> http://code.google.com/intl/de-DE/apis/gears/articles/gearsmonkey.html - I'm
> going to modify it for the German Wikipedia when I've got some time).

That would be unreliable.  The article might have changed, so you'd
have to do an HTTP request anyway to get the 304.  And in that case,
again, the browser will have the HTML page cached already.
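
(To make that concrete: the revalidation an offline store would have to do
looks roughly like this -- a sketch; the URL and the stored timestamp
variable are hypothetical:)

  // Ask the server whether our stored copy of an article is still current.
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/wiki/Some_article', false);
  xhr.setRequestHeader('If-Modified-Since', cachedTimestamp);
  xhr.send(null);
  if (xhr.status == 304) {
    // stored copy is still current; nothing to download
  } else if (xhr.status == 200) {
    // article changed; replace the stored copy with xhr.responseText
  }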



Re: [Wikitech-l] Skin JS cleanup and jQuery

2009-04-17 Thread Brion Vibber
On 4/16/09 3:35 PM, Marco Schuster wrote:
> Are there any plans to use Google Gears for storage on clients? Okay,
> people have to enable it by hand, but it should speed up page loads
> considerably (at least for those who use it).

For those not familiar with it, Google Gears provides a few distinct 
capabilities to client-side JavaScript code. Equivalents of these 
features are being standardized in HTML 5 / WHATWG work, and some of 
them are already available in some production browsers without 
installing a separate extension.

(Note that the first usage of Gears services on a site requires user 
interaction -- the user must click through a permission dialog -- so 
while you can make use of them for 'progressive enhancement' you can't 
do so transparently. The same isn't necessarily true of browsers 
implementing them natively.)
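
(A sketch of what that click-through gating looks like in code, assuming
the standard gears_init.js shim is loaded; the site name string is our
own choice:)

  // Gears features are only usable after the user grants permission.
  if (window.google && google.gears) {
    if (google.gears.factory.getPermission('Wikipedia')) {
      // OK to create Gears objects via google.gears.factory.create(...)
    }
  }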


* Caching static files locally under application control ('LocalServer')

Most of the time not a huge win over simply setting decent caching 
headers. Main advantage is if you want to provide an offline mode for 
your application, you're more likely to actually have the resources you 
need since you can pre-fetch them and control expiration.

Note there has been some experimental work on hacking some offline 
viewing/editing with Gears into MediaWiki:
http://wiki.yobi.be/wiki/Mediawiki_LocalServer

but a really full implementation would be hard to hack into our 
architecture.
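
For reference, the basic LocalServer usage is small -- a sketch, with a
made-up store name and manifest URL:

  // Capture everything listed in a manifest so it's served locally.
  var localServer = google.gears.factory.create('beta.localserver');
  var store = localServer.createManagedStore('mediawiki-static');
  store.manifestUrl = '/w/gears-manifest.json';
  store.checkForUpdate(); // fetches and caches the manifest's entries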


* Client-side SQLite database

Local database storage can be useful for various things like local edit 
drafts, storage of data for offline viewing, etc.

Note that anything stored client-side is *not* automatically replicated 
to other browsers, so it's not always a good choice for user-specific 
data since people may hop between multiple computers/devices/browsers.
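
A sketch of what a local draft store might look like (the table layout and
the title/wikitext variables are made up):

  // Stash an edit draft in the client-side SQLite database.
  var db = google.gears.factory.create('beta.database');
  db.open('mediawiki-drafts');
  db.execute('CREATE TABLE IF NOT EXISTS draft (title TEXT, body TEXT)');
  db.execute('INSERT INTO draft VALUES (?, ?)', [title, wikitext]);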


* Background JavaScript worker threads

Not super high-priority for our largely client-server site. Can be 
useful if you're doing some heavy work in JS, though, since you can have 
it run in the background without freezing the user interface.
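
Roughly like this (a sketch; doExpensiveThing stands in for whatever the
heavy work actually is):

  // Hand heavy work to a Gears worker so the UI stays responsive.
  var wp = google.gears.factory.create('beta.workerpool');
  wp.onmessage = function (text, senderId) {
    // result arrives here without blocking the page
  };
  var workerId = wp.createWorker(
    'var wp = google.gears.workerPool;' +
    'wp.onmessage = function (text, senderId) {' +
    '  wp.sendMessage(doExpensiveThing(text), senderId);' +
    '};'
  );
  wp.sendMessage('input data', workerId);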


* Geolocation services

Also available in a standardized form in upcoming Firefox 3.5. Could be 
useful for geographic-based search ('show me interesting articles on 
places near me') and 'social'-type things like letting people know about 
local meetups (like the experimental 'geonotice' that's been running 
sometimes on the watchlist page).
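
The standardized API is pleasantly small -- a sketch of a 'near me' lookup:

  // Ask the browser for a position fix, then search around it.
  if (navigator.geolocation) {
    navigator.geolocation.getCurrentPosition(function (pos) {
      var lat = pos.coords.latitude, lon = pos.coords.longitude;
      // e.g. query for articles with coordinates near (lat, lon)
    });
  }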

-- brion



Re: [Wikitech-l] Dealing with Large Files when attempting a wikipedia database download - Focus upon Bittorrent List

2009-04-17 Thread Jameson Scanlon
Two separate sites indicate potential sources of torrents for *.tar.gz
downloads of the English Wikipedia database material:

http://en.wikipedia.org/wiki/Wikipedia_database and
http://meta.wikimedia.org/wiki/Data_dumps#What_about_bittorrent.3F
(so far).

Can anyone point to more comprehensive lists of torrents/trackers than
these?  Are there any plans for all the database download files to be made
available in this way?  (I imagine there would also be some PDF manual to
go along with these, explaining offline viewing and potentially more.)
J


On 4/15/09, Petr Kadlec petr.kad...@gmail.com wrote:
> 2009/4/14 Platonides platoni...@gmail.com:
>> IMHO the benefits of separate files are similar to the disadvantages. A
>> side benefit would be that the hashes would be split, too. If you were
>> unlucky, knowing that 'something' (perhaps just a bit) in the 150GB you
>> downloaded is wrong is not that helpful. So having hashes for file
>> sections on the big ones, even if not 'standard', would be an
>> improvement.
>
> For that, something like Parchive would probably be better…
>
> -- [[cs:User:Mormegil | Petr Kadlec]]




Re: [Wikitech-l] Dealing with Large Files when attempting a wikipedia database download - Focus upon Bittorrent List

2009-04-17 Thread Chad
On Fri, Apr 17, 2009 at 5:55 PM, Jameson Scanlon
jameson.scan...@googlemail.com wrote:
> Two separate sites indicate potential sources of torrents for *.tar.gz
> downloads of the English Wikipedia database material:
>
> http://en.wikipedia.org/wiki/Wikipedia_database and
> http://meta.wikimedia.org/wiki/Data_dumps#What_about_bittorrent.3F
> (so far).
>
> Can anyone point to more comprehensive lists of torrents/trackers than
> these?  Are there any plans for all the database download files to be made
> available in this way?  (I imagine there would also be some PDF manual to
> go along with these, explaining offline viewing and potentially more.)
> J


> On 4/15/09, Petr Kadlec petr.kad...@gmail.com wrote:
>> 2009/4/14 Platonides platoni...@gmail.com:
>>> IMHO the benefits of separate files are similar to the disadvantages. A
>>> side benefit would be that the hashes would be split, too. If you were
>>> unlucky, knowing that 'something' (perhaps just a bit) in the 150GB you
>>> downloaded is wrong is not that helpful. So having hashes for file
>>> sections on the big ones, even if not 'standard', would be an
>>> improvement.
>>
>> For that, something like Parchive would probably be better…
>>
>> -- [[cs:User:Mormegil | Petr Kadlec]]



I seem to remember there being a discussion about the torrenting issue
before. In short: there have never been any official torrents, and the
unofficial ones never got really popular.

-Chad


Re: [Wikitech-l] Dealing with Large Files when attempting a wikipedia database download - Focus upon Bittorrent List

2009-04-17 Thread Gregory Maxwell
On Fri, Apr 17, 2009 at 6:10 PM, Chad innocentkil...@gmail.com wrote:
> I seem to remember there being a discussion about the torrenting issue
> before. In short: there have never been any official torrents, and the
> unofficial ones never got really popular.

BitTorrent isn't a very good transfer method for things that are not
fairly popular, as it has a fair amount of overhead.

The Wikimedia download site should be able to saturate your internet
connection in any case…


Re: [Wikitech-l] Dealing with Large Files when attempting a wikipedia database download - Focus upon Bittorrent List

2009-04-17 Thread Stig Meireles Johansen
On Fri, Apr 17, 2009 at 7:39 PM, Gregory Maxwell gmaxw...@gmail.com wrote:

> BitTorrent isn't a very good transfer method for things that are not
> fairly popular, as it has a fair amount of overhead.
>
> The Wikimedia download site should be able to saturate your internet
> connection in any case…


But some ISPs throttle TCP connections (either by design or by simple
oversubscription and random packet drops), so many small connections *can*
yield a better result for the end user. And if you are so unlucky as to
have a crappy connection from your country to the download site, maybe,
just maybe, someone in your own country has already downloaded it and is
willing to share the torrent... :)

I can saturate my little 1M ADSL link with torrent downloads, but forget
about getting that throughput over a single HTTP request... if the server
is in the country, in close proximity, and willing, then *maybe*... but
otherwise... no way.

Not everyone is very well connected, unfortunately...

/Stigmj


Re: [Wikitech-l] Dealing with Large Files when attempting a wikipedia database download - Focus upon Bittorrent List

2009-04-17 Thread Gregory Maxwell
On Fri, Apr 17, 2009 at 9:21 PM, Stig Meireles Johansen
sti...@gmail.com wrote:
> But some ISPs throttle TCP connections (either by design or by simple
> oversubscription and random packet drops), so many small connections *can*
> yield a better result for the end user. And if you are so unlucky as to
> have a crappy connection from your country to the download site, maybe,
> just maybe, someone in your own country has already downloaded it and is
> willing to share the torrent... :)
>
> I can saturate my little 1M ADSL link with torrent downloads, but forget
> about getting that throughput over a single HTTP request... if the server
> is in the country, in close proximity, and willing, then *maybe*... but
> otherwise... no way.

There are plenty of downloading tools that will use range requests to
download a single file with parallel connections…

But if you are running parallel connections to avoid slowdowns, you're
just attempting to cheat TCP congestion control and get an unfair
share of the available bandwidth.  That kind of selfish behaviour
fuels non-neutral behaviour and ought not to be encouraged.

We offered torrents in the past for the Commons Picture of the Year
results -- a more popular thing to download, a much smaller file (~500 MB
vs. many GB), and not something that becomes outdated every month… and
pretty much no one stayed connected long enough for anyone else to manage
to pull anything from them. It was an interesting experiment, but it
indicated that further use for these sorts of files would be a waste of
time.


Re: [Wikitech-l] Skin JS cleanup and jQuery

2009-04-17 Thread Marco Schuster
On Fri, Apr 17, 2009 at 11:42 PM, Brion Vibber br...@wikimedia.org wrote:

> * Background JavaScript worker threads
>
> Not super high-priority for our largely client-server site. Can be
> useful if you're doing some heavy work in JS, though, since you can have
> it run in the background without freezing the user interface.


You mean... stuff like bots written in JavaScript, using the XML API?
I could also imagine sending mails via Special:Emailuser in the background
to reach multiple recipients - that's a PITA to do by hand right now.
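
(For instance, a read-only request against the API might look like this --
a sketch; the particular query parameters are just illustrative:)

  // Fetch the timestamp of the latest revision of a page via api.php.
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/w/api.php?action=query&titles=Main%20Page' +
                  '&prop=revisions&rvprop=timestamp&format=xml', false);
  xhr.send(null);
  var doc = xhr.responseXML; // revision data arrives as XML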


> * Geolocation services
>
> Also available in a standardized form in upcoming Firefox 3.5. Could be
> useful for geographic-based search ('show me interesting articles on
> places near me') and 'social'-type things like letting people know about
> local meetups (like the experimental 'geonotice' that's been running
> sometimes on the watchlist page).

That sounds kinda interesting, even if the accuracy on non-GPS-enabled
devices isn't that high... can this be combined with the OSM integration
in any way?

Marco

-- 
VMSoft GbR
Nabburger Str. 15
81737 München
Geschäftsführer: Marco Schuster, Volker Hemmert
http://vmsoft-gbr.de