I think the problem with newer browsers is caused by their speculative
parsing feature. The speculative parser scans the page content with a
fast, simplified parser (before or while the real DOM is being built)
and tries to preload the URLs it finds in the markup. So it will
download the big images in your example. To avoid this, a lazy-load
solution must not write the real URLs of the big images into the
markup at all. In general, speculative parsing / preloading helps to
improve the rendering time of the page, but it can even cause requests
to entirely wrong URLs.
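For instance, a minimal sketch of that idea (the data-src attribute
name and the blank.gif placeholder are my own choices for
illustration, not taken from any particular library):

<img src="blank.gif" data-src="big-image.jpg" alt="">

<script>
// The real URL lives in data-src, so the speculative parser,
// which only sees the raw markup, has nothing big to preload.
function loadVisibleImages() {
  var viewportHeight = window.innerHeight ||
                       document.documentElement.clientHeight;
  var images = document.getElementsByTagName('img');
  for (var i = 0; i < images.length; i++) {
    var img = images[i];
    var realSrc = img.getAttribute('data-src');
    // Swap the URL in only once the image enters the viewport;
    // the download starts at this assignment, not at parse time.
    if (realSrc && img.getBoundingClientRect().top < viewportHeight) {
      img.src = realSrc;
      img.removeAttribute('data-src');
    }
  }
}
window.onscroll = loadVisibleImages; // re-check on every scroll
window.onload = loadVisibleImages;   // and once after the initial load
</script>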
Cheers,
Konstantin
On 10.06.2011 14:41, tibolan wrote:
Hi,
I used to use scripts like LazyLoad.js
(http://www.appelsiini.net/projects/lazyload), but, as the author
says, it is no longer usable. This script changes the src of the image
to a blank.gif if the image is not visible yet (still below the fold)
and listens to the page scroll to refresh the DOM.
In the old days, doing this stopped the download of the image and
could save many bytes during the load period. But now this trick no
longer works, and the image keeps downloading until it is fully
loaded...
My questions:
1- Why do all browsers do the same thing now? Is there a specification
about that?
2- Do you have a clue or an idea to solve this problem in modern
browsers, without resorting to really ugly hacks?
On a French news site (nouvelobs.fr) I saw a rudimentary technique
that sets the src directly to a blank.gif and stores the real src in
the className of the image. The script then just has to replace the
src with the className content when necessary (a rough sketch of what
I mean follows below). I'm looking for something cleaner than that,
something that keeps the HTML "well-formed".
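The trick amounts to roughly this (my own reconstruction, not the
site's actual code):

<img src="blank.gif" class="http://example.com/photo.jpg">

<script>
// When the image should appear, copy the URL out of the
// (abused) class attribute into src and clear the class.
function reveal(img) {
  if (img.className) {
    img.src = img.className; // download starts here
    img.className = '';
  }
}
</script>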
Thanks for your feedback!