>
>
> Let's start by saying that these optimizations are on the far end of 
> your - high-end - project, right where you want to optimize down to the 
> last bit to get < 200ms load times. This helps everyone focus on the 
> fact that these operations need to be considered at that stage and that 
> stage only: doing lots of work to cut from 500ms to 450ms is not going 
> to make your app "speedy" at all; use your time to tune everything BUT 
> static files.


Well the title does say "optimization". Nothing here is mandatory.

I do think that these are more than "guidelines for high-end projects" 
though: it's a matter of best practices, which will help any project 
achieve top-grade responsiveness without requiring a lot of research...

SEO-wise, your website's Google PageSpeed score does affect your 
visibility on the search engine, so I really don't think performance work 
is wasted on any public website.
 

> - static assets from 3rd party: you can minify your own bundle and 
> upload it to a CDN or use publicly available CDNs. Publicly available 
> CDNs are FAR MORE reliable than your own, with one pro being the fact 
> that the user would probably not need to download the resource (it's 
> probably in the browser's cache already). The con here is that your 
> bundle (think, e.g., jquery.js + moment.js) isn't available as a single 
> file in publicly available CDNs.
>

Interesting point: using public CDNs for open-source resources vs. 
bundling them with your own code?

I will have to disagree on public CDNs being more reliable than private 
ones. It's just mathematically wrong.
Public CDNs like cdnjs, for instance, will give you unstable bandwidth 
depending on the load they get, which you can't measure beforehand.
Owning your own high-end CDN guarantees constant bandwidth.

In terms of load speed, it's usually better to have one optimised bundle 
than to load multiple resources.
In terms of project management, it's also easier to manage one bundle 
than to handle vendor & private code separately.

If you don't want to take my word for it, just have a look at the big 
websites around: they all resort to bundling.
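For what it's worth, the build step doesn't need a heavy toolchain. Here's 
a minimal Python sketch (file names are hypothetical) that concatenates 
vendor & private files into one bundle and fingerprints the result:

```python
import hashlib


def bundle(paths, out_path):
    """Concatenate several JS/CSS files into a single bundle file and
    return an md5-based fingerprint of the result (for cache busting)."""
    parts = []
    for path in paths:
        with open(path, 'rb') as f:
            parts.append(f.read())
    blob = b'\n'.join(parts)
    with open(out_path, 'wb') as f:
        f.write(blob)
    return hashlib.md5(blob).hexdigest()[:12]


# hypothetical usage:
# digest = bundle(['static/js/jquery.js', 'static/js/moment.js',
#                  'static/js/app.js'], 'static/dist/main.js')
```

A real setup would also minify each part before concatenating; this only 
shows the bundling & fingerprinting idea.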
 

> - dynamic assets: again, minification and bundling to a CDN are not 
> really web2py's job; at most a job for a script. Use whatever you'd like
>

Minification & bundling are not web2py's job, but versioning & caching 
should be. 
 

> - dynamic images: if you're going to serve them a lot, don't compress 
> on-the-fly. Either compress at first access and serve the cached result 
> on subsequent accesses, or compress with an async task.
>

The easy answer here is to compress & save to a file (for instance on 
Amazon S3). That would work just fine for most projects.

I'm working on an alternative solution these days: argument-based, 
on-the-fly image pre-processing behind a cache proxy.

For instance, a request to 
"http://website.com/download/picture.png?width=200&height=300" could be 
processed on the fly and served behind a CDN for caching. The CDN would 
ensure that you process this image at this size only once, negating the 
CPU overhead. This kind of structure is more flexible than fixed-size files.
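To illustrate the resize step (a sketch only, assuming Pillow is 
available; the controller wiring in the comment is hypothetical, not 
web2py API):

```python
from io import BytesIO

from PIL import Image


def resize_image(data, width, height):
    """Resize raw image bytes to width x height and return PNG bytes."""
    img = Image.open(BytesIO(data))
    # LANCZOS resampling gives good quality when downscaling
    img = img.resize((width, height), Image.LANCZOS)
    out = BytesIO()
    img.save(out, format='PNG')
    return out.getvalue()


# hypothetical controller wiring:
# def download():
#     w, h = int(request.vars.width), int(request.vars.height)
#     return resize_image(open(filepath, 'rb').read(), w, h)
```

The CDN in front would then cache each (path, width, height) combination, 
so this function runs once per size.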
 

> - html minification: I'd really like to see a gzipped response (which 
> is the 90% gain) compared to a minified-gzipped response (which would 
> be the 10% gain). I don't see it in the wild and frankly I wouldn't 
> spend CPU on it. Just gzip it
>

Most of the gain does come from compression. Minification... will depend 
on how you structure your code, I guess.

In my case, minification only gained roughly 1KB after compression, so 
nothing fancy. I have no figures on the CPU overhead though, but I'd be 
interested if anyone has them.
 

> - cache headers: use @cache.action: it's specifically coded for it
>

Yes & no.
In Python, we tend to think explicit > implicit.

@cache.action is a sweet helper, but it does everything implicitly so you 
don't really understand or control what you're doing.

Practically speaking, I had to stray from it because it lacks the 
"Access-Control-Allow-Origin" header, which is mandatory for CORS 
management.
It also doesn't set the "Last-Modified" header, which is important if you 
want to leverage browser-side caching (304 responses).
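For the record, here's roughly how I ended up setting those headers 
explicitly (a sketch; `cache_headers` is my own helper name, and in 
web2py you'd pass it `response.headers`):

```python
from email.utils import formatdate


def cache_headers(headers, mtime, max_age=3600, origin='*'):
    """Set explicit caching & CORS headers on a dict of response headers.
    mtime is a Unix timestamp, e.g. os.path.getmtime(filepath)."""
    headers['Cache-Control'] = 'public, max-age=%d' % max_age
    # RFC 7231 HTTP-date; lets browsers revalidate and get 304 responses
    headers['Last-Modified'] = formatdate(mtime, usegmt=True)
    # the CORS header @cache.action doesn't provide
    headers['Access-Control-Allow-Origin'] = origin
    return headers
```

Explicit, and you see exactly which headers go out.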
 

> - web2py's versioning system: it's hardly "even close to blablabla". 
> web2py's versioning system is specifically engineered to work with CDNs 
> and upstream proxies. 
>
> On the last point, I have yet to see a simpler develop-to-prod 
> deployment.
> Probably it's you not grasping it; the docs feel quite clear... you 
> develop whatever you need, you create your main.js and main.css with 
> whatever build system you'd like, leave the files in the static folder 
> (e.g. /static/css/main.css, /static/js/main.js), and you put in models
>
> response.static_version_urls = True
> response.static_version = '0.0.1'
>
> and voilà: the first time a user accesses your page, the upstream proxy 
> will fetch the resource ONCE and serve it FOREVER.
> Need to correct a small issue with your main.css? Edit it, save it over 
> /static/css/main.css, change 
>
> response.static_version = '0.0.2'
>
> and presto, the upstream proxy is forced to request the new file ONCE and 
> serve it FOREVER.
>
>
Sorry if my words seemed a bit harsh there. I know you're a web2py 
contributor and you like the system you contributed to build.

It seems to me that the current trend is checksum-based versioning, which 
allows deployment systems (like grunt, gulp or Django's collectstatic) to 
build a manifest mapping each file to a unique filename.

Let me explain why this system is better than plain, three-digit 
versioning:

If you have 2 files in your project (main.css & main.js) and just make a 
simple change to main.css, plain versioning requires that you bump your 
version (something like response.static_version = '0.0.2'), which means 
that, to access main.js, users will be directed to 
http://project.com/static/0.0.2/main.js, thus losing their browser-side 
cache even though no change was made to main.js.

A checksum-based versioning will not alter the version if no change was 
made to the file.
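A minimal sketch of what I mean by checksum-based naming (the helper name 
is mine; the behaviour mimics what Django's ManifestStaticFilesStorage 
does when building its manifest):

```python
import hashlib
import os


def checksum_name(path):
    """Content-hashed filename: main.css -> main.<md5 prefix>.css.
    The name changes only when the file's content changes."""
    with open(path, 'rb') as f:
        digest = hashlib.md5(f.read()).hexdigest()[:12]
    root, ext = os.path.splitext(path)
    return '%s.%s%s' % (root, digest, ext)
```

An unchanged main.js keeps its name, so users keep their cached copy no 
matter how many times you edit main.css.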

In terms of project management, having to manually bump your static file 
version can lead to mistakes, whereas deployment-based automatic 
versioning is more reliable.
I don't deny the educational value of versioning things yourself, but for 
medium-sized projects it just won't do.

I think it would be sweet if web2py could adapt to the existing bundling 
systems out there.
Here's how I managed to use a manifest-based system in web2py:

import json
import os

def manifest_staticfiles_helper(path):
    """Resolve a static path through a hashed-filename manifest
    (staticfiles.json), falling back to the plain static URL."""

    def load_manifest():
        manifest_path = os.path.join(request.folder, 'static', 'dist',
                                     'staticfiles.json')
        if os.path.isfile(manifest_path):
            with open(manifest_path, 'r') as f:
                return json.load(f)

    # cache the parsed manifest for an hour to avoid re-reading the file
    manifest = cache.ram('staticfiles_manifest', load_manifest,
                         time_expire=3600)

    if isinstance(manifest, dict) and path in manifest['paths']:
        versioned_filepath = manifest['paths'][path]
        if os.path.isfile(os.path.join(request.folder, 'static', 'dist',
                                       versioned_filepath)):
            return URL(c='static', f='dist', args=versioned_filepath)

    return URL('static', path)

STATIC = manifest_staticfiles_helper

Then, I just use STATIC('css/main.css') in my views.
A bit crude but it works fine for me :)

-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.