>
>
>>
> Interesting point: using public CDNs for open-source resources vs 
> bundling them with your own code?
>
> I will have to disagree on public CDNs being more reliable than private 
> ones. It's just mathematically wrong.
> Public CDNs like cdnjs, for instance, will give you unstable bandwidth, 
> depending on the load they get, which you can't measure beforehand.
> Owning your own high-end CDN guarantees you a constant bandwidth.
>

This is true only if you use a public CDN which isn't used by anyone else, 
which kinda invalidates the whole "public CDN" idea: their main value is 
that in 99% of cases the user doesn't have to download anything at all, 
because the file is already in the browser cache. 
Are you really stating that your own CDN serving jQuery is better 
than https://code.jquery.com/jquery-2.1.4.min.js, both for SLA and speed? 
 

>
> In terms of load speed, it's usually better to have one optimised bundle 
> than having to load multiple resources.
> In terms of project management, it's also easier to manage one bundle than 
> handling vendor & private code separately.
>
> If you don't want to take my word for it, just have a look at the big 
> websites around: they all resort to bundling.
>

Again, this was covered multiple times, even in this forum, and if you're 
in the 10-15 files ballpark, serving 2 bundles vs 15 files from a public 
CDN improves the load time by less than 20 ms. Bonus points for CDNs 
because they don't waste YOUR bandwidth, nor the users'...
 

>  
>
>> - dynamic assets: again, minification and bundling to a CDN are not 
>> really web2py's job but at most for a script. Use whatever you'd like
>>
>
> Minification & bundling are not web2py's job, but versioning & caching 
> should be. 
>  
>
>> - dynamic images: if you're going to serve them a lot, don't compress 
>> on-the-fly. Compress either at first-access, then serve at nth access, or 
>> compress with an async task.
>>
>
> The easy answer here is to compress & save into a file (for instance on 
> Amazon S3). That would work just fine for most projects.
>
> I'm working on an alternative solution these days: argument-based, 
> on-the-fly image pre-processing with a cache proxy.
>
> For instance, a request to "
> http://website.com/download/picture.png?width=200&height=300" could be 
> processed on the fly and served behind a CDN for caching. The CDN would 
> ensure that you process this image for this size only once, negating the 
> CPU overhead. This kind of structure is more flexible than fixed-size files.
>

You can also use a SaaS for it; see https://www.resrc.it/.
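The "process each size only once" guarantee boils down to deriving a stable cache key per (file, width, height) combination. A minimal sketch of that piece in plain Python; the function name and key scheme are my own assumptions, not web2py or resrc.it API:

```python
import hashlib

def variant_key(path, width, height):
    """Derive a stable cache key for one (file, width, height) variant,
    so an upstream CDN/proxy caches each requested size exactly once.
    Hypothetical helper for illustration; a real setup would also
    validate/whitelist the requested dimensions."""
    raw = '%s?width=%d&height=%d' % (path, width, height)
    return hashlib.md5(raw.encode('utf-8')).hexdigest()

# Same arguments -> same key -> CDN cache hit; a new size -> a new key.
```

On a cache miss, the controller would then resize (e.g. with Pillow) and store the result under that key.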
 

>  
>
>> - html minification: I'd really like to see a gzipped response (which is 
>> the 90% gain) compared with a minified-then-gzipped response (which would 
>> be the 10% gain). I don't see it in the wild and frankly I wouldn't spend 
>> CPU on it. Just gzip it
>>
>
> Most of the gain does come from compression. Minification... will depend 
> on how you structure your code, I guess.
>
> In my case, minification saved roughly 1 KB after compression, so 
> nothing fancy. I have no figures on the CPU overhead though, but I'd be 
> interested if anyone has them.
>

Glad we agree on its utter uselessness.
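Since nobody in the thread has figures, here is a quick way to measure the minify-then-gzip delta yourself. The sample page and the naive whitespace "minifier" are assumptions for illustration; real minifiers do more, but the shape of the result is the same:

```python
import gzip
import re

# A deliberately repetitive sample page (assumption, for illustration).
html = ('<html>\n  <body>\n'
        + '    <p>Hello, world!</p>\n' * 200
        + '  </body>\n</html>\n')

# Naive "minification": strip whitespace between tags.
minified = re.sub(r'>\s+<', '><', html).strip()

plain_gz = len(gzip.compress(html.encode('utf-8')))
mini_gz = len(gzip.compress(minified.encode('utf-8')))
print('gzip only: %d bytes, minify+gzip: %d bytes' % (plain_gz, mini_gz))
```

On repetitive markup like this, gzip alone already removes most of the redundancy, which is exactly the "90% vs 10%" split discussed above.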
 

>  
>
>> - cache headers: use @cache.action: it's specifically coded for it
>>
>
> Yes & no.
> In Python, we tend to think explicit > implicit.
>
> @cache.action is a sweet helper, but it does everything implicitly, so you 
> don't really understand or control what you're doing.
>
> Practically speaking, I had to stray from it because it lacks the 
> "Access-Control-Allow-Origin" header, which is mandatory for CORS 
> management.
>

Not every page needs to be available for CORS. The fact that everything 
else works out of the box is, IMHO, a big win.
 

> It also doesn't set the "Last-Modified" header which is important if you 
> want to leverage browser-side caching (304 responses).
>

The browser doesn't even think to re-request something if it has been 
served with an Expires header set in the future, so I really don't see the 
point of leveraging Last-Modified (or even ETag, for that matter).
 

>  
>
>> - web2py's versioning system: it's hardly "even close to blablabla". 
>> web2py's versioning system is specifically engineered to work with CDNs and 
>> upstream proxies. 
>>
>> On the last point, I have yet to see a simpler develop-to-prod 
>> deployment.
>> Probably it's you not grasping it; the docs seem quite clear... you 
>> develop whatever you need, you create your main.js and main.css with 
>> whatever build system you'd like, leave the files in the static folder 
>> (e.g. /static/css/main.css, /static/js/main.js), you put in your models
>>
>> response.static_version_urls = True
>> response.static_version = '0.0.1'
>>
>> and voilà, the first time a user accesses your page, the upstream 
>> proxy will fetch the resource ONCE and serve it FOREVER.
>> Need to correct a small issue with your main.css? Edit it, save it over 
>> /static/css/main.css, change 
>>
>> response.static_version = '0.0.2'
>>
>> and presto, the upstream proxy is forced to request the new file ONCE and 
>> serve it FOREVER.
>>
>>
> Sorry if my words seemed a bit harsh there. I know you're a web2py 
> contributor and you like the system you helped build.
>
> It seems to me that the current trend is checksum-based versioning, which 
> allows deployment systems (like grunt, gulp or Django's collectstatic) to 
> build a manifest mapping each file to a unique filename.
>
> Let me explain why this system is better than plain, 3-digit versioning:
>
> If you have 2 files in your project (main.css & main.js) and just make a 
> simple change to main.css, plain versioning would require that you bump 
> your version (something like response.static_version = '0.0.2'), which 
> means that, to access main.js, users will be directed to 
> http://project.com/static/0.0.2/main.js, thus losing their browser-side 
> cache even though no change was made to main.js.
>
> A checksum-based versioning will not alter the version if no change was 
> made to the file, and this could save you a lot of bandwidth.
>
>
Do you have figures? I'd like to see those. 

BTW: nothing is stopping you from rebuilding your response.files with 
YYYYMMDD.HH.MM.SS (the way I use it) at the moment you deploy them, 
inspecting each file's modification date.
Using static_version "rebuilds" all URLs automatically, but it's perfectly 
fine to serve static files from "different" versions altogether.

response.files.append(URL('static', 'css/_20150916.08.00.00/main.css'))
response.files.append(URL('static', 'js/_20120916.08.00.00/main.js'))


-- 
Resources:
- http://web2py.com
- http://web2py.com/book (Documentation)
- http://github.com/web2py/web2py (Source code)
- https://code.google.com/p/web2py/issues/list (Report Issues)
--- 
You received this message because you are subscribed to the Google Groups 
"web2py-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.
