On Mon, 19 Sep 2016 at 10:38 Greg Keogh <gfke...@gmail.com> wrote:

>> I had an argument internally that caching was good, with the alternate
>> side saying that “cache invalidation” was hard so they never use it.
> I think it is "hard" but don't write it off completely. Search for "second
> level cache" and you'll see it's not that trivial to use properly. Some
> ORMs have it as an optional feature. You've got to consider what to cache,
> eviction or expiry policy, concurrency, capacity, etc. I implemented simple
> caching in a server app a long time ago, then about a year later I put
> performance counters into the code and discovered that in live use the
> cache was usually going empty before it was accessed, so it was mostly
> ineffective. Luckily I could tweak it into working. So caching is great,
> but be careful -- *GK*

I'd argue caching is a good idea so long as it is not a substitute for good
performance optimisation as you go.
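To make the trade-offs concrete, here is a minimal sketch of the moving parts GK lists (what to cache, an expiry policy, capacity, eviction). It is illustrative only: the class name and numbers are invented, and a real second-level cache would also need concurrency control and smarter eviction (e.g. LRU).

```python
import time

class TTLCache:
    """Toy cache with per-entry expiry and a capacity bound.

    Not thread-safe; a production cache would add locking and
    a proper eviction strategy.
    """

    def __init__(self, capacity=128, ttl_seconds=60.0):
        self.capacity = capacity
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            # Expired: evict lazily on read.
            del self._store[key]
            return None
        return value

    def put(self, key, value):
        if len(self._store) >= self.capacity and key not in self._store:
            # Naive eviction: drop the entry closest to expiry.
            oldest = min(self._store, key=lambda k: self._store[k][1])
            del self._store[oldest]
        self._store[key] = (value, time.monotonic() + self.ttl)
```

Even this toy version shows GK's point: the hard decisions (TTL length, capacity, which entry to evict, what happens under concurrent writes) all live outside the happy path.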

As a general discipline we roll with a rule I call "10x representative data
load" which means we take whatever we think the final system is going to
run with for a data set, load each dev with 10x of that on their
workstations, and make them live that dream.
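For what it's worth, the seeding step for a "10x" rule like this can be as simple as cloning each representative row ten times with fresh keys. This is a hypothetical sketch (the field names and the helper are invented, not David's actual tooling):

```python
def inflate_dataset(rows, factor=10):
    """Return `factor` copies of each row with unique surrogate keys.

    Assumes each row is a dict with an "id" field; both the field
    names and this helper are illustrative.
    """
    inflated = []
    next_id = 1
    for _ in range(factor):
        for row in rows:
            clone = dict(row)
            clone["id"] = next_id          # re-key so copies don't collide
            clone["copy_of"] = row["id"]   # provenance, handy when debugging
            next_id += 1
            inflated.append(clone)
    return inflated
```

The point of the exercise is less the script than the habit: devs feel slow queries on day one instead of discovering them in production.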

The reality is that a bit of planning for optimal indexes as well as
casting an eye over the execution plan after you write each proc isn't a
lot of dev overhead. At least you know that when what you have built rolls
out, it performs as well as it can given other constraints.


David Connors
da...@connors.com | @davidconnors | LinkedIn | +61 417 189 363
