While on the topic of databases...
I made a flight booking via the Altitude points system yesterday and it failed. Gave me a number to call during business hours. Turns out only the return flight was booked, but nothing was charged. That's not very atomic, hey? 😊

Hehe, love that dial-up DB connection idea...

________________________________
From: ozdotnet-boun...@ozdotnet.com <ozdotnet-boun...@ozdotnet.com> on behalf of Greg Low (罗格雷格博士) <g...@greglow.com>
Sent: Monday, 19 September 2016 11:06:05 AM
To: ozDotNet
Subject: RE: Entity Framework - the lay of the land

I remember many years ago, connecting the devs to the DB via a dial-up 64kB modem. Worked wonders for the code that came back. Suddenly they noticed every call.

Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419201410 mobile | +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of David Connors
Sent: Monday, 19 September 2016 12:34 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Entity Framework - the lay of the land

On Mon, 19 Sep 2016 at 10:38 Greg Keogh <gfke...@gmail.com> wrote:

>> I had an argument internally that caching was good, with the alternate side saying that "cache invalidation" was hard so they never use it.
>
> I think it is "hard", but don't write it off completely. Search for "second level cache" and you'll see it's not trivial to use properly. Some ORMs have it as an optional feature. You've got to consider what to cache, the eviction or expiry policy, concurrency, capacity, etc.
>
> I implemented simple caching in a server app a long time ago, then about a year later I put performance counters into the code and discovered that in live use the cache was usually going empty before it was accessed, so it was mostly ineffective. Luckily I could tweak it into working. So caching is great, but be careful -- GK

I'd argue caching is a good idea so long as it is not a substitute for good performance optimisation as you go.

As a general discipline we roll with a rule I call "10x representative data load", which means we take whatever we think the final system is going to run with for a data set, load each dev with 10x of that on their workstations, and make them live that dream.

The reality is that a bit of planning for optimal indexes, as well as casting an eye over the execution plan after you write each proc, isn't a lot of dev overhead. At least you know that when what you have built rolls out, it performs as well as it can given other constraints.

David.

--
David Connors
da...@connors.com | @davidconnors | LinkedIn | +61 417 189 363
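To make the "not very atomic" complaint at the top of the thread concrete: the usual fix is to put the outbound leg, the return leg, and the charge inside one database transaction, so any failure rolls the whole lot back. A minimal EF6-style sketch of that idea, where BookingsContext, Booking, and BookReturnTrip are hypothetical names, not anything from Altitude's actual system:

using System;
using System.Data.Entity;  // EF6

public class Booking
{
    public int Id { get; set; }
    public string FlightNumber { get; set; }
}

public class BookingsContext : DbContext
{
    public DbSet<Booking> Bookings { get; set; }
}

public static class BookingService
{
    public static void BookReturnTrip(string outboundFlight, string returnFlight)
    {
        using (var db = new BookingsContext())
        using (var tx = db.Database.BeginTransaction())
        {
            try
            {
                db.Bookings.Add(new Booking { FlightNumber = outboundFlight });
                db.SaveChanges();

                db.Bookings.Add(new Booking { FlightNumber = returnFlight });
                db.SaveChanges();

                // the card charge would go here, inside the same boundary

                tx.Commit();    // both legs (and the charge) succeed together...
            }
            catch
            {
                tx.Rollback();  // ...or none of it does: no half-booked trips
                throw;
            }
        }
    }
}

A single SaveChanges call is already atomic on its own; the explicit transaction only starts to matter once the work spans several SaveChanges calls or side steps like the payment charge.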
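On GK's list of things to consider (what to cache, expiry policy, concurrency, capacity): the simplest defensible starting point is a read-through cache with a short absolute expiry, so stale entries age out on their own instead of needing explicit invalidation. A rough sketch using System.Runtime.Caching.MemoryCache (thread-safe, built into .NET 4+); the Customer type and LoadCustomerFromDb are placeholders for the real data access:

using System;
using System.Runtime.Caching;

public class Customer
{
    public int Id { get; set; }
}

public static class CustomerCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static Customer GetCustomer(int id)
    {
        string key = "customer:" + id;

        var cached = Cache.Get(key) as Customer;
        if (cached != null)
            return cached;                      // cache hit

        var customer = LoadCustomerFromDb(id);  // cache miss: go to the DB

        Cache.Set(key, customer, new CacheItemPolicy
        {
            // expiry policy: entries silently vanish after five minutes,
            // which sidesteps the "cache invalidation is hard" problem
            AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(5)
        });

        return customer;
    }

    private static Customer LoadCustomerFromDb(int id)
    {
        // stand-in for the real query
        return new Customer { Id = id };
    }
}

GK's war story is the other half of the lesson: if the expiry is shorter than the typical interval between hits, the cache is always empty when you need it, which is exactly what performance counters on hit/miss rates will show you.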