On 11/01/2011 12:47 PM, Michael C Tiernan wrote:
----- Original Message -----
From: "Paul Graydon" <[email protected]>
By his doing this, anyone could quickly narrow down where the problem
lies (and it never seemed to be the same twice) and attack the problem
using clear methods. If you expect early in the process that you'll
have to support your product, you'll make better decisions. Just my
nickel's worth of experience. Thanks for everyone's time.
_______________________________________________
Hell yeah... really should have remembered that one.
We've got something like 80 different web-applications, almost all
developed in house and all bespoke. Only two have anything approaching
tests, and that's only because we employed a QA guy earlier this year
and those are the two projects he's worked on so far.
Outside of hours, that can make my life a little tricky when things break.
I might not know my way around the particular web application all that
well, and trying to establish that everything is fixed and working okay
can be difficult.
Another benefit of automated testing is that you can tie performance
metrics into the tests, so you can catch potential problems before they
become significant. And should something break in a way that's silent to
your monitoring systems, your automated tests should still alert you.
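To sketch what that looks like in practice, here's a minimal performance-aware check in Python. The check function, metric names, and the two-second threshold are all hypothetical illustrations, not anything from a specific suite: the point is that one test run asserts both "it works" and "it's still fast enough".

```python
import time

# Hypothetical latency budget for the check below - tune per application.
SLOW_THRESHOLD_SECS = 2.0

def timed_check(check_fn):
    """Run a functional check and capture how long it took."""
    start = time.monotonic()
    ok = check_fn()
    elapsed = time.monotonic() - start
    return ok, elapsed

def login_page_renders():
    # In a real suite this would fetch the app's login page and verify
    # the expected content; stubbed out here so the sketch is runnable.
    return True

ok, elapsed = timed_check(login_page_renders)
assert ok, "functional check failed - something broke silently"
assert elapsed < SLOW_THRESHOLD_SECS, "check passed but is getting slow"
```

Run on a schedule (not just at release time), the second assertion catches the slow creep that a pass/fail monitor never sees.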
If you tie the test data into deployments, you may be able to spot
whether certain problems are down to the latest code release and whether
something ought to be investigated further. Shopzilla (
http://devopscafe.org/show/2010/7/19/open-mic-episode-1-video.html) and
Etsy (
http://codeascraft.etsy.com/2011/02/15/measure-anything-measure-everything/
and http://codeascraft.etsy.com/2010/12/08/track-every-release/ ) seem
to be fairly flagship companies for such approaches.
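In the spirit of Etsy's "track every release" post, one simple way to tie the two together is to emit a deploy marker metric alongside the test timings, so regressions line up visually with releases. This is only a sketch of Graphite's plaintext protocol; the host, port, and metric names are assumptions to adapt to your own setup.

```python
import socket
import time

# Hypothetical Graphite endpoint - replace with your own.
GRAPHITE_HOST, GRAPHITE_PORT = "graphite.example.com", 2003

def graphite_line(metric, value, ts=None):
    """Format one datapoint in Graphite's plaintext protocol:
    '<metric.path> <value> <unix_timestamp>\\n'."""
    ts = int(ts if ts is not None else time.time())
    return f"{metric} {value} {ts}\n"

def record_deploy(release_id):
    """Emit a deploy marker; overlay these on your test-timing graphs
    to see whether a regression coincides with a release."""
    line = graphite_line(f"deploys.webapp.{release_id}", 1)
    with socket.create_connection((GRAPHITE_HOST, GRAPHITE_PORT),
                                  timeout=5) as sock:
        sock.sendall(line.encode("ascii"))
```

Call record_deploy() from the deploy script, and suddenly "did the latest release cause this?" is a question you can answer from a graph.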
Paul
_______________________________________________
Tech mailing list
[email protected]
https://lists.lopsa.org/cgi-bin/mailman/listinfo/tech
This list provided by the League of Professional System Administrators
http://lopsa.org/