On Mon, 23 Jun 2003, Dann Corbit wrote:

> > -----Original Message-----
> > From: scott.marlowe [mailto:[EMAIL PROTECTED]
> > Sent: Monday, June 23, 2003 12:25 PM
> > To: Dann Corbit
> > Cc: Bruce Momjian; Tom Lane; Jason Earl; PostgreSQL-development
> > Subject: Re: [HACKERS] Two weeks to feature freeze
> >
> > On Mon, 23 Jun 2003, Dann Corbit wrote:
> >
> > > Vendor A: "We think our tool is pretty solid and our end users
> > > hardly ever turn up any bugs."
> > >
> > > Vendor B: "We think our tool is pretty solid and our 8500 tests
> > > currently show only 3 defects with the released version, and these
> > > are low-impact issues. To view our current database of issues, log
> > > onto web form <page>."
> > >
> > > Which tool would you prefer to install?
> >
> > The one I've tested and found to meet my needs, both now and by
> > providing fixes when I needed it.
How about the one that doesn't run tests in order to show how much
better it is than the competition, but to actually test operation? In
other words, Vendor B has an interest in having the tests pass; what
gives you the confidence that it hasn't just left out the ones that
fail, and that the tests that do pass are not just testing something
Vendor B wants to show it can do?

> > Real world example: We run Crystal Reports Enterprise edition where
> > I work. It's tested thoroughly (supposedly) and has all kinds of QA.
> > However, getting it to work right and stay up is a nightmare. It's
> > taken them almost a year to get around to testing against the
> > OpenLDAP LDAP server we use. The box said "LDAP V3 compliant" and
> > they assured us that it was. Well, it doesn't work with our LDAP V3
> > compliant LDAP server at all, and the problem is something they
> > can't fix for months because it doesn't fit into their test cycle.
> >
> > Real world example: PostgreSQL aggregates in subselects. Someone
> > found a bug in subselects in PostgreSQL with inner references to
> > outer aggregates. The PostgreSQL team delivered a patch in less
> > than a week. User tested it and it works.
> >
> > I'm not against testing and all, but as one of the many beta testers
> > for PostgreSQL, I do feel a bit insulted by your attitude that only
> > a cohesive, organized testing effort can result in a reliable
> > product.
>
> Let me rephrase it:
> "Only a cohesive, organized testing effort can result in a product
> that is proven reliable."
>
> Without such an effort, it is only an educated guess as to whether the
> product is reliable or not. The data is the most valuable software
> component in an organization. It is worth more than the hardware and
> it is worth more than the software. If you are going to trust one
> billion dollars' worth of corporate data to a software system, you
> ought to ensure that the system has been carefully tested.
> I don't think that is just an opinion. It's simply common sense.

So you've never worked on a project where the data is of high value,
since in those circumstances the customer is always going to apply their
own acceptance testing anyway. If you think that doesn't happen, try
sitting through two solid days of Y2K testing on _one_ system and then
tell me customers never do their own testing.

> Therefore, I am going to stop harping on it.

But there is no need to: as has been mentioned before, if the testing is
not up to your standard, submit something that makes it so. Having said
that, I believe you mentioned that you didn't have the time to create
something, but that you would be happy to test it, i.e. test the test.

-- 
Nigel J. Andrews

---------------------------(end of broadcast)---------------------------
TIP 8: explain analyze is your friend