I was recently forced to move all my files to my iBook and make it my main computer while I figure out what to do with my old iMac G4, whose LCD is dying. While doing so, I came across a file I had saved, and I thought it worth posting to the list in light of the open beta of 2.7.5. Read slowly:

Subject: Should Testers Be Allowed to Do Their Job?
From: [EMAIL PROTECTED] (Edward V. Berard)
Date: Tue, 01 Aug 1995 07:35:23 -0400

Should testers be allowed to do their job? Before I can answer that, I need to define a few terms:

- To slightly paraphrase Glenford Myers (author of "The Art of Software Testing"), testing is the process of examining some material with the intention (or goal) of finding errors. The "material" can include the material produced during the development part of the software life-cycle (e.g., the products of the analysis, design, and coding efforts), and the specifications for the test cases themselves.

- Testing is a comparison process. Specifically, it compares one set of material (e.g., object code) with some correspondingly related material (e.g., the specification for that object code). Symptoms of errors are detected when what is expressed in one set of material does not agree with what is expressed in its corresponding material. (It is not the job of the tester to find the exact source of the error. That is the job of the creator (developer) of the material.)

- To be considered minimally adequate, the testing effort would have to detect the symptoms of all critical errors, and the symptoms of most, if not all, other errors.

I propose that the job of the tester is to do at least minimally adequate testing, as defined above.
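
[An aside from me, not part of Ed's original message: the "comparison process" above can be made concrete with a small, hypothetical sketch in Python. The specification values, the clamp_rate function, and the chosen cases are all invented for illustration; the point is only that each test case pairs an input with the output the specification calls for, and that the purpose of running it is to surface a disagreement, not to prove the code correct.]

    def clamp_rate(requested_bpm):
        # Implementation under test: clamp the requested pacing rate to the
        # (hypothetical) specified range of 30..180 bpm.
        return max(30, min(180, requested_bpm))

    # Each test case pairs an input with the result the written specification
    # calls for; running them compares actual output against that spec.
    SPEC_CASES = [
        (-5, 30),     # below range: spec says clamp up to the 30 bpm floor
        (30, 30),     # lower boundary
        (72, 72),     # nominal value should pass through unchanged
        (180, 180),   # upper boundary
        (500, 180),   # above range: spec says clamp down to the 180 bpm ceiling
    ]

    def test_clamp_rate():
        for requested, expected in SPEC_CASES:
            actual = clamp_rate(requested)
            # A failing assertion is a symptom of an error: the two sets of
            # material (code and specification) do not agree.
            assert actual == expected, (
                "clamp_rate(%r) returned %r, but the specification says %r"
                % (requested, actual, expected)
            )

    if __name__ == "__main__":
        test_clamp_rate()
        print("these cases found no disagreement (which proves nothing)")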

Consider the following (unfortunately true) story. A few years ago, I was teaching a testing course at a firm that made implantable medical devices, e.g., pacemakers. Modern pacemakers not only contain software, but they are also programmed, monitored, and adjusted by systems that contain software. The people in the class were charged with testing the software associated with pacemaker systems.

- During the class, the students were required to specify test cases for software. Part of any test case is the purpose for that test case. One of the students insisted that "the purpose of test case xyz is to _prove_ that some particular function is accomplished." I informed this person that, while this may have been the goal of the developer of the software, the goal of the tester was exactly the opposite: to show where the function is _not_ accomplished.

- Later, in the same class, one of the testers noted that developers "felt bad" when they were informed of errors in the software products for which the developers were responsible. The tester said that she felt it was part of her job to provide some encouragement to the developers. Other testers admitted that they were often uncomfortable with being "the bearers of bad news" (i.e., reports of errors).

I was astonished. Was the ego of the developers more important than the health of the patients using the implanted medical devices? Were testers also required to be "psychological counselors" for the developers? Were both the testers and the developers aware of the negative impact on testing created by the reluctance of the testers to "make the developers feel bad"?

Should testers be allowed to do their job? Based on my experience over the past 15 years, the answer to that question in most shops is: "up to a point."

- Managers, developers, and testers often cannot distinguish between a _social_ situation and a _business_ situation. In a _social_ context, I would agree that people must seriously consider whether it is worthwhile to point out "problems." Specifically, one must weigh the feelings of the individuals involved against any "improvements" that might result from bringing a problem to light. In a _business_ situation, the rules change. The parties to a business arrangement are supposed to be aware of its terms and conditions, and are required to act accordingly.

One should _expect_ testers to find symptoms of errors. That is their job. Further, testers that routinely do not uncover symptoms of serious errors are not performing their jobs successfully.

When a tester informs a developer of the symptoms of the errors he or she has uncovered, the tester is _not_ making a statement about the basic goodness or badness of the developer as a human being. The tester _is_ making a statement about some material for which the developer is responsible.

- The time, staff, and other resources that are allocated to developers are often woefully inadequate. However, the time, staff, and resources that are allocated to the testing effort are, by and large, a joke. In all my years of training and consulting, I have run across fewer than three organizations that had anything even approaching an adequate assignment of resources to the testing effort.

It is almost as if the managers know this. When testers point out errors, that "slows down" the development effort. Inadequate testing resources result in fewer errors being reported and, consequently, "faster" development.

- Many organizations make no secret of the fact that they consider developers to be more important than testers (or, for that matter, software quality assurance personnel, maintenance programmers, and others). I have had more than one manager inform me that "development requires true creativity, whereas testing does not require any special brilliance." The developers are often required to have university degrees, while many testers are non-degreed. Developers routinely receive more training, CASE tools, and other resources than do the testers. (At one of the public testing courses that I recently taught, one tester had a hard time believing that CASE tools for testers even existed.)

- There is also a prevailing attitude that a separate testing group is "a luxury." Many managers (and developers) reason that, in addition to creating the software product, developers can also adequately test that product. This line of reasoning is very wrong. Consider:

- Developers seldom have enough time to design and implement the product, much less to test the product. (In fact, one of the most common reasons for the creation of a separate testing group is "to relieve the pressure on the developers.")

- Developers are seldom trained in software testing techniques. This often results in largely unfocused, hit-or-miss testing efforts. (When developers attend a software testing course, they are often astonished by the amount of work required for even minimal testing.)

- Of course, developers suffer from all the problems of human beings who are asked to evaluate their own efforts, e.g., lack of objectivity, psychological blindness (to their own errors), and lack of perspective.

Given the attitude that a separate testing group is a luxury, it is quite normal to deny resources to this group, or to cut it out entirely when budget constraints loom large.

It is very sad to see a group of people who are given an assignment, and are then prevented from carrying out that assignment to the fullest of their abilities.

-- Ed

---------------------------------------------------------------------
Edward V. Berard                | Voice: (301) 548-9090
The Object Agency, Inc.         | Fax: (301) 548-9094
101 Lakeforest Blvd., Suite 380 | E-Mail: [EMAIL PROTECTED]
Gaithersburg, Maryland 20877

Shalom, Andrew
{Choose Life, Create Hope, Nurture Love...}


