(Updated 22 Jan 13 at bottom)

A working aid for peter-principled Luddicrats and technolame business managers.

     I began operating and maintaining complex electronic systems when I joined the Army in 1973. I began formally testing them in '79, when I was assigned to the Army's Intel & Security Board on Ft. Huachuca. Since then, I have been involved in countless formal testing activities, including tests of system operations, military communications protocol interoperability, and computer software. At the height of my testing career (before I switched permanently to instructional design & training), I was the system test lead for a $60M software development project. My team produced testing-related deliverables that earned our company about $2.3M of that total and that were the first substantial project deliverables accepted by a very hard-to-please customer.

     By about six months into that project I had begun to hate software and system testing, but even more, I had come to despise stupid people who lack both technical expertise and common sense, but who somehow almost always end up in charge of tech projects. For example, the Show Me (Again) state employee in charge of that $60M project made her secretary print her e-mail, and she would respond by writing notes on the printouts! The secretary would then transcribe the notes into e-mail responses. Many of the state government employees on the project team were technologically challenged, but to be honest, the folks in charge of my company were often equally confused, just in differently aggravating ways. Suffice to say, that is not a time I recall with any fondness or even with the satisfaction of knowing I did something well. Mostly I try not to think too hard about it.

~~

     Next month, the organization I work for now is going to transfer its most critical business-function financial tracking operations from one major software environment to another. Unfortunately, the project is being run mostly by in-house amateurs, so the transition is not following any kind of industry-standard plan. For example, they are not going to run both systems in parallel for a while to ensure the new system performs at least as well as the old one, and that the data collected and produced by the new system is both valid and accurate. Instead, the current system will be turned off on the last day of this month (to coincide neatly with the end of the fiscal year) and the new system will be turned on a month and a half or so later—if it works. If it doesn't (and many of us have little confidence it will), the plan is to keep the system off-line until it's fixed. Really.

     In the interim, all transactions are to be processed manually, using e-mail or paper files. Paper and electronic transaction records are to be maintained by case managers as they process customer and management requests pertaining to their portfolios. We have fewer than 200 primary customers, but we orchestrate several billion dollars' worth of training conducted at hundreds of locations around the world for thousands of students a year. To be fair, it isn't like running a retail sales operation; however, case managers typically process scores of requests a day for schedule and location changes, resource allocation, course reservations and cancellations, payment authorizations, and so on. What the work lacks in volume, it makes up for in complexity.

     To make matters worse, the case managers are going to be trained to use the new system during the down-time interim, so not only will the job be harder to do, there will be less time to do it. Add to this that we're often in meetings or traveling; that our customers can be difficult and demanding; and that sometimes our customers' futures (and even their lives) depend on what we do, and it quickly adds up to a prescription for chaos and debacle.

     But wait, there's more: When the new system is turned on, the manually processed transactions are to be posted by the case managers. In other words, for at least a month, scores of people are going to fill out paper forms and take notes pertaining to complicated transactions, then they are going to transcribe the notes and forms into the new system to 'catch it up'—all while trying to become proficient with the new system and doing the normal day's work. . . .

     Frankly, most of us are trying not to think too hard about it.

~~

     With that sad tale in mind, I decided to post an update to something I wrote in 2002 that was based on my collection of testing-related rules and guidelines, and on more than 20 years of experience as a system tester and trainer. If you are involved in system development of any kind, especially as a manager, you really need to know and understand the Rules of Testing. You also need to know the differences between test types, and especially what each kind of test is used for. Knowing this stuff will make you a better tech-project manager, which should result in a happier, more productive team and better products. If not, then at the very least, you'll have a greater appreciation of how screwed up your project is likely to be in the end.
 

System and Software Testing
 


     Every new software system or application must be tested during its development cycle to ensure it meets design requirements and performance specifications. To that end, a number of different tests are conducted by programmers, quality assurance teams, system test teams, and end-users. Test types are grouped into general and specific categories. General tests usually employ a test 'suite' comprising the specific tests needed to adequately test the system. Descriptions of test types follow the Rules of Testing.

Rules of Testing

1.  All testing should start with the primary question: What don't we know that we need to know that can only be found out by testing? Typically, the primary question can be divided into many levels of subordinate questions, but each sub-question must be derived from, and a refinement of, the question above it.

2.  The only goal of testing should be meaningful results: Test planning should focus on the purposeful collection of useful, accurate data that will help answer the primary question. Tests designed to confirm or discredit a theory, or to prove or disprove anything, are almost always fundamentally biased.

3.  Test design and development should focus on adequacy and suitability: How much, of what kind of testing, is sufficient to answer the primary question?

4.  Tests must be accomplishable (or 'do-able'): Performance objectives must be finite, realistic, reasonable, quantifiable, and measurable.

5.  Test results must be credible: Testers and their tests must have integrity, objectivity, and freedom from bias or external influence.

6.  Test results must be reproducible: Test plans and reports must adequately document the entire test process from development through completion, and the audit trail must be detailed enough to allow other people to recreate the same conditions and produce the same results.

7.  The purpose of testing is not to 'see what happens'; that's what experiments are for. The purpose of testing is to determine whether or not the tested item satisfies the criteria against which it is being tested, and to what extent it does so.

   And above all, remember this:

8.  If a test does not produce enough accurate, valid data to answer the primary question, no amount of data manipulation or report wordsmithing will change that fact. Hedging, quibbling, or prevaricating in test reports is always unethical, as is falsifying data, which is also often illegal.


Test Types (General)

•  Conformance Testing. Conformance tests are used to assess the conformance of a product to its defining specification or standard. Specialized test tools are used to exercise the product, to determine if the proper actions and reactions are produced. During a conformance test, the test tool is normally the only device to which the evaluated item is connected. Successful completion of a conformance test not only ensures the tested item 'meets the standard,' but also increases the probability that it will be able to interoperate with other products that have successfully passed conformance tests. (A minimal sketch of such a check follows this list.)

•  Developmental Testing. Developmental testing is a part of the product development cycle, and is conducted to determine if a product is performing as expected, or to determine the limits or operating parameters of a new product. Developmental testing can also be used to prepare a product for formal operational testing.

•  Interoperability Testing. Interoperability tests are used to assess the degree to which one product can successfully interact with other products of similar design and function. In the information systems arena, interoperability testing is performed to determine if, and how well, a system can exchange usable electronic information with other systems, as specified in its defining operational requirements documents. Specialized test tools are used to exercise and monitor the performance of the tested item to determine if the proper actions and reactions are produced. During an interoperability test, the tested product is normally connected with other products that have already been certified as interoperable at a specified baseline. If a product successfully completes an interoperability test, it is 'certified' as being interoperable with other systems that have passed the same tests.

•  Operational Testing. Operational tests are used to evaluate the performance of a prototype or pre-production version of a product (or of a key component), in an authentic operational environment. Operational tests help an end user determine the effectiveness and suitability of a product, and whether or not a product is performing as expected. Such tests are often designed to identify the operational limits of the new product. Because of the manpower and resources needed to create a realistic, authentic test environment, operational testing is usually conducted in conjunction with a training exercise.

•  Validation Testing. Validation tests are used to ensure that (1) proposed standards or specifications adequately cover system requirements, and (2) accurate standards or specifications are being used as the basis for product development. In the context of validation, accurate standards are those that are internally consistent, complete, and feasible. Validation testing consists of two general phases: static analysis, which addresses item (1), and dynamic analysis, which addresses item (2).
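
     None of the code below comes from a real conformance suite; it is just a minimal sketch, in Python, of the idea behind conformance testing: exercise the item against its defining specification and record every deviation. The field names and rules in SPEC are hypothetical stand-ins for whatever the real standard would require, and in practice this work is done by a dedicated, certified test tool rather than ad hoc code.

    # Minimal conformance-check sketch. SPEC is a hypothetical stand-in for the
    # product's defining specification: each required field maps to its expected
    # type and a validity rule.
    SPEC = {
        "msg_id":   (str, lambda v: len(v) == 8),
        "priority": (int, lambda v: 1 <= v <= 5),
        "payload":  (str, lambda v: len(v) <= 1024),
    }

    def check_conformance(message: dict) -> list:
        """Return a list of deviations from SPEC; an empty list means conformant."""
        findings = []
        for field, (ftype, is_valid) in SPEC.items():
            if field not in message:
                findings.append(f"missing required field: {field}")
            elif not isinstance(message[field], ftype):
                findings.append(f"{field}: expected {ftype.__name__}")
            elif not is_valid(message[field]):
                findings.append(f"{field}: value {message[field]!r} violates the spec")
        return findings

    if __name__ == "__main__":
        sample = {"msg_id": "AB12CD34", "priority": 9, "payload": "hello"}
        for finding in check_conformance(sample) or ["conformant"]:
            print(finding)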


Test Types (Specific)

•  Acceptance Test. Acceptance testing is conducted by customers to determine if the product is acceptable. If the system satisfies the user's requirements, the system is considered 'deliverable,' and can be approved for implementation release.

•  Batch Test. Batch Testing is performed in environments in which repetitive batch transactions, reports, or batch jobs are processed. This type of testing is used to verify that systems function correctly in a batch-processing environment.

•  Build Test. When a specification requires that a new business function be added to a system, the implementation of the new function usually requires the addition of several new subfunctions. The new business function may actually require several programs, the coordination of non-system processes, or the combination of both online and batch programs. The combination of programs and other components is assembled into an integrated function called a 'build.' Each build is tested independently before being combined with other tested builds.

•  Demonstration. Like a test, a demonstration exercises system options, capabilities, or subsystems. The main purpose of a demonstration, however, is to show that a system can successfully perform a specific operation or produce a desired output; the system inputs and processes are secondary to the results. During a demonstration, for example, the system operator will be tasked to perform an action. The processes used to accomplish the action are not as important as the result of the action, and will not be as rigidly controlled as during a test.

•  Function Test. When specifications require a new system function (or the repair of an old function), a function test is required. The function test exercises all of the code and non-programming components required to execute the new function, independently of the whole system.

•  Integration Test. Integration testing is used to test subsystems as part of the whole. Software and hardware elements are combined and tested until the entire system has been integrated. Integration tests are performed as subsystems become available for testing, and are usually conducted by a QA team.

•  Parallel Test. Parallel testing is the process of operating a new system in parallel with the existing system. This is done to demonstrate that the new system functions correctly. The production output of both systems is compared to verify that the results are the same. (A minimal sketch of such a comparison follows this list.)

•  Performance Test. Performance testing is used to ensure that a system performs in accordance with its design specifications.

•  Regression Test. Regression testing verifies that changes to a software application have not had an unexpected impact and that previous functionality is still intact. Regression testing is performed in a strictly controlled environment because it tests total functionality of the system. This testing is usually performed by a QA team.

•  Stress Test. Stress (or Volume) testing is used to determine the degree to which a new system can handle peak and maximum operational loads, as well as the way in which a system responds to an overload. Stress testing is often used to exercise system and organizational recovery plans. A stress test uses a high input volume to ensure that the hardware and software have the ability to handle production-level workloads. (A minimal load-generation sketch follows this list.)

•  System Test. System testing is performed after all subsystems have been fully integrated. It is used to verify the entire system functions in accordance with its defining specification. System testing is usually divided into performance testing and batch testing, and is normally performed by a QA team. Typically, a system test is the last test performed by the developer before the system is turned over to the customer for acceptance and operational testing.

•  Unit Test. Unit testing is the verification of a single program unit or routine. It is intended to identify discrepancies between the unit's logic and its specification. The unit testing process checks for typographic, syntactic, and logical errors. It is performed by the programmer in isolation from other units. The goal of unit testing is to execute all logic branches at least once and as many paths (distinct combinations of branches) as schedule and budget permit. (A minimal example follows this list.)
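
     To make a few of the specific test types more concrete, here are some minimal Python sketches. They are illustrations only, not production test code, and every file name, field name, and function in them is a hypothetical stand-in. First, the heart of a parallel test: run the same production workload through both systems, export the results, and compare them record by record.

    # Minimal parallel-test comparison sketch. The CSV layout and the
    # 'transaction_id' key column are hypothetical; the point is simply that
    # every record produced by the old system is matched against the new one.
    import csv

    def load_records(path, key="transaction_id"):
        """Index a system's exported CSV output by its key column."""
        with open(path, newline="") as f:
            return {row[key]: row for row in csv.DictReader(f)}

    def compare_outputs(old_path, new_path):
        """Return a list of human-readable discrepancies between the two exports."""
        old, new = load_records(old_path), load_records(new_path)
        findings = []
        for key in sorted(old.keys() - new.keys()):
            findings.append(f"{key}: present in old system only")
        for key in sorted(new.keys() - old.keys()):
            findings.append(f"{key}: present in new system only")
        for key in sorted(old.keys() & new.keys()):
            if old[key] != new[key]:
                findings.append(f"{key}: field values differ")
        return findings

    if __name__ == "__main__":
        for finding in compare_outputs("old_system_export.csv", "new_system_export.csv"):
            print(finding)

     Every discrepancy gets investigated before the old system is switched off, which is exactly the step my organization decided to skip.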
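
     Next, a stress test in miniature: generate a transaction volume well past the expected peak and watch throughput and failure counts. The process_transaction function here is an invented placeholder that does a token amount of work; a real stress test drives the deployed hardware and software, not a stub.

    # Minimal stress-test sketch. The transaction generator and the failure rate
    # are invented; a real test would replay or synthesize production-shaped
    # workloads against the actual system and record how it degrades.
    import random
    import time

    def process_transaction(txn):
        """Stand-in for the system under test: token work plus a rare simulated failure."""
        _ = sum(ord(c) for c in str(txn))
        return random.random() > 0.001    # roughly 0.1% simulated failure rate

    def stress(volume):
        """Push 'volume' generated transactions through and report throughput and failures."""
        start, failures = time.perf_counter(), 0
        for i in range(volume):
            txn = {"id": i, "amount": random.randint(1, 10_000)}
            if not process_transaction(txn):
                failures += 1
        elapsed = time.perf_counter() - start
        print(f"{volume:>7} transactions in {elapsed:6.2f}s "
              f"({volume / elapsed:,.0f}/s), {failures} failures")

    if __name__ == "__main__":
        for volume in (1_000, 10_000, 100_000):   # ramp well past expected peak load
            stress(volume)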
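
     Finally, a unit test, using Python's standard unittest module. The late_fee function is an invented example; the point is that every logic branch is exercised at least once, in isolation from the rest of the system, and that each check passes or fails unambiguously, which is the kind of reproducible result the Rules of Testing call for.

    # Minimal unit-test sketch. late_fee is an invented business rule with three
    # branches; each test case below exercises exactly one of them.
    import unittest

    def late_fee(days_overdue):
        """Toy rule: no fee if not overdue, a flat fee up to 30 days, then a daily add-on."""
        if days_overdue <= 0:
            return 0.0
        if days_overdue <= 30:
            return 25.0
        return 25.0 + 1.5 * (days_overdue - 30)

    class LateFeeTests(unittest.TestCase):
        def test_not_overdue(self):              # branch 1
            self.assertEqual(late_fee(0), 0.0)

        def test_within_flat_fee_period(self):   # branch 2
            self.assertEqual(late_fee(30), 25.0)

        def test_past_flat_fee_period(self):     # branch 3
            self.assertEqual(late_fee(40), 40.0)

    if __name__ == "__main__":
        unittest.main()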


     Sometimes, a person doesn't even know what questions to ask when starting a new job. With regard to software and system testing, at least, the descriptions and definitions above are by no means definitive; however, there is enough information here to help even the most clueless neophyte formulate relevant questions. Of course, I would expect that doesn't describe anyone working as a manager on a systems development project, but I'm almost certain that none of the people in charge where I work is more than minimally aware of just how dumb the transition plan really is. In the end, it all could work out alright, but I won't place my money on that outcome. I've worked there too long to be optimistic.

     I'll update this when the dust settles.


Update - 1 Jan 11
     The dust hasn't settled yet, but as of December 30th, the new system was still not on-line, and some of my coworkers fear it will continue to be offline for another month at least. It was supposed to be turned on in mid-November, but unanticipated problems with external financial-system interfaces apparently have prevented that from happening. Imagine that.

Update - 9 Jan 11
     Problems with external financial-system interfaces still exist, supposedly, but case managers can now use the new system, according to a message sent late last week. I do not know if that means it's working properly, but tension and aggravation around the organization seem to have diminished somewhat. Of course, that could just be post-holiday exhaustion, and everyone is just catching their breath. We'll see.

Update - 21 Jan 11
     I was wrong. Very wrong. I overheard a coworker today talking on the phone. He said that the financial side of things is still not 'live' and that up until at least last Friday, it was still not possible to process any records normally, either in batch or interactive mode. He also said he'd heard that during system testing (it was really just pseudo-user testing, not system testing), records had been 'hand massaged' to make them pass through the system, but none were making it through now. I'm not sure what that actually means, but I do know the case managers are still doing their work manually, and that the frustration level is still very high. That's not surprising, really, given the original system was taken offline almost four months ago, and that it doesn't look like it will be back online any time soon. I suppose I should just be glad I'm not in the blame-chain for any of this, but I told more than a few people before this began that it was not being done correctly, and that it would likely be a catastrophe. I would rather have been wrong.

Update - 30 Jan 11
     Today was the day. It was announced early today (after a very false start last Friday) that all the various systems were on-line and available for everyone to use. Soon after that, a fairly steady stream of unhappy-sounding people began visiting the office of the person who had orchestrated the transition (such as it was). Most people are still unsure the system will perform as advertised, and user confidence is almost non-existent, but it's all we've got. Go us!

Update - 22 Jan 13
     It has been almost two years since the system went live. All in all, it has worked surprisingly well; however, much of the data ported over from the old system is inaccessible, incomplete, or suspect. We are often asked to provide data to help with long-term trend analysis, but fairly often, the answer is that we just can't get there from here. And by the way, the person who was most responsible for the transition, and who has the only really deep understanding of how it all fit together, gave two weeks' notice and retired just this month. As Scooby would say, "Ruh Roh."

     I'll update this again if appropriate.
 

Experience is that marvelous thing that enables you to recognize a mistake when you make it again. — F. P. Jones



Notes:

1. System and Software Testing was originally published by me on 04 Sep 02, as an intraoffice information paper. Some of it was my work, and all of it was edited by me in some way, but most of this is from a number of sources, none of which I could identify even when I wrote it. Suffice to say, while I retain whatever rights are legitimately mine, I can probably only claim editorial credit for most of System and Software Testing.

2. The general test types are also applicable to non-software system testing. For example, a radio might need to be air transportable; thus, interoperability, conformance, and operational tests might include test cases to determine if the item is indeed air transportable, per specifications. The basic rules of testing still apply.

3. Luddite: One who opposes technical or technological change. American Heritage Dictionary (electronic version), 3rd Edition (v. 3.6a), © 1994, Softkey International.

4. Peter Principle: The theory that an employee within an organization will advance to his or her level of incompetence and remain there. [after Laurence Johnston Peter (1919-1990)] American Heritage Dictionary.


     Updated 22 Jan 13