IV. SOFTWARE QUALITY ENGINEERING

A. Concepts

Software Quality Engineering (SQE) is a process that evaluates, assesses, and improves the quality of software. Software quality is often defined as the degree to which software meets requirements for reliability, maintainability, transportability, etc., as contrasted with functional, performance, and interface requirements, which are satisfied as a result of software engineering.

Quality must be built into a software product during its development to satisfy the quality requirements established for it. SQE ensures that the process of incorporating quality in the software is done properly, and that the resulting software product meets those quality requirements. The degree of conformance to quality requirements usually must be determined by analysis, while functional requirements are demonstrated by testing. SQE performs a function complementary to software development engineering; their common goal is to ensure that a safe, reliable, quality-engineered software product is developed.

B. Software Qualities

Qualities for which an SQE evaluation is to be done must first be selected, and requirements set for them. Some commonly used qualities are reliability, maintainability, transportability, interoperability, testability, usability, reusability, traceability, sustainability, and efficiency. Some of the key ones are discussed below.

1. Reliability

Hardware reliability is often defined in terms of the Mean-Time-To-Failure (MTTF) of a given set of equipment. An analogous notion is useful for software, although the failure mechanisms are different and the mathematical predictions used for hardware have not yet been usefully applied to software. Software reliability is often defined as the extent to which a program can be expected to perform its intended functions with the required precision over a given period of time.
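As a minimal sketch of the hardware-style notion, MTTF can be estimated from observed inter-failure times, and a time-based reliability figure derived from it under an assumed exponential failure model. The model and the data values are illustrative assumptions; as noted above, such mathematical predictions have not yet been usefully applied to software.

```python
import math

def mttf(interfailure_times):
    """Mean Time To Failure: average of observed times between failures."""
    return sum(interfailure_times) / len(interfailure_times)

def reliability(t, mean_ttf):
    """R(t) = exp(-t / MTTF): probability of failure-free operation over
    [0, t], assuming failures arrive at a constant rate (exponential model,
    an assumption carried over from hardware reliability)."""
    return math.exp(-t / mean_ttf)

hours_between_failures = [200.0, 350.0, 150.0, 300.0]  # illustrative data
m = mttf(hours_between_failures)   # 250.0 hours
print(reliability(100.0, m))       # chance of surviving a 100-hour run
```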
Software reliability engineering is concerned with the detection and correction of errors in the software; even more, it is concerned with techniques to compensate for unknown software errors and for problems in the hardware and data environments in which the software must operate.

2. Maintainability

Software maintainability is defined as the ease of finding and correcting errors in the software. It is analogous to the hardware quality of Mean-Time-To-Repair (MTTR). While there is as yet no way to directly measure or predict software maintainability, there is a significant body of knowledge about software attributes that make software easier to maintain. These include modularity, self (internal) documentation, code readability, and structured coding techniques. These same attributes also improve sustainability, the ability to make improvements to the software.

3. Transportability

Transportability is defined as the ease of transporting a given set of software to a new hardware and/or operating system environment.

4. Interoperability

Software interoperability is the ability of two or more software systems to exchange information and to mutually use the exchanged information.

5. Efficiency

Efficiency is the extent to which software uses minimum hardware resources to perform its functions.

There are many other software qualities. Some of them will not be important to a specific software system, and thus no activities will be performed to assess or improve them. Maximizing some qualities may cause others to be decreased. For example, increasing the efficiency of a piece of software may require writing parts of it in assembly language. This will decrease the transportability and maintainability of the software.

C. Metrics

Metrics are quantitative values, usually computed from the design or code, that measure the quality in question, or some attribute of the software related to that quality.
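A minimal sketch of one such metric computed from code: comment density, an attribute often associated with maintainability through internal documentation. The metric and the comment convention are illustrative choices, not a standard.

```python
def comment_density(source: str, comment_prefix: str = "#") -> float:
    """Fraction of non-blank lines that are comments: a simple,
    code-derived metric related to internal documentation."""
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    comments = sum(1 for ln in lines if ln.startswith(comment_prefix))
    return comments / len(lines)

sample = """\
# compute total
def total(xs):
    # accumulate
    return sum(xs)
"""
print(comment_density(sample))  # 0.5: two comment lines out of four
```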
Many metrics have been invented, and a number have been used successfully in specific environments, but none has gained widespread acceptance.

D. A Software Quality Engineering Program

The two software qualities that command the most attention are reliability and maintainability. Some practical programs and techniques have been developed to improve the reliability and maintainability of software, even though these qualities are not yet directly measurable or predictable. The types of activities that might be included in an SQE program are described here in terms of these two qualities. These activities could be used as a model for SQE activities addressing additional qualities.

1. Qualities and Attributes

An initial step in laying out an SQE program is to select the qualities that are important in the context of the intended use of the software being developed. For example, the highest-priority qualities for flight software are usually reliability and efficiency. If revised flight software can be up-linked during flight, maintainability may be of interest, but considerations like transportability will not drive the design or implementation. On the other hand, science analysis software might require ease of change and maintainability, with reliability a concern and efficiency not a driver at all.

After the software qualities are selected and ranked, specific attributes of the software that help to increase those qualities should be identified. For example, modularity is an attribute that tends to increase both reliability and maintainability. Modular software is designed so that the code is apportioned into small, self-contained, functionally unique components or units. Modular code is easier to maintain, because the interactions between units of code are easily understood and low-level functions are contained in few units of code. Modular code is also more reliable, because it is easier to completely test a small, self-contained unit.
Not all software qualities are so simply related to measurable design and code attributes, and few qualities can be measured easily. The idea is to select or devise measurable, analyzable, or testable design and code attributes that will increase the desired qualities. Attributes like information hiding, strength, cohesion, and coupling should be considered.

2. Quality Evaluations

Once some decisions have been made about the quality objectives and software attributes, quality evaluations can be done. The intent of an evaluation is to measure the effectiveness of a standard or procedure in promoting the desired attributes of the software product. For example, the design and coding standards should undergo a quality evaluation. If modularity is desired, the standards should clearly say so and should set limits on the size of units or components. Since internal documentation is linked to maintainability, the documentation standards should be clear and should require good internal documentation.

The quality of designs and code should also be evaluated. This can be done as part of the walkthrough or inspection process, or a quality audit can be performed. In either case, the implementation is evaluated against the standard and against the evaluator's knowledge of good software engineering practices, and examples of poor quality in the product are identified for possible correction.

3. Nonconformance Analysis

One very useful SQE activity is an analysis of a project's nonconformance records. The nonconformances should be analyzed for unexpectedly high numbers of events in specific sections or modules of code. If areas of code are found to have an unusually high error count (assuming it is not because the code in question has been tested more thoroughly), then that code should be examined. The high error count may be due to poor-quality code, an inappropriate design, or requirements that are not well understood or defined.
In any case, the analysis may indicate changes and rework that can improve the reliability of the completed software. In addition to code problems, the analysis may also reveal software development or maintenance processes that allow or cause a high proportion of errors to be introduced into the software. If so, an evaluation of the procedures may lead to changes, or an audit may discover that the procedures are not being followed.

4. Fault Tolerance Engineering

For software that must be of high reliability, a fault tolerance activity should be established. It should identify the software that provides and accomplishes critical functions and requirements. For this software, the engineering activity should determine and develop techniques that will ensure the needed reliability or fault tolerance is attained. Some of the techniques that have been developed for high-reliability environments include:

- Input data checking and error tolerance. For example, if out-of-range or missing input data can affect reliability, then sophisticated error checking and data interpolation/extrapolation schemes may significantly improve reliability.

- Proof of correctness. For limited amounts of code, formal "proof of correctness" methods may be able to demonstrate that no errors exist.

- N-item voting. This is a design and implementation scheme in which a number of independent sets of software and hardware operate on the same input, and some comparison (voting) scheme is used to determine which output to use. This is especially effective where subtle timing or hardware errors may be present.

- Independent development. In this scheme, one or more of the N items are independently developed units of software. This helps prevent the simultaneous failure of all items due to a common coding error.

E. Techniques and Tools

Some useful fault-tolerance techniques are described under subsection D, above. Standard statistical techniques can be used to manipulate nonconformance data.
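The N-item voting scheme described under subsection D can be sketched as follows. The three version functions, the seeded fault, and the majority rule are illustrative assumptions; real N-item voting runs the versions on independent hardware.

```python
from collections import Counter

# Three hypothetical, independently developed versions of one computation.
def version_a(x): return x * x
def version_b(x): return x ** 2
def version_c(x): return x * x + (1 if x == 3 else 0)  # seeded fault at x == 3

def vote(x, versions):
    """Run every version on the same input and return the majority output."""
    results = Counter(v(x) for v in versions)
    value, count = results.most_common(1)[0]
    if count <= len(versions) // 2:
        raise RuntimeError("no majority agreement among versions")
    return value

print(vote(3, [version_a, version_b, version_c]))  # 9: majority masks the fault
```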
In addition, there is considerable experimentation with the Failure Modes and Effects Analysis (FMEA) technique adapted from hardware reliability engineering. In particular, an FMEA can be used to identify failure modes or other assumable (hardware) system states, which can then lead the quality engineer to an analysis of the software that controls the system as it assumes those states.

There are also tools that are useful for quality engineering. They include system and software simulators, which allow the modeling of system behavior; dynamic analyzers, which detect the portions of the code that are used most intensively; software tools that compute metrics from code or designs; and a host of special-purpose tools that can, for example, detect all system calls to help decide on portability limits.
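The last kind of tool mentioned above can be sketched as a simple text scan that flags operating-system-specific calls so portability limits can be assessed. The list of calls and the pattern matching are illustrative assumptions, not an exhaustive portability check.

```python
import re

# Hypothetical list of OS-specific calls to flag; a real tool would use a
# much larger catalog keyed to the target environments.
OS_SPECIFIC_CALLS = ("fork", "ioctl", "CreateProcess")

def find_system_calls(source: str):
    """Return (line number, call name) for each flagged call in the source."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for call in OS_SPECIFIC_CALLS:
            if re.search(r"\b" + call + r"\s*\(", line):
                hits.append((lineno, call))
    return hits

code = "pid = fork()\nresult = compute()\nioctl(fd, req)\n"
print(find_system_calls(code))  # [(1, 'fork'), (3, 'ioctl')]
```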