
Methods and types of data processing testing

Lecture



Functionally, any program can be considered as processing a stream of data transmitted from its input to its output. The input data are used sequentially to determine a series of intermediate results, up to and including the required set of output data. The task of data flow analysis is to establish that processing is correct and to identify errors in the program under test. This problem can be solved in two ways: 1. static (without executing the program, by analyzing its text); 2. dynamic (by actually executing the program on a computer).
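As an illustration of the two ways, the sketch below (in Python, with an invented toy function and an invented specification) checks the same hypothetical program first statically, by analyzing its text, and then dynamically, by executing it on test data:

```python
import inspect

# Toy program under test; suppose the specification requires x * 100 / 50.
def scale(x):
    return x * 100 / 5   # defect: the divisor should be 50

# Static way: analyze the program text without executing it.
source = inspect.getsource(scale)
if "/ 50" not in source:
    print("static check: the expected divisor 50 does not appear in the text")

# Dynamic way: actually execute the program on a test value.
expected = 2.0
actual = scale(1)
if actual != expected:
    print(f"dynamic check: scale(1) = {actual}, expected {expected}")
```

Here both ways expose the same defect; in practice they are complementary, since static analysis finds errors without needing test data, while dynamic testing confirms the actual behavior.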
Programming in high-level languages allows data flows to be analyzed fully and efficiently at the level of the program source code. The data involved in computations in a high-level language are explicitly defined by name, type, and the methods of access and use. This allows the program to be viewed as a multigraph defined by the structure of control transfers and by the transformation graph of the data involved in the calculations (the data flow).
In the figure, the vertices of the left row correspond to the program operators, and those of the right row to the variables and constants processed by the program. The flow of control between operators is shown by solid lines, the data flow by dashed lines. The control and data flows intersect in condition-checking operators and in loops. Joint analysis of the control and data flows makes it possible to check that the definition regions of variables are implemented correctly on the program execution routes.
The consequences of errors in a program can appear as small changes in some variables during the computation or as a complete distortion or absence of the required values at the output. It is therefore advisable to test the program module on ordered data sets, taking into account the degree of their influence on the output results.

From this standpoint, it is advisable to distinguish two types of data processing:
1. completely changing the domain and meaning of the results;
2. modifying results within a certain bounded regular domain.
The first type of processing corresponds to initial data at critical points and at the boundaries of the regions of variation of the variables. At such critical values, the execution route of the program may change, so the greatest change in the results is possible. Consequently, testing of data processing is usually aimed primarily at checking program execution with the values of variables that affect the choice of route and the logic of the program.
Boundary conditions are situations that arise in the immediate vicinity of the boundaries of the regions of variation of the processed variables. The number of such critical values of each variable can be several orders of magnitude smaller than the number of values in the inner part of the variable's range. Most critical values can significantly affect the results and are subject to the most thorough testing. In this respect, testing of the processed data is close in content to testing the program structure. With this type of testing, routes are formed in the process of analyzing data processing at successive conditional operators in the program text, so the entire set of routes is realizable and is determined by the composition of the real test data. The set of combinations of source data in the tests directly affects the degree to which testing covers the program. By comparing the tested routes with the routes identified in the program graph under various criteria, one can evaluate the achieved completeness of module testing and the approximate degree of its correctness.
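A minimal sketch of such boundary-condition testing, assuming a hypothetical shipping_cost module whose execution route changes at the critical weight 100 (the function, its threshold, and the expected values are invented for illustration):

```python
import math

# Hypothetical module under test: the execution route depends on the
# critical value 100 of the input variable.
def shipping_cost(weight):
    if weight <= 0:
        raise ValueError("weight must be positive")
    if weight < 100:               # route A: flat rate
        return 5.0
    return 5.0 + 0.1 * weight      # route B: weight-dependent rate

# Test points at and immediately around each boundary, where the route
# (and hence the result) changes most sharply.
boundary_tests = [
    (0,   "error"),   # on the lower boundary
    (1,   5.0),       # just inside route A
    (99,  5.0),       # just below the critical value
    (100, 15.0),      # exactly on the critical value: route B
    (101, 15.1),      # just above the critical value
]

for weight, expected in boundary_tests:
    try:
        result = shipping_cost(weight)
    except ValueError:
        result = "error"
    ok = result == expected if isinstance(result, str) else math.isclose(result, expected)
    print(f"weight={weight}: {'ok' if ok else f'MISMATCH, got {result}'}")
```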
The second type of processing corresponds to data in a bounded or unbounded domain of definition that can be divided into a set of adjoining subdomains. Changing data within such a region does not affect the execution route of the program. Therefore, to test the functionality of the program over the entire set of values, it is sufficient to use only a few values inside the region and near its borders. The number of values used for testing this type of processing may be several orders of magnitude smaller than the total number of values of each variable in the region. In the process of testing, the accuracy of the performed calculations, the correctness of scaling and of the dimensions of the processed quantities, etc. are checked. At the same time, testing should cover the entire range of variation of each processed variable and each resulting value.
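For example, a hypothetical grade function with three adjoining subdomains can be checked with just a few representatives per region (the scale and its borders are assumptions for illustration):

```python
# Domain 0..100 divided into three adjoining subdomains; within each
# subdomain the execution route does not change.
def grade(score):
    if score < 60:
        return "fail"
    if score < 90:
        return "pass"
    return "excellent"

# A few values inside and near the borders of each region are enough
# to represent the whole set of values.
cases = {
    "fail":      [0, 30, 59],    # interior and border values of region 1
    "pass":      [60, 75, 89],   # region 2
    "excellent": [90, 95, 100],  # region 3
}
for expected, scores in cases.items():
    for s in scores:
        assert grade(s) == expected, (s, grade(s), expected)
print("all representative values passed")
```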

When analyzing the processed data within the internal domain of definition, it is advisable to use testing methods ordered in the following sequence:
1. testing the correctness of writing and reading variables in the calculations and the completeness of the composition of the output data on all program execution routes;
2. testing the accuracy of the calculation results and the correctness of processing each variable;
3. testing the full compliance of the composition and accuracy of the output values with the requirements specification.
In this sequence, the particular testing methods reveal first of all the primary errors that can distort the results to the greatest extent. With limited resources and this order of testing, the errors that remain in the program are those that least affect the correctness of the output data. Attention should be focused on identifying data processing errors that affect: the logic of the program; the writing and reading of variables; the completeness of the results; the accuracy of the computed output data; full compliance with the requirements specification.

2. Testing with the data values that determine program execution routes (the area strategy).
Data processing routes may depend on values of any type. Variables and constants of real, integer, character, boolean, vector, and other types can participate in the choice of a branching direction. The range of such values depends on their types and contents and may consist of individual points, disconnected regions, or an unlimited continuous sequence of values. One of the tasks of testing is to check the compatibility of the compared types of values and the identity of the conditions of their encoding (scale, bit width, etc.).

Critical values (predicates) that affect the routes are in many cases not fixed but are formed during the processing and comparison of several variables. In this case, predicates are formed over the entire range of variation of each of the variables. The predicates that determine the choice of program execution routes are formed as a result of calculations on linear program sections. These sections are not large, about ten statements, and the calculations in most cases are the simplest linear transformations of the input data. In addition, predicates are usually simple, with one input variable (less often two). Each bounded region of the source data corresponds to a specific route in the program; the boundary of the region consists of a set of sections, each of which is determined by a single simple predicate forming an arc of the routes in the program graph. The total number of predicates on a route is an upper bound on the number of boundary sections of the input variable region of this route. Thus, with respect to the data stream, the program performs the function of dividing the source data space into regions, each of which corresponds to a single executable route.
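A small sketch of this partitioning, with invented predicates over two inputs: each combination of predicate outcomes selects one route, so the (x, y) plane is divided into regions, and one test point per region confirms that the corresponding route is reachable:

```python
# Each simple predicate contributes one boundary section; the input
# plane is divided into four regions, one per executable route.
def route(x, y):
    if x > 0:          # predicate 1: boundary section x = 0
        if y > x:      # predicate 2: boundary section y = x
            return "R1"
        return "R2"
    if y > 0:          # predicate 3: boundary section y = 0
        return "R3"
    return "R4"

# One test point inside each region of the source data space.
samples = {(1, 5): "R1", (5, 1): "R2", (-1, 1): "R3", (-1, -1): "R4"}
for (x, y), expected in samples.items():
    assert route(x, y) == expected
print("each input region maps to its own route")
```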
Errors in the program may be due to a modification of the boundary of a route's definition region, leading to an expansion or contraction of the source data space of the corresponding route. In addition, deformation of region boundaries can lead to the destruction of some regions and the loss of the routes corresponding to them. Such errors may be caused by distortions of the conditional analysis operators, or by distortions in the process of computing predicate values while the condition operator itself is correct. In the latter case, the boundary of the region is usually shifted but retains its overall structure. Distortion of the condition analysis operators can lead both to deformation of a region boundary and to the appearance of new boundaries or the destruction of existing ones; because of this, regions can be split or merged.
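For instance, a boundary-shift error of this kind (a hypothetical predicate distorted from x >= 10 to x > 10) preserves the structure of the regions and is revealed only by a test point lying exactly on the boundary:

```python
# Correct specification: route A applies for x >= 10.
def classify_correct(x):
    return "A" if x >= 10 else "B"

# Faulty version: the predicate is distorted to x > 10, shifting the
# region boundary by one point while preserving its overall structure.
def classify_faulty(x):
    return "A" if x > 10 else "B"

# Interior points do not reveal the shift; only the boundary point does.
for x in (5, 10, 15):
    ok = classify_faulty(x) == classify_correct(x)
    print(f"x={x}: faulty={classify_faulty(x)} correct={classify_correct(x)}"
          f" {'ok' if ok else '<- boundary shift detected'}")
```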
This method of ordering tests on the basis of the regions of data variation is very effective. The complexity of testing increases with the size of the source data space (i.e., the number of variables) and with the number of predicates on the routes. For many typical program modules the complexity of the tests is acceptable for a complete check of the module. The limitations of the region verification method can manifest themselves with a complex organization of loops, when the number of routes and of conditions to be analyzed increases sharply.

3. Testing the correctness of data usage on program execution routes.
Each value on a program execution route is read from memory and, after being used in calculations, is written back to computer memory for storage and subsequent processing. The alternation of reading and writing variables can be broken as a result of errors in the program. To identify such errors, the correctness of writing and reading real data values is tested, or these operations are analyzed using the program text. Errors of this kind usually manifest themselves as the absence of the value to be read in the corresponding register. The reasons may be as follows: 1) the write operation for the required value is completely absent; 2) a write operation exists, but the value is written to the wrong register; 3) the correct record of the required value in the prescribed register is destroyed by an unplanned write of another value before the original record is used.
These errors may not appear on all program execution routes. The testing task is reduced to detecting erroneous combinations of write and read operations for each value and the addresses of the registers with which these operations are performed. To do this, information about the register addresses and the sequence of write and read operations on the values is required. Testing the process of using variables is based on data flow analysis.
All definitions of variables that can reach each vertex of the program control graph are identified. For each point of the program, it is necessary to establish which variable definitions are valid on arrival at this point of use and at the exits from it. Data flow analysis makes it possible to detect errors caused by a violation of the correct sequence of write/read operations on the data. This analysis is carried out along the program execution routes.
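A minimal sketch of such a check over a recorded sequence of write/read events on one route (the event format is an assumption, not any particular tool's interface). It reports reads of unwritten values and writes that destroy an unused value, i.e. causes 1 and 3 above; cause 2 would additionally require register address information:

```python
def check_def_use(events):
    """events: list of ("write" | "read", variable) in route order."""
    written = {}                    # variable -> has the stored value been used?
    for op, var in events:
        if op == "read":
            if var not in written:
                print(f"error: {var} is read but never written")        # cause 1
            written[var] = True     # the stored value has now been used
        else:                       # op == "write"
            if var in written and not written[var]:
                print(f"warning: write to {var} destroys an unused value")  # cause 3
            written[var] = False
    for var, used in written.items():
        if not used:
            print(f"warning: last value of {var} is never read")

# A route containing an overwritten value and a read of an unwritten variable:
check_def_use([("write", "a"), ("write", "a"), ("read", "a"), ("read", "b")])
```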

4. Testing the correctness of processing each variable and the accuracy of the calculation results.
This type of testing is performed predominantly with real and integer values in the inner part of their domains of definition. Checking the accuracy of calculations can be combined with the boundary values used when testing routes by definition regions. Test values for checking calculations with simple numeric variables are constructed in an ordered manner, taking into account the following rules:
1) input test data should take values close to the largest and smallest, as well as one or two intermediate values;
2) testing is carried out at all special values of the input variables: at points of sharp growth or discontinuity of derivatives, and at zero, unit, and extremely small numeric values;
3) input test values should provide verification of the program for output results that have singular points of abrupt change or discontinuity of derivatives;
4) if the value of one variable depends on the value of another, it is necessary to test special combinations of the variables: equality of the variables, a very small and a very large difference between them, zero and unit values.
Thus, for each simple numeric variable, in addition to the significant points near and at the boundary of its domain, it is usually necessary to test the program at several intermediate points and at a few special points of the input data values. If variables are represented as arrays, the amount of testing increases significantly. In this case, it is necessary to take into account the structure and size of the array, the presence of singular points or substructures, and changes in the domain of definition and in the singular points of the values of each variable when the data are located in different places of the array.
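A sketch of rule-based selection of test points for one simple numeric variable, assuming a domain [lo, hi] and tester-supplied special points (the helper function and its parameters are illustrative, not a standard tool):

```python
def numeric_test_points(lo, hi, special=()):
    eps = (hi - lo) * 1e-6           # "near the boundary" step
    points = [lo, lo + eps,          # smallest and near-smallest values
              hi - eps, hi,          # near-largest and largest values
              (lo + hi) / 2]         # an intermediate value
    for s in special:                # zeros, unit values, discontinuities
        points += [s - eps, s, s + eps]
    return sorted(set(p for p in points if lo <= p <= hi))

# Example: a variable on [-100, 100] whose processing has special
# points at 0 and 1 (say, a derivative discontinuity at 0):
print(numeric_test_points(-100.0, 100.0, special=(0.0, 1.0)))
```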

5. Automation tools for testing software components.
A variety of tools and integrated systems have been created that provide an effective and convenient testing environment for developers of software components.
Automated testing systems are adapted to the peculiarities of the software being created and of the design environment.
Modern software testing systems should provide the following features:
1. Convenient user dialogue with the test automation tools;
2. Use of the design database for accumulating and storing information about the developed programs, their versions, and test and reference data;
3. Automatic detection of typical errors in the texts of the source programs;
4. Automated test planning;
5. Implementation of debugging tasks to achieve maximum program correctness under limited testing resources;
6. Evaluation of the achieved program correctness according to the selected testing criteria and determination of the main quality indicators of the created program components;
7. Automated registration and documentation of all changes made to the programs.
Test automation tools are divided into static and dynamic.
Static tools analyze the source code of programs without executing them in object code. They automate the analysis and verification of the correctness of program texts and the identification of the corresponding types of errors.
The group of static tools includes: means of checking the structural correctness of programs; means of checking the writing and reading of variables; means of searching for persistent semantic errors; means of preparing data for test planning, which select typical structures in the module (loops, routes), with the developer specifying the criterion by which routes are to be formed and a strategy for compiling an ordered list of routes; means of calculating the duration of module execution.
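As a toy example of a static tool of the second kind (checking the writing and reading of variables), the following sketch analyzes program text with Python's ast module, without executing the program, and reports variables that are written but never read (the checked source is invented):

```python
import ast

SOURCE = """
def f(x):
    a = x + 1
    b = 2 * x      # written but never read
    return a
"""

tree = ast.parse(SOURCE)
written, read = set(), set()
for node in ast.walk(tree):
    if isinstance(node, ast.Name):
        if isinstance(node.ctx, ast.Store):
            written.add(node.id)      # a write to the variable
        elif isinstance(node.ctx, ast.Load):
            read.add(node.id)         # a read of the variable

for var in sorted(written - read):
    print(f"static check: variable '{var}' is written but never read")
```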
When dynamic tools are used, the programs are executed in object code. Dynamic debugging tools are divided into:
- basic tools, which ensure the execution of programs in accordance with the debugging tasks;
- auxiliary tools, which record the testing performed, its results, and the corrections made.
Auxiliary tools are usually used when the software components are intended for use in critical software systems and especially high demands are placed on their reliability.
For each debugged software component, the basic tests and the debugging tasks used during testing should be stored. Along with the tests, the results of assessing the completeness of testing and the achieved correctness of the software component are stored. These data, together with an assessment of the complexity of the component and its structural and informational characteristics, are combined into a certification passport of the program.
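A sketch of what such a certification passport might look like as a data structure; the field set is an assumption derived from the description above, not a standardized format:

```python
from dataclasses import dataclass, field

@dataclass
class CertificationPassport:
    component: str
    tests: list = field(default_factory=list)            # basic tests and debugging tasks
    coverage_criterion: str = ""                         # selected testing criterion
    coverage_achieved: float = 0.0                       # completeness of testing, 0..1
    correctness_estimate: float = 0.0                    # achieved correctness estimate
    complexity: int = 0                                  # complexity assessment
    structure_info: dict = field(default_factory=dict)   # structural/informational data

passport = CertificationPassport(
    component="shipping_cost",
    tests=["boundary values", "region representatives"],
    coverage_criterion="branch coverage",
    coverage_achieved=0.95,
)
print(passport)
```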

