Lecture
Load testing, or performance testing, is automated testing that simulates the work of a certain number of business users on a shared resource.
Let us consider the main types of load testing and the tasks each of them addresses.
Performance Testing
The task of performance testing is to determine the scalability of the application under load.
Stress Testing
Stress testing checks how the application and the system as a whole behave under stress, and also evaluates the system's ability to recover, i.e. to return to normal after the stress ceases. Stress in this context may be an increase in the intensity of operations to very high values or an emergency reconfiguration of the server. One of the goals of stress testing may also be to evaluate performance degradation, so stress testing objectives can overlap with those of performance testing.
Volume Testing
The task of volume testing is to evaluate performance as the amount of data in the application's database grows.
Stability / Reliability Testing
The task of stability (reliability) testing is to verify that the application remains operational during long (many hours) test runs at an average load level. The execution time of operations may play a secondary role in this type of testing; what comes first is the absence of memory leaks, server restarts under load, and other issues that affect precisely the stability of operation.
In English terminology you can also encounter another type of testing, Load Testing: testing the system's response to a change in load (within permissible limits). It seemed to us that Load and Performance testing pursue the same goal, checking performance (response time) under different loads, so we did not separate them; others may choose to distinguish them. The main thing is to understand the goals of each type of testing and to try to achieve them.
The following are some experimental facts and generalized principles that apply to performance testing in general and to any of its types (in particular, to load testing).
1. Uniqueness of requests
Even after forming a realistic scenario of work with the system based on statistics of its use, one must understand that there will always be exceptions to this scenario.
(Figure: illustration of the different dispersions of the execution-time distributions of queries X and Y.)
In the case of Example 1, this may be a user who accesses unique pages of the web service that differ from all the others.
2. System response time
In general, the system response time follows a normal distribution.
In particular, this means that, given a sufficient number of measurements, one can determine the probability with which the system's response to a request falls into a particular time interval.
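As a hedged illustration of this point, here is a minimal Python sketch that takes a set of response-time measurements, fits a normal distribution to them, and estimates the probability that a response falls into a given interval; the sample values and the interval bounds are invented for the example.

```python
import math
import statistics

def interval_probability(samples, low, high):
    """Estimate P(low <= response time <= high), assuming the
    measurements follow a normal distribution (statement 2)."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    # Normal CDF expressed via the error function.
    cdf = lambda x: 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))
    return cdf(high) - cdf(low)

# Hypothetical response times in milliseconds.
measurements = [120, 135, 128, 142, 119, 131, 125, 138, 122, 133]
print(f"P(120 ms <= t <= 140 ms) ~ {interval_probability(measurements, 120, 140):.2f}")
```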
3. Dependence of the system response time on the degree of distribution of the system
The variance of the normal distribution of the system's response time to a request is proportional to the ratio of the number of system nodes processing these requests in parallel to the number of requests per node.
That is, the variation of the system's response time is affected both by the number of requests arriving at each node and by the number of nodes, each of which adds its own random delay to request processing.
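One way to read this statement as a formula (our interpretation, not a formula given in the lecture itself), where σ² is the variance of the response time, N_nodes the number of nodes processing requests in parallel, and R_per node the number of requests per node:

```latex
\sigma^{2}_{\text{response}} \;\propto\; \frac{N_{\text{nodes}}}{R_{\text{per node}}}
```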
4. The variation of the response time of the system
From statements 1, 2 and 3 we can also conclude that, given a sufficiently large number of measurements of request processing time, any system will always show requests whose processing time exceeds the maximums defined in the requirements; moreover, the longer the total duration of the experiment, the higher the new maximums will be.
This fact must be taken into account when formulating system performance requirements, as well as during regular load testing.
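A minimal simulation sketch of this effect, using synthetic normally distributed response times (all numbers are invented): the longer the run, the larger the observed maximum tends to be.

```python
import random

random.seed(1)

def observed_max(n_requests, mean_ms=100.0, stdev_ms=15.0):
    """Simulate n_requests response times and return the largest one."""
    return max(random.gauss(mean_ms, stdev_ms) for _ in range(n_requests))

# The observed maximum grows as the experiment gets longer.
for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} requests -> max ~ {observed_max(n):.1f} ms")
```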
5. Accuracy of reproduction of load profiles
The more components a system contains, the more expensive it is to reproduce its load profile with the required accuracy.
It is often impossible to take into account all aspects of the load profile for complex systems: the more complex the system, the more time is spent on designing, programming and maintaining an adequate load profile for it, which is not always necessary. The best approach here is to balance the cost of test development against the coverage of the system's functionality, accepting assumptions about how a particular part of the system under test affects overall performance.
Detailed information about what load and performance testing is, as well as about the methodology for conducting it, can be found in the section on Automation of load testing.
It should be noted that for most types of performance testing the same toolkit is used, which is able to perform typical tasks.
There is a common misconception that load testing tools are the same record-and-playback tools used to automate regression testing. Load testing tools work at the protocol level, whereas regression test automation tools work at the level of graphical user interface objects.
Example 2: Consider a standard Internet browser that navigates to a specified link when a button is pressed. To automate regression testing, you write a script that sends a mouse movement and a button click to the browser. To create a load testing script, you instead record the hypertext (HTTP) request sent by the browser and then replay it on behalf of several users, each with a unique user name and password.
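To make the protocol-level idea from Example 2 concrete, here is a minimal sketch (not any real tool's API): several virtual users send the same HTTP request concurrently, each with its own user name and password; the endpoint URL and the credential format are assumptions made for the example.

```python
import time
import urllib.parse
import urllib.request
from concurrent.futures import ThreadPoolExecutor

BASE_URL = "http://example.com/login"  # hypothetical endpoint

def virtual_user(user_id):
    """One virtual user: send a login request at the protocol (HTTP) level."""
    data = urllib.parse.urlencode({
        "login": f"user{user_id}",       # unique user name
        "password": f"secret{user_id}",  # unique password
    }).encode()
    start = time.perf_counter()
    with urllib.request.urlopen(BASE_URL, data=data, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

# Emulate 10 users working concurrently and collect their response times.
with ThreadPoolExecutor(max_workers=10) as pool:
    timings = list(pool.map(virtual_user, range(10)))
print(f"mean response time: {sum(timings) / len(timings):.3f} s")
```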
There are various tools for detecting and investigating problems in the different nodes of a system; these nodes can be classified according to their role in the system.
Also noteworthy is the emergence of networked business-to-business (B2B) applications governed by a service level agreement (SLA). The growing popularity of B2B applications means that more and more applications are switching to a service-oriented architecture, in which information is exchanged without the participation of web browsers. An example of such interaction is a travel agency requesting information about a specific flight between St. Petersburg and Omsk, where the airline is obliged to provide an answer within 5 seconds. Breach of an SLA often carries a significant penalty.
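As a hedged sketch of how such an SLA could be checked against collected measurements (the response times below are invented; only the 5-second threshold comes from the example):

```python
SLA_SECONDS = 5.0  # threshold from the flight-information example

def sla_breaches(response_times_s, sla=SLA_SECONDS):
    """Return the measured responses that violated the agreed SLA."""
    return [t for t in response_times_s if t > sla]

# Hypothetical response times (in seconds) collected during a test run.
measured = [1.2, 0.9, 4.8, 5.6, 2.1, 7.3]
breaches = sla_breaches(measured)
print(f"{len(breaches)} of {len(measured)} responses breached the {SLA_SECONDS} s SLA: {breaches}")
```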
Some tools for load testing are presented below:
Software | Manufacturer | Comments |
---|---|---|
OpenSTA | 'Open System Testing Architecture' | Free load / stress testing software licensed under the GNU GPL. Uses a distributed application architecture based on CORBA. A Windows version is available, although there are compatibility issues with Windows Vista. Support was discontinued in 2007. |
IBM Rational Performance Tester | IBM | Software based on the Eclipse development environment that allows you to create large volumes of load and measure response times for applications with a client-server architecture. Requires licensing. |
JMeter | Open project Apache Jakarta Project | A Java-based cross-platform toolkit that allows you to perform stress tests using JDBC / FTP / LDAP / SOAP / JMS / POP3 / HTTP / TCP connections. It allows you to create a large number of requests from different computers and control the process from one of them. |
HP LoadRunner | HP | Tool for load testing, originally designed to emulate the work of a large number of concurrently working users. Can also be used for unit- or integration testing. |
LoadComplete | SmartBear | Proprietary product that allows load testing of web applications. |
SilkPerformer | Micro Focus (Borland) | |
Siege | Joe Dog Software | Siege is a utility for load testing web servers. [3] |
Visual Studio Team System | Microsoft | Visual Studio provides a tool for performance testing, including load / unit testing. |
QTest | Quotium | |
Httperf | ||
QALoad | Compuware Ltd. | |
(The) Grinder | ||
WebLOAD | RadView Software | Load testing tool for web and mobile applications, including web dashboards for performance-test analysis. Used for large-scale loads, which can also be generated from the cloud. Licensed. [4] |
Yandex.Tank | Yandex | Modular and extensible tool that allows you to use different generators inside, in particular, familiar to many JMeter. This is an open-source project published by Yandex in 2012. |
Among the results obtained during load testing and used later for analysis are application performance metrics. The main ones are discussed below.
1. Consumption of CPU resources, %
A metric that shows how much of a given time interval the processor spent on computations for the selected process. In modern systems an important factor is the ability of a process to run in several threads, so that the processor can perform computations in parallel. Analyzing the history of CPU consumption can explain how overall system performance is affected by the flow of data being processed, the configuration of the application and the operating system, the multithreading of computations, and other factors.
2. Consumption of RAM, MB
A metric indicating the amount of memory used by the application. The used memory is divided into several categories:
While the application is running, memory fills up with references to objects that, if no longer used, can be cleared by a special automatic process called the garbage collector. The processor time spent on cleaning memory this way can become significant when the process has occupied all of its available memory (in Java, the so-called constant Full GC) or when the process has been allocated a large amount of memory that needs to be cleaned. While memory is being cleaned, the process's access to the pages of allocated memory may be blocked, which can affect the final processing time.
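A minimal monitoring sketch for metrics 1 and 2, assuming the third-party psutil library is installed and the process under test is identified by its PID (the PID below is a placeholder):

```python
import psutil  # third-party: pip install psutil

PID = 12345  # placeholder PID of the process under test
proc = psutil.Process(PID)

# Sample CPU and RAM consumption of the process once per second.
for _ in range(10):
    cpu_percent = proc.cpu_percent(interval=1.0)     # % of CPU time over the last second
    rss_mb = proc.memory_info().rss / (1024 * 1024)  # resident memory, MB
    print(f"CPU: {cpu_percent:5.1f} %   RAM: {rss_mb:8.1f} MB")
```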
3. Consumption of network resources
This metric is not directly related to application performance, but its values may point to limits on the performance of the system as a whole.
Example 3: A server application processing a user's request returns a video stream to the user over a 2 Mbit/s network channel. The requirement states that the server must handle 5 user requests simultaneously. Load testing showed that the server can effectively serve only 4 users at a time, because the multimedia stream has a bit rate of 500 kbit/s. Serving this stream to 5 users at once is clearly impossible, since it would exceed the bandwidth of the network channel, which means the system does not meet the stated performance requirements, even though its consumption of CPU and memory resources may be low.
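The arithmetic behind Example 3, written out explicitly:

```latex
5 \times 500\ \text{kbit/s} = 2500\ \text{kbit/s} > 2000\ \text{kbit/s} = 2\ \text{Mbit/s},
\qquad
\left\lfloor \tfrac{2000\ \text{kbit/s}}{500\ \text{kbit/s}} \right\rfloor = 4\ \text{users}
```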
4. Work with the disk subsystem (I/O wait time)
Work with the disk subsystem can significantly affect system performance, so collecting statistics on disk activity helps identify bottlenecks in this area. A large number of reads or writes can leave the processor idle while it waits for data from the disk, which shows up as increased I/O wait in the CPU statistics and results in longer response times.
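A hedged sketch of collecting disk-subsystem statistics over a test interval, again assuming psutil is available; the workload itself is replaced by a placeholder sleep:

```python
import time
import psutil  # third-party: pip install psutil

before = psutil.disk_io_counters()
time.sleep(60)  # placeholder for the load-test interval being measured
after = psutil.disk_io_counters()

# Deltas over the interval: heavy read/write activity here may point to an I/O bottleneck.
print(f"reads:  {after.read_count - before.read_count:>10}  "
      f"({(after.read_bytes - before.read_bytes) / 2**20:.1f} MB)")
print(f"writes: {after.write_count - before.write_count:>10}  "
      f"({(after.write_bytes - before.write_bytes) / 2**20:.1f} MB)")
```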
5. Request execution time, ms
Request execution time remains one of the most important indicators of system or application performance. This time can be measured on the server side, as the time the server takes to process the request, and on the client side, as the total time needed for serialization/deserialization, transfer and processing of the request.
It should be noted that not every performance testing tool can measure both of these times.
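A minimal client-side measurement sketch for this metric: it captures only the total time visible to the client (transfer plus server processing); the URL is a placeholder.

```python
import time
import urllib.request

URL = "http://example.com/api/report"  # placeholder request under test

start = time.perf_counter()
with urllib.request.urlopen(URL, timeout=30) as resp:
    resp.read()
elapsed_ms = (time.perf_counter() - start) * 1000

# Client-side time includes transfer and server processing, but cannot
# by itself separate out the server-side share.
print(f"request executed in {elapsed_ms:.1f} ms")
```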