Performance Testing Definitions
Software performance testing is used to determine the speed or effectiveness of a computer, network, software program or device. This process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which a system functions. Qualitative attributes such as reliability, scalability and interoperability may also be evaluated. Performance testing is often done in conjunction with stress testing.
In software engineering, performance testing is testing that is performed, from one perspective, to determine how fast some aspect of a system performs under a particular workload. It can also serve to validate and verify other quality attributes of the system, such as scalability, reliability and resource usage. Performance testing is a subset of Performance engineering, an emerging computer science practice which strives to build performance into the design and architecture of a system, prior to the onset of actual coding effort.
Performance testing can serve different purposes. It can demonstrate that the system meets performance criteria. It can compare two systems to find which performs better. Or it can measure what parts of the system or workload cause the system to perform badly. In the diagnostic case, software engineers use tools such as profilers to measure what parts of a device or software contribute most to the poor performance or to establish throughput levels (and thresholds) for maintained acceptable response time. It is critical to the cost performance of a new system, that performance test efforts begin at the inception of the development project and extend through to deployment. The later a performance defect is detected, the higher the cost of remediation. This is true in the case of functional testing, but even more so with performance testing, due to the end-to-end nature of its scope.
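In the diagnostic case described above, a profiler can show which parts of the software contribute most to poor performance. A minimal sketch using Python's built-in cProfile (the handle_request workload here is invented purely for illustration):

```python
import cProfile
import io
import pstats

def parse_records(n):
    # Stand-in for a CPU-bound parsing step.
    return [str(i).zfill(8) for i in range(n)]

def summarize(records):
    # Stand-in for an aggregation step.
    return sum(len(r) for r in records)

def handle_request():
    records = parse_records(50_000)
    return summarize(records)

profiler = cProfile.Profile()
profiler.enable()
handle_request()
profiler.disable()

# Report the functions that consumed the most cumulative time.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())
```

The sorted report makes the dominant call path obvious, which is exactly the information needed to decide where remediation effort should go.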
Performance testing can verify that a system meets the specifications claimed by its manufacturer or vendor. The process can compare two or more devices or programs in terms of parameters such as speed, data transfer rate, bandwidth, throughput, efficiency or reliability.
Performance testing can also be used as a diagnostic aid in locating communications bottlenecks. Often a system will work much better if a problem is resolved at a single point or in a single component. For example, even the fastest computer will function poorly on today's Web if the connection occurs at only 40 to 50 Kbps (kilobits per second).
Slow data transfer rate may be inherent in hardware but can also result from software-related problems, such as:
- Too many applications running at the same time
- A corrupted file in a Web browser
- A security exploit
- Heavy-handed antivirus software
- The existence of active malware on the hard disk.
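The connection-speed figure mentioned earlier translates directly into transfer time. A back-of-the-envelope sketch (the 500 KB page size is an assumed example):

```python
def transfer_time_seconds(page_size_kb, link_speed_kbps):
    # Convert kilobytes to kilobits, then divide by link speed in kilobits/second.
    return page_size_kb * 8 / link_speed_kbps

# A modest 500 KB page over a 50 Kbps connection:
print(transfer_time_seconds(500, 50))  # 80.0 seconds
```

Even before any software-related problems are considered, arithmetic like this identifies when the link itself is the bottleneck.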
Performance testing - The specification for a system will usually include some requirements for how well the system should perform certain functions, in addition to a statement of the required functions themselves. Thus while functional testing will, for example, demonstrate that the sum and average of a set of numbers are calculated, performance testing will concentrate on how well the calculation is done (speed, accuracy, range, etc.). Typically, performance testing will consist of one or more of the following.
Stress and timing tests, for example measuring and demonstrating the ability to meet peak service demand measured by number of users, transaction rate, volume of data, and the maximum number of devices all operating simultaneously.
Configuration, compatibility, and recovery tests, for example using a combination of the slowest processor, the minimal memory, the smallest disk, and the last version of the operating system, and checking that other valid combinations of processor, memory, disk, communications, and operating systems will interoperate and recover from faults.
Regression tests, showing that the new system will perform all the required application functions of the system it replaces.
The scope of performance testing is not well defined: in its narrow definition it covers creating scripts and running tests, while in its wide definition it covers all kinds of performance-related activities in which a synthetic workload is applied to the system. The wide definition often includes performance analysis, performance troubleshooting/diagnostics, tuning, and capacity planning, so the word "testing" rather disguises the substance of what lies behind it. The difference between these definitions can partly explain the wide range of opinions about the path from functional testing to performance testing: while the path from automated functional testing to "narrow definition" performance testing is quite straightforward, the skills required by the other areas of "wide definition" performance testing are quite different.
If we speak about the "wide definition" of performance testing, it is definitely highly interwoven with performance engineering. Performance testing is a source of data for any kind of modeling or capacity planning activity. It is a way to calibrate and verify models (where "models" means not only formal models, but any kind of perception of how the system is supposed to work). If any proactive performance experiment is considered performance testing, the only other sources of data are observation and log analysis of real work with the system, where you never know exactly what level of load was applied or what else happened around it (so the data won't be "clean"). A notable exception is data defining load (such as throughput), which are parameters of performance testing; they cannot come from its results, but must come from analyzing real systems or other sources.
The unique opportunity to exactly reproduce a workload makes performance testing priceless during tuning and diagnostics of performance problems. You get the exact impact of a suggested change, while, for example, doing the same in production you never know what other unknown factors may affect the results.
While performance testing wasn't discussed much by Dr. Connie U. Smith or Dr. Lloyd G. Williams (the people who have probably done the most to promote software performance engineering), it is implied as an integral part of performance engineering. For example, the QSEM (Quantitative Scalability Evaluation Method) paper describes performance testing as a way to investigate a system's scalability.
Performance testing is not "sexy" from a scientific point of view. While its practice is rather mature, there is no mature performance testing theory. There is no classic textbook, for example; most education is built around vendors' tools. It is difficult to put many formulas into it (I always enjoy performance management / capacity planning books where, after some initial performance- and computer-related discussion, there are usually many chapters of plain queuing theory and other parts of mathematics that are supposed to be used).
Performance testing is a very good first step toward introducing performance engineering. At the least, it is the area where some low-hanging performance fruit can easily be found, and it is quite clear what to do. While performance testing theory is almost non-existent, performance testing practice is pretty mature (although often grouped around tool vendors, their training, and their user communities). It looks like performance testing groups are becoming the centers of performance engineering in the enterprise ("a performance center of excellence", for example), instead of the performance analysis and capacity planning groups that were typical for corporations using mainframes.
Performance testing is testing that is performed to determine how fast some aspect of a system performs under a particular workload. Performance testing can serve different purposes. It can demonstrate that the system meets performance criteria. It can compare two systems to find which performs better. Or it can measure what parts of the system or workload cause the system to perform badly.
Performance testing - A test of application productivity and its conformity to requirements. It is especially important for complex Web applications and mobile software. For example, graphics processing can be crucial on mobile devices, so it is necessary to check that the application works properly and, e.g., does not cause the display to "freeze". Special tools allow gathering productivity metrics. One subtype of this testing is benchmark testing.
Load testing - Tests how the system works under load. This type of testing is very important for client-server systems, including Web applications (e-Communities, e-Auctions, etc.), ERP, CRM, and other business systems with numerous concurrent users.
Stress testing - Stress testing examines system behavior in unusual ("stress", or beyond the bounds of normal circumstances) situations. E.g., system behavior under heavy load, a system crash, or a lack of memory or hard disk space can be considered a stress situation. Fool-proof testing is another case, which is useful for GUI systems, especially those oriented at a wide circle of users.
Load Testing (Performance Testing) is used to verify the performance behavior of business functions under normal and heavy work conditions (e.g., for a CLI application it could be the situation when many tasks are scheduled for one particular time). The success criteria for this test are completion of all test cases without any failures and within the acceptable time allocation.
Performance Testing (Load Testing) - Testing aimed at assessing the speed at which the product handles different events under different conditions.
Performance testing - The process of testing to determine the performance of a software product. See efficiency testing.
Performance testing tool - A tool to support performance testing and that usually has two main facilities: load generation and test transaction measurement. Load generation can simulate either multiple users or high volumes of input data. During execution, response time measurements are taken from selected transactions and these are logged. Performance testing tools normally provide reports based on test logs and graphs of load against response times.
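The two main facilities described, load generation and test transaction measurement, can be illustrated in miniature (a sketch; the transaction here is simulated rather than a real network call, which a real tool would issue):

```python
import random
import statistics
import time

def transaction():
    # Placeholder transaction: simulated 5-20 ms of work.
    time.sleep(random.uniform(0.005, 0.02))

def generate_load(transaction_fn, num_requests):
    # Load generation: execute the transaction repeatedly, logging each response time.
    log = []
    for _ in range(num_requests):
        start = time.perf_counter()
        transaction_fn()
        log.append(time.perf_counter() - start)
    return log

def report(log):
    # Test transaction measurement: summarize the response-time log.
    return {
        "requests": len(log),
        "mean_ms": statistics.mean(log) * 1000,
        "max_ms": max(log) * 1000,
    }

log = generate_load(transaction, num_requests=50)
print(report(log))
```

Commercial and open-source tools add the other capabilities mentioned above (multi-user simulation, report generation, graphs of load against response time) on top of this same core loop.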
The objective of a performance test is to validate the back-end architecture, hardware, and application scalability:
- Determine the performance, stability and scalability of an application under various load conditions.
- Determine which configuration sizing provides the best performance level.
- Determine if the current architecture can support the application at peak user levels.
- Prove application is stable enough to go into production (Acceptance).
- Determine if the new version of the software has adversely impacted response time.
- Determine at what point performance degradation occurs (Capacity Planning).
- Identify application and infrastructure bottlenecks.
- Evaluate product and/or hardware to determine if it can handle projected load volumes.
Performance Testing and Load/Stress Testing determine the ability of the application to perform while under load. During stress/load testing the tester attempts to stress or load an aspect of the system to the point of failure, the goal being to determine weak points in the system architecture. The tester identifies peak load conditions at which the program will fail to handle required processing loads within required time spans. During performance testing the tester designs test case scenarios to determine whether the system meets the stated performance criteria (e.g. a login request shall be responded to in 1 second or less under a typical daily load of 1000 requests per minute). In both cases the tester is trying to determine the capacity of the system under a known set of conditions. The same set of tools and testing techniques can be applied to both types of capacity testing; only the goal of the test changes.
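The login criterion quoted above can be expressed as an executable pass/fail check. A sketch with simulated response times (a real test would measure actual login requests against the running system):

```python
import random

SLA_SECONDS = 1.0          # "responded to in 1 second or less"
LOAD_PER_MINUTE = 1000     # "typical daily load of 1000 requests per minute"

def simulated_login_time():
    # Placeholder for a measured login response time (here: 100-900 ms).
    return random.uniform(0.1, 0.9)

response_times = [simulated_login_time() for _ in range(LOAD_PER_MINUTE)]
violations = [t for t in response_times if t > SLA_SECONDS]
print(f"{len(violations)} of {LOAD_PER_MINUTE} requests exceeded the {SLA_SECONDS}s SLA")
assert not violations, "performance criterion not met"
```

Encoding the criterion this way makes the performance test's verdict as unambiguous as a functional test's.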
Performance Testing = how fast is the system?
Load Testing = how much volume can the system process?
Performance testing seems to me to be much more broad than load testing. Consider:
- A web developer can test the speed at which a page renders in a browser, and that is testing performance. Yet, that test would have nothing to do with load.
- I might analyze the efficiency at which my database processes a single specific SQL query, and the resulting speed of delivery of the records can be the slowest component of the whole page building process. Measuring that speed is about performance, but only one transaction is involved (small load).
- Load testing is usually focused on metrics like requests per second and concurrent users (the cause); whereas performance testing is more concerned with response times (the effect).
Web load testing is:
- similar to, but not synonymous with, performance testing
- concerned with the volume of traffic your website (or application) can handle
- not intended to break the system
- viewing the system from the user perspective
- associated with black box testing
Web performance testing is:
- a superset of load testing
- concerned with the speed and efficiency of various components of the web application
- useful with only one user and/or one transaction
- viewing the system from the architecture perspective (behind the server-side curtain)
- associated with white box testing
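The cause-and-effect distinction between load metrics and response-time metrics can be computed from the same test run. A sketch over a synthetic request log (real tools derive both from their recorded logs):

```python
import statistics

# Synthetic log: (start_time_seconds, response_time_seconds) per request.
requests = [(i * 0.05, 0.2 + 0.001 * (i % 10)) for i in range(200)]

# Load metric (the cause): requests per second over the run.
duration = requests[-1][0] - requests[0][0]
rps = len(requests) / duration

# Performance metric (the effect): response-time percentiles.
times = sorted(t for _, t in requests)
p50 = statistics.median(times)
p95 = times[int(0.95 * len(times)) - 1]

print(f"load: {rps:.1f} req/s; response time p50={p50*1000:.1f} ms, p95={p95*1000:.1f} ms")
```

Reporting both together is what lets a tester say not just "the system is slow" but "the system is slow at this level of load".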
Performance testing: A web application should sustain a heavy workload, especially during peak times when many users access the same page simultaneously. In addition, the site must be able to handle input data from a large number of users simultaneously. A performance test also includes stress testing, in which the system is tested beyond its specification limits.
According to IEEE:
Performance testing is conducted to evaluate the compliance of a system or component with specified performance requirements.