Evaluation of Software Reliability
The first reliability growth model, intended to assess the reliability of software, was proposed in the mid-1970s. In the decades since, many models have been suggested, some closely related and some quite different. All of these models can be divided into three types.
Reliability growth models
The reliability growth phenomenon is this: as software is tested, patches are applied, the code stabilizes, the number of undetected errors decreases, and crashes occur less often. In short, reliability increases.
The idea behind such models is to find a function that correlates well with the actual data on the process of detecting software bugs.
Most often an exponential or a logarithmic function is chosen, for example:

  mu(t) = β0 * (1 - e^(-β1*t))    (exponential)
  mu(t) = β0 * ln(1 + β1*t)       (logarithmic)

These equations yield the expected number of errors found by time t. β0 and β1 are model parameters estimated from previous experience and the first test results. Such a model allows us to estimate the number of undetected errors, and this information can be used to decide when to complete the testing phase and to answer other questions.
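As a minimal sketch of this idea, the snippet below fits the exponential form β0 * (1 - e^(-β1*t)) (the expected number of errors found by time t) to cumulative bug counts by least squares. The weekly counts are hypothetical, and the crude grid search over β1 is only for illustration; real tools use proper nonlinear optimizers.

```python
import math

def mu(t, b0, b1):
    """Expected cumulative number of errors found by time t
    under the exponential reliability growth curve."""
    return b0 * (1.0 - math.exp(-b1 * t))

def fit(times, counts, b1_grid):
    """Least-squares fit of (b0, b1).  For a fixed b1 the optimal b0
    has a closed form; b1 is chosen by a simple grid search."""
    best = None
    for b1 in b1_grid:
        g = [1.0 - math.exp(-b1 * t) for t in times]
        b0 = sum(c * gi for c, gi in zip(counts, g)) / sum(gi * gi for gi in g)
        sse = sum((c - b0 * gi) ** 2 for c, gi in zip(counts, g))
        if best is None or sse < best[0]:
            best = (sse, b0, b1)
    return best[1], best[2]

# Hypothetical weekly cumulative bug counts from a test campaign.
weeks = [1, 2, 3, 4, 5, 6, 7, 8]
found = [28, 49, 64, 77, 85, 92, 96, 99]

b0, b1 = fit(weeks, found, [i / 100.0 for i in range(1, 101)])
remaining = b0 - found[-1]   # estimated errors not yet detected
print(round(b0), round(b1, 2), round(remaining))
```

Here b0 estimates the total number of errors in the code, so b0 minus the errors already found is the model's estimate of the undetected remainder — the quantity used to decide whether testing can stop.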
Regression models
Regression models are similar to reliability growth models: they also produce an estimate by approximating real data. The difference is the source data: reliability growth models use information about testing, while regression models use various metrics such as the number of lines of code, cyclomatic complexity, and so on.
Suppose a system has many releases; for each release we can measure a set of metrics and count the number of errors detected in it for some time after the release. Then, if there are enough data, we can fit a regression that predicts the number of errors in a release fairly accurately from the measured metrics alone. Logistic, negative binomial, and other regressions are typically used for this purpose.
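A toy version of this scheme is sketched below. Real studies use logistic or negative binomial regression over many metrics, which needs a statistics package; as a stdlib-only stand-in, ordinary least squares on a single metric (release size) shows the shape of the approach. All release sizes and defect counts are hypothetical.

```python
# (KLOC in release, defects found in some period after release)
releases = [
    (12, 30), (18, 44), (25, 61), (31, 70), (40, 95), (55, 128),
]

# Fit y = intercept + slope * x by ordinary least squares.
n = len(releases)
mean_x = sum(x for x, _ in releases) / n
mean_y = sum(y for _, y in releases) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in releases)
         / sum((x - mean_x) ** 2 for x, _ in releases))
intercept = mean_y - slope * mean_x

def predict(kloc):
    """Expected defect count for a release of the given size."""
    return intercept + slope * kloc

print(round(predict(35)))   # forecast for a planned 35 KLOC release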
The most widely used scenario for a regression model is predicting the set of modules (files) in a system that will contain the largest number of defects after release. The Pareto principle applies here: 20% of the files contain 80% of the defects. This information can be used to distribute testing resources, reduce the time spent on testing, select the optimal moment to release, and for other tasks.
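The ranking step itself is simple once per-file predictions exist. The sketch below takes hypothetical per-file defect predictions (the file names and numbers are invented for illustration), selects the top 20% of files, and reports what share of the predicted defects they cover.

```python
# Hypothetical per-file defect predictions from a regression model.
predicted = {
    "parser.c": 41, "vm.c": 33, "gc.c": 12, "lexer.c": 6, "opt.c": 4,
    "io.c": 3, "cli.c": 2, "log.c": 1, "cfg.c": 1, "util.c": 1,
}

# Rank files by predicted defect count and take the top 20%.
ranked = sorted(predicted, key=predicted.get, reverse=True)
top = ranked[: max(1, len(ranked) // 5)]

# Share of all predicted defects concentrated in those files.
share = sum(predicted[f] for f in top) / sum(predicted.values())
print(top, round(share, 2))
```

On data with a Pareto-like skew, the top fifth of the files captures most of the predicted defects, which is exactly the list a test team would prioritize.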
Other models
There are other models that yield reliability estimates in one form or another. They often rely on additional information about the development process, such as test code coverage data or details of how development is organized. As a consequence, their applicability is often limited, and they find little use in practice.