Goodness of fit tests are intended to inform the user whether the data deviate substantially from the selected and calibrated probability model. Passing a test is not an indication that the data necessarily come from the selected probability distribution, only that the data do not significantly deviate from it. The choice of a probability distribution and fitting method should not be based solely on a comparison of goodness of fit test results; there should always be a mathematical basis for selecting a probability distribution as a model for the data. The tests outlined below may help rule out one or more candidate distributions when several are plausible for the data being modeled.

Kolmogorov-Smirnov Test

The Kolmogorov-Smirnov, or K-S Test, is a nonparametric method for checking the equality of two continuous probability distributions. When the distribution of the data is approximated with an empirical distribution, equality can be checked between the empirical distribution and a candidate model for the data. The K-S Test works by finding the maximum absolute difference between the CDF of the proposed model and the empirical CDF of the data; this maximum difference is the test statistic the program reports. In practice, if the difference is large relative to the sample size, the null hypothesis that the data come from the proposed model is rejected.
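The computation described above can be sketched with SciPy. This is an illustrative example, not the document's own program; the sample data and the choice of a normal candidate model are assumptions for demonstration. Note that fitting the model's parameters from the same data (as done here) makes the standard K-S critical values only approximate.

```python
import numpy as np
from scipy import stats

# Hypothetical sample for illustration.
rng = np.random.default_rng(42)
data = rng.normal(loc=10.0, scale=2.0, size=200)

# Fit a candidate normal model, then compare the empirical CDF of the
# data against the fitted model's CDF. The statistic is the maximum
# absolute difference between the two CDFs.
mu, sigma = stats.norm.fit(data)
statistic, p_value = stats.kstest(data, "norm", args=(mu, sigma))

# Reject the null hypothesis at the 5% significance level if p < 0.05.
reject = p_value < 0.05
```

The statistic lies between 0 and 1; larger values (relative to sample size) indicate larger disagreement between the empirical and proposed distributions.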

Chi-Squared Test

The Chi-Squared Test (more specifically, Pearson's Chi-Squared Test) is a parametric goodness of fit test. The test works by dividing the data into a number of discrete classes, or "bins," and comparing the observed proportion of the data in each bin with the proportion expected under the model. As with the K-S Test, the program reports a test statistic summarizing these differences. In practice, if the proportions are significantly different, then the null hypothesis that the data arise from the proposed model is rejected. The name of the test comes from the distribution of the summed, normalized squared differences, which approximately follows the Chi-Squared Distribution. The critical value for rejection can be computed from a Chi-Squared Distribution with k − 1 degrees of freedom, where k is the number of bins used in the test; when the distribution's parameters are estimated from the data, one additional degree of freedom is subtracted for each estimated parameter.
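The binning and comparison above can be sketched as follows. This is an illustrative example, not the document's program; the sample, the number of bins, and the fitted normal model are assumptions. The statistic is computed explicitly so the degrees-of-freedom accounting is visible.

```python
import numpy as np
from scipy import stats

# Hypothetical sample for illustration.
rng = np.random.default_rng(7)
data = rng.normal(loc=0.0, scale=1.0, size=500)

# Form k bins with roughly equal expected counts, extended to cover
# the full support of the candidate model.
k = 8
edges = np.quantile(data, np.linspace(0.0, 1.0, k + 1))
edges[0], edges[-1] = -np.inf, np.inf
observed, _ = np.histogram(data, bins=edges)

# Expected counts per bin under a normal model fitted to the data.
mu, sigma = stats.norm.fit(data)
expected = len(data) * np.diff(stats.norm.cdf(edges, mu, sigma))

# Pearson statistic: sum of normalized squared differences.
chi2_stat = np.sum((observed - expected) ** 2 / expected)

# Two parameters (mu, sigma) were estimated from the data, so the
# degrees of freedom are k - 1 - 2.
critical = stats.chi2.ppf(0.95, df=k - 1 - 2)
reject = chi2_stat > critical
```

Quantile-based bin edges are one common design choice; fixed-width bins also work, but bins with very small expected counts weaken the Chi-Squared approximation.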

Anderson-Darling Test

The Anderson-Darling Test is a modification of the K-S Test that gives more weight to the tails of the distribution. Unlike the K-S Test, whose critical values do not depend on the distribution being tested, the critical values for the A-D Test are specific to the probability distribution under test. As in the K-S Test, if the test statistic exceeds the critical value, then the hypothesis that the sample data come from the proposed distribution is rejected.
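The distribution-specific critical values can be seen in a short SciPy sketch. This is an illustrative example, not the document's program; the sample and the choice of the normal family are assumptions. SciPy's `anderson` returns the statistic together with critical values tabulated for the chosen distribution family.

```python
import numpy as np
from scipy import stats

# Hypothetical sample for illustration.
rng = np.random.default_rng(3)
data = rng.normal(loc=5.0, scale=1.5, size=300)

# Test against the normal family; the returned critical values are
# specific to that family, unlike the K-S Test's critical values.
result = stats.anderson(data, dist="norm")

# Compare the statistic to the critical value at the 5% level.
idx = list(result.significance_level).index(5.0)
reject = result.statistic > result.critical_values[idx]
```

Because the critical values depend on the distribution family, `scipy.stats.anderson` supports only a fixed set of families (e.g., normal, exponential, logistic, Gumbel), whereas `kstest` accepts any fully specified CDF.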