# Non-parametric Tests: Definition And Types

**Non-parametric tests comprise a family of statistical tests that share one feature: they make no assumptions about the probability distribution** of the population from which the sample was drawn. These tests therefore apply when we do not know whether the population the sample comes from is normal or approximately normal.

Non-parametric tests are used quite frequently, because many variables fail to meet the parametric conditions: continuous quantitative measurement, normally distributed samples, equal variances, and balanced sampling.

When these prerequisites are not met, or there are serious doubts that they can be met, **non-parametric (distribution-free) tests are used.** They have the following characteristics:

- They are used much less than would be recommended (they are less well known by analysts).
- They are applicable to ranked (ordinal) data.
- They can be used when two series of observations come from different populations (populations in which the variable is not equally distributed).
- They are the only realistic alternative when the sample size is small.

## Classification of non-parametric tests

**Not everyone agrees with this classification.** However, Berlanga and Rubio (2012) summarized the main non-parametric tests as follows.

### Non-parametric tests for one sample

#### Pearson’s chi-square test

**It is a widely used test when the researcher wants to analyze the relationship between two categorical variables.** It is also widely used to evaluate the extent to which the data collected on a categorical variable (the empirical distribution) fit, or fail to fit, a given theoretical distribution (uniform, binomial, multinomial, etc.).
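
As a minimal sketch, a goodness-of-fit check against a uniform distribution can be run with `scipy.stats.chisquare` (the counts below are invented for illustration):

```python
from scipy.stats import chisquare

# Hypothetical counts of 96 die rolls; a fair die expects 16 per face.
observed = [16, 18, 16, 14, 12, 20]
stat, p = chisquare(observed)  # expected frequencies default to uniform

print(f"chi2 = {stat:.2f}, p = {p:.3f}")
# A p-value above the usual 0.05 threshold gives no evidence against fairness.
```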

#### Binomial Test

**This test allows us to find out whether a dichotomous variable follows a given probability model.** In particular, it lets us test the hypothesis that the observed proportion of successes conforms to the theoretical proportion of a binomial distribution.
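
A sketch with `scipy.stats.binomtest`, using an invented coin-tossing example:

```python
from scipy.stats import binomtest

# Hypothetical data: 7 heads in 20 tosses; is the coin fair (p = 0.5)?
result = binomtest(7, n=20, p=0.5)
print(f"p-value = {result.pvalue:.3f}")
# A large p-value gives no evidence against the fair-coin hypothesis.
```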

#### Runs Test

It is a test that allows us to determine whether the number of runs (R) observed in a sample of size *n* is large enough or small enough to reject the hypothesis of independence (or randomness) between the observations.

**A run is a sequence of consecutive observations sharing the same attribute or quality.** The presence of more or fewer runs than expected in a data series can indicate that an important variable affecting the results has been left out of consideration.
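
A minimal sketch of this test (commonly known as the runs test), using the normal approximation; the binary sequence and the helper name `runs_test` are ours:

```python
import math
from scipy.stats import norm

def runs_test(seq):
    """Two-sided runs test for randomness of a binary sequence
    (normal approximation; adequate for moderately large samples)."""
    n1 = sum(1 for x in seq if x)        # count of one category
    n2 = len(seq) - n1                   # count of the other category
    # Count runs: a new run starts whenever the value changes.
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    mean = 2 * n1 * n2 / (n1 + n2) + 1
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)
           / ((n1 + n2) ** 2 * (n1 + n2 - 1)))
    z = (runs - mean) / math.sqrt(var)
    return z, 2 * norm.sf(abs(z))

# Hypothetical binary sequence (1 = success, 0 = failure).
z, p = runs_test([1, 1, 0, 0, 1, 0, 1, 1, 0, 0])
print(f"z = {z:.2f}, p = {p:.3f}")
```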

#### Kolmogorov-Smirnov test (KS)

This test is used to test the null hypothesis that the distribution of a variable fits a given theoretical probability distribution (normal, exponential, or Poisson). **Whether or not the data fit a certain distribution will suggest some data analysis techniques** over others.
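
A sketch with `scipy.stats.kstest`; the sample here is built from evenly spaced quantiles of the standard normal, so by construction it fits the normal model closely:

```python
import numpy as np
from scipy.stats import kstest, norm

# Deterministic sample: evenly spaced quantiles of the standard normal.
sample = norm.ppf(np.linspace(0.01, 0.99, 50))
stat, p = kstest(sample, "norm")
print(f"D = {stat:.3f}, p = {p:.3f}")
# A high p-value means no evidence against normality.
```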

### Non-parametric tests for two related samples

#### McNemar test

The McNemar test is used to test hypotheses about the equality of proportions, specifically **in situations where each subject is measured twice.** Each subject's response is therefore obtained twice: once before and once after a specific event.
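
A minimal sketch of McNemar's test computed directly from the continuity-corrected chi-square statistic on the discordant pairs (the counts are invented for illustration):

```python
from scipy.stats import chi2

# Hypothetical paired binary outcomes before/after an intervention.
# Only the discordant pairs enter the statistic:
b = 15  # changed from "no" to "yes"
c = 5   # changed from "yes" to "no"

# McNemar's chi-square with continuity correction, 1 degree of freedom.
stat = (abs(b - c) - 1) ** 2 / (b + c)
p = chi2.sf(stat, df=1)
print(f"chi2 = {stat:.2f}, p = {p:.3f}")
# p < 0.05: the two proportions of change differ significantly.
```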

#### The sign test

It allows us to test the hypothesis of equality between two population medians. This test can be used to find out whether one variable tends to be greater than another, and also **to test the trend followed by a series of positive variables.**
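
Since the sign test only uses the direction of each within-pair difference, it can be sketched as a binomial test on the number of positive signs (the paired scores below are invented):

```python
from scipy.stats import binomtest

# Hypothetical paired scores; zero differences are discarded.
before = [12, 14, 9, 16, 11, 13, 10, 15, 12, 14]
after  = [15, 17, 8, 19, 14, 16, 12, 18, 11, 17]
diffs = [a - b for a, b in zip(after, before) if a != b]
n_pos = sum(1 for d in diffs if d > 0)

# Under equal medians, positive signs follow Binomial(n, 0.5).
result = binomtest(n_pos, n=len(diffs), p=0.5)
print(f"positive signs = {n_pos}/{len(diffs)}, p = {result.pvalue:.4f}")
```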

#### Wilcoxon test

The Wilcoxon test allows us to test the hypothesis of equality between two population medians, taking into account not only the sign but also the magnitude of the differences between paired observations.
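
A sketch with `scipy.stats.wilcoxon`, reusing the same kind of invented paired data as above (Wilcoxon additionally ranks the magnitudes of the differences):

```python
from scipy.stats import wilcoxon

# Hypothetical paired measurements for ten subjects.
before = [12, 14, 9, 16, 11, 13, 10, 15, 12, 14]
after  = [15, 17, 8, 19, 14, 16, 12, 18, 11, 17]
stat, p = wilcoxon(after, before)
print(f"W = {stat}, p = {p:.4f}")
# A small p-value suggests the two sets of measurements differ.
```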

### Non-parametric tests for k related samples

#### Friedman test

It is an extension of the Wilcoxon test. **It is therefore used for data recorded in more than two time periods, or for groups of three or more subjects,** with one subject from each group randomly assigned to one of three or more conditions.
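
A sketch with `scipy.stats.friedmanchisquare`, using invented scores for six subjects measured under three conditions:

```python
from scipy.stats import friedmanchisquare

# Hypothetical scores for the same 6 subjects under three conditions.
cond_a = [7, 5, 8, 6, 7, 9]
cond_b = [5, 4, 6, 5, 6, 7]
cond_c = [8, 6, 9, 7, 8, 9]
stat, p = friedmanchisquare(cond_a, cond_b, cond_c)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")
# A small p-value suggests the conditions do not all have the same effect.
```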

#### Cochran test

It is identical to the previous one, **but it applies when all the responses are binary.** Cochran's Q tests the hypothesis that several related dichotomous variables have the same proportion.
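
Since the responses are binary, Cochran's Q has a simple closed form; a sketch with invented pass/fail data (the helper name `cochrans_q` is ours):

```python
import numpy as np
from scipy.stats import chi2

def cochrans_q(x):
    """Cochran's Q for an (n subjects x k treatments) 0/1 matrix."""
    x = np.asarray(x)
    n, k = x.shape
    col = x.sum(axis=0)    # successes per treatment
    row = x.sum(axis=1)    # successes per subject
    total = x.sum()
    q = (k - 1) * (k * (col ** 2).sum() - total ** 2) \
        / (k * total - (row ** 2).sum())
    return q, chi2.sf(q, df=k - 1)  # Q is approximately chi-square

# Hypothetical pass/fail results for 6 subjects on 3 tasks.
data = [[1, 1, 0],
        [1, 1, 0],
        [1, 0, 0],
        [1, 1, 1],
        [0, 0, 0],
        [1, 1, 0]]
q, p = cochrans_q(data)
print(f"Q = {q:.2f}, p = {p:.4f}")
```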

#### Kendall’s W concordance coefficient

It has the same indications as the Friedman test. However, it **is mainly used to measure the agreement between sets of rankings** (for example, among judges).
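
Kendall's W has a simple closed form when there are no ties within a rater; a sketch with an invented rank matrix (the helper name `kendalls_w` is ours):

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's W from an (m raters x n items) matrix of ranks.
    Assumes no ties within a rater (the untied formula)."""
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape
    col_sums = ranks.sum(axis=0)                   # total rank per item
    s = ((col_sums - col_sums.mean()) ** 2).sum()  # spread of rank totals
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical example: three judges ranking four items identically,
# which is perfect agreement.
w = kendalls_w([[1, 2, 3, 4],
                [1, 2, 3, 4],
                [1, 2, 3, 4]])
print(f"W = {w:.2f}")  # W ranges from 0 (no agreement) to 1 (perfect)
```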

### Non-parametric tests for two independent samples

#### Mann-Whitney U test

It is equivalent to the Wilcoxon rank-sum test and to the Kruskal-Wallis test for two groups.
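
A sketch with `scipy.stats.mannwhitneyu`, using two invented, completely separated groups (the clearest possible case of a difference):

```python
from scipy.stats import mannwhitneyu

# Hypothetical scores from two independent groups with no overlap.
group_a = [1, 2, 3, 4, 5]
group_b = [6, 7, 8, 9, 10]
stat, p = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
# U = 0: no pair where group_a exceeds group_b, hence a very small p-value.
```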

#### Kolmogorov-Smirnov test

This test is used to test the hypothesis that two samples are from the same population.
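
A sketch with `scipy.stats.ks_2samp`; here the two invented samples are identical, so the maximum distance between their empirical distribution functions is zero and the same-population hypothesis is clearly not rejected:

```python
from scipy.stats import ks_2samp

# Hypothetical samples: identical data, the degenerate "same population" case.
sample_1 = list(range(30))
sample_2 = list(range(30))
stat, p = ks_2samp(sample_1, sample_2)
print(f"D = {stat}, p = {p}")
```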

#### Wald-Wolfowitz test

This test allows you to compare whether two samples with independent data come from populations with the same distribution.

#### Moses extreme reactions test

**It checks whether there is a difference in the degree of dispersion or variability of two distributions.** It focuses on the distribution of the control group and is a way of knowing to what extent the extreme values of the experimental group affect the distribution when combined with the control group.

### Non-parametric tests for k independent samples

#### Median test

**It compares two or more groups with respect to their medians.** Means are not used, either because the data do not satisfy the normality condition or because the variable is discrete quantitative. It is similar to the chi-square test.
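
A sketch with `scipy.stats.median_test`, which classifies every observation as above or below the grand median and then applies a chi-square test to the resulting table (the group scores are invented):

```python
from scipy.stats import median_test

# Hypothetical scores from three independent groups.
g1 = [10, 12, 14, 11, 13]
g2 = [15, 17, 16, 18, 14]
g3 = [9, 8, 11, 10, 12]
stat, p, med, table = median_test(g1, g2, g3)
print(f"p = {p:.4f}, grand median = {med}")
# A small p-value suggests the group medians are not all equal.
```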

#### Jonckheere-Terpstra test

It is the most powerful test when analyzing an ascending or descending order of the K populations from which the samples are drawn.

#### Kruskal-Wallis H test

The Kruskal-Wallis H test is an extension of the Mann-Whitney U test and is a good alternative to analysis of variance (ANOVA) when its assumptions are not met.
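
A sketch with `scipy.stats.kruskal`, applied to the same kind of invented data one might otherwise feed to a one-way ANOVA:

```python
from scipy.stats import kruskal

# Hypothetical scores from three independent groups.
g1 = [10, 12, 14, 11, 13]
g2 = [15, 17, 16, 18, 14]
g3 = [9, 8, 11, 10, 12]
stat, p = kruskal(g1, g2, g3)
print(f"H = {stat:.2f}, p = {p:.4f}")
# A small p-value suggests at least one group distribution differs.
```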

## Conclusions

**These tests are used when the data are not normally distributed.** In other words, when the data are not on a ratio scale or when, even if they are, there are doubts as to whether the distribution of a variable fits the normal curve.

On the other hand, it is also true that many parametric tests are relatively robust to violations of their assumptions. However, if better-suited tests exist, why not use them?