Statistical Techniques: Non-Parametric Procedures


The most common reasons for selecting a non-parametric test over its parametric alternative. Parametric statistical tests are a family of procedures that share a set of assumptions about population parameters, such as the assumption that scores are drawn from a normally distributed population. Non-parametric tests, by contrast, are designed for nominal and ordinal variables and make very few assumptions about any population parameter (Field, 2009).

Non-parametric tests can be used in awkward situations, for example where a normal distribution is clearly absent. They can also be used where the data collected are ranked (either from highest to lowest or from lowest to highest) rather than recorded as scores, and they are relatively unaffected by violations of distributional assumptions; in other words, they are robust (Bennett, 2006).

Another advantage of non-parametric tests over parametric ones is that they are more widely applicable: they can be used even in situations where a parametric test would also be appropriate. For example, a score can always be converted into a rank, whereas a rank cannot be converted back into a score (Bennett, 2006).

Non-parametric procedures are also useful to researchers who know nothing about the parameters of the variable of interest in the population being studied (Bennett, 2006).

A further advantage of the non-parametric approach is that it works with the distribution of the variable of interest directly, without depending on estimates of parameters such as the mean and the standard deviation (Bennett, 2006).

Non-parametric tests therefore allow a more relaxed approach to the data being analyzed.

In addition, non-parametric tests are usually based on the ordering of the data: the observations are ranked systematically, whereas parametric tests work with the distribution of scores themselves rather than with any ordering. Working with ranks makes it straightforward to calculate the probability of a given set of data (Field, 2009).

Most hypothesis tests in statistics assume that the population follows a specific probability distribution, yet there are situations in which that assumption cannot be justified. Non-parametric tests are valuable precisely when the population cannot be shown to follow a particular distribution, and they can also serve as a quicker alternative to more complicated parametric tests (Bennett, 2006).

Non-parametric tests can also be used when the population is highly skewed, a situation in which parametric tests are not appropriate.

Non-parametric tests are also easier to learn and to apply than their parametric counterparts, and they are available for data that are merely classificatory. Moreover, the probability statements obtained from most non-parametric statistics are exact regardless of the shape of the population distribution from which the sample is drawn, provided the sample is random (Field, 2009).

Non-parametric tests can also handle data that are inherently in ranks, as well as data whose apparently numerical scores have only the strength of ranks. In such cases, and especially when sample sizes are very small, there is often no alternative to using a non-parametric test.

Finally, non-parametric methods can treat samples made up of observations drawn from several different populations, something that parametric tests generally cannot handle (Bennett, 2006).

The issues of statistical power in non-parametric tests (as compared to their parametric counterparts). Which type tends to be more powerful and why?

The statistical power of a test is the probability of rejecting the null hypothesis when it is in fact false and should be rejected. The power of parametric tests is usually calculated from graphs, tables, and formulas based on the underlying distribution, whereas one of the most common ways of estimating the power of a non-parametric test is Monte Carlo simulation (Field, 2009).
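
As an illustration of the Monte Carlo approach just mentioned, the sketch below estimates the power of a paired t-test and of the Wilcoxon signed-rank test for a modest shift in normally distributed paired differences. It is written in Python with NumPy and SciPy purely for illustration (the original exercise uses SPSS), and the sample size, effect size, and number of simulations are arbitrary choices.

```python
# Illustrative Monte Carlo power estimate: paired t-test versus the
# Wilcoxon signed-rank test. Sample size, effect size, alpha, and the
# number of simulations are arbitrary choices for this sketch.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, shift, alpha, n_sims = 25, 0.5, 0.05, 2000

t_rejections = wilcoxon_rejections = 0
for _ in range(n_sims):
    # Paired differences drawn from a normal distribution with a true shift.
    diffs = rng.normal(loc=shift, scale=1.0, size=n)
    if stats.ttest_1samp(diffs, 0.0).pvalue < alpha:
        t_rejections += 1
    if stats.wilcoxon(diffs).pvalue < alpha:
        wilcoxon_rejections += 1

print(f"Estimated power, paired t-test:        {t_rejections / n_sims:.3f}")
print(f"Estimated power, Wilcoxon signed-rank: {wilcoxon_rejections / n_sims:.3f}")
```

Under normality the t-test should come out slightly more powerful, which is the point developed in the paragraphs that follow.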

Statistical power is generally lower for non-parametric tests than for parametric tests, partly because a non-parametric test does not test exactly the same hypothesis as its parametric counterpart. Other things being equal, non-parametric techniques tend to be less powerful tests of significance than their parametric counterparts.

The lower power of non-parametric tests is partly explained by the loss of precision that comes from replacing scores with ranks, and by the false sense of security they can give. A further reason is that they test whole distributions rather than specific parameters and cannot deal with higher-order interactions.

When the null hypothesis is in fact false, non-parametric tests therefore have less power to detect a meaningful relationship than their parametric counterparts.

Non-parametric tests also tend to be less sensitive, and hence more likely to miss an effect of the independent variable on the dependent variable. Their power efficiency is therefore lower than that of the corresponding parametric tests.

A larger sample is therefore needed for a non-parametric test than for a parametric test to detect an effect at a given significance level. The power efficiency of two tests can be expressed as shown below:

Considering two tests, A and B, the power efficiency of test A relative to test B = [N(B) / N(A)] × 100

where N(A) is the sample size that test A requires to show a statistically significant effect at, say, the five percent level, and N(B) is the sample size that test B requires to show a statistically significant effect at the same level.
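
For example, if a parametric test B needs N(B) = 90 observations to detect an effect at the five percent level while the non-parametric test A needs N(A) = 100 observations to detect the same effect, the power efficiency of test A relative to test B is (90 / 100) × 100 = 90 percent (these figures are illustrative rather than drawn from the assignment data).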

Identifying the appropriate non-parametric counterparts for each of the following parametric tests

Dependent t-test

The dependent t-test compares means between subjects that are related or matched, using paired sets of scores. Its non-parametric counterpart is the Wilcoxon matched-pairs (signed-rank) test, which is calculated in much the same way as the rank-sum statistic W(s), except that it is applied to the difference scores of matched pairs. When computing the Wilcoxon statistic, the differences are ranked from the largest to the smallest in absolute value, regardless of sign. The ranks of the positive differences are summed to give T+, the ranks of the negative differences are summed to give T−, and whichever of the two sums is smaller is taken as the test statistic.
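
A minimal sketch of the matched-pairs computation just described, using SciPy's wilcoxon function on made-up before and after scores (the data, like the use of Python rather than SPSS, are purely illustrative):

```python
# Wilcoxon matched-pairs (signed-rank) test on illustrative paired scores.
from scipy import stats

before = [72, 65, 80, 58, 77, 69, 74, 61]
after_ = [78, 70, 79, 66, 82, 75, 73, 68]

# SciPy ranks the absolute differences and sums the positive and negative
# ranks internally; for a two-sided test the reported statistic is the
# smaller of the two sums, matching the description above.
result = stats.wilcoxon(before, after_)
print(result.statistic, result.pvalue)
```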

Independent samples t-test

The independent samples t-test compares the means of two groups whose observations are unrelated to one another. Its non-parametric counterparts include the McNemar chi-square test, the Mann-Whitney U test, and the Wilcoxon rank-sum test, all of which are used when comparing two independent samples.

The Wilcoxon signed-rank test is used when a researcher wants to test whether the median of a symmetric population is zero.

The Wilcoxon rank-sum test and the equivalent Mann-Whitney U test are used to test whether two samples are drawn from the same population. When the two populations are shifted relative to each other, this test is the preferred non-parametric counterpart of the independent samples t-test, rather than the McNemar chi-square test.

The McNemar chi-square test, on the other hand, is useful for comparing the observed frequencies of a variable in a group with the frequencies that are expected. It draws on the chi-square family of probability distributions, whose shape varies considerably with the degrees of freedom. In a one-way chi-square, the observed frequencies are compared with expected frequencies derived for a single group; in a two-way chi-square, the observed frequencies are compared with expected frequencies derived from the marginal totals of the cross-tabulation table. The following formula is used when calculating the chi-square statistic:

χ² = Σ [(f(o) − f(e))² / f(e)]
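
To make the formula concrete, the sketch below applies it to illustrative observed and expected frequencies and checks the result against SciPy's chisquare function (the frequencies are invented for the example, and Python is used only for illustration):

```python
# Chi-square statistic computed directly from the formula
# X^2 = sum((f_o - f_e)^2 / f_e), then checked with SciPy.
from scipy import stats

observed = [18, 22, 20, 40]   # illustrative observed frequencies
expected = [25, 25, 25, 25]   # illustrative expected frequencies

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(chi_sq)  # value from the formula above

print(stats.chisquare(f_obs=observed, f_exp=expected))  # same statistic plus p-value
```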

The following is an example of the Mann-Whitney U test: a two-tailed null hypothesis stating that no difference exists between two groups, for example that there is no difference between the clothing of male and female students in a particular institution or class, where:

H(o) states that male and female students have the same mode of clothing, and H(A) states that male and female students do not have the same mode of clothing.
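
Because the Mann-Whitney U test works on two independent samples of ordinal or continuous data, a sketch of the computation might look as follows; the two groups of ratings are invented for illustration and do not correspond to the clothing example above.

```python
# Mann-Whitney U test comparing two independent samples of ordinal scores.
from scipy import stats

group_a = [3, 5, 4, 2, 5, 4, 3, 4]   # illustrative ratings from one group
group_b = [2, 3, 2, 1, 3, 2, 4, 2]   # illustrative ratings from the other

result = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(result.statistic, result.pvalue)
```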

Repeated measures ANOVA (one variable)

This procedure is recommended when the same variable is measured two or more times in the same sample (Bennett, 2006).

The Cochran Q test, one of the non-parametric counterparts of the one-variable repeated measures ANOVA, applies when the variable is measured in categories, for example "good" and "bad".

Friedman's two-way analysis of variance by ranks, on the other hand, can be seen as a generalization of the Wilcoxon test and is appropriate for designs with more than two matched samples. The Friedman test is the non-parametric alternative to the repeated measures ANOVA when the assumptions underlying that test are not satisfied. For normally distributed data, the asymptotic relative efficiency (ARE) of the Friedman test relative to the F test is taken to be 0.955k / (k + 1), where k is the number of treatment groups (Field, 2009).
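
A minimal sketch of the Friedman test on three repeated measurements of the same subjects, using SciPy and invented data; with k = 3 treatment groups, the asymptotic relative efficiency quoted above would be 0.955 × 3 / 4 ≈ 0.72.

```python
# Friedman test on three repeated measurements of the same subjects
# (illustrative data standing in for three treatment conditions).
from scipy import stats

condition_1 = [7.0, 6.5, 8.1, 5.9, 7.4, 6.8]
condition_2 = [7.8, 7.1, 8.0, 6.6, 8.2, 7.5]
condition_3 = [6.9, 6.4, 7.7, 5.8, 7.1, 6.6]

stat, p_value = stats.friedmanchisquare(condition_1, condition_2, condition_3)
print(stat, p_value)
```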

One way ANOVA (independent)

This procedure is used to test for differences among two or more independent groups, although in practice it is applied to at least three groups, since the t-test already covers the two-group case. Its non-parametric counterpart is the Kruskal-Wallis analysis of variance by ranks, which is likewise used when comparing two or more independent groups (Field, 2009).
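
A brief sketch of the Kruskal-Wallis test on three independent groups, using invented values that loosely echo the blood-pressure settings analysed later in the SPSS activity (the numbers are not the assignment's data):

```python
# Kruskal-Wallis H test across three independent groups (illustrative data).
from scipy import stats

home      = [120, 118, 125, 119, 123]
office    = [131, 135, 128, 133, 130]
classroom = [117, 121, 116, 119, 120]

stat, p_value = stats.kruskal(home, office, classroom)
print(stat, p_value)
```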

Pearson correlation

The non-parametric counterpart of the Pearson correlation is the Spearman rank correlation coefficient. Whereas Pearson's r assesses the linear association between two variables, Spearman's coefficient assesses the association between their ranks (Bennett, 2006).
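
The sketch below computes both coefficients on the same invented data so the parametric and non-parametric measures can be compared side by side (Python and SciPy are used only for illustration):

```python
# Spearman rank correlation as the non-parametric counterpart of Pearson's r.
from scipy import stats

x = [2, 4, 5, 7, 8, 10, 12]
y = [1, 3, 4, 8, 9, 11, 15]

rho, p_value = stats.spearmanr(x, y)
print(rho, p_value)

# For comparison, Pearson's r on the same (illustrative) data:
print(stats.pearsonr(x, y))
```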

SPSS Activity

Non-parametric version of the dependent t-test

A dependent t-test is also referred to as a paired samples t-test. Its main use is to compare means between related or matched subjects. For the test to be valid, the data must satisfy its assumptions: the observations are paired measurements from the same subjects, and the population of differences should be normally distributed. The aim is to test whether there is a difference between the two means under consideration. To state the hypotheses, we need a problem statement that motivates the test; here the problem statement is: are the result scores, on average, higher after a creative writing course? The hypotheses of the test are defined by two statements (Field, 2009).

They are the null and the alternative hypothesis, as follows:

  • Ho: m = 0 (There is no significant difference between the means) Null hypothesis
  • H1: m ≠ 0 (There is a significant difference between the means) Alternative hypothesis

From the computed results, the significance value is 0.011 and the confidence interval for the mean difference runs from −5.623 to −0.777. Since the interval does not include zero, there is a significant difference between the means, and the null hypothesis is rejected in favor of the alternative hypothesis: result scores are notably different after students have taken the creative writing course.
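
The SPSS output itself is not reproduced here, but the same paired comparison can be sketched in Python with invented pre- and post-course scores (the values are not the assignment's data):

```python
# Paired-samples comparison: parametric dependent t-test alongside its
# non-parametric counterpart, the Wilcoxon signed-rank test.
from scipy import stats

pre_course  = [61, 58, 66, 70, 55, 63, 68, 60, 64, 59]
post_course = [66, 62, 65, 75, 60, 69, 71, 63, 70, 64]

print(stats.ttest_rel(pre_course, post_course))   # dependent (paired) t-test
print(stats.wilcoxon(pre_course, post_course))    # Wilcoxon signed-rank test
```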

Non-parametric version of the independent t-test

For the non-parametric version of the independent t-test, it helps to recall what the related parametric procedures do. Correlation measures the strength of the relationship between two or more variables; regression measures the relationship between variables and predicts one variable from another; and a means comparison tests whether two group means are significantly different from each other or essentially similar (Field, 2009).

The hypotheses of the test are defined by two statements, the null and the alternative hypothesis, as follows (a short sketch of the comparison follows the hypotheses):

  • Ho: m = 0 (There is no significant difference between the two test results) Null
  • H1: m ≠ 0 (There is a significant difference between the two test results) Alternative
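
A sketch of this independent-samples comparison, running the parametric t-test alongside its non-parametric counterpart on invented test scores (the numbers are not the assignment's data):

```python
# Independent-samples comparison: t-test alongside the Mann-Whitney U test.
from scipy import stats

test_1 = [72, 68, 75, 80, 65, 77, 70, 74]   # illustrative scores, group 1
test_2 = [66, 70, 64, 72, 61, 69, 67, 65]   # illustrative scores, group 2

print(stats.ttest_ind(test_1, test_2))                              # parametric
print(stats.mannwhitneyu(test_1, test_2, alternative="two-sided"))  # non-parametric
```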

Non-parametric version of the single factor ANOVA

From the descriptive statistics for the non-parametric version of the single-factor ANOVA, it is clear that the individual variables and the paired variables differ. The systolic means (Fig 8) are 122.90, 132.60, and 118.80 for home, the doctors' office, and the classroom respectively, so the mean for readings taken at the doctors' office is higher than the other two. The standard deviations are 7.094, 8.369, and 5.554 for home, the doctors' office, and the classroom respectively, showing that the variation in blood pressure taken at the doctors' office is the highest while that taken in the classroom is the lowest. The F value (Fig 11) is 9.964 with a significance value of 0.001, indicating a significant difference among the groups. A post hoc test was carried out to pinpoint which group differs from the others; the results (Fig 13) show that readings taken at the doctors' office differ from those taken in the classroom and at home, while readings taken in the classroom and at home do not differ from each other.

The diastolic means (Fig 8) are 82.90, 83.20, and 82.60 for home, the doctors' office, and the classroom respectively; the mean for readings taken at the doctors' office is again slightly higher than the other two. The standard deviations are 2.685, 3.360, and 2.675 for home, the doctors' office, and the classroom respectively, so the variation in blood pressure taken at the doctors' office is the highest while readings taken in the classroom and at home show similar variation. The F value (Fig 12) is 0.105 with a significance value of 0.9, indicating no significant difference among the groups, and the post hoc test (Fig 14) confirms this, since no group differs from the others.

Effect size measures the strength of the relationship between two variables in a sample or a population, complementing inferential statistics. It is also essential for determining the number of observations required to establish a relationship. Generally, as the sample size increases, the distribution of the test statistic tends toward an F distribution (Field, 2009).
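
To make the effect-size idea concrete, the sketch below computes eta squared for a one-way design from its sums of squares; the data are invented rather than taken from the blood-pressure readings analysed above.

```python
# Eta squared as a simple effect-size measure for a one-way design:
# eta^2 = SS_between / SS_total (illustrative data, not the assignment's).
import numpy as np

groups = [
    np.array([120.0, 118.0, 125.0, 119.0, 123.0]),
    np.array([131.0, 135.0, 128.0, 133.0, 130.0]),
    np.array([117.0, 121.0, 116.0, 119.0, 120.0]),
]

all_values = np.concatenate(groups)
grand_mean = all_values.mean()

ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_total = ((all_values - grand_mean) ** 2).sum()

print(f"eta squared = {ss_between / ss_total:.3f}")
```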

References

Bennett, B. (2006). Advanced Statistical Techniques. New York: McMillan Publishers.

Field, A. (2009). Discovering statistics using SPSS. Los Angeles: Sage.
