Measurement for a Quantitative Research Plan

Abstract

Reliability and validity are important constructs in quantitative research. Researchers must enhance the validity and reliability of their tests, scales, and measurements to generate significant findings. This paper examines the approaches the researcher will use to boost the validity and reliability of a study examining the effect of social/medical support on the medical compliance of African American women with HIV.

Levels of Measurement

A level of measurement is the relationship among the different values of a study variable (Creswell, 2009, p. 141). Each variable, whether discrete or continuous, contains ordered categories or constructs that can be assigned values. The study will use an ordinal measurement to rank the values of each construct or item included in the interview. A level of measurement operationalizes the attributes of a variable, enabling the researcher to determine the appropriate statistical analyses for the data.

In the study, the independent variable (IV) is medical/social support, while the dependent variable (DV) is the level of medical compliance. The key measures of the IV will include perceived informational support, emotional support, and support networks (Zuckerman & Antoni, 2009). According to Nation (2007), ordinal and nominal scales give the researcher more degrees of freedom than higher-level measurements such as interval and ratio scales. In this view, the study will use an ordinal scale, in which the attributes of a construct are rank-ordered but the distances between the categories carry no meaning (Nation, 2007, p. 17).

In addition, the values of the medical compliance (DV) indicators, namely CD4 count, symptom remission, and the quantity of metabolites in urine, can be coded numerically. Thus, an ordinal scale will be useful in constructing meaningful ranks or orders for the attributes of the variables. Each patient's aggregate score on these variables will predict how well he or she adheres to the treatment guidelines, as sketched below.
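
For illustration, the ordinal coding described above could be implemented as follows. This is a minimal Python/pandas sketch; the category labels, example responses, and CD4 bands are illustrative assumptions, not part of the study instrument.

```python
# Minimal sketch of ordinal coding for the IV and DV constructs (hypothetical values).
import pandas as pd

# Hypothetical rank-ordered levels for one IV construct (perceived emotional support)
support_levels = ["very low", "low", "moderate", "high", "very high"]
emotional_support = pd.Categorical(
    ["moderate", "high", "very low", "high"],      # example responses
    categories=support_levels,
    ordered=True,                                  # ordinal: order matters, distances do not
)

# Hypothetical ordinal banding of a DV indicator (CD4 count grouped into ranks)
cd4_bands = pd.cut(
    pd.Series([180, 420, 650, 90]),                # example CD4 counts (cells/mm^3)
    bins=[0, 200, 500, 1500],
    labels=["low", "medium", "high"],
)

# Numeric codes can then be used for rank-based (non-parametric) analyses
print(emotional_support.codes)            # e.g. [2 3 0 3]
print(cd4_bands.cat.codes.tolist())
```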

Validity

In content validity, the researcher cross-checks each operationalized construct against its content (Blaxter, Hughes & Tight, 2006). It entails defining the criteria that make up the attributes of the content of a program or intervention. In the proposed study, the content domain of the intervention will include a description of the target population (HIV-positive African American women), age, information on ARV use, and self-care methods, among others. These criteria will be used as a checklist to measure the extent to which the IV and DV represent the constructs of the content domain.

Empirical or statistical validity measures the predictive ability of an operationalized construct. In the study, it is theorized that a measure of medical/social support should predict medical compliance in HIV-positive patients. The researcher will test the measures of the IV, i.e., informational support, emotional support, and support networks, on in-patients under a treatment regimen. A strong correlation between the values will provide evidence of the empirical validity of the measures in predicting medical compliance in the target group.
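
The empirical-validity check could be sketched as a rank correlation between a support measure and a compliance score, as below. The data and variable names are invented for illustration only; Spearman's correlation is used because both measures are ordinal.

```python
# Sketch: correlating ordinal support scores with ordinal compliance ranks (hypothetical data).
import numpy as np
from scipy import stats

informational_support = np.array([4, 2, 5, 3, 1, 4, 5, 2])   # 1-5 ratings (hypothetical)
compliance_rank       = np.array([4, 2, 5, 4, 1, 3, 5, 2])   # 1-5 adherence ranks (hypothetical)

rho, p_value = stats.spearmanr(informational_support, compliance_rank)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A strong, significant rho would support the empirical validity of the measure.
```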

Construct validity estimates the extent to which the conclusions made correspond to the theoretical concepts of the constructs. It entails generalizing the implemented measures to the concept of the measures (Trochim, 2006). A valid construct evaluates the attribute it was meant to measure. In this study, it is hypothesized that informational/medical support, emotional support, and support networks will enhance medical adherence among the target patient population. The researcher will seek to prove that this theoretical relationship occurs in reality, using tests of significance such as the t-test and ANOVA (Frankfort-Nachmias & Nachmias, 2008, p. 57). Significant test results will indicate the level of association between the concepts and the attributes measured and, thus, provide evidence for the validity of the theorized constructs.
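
The significance tests named above could be run as in the following sketch, which uses scipy.stats with hypothetical compliance ratings for respondents grouped by perceived support level; the group data are assumptions for illustration.

```python
# Sketch: t-test and one-way ANOVA on hypothetical compliance scores by support group.
from scipy import stats

low_support_compliance  = [2, 3, 2, 1, 3, 2]
mid_support_compliance  = [3, 3, 4, 2, 3, 4]
high_support_compliance = [4, 5, 4, 3, 5, 4]

# Independent-samples t-test: low vs. high support groups
t_stat, p_t = stats.ttest_ind(low_support_compliance, high_support_compliance)

# One-way ANOVA across all three support groups
f_stat, p_f = stats.f_oneway(low_support_compliance, mid_support_compliance, high_support_compliance)

print(f"t = {t_stat:.2f} (p = {p_t:.3f}); F = {f_stat:.2f} (p = {p_f:.3f})")
```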

Reliability of the Measurement

In quantitative research, reliability describes the repeatability of the measures (Shuttleworth, 2011, para. 11). It reflects the quality of a study's measures. A reliable measurement procedure yields consistent results. One way the investigator will enhance the reliability of the measurements is by utilizing multiple sources of statistics provided by different HIV clinical program coordinators. The statistics, collected through interviews, will highlight the educational and medical programs implemented in the United States. In this study, more than one coordinator will provide the data, which will help eliminate the researcher and measurement biases that affect reliability.

The coordinators use structured interviews to score the impact of social/medical support on compliance over a specific duration. Researcher bias may arise due to adaptation effects, i.e., the interviewer gaining experience over the course of the study, which introduces systematic errors into the measurement of the constructs and limits reliability. Relying on data from different coordinators and checking the measurements for internal consistency will therefore help minimize researcher bias and enhance the reliability of the results.

The second method will involve computing the correlation between the datasets provided by different coordinators to estimate their reliability. According to Shuttleworth (2011), inter-rater reliability estimated in SPSS indicates the extent of agreement between measurements (para. 7). Thus, the approach will help determine how reliable the statistics are for the study. The study will draw data from coordinators stationed at distinct locations across the US. The investigator will cross-check the data submitted by each coordinator for internal consistency. A strong association between the individual values will indicate the reliability of each coordinator's measurements.
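
Although the plan cites SPSS for the inter-rater procedure, the same idea can be sketched in Python, as below. Two coordinators' ratings of the same respondents (hypothetical 1-5 scores) are compared using a weighted Cohen's kappa and a Spearman correlation; the ratings are invented for illustration.

```python
# Sketch: inter-rater agreement between two coordinators (hypothetical ratings).
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

coordinator_a = np.array([4, 3, 5, 2, 4, 1, 3, 5])
coordinator_b = np.array([4, 3, 4, 2, 5, 1, 3, 5])

kappa = cohen_kappa_score(coordinator_a, coordinator_b, weights="quadratic")
rho, _ = stats.spearmanr(coordinator_a, coordinator_b)
print(f"weighted kappa = {kappa:.2f}, Spearman rho = {rho:.2f}")
# Values close to 1 indicate strong agreement between coordinators.
```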

Strengths and Limitations of the Measurement Instrument

The study is based on statistics provided by coordinators who interviewed the subjects on compliance and social/medical support. A key strength of closed-ended interview protocols is the ability to yield precise information pertinent to the research. Thus, they enhance content validity because the interviewer can cross-check the responses with the predefined checklist items.

In collecting data, interviews can be used to confirm theoretical relationships between measures, and thus, ascertain construct validity. Fraenkel and Wallen (2003) identify exploration and confirmation as the cornerstones of interviews (p. 112). Thus, the ability to achieve construct validity and content validity is a key strength of the interview protocol. A well-structured interview protocol can be applied to different respondents. The coordinators used the same item list on the interview protocol to interview the respondents, obtaining comparable measurements. Thus, the ability of the protocol to generate comparable measurements makes it a reliable instrument.

The weakness of the interview protocol lies in the accuracy of the responses expressed as scores. The recorded score may not depict the true value of the response due to measurement errors (Clark & Watson, 2007). As a result, measurements may be overestimated or underestimated. A systematic error may arise due to social desirability bias, whereby the participant gives responses deemed favorable during the interview. Acquiescence bias, whereby participants agree (or disagree) with every statement, also weakens the reliability of interviews (Clark & Watson, 2007, p. 312). Unstructured questions may also elicit inconsistent responses from the respondents, affecting the reliability of the measures.

The Appropriate Scale for the Study

The study will use a 5-point Likert (summative) scale to measure the variables. The scale relies on a unidirectional rating of the participants' responses from one to five or from one to seven (Bertram, 2010). A social/medical support scale, the multidimensional perceived social support scale (MPSSS), is appropriate for this study for three reasons (Smallbone & Quinton, 2004, p. 157). First, the study will attempt to estimate the degree of social/medical support (informational, emotional, and support networks) as perceived by the respondents. Therefore, a 5-point social/medical support scale will indicate each respondent's level of agreement with the concepts and enable the researcher to compute an aggregate rating.

Similarly, the dependent variable (medical compliance) can be rated on a 5-point scale indicating how often a respondent enrolled in the social/medical support program takes his or her medicine. Second, the MPSSS, which is a Likert-type scale, generates ordinal data that can be analyzed with non-parametric tests to compare responses. Since the study uses ordinal measurement, a Likert-type scale is appropriate for measuring the variables. Third, the MPSSS, unlike more complex instruments, requires little effort to read and complete, making it appropriate for measuring the participants' perceptions. In this study, the structured interview questions in the MPSSS used by the coordinators contain rank-ordered responses from the lowest to the highest rank.
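
As a rough illustration, 5-point Likert responses on the support items could be aggregated into a summative score per respondent as in the following sketch. The item names and ratings are hypothetical placeholders, not the actual MPSSS wording.

```python
# Sketch: summative (Likert) scoring of hypothetical 5-point responses.
import pandas as pd

responses = pd.DataFrame(
    {
        "informational_support": [4, 2, 5, 3],
        "emotional_support":     [5, 3, 4, 2],
        "support_network":       [4, 2, 5, 3],
        "takes_medication":      [5, 2, 4, 3],   # DV: how often medication is taken
    }
)

# Summative score across the IV items for each respondent
responses["support_total"] = responses[
    ["informational_support", "emotional_support", "support_network"]
].sum(axis=1)

print(responses)
```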

Reliability and Validity of the Scale

Various methods will be used to ascertain the validity and reliability of the scale. Principal components analysis will be used to ascertain the construct validity of the factors, i.e., to examine how they account for the variance within the dataset (Shuttleworth, 2011). The data will be drawn from multiple sources to reduce the effects of acquiescence and social desirability bias. Cronbach's alpha coefficient will be used to estimate the reliability of the developed scale for measuring perceived medical/social support among African American women with HIV. The value of the coefficient determines the reliability of the scale. In general, a value of 0.80 or higher (α ≥ 0.80) is considered acceptable, indicating that the scale is reliable for the measurements (Lyubomirsky & Lepper, 2007).
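
The alpha computation could be sketched as follows, using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The item-by-respondent matrix below is hypothetical.

```python
# Sketch: Cronbach's alpha for a set of scale items (hypothetical ratings).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point ratings: 6 respondents x 4 items
ratings = np.array([
    [4, 5, 4, 4],
    [2, 3, 2, 2],
    [5, 4, 5, 5],
    [3, 3, 4, 3],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])

print(f"alpha = {cronbach_alpha(ratings):.2f}")   # a value >= 0.80 would be judged acceptable
```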

Measurement errors affect the reliability of the scale. The study will use the test-retest method to ascertain the reliability of the medical/social support scale over time (Shuttleworth, 2011). The respondents will take the test after completing one medical/social support program and again before enrolling in the next one. It is anticipated that a strong correlation (r ≈ 0.9) will exist between the test and the retest, confirming the reliability of the scale. A second method is the split-half test, whereby similar results between the two halves of the scale confirm internal consistency (Drost, 2009, p. 109). The reliability of the scale will be further verified using the split-half approach, as sketched below.
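
The test-retest and split-half checks could be computed as in the following sketch, with the split-half estimate adjusted by the Spearman-Brown formula, r_sb = 2r / (1 + r). All scores are invented for illustration.

```python
# Sketch: test-retest and split-half reliability on hypothetical scale scores.
import numpy as np
from scipy import stats

# Test-retest: total scale scores for the same respondents at two time points
test   = np.array([18, 12, 20, 15, 9, 17])
retest = np.array([17, 13, 20, 14, 10, 18])
r_tt, _ = stats.pearsonr(test, retest)

# Split-half: totals of odd-numbered vs. even-numbered items (hypothetical)
odd_half  = np.array([9, 6, 10, 8, 4, 9])
even_half = np.array([9, 6, 10, 7, 5, 8])
r_half, _ = stats.pearsonr(odd_half, even_half)
r_sb = 2 * r_half / (1 + r_half)          # Spearman-Brown correction

print(f"test-retest r = {r_tt:.2f}, split-half (Spearman-Brown) = {r_sb:.2f}")
```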

Parallel-forms reliability is another method the researcher will use to test the scale. The method involves formulating questions that focus on a particular construct and dividing them into two equivalent sets (Shuttleworth, 2011). The question sets are then administered to the same sample of respondents. The level of correlation between the two sets indicates the reliability of the scale. The researcher will create several items that address the same construct and administer them to the same respondents to estimate the reliability and consistency of the responses. Reproducible results will indicate that the scale is not prone to measurement errors and, thus, can give reliable estimates of the constructs.
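
In code, the parallel-forms estimate reduces to a correlation between the two equivalent forms completed by the same respondents; the form totals below are hypothetical.

```python
# Sketch: parallel-forms reliability as the correlation between Form A and Form B totals.
import numpy as np
from scipy import stats

form_a = np.array([16, 11, 19, 14, 8, 17])
form_b = np.array([15, 12, 18, 14, 9, 16])

r_parallel, _ = stats.pearsonr(form_a, form_b)
print(f"parallel-forms r = {r_parallel:.2f}")
```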

The Appropriate Test

The study will use a specific test to measure medication adherence by the respondents participating in the social/medical support programs. The Medication Event Monitoring System (MEMS) tests medical compliance over a specific duration (Kaya & Celik, 2012). The MEMS is an appropriate test for this study for measuring hospital visits, hospitalizations, and fatalities involving African American women with HIV. The test will also be used to monitor key variables such as CD4 count and weight in patients.

The MEMS is a criterion-referenced test (CRT). According to the National Center for Fair and Open Testing (2007), CRTs measure a participant's understanding of a particular body of knowledge and skills (para. 6). They often entail multiple-choice questions to test the participant's knowledge and skills pertinent to a specific subject area. The MEMS qualifies as a criterion-referenced test because it evaluates the respondent's medical adherence over a duration using variables such as hospital visits and CD4 count. Therefore, it is possible to distinguish compliant patients from noncompliant ones based on a predetermined passing score. Higher scores would predict medication adherence by a patient enrolled in the social/medical support programs.
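
The criterion-referenced classification described above amounts to comparing each respondent's adherence score against a predetermined passing score, as in the following sketch. The cut-off of 80 is a hypothetical illustration, not a MEMS standard.

```python
# Sketch: classifying respondents against a hypothetical passing score.
ADHERENCE_CUTOFF = 80  # hypothetical passing score (percent of doses taken)

def classify_compliance(adherence_scores: list[float]) -> list[str]:
    """Label each score as 'compliant' or 'noncompliant' against the cut-off."""
    return [
        "compliant" if score >= ADHERENCE_CUTOFF else "noncompliant"
        for score in adherence_scores
    ]

print(classify_compliance([95.0, 72.5, 88.0, 60.0]))
# ['compliant', 'noncompliant', 'compliant', 'noncompliant']
```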

The passing score is determined from the set performance standards. The content of the social/medical support programs for African American women with HIV in the US is professionally determined. The aim is to ensure that the programs cover important skills that would make the participants proficient in self-care. Thus, MEMS, which is based on standards of social support, will be an effective test for self-care proficiency and medical compliance.

The Population Used for the Scale and Test

The MPSSS can be applied to a patient population to measure their perception of the social/medical support received. In this study, the scale will be used with African American patients with HIV who receive support from various programs across the country. The MEMS test monitors drug adherence among patients by comparing baseline data with individual scores on key variables during the study. The MEMS is usually used with heart disease patients under a treatment regimen; medication compliance in this population predicts patient outcomes, such as hospital visits or death.

References

Bertram, D. (2010). Likert Scales: the Meaning of Life. Web.

Blaxter, L., Hughes, C., & Tight, M. (2006). How to Research. Berkshire: Open University Press.

Clark, L., & Watson, D. (2007). Constructing Validity: Basic Issues in Objective Scale Development. Psychological Assessment, 7(3), 309-319.

Creswell, J. W. (2009). Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. Thousand Oaks, CA: Sage Publications.

Drost, A. (2009). Validity and Reliability in Social Science Research. Education Research and Perspectives, 38(1), 105-114.

Fraenkel, J. R., & Wallen, N. E. (2003). How to Design and Evaluate Research in Education. New York: McGraw-Hill.

Frankfort-Nachmias, C., & Nachmias, D. (2008). Research Methods in the Social Sciences. New York, NY: Worth Publishers.

Kaya, M., & Celik, E. (2012). Projective Identification: The Study of Scale Development, Reliability and Validity. The Online Journal of Counselling and Education, 1(2), 29-43.

Lyubomirsky, S., & Lepper, H. S. (2007). A Measure of Subjective Happiness: Preliminary Reliability and Construct Validation. Social Indicators Research, 46, 137-155.

Nation, J. R. (2007). Research Methods. New Jersey: Prentice Hall.

National Center for Fair and Open Testing. (2007). Criterion- and Standards-Referenced Tests. Web.

Shuttleworth, M. (2011). Validity and Reliability. Web.

Smallbone, T., & Quinton, S. (2004). Increasing Business Students' Confidence in Questioning the Validity and Reliability of their Research. Electronic Journal of Business Research Methods, 2(2), 153-162.

Trochim, W. M. (2006). Introduction to Validity: Social Research Methods. Web.

Zuckerman, M., & Antoni, M. (2009). Social Support and its Relationship to Psychological, Physical Health, and Immune Variables in HIV-infection. Clinical Psychology & Psychotherapy, 2(4), 210-219.
