Students’ Performance Assessment Validation


Introduction

While it is apparent that the tests used to assess students’ performance need to be valid and reliable, the problem of insufficient validity has long plagued education and remains a challenge today (Noble, Rosebery, Suarez, Warren, & O’Connor, 2014). The article “An examination of the validity of English-language achievement test scores in an English language learner population” by Abella, Urrutia, and Shneyderman (2005) is devoted to this issue. The authors carry out a study to test the validity of English-language tests used with English language learners (ELLs) and provide scientific evidence that test results do depend on a person’s language proficiency.

Summary

Abella et al. (2005) begin by describing the problems with the then-current methods of testing ELLs and highlight the urgency of the issue by noting that the number of ELL students in the US has been growing. They inform the reader that, despite directives to employ native-language tests (NLT), the vast majority of educators still used English-language tests with ELLs at the time the article was written.

Further, Abella et al. (2005) note that native-language tools are not guaranteed to be “fully equivalent to their English-language counterparts,” and that students might have problems with the native language as well (p. 128). The authors set two goals: to find out the extent to which the proficiency of ELLs in both languages affects test results, and to check the validity of English-language achievement test (ELAT) outcomes when they are used to assess the performance of “recently exited” (RE) students, that is, students who had recently exited the English for Speakers of Other Languages program.

The research design can be summarized as follows. The sample consisted of about 1,700 Hispanic students in the fourth and tenth grades in the state of Florida and included beginning ELLs, advanced ELLs, and RE students. The students were administered comparable tests in English and Spanish aimed at assessing their reading and mathematics knowledge. Mathematics was of particular interest to the researchers since it is less dependent on language proficiency.

For all groups of tenth-grade students, the number of math items answered correctly was greater on the NLT. Students of this age also tended to show a substantial difference between the results of the two tests if they had a high level of proficiency in English or Spanish. Among the fourth-grade students, beginners in English performed better on the NLT, advanced students exhibited similar results on both tests, and RE students demonstrated better results on the ELAT. The authors conclude that the validity of test results depends on the children’s literacy in both languages. The results also indicate that the ELAT appears to be valid for students who combine high English proficiency with low Spanish proficiency (Abella et al., 2005, p. 139); the rest of the students scored better on the NLT.

The authors’ key conclusion can be formulated as follows: ELAT results can lack validity when they are used to evaluate the knowledge of children whose native language is not English, especially if home-language proficiency is high or English proficiency is low. The authors also mention that the ELAT proved invalid for assessing RE students. They imply that test validity does not depend solely on the presence of English-language skills; instead, it appears to be influenced by a combination of factors, such as culture, the differences between academic and casual language, and possibly others.

Critique

Strengths and Weaknesses of Assessment Discussion

The study by Abella et al. (2005) has several strengths. The authors clearly state the purpose of the research and its place in the relevant literature, and they explain the urgency of the issue and their reason for working with the topic. The topic was timely then and remains so now (Noble et al., 2014). The sample of students is impressive, which suggests that the conclusions drawn from the study are trustworthy and its results are generalizable. A limitation of the study is that the difficulty levels of the two tests were not calibrated.

The authors explain that such calibration is desirable but would have required too much time and effort within the scope of the study. Similarly, the research covers only one language pair and one part of the student population, and the authors invite further research on the issue in different settings. Still, they do not generalize beyond what their results suggest, and those results are of great interest to practitioners, test developers, and policymakers in any case.

Assessment Ideas Analysis

While the results of the article are of interest to policymakers and test developers, the remark by Abella et al. (2005) that teachers tend to neglect the demand for NLT also makes the study especially important for the ESL or bilingual classroom. Validity can be described as the “extent to which a test or other assessment measures what it is intended to measure” (Brantley, 2007, p. 33). In other words, invalid tests cannot carry out their function, which in this case is measuring students’ performance. Since accountability has become a “political obsession” and is demanded of modern teachers, it needs to be fair and valid (Brantley, 2007, p. 28).

The need for diversity-friendly achievement tests is supported by the theory and practice of teaching and required by law (Gottlieb, 2009). The article by Abella et al. (2005) adds scientific evidence and research data to these arguments. In other words, it demonstrates that the necessity of diversity-friendly assessment is undeniable, yet it continues to be denied in practice.

Indeed, the problem persists today. A more recent work by Noble et al. (2014) studied the validity of science tests given to Spanish, Haitian Creole, and Asian students who were deemed proficient in English and were therefore expected to take the tests in English.

The results indicated that the children’s English proficiency did, after all, negatively affect their scores. Therefore, the issue emphasized by Abella et al. (2005) remains urgent, and teachers of diverse classes need to keep it in mind: they should develop the competencies required for working with a diverse student population and use, or advocate for the use of, proper assessment tools with their students. The drawback of this strategy is that it increases the time and effort spent on assessment in particular and in the classroom in general. Still, the alternative is not just ineffective; it has the potential to harm students, which is unacceptable (Abella et al., 2005).

Conclusion

The article by Abella et al. (2005) provides scientific evidence for the logical assumption that the language of an achievement test influences children’s performance. In general, the study is of interest to policymakers and test developers, but it is also of particular consequence for teachers. For the sake of validity, fairness, and respect for diversity, this fact needs to be taken into account by any teacher who works in an ESL or bilingual classroom.

References

Abella, R., Urrutia, J., & Shneyderman, A. (2005). An examination of the validity of English-language achievement test scores in an English language learner population. Bilingual Research Journal, 29, 127-144. Web.

Brantley, D. (2007). Instructional assessment of English language learners in the K-8 classroom. Boston, MA: Pearson/Allyn and Bacon. Web.

Gottlieb, M. (2009). Assessing English language learners: Bridges from language proficiency to academic achievement. Thousand Oaks, CA: Corwin Press. Web.

Noble, T., Rosebery, A., Suarez, C., Warren, B., & O’Connor, M. C. (2014). Science assessments and English language learners: Validity evidence based on response processes. Applied Measurement in Education, 27(4), 248-260. Web.
