Cross-battery assessment

Cross-battery assessment refers to the process by which psychologists use information from multiple test batteries (i.e., various IQ tests) to help guide diagnostic decisions and to gain a fuller picture of an individual’s cognitive abilities than can be ascertained through the use of single-battery assessments. The cross-battery approach (XBA) was first introduced in the late 1990s[1] by Dawn Flanagan, Samuel Ortiz and Kevin McGrew. It offers practitioners the means to make systematic, valid and up-to-date interpretations of intelligence batteries and to augment them with other tests in a way that is consistent with the empirically supported Cattell–Horn–Carroll (CHC) theory of cognitive abilities.[2]

Three Foundational Sources of Information

The XBA approach is a time-efficient method to reliably measure a wider (or more in-depth but selective) range of cognitive abilities and processes than any single intelligence battery can measure. It is based on three foundational sources of information (i.e., practice, research and test development) that provide the knowledge necessary to organise theory-driven, comprehensive, reliable, and valid assessments of cognitive abilities.[2]

Practice

R. W. Woodcock conducted a joint factor analysis suggesting that measuring a broad range of cognitive abilities requires crossing batteries rather than relying on any single intelligence battery.[2] For instance, he found that most of the major intelligence batteries in use prior to 2000 failed to measure three or more broad CHC abilities considered essential to understanding and predicting school achievement. This finding provided the impetus for developing the XBA approach. The XBA approach also facilitates communication among professionals, which guards against misinterpretation, and it offers practitioners a psychometrically defensible way of identifying normative strengths and weaknesses in cognitive abilities.[2]

Research

The XBA approach has helped promote a greater understanding of the relations between cognitive abilities and important outcome criteria. Furthermore, improving the validity of CHC ability measures will further elucidate the relations between CHC cognitive abilities and different outcomes, such as academic achievement and occupational attainment.[2]

Test Development

Test authors have used CHC theory and XBA CHC test classifications as a blueprint for test development (e.g., the WJ III, SB5, KABC-II, and DAS-II). Although cognitive ability tests now cover the CHC broad abilities more fully than in previous years, there is still a need for the XBA approach in assessment.[2]

Application of the XBA Approach

It is recommended that practitioners adhere to several guiding principles to ensure that XBA procedures are psychometrically and theoretically sound.[2] First, select an intelligence battery that best addresses the referral concerns. Second, use subtests and clusters or composites from a single battery whenever possible to represent the broad CHC abilities (i.e., use actual norms whenever possible). Third, construct CHC broad and narrow ability clusters through acceptable methods, such as CHC theory-driven factor analyses or expert-consensus content-validity studies.[2] Fourth, when two or more qualitatively different indicators of a broad ability of interest are not available on the core battery, supplement the core battery with indicators from another battery. Fifth, when crossing batteries, select tests that were developed and normed within a few years of one another. Sixth, to minimize the effect of spurious differences between test scores, select tests from the smallest possible number of batteries.[2] Underlining the importance of these considerations, overzealous application of the XBA approach by some psychologists has led to several cases of misuse, producing erroneous and misleading results.[3]
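
The last three principles lend themselves to a simple illustration. The following Python sketch filters candidate supplemental subtests by norm-year proximity to the core battery and prefers options that keep the number of batteries crossed to a minimum; all battery names, subtests, CHC ability codes, and norm years are hypothetical placeholders, not actual XBA classifications.

```python
# Hypothetical sketch of XBA supplement selection (principles 4-6).
# Battery names, subtests, CHC codes, and norm years are invented.

CORE_NORM_YEAR = 2007  # assumed norm year of the chosen core battery
MAX_NORM_GAP = 5       # "normed within a few years of one another"

candidates = [
    # (battery, subtest, broad CHC ability measured, norm year)
    ("Battery A", "Memory Span", "Gsm", 2006),
    ("Battery B", "Retrieval Fluency", "Glr", 1998),
    ("Battery A", "Story Recall", "Glr", 2006),
]

def pick_supplements(needed_abilities):
    """Pick supplemental subtests for broad abilities the core battery
    misses, preferring recent norms and the fewest additional batteries."""
    eligible = [c for c in candidates
                if abs(c[3] - CORE_NORM_YEAR) <= MAX_NORM_GAP]
    chosen, used_batteries = [], set()
    for ability in needed_abilities:
        options = [c for c in eligible if c[2] == ability]
        # Prefer a subtest from a battery already in use (False sorts first).
        options.sort(key=lambda c: c[0] not in used_batteries)
        if options:
            chosen.append(options[0])
            used_batteries.add(options[0][0])
    return chosen

print(pick_supplements(["Gsm", "Glr"]))
# Both picks come from Battery A: the 1998-normed option is rejected for its
# norm-year gap, and reusing one battery minimizes spurious score differences.
```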

Implementation of the XBA Approach Step-by-Step[2]

  1. Select primary intelligence battery for assessment
  2. Identify represented CHC abilities
  3. Select tests to measure CHC abilities not measured by the primary battery (illustrated in the sketch after this list)
  4. Administer the primary battery (and any other supplemental tests)
  5. Enter data into the XBA Data Management and Interpretive Assistant (DMIA), provided in "Essentials of Cross Battery Assessment: Second Edition"[2]
  6. Follow XBA guidelines
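
As a rough illustration of steps 2 and 3, the following Python snippet identifies the CHC broad abilities that a primary battery leaves unrepresented. The subtest-to-ability mapping is an assumed example, not an actual XBA classification.

```python
# Hypothetical sketch of XBA steps 2-3: find CHC broad abilities that the
# primary battery does not represent. The mapping below is illustrative.

CHC_BROAD_ABILITIES = {"Gf", "Gc", "Gsm", "Gv", "Ga", "Glr", "Gs"}

primary_battery = {  # assumed subtest -> broad ability classification
    "Matrix Reasoning": "Gf",
    "Vocabulary": "Gc",
    "Digit Span": "Gsm",
    "Block Design": "Gv",
    "Symbol Search": "Gs",
}

represented = set(primary_battery.values())  # step 2: abilities covered
gaps = CHC_BROAD_ABILITIES - represented     # step 3: abilities to supplement
print(sorted(gaps))                          # ['Ga', 'Glr']
```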

Use of XBA in Specific Learning Disability (SLD) Evaluation

The "Seven Deadly Sins" in SLD Evaluation

Specific learning disability (SLD) is the most commonly identified disability among school-aged children. According to Flanagan, Ortiz and Alfonso,[2] a diagnosis of SLD requires that the following criteria be met, in order: (1) a deficit in academic functioning is determined; (2) the academic difficulties are not due to exclusionary factors (e.g., neurological issues); (3) a deficit in cognitive ability is determined; (4) exclusionary factors are reviewed again to confirm that the academic and cognitive deficits are not due to secondary factors; (5) underachievement is established; and (6) the academic deficits are shown to have a negative effect on daily life. Flanagan, Ortiz and Alfonso[2] suggest "seven deadly sins" as a metaphor for the misconceptions surrounding SLD evaluation that continue to undermine its reliability and validity.

1. Relentless searching for ipsative or intra-individual discrepancies

One of the most common practices in SLD evaluations is ipsatizing scores: the individual's own average score is subtracted from each subtest score to express each score as a deviation from that personal average. Scores that deviate significantly from the personal mean are then treated as clinically important indicators of relative weaknesses (lower scores) or relative strengths (higher scores), and relative weaknesses are taken as evidence of SLD. This approach focuses only on discrepancies that exist within the individual. Yet the vast majority of people do not have flat cognitive profiles; they show significant variability in their profile of cognitive ability scores, and the assumption that a person who obtains a certain score in one domain will show similar ability in all domains is erroneous. Instead of searching for discrepancies wherever they might be found, theory should guide comparisons between different sub-tests.[2]
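
A minimal sketch of ipsatization, using invented scores, makes the computation concrete:

```python
# Ipsatization: re-express each score as a deviation from the person's own
# mean. The ability codes and scores below are invented for illustration.

scores = {"Gf": 112, "Gc": 104, "Gsm": 88, "Gv": 96}

personal_mean = sum(scores.values()) / len(scores)  # (112+104+88+96)/4 = 100
ipsatized = {ability: s - personal_mean for ability, s in scores.items()}

print(ipsatized)  # {'Gf': 12.0, 'Gc': 4.0, 'Gsm': -12.0, 'Gv': -4.0}
# A "relative weakness" such as Gsm = -12 says nothing about whether 88 is
# low in the normative sense; it is only low for this particular individual.
```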

2. Failure to distinguish between a relative weakness and a normative weakness

A lower score does not automatically gain clinical significance simply because a discrepancy has been determined to be real (statistically significant). Statistical significance means only that the difference between the two scores is unlikely to be due to chance (i.e., that they genuinely differ); it does not mean that the difference is clinically meaningful or indicative of impairment. For example, a score of 105 may be reliably lower than a companion score of 122, yet 105 is still within the average range and so constitutes a relative, not a normative, weakness.
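
A short sketch of a standard reliable-difference check, using assumed reliabilities for two measures on the usual IQ metric (mean 100, SD 15), shows how a difference can be statistically real while both scores remain in the average range:

```python
# Reliable-difference check between two standard scores. The reliability
# values are assumptions chosen for illustration.
import math

SD = 15
r_a, r_b = 0.90, 0.88                    # assumed score reliabilities
sem_a = SD * math.sqrt(1 - r_a)          # standard error of measurement
sem_b = SD * math.sqrt(1 - r_b)
critical = 1.96 * math.sqrt(sem_a**2 + sem_b**2)  # 95% reliable difference

score_a, score_b = 105, 122
print(f"critical difference = {critical:.1f}")  # about 13.8
print(abs(score_a - score_b) > critical)        # True: a "real" discrepancy
# Yet 105 is still within the average range: a relative weakness, not a
# normative one, and not by itself evidence of impairment.
```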

3. Obsession with the severe discrepancy calculation

The ability-achievement discrepancy has been regarded as so important to definitions and diagnostic criteria of SLD that practitioners often resort to calculating discrepancies among virtually every pair of scores obtained in an evaluation. Given the large number of discrepancies available to calculate, it would be surprising if at least one significant discrepancy were not found. A significant ability-achievement discrepancy should be neither synonymous with, nor a necessary condition for, a SLD diagnosis.
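
The arithmetic behind this concern is easy to sketch. With n scores there are n(n − 1)/2 pairwise discrepancies, and the chance that at least one is spuriously "significant" grows rapidly. The snippet below treats the comparisons as independent for simplicity, which real subtest scores are not, so the exact figures are only illustrative.

```python
# Familywise false-positive rate when hunting discrepancies among n scores,
# assuming (unrealistically) independent comparisons at alpha = .05.
alpha = 0.05
for n in (5, 10, 15):
    m = n * (n - 1) // 2          # number of pairwise discrepancies
    p_any = 1 - (1 - alpha) ** m  # P(at least one spurious "finding")
    print(f"{n} scores: {m} comparisons, P(>=1 significant) = {p_any:.2f}")
# 5 scores: 10 comparisons, P = 0.40
# 10 scores: 45 comparisons, P = 0.90
# 15 scores: 105 comparisons, P = 1.00 (0.995, rounded)
```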

4. Belief that IQ is a near perfect predictor of potential

The ability-achievement discrepancy was likely fostered by the notion that IQ and other global ability composites are near-perfect predictors of an individual's academic achievement. In fact, because correlations between global ability and achievement are typically only around .60 to .70, scores of general ability such as the FSIQ account for only about 35 to 50% of total achievement variance (the square of the correlation), leaving about 50 to 65% of the variance unexplained. Practitioners must therefore recognize that other important factors beyond global ability explain significant variance in achievement.

5. Failure to apply current theory and research

In evaluating SLD, practitioners may not always be aware of, or able to implement, procedures that are based on modern theory and research. As a result, they often fail to apply contemporary psychometric theory and current research on SLD that would aid in its identification and diagnosis.

6. Over-reliance on findings from a single sub-test

Diagnostic decisions are often based on the results of a single sub-test score, or on scores used to screen individuals. Reliance on such single scores is not suitable for diagnosis or other high-stakes decision making: a fundamental principle of psychometrics is that a single sub-test is, by itself, a far less reliable indicator of the construct it is intended to measure than a composite of several sub-tests. One sub-test is not sufficient to indicate the presence of an SLD or other impairment.
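
The Spearman–Brown prophecy formula makes this concrete: the reliability of a composite of k parallel sub-tests, each with reliability r, is kr / (1 + (k − 1)r). The sketch below uses an illustrative single-subtest reliability of .70.

```python
# Spearman-Brown prophecy formula for a composite of k parallel sub-tests.
# The single-subtest reliability r = .70 is an illustrative assumption.

def composite_reliability(r, k):
    return k * r / (1 + (k - 1) * r)

r = 0.70
for k in (1, 2, 4):
    print(f"{k} sub-test(s): reliability = {composite_reliability(r, k):.2f}")
# 1 sub-test(s): reliability = 0.70
# 2 sub-test(s): reliability = 0.82
# 4 sub-test(s): reliability = 0.90
```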

7. Belief that aptitude and ability are the same

Aptitude and ability are two concepts that are often confused. It is important to differentiate between them, given that the shift in how SLD is understood rests on the difference between ability and aptitude. When evaluating SLD, examining aptitude is important because aptitudes are associated with long-term academic outcomes.

References

  1. Flanagan, D. P. & McGrew, K. S. (1997). "A cross-battery approach to assessing and interpreting cognitive abilities: Narrowing the gap between practice and cognitive science". In Flanagan, Dawn P.; Harrison, Patti L. Contemporary intellectual assessment: Theories, tests, and issues. New York: The Guilford Press. pp. 314–325. ISBN 978-1-59385-125-5.
  2. Flanagan, D. P., Ortiz, S. O., & Alfonso, V. C. (2007). Essentials of Cross-Battery Assessment (2nd ed.). New Jersey: Wiley.
  3. wiki-centric (11 January 2009). "Hits and Myths of XBA". Education Faculty Publications and Presentations.

Further reading

  • Flanagan, Dawn P.; Harrison, Patti L., eds. (2012). Contemporary Intellectual Assessment: Theories, tests, and issues (Third ed.). New York (NY): Guilford Press. ISBN 978-1-60918-995-2. This handbook for practitioners includes chapters by John D. Wasserman, Randy W. Kamphaus, Anne Pierce Winsor, Ellen W. Rowe, Sangwon Kim, John L. Horn, Nayena Blankson, W. Joel Schneider, Kevin S. McGrew, Jie-Qi Chen, Howard Gardner, Robert J. Sternberg, Jack A. Naglieri, J. P. Das, Sam Goldstein, Lisa Whipple Drozdick, Dustin Wahlstrom, Jianjun Zhu, Lawrence G. Weiss, Dustin Wahlstrom, Kristina C. Breaux, Jianjun Zhu, Lawrence G. Weiss, Gale H. Roid, Mark Pomplun, Jennie Kaufman Singer, Elizabeth O. Lichtenberger, James C. Kaufman, Alan S. Kaufman, Nadeen L. Kaufman, Fredrick A. Schrank, Barbara J. Wendling, Colin D. Elliott, R. Steve McCallum, Bruce A. Bracken, Jack A. Naglieri, Tulio M. Otero, Cecil R. Reynolds, Randy W. Kamphaus, Tara C. Raines, Robb N. Matthews, Cynthia A. Riccio, John L. Davis, Jack A. Naglieri, Tulio M. Otero, Dawn P. Flanagan, Vincent C. Alfonso, Samuel O. Ortiz, Catherine A. Fiorello, James B. Hale, Kirby L. Wycoff, Randy G. Floyd and John H. Kranzler, Samuel O. Ortiz, Salvador Hector Ochoa, Agnieszka M. Dynda, Nancy Mather, Barbara J. Wendling, Laurie Ford, Michelle L. Kozey, Juliana Negreiros, David E. McIntosh, Felicia A. Dixon, Eric E. Pierson, Vincent C. Alfonso, Jennifer T. Mascolo, Marlene Sotelo-Dynega, Laura Grofer Klinger, Sarah E. O’Kelly, Joanna L. Mussey, Sam Goldstein, Melissa DeVries, James B. Hale, Megan Yim, Andrea N. Schneider, Gabrielle Wilcox, Julie N. Henzel, Shauna G. Dixon, Scott L. Decker, Julia A. Englund, Alycia M. Roberts, Kathleen Armstrong, Jason Hangauer, Joshua Nadeau, Jeffery P. Braden, Bradley C. Niebling, Timothy Z. Keith, Matthew R. Reynolds, Daniel C. Miller, Denise E. Maricle, Denise E. Maricle, Erin Avirett, Rachel Brown-Chidsey, Kristina J. Andren, George McCloskey, James Whitaker, Ryan Murphy, Jane Rogers, and John B. Carroll.

  • Kaufman, James C., ed. (2009). Intelligent Testing: Integrating Psychological Theory and Clinical Practice. Cambridge: Cambridge University Press. ISBN 978-0-521-86121-2. This review of current research includes chapters by Nadeen L. Kaufman, Elizabeth O. Lichtenberger, Jennie Kaufman Singer, Elaine Fletcher-Janzen, Nancy Mather, Kyle Bassett, Thomas Oakland, Jack A. Naglieri, Samuel O. Ortiz, Dawn P. Flanagan, Robert J. Sternberg, Randy W. Kamphaus, Cecil R. Reynolds, Jason C. Cole, Claire Énéa-Drapeau, Michèle Carlier, Toshinori Ishikuma, Jan Alm, R. Steve McCallum, and Bruce A. Bracken.