Rubric (academic)

In education terminology, rubric means "a scoring guide used to evaluate the quality of students' constructed responses".[1] Put simply, it is a set of criteria for grading assignments. Rubrics usually contain evaluative criteria, quality definitions for those criteria at particular levels of achievement, and a scoring strategy.[1] They are often presented in table format and can be used by teachers when marking, and by students when planning their work.[2]

A scoring rubric is an attempt to communicate expectations of quality around a task. In many cases, scoring rubrics are used to delineate consistent criteria for grading. Because the criteria are public, a scoring rubric allows teachers and students alike to evaluate criteria, which can be complex and subjective. A scoring rubric can also provide a basis for self-evaluation, reflection, and peer review. It is aimed at accurate and fair assessment, fostering understanding, and indicating a way to proceed with subsequent learning/teaching. This integration of performance and feedback is called ongoing assessment or formative assessment.

Several common features of scoring rubrics can be distinguished, according to Bernie Dodge and Nancy Pickett:[3]

  • They focus on measuring a stated objective (performance, behavior, or quality).
  • They use a range to rate performance.
  • They contain specific performance characteristics arranged in levels indicating either the developmental sophistication of the strategy used or the degree to which a standard has been met.

Components of a scoring rubric

Scoring rubrics include one or more dimensions on which performance is rated, definitions and examples that illustrate the attribute(s) being measured, and a rating scale for each dimension. Dimensions are generally referred to as criteria, the rating scale as levels, and definitions as descriptors.

Herman, Aschbacher, and Winters[4] distinguish the following elements of a scoring rubric:

  • One or more traits or dimensions that serve as the basis for judging the student response
  • Definitions and examples to clarify the meaning of each trait or dimension
  • A scale of values on which to rate each dimension
  • Standards of excellence for specified performance levels accompanied by models or examples of each level
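
To make these elements concrete, the sketch below models criteria, levels, and descriptors as a simple data structure. It is a minimal illustration in Python; the criterion names, level labels, and descriptor wording are invented for the example rather than taken from any published rubric.

  # Minimal sketch of a rubric as a grid of criteria x levels.
  # All names and descriptor text are illustrative only.
  from dataclasses import dataclass

  @dataclass
  class Criterion:
      name: str                    # the dimension being judged, e.g. "Organisation"
      descriptors: dict[int, str]  # level -> description of work at that level

  # The rating scale (levels) shared by every criterion in this example.
  LEVELS = {1: "Beginning", 2: "Developing", 3: "Proficient", 4: "Exemplary"}

  # A small, hypothetical analytic rubric for a written assignment.
  ESSAY_RUBRIC = [
      Criterion("Content", {
          1: "Ideas are unclear or off-topic.",
          2: "Ideas are present but thinly developed.",
          3: "Ideas are clear and supported with some evidence.",
          4: "Ideas are clear, well developed, and well supported.",
      }),
      Criterion("Conventions", {
          1: "Frequent errors in spelling, punctuation, and grammar.",
          2: "Errors sometimes interfere with meaning.",
          3: "Minor errors that do not interfere with meaning.",
          4: "Virtually error-free.",
      }),
  ]

  def describe(rubric, criterion_name, level):
      """Return the descriptor for a criterion at a given level."""
      for criterion in rubric:
          if criterion.name == criterion_name:
              return f"{criterion_name}, level {level} ({LEVELS[level]}): {criterion.descriptors[level]}"
      raise KeyError(criterion_name)

  print(describe(ESSAY_RUBRIC, "Conventions", 3))

Laid out as a grid, the criteria above would form the rows and the levels the columns, with each descriptor filling one cell.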

Since the 1980s, many scoring rubrics have been presented in a graphic format, typically as a grid. Studies of scoring rubric effectiveness now consider whether a grid is more effective than, say, a text-based list of criteria.

Rubrics can be classified as holistic, analytic, or developmental. Holistic rubrics integrate all aspects of the work into a single overall rating of the work. For example, "the terms and grades commonly used at university (i.e., excellent – A, good – B, average – C, poor – D, and weak – E) usually express an assessor's overall rating of a piece of work. When a research article or thesis is evaluated, the reviewer is asked to express his or her opinion in holistic terms – accept as is, accept with minor revisions, require major revisions for a second review, or reject. The classification response is a weighted judgement by the assessor taking all things into account at once; hence, holistic. In contrast, an analytic rubric specifies various dimensions or components of the product or process that are evaluated separately. The same rating scale labels may be used as in the holistic rubric, but they are applied to various key dimensions or aspects separately rather than as an integrated judgement. This separate specification means that on one dimension the work could be excellent, but on one or more other dimensions the work might be poor to average. Most commonly, analytic rubrics have been used by teachers to score student writing when the teacher awards a separate score for such facets of written language as conventions or mechanics (i.e., spelling, punctuation, and grammar), organisation, content or ideas, and style. They are also used in many other domains of the school curriculum (e.g., performing arts, sports and athletics, studio arts, wood and metal technologies, etc.). By breaking the whole into significant dimensions or components and rating them separately, it is expected that better information will be obtained by the teacher and the student about what needs to be worked on next." (Brown, Irving, & Keegan, 2014, p. 55).[5] Developmental rubrics are analytic but also meet the developmental characteristics described below.

Steps to create a scoring rubric

Scoring rubrics may help students become thoughtful evaluators of their own and others' work and may reduce the amount of time teachers spend evaluating student work. Here is a seven-step method for creating and using a scoring rubric for writing assignments:[6]

  1. Have students look at models of good versus "not-so-good" work. A teacher should provide sample assignments of variable quality for students to review.
  2. List the criteria to be used in the scoring rubric and allow for discussion of what counts as quality work. Asking for student feedback during the creation of the list also allows the teacher to assess the students' overall writing experiences.
  3. Articulate gradations of quality. These hierarchical categories should concisely describe the levels of quality (ranging from bad to good) or development (ranging from beginning to mastery). They can be based on the discussion of the good versus not-so-good work samples or immature versus developed samples. Using a conservative number of gradations keeps the scoring rubric user-friendly while allowing for fluctuations that exist within the average range ("creating rubrics").
  4. Practice on models. Students can test the scoring rubrics on sample assignments provided by the instructor. This practice can build students' confidence by teaching them how the instructor would use the scoring rubric on their papers. It can also aid student/teacher agreement on the reliability of the scoring rubric.
  5. Ask for self- and peer-assessment.
  6. Revise the work on the basis of that feedback. As students work on their assignments, they can pause occasionally to do a self-assessment and then give and receive evaluations from their peers, revising their work on the basis of the feedback they receive.
  7. Use teacher assessment, which means using the same scoring rubric the students used to assess their work.

When to use scoring rubrics

A rubric can be used for individual assessment within a course, for a project, or for a capstone project. It is also useful when multiple evaluators assess the same work, because it focuses them on the attributes that contribute to the evaluation. Rubrics are well suited to project assessment, since each component of the project has a corresponding section on the rubric that specifies criteria for quality of work.

Developmental rubrics

Developmental rubrics are analytic rubrics that use multiple dimensions of developmental successions to facilitate assessment, instructional design, and transformative learning.[7]

Defining developmental rubrics

Developmental rubrics refer to a matrix of modes of practice. Practices belong to a community of experts.[8] Each mode of practice competes with a few others within the same dimension. Modes appear in succession because their frequency is determined by four parameters: endemicity, performance rate, commitment strength, and acceptance. Transformative learning results in changing from one mode to the next. The typical developmental modes can be roughly identified as beginning, exploring, sustaining, and inspiring. The timing of the four levels is unique to each dimension and it is common to find beginning or exploring modes in one dimension coexisting with sustaining or inspiring modes in another. Often, the modes within a dimension are given unique names in addition to the typical identifier. As a result, developmental rubrics have four properties:

  1. They are descriptions of examples of behaviors.
  2. They contain multiple dimensions each consisting of a few modes of practice that cannot be used simultaneously with other modes in the dimension.
  3. The modes of practice within a dimension show a dynamic succession of levels.
  4. They can be created for extremely diverse scales of time and place.

Creating developmental rubrics

  1. Since practices belong to a community, the first step is to locate a group of practitioners who are experts in their field and experienced with learners.
  2. Next, each practitioner works with an expert developmental interviewer to create a matrix that best reflects their experiences. Once several interviews have been completed they can be combined within a single set of developmental rubrics for the community through individual or computerized text analysis.
  3. Third, the community of experts rates learner performances, meets to compare ratings of the same performances, and revises the definitions when multiple interpretations are discovered.
  4. Fourth, instructors of particular courses share the developmental rubrics with students and identify the target modes of practice for the course. Typically, a course targets only a fraction of the dimensions of the community's developmental rubrics and only one mode of practice within each of the target dimensions.
  5. Finally, the rubrics are used in real time to motivate student development, usually focusing on one dimension at a time and discussing the opportunities to perform at the next mode of practice in succession.

Etymology and history

The traditional meanings of the word rubric stem from "a heading on a document (often written in red, from Latin rubrica, red ochre, red ink), or a direction for conducting church services".[9] Drawing on definition 2 in the OED for this word,[10] rubrics referred to the instructions on a test telling the test-taker how questions were to be answered.

In modern education circles, rubrics have recently come to refer to an assessment tool. The first usage of the term in this new sense is from the mid-1990s, but scholarly articles from that time do not explain why the term was co-opted. Perhaps rubrics are seen to act, in both cases, as metadata added to text to indicate what constitutes a successful use of that text. It may also be that the color of the traditional red marking pen is the common link.

As shown in the 1977 introduction to the International Classification of Diseases-9,[11] the term has long been used in medicine as a label for diseases and procedures. The bridge from medicine to education occurred through the construction of "Standardized Developmental Ratings." These were first defined for writing assessment in the mid-1970s[12] and used to train raters for New York State's Regents Exam in Writing by the late 1970s.[13] That exam required raters to use multidimensional standardized developmental ratings to determine a holistic score. The term "rubrics" was applied to such ratings by Grubb (1981)[14] in a book advocating holistic scoring rather than developmental rubrics. Developmental rubrics return to the original intent of standardized developmental ratings, which was to support student self-reflection and self-assessment as well as communication between an assessor and those being assessed. In this new sense, a scoring rubric is a set of criteria and standards typically linked to learning objectives. It is used to assess or communicate about product, performance, or process tasks.

Technical

One problem with scoring rubrics is that each level of fulfillment encompasses a wide range of marks. For example, if two students both receive a 'level four' mark on the Ontario system, one might receive an 80% and the other 100%. In addition, a small change in scoring rubric evaluation caused by a small mistake may lead to an unnecessarily large change in numerical grade. Adding further distinctions between levels does not solve the problem, because more distinctions make discrimination even more difficult. Both scoring problems may be alleviated by treating the definitions of levels as typical descriptions of whole products rather than the details of every element in them.

Scoring rubrics may also make marking schemes more complicated for students. Showing one mark may be inaccurate, as receiving a perfect score in one section may not be very significant in the long run if that specific strand is not weighted heavily. Some students may also find it difficult to comprehend an assignment having multiple distinct marks, so such schemes can be unsuitable for some younger children. In such cases it is better to incorporate the rubric into conversation with the child than to give a mark on the paper. For example, a child who writes an "egocentric" story (depending too much on ideas not accessible to the reader) might be asked what her best friend thinks of it (suggesting a move in the audience dimension to the "correspondence" level). Thus, when used effectively, scoring rubrics help students to improve their weaknesses.

Multidimensional rubrics also allow students to compensate for a lack of ability in one strand by improving another one. For instance, a student who has difficulty with sentence structure may still be able to attain a relatively high mark if sentence structure is not weighted as heavily as other dimensions such as audience, perspective, or time frame.
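
To illustrate how this kind of compensation works arithmetically, the sketch below combines hypothetical per-dimension levels into a single percentage using invented weights; none of the dimension names, weights, or scores come from an actual marking scheme.

  # Hypothetical weighted analytic scoring: a weak dimension can be offset
  # by stronger ones when its weight is small. All numbers are illustrative.
  MAX_LEVEL = 4  # top of the rating scale

  WEIGHTS = {                      # assumed weights, summing to 1.0
      "Audience": 0.30,
      "Perspective": 0.30,
      "Time frame": 0.25,
      "Sentence structure": 0.15,
  }

  SCORES = {                       # levels awarded to one student
      "Audience": 4,
      "Perspective": 4,
      "Time frame": 3,
      "Sentence structure": 2,     # the student's weak area
  }

  def weighted_percentage(scores, weights, max_level=MAX_LEVEL):
      """Combine per-dimension levels into one percentage mark."""
      return 100 * sum(weights[d] * scores[d] / max_level for d in weights)

  print(f"{weighted_percentage(SCORES, WEIGHTS):.1f}%")  # 86.2%

Despite a level-2 score on sentence structure, the overall mark stays relatively high because that dimension carries only 15% of the weight in this example.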

Another advantage of a scoring rubric is that it clearly shows what criteria must be met for a student to demonstrate quality on a product, process, or performance task.

Scoring rubrics can also improve scoring consistency. Grading is more reliable with a rubric than without one.[15] Educators can refer to a rubric while scoring assignments to keep grading consistent across students. Teachers can also use rubrics to keep their scoring consistent with that of other teachers who teach the same class.

References

  1. Popham, James (October 1997). "What's Wrong - and What's Right - with Rubrics". Educational Leadership. 55 (2): 72–75.
  2. Dawson, Phillip (December 2015). "Assessment rubrics: towards clearer and more replicable design, research and practice". Assessment & Evaluation in Higher Education. 42 (3): 347–360. CiteSeerX 10.1.1.703.8431. doi:10.1080/02602938.2015.1111294.
  3. "Rubrics for Web Lessons". 2007. Retrieved 2020-04-21.
  4. Herman, Joan; Aschbacher, Pamela; Winters, Lynn (January 1992). A Practical Guide to Alternative Assessment. Association for Supervision & Curriculum Development. ISBN 978-0871201973.
  5. Brown, G. T. L., Irving, S. E., & Keegan, P. J. (2014). An introduction to educational assessment, measurement, and evaluation: Improving the quality of teacher-based assessment (3rd ed.). Auckland, NZ: Dunmore Publishing. ISBN 9781927212097
  6. Goodrich, H. (1996). "Understanding Rubrics." Educational Leadership, 54 (4), 14-18.
  7. Dirlam, D. K. (2017). "Teachers, learners, modes of practice: Theory and methodology for identifying Knowledge Development." New York: Routledge.
  8. Wenger, E., McDermott,R. & Snyder, W. M. (2002). "Cultivating Communities of Practice." Boston, MA: Harvard Business School Press.
  9. "The definition of rubric".
  10. "Rubric | Definition of rubric in English by Oxford Dictionaries".
  11. "International Classification of Diseases - 9 (1975)".
  12. Dirlam, David; Byrne, Maureen (1978-02-28). "Standardized Developmental Ratings". Archived from the original on 2012-07-29.
  13. Dirlam, D. K. (1980). Classifiers and cognitive development. In S. & C. Modgil (Eds.), Toward a Theory of Psychological Development. Windsor, England: NFER Publishing, 465-498
  14. Grubb, Mel (1981). Using Holistic Evaluation. Encino, Cal.: Glencoe Publishing Company, Inc.
  15. Jonsson, Anders; Svingby, Gunilla (2007). "The use of scoring rubrics: Reliability, validity and educational consequences". Educational Research Review. 2 (2): 130–144. doi:10.1016/j.edurev.2007.05.002.

Further reading

  • Flash, P. (2009) Grading writing: Recommended grading strategies. Retrieved Sep 17, 2011, from http://writing.umn.edu/tww/responding/grading.html
  • http://www.uen.org/rubric/
  • Stevens, D. & Levi, Antonia J. (2013). Introduction to Rubrics: An Assessment Tool to Save Grading Time, Convey Effective Feedback, and Promote Student Learning. Sterling, VA: Stylus Publishing.
  • University of Minnesota, Center for Advanced Research on Language Acquisition (CARLA), Virtual Assessment Center. (n.d.). Creating Rubrics. Retrieved May, 2015, from http://www.carla.umn.edu/assessment/vac/improvement/p_6.html
  • Winter, H. (2002). Using test results for assessment of teaching and learning. Chem Eng Education 36:188-190