
dc.contributor.author: BOVE, Giuseppe
dc.date.accessioned: 2023-08-03T15:06:08Z
dc.date.available: 2023-08-03T15:06:08Z
dc.date.issued: 2023
dc.identifier: ONIX_20230803_9791221501063_96
dc.identifier.issn: 2704-5846
dc.identifier.uri: https://library.oapen.org/handle/20.500.12657/74900
dc.description.abstract: Most measures of interrater agreement are defined for ratings of a group of targets, each rated by the same group of raters (e.g., the agreement of raters who assess, on a rating scale, the language proficiency of a corpus of argumentative written texts). However, there are situations in which agreement concerns a group of targets where each target is evaluated by a different group of raters, for instance when the teachers in a school are evaluated through a questionnaire administered to all the pupils (students) in their classrooms. In these situations, a first approach is to evaluate the level of agreement for the whole group of targets by the one-way random ANOVA model. A second approach is to apply subject-specific indices of interrater agreement such as rWG, which compares the observed variance in the ratings with the variance of a theoretical distribution representing no agreement (i.e., the null distribution). Neither of these approaches is appropriate for ordinal or nominal scales. In this paper, an index is proposed to evaluate the agreement between raters for each single target (subject or object) rated on an ordinal scale, and also to obtain a global measure of interrater agreement for the whole group of cases evaluated. The index is not affected by a possible concentration of ratings on a very small number of levels of the scale, as happens for measures based on the ANOVA approach, and it does not depend on the definition of a null distribution, as rWG does. The main features of the proposal are illustrated in a study on the assessment of teacher behavior in the classroom, based on data collected in 2018 in research conducted at Roma Tre University.
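To make the rWG index mentioned in the abstract concrete, the following is a minimal Python sketch of the classical single-target formulation (one minus the ratio of observed rating variance to the variance of a uniform null distribution over the scale levels). This illustrates the approach the abstract contrasts against, not the chapter's proposed index; the function name and interface are illustrative assumptions.

```python
import statistics

def rwg(ratings, n_levels):
    """Single-target within-group agreement index rWG.

    ratings  -- list of ratings of one target, on an integer scale 1..n_levels
    n_levels -- number of levels of the rating scale (A)

    rWG = 1 - s2_obs / s2_null, where s2_null = (A**2 - 1) / 12 is the
    variance of a discrete uniform (no-agreement) null distribution.
    """
    s2_obs = statistics.variance(ratings)       # sample variance of the ratings
    s2_null = (n_levels ** 2 - 1) / 12.0        # uniform null-distribution variance
    return 1.0 - s2_obs / s2_null

# Four raters largely agreeing on a 5-point scale -> index close to 1
print(rwg([4, 4, 5, 4], 5))  # → 0.875
```

Note how the result depends entirely on the chosen null distribution: replacing the uniform null with a skewed one changes s2_null and hence the index, which is exactly the dependence the abstract identifies as a drawback of rWG.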
dc.language: English
dc.relation.ispartofseries: Proceedings e report
dc.subject.classification: thema EDItEUR::J Society and Social Sciences
dc.subject.other: Interrater agreement
dc.subject.other: Ordinal data
dc.subject.other: Teacher evaluation
dc.title: Chapter Measures of interrater agreement when each target is evaluated by a different group of raters
dc.type: chapter
oapen.identifier.doi: 10.36253/979-12-215-0106-3.28
oapen.relation.isPublishedBy: 9223d3ac-6fd2-44c9-bb99-5b98ca9d2fad
oapen.relation.isPartOfBook: 863aa499-dbee-4191-9a14-3b5d5ef9e635
oapen.relation.isbn: 9791221501063
oapen.series.number: 134
oapen.pages: 6
oapen.place.publication: Florence

