[S] Follow Up: Chance Corrected Measures of Agreement

Nancy Friday (nfriday@whsun1.wh.whoi.edu)
Tue, 19 May 1998 14:53:43 -0400 (EDT)


For those interested, here are the 2 responses which I received
regarding my query about chance corrected measures of agreement
(query is copied at the end).

Bill King's <king@Biostat.Wisc.Edu> response:

I have done a lot of work with inter-grader agreement with S-
Plus, but as of yet nothing that deals with multiple-observer
data. We have a large grading group here, but all of our
grading exercises are designed to examine pairwise agreement.
We tend to be more concerned with the variability of the group
as a whole than with individual grading differences.

So, I am afraid I have nothing immediately available to you;
however, I will examine the topic a bit.

Brian Cade's <Brian_Cade@usgs.gov> response:

You might want to check out Berry and Mielke (1988. A
generalization of Cohen's kappa agreement measure to interval
measurement and multiple raters. Educational and
Psychological Measurement 48:921-933). They describe how
special variants of multiple response permutation procedures
(MRPP) can be used to compute Cohen's kappa for nominal
responses and extended to interval responses and multiple
raters (observers). The test statistic is evaluated under the
null hypothesis by a three-moment approximation of the
permutation distribution. Mielke's web page at the Statistics
Department of Colorado State University has Fortran code for the
procedures that could be customized to be called from S-Plus. Our USGS web
page (http://www.mesc.usgs.gov/blossom/blossom.html) has our
BLOSSOM software, which implements many of the MRPP tests
including the agreement statistic. This is available for
downloading (DOS software, but very fast). I suppose this
could be customized to call from S-Plus also, but I haven't
done that yet.
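As background, the nominal-response Cohen's kappa that Berry and Mielke generalize is straightforward to compute directly for the basic two-rater case. Here is a minimal sketch in Python for illustration only (it is not the MRPP variant, nor the S-Plus code requested):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters on nominal categories.

    kappa = (P_obs - P_exp) / (1 - P_exp), where P_exp is the agreement
    expected by chance from each rater's marginal category frequencies.
    """
    n = len(rater1)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    p_exp = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))
    return (p_obs - p_exp) / (1 - p_exp)

# Perfect agreement yields kappa = 1; agreement at chance level yields 0.
print(cohens_kappa(list("aabb"), list("aabb")))  # 1.0
```

The chance-correction is what distinguishes kappa from raw percent agreement: two raters who each use category "a" 90% of the time will agree often by luck alone, and P_exp discounts exactly that.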

In addition to these two responses, I was able to obtain the Fortran
code for the functions used by O'Connell and Dobson (1984) from
Dianne O'Connell.

Nancy Friday

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Nancy Friday, PhD NRC Research Associate *
* NEFSC/NMFS Tel: (508) 495-2397 *
* 166 Water St. Fax: (508) 495-2258 *
* Woods Hole, MA 02543 Email: Nancy.Friday@noaa.gov *
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Original query:
I'm looking for S-Plus code for chance-corrected measures of
agreement. I have 5 to 8 observers (so the standard 2
observer kappa statistics don't work). Subjects are evaluated
for variables with ordinal categories and for variables with
nominal categories. Most of my data are balanced (i.e., the
same number of observations for each subject), however some
cases are unbalanced. A procedure for unbalanced data would
be a plus but not necessary. For each variable, I want to
measure the agreement between observers and identify the
observers, if any, who are significantly different from the
rest.

The procedure which I am considering is presented in O'Connell
and Dobson. 1984. General Observer-Agreement Measures on
Individual Subjects and Groups of Subjects. Biometrics.
40:973-983.

I'm also looking into Landis and Koch. 1977. An Application
of Hierarchical Kappa-type Statistics in the Assessment of
Majority Agreement Among Multiple Observers. Biometrics.
33:363-374.

In addition to looking for S-Plus code for such methods, I
would appreciate any other suggestions for chance-corrected
measures of agreement.

Thanks
Nancy Friday
-----------------------------------------------------------------------
This message was distributed by s-news@wubios.wustl.edu. To unsubscribe
send e-mail to s-news-request@wubios.wustl.edu with the BODY of the
message: unsubscribe s-news