Weighted Cohen’s Kappa

Cohen’s kappa takes into account whether or not the two raters disagree, but not the degree of their disagreement. This is especially relevant when the ratings are ordered (as they are in Example 2 of Cohen’s Kappa).

To address this issue, there is a modification to Cohen’s kappa called weighted Cohen’s kappa. The weighted kappa is calculated using a predefined table of weights that measures the degree of disagreement between the two raters: the higher the disagreement, the higher the weight. The table of weights should be a symmetric matrix with zeros on the main diagonal (i.e. where the two judges agree) and positive values off the main diagonal. The farther apart the judgments, the higher the assigned weight.

We show how this is done for Example 2 of Cohen’s Kappa, where we have reordered the rating categories from highest to lowest to make things a little clearer. We will use a linear weighting, although higher penalties can be assigned, for example, to the Never × Often assessments.

Example 1: Repeat Example 2 of Cohen’s Kappa using the weights in range G6:J9 of Figure 1, where the weight of disagreement for Never × Often is twice the weight of the other disagreements.
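With the three rating categories in this reordered sequence, the linear weighting just described amounts to the following table of weights (the same values that appear in range G6:J9 of Figure 1): 0 on the main diagonal, 1 for adjacent categories, and 2 for the Never × Often pair.

    0  1  2
    1  0  1
    2  1  0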


Figure 1 – Weighted kappa

We first calculate the table of expected values (assuming that the outcomes occur by chance) in range A14:E19. This is done exactly as for the chi-square test of independence. For example, cell B16 contains the formula =B$10*$E7/$E$10.

The weighted value of kappa is calculated by first summing the products of the elements in the observation table with the corresponding weights, and then dividing by the sum of the products of the elements in the expectation table with the corresponding weights. Since the weights measure disagreement, the weighted kappa equals 1 minus this quotient.

For Example 1, the weighted kappa (cell H15) is given by the formula

         =1-SUMPRODUCT(B7:D9,H7:J9)/SUMPRODUCT(B16:D18,H7:J9)
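The same calculation can be sketched outside of Excel. The following minimal Python sketch (not part of the Real Statistics tools; the observation counts are hypothetical, not the Figure 1 data) builds the expected table from the marginal totals, exactly as in the chi-square test of independence, and then applies the 1 minus SUMPRODUCT/SUMPRODUCT quotient described above:

    import numpy as np

    def weighted_kappa(obs, weights):
        # Weighted Cohen's kappa with disagreement weights (zeros on the main diagonal).
        obs = np.asarray(obs, dtype=float)
        n = obs.sum()
        # expected counts, computed as for the chi-square test of independence
        expected = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / n
        return 1 - (weights * obs).sum() / (weights * expected).sum()

    # linear disagreement weights for 3 ordered categories: w[i, j] = |i - j|
    w = np.abs(np.subtract.outer(np.arange(3), np.arange(3)))
    obs = [[10, 3, 1],    # hypothetical ratings table (rows = rater 1, columns = rater 2)
           [4, 12, 2],
           [2, 3, 8]]
    print(weighted_kappa(obs, w))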

Note that if we assign all the weights on the main diagonal to be 0 and all the weights off the main diagonal to be 1, we have another way to calculate the unweighted kappa, as shown in Figure 2.
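Continuing the sketch above with the same hypothetical obs table, assigning weight 0 on the main diagonal and 1 everywhere else reproduces the familiar unweighted value (p_a − p_ε)/(1 − p_ε):

    w01 = 1 - np.eye(3)                         # 0 on the main diagonal, 1 elsewhere
    p = np.asarray(obs) / np.asarray(obs).sum()
    pa = np.trace(p)                            # observed agreement
    pe = (p.sum(axis=1) * p.sum(axis=0)).sum()  # agreement expected by chance
    assert abs(weighted_kappa(obs, w01) - (pa - pe) / (1 - pe)) < 1e-12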


Figure 2 – Unweighted kappa

Observation: Using the notation from Cohen’s Kappa, where p_ij are the observed probabilities, e_ij = p_i q_j are the expected probabilities and w_ij are the weights (with w_ji = w_ij), then

$$\kappa = 1-\frac{\sum_{i}\sum_{j} w_{ij}\,p_{ij}}{\sum_{i}\sum_{j} w_{ij}\,e_{ij}}$$

The standard error is given by the following formula:

$$s.e.(\kappa) = \frac{1}{q_e\sqrt{n}}\sqrt{\sum_{i}\sum_{j} p_{ij}\left[w_{ij}-(\bar{w}_{i\cdot}+\bar{w}_{\cdot j})(1-\kappa)\right]^{2}-q_o^{2}}$$

where n is the number of subjects rated and

$$\bar{w}_{i\cdot}=\sum_{j} w_{ij}\,q_j \qquad \bar{w}_{\cdot j}=\sum_{i} w_{ij}\,p_i \qquad q_o=\sum_{i}\sum_{j} w_{ij}\,p_{ij} \qquad q_e=\sum_{i}\sum_{j} w_{ij}\,e_{ij}$$
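A small numerical sketch of this calculation (continuing the hypothetical example above, reusing numpy and the obs and w defined there; this is an illustration of the formula as stated, not the WKAPPA implementation):

    def weighted_kappa_se(obs, weights):
        # Large-sample standard error of weighted kappa, following the formula above.
        obs = np.asarray(obs, dtype=float)
        n = obs.sum()
        p = obs / n                                  # observed probabilities p_ij
        pr, qc = p.sum(axis=1), p.sum(axis=0)        # marginals p_i and q_j
        e = np.outer(pr, qc)                         # expected probabilities e_ij
        qo = (weights * p).sum()                     # observed weighted disagreement
        qe = (weights * e).sum()                     # expected weighted disagreement
        kappa = 1 - qo / qe
        wbar_i = weights @ qc                        # sum over j of w_ij * q_j
        wbar_j = weights.T @ pr                      # sum over i of w_ij * p_i
        term = weights - (wbar_i[:, None] + wbar_j[None, :]) * (1 - kappa)
        var = ((p * term ** 2).sum() - qo ** 2) / (n * qe ** 2)
        return kappa, np.sqrt(var)

    print(weighted_kappa_se(obs, w))   # (kappa, standard error) for the hypothetical data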

Note too that the weighted kappa can be expressed in the same form as the unweighted kappa, namely

$$\kappa = \frac{p_a-p_\varepsilon}{1-p_\varepsilon}$$

where the observed and expected agreement are computed from the complementary (agreement) weights

$$p_a=\sum_{i}\sum_{j} v_{ij}\,p_{ij} \qquad p_\varepsilon=\sum_{i}\sum_{j} v_{ij}\,e_{ij} \qquad v_{ij}=1-\frac{w_{ij}}{\max_{i,j} w_{ij}}$$

From these formulas, hypothesis testing can be done and confidence intervals calculated, as described in Cohen’s Kappa.
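For instance, continuing the sketches above, a normal-approximation confidence interval and z-test can be computed directly from kappa and its standard error (NormalDist is in the Python standard library; the numbers depend on the hypothetical data used earlier):

    from statistics import NormalDist

    kappa, se = weighted_kappa_se(obs, w)        # from the sketch above
    z = NormalDist().inv_cdf(0.975)              # two-sided 95% critical value
    lower, upper = kappa - z * se, kappa + z * se
    z_stat = kappa / se                          # test of H0: kappa = 0
    p_value = 2 * (1 - NormalDist().cdf(abs(z_stat)))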

Real Statistics Function: The Real Statistics Resource Pack contains the following function:

WKAPPA(R1, R2, lab, alpha) = a 4 × 1 range containing kappa, the standard error, and the left and right endpoints of the 1 – alpha confidence interval (alpha defaults to .05), where R1 contains the observed data (formatted as in range M7:O9 of Figure 2) and R2 contains the weights (formatted as in range S7:U9 of the same figure).

If range R2 is omitted, it defaults to the unweighted situation where the weights on the main diagonal are all zeros and the other weights are ones. Range R2 can also be replaced by a number r. A value of r = 1 means the weights are linear (as in Figure 1); a value of r = 2 means the weights are quadratic. In general, this means that the equivalent weights range would contain zeros on the main diagonal and the values |i−j|^r in the ith row and jth column when i ≠ j.
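For illustration, a weight table of this form can be generated as follows (a sketch mirroring the convention just described, not the WKAPPA code itself):

    import numpy as np

    def disagreement_weights(k, r=1):
        # k x k table with zeros on the diagonal and |i - j|**r off the diagonal
        # (r = 1 linear, r = 2 quadratic)
        idx = np.arange(k)
        return np.abs(np.subtract.outer(idx, idx)) ** r

    print(disagreement_weights(3, 1))   # linear weights
    print(disagreement_weights(3, 2))   # quadratic weights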

If lab = TRUE then WKAPPA returns a 4 × 2 range where the first column contains labels which correspond to the values in the second column. The default is lab = FALSE.

Observation: Referring to Figures 1 and 2, we have WKAPPA(B7:D9,G6:J9) = WKAPPA(B7:D9,1) = .500951 and WKAPPA(M7:O9) = .495904. If we highlight a 4 × 2 range and enter WKAPPA(B7:D9, G6:J9,TRUE,.05) we obtain the output in range Y7:Y10 of Figure 3. For WKAPPA(M7:O9,,TRUE,.05) we obtain the output in range AA8:AB11 of Figure 7 of Cohen’s Kappa.

Real Statistics Data Analysis Tool: The Reliability data analysis tool supplied in the Real Statistics Resource Pack can also be used to calculate Cohen’s weighted kappa.

To calculate Cohen’s weighted kappa for Example 1, press Ctrl-m and choose the Reliability option from the menu that appears. Fill in the dialog box that appears (see Figure 7 of Cronbach’s Alpha) by inserting B7:D9 in the Input Range and G7:J9 in the Weights Range, making sure that Column headings included with data is not selected, and choosing the Weighted kappa option. The output is shown on the left side of Figure 3.

Alternatively, you can simply place the number 1 in the Weights Range field. If instead you place 2 in the Weights Range field (quadratic weights), you get the results on the right side of Figure 3.


Figure 3 – Weighted kappa with linear and quadratic weights

46 Responses to Weighted Cohen’s Kappa

  1. Arthur Karov says:

    Hi Charles, I’m a newbie in SPSS. Is it possible to do the whole thing in SPSS? Like calculation of weighted kappa, drawing the table, etc.? If so, what would be the command? How to determine quadratic weights for weighted kappa?

  2. Amber M says:

    Hi Charles,

    Thank you for the useful guidance here.

    I have two questions.

    First, if someone calculates unweighted Cohen’s K when actually their data are ordinal so it would be more appropriate for them to calculate weighted Cohen’s K, would the result be a more or less conservative estimate of reliability?

    Second, I have some categorical data, with 4 categories. Three of these are ordered, e.g., low, medium, high, but one of them is a sort of “other” category and so not really ordered. Would you classify this variable as ordinal or nominal?

    I have chosen to classify it as nominal, and therefore, have calculated unweighted Cohen’s K, yielding a significant k value of .509 (hence my first question on interpretation).

    Thanks so much.
    Amber

    • Charles says:

      Amber:
      1. I don’t have any reason to believe that it would be a conservative estimate. It would be a different estimate. Better to use the value for the weighted kappa.
      2. I guess it depends on what “other” really means. If it means “I don’t know”, you might be better off dropping those values and treating the variable as ordinal, especially if only a small percentage of the respondents answer “other”.
      Charles

  3. edward says:

    thanks to anyone will answer
    I have a table 2 x 2 with this data:
    16 0
    4 0

    With a calculator i get k= 0 with CI : from 0 to 0
    it looks so strange, is it correct?

    • Charles says:

      Edward,
      Sorry, but I don’t understand the point of analyzing such lopsided data.
      Charles

      • edward says:

        Hi Charles,
        I’m a very beginner and inexpert. I have to apply the cohen’s kappa to some table 2 x 2 showing the adhesion to a specific protocol. For example for the data in the table
        53 1
        1 5

        I get a kappa of +0,815 with CI: 0,565 to 1000

        But with the data

        16 0
        4 0

        i get k=0 with CI: 0 to 0

        I mechanically applied this calculator for my thesis; I’m really inexpert. You say these are lopsided data, so I can’t apply Cohen’s kappa?

  4. Daniele says:

    Hi Charles,

    thank you for this great explanation.
    For a paper I calculated the Weighted K (wK). However I’m wondering how to interpret my wK…
    Can I interpret my wK just like the unweighted K? So, for example, a wK greater than 0.61 corresponds to substantial agreement (as reported in https://www.stfm.org/fmhub/fm2005/May/Anthony360.pdf for the unweighted k)?

    Thank you for your help!

    Daniele.

    • Charles says:

      Daniele,
      Yes, I would think that the interpretation of weighted kappa is similar to unweighted kappa. Keep in mind that not everyone agrees with the rankings shown in Table 2 of the referenced paper (nor any other scale of agreement).
      Charles

      • Daniele says:

        Thanks for your reply!

        Yes, I know that there isn’t an agreement about the rankings… I found several types of ranks for the kappa interpretation.
        Can you suggest a reference of reliable (in your opinion) rankings?
        My field is neuroimaging (medicine), so it is not supposed to be an “exact science”…

        Best regards,

        Daniele.

  5. Rose Callahan says:

    Can this concept be extended to three raters (i.e., is there a weighted Fleiss kappa)?

    • Charles says:

      Rose,
      I don’t know of a weighted version of Fleiss kappa or a three-rater version of weighted kappa. Perhaps ICC or Kendall’s W will provide the required functionality for you.
      Charles

  6. Miran says:

    Dear Charles,

    Thank you for providing the overall information about kappa.

    I’m reviewing a statistical analysis used in a reliability study and kappa is widely used in it. However, in many cases using ordinal scores, they just said that kappa was used in the study. I’m wondering if I can tell whether they used weighted OR unweighted kappa in those papers without mention of the exact name. In addition, if they used the weighted kappa, can I distinguish the type of weighted kappa (linear or quadratic) without it being mentioned, if the statistical table shows the kappa value only?

    Regards,
    Miran

    • Charles says:

      Miran,
      I don’t know how you could determine this. My guess (and this is only a guess) is that unless they said otherwise they used the unweighted kappa.
      Charles

      • Miran says:

        Dear Charles,

        Thank you for your answer. So far I have assumed that a paper used the unweighted kappa unless it mentions the weighted kappa.

        Regards,
        Miran

  7. Chris says:

    I am curious about the application of weighted kappa in the following scenario. I had two raters complete a diagnostic checklist with 12 different criteria. The response to each criterion was either 1 or 0 (present or absent). If a specific number of criteria were present, then an overall criterion was coded 1 (if not, 0). Are dichotomous responses considered ordered in this case? Is weighted kappa the appropriate statistic for reliability?

    • Charles says:

      Chris,

      Dichotomous responses are generally considered to be categorical, although depending on what the data represents they could be considered to be ordered. E.g. Male = 0 and Female = 1 is not really ordered, while 0 = Low and 1 = High could be considered ordered.

      Regarding your specific case, I understand that if a rater finds that say 6 or more criteria are met then the score is 1, while if fewer than 6 are met then the rating is 0. This could qualify as ordered. Based on what you have described you might be able to use weighted kappa, but I would have to hear more about the scenario before I could give a definitive answer.

      Note that the coding that I have described throws away a lot of the data. You might be better off just counting the number of times the criteria are met and using this count as the rating. Then you could use weighted kappa with this number as the weights. You might also be able to use the intraclass correlation coefficient.

      Charles

  8. John says:

    Hi Charles,

    What is the best method of determining the correct predefined weight to use?

    Cheers,
    John

    • Charles says:

      John,
      You need to decide what weights to use based on your knowledge of the situation. The usual weights are linear and quadratic.
      Charles

  9. Deborah Oliveira says:

    Hi Charles,

    I would like to know what would be the minimum sample size for a reliability re-test of a newly developed questionnaire of 100 items, ordinal scale (1-5), considering 80% power (0.05 type I error) to detect an acceptable weighted Kappa coefficient ≥0.6 in a two-tail single group comparison. Could you please help me with that or provide me any good reference that contains this information?

    Thank you for your help.
    Deborah.

    • Charles says:

      Deborah,

      I found the following article on the Internet which may provide you with the information that you are looking for.

      http://www.ime.usp.br/~abe/lista/pdfGSoh9GPIQN.pdf

      Charles

      • Deborah Oliveira says:

        Hi Charles,

        Thank you so much for your help. I managed the sample size but now I have another problem. I am developing and evaluating the psychometric properties of a multidimensional psychological scale. One of the measurements is the reliability re-test, for which I handed out two copies of the same questionnaire for participants to complete each of them within an interval of 15 days. I now need to compare both results, for each participant, to see how much the outcomes have changed in between measurements. The scale has 100 items, each of them with 1-5 categorical responses (from never to always, for example). Because it is categorical, I have been advised to use weighted Kappa (0-1.0) for this calculation and I need a single final kappa score. Do you have any idea about which software and how to calculate it? I haven’t found anything explaining the practical calculation in software. Thank you!

        • Charles says:

          Deborah,
          The referenced webpage describes in detail how to calculate the weighted kappa. It also describes how to use the Real Statistics Weighted Kappa data analysis tool and the WKAPPA function.
          Charles

          • Rick says:

            I have 25 items that are rated 0-4 (from Unable to Normal). How should I calculate the inter-rater reliability between two raters and the intra-rater reliability between two sessions for each item? If this should be a weighted kappa, then how to calculate the 95% confidence interval in Excel? How to calculate a single final kappa score?

          • Charles says:

            Rick,
            I don’t completely understand your scenario. Are you measuring the 25 items twice (presumably based on different criteria)? Are you trying to compare these two ways of measuring? You might be able to use Weighted Kappa. The referenced webpage shows how to calculate the 95% confidence interval in Excel. I don’t understand what you mean by calculating a “single final kappa score”, since the usual weighted kappa gives such a final score.
            Charles

  10. Marchessoux says:

    Dear Charles,

    How do you compute the 95% CI for weighted Kappa? Is there anything already in your excel tool?

    Thanks in advance for your help

    Best regards
    Cédric

    • Charles says:

      Cédric,
      The calculation of the 95% CI for the unweighted version of Cohen’s kappa is described on the webpage Cohen’s Kappa.
      Shortly I will add the calculation of the 95% CI for the weighted Kappa to the website. I also plan to add support for calculating confidence intervals for weighted kappa to the next release of the Real Statistics Resource Pack. This will be available in a few days.
      Charles

    • Charles says:

      Cédric,
      I have now added support for s.e. and confidence intervals for Cohen kappa and weighted kappa to the latest release of the Real Statistics software, namely Release 3.8.
      Charles

  11. Niels says:

    Dear Charles,
    thank you for the add-on and all the good explanations!

    I will have rather large kappa and weights tables (20 items and weights ranging from 0 to 3). Can I extend the tables according to my needs or do I have to expect problems?

    Best,
    Niels

    • Charles says:

      Niels,
      You should be able to extend the tables. Alternatively you can use the Real Statistics WKAPPA function or the weighted kappa option in the Real Statistics Reliability data analysis tool.
      Charles

  12. Richard says:

    How would one calculate a standard error for the weighted kappa, and thus a p-value?

  13. Susana says:

    Hi Charles,

    What do you do if judges do not know ahead of time how many students are being interviewed. For example, judges are asked to identify off-road glances from a video and place them in 3 different categories. Judge1 may identify 10 glances, while Judge2 only 5. They may agree completely on those 5 identified by Judge 2, but do the other 5 non-identified glances count as disagreements then?

    • Charles says:

      Susana,
      One way to approach this situation could be to assign a 4th category, namely “non-identified glance”. It is then up to you to determine what weight you want to assign to category k x category 4 (for k = 1, 2, 3, 4).
      Charles

  14. Rob says:

    Hello Charles,

    In the literature a weighted kappa > .60 is considered good, but what is this based on? I can’t find any article that investigates this topic, so where does this .60 come from?

    greets

    Rob

  15. Klaus says:

    Your table of weights is a symmetric matrix with zeros in the main diagonal (i.e. where there is agreement between the two judges) and positive values off the main diagonal. Elsewhere, for example here http://www.medcalc.org/manual/kappa.php and here http://www.icymas.org/mcic/repositorio/files/Conceptos%20de%20estadistica/Measurement%20of%20Observer%20Agreement.pdf the main diagonal has values of 1.

    • Klaus says:

      I figured it out. Instead of using w = 1-(i/(k-1)) you are using w = i/(k-1). k is the number of categories and i the difference in categories.
      I checked the WKAPPA results for the example given here against a calculated example in the reference cited in my prior post (Kundel & Polansky) and it all works out.
      The nice thing about the WKAPPA function is that you can use a subjective set of weights and you are not limited to linear and quadratic weighting. Thanks!

      • Charles says:

        Klaus,
        I used zeros on the main diagonal instead of ones since it seemed more intuitive to me. As you point out, both approaches are equivalent.
        Charles

  16. Colin says:

    Sir

    BTW, the matrix in Figure 1 is not symmetric.

    Colin

    • Charles says:

      Colin.
      Another good catch. I inadvertently switched two cells. I have now changed the webpage with the symmetric weights that I had intended and should have used. Thanks for your diligence. As usual you have helped make the site better and more reliable for everyone.
      Charles

  17. Colin says:

    Sir
    I think the design of the predefined table of weights is a little arbitrary. Different people may make different tables of weights.
