Assessment of the Human Factors Analysis and Classification System (HFACS): Intra-Rater and Inter-Rater Reliability

Awatef Ergai, Tara Cohen, Julia Sharp, Doug Wiegmann, Anand Gramopadhye, Scott Shappell

Research output: Contribution to journal › Article › peer-review

Abstract

The Human Factors Analysis and Classification System (HFACS) is a framework for classifying and analyzing human factors associated with accidents and incidents. The purpose of the present study was to examine the inter- and intra-rater reliability of the HFACS data classification process.

Methods

A total of 125 safety professionals from a variety of industries were recruited from a series of two-day HFACS training workshops. Participants classified 95 real-world causal factors (five causal factors for each of the 19 HFACS categories) extracted from a variety of industrial accidents. Inter-rater reliability of the HFACS coding process was evaluated by comparing performance across participants immediately following training, and intra-rater reliability was evaluated by having the same participants repeat the coding process after a two-week delay.

Results

Krippendorff’s Alpha was used to evaluate the reliability of the coding process across the various HFACS levels and categories. Results showed the HFACS taxonomy to be reliable across both inter- and intra-rater assessments, with intra-rater coding producing slightly higher Alpha values.
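For readers unfamiliar with the statistic, the sketch below is a minimal, illustrative implementation of Krippendorff's Alpha for nominal data, the measurement level appropriate to categorical HFACS codes. It is not the authors' analysis code; the function name and data layout are assumptions for the example.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(ratings):
    """Krippendorff's Alpha for nominal data.

    ratings: list of units, where each unit is a list of the category
    labels assigned to it by the raters who coded it. Units coded by
    fewer than two raters contribute no pairing information and are
    skipped. Assumes at least two distinct categories occur overall.
    """
    coincidences = Counter()  # o_ck: coincidence matrix of value pairings
    for unit in ratings:
        values = [v for v in unit if v is not None]
        m = len(values)
        if m < 2:
            continue
        # Each ordered pair of ratings within a unit adds 1/(m - 1)
        for c, k in permutations(values, 2):
            coincidences[(c, k)] += 1.0 / (m - 1)

    categories = {c for c, _ in coincidences}
    n_c = {c: sum(coincidences[(c, k)] for k in categories) for c in categories}
    n = sum(n_c.values())

    # Observed vs. expected disagreement under the nominal metric
    d_o = sum(v for (c, k), v in coincidences.items() if c != k)
    d_e = sum(n_c[c] * n_c[k]
              for c in categories for k in categories if c != k) / (n - 1)
    return 1.0 - d_o / d_e

# Hypothetical example: three raters assign HFACS categories to four causal factors
codes = [["Decision Error", "Decision Error", "Decision Error"],
         ["Skill-Based Error", "Skill-Based Error", "Decision Error"],
         ["Adverse Mental State", "Adverse Mental State", "Adverse Mental State"],
         ["Decision Error", "Skill-Based Error", "Skill-Based Error"]]
print(f"alpha = {krippendorff_alpha_nominal(codes):.3f}")
```

Values near 1 indicate near-perfect agreement; values near 0 indicate agreement no better than chance. In practice, an established implementation such as the `krippendorff` package on PyPI would typically be preferred over a hand-rolled version.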

Conclusion

Results support the inter- and intra-rater reliability of the HFACS framework but also reveal additional opportunities for improving HFACS training and implementation.
Original language: American English
Journal: Safety Science
Volume: 82
DOIs
State: Published - Feb 2016

Keywords

  • HFACS
  • inter-rater reliability
  • intra-rater reliability
  • classification
  • human error

Disciplines

  • Aviation Safety and Security
  • Other Psychology

Cite this