Evaluating the Reliability of the Human Factors Analysis and Classification System

Tara N. Cohen, Douglas A. Wiegmann, Scott A. Shappell

Research output: Contribution to journal › Article › peer-review

Abstract

INTRODUCTION: This paper examines the reliability of the Human Factors Analysis and Classification System (HFACS) as a tool for coding human error and the contributing factors associated with accidents and incidents.

METHODS: A systematic review of articles published across a 13-yr period (2001-2014) revealed a total of 14 peer-reviewed manuscripts that reported data concerning the reliability of HFACS.

RESULTS: The majority of these papers reported acceptable levels of interrater and intrarater reliability.

CONCLUSION: Reliability levels were higher with increased training and sample sizes. Likewise, when deviations from the original framework were minimized, reliability levels increased. Future applications of the framework should consider these factors to ensure the reliability and utility of HFACS as an accident analysis and classification tool.
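The abstract does not state which reliability statistics the reviewed studies reported. As an illustration only, the sketch below computes Cohen's kappa, a common interrater-reliability measure, for two hypothetical raters assigning HFACS unsafe-act categories to the same set of causal factors; all names and data here are invented for the example.

  # Hypothetical sketch: Cohen's kappa for two raters assigning HFACS
  # categories to the same causal factors. Kappa corrects raw percent
  # agreement for the agreement expected by chance.
  from collections import Counter

  def cohens_kappa(rater_a, rater_b):
      """Cohen's kappa for two equal-length lists of category labels."""
      n = len(rater_a)
      observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
      counts_a, counts_b = Counter(rater_a), Counter(rater_b)
      labels = set(counts_a) | set(counts_b)
      expected = sum(counts_a[l] * counts_b[l] for l in labels) / n ** 2
      return (observed - expected) / (1 - expected)

  # Invented example: two raters coding ten factors into unsafe-act categories.
  rater_a = ["skill", "skill", "decision", "perceptual", "violation",
             "skill", "decision", "decision", "skill", "violation"]
  rater_b = ["skill", "decision", "decision", "perceptual", "violation",
             "skill", "decision", "skill", "skill", "violation"]
  print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # kappa = 0.71

With this example data, eight of ten codes agree (observed agreement 0.80) against a chance-expected agreement of 0.30, giving kappa of about 0.71, a level conventionally described as substantial agreement.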
Original language: American English
Journal: Aerospace Medicine and Human Performance
Volume: 86
State: Published - Aug 2015

Keywords

  • HFACS
  • Human Factors Analysis and Classification System
  • human error
  • accident analysis
  • reliability
  • error analysis

Disciplines

  • Aviation Safety and Security
  • Other Psychology
