Abstract
When using qualitative coding techniques, establishing inter-rater reliability (IRR) is a recognized method of ensuring the trustworthiness of a study when multiple researchers are involved in coding. However, the process of manually determining IRR is not always fully explained in manuscripts or books. This is especially true when specialized qualitative coding software is used, since these software packages can often calculate IRR automatically while providing little explanation of the methods used. Approaches to coding without commercial software vary greatly, including using non-specialized word processing or spreadsheet software and marking transcripts by hand with colored highlighters, pens, and even sticky notes. This array of coding approaches has led to a variety of techniques for calculating IRR. It is important that these techniques be shared, since IRR calculation is automatic only when specialized coding software is used. This study summarizes a possible approach to establishing IRR for studies in which researchers use word processing or spreadsheet software (e.g., Microsoft Word® and Excel®). Additionally, the authors provide their recommendations, or "tricks of the trade," for future teams interested in calculating IRR between members of a coding team without specialized software.
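The abstract does not specify which IRR statistic the authors compute, but two common hand-calculable measures for a pair of coders are percent agreement and Cohen's kappa. The sketch below is a hypothetical illustration of both, assuming each coder has assigned exactly one code per transcript segment (e.g., codes copied out of an Excel column); the paper's actual procedure may differ.

```python
# Hypothetical sketch: percent agreement and Cohen's kappa for two coders.
# Assumes each list holds one code label per transcript segment, in the
# same segment order for both coders.
from collections import Counter


def percent_agreement(codes_a, codes_b):
    """Fraction of segments on which the two coders chose the same code."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)


def cohens_kappa(codes_a, codes_b):
    """Agreement corrected for chance, using each coder's code frequencies."""
    n = len(codes_a)
    p_observed = percent_agreement(codes_a, codes_b)
    freq_a = Counter(codes_a)
    freq_b = Counter(codes_b)
    # Chance agreement: probability both coders pick the same code if each
    # chose independently according to their own marginal frequencies.
    p_expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)
```

Percent agreement is the easier statistic to compute by hand in a spreadsheet (a column of match/no-match flags, then an average), while kappa additionally requires tallying each coder's code frequencies to estimate chance agreement.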
| Original language | English |
|---|---|
| Journal | ASEE Annual Conference and Exposition, Conference Proceedings |
| Volume | 2017-June |
| State | Published - Jun 24 2017 |
| Event | 124th ASEE Annual Conference and Exposition, Columbus, United States. Duration: Jun 25 2017 → Jun 28 2017 |