Twenty-eight studies were included. These appeared to comprise one randomised study, two comparative studies, nine prospective cohort studies, four prospective studies, five retrospective reviews and seven observational studies. The total number of participants was not reported.
Direct observation identified a larger number of problems than other methods (six studies). Incident report review identified the fewest problems in most comparisons; chart review consistently identified more events than incident report review (13 studies). Findings comparing chart review with trigger tools were inconsistent (five studies). Overlap in the events identified by different methods was rare. Direct observation was the method most likely to identify problems found with other tools; all events detected by incident report review or chart review were also detected by direct observation.
Sensitivity, specificity and positive predictive values differed widely between the studies reporting these data. Incident report review was more specific than other methods in identifying problems (three studies), but was less sensitive than trigger tools (four studies). The positive predictive value of trigger tools ranged from 0% to 100% (six studies), depending on the design of the tool.
The trigger tool was the most time-efficient (two studies) and least labour-intensive assessment method, followed by incident report review and then direct observation (one study).
Other findings were reported in the review.