Decision aids: are they worth it? A systematic review
Estabrooks C, Goel V, Thiel E, Pinfold P, Sawka C, Williams I
To identify the outcomes influenced by consumer decision aids (CDAs), and the particular effects of CDAs on these outcomes.
The search strategy of O'Connor et al. (see Other Publications of Related Interest no.1) was used initially and then augmented. The authors then compared the results found in O'Connor's annotated bibliography with their own, and then ran their own search strategy on selected additional databases to check for publications from 1980 onwards. A variety of search methods were employed including MeSH terms and keywords. Ancestry searches were conducted and selected journals were handsearched from 1990 to 1999. Dissertations and conference abstracts were followed up to determine if a forthcoming publication or recent publication was available from the authors. In addition, 12 investigators in the field of consumer decision-making were sent the list of retrieved articles and asked to comment on possible omissions or pending publications. Only articles published in the English language were eligible for inclusion.
Study designs of evaluations included in the review
No inclusion criteria relating to the study design were specified.
Specific interventions included in the review
To be included in the review, an intervention needed to be a structured CDA or a structured decision aid combined with a personal support intervention. The decisions needed to be real, not hypothetical, and involve treatment or screening. The researchers included lifestyle and treatment decisions, but excluded clinical trial entry decisions, advance directive decisions, informed consent decisions and lifestyle-only decisions (e.g. smoking cessation).
Participants included in the review
The participants needed to be the decision-makers and, as such, were 'health consumers'. Practitioners were considered to be clients if they were the subject of a treatment or screening decision intervention.
Outcomes assessed in the review
The studies had to include at least one quantified outcome that was hypothesised to result from the intervention. The five categories of outcome were treatment preference, actual decision, decision-making process, knowledge and decision aid evaluation.
How were decisions on the relevance of primary studies made?
The search strategy generated over 500 titles and abstracts. These were assessed by consensus of three members of the team, based on the preliminary inclusion criteria. A more detailed review of the 272 articles thus identified resulted in 96 reports being included in a full inclusion/exclusion screening. This was performed by two members of the review team working independently and resolving any disagreements. The group developed categories and definitions of the various types of decision tool or aid, in order to assist in this process.
Assessment of study quality
Validity assessment tools were developed on the basis of published criteria (see Other Publications of Related Interest nos.2-4). Separate tools and accompanying data dictionaries (sets of definitions) were developed for randomised controlled trials (RCTs) and observational studies (see Other Publications of Related Interest no.5).
The RCTs and observational studies were both assessed on the basis of 6 categories. For RCTs, these were: design and allocation; recruitment; inclusion and exclusion; description of the intervention; statistical analysis; and outcome measurement. For observational studies, these were: design; inclusion and follow-up; control of confounders; data collection and outcome measurement; statistical analysis and conclusions; and discussion of design rationale and limitations. The maximum possible total score was 35 for RCTs and 28 for observational studies. Each subcategory was scored and rated as low, medium or high, and the final rating was determined by the pattern of rating on the subcategories. Each study was evaluated independently by two reviewers and any disagreements were resolved. A third member of the team independently rated a subset of both RCTs and observational studies and, once the interpretation of the tool definitions was clarified, any differences were resolved. A final and independent rating was made by a fourth team member on the same subset. The ratings were in agreement and no modifications or further discussions were necessary.
The data were extracted independently by two members of the team and any disagreements were resolved by consensus. Data were extracted on the setting, design, participants, decision types, decision aid features and outcomes.
Methods of synthesis
How were the studies combined?
A narrative synthesis was undertaken. After reviewing and extracting the data from the final 12 studies, the authors grouped the outcomes into five categories for data synthesis: treatment preference, the decision, the decision-making process, knowledge and decision aid evaluation. The authors mentioned possible publication bias but did not consider that this seriously affected the conclusions of the review.
How were differences between studies investigated?
The authors stated that their group of studies was heterogeneous: there was large variation in the interventions applied in these studies, and considerable variation in the participants' medical conditions and the nature of the decisions being taken.
Results of the review
Twenty-two reports representing 20 studies met the inclusion criteria. Eight studies with a low quality rating were excluded. The results were synthesised from the remaining 12 studies (8 RCTs and 4 observational studies). The RCTs involved a total of 3,432 participants. The observational studies had 1,457 participants; two reports, however, were based on the same study, although there was some variation in the number of participants.
Structured decision aids did not appear to influence treatment preferences. Of the 7 studies that assessed this outcome, only 2 observational studies reported that decision aids exerted a statistically significant effect on this outcome. These studies investigated men with benign prostatic hyperplasia using a shared decision programme, and men with prostate cancer using a Patient Outcomes Research Team video decision aid.
With the exception of decisions to obtain vaccination, structured decision aids did not appear to influence the actual treatment or screening decision. Six studies reported non-significant findings for this outcome, 3 studies did not measure it, and one reported a partial change in the decision.
The decision-making process.
Four studies did not report any outcomes in this category. Most reports of these measures did not demonstrate an influence of the structured decision aid on the decision-making process. The following outcomes were reported as statistically significant on at least one occasion: perceived choice, decisional conflict, realistic expectations, and satisfaction with the decision-making process. Too few outcomes were reported in this category to draw conclusions about their relevance or their sensitivity to the intervention.
Improvement in knowledge scores appeared to result from the use of structured decision aids. Seven studies reported statistically significant differences in knowledge. Of these 7 studies, one reported a difference by time, one reported a difference by group on only one of ten questions, 4 reported a significant difference by group, and one reported no difference by group but a difference by time. Three studies did not measure knowledge as an outcome.
Decision aid evaluation.
Just 3 of the 12 studies reported this assessment. Generally, the reports were favourable when the structured decision aids were assessed for the consumers' general satisfaction, readability and ease of use.
The effects of structured CDAs remain underdetermined. There was little evidence that they significantly influence treatment preferences or actual treatment or screening decisions, with the possible exception of increased vaccination rates in both of the immunisation studies reviewed. CDAs do appear to influence knowledge; however, it is unclear whether they are better than good-quality educational materials. In general, the changes are greatest for attitudes and knowledge, then behaviour, with effects on other outcomes being negligible or small.
The reviewers tackled a broad review question, determining the outcomes of structured decision aids. The inclusion criteria regarding the participants and interventions were clearly defined, with procedures in place for resolving disagreements. Observational studies, which are known to be of poorer quality and more prone to bias, were included in this review. However, the reviewers performed a validity assessment and excluded the lowest quality research. The search strategy appeared comprehensive but was restricted to English-language papers, so some relevant research may have been missed.
The review process was thorough, with two reviewers working independently on the inclusion or exclusion of studies. Judgements of validity were made by several members of the team, while the data were extracted independently by two members. The reviewers appropriately summarised their findings with a narrative synthesis, rather than a statistical meta-analysis, given the variation in the participant and disease profiles and in the decision aids used.
The reviewers' conclusions are sound and they provide pointers to further research.
Implications of the review for practice and research
Practice: The authors did not state any implications for practice.
Research: The authors state that there is a need for more rigorous study designs in this field, particularly RCTs, to enable the effectiveness of the intervention on the outcomes of interest to be evaluated. Further research is also needed on clients' satisfaction with the information provided, and on whether and how patients integrate decision aids into the decision-making process. The authors suggest that two important questions should be addressed: (1) Do different forms of CDA influence different outcomes?; (2) Do different forms of CDA influence the same outcomes differently? There is a need to determine whether the relevance of CDAs can be demonstrated in populations other than those studied. It is also necessary to explore which outcomes CDAs can reasonably be expected to influence and what constitutes an appropriate outcome. There is a need to show whether CDAs are superior to educational materials in increasing knowledge. Finally, there is a need to address the question of whether the value of CDAs is qualitatively and quantitatively important enough to justify their cost.
Estabrooks C, Goel V, Thiel E, Pinfold P, Sawka C, Williams I. Decision aids: are they worth it? A systematic review. Journal of Health Services Research and Policy 2001; 6(3): 170-182
Other publications of related interest
1. O'Connor AM, Drake ER, Fiset VJ, Page J, Curtin D, Llewellyn-Thomas HA. Annotated bibliography: studies evaluating decision-support interventions for patients. Can J Nurs Res 1997;29:113-20.
2. Chalmers TC, Smith H, Blackburn B, Silverman B, Schroeder B, Reitman D, et al. A method for assessing the quality of a randomized control trial. Control Clin Trials 1981;2:31-49.
3. NHS Centre for Reviews and Dissemination. Undertaking systematic reviews of research on effectiveness: CRD's guidance for those carrying out or commissioning reviews. York: University of York, NHS Centre for Reviews and Dissemination; 1996. CRD Report No. 4.
4. Crombie IK. The pocket guide to critical appraisal: a handbook for health care professionals. Dundee: BMJ Publishing Group; 1996.
5. Estabrooks CA, Goel V, Thiel EC, Pinfold SP, Sawka C, Williams JI. Consumer decision aids: where do we stand? A systematic review of structured consumer aids. Toronto: Institute for Clinical Evaluative Sciences; 2000. ICES Technical Report No. 00-01-TR.
Subject indexing assigned by NLM
Canada; Consumer Participation; Decision Support Techniques; Female; Great Britain; Health Services Research; Humans; Male; Outcome Assessment (Health Care); United States
This is a critical abstract of a systematic review that meets the criteria for inclusion on DARE. Each critical abstract contains a brief summary of the review methods, results and conclusions followed by a detailed critical assessment on the reliability of the review and the conclusions drawn.