Problem based learning in continuing medical education: a review of controlled evaluation studies
Smits P B, Verbeek J H, de Buisonje C D
Authors' objectives To find out whether there is any evidence that problem-based learning in continuing medical education is effective.
Searching MEDLINE, EMBASE, PsycLIT, ERIC, the Cochrane Library, and the Research and Development Resource Base in Continuing Medical Education (on the Internet) were searched from 1974 to August 2000. The following keywords were used: 'problem-based (PBL)', 'practice-based', 'self-directed', 'learner centred' and 'active learning'. The search results were combined with another search using the keywords 'continuing medical education (CME)', 'continuing professional development (CPD)', 'post-professional', 'postgraduate' and 'adult learning'. The reference lists of all the included studies were also checked.
Study selection Study designs of evaluations included in the review: Randomised controlled trials (RCTs) or non-randomised controlled trials with a pre-test post-test design were included.
Specific interventions included in the review: Studies in which the educational intervention was problem-based, and in which the learning process in essence resembled the methods used at McMaster University or the University of Maastricht (see Other Publications of Related Interest nos. 1-2), were eligible for inclusion. The method consists of a tutor-facilitated, problem-based learning session in which a small self-directed group starts with a brainstorming session. A problem is posed that challenges the participants' knowledge and experience. Learning goals are formulated by consensus and new information is learnt by self-directed study. The session ends with group discussion and evaluation. The control groups in the included studies received other educational interventions (lecture-based learning, or large group discussion with a didactic lecture or Internet resources) or no educational intervention.
Participants included in the review: Studies of people undertaking postgraduate and continuing medical education and continuing professional development were eligible for inclusion. The included studies were of general practitioners and staff of an out-patient clinic.
Outcomes assessed in the review: The outcome variables sought were the participants' knowledge, performance and satisfaction, and the patients' health.
How were decisions on the relevance of primary studies made? The authors do not state how the papers were selected for the review, or how many of the reviewers performed the selection.
Assessment of study quality Validity was assessed on the basis of five quality criteria: randomisation, length and completeness of follow-up, intention-to-treat analysis, blinding, and similarity of the groups at baseline. Each criterion was allotted a maximum of 10 points, giving a maximum possible score of 50 points. Studies with a total score of at least 25 points were considered to be of high quality, while those scoring less than 25 points were considered to be of low quality. Two reviewers independently assessed the validity of the included studies. The authors do not state how any disagreements were resolved.
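The scoring rule described above can be sketched as a small classifier. This is a minimal illustration, assuming equal 0-10 scores per criterion exactly as stated; the review reports only the criteria, the point maxima, and the 25-point threshold, so the function and criterion names here are hypothetical.

```python
# Illustrative sketch of the review's quality-scoring rule: five criteria,
# each worth up to 10 points, with >= 25 of 50 counting as high quality.
# Names are hypothetical; only the rule itself comes from the review.

QUALITY_CRITERIA = (
    "randomisation",
    "follow_up",            # length and completeness of follow-up
    "intention_to_treat",
    "blinding",
    "baseline_similarity",  # groups similar at baseline
)

def classify_study(scores: dict) -> str:
    """Return 'high' or 'low' quality from per-criterion scores (0-10 each)."""
    for criterion in QUALITY_CRITERIA:
        value = scores.get(criterion, 0)
        if not 0 <= value <= 10:
            raise ValueError("each criterion is scored between 0 and 10 points")
    total = sum(scores.get(criterion, 0) for criterion in QUALITY_CRITERIA)
    return "high" if total >= 25 else "low"
```

For example, a study scoring 10, 10, 5, 0 and 0 on the five criteria totals exactly 25 points and is classed as high quality under this threshold.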
Data extraction The authors do not state how the data were extracted for the review, or how many of the reviewers performed the data extraction.
Data were extracted into the following categories: participants, study design, intervention, control, number of participants, outcomes, and follow-up.
Methods of synthesis How were the studies combined? Evidence for the effectiveness of problem-based learning was graded as: strong, if there was a positive outcome in two high-quality studies; moderate, if there was a positive outcome in one high-quality and one low-quality study; limited, if there was a positive outcome in one high-quality study or in one or more low-quality studies; and none, if the outcomes were contradictory or there was no outcome.
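The four-level grading rule above is, in effect, a decision table over counts of positive studies. A minimal sketch, assuming the inputs are simply the counts of high- and low-quality studies with a positive outcome plus a flag for contradictory results; the function and parameter names are illustrative, not from the review.

```python
# Illustrative sketch of the evidence-grading rule quoted above.
# Only the strong/moderate/limited/none thresholds come from the review;
# the function signature is a hypothetical encoding of them.

def grade_evidence(high_quality_positive: int,
                   low_quality_positive: int,
                   contradictory: bool = False) -> str:
    """Grade evidence from counts of studies with a positive outcome."""
    if contradictory or (high_quality_positive + low_quality_positive) == 0:
        return "none"                 # contradictory outcome or no outcome
    if high_quality_positive >= 2:
        return "strong"               # two high-quality studies
    if high_quality_positive == 1 and low_quality_positive >= 1:
        return "moderate"             # one high-quality plus one low-quality
    return "limited"                  # one high-quality, or only low-quality
```

Under this encoding, one positive high-quality study alone, or any number of positive low-quality studies alone, yields "limited" evidence, matching the rule as stated.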
How were differences between studies investigated? There was no formal assessment of heterogeneity.
Results of the review Six studies (n=382) were included: 2 RCTs (n=80) and 4 controlled trials (n=302). Three studies (including both RCTs) used an educational control group, while the other 3 used a no-intervention control group.
The outcome measurement was often restricted to a single variable. No study measured both of the preferred outcome variables, i.e. the participants' performance and the patients' health. In one of the high-quality studies (problem-based learning via email versus use of Internet resources), neither educational programme increased the participants' knowledge, but the group size was small. The other high-quality study (problem-based versus lecture-based learning) showed positive results for problem-based learning in terms of the participants' knowledge, clinical reasoning and satisfaction. However, it is unclear whether these effects can be attributed to the problem-based learning format, because the periods of educational exposure differed between groups.
Three studies compared problem-based learning with another educational format. No evidence was found that problem-based learning affected the participants' knowledge and performance, and there was only moderate evidence that it increased the participants' satisfaction. None of the studies measured the patients' health. The other three studies compared problem-based learning with no educational intervention and were of a low quality. They showed limited evidence that problem-based learning was effective in improving the participants' knowledge and performance, and the patients' health.
Authors' conclusions This review of controlled evaluation studies found limited evidence that problem-based learning in continuing medical education increased the participants' knowledge and performance, and the patients' health. There was moderate evidence that doctors are more satisfied with problem-based learning.
CRD commentary The review question and study selection criteria were clearly stated, some details of the review process were given, and the literature search seemed comprehensive. However, no attempt was made to find unpublished studies, so it is possible that some were missed. The studies were adequately described, and a validity assessment was undertaken and reported. The synthesis of the data seemed appropriate in view of the limitations of the included studies.
The authors' conclusions seem suitably cautious.
Implications of the review for practice and research Practice: The authors did not state any implications for practice.
Research: The authors state that further RCTs are needed to compare educational methods. Educational methods should be clearly defined and practice controlled. The aims of the education should be clarified, outcome variables should correspond with objectives, and preferably several different variables should be measured, including the participants' performance and the patients' health.
Funding Netherlands Organisation of Scientific Research; Netherlands School of Occupational Health.
Bibliographic details Smits P B, Verbeek J H, de Buisonje C D. Problem based learning in continuing medical education: a review of controlled evaluation studies. BMJ 2002; 324: 153-157
Other publications of related interest
1. Maudsley G. Do we all mean the same thing by problem-based learning? A review of the concepts and formulation of the ground rules. Acad Med 1999;74:178-85.
2. Barrows HS. A taxonomy of problem-based learning methods. Med Educ 1986;20:481-6.
This additional published commentary may also be of interest. Rhyne RL, Cosgrove EM. Problem-based learning has similar outcomes to traditional continuing medical education. Evidence-based Healthcare 2002;6:131.
Indexing Status Subject indexing assigned by NLM
MeSH Education, Medical, Continuing /standards; Evaluation Studies as Topic; Humans; Problem-Based Learning /standards; Randomized Controlled Trials as Topic
AccessionNumber 12002008071
Date bibliographic record published 28/02/2003
Date abstract record published 28/02/2003
Record Status This is a critical abstract of a systematic review that meets the criteria for inclusion on DARE. Each critical abstract contains a brief summary of the review methods, results and conclusions, followed by a detailed critical assessment of the reliability of the review and the conclusions drawn.
|
|