Of 128 studies that met the inclusion criteria, only 77 (sample sizes not reported) provided appropriate comparisons and were included in the meta-analysis; 45 were randomised controlled trials (RCTs). Most studies evaluated parent training as a standalone programme, most often compared with no treatment. Thirty-three percent of the studies assessed baseline equivalence between groups.
The overall weighted effect size across all outcomes was 0.34 (95% CI 0.29 to 0.39), which indicated a significant mean difference between treatment and comparison groups. There was substantial heterogeneity (Q = 330.9, p<0.001).
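The pooled estimate, confidence interval, and Q statistic reported above follow the standard inverse-variance fixed-effect formulas. A minimal sketch of those calculations, using made-up effect sizes and variances rather than the review's data:

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance weighted mean effect size, 95% CI, and Cochran's Q."""
    weights = [1.0 / v for v in variances]          # weight = 1 / sampling variance
    w_sum = sum(weights)
    d_bar = sum(w * d for w, d in zip(weights, effects)) / w_sum
    se = math.sqrt(1.0 / w_sum)                     # standard error of pooled estimate
    ci = (d_bar - 1.96 * se, d_bar + 1.96 * se)     # 95% confidence interval
    # Cochran's Q: weighted squared deviations from the pooled mean
    q = sum(w * (d - d_bar) ** 2 for w, d in zip(weights, effects))
    return d_bar, ci, q

# Hypothetical effect sizes (Cohen's d) and sampling variances for three studies
d_bar, ci, q = fixed_effect_meta([0.2, 0.4, 0.5], [0.01, 0.02, 0.04])
```

A Q value well above its degrees of freedom (number of studies minus one), as here, signals heterogeneity beyond sampling error, which is why the review's large Q motivates the mixed-effects re-analysis described below.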
Programme components that were associated with significantly larger effects on parental outcomes (in regression analyses controlling for methodological rigour and parent self-report) included positive interactions with the child (regression weight 0.198), emotional communication skills (regression weight 0.437) and practising with their own child (regression weight 0.375). Programme components that were associated with smaller effects on parental outcomes included problem solving (regression weight -0.247), promoting children's cognitive/academic skills (regression weight -0.243) and ancillary services (regression weight -0.205).
Programme components that were associated with significantly larger effects on child behaviours (in regression analyses controlling for methodological rigour and parent self-report) included positive interactions with the child (regression weight 0.284), time out (regression weight 0.170), consistent responding (regression weight 0.333) and practice with the child (regression weight 0.234). A programme component associated with smaller effects on child outcomes was promoting children's social skills (regression weight -0.198).
Four of the identified components (practice with the parents' own child, teaching skills related to emotional communication, teaching parents to interact positively with their children and disciplinary consistency) remained significant predictors of effect size when a random-effects variance component was included in a mixed-effects model, a more conservative test of association.
The fail-safe N calculation suggested that 250 unpublished studies with non-significant results would be required to substantially change the overall effect size, which suggested that publication bias was unlikely.
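Fail-safe N calculations of this kind typically follow Rosenthal's method: they ask how many unpublished zero-effect studies would be needed to raise the combined p value above the significance threshold. A sketch under that assumption, with illustrative z scores rather than the review's data (α = .05, one-tailed, so z_alpha = 1.645):

```python
import math

def fail_safe_n(z_scores, z_alpha=1.645):
    """Rosenthal's fail-safe N: the number of unpublished zero-effect studies
    needed to make the combined result non-significant (one-tailed alpha)."""
    k = len(z_scores)
    z_sum = sum(z_scores)
    # Combined z with n_fs added null studies: z_sum / sqrt(k + n_fs).
    # Solving z_sum / sqrt(k + n_fs) = z_alpha for n_fs gives:
    n_fs = (z_sum ** 2) / (z_alpha ** 2) - k
    return max(0, math.floor(n_fs))

# Hypothetical z scores from five significant studies
n = fail_safe_n([2.0, 1.8, 2.5, 1.2, 2.2])
```

A common rule of thumb is that the result is robust when the fail-safe N exceeds 5k + 10, where k is the number of included studies; 250 unpublished null studies against 77 included ones comfortably clears that bar.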