Background and Objectives for the Systematic Review
The RTI International-University of North Carolina Evidence-based Practice Center (RTI-UNC EPC) used an ongoing review, Strategies to Improve Mental Health Care for Children and Adolescents (SIMHC), to generate a report on the additional information gained by including data from clinicaltrials.gov. The purpose of the report was to summarize the evidence on strategies to improve mental health care for children through quality improvement (QI) strategies and interventions with proven effectiveness (e.g., evidence-based practices [EBPs]). The rationale for the topic was to understand how to bridge the gap between observed and achievable processes and outcomes through strategies that target changes in the organization and delivery of mental health services.
Conducting a supplemental transparency project on this review afforded an opportunity to explore additional sources of information on the included strategies, which are generally complex, systems-focused, and underreported. To achieve this goal, we compared information from the published and unpublished sources included in the review with information in clinicaltrials.gov.
In addition to this primary goal, we had three additional goals. First, we wanted to understand the state of reporting and reporting requirements on a topic of increasing importance: quality improvement, implementation, and dissemination. Despite advances in the evidence base about interventions for treating mental health conditions in children, national health outcomes remain suboptimal, in part because of the failure of systems and providers to adopt QI strategies and interventions with proven efficacy. Given the gap between observed and achievable processes and outcomes, the next critical step is the adoption of effective QI strategies and the development of strategies to implement or disseminate effective interventions.1-3 These strategies are complex and may include multiple components, caregivers, or systems. Closing the gap requires information not just on the outcomes of these complex interventions but also on study conduct and processes, to allow interpretation of results, assessment of their applicability, and scale-up. To achieve this goal, we reached out to authors to understand the utility of clinicaltrials.gov and other archives (e.g., the World Health Organization [WHO] International Clinical Trials Registry Platform and NIH RePORTER) for information on implementation processes.
Second, we wanted to investigate reporting shortcomings for complex study designs, such as cluster randomized controlled trials (cRCTs). cRCTs require advanced analytic methods (hierarchical linear modeling, for example) that account for clustering at each level of recruitment; a minimal illustration appears below. To date, our investigation has revealed that a substantial proportion of the studies included in the SIMHC review are cRCTs (10 cRCTs of 17 included studies4-20). However, the published data on these trials were woefully inadequate and did not always permit an independent assessment of the effects of the intervention. These inadequacies hinder not only higher order analyses, such as risk of bias assessment, but also basic calculations of effect size and precision, because of poor reporting of retention at the multiple levels of recruitment in a cRCT. To achieve this goal, we sought information from clinicaltrials.gov on additional design details and, when they were not available, sought to understand the impediments to reporting through outreach to study authors.
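The sketch below illustrates the kind of cluster-aware analysis a cRCT requires, using a hierarchical linear model with a random intercept for each cluster. It is a minimal, hypothetical example: the data are simulated and the variable names (outcome, treated, clinic) are invented for illustration, not drawn from any included study.

```python
# Minimal sketch of a cluster-aware analysis for a cRCT using a
# hierarchical (mixed-effects) linear model. All data and variable
# names here are simulated/hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate 20 clinics (clusters) randomized 1:1, 15 patients per clinic.
n_clinics, n_per = 20, 15
clinic = np.repeat(np.arange(n_clinics), n_per)
treated = np.repeat(np.arange(n_clinics) % 2, n_per)             # cluster-level assignment
clinic_effect = np.repeat(rng.normal(0, 0.5, n_clinics), n_per)  # induces within-cluster correlation
outcome = 0.3 * treated + clinic_effect + rng.normal(0, 1, n_clinics * n_per)

df = pd.DataFrame({"outcome": outcome, "treated": treated, "clinic": clinic})

# A random intercept for clinic accounts for clustering; ignoring it
# would understate the standard error of the treatment effect.
model = smf.mixedlm("outcome ~ treated", data=df, groups=df["clinic"])
print(model.fit().summary())
```

Assessing such a model independently requires knowing retention at both the cluster and patient levels, which is precisely the information that was poorly reported in the included trials.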
Third, we wanted to understand whether impediments to publication exist for pragmatic trials and systems interventions and, if so, why. As noted above, we sought to understand these impediments through outreach to study authors.
Key Questions
Our key questions (KQs) focused on the utility of clinicaltrials.gov for the systematic review. We also explored the additional issues described above that are specific to this review's complex interventions and study designs:
- Which studies were in the EPC report alone, in clinicaltrials.gov alone, or in both?
- For completed studies that were in both sources:
- What were the differences, if any, in pre-specified outcome measures, statistical plan, reported study size, retention, study conduct, and other details of study design in the peer-reviewed literature vs. clinicaltrials.gov?
- Were results reported in clinicaltrials.gov for any of the studies? If so, what were the differences, if any, between the results reported in the peer-reviewed literature and those in clinicaltrials.gov?
- For studies in clinicaltrials.gov that were not completed or that were discontinued:
- For the discontinued studies, were there reasons given for discontinuation? If so, what were they?
- For studies that are ongoing but not completed, what was the date of initiation of the studies? Are the studies proceeding according to the original schedule or is there information in clinicaltrials.gov indicating a delay in completion? If there is a delay in completion, what is the reason given?
- For studies that are completed but not published, what are the reasons for delay in or lack of publication?
- For included studies with limited or no information on study processes and conduct in clinicaltrials.gov, what, if any, publicly available sources provide or can provide information on implementation processes? What are the constraints to producing and disseminating this information? What is the perceived utility of clinicaltrials.gov as an archive for such information?
- What is the impact on the conclusions of the EPC report with and without the information from clinicaltrials.gov? What would be the impact on the strength of evidence (including impact of knowledge of outcomes measured in studies but not reported in the peer reviewed literature)?
Methods
KQ 1
We updated our searches for the SIMHC draft report and then compared the yield with clinicaltrials.gov, using a dual independent review process.
KQ 2
- For studies with information in both the peer-reviewed literature and clinicaltrials.gov, we extracted and compared the results using a dual review process, with a second reviewer checking the first reviewer's abstractions.
- For studies with differences in reporting by source, we reached out to study authors via email and phone interview, if necessary, to understand the reasons for the differences.
KQ 3
- For discontinued studies, we planned to reach out to authors via email to identify reasons for discontinuation.
- For ongoing incomplete studies, we supplemented information in clinicaltrials.gov with additional information from study authors via email.
- For completed but unpublished studies, we planned to reach out to the study authors via email to identify reasons for the lack of publication.
KQ 4
We reached out to authors of included studies to ask about their reasons for use or non-use of clinicaltrials.gov or other archive sites for information on study conduct and processes.
KQ 5
We integrated the information for KQs 1-4, using data from searches; abstraction from clinicaltrials.gov; and email, personal interviews, and any additional information provided by authors. We planned to update the strength of evidence and conclusions of the SIMHC report if we found relevant results.
Table 1 provides the questions for email or personal interview. These are general questions, to be tailored for each interviewee. We obtained IRB exemption before conducting email interviews. We planned a minimum of two email and two telephone outreach attempts before categorizing investigators as non-responders.
Introduction to the interview:
The RTI-UNC Evidence-based Practice Center is conducting a systematic review of strategies to improve mental health care for children and adolescents. In addition, our funder, the Agency for Healthcare Research and Quality, has requested an additional investigation of the validity and reliability of clinicaltrials.gov as a potential additional source of information on study conduct, processes, and results. Your study [xxx, has been included/is eligible for inclusion] in this review. We are reaching out to you to obtain some additional details about the reporting of your study. Thank you for agreeing to answer our questions.
Authors | Questions |
---|---|
For authors of clinical trials included in the report that do not have a clinicaltrials.gov listing (N=8)4,5,8,10,11,13-15,17 | |
For authors of clinical trials included in the report that have a listing in clinicaltrials.gov, with no results reported in clinicaltrials.gov at the time of our outreach (N=5)6,7,9,16,20 | |
For authors of studies included in the report that are NOT clinical trials (N=4)4,12,18,19 | |
For authors of ongoing incomplete studies identified via clinicaltrials.gov, not included in the SIMHC review (N=3) | |
We also constructed questionnaires for three additional categories but did not find studies in these categories: studies with different results reported in clinicaltrials.gov and in published sources, eligible discontinued studies identified via clinicaltrials.gov, and completed but unpublished studies identified via clinicaltrials.gov.
Results
Table 2 provides the results of the outreach.
Author | Available on clinicaltrials.gov | Available on other registries | Outcomes available on registry | Barriers to registering study | Barriers to presenting information on critical components on registries | Availability of materials for replication | Critical components for replication as identified by study authors |
---|---|---|---|---|---|---|---|
Beidas et al., 20125 | No | No | NA | Not a traditional clinical trial in that it focused on changing clinician behavior and did not enroll patients; therefore the authors did not attempt to register it | NA | In existing publications on the trial | Augmented training: focus on principles of treatment and use of experiential learning; ongoing support and consultation |
Bickman et al., 201116 | Yes | No | No | None | Not perceived as necessary because the author did not experience barriers in dissemination through routine outlets such as publications and presentations | NA | Feedback |
Carroll et al., 20136 | Yes | NR | Yes | NR | NR | NR | NR |
Epstein et al., 20117 | Yes | No | No | clinicaltrials.gov is made for pharmaceutical clinical trials, and some fields were very difficult to complete for this nonpharmaceutical study; posting results correctly required an extended call with clinicaltrials.gov technical support | None, but noted that no community-based pediatricians had contacted the author through clinicaltrials.gov | NA | |
Epstein et al., 20078 | No | No | NA | No barriers noted, but the authors did not attempt registration because it was not mandated at the start of the trial | NA | Published materials or contact authors | Recruitment of patients from community-based pediatric practices |
Garner et al., 20129 | Yes | No | No | None, given that clinicaltrials.gov automatically indexed publications via the ClinicalTrials.gov Identifier | A study registry could serve as a repository, but unclear whether it could be used for this purpose | None | Financial incentives provided to the staff delivering the intervention |
Glisson et al., 201210,11 | No | No | NA | Did not attempt registration, so no barriers noted | NA | Publications, website, intervention training materials | The ARC intervention strategies depend on trained specialists who work at all levels of a service system to (a) embed guiding principles for improving services, (b) develop shared mental models among organizational members to support the improvement effort, and (c) enact organizational tools (e.g., feedback) for identifying and addressing service barriers |
Glisson et al., 201017 | No | No | NA | Did not attempt registration, so no barriers noted | NA | Publications, website, intervention training materials | The ARC intervention strategies depend on trained specialists who work at all levels of a service system to (a) embed guiding principles for improving services, (b) develop shared mental models among organizational members to support the improvement effort, and (c) enact organizational tools (e.g., feedback) for identifying and addressing service barriers |
Gully et al., 20084 (study 1) | No | NR | NR | NR | NR | NR | NR |
Gully et al., 20084 (study 2) | No | NR | NR | NR | NR | NR | NR |
Henggeler et al., 200812 | No | NR | NR | NR | NR | NR | NR |
Henggeler et al., 201313 | No | NR | NR | NR | NR | NR | NR |
Lester et al., 200914 | No | NR | NR | NR | NR | NR | NR |
Lochman et al., 200915 | No | No | NA | Did not attempt registration, so no barriers noted | NA | Contact authors | Audit and feedback components in which trainers reviewed the rate of completion of session objectives and provided individualized supervisory feedback |
Ronsley et al., 201219 | No | NR | NR | NR | NR | NR | NR |
Sterling et al., 201520 | No | No | NA | No barriers noted, but the authors did not attempt registration because it was not mandated | A registry could be of use if it included very specific protocols to assist people in replicating procedures, either for other studies or for implementation in program settings | NA | Brief training in how to deliver SBIRT in the pediatrician-only arm; embedding a BHCP in the BHCP arm |
Wildman et al., 201218 | No | NA | NA | NA | NA | Contact authors | Creating easy referral procedures for primary care providers to use for behavioral health care |

Abbreviations: NA = not applicable; NR = not reported.
Proportion of Studies Reported in Clinicaltrials.gov (KQ 1)
We identified 17 studies reported in 17 articles4-20 (including two studies reported in a single article4 and one study reported in two articles10,11). Of these, ten are cRCTs,6-11,13-16,20 three are parallel-group4,5 or two-stage17 trials, and the remaining four are nonrandomized studies.4,12,18,19 Only four of the 13 trials (all cRCTs6,7,9,16) appeared in a trials registry (clinicaltrials.gov). The remaining nine trials4,5,8,10,11,13-15,17,20 and four nonrandomized studies4,12,18,19 did not appear in any study registry. Additionally, we found three ongoing trials in clinicaltrials.gov that have not yet published results (NCT02097355, NCT01829308, NCT02271386).
Comparing Data Between Clinicaltrials.gov and Published Sources (KQ 2)
Three of the four studies registered in clinicaltrials.gov did not report results there (NCT01308879,16 NCT01016704,9 and NCT010560167). One study updated the clinicaltrials.gov record with results after we sent a query to the authors (NCT013510646). The results did not differ between the publication and the registry, with one exception. In the publication, the authors present an adjusted odds ratio for the use of structured diagnostic assessments of 8.0 (95% CI, 1.6 to 40.6). In clinicaltrials.gov, the authors provide raw data rather than adjusted results. Using these data, we calculated an unadjusted odds ratio of 6.9 (95% CI, 2.6 to 18.6), as shown below.
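For reference, the unadjusted odds ratio and its Wald-type 95% confidence interval follow from the standard 2×2 table formulas, where $a$ and $b$ denote events and non-events in the intervention group and $c$ and $d$ the corresponding counts in the control group (the registry counts themselves are not reproduced here):

$$
\mathrm{OR} = \frac{ad}{bc}, \qquad
95\%\ \mathrm{CI} = \exp\!\left(\ln \mathrm{OR} \pm 1.96\sqrt{\frac{1}{a}+\frac{1}{b}+\frac{1}{c}+\frac{1}{d}}\right)
$$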
Incomplete, Discontinued, or Unpublished Studies (KQ 3)
We reached out to investigators of three ongoing studies (NCT02097355, NCT01829308, and NCT02271386). Two did not note barriers to registering their trials, but a third noted difficulties arising from the required data entry fields in clinicaltrials.gov, which are not designed for implementation trials.
We found no discontinued or unpublished studies.
Utility of Trial Registries for Disseminating Information on Study Outcomes and Processes (KQ 4)
As noted in Table 2, three investigators (the lead investigators on two studies and one proxy for two studies with a deceased principal investigator) did not respond to our repeated outreach attempts. A fourth respondent refused because of lack of time, and a fifth responded but was unable to provide information because the principal investigator (the lead on two studies) was deceased. Of the remaining ten investigators who completed the questionnaires, six did not attempt to register their studies on clinicaltrials.gov and therefore noted no barriers. Three of the four respondents who registered their studies noted no barriers, with one noting that clinicaltrials.gov automatically indexed publications via the ClinicalTrials.gov Identifier. The fourth noted barriers arising from a mismatch between the nature of the trial and the purpose of clinicaltrials.gov, which was designed for pharmaceutical trials. We asked these four respondents about the utility of adding information on critical components to registries. Two expressed doubts about the utility of clinicaltrials.gov for housing such information, and one did not perceive a need for clinicaltrials.gov to house such information.
Discussion
Impact of Results on EPC Report (KQ 5)
Table 2 lists the critical components of each study, as identified by the study authors. As noted previously, a significant constraint in understanding the results of studies of complex interventions is that they frequently involve complex designs and multiple components. Outreach to study investigators can potentially shed light on critical components that are not otherwise identified in the literature. Ideally, this information can be used to cluster and analyze studies in a systematic review to generate insights and effect estimates from the overall body of evidence. Although we were able to update the report with additional information on critical components in the study descriptors table, our efforts did not yield sufficient information to alter the EPC report materially, for a few reasons. First, despite multiple attempts to reach investigators, we had a 59 percent completion rate (we received responses for 10 of 17 studies). Second, among those who responded, use of clinicaltrials.gov was very limited. Only one author posted results in clinicaltrials.gov, and those results did not differ substantively from what was otherwise available to us. Third, investigators who responded may have interpreted our questions in varying ways. Fourth, because of the email format of our outreach, we could not ask follow-up questions.
Utility of Clinicaltrials.gov for Systems Interventions
The limited utility of clinicaltrials.gov for supplementing the information in this report arises from three sources. First, clinicaltrials.gov is neither designed for nor a good fit for the complex designs typified by implementation, dissemination, or quality improvement studies; authors who attempted to register studies on their own reported difficulties. Second, authors did not generally report findings on clinicaltrials.gov. Third, authors do not perceive a need to use clinicaltrials.gov to house information on the critical components of their interventions, information vital to the next generation of implementation studies.
Next Steps
Implementation, dissemination, and quality improvement studies such as those covered by this systematic review urgently require substantial documentation of design, processes, and outcomes. Current methods of dissemination do not provide sufficient detail to fully understand, synthesize, or replicate these strategies. As research teams splinter or change trajectories, this information is potentially lost forever (as we inferred from our attempts to reach some authors). At present, clinicaltrials.gov does not appear to offer a viable solution for housing such information, for two reasons: first, the site is not designed for implementation studies, and second, authors do not perceive that their audience will seek such information from clinicaltrials.gov. The most viable avenue for enhancing the transparency of reporting for these strategies appears to be journal requirements such as the TIDieR checklist.21 Recent changes to clinicaltrials.gov specifying that eligible clinical trials include an FDA-regulated device product are likely to deter any further reporting of implementation, dissemination, and quality improvement trials, which often do not include such products.22
In the short term, enhanced searches of clinicaltrials.gov and outreach to authors appear to offer limited utility for systematic reviews of implementation, dissemination, and quality improvement trials. However, as the main body of our report indicates, searches for related publications ("sibling" studies of the same intervention, or other work by authors of included studies) can substantially enhance the description and interpretation of studies. These sibling studies are not available for all included studies, however, and cannot serve as comprehensive and universal sources of information. Future systematic reviews of implementation, dissemination, and quality improvement strategies should anticipate using a combination of citation mining of included studies and searches for sibling studies to capture all relevant studies.
References
- Balas EA, Boren SA. Managing clinical knowledge for health care improvement. Stuttgart, Germany: Schattauer Verlagsgesellschaft; 2000.
- Shojania KG, McDonald KM, Wachter RM, et al. Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies, Vol. 1: Series Overview and Methodology. Technical Review 9 (Contract No. 290-02-0017 to the Stanford University-UCSF Evidence-based Practice Center). AHRQ Publication No. 04-0051-1. Rockville, MD: Agency for Healthcare Research and Quality; Aug 2004.
- Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
- Gully KJ, Price BL, Johnson MK. Increasing abused children's access to evidence-based treatment: diffusion via parents as consumers. Child Maltreat. 2008 Aug;13(3):280-8. doi: 10.1177/1077559508316042. PMID: 18359928.
- Beidas RS, Edmunds JM, Marcus SC, et al. Training and consultation to promote implementation of an empirically supported treatment: a randomized trial. Psychiatr Serv. 2012;63(7):660-5. doi: 10.1176/appi.ps.201100401. PMID: 22549401.
- Carroll AE, Bauer NS, Dugan TM, et al. Use of a computerized decision aid for ADHD diagnosis: a randomized controlled trial. Pediatrics. 2013 Sep;132(3):e623-9. doi: 10.1542/peds.2013-0933. PMID: 23958768.
- Epstein JN, Langberg JM, Lichtenstein PK, et al. Use of an Internet portal to improve community-based pediatric ADHD care: a cluster randomized trial. Pediatrics. 2011 Nov;128(5):e1201-8. doi: 10.1542/peds.2011-0872. PMID: 22007005.
- Epstein JN, Rabiner D, Johnson DE, et al. Improving attention-deficit/hyperactivity disorder treatment outcomes through use of a collaborative consultation treatment service by community-based pediatricians: a cluster randomized trial. Arch Pediatr Adolesc Med. 2007 Sep;161(9):835-40. doi: 10.1001/archpedi.161.9.835. PMID: 17768282.
- Garner BR, Godley SH, Dennis ML, et al. Using pay for performance to improve treatment implementation for adolescent substance use disorders: results from a cluster randomized trial. Arch Pediatr Adolesc Med. 2012 Oct;166(10):938-44. doi: 10.1001/archpediatrics.2012.802. PMID: 22893231.
- Glisson C, Hemmelgarn A, Green P, et al. Randomized trial of the Availability, Responsiveness, and Continuity (ARC) organizational intervention with community-based mental health programs and clinicians serving youth. J Am Acad Child Adolesc Psychiatry. 2012 Aug;51(8):780-7. doi: 10.1016/j.jaac.2012.05.010. PMID: 22840549.
- Glisson C, Hemmelgarn A, Green P, et al. Randomized trial of the Availability, Responsiveness and Continuity (ARC) organizational intervention for improving youth outcomes in community mental health programs. J Am Acad Child Adolesc Psychiatry. 2013 May;52(5):493-500. doi: 10.1016/j.jaac.2013.02.005. PMID: 23622850.
- Henggeler SW, Sheidow AJ, Cunningham PB, et al. Promoting the implementation of an evidence-based intervention for adolescent marijuana abuse in community settings: testing the use of intensive quality assurance. J Clin Child Adolesc Psychol. 2008 Jul;37(3):682-9. doi: 10.1080/15374410802148087. PMID: 18645758.
- Henggeler SW, Chapman JE, Rowland MD, et al. Evaluating training methods for transporting contingency management to therapists. J Subst Abuse Treat. 2013 Nov-Dec;45(5):466-74. doi: 10.1016/j.jsat.2013.06.008. PMID: 23910392.
- Lester H, Birchwood M, Freemantle N, et al. REDIRECT: cluster randomised controlled trial of GP training in first-episode psychosis. Br J Gen Pract. 2009 Jun;59(563):e183-90. doi: 10.3399/bjgp09X420851. PMID: 19520016.
- Lochman JE, Boxmeyer C, Powell N, et al. Dissemination of the Coping Power program: importance of intensity of counselor training. J Consult Clin Psychol. 2009 Jun;77(3):397-409. doi: 10.1037/a0014514. PMID: 19485582.
- Bickman L, Kelley SD, Breda C, et al. Effects of routine feedback to clinicians on mental health outcomes of youths: results of a randomized trial. Psychiatr Serv. 2011 Dec;62(12):1423-9. doi: 10.1176/appi.ps.002052011. PMID: 22193788.
- Glisson C, Schoenwald SK, Hemmelgarn A, et al. Randomized trial of MST and ARC in a two-level evidence-based treatment implementation strategy. J Consult Clin Psychol. 2010 Aug;78(4):537-50. doi: 10.1037/a0019160. PMID: 20658810.
- Wildman BG, Langkamp DL. Impact of location and availability of behavioral health services for children. J Clin Psychol Med Settings. 2012 Dec;19(4):393-400. doi: 10.1007/s10880-012-9324-1. PMID: 23053830.
- Ronsley R, Rayter M, Smith D, et al. Metabolic monitoring training program implementation in the community setting was associated with improved monitoring in second-generation antipsychotic-treated children. Can J Psychiatry. 2012 May;57(5):292-9. PMID: 22546061.
- Sterling S, Kline-Simon AH, Satre DD, et al. Implementation of screening, brief intervention, and referral to treatment for adolescents in pediatric primary care: a cluster randomized trial. JAMA Pediatr. 2015;169(11):e153145.
- Hoffmann TC, Glasziou PP, Boutron I, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ. 2014;348:g1687. doi: 10.1136/bmj.g1687. PMID: 24609605.
- Zarin DA, Tse T, Williams RJ, et al. Trial Reporting in ClinicalTrials.gov - The Final Rule. N Engl J Med. 2016 Sep 16. doi: 10.1056/NEJMsr1611785. PMID: 27635471.