Healthcare Policy
Using Health Technology Assessment to Identify Research Gaps: An Unexploited Resource for Increasing the Value of Clinical Research
N. Ann Scott, Carmen Moga, Christa Harstall and Jacques Magnan
Abstract
Health technology assessments (HTAs) are an as yet unexploited source of comprehensive, systematically generated information that could be used by research funding agencies to formulate researchable questions that are relevant to decision-makers. We describe a process that was developed for distilling evidence gaps identified in HTAs into researchable questions that a provincial research funding agency can use to inform its research agenda. The challenges of moving forward with this initiative are discussed. Using HTA results to identify research gaps will allow funding agencies to reconcile the different agendas of researchers who conduct clinical trials and healthcare decision-makers, and will likely result in more balanced funding of pragmatic and explanatory trials. This initiative may require a significant cultural shift from the current, mostly reactive funding environment, in which scarce research resources are allocated through an application-driven, competitive approach, to a more collaborative, contractual one that is proactive, targeted and outcomes-based.
In most countries, the demand for healthcare outstrips the resources available to provide it. This imbalance, and the concomitant pressure on decision-makers to allocate resources appropriately, has been a prominent factor in the rise of health technology assessment (HTA). HTA is a form of policy research that seeks to inform decision-making, in both policy and practice, by systematically examining the effects of a particular technology on the individual and society with respect to its safety, efficacy, effectiveness and cost-effectiveness, as well as its social, economic and ethical implications, and identifying areas that require further research (Office of Technology Assessment 1976; Banta et al. 1999). A health technology, in this instance, is any intervention administered with the aim of improving the health status of patients or of populations (Muir Gray 1997). HTA activities are currently undertaken in more than 23 countries, and the majority are publicly funded and administered by national or regional governments (INAHTA 2008).
The Relevance Gap in Health Research
Basic research is often supported by public funds, while the private sector usually concentrates on applied research and technology development. A major problem with this arrangement is that developments from industry are often driven by technological and financial factors rather than the health needs of the population (Banta et al. 1999). In addition, it is not uncommon for publicly funded academic institutions to produce research that does not directly address a relevant patient or health system need, fails to improve on previous inconclusive clinical or population studies, or even duplicates prior work on a question that has already been answered (Savulescu et al. 1996; Hotopf 2002; Tunis et al. 2003; Zwarenstein and Oxman 2006).
Despite substantial increases in public and private funding for clinical research over the last decade, research output still often fails to provide specific answers for many common, important questions posed by healthcare decision-makers (Pearson and Jones 1997; Hotopf 2002; Lenfant 2003). This lack is most apparent in the conclusions of systematic literature reviews, HTA reports and clinical practice guidelines. These research syntheses are designed to provide a comprehensive summation of the available evidence for decision-makers in the healthcare system, but this aim is routinely stymied by the poor quality and inadequate quantity of the available evidence (Tunis et al. 2003). The relevance disconnect between the research agenda and societal need undermines efforts to improve the scientific basis of healthcare decision-making and limits the ability of public and private insurers to develop evidence-based coverage policies (Tunis et al. 2003).
[Figure 1]
Identifying Evidence Gaps, Research Gaps and Researchable Questions
Health services research is crucial to controlling healthcare costs in the long term and ensuring optimal use of healthcare resources because it can help identify innovative, less expensive alternative therapies that should be promoted, as well as costly, harmful and ineffective treatments that should be sidelined (Lenfant 1994; Claxton and Sculpher 2006). When comparing health technologies, the questions posed by healthcare decision-makers are often structured differently from those addressed in clinical trials (Figure 1). Traditional explanatory or mechanistic trials, which recruit homogeneous populations and determine how an intervention works under ideal conditions (efficacy trials), rarely satisfy all the needs of decision-makers striving to make evidence-based determinations, particularly at the policy level. In contrast, pragmatic or practical clinical trials, which assess the extent to which an intervention produces a result under ordinary circumstances (effectiveness trials), are formulated according to the information needed to make a decision and are conducted in heterogeneous patient populations under "real-world" conditions (Table 1) (de Zoysa et al. 1998; Roland and Torgerson 1998; Tunis et al. 2003; The Cochrane Collaboration 2005).
[Table 1]
The majority of trials conducted to date have been explanatory trials because most major research funding organizations do not have an explicit mandate to fund studies that are important to decision-makers (Hotopf 2002; Tunis et al. 2003). In an attempt to redress this deficiency, various research funding agencies in the USA and Canada have sponsored pragmatic trials in recent years, but these organizations still do not have a systematic mechanism for identifying decision-makers' areas of priority (Tunis et al. 2003; Glasgow et al. 2006). The resulting evidence gap can be defined as all the evidence missing from a body of research on a particular topic that would otherwise potentially answer the questions of decision-makers (clinicians, other practitioner groups, administrators, policy makers) (Figure 1). By systematically summarizing the available evidence in response to policy-driven questions, HTAs routinely identify evidence gaps that are relevant to policy makers and the attendant "research gaps," that is, the additional research needed, from a policy maker's perspective, to address the evidence gap in the available primary research. There are almost always fewer research gaps than evidence gaps because, while it would be ideal to fill the entire evidence gap, decision-makers must usually settle for the few aspects of it that would be most useful for informing decisions and most practicable to answer within the time and resource constraints of the research environment (the research gap).
Thus, HTAs are an as yet unexploited source of systematically generated, comprehensive information that could be used by research funding agencies to bridge the evidence gap and formulate researchable questions that are relevant to decision-makers. However, relying solely on the producers of HTA reports to identify research gaps will result in an extensive list, but not necessarily one that is relevant to clinicians or policy makers (de Vet et al. 2001), since HTAs are circumscribed by the inherent limitations of the evidence base they summarize. Any endeavour to derive researchable questions from the research gaps identified by HTAs must include researchers, policy makers, clinicians, consumers and the public, since each group will often have different opinions on the need for future research and how it should be designed, financed and developed (Black 2001; Lomas et al. 2003).
[Table 2]
Using HTA to Inform the Research Funding Agenda
World experience
From the results of a recent survey of members of the International Network of Agencies for Health Technology Assessment, it appears that only two countries, Belgium and the United Kingdom, have a formal process for linking the identification of research gaps from HTA reports to the research funding process (Table 2) (Scott et al. 2006). At other HTA agencies, the use of HTA reports to help funding agencies address evidence gaps usually occurs in an ad hoc, serendipitous fashion, if at all. The agencies surveyed identified a number of challenges in pursuing such a process, including insufficient resources, in terms of personnel and time, to commit to such a project; the difficulty of providing clear explanations and valid recommendations for future research from HTAs; a reluctance to interfere with an already established research agenda; and the logistical complexity of establishing such a long-term strategy.
The system in the United Kingdom, which seems to be the most comprehensive and systematic, is facilitated by two agencies, the National Coordinating Centre for Health Technology Assessment (NCCHTA) and the National Institute for Health and Clinical Excellence (NICE). NICE issues guidance for the National Health Service on public health, clinical practice and the use of health technologies. Evidence gaps identified by NICE guidance reports are fed into the NICE Research and Development Programme, where they are prioritized by a Research and Development Advisory Committee according to their importance, relevance and feasibility. As NICE is unable to commission research directly, the research recommendations and their priority ranking are published on the NICE website. High-priority topics are actively promoted to public and private research funding bodies (Claxton and Sculpher 2006; NICE 2006).
In contrast, the NCCHTA, which contracts review groups to undertake the HTA reports used to inform NICE guidance, is one of the largest public funders of research in the United Kingdom. It has an annual budget of GB£13 million, 90% of which is spent on new randomized controlled trials (Stevens and Milne 2004; NCCHTA 2007a). The HTA reports identify areas where further research is required, and this information is used by the NCCHTA, together with research recommendations from other sources, to establish a list of research topics. Informal descriptions of the research questions (vignettes) are then submitted to the relevant advisory panel and an HTA Prioritisation Strategy Group, which prioritize the topics according to their importance, urgency and potential cost (Claxton and Sculpher 2006; NCCHTA 2007a,b). Although this system appears to work well, the separation of research prioritization and commissioning from reimbursement decisions is not ideal (Claxton and Sculpher 2006; NCCHTA 2007a).
Canadian experience: A pilot project in Alberta, Canada
In Alberta, a unique situation exists in which an independent, government-sponsored HTA program is housed within a provincial research funding organization, the Alberta Heritage Foundation for Medical Research (AHFMR). The AHFMR disburses over $65 million each year and is one of the main sources of public funding for biomedical and health research in the province of Alberta (AHFMR 2003). Funding applications are assessed for their feasibility, importance and originality by external reviewers with expertise in the relevant field. The applications are then ranked by an AHFMR committee of reviewers (AHFMR 2003). While the AHFMR designates broad research priority areas for different categories of funding, there is no mechanism for systematically and objectively identifying evidence gaps. Therefore, a pilot project was undertaken to
- assess how well HTA reports published by the AHFMR HTA program could identify evidence gaps and delineate the concomitant research gaps and
- develop a process for distilling researchable questions from the research gaps identified by an AHFMR HTA report to inform the research funding programs of the AHFMR.
Objective 1: To assess how well AHFMR HTA reports delineate research gaps
An internal assessment was conducted of a consecutive series of HTA reports - four full HTA reports and four shorter reports (TechNotes) - published by the HTA program between 2002 and 2003 (Scott et al. 2006). All but one of the reports were produced in response to questions posed by health ministry policy makers, who are the main clients of the HTA program.
The problem of limited evidence was noted repeatedly in the reviewed HTA reports, but evidence and research gaps were not consistently or clearly highlighted and were often embedded within lengthy discussion sections. More useful information on evidence gaps was gleaned from personal interviews with the HTA researchers than from reading their reports (Scott et al. 2006).
[Figure 2]
Objective 2: To develop a process for distilling relevant researchable questions from the research gaps identified in AHFMR HTA reports
Two questionnaires were developed, one for researchers and one for clinicians/policy makers, to simultaneously formulate researchable questions from the research gaps uncovered in HTA reports and to capture the different perspectives and priorities of a representative cross-section of stakeholders (Figure 2). Three of the four questions were the same; the fourth item was omitted from the researcher questionnaire. The questionnaires focused on two HTA reports (Ospina and Harstall 2002, 2003) published by the HTA program on chronic pain and were piloted with an Information Sharing Group on Chronic Pain comprising one policy maker, one policy maker/health services researcher, one clinician, one clinician/health administrator and one HTA researcher who co-authored the reports.
[Table 3]
The results were compiled into a list of research questions on chronic pain, reflecting the three stakeholder perspectives (health services research, clinical and policy), along with the names of potential researchers identified in the questionnaire as being willing to undertake the research. An illustrative set of results is summarized in Table 3. These were presented to the Vice President of Programs and the Director of Grants and Awards of the AHFMR. The consensus was that the process held promise and could work on a case-by-case basis. In addition, the following comments were made:
- A dedicated group needs to be identified that will have the commitment to shepherd the process from start to finish. It is important for the HTA program to link the stakeholders.
- Research in Alberta is largely investigator-driven, so a paradigm shift is required. The research gaps project may be an important step in achieving this.
- The level of complexity of the process should reflect the research dollars available for funding.
- Using HTAs to identify research gaps could provide the AHFMR and its stakeholders with a mechanism for pinpointing research needs.
Moving from Theory to Practice
Implementation issues
Prioritizing the research questions
Priority should be given to medical and health services research that is most likely to improve health and the performance of the healthcare system (Fraser 2000). Since it is unrealistic to expect that all HTA reports will automatically undergo the process outlined here for identifying research gaps, formal, objective criteria must be developed for selecting and prioritizing the HTA reports that do. Also, where an HTA yields a number of researchable questions with no clear front runner, an established process and criteria are needed for prioritizing those questions.
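To make the idea concrete, the sketch below shows one way such prioritization criteria might be operationalized once a panel has agreed on them. It is a minimal, hypothetical illustration only: the criteria (importance, relevance and feasibility, echoing those cited for NICE and the AHFMR), the weights, the 1-5 rating scale and the example questions are all assumptions, not part of any established funding process.

```python
# A minimal, hypothetical sketch of weighted scoring for candidate research questions.
# Criteria, weights, rating scale and example questions are illustrative assumptions only.

from dataclasses import dataclass
from typing import Dict, List

# Hypothetical weights reflecting how much each criterion contributes to priority.
WEIGHTS: Dict[str, float] = {"importance": 0.4, "relevance": 0.4, "feasibility": 0.2}


@dataclass
class ResearchQuestion:
    text: str
    ratings: Dict[str, float]  # criterion name -> panel rating on a 1-5 scale

    def priority(self) -> float:
        """Weighted average of the panel's ratings across the agreed criteria."""
        return sum(WEIGHTS[criterion] * score for criterion, score in self.ratings.items())


def rank(questions: List[ResearchQuestion]) -> List[ResearchQuestion]:
    """Order candidate questions from highest to lowest priority score."""
    return sorted(questions, key=lambda q: q.priority(), reverse=True)


if __name__ == "__main__":
    candidates = [
        ResearchQuestion(
            "Effectiveness of multidisciplinary pain programs in rural settings",
            {"importance": 5, "relevance": 4, "feasibility": 3},
        ),
        ResearchQuestion(
            "Long-term outcomes of chronic pain self-management programs",
            {"importance": 4, "relevance": 3, "feasibility": 4},
        ),
    ]
    for question in rank(candidates):
        print(f"{question.priority():.2f}  {question.text}")
```

In practice, of course, the scoring would be done by a multidisciplinary panel rather than a script; the sketch simply illustrates how explicit criteria and weights make the ranking transparent and repeatable.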
Assembling a representative group that can provide a balanced review of the funding proposals may be challenging. In addition, the entire process must be shepherded to ensure that it is timely, that the proposals focusing on the research gaps do not get sidelined by other funding priorities and that the needs of all stakeholders are taken into account in the research design. Questions identified in HTA reports, which are often based on international research, must also be contextualized against local needs, the extant research capacity and the mandate of the funding agency. This is particularly relevant for Alberta, which has sizable financial resources available but only three million residents (AHFMR 2006; Statistics Canada 2006). If Alberta does not have the required clinical expertise for a specific research proposal, consideration should be given to the question of pursuing an out-of-province collaboration and possible sources of additional research dollars. The role of other Canadian HTA agencies in coordinating and establishing research policy also needs to be ascertained to ensure a unified strategy. For example, the Ontario Health Technology Advisory Committee has already established a Program for Assessment of Technology in Health (PATH) that undertakes field evaluations to collect primary data in parallel with an HTA for new technologies that have a scant evidence base (Goeree and Levin 2006).
To ensure acceptance by stakeholders in the research community, additional targeted funds may need to be found for identified research gaps rather than shifting money within the pool of currently available dollars. Care must also be taken to ensure that explanatory trials and basic curiosity-driven research are not underfunded as a result of an increased focus on policy-related research (Califf and Woodlief 1997; "Research Funding" 2003).
Obtaining clinical, policy, consumer and public input
The Alberta pilot project showed that the research questions identified varied with the respondent's interests, role and educational background, thus emphasizing the need for multiple perspectives to increase relevance. The project owed much to the serendipitous existence of the Information Sharing Group, which provided a pool of accessible, motivated and knowledgeable clinicians, researchers and policy makers who could participate in the process. In most cases, such a group is not likely to be available, so who then provides clinical and policy input? And who provides public input? One possible solution is to engage the policy maker(s) who asked the question in the first place, and the clinicians who were either external reviewers for the HTA report, or who may have provided clinical expertise during its synthesis. Professional organizations may also be able to identify clinical experts, and lobby groups and consumer advocacy agencies may be a potential source of consumer participants. Involving the public and consumers in the production of HTA reports, rather than just at the tail end, would make this process even more seamless.
[Figure 3]
Establishing a feedback loop
For the process of identifying research gaps to be effective, the funded research needs to be fed back into another HTA, or some other mechanism, to provide the answer to the decision-maker who originally asked the question and close the loop (Figure 3). Therefore, criteria need to be established at the outset for determining when the question has been answered, or when further research is unlikely to yield any significant additional value (Claxton et al. 2004). These criteria will also help in gauging whether the research dollars were well spent. This decision would most likely be made within the funding agency by a committee that would ideally include the multidisciplinary team that helped formulate the initial research question. The cycle may have to go through a number of iterations before there is sufficient evidence to satisfy the decision-maker (Irwig et al. 1998).
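As a purely illustrative aid, the sketch below renders this feedback loop schematically: commission research targeting the gap, fold the results back into an updated assessment, and stop either when the pre-agreed criteria indicate the question is answered or when further research is unlikely to add significant value. Every function, threshold and number in it is a hypothetical stand-in for an organizational judgment, not software used in the Alberta project.

```python
# A schematic, hypothetical sketch of the feedback loop (Figure 3).
# All steps and thresholds are invented stand-ins for organizational decisions.

import random


def commission_research(question: str) -> float:
    """Stand-in for a funded study; returns how much new evidence it yields."""
    return random.uniform(0.1, 0.5)


def question_answered(total_evidence: float, threshold: float = 1.0) -> bool:
    """Stand-in for the pre-agreed criteria that the decision-maker's question is answered."""
    return total_evidence >= threshold


def further_research_worthwhile(last_gain: float, minimum_gain: float = 0.15) -> bool:
    """Stand-in for judging whether another study is likely to add significant value."""
    return last_gain >= minimum_gain


def close_the_loop(question: str, max_cycles: int = 5) -> str:
    total_evidence = 0.0
    for cycle in range(1, max_cycles + 1):
        gain = commission_research(question)   # fund a study targeting the research gap
        total_evidence += gain                  # feed the results back into an updated HTA
        if question_answered(total_evidence):
            return f"Answered after {cycle} cycle(s); report back to the decision-maker."
        if not further_research_worthwhile(gain):
            return f"Stopped after {cycle} cycle(s); further research unlikely to add value."
    return f"Stopped at the {max_cycles}-cycle limit; the question remains partly open."


if __name__ == "__main__":
    print(close_the_loop("Are multidisciplinary pain programs effective for chronic pain?"))
```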
Good coordination and communication among all the actors in the process are essential for success, and the process must be timely to ensure that the end result is still relevant to the policy maker and the current clinical context. In the Alberta pilot project, the HTA reports were the common link between the disparate stakeholders and, thus, provided a focal point for getting all the key players at the table and potentially narrowing the problematic discontinuity in decision-making between the research commissioning and reimbursement spheres.
Issues from the funder's perspective
The proposed approach of utilizing HTA reports to identify evidence gaps and subsequently derive research questions should in theory appeal to funders. After all, the relevance to decision-making, and the importance of the area in which the evidence gaps have been identified, should already be evident from the fact that the original question leading to the HTA came directly from the policy makers' environment. This is a big advantage for research funders eager to demonstrate the impact of the research they support on the efficiency of the health system. The process could also help funders find the appropriate balance between supporting broadly based, investigator-driven research (usually referred to as basic research, even within the areas of health services and population-based research) and the more applied, targeted research based on an HTA-linked process.
However, even the most elaborate process for identifying evidence gaps and defining researchable questions ultimately depends on linking those questions with appropriately qualified, available investigators willing to do the research. For most publicly run, non-profit funders this may require a significant cultural shift from a reactive, application-driven, competitive process for allocating scarce research resources to one that is more proactive, targeted and outcomes-based and that relies on a more collaborative, contractual approach. Research funders still face a number of barriers to developing such an approach, including the scarcity of funds available for investigator-driven research, an academic recognition system that generally does not value "contract" research to the same extent as conventional, peer-reviewed, competitive research, and the perception that such applied research should be supported from within the healthcare system itself.
Methodological issues: Pragmatic versus explanatory trials
HTA is policy driven, so most research gaps identified by HTAs will involve questions of effectiveness rather than efficacy. Using HTAs to formulate researchable questions for funding may encourage more pragmatic clinical trials (Table 1). This, in turn, could force an expansion of HTA quality assessment criteria and entail more methodological development and training in the HTA community to tackle such issues as interpreting discrepant results between explanatory and pragmatic trials and ensuring accurate synthesis of the research evidence.
Key Elements in Using HTAs to Identify Research Gaps
During the Alberta pilot project, it became apparent that certain elements are essential for moving the initiative forward.
Explicit, actionable research questions
To be a facilitating factor in setting the research agenda, HTAs must be more explicit and consistent in defining specific research gaps and questions. However, such definition cannot be done in a contextual vacuum. The pilot project demonstrated the importance of incorporating input from motivated stakeholders.
Good communication
Because research funders and policy makers often have different priorities, HTA researchers must be able to "translate" HTA-derived research recommendations into a language that funders understand. Involving the funders in the design of the questionnaire provided a forum for ascertaining and understanding their priorities. In addition, we found the funders surprisingly open to exploring ways of increasing the relevance and impact of their disbursement decisions, so HTA agencies need not be shy in approaching local research funding agencies with such proposals.
On June 30, 2006, after 11 years at the AHFMR, the HTA program moved to the Institute of Health Economics, a prominent provincial HTA agency. While ostensibly problematic, this move actually proved beneficial. Firstly, the potential conflict of interest that may have been cited by other provincial HTA groups was removed by the disengagement. Secondly, the relationship forged between the AHFMR and the HTA program will now provide a model for establishing similar partnerships with other external HTA agencies.
Commitment
The infrastructure needed for prioritizing HTA-derived research proposals and managing the process outlined already exists within the AHFMR. Nonetheless, the composition of existing panels and committees that oversee funding allocation will have to change to incorporate a broader range of stakeholders than is currently represented. An intensive, ongoing commitment of resources and guidance from the HTA program is also necessary until the process gains enough momentum to be self-perpetuating. Even in the long term, HTA researcher input will remain a crucial part of the translation process between policy makers and the funding agency.
Conclusion
A process was developed in Alberta for distilling evidence gaps identified in HTAs on chronic pain management into researchable questions that a provincial research funding agency can use to inform its research agenda. This novel approach also identified a research team to coordinate and potentially conduct the necessary research studies. Although there will always be evidence gaps, such a process could serve as a starting point for funding agencies to fill some of these gaps by reconciling the different agendas of researchers and healthcare decision-makers. A detailed review of more established programs, particularly the system in the United Kingdom, may help to inform these efforts. The producers of HTAs can augment such processes by identifying and explicitly describing evidence gaps in the clinical research in their reports and involving clinicians, policy makers, consumers and the public in the production of HTAs.
Using HTA results to identify research gaps may be a way that funding agencies can better incorporate the needs of healthcare decision-makers into the research agenda and demonstrate the impact of the research they fund. Like the research endeavour itself, the orchestration of such a paradigm shift will involve an incremental evolution from this first step.
About the Author(s)
N. Ann Scott, PhD
Research Associate, Health Technology Assessment Unit
Institute of Health Economics
Edmonton, AB
Carmen Moga, MD
Research Associate, Health Technology Assessment Unit
Institute of Health Economics
Edmonton, AB
Christa Harstall, MHSA
Director, Health Technology Assessment Unit
Institute of Health Economics
Edmonton, AB
Jacques Magnan, PhD
Vice President - Programs
Alberta Heritage Foundation for Medical Research
Edmonton, AB
Correspondence may be directed to: Dr. Ann Scott, Institute of Health Economics, #1200 - 10405 Jasper Avenue, Edmonton, AB T5J 3N4; tel.: 1-780-448-4881; fax: 1-780-448-0018; e-mail: capstone@shaw.ca.
Acknowledgment
We are indebted to Dr. R. Taylor for provision of information and for valuable comments on the draft version of the report upon which this manuscript is based. We are grateful to the following individuals for their participation in and contribution to the pilot project: Mr. M. Taylor, Ms. L. Chan, Ms. P. Corabian, Ms. L. Dennett, Dr. B. Guo, Dr. D. Juzwishin, Ms. M. Ospina and Dr. Z. Tang. None of the authors have any personal conflicts of interest. We also wish to thank the INAHTA member agencies who responded to the environmental scan surveys conducted in 2004 and 2006. Members of the Information Sharing Group on Chronic Pain were Mr. H. Borowski, Dr. S. Rashiq, Dr. D. Schopflocher and Dr. P. Taenzer.
References
Alberta Heritage Foundation for Medical Research (AHFMR). 2003. AHFMR 99-02 Triennial Report. Retrieved January 23, 2008. < http://www.ahfmr.ab.ca/publications >.
Alberta Heritage Foundation for Medical Research (AHFMR). 2006. Highlights. Retrieved January 23, 2008. < http://www.ahfmr.ab.ca/publications >.
Banta, H.D., W.J. Oortwijn and the European Commission, Directorate-General for Employment, Industrial Relations and Social Affairs, Directorate V/F. 1999. Health Technology Assessment in Europe: The Challenge of Coordination. Luxembourg: Office for Official Publications of the European Communities.
Black, N. 2001. "Evidence Based Policy: Proceed with Care." British Medical Journal 323(7307): 275-79.
Califf, R.M. and L.H. Woodlief. 1997. "Pragmatic and Mechanistic Trials." European Heart Journal 18(3): 367-70.
Claxton, K., L. Ginnelly, M. Sculpher, Z. Philips and S. Palmer. 2004. "A Pilot Study on the Use of Decision Theory and Value of Information Analysis as Part of the NHS Health Technology Assessment Programme." Health Technology Assessment 8: 1-118. Retrieved January 23, 2008. < http://www.hta.nhsweb.nhs.uk/fullmono/mon831.pdf >.
Claxton, K. and M. Sculpher. 2006. "Using Value of Information Analysis to Prioritise Health Research: Some Lessons from Recent UK Experience." Pharmacoeconomics 24: 1055-68.
The Cochrane Collaboration. 2005. Glossary of Terms in the Cochrane Collaboration. Version 4.2.5. Chichester, UK: John Wiley & Sons. Retrieved January 23, 2008. < http://www.cochrane.org/resources/handbook/glossary.pdf >.
de Vet, H.C., M.E. Kroese, R.J. Scholten and L.M. Bouter. 2001. "A Method for Research Programming in the Field of Evidence-Based Medicine." International Journal of Technology Assessment in Health Care 17(3): 433-41.
de Zoysa, I., J.-P. Habicht, G. Pelto and J. Martines. 1998. "Research Steps in the Development and Evaluation of Public Health Interventions." Bulletin of the World Health Organisation 76(2): 127-33.
Fraser, D.W. 2000. "Overlooked Opportunities for Investing in Health Research and Development." Bulletin of the World Health Organisation 78(8): 1054-61.
Glasgow, R.E., K.W. Davidson, P.L. Dobkin, J. Ockene and B. Spring. 2006. "Practical Behavioral Trials to Advance Evidence-Based Behavioral Medicine." Annals of Behavioral Medicine 31(1): 5-13.
Godwin, M., L. Ruhland, I. Casson, S. MacDonald, D. Delva, R. Birtwhistle, M. Lam and R. Seguin. 2003. "Pragmatic Controlled Clinical Trials in Primary Care: The Struggle between External and Internal Validity." BMC Medical Research Methodology 3: 28.
Goeree, R. and L. Levin. 2006. "Building Bridges between Academic Research and Policy Formulation: The PRUFE Framework - An Integral Part of Ontario's Evidence-Based HTPA Process." Pharmacoeconomics 24(11): 1143-56.
Hotopf, M. 2002. "The Pragmatic Randomised Controlled Trial." Advances in Psychiatric Treatment 8: 326-33.
International Network of Agencies for Health Technology Assessment (INAHTA). 2008. Retrieved January 23, 2008. < http://www.inahta.org >.
Irwig, L., M. Zwarenstein, A. Zwi and I. Chalmers. 1998. "A Flow Diagram to Facilitate Selection of Interventions and Research for Health Care." Bulletin of the World Health Organisation 76(1): 17-24.
Lenfant, C. 1994. "Research Needs and Opportunities. Maintaining the Momentum." Circulation 90(5): 2192-93.
Lenfant, C. 2003. "Clinical Research to Clinical Practice - Lost in Translation?" New England Journal of Medicine 349(9): 868-74.
Lomas, J., N. Fulop, D. Gagnon and P. Allen. 2003. "On Being a Good Listener: Setting Priorities for Applied Health Services Research." Milbank Quarterly 81(3): 363-88.
MacRae, K.D. 1989. "Pragmatic versus Explanatory Trials." International Journal of Technology Assessment in Health Care 5(3): 333-39.
Muir Gray, J.A. 1997. Evidence-Based Healthcare. New York: Churchill Livingstone.
National Coordinating Centre for Health Technology Assessment (NCCHTA). 2007a. "Annual Report of the NIHR Health Technology Assessment Programme 2006." Retrieved January 23, 2008. < http://www.hta.nhsweb.nhs.uk/publicationspdfs/annualreports/annualreportfinalweb.pdf >.
National Coordinating Centre for Health Technology Assessment (NCCHTA). 2007b. "The Principles Underlying the Work of the National Coordinating Centre for Health Technology Assessment." Retrieved January 23, 2008. < http://www.hta.nhsweb.nhs.uk/about/probity.pdf >.
National Institute for Health and Clinical Excellence (NICE). 2006. "National Institute for Clinical Excellence Research and Development Strategy." Retrieved January 23, 2008. < http://www.nice.org.uk/page.aspx?o=295953 >.
Office of Technology Assessment. 1976. Development of Medical Technology: Opportunities for Assessment. Retrieved January 23, 2008. < http://govinfo.library.unt.edu/ota/Ota_5/DATA/1976/7617.PDF >.
Ospina, M. and C. Harstall. 2002. Prevalence of Chronic Pain: An Overview. HTA #29. Retrieved January 23, 2008. < http://www.ihe.ca/hta/publications.html >.
Ospina, M. and C. Harstall. 2003. Multidisciplinary Pain Programs for Chronic Pain: Evidence from Systematic Reviews. HTA #30. Retrieved January 23, 2008. < http://www.ihe.ca/hta/publications.html >.
Pearson, P. and K. Jones. 1997. "Primary Care - Opportunities and Threats. Developing Professional Knowledge: Making Primary Care Education and Research More Relevant." British Medical Journal 314(7083): 817-20.
"Research Funding: The Problem with Priorities." 2003. Editorial. Nature Materials 2(10): 639.
Roland, M. and D.J. Torgerson. 1998. "What Are Pragmatic Trials?" British Medical Journal 316(7127): 285.
Savulescu, J., I. Chalmers and J. Blunt. 1996. "Are Research Ethics Committees Behaving Unethically? Some Suggestions for Improving Performance and Accountability." British Medical Journal 313(7069): 1390-93.
Scott, A., C. Moga and C. Harstall. 2006. Using HTA to Identify Research Gaps: A Pilot Study. HTA Initiative #24. Retrieved January 23, 2008. < http://www.ihe.ca/hta/publications.html >.
Statistics Canada. 2006. Population by Year, by Province and Territory. Retrieved October 2007. < http://www40.statcan.ca/l01/cst01/demo02a.htm >.
Stevens, A. and R. Milne. 2004. "Health Technology Assessment in England and Wales." International Journal of Technology Assessment in Health Care 20(1): 11-24.
Tunis, S.R., D.B. Stryer and C.M. Clancy. 2003. "Practical Clinical Trials: Increasing the Value of Clinical Research for Decision Making in Clinical and Health Policy." Journal of the American Medical Association 290(12): 1624-32.
Zwarenstein, M. and A. Oxman. 2006. "Why Are So Few Randomized Trials Useful, and What Can We Do about It?" Journal of Clinical Epidemiology 59(11): 1125-26.