
Module 15: Research Ethics

Upon completion of this module, the resident should be able to:

  1. Identify and describe the significance of key milestones in the evolution of clinical research ethics.
  2. Describe the meaning and practical applications of the principles of respect for persons, beneficence, and justice in clinical research.
  3. Describe other key criteria or requirements for the ethical conduct of clinical research, including scientific and social value, fair risk-benefit ratios, respect for subjects, etc.

Module: 15 / Clinical Research Ethics

Contents

  1. Historical, Legal, and Ethical Background
  2. The Ethics of Clinical Research
  3. Clinical Trials: A Brief Note on Phases, Types, and the Gold Standard of Clinical Research
  4. The Ethics of Clinical Research: Consensus and Controversy
  5. Suggested Solo or Group Activity

Historical, Legal, and Ethical Background

For most of the 2,500-year history of Western medicine, clinical research or human experimentation was not clearly distinguished from medical practice. Any medical intervention via drugs or surgery was an experiment in the sense that its outcome was largely uncertain. In 1865, the renowned French physiologist Claude Bernard published his Introduction to the Study of Experimental Medicine, in which he addressed the question of whether physicians have the right to perform experiments and vivisection on human beings. It was Bernard’s view that physicians actually have a duty to do so whenever such experimentation offers the hope of saving life, of curing disease, or of benefiting the patient. At the same time, however, he emphasized what he took to be the principle of medical and surgical morality: that a physician must never perform on a human being an experiment that might cause harm to any degree, even though the result may be beneficial to the cause of science and thereby to the health of other human beings. Although he made no mention of voluntary consent, Bernard was both insightful and prescient with respect to the ethical moorings of clinical research.

Despite Bernard’s enunciation of a sound principle of surgical and medical morality, enthusiasm for experimentation in medicine rapidly gained ground in the late 19th and early 20th centuries—and abuses of human subjects began to accumulate. The work of the Italian bacteriologist Sanarelli is especially notorious in this respect. In his search for the cause of yellow fever, he injected his candidate bacillus into five human subjects with neither their knowledge nor their consent. Several years later, in 1900, Walter Reed set out to refute Sanarelli’s claim in an experiment involving 22 volunteers, all of whom underwent a process of informed consent, including the disclosure of the attendant risks and benefits of the experiment. In 1908, in testimony before a Royal Commission on Vivisection, none other than Sir William Osler testified that voluntary consent is the hinge on which the morality of any proposed human experimentation turns. Thus, by the turn of the 20th century, three principles had emerged as critical to the ethics of clinical research: voluntary consent, worthy scientific purpose, and the integrity of the investigator. An especially robust expansion of these principles is found in a set of regulations issued by the German government in 1931, providing clear directives for informed consent, documented justification for any deviation from protocols, a risk-benefit analysis, and a justification for the study of vulnerable populations. It is a matter of historic irony as well as tragedy that ten years later, German physicians, under the banner of the Third Reich, were engaged in some of the most blatantly unethical – better said: cruel and inhumane – clinical research ever performed: these “experiments” were unprecedented in the scope and the degree of harm and suffering inflicted on human subjects and included such barbarisms as injecting people with gasoline and live viruses, immersing people in ice water, and forcing people to ingest poison. After the defeat of the Germans, in December 1946, twenty-three physicians and administrators, many of them leading members of the German medical profession, were indicted and tried before the War Crimes Tribunal at Nuremberg. In their defense, many justified their participation on the grounds of medical necessity. The tribunal condemned the experiments as crimes against humanity. Sixteen of the twenty-three defendants were found guilty: seven of these were sentenced to death and the rest imprisoned; the remaining seven defendants were acquitted. The August 1947 verdict against them included a section entitled “Permissible Medical Experiments,” which became known as the Nuremberg Code.

In the post-World War II era, the Nuremberg Code was the first of many efforts to reckon with the ethical challenges of research involving human subjects; it was, as well, the first set of international standards. A set of “first principles” for the professional ethics of clinical investigators, the Code enunciated ten points, including the assertions that:

  • The voluntary consent of human subjects is absolutely essential.
  • The aim of human subjects research is to yield social benefits that cannot be obtained through any other method.
  • A human subjects research protocol should be based on the results of animal experimentation.
  • Clinical investigators must take measures to avoid all unnecessary physical and mental suffering in the human subjects of their research.
  • The degree of risk to which human subjects are exposed should never exceed the “humanitarian importance of the problem,” this risk should be minimized, and subjects should be protected against “even remote possibilities of injury, disability or death.”
  • Participants should be free to withdraw from the research at any time and investigators should terminate an experiment if injury, disability or death for any subject becomes likely.

One year later, in 1948, the General Assembly of the United Nations adopted the Universal Declaration of Human Rights, in part, to ensure that the findings at Nuremberg were not comfortably dismissed as a German aberration and that the principles enunciated in the Code were understood as universal in application and scope. Six years later, in 1954, the World Medical Association, which was founded in 1946, adopted a statement of principles for professionals involved in research and human experimentation. In 1964, an expanded version of this statement was adopted at the 18th World Medical Assembly in Helsinki. Known as the Declaration of Helsinki, the statement has been amended five times and clarified twice; it includes the following as “basic principles for medical research”:

  • It is the duty of the investigator to protect the life, health, privacy, and dignity of the human subject.
  • Responsibility for human subjects rests with the investigator.
  • The design of the research and its ethical legitimacy should be subject to independent review and approval by a committee formed and designated for this purpose.
  • The design should include an assessment of risks and benefits; an inadequate assessment of these parameters is cause for abstaining from the research, and any research protocol should cease in the event that disproportionate risks arise or the benefits of the intervention in focus are established.
  • There should be a reasonable likelihood that subject populations will benefit from the research.
  • No research should go forward without the voluntary, informed consent by subjects whose privacy is protected.
  • The benefits, risks, burdens, and effectiveness of a new intervention should be tested against and compared with those of the best established intervention.

Closer to home (that is, in the United States), the Clinical Center at the National Institutes of Health (NIH) was established in 1953. Along with research subjects who were also patients, the Center welcomed “normal volunteers”—men and women who offered themselves as healthy controls in drug studies. Among administrators at the Center, it was recognized that these individuals fell outside the boundaries within which clinicians, caring for patients, operate; lacking the protections inherent in the nature of such fiduciary relationships, they needed special protection from the risks they incurred from participation in human subjects research. It was recognized as well, however, that patients, through their participation in research, entered into a different relationship with physicians, one unlike the traditional healing relationship. As one observer at the time noted, “the relationship between experimenter and the experimented on, entered upon not to help but to confirm or disprove some biological generalization, is impersonal and objective. The original, basic patient-physician relationship implies the concept of solidarity, of life’s finiteness … experimentation as just described is foreign to it.” In struggling with the question of how to protect those at risk, there was a growing realization and conviction among the Center’s leaders that the good will of the investigator was not enough. In 1964, James Shannon, MD, director of the Institutes, asked an internal committee to review issues involved in human experimentation. Referring to recent reports of human subjects abuse, the committee counseled that the judgment of investigators is not sufficient to ensure the ethical validity of an experiment. Two years later, in 1966, the National Institutes of Health issued an internal policy and procedure order mandating the establishment of independent research review bodies—that is, what are known today as institutional review boards (IRBs).

The revelations that shaped the conclusions of the NIH committee were of abuses that occurred in the course of what was known as the Jewish Chronic Disease Hospital Study. Launched in 1963, the study was aimed at illuminating the pathogenesis of cancer and involved the injection of foreign, live cancer cells into patients hospitalized with various chronic diseases. Although the patients involved gave oral consent to participation, there was no disclosure or discussion of the fact that the study involved the injection of cancer cells, nor was there any documentation of such consent. The investigators argued that they believed documentation was unnecessary because it was customary to undertake even more dangerous procedures without any consent at all; moreover, they maintained that it was legitimate to withhold information about the express purpose of the research because of the likelihood that the participants would reject the cells. The investigators were eventually found guilty of fraud, deceit, and unprofessional conduct in this study—which was, it should be noted, funded by the NIH.

Revelations about yet another abuse-ridden study contributed to the growing sense of alarm and urgency about the ethics of clinical research in the United States. Known as the Willowbrook Study, this series of inquiries was undertaken, from 1963 to 1966, at a New York-based institution for “mentally defective” children known as the Willowbrook State School. The aim was to elucidate the natural history of infectious hepatitis under controlled circumstances. Children newly admitted to the school were injected with the virus. In the course of the study, the school closed its doors to new admissions, citing overcrowded conditions, but the study unit itself continued to operate and admit new participants, whose parents learned that the only way to gain admission for their children to Willowbrook was to consent to their involvement in the research. Once revealed, the study raised troubling questions about the adequacy and freedom of consent; inadequate disclosure of the risks, for example, of developing chronic liver disease; and deficiencies in the information given to parents about, for example, access to gamma globulin for treating the disease. The principal investigators’ defensive response to these questions centered on the claim that the vast majority of the children would have been infected at Willowbrook due to the crowded, unsanitary conditions at the school, as well as on the argument that the study participants were children whose parents had given their consent. The thorny problem of using vulnerable populations—and by virtue of their mental impairments and their age, the participants in the Willowbrook study were doubly vulnerable—is clearly illustrated in this dark chapter in the evolution of clinical research ethics in the United States. A milestone in that evolution occurred in 1966, when the Harvard physician Henry Beecher published a controversial article, “Ethics and Clinical Research,” in The New England Journal of Medicine. A distinguished professor of anesthesiology, Beecher presented twenty-two examples of unethical experimentation, all drawn from published articles by leading research scientists. The studies in focus exposed patients to excessive risks; ignored the need for consent; used poor, mentally incapacitated persons; and involved the withholding of therapies of known efficacy. Beecher’s antidote to this long list of abuses was reliance on the traditional ethic that looks to the integrity and conscience of the individual investigator as the best guarantor of a study’s ethical validity.

The decisive turning point came in 1972, when the first media accounts of the Tuskegee Syphilis Study were published. Tuskegee is the most notorious example in the United States of prolonged, knowing violations of the rights of a vulnerable group of research participants. The study was launched in the early 1930s as an examination of the natural history of untreated syphilis and continued until 1972. More than 400 African-American men with syphilis participated, with about 200 without the disease serving as controls; the men were recruited without informed consent and, in fact, were deliberately misinformed that some of the procedures done in the interest of research (for example, spinal taps) were actually “special free treatment.” By 1936, it was clear that many more infected men than controls had developed complications; ten years later, an interim report of the study indicated that the death rate among those with the disease was about twice as high as it was among the controls. In the 1940s, penicillin was found to be effective in treating syphilis; the study continued, however, and the men were neither informed of this development nor treated with the antibiotic.

Once the study was exposed, the ensuing public outrage led to the appointment of an ad hoc panel by what was then known as the U.S. Department of Health, Education and Welfare (now, the U.S. Department of Health and Human Services) to review the study and issue advisory opinions on how to ensure that such abuses never occur again. The panel recommended that the U.S. Congress establish a permanent body with the authority to regulate all federally supported research involving human subjects. In 1974, the U.S. Congress passed the National Research Act, creating the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Commission was charged with two tasks: (1) recommending regulations to protect the rights and the welfare of human subjects, especially of those with disabilities; and (2) developing principles to govern the ethical conduct of research.

In the years leading up to the formation of the National Commission, there was awareness among many investigators of the special ethical duties incumbent upon researchers working with human subjects—for example, the duty to avoid harm to the subjects and to obtain their consent. The specifics of these duties, however, were vague, and investigators enjoyed wide latitude in determining the scope of these duties. The “mood” of this era in the history of clinical research in the United States was dominated by a sense of social urgency and social progress; there was, in this context, a resultant, strong tendency to tip the balance between risks and benefits toward the benefit side and to determine benefit in terms of social rather than patient needs. After the Commission and its work, the ethical framework for clinical research was more clearly defined. Research was more clearly distinguished from therapy and the essential features of required consent were spelled out. Social progress could no longer trump human rights, owing to heightened sensitivity about the potential for exploitation of the disadvantaged. This is not to say that the work of the Commission and its impact on clinical research was without problems or difficulties. But one thing was clear: there was a new climate for clinical research in the United States. Between 1974 and 1978, the National Commission published eleven reports, addressing the ethics of research with the fetus (1975), prisoners (1976), children (1977), and the mentally infirm (1978); it also explored the ethics oversight and regulatory functions of institutional review boards (1978), as well as such topics as psychosurgery (1977), ethical guidelines for the delivery of health services (1978), and the implications of advances in biomedical and behavioral research (1978). Its final report—The Belmont Report—was published in 1979. It enunciates a set of ethical principles and guidelines for protecting human subjects and will be explored at greater length below.

The Belmont Report served as the ethical basis for the development of legal standards and regulations promulgated by the NIH and the U.S. Food and Drug Administration in 1981 and found in Title 45, Part 46 of the Code of Federal Regulations (in shorthand, 45 CFR 46). In 1991, the standards and regulations contained therein were extended to protect human subjects in virtually all federally funded clinical research: 45 CFR 46 is known as the Common Rule. Subpart A of the Common Rule sets out the ethical guidelines for the protection of human subjects; Subpart B addresses additional protections for research involving fetuses, pregnant women, and human in vitro fertilization; Subpart C addresses protections for prisoners as human subjects; and Subpart D focuses on protections for children, when they are human subjects. Two years later, another series of government-sponsored experiments came to light: beginning in 1944 and ending in 1974, the U.S. government sponsored several thousand human radiation studies with multiple aims, for example, of advancing biomedical science and promoting national interests in defense and space exploration. Most of these studies involved the administration of radioactive tracers in amounts not likely to cause physical harm; nonetheless, the studies were conducted without the awareness or the consent of the subjects, none of whom were likely to benefit from the research. In 2000, the U.S. Department of Health and Human Services established the Office for Human Research Protections (OHRP) (elevating and replacing the NIH’s Office for Protection from Research Risks); OHRP provides leadership to the seventeen federal agencies engaged in the conduct and/or funding of human subjects research under the Common Rule.


The Ethics of Clinical Research

There is, in clinical research, an inherent—indeed, omnipresent and unavoidable—ethical problem. The problem can best be understood by recalling the argument set forth in module 1, that is, that the end (or telos) of medicine—of every clinical encounter and of the relationship between physician and patient—is a right and good healing action for the individual patient. It is not too much of a stretch to argue that physicians are bound by a sort of Kantian categorical imperative to treat their patients as ends and never as means to some other end. To an inescapable degree, clinical research depends upon the abrogation of this duty. The end of clinical research is not care and healing for a patient, but rather knowledge, and the human subject is a means to this end. Thus, the ethical challenge inherent in clinical research is to minimize the risks and the potential for exploitation and harm.

In the recent history of the United States, the Belmont Report is a landmark in the long effort to grapple with this problem and this challenge. The National Commission’s Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research was published in 1979 as the last of its eleven reports. It is, in a sense, a retrospective reckoning with the ethical underpinnings of its previous work and a prospective attempt to shape the subsequent development of the legal and regulatory framework for clinical research in the United States. Its substance consists in the enunciation and explication of three ethical principles and in their practical application in the context of human subjects research. The three principles—or “prescriptive judgments,” as the report calls them—are: (1) respect for persons; (2) beneficence; and (3) justice.

  • Respect for persons incorporates two ethical “convictions” and attendant moral requirements. The first conviction is that individuals should be treated as autonomous agents: the associated moral requirement is to acknowledge the autonomy of individuals, particularly of human subjects in research. The second conviction is that individuals with diminished autonomy—for example, children, prisoners, the mentally impaired, the fetus—are entitled to protection: the correlative moral requirement is to protect these vulnerable populations. In the context of human subjects research, the principle of respect for persons finds its most concrete application in efforts to ensure informed consent with its hallmarks of adequate disclosure of information, comprehension of that information by the subjects of research, and their voluntary consent to participate in research.
  • Beneficence, in the language of the report, is an “obligation” with two general rules: (1) to do no harm and (2) to maximize possible benefits (that is, of participation in research) and to minimize possible harms. The principle or prescriptive judgment of beneficence is applied through careful, systematic assessments of risks (that is, of the probability and the magnitude of possible harms) and benefits. A few explanatory remarks are in order. First, the language here should be familiar from module 1: the familiar Hippocratic maxim, do no harm (found not in the Hippocratic Oath but rather in the treatise known as Epidemics from the Hippocratic Corpus), is extended from the realm of clinical practice to the realm of clinical research. Harm, in this context, should be broadly construed to embrace physical as well as psychological harms and legal, social, and economic harms. With this principle, moreover, there is a conundrum: we have to undertake clinical investigation both to identify and to measure benefits and risks of harm; a pivotal ethical challenge of such research is thus deciding when it is justifiable to seek certain benefits despite the risks involved and when such benefits should be foregone because the risks are too great.
  • A key ethical question in clinical research is this: who should receive the benefits of research and who should bear its burdens? This is a question of justice – and of justice in two senses. First, justice in clinical research looks to issues of fairness in distribution, that is, to the question of what each deserves; second, justice in clinical research takes its moral bearings from the long-standing dictum that “equals should be treated equally.” Applying the principle of justice, clinical researchers must use fair procedures in selecting the subjects of their research. There are two levels of relevance in the application of this principle, the individual and the social. At the individual level, investigators exhibit justice as fairness by not offering potentially beneficial research to a favored population or risk-laden research to a “socially” undesirable population—as was the case, for example, with the Tuskegee Syphilis Study. At the social level, justice requires that distinctions be drawn between classes of subjects that ought to participate and ought not to participate in any kind of research, based on the ability of members of those classes to shoulder the burdens of research and on the appropriateness of placing additional burdens on an already burdened class. Thus, it is in deference to considerations of justice that adults, rather than children, are preferred as subjects and that certain categories—for example, prisoners or the institutionalized—may be enlisted, if at all, only on certain conditions.

The principles described in the Belmont Report continue to be essential “structures” of the ethical framework for clinical research—but there are others as well. In their essay, “What Makes Clinical Research Ethical?”, Ezekiel J. Emanuel, David Wendler, and Christine Grady of the NIH Clinical Center’s Department of Clinical Bioethics (The Journal of the American Medical Association, May 2000, Volume 283: 2701-2711) spell out seven “universal ethical requirements” that any proposed study must meet: (1) social or scientific value; (2) scientific validity; (3) fair subject selection; (4) favorable risk-benefit ratio; (5) independent review; (6) informed consent; and (7) respect for potential and enrolled subjects.

  • Social and scientific value: Ethically valid studies generate new knowledge—from knowledge of human biology to knowledge that has relevance to the development of diagnostic or therapeutic interventions or to the evaluation thereof. The rationale for this requirement is the responsible use of finite financial and other resources and the avoidance of exploitation.
  • Scientific validity: The same rationale animates this requirement, which mandates that studies deploy valid, practically feasible methods; have clear objectives and a sound design; and exhibit clinical equipoise. “Clinical equipoise” is a condition that serves as both a scientific and an ethical prerequisite for the conduct of clinical research: namely, a state of authentic uncertainty about the comparative merits of the interventions under study. There are debates about whether that state of uncertainty should hold for the individual investigator or for the medical community as a whole; the latter is the more stringent and, arguably, more “objective” alternative. If, in the course of the investigation, an intervention is discovered to be superior to its alternatives, investigators are ethically obliged to offer the human subjects in the investigation that particular intervention.
  • Fair subject selection: The previous remarks concerning the principle of justice are applicable here: that is, that equals should be treated as equals and the benefits and burdens of clinical research should be distributed fairly. In studies that meet this requirement, the goals of the study itself are used to determine the groups and individuals who should be selected as participants—not such factors as the vulnerability or prestige of the group or individuals or the convenience of their selection and enlistment. Moreover, the results of the proposed study should be generalizable to the population that will use the intervention; and, consistent with the goals of the study, subjects should be selected to minimize risks and enhance benefits, and those who bear the risks and burdens of the research should enjoy its benefits.
  • Favorable risk-benefit ratio: The Belmont principle of beneficence (and its twin principle, non-maleficence) provides the rationale for this ethical requirement, which stipulates that risks and benefits be systematically assessed, that potential risks to individuals be minimized and potential benefits enhanced, and that the more likely and/or severe the risks, the greater the likelihood and magnitude of the prospective benefits must be.
  • Independent review: Investigators (and those who fund and sponsor their research) often labor under diverse pressures—and with multiple interests. The potential for conflicts (of interests and obligations) and for distortions, prejudice, or bias in judgment is real. Independent review and social accountability are critical strategies in the effort to maintain and protect the integrity of clinical research—and thus the trust not only of participants and the scientific community but also of society.
  • Informed consent: The rationale for this requirement is the Belmont principle of respect for persons. The requisite information must be disclosed to potential subjects, and they (or their surrogates) must demonstrate understanding and comprehension of that information; they must exercise voluntary choice in the decision to consent to, or refuse, participation.
  • Respect for potential and enrolled subjects: The rationale for this seventh and final requirement is provided by two Belmont principles, beneficence (and non-maleficence) and respect for persons. This requirement mandates that investigators respect the privacy of subjects, permit their withdrawal from studies at any time and for any reason, provide new information as it is generated or gathered in the course of the study (especially information relevant to their participation), closely monitor the welfare of the subjects (especially adverse events), and provide information on the outcomes of the research.


Clinical Trials: A Brief Note on Phases, Types, and the Gold Standard of Clinical Research

Clinical research involves the conduct of what are usually termed clinical trials, which can have as their objects of study any one of a variety of foci, including treatment, prevention, diagnostics, screening, and quality of life. Such trials are conducted in phases:

  1. Phase I typically involves the investigation of a new drug or treatment in a small group of subjects for the first time with a focus on evaluating the safety of the drug or treatment, determining safe dosage, and identifying side effects.
  2. Phase II typically involves a larger group of subjects with a focus on determining the effectiveness of the treatment and further evaluating its safety.
  3. Phase III extends the trial to an even larger group of subjects, this time with a focus on confirming effectiveness, monitoring side effects, comparing the treatment with other commonly used treatments, and refining safety information.
  4. Phase IV usually involves post-marketing studies of a drug (or treatment) with a focus on delineating additional information about the drug, including its associated risks, benefits, and optimal use.

Randomized controlled clinical trials are the “gold standard” in clinical research; by design, they offer the potential for the most rigorous evaluation of alternative interventions under study. Subjects are randomly assigned to different “arms” of the protocol, with each arm focused on the evaluation of one of the competing interventions. The ethical requirement of clinical equipoise demands that there be genuine uncertainty about which of the competing interventions is the superior alternative.
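The mechanics of random assignment can be made concrete with a brief illustration. The sketch below, in Python, is purely illustrative and is not drawn from any regulatory source or actual trial protocol; the function name, subject identifiers, and arm labels are hypothetical. It assigns enrolled subjects to two arms using permuted-block randomization, one common way of keeping the arms close in size as enrollment proceeds.

    import random

    def block_randomize(subject_ids, arms=("A", "B"), block_size=4, seed=None):
        # Permuted-block randomization: each block holds an equal number of
        # slots per arm, so arm sizes stay balanced as enrollment proceeds.
        if block_size % len(arms) != 0:
            raise ValueError("block_size must be a multiple of the number of arms")
        rng = random.Random(seed)
        slots_per_arm = block_size // len(arms)
        assignments = {}
        block = []
        for subject in subject_ids:
            if not block:
                # Build a fresh block with equal slots per arm, then shuffle it.
                block = [arm for arm in arms for _ in range(slots_per_arm)]
                rng.shuffle(block)
            assignments[subject] = block.pop()
        return assignments

    # Example: ten hypothetical subjects assigned to arms "A" and "B".
    print(block_randomize(["subject-%02d" % i for i in range(1, 11)], seed=7))

In an actual trial, randomization is typically handled by validated systems and is often stratified by prognostic factors; the point of the sketch is simply that chance, rather than investigator judgment, determines which intervention a given subject receives.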


The Ethics of Clinical Research: Consensus and Controversy

In the United States and, indeed, throughout the world, the ethics of clinical research is a multifaceted—and exceedingly dynamic—issue. One continuing source of controversy and scrutiny is the ethics of clinical trials conducted in the developing countries of what is often referred to as “the third world.” Should the same ethical standards that hold here in the United States and other “advanced,” industrialized countries be applied to human subjects research in these locales where different demographic and cultural conditions prevail? The epidemic of human immunodeficiency virus (HIV) and acquired immune deficiency syndrome (AIDS) has brought this and related questions to the fore as clinical investigators from the United States have sought to conduct trials of HIV medications in these settings, in which the burden of disease has been devastating and the need for wide-scale preventive and therapeutic interventions (for example, both vaccines and protease inhibitors) is urgent. Some research ethicists have argued that such ethical criteria as informed consent require an altogether different approach in these settings, in which Anglo-American conceptions of autonomy, privacy, and self-determination are utterly foreign.

Closer to home, there is ongoing concern about such issues as the quality of informed consent and the risks and benefits of participation in clinical trials. Some empirical studies suggest that potential and actual subjects often do not understand the information usually presented in the process of informed consent for participation and labor under the so-called “therapeutic misconception”—believing, that is, that the clinical trials in which they are enrolled have the purpose of enhancing their treatment and care. Understanding or comprehension—for example, of risks and benefits—is one of the pillars of informed consent; in the absence of understanding on the part of human subjects, informed consent for participation in clinical research remains deficient and thus calls into question the ethical validity of any given trial for which this problem exists. With respect to risks and benefits—which must, on balance, favor benefits over risks—there are empirical studies that suggest that the risks associated with participation in clinical research are relatively small. Indeed, empirical studies of clinical trials in oncology indicate that cancer patients participating in trials reap a net benefit—improved survival—from their inclusion in studies.


Suggested Solo or Group Activity

Placebos are a somewhat controversial issue in clinical research ethics. Conduct a search of the scholarly literature on the ethics of “placebo”—sometimes referred to as “sham”—surgical procedures. What are the ethical guidelines governing the use of placebos in clinical research, generally? And what are the arguments that have been advanced on behalf of—and against—the use of placebos of a surgical nature?




Additional Resources


An interesting Public Broadcasting Service (PBS) telecast on the ethics of clinical research can be found at http://www.pbs.org/wnet/religionandethics/week848/cover.html.

The Association of Clinical Research Professionals has developed a code of ethics that may be found here: http://www.acrpnet.org/about/ethics.html.

Henry K. Beecher, MD’s landmark study, “Ethics and Clinical Research,” can be found here: http://www.scielosp.org/pdf/bwho/v79n4/v79n4a13.pdf.

The full text of the Nuremberg Code can be found here: http://www.hhs.gov/ohrp/references/nurcode.htm.  The full text of the Declaration of Helsinki can be found here: http://www.scielosp.org/pdf/bwho/v79n4/v79n4a14.pdf.

Another classic statement of ethical precepts for clinical research can be found here: http://www.who.int/docstore/bulletin/pdf/2001/issue4/vol79.no.4.365-372.pdf.

An article exploring the impact of therapeutic research on informed consent from the medical oncology perspective can be found here: http://www.jco.org/cgi/reprint/17/5/1601.

The complexities of clinical research ethics are explored in a succinct article found here: http://www.cmaj.ca/cgi/reprint/158/10/1303.pdf.

Individuals involved in human subjects research funded by the National Institutes of Health must undergo education on this topic; frequently asked questions about this requirement can be found at: http://grants1.nih.gov/grants/policy/hs_educ_faq.htm.

The text of the Common Rule can be found here: http://www.hhs.gov/ohrp/humansubjects/guidance/45cfr46.htm.

The website of the U.S. Department of Health and Human Services Office for Human Research Protections can be found here: http://www.hhs.gov/ohrp/.

The U.S. Department of Health and Human Services Office for Human Research Protections offers a series of decision charts designed to help investigators understand the ethical dimensions and regulatory requirements of proposed studies. These charts can be found here: http://www.hhs.gov/ohrp/humansubjects/guidance/decisioncharts.htm.

A comprehensive, but layperson-focused overview of clinical trials can be found at: http://clinicaltrials.gov/ct/gui/c/w1b/screen/PrintURL?file=resources.html&JServSessionldcs_current=e7rhe2u5q5.


