Question: ASSESSING CANCER RISK - FROM MOUSE TO MAN
Cancer is a frightening disease. The biological process that creates cancerous cells from healthy tissue is poorly understood at best. Much research has been conducted into how external conditions relate to cancer; for example, we know that smoking leads to a substantially higher incidence of lung cancer in humans. Cancer also appears spontaneously, however, and its onset seems to be inherently probabilistic, as shown by the fact that some people smoke all their lives without developing lung cancer.

Some commentators claim that the use of new and untested chemicals is leading to a cancer epidemic, and as evidence they point to the increase in cancer deaths over the years. Cancer deaths have indeed increased, but people generally live longer now, so more elderly people are at risk for the disease. When the figures are adjusted for this increased life span, cancer rates have not risen substantially; in fact, some cancers (liver, stomach, and uterine) are less common now than they were 50 years ago (1986 data of the American Cancer Society). Nevertheless, the public fears cancer greatly. The Delaney Clause, added to the Federal Food, Drug, and Cosmetic Act by the Food Additives Amendment of 1958, outlaws residues in processed foods of chemicals that pose any risk of cancer to animals or humans.
One of the results of this fear has been an emphasis in public policy on assessing cancer risks from a variety of chemicals. Scientifically speaking, the best way to determine cancer risk to humans would be to expose one group of people to the chemical while keeping others away from it. Such experiments would be unethical, however, so scientists generally rely on experiments performed on animals, usually mice or rats. The laboratory animals in the experimental group are exposed to high doses of the substance being tested; high doses are required because low doses probably would not have a statistically detectable effect on the relatively small experimental group. After the animals die, cancers are identified by autopsy. This kind of experiment is called a bioassay. Typical cancer bioassays involve 600 animals, require 2 to 3 years to complete, and cost several hundred thousand dollars.

When bioassays are used to make cancer risk assessments, two important extrapolations are made. First, there is the extrapolation from high doses to low doses. Second, it is necessary to extrapolate from effects on the test species to effects on humans. On the basis of the laboratory data and these two extrapolations, assessments are made of the incidence of cancer when humans are exposed to the substance.
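To make the two extrapolations concrete, here is a minimal Python sketch of the simplest possible approach: assume that extra cancer risk is linear in dose all the way down to low doses, and that humans are as sensitive as the test species per unit of body-weight-adjusted dose. Every number in it is hypothetical, and real assessments use more elaborate dose-response models and interspecies scaling; the point is only to show where the two extrapolations enter the arithmetic.

```python
# Illustrative sketch only (not an official risk-assessment method):
# linear low-dose extrapolation from a hypothetical mouse bioassay.
# All numbers below are invented for illustration.

bioassay_dose_mg_per_kg = 50.0   # high dose given to mice (mg/kg/day)
tumor_rate_exposed = 30 / 300    # tumors observed in the exposed group
tumor_rate_control = 6 / 300     # background tumors in the control group

# Extra (dose-attributable) risk at the tested high dose
extra_risk = tumor_rate_exposed - tumor_rate_control

# Extrapolation 1: assume risk is linear in dose down to low doses.
slope = extra_risk / bioassay_dose_mg_per_kg   # risk per mg/kg/day

# Extrapolation 2: assume humans respond like mice per unit of
# body-weight-adjusted dose (a strong, contestable assumption).
human_dose = 0.01                              # mg/kg/day human exposure
estimated_human_risk = slope * human_dose

print(f"Slope factor: {slope:.2e} per mg/kg/day")
print(f"Estimated lifetime risk at {human_dose} mg/kg/day: "
      f"{estimated_human_risk:.2e}")
```

Both modeling choices in the sketch (linearity at low doses and equal mouse-human sensitivity) are precisely the subjective judgments that Question 1 below asks about.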
1. Clearly, these extrapolations rest on subjective judgments. Because cancer is viewed as an inherently probabilistic phenomenon, it is reasonable to view these judgments as probability assessments. What kinds of assessments do you think are necessary to make the extrapolations? What issues must be taken into account? What kind of scientific evidence would help in making the necessary assessments?
2. It can be argued that most cancer risk assessments provide only weak evidence of potential danger or the lack thereof. To be specific, a chemical manufacturer and a regulator might argue opposite sides of the same study: the manufacturer might claim that the study does not conclusively show that the substance is dangerous, while the regulator might claim that it does not conclusively demonstrate safety. Situations like these arise often, and decisions must be made with imperfect information. What kind of strategy would you adopt for making these decisions? What trade-offs does your strategy involve? (One candidate strategy is illustrated in the first sketch after these questions.)
3. In the case of risk assessment, as with many fields of scientific inquiry, some experiments are better than others for many reasons. For example, some experiments may be more carefully designed or use larger samples. In short, some sources of information are more "credible." For a simple, hypothetical example, suppose that you ask three "experts" whether a given coin is fair. All three report that the coin is fair; for each one the best estimate of P(Heads) is 0.50. You learn, however, that the first expert flipped the coin 10,000 times and observed heads on 5,000 occasions. The second flipped the coin 20 times and observed 10 heads. The third expert did not flip the coin at all, but gave it a thorough physical examination, finding it to be perfectly balanced, as nearly as he could measure. How should differences in the credibility of information be accounted for in assessing probabilities? Would you give the same weight to information from each of the three experts? In the case of putting together information on cancer risk from multiple experiments and expert sources, how might you deal with the sources' differential credibility? (The second sketch below shows one way to formalize this for the coin example.)
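For Question 2, one commonly discussed strategy is to choose the action that minimizes expected loss under a subjective probability that the substance is carcinogenic. The sketch below illustrates the mechanics; the probability and all loss figures are invented placeholders, not real regulatory values.

```python
# Hypothetical sketch of one decision strategy: minimize expected loss
# given a subjective probability that the chemical is a carcinogen.
# Probability and losses are illustrative placeholders.

p_carcinogen = 0.3   # assessor's probability that the chemical causes cancer

# Losses (arbitrary units) for each (action, true state) pair:
# health costs of allowing a carcinogen vs. economic costs of banning
# a safe product.
loss = {
    ("allow", "carcinogen"): 100.0,  # cancers that could have been avoided
    ("allow", "safe"):         0.0,
    ("ban",   "carcinogen"):   5.0,  # residual costs even if banned
    ("ban",   "safe"):        20.0,  # lost benefits of a harmless product
}

def expected_loss(action: str, p: float) -> float:
    """Expected loss of an action when P(carcinogen) = p."""
    return p * loss[(action, "carcinogen")] + (1 - p) * loss[(action, "safe")]

for action in ("allow", "ban"):
    print(f"{action}: expected loss = {expected_loss(action, p_carcinogen):.1f}")

best = min(("allow", "ban"), key=lambda a: expected_loss(a, p_carcinogen))
print("Chosen action:", best)
```

The trade-off is explicit in the loss table: lowering the cost assigned to banning a safe product makes the rule more precautionary, while raising it favors leaving products on the market until the evidence is stronger.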
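For Question 3's coin example, one standard way to formalize credibility is Bayesian updating with a Beta distribution over P(Heads): each expert's evidence yields the same posterior mean of 0.50 but very different spreads. The Beta(50, 50) used for the third expert is an assumed, subjective encoding of "the coin looks perfectly balanced"; nothing in the problem dictates that choice.

```python
# Minimal Bayesian sketch of the coin example: summarize each expert's
# evidence as a Beta distribution over P(heads) and compare spreads.

from math import sqrt

def beta_mean_sd(a: float, b: float) -> tuple:
    """Mean and standard deviation of a Beta(a, b) distribution."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, sqrt(var)

experts = {
    "10,000 flips, 5,000 heads": (1 + 5000, 1 + 5000),  # uniform prior + data
    "20 flips, 10 heads":        (1 + 10,   1 + 10),
    "physical exam only":        (50,       50),        # assumed subjective prior
}

for label, (a, b) in experts.items():
    mean, sd = beta_mean_sd(a, b)
    print(f"{label}: mean = {mean:.3f}, sd = {sd:.4f}")
```

Run as-is, this prints a standard deviation of roughly 0.005 for the 10,000-flip expert, 0.10 for the 20-flip expert, and 0.05 for the physical examiner, which suggests one answer to the weighting question: weight each source by the precision of the distribution its evidence supports.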