When asked the question about the coin toss, I chose answer A. Not because I knew the answer, but because I had to pick one. I figured that answer B, the two being equal, was not correct; it would have been too easy. I also thought the answer was not C, because placing the answer to a trick question in the last position, the one people are most inclined to choose, would have been too obvious for the purpose of his talk. If I were arranging the answers with the intent to persuade the audience toward the one I wanted, I would have placed it at C. People more commonly remember the last thing said rather than the first, so the audience is more likely to choose the last answer heard. I chose A because I believed he was setting up the audience to be confused and misled in order to prove his point (Donnelly, 2005). It was, in fact, a lucky guess, as the speaker did exactly that. My choice of the correct answer had nothing to do with an understanding of statistics or of the probability of heads or tails; it had more to do with experience and human nature. He had a point to prove and manipulated the question and answers to prove it. This is not unheard of, and in this case it proved effective.


Many stories exist of a person who receives a diagnosis of a terminal illness and later finds out that they do not have the disease because the result was a false positive. As a patient, when a test comes back positive, a second test should immediately follow to verify the result and rule out a false positive. In the case of HIV, where a positive result immediately requires notifying anyone at risk of contracting the disease from you and beginning an aggressive drug cocktail, it is frightening to hear the false-positive and error rates attached to the test. However, the explanation that a 99% accurate test does not translate into 99% certainty across the board began to demonstrate just how difficult statistics are to understand (Donnelly, 2005). In particular, it demonstrates how easy it is to change the message by presenting the statistics differently, and how easily the receiver of the information can misunderstand it. Nowhere in the statement, “the HIV test is 99% accurate,” does it say that only one percent of the people who test positive actually received an erroneous result. The statement means that any single test has a 99% chance of being accurate; each person still carries a one percent chance of a wrong result, and because HIV is rare in the general population, that small error rate applied to the many uninfected people can produce more false positives than true positives (Donnelly, 2005). In this case, the misleading is not purposeful; it is a failure to explain fully what the figure means, and that failure is what misleads.
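A small sketch can make this concrete. The calculation below applies Bayes' theorem to an assumed population; the prevalence, sensitivity, and specificity figures are illustrative assumptions, not numbers taken from Donnelly's talk.

```python
# Minimal sketch of the false-positive reasoning above, using Bayes' theorem.
# All numeric inputs are illustrative assumptions.

def probability_infected_given_positive(prevalence, sensitivity, specificity):
    """Return P(infected | positive test) for the assumed test and population."""
    true_positives = prevalence * sensitivity                 # infected people who test positive
    false_positives = (1 - prevalence) * (1 - specificity)    # healthy people who test positive
    return true_positives / (true_positives + false_positives)

if __name__ == "__main__":
    # Assume 1 in 1,000 people are infected and the test is "99% accurate"
    # in both directions (99% sensitivity, 99% specificity).
    p = probability_infected_given_positive(prevalence=0.001,
                                            sensitivity=0.99,
                                            specificity=0.99)
    print(f"Chance a positive result is a true positive: {p:.1%}")
    # Prints roughly 9%: most positives come from the large uninfected group,
    # which is why "99% accurate" misleads when read as "99% certain".
```

Under these assumed numbers, a positive result is correct only about nine percent of the time, which is the gap between what the statement says and what listeners hear.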

The comments that Donnelly (2005) made regarding the use of statistics in a trial are not a surprise. As the coin-toss and HIV-test examples indicate, there are many ways to interpret statistics that are incorrect if the intent is finding the truth. As Donnelly demonstrated in the coin-toss example, each of the answers appeared valid, yet two of them, chosen even by people in the statistics world, were wrong because they did not take into account all of the information surrounding the problem. In the case of the trial, the statistics did not take into account the factors that made the occurrence of cot death more likely in the defendant’s home, twice. Because the mother did not fall into the obvious high-risk categories of a smoking mother living in poverty or the lower middle class, the statistician simply ignored the other factors. This happened because the established statistics demonstrated what the prosecution wanted them to prove: the belief was that the mother was guilty, and the statistics were never going to establish her innocence. When one reviews the information, it is extremely one-sided, arguing against the mother rather than providing an accurate portrayal of the probability of such an event occurring twice in one family.
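The core problem in the trial figure is the independence assumption: squaring the single-family probability treats the two deaths as unrelated, when shared genetic or environmental factors make a second death far more likely after a first. The sketch below shows how large the gap can be; the numbers are hypothetical, chosen only to illustrate the effect, and are not the figures used in the actual case Donnelly discusses.

```python
# Sketch of why squaring a single-event probability misleads when the two
# events are not independent. All numbers are illustrative assumptions.

baseline_risk = 1 / 8500            # assumed chance of one cot death in a random family

# Naive calculation: treat the two deaths as independent events.
naive_double_risk = baseline_risk ** 2
print(f"Assuming independence: 1 in {1 / naive_double_risk:,.0f}")

# More careful calculation: after one cot death, the family may share risk
# factors, so the conditional risk of a second death is assumed
# (hypothetically) to be much higher than the baseline.
conditional_second_risk = 1 / 100   # hypothetical elevated risk after a first death
correlated_double_risk = baseline_risk * conditional_second_risk
print(f"Allowing for shared risk factors: 1 in {1 / correlated_double_risk:,.0f}")

# The two answers differ by orders of magnitude: the headline figure depends
# entirely on the independence assumption the expert made.
```

The point is not which set of numbers is right, but that the impressive one-in-many-millions figure is an artifact of an assumption the jury was never asked to examine.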

While this is not a discussion of the defense team’s course of action, one has to wonder why the obvious questions remained unasked. The defense never challenged the validity of the statistics, nor did it present other data showing that additional high-risk factors for cot death existed and applied to this mother (Donnelly, 2005). Donnelly (2005) made an excellent point that the problem likely stemmed from the belief that someone who studies in the field the statistics come from has the experience and expertise to interpret the data. The problem, however, is that no additional exploration of the subject occurred. That failure hurts both the defendant and faith in the field of statistics. Misinterpreted statistics encourage the belief that statistics cannot be trusted at all, because their false use causes innocent individuals to suffer unreasonable consequences (Donnelly, 2005). The defense could easily have addressed these issues to the jury in terms they understand, even if the world of statistics is foreign to them. Using examples like the ones Donnelly used would certainly have shown the jury how misleading statistics can be.

References

Donnelly, P. (2005). Using statistics to fool juries [Video]. TED.com.