In their diagnostic process, GPs combine large amounts of knowledge they have accumulated during their personal training. They consider illness scripts or prototypes, which they then accept or reject. At the same time, considering diagnostic data also implies a quantitative process, namely assessing the probability that the patient has a particular disease. Thomas Bayes (1702-1761) formulated in mathematical terms how the probability of a particular conclusion – in medicine a diagnosis – is altered by new data that become available, for instance from history-taking or examination. Bayesian logic can be described as a mathematical rule combining prior information with evidence from data.(1;2)
There is a difference between the classical, so-called probabilistic approach and the Bayesian approach.(3) The first approach is common in most clinical trials comparing therapies and uses classical hypothesis testing with a P value and a 95% confidence interval. Within that framework, statements about the probability of the null hypothesis or of an alternative hypothesis are difficult to make. Using the Bayesian approach, however, the prior probability of a disease or of a treatment effect must be estimated. This prior probability is then combined with new data, leading to a posterior probability of the diagnosis or of the treatment effect.
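For illustration, the prior-to-posterior step can be written out numerically; the prevalence, sensitivity and specificity in this Python sketch are invented for the example and do not come from the cited studies.

    # Bayes' theorem applied to a single positive test result.
    # All numbers are hypothetical illustrations.
    prior = 0.10        # estimated prior probability of the disease
    sensitivity = 0.90  # P(positive test | disease)
    specificity = 0.80  # P(negative test | no disease)

    # Overall probability of a positive test (law of total probability)
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)

    # Posterior probability of the disease given a positive test
    posterior = sensitivity * prior / p_positive
    print(f"Posterior probability after a positive test: {posterior:.2f}")  # 0.33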
Bayes’ rule can play an important role in diagnostics today, allowing ‘hard’ data, based on scientific evidence, to be combined with subjective assessments. Yet doctors often find it hard to incorporate probabilities and other numbers into their diagnostic process. Concepts like sensitivity and specificity are often used incorrectly in interpreting test results.(4) There is evidence to suggest that GPs find it easier to deal with information presented in a different form, such as ‘this symptom is five times as likely to occur in patients with this disease as in the rest of the patient population’.(5-7) The epidemiological equivalent of this statement is the likelihood ratio: the positive likelihood ratio (LR+) of this particular symptom for this particular disease is 5. The LR value integrates sensitivity and specificity: LR+ = sensitivity / (1 − specificity).
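The same update can also be expressed in odds form, which is where the likelihood ratio enters; a minimal sketch using the same hypothetical numbers as above:

    # Likelihood-ratio (odds) form of the same update; numbers are hypothetical.
    prior = 0.10
    sensitivity = 0.90
    specificity = 0.80

    lr_positive = sensitivity / (1 - specificity)     # LR+ = 0.90 / 0.20 = 4.5
    prior_odds = prior / (1 - prior)                   # about 0.11
    posterior_odds = prior_odds * lr_positive          # 0.5
    posterior = posterior_odds / (1 + posterior_odds)
    print(f"LR+ = {lr_positive:.1f}, posterior probability = {posterior:.2f}")  # 4.5, 0.33

Multiplying the prior odds by the LR and converting back to a probability gives the same posterior as the direct application of Bayes’ theorem above.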
Although clinicians seem to use Bayes’ theorem in their diagnostic reasoning, and the patient’s history, signs and symptoms represent powerful information for updating prior probabilities,(8) GPs do not calculate a running tally of likelihood ratios.(6;9) Their assessment of prior probabilities is based on their knowledge of patients and their expertise, and is usually expressed on an ordinal scale ranging from very unlikely to almost certain. GPs add evidence to a prior probability instead of multiplying evidence by a prior probability.(10) They base the values needed for use in formulas on subjective evaluations.(11;12) Furthermore, the power of a diagnostic indicator to confirm or exclude is mostly assessed in terms like insignificant, weak, good, strong or very strong, and GPs usually apply their own estimated decision thresholds when deciding whether to wait, initiate further examinations or take action.(13;14) These categorical, intuitive estimations of the prior probability and of the excluding or confirming power of a test, leading to a posterior probability of a diagnosis, can be represented in a formula: the prior probability of a diagnosis expressed as log10 odds, plus the categorical class of power of a diagnostic test expressed as log10 LR, yields the log10 odds of the post-test probability of the diagnosis.(15;16) This categorical approach might be an interesting instructional tool.
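A minimal sketch of this additive log10 formulation follows; the prior probability and the mapping of verbal categories to log10 LR values are chosen purely for illustration and are not the categories defined in references 15 and 16.

    import math

    # Categorical Bayesian updating on a log10-odds scale; all values are hypothetical.
    prior = 0.10
    prior_log_odds = math.log10(prior / (1 - prior))      # about -0.95

    # Illustrative mapping of verbal 'power of the test' classes to log10 LR
    power = {"weak": 0.3, "good": 0.7, "strong": 1.0, "very strong": 1.3}

    # On the log scale, evidence is added to the prior instead of multiplied
    posterior_log_odds = prior_log_odds + power["strong"]
    posterior_odds = 10 ** posterior_log_odds
    posterior = posterior_odds / (1 + posterior_odds)
    print(f"Posterior probability after a 'strong' confirming finding: {posterior:.2f}")  # about 0.53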
Recently, the question has been raised whether GPs really need an etiological diagnosis every time a patient presents with symptoms and signs.(17) Sometimes it might be more useful to start treatment based on symptoms and signs alone, bypassing the diagnosis, because the disease picture is still undifferentiated and there is not yet a clear diagnosis.(18) The feeling ‘there is something wrong here’ then plays a prognostic rather than a diagnostic role.(17)
(1) Woodworth GG. Biostatistics: a Bayesian introduction. Hoboken, New Jersey: John Wiley & Sons; 2004.
(2) Habbema JDF, Eijkemans R, Krijnen P, Knottnerus JA. Analysis of data on the accuracy of diagnostic tests. In: Knottnerus JA, Buntinx F, editors. The Evidence Base of Clinical Diagnosis: Theory and methods of diagnostic research. 2nd ed. London: Wiley-Blackwell; 2009. p. 118-45.
(3) Lewis RJ, Wears RL. An introduction to the Bayesian analysis of clinical trials. Ann Emerg Med 1993 Aug;22(8):1328-36.
(4) Berwick DM, Fineberg HV, Weinstein MC. When doctors meet numbers. Am J Med 1981 Dec;71(6):991-8.
(5) Attia J. Moving beyond sensitivity and specificity: using likelihood ratios to help interpret diagnostic tests. Australian Prescriber 2003;26(5).
(6) Steurer J, Fischer JE, Bachmann LM, Koller M, ter Riet G. Communicating accuracy of tests to general practitioners: a controlled study. BMJ 2002 Apr 6;324(7341):824-6.
(7) Bachmann LM, Steurer J, ter Riet G. Simple presentation of test accuracy may lead to inflated disease probabilities. BMJ 2003 Feb 15;326(7385):393.
(8) Gill CJ, Sabin L, Schmid CH. Why clinicians are natural Bayesians. BMJ 2005 May 7;330(7499):1080-3.
(9) Reid MC, Lane DA, Feinstein AR. Academic calculations versus clinical judgments: practicing physicians’ use of quantitative measures of test accuracy. Am J Med 1998 Apr;104(4):374-80.
(10) Van den Ende J, Van Gompel A, Van den Ende E, Van Damme W, Janssen PA. Bridging the gap between clinicians and clinical epidemiologists: Bayes theorem on an ordinal scale. Theor Surg 1994;9(195).
(11) Kleinmuntz B. Why we still use our heads instead of formulas: Toward an integrative approach. Psychological Bulletin 1990;107(3):296-310.
(12) Hammond KR, Hamm RM, Grassia JL, Pearson T. Direct comparison of the efficacy of intuitive and analytical cognition in expert judgment. IEEE Transactions on Systems, Man, and Cybernetics 1987. p. 753-70.
(13) Pauker SG, Kassirer JP. The threshold approach to clinical decision making. N Engl J Med 1980 May 15;302(20):1109-17.
(14) Van Puymbroeck H, Remmen R, Denekens J, Scherpbier A, Bisoffi Z, Van den Ende J. Teaching problem solving and decision making in undergraduate medical education: an instructional strategy. Med Teach 2003 Sep;25(5):547-50.
(15) Van den Ende J, Bisoffi Z, Van Puymbroek H, Vanderstuyft P, Van Gompel A, Derese A, et al. Bridging the gap between clinical practice and diagnostic clinical epidemiology: pilot experiences with a didactic model based on a logarithmic scale. J Eval Clin Pract 2007 Jun;13(3):374-80.
(16) Moreira J, Bisoffi Z, Narvaez A, Van den Ende J. Bayesian clinical reasoning: does intuitive estimation of likelihood ratios on an ordinal scale outperform estimation of sensitivities and specificities? J Eval Clin Pract 2008 Oct;14(5):934-40.
(17) Dinant GJ, Buntinx FF, Butler CC. The necessary shift from diagnostic to prognostic research. BMC Fam Pract 2007;8:53.
(18) Dinant GJ. Diagnosis and decision. Undifferentiated illness and uncertainty in diagnosis and management. In: Jones R, Britten N, Culpepper L, Gass D, Grol R, Mant D, et al., editors. Oxford Textbook of Primary Medical Care. Oxford: Oxford University Press; 2004. p. 201-3.