Jakob and Todd discuss the philosophy of statistics. Frequentist and Bayesian approaches. Fisher, Neyman, and Pearson, and their statistical methods for evaluating hypotheses. Deborah Mayo and statistical inference as severe testing. Proper and improper uses of p-values. The pitfalls of data dredging and p-hacking. Conditions under which prior probabilities make Bayesian approaches particularly useful. The utility of Bayesian concepts like priors, posteriors, updating, and loss functions in machine learning. Bayes’ Theorem versus Bayesianism as a statistical philosophy. An algorithmic ‘method of methods’ for deciding when to apply various statistical tools as an AI-complete problem. Important test cases in statistics like the Higgs boson observation, the Eddington experiment for General Relativity, and the causal link between smoking and cancer. The problem of induction. Inferring the direction of causation for correlated variables. Karl Popper, falsification, and the impossibility of confirmation. What counts as evidence. Randomness as a limitation on knowledge and as a feature of reality itself. The ontological status and nature of a probability distribution, both as a classical value and as a quantum property.
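The distinction drawn in the episode between Bayes’ Theorem as a mathematical identity and Bayesianism as a philosophy can be illustrated with a minimal sketch of the theorem used as an update rule. The coin-flip scenario and the hypothesis names here are illustrative assumptions, not an example from the discussion:

```python
# Bayes' Theorem as an update rule: P(H|D) = P(D|H) * P(H) / P(D),
# applied to a coin whose bias is unknown.

def bayes_update(priors, likelihoods):
    """Return posterior probabilities for each hypothesis after one observation.

    priors: dict mapping hypothesis -> P(H)
    likelihoods: dict mapping hypothesis -> P(D|H) for the observed data D
    """
    evidence = sum(priors[h] * likelihoods[h] for h in priors)  # P(D)
    return {h: priors[h] * likelihoods[h] / evidence for h in priors}

# Two hypotheses: the coin is fair (P(heads) = 0.5) or biased (P(heads) = 0.9).
posterior = {"fair": 0.5, "biased": 0.5}

# Observe three heads in a row, updating the posterior after each flip;
# yesterday's posterior becomes today's prior.
for _ in range(3):
    posterior = bayes_update(posterior, {"fair": 0.5, "biased": 0.9})
```

After three heads the posterior shifts toward the biased hypothesis (0.9³ = 0.729 versus 0.5³ = 0.125, normalized). The philosophical question the episode raises is not this arithmetic, which is uncontroversial, but whether it is legitimate to assign the initial 50/50 prior at all.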