From Richard McElreath’s textbook Statistical Rethinking, via Data Elixir:
…The above is a kind of folk Popperism, an informal philosophy of science common among scientists but not among philosophers of science. Science is not described by the falsification standard, and Popper recognized that. In fact, deductive falsification is impossible in nearly every scientific context. In this section, I review two reasons for this impossibility: 1) Hypotheses are not models… 2) Measurement matters…
…For both of these reasons, deductive falsification never works. The scientific method cannot be reduced to a statistical procedure, and so our statistical methods should not pretend. Statistical evidence is part of the hot mess that is science, with all of its combat and egotism and mutual coercion. If you believe, as I do, that science does often work, then learning that it doesn’t work via falsification shouldn’t change your mind. But it might help you do better science. It might open your eyes to many legitimately useful functions of statistical golems…
…So if attempting to mimic falsification is not a generally useful approach to statistical methods, what are we to do? We are to model. Models can be made into testing procedures–all statistical tests are also models–but they can also be used to design, forecast, and argue…
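The claim that "all statistical tests are also models" has a concrete reading: a classical test can usually be rewritten as a fitted model. As a minimal sketch (not from the book, which works in R; this assumes Python with numpy, scipy, and statsmodels installed), the pooled two-sample t-test gives the same inference as an ordinary linear regression of the outcome on a binary group indicator:

```python
# Illustration: the two-sample (pooled) t-test is the same inference as
# a linear model y = alpha + beta * group + error, tested on beta.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=1.0, size=50)   # simulated outcomes, group A
b = rng.normal(loc=0.5, scale=1.0, size=50)   # simulated outcomes, group B

# Classical test: pooled two-sample t-test.
t_test = stats.ttest_ind(a, b, equal_var=True)

# Same thing as a model: regress y on a constant and a group dummy.
y = np.concatenate([a, b])
group = np.concatenate([np.zeros_like(a), np.ones_like(b)])
X = sm.add_constant(group)
fit = sm.OLS(y, X).fit()

# The t statistic on the slope matches the test statistic in magnitude
# (opposite sign, since the dummy codes group B as 1), and the two-sided
# p-values are identical.
print(t_test.statistic, fit.tvalues[1])
print(t_test.pvalue, fit.pvalues[1])
```

Seen this way, the "test" is just one use of the fitted model; the same model object can also be used to forecast group means or to argue about effect size, which is the point of the passage.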