It all starts with candidate guesses
There is something peculiar about the way we humans solve problems.
Not the tidy, procedural kind of solving – follow the steps, get the answer – but the messier, more interesting kind, where the problem is poorly defined, the tools are uncertain, and the only move available is to propose something and see what happens.
Most hypotheses are wrong. Most proposals miss. And yet the enterprise of science, of philosophy, of ordinary practical reasoning, depends entirely on our willingness to keep proposing.
In the late seventeenth century, one of the central unsolved problems in instrument-making was deceptively simple: what should a thermometer be calibrated against? Temperature, unlike length or weight, has no obvious natural unit. You cannot hold it next to a ruler. Early instrument-makers tried everything – the temperature of deep cellars, the heat of the human body, the point at which anise oil congeals.
In 1688, the French scientist Joachim Dalencé proposed something that now reads as almost comically domestic: the melting point of butter as the upper anchor of his scale, with the freezing point of water at the lower end. Butter is not a pure substance (though Dalencé was substantially French); it melts at different temperatures depending on its composition, its age, the season. As a scientific standard it is hopeless.
And yet the proposal carries a genuine insight: that temperature needs to be calibrated against something in the world that ordinary people can test and reproduce without specialist equipment. That instinct – reach for the tangible, the everyday, the reproducible – is exactly the instinct that eventually worked. Daniel Fahrenheit used body temperature and brine; Anders Celsius used the boiling and freezing of water. Dalencé was wrong about butter and right about everything that mattered. Not that butter doesn’t matter – we all know it does, unless you’re vegan.
Science progressed because the proposals never stopped – though some were occasionally undercut.
In 1847, the Hungarian physician Ignaz Semmelweis noticed that women in the Viennese maternity ward attended by medical students who came directly from the autopsy room were dying from childbed fever at catastrophically higher rates than those attended by midwives who had touched no corpses. His hypothesis was that the doctors were carrying “cadaverous particles” on their unwashed hands. He had no germ theory to work from; Pasteur’s microbiology was still decades from general acceptance.
The hypothesis was wrong at the mechanistic level and yet operationally, clinically, morally exactly right. Handwashing with chlorinated lime cut mortality from around 16% to below 2% within months. Semmelweis’s supervisors rejected him anyway, partly because they could not tolerate the idea that their own hands had been killing patients. He died in an asylum, vindicated only posthumously. But the hypothesis that saved those lives was never rigorously derived. It was read off from a pattern in the world, through careful attention and something close to instinct. The proposal came, and the proposal died through contact with the ‘cadaverous particles’ of intellectual myopia.
There is a temptation to explain all of this as luck, or as unconscious induction, or as the natural output of experts who have absorbed so much information that the right answer surfaces automatically. But that undersells something important.
These thinkers were doing something that no algorithm, by definition, can do. They were generating candidate truths – guesses, really – and testing them against the world before they had any logical entitlement to confidence. This is what the philosopher Maurice Merleau-Ponty meant by “optimal grip”: the mind constantly adjusts its orientation to things in pursuit of the richest, most stable grasp of them. At the cognitive level, this is abductive reasoning – the leap to an explanatory guess. Not a view established by proof, but by a kind of perceptual negotiation with reality. You step back from the painting until the details and the whole suddenly cohere. Too close, too much noise; too far, too much abstraction.
The raw material of a good hypothesis is deep immersion in the domain. What cannot be programmed, what cannot be extracted from that immersion by any deterministic process, is the leap – provisional, revisable, often wrong – from what you know to a structure that explains it.



