Post-science: Expertise before validation
It was 11 AM when I finally saw it: a crystalline image confirming something I already knew deep down. I had designed this experiment, this structure, this approach only hours earlier. Not through months of methodical literature review or linear hypothesis testing, but through something I couldn’t easily explain to anyone: intuition built from years of immersion in my field. The image matched what I had pictured before I even started. Every prediction was there. Every instinct confirmed. I should have been elated. Instead, I felt unsettled. Everything in my doctoral training insisted this wasn’t how science was supposed to work. The message had always been clear: follow the scientific method; design hypotheses systematically; review the literature exhaustively; proceed step by step. Real science, I’d been told, is not intuition or gut feeling [1]. Yet here was physical evidence contradicting the story I had been taught about how discovery should happen. That moment sparked a question that stayed with me for the rest of my PhD: what if we have been told the wrong story about how scientific discovery truly works?
That morning was no exception; it was part of a pattern that recurred throughout my doctoral work. Again and again, I found myself designing solutions intuitively first and validating them thoroughly afterwards. I would start with a deep, almost wordless understanding built from years of experiments, failures, observations, and reading. An approach would emerge in my mind with a clear sense of “this is the way to go.” Only then would I sit down to design formal experiments, collect data, analyse results, and submit the work for peer review. Time after time, the validation aligned with what my intuition had already suggested. The approaches were practical, and the mechanisms were coherent enough to withstand scrutiny. The outcomes were strong enough to lead to patents, publications, and recognition in my field. Gradually, I had to accept that this way of working was not a lucky coincidence; it reflected how expertise truly functions when solving complex problems in large, messy search spaces. What puzzled me was not that this kept happening but that almost nobody in academic research discussed it openly.
When I pitched an essay about this pattern to a prestigious scientific outlet, the reply came back quickly: “outside the scope of what we generally look for.” It stung at first, but then I realised the rejection made my point for me. If scientists rarely discuss how intuition precedes validation in fundamental research, and if we quietly mark such talk as “unscientific” or “off-topic,” we protect a comforting myth at the expense of truth. That myth harms early-career researchers. Imagine a graduate student whose best ideas come as intuitive leaps, not from grinding through a pre-planned sequence of steps. Instead of learning that this is how expertise often works, they may hear a voice saying: you shouldn’t have felt confident before you had all the data; real scientists proceed more mechanically; you must be doing something wrong. That is a fast track to imposter syndrome. Students then learn to distrust their own emerging expertise, to hide or ignore the pattern-recognition abilities they are slowly developing. They burn out trying to force their minds into a version of “proper science” that does not reflect how experts actually do complex work.
Part of the problem lies in how we teach. Universities excel at methodology: designing rigorous experiments, analysing data, navigating peer review, and documenting protocols for reproducibility. What we rarely teach explicitly is judgement: deciding which questions matter, which approaches are worth our limited time, money, and emotional energy, or which apparent failures conceal something interesting [2]. Yet this is precisely what distinguishes experts from novices in complex fields.
To check whether I was flattering myself, I turned to the cognitive science literature on expertise. What I found was both reassuring and unsettling. Decades of research show that trained intuition is not magic; it is fast pattern recognition built on extensive experience [3]. When Alexander Fleming noticed unexpected growth in a Petri dish, he did not sit down to derive its importance from first principles [4]. His trained eye recognised something worth attention. When Henri Poincaré wrote about mathematical solutions arriving after periods of struggle, he was describing the same thing: the unconscious restructuring of knowledge built up over years [5]. Chess grandmasters appear to “just know” the right move because their brains have absorbed countless board positions; they see a configuration and respond before they can explain why. None of this is mystical. It is neuroscience [6]. Pattern-recognition systems in the brain operate faster than conscious reasoning. After years of working across domains, whether with abstract equations, cells in a dish, or complex datasets, our neural networks build internal models that predict what is likely to work. Intuition, in this sense, is accumulated learning compressed into rapid inference.
This matters because much of modern research takes place in what is sometimes called “high-dimensional space.” There may be millions of plausible combinations of variables, models, or conditions, and exhaustively testing them all is impossible. Expert researchers therefore do not explore these spaces randomly or in a purely step-by-step way. They rely on disciplinary intuition to navigate: selecting a few promising directions, dismissing others without fully explaining why, and then rigorously testing their choices. Years of experience train these internal models so that, even before we have words for them, our brains have already filtered the search space for us.
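To make the scale of the problem concrete, here is a back-of-the-envelope sketch in Python. The numbers are invented purely for illustration: ten experimental variables with five plausible settings each already yield nearly ten million combinations, far beyond what any lab could test one by one.

    # Back-of-the-envelope arithmetic: ten experimental variables with five
    # plausible settings each (both numbers invented for illustration).
    n_variables = 10
    settings_per_variable = 5

    total_combinations = settings_per_variable ** n_variables
    print(f"{total_combinations:,} combinations")  # 9,765,625

    # At one experiment per day, an exhaustive search is hopeless:
    years_needed = total_combinations / 365
    print(f"~{years_needed:,.0f} years at one experiment per day")  # ~26,755

Against numbers like these, an expert’s ability to discard most of the space without explicit justification is not a shortcut; it is the only workable strategy.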
Only after noticing this in my own work did I discover a framework that gave it a name: post-normal science. In the 1990s, Silvio Funtowicz and Jerome Ravetz argued that traditional, linear views of science break down when facts are uncertain, values are contested, stakes are high, and decisions are urgent [7]. In such situations, whether in climate research, pandemic response, or policy analysis, researchers cannot simply follow a fixed protocol. They must instead draw on integrated knowledge, tacit inference, and the collective judgement of expert communities. Many fields of contemporary research, especially those involving complex systems, operate under these conditions. We work with an incomplete understanding of mechanisms, face time pressure to act, and must decide which questions to pursue and which to leave aside. In such circumstances, the quality of research depends not only on correct methods but also on the quality of the expert judgement that guides them.
This is what I mean by post-science: a way of working where expertise
shapes initial hypotheses through pattern recognition and tacit knowledge, and
rigorous science then confirms, refines, or overturns them. Philosophers
sometimes call this mediated understanding [7]. Insight arises from structured engagement
with complex phenomena before a full mechanistic explanation is available. As
long as ideas are later tested and explained, this approach is
not a retreat from rigour; it is an honest way to describe how rigour and
intuition work together in fundamental research.
Of course, intuition can be wrong. History shows many clever people whose “obvious” ideas failed under testing. That’s why post-science does not put intuition above evidence. It simply gives intuition its proper role: a powerful tool for forming hypotheses and focusing attention in vast possibility spaces. Every intuitive idea must still go through careful experimentation, critical review, and peer evaluation. When intuition and evidence agree, it indicates not that we can rely on feelings alone, but that genuine expertise operates at levels deeper and faster than conscious thought.
Once I stopped treating my own intuitions as embarrassing and started
seeing them as products of genuine expertise, my practice changed. I became
more deliberate about asking myself and my students why a particular idea “felt
right”. What were we noticing, however vaguely, that made one direction
attractive and another uninteresting? Putting that into words often surfaced
background knowledge we had been taking for granted. It also made it easier to
challenge our own assumptions. If an experiment contradicted a strong intuitive
expectation, the question became: what did we miss in our mental model? That
made failures more interesting and less personally threatening. Over time, my
work became not only more efficient but also more honest. Instead of pretending
to move in neat linear steps from question to method to result, I began to
acknowledge the loops and leaps that actually occur.
If you are a graduate student or postdoc working in a complex area,
whether in lab science, modelling, fieldwork, or theory, this may sound
uncomfortably familiar. Your best ideas might come to you as hunches in the
shower, on a walk, or while half-distracted in a seminar. Maybe a particular
approach “makes sense” long before you can justify it fully. The message you
may have internalised is that this is somehow wrong or unscientific. It is not.
That feeling is your brain’s pattern-recognition system drawing on everything
you have absorbed so far. It does not mean you can skip the hard work of
designing careful tests, analysing data, or submitting to review. It does mean
you can allow yourself to take your own expertise seriously enough to test what
it suggests.
When a high-profile journal declined to publish my reflection on this pattern, the editors’ reasoning was straightforward: they wanted practical career advice, not an essay on the interaction between expertise and intuition. Fair enough. But the experience revealed something important to me. If our most visible platforms avoid discussing how research is actually conducted, then the role of half-formed hunches, tacit pattern recognition, and collective intuition will be overlooked. As a result, early-career researchers may feel that the way they truly work must be kept secret or treated as a flaw. That silence fuels imposter syndrome and distorts our training systems.
Science is not just executing protocols. It involves cultivating expertise sharp enough for intuition and logic to work together. Post-science, as I use the term, isn’t a rejection of rigour but a broader view: intuition guiding focus; method testing those intuitions; explanation catching up afterwards. That morning, I looked at the image and saw my intuition confirmed. I learned something no methods lecture had ever explicitly said: sometimes the most scientific thing you can do is notice what your expertise tells you, design the best test possible, and listen carefully to what the data says in return.
1. Couch BA, Brown TL, Schelpat TJ, Graham MJ, Knight JK. Scientific teaching: Defining a taxonomy of observable practices. CBE Life Sci Educ. 2015;14(1). doi:10.1187/cbe.14-01-0002
2. Kovačić-Popović A. Scientific method as the foundation of scientific research. Int Rev. 2021;(1-2):13-17. doi:10.5937/intrev2102013k
3. Schickore J, Hangel N. “It might be this, it should be that…” uncertainty and doubt in day-to-day research practice. Eur J Philos Sci. 2019;9(2). doi:10.1007/s13194-019-0253-9
4. Yu D, Guo D, Zheng Y, Yang Y. A review of penicillin binding protein and group A Streptococcus with reduced-β-lactam susceptibility. Front Cell Infect Microbiol. 2023;13. doi:10.3389/fcimb.2023.1117160
5. Poincaré H. Poincaré on intuition in mathematics. Published online 2011. Accessed November 21, 2025. https://mathshistory.st-andrews.ac.uk/Extras/Poincare_Intuition/
6. Patterson RE, Eggleston RG. Intuitive Cognition. J Cogn Eng Decis Mak. 2017;11(1):5-22. doi:10.1177/1555343416686476
7. Funtowicz SO, Ravetz JR. Science for the post-normal age. Futures. 1993;25(7):739-755. doi:10.1016/0016-3287(93)90022-L


