The Extension Gap: Why Farmers Aren’t Using the Latest Pest Management Innovations
We spend $3.4 billion a year on agricultural research through Land Grant universities and USDA programs. That’s billion with a B. And here’s what’s maddening about that number—a huge chunk of that research never makes it to farmers’ fields. Or if it does, it takes so long that the economic conditions that made it relevant have completely changed.
The USDA’s own research on agricultural technology adoption tells a frustrating story. They found that the median time from research validation to 50% farmer adoption ranges from 9 to 16 years. Nine to sixteen years. Think about how much changes in agriculture in that timeframe. We’re talking about the difference between flip phones and iPhones, except with pest management innovations that could be saving farmers real money right now.
And that’s if adoption happens at all. For integrated pest management specifically, adoption rates have stayed stubbornly low for decades despite massive research investment and extension efforts. I’ve watched this pattern play out across different states, different crops, different pests—the research is solid, the economics make sense, and farmers just… don’t adopt it.
This isn’t a new problem, but it’s gotten worse. And I think most entomology departments don’t see it clearly because they’re standing inside it.
The Reality on the Ground
Let’s look at what the USDA Agricultural Resource Management Survey actually shows. When universities develop mechanical innovations or new seed genetics, farmers adopt them fast—often hitting 50% adoption within 3-5 years. Makes sense. Buy a new planter, plant better seeds, pretty straightforward.
But management-intensive practices? IPM scouting protocols, economic thresholds, biological control methods? Those crawl toward adoption at a fraction of the speed. And it’s not because farmers are stupid or lazy—that’s the conclusion people jump to, and it’s completely wrong.
Penn State published an analysis in 2021 that actually dug into why IPM adoption stays low. They identified what I'd call the "lab-to-field" problem: research protocols assume conditions that simply don't exist on working farms. Perfect information. Immediate pest identification. Precise scouting schedules. The assumption that a farmer is managing one crop on one piece of land with unlimited time.
I mean, think about it. Most IPM research is conducted on university research plots. Somebody’s entire job is monitoring those plots. They’re checking them multiple times per week. They have perfect weather data. They can identify every insect they find. Then they publish recommendations based on those conditions and wonder why a farmer managing 3,000 acres with two employees doesn’t implement them.
The research isn’t wrong. That’s not the problem. The research is right for a world that doesn’t exist.
Why University Structures Create This Gap
Here’s the uncomfortable truth: Land Grant universities are really good at producing research. They’re less good at getting that research adopted. And this isn’t because people don’t care—it’s because the incentive structures push everyone in the wrong direction.
If you’re an assistant professor in entomology, what do you need to get tenure? Publications in peer-reviewed journals. Novel research contributions. Grant funding. Speaking at scientific conferences. What you don’t need: farmers actually using your research.
In fact, spending a bunch of time on farmer adoption probably hurts your tenure case because that’s time you’re not spending on the things that actually count. It’s a cognitive blindness problem—the metrics that determine success in academia (publications, grants, citations) have drifted away from the metrics that matter in the field (adoption, implementation, farmer profitability). Organizations that optimize for the wrong metrics end up producing impressive-looking results that don’t translate to real-world impact. I’ve seen this same pattern wreck manufacturing companies—you measure activity instead of outcomes, and you get lots of activity with terrible outcomes.
Let me give you a specific example. A couple years ago I talked to an entomology professor—won’t name names—who’d developed an economic threshold model for a major corn pest. Brilliant work, mathematically sophisticated, validated in field trials. Published in a top journal. I asked him how many farmers were using it. He said he didn’t know. Wasn’t tracking it. That wasn’t part of his job.
And he’s right—it wasn’t part of his job. His job was producing research, not getting it adopted. The system worked exactly as designed. The problem is the system is designed wrong.
The Resource Allocation Problem
Universities tend to spread research capacity pretty evenly across faculty interests. Everybody gets some grad students, some funding, some lab space. Very democratic. Also very inefficient.
If you actually tracked impact—meaning documented changes in farmer practice—you’d probably find that 20% of research areas generate 80% of the adoption. But that’s not how resources get allocated. They get allocated based on faculty seniority, grant success, scientific interest. Not based on “which research will farmers actually use?”
This is the classic 80/20 problem that shows up everywhere: when you treat all activities as equally important, you end up under-investing in the few things that really matter and over-investing in the many things that don’t.
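To make that concrete, here's a minimal sketch of what the missing measurement could look like, assuming a department recorded documented adoptions per research area. Every number below is invented for illustration:

```python
# A minimal sketch of the impact tracking described above, using invented
# numbers. "Documented adoptions" per research area is exactly the metric
# most departments never record; every figure here is hypothetical.
adoptions_by_area = {
    "economic thresholds (corn)": 1200,
    "pheromone trap networks": 850,
    "scouting protocol design": 400,
    "biological control trials": 90,
    "novel pest interactions": 15,
    "taxonomic surveys": 5,
}

total = sum(adoptions_by_area.values())
cumulative = 0
print("Research areas ranked by documented farmer adoption:")
for area, count in sorted(adoptions_by_area.items(), key=lambda kv: -kv[1]):
    cumulative += count
    print(f"  {area:30s} {count:5d}   cumulative share: {cumulative / total:5.1%}")
```

With these made-up numbers, two of six areas account for just over 80% of documented adoptions. Whether a real department's portfolio is that skewed is an empirical question, but you can't even ask the question without keeping the tally.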
I’m not saying we shouldn’t do fundamental research. Basic science is essential. But right now most entomology departments can’t tell you which 20% of their research portfolio generates 80% of their farmer impact because they’re not measuring farmer impact at all. They’re measuring publications. Different thing.
What Extension Specialists Actually Face
I spent some time talking to extension specialists about this gap, and their frustration is real. They’re caught in the middle—they didn’t design the research, but they’re responsible for getting farmers to adopt it.
A 2020 study in the Journal of Extension (extension specialists studying their own field, which I appreciated) documented the core problem: research recommendations explain the “what” and the “why” really well. They’re terrible at the “how” under real-world constraints.
Michigan State looked at scouting protocol adoption in 2018 and found something telling. University-developed protocols averaged 2.3 field visits per week during critical periods. When they surveyed farmers about actual scouting frequency, the median was 0.6 visits per week.
Now, you could look at that and say farmers are doing it wrong. Or you could ask why universities are developing protocols that demand nearly four times as much scouting as farmers actually do. Those two framings can't both be right. Either the protocols need to change, or farms need roughly four times the scouting labor. I know which one is more realistic.
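To see what that ratio means in labor terms, run the protocol against the 3,000-acre, two-employee farm from earlier. A rough sketch, where everything except the two visit frequencies is my assumption rather than study data:

```python
# Back-of-envelope check of the scouting gap. The two visit frequencies come
# from the Michigan State figures above; field count, time per visit, and
# available labor are my assumptions, not study data.
protocol_visits_per_week = 2.3   # university protocol, critical periods
actual_visits_per_week = 0.6     # median frequency farmers reported

fields = 30              # assume a 3,000-acre farm split into ~100-acre fields
hours_per_visit = 0.75   # assume 45 minutes to walk and inspect one field
available_hours = 20.0   # assume two employees can spare 10 hours/week each

required = protocol_visits_per_week * fields * hours_per_visit
delivered = actual_visits_per_week * fields * hours_per_visit

print(f"Protocol demands:  {required:5.1f} scouting hours/week")
print(f"Farmers deliver:   {delivered:5.1f} scouting hours/week")
print(f"Labor available:   {available_hours:5.1f} scouting hours/week")
```

Under those assumptions the protocol asks for roughly 52 hours of scouting a week against maybe 20 hours of available labor, while the observed 0.6 visits per week fits comfortably inside the budget. Change my assumed inputs and the exact numbers move, but the protocol stays infeasible at realistic staffing levels.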
The economic threshold issue makes this worse. University of Illinois agricultural economists—I trust ag economists because they actually care about the money—found that recommended thresholds assume perfect pest identification and precise damage estimation. In actual field conditions? Uncertainty, time pressure, identification challenges. All of which push rational farmers toward preventive treatments even when thresholds suggest they’re unnecessary.
Cornell’s done probably the best work on this, looking at farmer decision-making under uncertainty. Their conclusion: when the cost of being wrong (crop loss) is way higher than the cost of over-applying inputs, farmers rationally choose the risk-averse option. That’s not farmers being irrational. That’s farmers being rational in a context researchers don’t fully account for.
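A toy expected-cost calculation makes that logic concrete. Suppose, purely for illustration, that a preventive treatment costs $15 per acre and a missed infestation costs $120 per acre in crop loss. These figures are mine, not Cornell's:

```python
# Toy expected-cost version of the risk argument. All dollar figures and
# probabilities are invented for illustration; they are not from the
# Cornell work.
treatment_cost = 15.0   # $/acre: assumed cost of a preventive application
loss_if_missed = 120.0  # $/acre: assumed loss if an infestation goes untreated

def costs(p_infestation: float) -> tuple[float, float]:
    """Per-acre cost of treating vs. expected cost of skipping treatment."""
    return treatment_cost, p_infestation * loss_if_missed

for p in (0.05, 0.10, 0.15, 0.25):
    treat, skip = costs(p)
    choice = "treat" if treat < skip else "skip"
    print(f"perceived P(infestation) = {p:.2f}: "
          f"treat ${treat:.2f} vs. skip ${skip:.2f} -> {choice}")
```

With these numbers the break-even probability is 15/120, or 12.5 percent. Identification uncertainty and rough damage estimates inflate a farmer's perceived probability, so spraying "below threshold" is often the expected-cost-minimizing choice, which is precisely the behavior the thresholds label as over-application.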
How We Teach The Next Generation
Most entomology curricula are heavy on fundamentals—insect identification, life cycles, ecology, control mechanisms. All important. All necessary. Also not sufficient.
I looked at Nebraska’s entomology major curriculum (public info, not proprietary). Of roughly 40 credit hours, maybe 3 credits directly address the implementation challenges farmers actually face. The rest is scientific fundamentals. Which are important! But a student can graduate able to identify 200 insect species while having zero understanding of why a farmer might rationally reject the economically optimal recommendation.
Graduate research makes this worse. Ph.D. students need novel contributions to get their degrees. Novel means “nobody’s studied this before.” Implementation of existing knowledge? Not novel. Doesn’t count. So we train the next generation of researchers to ask “what hasn’t been studied?” instead of “what would make the biggest difference if adopted?”
A grad student who spends three years figuring out how to get farmers to actually use existing IPM practices produces one, maybe two publications. A student who discovers a new pest interaction publishes four papers regardless of whether anyone will ever use the information. Guess which student has an easier time getting hired?
We’re systematically training people to produce knowledge that won’t get used, then we’re surprised when it doesn’t get used.
What Actually Works: Deep Customer Understanding
The entomology programs that do manage to bridge this gap share something in common—they’re obsessed with understanding farmers. Not surveying them occasionally. Obsessed with them. The kind of customer obsession that makes other academics uncomfortable because it seems unscientific or whatever.
University of Wisconsin does something smart. They embed grad students on working farms for full growing seasons. Not as researchers—as operators. The students make actual decisions under actual constraints. They come back with completely different perspectives on what makes research adoptable versus what makes it publishable.
Iowa State started requiring “implementation reviews” for proposed research. You have to show not just scientific merit but a plausible adoption pathway given real farmer constraints. Doesn’t prevent fundamental research—just adds a lens. Makes people think about whether anyone will actually use this before investing three years studying it.
These aren’t radical changes. They’re pretty basic: understand your end user, design for their constraints, measure whether they actually use what you create. But they’re radical compared to how most entomology departments operate.
What Could Change (And Probably Won’t)
Look, I know what would close this gap. I've seen it work in other industries. But I'm also realistic about how slowly academic institutions change.
If I ran an entomology department—which I don’t and won’t—here’s what I’d try:
Make adoption count in faculty reviews. Not as the only thing, but as 20% of the evaluation. Did farmers actually implement your research? Can you document it? If you can’t, why should the university keep funding your work?
This would be hugely controversial. Faculty would hate it. But it would also force people to think about adoption from day one rather than as an afterthought.
Require implementation in graduate research. Before defending your dissertation, demonstrate that your research addresses a documented farmer need and that you’ve tested your recommendations with actual farmers. Not a survey. Actual field testing under real conditions.
Concentrate resources on high-impact areas. Instead of spreading funding evenly, put 80% of resources into the 20% of research areas that generate the most farmer adoption. Yeah, some faculty would get less funding. That’s the point.
Integrate extension and research. Right now they’re separate. Researchers do research, extension specialists translate it later. What if researchers spent 20% of their time directly with farmers? Not teaching them—learning from them. Understanding their constraints before designing research.
None of these are complicated. They’re just hard politically. Universities don’t like upsetting faculty. Change is uncomfortable. But the current system has produced a huge gap between research and practice, and that gap costs farmers and ultimately society real money.
Why Students Should Care
If you’re studying entomology right now, here’s what matters: the graduates who have the most impact won’t be the ones who make the most novel scientific discoveries. They’ll be the ones who figure out how to get research adopted.
That requires different skills than most programs teach. You need to understand:
- How farmers actually make decisions under risk and uncertainty
- Why economically optimal recommendations might be practically suboptimal
- What constraints exist on working farms that don’t exist in research plots
- How to design research that’s adoptable from the start
- Why extension methods succeed or fail
The scientific fundamentals are still essential—you can’t skip those. But without understanding the human systems that determine whether research gets used, even perfect science might sit in journals forever while farmers keep doing what they’ve always done.
The Bottom Line
The extension gap isn’t a communication problem. It’s not a farmer education problem. It’s a systems design problem.
Universities are structured to produce research. They’re really good at it. But producing research and getting that research adopted are different objectives that require different incentives, different metrics, different resource allocation. Right now academic structures optimize for the first and hope the second happens naturally.
It doesn’t.
Some Land Grant universities are experimenting with adoption-focused approaches. We’ll see if those experiments spread or if institutional inertia wins. I’m not particularly optimistic—universities change slowly, and the incentive structures are deeply embedded.
But for students and young faculty, there’s opportunity here. The people who figure out how to bridge this gap won’t just help farmers—they’ll build careers around solving a problem that’s been sitting there for decades while everyone else argues about it.
For more on how organizational structures shape outcomes and why institutions optimize for the wrong metrics, see this analysis of organizational blindness patterns. And if you’re interested in how systematic customer understanding drives actual implementation rather than just research publications, this framework on prioritization might help explain why resources need to concentrate on high-impact activities.
Todd Hagopian is an SSRN-published researcher (ORCID: 0009-0002-7615-5482) studying why organizations optimize for internal metrics rather than practitioner outcomes. His research on bridging theory-practice gaps draws from Fortune 500 transformation experience at Berkshire Hathaway, Illinois Tool Works, Whirlpool, and JBT Marel. His work is available on Google Scholar.