You've always been able to find a study to support pretty much any position. Nutrition science is the classic example -- fat is bad, no wait fat is good, actually it depends on the fat. For every meta-analysis pointing one way, there's a contrarian paper pointing the other.
This isn't new. What's new is how fast you can do it.
AI tools have made it trivially easy to surface research that supports whatever you already believe. Ask a model to find evidence for X and it will. Ask it to find evidence against X and it will do that too, often just as convincingly. The bottleneck used to be effort -- you had to actually dig through papers, follow citations, evaluate methodology. That friction was a feature. It forced a minimum level of engagement with the material.
Now you can assemble a compelling-looking argument for almost anything in minutes. Cherry-picked studies, presented with confident summaries, wrapped in the authority of "the research says." It looks rigorous. It feels rigorous. But the selection bias is baked in from the prompt.
I think this makes an already undervalued skill even more critical: the ability to evaluate research, not just find it. Understanding sample sizes, methodology, replication status, funding sources, effect sizes[1]. The boring stuff that separates "I found a study" from "the evidence suggests."
The irony is that AI tools are also great at this kind of evaluation -- if you ask them to do it. The problem is that most people don't. They ask for support, not scrutiny.
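To make that concrete, here's a minimal sketch of the two framings. The topic, prompt wording, and everything else in it are illustrative, not a recipe; the only point is that the scrutiny version asks for the boring checks from the previous paragraph instead of asking the model to advocate.

```python
# A minimal sketch of the two framings, assuming a generic chat-style model.
# The topic and prompt wording are hypothetical, chosen only for illustration.

TOPIC = "intermittent fasting improves metabolic health"  # hypothetical claim

# The prompt most people write: it asks the model to advocate.
support_prompt = f"Find studies showing that {TOPIC}."

# The prompt almost nobody writes: it asks the model to scrutinize.
scrutiny_prompt = (
    f"Evaluate the evidence that {TOPIC}. For each key study, note the "
    "sample size, methodology, replication status, funding sources, and "
    "effect size. Then say what the evidence does and does not support."
)

# Send either string to whatever model client you use; it's the same tool
# either way, only the framing changes.
print(support_prompt)
print(scrutiny_prompt)
```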
So the tools aren't the problem. The intent is. And that was true before AI made the whole process ten times faster.