My Biggest Issue with AI (And Why You Should Still Use It)
MMDB Solutions is excited to welcome Michael Mingyar to our team. As a doctoral student at Montana State University, Michael has spent years applying the latest technology to some of his field's most challenging questions. His insight and experience have been invaluable to MMDB as we explore the frontiers of AI.
I see AI used every day. Sometimes I’m blown away by how it’s used, but often I’m left with more concern than excitement. Because I work in both industry and academia, I’m fortunate to see AI take on many forms. Tools that integrate AI into broader workflows have revealed patterns in research that humans could never measure on their own at that scale, and this philosophy is driving a growing effort to treat AI more like a calculator than a collaborator. However, these efforts are in the minority. In the majority of cases I see, the problem is always the same: AI is viewed as a golden hammer, and everything around it suddenly looks like a nail.
There is no simple way to address the strengths and weaknesses of AI usage, and many others have already discussed both at length. Still, the conversation tends to be one-sided -- AI will either save humanity or end it. While I doubt AI can do either on its own, we can certainly get closer to the pessimistic option if we refuse to understand the difference between a nail and a songbird. Dismissing AI entirely would be just as shortsighted, as it may represent one of the most significant paradigm shifts in both industry and academia to date.
What do we mean when we say AI?
While I personally dislike the term AI to describe them, it’s undeniable that chatbots and large language models are far more capable than they were just five years ago. Rather than argue semantics, I’m going to use “AI” to refer to these generative chat models and set aside other uses for the term.
What are AI models good at, and why is that a problem?
Among their many strengths, AI models are terrific language generators. They are so good at this task that they regularly trick laymen (and some professionals) into believing that they’re alive. Their ability to mimic human speech patterns is matched only by their talent for making almost anything sound believable. I’ve seen tenured professors defer to the “knowledge” of open-source models in meetings, and some have begun advising their PhD candidates to use AI to write their theses.
The Main Concern
This is where my issue with AI begins -- AI is often used as a replacement for human knowledge. General-purpose AI models are not designed to be correct; they are designed to please their user. The result is eloquent-sounding, often flowery writing that is full of subtle holes only an expert would notice. This is not conducive to academic writing, and it’s certainly not reliable enough for companies to trust blindly.
The Solution
There are a few ways to avoid this, including fine-tuning custom models and integrating the AI into a larger system built by a domain expert. Both have their advantages, but I will be discussing the latter here. It boils down to a very simple idea: if experts provide AI with reliable information, it can repackage that information in an endless number of useful ways. It should not be used to create the information.
I see so many people fail to grasp this simple principle. For example, general-purpose AI models cannot be trusted to produce reliable product information on their own. Without direction, they’ll just make things up until the product sounds like the average of every other similar product on the internet.
But when e-commerce experts build the system and feed it actual product data, the AI can apply that knowledge powerfully and at scale. Thousands of high-quality product descriptions can be produced efficiently because experts curate the data and define the safeguards.
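To make the shape of that workflow concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the product data, the prompt wording, and the call_model() placeholder, which stands in for whatever model API a real system would use. The point is the structure -- the facts come from experts, and the model only rephrases them.

```python
# A minimal sketch of "experts curate, AI repackages." All names and data
# here are illustrative placeholders, not a real production pipeline.

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM API call; echoes the prompt for demo purposes."""
    return f"[model output for prompt]\n{prompt}"

# Expert-curated product data: the facts come from humans, not the model.
PRODUCTS = [
    {
        "name": "TrailLight 2 Headlamp",
        "specs": {"lumens": 400, "weight_g": 88, "battery": "3x AAA"},
        "notes": "IPX4 water resistance; red night-vision mode.",
    },
]

def describe(product: dict) -> str:
    """Build a prompt that confines the model to the curated facts."""
    facts = "\n".join(f"- {key}: {value}" for key, value in product["specs"].items())
    prompt = (
        f"Write a two-sentence product description for '{product['name']}'.\n"
        f"Use ONLY these verified facts and notes; do not invent others.\n"
        f"Facts:\n{facts}\nNotes: {product['notes']}"
    )
    return call_model(prompt)

for product in PRODUCTS:
    print(describe(product))
```

The safeguard lives in the data and the prompt: the model never sees a claim the experts didn’t supply, so scaling to thousands of products scales the rephrasing, not the risk of invention.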
This is the workflow of proper AI integration: the human brings the knowledge, and the AI puts it together. I’ve seen “AI tutors” fail because their creators just hooked a general-purpose AI up to the internet, leading to wildly irrelevant (or sometimes unsafe) information dressed up in academic language. Better-performing AI tutors are tailored to specific courses by domain experts, often with the course’s textbook and notes built in. These are the systems that last, because as long as the information stays accurate, so does the AI.
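The same pattern can be sketched for the tutor case. In the toy example below, the course notes, the keyword-overlap retrieval, and the call_model() stub are all illustrative assumptions -- a real tutor would use proper retrieval (embeddings or a search index) and an actual model API -- but the principle is identical: the model answers only from material the instructor supplied.

```python
# A toy course-grounded tutor. The notes, the retrieval, and call_model()
# are illustrative stand-ins, not any particular product's implementation.

COURSE_NOTES = [
    "Week 1: A hash table stores key-value pairs; a good hash function "
    "spreads keys evenly to keep lookups near O(1).",
    "Week 2: Binary search needs a sorted array and halves the search "
    "space each step, giving O(log n) comparisons.",
]

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM API call; echoes the prompt for demo purposes."""
    return f"[model output for prompt]\n{prompt}"

def retrieve(question: str, notes: list[str], k: int = 1) -> list[str]:
    """Rank note chunks by word overlap with the question (toy retrieval)."""
    q_words = set(question.lower().split())
    return sorted(notes, key=lambda n: -len(q_words & set(n.lower().split())))[:k]

def tutor(question: str) -> str:
    """Answer strictly from instructor-supplied material."""
    context = "\n".join(retrieve(question, COURSE_NOTES))
    prompt = (
        "Answer the student's question using ONLY the course material below. "
        "If the material does not cover it, say you don't know.\n\n"
        f"Course material:\n{context}\n\nQuestion: {question}"
    )
    return call_model(prompt)

print(tutor("Why does binary search need a sorted array?"))
```

The accuracy lives in COURSE_NOTES, which a human can audit and update at any time; the model is just a rephraser sitting on top of it.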
This philosophy is a major focus at MMDB, and one I’m proud to say I’ve never had to argue for. Our focus is always quality first, and that means letting the AI take a back seat to our expertise. I just wish I could shake this idea into some of my peers in academia.