LLMs for Growth: What McKinsey Missed in 2024

There’s a staggering amount of misinformation circulating about Large Language Models (LLMs) and their application to business growth, particularly among executives and business leaders eager to leverage the technology. It’s time to separate fact from fiction.

Key Takeaways

  • Successful LLM integration requires a clear definition of the problem you’re solving, not just chasing shiny new technology.
  • LLMs are powerful tools for augmentation, not outright replacement, of human roles in complex tasks like content creation and customer service.
  • Data privacy and security protocols must be established before deploying LLMs, especially when handling sensitive customer or proprietary information.
  • Return on Investment (ROI) for LLM initiatives needs to be measured with specific, quantifiable metrics beyond just efficiency gains, such as customer lifetime value or conversion rates.
  • Starting with smaller, well-defined pilot projects allows for iterative learning and adaptation, minimizing risk before broader enterprise deployment.

Myth 1: LLMs Are a “Set It and Forget It” Solution for Instant Growth

The idea that you can simply plug in an LLM and watch your business magically grow is probably the most pervasive myth I encounter. I had a client last year, a mid-sized e-commerce retailer based out of the Sweet Auburn Historic District here in Atlanta, who came to us convinced that an LLM would instantly handle all their customer service inquiries, write all their product descriptions, and even manage their social media. They just wanted to “turn it on.”

The reality? LLMs are powerful tools, but they demand significant strategic input and ongoing management. A 2024 McKinsey & Company report highlighted that businesses extracting the most value from AI are those that integrate it thoughtfully into existing workflows, often requiring significant process re-engineering and human oversight. It’s not about automation; it’s about augmentation. You need to define the specific problem you’re trying to solve, train the model (often with your proprietary data), and then continuously monitor its performance. Think of it like a highly skilled intern – incredibly capable, but still needing direction, feedback, and quality control. Without clear directives and a human in the loop, you risk generic, off-brand, or even factually incorrect outputs that can harm your reputation faster than they help.

Myth 2: LLMs Will Replace All Human Content Creators and Customer Service Agents

This fear-mongering narrative is rampant, often fueled by sensational headlines. While LLMs excel at generating text quickly, they lack genuine understanding, empathy, and the nuanced strategic thinking that human professionals bring to the table. A 2025 study from the Harvard Business Review indicated that while AI can automate up to 30% of tasks in certain roles, it rarely eliminates the entire position. Instead, it shifts the focus of human work towards higher-level strategic thinking, creativity, and complex problem-solving.

Consider content creation: an LLM can draft a blog post on “5 Ways to Improve Your Home’s Curb Appeal,” but it won’t inherently understand your brand’s unique voice, target audience’s specific pain points, or current market trends in the way a seasoned content strategist would. At my previous firm, we implemented an LLM to assist with first drafts of technical documentation. The output was grammatically sound and comprehensive, but it lacked the specific industry jargon and nuanced explanations our subject matter experts provided. We found that the LLM served as an excellent accelerator for the initial drafting phase, cutting down research time by about 40%, but the final product always required significant human refinement and strategic input to ensure accuracy, tone, and alignment with our client’s specific needs. The human role didn’t disappear; it evolved to become more focused on strategic oversight and quality assurance.

Myth 3: Any Data Can Be Fed to an LLM Without Privacy Concerns

This is a dangerous misconception that can lead to significant legal and reputational headaches. Many business leaders, in their eagerness to experiment, assume that simply throwing all their customer data or proprietary information into a publicly available LLM is fine. It is absolutely not. Public LLMs are trained on vast datasets, and while their direct output might not immediately reveal your specific input, the potential for data leakage or unintended exposure is very real.

The General Data Protection Regulation (GDPR) and various state-level privacy laws, such as the California Consumer Privacy Act (CCPA), are not going away. In fact, privacy regulations are only becoming more stringent. I’ve personally advised several companies in the Peachtree Corners Technology Park to establish robust data governance frameworks before even considering LLM deployment. This includes classifying data sensitivity, anonymizing or pseudonymizing data where possible, and utilizing private, enterprise-grade LLMs or securely hosted solutions when dealing with confidential information. For example, if you’re using an LLM to summarize customer feedback, ensure that personally identifiable information (PII) is stripped out before it ever touches the model. Ignoring data privacy isn’t just risky; it’s a direct path to hefty fines and shattered customer trust. We are talking about compliance that could make or break a business, especially in sensitive sectors like healthcare or finance. For more on navigating these challenges, consider our insights on AI’s Ethical Minefield.
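The PII-stripping step mentioned above can be sketched in a few lines. This is an illustrative simplification – the regex patterns and `redact_pii` helper are my own, not a standard library; a production pipeline should use a dedicated PII-detection service rather than hand-rolled patterns:

```python
import re

# Illustrative patterns only; real deployments should rely on a dedicated
# PII-detection library or service, not hand-rolled regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace common PII patterns with placeholder tokens
    before the text is ever sent to an LLM."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

feedback = "Call me at 404-555-0142 or email jane.doe@example.com about my order."
print(redact_pii(feedback))
```

Running the redaction as a preprocessing step means the model only ever sees placeholders like `[PHONE]`, never the raw identifiers.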

Myth 4: LLMs Are Exclusively for Tech Giants with Massive Budgets

While it’s true that developing your own foundational LLM from scratch requires immense resources, the market has matured significantly. There’s a burgeoning ecosystem of accessible LLM tools and services for businesses of all sizes. We’re not in 2022 anymore; the barrier to entry has plummeted.

Consider the rise of API-driven LLM services from providers like Anthropic (with their Claude models) or Google Gemini. These platforms allow businesses to integrate powerful LLM capabilities into their applications without needing to build or maintain the underlying infrastructure. A small marketing agency in Buckhead, for instance, could subscribe to an LLM service to generate ad copy variations or analyze sentiment from customer reviews for a few hundred dollars a month, rather than hiring additional staff or investing in prohibitively expensive hardware. The key is to start small, identify specific use cases, and scale incrementally. A proof-of-concept project, perhaps automating internal knowledge base searches or drafting initial email responses, can demonstrate significant value without breaking the bank. The idea that only a Fortune 500 company can afford this technology is simply outdated. For a deeper dive into selecting the right tools, check out our guide on picking your AI powerhouse.
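To make the ad-copy use case concrete, here is a sketch of the request a pay-as-you-go integration would send. The `build_ad_copy_request` helper and the model identifier are hypothetical; the body shape shown mirrors common hosted LLM message APIs, but you should check your provider’s current documentation before relying on it:

```python
def build_ad_copy_request(product: str, n_variations: int = 3) -> dict:
    """Build a request body for a hosted LLM API.

    The structure (model, max_tokens, messages) mirrors the general shape
    of hosted message APIs; field names vary by provider -- verify against
    your provider's docs."""
    prompt = (
        f"Write {n_variations} short ad copy variations for: {product}. "
        "Keep each under 20 words and vary the tone."
    )
    return {
        "model": "example-model-id",  # placeholder, not a real model name
        "max_tokens": 500,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_ad_copy_request("handmade soy candles")
print(body["messages"][0]["content"])
```

The point is architectural: the agency writes a few dozen lines of glue code and pays per request, instead of standing up any model infrastructure of its own.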

Myth 5: Measuring LLM ROI is Impossible or Too Complex

I often hear business leaders lament that they “can’t quantify the value” of their LLM initiatives. This is a cop-out. While the benefits might not always be as straightforward as a direct sales increase, robust metrics absolutely exist. The issue isn’t the impossibility of measurement, but rather the failure to define clear objectives and corresponding key performance indicators (KPIs) from the outset.

When we partnered with a local Atlanta-based logistics firm to implement an LLM for optimizing their route planning communications and customer updates, we didn’t just look at “efficiency.” We set specific, measurable goals:

  1. Reduce time spent by dispatchers on routine customer inquiries by 25%.
  2. Increase customer satisfaction scores related to communication clarity by 15%.
  3. Decrease instances of misrouted deliveries due to unclear instructions by 10%.

Over six months, we tracked these metrics diligently. The LLM, integrated with their existing Samsara fleet management system, helped achieve a 30% reduction in dispatcher inquiry time, a 12% boost in communication satisfaction, and an 8% drop in misrouted deliveries. This translated directly into tangible cost savings from reduced labor hours and improved customer retention, demonstrating a clear return on investment of 180% within the first year. The trick is to define what success looks like before you even start, and then diligently track against those benchmarks. Don’t just hope for the best; plan for it. Many companies struggle with this, leading to failed implementations, as discussed in “Why 78% of Tech Implementations Fail.”
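The ROI arithmetic behind a figure like that 180% is simple enough to keep honest in code. The dollar amounts below are hypothetical placeholders chosen only to reproduce the percentage, not the client’s actual figures:

```python
def roi_percent(annual_benefit: float, annual_cost: float) -> float:
    """First-year ROI: (benefit - cost) / cost, expressed as a percentage."""
    return (annual_benefit - annual_cost) / annual_cost * 100

# Hypothetical figures: labor savings plus retention value vs. total program cost
benefit = 84_000 + 56_000   # saved dispatcher hours + retained-revenue estimate
cost = 50_000               # licensing, integration, and human oversight
print(f"{roi_percent(benefit, cost):.0f}% first-year ROI")  # prints "180% first-year ROI"
```

Whatever numbers you plug in, the discipline is the same: estimate benefit and cost in the same units, over the same period, before the project starts.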

Myth 6: LLMs Are Omniscient and Always Produce Factual Outputs

This is perhaps the most dangerous myth, leading to what we call “hallucinations” – where LLMs generate plausible-sounding but entirely false information. Because LLMs predict the most statistically probable next word, they can confidently assert falsehoods, especially when they lack sufficient or accurate training data on a specific topic. They don’t “know” facts in the human sense; they predict patterns.

We ran into this exact issue at my previous firm when a client used an LLM to generate summaries of legal precedents. The model, in its effort to be helpful, fabricated citations and even entire case details that simply didn’t exist. This could have had severe consequences. As the National Institute of Standards and Technology (NIST) regularly emphasizes, trustworthiness in AI is paramount. For critical applications, human review is non-negotiable. Furthermore, techniques like Retrieval Augmented Generation (RAG) are becoming standard practice. RAG systems integrate LLMs with external, verifiable knowledge bases, allowing the model to retrieve factual information before generating a response. This significantly reduces hallucinations and increases the reliability of outputs. Never assume an LLM is a perfect oracle; always fact-check, especially for high-stakes information. For more on refining LLM outputs, consider learning how to fine-tune for expert performance.
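The core RAG idea – retrieve verifiable facts first, then constrain the model to them – can be sketched minimally. The retrieval below is naive keyword overlap over a toy knowledge base; real systems use vector embeddings and a proper index, and the function names here are my own:

```python
# Minimal RAG sketch: retrieve a grounding passage before prompting the model.
KNOWLEDGE_BASE = {
    "returns": "Items may be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days within the US.",
    "warranty": "All products carry a one-year limited warranty.",
}

def retrieve(query: str) -> str:
    """Return the passage sharing the most words with the query.
    Real systems would use embedding similarity, not word overlap."""
    q_words = set(query.lower().split())
    return max(
        KNOWLEDGE_BASE.values(),
        key=lambda passage: len(q_words & set(passage.lower().split())),
    )

def build_grounded_prompt(query: str) -> str:
    """Prepend retrieved facts so the model answers from evidence,
    not from its statistical priors."""
    context = retrieve(query)
    return (
        "Answer using ONLY the context below. If the answer is not "
        "in the context, say you don't know.\n\n"
        f"Context: {context}\n\nQuestion: {query}"
    )

print(build_grounded_prompt("How long does standard shipping take?"))
```

The “say you don’t know” instruction matters as much as the retrieval itself: it gives the model a sanctioned alternative to confidently inventing an answer.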

The path to growth with LLMs is paved not with magical thinking, but with strategic planning, rigorous implementation, and a clear understanding of their capabilities and limitations.

What is the most critical first step for a business leader considering LLM adoption?

The most critical first step is to clearly define the specific business problem or opportunity you aim to address with an LLM, rather than adopting the technology for its own sake. Without a defined problem, you risk a costly solution searching for a purpose.

How can small businesses afford to implement LLMs?

Small businesses can leverage LLMs affordably by utilizing API-driven services from major providers like Anthropic or Google, which offer pay-as-you-go models. Starting with well-defined, smaller pilot projects also minimizes initial investment and allows for iterative learning.

What are “LLM hallucinations” and how can they be mitigated?

LLM hallucinations are instances where the model generates plausible-sounding but factually incorrect information. Mitigation strategies include implementing Retrieval Augmented Generation (RAG) systems that connect LLMs to verifiable knowledge bases, and always incorporating human review for critical outputs.

Is it safe to use public LLMs with sensitive customer data?

No, it is generally not safe to use public LLMs with sensitive customer data due to potential data leakage and privacy compliance risks. Businesses should use private, enterprise-grade LLMs or securely hosted solutions, and ensure all personally identifiable information (PII) is anonymized or removed before processing.

How should businesses measure the ROI of their LLM initiatives?

Businesses should measure LLM ROI by establishing clear, quantifiable Key Performance Indicators (KPIs) at the project’s inception, such as reductions in operational costs, improvements in customer satisfaction scores, increases in conversion rates, or faster content generation cycles, and then track these metrics diligently.

Courtney Little

Principal AI Architect
Ph.D. in Computer Science, Carnegie Mellon University

Courtney Little is a Principal AI Architect at Veridian Labs, with 15 years of experience pioneering advancements in machine learning. His expertise lies in developing robust, scalable AI solutions for complex data environments, particularly in the realm of natural language processing and predictive analytics. Formerly a lead researcher at Aurora Innovations, Courtney is widely recognized for his seminal work on the 'Contextual Understanding Engine,' a framework that significantly improved the accuracy of sentiment analysis in multi-domain applications. He regularly contributes to industry journals and speaks at major AI conferences.