There’s no denying the massive wave of excitement around Agentic AI, Multi-Agent Systems, Generative AI, and Large Language Models (LLMs). Whether it’s people experimenting with Copilots, implementing RAG (Retrieval-Augmented Generation) patterns, spinning up AI agents, or integrating LLMs into enterprise apps, there’s a real energy behind this shift.

But as exciting (and disruptive) as this technology is, the reality is that the core challenge is the same one we’ve faced in other waves of innovation: it’s all about the data.

This realization feels a bit like déjà vu for anyone who was involved in the Internet of Things (IoT) boom years ago. Everyone was excited about connecting devices and creating smart things. But at the heart of every successful IoT solution was one foundational truth: it’s a data problem. IoT wasn’t just about sensors — it was about collecting, managing, and making sense of real-time and historical data at scale. At their core, IoT solutions are still solving big data problems, even though the Internet-connected devices are the shiny, exciting part.

AI Is Just Another Data Problem (Mostly)

Generative AI feels like magic sometimes. Type in a prompt, and a model spits out text, code, images — whatever. But when you move from using general-purpose models like OpenAI’s GPT-4o or o4-mini to building real-world AI-powered applications, things get real, real fast.

The success of AI solutions — whether RAG-based, multi-agent systems, or custom copilots — comes down to the same thing: How good is your data? How accessible is it?

An LLM is a powerful engine, but without the right fuel (data), it won’t take you anywhere useful.


RAG, Multi-Agent, and Copilots: The Pattern Is Obvious

Take RAG (Retrieval-Augmented Generation) as an example. It’s one of the most common patterns today for extending the capabilities of an LLM with your own data. The LLM doesn’t magically “know” your company’s internal documentation, product manuals, enterprise data, or proprietary processes. You have to structure that data, make it accessible, and build pipelines that connect it effectively to the model.
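To make the pattern concrete, here’s a minimal sketch of the RAG loop — index your own documents, retrieve the most relevant one for a question, and augment the prompt before calling the model. The embedding function here is a deliberately simplified word-count vector (a real system would use a model-based embedding service and a vector database), and the documents are made-up examples:

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: a normalized word-count vector. A production system
    # would call a model-based embedding API instead.
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {w: c / norm for w, c in counts.items()}

def cosine(a, b):
    # Cosine similarity between two sparse vectors.
    return sum(a[w] * b.get(w, 0.0) for w in a)

# 1. Index your own documents -- the data the LLM doesn't "know".
docs = [
    "Refunds are processed within 5 business days.",
    "Our API rate limit is 100 requests per minute.",
]
index = [(d, embed(d)) for d in docs]

# 2. Retrieve the document most relevant to the user's question.
question = "How long do refunds take?"
q_vec = embed(question)
best_doc = max(index, key=lambda pair: cosine(q_vec, pair[1]))[0]

# 3. Augment the prompt with the retrieved context before calling the LLM.
prompt = f"Answer using this context:\n{best_doc}\n\nQuestion: {question}"
print(best_doc)  # the refunds document is retrieved
```

The model never sees your whole corpus — only the retrieved context — which is exactly why the quality and structure of that underlying data determines the quality of the answer.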

Multi-agent systems? Same deal. You can wire up an army of agents to interact with each other and external APIs, but if they don’t have reliable, high-quality, contextual data to work with, they’re not helpful. You just get expensive hallucinations faster.

Even copilots — whether for software development, finance, or internal operations — succeed or fail based on how well they can access, interpret, and reason over the data they’re given.


Garbage In, Garbage Out — AI Edition

This isn’t a new lesson. Anyone who has worked with machine learning, data science, or even traditional software understands that garbage in, garbage out still holds.

Generative AI shifts the interface — natural language in, natural language out — but it doesn’t change the fundamental constraints:

  • Is the data accurate?
  • Is the data complete?
  • Is the data organized in a way the AI can access meaningfully?

Data Strategy Is Now AI Strategy

If you’re serious about adopting Generative AI — not just dabbling, but really building valuable tools — then you need to think of your data infrastructure as a core part of your AI strategy.

This means:

  • Inventory your data. What do you actually have? Where does it live? Is it locked away in silos or accessible via APIs?
  • Assess quality. Is it clean? Current? Free of duplication and noise?
  • Structure it. Whether it’s vector databases for embeddings, search indexes, or well-organized knowledge graphs, your data has to be structured in ways AI systems can use effectively.
  • Secure it. AI systems don’t absolve you of security and privacy responsibilities. If anything, they amplify them.
  • Pipeline it. Build systems that keep your data updated, synced, and reliable.
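The “assess quality” step above can be sketched in a few lines. This is a hypothetical example (the record fields and thresholds are assumptions, not a prescribed schema) showing the kind of deduplication and staleness filtering you’d run before indexing data for AI retrieval:

```python
from datetime import date, timedelta

# Hypothetical raw records pulled from a few silos during inventory.
records = [
    {"id": 1, "text": "VPN setup guide", "updated": date(2025, 3, 1)},
    {"id": 2, "text": "VPN setup guide", "updated": date(2025, 3, 1)},  # duplicate
    {"id": 3, "text": "Old holiday policy", "updated": date(2019, 6, 1)},  # stale
]

def assess_quality(records, max_age_days=365, today=date(2025, 6, 1)):
    """Drop duplicate and stale records before they reach the index."""
    seen, clean = set(), []
    for r in records:
        if r["text"] in seen:
            continue  # duplicate content adds noise to retrieval
        if (today - r["updated"]) > timedelta(days=max_age_days):
            continue  # stale content causes confidently wrong answers
        seen.add(r["text"])
        clean.append(r)
    return clean

clean = assess_quality(records)
print([r["id"] for r in clean])  # only record 1 survives
```

Filtering like this before data reaches a vector index is cheap; debugging an AI system that confidently cites a duplicated or six-year-old document is not.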

From IoT to AI: The Same Underlying Lesson

The AI hype cycle today feels very similar to the IoT hype cycle a decade ago. Back then, everyone wanted to build “smart” products. But success didn’t come from the sensors alone — it came from turning that sensor data into actionable insights.

With AI, the models are the enablers, but the real value comes from how well you can connect those models to the right data.

  • IoT was about real-time data pipelines, big data storage, and analytics.
  • AI is about contextual data retrieval, knowledge augmentation, and reasoning over complex information.

Different tech, same lesson: it’s always about the data.


The Takeaway: Focus on the Right Problem

If you’re a developer, architect, or technology leader thinking about how to adopt Generative AI, don’t get too caught up in the coolness of the models themselves. The LLM is a tool — a really powerful one — but the real opportunity is in how you pair it with your own data.

This is where the differentiation happens. Everyone has access to the same foundational models. What they don’t have is your data — your knowledge, your processes, your history.

Make your data accessible. Make it clean. Make it meaningful.

That’s how you build AI that’s actually valuable.

Chris Pietschmann is a Microsoft MVP, HashiCorp Ambassador, and Microsoft Certified Trainer (MCT) with 20+ years of experience designing and building Cloud & Enterprise systems. He has worked with companies of all sizes from startups to large enterprises. He has a passion for technology and sharing what he learns with others to help enable them to learn faster and be more productive.
