Are your generative AI initiatives truly set up for success, or are you just scratching the surface of their potential?
As organizations have leaned into AI over the last couple of years, it's become clear that there are three essential ingredients for successful generative AI implementations:
- Technology
- Data
- Ethics
However, there's a fourth, often overlooked, pillar that dictates success: change management.
So, in this article, we'll explore the critical opportunities within these four key areas. You’ll discover how to choose the right technological approach, ensure data quality and governance, navigate the ethical complexities of AI, and effectively drive organizational change to cultivate an organization that harnesses AI’s full potential.
Let’s get into it.
1. Choosing the right technological approach
Selecting the appropriate technological path is paramount for long-term success in generative AI. There isn't a one-size-fits-all solution; what's "right" depends entirely on your specific use case.
There are three main approaches to consider:
- Commercial
- Open-source
- In-house
Let’s take a look at the pros and cons for each.
Commercial models: Quick wins and low initial commitment
Commercial models, such as ChatGPT, Claude, and Gemini, are publicly available for a fee and offer a plug-and-play solution.
- Pros: They’re incredibly quick to implement and often best-in-class for general applications. They also require very limited financial commitment in the short term, allowing you to experiment or run production applications with minimal initial investment.
- Cons: You aren't building in-house capabilities or fine-tuning the model to your specific needs. Crucially, the long-term costs can be unpredictable. Many mid-sized companies offering marketing copy or social media post generation simply build user interfaces on top of these models, meaning they add little unique value at the model level.
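To make the "thin wrapper" point concrete, here is a minimal sketch of what such products often amount to: a small function that assembles a request for a hosted chat-completions endpoint. The payload shape follows the widely used chat-completions convention; the model name and system prompt are illustrative placeholders, not recommendations.

```python
# Sketch of a thin wrapper over a commercial chat API.
# MODEL and SYSTEM_PROMPT are hypothetical placeholders -- swap in
# whatever hosted model and brand voice your product actually uses.

MODEL = "gpt-4o-mini"
SYSTEM_PROMPT = "You write concise marketing copy."

def build_chat_request(user_prompt: str, temperature: float = 0.7) -> dict:
    """Assemble the JSON body sent to a hosted chat-completions endpoint."""
    return {
        "model": MODEL,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_prompt},
        ],
    }

payload = build_chat_request("Write a tagline for a reusable water bottle.")
```

Everything differentiating about such a product lives in the prompt and the UI around it; the model itself is a commodity you rent, which is exactly why the unique value added is limited.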
Open-source models: A cost-effective way to handle sensitive data
Open-source models, like Llama or Stable Diffusion (a text-to-image model), provide an alternative where you only pay for the computing power required.
- Pros: Open-source can be the cheapest way to run your own model within your private cloud, offering a crucial layer of protection for highly sensitive intellectual property (IP) data that you don't want leaving your own infrastructure.
- Cons: This can be a false economy, as running large models on your own cloud or on-premise can be incredibly compute- and power-hungry. Additionally, open-source models aren't always the best technology available.
I recommend using open-source primarily when you need to keep highly sensitive IP data within your own infrastructure.
In-house development: The long-term differentiator
Developing your own in-house model is arguably the most challenging but potentially most rewarding approach for long-term results.
- Pros: In-house capabilities are what will ultimately differentiate winning companies that generate revenue from their AI products and services. It's a significant differentiator in terms of unique value.
- Cons: This path is incredibly expensive, requiring long-term commitment from the C-suite for both capital and human capital investments. It also demands huge amounts of data, which not every company possesses, and requires you to hire an entire dedicated team.
This article is based on Francesco Federico’s presentation at the AI for Marketers Summit, which is returning to San Francisco on September 11, 2025. Grab your ticket here.
2. Data: Quality, governance, and security are paramount
One of the most critical lessons we've learned since generative AI burst onto the scene a few years ago is that data quality is as important as quantity – especially when it comes to marketing applications of AI.
The old adage "garbage in, garbage out" has always been true for data-driven applications, but it's particularly relevant for generative AI. You must select the best data for your specific use case.
It's not enough to simply select data; you also need to focus on how you clean and prepare it for use. There's a myth that you can just throw unstructured data at a model and it will understand it. While models are good at interpretation, you need to provide prepared data in a structured format to yield the best results.
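The cleaning-and-structuring step above can be sketched in a few lines. This is a minimal illustration, not a production pipeline: the record fields (`text`, `length`) are hypothetical, and real preparation would include deduplication, PII handling, and domain-specific normalization.

```python
import json
import re

def prepare_records(raw_lines: list[str]) -> str:
    """Clean raw text lines and emit structured JSON a model can consume reliably."""
    records = []
    for line in raw_lines:
        text = re.sub(r"\s+", " ", line).strip()  # collapse stray whitespace
        if not text:
            continue  # drop empty rows: garbage in, garbage out
        records.append({"text": text, "length": len(text)})
    return json.dumps(records, indent=2)

raw = ["  First   customer review \n", "", "Second\treview  "]
structured = prepare_records(raw)
```

Passing the model a predictable JSON structure like this, rather than raw scraped text, is what "prepared data in a structured format" means in practice: the model spends its capacity on the task instead of on deciphering your input.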