AI is everywhere, from content creation to customer support, but beneath the surface a clear structural shift is emerging. What often begins with a simple prompt in ChatGPT is evolving into contextual, centrally managed AI agents that span entire organisations.
From Loose Prompts to Full Control
Over the past months, we’ve had countless conversations with customers exploring AI. In this blog, we'll outline seven stages that reflect how organisations evolve in their use of AI — from early experimentation to full integration and control.
Note: This model is written from the customer’s perspective — it focuses on how much control, integration and governance the customer experiences as AI adoption evolves.
Stage 1 – Standalone AI chats (prompt-based experimentation)
Everyone starts here: you open a chat, type a question, and copy the output into another tool. It’s fast, useful — and completely ephemeral. No context, no memory, no integration.
Stage 2 – Prompt engineering
Users begin to realise the value of well-crafted prompts. Prompt libraries and templates become common, and internal best practices emerge through experimentation. AI is still used manually and reactively, but with increasing structure and intention.
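To make the idea of a shared prompt library concrete, here is a minimal Python sketch; the template text, brand name, and parameters are invented examples, not any particular team's real prompts:

```python
# Minimal sketch of a reusable prompt template, the kind of artefact a
# shared prompt library holds. All names and wording are illustrative.

PRODUCT_DESCRIPTION_TEMPLATE = (
    "You are a copywriter for {brand}.\n"
    "Write a {tone} product description of at most {max_words} words "
    "for the following product:\n{product}"
)

def build_prompt(brand: str, tone: str, max_words: int, product: str) -> str:
    """Fill the shared template so every user sends a consistent prompt."""
    return PRODUCT_DESCRIPTION_TEMPLATE.format(
        brand=brand, tone=tone, max_words=max_words, product=product
    )

prompt = build_prompt("Acme", "friendly", 80, "Trail running shoe, waterproof")
print(prompt)
```

The point is not the code itself but the shift it represents: prompts stop being ad-hoc chat messages and become versioned, reusable assets.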
Stage 3 – User-built cloud agents (hosted externally)
Tools like ChatGPT (via custom GPTs), Copilot Studio or Botsonic allow users to create their own AI agents with specific instructions and behaviours. These agents typically run on external platforms.
While helpful, the setup still requires manual handling of output, and there's no real memory and no guarantee of data control or auditability.
Stage 4 – AI embedded per tool (fragmented automation)
At this stage, AI becomes embedded in the customer’s existing tools: PIM platforms auto-generate product descriptions or translations, ERP systems assist with calculations and quotes, and CRM systems offer summarisation.
However, each instance of AI is managed and configured separately, leading to fragmentation and inconsistent logic or behaviour. These embedded AI systems often act as black boxes, lacking transparency or governance.
Stage 5 – Centralised AI agent via third-party platform
Rather than configuring AI in every tool, customers now use a centralised AI agent hosted on a third-party platform like GPTs, Botsonic or Copilot Studio. This agent is accessible via multiple channels and tools.
- ✅ Prompt behaviour and configuration are managed centrally
- ✅ The same agent can serve multiple channels and tools, making the experience more consistent across interfaces
- ⚠️ The infrastructure and data control remain with the external vendor: you don't control the models, memory, logs, or deployment
This is a highly practical phase. From the customer’s viewpoint, the logic is centralised — but the trust boundary is not.
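A rough sketch of what this centralisation looks like from the integration side; the endpoint URL, agent name, and payload shape are hypothetical stand-ins for whatever a vendor platform such as Copilot Studio or a custom GPT actually exposes:

```python
# Sketch of several channels routing through one centrally configured
# agent. Endpoint, agent name, and payload shape are hypothetical.
import json

AGENT_ENDPOINT = "https://agents.example-vendor.com/v1/brand-assistant"

def build_request(channel: str, message: str) -> dict:
    """Every tool produces the same request shape for the shared agent."""
    return {"agent": "brand-assistant", "channel": channel, "message": message}

# One agent serves chat, CRM, and e-mail tooling alike:
for channel in ("web-chat", "crm", "email"):
    body = json.dumps(build_request(channel, "Summarise this ticket"))
    # In production this body is POSTed to AGENT_ENDPOINT; note that the
    # data crosses the trust boundary into the vendor's infrastructure.
    print(body)
```

The comment on the POST is the crux of this stage: the logic is centralised, but every request still leaves your environment.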
Stage 6 – AI hosted in the customer’s own cloud
Here, the customer moves AI capabilities into their own infrastructure — typically a private cloud environment like Azure or AWS.
- ✅ The customer chooses the model, defines policies, stores memory and governs access
- ✅ API-based integrations connect this agent to various internal tools and systems
- ⚠️ This requires solid DevOps and AI governance maturity
At this stage, AI is no longer a feature — it becomes a strategic asset embedded in the customer’s IT architecture.
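As a simplified illustration of what "defines policies, stores memory and governs access" can mean in practice, here is a Python sketch with an invented policy check and an in-memory audit log; a real deployment would call the customer's own model endpoint and write to a durable, access-controlled log store:

```python
# Simplified sketch of a self-hosted, governed agent: the customer's own
# policy check and audit log sit in front of a model running in the
# customer's cloud. Blocked terms and the model call are invented examples.
from datetime import datetime, timezone

BLOCKED_TERMS = {"password", "credit card"}   # illustrative policy, not exhaustive
AUDIT_LOG: list[dict] = []                    # in production: a durable store

def call_private_model(prompt: str) -> str:
    """Placeholder for the model deployed in the customer's own cloud."""
    return f"(model answer to: {prompt[:40]})"

def governed_query(user: str, prompt: str) -> str:
    """Enforce the customer's policy before the model ever sees the prompt."""
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        decision, answer = "blocked", "Request refused by policy."
    else:
        decision, answer = "allowed", call_private_model(prompt)
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "decision": decision,
    })
    return answer

print(governed_query("alice", "Reset my password"))   # policy blocks this
```

What changes compared to Stage 5 is not the prompt or the model, but who owns the checkpoint: the policy, the log, and the refusal logic all live inside the customer's boundary.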
Stage 7 – AI fully hosted on customer infrastructure
In highly regulated or security-sensitive sectors, even the public cloud may not be acceptable. Customers opt to run AI entirely on-premises or in a tightly controlled private environment.
- Local LLMs (e.g. Llama, Mistral)
- Private GPU infrastructure
- Internal audit logs and access control
- Custom vector databases and policy enforcement
This gives the customer maximum control — over data, behaviour, infrastructure, and compliance. From the company’s perspective, AI becomes a fully internal capability, subject to the same rules as ERP or Identity Management.
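To make the "custom vector databases" bullet concrete, here is a toy retrieval sketch that runs entirely in-process; the word-count "embedding" merely stands in for a real local embedding model paired with a proper vector store:

```python
# Toy retrieval sketch: documents are "embedded" and searched entirely
# in-process, standing in for a local embedding model plus vector
# database running on the customer's own hardware.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Word-count vector; a real stack would use a local embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, docs: list[str]) -> str:
    """Return the closest document; nothing leaves the premises."""
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

docs = ["invoice approval policy", "travel expense rules", "brand colour guide"]
print(search("invoice approval process", docs))
```

However crude, the sketch shows the defining property of this stage: every step, from embedding to ranking, runs on infrastructure the customer fully controls and audits.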
In a follow-up blog post, we'll explore how 2imagine Pulse fits into this maturity model, and how it enables companies to move beyond fragmented tools toward fully automated, brand-compliant document creation.
Why this evolution is rarely linear
Although the stages seem sequential, most organisations do not move through them linearly. Many stall at Stage 3, 4 or 5 — not due to technical or budget limitations, but because of organisational friction:
- Lack of AI ownership
- Unclear security or compliance boundaries
- Uncertainty around IP or audit requirements
- Internal fragmentation between teams and tools
Without a strong vision, AI risks becoming just another siloed feature rather than the foundation of a scalable strategy.
Final thoughts
The future of AI isn’t about faster models or fancier prompts — it’s about embedding intelligence where it matters, and doing so in a controlled, transparent, and scalable way for your company.
As customers move up the maturity curve, their ability to control, integrate, and govern AI grows — regardless of where the vendor platform is hosted.
Those who invest in structured, context-aware AI architectures will unlock far greater value than those who rely on disconnected tools or isolated agents. Whether SaaS-based or self-hosted, your architecture will determine how much scalability, consistency, and resilience you gain.
Curious how Pulse fits your AI journey? Let’s explore how we can streamline your content workflows.
Let's talk