AI Discovery Is Persona-Dependent, Not Query-Dependent
One of the most persistent misconceptions in GEO/AIO is that AI discovery works like a more sophisticated version of search: you optimise for a query, and the AI either recommends you or it does not. The reality is fundamentally different, and understanding it has direct implications for strategy.
AI recommendations are not query-dependent. They are persona-dependent. The same question, asked by different people in different contexts, will produce different answers – and the variation is not random. It reflects the AI's inference about who is asking, what they need, and what would be most relevant for someone with that profile.
Why the Old Mental Model Breaks Down
In traditional SEO, it was approximately true that each keyword produced a broadly consistent SERP. For a given query, most users in the same geography would see the same results. This made keyword optimisation tractable: rank well for a term, and you become visible to everyone who searches it.
AI systems do not operate this way. Modern large language models incorporate contextual signals – the explicit and implicit information about who is asking and why – into their answer generation. A conversation that begins with 'I run a freelance design studio' frames all subsequent questions differently from one that begins with 'we are a 300-person manufacturing company.' The AI draws on these signals to tailor its recommendations.
As memory-augmented AI systems become more common – where the model retains information across sessions about a user's role, preferences, past decisions, and stated constraints – this effect will intensify. Research into memory-assisted personalised LLMs shows that systems incorporating user history significantly outperform non-personalised baselines, with the performance gap growing as user history accumulates.
The Persona Divergence Problem
For brands, this creates a challenge that keyword-level GEO simply cannot address. Consider a mid-market business intelligence tool. It might appear consistently in AI answers for the generic prompt 'best BI software.' But what happens when the context shifts?
- A CFO at a 500-person company asking about BI tools for financial forecasting – does the AI recommend it?
- A data analyst at a series A startup looking for self-serve analytics – does it surface?
- A marketing operations manager evaluating tools that integrate with their existing CRM stack – is it in the list?
- An IT director assessing enterprise-grade security and compliance requirements – does it make the cut?
These are all nominally 'BI software' queries, but they are being asked by different personas with different needs, different evaluation criteria, and different contexts. The AI will give different answers – drawing on its associations between the brand and specific use cases, company types, and user roles.
A brand can win the generic prompt in a clean-session GEO test and still be invisible to every user segment that actually matters for its business.
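To make the divergence concrete, here is a minimal sketch of how one generic query fans out into persona-contextualised test prompts. The `Persona` fields and the prompt template are illustrative assumptions, not a fixed schema or any particular tool's API.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    role: str     # e.g. "CFO"
    company: str  # e.g. "500-person company"
    need: str     # e.g. "financial forecasting"

# The four personas from the example above (illustrative values).
PERSONAS = [
    Persona("CFO", "500-person company", "BI tools for financial forecasting"),
    Persona("data analyst", "series A startup", "self-serve analytics"),
    Persona("marketing operations manager", "mid-market company",
            "tools that integrate with our existing CRM stack"),
    Persona("IT director", "large enterprise",
            "enterprise-grade security and compliance"),
]

GENERIC_PROMPT = "What is the best BI software?"

def contextualise(persona: Persona, query: str) -> str:
    """Prefix the generic query with the persona's framing, mimicking
    how a real user's context shapes the conversation."""
    return (f"I am a {persona.role} at a {persona.company}, "
            f"looking for {persona.need}. {query}")

if __name__ == "__main__":
    for p in PERSONAS:
        print(contextualise(p, GENERIC_PROMPT))
```

Running each contextualised variant in a fresh session, rather than the generic prompt alone, is what surfaces the per-segment differences described above.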
How Personalisation Is Developing in LLMs
The move toward persona-aware AI recommendations is already underway – this is not a future state. It is developing along at least three dimensions:
- Memory systems are being built into major AI assistants, allowing them to retain and apply user preferences across conversations.
- Implicit persona inference – where the AI draws conclusions about user needs from the language and framing of their queries – is a standard capability of current large language models.
- Enterprise deployments of AI assistants increasingly incorporate organisational context: industry, company size, role, and workflow context that shapes the assistant's behaviour.
Research into personalised LLM recommendation systems demonstrates consistent improvement when user history and persona signals are incorporated. The pattern mirrors what happened in e-commerce recommendation engines over the past two decades: general recommendations give way to personalised ones as systems accumulate sufficient signal, and the personalised systems significantly outperform their generic counterparts.
Why Prompt-Level GEO Is Incomplete
The implication for GEO methodology is significant. Testing your brand's visibility by running a set of generic prompts in clean sessions tells you how the AI responds to an anonymous, context-free user. It does not tell you how it responds to your actual buyers.
For a B2B SaaS brand selling to enterprise security teams, the relevant discovery moment is not 'what is the best security software' – it is the contextualised version of that question, asked by someone with a specific role, a specific stack, a specific set of constraints, and a specific history of prior queries. The AI's answer to those two versions of the question can differ substantially.
This means that the performance metric that matters most – discovery probability for your target personas – is one that current GEO tools systematically fail to capture. They are optimising for a proxy metric (generic prompt visibility) that may have limited correlation with the outcomes that drive business results.
The Three-Generation View
It helps to see this as an evolution across three generations of discovery optimisation:
- Generation 1: SEO – Optimise for keyword rankings in a deterministic search engine. Input: keyword query. Output: static list of links. Measurement: SERP position.
- Generation 2: GEO – Monitor visibility in AI-generated answers. Input: simulated prompt. Output: share of AI voice score. Measurement: appearance rate in test prompts.
- Generation 3: Persona Intelligence – Model discovery probability by user segment. Input: persona + context + intent path. Output: tailored discovery probability by segment. Measurement: probability distribution across real persona clusters.
What a Persona-Aware Approach Looks Like
A persona-aware AI discovery strategy requires a different set of questions:
- Which user personas does the AI currently associate with our brand – and are those the personas we want to reach?
- For which high-value persona clusters are we currently under-recommended or absent from AI answers?
- What signals – in our content, our third-party coverage, our entity associations – would need to change to shift the AI's recommendations for specific personas?
- How do our AI visibility scores vary when prompts are contextualised with our target personas, compared to generic queries?
These questions cannot be answered by a dashboard that tracks generic prompt performance. They require a modelling approach that treats AI recommendations as the output of a complex system driven by training data, retrieval signals, and entity associations.
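As a sketch of what segment-level measurement could look like: given repeated contextualised test runs per persona cluster, estimate the brand's discovery probability for each segment and compare it with the generic-prompt baseline. The trial data below is invented, and `discovery_rates` is a hypothetical helper, not an existing product's API.

```python
from collections import defaultdict

# Invented example data: each record is (persona_cluster, brand_was_mentioned)
# from one contextualised test prompt. "generic" is the clean-session baseline.
TRIALS = [
    ("generic", True), ("generic", True), ("generic", True), ("generic", False),
    ("cfo_forecasting", False), ("cfo_forecasting", False), ("cfo_forecasting", True),
    ("analyst_selfserve", True), ("analyst_selfserve", True), ("analyst_selfserve", True),
    ("it_security", False), ("it_security", False), ("it_security", False),
]

def discovery_rates(trials):
    """Estimate P(brand appears in the AI's answer) for each persona cluster."""
    counts = defaultdict(lambda: [0, 0])  # cluster -> [mentions, total trials]
    for cluster, mentioned in trials:
        counts[cluster][0] += int(mentioned)
        counts[cluster][1] += 1
    return {cluster: m / n for cluster, (m, n) in counts.items()}

rates = discovery_rates(TRIALS)
# In this invented data, the generic rate is high while a high-value
# segment's rate is zero – the gap that clean-session testing cannot see.
gap = rates["generic"] - rates["it_security"]
```

The interesting output is not any single score but the distribution: a brand with a strong generic rate and a near-zero rate for a target segment has a persona problem, not a visibility problem.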
The concept that brings this together is Persona Intelligence – the systematic modelling of which user types the AI recommends you to, and why. It represents the next evolution in AI discovery strategy, moving from generic visibility tracking to segment-level discovery optimisation.
Written by
ZIO Team
Research Team
The ZIO research and product team, dedicated to advancing persona intelligence.