Unpacking the AI search revolution

by Colm Hebblethwaite


Over the last few years, every business has had to ask itself what AI means for its operations, customers and employees. But in the scramble to implement, upskill and show progress, it is hard to know how other businesses are dealing with these challenges. That’s why events like our recent roundtable on AI search are so important. They promote open dialogue on common challenges and create shared insights that help us all.

The Black Sun-hosted event saw communications and digital leaders from across the UK gather at the Clermont Hotel in London to talk about how AI is reshaping search strategies. The discussions ranged from how to keep content discoverable and trusted to the implications for website design. Here’s a recap of the main themes that emerged across the different discussions.

Search is changing fast

There is no denying that AI is changing the way people search for information online. Data shows that when AI summaries appear in results, click-through rates can fall from 15% to 8%. So, while search activity is increasing, actual click-throughs to websites are declining. At the same time, AI website scraping surged by 1,200% in 2025, with an estimated 400 times more scrapes than click-throughs to source sites.

This is a direct challenge to the traditional model of optimising content to drive traffic to a website. Businesses now need to think about how best to create content that serves both human visitors and the AI systems that summarise it for users who may never reach the site at all.

These two audiences also operate very differently. While humans browse, AI systems follow a rigid path of crawling, scoring relevance and verifying trust signals. Yet despite reaching content in different ways, both AI systems and humans prefer well-organised, clear content that solves a real problem. So how do we ensure that both find the messaging we want them to?

Is it time to rethink analytics and visibility?

One of the big recurring questions among attendees was how to measure AI search activity. Traditional analytics platforms are still key to understanding the on-site behaviour of users, but they don’t tell the full story anymore. As AI systems increasingly consume and synthesise content directly, a ‘visibility gap’ has appeared. We can no longer rely on referral traffic alone to judge the success of content. How can digital teams effectively distinguish between bots fetching data to answer an off-site user, and autonomous AI agents interacting with your site without leaving a standard footprint in user-behaviour reports?

Tracking scrape activity to identify which pages are being cited in AI responses and understanding your site’s overall ‘AI authority’ is becoming a strategic priority. Many attendees agreed that AI reporting in its current form is not only difficult, but incomplete too. And of course, the key question remains that once you can identify AI scraping on your server logs, what action should you take? How should that information influence content or governance decisions? No one in the room had a definitive answer but everyone agreed that getting more visibility was a necessary first step.
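For teams who want to start closing that visibility gap, one practical first step is simply counting known AI crawlers in standard web server access logs. The sketch below is illustrative, not a definitive method: the user-agent substrings are a non-exhaustive sample of crawlers that publicly identify themselves (GPTBot, ClaudeBot, PerplexityBot and others), the log lines are invented examples in combined log format, and any bot can spoof or omit its user agent.

```python
import re
from collections import Counter

# A non-exhaustive, illustrative list of AI crawler user-agent
# substrings -- verify against the traffic you actually see.
AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended", "CCBot"]

def count_ai_scrapes(log_lines):
    """Tally hits per AI crawler and per requested page."""
    by_bot = Counter()
    by_page = Counter()
    for line in log_lines:
        # Combined log format: ... "GET /path HTTP/1.1" ... "User-Agent"
        match = re.search(r'"(?:GET|POST) (\S+) [^"]*"', line)
        for bot in AI_BOTS:
            if bot in line:
                by_bot[bot] += 1
                if match:
                    by_page[match.group(1)] += 1
    return by_bot, by_page

# Invented sample log lines for demonstration only.
logs = [
    '1.2.3.4 - - [01/Mar/2025] "GET /reports/fy24 HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; GPTBot/1.0)"',
    '5.6.7.8 - - [01/Mar/2025] "GET /reports/fy24 HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; ClaudeBot/1.0)"',
    '9.9.9.9 - - [01/Mar/2025] "GET /about HTTP/1.1" 200 128 "-" "Mozilla/5.0 (Windows NT 10.0)"',
]
bots, pages = count_ai_scrapes(logs)
```

Even a crude tally like this shows which pages AI systems are drawing on most heavily, which is a useful input before making any content or governance decisions.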

Do we need to consider accessibility for AI now too?

There were also a lot of questions about the format information should be presented in. It’s clear that businesses need to start considering the accessibility needs of AI in their content strategies. For example, many businesses produce half-year and full-year reports as PDFs, but these can act as data silos that AI can struggle to analyse. Some attendees reported improved AI engagement when financial results were also published in HTML. Does human-focused content need to be backed up by an “AI-ready” layer that uses things like HTML summaries, tagged PDFs and transcripts for video content to ensure accessibility?

The way sites are currently structured may also need to change. A few attendees noted that investor sites have to keep large volumes of historical content for compliance reasons, creating a risk that outdated material is scraped and surfaced as current information. Structuring these sites in a way that keeps archive material accessible could reduce the likelihood of misinterpretation. Schema markup, metadata and header tags as well as structural siloing were highlighted as potential ways to help AI find up-to-date information.
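To make the schema markup idea concrete, here is one illustrative way a dated report page could be labelled using schema.org’s JSON-LD format, so a crawler can tell current results from archive material. The headline, dates and site name are placeholders, and this is a sketch of the general technique rather than a recommended template:

```html
<!-- Illustrative schema.org markup; all values are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Full-year results 2024",
  "datePublished": "2025-03-01",
  "dateModified": "2025-03-01",
  "isPartOf": {
    "@type": "WebSite",
    "name": "Example Corp investor relations"
  }
}
</script>
```

Explicit publication and modification dates give crawlers a machine-readable signal about which version of a recurring report is the current one.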

How can we stand out in a sea of same?

As AI-generated content proliferates, differentiating your brand can be a real competitive advantage. But brands also need to follow a few simple rules to optimise content for AI. Number one is making sure you use clear, simple language and always add enough context. This means that key points still make sense even if AI presents them in isolation to users. Other essentials are strong headings, logical structure and clearly answering the what, why, when, where, who and how questions with FAQ-style content that reflects how people search.

Attendees agreed that authentic, original thought leadership is increasingly valuable, especially when it is backed up by proprietary data or research. However, there is a balance between demonstrating authority and not giving too much away and compromising your commercial advantage. Finding this balance requires a clear content strategy and governance processes that set clear guardrails without putting your teams in handcuffs. Many internal approval processes still reflect slower, more traditional digital models, especially in regulated sectors. Complex governance can delay publication and limit a business’s ability to produce the timely, well-structured content that AI systems are more likely to gravitate to.

We are all figuring this out together

One thing that was really clear throughout the event is that no one has all the answers yet. In fact, the conversations raised some interesting questions that need to be explored further.
For example, some asked whether publishing llms.txt files makes a difference. While current research suggests it does not significantly improve performance, it also causes no harm and may become important later. Another table asked if users’ personalisation of the AI models they use has an impact on the way information is presented back to them. For example, if I set my AI’s base tone to colloquial rather than professional, will this alter the information the AI gives me about a brand? This was highlighted as an area that needed more research.
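For readers unfamiliar with the idea: llms.txt is a community proposal (llmstxt.org) for a simple markdown file at a site’s root that points AI systems at the pages you most want them to use. The fragment below is an invented illustration, with a placeholder company name and URL, not a file from any real site:

```
# Example Corp

> One-line summary of what Example Corp does and who it serves.

## Reports
- [Full-year results 2024](https://example.com/reports/fy24.html): HTML version of the annual results
```

Because the file is cheap to produce and maintain, the “no harm, possible future upside” assessment from the room seems a reasonable basis for experimenting with it.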

Underpinning all of these questions was a shared recognition that we are all navigating structural change without clear benchmarks.

A shift in mindset

Perhaps the biggest takeaway from the day was that we are seeing the beginning of a significant strategic shift. Organisations need to reassess how their content is discovered, interpreted and cited within AI-generated answers. That requires clarity, structure, authority and ongoing measurement.

The event closed with a clear message: those who proactively adapt their content, governance and measurement frameworks now are going to be better placed to maintain visibility and credibility as AI search becomes more popular.
