AI adoption is emerging as a key technology goal as businesses look to define their strategies and ready their organisations for an AI-driven future of work. Solutions like Microsoft 365 Copilot, deployed alongside other familiar Microsoft tools, are understandably seen as a good starting point for those taking their first steps with AI, keen to realise the promised improvements in productivity and operational efficiency.
Business leaders also recognise the need to understand the role and impact of AI in keeping their business competitive, but equally realise that any deployment represents a significant investment and a major business change. In an emerging technology area, deciding where to place your bets requires careful qualification, and this task often falls to IT teams, who must identify the best AI opportunities for the organisation and, crucially, prove their value through a pilot to justify a larger deployment.
Falling victim to the AI hype cycle
Every new technology has its own hype cycle, and AI is no exception. In fact, it has arguably generated the biggest hype of any emerging technology in the last decade, possibly ever.
Businesses see the value and hear the opportunities being discussed in every corner of their market. This creates a high level of expectation and excitement about what is possible, but also FOMO if decisions are not made quickly. These feelings aren’t confined to leadership, either. End users also see AI positively impacting their personal lives and connect with the value this could deliver at work.
It’s for this reason that many businesses accelerate deployment of tools like M365 Copilot, viewing AI as a platform that can benefit everyone. There’s a common expectation that AI can offer a solution to many challenges, and that users will naturally discover the best use cases for themselves once they’re exposed to the technology. But this approach rarely delivers as expected, even when it’s restricted to a small group of “invested” users.
When organisations operate with a hands-off methodology, any initial optimism can quickly fade. Consider an AI deployment like making a first impression. If you fail to give a good account of yourself on first meeting, it can take plenty of time, effort and goodwill to win someone over. The same is true of your users’ first encounter with AI. Without clear guidance or alignment to real-life workflows, pilots lose momentum and find themselves in the “trough of disillusionment”. Users who lack the skills and knowledge to harness AI effectively quickly become frustrated and return to older processes and traditional ways of working.
Equally, a lack of defined objectives and measures can become a blocker. Where outcomes are vague, business value is elusive and enthusiasm dwindles. It’s not usually a technology failure. More often, it’s people who feel disengaged, inadequately supported or uncertain about how to put new AI tools to work. Ultimately, this disillusionment sees user buy-in dry up, and the pilot fails to deliver the measurable business value needed to justify further investment. This isn’t uncommon: Gartner estimates that 30% of generative AI projects will have been abandoned after proof of concept by the end of 2025.
Pilots that put people first
With a lack of user engagement derailing so many pilot projects, a different approach is needed to help organisations ensure initial adoption takes hold, and that any trial programme delivers measurable outcomes. As the tech company with people at heart, it will come as no surprise that people are at the centre of Advania’s adoption methodology, with the explicit intention of tackling these obstacles upfront.
When user input defines the scope, the pilot becomes focused and relevant from the start. Before any pilot is launched, our approach starts with users to understand what they need to support their roles today. Through in-person sessions and ideation workshops, we speak with users to reveal any ongoing frustrations, operational challenges or time-sapping bottlenecks.
This helps to reveal the areas where AI can genuinely add value and, just as importantly, the areas to which it is less well suited, taking the burden of use case identification away from end users.
Through this process, we also look to identify AI advocates who can lead the pilot, building trust and enabling other users before it begins. Early champions are identified from across the business, not just the IT department. Managers and sponsors are shown exactly how AI is being focused on the key use case(s) identified by users, and leadership is clear on how the success of these initial test deployments will be measured.
The final, and perhaps most critical, step is to ensure that users are equipped with knowledge and practical skills before the pilot launches. This removes the need for self-discovery, and the potential disillusionment that can result. Users must be clear on the use cases and workflows being supported by AI, and on how to use AI tools effectively to support them.
Clear instructions on how test use cases are reported are also important, as this creates the assessment model that will ultimately determine the measured value and viability of any wider-scale deployment.
It’s only at this point, when the boundaries of the pilot are defined and users are fully enabled, that any technology is deployed.
People-led pilots deliver results
In our experience, traditional, top-down pilots that hope to drive engagement through user discovery and self-training fail to deliver the evidence needed to build a successful business case. Their broad scope also becomes a significant drain on resources, and sees pilots extend over several months in the pursuit of reportable outcomes.
Having delivered our people-led process for dozens of organisations, some of which have run unsuccessful pilots in the past, we see that this approach delivers not only a truly measurable pilot programme, but one that can be actioned in a fraction of the time.
Focusing on the right workflows and ensuring users are appropriately enabled creates a better starting point. Users are at the start line ready to go, clear about the event they are running, rather than still getting their shoes on when the starting gun sounds, unsure whether they are running 100m or 10,000m. You don’t have to be a sports fan to know which approach gives you the best chance of winning the race.
It’s this defined scope that ultimately leads to a more targeted pilot, and one that helps build an effective business case that streamlines deployment. The pilot proves the value of specific AI applications, and that value can then be easily scaled across the business.
As a result of our methodology, we’ve helped several customers build exceptional business cases for deployment, delivering significant time savings per user on everyday tasks and millions of pounds in savings through more efficient operations, with some customers purchasing thousands of licences as a result.
Ready for a better AI adoption journey?
M365 Copilot and other AI technologies can only achieve their potential when people are at the heart of adoption. Our people-led methodology makes sure an AI pilot becomes more than a test, instead emerging as a launchpad for measurable, long-term business improvement.
If you’re ready to start your AI adoption journey the right way, book an initial discovery call with the Advania team and experience how a people-first approach can move your business forward.