Alan Kalton and Graham Rapier
Aktana presents a three-part series focused on dispelling common misconceptions about AI scalability and how to engineer success after the pilot phase. In Part 1, we discuss the myths surrounding localization and foundational data requirements.
Interest in Artificial Intelligence (AI) technology in commercial life sciences surged during the period of digital disruption catalyzed by the pandemic. Yet while AI holds the key to increasing operational efficiency and personalizing healthcare professional (HCP) engagements at scale, many companies are still hesitant to green-light deployments that go beyond the pilot phase.
Misperceptions about what’s required to scale AI globally are partly to blame. When teams are unsure about costs, data requirements, new internal infrastructure, and the impact of global initiatives on local teams, it’s all too easy for analysis paralysis to set in, causing initiatives to stall.
This doesn’t need to be the case. With the right global strategy, you can begin to equip your entire organization with AI solutions that are appropriately robust for each brand or region. Remember: AI isn’t an all-or-nothing proposition. Advanced AI technology continuously learns and improves quickly, whether a company has a massive database or a small one, a world-class centralized CRM platform or different point solutions from region to region. Regardless of the starting point, AI is the fastest way to reach the destination: a superior omnichannel customer experience.
To help dispel these misunderstandings, we’re shedding light on some of the biggest myths about AI and digital transformation.
Myth 1: “Local technology complexity hinders global AI scalability.”
One-size-fits-all AI fits no one. Advanced AI platforms are built to accommodate each organization’s global technology landscape, regardless of complexity – even enabling data aggregation across geographies.
Whether it’s a CRM, a marketing automation system, or a data warehouse, every technology platform balances central control against the autonomy of local systems configured for each country. Even a single system deployed globally may be used differently from country to country, so the key is to map the full technology landscape and identify where AI can add value across a broad spectrum of use cases within that ecosystem.
Apply the 80/20 rule to accommodate regional autonomy
“Every region wants and needs some degree of autonomy, so we apply the 80/20 rule to our global AI deployment,” said Klas Eriksson, global head of Scale and Analytics at Bayer. Bayer is leveraging AI to support the commercialization of its premier cardiovascular product, Xarelto, with initial rollouts in France, the UK, Italy, Denmark, and Canada. “We deployed a common CRM and business rules for aggregating data in the same way for every country. Our open AI platform includes a self-service component for local flexibility in data reporting.”
Eriksson continued, “The technology is complex, but, like anything, successful AI requires change management. Articulate the changes, especially the benefits, to all teams. As important, align related business processes – such as how commercial teams segment customers and input data – globally before you deploy. It’s not that everyone has to work the same, but broad-strokes commonality across regions makes deploying any new technology easier as long as the solution offers some flexibility.”
Use AI to connect your technology landscape
AI technology and sales and marketing automation systems should maintain a symbiotic relationship, feeding off each other to achieve a common goal. An API-driven, open AI platform makes this possible, integrating seamlessly with systems and databases regardless of geography. In this way, AI is a bridge, not a wall, connecting the global technology landscape.
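As a loose illustration of that bridge pattern (not a description of any specific platform’s API), here is a minimal Python sketch: the regional payloads and field names are hypothetical, and a thin mapping layer normalizes each country’s data into one shared schema that an AI engine can learn from.

```python
# Minimal sketch of the "bridge" pattern. Each country's system keeps its own
# field names; a mapping layer normalizes everything into one shared schema.
# All endpoints, fields, and values here are hypothetical.

# Example payloads as they might come back from two regional CRM systems.
regional_payloads = {
    "FR": [{"id_professionnel": "FR-001", "canal": "visite", "date_contact": "2023-04-02"}],
    "UK": [{"hcp_id": "UK-204", "channel": "email", "contact_date": "2023-04-03"}],
}

# Map each region's local field names onto one shared schema.
field_maps = {
    "FR": {"hcp": "id_professionnel", "channel": "canal", "date": "date_contact"},
    "UK": {"hcp": "hcp_id", "channel": "channel", "date": "contact_date"},
}

def normalize(region: str, records: list[dict]) -> list[dict]:
    """Rename a region's local fields to the shared schema the AI engine consumes."""
    mapping = field_maps[region]
    return [
        {"region": region, **{shared: rec[local] for shared, local in mapping.items()}}
        for rec in records
    ]

# Aggregate across geographies into a single feed the AI platform can learn from.
unified_feed = [
    row for region, records in regional_payloads.items() for row in normalize(region, records)
]
print(unified_feed)
```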
Myth 2: “I don’t have enough data to take advantage of AI.”
While it’s true that quality AI outputs are impossible without quality data, AI does not require a complete data set to start providing value. Often, companies need far less data than they expect to achieve the desired outcome. And while some life sciences companies have robust historical data, many others do not. Either way, AI creates better data over time by encouraging users to capture quality information about their interactions with HCPs, naturally building an ever-growing database.
Tune AI to your unique data starting point
To start, AI leverages any existing data, reinforces good data input, and evolves continuously – eliminating data quality concerns as a barrier to deployment. The key to overcoming this misperception is to build an AI solution tuned to each company’s data starting point.
For example, if a company already has a marketing automation system that captures HCP responses, AI can immediately begin drawing insights that will trigger actions across all commercial teams. Encouraged, field teams will then input more data into the system, generating ever-more nuanced suggestions. The more good results reinforce this behavior, the more users will input good data, and the faster the data grows. All the while, AI is learning from new inputs and becoming increasingly effective.
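To make that loop concrete, here is a minimal sketch, assuming a very simple suggestion model and hypothetical touchpoint features (illustrative only, not how any particular engine works internally): each new batch of logged HCP responses updates the model incrementally, so the next round of suggestions reflects what the field just learned.

```python
# Requires numpy and scikit-learn. The features and the simple classifier are
# assumptions for illustration, not a real recommendation engine.
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical features for a planned HCP touchpoint:
# [recent_email_opens, days_since_last_visit, specialty_match]
model = SGDClassifier(loss="log_loss")

def update_with_new_responses(features: np.ndarray, responded: np.ndarray) -> None:
    """Fold the latest logged HCP responses into the model without retraining from scratch."""
    model.partial_fit(features, responded, classes=np.array([0, 1]))

def suggest(features: np.ndarray) -> np.ndarray:
    """Score candidate touchpoints; higher scores become suggestions for the field team."""
    return model.predict_proba(features)[:, 1]

# Each reporting cycle: field teams log outcomes, the engine learns, suggestions improve.
first_batch = np.array([[3, 10, 1], [0, 45, 0], [5, 7, 1]])
outcomes = np.array([1, 0, 1])
update_with_new_responses(first_batch, outcomes)
print(suggest(np.array([[4, 8, 1]])))
```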
Nurture a virtuous cycle of data collection and refinement
“With our recommendation engine, we worked with whatever we had to start and are flexing to accommodate other sources and needs as the engine grows. For instance, there may be one specific data source in one country that must be accommodated,” added Eriksson. “And there may be a specific way one market wants to segment customers. We are here to enhance customer engagement, so we must be flexible enough to accommodate. Work from a common playbook but also be able to adapt.”
AI can make recommendations based simply on customer profile information, too. As teams add information to the system, the value compounds continuously to enable richer customer experiences. You can start the AI journey with basic data as the cornerstone, building a framework for ongoing data quality improvement while the system reinforces and rewards a “value in, more value out” process.
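Even that profile-only starting point can be sketched simply. The example below assumes hypothetical profile attributes and message types: with no interaction history at all, a recommendation is just the best match between an HCP’s profile and the available messages, and interaction data refines it from there.

```python
# Profile-only "cold start" sketch. Attributes and message types are hypothetical.
import numpy as np

# One-hot profile attributes: [cardiologist, hospital-based, prefers_email, high_volume]
hcp_profiles = {
    "HCP-001": np.array([1, 1, 0, 1]),
    "HCP-002": np.array([0, 0, 1, 0]),
}

# Candidate messages expressed in the same attribute space.
message_profiles = {
    "in-person clinical update": np.array([1, 1, 0, 1]),
    "email dosing reminder": np.array([0, 0, 1, 1]),
}

def recommend(hcp_id: str) -> str:
    """With no interaction history yet, rank messages by similarity to the HCP profile."""
    profile = hcp_profiles[hcp_id]
    def score(vec: np.ndarray) -> float:
        return float(profile @ vec) / (np.linalg.norm(profile) * np.linalg.norm(vec) + 1e-9)
    return max(message_profiles, key=lambda name: score(message_profiles[name]))

print(recommend("HCP-001"))  # profile-based suggestion; interaction data will refine it over time
```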
Up Next
In Part 2, we’ll debunk common myths around the costs of a global AI implementation, job replacement and the ideal time to scale up your efforts. Stay tuned!