‘AI’ is one of the most prominent buzzwords on today’s technology conference circuit. Most executives, when asked, will describe their organization’s forays into machine learning and artificial intelligence, and many are indeed experimenting. However, the conversation changes when one asks about the value the company is deriving from these same technologies.
The truth is that, while the technology is often sound, many companies are struggling with obstacles to AI implementation that keep them from seeing strong results. Part of this stems from a recent period of “pilot-itis,” according to EY’s Darrin Williams. Eager to stay current with technology, companies greenlight AI proof-of-concept studies across their organizations but do not move swiftly to scale up or kill these projects based on results. This hesitance stagnates implementation and limits most companies to only small gains from these innovative projects.
But the problems in value generation do not stop at the technology. There are three key reasons that companies are struggling to garner value from their AI endeavors, even as the technology is praised across the capital markets.
Regulation keeping pace
Early adopters of AI, particularly those in capital markets, need to be cognizant that AI regulation is likely coming down the road. Financial institutions should take steps now to understand how AI functions within their infrastructure so that these applications will hold up to future regulations.
Companies investing heavily in machine learning should work to ensure that their respective regulatory bodies are as educated on these solutions as their employees are. Not only does this support critical relationship building with regulators, but it also brings an increased understanding of how AI impacts various capital markets functions, helping to shape future regulations.
It is also important for companies to adjust compliance practices when AI replaces job functions previously performed by humans. Those actions are still regulated, even when an AI solution is performing the activity. Because AI can perform these tasks significantly faster, internal compliance oversight may need to become more frequent to cover the higher number of transactions occurring in the same time period. Rethinking compliance audit strategy will be key for organizations looking to drive value from AI solutions.
Strategy before technology
Some companies fall into the trap of using the latest start-up technology simply because it is new, yet it doesn’t always deliver what was promised. With the growth of machine learning technology, this practice has become more commonplace, and organizations need to ensure that any adoption aligns with their own emerging technology strategy. There are a few areas to consider.
Firstly, before implementing AI solutions, organizations need to clearly define the problem they hope to address. The pace of change is rapid, and companies need to remain current, but new technologies should not be stacked onto existing infrastructure simply for the sake of innovation.
Secondly, a clear objective is essential. Implementing AI without a clear direction can lead projects to flounder, neither growing nor being scrapped. While these companies can still rightfully claim to be using AI, the lack of a strategy for solving problems across the organization keeps them from being leaders.
Talent training and development
While internal inexperience can be remedied with outside support during development and implementation, the same lack of understanding is inexcusable in a corporate culture that plans to use AI solutions going forward. I have seen many an AI project fail because the company did not build a culture of AI savviness, or even awareness, around its solutions. These companies lost out on AI value simply because they did not take the time to implement the solution into their culture, only their infrastructure.
Employees have to comprehend how the systems they use operate. They need to understand whether a technology is designed to replace them or to make a part of their job easier, as well as how the system works, the speed at which it can perform tasks, and the type of pattern recognition it is capable of.
Arguably most importantly, employees should understand how AI can be managed and incorporated into their own roles. This understanding gets employees on board with the investment in AI products and shifts the corporate culture towards innovation. Rather than having AI pointed at them, employees will seek to take part in these projects and offer front-line perspectives on the performance and value the company is getting.
Ultimately, AI should not be considered just a solution – it is a digital labor force, and companies should start treating it as such. If AI is looked at as an employee rather than just a tool, then companies can adjust strategy, compliance and corporate culture accordingly, just as they do with their human capital.
Randy Guy is the CTO of Capital Markets at FIS.