Whether it is HAL 9000 from 2001: A Space Odyssey or Mycroft from The Moon is a Harsh Mistress, artificial intelligence running amok is a well-trod trope in science fiction, but it remains a theoretical exercise for the capital markets.
What happens if an algorithmic black box accidentally crashes a market or wipes out a client’s portfolio?
No one knows for sure due to the lack of existing case law regarding artificial intelligence, according to Matthew Berkowitz, a litigation partner at global law firm Shearman & Sterling.
The potential liability from a defective AI is an issue not only for financial services but for the wide variety of industries that also would use the technology, such as autonomous vehicles.
“When an algorithm makes a mistake and causes harm, you potentially have a deep-pocket defendant rather than a negligent individual, meaning that injured parties may be aggressive in pursuing manufacturers, even if they bring an overall benefit to society with improved technology,” he told IntelAlley. “The law needs to strike the right balance between protecting the public and incentivizing technological advancement. AI potentially changes how we think about that balancing.”
The typical reactive nature of regulation and legislation likely means that US regulators and legislators will not address the issue until something serious breaks.
Industries have made significant investments in how they operate, which makes a legislative change all the more difficult in the largest jurisdictions like the US, added Joshua Thompson, a finance partner at Shearman & Sterling.
“As the needs arise, smaller jurisdictions, which we would consider modern and nimble economies, likely will be able to legislate faster and more comprehensively,” he said. “Their sunk costs in the way things are done may be less. Parties of influence will be able to get on board with new regulations much more easily.”
The concerns regarding the disruption fueled by AI, machine learning, and big data have reached the boardroom across industry verticals.
“Whether you take very traditional industries like the car industry and look at the disruption caused by new companies that can harness AI for automated driving or otherwise, there are several other sectors of the economy that will have potential rapid change that is a concern to virtually every board,” said Thompson. “The unknown and unpredictable gives people pause for thought.”
It may be too early for firms to weigh the legal implications of something that might happen five years from now, but they should be aware of the issues, noted Berkowitz.
As AI implementations become more sophisticated and organizations deploy them further across the enterprise, firms will find themselves exposed to a higher level of risk.
Organizations may want to rethink how they go about deploying AI and other disruptive technologies in the future, said Thompson. “We would expect the legal and compliance teams to have an even greater role in its development, its application, and the monitoring of its performance going forward.”