July 30, 2018
Artificial Intelligence (AI) has transformative potential in financial services, both improving business practices for providers and creating more personalized experiences for consumer and commercial customers.
However, when deciding where to apply AI, financial institutions (FIs) will face key control and governance questions as they integrate it into their operating environments. How much authority should AI have? What about regulators? And what if things don’t go as planned?
Before going all in on an AI strategy, FIs should think through a few considerations:
It’s all about the data
AI software is only as smart as the data used to train it. A human review must therefore oversee the machine’s predictions to ensure fairness and protect against biased decisions.
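One common way such a human review can be triggered is a disparate-impact spot-check on automated outcomes. The sketch below is illustrative only: the sample data, group labels, and the "four-fifths" threshold are assumptions, not a real FI dataset or a regulatory standard endorsed by this article.

```python
# Hypothetical fairness spot-check: compare approval rates across groups
# and flag the model's output for human review when the lowest-approved
# group falls below 80% of the highest (the informal "four-fifths" rule).

def disparate_impact_ratio(decisions):
    """decisions: iterable of (group, approved) pairs."""
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Illustrative sample: group A approved 80%, group B approved 50%.
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 50 + [("B", False)] * 50)

ratio, rates = disparate_impact_ratio(sample)
flag_for_human_review = ratio < 0.8  # escalate to a person if True
```

Here the ratio is 0.625, so the batch would be routed to a human reviewer rather than auto-approved.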
Humans still matter
Don’t forget to think through “off-ramps” that steer customers to human backups when needed. Even the most competent AI has its limits, which means having human intelligence will always be critical.
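An off-ramp like this can be as simple as a confidence gate in front of the model's output. The threshold and channel names below are assumptions for the sketch, not a prescribed design.

```python
# Minimal "off-ramp" sketch: route low-confidence AI decisions to a
# human backup instead of acting on them automatically.

HUMAN_REVIEW_THRESHOLD = 0.75  # illustrative cutoff, tune per use case

def route(prediction, confidence):
    """Return (channel, prediction): 'human' when confidence is low."""
    if confidence < HUMAN_REVIEW_THRESHOLD:
        return ("human", prediction)  # steer the customer to a person
    return ("auto", prediction)

channel, _ = route("approve", 0.62)  # low confidence -> human channel
```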
Transparency is key
FIs cannot assume that regulators will trust AI technology, so they would be well advised to provide examples of how the AI was trained, including how the algorithms reach decisions based on facts and actionable insights.
For AI technologies to succeed, they should be auditable so FIs can explain why an algorithm reached a certain decision. Give AI systems the opportunity to learn from the outcomes of human interactions.
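In practice, auditability starts with recording each automated decision alongside the inputs and the factors that drove it. This is a minimal sketch; the field names and factor weights are hypothetical, not a real FI schema.

```python
# Sketch of a decision audit trail: log inputs, the outcome, and the
# top contributing factors so each decision can be explained later.
import json
from datetime import datetime, timezone

audit_log = []

def record_decision(inputs, decision, top_factors):
    """Append an explainable record and return it as JSON."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,
        "decision": decision,
        "top_factors": top_factors,  # e.g. (feature, weight) pairs
    }
    audit_log.append(entry)
    return json.dumps(entry)

record_decision({"income": 52000, "credit_score": 640},
                "refer_to_human",
                [("credit_score", -0.41), ("income", 0.22)])
```

A log like this also closes the loop the article suggests: human reviewers' final outcomes can be joined back to these records as training feedback.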