Ethics in AI Are Not Optional
Artificial intelligence is a critical part of the future of financial services, but firms should not be penny-wise and pound-foolish in their race to develop as advanced an offering as possible, caution experts.
“You do not need to be on the frontier of technology if you are not a technology company,” said Greg Baxter, the chief digital officer at MetLife, in his keynote address during Celent’s annual Innovation and Insight Day. “You just have to permit your people to use the technology.”
More effort should be spent on developing the various policies that will govern the deployment of the technology, he added.
MetLife spends more time on ethics and legal issues than it does on technology, according to Baxter.
Firms should be wary of implementing AI in a fashion that alienates clients by being too intrusive and ruining the customer experience. “If data is the new currency, its credit line is trust,” said Baxter.
Financial institutions want to avoid Facebook’s experience, when it continued to provide live streaming of March’s Christchurch massacre in New Zealand, only later to blame it on the company’s algorithms.
In response, New Zealand’s Privacy Commissioner John Edwards reportedly took to Twitter, calling Facebook “morally bankrupt pathological liars” for permitting the live-streaming and post-event streaming of acts of violence.
The ethical use of AI performs the same job that brakes do on automobiles, he said. “They allow people to drive faster by giving them faith in their ability to slow down. Ethics provide the same function.”
Developing and implementing the necessary ethical policy is not that difficult and should not be left to the end of the process. Ethics should be part of the initial development process and act as guardrails throughout development.
“It would prevent the ‘We really should not have done that’ experience,” said Baxter.
The most straightforward approach would be to include the ethics discussion at the start of the iterative development process, he added.
Although AI technology will continue to develop almost exponentially, a firm’s ethics regarding the technology’s use should not.
“You need to stand on your principles and define a true north,” said Baxter. “Don’t play with your ethics just because a new technology has come out.”