Making AI Explainability better – what could/should things look like?
KK Gupta
13 Jul 2022
RegTech
There’s no doubt that AI algorithms can be highly complex and sophisticated and, for that reason, valuable. Many are deployed in a black-box environment, where their workings are not visible and their value is kept within the firm rather than made available elsewhere. While this might make sense in commercial and intellectual property terms, it goes against the grain in a highly regulated financial services industry where procedural and technical transparency is central to the compliance framework.
Moreover, there has been an almost willing readiness to dismiss AI as unexplainable or opaque and happily consign it to a black box.
But not only does that clash with the precepts of governance frameworks, which require decisions to be supported by evidence; it is also short-sighted. Only by learning how the AI works, and being able to understand and demonstrate the techniques used, can the industry develop and learn. This is particularly important in a firm-wide risk framework, where a company needs to know how robust, traceable, and defensible a model is, and thus whether it is worthy of trust and further development.
For this to happen, the black box needs to become a white-box model, where the outcome is explainable by design rather than through bolt-on capabilities. Testing is an integral part of such a system: the programming itself is designed to incorporate test cases.
In this way, algorithms can be configured with a set of controls to ensure automated decisions are aligned with risk profiles and regulatory expectations. By choosing a technology that is designed as a white box, something useful and explainable is easier to achieve than by going it alone. A white box enables clients to make better use of AI technology because they understand it better; the purchaser, in effect, has the blueprints. This in turn leads to more sustainable use of the software, because clients can build new processes without resorting to a vendor’s professional services team to unlock the inner workings of the technology.
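As a purely illustrative sketch of what "explainable by design" can mean in practice, consider a screening function whose controls are explicit parameters and whose output records every rule that fired. The rule names, thresholds, and country code below are hypothetical, not taken from any real product or regulatory list; the point is only that the decision and its evidence come out together, and that test cases sit alongside the logic itself.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """An automated decision together with the evidence that produced it."""
    approved: bool
    triggered_rules: list = field(default_factory=list)

def screen_transaction(amount, country, risk_score,
                       amount_limit=10_000,
                       high_risk_countries=("XX",),
                       risk_threshold=0.8):
    """White-box screening: every control that fires is recorded,
    so the outcome can be traced back to explicit, configurable rules."""
    triggered = []
    if amount > amount_limit:
        triggered.append(f"amount {amount} exceeds limit {amount_limit}")
    if country in high_risk_countries:
        triggered.append(f"country {country} is on the high-risk list")
    if risk_score > risk_threshold:
        triggered.append(f"risk score {risk_score} above threshold {risk_threshold}")
    return Decision(approved=not triggered, triggered_rules=triggered)

# Test cases as an integral part of the design, not an afterthought.
assert screen_transaction(500, "GB", 0.1).approved
blocked = screen_transaction(50_000, "XX", 0.9)
assert not blocked.approved and len(blocked.triggered_rules) == 3
```

Because each threshold is a named parameter and each outcome carries its reasons, a compliance team can inspect, tune, and evidence the controls without needing the vendor to decode the model for them.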
Companies can also choose to work closely with a vendor in a partnership approach, where the vendor is able to advise and coach the firm to get the best out of the technology. This is a desirable trait for a vendor to have whether the financial services firm is experienced or not: two heads are nearly always better than one.