A practical approach to realizing the benefits without the headaches!


There is significant growth in the application of machine learning (ML) and artificial intelligence (AI) techniques within collections, as they have been proven to create countless efficiencies: from enhancing the results of predictive models, to powering AI bots that interact with customers, leaving staff free to address more complex issues. At present, one of the major constraining factors to using this advanced technology is the difficulty of explaining the decisions made by these solutions to regulators. This regulatory focus is unlikely to diminish, especially given the examples of AI bias that continue to be uncovered across a range of applications, resulting in discriminatory behavior towards different groups of people.

While collections-specific regulations remain somewhat undefined on the subject, major institutions are falling back on their broader policy: namely, that any decision must be fully explainable. Although there are explainable artificial intelligence (xAI) techniques that can help us gain deeper insights from ML models, such as FICO’s xAI Toolkit, the path to achieving sign-off within an organization can be a challenge.

The winners of the 2018 Explainable Machine Learning Challenge may provide a solution to realizing the benefits of AI / ML without the associated headache!

The challenge was a collaboration between Google, FICO and academics at Berkeley, Oxford, Imperial, UC Irvine and MIT, in which teams of researchers were challenged to create and explain a black-box model. The team from Duke University, awarded the FICO Recognition Award for the submission detailed in their blog post, We Didn't Explain the Black Box — We Replaced it with an Interpretable Model, took a different approach: in essence, they didn't use a black-box model at all, but a traditional, explainable one instead.

Machine Learning as the teacher for predictive modeling

By developing an ML-based model in parallel with a more traditional interpretable model, the data scientist or analytical modeler can ‘learn’ from the outputs of the ML solution. For example, the ML solution can highlight:

  1. Alternative modeling approaches (e.g., a decision tree-based approach or alternative statistical functions)
  2. New and interesting attributes to include within the model, including key complex interactions between attributes
  3. Novel ways of binning attributes (e.g., finding a new way to group a range of more or less continuous values into a smaller number of 'bins', which are then used as new data groups)

The modeler can incorporate these ML learnings into their traditional interpretable model and attempt to ‘chase’ the superior predictive power of the ML model, as sketched in the example below. It’s a never-ending game, where each round yields new intelligence as well as new challenges.
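To make the ‘teacher / student’ idea concrete, here is a minimal, hypothetical sketch in Python using scikit-learn. It is not from the original article: the data, column names (e.g., days_past_due, balance, prior_broken_promises) and model choices are illustrative assumptions only. A gradient-boosted model acts as the ML teacher, shallow decision trees suggest cut points for binning each attribute, and a logistic regression on those bins plays the interpretable model whose coefficients can be explained attribute by attribute.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data so the sketch runs end to end; real collections
# attributes and the 'paid' outcome would come from the organization's own data.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "days_past_due": rng.integers(0, 180, n),
    "balance": rng.gamma(2.0, 800.0, n),
    "prior_broken_promises": rng.integers(0, 5, n),
})
risk = 0.02 * df["days_past_due"] - 0.0005 * df["balance"] + 0.4 * df["prior_broken_promises"]
df["paid"] = (rng.random(n) < 1 / (1 + np.exp(risk))).astype(int)

X, y = df.drop(columns="paid"), df["paid"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

# 1) The ML 'teacher': a black-box model that sets the predictive benchmark
#    and highlights which attributes carry the most signal.
teacher = GradientBoostingClassifier(random_state=42).fit(X_tr, y_tr)
print("teacher AUC :", roc_auc_score(y_te, teacher.predict_proba(X_te)[:, 1]))
print("importances :", dict(zip(X.columns, teacher.feature_importances_.round(3))))

# 2) Borrow binning ideas: a shallow tree per attribute suggests cut points
#    that the modeler can review and adopt in the interpretable model.
def suggested_bins(col, max_bins=4):
    tree = DecisionTreeClassifier(max_leaf_nodes=max_bins, random_state=42)
    tree.fit(X_tr[[col]], y_tr)
    cuts = sorted(t for t in tree.tree_.threshold if t != -2)  # -2 marks leaf nodes
    return [-np.inf] + cuts + [np.inf]

bins = {c: suggested_bins(c) for c in X.columns}
binned_tr = pd.DataFrame({c: pd.cut(X_tr[c], bins[c], labels=False) for c in X.columns})
binned_te = pd.DataFrame({c: pd.cut(X_te[c], bins[c], labels=False) for c in X.columns})

# 3) The interpretable 'student': a logistic regression on the ML-suggested bins,
#    whose coefficients can be laid out for a regulator attribute by attribute.
d_tr = pd.get_dummies(binned_tr.astype(str))
d_te = pd.get_dummies(binned_te.astype(str)).reindex(columns=d_tr.columns, fill_value=0)
student = LogisticRegression(max_iter=1000).fit(d_tr, y_tr)
print("student AUC :", roc_auc_score(y_te, student.predict_proba(d_te)[:, 1]))
```

In practice the same pattern applies with whatever scorecard or regression tooling the organization already signs off on: the ML teacher only supplies candidate attributes, interactions and cut points, and the modeler decides which of them to carry into the explainable model on each round of the ‘chase’.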


The end result of this approach is a model whose decisions can be easily explained to a regulator. In addition, the risks associated with AI bias are mitigated, as the inputs and calculations within the model continue to be understood by the analytical modeler. In summary, this solution provides a great interim step, realizing immediate benefits from ML techniques while an organization becomes comfortable with the use of a fully functioning, explainable AI / ML solution.

---

This article previously appeared on FICO's Analytics and AI Blog and was republished here with permission.

