Episode 1 of 20: A Conversation About Bias

Often, conversations about bias and AI are primarily concerned with prejudicial or intolerant decision-making processes hidden within the algorithms. That is a real concern, but it limits the discourse to a binary debate: AI is either good or bad.

In this conversation about bias, our panelists take a nuanced and neutral approach. The algorithms that power AI are not inherently biased, because the math on which algorithms are built is not inherently biased. Bias, whether good or bad, is introduced through data, strategy, explainability, and projected expectations.

Data

AI decisions are the result of the volume and quality of the data fed to the algorithms during training. If a lot of good data is knowledge, a little bit of bad data is ignorance. For example, certain segments of the population are not well represented in data sets. When we train AI to make creditworthiness decisions on such data, opportunities for both financial institutions and potential customers are lost because of the model's very limited view of the world. Better decisions will come from a broader set of data.
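As a purely illustrative aside (not something the panel presented), here is a minimal sketch of how a team might surface under-representation in a training set before a creditworthiness model ever sees it. The column name, segments, and 5% threshold are hypothetical assumptions for illustration.

```python
import pandas as pd

def representation_report(df: pd.DataFrame, segment_col: str, min_share: float = 0.05) -> pd.DataFrame:
    """Report each segment's share of the training data and flag thin segments."""
    shares = df[segment_col].value_counts(normalize=True).rename("share").to_frame()
    shares["under_represented"] = shares["share"] < min_share
    return shares

# Toy data: rural applicants are only 2% of the records, so any model trained
# on this set has a very limited view of that segment.
train = pd.DataFrame({"segment": ["urban"] * 70 + ["suburban"] * 28 + ["rural"] * 2})
print(representation_report(train, "segment"))
```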

Strategy

When considering the creditworthiness of a business, bias depends on having enough of the right data, but it is also a direct effect of strategy. Some examples include (see the sketch after this list):

  • A model may initially suggest not extending credit to a customer, but if the customer is a big brand like a ‘Microsoft’ or an ‘Amazon’, you may want to ensure they are represented in the portfolio as a strategy to build credibility in a competitive lending landscape.

  • If the portfolio is designed to align with environmental sustainability, the strategy may be biased against the oil and gas industry. 

  • If you are building a portfolio that includes only certain types of technology, the strategy may be biased toward emerging technology start-ups.
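To make the point concrete, the sketch below (our illustration, not the panel's) shows how a strategy overlay can bias a model's raw recommendation after the fact. The rule names, industries, and thresholds are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    industry: str
    model_score: float  # hypothetical model output; higher means more creditworthy

STRATEGIC_BRANDS = {"Microsoft", "Amazon"}   # brands we want represented in the portfolio
EXCLUDED_INDUSTRIES = {"oil_and_gas"}        # sustainability-aligned exclusion
APPROVE_THRESHOLD = 0.6

def decide(applicant: Applicant) -> str:
    """The model scores; the strategy rules introduce the bias, good or bad."""
    if applicant.industry in EXCLUDED_INDUSTRIES:
        return "decline (strategy: sustainability)"
    if applicant.name in STRATEGIC_BRANDS:
        return "approve (strategy: portfolio credibility)"
    return "approve" if applicant.model_score >= APPROVE_THRESHOLD else "decline"

print(decide(Applicant("Amazon", "retail", 0.40)))              # approved despite a low score
print(decide(Applicant("Acme Drilling", "oil_and_gas", 0.90)))  # declined despite a high score
```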

The industry is doing a lot of soul-searching, re-evaluating business lending strategies and learning how to streamline the decision-making process. While AI may help resolve the latter, it will not directly affect the former.

Explainability

Traditional AI has lived in a black box: data goes in, a decision comes out, but we do not know how that decision was made. Technology now exists that opens the box by exposing algorithms and raising awareness of how they work. Applying AI to AI helps us understand what data is filtering into different models and what algorithms are being created, and it highlights what could possibly be construed as bias.

Platforms such as these provide the monitoring, explainability, and bias detection that essentially open the black box, but the decisions past that point are up to the organization and the strategies it puts in place.
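For a sense of what such bias detection can look like in practice, here is a minimal sketch of one common check: comparing approval rates across segments and flagging a large gap. The column names, toy data, and the 0.8 cut-off (the familiar four-fifths rule) are assumptions for illustration, not a description of any specific platform.

```python
import pandas as pd

def approval_rate_ratio(decisions: pd.DataFrame, group_col: str, approved_col: str) -> pd.Series:
    """Each group's approval rate divided by the highest-approving group's rate."""
    rates = decisions.groupby(group_col)[approved_col].mean()
    return rates / rates.max()

# Toy decision log: segment "b" is approved far less often than segment "a".
decisions = pd.DataFrame({
    "segment":  ["a", "a", "a", "b", "b", "b", "b"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})
ratios = approval_rate_ratio(decisions, "segment", "approved")
print(ratios)
print("possible bias flagged:", bool((ratios < 0.8).any()))
```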

Projected Expectations

Math doesn’t have bias; it is when we project our expectations into the algorithm that we start to see bias. Bias can be good or bad, and we hope that the expectations we put into AI models lean toward the good, especially when it comes to finances.

Scoring the creditworthiness of a customer is often a binary risk-mitigation exercise: is that person or business able to make payments on the requested loan, yes or no? But if we project an expectation of customer financial success, the decision becomes more nuanced. An x-dollar rather than a y-dollar loan may put the customer in a better financial position and lay the groundwork for future bank business. A positive result is a win for both sides. AI provides the opportunity to run and score the many simulations and projections needed to give sound advice.
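As a hedged sketch of that "run and score the simulations" idea (our illustration, not a method discussed on the panel), the snippet below scores several candidate loan amounts with a stand-in projection and recommends the largest one that still leaves the customer in a sound position. The projection formula, threshold, and numbers are all hypothetical.

```python
def projected_success(monthly_income: float, loan_amount: float, term_months: int = 60) -> float:
    """Toy projection: success falls as the payment consumes more of monthly income."""
    payment = loan_amount / term_months
    burden = payment / monthly_income
    return max(0.0, 1.0 - 2.0 * burden)

def recommend_amount(monthly_income: float, candidate_amounts: list[float], min_success: float = 0.8) -> float | None:
    """Return the largest candidate amount whose projected success clears the bar."""
    viable = [amt for amt in candidate_amounts
              if projected_success(monthly_income, amt) >= min_success]
    return max(viable) if viable else None

# With a $5,000 monthly income, the $20,000 loan clears the bar while the larger
# amounts do not, so the smaller loan is the win/win recommendation.
print(recommend_amount(monthly_income=5000, candidate_amounts=[20000, 40000, 60000]))
```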

Positive expectations can be baked into algorithms to create the right bias that makes people and businesses successful.

About This Series

Banks and other financial institutions have tripled their use of artificial intelligence (AI) for credit risk decisioning, partly as a response to the challenges of post-pandemic rebuilding. Are we doing it right?

On September 9, 2021, senior practitioners and decision makers in the financial services industry came together to examine the shortcomings and opportunities of automating the credit provisioning process.

Automated credit decisioning is designed for scale and efficiency and, in a world of blue skies and sunshine, to reduce subjective bias in the process. But the world is not perfect.

Determining the creditworthiness of an individual, a small business, or a corporate entity has real-world impacts that cannot be taken lightly. While accountability, governance, and oversight are necessary, the safeguards we employ should not impair progress.

So, what is progress? Through 90 engaging minutes of conversation, this panel worked to answer that question by focusing on three key factors of AI: people, data, and algorithms. They also explored how an Independent Audit of AI Systems (IAAIS) can serve as a framework for achieving progress within the context of ethics, bias, privacy, trust, and cybersecurity.

Quite a lot was covered in ninety minutes, so for your on-demand convenience, we have broken it down into twenty bite-sized pieces. Enjoy!