Often, conversations on the topic of bias and AI are primarily concerned with prejudicial or intolerant decision-making processes hidden within the algorithms. That is a real concern, but it reduces the discourse to a binary debate: AI is either good or bad.
In this conversation about bias, our panelists take a nuanced and neutral approach. The algorithms that power AI are not inherently biased because math, on which algorithms are built, is not inherently biased. Bias, good and bad, is introduced through data, strategy, explainability, and projected expectations.
Data
AI decisions are the result of the volume and quality of the data fed to the algorithms during training. If a lot of good data is knowledge, a little bad data is ignorance. For example, certain segments of the population are not well represented in data sets. When we train AI to make creditworthiness decisions on that limited view of the world, opportunities are lost for both financial institutions and potential customers. Better decisions will come from a broader set of data.
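A minimal sketch of how the representation problem above might be surfaced before training. The segment key, toy records, and the 5% threshold are illustrative assumptions, not a prescribed method:

```python
from collections import Counter

def representation_report(records, segment_key, threshold=0.05):
    """Return the share of each segment that falls below `threshold`,
    flagging groups too thinly represented to train on reliably."""
    counts = Counter(r[segment_key] for r in records)
    total = sum(counts.values())
    return {seg: n / total for seg, n in counts.items()
            if n / total < threshold}
```

Run against a training set before model building, a report like this makes the "limited view of the world" concrete: any flagged segment is one the model will effectively be ignorant about.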
Strategy
When considering the creditworthiness of a business, bias depends on having enough of the right data, but it is also a direct effect of strategy. Some examples include:
Models may initially suggest not extending credit to a customer, but if the customer is a big brand like a ‘Microsoft’ or an ‘Amazon’, you may want to ensure they are represented in the portfolio as a strategy to build credibility in a competitive lending landscape.
If the portfolio is designed to align with environmental sustainability, the strategy may be biased against the oil and gas industry.
If you are building a portfolio that includes only certain types of technology, the strategy may be biased toward emerging technology start-ups.
The industry is doing a lot of soul searching, re-evaluating business lending strategies and learning how to streamline the decision-making process. While AI may help resolve the latter, it will not directly affect the former.
Explainability
Traditional AI has lived in a black box: data goes in, a decision comes out, but we do not know how that decision was made. Technology now exists that opens the box by exposing the algorithms and raising awareness of how they work. AI on AI helps us understand what data is filtering into different models, what algorithms are being created, and what could be construed as bias.
Platforms such as these provide monitoring, explainability, and bias detection that essentially open the black box, but the decisions past that point are up to the organization and the strategies it puts in place.
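The source does not name a specific platform, so as one hedged illustration of what "bias detection" can mean in practice, here is a toy computation of the demographic parity gap, a commonly reported fairness metric: the largest difference in approval rates between any two groups of applicants. The group labels and decisions are invented for the sketch:

```python
from collections import defaultdict

def approval_rates(decisions, groups):
    """Per-group approval rate for a batch of binary (0/1) decisions."""
    totals, approved = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        approved[g] += d
    return {g: approved[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions, groups):
    """Largest spread in approval rate between groups; 0.0 means parity."""
    rates = approval_rates(decisions, groups).values()
    return max(rates) - min(rates)
```

A metric like this only highlights what could be construed as bias; as the panel notes, deciding whether the gap reflects strategy or unfairness is up to the organization.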
Projected Expectations
Math doesn’t have bias, but when we project our expectations into the algorithm, that is when we start to see bias. Bias can be good or bad, and we hope the expectations we build into AI models lean toward the good, especially when it comes to finances.
Scoring the creditworthiness of a customer is often a binary risk mitigation exercise: is this person or business able to make payments on the requested loan, yes or no? But if we project an expectation of customer financial success, the decision becomes more nuanced. An x-dollar rather than a y-dollar loan may put the customer in a better financial position and lay the groundwork for future bank business. A positive result is a win for both parties. AI provides the opportunity to run and score the many simulations and projections needed to give sound advice.
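Running and scoring simulated offers can be sketched with a toy model. Every number here is an illustrative assumption: the 8% interest rate, the all-or-nothing default loss, and `p_repay`, a hypothetical function modeling how repayment probability falls as the loan grows:

```python
def expected_value(amount, p_repay, rate=0.08):
    """Expected outcome of one offer: interest earned if the customer
    succeeds, principal lost if they default."""
    p = p_repay(amount)
    return p * amount * rate - (1 - p) * amount

def best_loan(candidate_amounts, p_repay):
    """Score each simulated offer and return the amount with the best
    expected outcome."""
    return max(candidate_amounts, key=lambda a: expected_value(a, p_repay))
```

With a declining repayment probability, a mid-sized loan can beat both a small one (too little interest earned) and a large one (too much default risk), which is exactly the nuance the panel describes: the right amount, not just yes or no.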
Positive expectations can be baked into algorithms to create the right bias that makes people and businesses successful.