Artificial Intelligence (AI) ethics: what does it mean for digital lenders?

The RBI wants digital lenders to adhere to ethical AI standards. But what does this mean in practice? In this article, we explore how digital lenders can adopt ethical AI standards before strict regulation disrupts business.

“When we first started at Upstart, we couldn’t know for sure whether our model would be biased. It wasn’t until loans were originated that we were able to demonstrate that our platform was fair. It was a risk worth taking, but not a risk a major bank would have considered.”

That was Upstart’s CEO testifying before the US Congress in 2019. Upstart was the first digital lender to receive a “no action letter” from the Consumer Financial Protection Bureau (CFPB). The letter allowed Upstart to undertake AI-powered credit underwriting without risking penalties under anti-discrimination laws. As part of the agreement, Upstart also agreed to regularly share data on its AI model with the CFPB, including a comparison of the AI model’s outcomes against those of traditional underwriting processes. The data showed that, compared with traditional processes, everyone, including disadvantaged groups, was better off when Upstart’s AI model was used. This story is a helpful reminder that while AI-based decision making is far from perfect, it can still be better than what we have now. If AI models can at least improve the status quo, we shouldn’t let the perfect become the enemy of the good.

Closer to home, the RBI may soon require digital lenders to comply with ethical AI standards. It signaled this in its August 10 press release on the implementation of the digital lending guidelines: ethical AI standards must ensure transparency, impartiality, inclusiveness, reliability, security and confidentiality. The RBI has accepted this recommendation in principle, though it is not immediately applicable, and will deliberate further before notifying more specific requirements.

Digital lenders currently use AI for processes such as credit risk assessment, loan application processing, collections and follow-up, customer support and fraud prevention. With the Account Aggregator framework gaining ground, the volume of data available for digital lending will only increase, and so will the use of AI to process it. So before regulators take a hardline stance, the digital lending industry needs to proactively get its house in order.

It is difficult to retrofit ethical AI standards onto an AI model after it has been designed and deployed. Digital lenders should therefore prepare for the upcoming regulations now. Here are some best practices to help digital lenders understand how ethical AI principles can be implemented in their business.

  1. Selection of use cases – Not all use cases are suitable for AI-based decision making. Digital lenders should assess whether the use of AI is appropriate based on (a) the importance of the function, (b) the consequences of errors, (c) the suitability of the available datasets and (d) the capability of AI models.

  2. Data quality – AI models rely on a large number of data points, ranging from the device operating system to font choices, for credit scoring. It is therefore crucial to ensure that the data fed into AI systems is accurate and representative. The RBI has also emphasized this requirement in its digital lending guidelines. Alternative data may be inaccurate because these data points may have been sourced from third parties who originally collected them for unrelated purposes. In addition, data may not be representative if it does not include certain demographic groups. For example, since women have historically had less access to financial services, existing datasets may contain much less information about women than about men. This, in turn, could affect the ability of the AI model to make accurate predictions about women’s financial behavior. Data may also be unrepresentative if it does not account for different stages of the economic cycle, such as recessions. (A minimal sketch of a representativeness check appears after this list.)

  3. Fairness – AI models should not produce biased results that discriminate against disadvantaged groups like women, religious minorities, oppressed castes, people with disabilities, etc. This is easier said than done because even if gender, religion, caste, disability status, etc. are not directly used, AI models can rely on other variables that act as proxies. For example, a person may be considered less creditworthy not because they are female but because they read an online magazine that is read primarily by women. However, technical solutions such as pre-processing the data used to train the AI, assessing counterfactual fairness, post-processing adjustments, and introducing an adversarial AI system to counter biases are being explored to make AI systems fairer. (A sketch of one common fairness screen appears after this list.)

  4. Transparency – The explainability of decisions made by AI is an ongoing effort. Even the creators of AI models struggle to explain how different inputs interact with each other to generate an output. But it is still important for digital lenders to explain how their AI systems work, where possible. For example, some industry players publish model cards, which are similar to nutrition labels. These model cards provide information about the AI model’s training process, evaluation factors, intended uses, and known biases. (A sketch of a simple model card appears after this list.)

  5. Human oversight – Human review is essential to compensate for the shortcomings of any AI model. This is especially important for edge cases the AI has not been trained on. For example, if the dataset used to train the AI model had less female representation, human oversight should be greater when the AI model assesses a female client. Depending on the use case, the role of the AI model may be limited to offering recommendations that must be approved by a human. These human approvers must also be trained to guard against the tendency to over-rely on or misinterpret decisions made by the AI. (A sketch of such a human-in-the-loop gate appears after this list.)

  6. Impact analysis and audits – Before deploying an AI model, digital lenders should carry out an impact assessment. This exercise should be repeated periodically, even after the AI model has been deployed. Since AI models are self-learning, regular audits are also necessary to detect model drift and identify the need for retraining. The RBI’s digital lending guidelines also require AI models to be auditable so that minimum underwriting standards and discriminatory factors can be identified. (A sketch of a common drift check appears after this list.)

  7. Governance – Digital lenders must create an internal system of checks and balances for the use of AI. Individuals and/or committees should be tasked with creating and implementing ethical AI standards that include key performance indicators. Cross-functional collaboration between technical, business and legal teams is also necessary for effective AI governance.
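
On point 2 (data quality), here is a minimal sketch of how a lender might flag under-represented groups in a training dataset before model training. The pandas DataFrame, the `gender` column and the 10% threshold are all illustrative assumptions, not requirements drawn from the RBI guidelines.

```python
import pandas as pd

def flag_underrepresented_groups(df: pd.DataFrame,
                                 column: str,
                                 min_share: float = 0.10) -> list:
    """Return the values in `column` whose share of rows falls
    below `min_share`. Hypothetical pre-training data review."""
    shares = df[column].value_counts(normalize=True)
    return shares[shares < min_share].index.tolist()

# Illustrative usage with made-up applicant data.
applicants = pd.DataFrame({"gender": ["M"] * 92 + ["F"] * 8})
print(flag_underrepresented_groups(applicants, "gender"))  # ['F']
```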
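
On point 3 (fairness), one widely used screen is the disparate impact ratio: the approval rate of the least favoured group divided by that of the most favoured group. This is a sketch only; the 0.8 threshold mentioned in the comment echoes the US “four-fifths rule” and is an illustrative benchmark, not an Indian regulatory standard.

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame,
                           group_col: str,
                           approved_col: str) -> float:
    """Ratio of the lowest group approval rate to the highest.
    Near 1.0 suggests parity; below ~0.8 warrants closer review."""
    rates = df.groupby(group_col)[approved_col].mean()
    return rates.min() / rates.max()

# Illustrative usage with made-up lending decisions.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   0,   1,   1,   1,   1,   0,   1],
})
print(round(disparate_impact_ratio(decisions, "group", "approved"), 2))  # 0.83
```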
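
On point 4 (transparency), a model card can be as simple as a structured record published alongside the model. The fields below loosely follow the model card format popularized in the machine learning literature; the field names and values are illustrative.

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class ModelCard:
    """Illustrative model card for a credit underwriting model."""
    model_name: str
    training_data: str        # provenance of the training dataset
    evaluation_metrics: dict  # e.g. AUC, disparate impact ratio
    intended_use: str
    known_limitations: list = field(default_factory=list)

card = ModelCard(
    model_name="underwriting-v2",
    training_data="Personal loan applications, Jan 2020 to Dec 2021",
    evaluation_metrics={"auc": 0.81, "disparate_impact": 0.86},
    intended_use="Credit risk scoring for unsecured personal loans",
    known_limitations=["Lower accuracy for thin-file applicants"],
)
print(json.dumps(asdict(card), indent=2))
```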
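
On point 5 (human oversight), oversight can be wired into the decision path itself: the model’s output is treated as a recommendation, and low-confidence or edge-case applications are routed to a human underwriter. The confidence floor and the routing rule below are assumptions made for illustration.

```python
def route_decision(model_score: float,
                   confidence: float,
                   confidence_floor: float = 0.85) -> str:
    """Auto-decide only when the model is confident; otherwise
    send the application to a human underwriter for review."""
    if confidence < confidence_floor:
        return "human_review"
    return "auto_approve" if model_score >= 0.5 else "auto_decline"

print(route_decision(model_score=0.7, confidence=0.9))  # auto_approve
print(route_decision(model_score=0.7, confidence=0.6))  # human_review
```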
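
On point 6 (impact analysis and audits), a common audit check in credit modeling is the Population Stability Index (PSI), which compares the distribution of live scores against the distribution seen at training time; large values signal drift and a possible need for retraining. The bin count and the conventional alert thresholds in the comment are illustrative.

```python
import numpy as np

def population_stability_index(expected: np.ndarray,
                               actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a baseline score distribution and a live one.
    Rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # capture out-of-range scores
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) and division by zero
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Illustrative usage: simulated training-time vs drifted live scores.
rng = np.random.default_rng(0)
baseline = rng.normal(600, 50, 10_000)
live = rng.normal(580, 60, 10_000)
print(round(population_stability_index(baseline, live), 3))
```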

The content of this article is intended to provide a general guide on the subject. Specialist advice should be sought regarding your particular situation.
