Anticipate the Potential Risk of Bias in AI

Addressing bias in Artificial Intelligence models can be a matter of life and death. In one example, under-representation of certain demographics in the datasets behind UnitedHealth's algorithm caused the inaccurate attribution of a lower probability of contracting certain cancers to light-skinned Americans, when the opposite was true: they were 22 times more likely to get the disease than other segments of Americans. This led to under-treatment, reduced care, and less monitoring for the disease. For this group, bias in the AI system can have the real-world repercussion of premature death. Our organization could see a similar outcome from bias within our loan program. If certain demographics are not fairly represented in the datasets, or if contextual data is missing because certain groups live in low-cost geographies or face a history of marginalized economic opportunity that gave them no chance to build assets, then those applicants could be condemned to unfair interest rates and levels of debt that ruin their credit.

Bias is introduced into an AI system by the way it is built and by the type of data used to train and evaluate it. The impact of this bias on certain demographics is that we could marginalize a significant market; people may not trust our AI-based loan application system or the other financial products and services the firm offers; we could suffer reputation damage in the industry; and we may undergo government regulatory reviews or penalties for unfair customer practices. We can address bias starting with leadership: they should establish guidelines that make stakeholders, including the diverse teams that touch the data, responsible for checking data integrity, confirming its use is appropriate, and optimizing the model without marginalizing parts of the solution set. A diverse team brings different points of view, experiences, and participants from various demographics to confirm that the model we are building and testing is fair. Vendors and third-party teams that help collect and curate data are also responsible for ensuring no bias creeps into the system, by confirming that populations affected by the digital divide and poverty are properly represented; the sketch below illustrates the kind of check this implies.
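As a concrete illustration, here is a minimal sketch of the data-integrity check such guidelines might require before a loan model is trained. The applicant table, its `group` and `approved` columns, and both thresholds are illustrative assumptions, not our production schema; the second check applies the common four-fifths rule of thumb for disparate impact.

```python
import pandas as pd

# Hypothetical loan-application data; column names are illustrative.
applications = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [ 1,   1,   0,   1,   1,   1,   0,   0,   0,   1 ],
})

# 1. Representation check: flag any demographic group whose share of
#    the training data is too small to be modeled fairly.
MIN_SHARE = 0.10  # illustrative threshold
shares = applications["group"].value_counts(normalize=True)
underrepresented = shares[shares < MIN_SHARE]
if not underrepresented.empty:
    print("Under-represented groups:", underrepresented.to_dict())

# 2. Disparate impact check: each group's approval rate divided by the
#    highest group's rate; ratios below 0.8 fail the four-fifths rule.
rates = applications.groupby("group")["approved"].mean()
ratios = rates / rates.max()
for group, ratio in ratios.items():
    status = "OK" if ratio >= 0.8 else "REVIEW: possible disparate impact"
    print(f"group {group}: approval rate {rates[group]:.2f}, "
          f"ratio {ratio:.2f} -> {status}")
```

Run before training and again at model evaluation, a check like this surfaces bias at the stages where it is cheapest to report and correct.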

Our experience is that our AI Ethics Guidelines clearly state that stakeholders, employees, and vendors are accountable for minimizing bias and optimizing AI models. Development teams for the AI models must also ensure that datasets limit conscious and unconscious bias and that appropriate constraints are placed on the model's boundaries. All parties accept this responsibility because they know that any reputation damage from a biased algorithm affects the company and everyone connected to it: clients, employees, and investors. Therefore, it is in the interest of all parties that if they see any problem with bias, whether at the stage of designing the algorithm's purpose, at the input stage of curating and loading data, or while evaluating the model, they say something and tap into the process to report model issues for review and correction.

All three parties, developers, organizers, and stakeholders, need to be held accountable for optimizing AI systems to exclude bias. If an AI system is successful, the entire team and the wider organization are lifted by the tide of its success. Likewise, if bias results in unfair practices by the AI model, the reputation of the firm suffers. There are multiple stages in building an AI model, from creating and establishing its purpose (the realm of leadership and the development team), to inputs (data scientists, analysts, curators, and vendors), to the teams handling data labeling, testing, and auditing. As such, it is appropriate that all members are held responsible for a reliable, dependable, and accurate AI system.

The struggle to communicate AI risks to stakeholders may be addressed by creating an AI ethics committee whose members span different technical and non-technical backgrounds, experiences, and disciplines to hash out a plan of approach. Clear communication is easier to achieve if efforts to share the management of risk and trust in AI systems are established and driven by goals of transparency, explainability, and broad compliance. If we cannot collaborate on a strategy to manage bias and risk in AI systems, then organizational efforts to mitigate breaches of this code will fail. The chance of success improves if we collaborate across teams of diverse disciplines, incorporating social science and domain knowledge to reduce naivety. Mitigating bias requires broad strokes of understanding, established standards of prevention, and open communication among stakeholders to control the risk of relying on garbage data when training AI models.
