Stop Bias Before It Starts: Why AI Gatekeepers and Generative AI Applications Like ChatGPT Need Bias Prevention Centers
In an era when business environments are constantly changing because of digital transformation, job uncertainty, labor shortages, and pandemic-era hybrid and part-time work arrangements, AI-powered algorithmic automation has taken on the role of managerial gatekeeper.
While some business leaders argue this is the best time to invest in AI automation and digital transformation, it is equally important to invest in innovative bias prevention centers.
There is growing interest among tech venture capitalists (VCs) in investing in generative AI applications. Yet cautionary tales abound: when Microsoft's chatbot Tay came to market in 2016, it was shut down within a day after users prompted it to post offensive content, and many have raised concerns about how the AI avatar app Lensa generates images of Asian women.
Bias prevention would help high-performance organizations and tech giants achieve their goal of making their organizations more diverse and inclusive. The White House recently stressed the need to make AI algorithms safer and to prevent AI bias as more businesses move to AI-based systems. And with the recent launch of ChatGPT, an AI platform developed by OpenAI that can answer questions on just about anything, from finding a simple food recipe to learning how to code a website, some commentators say it could revolutionize search and outperform Google.
Yet there are concerns that such AI chatbots can do more harm than good, and thus require specialized, advanced supervision to prevent harm and bias in such advanced AI applications.
The informational data used to develop apps, chatbots, and automated services requires gatekeepers. Historically, the gatekeeper is a concept Prof. Thomas Allen of MIT's Sloan School of Management introduced in 1969, recognizing that gatekeepers are essential in controlling the flow of information both out of and into an organization.
But today's gatekeepers may be AI agents, specialized generative AI chatbots, powerful AI algorithms, or algorithmic automation. They are more efficient than humans, cheaper to implement, and more reliable.
Several industries use AI algorithms as gatekeepers, including banks that use AI-automated systems to screen individuals and determine their eligibility for loans and mortgages and to determine credit card limits. There are pros and cons to this approach.
AI algorithms function as gatekeepers for bank loan systems, efficiently approving or rejecting an application within minutes. Applications are available to customers 24/7, year-round, which can benefit both banks and customers.
AI-powered algorithms provide convenience to customers, lower the cost of operations, and lower the chances of mistakes. According to an Accenture report, “banks can achieve a two to five times increase in the volume of interactions or transactions with the same headcount.”
However, there are concerns about potential racial bias embedded in these algorithms. According to a Bloomberg report, some banks approved only 47% of homeownership applications from Black applicants, compared with 72% from White applicants. If the gatekeeper algorithms are trained on biased data (for example, a dataset in which African Americans are overrepresented among low FICO scores), there is a higher chance that members of certain low-income and racial groups may never be approved.
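The approval-rate gap described above can be made concrete with a few lines of code. This is a minimal sketch with invented sample decisions, not real lending data; the group names and numbers are hypothetical.

```python
# Hypothetical illustration of measuring an approval-rate gap between groups.
# The decision lists below are invented for demonstration only.

def approval_rate(decisions):
    """Fraction of applications approved (decisions are True/False)."""
    return sum(decisions) / len(decisions)

# Invented sample decisions per demographic group
approvals = {
    "group_a": [True, True, False, True, False, True, True, False, True, True],    # 7 of 10
    "group_b": [True, False, False, True, False, False, True, False, False, True], # 4 of 10
}

rates = {g: approval_rate(d) for g, d in approvals.items()}
gap = max(rates.values()) - min(rates.values())
print(rates, f"gap={gap:.0%}")
```

A regulator or internal auditor could run a check like this on real decision logs; a persistent gap of this size would be a signal to examine the training data and features.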
Many hospitals and clinics use AI algorithms to triage patients more efficiently and route them to the appropriate physicians, effectively using AI as a managerial gatekeeper. The Mayo Clinic, for example, used an AI triage system during the pandemic to determine, based on urgency, which patients needed to come to the hospital. If such a system is biased, certain populations may not get access to care.
Almost all of the big tech giants use conversational AI chatbots, which serve as gatekeepers, in their retail businesses. One prominent example is Amazon, which uses chatbots to help customers track down lost and delayed packages. If a voice detection system is biased, service may be efficient only for certain groups, depending on accent or race.
Since organizations and tech giants using AI-automated gatekeepers must avoid bias, there is a dire need for innovative AI bias prevention and screening centers. Such a center would help prevent bias from the start and would create new jobs in the industry. Attention should focus on the algorithms most likely to carry embedded human biases.
An innovative AI bias prevention center would include three departments. The first would be a data screening and bias detection department that searches the data itself for bias. Data scientists would need to hire and train a diverse team responsible for developing the raw data collection and screening protocols. The biases in an algorithm originate in the data used to feed and train it for certain tasks, so the data needs to be screened from the start. This would be a crucial step to detect bias before it enters the system.
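One simple screening protocol such a department might run is a representation check: before training, flag any demographic group that makes up too small a share of the dataset. The sketch below is a hypothetical illustration; the threshold and the mini-dataset are invented, not an industry standard.

```python
from collections import Counter

def screen_representation(records, group_key, min_share=0.2):
    """Flag groups whose share of the training data falls below a threshold.
    `min_share` is an illustrative policy knob chosen for this example."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items() if n / total < min_share}

# Invented mini-dataset: group "c" is underrepresented at 10%
data = [{"group": "a"}] * 45 + [{"group": "b"}] * 45 + [{"group": "c"}] * 10

print(screen_representation(data, "group"))  # {'c': 0.1}
```

A flagged group would trigger further data collection or reweighting before any model is trained on the dataset.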
The second, an algorithm monitoring department, would develop bias detection software tools after the algorithm is built. Specialized responsible-AI software tools capable of detecting bias in the system could, for example, filter the images generated by apps like Lensa or flag hallucinations in applications like ChatGPT.
Finally, the third would be an algorithm testing and human supervision department. Algorithms often operate as black boxes, which presents a hurdle, but there are creative ways to detect bias. A diverse and inclusive team would be tasked with designing, performing, and analyzing experiments and protocols for testing and checking the gatekeeper algorithms. The organization's leadership team can be part of this supervision to prevent bias in a more efficient and regulated manner.
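One creative black-box testing protocol is a counterfactual audit: submit copies of the same application that differ only in a sensitive attribute and check whether the gatekeeper's answer changes. The sketch below is hypothetical; the toy model, field names, and values are invented to illustrate the idea, and a real audit would query the production system instead.

```python
def counterfactual_audit(model, application, sensitive_key, values):
    """Query a black-box model with copies of one application that differ
    only in the sensitive attribute; differing outputs suggest bias."""
    outcomes = {}
    for v in values:
        probe = dict(application, **{sensitive_key: v})
        outcomes[v] = model(probe)
    return len(set(outcomes.values())) > 1, outcomes

# Toy stand-in for a gatekeeper that (wrongly) conditions on the attribute
def toy_model(app):
    return "approve" if app["income"] > 50 and app["group"] != "b" else "reject"

biased, details = counterfactual_audit(
    toy_model, {"income": 60, "group": "a"}, "group", ["a", "b"])
print(biased, details)  # True {'a': 'approve', 'b': 'reject'}
```

Because the audit needs only inputs and outputs, the testing team can run it without opening the black box, which is exactly the constraint this department faces.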
At a time when most tasks will be automated to save money and provide faster, more convenient, and more efficient services, there is an urgent need for AI bias prevention centers that can make existing gatekeepers, and the new specialized chatbots and apps entering the market, more widely accepted, efficient, transparent, and inclusive.
AUTHOR: Sahar Hashmi, MD-PhD, is CEO of Myriad Consulting LLC, a Faculty Instructor at Harvard University, and a 2022 Titan Business Award winner.