AI: Navigating bias and building trust will take time

AI warrants an industry-wide tectonic shift, believe experts
Generative AI raises legal and ethical questions

From AI hallucinations to AI bias, this new technology is raising all sorts of ethical considerations. As AI takes over the world, we speak to experts to understand how businesses can navigate these murky waters.

Terrence Alsup, lead data scientist, Finastra, says that while it is easy for humans to recognize bias, it’s difficult to train machines to do so.

One popular technique that has been used to steer the output of LLMs (large language models) like ChatGPT is reinforcement learning from human feedback (RLHF), says Alsup. “RLHF works by including a human in the loop to evaluate sample LLM responses and rank them based on how appropriate they are, with a specific focus on eliminating bias.”
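To make that concrete, here is a minimal sketch of the preference-ranking step Alsup describes: human rankings of paired responses are distilled into a simple reward model that scores future outputs. The feature vectors, dataset and reward function below are toy stand-ins invented for illustration, not any vendor's actual RLHF pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset of human judgements: each pair is
# (features of the response the annotator preferred, features of the rejected one).
# In a real RLHF pipeline these would be embeddings of full LLM responses.
pairs = [(rng.normal(size=8) + 0.5, rng.normal(size=8)) for _ in range(200)]

w = np.zeros(8)   # linear reward model: reward(x) = w @ x
lr = 0.1

# Fit the reward model with a pairwise (Bradley-Terry style) logistic loss,
# so preferred responses end up with higher reward than rejected ones.
for _ in range(100):
    grad = np.zeros(8)
    for chosen, rejected in pairs:
        margin = w @ chosen - w @ rejected
        p = 1.0 / (1.0 + np.exp(-margin))          # P(annotator prefers "chosen")
        grad += (p - 1.0) * (chosen - rejected)    # gradient of -log p
    w -= lr * grad / len(pairs)

# The fitted reward model can now rank new candidate responses; the LLM is then
# fine-tuned (for example with PPO) to produce responses this model scores highly.
candidate_a, candidate_b = rng.normal(size=8) + 0.5, rng.normal(size=8)
print("prefer A" if w @ candidate_a > w @ candidate_b else "prefer B")
```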

Tackling AI bias

Eric Prugh, chief product officer, Authenticx, offers three ways to limit bias in AI. First, he suggests companies invest in training the people who train their AI models. Second, companies should develop an intimate understanding of their training data. Finally, they should build checks and balances to identify and challenge bias appearing in results.

“While it’s unlikely that bias will ever be completely eliminated, organizations can minimize its impact by acknowledging that fact, and taking steps to monitor bias,” says Prugh.

However, Lilith Bat-Leah, VP-data services, Mod Op, says we need to take a more nuanced look at bias.

“Bias can actually be optimal for certain use cases if applied appropriately,” asserts Bat-Leah.

Using the example of medical diagnosis models, she says some of them perform better on certain subsets of the population. Rather than one model that works for everyone, it’d be better to have different models for men and women. In this case, she says, bias might be acceptable. 
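A rough sketch of her point, using made-up data and a deliberately simple threshold "model", is below: when two groups genuinely differ, separate per-group models can beat a single model trained on everyone. None of the numbers here come from any real diagnostic system.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(shift, n=500):
    """Toy biomarker readings for one group; the true diagnostic cut-off
    sits at 5 + shift, so it differs between groups."""
    x = rng.normal(loc=5 + shift, scale=1.0, size=n)
    y = (x > 5 + shift).astype(int)
    return x, y

x_men, y_men = simulate(shift=0.0)
x_women, y_women = simulate(shift=1.0)

# One global cut-off fitted on everyone...
x_all, y_all = np.concatenate([x_men, x_women]), np.concatenate([y_men, y_women])
global_acc = ((x_all > np.median(x_all)).astype(int) == y_all).mean()

# ...versus a separate cut-off per group.
split_acc = (((x_men > np.median(x_men)).astype(int) == y_men).mean() +
             ((x_women > np.median(x_women)).astype(int) == y_women).mean()) / 2

print(f"single model for everyone: {global_acc:.2f}")
print(f"separate per-group models: {split_acc:.2f}")
```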

“Rather than trying to create generally unbiased AI, it may be more constructive to assess potential harms from bias on a case-by-case basis,” believes Bat-Leah. “Completely eradicating bias from AI is an ill-defined and ambitious goal, but efforts to mitigate harm from bias are ongoing.”

Would regulations help?

Our experts all agree that completely removing bias in AI is an uphill task. However, when it comes to regulating the sector, they aren’t all on the same page.

Favoring regulations, Alsup suggests a few different ways to introduce them.

“For example, we could limit what training data is allowed, or require that certain filters are in place on the output,” suggests Alsup. “Another option is that the model must achieve a certain score on a benchmark test where any bias is evaluated by a panel of humans.”
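As a very rough illustration of what two of those mechanisms could look like in practice, the sketch below pairs a simple output filter with a pass/fail check against human panel scores. The blocklist, the 0-to-1 scoring scale and the 0.8 pass mark are all invented for the example; they are not drawn from any actual regulation or from Finastra's work.

```python
BLOCKED_TERMS = {"offensive_term_1", "offensive_term_2"}   # placeholder filter list
PASS_MARK = 0.8                                            # hypothetical required score

def output_filter(response: str) -> str:
    """Withhold responses containing disallowed terms before they reach the user."""
    if any(term in response.lower() for term in BLOCKED_TERMS):
        return "[response withheld: failed output filter]"
    return response

def passes_bias_benchmark(panel_scores: list[float]) -> bool:
    """Each panellist rates sample outputs from 0 (heavily biased) to 1 (unbiased);
    the model is cleared only if its average score reaches the pass mark."""
    return sum(panel_scores) / len(panel_scores) >= PASS_MARK

print(output_filter("A perfectly ordinary answer."))
print(passes_bias_benchmark([0.9, 0.85, 0.7, 0.95]))   # average 0.85 -> True
```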

Dipak Patel, CEO, GLOBO Language Solutions, has a different viewpoint. He points to a 2019 study of a US healthcare algorithm that ranked patients by their cost of care, a proxy that discriminated against Black patients. When researchers re-ranked patients by how sick they actually were, the percentage of Black patients flagged to receive extra care jumped from 17.7 percent to 46.5 percent.
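The mechanics of that failure are easy to reproduce in miniature. In the sketch below, ranking patients by past spending (a proxy for need) selects a different group than ranking them by how sick they are; all the numbers are invented for illustration and are not the study's data.

```python
# (name, chronic_conditions, past_spending_usd) - invented example values
patients = [
    ("A", 5, 3_000),   # very sick, but little money was spent on their care
    ("B", 1, 9_000),   # relatively healthy, but expensive to treat
    ("C", 4, 4_000),
    ("D", 2, 8_000),
]

by_cost    = sorted(patients, key=lambda p: p[2], reverse=True)
by_illness = sorted(patients, key=lambda p: p[1], reverse=True)

# Picking the "top two" for extra care gives different patients under each proxy.
print({p[0] for p in by_cost[:2]})     # {'B', 'D'} - selected by spending
print({p[0] for p in by_illness[:2]})  # {'A', 'C'} - selected by actual illness
```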

Based on this, Patel believes companies that want to succeed with AI adoption should focus on three things. First, they should ensure their data sets are differentiated, balanced, and comprehensive. Next, they should establish a proven way to interpret data fairly and equally, one that excludes inherent bias. Third, they should make sure their AI serves the whole population rather than only certain parts of it.

“Stringent regulations are not needed to address these problems because the market will self-correct,” believes Patel.

Should companies hide behind legalese?

Recently, an Air Canada customer was misled by the airline’s AI bot. In court, the airline first argued “[the customer] never should have trusted the chatbot”. It then said that “the chatbot is a separate legal entity that is responsible for its own actions”. 

Our experts all criticized Air Canada’s stance and agreed companies shouldn’t try to shield their AI behind legal gobbledygook.

Attorney Adam Rosenblum, owner, Rosenblum Law, explains that the use of detailed legal agreements, like terms of service and privacy policies, is standard practice across industries, not solely within the realm of generative AI. He says these documents inform users about their rights and obligations, and also protect the company from potential liabilities.

But he says it’s unethical for companies to hide behind these. “So if a company is burying important information in legal mumbo-jumbo in order to mislead the general public into thinking their technology is safe and/or accurate, they need to be taken to task for that in my opinion,” says Rosenblum. 

Can AI be trusted?

Emin Can Turan, founder & CEO, Pebbles Ai, says that companies shouldn’t hide behind legalese to escape responsibility. However, he says as we transition to AI, we should extend some leeway to businesses adopting this new tech. 

“These bots are as good as the humans that have built them,” says Turan. He suggests users should, at least for the time being, double-check everything rather than accepting it at face value. 

“This is a novel technology. Even the developers themselves cannot exactly predict the generated output,” says Turan.
