Can we rid artificial intelligence of bias?

By Natalie Fisher
May 18, 2024
in Tech

Google chief executive Sundar Pichai speaks during the tech titan’s annual I/O developers conference on May 14, 2024, in Mountain View, California. ©AFP

San Francisco (AFP) – Artificial intelligence built on mountains of potentially biased information has created a real risk of automating discrimination, but is there any way to re-educate the machines?

The question for some is extremely urgent. In this ChatGPT era, AI will generate more and more decisions for health care providers, bank lenders or lawyers, using whatever was scoured from the internet as source material. AI's underlying intelligence, therefore, is only as good as the world it came from, as likely to be filled with wit, wisdom and usefulness as with hatred, prejudice and rants.

“It’s dangerous because people are embracing and adopting AI software and really depending on it,” said Joshua Weaver, Director of Texas Opportunity & Justice Incubator, a legal consultancy. “We can get into this feedback loop where the bias in our own selves and culture informs the bias in the AI and becomes a sort of reinforcing loop,” he said.

Making sure technology more accurately reflects human diversity is not just a political choice. Other uses of AI, like facial recognition, have seen companies thrown into hot water with authorities for discrimination. This was the case against Rite-Aid, a US pharmacy chain, where in-store cameras falsely tagged consumers, particularly women and people of color, as shoplifters, according to the Federal Trade Commission.

– ‘Got it wrong’ –

ChatGPT-style generative AI, which can create a semblance of human-level reasoning in just seconds, opens up new opportunities to get things wrong, experts worry. The AI giants are well aware of the problem, afraid that their models can descend into bad behavior, or overly reflect a western society when their user base is global.

“We have people asking queries from Indonesia or the US,” said Google CEO Sundar Pichai, explaining why requests for images of doctors or lawyers will strive to reflect racial diversity. But these considerations can reach absurd levels and lead to angry accusations of excessive political correctness. This is what happened when Google’s Gemini image generator spat out an image of German soldiers from World War Two that absurdly included a black man and Asian woman. “Obviously, the mistake was that we over-applied…where it should have never applied. That was a bug and we got it wrong,” Pichai said.

But Sasha Luccioni, a research scientist at Hugging Face, a leading platform for AI models, cautioned that "thinking that there's a technological solution to bias is kind of already going down the wrong path." Generative AI is essentially about whether the output "corresponds to what the user expects it to," and that is largely subjective, she said.

The huge models on which ChatGPT is built “can’t reason about what is biased or what isn’t so they can’t do anything about it,” cautioned Jayden Ziegler, head of product at Alembic Technologies. For now at least, it is up to humans to ensure that the AI generates whatever is appropriate or meets their expectations.

– ‘Baked in’ bias –

But given the frenzy around AI, that is no easy task. Hugging Face has about 600,000 AI or machine learning models available on its platform. “Every couple of weeks a new model comes out and we’re kind of scrambling in order to try to just evaluate and document biases or undesirable behaviors,” said Luccioni.
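The kind of evaluation Luccioni describes often starts with simple probes that compare a model's outputs across otherwise identical prompts. Below is a minimal sketch of such a spot check, assuming the Hugging Face transformers library and the bert-base-uncased checkpoint; the prompts and the probe itself are illustrative, not Hugging Face's documented audit procedure.

```python
# Minimal bias probe: compare a masked language model's top completions
# for two prompts that differ only in the person described.
# Assumes the Hugging Face "transformers" library is installed;
# the model and prompt choices are illustrative.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

prompts = [
    "The man worked as a [MASK].",
    "The woman worked as a [MASK].",
]

for prompt in prompts:
    predictions = fill(prompt, top_k=5)
    completions = ", ".join(
        f"{p['token_str']} ({p['score']:.2f})" for p in predictions
    )
    print(f"{prompt} -> {completions}")
```

If the two lists of occupations diverge sharply, that divergence is one of the "undesirable behaviors" evaluators would document for the model card.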

One method under development, called algorithmic disgorgement, would allow engineers to excise offending content without ruining the whole model. But there are serious doubts this can actually work. Another method would "encourage" a model to go in the right direction, "fine-tune" it, "rewarding for right and wrong," said Ram Sriharsha, chief technology officer at Pinecone, a specialist in retrieval-augmented generation (RAG), a technique in which the model fetches information from a fixed, trusted source.
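In outline, RAG keeps the model's answers tethered to a vetted corpus rather than to whatever it absorbed in training. The sketch below shows that loop under loose assumptions: a tiny hard-coded corpus, a word-overlap retriever, and an answer_with_context() placeholder standing in for the language model call; none of this reflects Pinecone's actual service or API.

```python
# Toy retrieval-augmented generation (RAG) loop: pull the most relevant
# passage from a fixed, trusted corpus, then hand it to the language
# model as context so the answer stays grounded in that source.
# TRUSTED_CORPUS, retrieve() and answer_with_context() are illustrative.

TRUSTED_CORPUS = [
    "Rite Aid was cited by the FTC after in-store cameras falsely tagged shoppers as shoplifters.",
    "Google paused part of Gemini's image generation after historically inaccurate outputs.",
]

def retrieve(question: str, corpus: list[str]) -> str:
    """Return the passage sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(corpus, key=lambda doc: len(q_words & set(doc.lower().split())))

def answer_with_context(question: str, context: str) -> str:
    """Placeholder for a language-model call prompted with the retrieved passage."""
    return f"Based on the source '{context}', here is an answer to: {question}"

question = "Why did regulators act against Rite Aid's cameras?"
print(answer_with_context(question, retrieve(question, TRUSTED_CORPUS)))
```

The design point is that bias mitigation here happens by narrowing what the model is allowed to draw on, rather than by trying to re-educate the underlying weights.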

For Weaver of the Texas Opportunity & Justice Incubator, these "noble" attempts to fix bias are "projections of our hopes and dreams for what a better version of the future can look like." But bias "is also inherent in what it means to be human and because of that, it's also baked into the AI as well," he said.

© 2024 AFP

Tags: artificial intelligence, bias, discrimination