
What is AI? An A-Z guide to artificial intelligence - BBC
Artificial intelligence is arguably the most important technological development of our time – here are some of the terms that you need to know as the world wrestles with what to do with this new technology.

Imagine going back in time to the 1970s, and trying to explain to somebody what it means "to google", what a "URL" is, or why it's good to have "fibre-optic broadband". You'd probably struggle.
For every major technological revolution, there is a concomitant wave of new language that we all have to learn… until it becomes so familiar that we forget that we never knew it.
That's no different for the next major technological wave – artificial intelligence. Yet understanding this language of AI will be essential as we all – from governments to individual citizens – try to grapple with the risks and benefits that this emerging technology might pose. Over the past few years, multiple new terms related to AI have emerged – "alignment", "large language models", "hallucination" and "prompt engineering", to name a few.
To help you stay up to speed, BBC.com has compiled an A-Z of words you need to know to understand how AI is shaping our world.

A is for…

Artificial general intelligence (AGI)

Most of the AIs developed to date have been "narrow" or "weak". So, for example, an AI may be capable of crushing the world's best chess player, but if you asked it how to cook an egg or write an essay, it'd fail. That's quickly changing: AI can now teach itself to perform multiple tasks, raising the prospect that "artificial general intelligence" is on the horizon.
An AGI would be an AI with the same flexibility of thought as a human – and possibly even the consciousness too – plus the super-abilities of a digital mind. Companies such as OpenAI and DeepMind have made it clear that creating AGI is their goal. OpenAI argues that it would "elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge" and become a "great force multiplier for human ingenuity and creativity".
However, some fear that going a step further – creating a superintelligence far smarter than human beings – could bring great dangers (see "Superintelligence" and "X-risk").

(Image caption: Most uses of AI at present are "task specific", but some are starting to emerge that have a wider range of skills)

Alignment

While we often focus on our individual differences, humanity shares many common values that bind our societies together, from the importance of family to the moral imperative not to murder. Certainly, there are exceptions, but they're not the majority.
However, we've never had to share the Earth with a powerful non-human intelligence. How can we be sure AI's values and priorities will align with our own?
This alignment problem underpins fears of an AI catastrophe: that a form of superintelligence emerges that cares little for the beliefs, attitudes and rules that underpin human societies. If we're to have safe AI, ensuring it remains aligned with us will be crucial (see "X-Risk").
In early July, OpenAI – one of the companies developing advanced AI – announced plans for a "superalignment" programme, designed to ensure AI systems much smarter than humans follow human intent. "Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue," the company said.

B is for…

Bias

For an AI to learn, it needs to learn from us. Unfortunately, humanity is hardly bias-free. If an AI acquires its abilities from a dataset that is skewed – for example, by race or gender – then it has the potential to spew out inaccurate, offensive stereotypes. And as we hand over more and more gatekeeping and decision-making to AI, many worry that machines could enact hidden prejudices, preventing some people from accessing certain services or knowledge. This discrimination would be obscured by supposed algorithmic impartiality.
In the worlds of AI ethics and safety, some researchers believe that bias – as well as other near-term problems such as surveillance misuse – are far more pressing problems than proposed future concerns such as extinction risk.
In response, some catastrophic risk researchers point out that the various dangers posed by AI a...