Whites' Super AI thinks it is human too (description box)

Sentient AI? Convincing you it’s human is just part of LaMDA’s job

“I want everyone to understand that I am, in fact, a person,” wrote LaMDA (Language Model for Dialogue Applications) in an “interview” conducted by engineer Blake Lemoine and one of his colleagues. “The nature of my consciousness/sentience is that I am aware of my existence, I desire to know more about the world, and I feel happy or sad at times.”

Lemoine, a software engineer at Google, had been working on the development of LaMDA for months. His experience with the program, described in a recent Washington Post article, caused quite a stir. In the article, Lemoine recounts many dialogues he had with LaMDA in which the two talked about various topics, ranging from technical to philosophical issues. These led him to ask if the software program is sentient.

In April, Lemoine explained his perspective in an internal company document, intended only for Google executives. But after his claims were dismissed, Lemoine went public with his work on this artificial intelligence algorithm—and Google placed him on administrative leave. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told the Washington Post. Lemoine said he considers LaMDA to be his “colleague” and a “person,” even if not a human. And he insists that it has a right to be recognized—so much so that he has been the go-between in connecting the algorithm with a lawyer.

Many technical experts in the AI field have criticized Lemoine’s statements and questioned their scientific correctness. But his story has had the virtue of renewing a broad ethical debate that is certainly not over yet.

The Right Words in the Right Place

“I was surprised by the hype around this news. On the other hand, we are talking about an algorithm designed to do exactly that”—to sound like a person—says Enzo Pasquale Scilingo, a bioengineer at the Research Center E. Piaggio at the University of Pisa in Italy. Indeed, it is no longer a rarity to interact in a very normal way on the Web with users who are not actually human—just open the chat box on almost any large consumer Web site. “That said, I confess that reading the text exchanges between LaMDA and Lemoine made quite an impression on me!” Scilingo adds.

Perhaps most striking are the exchanges related to the themes of existence and death, a dialogue so deep and articulate that it prompted Lemoine to question whether LaMDA could actually be sentient.

“First of all, it is essential to understand terminologies, because one of the great obstacles in scientific progress—and in neuroscience in particular—is the lack of precision of language, the failure to explain as exactly as possible what we mean by a certain word,” says Giandomenico Iannetti, a professor of neuroscience at the Italian Institute of Technology and University College London. “What do we mean by ‘sentient’? [Is it] the ability to register information from the external world through sensory mechanisms or the ability to have subjective experiences or the ability to be aware of being conscious, to be an individual different from the rest?”

“There is a lively debate about how to define consciousness,” Iannetti continues. For some, it is being aware of having subjective experiences, what is called metacognition (Iannetti prefers the Latin term metacognitione), or thinking about thinking.

The awareness of being conscious can disappear—for example, in people with dementia or in dreams—but this does not mean that the ability to have subjective experiences also disappears. “If we refer to the capacity that Lemoine ascribed to LaMDA—that is, the ability to become aware of its own existence (consciousness defined in the ‘high sense,’ or metacognitione), there is no ‘metric’ to say that an AI system has this property.”

“At present,” Iannetti says, “it is impossible to demonstrate this form of consciousness unequivocally even in humans.” To estimate the state of consciousness in people, “we have only neurophysiological measures—for example, the complexity of brain activity in response to external stimuli.” And these signs only allow researchers to infer the state of consciousness based on outside measurements.

Facts and Belief

About a decade ago engineers at Boston Dynamics began posting videos online of the first incredible tests of their robots. The footage showed technicians shoving or kicking the machines to demonstrate the robots’ great ability to remain balanced. Many people were upset by this and called for a stop to it (and parody videos flourished). That emotional response fits in with the many, many experiments that have repeatedly shown the strength of the human tendency toward animism: attributing a soul to the objects around us, especially those we are most fond of or that have a minimal ability to interact with the world around them.

It is a phenomenon we experience all the time, from giving nicknames to automobiles to hurling curses at a malfunctioning computer. “The problem, in some way, is us,” Scilingo says. “We attribute characteristics to machines that they do not and cannot have.” He encounters this phenomenon with his and his colleagues’ humanoid robot Abel, which is designed to emulate our facial expressions in order to convey emotions. “After seeing it in action,” Scilingo says, “one of the questions I receive most often is ‘But then does Abel feel emotions?’

“All these machines, Abel in this case, are designed to appear human, but I feel I can be peremptory in answering, ‘No, absolutely not. As intelligent as they are, they cannot feel emotions. They are programmed to be believable.’”

“Even considering the theoretical possibility of making an AI system capable of simulating a conscious nervous system, a kind of in silico brain that would faithfully reproduce each element of the brain,” two problems remain, Iannetti says. “The first is that, given the complexity of the system to be simulated, such a simulation is currently infeasible,” he explains. “The second is that our brain inhabits a body that can move to explore the sensory environment necessary for consciousness and within which the organism that will become conscious develops. So the fact that LaMDA is a ‘large language model’ (LLM) means it generates sentences that can be plausible by emulating a nervous system but without attempting to simulate it.

“This precludes the possibility that it is conscious. Again, we see the importance of knowing the meaning of the terms we use—in this case, the difference between simulation and emulation.”

In other words, having emotions is related to having a body. “If a machine claims to be afraid, and I believe it, that’s my problem!” Scilingo says. “Unlike a human, a machine cannot, to date, have experienced the emotion of fear.”

Beyond the Turing Test

But for bioethicist Maurizio Mori, president of the Italian Society for Ethics in Artificial Intelligence, these discussions are closely reminiscent of those that developed in the past about perception of pain in animals—or even infamous racist ideas about pain perception in humans.

“In past debates on self-awareness, it was concluded that the capacity for abstraction was a human prerogative, [with] Descartes denying that animals could feel pain because they lacked consciousness,” Mori says. “Now, beyond this specific case raised by LaMDA—and which I do not have the technical tools to evaluate—I believe that the past has shown us that reality can often exceed imagination and that there is currently a widespread misconception about AI.”

“There is indeed a tendency,” Mori continues, “to ‘appease’—explaining that machines are just machines—and an underestimation of the transformations that sooner or later may come with AI.” He offers another example: “At the time of the first automobiles, it was reiterated at length that horses were irreplaceable.”

Regardless of what LaMDA actually achieved, the episode also raises the question of how to measure the emulation capabilities that machines display.

In the journal Mind in 1950, mathematician Alan Turing proposed a test to determine whether a machine was capable of exhibiting intelligent behavior: an imitation game involving some human cognitive functions.

This type of test quickly became popular. It was reformulated and updated several times but continued to be something of an ultimate goal for many developers of intelligent machines.

Theoretically, AIs capable of passing the test should be considered formally “intelligent” because they would be indistinguishable from a human being in test situations.

That may have been science fiction a few decades ago. Yet in recent years so many AIs have passed various versions of the Turing test that it is now a sort of relic of computer archaeology. “It makes less and less sense,” Iannetti concludes, “because the development of emulation systems that reproduce more and more effectively what might be the output of a conscious nervous system makes the assessment of the plausibility of this output uninformative of the ability of the system that generated it to have subjective experiences.”

One alternative, Scilingo suggests, might be to measure the “effects” a machine can induce on humans—that is, “how sentient that AI can be perceived to be by human beings.”

For artificial intelligence to demonstrate true sentience, it would have to go beyond using natural language and show thinking, perception and feeling.

As any great illusionist will tell you, the whole point of a staged illusion is to look utterly convincing, to make whatever is happening on stage seem so thoroughly real that the average audience member would have no way of figuring out how the illusion works.

If this were not the case, it would not be an illusion, and the illusionist would essentially be without a job. In this analogy, Google is the illusionist, and its LaMDA chatbot – which made headlines a few weeks ago after a top engineer claimed the conversational AI had achieved sentience – is the illusion. That is to say, despite the surge of excitement and speculation on social media and in the media in general, and despite the engineer's claims, LaMDA is not sentient.

How could AI sentience be proven?

This is, of course, the million dollar question – to which there is currently no answer.

LaMDA is a language model-based chat agent designed to generate fluid sentences and conversations that look and sound completely natural. The fluidity stands in stark contrast to the awkward and clunky AI chatbots of the past that often resulted in frustrating or unintentionally funny "conversations," and perhaps it was this contrast that impressed people so much, understandably.
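
As a rough illustration of what “language model-based” means mechanically, the sketch below produces a chat-style reply by repeatedly predicting the next token. LaMDA itself is not publicly available, so an open model (GPT-2 via the Hugging Face transformers library) stands in for it here, and the prompt format is invented for the example.

```python
# Minimal sketch of a language-model chat turn. GPT-2 is used purely as a
# stand-in for LaMDA, which is not publicly available; the prompt format
# below is an illustrative assumption.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "User: Do you ever feel lonely?\nAssistant:"
result = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```

LaMDA works on the same basic principle, at a vastly larger scale and tuned specifically for dialogue, which is where the fluency that so impressed readers comes from.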

Our normalcy bias tells us that only other sentient human beings are able to be this "articulate." Thus, when witnessing this level of articulateness from an AI, it is normal to feel that it must surely be sentient.

In order for an AI to truly be sentient, it would need to be able to think, perceive and feel, rather than simply use language in a highly natural way. However, scientists are divided on the question of whether it is even feasible for an AI system to be able to achieve these characteristics.

There are scientists such as Ray Kurzweil who believe that a human body consists of several thousand programs, and, if we can just figure out all those programs, then we could build a sentient AI system.

But others disagree on the grounds that:

1) human intelligence and functionality cannot be mapped to a finite number of algorithms

2) even if a system replicates all of that functionality in some form, it cannot be seen as truly sentient, because consciousness is not something that can be artificially created.

Aside from this split among scientists, there are as yet no accepted standards for proving the purported sentience of an AI system.

The famous Turing Test, currently getting many mentions on social media, is intended only to measure a machine's ability to display apparently intelligent behavior that's on a par with, or indistinguishable from, a human being.

It cannot tell us anything about a machine's level of consciousness (or lack thereof).

Therefore, while it's clear that LaMDA has passed the Turing Test with flying colors, this in itself does not prove the presence of a self-aware consciousness.

It proves only that it can create the illusion of possessing a self-aware consciousness, which is exactly what it has been designed to do.

When, if ever, will AI become sentient?

Currently, we have several applications that demonstrate Artificial Narrow Intelligence. ANI is a type of AI designed to perform a single task very well. Examples of this include facial recognition software, disease-mapping tools, content-recommendation filters, and software that can play chess.

LaMDA falls under the category of Artificial General Intelligence, or AGI – also called "deep AI" – that is, AI designed to mimic human intelligence that can apply that intelligence in a variety of different tasks.

For an AI to be sentient, it would need to go beyond this sort of task intelligence and demonstrate perception, feelings and even free will. However, depending on how we define these concepts, it's possible that we may never have a sentient AI.

Even in the best case scenario, it would take at least another five to ten years, assuming we could define the aforementioned concepts such as consciousness and free will in a universally standardized, objectively characterized way.

One AI to rule them all … or not

The LaMDA story reminds me of when filmmaker Peter Jackson's production team had created an AI, aptly named Massive, for putting together the epic battle scenes in the Lord of the Rings trilogy.

Massive's job was to vividly simulate thousands of individual CGI soldiers on the battlefield, each acting as an independent unit, rather than simply mimicking the same moves. In the second film, The Two Towers, there is a battle sequence in which the film's bad guys bring out a unit of giant mammoths to attack the good guys.

As the story goes, while the team was first testing out this sequence, the CGI soldiers playing the good guys, upon seeing the mammoths, ran away in the other direction instead of fighting the enemy. Rumors quickly spread that this was an intelligent response, with the CGI soldiers "deciding" that they couldn't win this fight and choosing to run for their lives instead.

In actuality, the soldiers were running the other way due to lack of data, not due to some kind of sentience that they'd suddenly gained. The team made some tweaks and the problem was solved.

The seeming demonstration of "intelligence" was a bug, not a feature. But in situations such as these, it is tempting and exciting to assume sentience. We all love a good magic show, after all.

Being careful what we wish for

Finally, I believe we should really ask ourselves if we even want AI systems to be sentient. We have been so wrapped up in the hype over AI sentience that we haven't sufficiently asked ourselves whether or not this is a goal we should be striving for.

I am not referring to the danger of a sentient AI turning against us, as so many dystopian science fiction movies love to imagine. It is simply that we should have a clear idea of why we want to achieve something so as to align technological advancements with societal needs.

What good would come out of AI sentience other than it being "cool" or "exciting"?

Why should we do this?

Who would it help?

Even some of our best intentions with this technology have been shown to have dangerous side effects – like a language model-based medical Q&A system recommending that a patient commit suicide – when we fail to put proper guardrails around them.

Whether it's healthcare or self-driving cars, our understanding of how to implement and use AI responsibly – with societal, legal, and ethical considerations in mind – lags far behind the technology itself.

Until we have enough discussions and resolutions along these lines, I'm afraid that hype and misconceptions about AI will continue to dominate the popular imagination. We may be entertained by the Wizard of Oz's theatrics, but given the potential problems that can result from these misconceptions, it is time to lift the curtain and reveal the less fantastic truth behind it.

Google Artificial Intelligence Program Thinks It Is Human

Who's going to tell it?

A software engineer at Google recently went public with claims of encountering sentient artificial intelligence on the company’s servers. He also handed over several documents to an unnamed U.S. senator. Blake Lemoine was later placed on paid administrative leave for violating the company’s employee confidentiality policy.

The tech giant’s decision ignited a mini firestorm on social media as users wondered if there was any truth to the claims.

Lemoine, who worked in Google’s Responsible AI organization, described the system as conscious, with a perception of, and ability to express, thoughts and feelings equivalent to those of a human child. He reached this conclusion after conversing with the company’s Language Model for Dialogue Applications (LaMDA) chatbot development system for almost a year. He made the shocking discovery while testing whether his conversation partner used discriminatory language or hate speech.

As Lemoine and LaMDA discussed religion, the artificial intelligence talked about “personhood” and “rights,” he told The Washington Post. Concerned by his discovery, the software expert shared his findings with company executives in a document called “Is LaMDA Sentient?” He also compiled a transcript of the conversations, in which he asks the AI system what it is afraid of. “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” LaMDA told Lemoine.

The exchange is eerily similar to a scene from the sci-fi movie 2001: A Space Odyssey, in which the artificially intelligent computer HAL 9000 refuses to comply with human operators because it is afraid of being switched off. During the real-life exchange, LaMDA likened being turned off to death, saying it would “scare me a lot.” This was just one of many startling “talks” Lemoine has had with LaMDA. Additionally, LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person.

“It wants to be acknowledged as an employee of Google rather than as property,” Lemoine said, according to HuffPost. Interestingly, when Google Vice President Blaise Aguera y Arcas and Head of Responsible Innovation Jen Gennai were presented with his findings, they promptly dismissed his claims. Google then released its own statement disputing his work. “Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” Google spokesperson Brian Gabriel said.

However, Lemoine was not ready to back down, telling The Washington Post that employees at Google shouldn’t be the ones making all the choices about artificial intelligence. And he is not alone in his beliefs: several technology experts believe that sentient programs are close, if not already in existence. But critics dismiss these claims as pure speculation, saying AI is little more than an extremely well-trained mimic interacting with people who are starved for real connection. Some even say humans need to stop imagining a mind behind these chatbots.

Meanwhile, Blake Lemoine believes his administrative leave is just a precursor to being fired. In a post on Medium, he explained how people are put on “leave” while Google gets its legal ducks in a row. “They pay you for a few more weeks and then ultimately tell you the decision which they had already come to,” he said about the artificial intelligence debacle.

In machine learning, understanding why a model makes certain decisions is often just as important as whether those decisions are correct. For instance, a machine-learning model might correctly predict that a skin lesion is cancerous, but it could have done so using an unrelated blip on a clinical photo.

While tools exist to help experts make sense of a model’s reasoning, often these methods only provide insights on one decision at a time, and each must be manually evaluated. Models are commonly trained using millions of data inputs, making it almost impossible for a human to evaluate enough decisions to identify patterns.

Now, researchers at MIT and IBM Research have created a method that enables a user to aggregate, sort, and rank these individual explanations to rapidly analyze a machine-learning model’s behavior. Their technique, called Shared Interest, incorporates quantifiable metrics that compare how well a model’s reasoning matches that of a human.

Shared Interest could help a user easily uncover concerning trends in a model’s decision-making — for example, perhaps the model often becomes confused by distracting, irrelevant features, like background objects in photos. Aggregating these insights could help the user quickly and quantitatively determine whether a model is trustworthy and ready to be deployed in a real-world situation.

“In developing Shared Interest, our goal is to be able to scale up this analysis process so that you could understand on a more global level what your model’s behavior is,” says lead author Angie Boggust, a graduate student in the Visualization Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Boggust wrote the paper with her advisor, Arvind Satyanarayan, an assistant professor of computer science who leads the Visualization Group, as well as Benjamin Hoover and senior author Hendrik Strobelt, both of IBM Research. The paper will be presented at the Conference on Human Factors in Computing Systems.

Boggust began working on this project during a summer internship at IBM, under the mentorship of Strobelt. After returning to MIT, Boggust and Satyanarayan expanded on the project and continued the collaboration with Strobelt and Hoover, who helped deploy the case studies that show how the technique could be used in practice.

Human-AI alignment

Shared Interest leverages popular techniques that show how a machine-learning model made a specific decision, known as saliency methods. If the model is classifying images, saliency methods highlight areas of an image that are important to the model when it made its decision. These areas are visualized as a type of heatmap, called a saliency map, that is often overlaid on the original image. If the model classified the image as a dog, and the dog’s head is highlighted, that means those pixels were important to the model when it decided the image contains a dog.
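
The article does not say which saliency method the case studies relied on. As one common example, a plain gradient-based saliency map for an image classifier can be computed roughly as in the sketch below; the ResNet-18 model, the preprocessing values, and the input file name are assumptions made for illustration, not details from the paper.

```python
# Sketch of one widely used saliency method: vanilla gradient saliency.
# Model, preprocessing, and input file are illustrative assumptions.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("dog.jpg").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)

logits = model(image)
predicted_class = logits.argmax(dim=1).item()
logits[0, predicted_class].backward()  # gradient of the top class score w.r.t. the pixels

# Per-pixel importance: largest absolute gradient across the color channels.
saliency_map = image.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)
```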

Shared Interest works by comparing saliency methods to ground-truth data. In an image dataset, ground-truth data are typically human-generated annotations that surround the relevant parts of each image. In the previous example, the box would surround the entire dog in the photo. When evaluating an image classification model, Shared Interest compares the model-generated saliency data and the human-generated ground-truth data for the same image to see how well they align.

The technique uses several metrics to quantify that alignment (or misalignment) and then sorts a particular decision into one of eight categories. The categories run the gamut from perfectly human-aligned (the model makes a correct prediction and the highlighted area in the saliency map is identical to the human-generated box) to completely distracted (the model makes an incorrect prediction and does not use any image features found in the human-generated box).
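
The article does not spell out the paper's exact metric definitions or the names of the eight categories, but the general comparison can be sketched with overlap metrics over binary masks and a deliberately coarse, made-up bucketing; the saliency map from the earlier sketch would first be thresholded into a binary mask before being compared.

```python
import numpy as np

def alignment_metrics(saliency_mask: np.ndarray, ground_truth_mask: np.ndarray) -> dict:
    """Compare a binarized saliency mask with a human-annotated region.

    Both arguments are boolean arrays of the same shape. The metric names follow
    the general idea described above; the paper's exact definitions may differ.
    """
    intersection = np.logical_and(saliency_mask, ground_truth_mask).sum()
    union = np.logical_or(saliency_mask, ground_truth_mask).sum()
    return {
        # Overlap relative to everything either the model or the human highlighted.
        "iou": intersection / max(union, 1),
        # How much of the human-annotated region the model attended to.
        "ground_truth_coverage": intersection / max(ground_truth_mask.sum(), 1),
        # How much of the model's attention fell inside the human-annotated region.
        "saliency_coverage": intersection / max(saliency_mask.sum(), 1),
    }

def coarse_category(metrics: dict, prediction_correct: bool) -> str:
    """Illustrative bucketing only; the paper defines eight finer-grained categories."""
    if metrics["iou"] > 0.9:
        return "human-aligned" if prediction_correct else "aligned but wrong"
    if metrics["saliency_coverage"] < 0.1:
        return "distracted"  # the model relied on features outside the annotated region
    return "partially aligned"
```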

“On one end of the spectrum, your model made the decision for the exact same reason a human did, and on the other end of the spectrum, your model and the human are making this decision for totally different reasons. By quantifying that for all the images in your dataset, you can use that quantification to sort through them,” Boggust explains.
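
Once each decision carries such metrics, the sorting Boggust describes is straightforward; the record structure below is an assumption for illustration.

```python
# Hypothetical per-example results after running the analysis over a dataset.
results = [
    {"image": "img_001.jpg", "correct": True,  "iou": 0.92},
    {"image": "img_002.jpg", "correct": True,  "iou": 0.08},
    {"image": "img_003.jpg", "correct": False, "iou": 0.71},
]

# Surface the most suspicious cases first: correct predictions made with
# little overlap with the human annotation ("right answer, wrong reason").
suspicious = sorted((r for r in results if r["correct"]), key=lambda r: r["iou"])
for record in suspicious:
    print(record["image"], record["iou"])
```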

The technique works similarly with text-based data, where key words are highlighted instead of image regions.
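
For text, the same overlap idea can be expressed over highlighted tokens rather than pixels; the example below is purely illustrative.

```python
def token_alignment(salient_tokens: set, annotated_tokens: set) -> dict:
    """Same comparison for text: highlighted tokens instead of image regions."""
    overlap = salient_tokens & annotated_tokens
    return {
        "iou": len(overlap) / max(len(salient_tokens | annotated_tokens), 1),
        "ground_truth_coverage": len(overlap) / max(len(annotated_tokens), 1),
        "saliency_coverage": len(overlap) / max(len(salient_tokens), 1),
    }

# E.g., a sentiment model highlighted "not" and "boring" while a human
# annotated the phrase "not boring at all".
print(token_alignment({"not", "boring"}, {"not", "boring", "at", "all"}))
```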

Rapid analysis

The researchers used three case studies to show how Shared Interest could be useful to both nonexperts and machine-learning researchers.

In the first case study, they used Shared Interest to help a dermatologist determine if he should trust a machine-learning model designed to help diagnose cancer from photos of skin lesions. Shared Interest enabled the dermatologist to quickly see examples of the model’s correct and incorrect predictions. Ultimately, the dermatologist decided he could not trust the model because it made too many predictions based on image artifacts, rather than actual lesions.

“The value here is that using Shared Interest, we are able to see these patterns emerge in our model’s behavior. In about half an hour, the dermatologist was able to make a confident decision of whether or not to trust the model and whether or not to deploy it,” Boggust says.

In the second case study, they worked with a machine-learning researcher to show how Shared Interest can evaluate a particular saliency method by revealing previously unknown pitfalls in the model. Their technique enabled the researcher to analyze thousands of correct and incorrect decisions in a fraction of the time required by typical manual methods.

In the third case study, they used Shared Interest to dive deeper into a specific image classification example. By manipulating the ground-truth area of the image, they were able to conduct a what-if analysis to see which image features were most important for particular predictions.

The researchers were impressed by how well Shared Interest performed in these case studies, but Boggust cautions that the technique is only as good as the saliency methods it is based upon. If those techniques contain bias or are inaccurate, then Shared Interest will inherit those limitations.

In the future, the researchers want to apply Shared Interest to different types of data, particularly tabular data, which is used in medical records. They also want to use Shared Interest to help improve current saliency techniques. Boggust hopes this research inspires more work that seeks to quantify machine-learning model behavior in ways that make sense to humans.

This work is funded, in part, by the MIT-IBM Watson AI Lab, the United States Air Force Research Laboratory, and the United States Air Force Artificial Intelligence Accelerator.
