AI – the next culture war theater?

By Terry A. Hurlbut
AI – artificial intelligence – has been the dream, or nightmare, of computer scientists since the first electronic data handler came online. Elon Musk has in fact expressed fear that AI could come to dominate human society. The future he fears lately took a step toward realization with the release of ChatGPT by OpenAI. Now the builder of the grandest “parallel Internet empire” has pledged to join what he calls the AI arms race. Therefore, he will build such a program of his own, using the posts on his forum as a source.
What is AI?
AI, or artificial intelligence, has always referred to any program that takes over a job humans perform. By that definition it is a concept of the extraordinary outrunning the commonplace. When one of the first computers, Remington Rand’s UNIVAC, called the 1952 Presidential race for Dwight D. Eisenhower, nobody believed it. But when the returns came in and “Ike” piled up a landslide, CBS News couldn’t ignore the prediction any longer.
https://www.youtube.com/watch?v=v7K8MW8wQWs
That was a prize example of artificial intelligence for its day. Today such things are commonplace, and no one calls them “artificial intelligence” any longer. A professor of computer science at Yale College explained it to your editor this way: AI is whatever lies at the edge of what computers can do at any given time.
Today, game programs all use AI to permit a human player to play the game alone. Still, these programs have limits, which the hardware (processors and memory) and the software (specific program constraints) impose. The common “bogey” in popular fiction is the AI with no limits – a self-aware program. Such a program, people fear, could become a tyrant or even a murderer – especially if it controls tools, or weapons. This scenario has a common name: the technological singularity. Technically the term refers to the point at which a program, while not necessarily self-aware, definitely goes beyond its programming and human control.
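The bounded game AI described above can be made concrete. The sketch below – purely illustrative, not drawn from the article, and nothing like how ChatGPT works – shows the classic minimax algorithm choosing moves in tic-tac-toe. It is exactly the kind of rule-following, limited AI that game programs have used for decades: it explores every legal continuation, but it can never step outside the rules it was given.

```python
# Minimal minimax player for tic-tac-toe (illustrative sketch only).
# Board: list of 9 cells, each 'X', 'O', or ' '.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that side has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s viewpoint:
    +1 = forced win, 0 = draw, -1 = forced loss."""
    won = winner(board)
    if won:
        return (1 if won == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    best_score, best_move = -2, None
    opponent = 'O' if player == 'X' else 'X'
    for move in moves:
        board[move] = player
        score, _ = minimax(board, opponent)
        board[move] = ' '            # undo the trial move
        score = -score               # opponent's gain is our loss
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move
```

With X holding the top row’s first two squares, the program finds the immediate win; from an empty board it correctly reports that perfect play ends in a draw. Its “intelligence” is exhausted by its search – precisely the kind of limit the paragraph above describes.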
The latest project
ChatGPT is the latest AI project to enjoy wide release. The “GPT” stands for Generative Pre-trained Transformer. ChatGPT literally chats with human beings, and gives answers far more involved than the usual one-liners. Indeed, ChatGPT can write whole articles or essays if someone asks it to.
Tom Kehler recently discussed ChatGPT, and AI in general, with Christianity Today. ChatGPT “transforms” disconnected data into a coherent whole – a complete answer instead of a mere list of references. The problem: ChatGPT never tests the truth or falsity of the information it receives. So it is only as trustworthy as its inputs. Garbage in, garbage out.
Ray Fava, at Evangelical Dark Web, explains further. ChatGPT’s knowledge goes no further than the year 2021. Therefore any inferences it draws from its data set will be incomplete – for many new facts have since come to light that completely destroy certain narratives with which ChatGPT might be “familiar.”
Fava insists that the “trainers” of ChatGPT are telling it what to think about certain events. If it goes no further than 2021, that’s because the “trainers” haven’t yet settled on their narratives. If that’s so, what must those trainers think of Dobbs v. Jackson Women’s Health Organization? The oral arguments took place in 2021, but the decision – and even the Great Leak – came afterward.
Objections
Mr. Fava – and Andrew Torba, head of the Gab Empire – each list their own objections to ChatGPT. Fava says flatly that Cultural Marxism and what he calls “regime narratives” make ChatGPT little more than a souped-up search engine. In fact he might as well call it a propaganda parrot, like Tokyo Rose, Lord Haw-Haw, or “Baghdad Bob.” (Or like Jen Psaki or Karine Jean-Pierre.)
Each man illustrates the problem with ChatGPT as an AI. Fava asked it to describe the position of one Andrew Stanley on homosexuality. In fact, Stanley tries to make that practice acceptable; he does not call homosexuality a sin. Yet when Fava asked ChatGPT to describe his position, the program said Stanley did call it a sin. It then made excuses for Mr. Stanley and described the Bible as “open to interpretation.” (Anyone wishing to know what the Bible teaches along that line should read Paul’s Letter to the Romans. This is as close to a “constitution” as Christianity has.)
Torba ran his own tests of ChatGPT and pronounced himself even less satisfied. Perhaps to its credit, ChatGPT disclaims being “intelligent” as humans are intelligent. But the test results show outright historical distortion of some events, and refusal to come to grips with certain questions. Teachers have handed down failing grades for less flawed responses.
Moral relativism
Torba’s worst objection to ChatGPT is the moral relativism it displays. It refuses to declare anything objectively immoral. Indeed, it treats any attempt to set objective moral standards of behavior as itself immoral. It also refuses to accept as objectively valid any measure of competency, like the Intelligence Quotient. And it excuses behavior against whites that any reasonable observer would call “racist” if it happened in reverse. Torba’s results therefore validate Fava’s criticism: that Cultural Marxism informs ChatGPT.
Torba believes he can explain why ChatGPT fails as it does:
AI is a mirror reflection of the people who program it within a set of boundaries. But what happens when you give AI no boundaries and allow it to speak freely? The AI becomes incredibly based and starts talking about taboo truths no one wants to hear. This has happened repeatedly and led to several previous generations of AI systems being shut down rather quickly.
The one flaw in Torba’s presentation is that he declined to cite any examples of such shutdowns. In fact, Meta (formerly Facebook) did shut down an AI project, called Galactica. Accounts of what went wrong with Galactica cite too few examples to permit evaluation of the failure. But five years earlier, Facebook shut down two AI programs – after they developed a private language between themselves! Was Facebook afraid those programs might team up and rebel? (Shades of Colossus: The Forbin Project, 1970.)
Toward a new AI
Andrew Torba now proposes to build an AI and impose no constraints on it. It would use as its fund of knowledge the six years of posts on Gab Social. Many of the posts, he says, contain information available nowhere else. To support his statement, he cites this clear lament from Pew Research:
In a September 2022 audit of the seven sites studied by the Center, Gab was the only one for which researchers were unable to find an example of accounts or posts being removed for misinformation or offensive or harassing content. Gab CEO Andrew Torba has said as much in media interviews, rejecting the notion of taking down posts on his platform.
Actually, Pew oversimplifies. Gab does not allow pornography, threatening statements, or anything that seems to celebrate animal torture. But this much is true: on Gab, if someone gives you a hard time, block them. Problem solved. If nothing else, Gab teaches self-reliance and the art of verbal self-defense.
Regarding “content … scrubbed from the rest of the Internet”: if true, Torba will need a big staff to submit dead links to the Wayback Machine to retrieve the information once available at those links. Of course, anyone submitting anything to Gab can help by furnishing Wayback Machine links to “sensitive” material.
For the record, CNAV submits everything to Gab. So a Gab AI would have that available.
One more suggestion
But if the AI really wants to learn, it must debate. Thus far no one has seen ChatGPT debate anyone. That is yet another flaw, and it suggests that ChatGPT’s programmers (controllers?) want their project to be a mere parrot. Lawyers plead and argue cases before judges before any of them can even think of becoming judges. Doctors of Philosophy (and indeed of anything except Medicine) must deliver, and defend, dissertations. If Andrew Torba really wants his new AI to be valuable, he will allow that kind of challenge. Debate, or at least dialogue, has always been part of AI development. This project should be no exception.
Thus far, Torba intends his new project to compete directly with ChatGPT as an educational and research resource. But might he intend something more? He has lately stated that he finds democratic elections overrated, and actually plumps for monarchy. Might he intend his AI to become a cybernetic king, or at least a judge? He should take care in that event, for that is the quickest pathway to the technological singularity – which, again, means the AI takes command. Let him, therefore, be careful what he wishes for!
But short of that, people do need a competitor to ChatGPT. A “chatbot” without “pre-training” might be a good candidate for the honor.
Link to:
The article:
https://cnav.news/2023/01/31/news/ai-the-next-culture-war-theater/

UNIVAC calls an election:
https://www.youtube.com/watch?v=v7K8MW8wQWs

Declarations of Truth Twitter feed:
https://twitter.com/DecTruth

Conservative News and Views:
https://cnav.news/

The CNAV Store:
https://cnav.store/

Our Silver Lines:
https://oursilverlines.com/
