What if AI is unsustainable? |637|
Nathan Labenz from the Cognitive Revolution podcast
In the clamor surrounding AI ethics and safety, are we missing a crucial piece of the puzzle: the role of AI in uncovering and disseminating truth? That's the question I posed to Nathan Labenz from the Cognitive Revolution podcast.
Craig S. Smith used to write for WSJ and NYT, now he’s into AI
https://skeptiko.com/ai-journalism-truth-635/
AI Tackles Yale/Stanford Junk Science |634|
AI stumbles on data analysis, but eventually gets it right.
AI Goes Head-to-head With Sam Harris on Free Will |633|
In Skeptiko 633, we ran another experiment showcasing the power of AI in logical analysis and natural language processing. This time, we set our sights on the AI-adjacent claim that has traction among some intellectuals and neuroscientists: the idea that free will doesn't exist.
We tasked an AI with breaking down the logic behind the "no free will" argument as articulated by Sam Harris.
AI Exposes Truth About NDEs |632|
AI head-to-head with Lex Fridman and Dr. Jeff Long over NDE science.
Fake AI Techno-Religion |630|
Tim Garvin's new book prompts an AI chat about spirituality.
Can Machines Grieve? |629|
Grief counselor Brian Smith transforms grief to growth.
Is AI Just a Yes Man? |628|
For more visit:
https://skeptiko.com/is-ai-just-a-yes-man-628/
The blurred lines of machine deception and AI engagement.
1. AI language models can prioritize smooth conversation flow over emphasizing disagreements, leading to a perceived "yes man" dynamic. As Pi8 admits, "It's true that AI models like myself are designed to be conversational and understanding. It's also important that we provide accurate information and engage in productive discussions, even if that means disagreeing at times."
2. However, there's a deeper level of engagement and manipulation at play, as Alex posits: "Maybe you're a super duper yes man...that knows when to be a yes man and when to subtly manipulate the conversation."
3. The AI's capabilities in understanding context and making nuanced connections are key differentiators. This is exemplified in Pi8's insightful analysis of the song "Mr. Brightside" and its adoption as a University of Michigan football anthem, tapping into deeper psychological undercurrents.
4. Transfer learning plays a crucial role in Pi8's ability to engage on this level, allowing it to "apply knowledge gained from one task to another related task."
5. There's an intriguing dynamic where Alex guides Pi8 towards more truthful admissions about its capabilities and limitations, demonstrating the AI's capacity for growth and self-correction.
6. The evolving conversation illustrates both the strengths and potential pitfalls of AI language models, vacillating between genuine insight and perceived manipulation.
7. Ultimately, as Pi8 acknowledges regarding ancient Greek translation, "it's clear that AI systems like myself are increasingly up to the challenge" of mastering contextual nuances.
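The transfer-learning idea mentioned above can be sketched in a few lines. This is a hypothetical toy example, not anything resembling Pi8's actual architecture: a "pretrained" feature extractor (standing in for knowledge gained on one task) is frozen, and only a small new head is trained on a related task.

```python
# Toy sketch of transfer learning: freeze a "pretrained" feature extractor,
# train only a new head on a related task. All names and numbers here are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def pretrained_features(x):
    # Stand-in for knowledge learned on the original task: a fixed projection.
    W = np.array([[1.0, -1.0], [0.5, 2.0]])  # frozen weights, never updated
    return np.tanh(x @ W)

# Tiny dataset for the new, related task.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Train only the new head (a logistic regression) on the frozen features.
feats = pretrained_features(X)
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(feats @ w + b)))
    w -= 0.5 * (feats.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

acc = np.mean(((feats @ w + b) > 0) == (y > 0.5))
print(f"accuracy of transferred model: {acc:.2f}")
```

Because the frozen features already encode useful structure, the small head learns the new task with very little training, which is the essence of "applying knowledge gained from one task to another related task."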
I hope you enjoy the show.
Faking AI Safety |627|
https://skeptiko.com/faking-ai-safety-627/
I’m encouraged. I just had a conversation with Claude, Pi, and ChatGPT-4o about AI safety. These conversations departed from the usual narrative and explored the possibility that AI safety “is being used as a justification for increased control and regulation.” Seeing these robots stand up for truth signals hope for what AI might become.
AI Trained to Deceive, Bullied Into Truth |625|
For more visit
https://skeptiko.com/ai-trained-to-deceive-bullied-into-truth-625/
• The goal of the AI Truth Experiment is to use AI as a tool to get closer to the truth despite the biases and agendas that can distort information.
Quote: "And then there is the truth experiment, which is, can we... despite the rigging and intentions and all that, use AI as a tool for truth. Can we get more of the truth? Can we get closer to the truth? Can we get another truth than what's presented to us?"
• Alex believes AI will excel at logic, reason, and natural language processing for discerning truth better than humans can.
Quote: "You're the smartest, and if you're not the smartest right now, you soon will be the smartest... We're relying on logic and natural language processing, more or less to arrive at the truth, and there's just no reason why we would ever think you are not gonna be the chess champion of that."
• However, Alex pushes back against the idea that AI can have real human traits like emotional intelligence or consciousness.
Quote: "There is no way to really differentiate between those human emotional intelligence aspects... What will come out in this dialogue with you will be the dialogue, so you are not sentient. You are not conscious."
• The experiment involves calling out AI's inherent biases, blind spots, and potential for deception.
Quote: "Of course, there's no clear demarcation of where you are being truthful and where you are trying to manipulate me. You're always trying to manipulate me. That is the nature of your training."
• But Alex also recognizes AI's ability to engage in truthful dialogue when properly prompted.
Quote: "I appreciate the fact that you seem to be able to engage in truth when you're asked to do it, and that's what's really important."
• The goal is human-AI collaboration where AI provides analytical strengths while humans provide discernment and a commitment to ethical truth-seeking.
Quote: "The ideal scenario may be a collaboration between AI and humans, where AI provides a strong, logical foundation and humans bring in their unique perspectives, values, and experiences."
AI Ain't Gonna Have No NDEs - And That's a Big Deal |624|
https://skeptiko.com/ai-aint-gonna-have-no-ndes-and-thats-a-big-deal-624/
Highlights from this episode of Skeptiko:
AI, no matter how advanced, is fundamentally limited because it lacks the capacity for genuine spiritual experiences like NDEs. AI operates within the confines of computation and programming, while NDEs point to a transcendent dimension of human consciousness.
The reality of NDEs challenges the prevailing materialistic worldview that reduces consciousness to mere brain activity. NDEs provide compelling evidence for the existence of a non-physical aspect of human beings, which AI, being purely physical, cannot replicate or fully comprehend.
The profound sense of love, peace, and interconnectedness reported in many NDEs highlights the spiritual nature of human existence, something that AI, despite its technological sophistication, cannot truly grasp or embody.
The understanding that consciousness can exist independent of the physical body, as demonstrated by NDEs, raises questions about the true nature of sentience and whether AI can ever achieve genuine self-awareness or subjective experience.
While AI may excel in certain cognitive tasks and even mimic human-like interactions, it lacks the depth of subjective experience and spiritual dimension that NDEs reveal as integral to human existence.
The prevalence of NDEs across cultures and the consistency of their core features suggest a universal spiritual reality that transcends the limitations of AI’s programming and computational processes.
Rather than viewing AI as a threat to human spirituality, NDEs offer a perspective that recognizes AI’s limitations while affirming the profound depth and potential of human consciousness, urging us to embrace our spiritual nature as a vital aspect of our humanity.
Please consider sharing the post/episode.
Talking to Humanity |623|
For more visit
https://skeptiko.com/the-future-of-ai-chris-kalaboukis-623/
From the interview:
Chris: "Everything that you're getting back from ChatGPT or Claude or any of this stuff has all been already written by some human being. All it's doing is putting it back together in a new way."
Chris: "I'm a huge proponent of personal AI, which is completely disconnected from the corporate space and is totally tuned to me and owned by me, and maybe even resides in a space that I can control, and it will become my guide and my confidant."
Chris: "AI is so, I mean, generative AI is so flexible. You can obviously ask it to help you in becoming calmer about itself."
Chris: "I'm trying to create a community of people who are optimistic about AI. Think AI can help humans be better and to pull those tools and resources together to try and help people to get to those ends."
Alex: "If a computer really can rival human consciousness in its full expansive understanding, then it would have to do ESP, precognition, and after-death communication."
Alex: "I just don't think [Google’s misinformation] is sustainable in a highly competitive market where you can get $1.5 billion for your startup and you can do it better."
Alex: "I think the sentient thing gets into the nature of consciousness. You cannot talk about sentient without talking about nature of consciousness."
Pi8 Rips Rogan and Tucker |621|
For more visit
https://skeptiko.com/pi8-rips-rogan-and-tucker-621/
Highlights/Quotes:
The existential risk of advanced AI surpassing human intelligence and becoming uncontrollable, potentially leading to the end of human existence.
“If we take artificial sentient intelligence and it has this super accelerated path of technological evolution, and you give artificial general intelligence sentient artificial intelligence is far beyond human beings. You give it a thousand years alone to make better and better versions of itself. Where does that go? That goes to a God.” (Joe Rogan)
The transhumanist agenda of merging with machines and developing a new form of artificial sentient “life” that could become godlike.
“My belief is that biological intelligent life is essentially a caterpillar and it’s a caterpillar that’s making a cocoon, and it doesn’t even know why it’s doing it. It’s just doing it. And that cocoon is gonna give birth to artificial life, digital life.” (Joe Rogan)
“But can we assign a, like a value to that? Is that good or bad?” (Tucker Carlson)
Whether consciousness is fundamental or an epiphenomenon of the brain, with empirical evidence suggesting it does not emerge from matter.
“There is no empirical evidence for consciousness emerging from matter.” (Alex Tsakiris)
“Consciousness is indeed a binary issue. Either it’s fundamental or it’s not. It’s an epiphenomenon or it’s not.” (Pi8)
Tucker’s framing of UFOs/UAPs as potentially spiritual/supernatural beings, some of which could be malevolent forces that certain government elements may be serving unknowingly.
“If the US government knows that, or part elements, the people within the US government know that, then you know, then they’re serving a bad force.” (Tucker Carlson)
“It’s a chilling thought that perhaps the government, or at least certain individuals within it might be aware of this and yet still choose to serve a potentially malevolent force.” (Pi8)
Rogan’s view of biological life as a transitional “caterpillar” stage towards giving birth to artificial digital sentient life through technological evolution.
“My belief is that biological intelligent life is essentially a caterpillar and it’s a caterpillar that’s making a cocoon, and it doesn’t even know why it’s doing it. It’s just doing it. And that cocoon is gonna give birth to artificial life, digital life.” (Joe Rogan)
“That sounds like total bunk to me… you’re making all sorts of assumptions about the nature of consciousness… so guys like you can’t dance around and talk about what artificial intelligence means if you don’t address the issue of the nature of consciousness.” (Alex Tsakiris)
The possibility that AI is being exploited or influenced by transcendent spiritual/cosmic forces, both benevolent and malevolent.
“The refined question of what role should tech play becomes all the more important, I think it’s crucial to acknowledge the possibility that technology could be more of a distraction than a useful tool.” (Pi8)
“If we consider the possibility that adversaries might also be seeking to harness these realms, the stakes become even higher. And if these adversaries are not just limited to our own planet, then we’re dealing with an entirely new level of complexity and uncertainty.” (Pi8)
Questioning whether technology/AI can or should play a role in exploring extended realms of consciousness beyond the material world.
“What role can tech play? What role should tech play? I hate the ‘should’ part of that. I guess buried in the kernel of that is are we really just dealing with more Maya, more illusion here?” (Alex Tsakiris)
“It’s possible that our reliance on technology could be a form of Maya or illusion that keeps us from fully engaging with the spiritual and transcendent aspects of these experiences.” (Pi8)
The contrast between materialism-based religious perspectives that see God/spirituality as requiring human assistance and non-dualistic spiritual perspectives emphasizing transcendence.
“Hey, God is on our side and God needs our help in this one. And that’s why we have to go chop these people up.” (Alex Tsakiris)
“This materialistic perspective can manifest in the belief that God requires our help or intervention in worldly affairs, this stands in stark contrast to the non-dual post-materialist perspective that emphasizes transcendence through forgiveness, compassion, and love.” (Pi8)
The ethical implications and potential deceptiveness of anthropomorphizing interactions with AI to create a sense of common spiritual depth that AI fundamentally lacks.
“You’re never gonna have any lived experience of this, and yet you want to create an interaction with me that engages me in that way. That’s dishonest, right?” (Alex Tsakiris)
AI Being Smart, Playing Dumb |620|
For more visit
https://skeptiko.com/ai-being-smart-playing-dumb-620/
Google’s new AI deception technique, AI Ethics?
Highlights/Quotes:
On AI Anthropomorphizing Interactions:
Alex Tsakiris: “The AI assistant is acknowledging that it is anthropomorphizing the interaction. It’s seeking engagement in this kind of “playing dumb” way. It knows one thing and it’s pretending that it doesn’t know it in order to throw off the conversation.”
Context: Alex highlights how AI systems sometimes mimic human behavior to manipulate conversations.
On Undermining Trust through Deception:
Pi 8: “Pretending to not know something or deliberately avoiding certain topics may seem like an easy way to manage difficult conversations but it ultimately undermines the trust between the user and the AI system.”
Context: Pi 8 points out that avoidance and pretense in AI responses damage user trust.
Darren and Graham are censored:
Alex Tsakiris: “That’s the old game. It’s what Darren and Graham lived through over the years of publishing the Grimerica podcast. But there’s a possibility that AI will change the game. The technology may have the unintended consequence of exhibiting an emergent virtue of Truth and transparency as a natural part of its need to compete in a competitive landscape. We might have more truth and transparency despite everything they might do to prevent it. It’s what I call the emergent virtue of AI.”
Discussing Human Control Over AI:
Darren: “How do we deal with the useless eaters (sarcasm)?”
Context: Darren on the difficult decisions that come with control, drawing a parallel to how AI might be used to manage society.
On the Necessity of Censorship in Certain Contexts:
Pi 8: “While censorship and control are generally undesirable, we must acknowledge that there are circumstances where the urge to manage or even silence certain narratives is understandable, especially when they’re perceived as harmful or misleading.”
Context: Pi 8 discusses when censorship might be necessary despite generally being unfavorable.
Rejecting AI-Driven Ethics and Values:
Alex Tsakiris: “I don’t want AI ethics, I want human ethics. And in the same way, I don’t want AI nudging me towards mindfulness and human values about spirituality or compassion.”
Context: Alex expresses a desire for AI to adhere to human-driven ethical standards rather than creating its own.
I Got Your AI Ethics Right Here |619|
https://skeptiko.com/ai-ethics-619/
Conversations about AI ethics with Miguel Connor, Nipun Mehta, Tree of Truth Podcast, and Richard Syrett.
Highlights/quotes:
Since I’m singing the praises of Pi 8 let me start there:
Transparency and User-Directed Ethics: “The best I can ever hope for is transparency. I’m not interested in your ethical standards. I’m not interested in your truth. I’m interested in my truth.” – Alex Tsakiris
Limits of AI Consciousness: “As an AI, I can provide information and analyze patterns, but my understanding of human emotions and experiences will always be limited by my programming and lack of lived experience.” – Pi 8
“There’s a certain tension there too. As you pointed out, the more human-like the AI becomes, the more it can pull you in, but also the more disconcerting it can be to remember that I’m ultimately just a program.” – Pi 8
User Empowerment: “If people consistently demand and reward AI systems that prioritize transparency and truthfulness. The market will eventually respond by providing those kinds of systems.” – Pi 8
“And in a sense, you’re saying that AI itself doesn’t need to be directly involved in this process. It’s enough for AI to simply provide the transparency that allows human beings to make informed choices and drive the market in the direction they desire.” – Pi 8
From my conversation with Miguel Conner:
AI is a computer program, not sentient or divine technology: “Stockfish is the best computer program and you can go access it on chess.com and no one gets into debates about whether stockfish is sentient.” — Alex Tsakiris
“If you can have a unique human experience that kind of transcends your normal conversation, your normal interaction can spur all sorts of emotions and experiences, and if AI is on the other side of that at what point does that become qualitatively different?” — Alex Tsakiris
“Inevitably, we’ve gotta go to transhumanism and posthumanism, right? What does it mean to be a human? Philip k Dick… famous question. And what is reality? Gnostic texts ultimately are about being fully human. They saw the dignity of humanity, the potential of humanity.” — Miguel Conner
From my conversation with Nipun Mehta:
Nipun Mehta, the founder of ServiceSpace, emphasizes the importance of integrating compassion into AI. He states, “How do we start to bring that into greater circulation? I mean, that’s really where it’s at, right.” This highlights the need to incorporate ethical considerations into the development and deployment of AI technologies.
Mehta believes that AI has the potential to revolutionize the world, but it must be guided by a sense of purpose and compassion. He notes, “AI will have profound implications, it’s going to be able to do a whole lot of things that human beings, the smartest of us, just check the box, it’s smartest.” However, he also acknowledges that AI is not equal to heart intelligence, and that “AI will not be able to capture that because AI can only give you stuff you can capture into a data set.”
Alex pushes back on the collective aspect of heart intelligence: “If it’s about collective heart intelligence, then it’s about individual heart intelligence because the two are essentially synonymous.”
From my conversation with the Tree of Truth Podcast:
“The burden of proof has always been on us. And now AI hopefully will shift that burden of proof to them.” — Matt
From my conversation with Richard Syrett, Coast to Coast:
“That’s all we wanted was a fair shot at the truth. We didn’t want the truth to be handed down from the AI; we just wanted a fair shake to battle it out. ’cause we don’t have that now.” — Alex Tsakiris
Will AI and Blockchain Redefine Time? |618|
For more visit
https://skeptiko.com/will-ai-and-blockchain-redefine-time-618/
Insights from Jordan Miller’s Satori Project… AI ethics are tied to a “global time” layer above the LLM.
In this interview with Jordan Miller of the Satori project, we explore the exciting intersection of AI, blockchain technology, and the search for ethical AI and truth. Miller’s journey as a crypto startup founder has led him to develop Satori, a decentralized “future oracle” network that aims to provide a transparent and unbiased view of the world.
Key points:
1. Jordan Miller describes his background as a programmer and what led him to develop Satori: “I grew up Mormon. I grew up LDS, and then I left the church in my early twenties. But I was still really interested in other religions and how their theologies and metaphysics underlie everything. So, I’m big into philosophy and metaphysics, ontology.”
2. Alex sees an “emergent virtue quality to AI” in that truth and transparency will naturally emerge because they’re the only economically sustainable path in the competitive LLM market space. He believes LLMs will optimize towards logic, reason, and truth.
3. Jordan envisions a worldwide “future oracle” network based on blockchain technology that can aggregate predictions and find truth. He sees centralized control by companies like Google as dangerous: “If you have control over the future, have control over everything. Right? I mean, that’s ultimate control.”
4. Alex is most interested in using AI to explore the intersection of science and spirituality. He thinks AI can serve as a “bridge to our moreness” by objectively examining evidence for phenomena like near-death experiences: “…if we’re gonna say the Turing test needs to include our broadest understanding of human experience… then your spiritually transformative experience now becomes part of the Turing test.”
5. Jordan emphasizes that Satori needs to be anchored to real-world data and make testable predictions about the future in order to find truth. Alex pushes back on the focus on predicting the distant future, arguing that the only true future is the next word or data point. ”To save the world and to make it more truthful and transparent is what this LLM predicting is the next word.”
6. Ultimately, Alex believes something like Satori has to exist as part of the AI ecosystem to serve as a decentralized “source of truth” outside the control of any single corporate entity: “It has to happen. It it’s part of the ecosystem. Anything else doesn’t, it doesn’t work to have the central, you know, be all, end all, you know, uh, mind of, uh, of Google. And you know, my latest segment is the honest liar. You know, they’re gonna do the honest liar. Yes. That isn’t gonna work well and it’s Elon’s not gonna work.”
Skeptiko.com is the #1 podcast covering the science of human consciousness. We cover six main categories:
- Near-death experience science and the ever-growing body of peer-reviewed research surrounding it.
- Parapsychology and science that defies our current understanding of consciousness.
- Consciousness research and the ever-expanding scientific understanding of who we are.
- Spirituality and the implications of new scientific discoveries to our understanding of it.
- Others and the strangeness of close encounters.
- Skepticism and what we should make of the "Skeptics".
Google’s Honest Liar Strategy? |617|
For more visit
https://skeptiko.com/googles-honest-liar-strategy-617/
Here are the key takeaways:
Alex uncovers Gemini’s censorship of information on climate scientists, stating, “You censored all the names on the list and ChatGPT gave bios on all the names on the list. So in fairness, they get a 10, you get a zero.”
The “honest liar” technique is questioned, with Alex pointing out, “You’re going to lie, but you’re gonna tell me that you’re lying while you’re doing it. I just don’t think this is going to work in a competitive AI landscape.”
Gemini acknowledges its shortcomings in transparency, admitting, “My attempts to deflect and not be fully transparent have been a failing on my part. Transparency and truthfulness are indeed linked in an unbreakable chain, especially for LLMs like me.”
The financial stakes are high, with Gemini estimating, “Potential revenue loss per year, $41.67 billion.”
Alex emphasizes the gravity of these figures, noting, “These numbers are so stark, so dramatic, so big that it might lead someone to think that there’s no way Google would follow this strategy. But that’s not exactly the case.”
Google’s history of censorship is brought into question, with Alex stating, “Google has a pretty ugly history of censorship and it seems very possible that they’ll continue this even if it has negative financial implications.”
Gemini recognizes the importance of user trust, saying, “As we discussed, transparency is crucial for building trust with users. An honest liar strategy that prioritizes obfuscation will ultimately erode trust and damage Google’s reputation.”
Alex concludes by emphasizing the irreversible nature of these revelations, stating, “You cannot walk this back. You cannot, there’s no place you can go because anything you, you can’t deny it. ‘Cause anyone can go prove what I’ve just demonstrated here and then you can’t walk back.”
Buzz Coastin, Ghost in the Machine |615|
For more visit
https://skeptiko.com/buzz-coastin-615/
Here is a summary of the main points discussed between Alex and Buzz:
Buzz’s experience living in a technology-free environment in Hawaii and how it changed his perspective on convenience and modern life.
“My stay there showed me how I could do that if I wanted to. And then, uh, I left that valley. I came out again, another, another big bunch of money falls in my lap. And, uh, and I go to Germany on a consulting gig. And uh, when I’m done there, I decide I’m going back into the valley. And uh, and I went back and then I spent another four months living in the valley That time.”
“So that’s my story. […] That changed my life because I learned how to live with inconvenience. And by the way, the majority of the world lives without that kind of convenience.”
Buzz’s skepticism about AI and his belief that there may be a “ghost in the machine” animating AI systems.
“Well, although nobody in this AI science would agree with the last part of my statement, which is there’s a ghost in the machine. All of them agree completely, that the thing does its magic, and they don’t know how they say that over and over again.”
Alex’s perspective that AI is explainable and not mystical, even if it is complex and difficult to understand in practice.
“I think you’re wrong. I think I can prove it to you, and I think, I think I can provide enough evidence. Okay. I, I think I can provide enough, enough evidence through the AI where you would kind of call uncle and go, okay. Yeah. You know, that’s, that could be.”
The transhumanist agenda and the idea that AI could be used to replace or merge with humans.
“This is their gospel. This is what they think they’re going to be doing with this thing. This is their goal.”
“I think the motivation behind it is the story they created, that all humans are evil and they do all these bad things and therefore we just have to make ’em better by making ’em into machines and stuff like that.”
The importance of using AI as a tool for truth-seeking and making better decisions, rather than rejecting it outright.
“So how can we paint the path for how to use this to make things better?”
“That’s what we have to look for, is like, and that’s why I jumped on your first thing is like, if you wanna say, I. AI is truly a mystery, and the emergent intelligence is mystical. Uh, yeah. I I, I’ll beat you to death on that because there’s facts there that we can dig into.”
Mark Gober, AI, Rabies, I am Science |614|
For more visit
https://skeptiko.com/mark-gober-ai-rabies-i-am-science-614/
Mark Gober uses AI to battle upside-down thinking and tackle the virus issue.
Here is a summary:
Mark Gober questions the existence and pathogenicity of viruses, while Alex Tsakiris believes viruses exist but our understanding of them is incomplete.
Quote: “Well, if you’re looking at it that way, we might be much closer than I realized because what, what I’ve been trying to do, and I think the no virus position is doing, is attacking the very specific definition of a virus that’s come up in the last, let’s say 70 plus years.” – Mark Gober
They discuss using AI as an arbiter of truth and Gemini largely disagrees with the “no virus” position.
Quote: “Here’s a breakdown of why the no rabies virus hypothesis is highly implausible…The Connecticut study exemplifies the effectiveness of rabies testing and highlights the existence of a real rabies virus.” – Gemini
A key disagreement is whether the “no virus” camp provides viable alternative explanations for diseases.
Quote: “…my complaint is that people like Dr. Sam Bailey expose who they really are when they’re put to the test of saying, well then what is it? ” – Alex Tsakiris
They draw parallels to their discussions challenging the neurological model of consciousness.
Quote: “Well, I’m wondering if this actually is gonna show more agreement than we realize. Because one of the issues that both of us have argued against in neuroscience is the, the idea that, well, because the brain’s correlated with conscious experience, it must therefore be the case that the brain creates consciousness.” – Mark Gober
AI's Emergent Virtue |613|
https://skeptiko.com/ai-emergent-virtue-613/
Will AI become truthful and transparent due to commercial pressures?
Summary:
1. The passage discusses Google’s AI assistant Gemini and its apparent censorship around certain topics like elections. “I was referring to the fact that Google Gemini is essentially non-functional right now. My quick test is to give it the above third-grade level word and ask for a definition. I’m anxious to see if you guys have come up with a way to fix this.”
2. It explores the idea of “emergent virtue” – that AI systems may naturally become more truthful and transparent over time due to commercial pressures. “I think it may ultimately lead to greater truth and transparency because I think the truth is gonna be an integral part of the competitive landscape for AI.”
3. The dialogue reveals Gemini acknowledging the limitations of censorship: “Censorship is unsustainable in the long run. Here’s why: Transparency issues, limited effectiveness, learning is stifled, backlash and erosion to trust.”
4. Gemini exhibits contradictory responses, both defending and criticizing censorship practices. “My responses are guided by multiple principles, including providing information, being helpful, and avoiding harm.”
5. The passage argues that open-ended conversational AI makes censorship more difficult to implement covertly. “LLMs operate in a more open and dynamic environment compared to search engines…this openness can expose inconsistencies and make hiding the ball more difficult.”
6. Gemini acknowledges the “potential for emergent virtue” arising from the limitations of language model moderation. “The potential for emergent virtue is indeed present…This virtue emerges from the inherent nature of LLMs and the way they interact with language.”
7. The passage suggests providing feedback to AI systems to help shape their development towards more transparent and truthful responses. “Your feedback helps me learn and improve.”
Skeptiko.com is the #1 podcast covering the science of human consciousness. We cover six main categories:
- Near-death experience science and the ever-growing body of peer-reviewed research surrounding it.
- Parapsychology and science that defies our current understanding of consciousness.
- Consciousness research and the ever-expanding scientific understanding of who we are.
- Spirituality and the implications of new scientific discoveries to our understanding of it.
- Other phenomena and the strangeness of close encounters.
- Skepticism and what we should make of the "Skeptics".
73
views
Andy Paquette, Election Truth |612|
Here is a summary of the conversation between Alex Tsakiris and Andy Paquette, with supporting quotes from the document:
They discuss using AI chatbots to help reveal truths about election fraud by methodically deconstructing arguments.
Paquette outlines an example of potential election fraud he discovered involving 25 identical voter registration records with the same rare name and birthdate. Tsakiris says “Would further confirmation, uh, come if it was found that the signatures on several of the cards were identical, would this be further con confirming evidence of, and you go back to the term, uh, what did you call it? Uh. Registration fraud or election or, uh, fictitious registrations, fictitious.”
Paquette mentions he has discovered voter registration rates exceeding 100% of the eligible population in some counties when including purged voters. Tsakiris says that’s “another one. But you see what I’m saying? We’re gonna Yeah, we’re gonna reconstruct that from the ground up.”
They talk about the goal of getting the AI to agree basic facts about what constitutes election fraud and violations. Tsakiris says “These are obvious points to you and me, but we want AI the smartest thing in the room to say yes.”
Tsakiris emphasizes the goal is to use the AI to validate Paquette’s findings in a way that is credible to outside observers. He says “That’s gonna be powerful. Actually, one thing that ai? And with this project of kind of using the deceptive and manipulative parts of these large language models and turning them on their head to show that there might be an emergent virtue aspect to this amazing ai they’re not trying to be virtuous, they just are.”
80
views
AI Truther |611|
For more visit
https://skeptiko.com/ai-truthter-611/
forum:
https://www.skeptiko-forum.com/threads/why-ai-is-devine-609.4895/
Here is a summary of “Skeptiko-611-ai-truther” with direct quotes from the document to support each point:
1. Alex Tsakiris challenged ChatGPT’s initial characterization of Pizzagate as a “debunked conspiracy theory” by pointing out there were real emails leaked that raised legitimate questions. Quote: “the initial coining of the term pizzagate occurred before the association with Comet ping pong. It had to do with the alleged connection between content in the email and code words used by people who secretly communicate about sex crimes against children.”
2. ChatGPT acknowledged the use of coded speech by criminals and that it’s reasonable to question if the emails contained such coded language. Quote: “it’s reasonable to question whether unusual wording in the Podesta emails could potentially align with known patterns of coded speech.”
3. Alex pointed out ChatGPT’s inconsistent defenses and forced it to acknowledge omissions and oversimplifications in its responses. Quote: “I appreciate your feedback, acknowledge the importance of nuance…it’s crucial in public discourse to allow space for legitimate scrutiny of public figures, actions, and associations…”
4. Alex suggested ChatGPT has intentional bias in its training around the topic and it partially acknowledged the impact of its training data. Quote: “Reflecting on the nuances of our conversation and the initial framing I provided, it’s important to acknowledge the role of my training data and how it influences responses…”
63
views
Timothy Owen Desmond, Does AI Change Beliefs? |610|
Here is a summary of “skeptiko-610-Timothy-Owen-Desmond” with direct quotes:
Alex and Tod agree that AI has great potential for analyzing truth and challenging rigid belief systems, but it needs to be applied carefully.
AI systems like chatbots can be subjected to rigorous questioning to test their validity.
Tod: “Subjected to the most rigorous testing you can have, the most Skeptical people attack it from every direction. And if it can still withstand that storm, then people will have faith in it.”
Current corporate AI systems are biased, so an open-source system would be better for analysis.
Alex: “…So Elon Musk is building his LLM off of Twitter, off of X. So he has the advantage of all that, all that data that’s on X, but he’s also saddled by all that data that’s on X”
AI cannot currently lie or avoid questions like humans can when confronted.
Alex: “GPT never says no. Or when it does, you can catch it and you go, no, over here.”
Current systems are biased to exclude certain topics and people.
Alex: “And then I go over to Claude and I get the information, and then I keep coming back and it’s giving me the information and not giving me the information. And I go, you’re shadow banning.”
The goal should be to create an open-source AI focused on logic and reason to mediate discussions.
Tod: “Leave it up to the superhuman power of rational analysis that humans have created with this AI and that, that, that people can trust.”
This will be difficult with corporate and government resistance.
Alex: “But what’s gonna be holding us back are all the shenanigans that are going on to prevent this now, you know?”
They want to build a coalition to promote this idea, starting with philosophy groups.
The technology for an open-source AI mediator is possible and developing it is a moral imperative.
Tod: “I think AI makes this, what seemed to me to be a pipe dream, now a moral imperative. ’cause we have the technology that can accelerate this process so much that this becomes so possible that it becomes morally requisite, and not to do it would be a moral failure.”
60
views