The secret to enterprise AI success: Make it understandable and trustworthy - VentureBeat
July 16, 2023 8:20 AM
Image Credit: VentureBeat made with Midjourney

The promise of artificial intelligence is finally coming to life. Be it healthcare or fintech, companies across sectors are racing to implement LLMs and other forms of machine learning systems to complement their workflows and free up time for more pressing, high-value tasks. But it is all moving so fast that many may be ignoring one key question: How do we know the machines making decisions are not prone to hallucination?

In the field of healthcare, for instance, AI has the potential to predict clinical outcomes or discover drugs. If a model veers off-track in such scenarios, it could produce results that harm a person, or worse. Nobody would want that.

This is where the concept of AI interpretability comes in. It is the process of understanding the reasoning behind decisions or predictions made by machine learning systems and making that information comprehensible to decision-makers and other relevant parties with the autonomy to make changes. When done right, it can help teams detect unexpected behaviors and address the issues before they cause real damage.

But that is far from a piece of cake.

First, let's understand why AI interpretability is a must

As critical sectors like healthcare continue to deploy models with minimal human supervision, AI interpretability has become important for ensuring transparency and accountability in the systems being used. Transparency ensures that human operators can understand the underlying rationale of the ML system and audit it for biases, accuracy, fairness and adherence to ethical guidelines. Accountability ensures that the gaps identified are addressed promptly. The latter is particularly essential in high-stakes domains such as automated credit scoring, medical diagnosis and autonomous driving, where an AI's decision can have far-reaching consequences.

Beyond this, AI interpretability also helps establish trust in and acceptance of AI systems. When individuals can understand and validate the reasoning behind decisions made by machines, they are more likely to trust the resulting predictions and answers, leading to wider acceptance and adoption. Just as importantly, when explanations are available, it is easier to address questions of ethical and legal compliance, whether over discrimination or data usage.

AI interpretability is no easy task

While the benefits of AI interpretability are obvious, the complexity and opacity of modern machine learning models make it one hell of a challenge. Most high-end AI applications today use deep neural networks (DNNs) that employ multiple hidden layers to enable reusable modular functions and deliver better efficiency in utilizing parameters and learning the relationship between input and output. DNNs easily produce better results than shallow neural networks (often used for tasks such as linear regression or feature extraction) given the same number of parameters and the same data. However, this architecture of multiple layers and thousands or even millions of parameters renders DNNs highly opaque, making it difficult to understand how specific inputs contribute to a model's decision.
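To make the scale of that opacity concrete, here is a minimal sketch, assuming PyTorch (the framework and layer widths are illustrative choices, not from the article), of a small deep network and its parameter count:

```python
import torch.nn as nn

# A small "deep" classifier: several hidden layers stacked between input and output.
deep_net = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 2),
)

# Every weight and bias is a learned parameter. Even this toy model has roughly
# 150,000 of them, and no single one maps to a human-readable rule.
n_params = sum(p.numel() for p in deep_net.parameters())
print(f"parameters: {n_params:,}")
```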
In contrast, shallow networks, with their simple architecture, are highly interpretable.

[Figure: The structure of a deep neural network (DNN). Image by author]

To sum up, there is often a trade-off between interpretability and predictive performance. If you go for high-performing models such as DNNs, the system may not deliver transparency, while if you go for something simpler and more interpretable, such as a shallow network, the accuracy of the results may not be up to the mark. Striking a balance between the two continues to be a challenge for researchers and practitioners worldwide, especially given the lack of a standardized interpretability technique.

What can be done?

To find some middle ground, researchers are developing rule-based and interpretable models, such as decision trees and linear models...
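As an illustration of how such rule-based models surface their reasoning, here is a minimal sketch, assuming scikit-learn and its bundled iris dataset (both illustrative choices, not from the article), that trains a shallow decision tree and prints the learned rules:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Train a deliberately shallow tree so every prediction can be traced by hand.
X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text prints the learned if/then rules directly: the model is its own explanation.
print(export_text(tree, feature_names=load_iris().feature_names))
```

The printed rules read as plain threshold checks on named features, which is exactly the kind of transparency the trade-off described above gives up when moving to a DNN.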