The secret to enterprise AI success: Make it understandable and trustworthy - VentureBeat
July 16, 2023 8:20 AM Image Credit: VentureBeat made with Midjourney

The promise of artificial intelligence is finally coming to life. Be it healthcare or fintech, companies across sectors are racing to implement LLMs and other forms of machine learning systems to complement their workflows and free up time for more pressing, higher-value tasks. But it is all moving so fast that many may be ignoring one key question: How do we know the machines making decisions are not hallucinating?

In healthcare, for instance, AI has the potential to predict clinical outcomes or discover drugs. If a model veers off-track in such scenarios, it could produce results that end up harming a person or worse.

This is where AI interpretability comes in: the process of understanding the reasoning behind decisions or predictions made by machine learning systems, and making that information comprehensible to decision-makers and other relevant parties with the autonomy to make changes. When done right, it can help teams detect unexpected behaviors and root out issues before they cause real damage.

But that is far from a piece of cake.

First, let's understand why AI interpretability is a must

As critical sectors like healthcare continue to deploy models with minimal human supervision, AI interpretability has become important to ensuring transparency and accountability in the systems being used. Transparency ensures that human operators can understand the underlying rationale of the ML system and audit it for bias, accuracy, fairness and adherence to ethical guidelines.
Meanwhile, accountability ensures that identified gaps are addressed on time. The latter is particularly essential in high-stakes domains such as automated credit scoring, medical diagnosis and autonomous driving, where an AI's decision can have far-reaching consequences.

Beyond this, AI interpretability helps establish trust in and acceptance of AI systems. When individuals can understand and validate the reasoning behind decisions made by machines, they are more likely to trust their predictions and answers, driving widespread acceptance and adoption. Just as importantly, when explanations are available, it is easier to address questions of ethical and legal compliance, whether over discrimination or data usage.

AI interpretability is no easy task

While the benefits of AI interpretability are obvious, the complexity and opacity of modern machine learning models make it a serious challenge. Most high-end AI applications today use deep neural networks (DNNs), which employ multiple hidden layers to enable reusable modular functions and deliver better efficiency in using parameters and learning the relationship between input and output. Given the same amount of parameters and data, DNNs readily produce better results than shallow neural networks, which are often used for tasks such as linear regression or feature extraction.

However, this architecture of many layers and thousands or even millions of parameters renders DNNs highly opaque, making it difficult to understand how specific inputs contribute to a model's decision. In contrast, shallow networks, with their simple architecture, are highly interpretable.

[Figure: The structure of a deep neural network (DNN). Image by author]

To sum up, there is often a trade-off between interpretability and predictive performance.
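To make the contrast concrete, here is a minimal, hypothetical sketch in Python (the feature names and weights are invented for illustration, not drawn from any real system): in a simple linear model, each input's contribution to the prediction can be read directly off its coefficient, something a million-parameter DNN does not offer.

```python
# A hypothetical linear credit-scoring model with three input features.
# Because the model is just a weighted sum, each feature's contribution
# to the final score is directly readable from its coefficient.

FEATURES = ["income", "debt_ratio", "late_payments"]
WEIGHTS = {"income": 0.5, "debt_ratio": -0.3, "late_payments": -0.8}
BIAS = 1.0

def score(applicant: dict) -> float:
    """Linear model: score = bias + sum(weight_i * feature_i)."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in FEATURES)

def explain(applicant: dict) -> dict:
    """Per-feature contributions -- the 'explanation' of the decision."""
    return {f: WEIGHTS[f] * applicant[f] for f in FEATURES}

applicant = {"income": 4.0, "debt_ratio": 2.0, "late_payments": 1.0}
print(score(applicant))  # 1.0 + 2.0 - 0.6 - 0.8 = 1.6
for feature, contribution in explain(applicant).items():
    print(feature, contribution)
```

An auditor checking this toy model for bias or accuracy can inspect the weights themselves; there is no hidden layer between input and output to reverse-engineer.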
If you opt for a high-performing model such as a DNN, the system may not deliver transparency, while if you opt for something simpler and interpretable, such as a shallow network, the accuracy of results may not be up to the mark. Striking a balance between the two remains a challenge for researchers and practitioners worldwide, especially given the lack of a standardized interpretability technique.

What can be done?

To find some middle ground, researchers are developing rule-based and interpretable models, such as decision trees and linear models…
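As an illustrative sketch of the rule-based approach (the thresholds, feature names and triage labels below are invented, not taken from the article or any clinical guideline), a small hand-written decision tree makes every prediction traceable to an explicit chain of rules:

```python
# A hypothetical rule-based model for a medical-triage-style decision.
# Unlike a DNN, the full decision path can be returned alongside the
# answer, so a human reviewer can audit exactly why the model decided
# as it did.

def triage(temperature_c: float, heart_rate: int) -> tuple[str, list[str]]:
    """Return (decision, decision_path) for a toy decision tree."""
    path = []
    if temperature_c >= 39.0:
        path.append(f"temperature {temperature_c} >= 39.0")
        if heart_rate >= 120:
            path.append(f"heart rate {heart_rate} >= 120")
            return "urgent", path
        path.append(f"heart rate {heart_rate} < 120")
        return "priority", path
    path.append(f"temperature {temperature_c} < 39.0")
    return "routine", path

decision, path = triage(39.5, 130)
print(decision)               # urgent
print(" AND ".join(path))     # the human-readable rationale
```

The trade-off the article describes shows up even here: such a tree is trivially auditable, but it will rarely match a deep network's accuracy on complex inputs.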