Create AI policies now to avoid lawsuits later!
I’ve advocated for external-facing guidelines on artificial intelligence so that publishers can protect themselves against bots that scrape their content, train models on it, and then use that same content to compete with them. But that’s not today’s topic.
Publishers also need internal guidelines about how AI can be used within their organization.
Bo Sacks distributed an article by Pierre de Villiers that makes this point pretty well. I’ll link to it below.
I’m a big fan of checklists, so I started to dream up a list of topics that such a policy ought to address.
* Can AI be used at all? I know some companies that prohibit it altogether.
* If AI can be used, is there a preference for one over another (e.g., ChatGPT over Bard)?
* Or, rather, should there be a requirement that the results from one large language model be checked against another? (A sketch of what such a cross-check might look like follows this list.)
* If an article does rely on AI, does that need to be disclosed to the fact-checker, the editor, or the reader?
* There are different levels of use of AI. Should these be defined and distinguished? For example, it might be okay to use AI for background, but not okay to use the text verbatim.
* Are there special procedures to deal with AI’s known bias and inaccuracies? Along those lines, this quote from the article made me laugh. “If you’re The New York Times, you cannot publish things that are not checked for biases.” I guess that means the correct biases.
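For teams that do adopt a cross-check rule like the one in the list above, the check itself can be scripted. Here is a minimal sketch, assuming the openai Python SDK; the model names, question, and prompts are placeholders of my own, and the second answer could just as easily come from a different vendor entirely.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder model names: substitute whichever two models your policy designates.
MODEL_A = "gpt-4o"
MODEL_B = "gpt-4o-mini"

QUESTION = "Summarize the main provisions of the EU AI Act in three sentences."


def ask(model: str, prompt: str) -> str:
    """Send a single-turn prompt to one model and return its text reply."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


answer_a = ask(MODEL_A, QUESTION)
answer_b = ask(MODEL_B, QUESTION)

# Ask one model to flag factual disagreements between the two drafts,
# so an editor knows which claims need human verification.
comparison = ask(
    MODEL_A,
    "Two AI assistants answered the same question. "
    "List any factual claims on which their answers disagree.\n\n"
    f"Question: {QUESTION}\n\nAnswer 1:\n{answer_a}\n\nAnswer 2:\n{answer_b}",
)
print(comparison)
```

The comparison step only needs both drafts as plain text, so the policy can name any two models it likes without changing the workflow.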
In addition to policies, publishers need a user’s guide.
* As inaccuracies and biases are discovered, they should be reported to the people who use the technology so they can keep an eye out for them.
* As people find ways to work around such inaccuracies and biases, those workarounds should be shared too. For example, it’s often a good idea to ask ChatGPT whether what it just told you is true; it’s remarkable how often it catches its own errors. (A sketch of this self-check follows this list.)
* Employees should share prompts and techniques that work well for specific needs.
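The self-check tip above is easy to demonstrate. The sketch below again assumes the openai Python SDK; the model name and the sample question are placeholders. It asks a question, then hands the model its own answer and asks it to flag anything it is unsure about.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"  # placeholder model name

question = "When was the first transatlantic telegraph cable completed?"

# First pass: ask the question normally.
first = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": question}],
)
draft = first.choices[0].message.content

# Second pass: include the model's own answer in the conversation
# and ask it to check that answer for errors.
check = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "user", "content": question},
        {"role": "assistant", "content": draft},
        {
            "role": "user",
            "content": "Is everything in your previous answer accurate? "
                       "Point out anything you are unsure about or got wrong.",
        },
    ],
)

print("Draft answer:\n", draft)
print("\nSelf-check:\n", check.choices[0].message.content)
```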
The user’s guide sounds like a user’s group, or a bulletin board for everyone on staff so they can collectively learn how to use AI more effectively.
Who should create these AI policies? I really liked this quote from the article.
“The key to an effective AI strategy … is to combine the managerial ability of a company’s top structure with the creativity of those on the shop floor trying to find the best way to make AI work for them.
“You want to set a clear direction within your company, but you also want innovation and clever use cases, and that seldom comes from the top of the organisation…. They come from the people using the technologies.”
A good policy, in my opinion, will need input from at least three groups: corporate, legal, and the people who actually use the AI.
Resources
Steffen Damborg: Publishers must urgently draw up AI guidelines
https://mediamakersmeet.com/steffen-damborg-publishers-must-urgently-draw-up-ai-guidelines/