Image created with Bing AI using the prompt “create an image of a robot recording a podcast.”
In the last year, you might have noticed the emergence of AI-related positions on LinkedIn that fall under the umbrella of “AI product manager.” Similarly, newsrooms have expanded their “prompt designer,” “technology reporter” and “innovation strategist” teams to navigate the complicated tug-of-war between AI and the information business.
However, journalists have been following emerging technology for a long time, with Kara Swisher leading the beat. Swisher, an American journalist who has written three books on the business of the internet, is known for co-founding the Code Conference series, writing tech columns for The New York Times and interviewing figures such as Sam Altman and Mark Zuckerberg on her podcast, Recode Decode.
On Sept. 7, Swisher came to the Cactus Cafe for a conversation with Casey Boyle, director of UT Austin’s Digital Writing and Research Lab.
Her stance on AI tools remains unwavering: businesses (and governments) need to adopt regulations that prevent monopolies, protect consumers and ensure algorithmic transparency on social platforms.
Average users should not be the ones left to suffer the pitfalls of irresponsible AI development, Swisher said. Instead, major AI companies should pace themselves with social and legal precautions.
Echoing Swisher’s demand for regulation is the Associated Press (AP) verification team. On Sept. 12, I attended their panel, “How AI is Reshaping the Local Political Landscape,” to find out how newsrooms have been dealing with the spread of AI content in the public sector.
The most apparent impact of AI has been observed in down-ballot campaigns targeting lesser-known candidates with tighter budgets. One example was a political opposition ad in Shreveport, Louisiana, that used machine learning to depict a reelection candidate being scolded in a principal’s office. The ad succeeded in swaying public opinion toward electing a new mayor.
The AP verification team also brought up AI-generated Taylor Swift endorsements of former President Trump and the New Hampshire robocall imitating President Biden, noting that neither party has disclosed its official use of AI.
When considering AI content on social media, the AP verification team raised the dilemma of validation. A popular figure’s repost of manipulated content can be the turning point for its misrepresentation and virality, as when Elon Musk reshared a Kamala Harris parody on X without the parody tag. It’s therefore worth thinking twice before publishing news (or debunks) about AI, since reporting on certain information can amplify its impact.
The AI Team at Moody has sought to understand AI recognition among students, too.
In April 2024, we ran a survey with questions about real and fake human faces to see exactly how students identify AI-generated images. We then asked ChatGPT to organize the participant responses into recurring patterns, and found that unnatural eye clarity, unrealistically smooth skin, lack of hair detail, odd proportions and an expressionless demeanor were among the features students most often used to identify deepfakes on the fly.
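For readers curious about the mechanics, here is a minimal sketch of how free-text survey answers could be grouped into recurring patterns with a language-model API. The model name, prompt wording and sample responses are illustrative assumptions, not our actual survey pipeline.

```python
# Hypothetical sketch: grouping free-text survey answers into recurring themes
# with an LLM. The model, prompt and sample data are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Example answers of the kind the survey might collect
responses = [
    "The eyes looked too sharp compared to the rest of the face.",
    "Skin was way too smooth, like plastic.",
    "Individual hair strands blurred together near the edges.",
]

prompt = (
    "Group the following survey answers about spotting AI-generated faces "
    "into recurring patterns, and give each pattern a short name:\n\n"
    + "\n".join(f"- {r}" for r in responses)
)

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat-capable model would work
    messages=[{"role": "user", "content": prompt}],
)
print(completion.choices[0].message.content)
```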
To gauge signs of manipulated content, the AP verification team highlighted other overarching factors: the content creator, local knowledge and common sense. When evaluating unvetted content online, the team suggested that consulting university researchers and collaborating with reporters in the field are the bare minimum journalists can do during this uncertain, free-for-all stage of the AI boom.
As unregulated tech makes headlines around the world, journalists can adapt common practices to reduce the risk of the news industry becoming part of the problem. Keeping an eye out for targeted down-ballot ads, asking experts for advice and weighing the validation dilemma are crucial steps journalists are taking right now, even as new developments test those practices every day.