Weekly Roundup – November 18th, 2025
Roundup Links
China Is Cracking Down on AI Slop
"Like in other countries unleashing AI, China has been plagued by a rise in AI-generated fake news, particularly in the wake of natural disasters. As Nikkei reports, authorities have been making examples out of a number of misinformation peddlers, including a person who shared AI-generated images of a baby covered in debris following an earthquake, and a 28-year old man who faked his daughter’s kidnapping.
The regulation on misinformation is one of many provisions included in the campaign. Also forbidden: using AI to create and spread rumors, generate pornographic or violent images, impersonate others, manipulate web traffic or conduct “online trolling,” or abuse minors.
Taken together, the rules read like a shopping list of badly needed regulations in the US, where AI-generated misinformation and harmful images are nearly endemic across all social media, and where minors in particular have proven especially vulnerable to harms caused by AI models."
Our Take: While the regulations detailed in this article might be a net social good (restricting the use of AI "…to create or spread rumors, generate pornographic or violent images, impersonate others"), the truth is that Americans love the slop. In previous roundups we speculated that people might turn away from social media platforms (and YouTube) because of the sheer volume of AI content blasted at them. But it turns out your average American loves AI content and will happily gobble up hours of AI slop on Instagram Reels, TikTok, or YouTube. One need only look at the song 'Walk My Walk' by Breaking Rust – an otherwise competent AI imitation of contemporary country music currently topping the Billboard chart for country digital song downloads.
An AI Podcasting Machine Is Churning Out 3,000 Episodes a Week — and People Are Listening
"No industry is safe from artificial intelligence. Not even podcasting.
This isn’t hyperbole. There are already at least 175,000 AI-generated podcast episodes on platforms like Spotify and Apple. That’s thanks to Inception Point AI, a startup with just eight employees cranking out 3,000 episodes a week covering everything from localized weather reports and pollen trackers to a detailed account of Charlie Kirk’s assassination and its cultural impact, to a biography series on Anna Wintour.
Its podcasting network Quiet Please has generated 12 million lifetime episode downloads and amassed 400,000 subscribers — so, yes, people are really listening to AI podcasts."
Our Take: AI slop podcasts are here, and the people love them. Passive consumers are not discerning consumers, and passivity defines content consumption in a fragmented, attention-starved media ecology.
Deezer/Ipsos survey: 97% of people can’t tell the difference between fully AI-generated and human made music – clear desire for transparency and fairness for artists
"Deezer, the global music experience platform, has commissioned the world’s first survey focused on perceptions and attitudes towards AI-generated Music. The survey was carried out by Ipsos with a total of 9000 people in 8 countries – United States, Canada, Brazil, UK, France, Netherlands, Germany and Japan. It revealed a clear desire for tagging 100% AI-generated music and making sure artists and songwriters are being fairly treated and paid if their music is used to train AI-models.
Initially, all participants were asked to listen to three tracks and determine whether or not they were fully AI-generated – 97% of the respondents failed. A majority (71%) of the respondents were surprised by these results and more than half (52%) felt uncomfortable by not being able to tell the difference."
Our Take: A credible survey finds that 97% of people can't distinguish AI-generated music from the real thing. Not to sound the alarm, but this is an existential threat to the entire music industry. A little over half of respondents (52%) were uncomfortable with not being able to clock the AI music, but that shock and discomfort will subside in due course, once audiences have been beaten over the head with enough AI content. There's a reason most platforms aren't meaningfully cracking down on AI slop: it's cheap to produce and potentially just as profitable as the real thing.
A.I. Sweeps Through Newsrooms, but Is It a Journalist or a Tool?
"Artificial intelligence is sweeping through newsrooms, transforming the way journalists around the world gather and disseminate information. Traditional news organizations increasingly use tools from companies like OpenAI and Google to streamline work that used to take hours: sifting through reams of information, tracking down sources and suggesting headlines.
In some cases, including at Fortune and Business Insider, publications have explored using A.I. to write full articles, notifying readers they intend to use it for drafts.
Almost all of the news organizations have some guardrails in place to prevent errors, such as requiring a human to review anything that A.I. writes before it is published. But some embarrassing errors have appeared nonetheless, including from top publications such as Bloomberg, Business Insider and Wired.
And many journalists have also been left to wonder: Will A.I. replace journalism jobs in an already fast-shrinking market — or, rather, which jobs?
“A.I. is an extraordinary tool for journalists,” said Stephen Adler, a former editor in chief of Reuters who now runs the Ethics and Journalism Initiative at New York University. “It excels at analyzing large data sets, organizing notes, checking spelling and grammar, even pointing out possible flaws in a story. But, as with much of technology, it comes with significant risks.”"
Our Take: Yes, AI can be used in the newsroom to enhance the investigations of real journalists as they compose real news reports. The article frames AI as a powerful, quasi-magical tool that merely supplements human reportage. That framing is idealistic. In reality, corporations and for-profit news organizations are always looking for ways to cut costs, and once AI is good enough to do the work on its own, it will simply replace human jobs rather than 'enhance' existing ones – especially the jobs that come with a paycheck. Prestigious outlets like the New York Times and the Wall Street Journal might proudly boast that they hire real reporters who do real reporting. But something tells me that in a few years most people won't care whether a real person is doing the 'real reporting'.