For Chinese Journalists, An Uphill Battle to Scrutinize AI
In 2019, my research partners and I began to track popular news stories about artificial intelligence on Chinese social media to answer two questions: What aspects of AI and algorithmic systems have been on the radar of Chinese journalists, and what journalistic techniques do they use in their reporting?
After four years of research and sifting through hundreds of news stories on WeChat, China’s leading social media app, we got mixed results. On the one hand, we found that the rise of AI has opened a space for critical journalism in China, enabling investigations into the technology’s overarching impact on work and private life. On the other, we found very limited use of advanced techniques to explain to readers the power, mistakes, and biases of AI algorithms – what researchers call “algorithmic accountability reporting.”
Instead of advanced techniques, such as reverse engineering an AI image generator by trying out different inputs to understand how it can be discriminatory, Chinese journalists rely mostly on traditional reporting methods, including field research, interviews, and expert accounts.
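To illustrate what such a reverse-engineering approach can look like in its simplest form, here is a hypothetical sketch of an input-output audit: probe an image generator with systematically varied prompts, label what its outputs depict, and compare the rates across prompts. The `mock_generate` function is a stand-in I have invented for this example; a real audit would replace it with actual model calls plus human or automated labeling of the generated images.

```python
from collections import Counter

# Hypothetical audit sketch: probe a text-to-image model with prompt
# variants and tally an attribute of interest in its outputs. The model
# call is mocked here with deterministic, made-up skews purely for
# illustration -- these are NOT real measurements of any system.

PROMPTS = ["a photo of a doctor", "a photo of a nurse"]
SAMPLES_PER_PROMPT = 100

def mock_generate(prompt: str, n: int) -> list[str]:
    """Stand-in for (model + labeler): returns the perceived gender of
    the person depicted in each of n generated images (mocked)."""
    skew = {"a photo of a doctor": 0.9, "a photo of a nurse": 0.1}[prompt]
    cutoff = int(n * skew)
    return ["male"] * cutoff + ["female"] * (n - cutoff)

def audit(prompts: list[str], n: int) -> dict[str, float]:
    """Return, for each prompt, the share of outputs labeled 'male'."""
    rates = {}
    for p in prompts:
        counts = Counter(mock_generate(p, n))
        rates[p] = counts["male"] / n
    return rates

if __name__ == "__main__":
    for prompt, share in audit(PROMPTS, SAMPLES_PER_PROMPT).items():
        print(f"{prompt}: {share:.0%} of outputs depict men")
```

The method itself needs no special access to the model: the journalist only controls the inputs and observes the outputs, which is precisely why it can work even when companies refuse interviews.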
This stands in sharp contrast to major Chinese media companies’ ambitious pledges to apply AI in newsrooms, a trend that can be traced back to the 2010s and has been accelerated by the heightened attention on large language models since 2022. Newsrooms across the country are racing to introduce “virtual anchors”, automate news writing, and experiment with video creation with the help of generative AI. As traditional media around the world struggle to stay relevant, these efforts aim to demonstrate that journalism is still worth investing in.
While AI is being hailed as potentially revolutionary for the news industry, this does not seem to have translated into advanced and complex reporting about the technology itself. What explains this gap?
First, we found clues in journalists’ academic and professional backgrounds. Most of the 23 high-profile AI-related investigations we analyzed were produced by journalists with backgrounds in social or business reporting. While they may be experienced in covering social issues such as labor and consumer rights, they likely lack the resources and expertise needed for algorithmic accountability reporting involving more advanced techniques.
Industry insiders tell me that the main obstacle to recruiting journalists adept at using advanced techniques is limited newsroom resources. Talent is difficult to find and expensive. More and more resources are being devoted to opening up new distribution channels for content, from app development to social media, rather than to original reporting.
As a result, most newsrooms remain labor-intensive places, with reporters and editors working in much the same way as they did a decade ago. On top of their already heavy reporting workloads, Chinese journalists are also increasingly being asked to help with the business side of their news operations, such as finding corporate clients.
To be clear, I am not telling Chinese journalists how to do their job. Some of them have achieved remarkable feats using traditional reporting methods, with cases where tech giants have enacted changes to their algorithms following high-profile investigations. However, as AI becomes increasingly prevalent in our daily lives, there is a growing need for more critical examination of these systems.
Without the ability to conduct advanced investigations into AI, Chinese journalists are faced with imperfect options: rely on outside experts, who can provide technical knowledge but may lack insight into specific AI systems, or on tech companies themselves, which rarely give journalists access to proprietary information or their engineers. According to journalist Lai Youxuan, most system engineers at food delivery giants Meituan and Ele.me rejected her interview requests for what later became her groundbreaking 2020 investigation into food delivery platforms’ algorithms, citing “company confidentiality.”
This undermines Chinese journalists’ chances of holding AI systems accountable, with the odds only worsening as AI systems become increasingly complex. To improve their chances, Chinese newsrooms would ideally allocate more resources to training staff in advanced techniques, recruit more staff with tech reporting backgrounds, and encourage exchanges between journalists and AI experts. We have seen increasing interest in organizing such training programs and exchanges.
However, with many Chinese newsrooms struggling financially, a more realistic first step might be to educate all staff journalists on the basics of AI: What actually is AI? How is it developed? What issues arise at each stage of development? General AI literacy in the newsroom is important because journalism itself is not immune to biases caused by AI. For example, social media algorithms may present search results tailored to a journalist’s own user profile, exacerbating confirmation bias. As AI adoption in newsrooms grows, clear guidelines should be established on the ethical use of AI across the entire chain of content creation, from newsgathering to publishing.
With better overall AI literacy, journalists will not only be in better positions to conduct critical journalistic investigations on AI algorithms but also be more likely to adopt AI technologies in their work in more productive and, more importantly, responsible ways.
Ultimately, a newsroom’s core asset is its people, not its technology. Just as AI more broadly should always be beholden to humans, and not the other way around, management in newsrooms should also remember that AI reporting tools and advanced reporting techniques, important as they are, are only worth investing in if they will be wielded by high-quality journalists.
Ji Xiaolu, a postgraduate student in Media Studies at the University of Amsterdam, made an equal contribution to this article.
Editor: Vincent Chow; portrait artist: Wang Zhenhao.
(Header image: Visuals from VCG, reedited by Sixth Tone)