
    China Proposes Mandatory Labeling of AI-Generated Content

    The draft regulations would punish content creators and online platforms for failing to clearly label content that was generated by artificial intelligence.

    China has proposed new regulations that would make it mandatory to clearly label any content generated by artificial intelligence, as the country tries to clamp down on a surge in AI-related fraud.

The draft guidelines, published on Sept. 14 and open for public comment for one month, would require all AI-generated images, videos, and audio to be clearly labeled with a watermark and embedded metadata.

    This marks the first time that the Cyberspace Administration of China has proposed specific rules regarding the labeling of AI-generated content.

According to the guidelines, AI-generated videos must contain an “explicit” label that is displayed at the beginning of the clip and remains visible at all times in a corner of the screen. They also recommend displaying an additional label at “appropriate” moments during the video.

    Metadata recording the file’s source and copyright information — referred to in the guidelines as an “implicit” label — must also be logged at the time of creation.
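The guidelines do not prescribe a concrete metadata format for this “implicit” label. A minimal sketch in Python of such a record, generated at creation time, might look like the following; every field name here is an assumption made for illustration, not part of the draft rules:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_implicit_label(file_bytes: bytes, source: str, copyright_holder: str) -> dict:
    """Build a hypothetical 'implicit label' for a generated file.

    The draft guidelines require metadata recording the file's source and
    copyright information at creation time, but define no schema; the
    fields below are illustrative only.
    """
    return {
        "ai_generated": True,
        "content_sha256": hashlib.sha256(file_bytes).hexdigest(),
        "source": source,  # e.g., the name of the generating service
        "copyright": copyright_holder,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

label = make_implicit_label(b"<generated image bytes>", "ExampleGen v1", "Example Studio")
print(json.dumps(label, indent=2))
```

In practice such a record would be embedded in the file itself (for instance in an image's metadata chunks) rather than stored alongside it.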

    The rules would apply not only to AI companies but also to individual content creators, online platforms, app stores, and any other content distributor, the guidelines state.

    Content distribution platforms are obligated to label files suspected of being AI-generated if the metadata is missing, according to the guidelines. App stores are required to ensure that content providers label AI-generated content correctly.
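A platform-side check along these lines could, in principle, be as simple as the following sketch. The field name `ai_generated_declared` is an assumption, since the draft does not define a metadata schema:

```python
from typing import Optional

def platform_label_needed(metadata: Optional[dict]) -> bool:
    """Return True when a distribution platform should flag content as
    suspected AI-generated: the implicit label is missing entirely, or
    it does not declare the content's origin. The field name is a
    placeholder; the draft guidelines specify no schema.
    """
    if not metadata:
        return True
    return not metadata.get("ai_generated_declared", False)

# A file that arrives with no embedded metadata must be flagged.
print(platform_label_needed(None))  # True
# A file whose metadata already declares its origin needs no extra label.
print(platform_label_needed({"ai_generated_declared": True}))  # False
```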

    If a user requests a service provider to deliver an AI-generated piece of content without the correct labeling, the service provider must keep a log of this request for at least six months.

The proposals are the latest effort by Chinese authorities to combat a surge in AI-related fraud cases. According to the Chinese AI startup RealAI, over 185 million yuan ($26 million) was stolen through AI-enabled fraud in China during the first five months of 2024, up from just 16.7 million yuan during the whole of 2023, more than a tenfold increase.

    Deepfake technology, voice synthesis, and AI chatbots are listed as “typical high-risk application scenarios” in the guidelines, highlighting their potential to be used to defraud users.

China first mandated the labeling of deepfakes in January 2023, with a policy requiring that services able to “generate or significantly alter content” clearly watermark any output that could “cause public confusion or misrecognition.”

    The government then passed another set of measures specifically targeting AI-generated content in August 2023. The same month, the National Information Security Standardization Technical Committee published a guide detailing how to correctly label AI-generated text, images, audio, and video.

The new guidelines would build on that earlier legislation by making labeling mandatory. But legal experts have cautioned that they may not be easy to implement in practice.

    Ma Ce, an attorney from Zhejiang Kinding Law Firm specializing in internet law, said that the guidelines lack a clear definition of what constitutes “AI-generated content requiring watermarking.”

    The requirement that creators and online platforms correctly add metadata to AI-generated files will pose technical challenges, Ma added.

    The guidelines also do not specify the penalties for content creators and distributors who violate the rules.

    China is not the only country trying to ensure that AI-generated content is clearly labeled. The European Union, United States, Singapore, and Canada are all moving forward with regulation in this area.

    However, there remain a number of questions regarding how to implement any rules requiring the watermarking of online content.

    On Sept. 19, Adobe, OpenAI, and Microsoft backed a bill in California that requires tech companies to clearly label AI-generated content and offer free AI detection tools to users. Violators of these rules would face a $5,000 fine per infraction, with the bill set to come into force on Jan. 1, 2026.

    (Header image: Francesco Carta fotografo/Getty Creative/VCG)