X to suspend paid creators 90 days for unlabeled AI war videos, permanent ban for repeaters
X will suspend Creator Revenue Sharing participants for 90 days for posting AI-generated armed conflict videos without disclosure, with repeat offenders permanently removed from the program.

X announced a new policy this week that will suspend creators from its Creator Revenue Sharing program for 90 days if they post AI-generated videos depicting armed conflict without disclosing the footage was made with AI, and will permanently remove repeat violators from the revenue program. The change targets creators who receive ad revenue through the platform and marks a notable enforcement step aimed at limiting the spread of misleading wartime imagery.
X said it will identify violations through a blend of generative-AI detection tools, metadata and other signals embedded in content, and its crowdsourced fact-checking system, Community Notes. The company said posts that trigger a Community Note or contain AI tool metadata will be flagged for enforcement. The platform also recently rolled out a visible “Made with AI” label to inform viewers when material was produced using generative tools.
In an explanatory post attributed to X’s head of product, Nikita Bier, the company defended the policy as essential during violent conflicts. “Today, we are updating our Creator Revenue Share terms to maintain the integrity of content on Timeline and prevent manipulation of the program. During times of war, it is crucial that people have access to accurate information from the ground. However, with today's AI technology, it is easy to create content that misleads people. Going forward, users who post AI-generated videos of an armed conflict without disclosing that they were created with AI will be suspended from the Creator Revenue Share program for 90 days. Subsequent violations will result in permanent ban from the program. Posts with community notes or content that includes metadata (or other signals) from the generative AI tool will be flagged. We are continuously evolving our policies and products to ensure X is trustworthy.”
The policy applies specifically to the Creator Revenue Sharing program, which pays eligible users a share of advertising revenue generated by their posts. For creators who rely on that income, a 90-day suspension can cut off a steady stream of payouts and affect partnerships and sponsorships that are often tied to platform engagement. X did not immediately say how quickly flagged posts will lead to suspension, what threshold of evidence will trigger enforcement, or whether creators will be able to appeal.

The move comes as the company that rebranded from Twitter after Elon Musk’s $44 billion 2022 acquisition faces renewed scrutiny over content moderation and misinformation. Company officials framed the change as a targeted measure to preserve informational integrity during conflicts in which fabricated or AI-manipulated video can rapidly influence public perception.
Operational questions remain. It is not yet clear how the platform will determine whether a disclosure is sufficient, whether automated labels such as the “Made with AI” tag will be accepted as compliant, or how Community Notes will interact with automated detection when a note and metadata disagree. Creators and civil society groups have pushed platforms to pair enforcement with clear guidance and timely appeals to avoid erroneous penalties when detection tools misclassify human-made or mixed-origin footage.
X said it will continue to refine its policies and product signals as generative AI develops. For creators, the immediate implication is sharp: failing to disclose AI-generated conflict footage risks a three-month cutoff from ad revenue, with permanent exclusion for subsequent violations.

