Synthetic Media Is Becoming the Next Big Brand Safety Problem
AI video tools are making it easier than ever to create realistic promotional content.
Brands can now generate scenes, edit faces, clone voices, modify performances, create synthetic characters, and produce video ads at a speed that traditional production could never match.
That power is changing marketing. But it is also creating a serious new risk.
As synthetic media becomes more convincing, brands need to think carefully about consent, likeness rights, disclosure, trust, and brand safety. The question is no longer just, “Can we create this?”
The better question is, “Do we have the right to create this, and will people trust us after they see it?”
That is why synthetic media is becoming the next big brand safety problem.
Why This Topic Is Timely
A recent Business Insider report highlighted actors speaking out after their likenesses were allegedly used in AI-generated promotional ads that placed them in fake sexualized scenes for micro-drama apps. According to the report, several actors said their images were manipulated into misleading ads that did not reflect scenes from the original shows.
The issue is especially urgent because micro-drama apps are growing quickly and competing aggressively for attention. Business Insider reported that U.S. micro-drama industry revenue is projected to double to $7.8 billion in 2026, creating strong pressure to produce high-performing promotional ads quickly.
Regulators are also paying closer attention. Reuters reported that New York passed legislation requiring disclosure when AI-generated synthetic performers are used in advertisements targeted at New York audiences, with the law taking effect June 9, 2026. The same reporting also noted expanded protections around digital replicas of deceased individuals for commercial use.
This is no longer a fringe issue. Synthetic media is now a marketing, legal, ethical, and reputational challenge.
What Is Synthetic Media?
Synthetic media refers to content that has been created, altered, or generated using artificial intelligence or digital manipulation.
It can include:
- AI-generated video
- Deepfake-style face replacement
- Voice cloning
- AI-generated actors or synthetic performers
- Digitally altered product demonstrations
- Manipulated promotional clips
- AI-generated influencer content
- Digital replicas of real people
Some synthetic media is harmless or clearly creative. It can be used for animation, product visualization, concept development, accessibility, translation, entertainment, and experimental storytelling.
The risk begins when synthetic media is realistic enough to confuse people, especially when it uses someone’s identity, voice, face, performance, or reputation without clear permission.
Why Synthetic Media Creates Brand Safety Risk
Brand safety used to be mostly about placement: making sure ads did not appear next to harmful, offensive, or inappropriate content.
That still matters.

But synthetic media creates a different kind of brand safety issue. The risk is not just where the ad appears. The risk is what the ad itself contains.
If a brand uses AI-generated content that misrepresents a person, creates a false endorsement, manipulates a performance, or implies consent that was never given, the brand can damage trust quickly.
Even if a third-party vendor created the content, the audience may still blame the brand connected to the campaign.
That is why synthetic media needs to be treated as a brand safety issue, not just a production shortcut.
The Consent Problem
Consent is the foundation of responsible synthetic media.
If a person’s face, voice, likeness, name, or performance is used in advertising, brands need clear permission. That permission should be specific, documented, and appropriate for the intended use.
It is not enough to assume that because someone appeared in one piece of content, their likeness can be reused in any AI-generated promotion.
Brands should ask:
- Did this person explicitly agree to AI manipulation?
- Did they agree to this specific campaign?
- Did they agree to this platform and ad format?
- Did they agree to this message or scene?
- Are there limits on how their likeness can be used?
- Do contracts clearly cover AI-generated edits or digital replicas?
As AI tools become more capable, old contracts may not be enough. Many agreements were written before realistic generative video became widely accessible.
Manipulated Context Can Be Just as Harmful as a Fake Face
Synthetic media risk is not limited to deepfakes.
A person’s image can be placed in a misleading context without fully replacing their face. Their performance can be edited to suggest something they never did. Their body language, expression, or scene can be altered to create a different impression.
That is especially dangerous in advertising.
Ads are designed to persuade. If an ad uses a person’s likeness in a way that falsely suggests behavior, endorsement, intimacy, approval, or association, the harm can be serious.
For actors, creators, influencers, and public figures, this can affect reputation and future work. For brands, it can create backlash, legal exposure, and loss of credibility.
Disclosure Is Becoming More Important
As synthetic media becomes harder to detect, clear disclosure becomes more important, not less.
Audiences should not have to guess whether a person, voice, scene, or performance is real.
New York’s synthetic performer disclosure law is one example of where regulation is heading. According to Reuters, the law requires clear labeling when AI-generated synthetic performers are used in ads targeted at New York audiences.
That kind of rule reflects a larger principle: if AI-generated realism could mislead people, brands should disclose it clearly.
Disclosure does not solve every problem. A brand still needs consent and quality control. But disclosure helps protect audience trust.
Platforms Are Under Pressure Too
Synthetic media also creates challenges for social platforms.
Business Insider reported that actors said misleading AI-generated promotional ads appeared on platforms such as TikTok and Meta. The same report noted that platforms have policies around AI labeling and nudity, but enforcement can be inconsistent when realistic synthetic content moves quickly through ad systems.
There are also broader concerns around deepfake-style ads and celebrity misuse. Recent reporting from The Verge described AI-generated celebrity deepfake ads being used to promote scams on TikTok, including realistic videos of well-known figures used to lure users toward fraudulent offers.
This shows that synthetic media is not only a creative issue. It is also a platform trust issue.
Why This Matters for Everyday Brands
Some small businesses may assume synthetic media risk only applies to major advertisers, entertainment companies, or celebrities.
That is not true.
As AI video tools become easier to use, everyday brands may face similar risks. A small business might use AI to generate a testimonial-style video. A marketer might create a synthetic spokesperson. A freelancer might edit a client’s product video with AI. A social media manager might use an AI voice or face without realizing the rights are unclear.
The smaller the team, the easier it is to skip review steps.
But the reputational damage can still be real.
Even small brands need basic rules around synthetic media.
Responsible Synthetic Media Starts With a Review Workflow
The solution is not to avoid AI video entirely. The solution is to use it responsibly.
Brands should create a simple synthetic media review workflow before publishing AI-assisted video content.
That workflow should include:
- Consent review
- Likeness rights review
- Disclosure review
- Brand safety review
- Legal or policy review when needed
- Human approval before publishing
This does not need to be complicated for every small piece of content. But the more realistic the synthetic media is, and the more it involves a real person’s identity, the more careful the review should be.
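To make that concrete, here is a minimal sketch of what such a gate could look like in code. Everything in it is hypothetical, from the field names to the example asset; the point is simply that each review step becomes an explicit, recorded yes-or-no before anything ships.

```python
from dataclasses import dataclass, field

@dataclass
class SyntheticMediaReview:
    """Hypothetical pre-publish checklist for one AI-assisted video asset."""
    asset_id: str
    uses_real_likeness: bool
    consent_documented: bool
    likeness_rights_cleared: bool
    disclosure_added: bool
    brand_safety_reviewed: bool
    legal_reviewed: bool  # only strictly required when a real likeness is involved
    human_approved: bool
    notes: list[str] = field(default_factory=list)

    def blockers(self) -> list[str]:
        """Return every unresolved step that should block publishing."""
        issues = []
        if self.uses_real_likeness:
            if not self.consent_documented:
                issues.append("No documented consent for this specific use")
            if not self.likeness_rights_cleared:
                issues.append("Likeness rights not cleared")
            if not self.legal_reviewed:
                issues.append("Legal or policy review missing")
        if not self.disclosure_added:
            issues.append("No AI disclosure on realistic synthetic content")
        if not self.brand_safety_reviewed:
            issues.append("Brand safety review not completed")
        if not self.human_approved:
            issues.append("No human sign-off")
        return issues

# Hypothetical usage: one asset that is almost, but not quite, ready.
review = SyntheticMediaReview(
    asset_id="spring-campaign-v3",
    uses_real_likeness=True,
    consent_documented=True,
    likeness_rights_cleared=True,
    disclosure_added=False,
    brand_safety_reviewed=True,
    legal_reviewed=True,
    human_approved=False,
)

if review.blockers():
    print("HOLD:", "; ".join(review.blockers()))
else:
    print("Cleared to publish:", review.asset_id)
```

Even a lightweight gate like this changes behavior: instead of relying on someone remembering to check consent, the asset cannot move forward until every step is marked done.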
Questions Brands Should Ask Before Publishing AI Video Ads
Before publishing any synthetic or AI-assisted video ad, brands should ask:
1. Does this use a real person’s likeness?
If the content includes a recognizable face, voice, name, body, or performance, permission matters.
2. Do we have explicit consent?
Consent should be clear, written, and specific to the type of AI use.
3. Could the audience be misled?
If people might believe the person said, did, endorsed, or experienced something that is not true, the content is risky.
4. Is the AI use disclosed clearly?
If synthetic elements are realistic, disclosure may be necessary for trust and compliance.
5. Does this align with our brand values?
Even if the content is technically legal, it may still be wrong for the brand.
6. Would we be comfortable explaining this publicly?
This is one of the best brand safety tests. If the campaign would be difficult to defend, rethink it.
Creator Rights Are Becoming a Marketing Issue
Creator rights are no longer only a legal department concern.
They are becoming a marketing issue because brands increasingly rely on creators, actors, influencers, editors, designers, and performers to build content.
If those people feel exploited, misrepresented, or replaced without consent, the brand relationship suffers.
Brands that respect creator rights will have an advantage. They will be more trusted by talent, partners, customers, and audiences.
In an AI-driven content environment, treating people fairly becomes part of the brand itself.
The Role of Contracts
Contracts need to evolve for synthetic media.
Brands working with creators, actors, influencers, or production partners should make sure agreements clearly address AI use.
Important contract questions may include:
- Can the brand use AI to alter the person’s likeness?
- Can the brand create digital replicas?
- Can the content be used in paid ads?
- Can the person’s voice be cloned?
- Can the footage be reused in future campaigns?
- Are there category restrictions?
- Can the person approve AI-modified versions?
- How long can the synthetic content be used?
This is not legal advice, but it is a sign of where marketing agreements are heading.
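Teams that track these answers informally tend to lose them. One lightweight option, sketched below with entirely hypothetical field names that simply mirror the questions above, is to capture each grant as a structured record so any proposed use can be checked against what was actually signed rather than against memory.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LikenessRightsGrant:
    """Hypothetical record of what a talent agreement actually permits."""
    talent_name: str
    allows_ai_alteration: bool
    allows_digital_replica: bool
    allows_voice_cloning: bool
    allows_paid_ads: bool
    allows_future_campaign_reuse: bool
    talent_approves_ai_versions: bool  # does talent get approval over AI edits?
    restricted_categories: list[str] = field(default_factory=list)
    expires_on: date | None = None  # no stated end date is itself a red flag

    def permits(self, *, ai_altered: bool, paid_ad: bool,
                category: str, on: date) -> bool:
        """Check one proposed use against the grant's stated limits."""
        if self.expires_on and on > self.expires_on:
            return False
        if category in self.restricted_categories:
            return False
        if ai_altered and not self.allows_ai_alteration:
            return False
        if paid_ad and not self.allows_paid_ads:
            return False
        return True

# Hypothetical grant and a proposed use checked against it.
grant = LikenessRightsGrant(
    talent_name="Example Creator",
    allows_ai_alteration=True,
    allows_digital_replica=False,
    allows_voice_cloning=False,
    allows_paid_ads=True,
    allows_future_campaign_reuse=False,
    talent_approves_ai_versions=True,
    restricted_categories=["alcohol", "gambling"],
    expires_on=date(2026, 12, 31),
)

print(grant.permits(ai_altered=True, paid_ad=True,
                    category="skincare", on=date(2026, 3, 1)))  # True
```

A record like this is not a substitute for the contract itself. It is a way to make the contract's limits visible at the moment someone is about to reuse a face, a voice, or a performance.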
Trust Is the Real Asset
AI makes it possible to create more persuasive synthetic content. But trust is still the asset that matters most.
If audiences believe a brand is manipulating people, hiding AI use, misusing creator likenesses, or publishing misleading ads, the brand loses credibility.
That loss can be difficult to repair.
Responsible brands will understand that synthetic media is not just a creative capability. It is a trust responsibility.
What Sights.com Readers Should Take Away
For creators, marketers, business owners, and agencies, the lesson is simple: synthetic media needs governance.
AI video tools can be useful. They can help with ideas, production, editing, ad variations, localization, and creative testing. But they should not bypass consent or human review.
The brands that handle synthetic media well will likely follow three principles:
- Permission first: Do not use likeness, voice, or identity without clear consent.
- Transparency always: Disclose AI use when realistic synthetic content could mislead people.
- Human review: Make sure every synthetic media asset is reviewed for trust, accuracy, and brand safety.
That is how brands can use AI without damaging the very trust they are trying to build.
Final Thoughts
Synthetic media is becoming the next big brand safety problem because AI has made realistic manipulation easier, faster, and cheaper.
That does not mean brands should avoid AI video tools. It means they need better rules.
Consent matters.
Disclosure matters.
Creator rights matter.
Human review matters.
In the AI era, responsible creative governance is no longer optional. It is part of brand protection.
The brands that understand this early will be better prepared for the future of digital advertising.
Want to Improve Your AI Visibility?
Download the free AI Visibility Starter Kit from Sights.com and start improving your content clarity, customer questions, trust signals, and AI search readiness.

