Top AI Disclosure Policies in PR
Explore the importance of AI disclosure policies in public relations, enhancing trust, compliance, and ethical standards amidst evolving regulations.

AI is reshaping public relations, making disclosure policies more important than ever. As of 2025, over 62% of consumers say they trust brands that disclose AI usage, yet 57% question whether AI is being used responsibly. Hidden AI use can damage trust and trigger legal penalties, such as the FTC’s $1 million fine in 2024 over exaggerated AI claims. Transparency builds credibility, curbs misinformation, and keeps pace with emerging regulations like California's AI Transparency Act, which will fine non-compliant companies $5,000 per day starting in 2026.
Key takeaways:
- AI disclosure policies clarify when and how AI is used in PR tasks.
- Best practices include human oversight, clear client communication, and rigorous fact-checking.
- Legal risks include federal fines up to $50,120 per violation and state-level penalties.
- Tools like Media AI simplify compliance with features like campaign tracking and centralized media contact management.
Proactive AI disclosure isn't just compliance; it’s essential for maintaining trust and staying ahead in a regulated market.
What Are AI Disclosure Policies and Why They Matter
AI disclosure policies are formal guidelines that outline when and how to reveal the use of AI in tasks such as content creation and data analysis. These guidelines are crucial for fostering trust and ensuring compliance with legal standards, as explained below.
A 2023 survey of 400 U.S. PR leaders revealed that AI tools are commonly used for activities such as content creation, research, and audience targeting. This widespread adoption underscores the growing need for clear and transparent policies.
Establishing these policies promotes accountability. Tina McCorkindale, President and CEO of the Institute for Public Relations, highlights the industry's concerns:
"We were seeing instances of 'AI speak', where you can tell something is readably AI-generated."
By implementing AI disclosure policies, PR professionals and firms can ensure accountability, helping them navigate the complexities of AI-powered communication while adhering to professional standards. Transparency in these policies not only ensures compliance but also cultivates trust.
Building Trust Through Clear Communication
Transparent AI disclosure enhances credibility with journalists, creators, and audiences. A survey of 2,000 U.K. consumers found that people are concerned about AI use and want brands to clearly disclose when AI is involved.
The PR industry recognizes that transparency strengthens credibility rather than undermining it. Andrea Gils, marketing director at JacobsEye Marketing Agency and PRSA Board of Directors member, stresses the importance of being upfront:
"It's all about honesty and transparency. Be very specific with how you're using [AI] to collect, analyze and communicate information."
This openness helps PR professionals maintain trust with media contacts and their audience. When journalists are aware that AI played a role in research or content creation, they can better assess the information and decide how to incorporate it into their work.
Transparency also mitigates the risks of misinformation. Cayce Myers, APR, professor and director of graduate studies at Virginia Tech's School of Communication, warns about the dangers of undisclosed AI use:
"The idea of a practitioner just using AI to generate content without having an active role in screening and editing the content is really dangerous because it (AI) can create something that may be untrue."
U.S. Legal Requirements for AI Disclosure
In the United States, AI disclosure regulations rely on a "soft law" approach at the federal level, focusing on voluntary standards and agency guidance rather than sweeping legislation. However, PR professionals must still adhere to certain compliance requirements.
The Federal Trade Commission (FTC) actively enforces rules around AI use in advertising and marketing, and violations can carry fines of up to $50,120 per violation. The agency has already penalized companies for exaggerating their AI capabilities.
For example, in 2024, the FTC fined accessiBe $1 million for overstating its AI tool's ability to make websites WCAG compliant. That same year, the agency warned Workado about misleading claims regarding its AI content detection tools, requiring the company to revise its statements and submit compliance reports. Similarly, DoNotPay, an "AI legal robot company", was ordered to pay $193,000 for misrepresenting its law-related features.
At the state level, regulations are filling in gaps. By the end of 2024, 45 states and territories had introduced AI-related bills. For instance, California's AI Transparency Act, effective in 2026, will fine non-compliant companies $5,000 per day. States like Colorado, New York, and Illinois have also introduced their own rules.
One regulatory expert advises:
"Even though the US uses a soft-law approach to AI regulations, companies aiming for the market should track the latest regulations to avoid federal fines (which can exceed $50,000 per violation) or trigger product recalls."
Reputation Damage from Hidden AI Use
Beyond legal risks, failing to disclose AI use can severely damage a brand's reputation. Undisclosed AI involvement can lead to misinformation, ethical breaches, and loss of trust. This lack of transparency can harm long-term credibility and relationships.
The industry strongly supports proactive disclosure as a safeguard. PR professionals should prioritize transparency in AI-related activities like content development and chatbot implementation. This approach helps avoid the perception of dishonesty or attempts to conceal AI involvement.
Experts in the field echo this sentiment. As one analysis notes:
"Overall, disclosure of AI use in PR is not only a best practice for maintaining ethical standards and trust but also important for navigating the evolving landscape of digital communication and AI technology."
When in doubt, the rule is simple: always disclose AI use. This straightforward approach protects both individual practitioners and their organizations from the legal and reputational risks tied to hidden AI involvement.
Key Elements of Effective AI Disclosure Policies
Where to Place AI Disclosures
When it comes to AI-generated or AI-assisted content, transparency is key. That means disclosures should be clear, visible, and easy to find - not hidden in footnotes or fine print.
Take USA Today as an example. They include AI disclosures right at the top of their articles when "Key Points" are created using AI. They also clarify that a journalist has reviewed the material and provide a link to their ethical conduct policy. This approach ensures readers are immediately informed, building trust and aligning with transparency standards.
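Publishers that template this step in their own workflow can generate the note programmatically so it always appears above the body copy rather than in a footnote. The sketch below is a minimal illustration only; the build_disclosure helper, its fields, and the placeholder URL are hypothetical, not USA Today's or any publisher's actual system.

```python
# Illustrative sketch: prepend a standardized AI disclosure to article copy.
# Function and field names are hypothetical examples, not a real publisher's API.

def build_disclosure(tool_name: str, reviewed_by: str, policy_url: str) -> str:
    """Return a reader-facing disclosure line for AI-assisted content."""
    return (
        f"Editor's note: Portions of this article were drafted with {tool_name}. "
        f"A journalist ({reviewed_by}) reviewed and edited the material. "
        f"Read our AI ethics policy: {policy_url}"
    )

def publish(article_body: str, disclosure: str) -> str:
    """Place the disclosure at the top of the article, not buried in fine print."""
    return f"{disclosure}\n\n{article_body}"

if __name__ == "__main__":
    note = build_disclosure(
        tool_name="an AI drafting assistant",
        reviewed_by="staff editor",
        policy_url="https://example.com/ai-ethics-policy",  # placeholder URL
    )
    print(publish("Body copy goes here...", note))
```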
Best AI Disclosure Policies from PR Agencies and Brands
As AI adoption continues to grow, formal disclosure policies are struggling to keep pace. For instance, while 73% of Gen Z communicators rely on AI tools daily, only 38% of PR professionals report having established any guidelines for AI use. However, some organizations are stepping up with clear and structured policies to address this gap.
Case Study: Wild Card's Transparent Client-First Approach
Wild Card, a PR agency, has implemented a strict AI disclosure policy that prioritizes transparency and client trust. Their approach ensures that all content creation is rooted in human originality, with AI tools limited to tasks like data gathering and insights generation. Any AI-generated content undergoes thorough verification before being shared with clients. On top of that, the agency has committed to safeguarding client data by avoiding its integration into AI tools. This policy demonstrates how agencies can balance AI's capabilities with ethical considerations, setting a strong example for others in the industry.
Institute for Public Relations' Research-Focused Guidelines
The Institute for Public Relations (IPR) introduced an AI disclosure policy in March 2024 aimed at preserving the integrity of research. This policy requires authors to disclose the AI software used, the specific prompt given, and the sections influenced by AI. These disclosures are mandatory for substantial tasks like data collection and analysis but are not required for minor editing or brainstorming. The policy emerged as a direct response to challenges in submitted research, highlighting the importance of transparency and accountability in academic and professional work.
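For teams adapting a policy like IPR's, it can help to capture each disclosure as a structured record that travels with the submission. The dataclass below is a hypothetical illustration of the fields the policy asks for (the software used, the prompt given, and the sections influenced), not an official IPR template.

```python
# Hypothetical record mirroring the disclosure fields described in IPR's policy.
from dataclasses import dataclass
from typing import List

@dataclass
class AIDisclosure:
    software: str                   # name and version of the AI tool used
    prompt: str                     # the specific prompt submitted to the tool
    sections_influenced: List[str]  # report sections the AI output affected
    substantial_use: bool = True    # minor editing or brainstorming may not require disclosure

    def statement(self) -> str:
        sections = ", ".join(self.sections_influenced) or "none"
        return (
            f"AI software: {self.software}. Prompt: \"{self.prompt}\". "
            f"Sections influenced: {sections}."
        )

# Example usage with placeholder values
record = AIDisclosure(
    software="ExampleLLM v2",
    prompt="Summarize survey responses on media trust",
    sections_influenced=["Data analysis", "Executive summary"],
)
print(record.statement())
```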
Common Practices from Top Performers
Organizations leading the way in AI disclosure often include these key practices:
- Human Oversight: Stephen Waddington explains, "AI will write incredibly good low-grade content... It will do a good first draft. It will not bring in human nuance and human emotion into the language. There's absolutely still a requirement within the creation of content to have a human being". This underscores the need for human involvement to ensure quality and emotional depth.
- Dual-Layer Review: A second layer of review is mandatory to ensure that AI-generated work aligns with organizational standards.
- Client Transparency: Teams proactively disclose AI use to clients, explaining how these tools are incorporated into their processes.
- Training and Guidance: Clear instructions are provided to employees on using AI ethically and effectively.
- Accuracy and Verification: Policies require rigorous fact-checking and proper sourcing of AI-generated data, addressing growing concerns about misinformation and disinformation.
These practices demonstrate how organizations can responsibly integrate AI into their workflows while maintaining trust and accountability.
Tools That Support AI Compliance in PR
The growing demand for transparency in AI usage has led to a surge in tools designed to simplify disclosure compliance. According to recent surveys, 77% of respondents and 85% of news consumers expect clear disclosure when AI is involved in content creation or decision-making. This push for openness is reshaping the way PR platforms operate.
To address these expectations, PR platforms are adding features that make compliance easier for teams while still allowing them to take full advantage of AI's capabilities. These tools act as a bridge, helping organizations balance the rapid adoption of AI with the need for clear and rigorous disclosure practices.
Media AI: Key Features Overview
Media AI is a powerful tool that gives PR professionals access to a database of over 30,000 journalists and creators. With its advanced filtering options, users can quickly find the right media contacts based on specific criteria like beat coverage, publication type, or audience demographics.
The platform offers a straightforward pricing structure with three plans:
- Journalist Database: $99 per month
- Creators Database: $99 per month
- Full Database (both combined): $149 per month
Each plan includes full database access, advanced filtering, and seamless export options, all without requiring a long-term contract. One of Media AI's standout features is its commitment to keeping contact information up-to-date, ensuring PR teams always have accurate data for their outreach efforts.
How Media AI Improves Disclosure Practices
Media AI's export and segmentation features make it easy for PR teams to document their outreach and engage in meaningful, targeted conversations about AI usage. This capability aligns with the 63% of respondents who want creative companies to be more transparent about how they use AI.
By centralizing contact management and campaign tracking, Media AI helps streamline workflows and maintain consistent AI disclosure. Its no-contract model adds flexibility, which is especially useful when organizations are implementing new disclosure policies. These features help PR teams adopt a disciplined approach to transparency, which is increasingly critical in today's landscape.
Michele E. Ewing of Kent State University offers similar advice: be honest, transparent, and specific about how AI is used to collect, analyze, and communicate information.
Media AI's ability to track and document outreach efforts supports this level of clarity, ensuring that PR professionals can clearly demonstrate how AI tools are being used throughout their media strategies.
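As a rough illustration of what that documentation might look like in practice, the sketch below writes each outreach record to a CSV log with an explicit AI-use flag. The column names and workflow are hypothetical and are not part of Media AI's product.

```python
# Hypothetical outreach log with an explicit AI-use flag per contact.
# Column names and workflow are illustrative, not Media AI's actual export format.
import csv
from datetime import date

outreach = [
    {"contact": "jane.doe@example.com", "campaign": "Q3 launch",
     "ai_assisted": True,  "ai_use": "Pitch draft summarized with an LLM, edited by a human"},
    {"contact": "sam.lee@example.com",  "campaign": "Q3 launch",
     "ai_assisted": False, "ai_use": ""},
]

with open("outreach_disclosure_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["date", "contact", "campaign", "ai_assisted", "ai_use"]
    )
    writer.writeheader()
    for row in outreach:
        writer.writerow({"date": date.today().isoformat(), **row})
```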
The Future of AI Disclosure in PR
By 2025, more than 1,000 AI-related bills had been introduced, with 28 states and the Virgin Islands enacting at least 75 measures. This wave of legislation makes one thing clear: AI disclosure requirements are becoming a permanent fixture.
A recent poll reveals that 73% of voters back AI regulation, while 59% oppose a proposed 10-year moratorium. Pete Furlong from the Center for Humane Technology highlights why waiting a decade to act could be problematic:
"The ten year scope of the moratorium would prevent states from addressing emergent harms that we're not even aware of yet."
State regulations are now homing in on areas that directly affect public relations, such as AI governance programs, detailed risk assessments, staff training, and transparency around AI-generated content. These laws require organizations to maintain thorough AI inventories, monitor data sources, and address potential biases in their AI systems. For PR teams, this means adapting to a new regulatory environment.
Proactive preparation is key for PR professionals. Agencies that delay action until federal mandates are in place risk falling behind competitors already embracing robust disclosure practices. Those ahead of the curve recognize that AI transparency isn’t just about compliance - it’s a way to build trust and gain a competitive edge.
Technology is proving to be a vital ally in this shift. Tools like Media AI help PR teams keep detailed records to meet these new regulatory demands. By enabling clear and accountable communication, platforms like Media AI allow agencies to stand out in a market where transparent AI disclosure is quickly becoming essential.
As transparency becomes a requirement, it not only strengthens ethical standards in PR but also opens doors to new opportunities. Brands are becoming more cautious about their AI collaborations, and agencies that prioritize clear and consistent AI disclosure will be in a stronger position to secure business in this increasingly regulated landscape.
FAQs
What makes an AI disclosure policy effective in public relations?
An effective AI disclosure policy in public relations focuses on honesty, ethical practices, and clear communication. It should openly explain when and how AI is involved in creating content, helping audiences understand its role. This openness strengthens trust and ensures your communications remain credible.
Here are the key aspects to include:
- Open acknowledgment of AI's role in campaigns or materials.
- Ethical standards that ensure AI usage reflects the company's principles.
- Clear guidance for employees on how to use AI responsibly and ethically.
By incorporating these elements, PR professionals can maintain trust and build lasting connections with their audiences.
How can PR professionals stay compliant with AI disclosure laws in the U.S.?
PR professionals can navigate AI disclosure laws by staying informed about both federal guidelines and state-specific regulations, as these can differ significantly. While a universal federal law hasn’t been established yet, several states are rolling out their own rules, particularly focusing on transparency in AI-generated content used in political campaigns or promotional materials.
To comply, PR teams should implement straightforward disclosure practices, such as clearly labeling AI-generated content in all communications. Keeping an eye on updates from regulatory authorities and adopting transparency-focused best practices can make it easier to handle these shifting requirements. Being well-informed and proactive is essential for staying compliant across various jurisdictions.
What risks come with not disclosing the use of AI in PR, and how can they be avoided?
Not being open about using AI in public relations can have serious consequences for trust. If people find out that content was created by AI without it being disclosed, they might feel deceived, which could hurt your brand's reputation. Beyond that, this lack of openness can raise ethical questions, lead to regulatory issues, or even result in legal troubles, such as accusations of fraud or misleading practices.
To steer clear of these problems, implement clear and consistent AI disclosure policies. Let your audience know when AI plays a role in your content creation. Being upfront ensures your audience stays informed and confident in your brand’s values. This transparency not only safeguards your reputation but also helps build trust and credibility over time.