With the rapid deployment of AI-powered tools across websites — from customer service chatbots to AI-generated content — the question of whether website operators must disclose when users are interacting with an AI is becoming increasingly important. The answer depends on a combination of applicable laws, industry standards, ethical considerations, and user expectations.
Legal Requirements: California and Beyond
In the United States, California enacted the first explicit state law requiring disclosure of AI bots in online communications under certain conditions.
California Bot Disclosure Law (SB 1001)
- Effective: July 1, 2019
- Codified: California Business and Professions Code §§ 17940–17943
- Key Requirement: It is unlawful for a person to use a bot to communicate with the intent to mislead the other person about its artificial identity for the purpose of influencing a commercial transaction or a vote in an election, unless the bot’s use is disclosed clearly and conspicuously.
Legal Summary:
You must disclose the use of a bot if:
- The bot is interacting online;
- You are attempting to sell a product/service or influence voting; and
- The recipient might reasonably believe they’re talking to a human.
Required Disclosure Format:
- Must be “clear, conspicuous, and reasonably designed” to inform the user.
- Common phrasing: “Hi, I’m a virtual assistant powered by AI.”
Enforcement:
- The law may be enforced by public prosecutors, and violations can be prosecuted as unfair or deceptive practices.
Federal and Other State Laws
There is no current federal law that requires businesses to disclose AI interactions generally. However, the Federal Trade Commission (FTC) has issued guidance under its authority to prevent unfair or deceptive business practices:
FTC AI Guidance (2020–2023):
- Businesses must not mislead consumers about whether they are interacting with a human.
- AI use must be truthful and transparent, especially in advertising and automated decision-making.
- Failure to disclose that a user is speaking to an AI can be seen as deceptive, especially if it affects consumer decision-making.
Other States:
- Illinois, New York, and Washington have introduced similar bot-disclosure bills, though most have not been enacted.
- Many states have general consumer protection laws that prohibit deceptive or misleading conduct, which could apply to AI bot nondisclosure.
International Legal Frameworks
EU Artificial Intelligence Act
The EU’s AI Act, adopted in 2024, includes mandatory transparency requirements:
- Users must be informed that they are interacting with an AI system, unless this is obvious from the context to a reasonably well-informed user.
- Applies to chatbots, AI-generated content (including deepfakes), and emotion recognition systems.
Violating these disclosure obligations can lead to significant administrative fines — up to €15 million or 3% of annual global turnover, whichever is higher.
UK & Canada
Both jurisdictions emphasize transparency and ethical AI use, though formal laws are still in development. The expectation is that AI systems should not impersonate humans without clear notice.
Industry Standards & Best Practices
Even where not legally required, many businesses follow industry best practices and user experience (UX) norms by voluntarily disclosing AI interaction.
Reasons to Disclose:
- Builds user trust and transparency.
- Reduces risk of reputational damage or legal liability.
- Improves user experience by setting appropriate expectations.
- Avoids potential litigation under deceptive practices statutes.
How to Disclose:
- Add a label or message at the start of the interaction (e.g., “I’m an AI chatbot designed to help you…”);
- Include disclosure in the privacy policy or terms of service; or
- Use visual cues like avatars, bot names, or color schemes that imply automation.
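As a minimal illustration of the first option above, a chat widget can simply prepend a clear disclosure to its opening message so the user sees it before any substantive conversation begins. This is a sketch, not legal advice; the function name and exact wording are illustrative assumptions, and appropriate phrasing should be confirmed with counsel.

```typescript
// Illustrative sketch: lead every chat session with a clear,
// conspicuous AI disclosure before any substantive content.
const AI_DISCLOSURE = "Hi, I'm a virtual assistant powered by AI.";

function openingMessage(greeting: string): string {
  // The disclosure comes first so it cannot be missed or scrolled past.
  return `${AI_DISCLOSURE} ${greeting}`;
}

console.log(openingMessage("How can I help you today?"));
// → "Hi, I'm a virtual assistant powered by AI. How can I help you today?"
```

Pairing a message like this with a persistent visual cue (a bot name or avatar) helps satisfy the "clear and conspicuous" standard rather than relying on a buried terms-of-service notice alone.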
Ethical Considerations
From an ethics and trust standpoint, transparency in AI usage is a core principle endorsed by:
- The OECD AI Principles
- The White House Blueprint for an AI Bill of Rights
- The IEEE and other professional bodies
In fact, presenting an AI as a human can be considered manipulative, especially in contexts involving vulnerable users such as:
- Children
- Elderly consumers
- Job seekers
- Patients
Conclusion
Yes — in many cases, you must disclose when website users are interacting with an AI bot, particularly:
- If you are based in California and using bots for commercial or political purposes;
- If FTC rules or state consumer protection laws may apply to your context;
- If you operate internationally, especially in the EU, where AI disclosure is mandatory; and/or
- Simply as a matter of best practice and ethical responsibility.
In short, even where it is not legally required, disclosure is strongly advised to avoid regulatory scrutiny and to build trust with users. A simple, clear message can go a long way toward ensuring transparency, compliance, and good-faith AI deployment. Please contact our law firm to speak with an internet and technology lawyer about your questions.