The Federal Trade Commission has asked a federal court to stop Air AI from using allegedly deceptive claims about business growth, earnings potential, and refund guarantees to market its services to small businesses and entrepreneurs.
The agency alleges that consumers, many of whom are small business owners, lost as much as $250,000 after relying on Air AI’s promises and were often left in debt as a result.
FTC Bureau of Consumer Protection Director Christopher Mufarrige said:
Companies that market AI-related tools with false promises of unrealistic investment returns and guaranteed refunds harm hardworking small business owners and undermine legitimate businesses' adoption of AI. The FTC is focused on ensuring the promise of new technology isn't misused as a means to mislead consumers.
The Complaint and Parties Involved
The Commission filed a complaint against Air AI Technologies, a Delaware-based company also operating under the names Air AI, Air.AI, and Scale 13, along with its owners, Caleb Matthew Maddix, Ryan Paul O’Donnell, and Thomas Matthew Lancer, and multiple affiliated entities.
According to the complaint, since at least February 2023, the company and its operators have marketed and sold business coaching materials, support services, and licenses to resell their offerings through a bundled product referred to as an “Air AI Access Card.”
The Commission alleges that these offerings were promoted using claims that were not substantiated by actual performance or outcomes.
According to the complaint, the company advertised its flagship product as “conversational AI,” claiming that it could replace human customer service representatives and conduct extended, human-like sales conversations.
The FTC alleges that these claims were accompanied by representations that the product, when combined with other services, could generate significant income for business owners.
For example, the complaint alleges that Air AI and its operators claimed that consumers could earn back tens of thousands of dollars within days or months, and in some cases, generate millions of dollars using the system.
According to the FTC, Air AI promised to return money to the consumers who paid for the services if they did not achieve the advertised results.
According to the complaint, these refund promises were routinely not honored: consumers who requested refunds experienced delays, lack of communication, and, in some cases, a complete cessation of contact from the company.
Alleged Violations Identified by the Commission
According to the FTC, Air AI engaged in multiple unlawful practices, including:
- making unsubstantiated claims that purchasers of its services would earn substantial amounts of money;
- falsely claiming that customers would receive a refund if they did not achieve the advertised results;
- misrepresenting the performance, efficacy, and central characteristics of its services;
- and violating federal telemarketing rules, including the Telemarketing Sales Rule, through misleading earnings claims and failure to provide required disclosures.
What This Action Signals for Cold Calling Companies
The action taken by the FTC does not apply solely to one company or one product. This case signals that regulators are closely scrutinizing how AI calling platforms actually work and whether the claims they make are true.
Air AI was among the more prominently marketed AI voice calling platforms over the past two years. It has positioned itself as a system capable of conducting extended, human-like sales conversations and generating significant revenue outcomes for businesses. The Commission’s complaint, however, indicates that such claims, when not substantiated, may constitute deceptive practices under existing consumer protection laws.
The implication for cold calling companies is direct: vendor claims are no longer treated as marketing language alone. They are subject to regulatory scrutiny.
Why Is Platform Selection a Risk Decision, Not Just an Operational One?
The Commission’s allegations emphasize that representations made by AI vendors—particularly around earnings potential, automation capabilities, and guaranteed outcomes—can carry compliance implications beyond the vendor itself.
If a cold calling company adopts a platform based on claims that are later found to be misleading, those claims may influence how that company structures its outreach, communicates with consumers, and represents its own services. The result is that vendor credibility becomes a component of the company’s compliance posture.
In this context, selecting a platform is not solely an operational decision. It is a risk decision.
The complaint also highlights a secondary risk: a business that depends heavily on a single AI calling platform is exposed if that platform faces legal action. Core workflows may be disrupted, and the company behind the platform may change ownership or shut down. Even in cases where a platform continues to function, its ownership structure, service reliability, and long-term availability may be affected.
For cold calling companies, this introduces a form of pipeline risk that is not tied to performance metrics, but to the underlying legal and operational status of the platform itself.
Regulatory Attention on AI-Driven Calling
The Commission’s action is consistent with a broader pattern of enforcement activity related to AI-generated communications, telemarketing practices, and consent-based outreach.
Federal regulators, including the FTC and the Federal Communications Commission, have indicated increased scrutiny of:
- AI-generated voice calls that simulate human interaction;
- earnings and performance claims associated with automated systems;
- consent collection and disclosure practices in outbound communication;
- and representations made to consumers regarding automation and outcomes.
The Commission’s complaint reflects the position that the use of AI does not alter the underlying legal standards governing telemarketing or consumer protection.
How AI Performance Claims Are Now Being Evaluated by Regulators
The central theme emerging from this action is a shift from what AI systems are claimed to do, to what can be demonstrated and verified.
Claims such as “human-like conversations,” “automated sales at scale,” or “guaranteed revenue outcomes” are no longer evaluated solely in the context of innovation.
They are now evaluated against measurable performance indicators, proof of outcomes, and compliance with existing federal and state regulations.
For cold calling companies, this establishes a new baseline: whatever they claim about their operational efficiency and performance must be grounded in verifiable data and documented case studies.
Implications for Industry Practices
The Commission’s action indicates that:
- marketing representations made by AI calling platforms may be treated as factual claims subject to enforcement;
- reliance on those representations may introduce compliance exposure for businesses using such platforms;
- and the deployment of AI in outbound calling does not reduce regulatory obligations related to transparency, accuracy, or consumer protection.
This suggests that the AI calling sector is entering a phase in which regulatory expectations are being applied with the same rigor as in traditional telemarketing environments.
What to Look for in an Air AI Alternative
If you are now evaluating alternatives, whether you were an Air AI customer or a prospect who has been following the space, here is a straightforward framework for assessing AI calling platforms after this enforcement action.
Verifiable Compliance Infrastructure
Any platform can claim TCPA compliance. What you want to see is how compliance is actually enforced, including automated state-by-state dialing windows, real-time DNC suppression, consent validation integration, and documented opt-out handling across all channels. Ask vendors to show you the compliance mechanism, not just describe it.
Transparent, Substantiated Performance Data
The FTC action against Air AI centered largely on unsubstantiated earnings and performance claims. Before committing to any AI calling platform, ask for real case study data from real clients in your industry. Vague claims about “human-like conversations” and “unlimited scale” without specific, verifiable performance benchmarks should be treated with skepticism.
Number Management and Spam Protection
Phone number reputation is the operational variable that most directly determines your actual answer rates, and most AI calling vendors, including developer-first platforms like Bland AI and Retell AI, leave this responsibility entirely to the customer. You want a platform that purchases, registers, and actively monitors dedicated phone numbers on your behalf, with continuous spam detection and number replacement built into the service.
Managed Infrastructure vs. Software Tools
The Air AI model, like most competitors in this space, is designed to sell you software while leaving the operational complexity to you. Number management, compliance enforcement, CRM integration, campaign optimization, and ongoing troubleshooting are all your problems. In a fully managed solution, by contrast, the platform itself carries those responsibilities.
Clean Regulatory Record
After the Air AI action, this criterion should be explicit in any vendor evaluation. Verify whether the platform or its operators have any FTC, FCC, or state regulatory actions on record. This data is public information. A five-minute search can tell you what years of marketing material will not.
FAQs
1. Does the Air AI FTC action mean AI outbound calling itself is being banned?
No. The FTC action targets specific deceptive business practices by Air AI’s operators: false earnings claims, misrepresented refund guarantees, and Telemarketing Sales Rule violations.
AI outbound calling is legal when conducted in compliance with the TCPA and other related federal and state regulations. The action signals that the AI calling space is under regulatory scrutiny, not that the technology itself is prohibited.
2. What should a call center do if they were using Air AI?
First, assess your operational continuity. If Air AI is your primary outbound calling infrastructure, begin evaluating alternatives now instead of waiting for service disruption. Second, review your consent documentation to ensure your calling practices are independently defensible, separate from whatever Air AI claimed about compliance. Third, evaluate replacement platforms specifically on their compliance infrastructure, number management, and verified performance data, not marketing claims.
3. What makes a compliant AI calling platform in 2026?
A compliant AI calling platform enforces TCPA rules automatically at the system level, including federal and state dialing windows, DNC suppression, consent validation, and opt-out handling. It uses registered, whitelisted phone numbers to maintain carrier trust and answer rates. It provides a full audit trail of call activity, consent records, and opt-out events. And it is transparent about which compliance obligations it enforces versus which remain the customer’s responsibility.
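The "full audit trail" requirement above can be illustrated with a minimal append-only event log. This is a hypothetical sketch, not any platform's API: the class and field names are invented for illustration. Each entry is hash-chained to the previous one, so after-the-fact tampering with consent or opt-out records is detectable, which is the property a regulator-facing audit trail needs.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Append-only, hash-chained log of call, consent, and opt-out events.
    Illustrative sketch only; names are not a real vendor API."""

    def __init__(self):
        self.entries = []

    def record(self, event_type: str, number: str, detail: str = "") -> dict:
        # Chain each entry to the hash of the previous one.
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "event": event_type,   # e.g. "call", "consent", "opt_out"
            "number": number,
            "detail": detail,
            "prev": prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Re-derive every hash to confirm no entry was altered or removed."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

In practice, the useful question for a vendor is not whether logs exist but whether they are tamper-evident and exportable; this sketch shows the minimum structure that makes a consent or opt-out record defensible after the fact.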



