AI in Michigan HOA and Condo Associations: Legal Risks, Fiduciary Duties, and the Policy Every Board Needs

Michigan community association boards are using artificial intelligence tools at an accelerating pace. Managers use ChatGPT to draft violation notices in seconds. Board members paste attorney memos into Gemini to create “quick summaries.” Committees run rules through Copilot to check for inconsistencies. The efficiency gains are real. So are the legal landmines.
Michigan community association attorneys and board members are discovering, often the hard way, that these tools are rapidly infiltrating day-to-day association governance. From drafting violation notices to summarizing board meeting minutes, AI is already being used in community associations across Wayne County, Oakland County, Macomb County, and every corner of the state. The question is no longer whether your association will use AI. The question is whether it will use AI well — or whether the first time your board confronts the legal consequences will be in the middle of a lawsuit.
This article addresses what every Michigan condominium board member, HOA board member, and community association manager needs to understand about AI: what the law actually requires of you, where the specific risks lie, and, importantly, why your association needs a written AI policy before the next violation letter goes out the door.
The AI Revolution Is Already Inside Your Association
What is AI, and why should my Michigan HOA or condo board care? AI tools like ChatGPT and Google Gemini are large language models that generate text by predicting the next most likely word — they do not “know” facts or verify accuracy. Michigan community association boards must care because these tools are already being used for governance tasks, and the legal consequences of unguided AI use, including fiduciary duty breaches, privilege waiver, and selective enforcement claims, can expose boards to significant liability.
It would be convenient to treat AI as a future problem. It is not. According to a May 2025 industry survey, about 71% of respondents across the community-association industry reported currently using AI for association-related tasks. Among the tools in use: ChatGPT, Google Gemini, Microsoft Copilot embedded in Microsoft 365, and purpose-built platforms like STAN AI and Vantaca’s HOAi — a platform that claims to generate annual budgets in under two minutes.
The error rates are not hypothetical. A Stanford University study found that general-purpose AI hallucinates, meaning it generates confident, plausible-sounding, factually incorrect information, on 58 to 82 percent of legal queries. Purpose-built legal AI tools still produce errors more than 17 percent of the time. A January 2025 MIT Press–published study found that LLMs often overstate their confidence and may report very high confidence even when their answers are incorrect. These are tools your board members are already using, often without any policy framework, without any training, and without any understanding of the legal consequences.
The Fiduciary Duty Problem Under MCL 450.2541
Does using AI violate a Michigan board member’s fiduciary duty? Using AI without verification and human oversight can arguably violate a board member’s fiduciary duty of care under MCL 450.2541. A director who blindly relies on AI output to make governance decisions, particularly enforcement actions or policy interpretations, may fail the “ordinarily prudent person” standard. AI does not qualify as a professional expert under MCL 450.2541(2)(b), so reliance on AI does not trigger the statutory protection that reliance on an attorney or CPA provides.
Michigan’s Nonprofit Corporation Act sets the legal standard for every board member of a condominium or HOA. Under MCL 450.2541(1), a director must discharge duties in good faith, with the care an ordinarily prudent person in a like position would exercise, and in a manner reasonably believed to be in the best interests of the corporation. This is the duty of care, and AI use implicates it directly.
MCL 450.2541(2)(b) permits directors to rely on the opinion of legal counsel, public accountants, engineers, or other persons the director reasonably believes to be within that person’s professional competence. ChatGPT is not a person. It is not licensed. It carries no malpractice insurance and cannot be cross-examined. Reliance on AI output does not qualify for the statutory protection that reliance on professional counsel provides.
The statute goes further. Under MCL 450.2541(3), a director is not entitled to rely on information if they have knowledge concerning the matter that makes such reliance unwarranted. Once a board member understands, as every board member reading this article now does, that AI generates incorrect legal and factual information at high rates, blind reliance on AI output is legally unwarranted. The business judgment rule insulates board decisions from judicial second-guessing, but only when the board follows a reasonable process: it protects process, not outcomes. A board that substitutes ChatGPT for counsel and deliberation has a process problem.
The Business Judgment Rule Won’t Save You
Michigan courts have applied the business judgment rule to limit judicial review of association decisions to whether the board acted in good faith in furtherance of the association’s legitimate interests. The rule’s protection depends on process: did the board consult appropriate experts, review relevant information, and deliberate? A violation notice generated by an unverified AI prompt (citing a bylaw provision that does not exist) does not reflect a reasonable process. It reflects a process failure. That distinction matters enormously when a homeowner’s attorney is deposing your board president.
Attorney-Client Privilege: The Risk No One Sees Coming
Of all the AI risks facing Michigan community associations, the attorney-client privilege issue is the one that generates the most surprise, and the most damage. The scenario is familiar: a board president receives a detailed legal memorandum from the association’s attorney regarding a construction defect claim or a pending assessment dispute. The board president wants to share key points with the other directors before an executive session. So they paste the memo into ChatGPT and ask for a five-bullet summary.
The consequences may be severe and irreversible.
United States v. Heppner: The February 2026 Wake-Up Call
Can inputting attorney communications into AI waive attorney-client privilege? Yes. Under the reasoning of United States v. Heppner (S.D.N.Y. 2026), inputting privileged attorney-client communications into a consumer-tier AI tool may waive privilege over both the AI output and the underlying communication. Consumer AI platforms lack the confidentiality protections required to maintain privilege. Michigan community association boards should never input attorney communications into any AI tool without explicit guidance from association counsel and use of an enterprise-tier platform.
In United States v. Heppner, a federal court in New York held that a defendant’s communications with Anthropic’s Claude were protected by neither the attorney-client privilege nor the work-product doctrine where he used the public AI platform on his own initiative, and the court further stated that even if privileged information had been entered into Claude, privilege was waived by sharing it with Claude and Anthropic.
The holding’s practical implication for associations is stark. A board member who inputs privileged attorney communications into a consumer-tier AI tool (such as ChatGPT Free, Plus, or Pro; Gemini consumer; Copilot personal) may waive privilege not only over the AI-generated summary, but potentially over the underlying attorney communication itself.
At the same time, the law is not yet absolute in every setting. In Warner v. Gilbarco, Inc., the Eastern District of Michigan held that a litigant’s ChatGPT-assisted materials were protected work product, emphasized that ChatGPT and similar programs are “tools, not persons,” and held that work-product waiver requires disclosure to an adversary or in a manner likely to reach an adversary.
Discovery cases also show that AI prompts and outputs may become discoverable when a party affirmatively relies on them in litigation. In Tremblay v. OpenAI, Inc., the court required disclosure of the prompts, outputs, and account settings used for the positive testing results referred to in the complaint. In Concord Music Group, Inc. v. Anthropic PBC, the court found at least a limited waiver where the plaintiffs had relied on certain prompt-output pairs in their pleadings and filings, while rejecting an overbroad attempt to force production of every unrelied-upon prompt and output.
The practical lesson for community associations is straightforward: do not assume that prompts entered into a public AI platform are privileged, and do not assume they are immune from discovery. If the communication originated with association counsel, it should not be pasted into ChatGPT, Claude, or any other public generative-AI tool without first obtaining the attorney’s advice and using an approved, secure workflow.
For boards, managers, and directors, the safest rule is simple: never feed privileged legal advice into a public AI tool and assume the privilege will survive.
Selective Enforcement: How AI Can Make a Bad Problem Worse
What is selective enforcement, and how does AI create selective enforcement risk? Selective enforcement occurs when a community association enforces rules against some owners but not others with identical violations — an often-raised defense. AI can create selective enforcement risk by generating inconsistently worded notices across sessions, by processing only some owners’ complaints depending on who inputs information, and by producing enforcement patterns that may correlate with protected class characteristics under the Fair Housing Act without any intentional bias.
Selective enforcement — enforcing rules against some owners while ignoring identical violations by others — is a frequently raised defense in Michigan community association enforcement actions. Michigan courts have held that even minor bylaw violations can be enforced, but that principle cuts both ways: if the association enforces against Owner A while ignoring the same violation by Owner B, the disparity becomes possible evidence of willful unfairness.
AI may amplify selective enforcement risk in ways boards often do not anticipate. If AI tools are used to process complaints about certain owners but not others, enforcement becomes biased by input selection. If AI generates differently worded notices for identical violations in different sessions, the paper trail may appear to show disparate treatment. If AI-driven enforcement patterns correlate, even unintentionally, with protected class characteristics under the Fair Housing Act, the association could face federal liability exposure. The fix requires standardized AI prompts for enforcement tasks and mandatory human review of every notice against the association’s enforcement log.
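The log-review step described above can be automated in part. The sketch below is a minimal, hypothetical illustration (the field names and violation labels are invented for this example, not drawn from any statute or management platform): before a notice goes out, it lists other units in the enforcement log with the same violation type that never received a notice, so a human reviewer can resolve the disparity first.

```python
from dataclasses import dataclass

@dataclass
class LogEntry:
    unit: str             # unit or lot identifier
    violation_type: str   # e.g. "fence-height", "parking"
    notice_sent: bool     # whether a notice was ever issued

def consistency_gaps(log, proposed_unit, violation_type):
    """Before noticing `proposed_unit`, return other units with the same
    violation type that never received a notice. Any hit is a
    selective-enforcement red flag the board should resolve first."""
    return sorted({e.unit for e in log
                   if e.violation_type == violation_type
                   and not e.notice_sent
                   and e.unit != proposed_unit})

log = [
    LogEntry("12A", "fence-height", True),
    LogEntry("14C", "fence-height", False),  # same violation, never noticed
    LogEntry("9B", "parking", True),
]
print(consistency_gaps(log, "12A", "fence-height"))  # ['14C']
```

A check like this does not replace human judgment; it simply surfaces the comparison a homeowner’s attorney will eventually make.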
Why Your Association Needs a Written AI Policy
What should a Michigan community association AI policy include? A comprehensive Michigan community association AI policy should address five core areas: (1) board authorization by resolution specifying approved and prohibited uses; (2) data classification tiers identifying what information may and may not be input into AI tools; (3) mandatory human review and sign-off before any AI-generated content is distributed; (4) record retention protocols for AI prompts and outputs used in official documents; and (5) an annual review requirement as technology and law continue to evolve rapidly.
Governance without a framework is governance waiting for a crisis. It is highly likely that most Michigan community associations have not adopted any resolution or policy governing AI use. That gap creates several compounding problems: board members and managers are using AI tools without board authorization, potentially acting ultra vires; there is no standard for what data may be input; there is no human review requirement before AI-generated communications go out; and there is no audit trail to demonstrate the oversight that supports a business judgment defense.
Phase 1 — Board Authorization
The foundation of a responsible AI governance framework is a board resolution. The resolution should identify which AI tools or tiers are acceptable for association use, specify approved use cases (communication drafting, minutes summarization, FAQ development, vendor RFP templates), and explicitly prohibit others (legal interpretation, unsupervised enforcement escalation, processing of owner-identifiable data in consumer tools). Documenting this discussion in board meeting minutes creates the governance record that matters if decisions are later challenged.
Phase 2 — Data Classification and Tool Selection
Not all information is created equal. A sound AI policy classifies information into tiers. General announcements and non-sensitive administrative content may be used with consumer-tier AI tools. Owner names, addresses, and contact information should be used only with enterprise-tier tools. Assessment balances, delinquency records, accommodation requests, and personnel matters require enterprise protections and legal guidance. Attorney-client communications are categorically prohibited from input into any AI tool without explicit counsel direction.
The enterprise tier distinction is critical. Consumer tiers — ChatGPT Free, Plus, and Pro; Google Gemini consumer; Microsoft Copilot personal — use input data for model training by default (unless the user specifically opts out) and provide no contractual confidentiality protections. Enterprise tiers, such as ChatGPT Business or Enterprise, Google Workspace with Gemini, and Microsoft 365 Copilot, do not use input data for training, offer audit logging and SOC 2 compliance, and provide contractual protections that consumer tiers do not.
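The tier rules described in Phase 2 can be expressed as a simple policy table. This is a hypothetical sketch (the tier names and mappings are illustrative placeholders, not a standard taxonomy): it gates whether a given class of data may be entered into a given class of tool.

```python
# Hypothetical data-tier -> permitted-tool-tier mapping; labels are illustrative.
TIER_RULES = {
    "public": {"consumer", "enterprise"},      # announcements, reminders
    "owner_pii": {"enterprise"},               # names, units, contact info
    "sensitive": {"enterprise_with_counsel"},  # balances, accommodations, personnel
    "privileged": set(),                       # attorney communications: no AI input
}

def may_input(data_tier: str, tool_tier: str) -> bool:
    """Return True only if the policy permits this data tier in this tool tier."""
    return tool_tier in TIER_RULES.get(data_tier, set())

print(may_input("public", "consumer"))       # True
print(may_input("owner_pii", "consumer"))    # False
print(may_input("privileged", "enterprise")) # False: empty set, never permitted
```

Note that the privileged tier maps to an empty set: under this sketch no tool tier qualifies, mirroring the article’s categorical prohibition on inputting attorney communications.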
Phase 3 — Human Review Requirements
Every AI-generated document that touches association governance must be reviewed by a qualified human before distribution. For violation notices, this means verifying every bylaw citation against the actual governing documents, because the hallucination rate for AI legal citations is far too high to skip this step. For meeting minutes, the board secretary must attest to the accuracy of every motion, vote, and action item. For enforcement communications, the manager or board must confirm that the same violation is being treated the same way association-wide.
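The citation-verification step above lends itself to a first-pass automated check before the human review. The sketch below is hypothetical (the section-number format and the sample index are invented for illustration): it flags any section cited in a draft notice that does not exist in an index built from the association’s actual governing documents.

```python
import re

# Hypothetical index of real section numbers, built from the actual bylaws.
GOVERNING_SECTIONS = {"6.1", "6.2", "7.4", "9.3"}

def unverified_citations(draft: str) -> list:
    """Return cited section numbers absent from the governing-documents index.
    Any hit means the AI may have hallucinated the citation; a human must
    verify it against the recorded documents before the notice is sent."""
    cited = re.findall(r"Section\s+(\d+\.\d+)", draft)
    return sorted({c for c in cited if c not in GOVERNING_SECTIONS})

draft = "Per Section 6.1 and Section 12.8 of the Bylaws, this notice..."
print(unverified_citations(draft))  # ['12.8']
```

A flagged citation is not proof of error, and a passing check is not proof of accuracy; the script only narrows what the human reviewer must confirm by hand.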
Phase 4 — Record Retention and Disclosure
Courts are beginning to treat AI prompts and outputs like emails, Slack messages, and server logs for discovery purposes. In the NYT v. OpenAI litigation (2025), the court ordered OpenAI to preserve certain consumer ChatGPT output logs that otherwise would have been deleted, and later ordered OpenAI to produce a de-identified 20-million-log sample of retained consumer ChatGPT logs; that production order was upheld in early 2026. Association records may include AI-generated content and the prompts that produced it. Associations are well advised to maintain a clear record retention policy that distinguishes between AI working drafts and final approved documents, and that specifies how long AI-related records are retained.
Phase 5 — Annual Review
AI law is moving faster than almost any other area of technology regulation. The Heppner decision came down in February 2026 and purpose-built HOA AI platforms have launched within the last 18 months. A responsible AI policy requires annual review, or more frequent review when significant legal or technological changes occur. The board should consult association counsel at each review to confirm the policy remains legally sound.
Practical AI Uses That Are Safe (When Done Right)
Responsible AI adoption begins with understanding where the tool adds value without creating unacceptable risk. When used with appropriate human oversight, AI is well-suited to a range of community association tasks:
- Drafting first versions of seasonal owner communications, maintenance reminders, and common area notices
- Summarizing board meeting transcripts into structured minutes (subject to board secretary review and attestation)
- Creating FAQ documents from the association’s governing documents (subject to legal review before publication)
- Generating vendor RFP templates and bid comparison frameworks
- Drafting budget narrative explanations for owner distributions
- Standardizing the tone and format of violation notice templates (subject to bylaw citation verification)
Uses That Are Never Appropriate
⚠ These AI uses are prohibited regardless of how the output looks:
- Inputting attorney-client communications into any AI tool without enterprise protections and counsel’s guidance
- Inputting owner-identifiable data (names, units, balances, violations) into consumer-tier tools
- Using AI to interpret governing documents or Michigan statutes as a substitute for legal counsel
- Allowing AI-generated enforcement notices to leave the office without human review against actual bylaws
- Permitting AI to make or communicate board decisions without board authorization
- Using AI to draft collection letters under the FDCPA without attorney review
Consult Qualified Michigan Counsel
AI governance in community associations sits at the intersection of Michigan corporate law (the Nonprofit Corporation Act), condominium law (the Michigan Condominium Act), privacy law (the Michigan Identity Theft Protection Act), federal fair housing law, federal debt collection law, attorney-client privilege doctrine, and rapidly evolving case law interpreting AI specifically. No technology checklist or generic AI policy template can account for your association’s specific governing documents, operational circumstances, and risk profile.
Michigan community association boards should consult with qualified legal counsel before adopting any AI use policy. Our firm has been actively drafting and implementing artificial intelligence policies for condominium associations and homeowners associations throughout Michigan. If you would like guidance on creating an AI policy tailored to your association’s governing documents and operational needs, or if you have questions about any of the legal issues discussed in this article, we welcome your inquiry. Contact us today to schedule a consultation.
Frequently Asked Questions
Can our Michigan association use AI to draft violation notices? Yes, but with mandatory safeguards. Every AI-generated violation notice must be reviewed against the association’s actual governing documents before distribution — AI hallucinates bylaw section numbers at high rates. The notice must be reviewed for consistency with recent enforcement actions to avoid selective enforcement claims, and no owner-identifiable information should be input into consumer-tier AI tools. Consider adopting a board resolution authorizing this use with documented human review requirements.
Does our association need a standalone AI policy, or is a general technology policy enough? A formal, standalone AI policy is strongly recommended. General technology policies do not address the unique legal risks of AI — specifically, attorney-client privilege waiver from inputting legal communications into AI tools, the data training practices of consumer-tier platforms, hallucination rates for legal content, and the fiduciary duty implications of AI reliance under MCL 450.2541. A comprehensive AI policy should be adopted by board resolution and reviewed annually. We assist association boards with creating an AI policy tailored for their community.
What is the difference between consumer-tier and enterprise-tier AI tools? Consumer-tier AI tools (ChatGPT Free/Plus/Pro, Google Gemini consumer, Microsoft Copilot personal) use your input data for model training by default (unless specifically opted out) and provide no contractual confidentiality protections. Enterprise-tier tools (ChatGPT Business/Enterprise, Google Workspace with Gemini, Microsoft 365 Copilot) do not use data for training and offer audit logging, admin controls, and SOC 2 compliance. For community associations, this distinction determines whether association data and communications are subject to legal discovery and whether any confidentiality expectations can be maintained.
Can sending a notice with a fabricated AI bylaw citation breach a board member’s fiduciary duty? Potentially yes. Under MCL 450.2541, board members must exercise ordinary prudence in discharging their duties. A board that distributes a violation notice citing a fabricated bylaw provision, without verifying the citation, has arguably failed the duty of care. Additionally, fabricated citations can be cited to support selective enforcement claims and undermine the association’s enforcement credibility. The board’s best protection is a documented human review process that verifies all citations before any notice is distributed.
Does Michigan have AI-specific laws governing community associations? Not yet, as of early 2026. Michigan does not have AI-specific legislation governing community associations. However, existing Michigan statutes, including MCL 450.2541 (fiduciary duties), MCL 559.157 (records inspection), MCL 445.72 (data breach notification), and the Michigan Condominium Act generally, apply to AI-related governance activities. Federal law (Fair Housing Act, FDCPA) also applies. The absence of AI-specific legislation does not create a permissive environment. Rather, it means existing legal standards will likely apply to AI activities, and boards must comply with those standards.
About the author
Richard M. Delonis is a Michigan condominium and HOA lawyer at Szura & Delonis, PLC in Southfield (Metro Detroit). He advises association boards and community association managers on governance, rule enforcement, assessment collections, document amendments, and risk management, with a practical focus on helping boards reduce disputes and run defensible, well-documented processes.
Disclaimer: This article provides general Michigan-oriented information for condominium association and HOA boards and is not legal advice. Associations should consult experienced legal counsel about their specific documents, facts, and options.