Artificial Intelligence Governance Policy
Contents
- Purpose & Scope
- Guiding Principles
- AI System Inventory & Risk Classification
- Risk & Impact Assessments
- Data Governance & Model Development Controls
- Bias Testing & Mitigation
- Human Oversight & Escalation
- Transparency & Customer Communications
- Ongoing Monitoring & Model Lifecycle Management
- Third-Party & Customer Controls
- Accountability & Governance Structure
- Regulatory Cooperation & Policy Review
1. Purpose & Scope
This policy applies to all AI systems developed, licensed, deployed, or operated by the Company; all employees, contractors, and agents involved in AI-related activities; and all third-party models or tools integrated into Company products.
Applicable frameworks include the New York Responsible AI Safety and Education Act (RAISE Act), California automated decision-making and privacy regulations (CPRA/ADMT), the Colorado Artificial Intelligence Act, and similar frameworks.
2. Guiding Principles
Lawfulness
AI Systems will be designed and operated in compliance with applicable laws and regulations.
Fairness & Non-Discrimination
Reasonable measures will be taken to identify, test, and mitigate unlawful or prohibited bias.
Transparency
Appropriate disclosures regarding AI use, purpose, and limitations will be provided.
Human Oversight
Meaningful human review will be available for high-risk AI Systems. No output is finalized without review by a veterinarian (DVM).
Accountability
Clear ownership and escalation paths exist for AI-related decisions and incidents.
Security & Privacy
AI Systems will be developed and operated using appropriate data protection and security safeguards.
3. AI System Inventory & Risk Classification
The Company maintains an inventory of all AI Systems, documenting intended purpose and decision context, data sources and dependencies, and whether outputs materially affect legal, economic, or similarly significant rights.
Each AI System is classified as low-risk, moderate-risk, or high-risk. Risk classification is reviewed upon material modification, retraining, or expansion of use cases. Risk classification considers the nature of decisions made, impact on individuals, and regulatory exposure.
4. Risk & Impact Assessments
Prior to deployment of any moderate-risk or high-risk AI System, the Company conducts a documented AI risk and impact assessment addressing:
- Foreseeable discrimination or disparate impact risks
- Data quality, provenance, and representativeness
- Privacy, security, and misuse risks
- Degree of human reliance on AI outputs
- Availability of human review or override mechanisms
Risk assessments are updated upon material system changes.
5. Data Governance & Model Development Controls
The Company implements data governance practices including:
- Documented data sourcing and provenance
- Review of training and validation datasets for representational imbalance or proxy discrimination
- Data minimization and purpose limitation
- Secure handling of training and inference data
6. Bias Testing & Mitigation
The Company implements documented bias testing procedures proportionate to the risk level of each AI System, which may include:
- Statistical fairness and outcome disparity analysis
- Error rate comparisons across relevant populations
- Counterfactual or sensitivity testing
- Human review of sampled outputs
Where bias risks are identified, reasonable mitigation measures are implemented and documented, including data refinement, feature constraints, retraining, or output calibration.
Bias testing is a continuous lifecycle obligation, not a one-time activity. The Company monitors species and breed accuracy, drug name accuracy, and regional terminology variations as primary bias vectors in the veterinary context.
7. Human Oversight & Escalation
For high-risk AI Systems, the Company:
- Defines circumstances requiring human-in-the-loop review
- Provides override or appeal mechanisms where appropriate
- Trains reviewers on intervention criteria
- Logs overrides and corrective actions
For WoofNote specifically: all SOAP notes, dental charts, lab interpretations, imaging reports, and WoofPlan summaries are presented to the veterinarian for review before being saved, used clinically, or shared with clients. The veterinarian may edit, reject, or override any AI-generated content at any time.
8. Transparency & Customer Communications
The Company provides customers with clear, accurate information regarding:
- The use of AI Systems in products or services
- The intended purpose and appropriate use of AI Systems
- Known limitations and material risks
- The availability of human review mechanisms
The Company does not make representations that AI Systems are error-free or bias-free. All AI-generated outputs are clearly labeled as AI-generated within the interface.
9. Ongoing Monitoring & Model Lifecycle Management
The Company monitors AI Systems post-deployment to detect performance degradation, bias emergence, or model drift; identify feedback loops affecting training data; and trigger retraining, modification, or decommissioning where necessary. Monitoring frequency is proportionate to system risk and impact.
Jason Bonner, DVM, uses WoofNote daily in active clinical practice at North Alabama Animal Hospital, providing real-time validation in live patient care, a strong form of ongoing output monitoring.
10. Third-Party & Customer Controls
Where AI Systems rely on third-party models, the Company:
- Conducts reasonable diligence on third-party AI providers
- Allocates AI governance responsibilities contractually
- Prohibits unsupported or unlawful high-risk uses
- Cooperates with customer and regulatory audits consistent with confidentiality obligations
Current third-party AI providers: Deepgram (speech-to-text transcription) and Google Gemini API (SOAP generation and clinical documentation). Both are subject to contractual data protection obligations.
11. Accountability & Governance Structure
The Company:
- Designates responsible personnel for AI governance oversight
- Provides periodic training to relevant employees on AI risk and compliance obligations
- Maintains documentation sufficient to demonstrate compliance with this Policy and applicable state AI laws
Designated AI Governance Officer
Jason Bonner, DVM, Bonner Veterinary Media LLC
Email: support@woofnote.com
12. Regulatory Cooperation & Policy Review
The Company reasonably cooperates with lawful regulatory inquiries related to AI Systems. This Policy is reviewed periodically and updated to reflect changes in law, technology, and industry standards.
AI Systems are probabilistic by nature. This Policy does not require error-free or bias-free outcomes, but rather the implementation of reasonable, good-faith, and risk-based governance measures.
This Policy is intended for internal governance and external diligence purposes and does not create third-party beneficiary rights.