Australian AI Tool Assessment Framework
Comprehensive evaluation for SOCI Act compliance and the Australian AI Ethics Principles and Guardrails
A self-guided tool for assessing AI systems against the Security of Critical Infrastructure (SOCI) Act, the Australian AI Ethics Principles, and the emerging AI guardrails. Preloaded with risk profiles for common tools (e.g. OpenAI GPT-4, Anthropic Claude, Otter.ai) and designed to support internal compliance teams.
AI Tools Quick Reference
Pre-assessed compliance levels for common AI tools – click any tool to auto-populate the entire assessment (a data-model sketch follows the lists below):
Large Language Models
- OpenAI GPT-4 – Medium Risk
- Anthropic Claude – Low Risk
- Google Gemini – Medium Risk
- Meta Llama – High Risk
- Amazon Titan – Medium Risk
AI Note-Taking Apps
- Otter.ai – Medium Risk
- Fireflies.ai – Low Risk
- Granola – Medium Risk
- Gong – Low Risk
- Fathom – Medium Risk
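To make the auto-populate behaviour concrete, here is a minimal sketch of how the quick-reference registry might be modelled. All names here (RiskLevel, ToolProfile, TOOL_REGISTRY, autoPopulate) and the answer values are illustrative assumptions, not the tool's actual implementation.

```typescript
// Hypothetical data model for the pre-assessed tool registry.
// Names and shapes are assumptions for illustration only.

type RiskLevel = "Low" | "Medium" | "High";

interface ToolProfile {
  name: string;
  category: "LLM" | "Note-Taking";
  risk: RiskLevel;
  // Pre-filled answers keyed by question id, e.g. "soci-1", "ethics-3".
  answers: Record<string, "Yes" | "Partial" | "No">;
}

const TOOL_REGISTRY: ToolProfile[] = [
  { name: "Anthropic Claude", category: "LLM", risk: "Low", answers: { "soci-1": "Yes" } },
  { name: "Otter.ai", category: "Note-Taking", risk: "Medium", answers: { "soci-1": "Partial" } },
  // ...remaining tools from the quick reference above
];

// Clicking a tool copies its pre-assessed answers into the live assessment state.
function autoPopulate(toolName: string, state: Record<string, string>): void {
  const profile = TOOL_REGISTRY.find((t) => t.name === toolName);
  if (!profile) throw new Error(`Unknown tool: ${toolName}`);
  Object.assign(state, profile.answers);
}
```

Keeping the pre-filled answers keyed by question id means a selected profile can populate every section in one pass, while still letting a compliance reviewer override individual answers afterwards.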
AI Tool Information
SOCI Compliance Assessment
1. Risk Management Program
Has the AI tool been assessed under your organisation's SOCI risk management program?
Requirements:
- Identification of material risks
- Minimisation or elimination of risks
- Regular review and update of risk assessments
2. Cyber Security Obligations
Does the AI tool meet enhanced cyber security obligations?
Requirements:
- Prevent unauthorised access and modification
- Maintain availability and integrity
- Prepare for and respond to cyber security incidents
3. Incident Reporting Capability
Can incidents involving the AI tool be reported within SOCI timeframes – 12 hours for critical incidents, 72 hours for other reportable incidents? (A deadline-calculation sketch follows the requirements below.)
Requirements:
- Detect and assess incidents promptly
- Report within mandated timeframes
- Maintain incident logs and evidence
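As a worked illustration of the reporting windows above, the sketch below derives a notification deadline from an incident's detection time and severity. The 12-hour and 72-hour windows come from the SOCI Act's mandatory reporting obligations; the severity labels and the reportingDeadline helper are hypothetical names for illustration.

```typescript
// Hypothetical helper: derive the SOCI notification deadline for an incident.
// The 12h/72h windows reflect the Act's reporting obligations; everything
// else (types, function name) is illustrative.

type IncidentSeverity = "critical" | "other";

const REPORTING_WINDOW_HOURS: Record<IncidentSeverity, number> = {
  critical: 12, // critical cyber security incidents
  other: 72,    // other incidents having a relevant impact
};

function reportingDeadline(detectedAt: Date, severity: IncidentSeverity): Date {
  const hours = REPORTING_WINDOW_HOURS[severity];
  return new Date(detectedAt.getTime() + hours * 60 * 60 * 1000);
}

// Example: a critical incident detected at 09:00 AEDT must be reported by 21:00 the same day.
const deadline = reportingDeadline(new Date("2025-01-06T09:00:00+11:00"), "critical");
console.log(deadline.toISOString());
```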
AI Ethics Principles Assessment
1. Human, Societal and Environmental Wellbeing
Does the AI tool benefit individuals, society and the environment?
2. Human-Centred Values
Does the AI respect human rights, diversity, and autonomy?
3. Fairness
Is the AI inclusive, accessible, and free from unfair discrimination?
4. Privacy Protection
Does the AI ensure privacy rights and data protection?
5. Reliability and Safety
Does the AI operate reliably and safely according to its intended purpose?
6. Transparency and Explainability
Is there transparency about AI use and can outcomes be explained?
7. Contestability
Can people challenge AI decisions that affect them?
8. Accountability
Is there clear accountability for AI impacts?
AI Guardrails Assessment
1. Accountability Process
Is there an established accountability process with governance and compliance strategy?
2. Risk Management Process
Are there processes to identify and mitigate AI risks?
3. Data Governance and Protection
Are AI systems and data quality protected through governance measures?
4. Testing and Monitoring
Is the AI tested before deployment and continuously monitored?
5. Human Oversight
Is there meaningful human oversight and intervention capability?
6. User Transparency
Are users informed about AI use and its role?
7. Contestability Process
Can affected parties challenge AI decisions?
8. AI-Generated Content Disclosure
Is AI-generated content clearly identified?
9. Record Keeping
Are comprehensive records maintained for AI operations?
10. Stakeholder Engagement
Is there engagement with stakeholders on safety, diversity and fairness?
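The three checklists above share the same shape: a named section containing numbered questions, each answered on the same scale. A minimal sketch of that shared structure follows; the type names, question ids, and answer scale are assumptions for illustration, not the tool's actual schema.

```typescript
// Hypothetical shared model for the three checklists above.
// Type names, ids, and the answer scale are illustrative assumptions.

type Answer = "Yes" | "Partial" | "No" | "Not Assessed";

interface Question {
  id: string;      // e.g. "guardrails-8"
  text: string;    // e.g. "Is AI-generated content clearly identified?"
  answer: Answer;
}

interface AssessmentSection {
  name: "SOCI Compliance" | "AI Ethics Principles" | "AI Guardrails";
  questions: Question[];
}

const guardrails: AssessmentSection = {
  name: "AI Guardrails",
  questions: [
    { id: "guardrails-1", text: "Is there an established accountability process?", answer: "Not Assessed" },
    // ...questions 2-10 from the list above
  ],
};
```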
Assessment Results
Not Assessed – complete all sections to see results.
Section Scores
- SOCI Compliance – Not Assessed
- AI Ethics Principles – Not Assessed
- AI Guardrails – Not Assessed
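The results panel aggregates per-question answers into the section scores above and an overall status. The sketch below shows one plausible aggregation; the weights (Yes = 1, Partial = 0.5, No = 0) and the band thresholds are assumptions for illustration and may differ from the tool's actual scoring.

```typescript
// Hypothetical scoring: aggregate answers into a section score and band.
// Weights and thresholds are illustrative assumptions.

type Answer = "Yes" | "Partial" | "No" | "Not Assessed";

const WEIGHT: Record<Exclude<Answer, "Not Assessed">, number> = {
  Yes: 1,
  Partial: 0.5,
  No: 0,
};

function sectionScore(answers: Answer[]): number | null {
  // Any unanswered question leaves the whole section "Not Assessed".
  if (answers.includes("Not Assessed")) return null;
  const total = answers.reduce(
    (sum, a) => sum + WEIGHT[a as Exclude<Answer, "Not Assessed">],
    0,
  );
  return total / answers.length; // 0.0 – 1.0
}

function band(score: number | null): string {
  if (score === null) return "Not Assessed";
  if (score >= 0.8) return "Compliant";
  if (score >= 0.5) return "Partially Compliant";
  return "Non-Compliant";
}

// Example: a SOCI section answered Yes / Partial / Yes scores 0.83 -> "Compliant".
console.log(band(sectionScore(["Yes", "Partial", "Yes"])));
```

Treating any unanswered question as blocking the whole section mirrors the panel's behaviour above, where every section reads "Not Assessed" until the assessment is complete.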