Australian AI Tool Assessment Framework

A comprehensive evaluation covering SOCI compliance, the Australian AI Ethics Principles, and AI guardrails.

A self-guided tool for assessing AI systems against the Security of Critical Infrastructure Act 2018 (SOCI Act), the Australian AI Ethics Principles, and emerging AI guardrails. It comes preloaded with risk profiles for common tools (e.g. OpenAI GPT-4, Anthropic Claude, Otter.ai) and is designed to support internal compliance teams.

AI Tools Quick Reference

Pre-assessed compliance levels for common AI tools – click any tool to auto-populate the entire assessment:

Large Language Models

  • OpenAI GPT-4 – Medium Risk
  • Anthropic Claude – Low Risk
  • Google Gemini – Medium Risk
  • Meta Llama – High Risk
  • Amazon Titan – Medium Risk

AI Note-Taking Apps

  • Otter.ai – Medium Risk
  • Fireflies.ai – Low Risk
  • Granola – Medium Risk
  • Gong – Low Risk
  • Fathom – Medium Risk
⚠️ Disclaimer: These assessments are based on publicly available information and general implementation scenarios. Your specific deployment may have different compliance characteristics depending on configuration, controls, and use case.
💡 Quick Start: Click any tool above or use the dropdown below to instantly populate all 26 compliance criteria with baseline scores and detailed evidence. You can then adjust any scores based on your specific implementation.
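To make the auto-populate behaviour concrete, here is a minimal TypeScript sketch of how a preloaded risk profile could be structured. The type names, the 0-5 score scale, and the field layout are illustrative assumptions, not the framework's published data model.

```typescript
// Hypothetical data shape for a preloaded tool risk profile.
type RiskLevel = "Low" | "Medium" | "High";

interface CriterionScore {
  id: string;            // e.g. "soci-1" or "guardrail-10" (illustrative IDs)
  baselineScore: number; // assumed 0-5 scale, adjustable per deployment
  evidence: string;      // rationale drawn from public documentation
}

interface ToolProfile {
  name: string;
  category: "LLM" | "Note-Taking App";
  overallRisk: RiskLevel;
  criteria: CriterionScore[]; // one entry per compliance criterion
}

// Selecting a profile would copy its baseline scores into the assessment
// form, where the assessor can override any value.
const exampleProfile: ToolProfile = {
  name: "Anthropic Claude",
  category: "LLM",
  overallRisk: "Low",
  criteria: [
    { id: "soci-1", baselineScore: 4, evidence: "Placeholder rationale." },
    // ...remaining criteria omitted for brevity
  ],
};
```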
Assessment Steps

  1. Tool Information
  2. SOCI Assessment
  3. AI Principles
  4. AI Guardrails
  5. Results

AI Tool Information

Record basic details about the tool under assessment, such as its name, vendor, and intended use, before scoring the sections below.

SOCI Compliance Assessment

ℹ️ Note: This section applies if your organization is a responsible entity for a critical infrastructure asset under the Security of Critical Infrastructure Act 2018 (SOCI Act).

1. Risk Management Program

Has the AI tool been assessed under your organization’s SOCI risk management program?

Requirements:

  • Identification of material risks
  • Minimization or elimination of risks
  • Regular review and update of risk assessments

2. Cyber Security Obligations

Does the AI tool meet enhanced cyber security obligations?

Requirements:

  • Prevent unauthorized access and modification
  • Maintain availability and integrity
  • Prepare for and respond to cyber security incidents

3. Incident Reporting Capability

Can incidents involving the AI tool be reported within SOCI timeframes (12 hours for incidents with a significant impact; 72 hours for those with a relevant impact)? A deadline sketch follows the requirements below.

Requirements:

  • Detect and assess incidents promptly
  • Report within mandated timeframes
  • Maintain incident logs and evidence
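As a worked illustration of these timeframes, here is a minimal TypeScript sketch that computes a reporting deadline from an incident's detection time. It assumes the 12-hour (significant impact) and 72-hour (relevant impact) windows described above; the type and function names are hypothetical, and current obligations should be confirmed against the Act itself.

```typescript
// Sketch of SOCI incident-reporting deadlines. Assumes 12-hour and 72-hour
// windows for significant- and relevant-impact incidents respectively.
type IncidentImpact = "significant" | "relevant";

const REPORTING_WINDOW_HOURS: Record<IncidentImpact, number> = {
  significant: 12,
  relevant: 72,
};

function reportingDeadline(detectedAt: Date, impact: IncidentImpact): Date {
  const windowMs = REPORTING_WINDOW_HOURS[impact] * 60 * 60 * 1000;
  return new Date(detectedAt.getTime() + windowMs);
}

// Example: a significant-impact incident detected now must be reported
// within 12 hours of detection.
console.log(reportingDeadline(new Date(), "significant").toISOString());
```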

AI Ethics Principles Assessment

1. Human, Societal and Environmental Wellbeing

Does the AI tool benefit individuals, society and the environment?

2. Human-Centred Values

Does the AI respect human rights, diversity, and autonomy?

3. Fairness

Is the AI inclusive, accessible, and free from unfair discrimination?

4. Privacy Protection

Does the AI ensure privacy rights and data protection?

5. Reliability and Safety

Does the AI operate reliably and safely according to its intended purpose?

6. Transparency and Explainability

Is there transparency about AI use and can outcomes be explained?

7. Contestability

Can people challenge AI decisions that affect them?

8. Accountability

Is there clear accountability for AI impacts?

AI Guardrails Assessment

⚠️ Note: These guardrails broadly align with Australia's Voluntary AI Safety Standard. They are currently voluntary but may become mandatory for high-risk AI systems.

1. Accountability Process

Is there an established accountability process with governance and compliance strategy?

2. Risk Management Process

Are there processes to identify and mitigate AI risks?

3. Data Governance and Protection

Are AI systems and data quality protected through governance measures?

4. Testing and Monitoring

Is the AI tested before deployment and continuously monitored?

5. Human Oversight

Is there meaningful human oversight and intervention capability?

6. User Transparency

Are users informed about AI use and its role?

7. Contestability Process

Can affected parties challenge AI decisions?

8. AI-Generated Content Disclosure

Is AI-generated content clearly identified?

9. Record Keeping

Are comprehensive records maintained for AI operations?

10. Stakeholder Engagement

Is there engagement with stakeholders on safety, diversity and fairness?
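
Finally, for the Results step, here is a hedged TypeScript sketch of how per-criterion scores might roll up into an overall risk rating. The 0-5 scale, the simple averaging, and the thresholds are all assumptions for illustration; the framework's actual rubric may weight sections differently.

```typescript
// Hypothetical score roll-up for the Results step. Scale and thresholds
// are assumptions; the framework's actual scoring rubric may differ.
interface SectionResult {
  name: string;     // "SOCI", "AI Principles", or "AI Guardrails"
  scores: number[]; // per-criterion scores on the assumed 0-5 scale
}

function averageScore(sections: SectionResult[]): number {
  const all = sections.flatMap((s) => s.scores);
  return all.reduce((sum, x) => sum + x, 0) / all.length;
}

function overallRisk(avg: number): "Low" | "Medium" | "High" {
  if (avg >= 4) return "Low";      // assumed threshold
  if (avg >= 2.5) return "Medium"; // assumed threshold
  return "High";
}

const demo: SectionResult[] = [
  { name: "SOCI", scores: [4, 3, 5] },
  { name: "AI Principles", scores: [4, 4, 3, 5, 4, 3, 4, 4] },
  { name: "AI Guardrails", scores: [3, 4, 4, 3, 5, 4, 3, 4, 4, 3] },
];
console.log(overallRisk(averageScore(demo))); // "Medium" (average ≈ 3.8)
```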