How to Create an AI Systems Inventory: The Foundation of Compliance
Step-by-step guide to building an AI systems inventory for EU AI Act compliance. Covers shadow AI discovery, department surveys, what to record per system, and maintaining the inventory over time.
Why Inventory Comes Before Everything Else
Every guide to EU AI Act compliance will eventually tell you to classify your AI systems, assess their risks, document their technical specifications, and implement human oversight. All of that is correct. But none of it is possible until you answer a deceptively simple question: what AI systems does your organization actually use?
This is where most compliance programs should start, and where many go wrong. Organizations jump straight into risk classification or documentation without first establishing a complete, accurate inventory of their AI systems. The result is compliance gaps — not because the organization mishandled the systems it knew about, but because entire systems were never identified in the first place.
An AI systems inventory is the foundation upon which all other compliance activities are built. Classification requires knowing what to classify. Risk assessment requires knowing what to assess. Documentation requires knowing what to document. Without an inventory, you are building compliance on guesswork.
The EU AI Act does not use the phrase "AI systems inventory" as a standalone legal requirement, but the obligation to maintain one is implicit throughout the regulation. Article 26 requires deployers to take appropriate measures to ensure they use high-risk AI systems in accordance with the instructions — which presumes they know what systems they are deploying. Article 49 requires registration of high-risk AI systems in the EU database — which requires knowing which systems are high-risk. The fundamental rights impact assessment under Article 27 requires deployers to describe their processes for using AI systems — which requires having identified those systems.
In practice, every national competent authority and every AI Act compliance framework starts with inventory. It is not a suggestion. It is where everything begins.
The Shadow AI Problem
What Shadow AI Is
Shadow AI is AI that employees have adopted, integrated, or built into their workflows without formal IT approval, procurement oversight, or organizational awareness. It is the AI equivalent of shadow IT, and it is significantly more prevalent than most organizations realize.
Shadow AI takes many forms:
- A marketing team using ChatGPT to draft customer emails
- A sales representative using an AI writing assistant to personalize proposals
- A finance analyst using an AI tool to summarize quarterly reports
- A developer using AI code generation tools without organizational approval
- An HR team member using an AI-powered resume screening service
- A project manager using an AI meeting transcription tool
- Customer service agents using AI chatbot tools alongside the official support platform
Each of these represents an AI system in use within the organization. Each may have implications under the AI Act, depending on what data it processes, what decisions it influences, and who it affects.
Why Shadow AI Is a Compliance Risk
Shadow AI creates compliance risk in several ways:
Unknown data exposure: AI tools process data. When employees use unapproved AI services, they may be feeding customer data, financial information, employee records, or proprietary business data into third-party systems without appropriate data processing agreements, security reviews, or GDPR compliance measures.
Unassessed risk: An AI tool used to screen job applications is high-risk under Annex III, point 4 (employment and workers' management), regardless of whether IT approved it. If the organization does not know it exists, it cannot classify it, assess its risks, or comply with the AI Act's requirements.
No documentation trail: The AI Act requires documentation for high-risk systems and transparency for systems that interact with people. Shadow AI has no documentation, no transparency measures, and no compliance controls.
Liability without awareness: The organization is the deployer of every AI system used in its operations, even those it did not formally adopt. If a shadow AI tool causes harm — a biased hiring decision, a discriminatory customer interaction, a privacy breach — the organization bears the legal consequences.
The Scale of the Problem
Research consistently shows that organizations underestimate their AI usage. A 2025 study by McKinsey found that the average enterprise uses three to five times more AI tools than its IT department is aware of. For SMBs, the ratio is often worse because there is less formal procurement oversight.
The proliferation of AI features embedded in existing software compounds the problem. Your CRM may have added AI-powered lead scoring. Your email platform may have introduced AI writing assistance. Your accounting software may use AI for anomaly detection. These are AI systems under the EU AI Act's broad definition, even though they were not procured as "AI products."
The Discovery Process
Building an accurate inventory requires a systematic discovery process that combines multiple methods. No single approach will find everything.
Method 1: Department Surveys
Survey every department and team to identify AI tools in use. This is the most direct method, but also the one most dependent on honest and complete responses.
Design the survey carefully:
- Define what counts as AI in plain language. Many employees do not think of the tools they use as "AI." Frame questions around specific behaviors: "Do you use any tools that generate text, images, or suggestions automatically?" "Do you use any tools that make predictions or recommendations?" "Do you paste company data into any online tools to get summaries or analysis?"
- Ask about both formal and informal tools. Emphasize that the purpose is compliance, not discipline. If employees fear consequences for disclosing unapproved tool usage, they will not disclose.
- Cover the full spectrum: purchased software, free online tools, browser extensions, mobile apps, embedded features in existing software, custom-built tools, and tools accessed through personal accounts.
- Ask about frequency, purpose, and data types. "I tried ChatGPT once to brainstorm" is different from "I use an AI tool daily to process customer data."
Survey every department, including those you might not expect. AI usage is not limited to technical teams. Marketing, sales, HR, finance, legal, operations, and customer service all adopt AI tools — often independently of each other.
Method 2: IT Audit
Complement surveys with a technical audit of AI systems in your technology environment.
Network and SaaS analysis:
- Review network traffic logs for connections to known AI service providers (OpenAI, Anthropic, Google AI, Microsoft Azure AI, AWS AI services, and others)
- Audit SaaS subscriptions and license management systems for AI-related products
- Review browser extension policies and installed extensions across the organization
- Check API key usage and integrations — AI services accessed via API may not appear in standard SaaS inventories
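The log review step above can be automated with a small script. The sketch below matches log lines against known AI provider endpoints; the domain list and the log format are illustrative assumptions, so adapt both to your actual proxy or DNS logs and extend the domain list as new providers appear.

```python
# Minimal sketch: flag outbound requests to known AI service endpoints in a
# proxy or DNS log. Domain list and log format are illustrative assumptions.
AI_PROVIDER_DOMAINS = {
    "api.openai.com": "OpenAI",
    "api.anthropic.com": "Anthropic",
    "generativelanguage.googleapis.com": "Google AI",
    "openai.azure.com": "Azure OpenAI",
}

def flag_ai_traffic(log_lines):
    """Return (domain, provider) pairs for lines mentioning a known AI endpoint."""
    hits = []
    for line in log_lines:
        for domain, provider in AI_PROVIDER_DOMAINS.items():
            if domain in line:
                hits.append((domain, provider))
    return hits

sample_log = [
    "2025-03-01T09:14:02 10.0.0.4 GET https://api.openai.com/v1/chat/completions",
    "2025-03-01T09:14:05 10.0.0.7 GET https://example.com/index.html",
]
print(flag_ai_traffic(sample_log))  # [('api.openai.com', 'OpenAI')]
```

A substring match like this will produce false positives on log lines that merely mention a domain; in practice you would match against the parsed destination host field of your log format.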
Software inventory review:
- Examine every software product in your technology stack for AI-powered features. Many enterprise tools have added AI capabilities through updates without requiring new procurement decisions.
- Check for AI features in CRM systems, ERP systems, HR platforms, marketing automation tools, customer service platforms, and productivity suites
- Review custom-built applications for embedded AI components (machine learning models, API calls to AI services, automated decision logic)
Cloud and infrastructure review:
- Audit cloud environments for AI/ML services in use (model training, inference endpoints, AI APIs)
- Check for containerized AI models or services running in your infrastructure
- Review data pipelines for AI processing stages
Method 3: Procurement Review
Examine procurement records to identify AI systems acquired through formal purchasing channels.
- Review all software procurement contracts from the past three years for AI-related capabilities
- Check vendor contracts for clauses about AI processing, automated decision-making, or machine learning
- Review procurement requests that mention AI, ML, automation, or predictive capabilities
- Identify any AI-specific budget line items or cost centers
This method catches formally procured systems but misses shadow AI and embedded AI features. It is most useful as a cross-reference against the other methods.
Method 4: Vendor Assessment
Contact your existing software vendors to ask whether their products include AI capabilities.
Many vendors have added AI features through updates. Your CRM vendor may have introduced AI lead scoring. Your accounting software may have added AI-powered anomaly detection. Your customer service platform may have embedded AI chatbot capabilities.
Send a structured questionnaire to each vendor asking:
- Does your product use AI, machine learning, or automated decision-making?
- If yes, what specific AI features are included?
- What data does the AI process?
- Where is the AI processing performed (on-premises, vendor cloud, third-party cloud)?
- What transparency or explainability features are available?
- Can AI features be disabled if needed?
Use the AI discovery tool to systematically work through this process with guided workflows and tracking.
What to Record for Each AI System
Once you have identified your AI systems, you need to record detailed information about each one. The inventory is only useful if it contains enough information to support classification, risk assessment, and ongoing compliance management.
Core Information
For every AI system in your inventory, record:
System identification:
- System name and version
- Vendor or internal development team
- Unique identifier (for internal tracking)
- Date of initial deployment
- Current operational status (active, pilot, deprecated, planned)
Purpose and function:
- Intended purpose (what the system is designed to do)
- Actual use (how the system is used in practice — this may differ from intended purpose)
- Business process it supports
- Department or team responsible for the system
AI characteristics:
- Type of AI technology (machine learning, deep learning, rule-based, NLP, computer vision, generative AI, etc.)
- Whether the system learns or adapts over time (static model vs. continuously learning)
- Whether the system makes autonomous decisions or provides recommendations for human decision-makers
- The model provider (if using a third-party AI model)
Data and Privacy
Data inputs:
- Types of data the system processes (personal data, financial data, health data, biometric data, etc.)
- Data sources (internal databases, user inputs, third-party data, public data)
- Volume of data processed
- Whether special category data (under GDPR Article 9) is processed
Data outputs:
- What the system produces (scores, classifications, generated content, recommendations, decisions)
- Who receives the outputs (internal users, customers, third parties)
- How outputs are used in decision-making processes
Data governance:
- Data processing agreement status (for third-party AI services)
- Data storage location (EU, non-EU, vendor cloud)
- Data retention policy
- GDPR legal basis for processing
Risk and Compliance
Risk classification:
- Preliminary risk classification under the EU AI Act (unacceptable, high, limited, minimal)
- Basis for classification (which Annex III category, if applicable)
- Whether a full risk assessment has been conducted
- Date and outcome of the most recent risk assessment
Affected persons:
- Who is affected by the system's outputs (customers, employees, applicants, public)
- Estimated number of persons affected
- Whether vulnerable groups are affected (children, elderly, persons with disabilities)
- Potential impact on fundamental rights
Compliance status:
- Current compliance status (compliant, in progress, gap identified, not assessed)
- Outstanding compliance actions
- Responsible person or team for compliance
- Target date for full compliance
Operational Details
Human oversight:
- Level of human involvement in the system's operation
- Who has authority to override the system's outputs
- Escalation procedures for disputed outputs
- Training provided to human operators
Monitoring and maintenance:
- How system performance is monitored
- Frequency of model updates or retraining
- Known limitations or failure modes
- Incident history (past failures, errors, complaints)
Dependencies and integrations:
- Other systems the AI system connects to
- APIs or data feeds it consumes or produces
- Upstream and downstream dependencies
This may seem like a lot of information per system, but most of it is essential for the compliance activities that follow. Every field maps directly to a requirement in the AI Act or a practical need in risk assessment, documentation, or monitoring.
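The fields above can be captured as a structured record per system. The sketch below covers a subset of the core fields; the field names and enumerated values are illustrative assumptions, not a prescribed schema.

```python
# Illustrative inventory record covering a subset of the fields described
# above. Field names and example values are assumptions, not a fixed schema.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    # System identification
    name: str
    vendor: str
    system_id: str
    status: str                 # active | pilot | deprecated | planned
    # Purpose and function
    intended_purpose: str
    actual_use: str
    owner_department: str
    # AI characteristics
    technology: str             # e.g. "generative AI", "ML classifier"
    learns_over_time: bool
    autonomous_decisions: bool
    # Data and privacy
    data_types: list = field(default_factory=list)
    special_category_data: bool = False
    # Risk and compliance
    risk_class: str = "not assessed"  # unacceptable | high | limited | minimal
    annex_iii_category: str = ""

record = AISystemRecord(
    name="ResumeScreen Pro",           # hypothetical example system
    vendor="ExampleVendor",
    system_id="AI-0007",
    status="active",
    intended_purpose="Rank incoming job applications",
    actual_use="Pre-filter applications before recruiter review",
    owner_department="HR",
    technology="ML classifier",
    learns_over_time=False,
    autonomous_decisions=False,
    data_types=["personal data", "employment history"],
    risk_class="high",
    annex_iii_category="4 (employment)",
)
print(record.name, record.risk_class)  # ResumeScreen Pro high
```

Note that `intended_purpose` and `actual_use` are deliberately separate fields: the divergence between them is often where compliance risk hides.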
Use the systems inventory tool to structure and maintain this information in a format designed for AI Act compliance.
Building the Inventory: Practical Workflow
Phase 1: Rapid Discovery (Weeks 1-2)
The goal of Phase 1 is speed, not perfection. Identify as many AI systems as possible in a short time frame.
- Launch department surveys across the organization with a two-week response deadline
- Run an initial IT audit focused on known AI service providers and SaaS subscriptions
- Review procurement records for the past 12 months
- Create a preliminary inventory list with system name, department, and primary purpose
At the end of Phase 1, you should have a list of AI systems. It will not be complete, and the details will be sparse, but it gives you a working baseline.
Phase 2: Deep Discovery (Weeks 3-4)
Fill in the gaps from Phase 1 and find systems that the initial sweep missed.
- Follow up on survey non-responses and ambiguous answers
- Conduct targeted IT audits based on Phase 1 findings (e.g., if one department uses a specific AI tool, check whether other departments use it too)
- Send vendor assessment questionnaires to all major software vendors
- Interview department heads and power users about AI usage patterns
- Check for embedded AI features in enterprise software platforms
Phase 3: Detail and Classification (Weeks 5-8)
With a comprehensive list of systems, fill in the detailed information for each one.
- Complete the core information fields for every system
- Conduct preliminary risk classification for each system using the classification tool
- Identify high-risk systems that need immediate compliance attention
- Prioritize systems for full risk assessment based on risk level and deployment scale
- Assign owners for each system in the inventory
Phase 4: Validate and Publish (Week 9)
Validate the inventory with stakeholders and establish it as the organizational reference.
- Review the complete inventory with department heads for accuracy
- Cross-reference with IT systems of record to catch any remaining gaps
- Present findings to leadership, including the number and risk profile of AI systems discovered
- Establish the inventory as the official AI systems register for the organization
- Set up the ongoing maintenance process (below)
Start the discovery process today with the assessment tool to understand the scope of your AI landscape.
Maintaining the Inventory Over Time
An inventory that is accurate on the day it is created but never updated is nearly as useless as no inventory at all. AI adoption is accelerating. New systems are deployed, existing systems are updated, and employees continue to discover and adopt new AI tools. Your inventory must keep pace.
Governance Processes
New system approval: Establish a process that requires all new AI systems to be registered in the inventory before deployment. This should be integrated into your existing procurement, IT onboarding, and change management processes. No AI system should go live without an inventory entry.
Regular review cadence: Schedule quarterly reviews of the entire inventory. During each review:
- Verify that all active systems are still in use and accurately described
- Check for new AI features added to existing software through vendor updates
- Follow up on any systems flagged for sunset or replacement
- Update risk classifications if system usage or context has changed
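The quarterly cadence described above is simple enough to track mechanically. A minimal sketch, assuming a 90-day interval as the approximation of "quarterly":

```python
# Sketch: compute whether an inventory entry's quarterly review is due,
# using a 90-day interval as an approximation of the quarterly cadence.
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)

def next_review(last_review: date) -> date:
    return last_review + REVIEW_INTERVAL

def review_overdue(last_review: date, today: date) -> bool:
    return today >= next_review(last_review)

print(next_review(date(2026, 1, 15)))                       # 2026-04-15
print(review_overdue(date(2026, 1, 15), date(2026, 5, 1)))  # True
```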
Shadow AI monitoring: Continue periodic shadow AI discovery exercises. Run employee surveys at least annually. Maintain IT monitoring for connections to AI services. Shadow AI is not a one-time problem — it recurs every time a new AI tool gains popularity.
Change triggers: Define events that require an inventory update:
- New AI system procurement or deployment
- Significant changes to an existing AI system (new model, new data sources, new use case)
- Changes in the affected population (system now used for a different group of people)
- Vendor changes or acquisitions that affect AI system providers
- Regulatory changes that might reclassify a system's risk level
Roles and Responsibilities
Assign clear roles for inventory management:
- Inventory owner: A senior individual (often the AI governance lead or DPO) who is accountable for the inventory's completeness and accuracy
- Department AI coordinators: One person per department responsible for reporting new AI usage and verifying existing entries during quarterly reviews
- IT liaison: Responsible for the technical audit aspects of shadow AI discovery and for monitoring AI service connections
- Compliance reviewer: Responsible for verifying that risk classifications and compliance statuses are current
Tools and Automation
Manual spreadsheets work for organizations with a handful of AI systems but become unmanageable as the number grows. Consider dedicated tools that support:
- Structured data entry with required fields and validation
- Automated reminders for review dates and compliance deadlines
- Integration with IT asset management systems
- Reporting and dashboards for leadership visibility
- Audit trails showing when inventory entries were created or modified
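The "required fields and validation" point above can be sketched as a simple check that rejects incomplete entries before they enter the inventory. The required field names here are illustrative assumptions.

```python
# Sketch of structured-entry validation: reject inventory entries that are
# missing mandatory fields. The required field names are illustrative.
REQUIRED_FIELDS = {"name", "vendor", "owner_department", "intended_purpose", "risk_class"}

def validate_entry(entry: dict) -> list:
    """Return the missing or empty required fields (empty list means valid)."""
    return sorted(f for f in REQUIRED_FIELDS if not entry.get(f))

entry = {"name": "ChatHelper", "vendor": "ExampleVendor", "risk_class": "limited"}
print(validate_entry(entry))  # ['intended_purpose', 'owner_department']
```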
The systems inventory tool provides a structured, purpose-built environment for maintaining your AI inventory with all the fields and workflows needed for AI Act compliance.
From Inventory to Compliance
A completed AI systems inventory unlocks every subsequent compliance activity:
Classification: With your inventory in hand, you can systematically classify every AI system against the EU AI Act's risk categories. The classification tool takes your inventory data and maps each system to its applicable risk tier.
Risk assessment: High-risk systems identified through classification proceed to full risk assessment: the Article 9 risk management system on the provider side, and the Article 27 fundamental rights impact assessment for deployers where it applies. Your inventory data provides the starting context for each assessment — you already know the system's purpose, data types, affected persons, and deployment context. See our AI risk assessment guide for the detailed methodology.
Gap analysis: Compare each system's current compliance status against what the AI Act requires for its risk category. The gap analysis tool identifies specific deficiencies and prioritizes remediation actions.
Documentation: The inventory data feeds directly into the technical documentation required under Article 11 and the transparency information required under Article 13.
Monitoring: The inventory becomes your ongoing monitoring framework. Track each system's compliance status, review dates, incident history, and performance metrics in one place.
Reporting: When regulators, auditors, or leadership ask about your AI governance, the inventory is your primary reference document. It demonstrates that you know what AI systems you use, you have assessed their risks, and you are managing them systematically.
Getting Started Today
You do not need to wait for perfect processes or dedicated tools to start building your inventory. Begin with what you have:
- Send a simple survey to every department asking three questions: What AI tools do you use? What do you use them for? What data do you put into them?
- List your known AI systems — the ones IT is aware of, the ones you procured formally, the ones that are part of your core business processes.
- Check your major software vendors for AI features you may not have noticed.
- Record what you find in whatever format works — a spreadsheet is fine for now. The structure can be improved later.
- Classify and prioritize using the classification tool to identify which systems need immediate compliance attention.
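The starter spreadsheet mentioned above can be bootstrapped as a plain CSV whose columns mirror the three survey questions plus basic tracking fields. The column names and the example row are illustrative assumptions.

```python
# Sketch: bootstrap a starter inventory as CSV. Columns mirror the three
# survey questions plus tracking fields; all names are illustrative.
import csv
import io

COLUMNS = ["system", "department", "purpose", "data_entered", "risk_class"]

rows = [
    {"system": "ChatGPT", "department": "Marketing",
     "purpose": "Draft customer emails",
     "data_entered": "customer names, email text",
     "risk_class": "limited"},
]

buf = io.StringIO()  # swap for open("ai_inventory.csv", "w", newline="") to write a file
writer = csv.DictWriter(buf, fieldnames=COLUMNS)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

A spreadsheet like this is a legitimate first version; the structured fields described earlier in this guide can be added as columns once the initial sweep is done.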
The organizations that start their inventory now will be ready when the August 2026 enforcement deadline arrives. The organizations that wait will be scrambling to answer a question they should have asked first: what AI systems do we actually use?
For a comprehensive look at the compliance journey that follows inventory, see the AI compliance checklist guide. The inventory is step one. But without step one, there are no other steps.